uecfoodpix
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.2%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: LennyYiWANG
- License: apache-2.0
- Language: Jupyter Notebook
- Default Branch: main
- Size: 19.6 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Foodseg-uecfoodpix
This repository implements DeepLabV3+ training on the UECFoodPIX Complete dataset, and serves as the baseline for FoodSAM: Any Food Segmentation. Note: for the full report and project description, see 食品识别.ipynb.
Installation
a. Create a conda virtual environment and activate it.
```shell
conda create -n foodseg-uec python=3.8 -y
conda activate foodseg-uec
```
b. Install PyTorch and torchvision following the official instructions. Here we use PyTorch 1.10.1 with CUDA 11.3, which pairs with torchvision 0.11.2. You may switch to other versions by specifying the version numbers.
```shell
conda install pytorch==1.10.1 torchvision==0.11.2 cudatoolkit=11.3 -c pytorch -c conda-forge -y
```
c. Install MMCV following the official instructions.
```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```
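The index URL above is a template. As a hedged sketch of how its placeholders map to the versions used in this guide (assuming CUDA 11.3 and PyTorch 1.10; the `CU_VERSION`/`TORCH_VERSION` variable names are illustrative, and the resolved command is printed rather than executed):

```shell
# Fill in the wheel-index placeholders for CUDA 11.3 / PyTorch 1.10
# (assumption based on the install step above; adjust for your setup).
CU_VERSION=cu113
TORCH_VERSION=torch1.10
echo "pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/${CU_VERSION}/${TORCH_VERSION}/index.html"
```

Run the printed command once the variables match your environment.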
d. Clone this repo.
```shell
git clone https://github.com/HitBadTrap/Foodseg-uecfoodpix.git
cd Foodseg-uecfoodpix
pip install -e .  # or "python setup.py develop"
```
Testing
Run the following commands to evaluate the given checkpoint:
```shell
python tools/test.py [config] [checkpoint] --show-dir [output_dir] [--show]
```
You can append the optional `--show` flag to generate visualization results in `output_dir/vis_image`.
For our testing example, move the downloaded checkpoint file into the ckpts directory, then run:
```shell
python tools/test.py ./configs/deeplabv3plus/deeplabv3plus_r101-d8_4xb4-80k_uecfoodpix-320x320.py ./ckpts/best_mIoU_iter_24000.pth --show-dir output --show
```
Training
1. For single-gpu training, run the following command:
```shell
python tools/train.py [config]
```
2. For multi-gpu training, run the following commands:
```shell
bash tools/dist_train.sh [config] [num_gpu]
```
The default config is `./configs/deeplabv3plus/deeplabv3plus_r101-d8_4xb4-80k_uecfoodpix-320x320.py`.
For our training example:
```shell
# single-gpu training
python tools/train.py ./configs/deeplabv3plus/deeplabv3plus_r101-d8_4xb4-80k_uecfoodpix-320x320.py

# multi-gpu training
bash tools/dist_train.sh ./configs/deeplabv3plus/deeplabv3plus_r101-d8_4xb4-80k_uecfoodpix-320x320.py 2
```
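For reference, a hedged sketch of what `tools/dist_train.sh` typically does in MMSegmentation-style repositories (an assumption; check the actual script in this repo): it wraps `tools/train.py` in PyTorch's distributed launcher, passing the GPU count. The command is printed here rather than executed.

```shell
# Sketch of a typical MMSegmentation-style dist_train.sh invocation
# (assumed behavior, not this repo's verified script).
CONFIG=./configs/deeplabv3plus/deeplabv3plus_r101-d8_4xb4-80k_uecfoodpix-320x320.py
GPUS=2
echo "python -m torch.distributed.launch --nproc_per_node=$GPUS tools/train.py $CONFIG --launcher pytorch"
```

This is why the script takes the config path and GPU count as its two positional arguments.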
Results
| Method | mIoU | aAcc | mAcc | Model | Training Log |
| :-: | :-: | :-: | :-: | :-: | :-: |
| deeplabV3+ (baseline) | 65.61 | 88.20 | 77.56 | Link | Link |
| FoodSAM | 66.14 | 88.47 | 78.01 | | |
Acknowledgements
A large part of the code is borrowed from mmsegmentation.
License
The model is licensed under the Apache 2.0 license.
Citation
If you want to cite our work, please use this:
```
@misc{lan2023foodsam,
      title={FoodSAM: Any Food Segmentation},
      author={Xing Lan and Jiayi Lyu and Hanyu Jiang and Kun Dong and Zehai Niu and Yi Zhang and Jian Xue},
      year={2023},
      eprint={2308.05938},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
Owner
- Name: Hlabkr
- Login: LennyYiWANG
- Kind: user
- Location: Sydney
- Repositories: 1
- Profile: https://github.com/LennyYiWANG
Citation (CITATION.cff)
```
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMSegmentation Contributors"
title: "OpenMMLab Semantic Segmentation Toolbox and Benchmark"
date-released: 2020-07-10
url: "https://github.com/open-mmlab/mmsegmentation"
license: Apache-2.0
```
GitHub Events
Total
- Push event: 3
- Create event: 2
Last Year
- Push event: 3
- Create event: 2
Dependencies
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- albumentations >=0.3.2
- docutils ==0.16.0
- myst-parser *
- sphinx ==4.0.2
- sphinx_copybutton *
- sphinx_markdown_tables *
- urllib3 <2.0.0
- mmcv >=2.0.0rc4
- mmengine >=0.5.0,<1.0.0
- cityscapesscripts *
- nibabel *
- mmcv >=2.0.0rc1,<2.1.0
- mmengine >=0.4.0,<1.0.0
- prettytable *
- scipy *
- torch *
- torchvision *
- matplotlib *
- numpy *
- packaging *
- prettytable *
- scipy *
- codecov * test
- flake8 * test
- interrogate * test
- pytest * test
- xdoctest >=0.10.0 test
- yapf * test