opennn
Open Neural Networks project for image classification task
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: Links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (10.0%) to scientific vocabulary
Keywords
Repository
Open Neural Networks project for image classification task
Basic Info
- Host: GitHub
- Owner: epishchik
- License: mit
- Language: Python
- Default Branch: main
- Homepage: https://t.me/Evgenii_Pishchik
- Size: 243 MB
Statistics
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 11
Topics
Metadata Files
README.md
[Dockerfile](https://github.com/Pe4enIks/OpenNN/blob/main/docker/Dockerfile) [PyTorch](https://pytorch.org)
Table of contents
- Quick start
- Warnings
- Encoders
- Decoders
- Pretrained
- Pretrained old configs fixes
- Datasets
- Losses
- Metrics
- Optimizers
- Schedulers
- Examples
- Wandb
Quick start
1. Direct installation.

1.1 Install torch with CUDA.

```bash
pip install -U torch --extra-index-url https://download.pytorch.org/whl/cu113
```

1.2 Install opennn_pytorch.

```bash
pip install -U opennn_pytorch
```

2. Dockerfile.

```bash
cd docker/
docker build -t opennn:latest .
```
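After either install path, a quick way to sanity-check the environment is to import the package and confirm that torch sees the GPU. This is a minimal sketch; it only assumes that `opennn_pytorch` and a CUDA build of `torch` were installed as above.

```python
# Post-install sanity check: the package imports and CUDA is visible to torch.
import torch
import opennn_pytorch  # noqa: F401

print(torch.__version__)
print(torch.cuda.is_available())  # True on a working CUDA setup with an NVIDIA GPU
```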
Warnings
- CUDA is only supported on NVIDIA graphics cards.
- The AlexNet decoder doesn't support the BCE loss family.
- Some dataset/encoder/decoder/loss combinations give bad results; try other combinations.
- Custom cross-entropy only supports the mode where predictions have shape (n, c) and labels have shape (n).
- Not all options in transform.yaml and config.yaml are required.
- The mean and std listed in the Datasets section must be used in transform.yaml, for example [mean=[0.2859], std=[0.3530]] -> normalize: [[0.2859], [0.3530]] (see the sketch below).
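For context, the `normalize: [[0.2859], [0.3530]]` entry corresponds to a standard torchvision normalization. Below is a minimal sketch of the equivalent transform built directly in Python, using the FASHION-MNIST statistics from the Datasets section; the surrounding Compose is illustrative.

```python
from torchvision import transforms

# FASHION-MNIST: mean=[0.2859], std=[0.3530]
# -> written in transform.yaml as: normalize: [[0.2859], [0.3530]]
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.2859], std=[0.3530]),
])
```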
Encoders
- LeNet [paper] [code]
- AlexNet [paper] [code]
- GoogleNet [paper] [code]
- ResNet18 [paper] [code]
- ResNet34 [paper] [code]
- ResNet50 [paper] [code]
- ResNet101 [paper] [code]
- ResNet152 [paper] [code]
- MobileNet [paper] [code]
- VGG-11 [paper] [code]
- VGG-16 [paper] [code]
- VGG-19 [paper] [code]
Decoders
Pretrained
- LeNet
- AlexNet
- GoogleNet
- ResNet
- MobileNet
- VGG
Pretrained configs issues
The config file format has changed, check the configs folder!
1. The config must include a testpart value; (trainpart + validpart + testpart) may sum to less than 1.0 (see the sketch below).
2. You can add a wandb structure to the config for logging to wandb.
3. The project was fully restructured into a branch-based layout.
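As a quick illustration of point 1, the three split fractions only need to sum to at most 1.0 (the values below are the ones used in the examples later in this README):

```python
# Split fractions from the config; they may sum to less than 1.0.
train_part, valid_part, test_part = 0.7, 0.2, 0.05
assert train_part + valid_part + test_part <= 1.0
```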
Datasets
Dataset parameters:
- MNIST [classes=10] [mean=[0.1307], std=[0.3801]]
- FASHION-MNIST [classes=10] [mean=[0.2859], std=[0.3530]]
- CIFAR-10 [classes=10] [mean=[0.491, 0.482, 0.446], std=[0.247, 0.243, 0.261]]
- CIFAR-100 [classes=100] [mean=[0.5071, 0.4867, 0.4408], std=[0.2675, 0.2565, 0.2761]]
Losses
Metrics
Optimizers
Schedulers
Examples
1. Run from yaml config.

```python
from opennn_pytorch import run

config = 'path to yaml config'  # check the configs folder
run(config)
```
2. Get encoder and decoder.

```python
from opennn_pytorch.encoder import get_encoder
from opennn_pytorch.decoder import get_decoder

encoder_name = 'ResNet18'
decoder_name = 'AlexNet'
decoder_mode = 'Single'
input_channels = 1
number_classes = 10
device = 'cuda:0'

encoder = get_encoder(encoder_name, input_channels).to(device)
model = get_decoder(decoder_name, encoder, number_classes, decoder_mode, device).to(device)
```
3.1 Get dataset.

```python
import opennn_pytorch
from opennn_pytorch.dataset import get_dataset
from torchvision import transforms

transform_config = 'path to transform yaml config'
dataset_name = 'MNIST'
data_files = None
train_part = 0.7
valid_part = 0.2
test_part = 0.05

transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)

train_data, valid_data, test_data = get_dataset(dataset_name, train_part, valid_part, test_part, transform, data_files)
```
3.2 Get custom dataset.

```python
import opennn_pytorch
from opennn_pytorch.dataset import get_dataset
from torchvision import transforms

transform_config = 'path to transform yaml config'
dataset_name = 'CUSTOM'
images = 'path to folder with images'
annotation = 'path to annotation yaml file with image: class structure'
data_files = (images, annotation)
train_part = 0.7
valid_part = 0.2
test_part = 0.05

transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)

train_data, valid_data, test_data = get_dataset(dataset_name, train_part, valid_part, test_part, transform, data_files)
```
4. Get optimizer.

```python
from opennn_pytorch.optimizer import get_optimizer

optimizer_name = 'RAdam'
lr = 1e-3
weight_decay = 1e-5
optimizer_params = {'lr': lr, 'weight_decay': weight_decay}

optimizer = get_optimizer(optimizer_name, model, optimizer_params)
```
5. Get scheduler.

```python
from opennn_pytorch.scheduler import get_scheduler

scheduler_name = 'PolynomialLRDecay'
scheduler_type = 'custom'
scheduler_params = {'max_decay_steps': 20, 'end_learning_rate': 0.0005, 'power': 0.9}

scheduler = get_scheduler(scheduler_name, optimizer, scheduler_params, scheduler_type)
```
6. Get loss function.

```python
from opennn_pytorch.loss import get_loss

loss_fn = 'L1Loss'
loss_fn, one_hot = get_loss(loss_fn)
```
7. Get metrics functions.

```python
from opennn_pytorch.metric import get_metric

metrics_names = ['accuracy', 'precision', 'recall', 'f1_score']
number_classes = 10
metrics_fn = get_metric(metrics_names, nc=number_classes)
```
8. Train/Test.

```python
import os
import random

import torch
from opennn_pytorch.algo import train, test, prediction

algorithm = 'train'
batch_size = 16
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
number_classes = 10
save_every = 5
epochs = 20
wandb_flag = True
wandb_metrics = ['accuracy', 'f1_score']

train_dataloader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)
valid_dataloader = torch.utils.data.DataLoader(valid_data, batch_size=batch_size, shuffle=False)
test_dataloader = torch.utils.data.DataLoader(test_data, batch_size=1, shuffle=False)

# checkpoints, logs and pred are assumed to be defined earlier (paths and a flag from the config).
if algorithm == 'train':
    train(train_dataloader, valid_dataloader, model, optimizer, scheduler, loss_fn, metrics_fn,
          epochs, checkpoints, logs, device, save_every, one_hot, number_classes,
          wandb_flag, wandb_metrics)
elif algorithm == 'test':
    test_logs = test(test_dataloader, model, loss_fn, metrics_fn, logs, device,
                     one_hot, number_classes, wandb_flag, wandb_metrics)
    if pred:
        indices = random.sample(range(0, len(test_data)), 10)
        os.mkdir(test_logs + '/prediction', 0o777)
        for i in range(10):
            tmp_dct = {j: class_names[j] for j in range(number_classes)}
            prediction(test_data, model, device, tmp_dct, test_logs + f'/prediction/{i}', indices[i])
```
Wandb
Wandb is a very powerful logging tool: it lets you log metrics, hyperparameters, model hooks, and more.

```bash
wandb login
<your api token from wandb.ai>
```
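Beyond `wandb login`, the typical logging pattern in Python looks like the sketch below. This is the generic wandb API, not opennn_pytorch's internal logging code; the project name and metric values are placeholders.

```python
import wandb

# Generic wandb usage: start a run, log metrics during training, finish the run.
run = wandb.init(project='opennn-example', config={'lr': 1e-3, 'epochs': 20})
for epoch in range(20):
    wandb.log({'epoch': epoch, 'accuracy': 0.9, 'f1_score': 0.88})  # placeholder values
wandb.finish()
```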


Citation
Project citation.
License
Project is distributed under MIT License.
Owner
- Name: Evgenii Pishchik
- Login: epishchik
- Kind: user
- Location: Moscow, Russia
- Repositories: 1
- Profile: https://github.com/epishchik
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "OpenNN Contributors"
title: "OpenNN library for classification task"
date-released: 2022-04-03
url: "https://github.com/Pe4enIks/OpenNN"
license: MIT
GitHub Events
Total
Last Year
Issues and Pull Requests
Last synced: 5 months ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- ubuntu 20.04 build