https://github.com/chris-santiago/autoencoders

Experimenting with various autoencoder architectures.

Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    1 of 2 committers (50.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.4%) to scientific vocabulary

Keywords

autoencoder denoising-autoencoders hydra pytorch pytorch-lightning self-supervised-learning siamese-network simsiam-pytorch taskfile

Keywords from Contributors

transformers
Last synced: 5 months ago

Repository

Experimenting with various autoencoder architectures.

Basic Info
  • Host: GitHub
  • Owner: chris-santiago
  • License: mit
  • Language: Jupyter Notebook
  • Default Branch: master
  • Homepage:
  • Size: 3.49 MB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
autoencoder denoising-autoencoders hydra pytorch pytorch-lightning self-supervised-learning siamese-network simsiam-pytorch taskfile
Created over 2 years ago · Last pushed over 2 years ago
Metadata Files
Readme Changelog License

README.md

Experimenting with AutoEncoders

This repo holds rough experiments with various autoencoder architectures, probing their ability to learn meaningful representations of data in an unsupervised setting.

  • Each model is trained (unsupervised) on the 60k MNIST training set.
  • Models are evaluated via transfer learning on the 10k MNIST test set.
  • Each model's respective encodings of varying subsets of the 10k MNIST test set are used to train a linear classifier:
    • Each linear classifier is trained on increasing split sizes ([10, 100, 1000, 2000, 4000, 8000]) and evaluated on the remaining subset of the 10k MNIST test set.
    • See autoencoders/compare.py for details.
  • Classifier performance is measured by multi-class accuracy and ROC AUC.
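As a rough, hypothetical sketch of this linear-probe protocol (random features stand in for the encoder outputs; the helper name and least-squares classifier are assumptions, not the repo's actual `compare.py` implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for encoder outputs on the 10k MNIST test set:
# 10,000 samples with 32-dimensional encodings and 10 class labels.
X = rng.normal(size=(10_000, 32))
y = rng.integers(0, 10, size=10_000)

def linear_probe_accuracy(X, y, n_train):
    """Fit a least-squares linear classifier on the first n_train
    encodings and score accuracy on the held-out remainder."""
    X_tr, y_tr = X[:n_train], y[:n_train]
    X_te, y_te = X[n_train:], y[n_train:]
    # One-hot targets for a simple least-squares fit.
    T = np.eye(10)[y_tr]
    W, *_ = np.linalg.lstsq(X_tr, T, rcond=None)
    preds = (X_te @ W).argmax(axis=1)
    return (preds == y_te).mean()

# Evaluate at the same split sizes used in the README.
for n in [10, 100, 1000, 2000, 4000, 8000]:
    print(f"train={n:>5d}  acc={linear_probe_accuracy(X, y, n):.3f}")
```

With real encodings, accuracy at each split size indicates how linearly separable the learned representation is; with the random features above it hovers near chance.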

Notes

  • Most models follow the architectures from their respective papers (see autoencoders/models/).
  • Hyperparameter optimization is not performed on either the encoder models or the linear classifiers.
  • Model specification/configuration located in the outputs/ directory.

Results

Install

Create a virtual environment with Python >= 3.10 and install from git:

```bash
pip install git+https://github.com/chris-santiago/autoencoders.git
```

Or clone the repository and install an editable version:

```bash
conda env create -f environment.yml
cd autoencoders
pip install -e .
```

Use

Prerequisites

Hydra

This project uses Hydra for managing configuration and CLI arguments. See autoencoders/conf for full configuration details.
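For illustration, a top-level Hydra config composes named config groups. This is a hedged sketch, not the repo's actual file: the group options are taken from the train example later in this README, but the file layout and defaults are assumptions.

```yaml
# Hypothetical conf/config.yaml -- illustrates Hydra config-group
# composition; exact file names and defaults are assumptions.
defaults:
  - model: sidae2
  - data: simsiam
  - callbacks: siam
  - trainer: default
```

Any entry in these groups can then be overridden from the command line, e.g. `model=sidae2`.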

Task

This project uses Task as a task runner. Though the underlying Python commands can be executed without it, we recommend installing Task for ease of use. Details are located in Taskfile.yml.
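For illustration, a Task entry typically wraps the underlying Python command. This is a hypothetical fragment, not the repo's actual Taskfile.yml; the entry-point path is an assumption.

```yaml
# Hypothetical Taskfile.yml fragment; the script path is an assumption.
version: '3'
tasks:
  train:
    desc: Train a model
    cmds:
      - python autoencoders/train.py {{.CLI_ARGS}}
```

`{{.CLI_ARGS}}` is how Task forwards everything after `--` to the wrapped command, which is what lets Hydra overrides pass through.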

Current commands

```bash
task -l

task: Available tasks for this project:
* check-config:       Check Hydra configuration
* compare:            Compare encoders using linear baselines
* eval-downstream:    Evaluate encoders using linear baselines
* plot-downstream:    Plot encoder performance on downstream tasks
* train:              Train a model
* wandb:              Login to Weights & Biases
```

Example: Train a SiDAE model

The `--` forwards CLI arguments to Hydra.

```bash
task train -- model=sidae2 data=simsiam callbacks=siam
```

Weights and Biases

This project is set up to log experiment results with Weights and Biases. It expects an API key within a .env file in the root directory:

```toml
WANDB_KEY=<my-super-secret-key>
```

Users can configure different logger(s) within the conf/trainer/default.yaml file.
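As a hypothetical sketch of such a logger entry (the `WandbLogger` class is real PyTorch Lightning API, but the surrounding keys and values are assumptions about this repo's config):

```yaml
# Hypothetical conf/trainer/default.yaml fragment; keys and values
# other than the WandbLogger class path are assumptions.
logger:
  _target_: pytorch_lightning.loggers.WandbLogger
  project: autoencoders
  log_model: true
```

Swapping in a different logger (e.g. `TensorBoardLogger`) would only require changing the `_target_` and its arguments.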

Documentation

~~Documentation hosted on Github Pages: https://chris-santiago.github.io/autoencoders/~~

Owner

  • Name: Chris Santiago
  • Login: chris-santiago
  • Kind: user

GitHub Events

Total
Last Year

Committers

Last synced: over 1 year ago

All Time
  • Total Commits: 67
  • Total Committers: 2
  • Avg Commits per committer: 33.5
  • Development Distribution Score (DDS): 0.03
Past Year
  • Commits: 2
  • Committers: 1
  • Avg Commits per committer: 2.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
chris-santiago c****o@g****u 65
Chris Santiago 4****o 2
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 11 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

.github/workflows/docs.yaml actions
  • actions/cache v2 composite
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/publish.yaml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
  • pypa/gh-action-pypi-publish 27b31702a0e7fc50959f5ad993c78deac1bdfc29 composite
environment.yml pypi
pyproject.toml pypi
  • hydra-core >=1.3.2
  • hydra-joblib-launcher >=1.2.0
  • matplotlib >=3.7.2
  • python-dotenv >=1.0.0
  • pytorch-lightning >2.0
  • rich >=13.5.2
  • scikit-learn >=1.3.0
  • torch >=2.0.1
  • torchmetrics >=1.0.3
  • torchvision >=0.15.2
  • wandb >=0.15.8
requirements.txt pypi
  • Click ==8.1.7
  • GitPython ==3.1.32
  • MarkupSafe ==2.1.3
  • PyYAML ==6.0.1
  • aiohttp ==3.8.5
  • aiosignal ==1.3.1
  • antlr4-python3-runtime ==4.9.3
  • appdirs ==1.4.4
  • async-timeout ==4.0.3
  • attrs ==23.1.0
  • certifi ==2023.7.22
  • charset-normalizer ==3.2.0
  • contourpy ==1.1.0
  • cycler ==0.11.0
  • docker-pycreds ==0.4.0
  • filelock ==3.12.2
  • fonttools ==4.42.1
  • frozenlist ==1.4.0
  • fsspec ==2023.6.0
  • gitdb ==4.0.10
  • hydra-core ==1.3.2
  • hydra-joblib-launcher ==1.2.0
  • idna ==3.4
  • jinja2 ==3.1.2
  • joblib ==1.3.2
  • kiwisolver ==1.4.4
  • lightning-utilities ==0.9.0
  • markdown-it-py ==3.0.0
  • matplotlib ==3.7.2
  • mdurl ==0.1.2
  • mpmath ==1.3.0
  • multidict ==6.0.4
  • networkx ==3.1
  • numpy ==1.25.2
  • omegaconf ==2.3.0
  • packaging ==23.1
  • pathtools ==0.1.2
  • pillow ==10.0.0
  • protobuf ==4.24.1
  • psutil ==5.9.5
  • pygments ==2.16.1
  • pyparsing ==3.0.9
  • python-dateutil ==2.8.2
  • python-dotenv ==1.0.0
  • pytorch-lightning ==2.0.8
  • requests ==2.31.0
  • rich ==13.5.2
  • scikit-learn ==1.3.0
  • scipy ==1.9.3
  • sentry-sdk ==1.29.2
  • setproctitle ==1.3.2
  • setuptools ==68.1.2
  • six ==1.16.0
  • smmap ==5.0.0
  • sympy ==1.12
  • threadpoolctl ==3.2.0
  • torch ==2.0.1
  • torchmetrics ==1.0.3
  • torchvision ==0.15.2
  • tqdm ==4.66.1
  • typing-extensions ==4.7.1
  • urllib3 ==1.26.16
  • wandb ==0.15.8
  • yarl ==1.9.2