backprop

Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.

https://github.com/backprop-ai/backprop

Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    1 of 10 committers (10.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.1%) to scientific vocabulary

Keywords

bert fine-tuning image-classification language-model multilingual-models natural-language-processing nlp question-answering text-classification transfer-learning transformers

Keywords from Contributors

interpretability standardization animal hack
Last synced: 6 months ago

Repository

Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.

Basic Info
  • Host: GitHub
  • Owner: backprop-ai
  • License: other
  • Language: Python
  • Default Branch: main
  • Homepage: https://backprop.co
  • Size: 5.46 MB
Statistics
  • Stars: 242
  • Watchers: 15
  • Forks: 11
  • Open Issues: 5
  • Releases: 0
Topics
bert fine-tuning image-classification language-model multilingual-models natural-language-processing nlp question-answering text-classification transfer-learning transformers
Created over 5 years ago · Last pushed almost 5 years ago
Metadata Files
Readme License

README.md

Backprop

Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.

Solve a variety of tasks with pre-trained models or finetune them in one line for your own tasks.

Out of the box tasks you can solve with Backprop:

  • Conversational question answering in English
  • Text Classification in 100+ languages
  • Image Classification
  • Text Vectorisation in 50+ languages
  • Image Vectorisation
  • Summarisation in English
  • Emotion detection in English
  • Text Generation

For more specific use cases, you can adapt a task with little data and a single line of code via finetuning.

| ⚡ Getting started | Installation, few minute introduction |
| :--- | :--- |
| 💡 Examples | Finetuning and usage examples |
| 📙 Docs | In-depth documentation about task inference and finetuning |
| ⚙️ Models | Overview of available models |

Getting started

Installation

Install Backprop via PyPI:

```bash
pip install backprop
```

Basic task inference

Tasks act as interfaces that let you easily use a variety of supported models.

```python
import backprop

context = "Take a look at the examples folder to see use cases!"

qa = backprop.QA()

# Start building!
answer = qa("Where can I see what to build?", context)

print(answer)
# Prints "the examples folder"
```

You can run all tasks and models on your own machine, or in production with our inference API, simply by specifying your `api_key`.

See how to use all available tasks.

Basic finetuning and uploading

Each task implements finetuning that lets you adapt a model for your specific use case in a single line of code.

A finetuned model is easy to upload to production, letting you focus on building great applications.

```python
import backprop

tg = backprop.TextGeneration("t5-small")

# Any text works as training data
inp = ["I really liked the service I received!", "Meh, it was not impressive."]
out = ["positive", "negative"]

# Finetune with a single line of code
tg.finetune({"input_text": inp, "output_text": out})

# Use your trained model
prediction = tg("I enjoyed it!")

print(prediction)
# Prints "positive"

# Upload to Backprop for production ready inference
# Describe your model
name = "t5-sentiment"
description = "Predicts positive and negative sentiment"

tg.upload(name=name, description=description, api_key="abc")
```

See finetuning for other tasks.

Why Backprop?

  1. No experience needed
  • Entrance to practical AI should be simple
  • Get state-of-the-art performance on your task without being an expert
  2. Data is a bottleneck
  • Solve real-world tasks without any data
  • With transfer learning, even a small amount of data can adapt a task to your niche requirements
  3. There is an overwhelming number of models
  • We offer a curated selection of the best open-source models and make them simple to use
  • A few general models can accomplish more with less optimisation
  4. Deploying models cost-effectively is hard work
  • If our models suit your use case, no deployment is needed: just call our API
  • Adapt and deploy your own model with just a few lines of code
  • Our API scales, is always available, and you only pay for usage

Examples

Documentation

Check out our docs for in-depth task inference and finetuning.

Model Hub

Curated list of state-of-the-art models.

Demos

Zero-shot image classification with CLIP.

Credits

Backprop relies on many great libraries to work, most notably PyTorch, PyTorch Lightning, Hugging Face Transformers, and Sentence Transformers.

Feedback

Found a bug or have ideas for new tasks and models? Open an issue.

Owner

  • Name: Backprop
  • Login: backprop-ai
  • Kind: organization
  • Email: hello@backprop.co

Making machine learning easy for every developer.


Committers

Last synced: over 2 years ago

All Time
  • Total Commits: 188
  • Total Committers: 10
  • Avg Commits per committer: 18.8
  • Development Distribution Score (DDS): 0.404
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Kristo Ojasaar o****o@g****m 112
Owen LaCava o****a@g****m 48
ojasaar 7****r 14
Cameron Wood c****1@g****m 3
LaCava l****a@W****n 3
github-actions[bot] 4****] 3
Ramon Catane r****e@g****m 2
Eunho Lee e****n@g****m 1
drycoco k****5@s****k 1
RamonMamon 3****n 1
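The Development Distribution Score reported above is consistent with the commit table: assuming DDS is defined as one minus the top committer's share of all commits (a common definition; the exact formula used by this page is an assumption), the numbers check out:

```python
def dds(commit_counts):
    """Development Distribution Score: 1 - (top committer's share of commits)."""
    total = sum(commit_counts)
    return 1 - max(commit_counts) / total

# Commit counts from the Top Committers table above
counts = [112, 48, 14, 3, 3, 3, 2, 1, 1, 1]
print(round(dds(counts), 3))  # → 0.404, matching the all-time DDS above
```

A low DDS like 0.404 indicates development is concentrated in a single committer (here, 112 of 188 commits).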

Issues and Pull Requests

Last synced: 7 months ago

All Time
  • Total issues: 9
  • Total pull requests: 14
  • Average time to close issues: 3 days
  • Average time to close pull requests: about 19 hours
  • Total issue authors: 6
  • Total pull request authors: 4
  • Average comments per issue: 2.22
  • Average comments per pull request: 0.07
  • Merged pull requests: 14
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • chiragsanghvi10 (3)
  • singularity014 (2)
  • rishav1122 (1)
  • VaibhavDS19 (1)
  • karan-jgu (1)
  • Mattdinina (1)
Pull Request Authors
  • LaCavao (6)
  • ojasaar (6)
  • cameron-wood (1)
  • lucky7323 (1)

Packages

  • Total packages: 1
  • Total downloads: 154 last month (PyPI)
  • Total dependent packages: 0
  • Total dependent repositories: 2
  • Total versions: 17
  • Total maintainers: 1
pypi.org: backprop

Backprop

  • Versions: 17
  • Dependent Packages: 0
  • Dependent Repositories: 2
  • Downloads: 154 Last month
Rankings
  • Stargazers count: 4.3%
  • Dependent packages count: 10.1%
  • Forks count: 10.5%
  • Dependent repos count: 11.6%
  • Downloads: 20.1%
  • Average: 11.3%
Maintainers (1)
Last synced: 6 months ago

Dependencies

requirements.txt pypi
  • dill *
  • efficientnet_pytorch *
  • ftfy *
  • pytorch_lightning >=1.2.0,<1.3.0
  • sentence_transformers >=0.4.1.2
  • torch <1.8.0
  • torchtext <0.9.0
  • torchvision <0.9.0
  • transformers >=4.3.2,<4.5.0
setup.py pypi
  • dill *
  • efficientnet_pytorch *
  • ftfy *
  • pytorch_lightning >=1.2.0,<1.3.0
  • sentence_transformers >=0.4.1.2
  • torch <1.8.0
  • torchtext <0.9.0
  • torchvision <0.9.0
  • transformers >=4.3.2,<4.5.0
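The pins above are tight (e.g. `torch <1.8.0`, `transformers >=4.3.2,<4.5.0`), so installs in newer environments can fail dependency resolution. A minimal sketch of checking whether an installed version satisfies a pin, assuming the `packaging` library (pip's own version-handling library) is available:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Pins copied from requirements.txt above
pins = {
    "pytorch_lightning": ">=1.2.0,<1.3.0",
    "torch": "<1.8.0",
    "transformers": ">=4.3.2,<4.5.0",
}

def satisfies(pin: str, version: str) -> bool:
    """Return True if `version` falls inside the specifier set `pin`."""
    return Version(version) in SpecifierSet(pin)

print(satisfies(pins["torch"], "1.7.1"))         # True
print(satisfies(pins["transformers"], "4.5.0"))  # False: the upper bound is exclusive
```

Note the upper bounds are exclusive, so even a patch release of a pinned major line (e.g. transformers 4.5.0) falls outside the allowed range.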