ocotillo

Performant and accurate speech recognition built on PyTorch

https://github.com/neonbjb/ocotillo

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.2%) to scientific vocabulary
Last synced: 7 months ago

Repository

Performant and accurate speech recognition built on PyTorch

Basic Info
  • Host: GitHub
  • Owner: neonbjb
  • License: other
  • Language: Python
  • Default Branch: main
  • Size: 3.91 MB
Statistics
  • Stars: 254
  • Watchers: 8
  • Forks: 26
  • Open Issues: 3
  • Releases: 3
Created about 4 years ago · Last pushed almost 4 years ago
Metadata Files
Readme License Citation

README.md

🌵 ocotillo - A fast, accurate and super simple speech recognition model

This repo is for ocotillo, a PyTorch-based ML model that does state-of-the-art English speech transcription. While this is not necessarily difficult to accomplish with the libraries available today, every one that I have run into is excessively complicated and therefore difficult to use. Ocotillo is dirt simple. The APIs I offer have almost no configuration options: just feed your speech in and go.

It's also fast. It traces the underlying model to torchscript. This means most of the heavy lifting is done in C++. The transcribe.py script achieves a processing rate 329x faster than realtime on an NVIDIA A5000 GPU when transcribing batches of 16 audio files at once.

Model Description

ocotillo uses a model pre-trained with wav2vec2 and fine-tuned for speech recognition. This model is hosted by HuggingFace's transformers API, and pretrained weights have been provided by Facebook/Meta. The specific model being used is jbetker/wav2vec2-large-robust-ft-libritts-voxpopuli, which I personally fine-tuned from existing wav2vec2 checkpoints to also predict punctuation. This makes ocotillo useful for generating transcriptions which will be used for TTS.

A special thanks goes out to Patrick von Platen, who contributed the model to HuggingFace and maintains the API that does all the heavy lifting. His fantastic blog posts were instrumental in building this repo, in particular his post on fine-tuning wav2vec2 and his post on leveraging a language model with wav2vec2.

Instructions for use

There are several ways to use ocotillo, described below. First you need to install PyTorch:

https://pytorch.org/get-started/locally/

Then, clone ocotillo and install its dependencies:

```shell
git clone https://github.com/neonbjb/ocotillo.git
cd ocotillo
python setup.py install
```

Simple CLI

This is the most dead-simple way to get started with ocotillo. Find an audio clip on your computer, and run:

```shell
ocotillo path/to/audio/clip.mp3
```

Batch CLI

A script is included, transcribe.py. This script searches for all audio files in a directory and transcribes all the files found. Sample usage:

```shell
python transcribe.py --path /my/audio/folder --model_path pretrained_model_path.pth --cuda=0
```

This will use a GPU to transcribe audio files found in /my/audio/folder. Transcription results will be written to results.tsv.
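Once the script finishes, the transcripts can be consumed programmatically. A minimal sketch for parsing the output, assuming a two-column (filename, transcript) tab-separated layout; inspect your own results.tsv to confirm the exact columns transcribe.py writes:

```python
import csv

def read_results(path="results.tsv"):
    """Parse a transcription results TSV into (filename, transcript) pairs.

    The two-column layout is an assumption; check the actual file
    produced by transcribe.py before relying on it.
    """
    rows = []
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) >= 2:
                rows.append((row[0], row[1]))
    return rows
```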

API

This repo contains a class called transcribe.Transcriber, which can be used to transcribe audio data into text. Usage looks like the following:

```python
from ocotillo.transcribe import Transcriber

transcriber = Transcriber(on_cuda=False)
audio = load_audio('data/obama.mp3', 44100)
print(transcriber.transcribe(audio, sample_rate=44100))
```

This will automatically download the 'large' model and use it to perform transcription on the CPU. Options to specify a smaller model, perform transcription on a GPU, and perform batch transcription are available. See api.py.

Transcriber works with NumPy arrays and torch tensors. Audio data must be fp32 on the range [-1,1]. A demo colab notebook that uses the API is included: asr_demo.ipynb.
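Audio loaded from WAV files often arrives as integer PCM, so it needs rescaling before it meets that fp32 [-1,1] requirement. A minimal sketch of such a conversion (the helper name `to_float32` is mine, not part of the ocotillo API):

```python
import numpy as np

def to_float32(audio: np.ndarray) -> np.ndarray:
    """Rescale integer PCM samples to float32 in [-1, 1].

    Already-float input is just cast to float32 and returned unchanged
    in scale; it is assumed to be in range already.
    """
    if np.issubdtype(audio.dtype, np.integer):
        max_val = np.iinfo(audio.dtype).max
        # Clip because e.g. int16 spans [-32768, 32767], so dividing by
        # 32767 can leave the minimum sample slightly below -1.
        return np.clip(audio.astype(np.float32) / max_val, -1.0, 1.0)
    return audio.astype(np.float32)
```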

HTTP server with Mycroft support

This lets you run a speech-to-text server backed by the ocotillo model. The protocol was specifically designed to work with the open source assistant Mycroft.

This server does not need to run on the same device as you run mycroft (but your mycroft device needs to be on the same network, or you need to expose your server to the web - not recommended).

Responses are fast and high quality. On a modern x86 CPU, expect responses to most queries in under a second. On CUDA, responses take less than a tenth of a second (most of which is data processing; model inference is on the order of tens of milliseconds). I have not tested ocotillo on embedded hardware like the Pi.

  1. Install Flask: pip install flask.
  2. Start server: python stt_server.py. CUDA device 0 is used by default, specify --cuda=-1 to run on CPU.
  3. (optional) Install Mycroft: https://mycroft.ai/get-started/
  4. From mycroft build directory: bin/mycroft-config edit user
  5. Add the following code:
```json
{
  "stt": {
    "deepspeech_server": {
      "uri": "http://<your_ip_address>/stt"
    },
    "module": "deepspeech_server"
  }
}
```
  6. Restart mycroft: ./stop-mycroft.sh && ./start-mycroft.sh
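You can also talk to the server without Mycroft to sanity-check it. A minimal sketch using only the standard library, assuming the server accepts raw WAV bytes POSTed to /stt and returns the transcript as plain text; the default port here is a placeholder, so check what stt_server.py actually binds to:

```python
import urllib.request

def stt_url(host: str, port: int = 8080) -> str:
    # Build the /stt endpoint URL; the port value is hypothetical.
    return f"http://{host}:{port}/stt"

def transcribe_via_server(wav_bytes: bytes, host: str) -> str:
    # POST the audio bytes in the request body and return the decoded
    # response; the raw-bytes payload format is an assumption.
    req = urllib.request.Request(stt_url(host), data=wav_bytes, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Usage would look like `transcribe_via_server(open("clip.wav", "rb").read(), "192.168.1.10")` from any machine on the same network as the server.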

Owner

  • Name: James Betker
  • Login: neonbjb
  • Kind: user
  • Location: CO
  • Company: OpenAI

Latent Analyst, Entropy Wrangler

Citation (CITATION.cff)

cff-version: 1.3.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Betker"
  given-names: "James"
  orcid: "https://orcid.org/my-orcid?orcid=0000-0003-3259-4862"
title: "ocotillo speech recognition"
version: 1.0.5
date-released: 2022-02-01
url: "https://github.com/neonbjb/ocotillo"

GitHub Events

Total
  • Watch event: 7
Last Year
  • Watch event: 7

Committers

Last synced: 11 months ago

All Time
  • Total Commits: 48
  • Total Committers: 1
  • Avg Commits per committer: 48.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
James Betker j****r@g****m 48

Issues and Pull Requests

Last synced: 8 months ago

All Time
  • Total issues: 4
  • Total pull requests: 0
  • Average time to close issues: about 15 hours
  • Average time to close pull requests: N/A
  • Total issue authors: 4
  • Total pull request authors: 0
  • Average comments per issue: 1.25
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • agupta54 (1)
  • Jacob-Bishop (1)
  • wavymulder (1)
  • HobisPL (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 6 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 1
  • Total versions: 6
  • Total maintainers: 1
pypi.org: ocotillo

A simple & fast speech transcription toolkit

  • Versions: 6
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 6 Last month
Rankings
Stargazers count: 4.6%
Forks count: 8.2%
Dependent packages count: 10.0%
Average: 18.3%
Dependent repos count: 21.7%
Downloads: 46.9%
Maintainers (1)
Last synced: 8 months ago

Dependencies

requirements.txt pypi
  • audio2numpy *
  • ffmpeg *
  • requests *
  • scipy *
  • tokenizers *
  • torch >=1.8
  • torchaudio >0.9
  • tqdm *
  • transformers *
setup.py pypi
  • audio2numpy *
  • ffmpeg *
  • requests *
  • scipy *
  • tokenizers *
  • torch >=1.8
  • torchaudio >0.9
  • tqdm *
  • transformers *