https://github.com/ccoreilly/deepspeech-catala
Deepspeech ASR Model for the Catalan Language
Science Score: 13.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found codemeta.json file)
- ○ .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity (2.9%) to scientific vocabulary)
Repository
Deepspeech ASR Model for the Catalan Language
Statistics
- Stars: 17
- Watchers: 6
- Forks: 0
- Open Issues: 0
- Releases: 11
Metadata Files
README.md
Deepspeech Català
An ASR model created with the Mozilla DeepSpeech engine. For a comparison with other Catalan ASR models, check the Catalan Speech Recognition Benchmark. You can download the latest version here.
Motivation
The main motivation is to learn, so the model evolves constantly as I keep experimenting, but also to contribute to improving the presence of Catalan in free and open speech technologies.
Usage
Download the model and the scorer and use the deepspeech inference engine to infer the text of an audio file (16 kHz mono WAV):
$ pip install deepspeech
$ deepspeech --model deepspeech-catala.pbmm --scorer kenlm.scorer --audio file.wav
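The same inference can also be run from Python. A minimal sketch using the deepspeech package's API (Model, enableExternalScorer, stt), assuming the model and scorer files have been downloaded with the names used in the command above:

```python
import wave

import numpy as np
from deepspeech import Model

# Load the acoustic model and attach the external KenLM scorer.
ds = Model("deepspeech-catala.pbmm")
ds.enableExternalScorer("kenlm.scorer")

# Read a 16 kHz mono 16-bit WAV file into a NumPy buffer of raw samples.
with wave.open("file.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

# Run speech-to-text and print the transcription.
print(ds.stt(audio))
```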
Corpora used
The model comparison table refers to the following Catalan speech corpora. Some were used to train models, while others were used exclusively for evaluation.
- CV4: Common Voice Corpus 4 (ca, 295 h, 2019-12-10) [link]
- CV5.1: Common Voice Corpus 5.1 (ca, 579 h, 2020-06-22) [link]
- CV6.1: Common Voice Corpus 6.1 (ca, 748 h, 2020-12-11) [link]
- PPC: ParlamentParla Clean by Col·lectivaT [link]
- FC: FestCat [link]
- GC: Google Crowdsourced [link]
- SJ: A private corpus based on the audiobook La llegenda de Sant Jordi by Care Santos and Dani Cruz
Language models (Scorer)
Also called the "Scorer" in DeepSpeech, since it "scores" the probability of one word following another. The language models commonly used in speech recognition are N-grams, which represent the probability of word sequences of length n, where 1 ≤ n ≤ N.
The same acoustic model will give different results depending on the language model used, so it is advisable to adapt the language model to the linguistic domain of the application. While training and evaluating the different models I have tried several language models based on datasets that can be found in the lm directory of this repository.
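To illustrate the N-gram idea, here is a toy sketch of estimating bigram probabilities by maximum likelihood. This is only an illustration of the concept, not the KenLM implementation DeepSpeech uses, which adds smoothing and backoff:

```python
from collections import Counter

# Toy corpus; a real scorer is trained on millions of sentences.
corpus = "el gat dorm . el gos dorm . el gat corre".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(w1: str, w2: str) -> float:
    """Maximum-likelihood estimate P(w2 | w1) = count(w1 w2) / count(w1)."""
    return bigrams[(w1, w2)] / unigrams[w1]

print(bigram_prob("el", "gat"))  # 2/3: "gat" follows "el" in 2 of its 3 occurrences
```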
Model comparison
Below is a comparison of the different model versions, the corpus and scorer used by each, and the evaluation results (WER).
Versions prior to 0.4.0 used an alphabet without accented vowels and are therefore not included in the comparison.
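For reference, WER (word error rate) is the word-level edit distance between the hypothesis and the reference transcription: WER = (S + D + I) / N, where S, D and I are substituted, deleted and inserted words and N is the number of reference words. A minimal sketch of the computation (a hypothetical helper, not the evaluation code used for these tables):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edit distance between ref[:i] and hyp[:j]
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

print(wer("bon dia a tothom", "bon dia tothom"))  # 0.25: one deletion over 4 words
```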
WER on each model's test dataset
Each model's test dataset is different, so these figures cannot be compared across models; they are included for documentation purposes.
| Model | Base model | Dropped layers | DeepSpeech version | Corpus | Scorer | WER |
| ----- | ------------- | -------------- | ------------------ | ---------------- | ------ | ------ |
| 0.4 | English 0.7.0 | 1 | 0.7.0 | CV4 | Oscar | 30.16% |
| 0.5 | English 0.7.0 | 1 | 0.7.0 | CV4 | Oscar | 29.66% |
| 0.6 | English 0.7.0 | 1 | 0.7.0 | CV4 + PPC | Oscar | 13.85% |
| 0.7 | English 0.7.2 | 1 | 0.7.0 | CV4 + PPC + FC | TV3 | 16.95% |
| 0.8 | English 0.8.0 | 1 | 0.8.0 | CV5.1 + PPC + FC | TV3 | 19.35% |
| 0.9 | none | - | 0.8.0 | CV5.1 + PPC + FC | TV3 | 20.12% |
| 0.10 | English 0.8.0 | 3 | 0.8.0 | CV5.1 + PPC + FC | TV3 | 19.07% |
| 0.11 | English 0.8.0 | 1 | 0.8.0 | CV5.1 + PPC + FC | Oscar | 15.81% |
| 0.12 | English 0.8.0 | 1 | 0.8.0 | CV5.1 + PPC | Oscar | 14.06% |
| 0.13 | Catalan 0.12 | 0 | 0.9.2 | CV6.1 + PPC | Oscar | 12.44% |
| 0.14 | English 0.9.2 | 1 | 0.9.2 | CV6.1 + PPC | Oscar | 13.29% |
WER on the Google Crowdsourced corpus
| Model | Base model | Dropped layers | DeepSpeech version | Corpus | Scorer | WER |
| ----- | ------------- | -------------- | ------------------ | ---------------- | ------ | ------ |
| 0.6 | English 0.7.0 | 1 | 0.7.0 | CV4 + PPC | Oscar* | 12.75% |
| 0.7 | English 0.7.2 | 1 | 0.7.0 | CV4 + PPC + FC | TV3 | 21.69% |
| 0.8 | English 0.8.0 | 1 | 0.8.0 | CV5.1 + PPC + FC | TV3 | 14.47% |
| 0.9 | none | - | 0.8.0 | CV5.1 + PPC + FC | TV3 | 31.88% |
| 0.10 | English 0.8.0 | 3 | 0.8.0 | CV5.1 + PPC + FC | TV3 | 16.05% |
| 0.11 | English 0.8.0 | 1 | 0.8.0 | CV5.1 + PPC + FC | Oscar* | 29.93% |
| 0.12 | English 0.8.0 | 1 | 0.8.0 | CV5.1 + PPC | Oscar | 17.34% |
| 0.13 | Catalan 0.12 | 0 | 0.9.2 | CV6.1 + PPC | Oscar* | 9.07% |
| 0.14 | English 0.9.2 | 1 | 0.9.2 | CV6.1 + PPC | Oscar* | 9.05% |
(*) The Oscar scorer contains probabilities extracted from this dataset's transcriptions, so the WER is biased.
WER on the Sant Jordi corpus
| Model | Base model | Dropped layers | DeepSpeech version | Corpus | Scorer | WER |
| ----- | ------------- | -------------- | ------------------ | ---------------- | ------ | ------ |
| 0.6 | English 0.7.0 | 1 | 0.7.0 | CV4 + PPC | Oscar | 28.45% |
| 0.7 | English 0.7.2 | 1 | 0.7.0 | CV4 + PPC + FC | TV3 | 44.88% |
| 0.8 | English 0.8.0 | 1 | 0.8.0 | CV5.1 + PPC + FC | TV3 | 54.31% |
| 0.9 | none | - | 0.8.0 | CV5.1 + PPC + FC | TV3 | 50.10% |
| 0.10 | English 0.8.0 | 3 | 0.8.0 | CV5.1 + PPC + FC | TV3 | 46.89% |
| 0.11 | English 0.8.0 | 1 | 0.8.0 | CV5.1 + PPC + FC | Oscar | 45.89% |
| 0.12 | English 0.8.0 | 1 | 0.8.0 | CV5.1 + PPC | Oscar | 22.65% |
| 0.13 | Catalan 0.12 | 0 | 0.9.2 | CV6.1 + PPC | Oscar | 20.04% |
| 0.14 | English 0.9.2 | 1 | 0.9.2 | CV6.1 + PPC | Oscar | 18.84% |
Possible next steps
- Expand the training data corpus
- Optimize the model parameters
- Evaluate the model with a more varied corpus (dialectal variants, noise, informal contexts)
Deepspeech Catalan ASR Model
Motivation
The main motivation of this project is to learn how to create ASR models using Mozilla's DeepSpeech engine, so the model is constantly evolving. Moreover, I wanted to see what was possible with the currently released Common Voice Catalan dataset.
Usage
Download the model and the scorer and use the deepspeech engine to infer text from an audio file (16 kHz mono WAV):
$ pip install deepspeech==0.7.1
$ deepspeech --model deepspeech-catala-0.6.0.pbmm --scorer kenlm.scorer --audio file.wav
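DeepSpeech is sensitive to the input format, so it can help to sanity-check a file before inference. A small sketch using only the standard library, assuming the same file.wav as above:

```python
import wave

# Verify the input matches what the model was trained on:
# 16 kHz sample rate, one channel, 16-bit (2-byte) PCM samples.
with wave.open("file.wav", "rb") as wav:
    assert wav.getframerate() == 16000, "expected a 16 kHz sample rate"
    assert wav.getnchannels() == 1, "expected mono audio"
    assert wav.getsampwidth() == 2, "expected 16-bit PCM samples"
```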
Model comparison
What follows is a comparison of the different published model versions, the dataset used and the accuracy of each model.
Test corpus from the ParlamentParla dataset
Note: For version 0.6.0 the whole CommonVoice dataset (train, dev and test files) was combined with the clean dataset of ParlamentParla, shuffled and split into train/dev/test files using a 75/20/5 ratio. Due to this fact, a comparison between the models can only be made by using 1713 sentences from the ParlamentParla dataset not seen by any model during training.
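A minimal sketch of such a shuffle-and-split, assuming DeepSpeech-style CSV file lists (the filenames and the fixed seed are hypothetical) and pandas:

```python
import pandas as pd

# Combine both corpora, shuffle, and split 75/20/5 into train/dev/test.
df = pd.concat([pd.read_csv("commonvoice.csv"), pd.read_csv("parlament_clean.csv")])
df = df.sample(frac=1, random_state=42).reset_index(drop=True)  # shuffle all rows

n = len(df)
train_end = int(n * 0.75)
dev_end = train_end + int(n * 0.20)

df[:train_end].to_csv("train.csv", index=False)
df[train_end:dev_end].to_csv("dev.csv", index=False)
df[dev_end:].to_csv("test.csv", index=False)
```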
| Model | Corpus | Augmentation | WER | CER | Loss |
| ----------------------- | --------------------------------- | ------------ | ------ | ------ | ------ |
| deepspeech-catala@0.4.0 | CommonVoice | No | 30.16% | 13.79% | 112.96 |
| deepspeech-catala@0.5.0 | CommonVoice | Yes | 29.66% | 13.84% | 108.52 |
| deepspeech-catala@0.6.0 | CommonVoice + ParlamentParlaClean | No | 13.85% | 5.62% | 50.49 |
| stashify@deepspeech_cat | CommonVoice? | Yes | 22.62% | 13.59% | 80.45 |
Test corpus from the FestCat dataset
| Model | Corpus | Augmentation | WER | CER | Loss |
| ----------------------- | --------------------------------- | ------------ | ------ | ------ | ------ |
| deepspeech-catala@0.4.0 | CommonVoice | No | 77.60% | 65.62% | 243.25 |
| deepspeech-catala@0.5.0 | CommonVoice | Yes | 78.12% | 65.61% | 235.60 |
| deepspeech-catala@0.6.0 | CommonVoice + ParlamentParlaClean | No | 76.10% | 65.16% | 240.69 |
| stashify@deepspeech_cat | CommonVoice? | Yes | 80.58% | 66.82% | 180.81 |
Validating the models against the FestCat dataset shows that the models do not generalize well. This corpus has a higher variability in the word count of the test sentences, with 90% of the sentences containing an evenly distributed number of words between 2 and 23, whilst most of the sentences in the CommonVoice corpus contain between 3 and 16 words.
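A sketch of the corresponding filter, assuming a DeepSpeech-style test CSV with a transcript column (the column and file names are assumptions):

```python
import pandas as pd

# Keep only test sentences with at least 4 words; very short utterances
# give the language model little context and inflate the WER.
df = pd.read_csv("test.csv")
df = df[df["transcript"].str.split().str.len() >= 4]
df.to_csv("test_4plus_words.csv", index=False)
```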
As expected, validating the models against a test set containing only sentences with 4 or more words improves accuracy:
| Model | Corpus | Augmentation | WER | CER | Loss |
| ----------------------- | --------------------------------- | ------------ | ------ | ------ | ------ |
| deepspeech-catala@0.4.0 | CommonVoice | No | 58.78% | 46.61% | 193.85 |
| deepspeech-catala@0.5.0 | CommonVoice | Yes | 58.94% | 46.47% | 188.42 |
| deepspeech-catala@0.6.0 | CommonVoice + ParlamentParlaClean | No | 56.68% | 46.00% | 189.03 |
| stashify@deepspeech_cat | CommonVoice? | Yes | 61.11% | 48.16% | 144.78 |
Possible next steps
- Expand the training data with other free datasets
- Tune the model parameters to improve performance
- Validate the models with more varied test datasets (dialects, noise)
Owner
- Name: Ciaran O'Reilly
- Login: ccoreilly
- Kind: user
- Location: Berlin
- Company: @parloa
- Website: https://oreilly.cat
- Repositories: 51
- Profile: https://github.com/ccoreilly
Committers
Last synced: over 1 year ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Ciaran O'Reilly | c****n@o****t | 17 |
Issues and Pull Requests
Last synced: 10 months ago
All Time
- Total issues: 1
- Total pull requests: 0
- Average time to close issues: 7 months
- Average time to close pull requests: N/A
- Total issue authors: 1
- Total pull request authors: 0
- Average comments per issue: 1.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- bloodbare (1)