allophant

A multilingual phoneme recognizer capable of generalizing zero-shot to unseen phoneme inventories.

https://github.com/kgnlp/allophant

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.5%) to scientific vocabulary

Keywords

cross-lingual machine-learning multilingual neural-networks phoneme-recognition speech-recognition zero-shot
Last synced: 6 months ago

Repository

A multilingual phoneme recognizer capable of generalizing zero-shot to unseen phoneme inventories.

Basic Info
  • Host: GitHub
  • Owner: kgnlp
  • License: mit
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 1.3 MB
Statistics
  • Stars: 25
  • Watchers: 5
  • Forks: 2
  • Open Issues: 0
  • Releases: 1
Topics
cross-lingual machine-learning multilingual neural-networks phoneme-recognition speech-recognition zero-shot
Created over 2 years ago · Last pushed 11 months ago
Metadata Files
Readme License Citation

README.md

Allophant

Allophant is a multilingual phoneme recognizer trained on spoken sentences in 34 languages, capable of generalizing zero-shot to unseen phoneme inventories.

This implementation was used in our INTERSPEECH 2023 paper "Allophant: Cross-lingual Phoneme Recognition with Articulatory Attributes" (see the Citation section below).

Checkpoints

Pre-trained checkpoints for all evaluated models can be found on Hugging Face:

| Model Name       | UCLA Phonetic Corpus (PER) | UCLA Phonetic Corpus (AER) | Common Voice (PER) | Common Voice (AER) |
| ---------------- | -------------------------: | -------------------------: | -----------------: | -----------------: |
| Multitask        | 45.62% | 19.44% | 34.34% | 8.36% |
| Hierarchical     | 46.09% | 19.18% | 34.35% | 8.56% |
| Multitask Shared | 46.05% | 19.52% | 41.20% | 8.88% |
| Baseline Shared  | 48.25% | -      | 45.35% | -     |
| Baseline         | 57.01% | -      | 46.95% | -     |

Note that our baseline models were trained without phonetic feature classifiers and therefore only support phoneme recognition.

Result Files

JSON files containing detailed error rates and statistics for all languages can be found in the interspeech_results directory. Results on the UCLA Phonetic Corpus are stored in files ending in "ucla", while files containing results on the training subset of languages from Mozilla Common Voice end in "commonvoice". See Error Rates for more information.

Installation

System Dependencies

For most Linux and macOS systems, pre-built binaries are available via pip. For installation on other platforms or when building from source, a Rust compiler is required for building the native pyo3 extension. Rustup is recommended for managing Rust installations.

Optional

Torchaudio MP3 support requires ffmpeg to be installed on the system, e.g. for Debian-based Linux distributions:

```bash
sudo apt update && sudo apt install ffmpeg
```

For transcribing training and evaluation data with eSpeak NG G2P, the espeak-ng package is required:

```bash
sudo apt install espeak-ng
```

We transcribed Common Voice with eSpeak NG version 1.51 for our paper.

Allophant Package

Allophant can be installed via pip:

```bash
pip install allophant
```

Note that the package currently requires Python >= 3.10 and was tested on 3.12. For use on a GPU, torch and torchaudio may need to be installed manually for your required CUDA or ROCm version (see the PyTorch installation instructions).
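As a quick check after installation, the short snippet below (a generic sketch, not part of allophant) prints the installed torch and torchaudio versions and whether a CUDA device is visible, which helps confirm that manually installed builds match your accelerator:

```python
import torch
import torchaudio

# Print the installed builds and check whether a CUDA/ROCm device is visible.
# This is only a sanity check; allophant itself does not require it.
print("torch:", torch.__version__)
print("torchaudio:", torchaudio.__version__)
print("GPU available:", torch.cuda.is_available())
```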

For development, an editable package can be installed as follows:

```bash
git clone https://github.com/kgnlp/allophant
cd allophant
pip install -e .
```

Usage

Inference With Pre-trained Models

A pre-trained model can be loaded with the allophant package from a Hugging Face checkpoint or a local file:

```python
from allophant.estimator import Estimator

device = "cpu"
model, attribute_indexer = Estimator.restore("kgnlp/allophant", device=device)
supported_features = attribute_indexer.feature_names
# The phonetic feature categories supported by the model, including "phonemes"
print(supported_features)
```

Allophant supports decoding custom phoneme inventories, which can be constructed in multiple ways:

```python
# 1. For a single language:
inventory = attribute_indexer.phoneme_inventory("es")

# 2. For multiple languages, e.g. in code-switching scenarios
inventory = attribute_indexer.phoneme_inventory(["es", "it"])

# 3. Any custom selection of phones for which features are available in the Allophoible database
inventory = ['a', 'ai̯', 'au̯', 'b', 'e', 'eu̯', 'f', 'ɡ', 'l', 'ʎ', 'm', 'ɲ', 'o', 'p', 'ɾ', 's', 't̠ʃ']
```

Audio files can then be loaded, resampled and transcribed using the given inventory by first computing the log probabilities for each classifier:

```python
import torch
import torchaudio

from allophant.dataset_processing import Batch

# Load an audio file and resample the first channel to the sample rate used by the model
audio, sample_rate = torchaudio.load("utterance.wav")
audio = torchaudio.functional.resample(audio[:1], sample_rate, model.sample_rate)

# Construct a batch of 0-padded single channel audio, lengths and language IDs
# Language ID can be 0 for inference
batch = Batch(audio, torch.tensor([audio.shape[1]]), torch.zeros(1))
model_outputs = model.predict(
    batch.to(device),
    attribute_indexer.composition_feature_matrix(inventory).to(device),
)
```

Finally, the log probabilities can be decoded into the recognized phonemes or phonetic features:

```python
from allophant import predictions

# Create a feature mapping for your inventory and CTC decoders for the desired feature set
inventory_indexer = attribute_indexer.attributes.subset(inventory)
ctc_decoders = predictions.feature_decoders(inventory_indexer, feature_names=supported_features)

for feature_name, decoder in ctc_decoders.items():
    decoded = decoder(model_outputs.outputs[feature_name].transpose(1, 0), model_outputs.lengths)
    # Print the feature name and values for each utterance in the batch
    for [hypothesis] in decoded:
        # NOTE: token indices are offset by one due to the blank token used during decoding
        recognized = inventory_indexer.feature_values(feature_name, hypothesis.tokens - 1)
        print(feature_name, recognized)
```
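If only phoneme transcriptions are needed, the same calls can be restricted to the phoneme classifier. The following is a minimal sketch continuing the example above; it assumes that feature_decoders accepts a single-element feature_names list:

```python
# Sketch: decode only the "phonemes" classifier (assumes a feature_names subset is accepted)
phoneme_decoders = predictions.feature_decoders(inventory_indexer, feature_names=["phonemes"])
decoded = phoneme_decoders["phonemes"](
    model_outputs.outputs["phonemes"].transpose(1, 0), model_outputs.lengths
)
for [hypothesis] in decoded:
    # Token indices are offset by one because of the CTC blank token
    print(inventory_indexer.feature_values("phonemes", hypothesis.tokens - 1))
```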

Configuration

To specify options for preprocessing, training, and the model architecture, a configuration file in TOML format can be passed to most commands. For automation purposes, JSON configuration files can be used instead with the --config-json-data/-j flag. To start, a default configuration file with comments can be generated as follows:

```bash
allophant generate-config [path/to/config]
```

Preprocessing

The allophant-data command contains all functionality for corpus processing and management available in allophant. For training, corpora without phoneme-level transcriptions have to be transcribed beforehand with a grapheme-to-phoneme model.

Transcription

Phoneme transcriptions for a supported corpus format can be generated with the transcribe sub-command. For example, to transcribe the German and English subsets of a corpus with eSpeak NG and PHOIBLE features from Allophoible, using a batch size of 512 and at most 15,000 utterances per language:

```bash
allophant-data transcribe -p -e espeak-ng -b 512 -l de,en -t 15000 -f phoible path/to/corpus -o transcribed_data
```

Note that no audio data is moved or copied in this process. All commands that load corpora also accept a path to the *.bin transcription file directly instead of a directory. This allows loading only specific splits, such as loading only the test split for evaluation.

Utterance Lengths

As an optional step, utterance lengths can be extracted from a transcribed corpus for more memory efficient batching. If a subset of the corpus was transcribed, lengths will only be stored for the transcribed utterances.

```bash
allophant-data save-lengths [-c /path/to/config.toml] path/to/transcribed_corpus path/to/output
```

Training

During training, the best checkpoint is saved after each evaluation step to the path provided via the --save-path/-s flag. To save every checkpoint instead, a directory needs to be passed to --save-path/-s and the --save-all/-a flag included. The number of worker threads is auto-detected from the number of available CPU threads but can be set manually with -w number. To train only on the CPU instead of using CUDA, the --cpu flag can be used. Finally, any progress logging to stderr can be disabled with --no-progress.

```bash
allophant train [-c /path/to/config.toml] [-w number] [--cpu] [--no-progress] [--save-all] [-s /path/to/checkpoint.pt] [-l /path/to/lengths] path/to/transcribed_corpus
```

Note that at least the --lengths/-l flag with a path to previously computed utterance lengths has to be specified when the "frames" batching mode is enabled.

Evaluation

Test Data Inference

For evaluation, test data can be transcribed with the predict sub-command. The resulting file contains metadata, transcriptions for phonemes and features, and gold standard labels from the test data.

```bash
allophant predict [--cpu] [-w number] [-t {ucla-phonetic,common-voice}] [-f phonemes,feature1,feature2] [--fix-unicode] [--training-languages {include,exclude,only}] [-m {frames,utterances}] [-s number] [--language-phonemes] [--no-progress] [-c] [-o /path/to/prediction_file.jsonl] /path/to/dataset huggingface/model_id or /path/to/checkpoint
```

Use --dataset-type/-t to select the data set type. Note that only Common Voice and the UCLA Phonetic Corpus are currently supported. Predictions will either be printed to stdout or saved to a file given by --output/-o. Gzip compression is either inferred from a ".jsonl.gz" extension or can be forced with the --compress/-c flag. The --training-languages argument allows filtering utterances based on the languages that also occur in the training data, and should be set to "exclude" for zero-shot evaluation.

Using --feature-subset/-f, a comma separated list of features or "phoneme" such as syllabic,round,phoneme can be provided to predict only the given subset of classes. With the --fix-unicode option, predict attempts to resolve issues of phonemes from the test data missing from the database due to differences in their unicode binary representation.

The batch sizes defined in the model configuration for training can be overridden with the --batch-size/-s and --batch-mode/-m flags. Note that if the batch mode is set to "utterance", either in the model configuration or via the --batch-mode/-m flag, utterance lengths have to be provided via the --lengths/-l argument. A beam size can be specified for CTC decoding with beam search (--ctc-beam/-b). We used a beam size of 1 for greedy decoding in our paper.
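Since the output is plain JSON Lines (optionally gzip-compressed), prediction files can also be inspected without allophant. The sketch below only assumes the file layout described above; the exact field names depend on the model and feature subset, so it simply prints the keys of the first record. The path is hypothetical:

```python
import gzip
import json

path = "predictions.jsonl.gz"  # hypothetical path passed to `allophant predict -o`

# Use gzip transparently when the file carries a .gz extension
opener = gzip.open if path.endswith(".gz") else open
with opener(path, "rt", encoding="utf-8") as file:
    first_record = json.loads(next(file))

# Inspect which fields the prediction records contain
print(sorted(first_record))
```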

Error Rates

The evaluate sub-command computes edit statistics and phoneme and attribute error rates for each language of a given corpus or split.

```bash
allophant evaluate [--fix-unicode] [--no-remap] [--split-complex] [-j] [--no-progress] [-o path/to/results.json] [-d] path/to/predictions.jsonl
```

Without --no-remap, transcriptions are mapped to language inventories using the same mapping scheme used during training. In our paper, all results were computed without this mapping, meaning that the transcriptions were directly compared to labels without an additional mapping step. If --fix-unicode was used during prediction, it should also be used in evaluate. Evaluation supports splitting any complex phoneme segments before computing error statistics with the --split-complex/-s flag.

For further analysis of evaluation results, JSON output should be enabled via the --json/-j flag. The JSON file can then be read using allophant.evaluation.EvaluationResults. For quick inspection of human-readable (average) error rates from evaluation results saved in JSON format, use allophant-error-rates:

```bash
allophant-error-rates path/to/results_file
```
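The exact schema of the results file is not documented here, so the sketch below only loads the raw JSON with the standard library and lists its top-level keys; allophant.evaluation.EvaluationResults, mentioned above, is the supported typed interface. The file name is hypothetical:

```python
import json

# Hypothetical path to a results file written by `allophant evaluate -j -o results.json`
with open("results.json", encoding="utf-8") as file:
    results = json.load(file)

# Show the top-level structure before drilling into per-language error rates
print(sorted(results))
```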

Allophoible Inventories

Inventories and feature sets preprocessed from (a subset of) Allophoible for training can be extracted with the allophant-features command.

```bash
allophant-features [-p /path/to/allophoible.csv] [--remove-zero] [--prefer-allophant-dialects] [-o /path/to/output.csv] [en,fr,ko,ar,...]
```

Citation

When using our work, please cite our paper as follows:

```bibtex
@inproceedings{glocker2023allophant,
  title={Allophant: Cross-lingual Phoneme Recognition with Articulatory Attributes},
  author={Glocker, Kevin and Herygers, Aaricia and Georges, Munir},
  year={2023},
  booktitle={{Proc. Interspeech 2023}},
  month={8}
}
```

Owner

  • Name: Kevin G
  • Login: kgnlp
  • Kind: user
  • Location: Ingolstadt, Germany

PhD Researcher, Natural Language Understanding

Citation (CITATION.cff)

cff-version: 1.2.0
message: If you use this software, please cite the article from preferred-citation.
authors:
  - family-names: Glocker
    given-names: Kevin
    orcid: "https://orcid.org/0009-0001-9364-0298"
  - family-names: Herygers
    given-names: Aaricia
    orcid: "https://orcid.org/0009-0004-5830-7571"
  - family-names: Georges
    given-names: Munir
    orcid: "https://orcid.org/0000-0002-5542-149X"
title: 'Allophant: Cross-lingual Phoneme Recognition with Articulatory Attributes'
version: 1.0.0
date-released: '2024-10-20'
preferred-citation:
  authors:
    - family-names: Glocker
      given-names: Kevin
    - family-names: Herygers
      given-names: Aaricia
    - family-names: Georges
      given-names: Munir
  title: 'Allophant: Cross-lingual Phoneme Recognition with Articulatory Attributes'
  type: conference-paper
  year: '2023'
  collection-title: Proc. Interspeech 2023
  conference: {}
  publisher: {}

GitHub Events

Total
  • Create event: 1
  • Release event: 1
  • Issues event: 4
  • Watch event: 15
  • Issue comment event: 3
  • Push event: 9
  • Pull request event: 1
  • Fork event: 2
Last Year
  • Create event: 1
  • Release event: 1
  • Issues event: 4
  • Watch event: 15
  • Issue comment event: 3
  • Push event: 9
  • Pull request event: 1
  • Fork event: 2

Issues and Pull Requests

Last synced: 8 months ago

All Time
  • Total issues: 3
  • Total pull requests: 1
  • Average time to close issues: 4 months
  • Average time to close pull requests: 8 minutes
  • Total issue authors: 3
  • Total pull request authors: 1
  • Average comments per issue: 0.67
  • Average comments per pull request: 1.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 2
  • Pull requests: 1
  • Average time to close issues: 3 days
  • Average time to close pull requests: 8 minutes
  • Issue authors: 2
  • Pull request authors: 1
  • Average comments per issue: 1.0
  • Average comments per pull request: 1.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • luh-t-to (1)
  • Ryu1845 (1)
  • MichaelTheSlav (1)
Pull Request Authors
  • mohsen-goodarzi (2)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi: 97 last month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 1
  • Total maintainers: 1
pypi.org: allophant

A multilingual phoneme recognizer capable of generalizing zero-shot to unseen phoneme inventories.

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 97 last month
Rankings
Dependent packages count: 10.1%
Average: 33.6%
Dependent repos count: 57.1%
Maintainers (1)
Last synced: 6 months ago