arranger

Official Implementation of "Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music" (ISMIR 2021)

https://github.com/salu133445/arranger

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.2%) to scientific vocabulary

Keywords

machine-learning music music-information-retrieval tensorflow
Last synced: 6 months ago

Repository

Official Implementation of "Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music" (ISMIR 2021)

Basic Info
Statistics
  • Stars: 59
  • Watchers: 3
  • Forks: 8
  • Open Issues: 1
  • Releases: 0
Topics
machine-learning music music-information-retrieval tensorflow
Created about 5 years ago · Last pushed over 2 years ago
Metadata Files
Readme Funding License Citation

README.md

Arranger

This repository contains the official implementation of "Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music" (ISMIR 2021).

Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music
Hao-Wen Dong, Chris Donahue, Taylor Berg-Kirkpatrick and Julian McAuley
Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2021
[homepage] [paper] [video] [slides] [video (long)] [slides (long)] [code]

Content

Prerequisites

You can install the dependencies by running pipenv install (recommended) or pip3 install -e . (an editable install from the repository's setup.py). Python > 3.6 is required.
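Since the scripts require Python > 3.6, it can help to verify the interpreter before installing. This is a minimal sketch, not part of the repository; the function name is illustrative.

```python
import sys

def check_python_version(required=(3, 6)):
    """Return True when the running interpreter is newer than `required`,
    matching the project's "Python > 3.6" note. Illustrative helper only."""
    return tuple(sys.version_info[:3]) > required

if not check_python_version():
    raise SystemExit("Python > 3.6 is required to run the arranger scripts.")
```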

Directory structure

```text
├─ analysis        Notebooks for analysis
├─ scripts         Scripts for running experiments
├─ models          Pretrained models
└─ arranger        Main Python module
   ├─ config.yaml  Configuration file
   ├─ data         Code for collecting and processing data
   ├─ common       Most-common algorithm
   ├─ zone         Zone-based algorithm
   ├─ closest      Closest-pitch algorithm
   ├─ lstm         LSTM model
   └─ transformer  Transformer model
```

Data Collection

Bach Chorales

```python
# Collect Bach chorales from the music21 corpus
import shutil

import music21.corpus

for path in music21.corpus.getComposer("bach"):
    if path.suffix in (".mxl", ".xml"):
        shutil.copyfile(path, "data/bach/raw/" + path.name)
```

MusicNet

```sh
# Download the metadata
wget -O data/musicnet/musicnet_metadata.csv https://homes.cs.washington.edu/~thickstn/media/musicnet_metadata.csv
```

NES Music Database

```sh
# Download the dataset
wget -O data/nes/nesmdb_midi.tar.gz http://deepyeti.ucsd.edu/cdonahue/nesmdb/nesmdb_midi.tar.gz

# Extract the archive
tar zxf data/nes/nesmdb_midi.tar.gz -C data/nes/

# Rename the folder for consistency
mv data/nes/nesmdb_midi/ data/nes/raw/
```

Lakh MIDI Dataset (LMD)

```sh
# Download the dataset
wget -O data/lmd/lmd_matched.tar.gz http://hog.ee.columbia.edu/craffel/lmd/lmd_matched.tar.gz

# Extract the archive
tar zxf data/lmd/lmd_matched.tar.gz -C data/lmd/

# Rename the folder for consistency
mv data/lmd/lmd_matched/ data/lmd/raw/

# Download the filenames
wget -O data/lmd/md5_to_paths.json http://hog.ee.columbia.edu/craffel/lmd/md5_to_paths.json
```

Data Preprocessing

The following commands assume Bach chorales. You might want to replace the dataset identifier bach with identifiers of other datasets (musicnet for MusicNet, nes for NES Music Database and lmd for Lakh MIDI Dataset).

```sh
# Preprocess the data
python3 arranger/data/collect_bach.py -i data/bach/raw/ -o data/bach/json/ -j 1

# Collect training data
python3 arranger/data/collect.py -i data/bach/json/ -o data/bach/s500m_10/ -d bach -s 500 -m 10 -j 1
```
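Since only the dataset identifier changes between corpora, the preprocessing step can be scripted over all four datasets. The sketch below is a dry run that merely prints each command; it assumes every dataset has a matching collect_<identifier>.py script, as collect_bach.py suggests, so verify the script names before executing for real.

```sh
# Build the preprocessing command for one dataset identifier (dry run).
print_preprocess_cmd() {
  echo "python3 arranger/data/collect_$1.py -i data/$1/raw/ -o data/$1/json/ -j 1"
}

# Print the command for every dataset instead of executing it.
for dataset in bach musicnet nes lmd; do
  print_preprocess_cmd "$dataset"
done
```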

Models

  • LSTM model
    • arranger/lstm/train.py: Train the LSTM model
    • arranger/lstm/infer.py: Infer with the LSTM model
  • Transformer model
    • arranger/transformer/train.py: Train the Transformer model
    • arranger/transformer/infer.py: Infer with the Transformer model

Pretrained Models

Pretrained models can be found in the models/ directory.

To run a pretrained model, please pass the corresponding command line options to the infer.py scripts. You may want to follow the commands used in the experiment scripts provided in scripts/infer_*.sh.

For example, use the following command to run the pretrained BiLSTM model with embeddings.

```sh
# Assuming we are at the root of the repository
cp models/bach/lstm/bidirectional_embedding/best_models.hdf5 {OUTPUT_DIRECTORY}
python3 arranger/lstm/infer.py \
  -i {INPUT_DIRECTORY} -o {OUTPUT_DIRECTORY} \
  -d bach -g 0 -bi -pe -bp -be -fi
```

The input directory (INPUT_DIRECTORY) contains the input JSON files, which can be generated by muspy.save(). The output directory (OUTPUT_DIRECTORY) should contain the pretrained model and will receive the output files. The -d bach option selects the Bach chorale dataset, and -g 0 runs the model on the first GPU. The options -bi -pe -bp -be -fi specify the model configuration (run python3 arranger/lstm/infer.py -h for more information).

Baseline algorithms

  • Most-common algorithm
    • arranger/common/learn.py: Learn the most common label
    • arranger/common/infer.py: Infer with the most-common algorithm
  • Zone-based algorithm
    • arranger/zone/learn.py: Learn the optimal zone setting
    • arranger/zone/infer.py: Infer with the zone-based algorithm
  • Closest-pitch algorithm
    • arranger/closest/infer.py: Infer with the closest-pitch algorithm
  • MLP model
    • arranger/mlp/train.py: Train the MLP model
    • arranger/mlp/infer.py: Infer with the MLP model
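As a rough illustration of the closest-pitch idea, each incoming note could be assigned to the track whose most recent pitch is nearest. This is a self-contained sketch of that greedy rule, not the repository's implementation in arranger/closest/infer.py; the function and parameter names are invented for illustration.

```python
def assign_closest_pitch(pitches, track_pitches):
    """Greedy sketch: assign each pitch to the track whose last pitch is
    closest, then update that track's running pitch.

    `pitches` is a sequence of MIDI pitch numbers in time order;
    `track_pitches` maps track name -> last pitch heard on that track.
    Illustrative only; the real baseline may break ties differently."""
    assignments = []
    for pitch in pitches:
        track = min(track_pitches, key=lambda t: abs(track_pitches[t] - pitch))
        assignments.append(track)
        track_pitches[track] = pitch  # the track now "holds" this pitch
    return assignments
```

For example, with running pitches {"melody": 72, "bass": 40}, the notes [60, 41] are assigned to melody (|72 - 60| < |40 - 60|) and then bass (|40 - 41| < |60 - 41|).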

Configuration

In arranger/config.yaml, you can configure the MIDI program numbers used for each track in the sample files generated. You can also configure the color of the generated sample piano roll visualization.
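A hypothetical fragment showing the kind of settings this implies; the key names below are illustrative guesses, not the file's actual schema, so consult the shipped arranger/config.yaml for the real keys.

```yaml
# Illustrative only -- see arranger/config.yaml for the actual schema.
bach:
  tracks:
    Soprano:
      program: 52       # MIDI program number used in generated sample files
      color: "#1f77b4"  # color in the sample piano-roll visualization
    Bass:
      program: 52
      color: "#d62728"
```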

Citation

Please cite the following paper if you use the code provided in this repository.

Hao-Wen Dong, Chris Donahue, Taylor Berg-Kirkpatrick and Julian McAuley, "Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music," Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2021.

```bibtex
@inproceedings{dong2021arranger,
  author = {Hao-Wen Dong and Chris Donahue and Taylor Berg-Kirkpatrick and Julian McAuley},
  title = {Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music},
  booktitle = {Proceedings of the International Society for Music Information Retrieval Conference (ISMIR)},
  year = 2021,
}
```

Owner

  • Name: Hao-Wen (Herman) Dong 董皓文
  • Login: salu133445
  • Kind: user
  • Location: USA/Taiwan
  • Company: UC San Diego

Assistant Professor at University of Michigan | PhD from UC San Diego | Human-Centered Generative AI for Content Generation

Citation (CITATION.cff)

cff-version: 1.2.0
message: If you use this software, please cite it as below.
authors:
  - family-names: Dong
    given-names: Hao-Wen
title: Arranger
preferred-citation:
  type: article
  authors:
    - family-names: Dong
      given-names: Hao-Wen
    - family-names: Donahue
      given-names: Chris
    - family-names: Berg-Kirkpatrick
      given-names: Taylor
    - family-names: McAuley
      given-names: Julian
  title: Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music
  journal: Proceedings of the International Society for Music Information Retrieval Conference (ISMIR)
  year: 2021
date-released: 2021-05-20
license: MIT
url: https://salu133445.github.io/arranger/
repository-code: https://github.com/salu133445/arranger

GitHub Events

Total
  • Watch event: 4
  • Fork event: 1
Last Year
  • Watch event: 4
  • Fork event: 1

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 1
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • asigalov61 (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

Pipfile pypi
  • arranger *
requirements.txt pypi
  • Keras-Applications ==1.0.8
  • Keras-Preprocessing ==1.1.2
  • Markdown ==3.3.3
  • Pillow ==8.1.0
  • PyYAML ==5.4.1
  • Pygments ==2.7.4
  • Werkzeug ==1.0.1
  • absl-py ==0.11.0
  • appdirs ==1.4.4
  • astor ==0.8.1
  • astroid ==2.4.2
  • astunparse ==1.6.3
  • backcall ==0.2.0
  • black ==20.8b1
  • cachetools ==4.2.1
  • certifi ==2020.12.5
  • chardet ==4.0.0
  • click ==7.1.2
  • cycler ==0.10.0
  • decorator ==4.4.2
  • flake8 ==3.8.4
  • flake8-docstrings ==1.5.0
  • gast ==0.3.3
  • google-auth ==1.24.0
  • google-auth-oauthlib ==0.4.2
  • google-pasta ==0.2.0
  • grpcio ==1.35.0
  • h5py ==2.10.0
  • idna ==2.10
  • imageio ==2.9.0
  • importlib-metadata ==3.4.0
  • ipykernel ==5.4.3
  • ipython ==7.19.0
  • ipython-genutils ==0.2.0
  • isort ==5.7.0
  • jedi ==0.18.0
  • joblib ==1.0.0
  • jupyter-client ==6.1.11
  • jupyter-core ==4.7.0
  • kiwisolver ==1.3.1
  • lazy-object-proxy ==1.4.3
  • matplotlib ==3.3.4
  • mccabe ==0.6.1
  • mido ==1.2.9
  • more-itertools ==8.6.0
  • music21 ==6.5.0
  • muspy ==0.3.0
  • mypy ==0.800
  • mypy-extensions ==0.4.3
  • numpy ==1.18.5
  • oauthlib ==3.1.0
  • opt-einsum ==3.3.0
  • parso ==0.8.1
  • pathspec ==0.8.1
  • pexpect ==4.8.0
  • pickleshare ==0.7.5
  • pretty-midi ==0.2.9
  • prompt-toolkit ==3.0.14
  • protobuf ==3.14.0
  • ptyprocess ==0.7.0
  • pyasn1 ==0.4.8
  • pyasn1-modules ==0.2.8
  • pycodestyle ==2.6.0
  • pydocstyle ==5.1.1
  • pyflakes ==2.2.0
  • pylint ==2.6.0
  • pyparsing ==2.4.7
  • pypianoroll ==1.0.3
  • python-dateutil ==2.8.1
  • pyzmq ==22.0.2
  • regex ==2020.11.13
  • requests ==2.25.1
  • requests-oauthlib ==1.3.0
  • rsa ==4.7
  • scikit-learn ==0.24.1
  • scipy ==1.6.0
  • six ==1.15.0
  • snowballstemmer ==2.1.0
  • tensorboard ==2.4.1
  • tensorboard-plugin-wit ==1.8.0
  • tensorflow ==2.3.2
  • tensorflow-estimator ==2.3.0
  • termcolor ==1.1.0
  • threadpoolctl ==2.1.0
  • toml ==0.10.2
  • tornado ==6.1
  • tqdm ==4.56.0
  • traitlets ==5.0.5
  • typed-ast ==1.4.2
  • typing-extensions ==3.7.4.3
  • urllib3 ==1.26.3
  • wcwidth ==0.2.5
  • webcolors ==1.11.1
  • wrapt ==1.12.1
  • zipp ==3.4.0
setup.py pypi
  • imageio >=2.9
  • muspy >=0.3
  • scikit-learn *
  • tensorflow <2.4
pyproject.toml pypi