https://github.com/awslabs/multi-model-server

Multi Model Server is a tool for serving neural net models for inference

Science Score: 23.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    3 of 69 committers (4.3%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.0%) to scientific vocabulary

Keywords

ai deep-learning inference mxnet neural-network onnx server

Keywords from Contributors

gluon nlu labels amazon-textract unit-test tensor gluonnlp natural-language-generation natural-language-inference natural-language-understanding
Last synced: 5 months ago

Repository

Multi Model Server is a tool for serving neural net models for inference

Basic Info
  • Host: GitHub
  • Owner: awslabs
  • License: apache-2.0
  • Language: Java
  • Default Branch: master
  • Homepage:
  • Size: 36.9 MB
Statistics
  • Stars: 1,015
  • Watchers: 48
  • Forks: 232
  • Open Issues: 102
  • Releases: 24
Topics
ai deep-learning inference mxnet neural-network onnx server
Created over 8 years ago · Last pushed almost 2 years ago
Metadata Files
Readme License

README.md

Multi Model Server

Multi Model Server (MMS) is a flexible and easy to use tool for serving deep learning models trained using any ML/DL framework.

Use the MMS Server CLI, or the pre-configured Docker images, to start a service that sets up HTTP endpoints to handle model inference requests.
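
For instance, a minimal sketch of the Docker route, assuming the awsdeeplearningteam/multi-model-server image referenced by the Dockerfiles in this repo (the port mapping and model URL are illustrative; see the Docker readme for current tags and options):

```bash
# Run MMS in the pre-built container, exposing the inference port,
# and serve the SqueezeNet example model
docker run -itd --name mms -p 8080:8080 \
    awsdeeplearningteam/multi-model-server \
    multi-model-server --start \
    --models squeezenet=https://s3.amazonaws.com/model-server/model_archive_1.0/squeezenet_v1.1.mar
```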

A quick overview and examples for both serving and packaging are provided below. Detailed documentation and examples are provided in the docs folder.

Join our Slack channel to get in touch with the development team, ask questions, and find out what's cooking!

Quick Start

Prerequisites

Before proceeding further with this document, make sure you have the following prerequisites.
1. Ubuntu, CentOS, or macOS. Windows support is experimental; the following instructions cover Linux and macOS only.
2. Python - Multi Model Server requires Python to run the workers.
3. pip - pip is a Python package management system.
4. Java 8 - Multi Model Server requires Java 8 to start. You have the following options for installing Java 8:

For Ubuntu:
```bash
sudo apt-get install openjdk-8-jre-headless
```

For CentOS:
```bash
sudo yum install java-1.8.0-openjdk
```

For macOS:
```bash
brew tap homebrew/cask-versions
brew update
brew cask install adoptopenjdk8
```

Installing Multi Model Server with pip

Setup

Step 1: Setup a Virtual Environment

We recommend installing and running Multi Model Server in a virtual environment. Installing all of the Python dependencies in a virtual environment isolates them from the system and eases dependency management.

One option is to use Virtualenv, which creates isolated virtual Python environments. You may install and activate a virtualenv for Python 2.7 as follows:

```bash
pip install virtualenv
```

Then create a virtual environment:
```bash
# Assuming we want to run python2.7 in /usr/local/bin/python2.7
virtualenv -p /usr/local/bin/python2.7 /tmp/pyenv2

# Enter this virtual environment as follows
source /tmp/pyenv2/bin/activate
```

Refer to the Virtualenv documentation for further information.

Step 2: Install MXNet

MMS won't install the MXNet engine by default. If it isn't already installed in your virtual environment, you must install one of the MXNet pip packages.

For CPU inference, mxnet-mkl is recommended. Install it as follows:

```bash
# Recommended for running Multi Model Server on CPU hosts
pip install mxnet-mkl
```

For GPU inference, mxnet-cu92mkl is recommended. Install it as follows:

```bash
# Recommended for running Multi Model Server on GPU hosts
pip install mxnet-cu92mkl
```

Step 3: Install or Upgrade MMS as follows:

```bash
# Install latest released version of multi-model-server
pip install multi-model-server
```

To upgrade from a previous version of multi-model-server, please refer to the migration reference document.

Notes:
  • A minimal version of model-archiver will be installed with MMS as a dependency. See model-archiver for more options and details.
  • See the advanced installation page for more options and troubleshooting.

Serve a Model

Once installed, you can get the MMS model server up and running very quickly. Try --help to see all available CLI options.

```bash
multi-model-server --help
```

For this quick start, we'll skip over most of the features, but be sure to take a look at the full server docs when you're ready.

Here is an easy example for serving an object classification model:
```bash
multi-model-server --start --models squeezenet=https://s3.amazonaws.com/model-server/model_archive_1.0/squeezenet_v1.1.mar
```

With the command above executed, MMS is running on your host and listening for inference requests. Note that if you specify model(s) during startup, MMS automatically scales backend workers to the number of available vCPUs (on a CPU instance) or available GPUs (on a GPU instance). On powerful hosts with many compute resources (vCPUs or GPUs), this startup and autoscaling process may take considerable time. To minimize startup time, you can skip registering and scaling models at startup and do it later through the corresponding Management API calls, which also gives you finer-grained control over the resources allocated to any particular model.
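
A minimal sketch of that deferred flow looks like this (the Management API listens on port 8081 by default; the model URL and worker counts here are illustrative):

```bash
# Start MMS with no models registered
multi-model-server --start

# Register a model via the Management API; initial_workers controls
# how many backend workers are created for it
curl -X POST "http://127.0.0.1:8081/models?url=https://s3.amazonaws.com/model-server/model_archive_1.0/squeezenet_v1.1.mar&model_name=squeezenet&initial_workers=1"

# Scale the worker pool for that model later as load grows
curl -X PUT "http://127.0.0.1:8081/models/squeezenet?min_worker=2"
```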

To test it out, open a new terminal window next to the one running MMS. Use curl to download a picture of a kitten (curl's -O flag saves it under its remote name, kitten.jpg), then curl a POST to the MMS predict endpoint with the kitten's image.

In the example below, we provide a shortcut for these steps.

```bash
curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
curl -X POST http://127.0.0.1:8080/predictions/squeezenet -T kitten.jpg
```

The predict endpoint will return a prediction response in JSON. It will look something like the following result:

```json
[
  {
    "probability": 0.8582232594490051,
    "class": "n02124075 Egyptian cat"
  },
  {
    "probability": 0.09159987419843674,
    "class": "n02123045 tabby, tabby cat"
  },
  {
    "probability": 0.0374876894056797,
    "class": "n02123159 tiger cat"
  },
  {
    "probability": 0.006165083032101393,
    "class": "n02128385 leopard, Panthera pardus"
  },
  {
    "probability": 0.0031716004014015198,
    "class": "n02127052 lynx, catamount"
  }
]
```

You will see this result in the response to your curl call to the predict endpoint and in the server logs in the terminal window running MMS. It is also logged locally, along with metrics.

Other models can be downloaded from the model zoo, so try out some of those as well.

Now you've seen how easy it can be to serve a deep learning model with MMS! Would you like to know more?

Stopping the running model server

To stop the currently running model server instance, run the following command:
```bash
multi-model-server --stop
```
You will see output confirming that multi-model-server has stopped.

Create a Model Archive

MMS enables you to package all of your model artifacts into a single model archive, which makes it easy to share and deploy your models. To package a model, check out the model archiver documentation.
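
As a rough sketch of what packaging looks like (the model name, paths, and handler below are placeholders; the model archiver documentation has the authoritative flags):

```bash
# Bundle the artifacts in ./model into squeezenet.mar;
# --handler names the Python entry point used for inference
model-archiver --model-name squeezenet \
               --model-path ./model \
               --handler model_service:handle \
               --export-path ./model-store
```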

Recommended production deployments

  • MMS doesn't provide authentication. You must run your own authentication proxy in front of MMS.
  • MMS doesn't provide throttling, so it is vulnerable to DDoS attacks. It is recommended to run MMS behind a firewall.
  • MMS only allows localhost access by default; see Network configuration for details.
  • SSL is not enabled by default; see Enable SSL for details.
  • MMS uses a config.properties file to configure its behavior; see the Manage MMS page for details on how to configure MMS (a minimal example follows this list).
  • For better security, we recommend running MMS inside a Docker container. This project includes Dockerfiles to build containers recommended for production deployments; these containers demonstrate how to customize your own production MMS deployment. Basic usage can be found in the Docker readme.
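
As a minimal sketch of such a config.properties file (the addresses, model store path, and model archive below are assumptions; the Manage MMS page documents the full set of keys):

```properties
# Bind the inference API to all interfaces on port 8080
inference_address=http://0.0.0.0:8080
# Keep the management API on localhost only
management_address=http://127.0.0.1:8081
# Directory containing .mar model archives
model_store=/models
# Model archives to register at startup
load_models=squeezenet_v1.1.mar
```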

Other Features

Browse over to the Docs readme for the full index of documentation. This includes more examples, how to customize the API service, API endpoint details, and more.

External demos powered by MMS

Here are some example demos of deep learning applications, powered by MMS:

| | |
|:------:|:-----------:|
| Product Review Classification demo4 | Visual Search demo1 |
| Facial Emotion Recognition demo2 | Neural Style Transfer demo3 |

Contributing

We welcome all contributions!

To file a bug or request a feature, please file a GitHub issue. Pull requests are welcome.

Owner

  • Name: Amazon Web Services - Labs
  • Login: awslabs
  • Kind: organization
  • Location: Seattle, WA

GitHub Events

Total
  • Watch event: 19
  • Fork event: 3
Last Year
  • Watch event: 19
  • Fork event: 3

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 1,029
  • Total Committers: 69
  • Avg Commits per committer: 14.913
  • Development Distribution Score (DDS): 0.801
Top Committers
Name Email Commits
Dantu v****k@g****m 205
Frank Liu l****n@a****m 170
vrakesh r****v@g****m 80
Ruofei Yu y****i@g****m 71
Aaron Markham m****a@a****m 70
Piyush Ghai g****8@o****u 64
Khedia k****a@8****m 54
Wang w****o@9****m 49
Denis Davydenko d****a@g****m 18
vdantu 3****u@u****m 17
alexwong 1****g@u****m 14
Hagay Lupesko l****o@g****m 13
Vasudevan r****s@8****m 12
Naveen Swamy m****n@g****m 11
Thomas Delteil t****1@g****m 10
Aaqib m****b@g****m 10
Jonathan Esterhazy e****z@a****m 9
abhinavs95 a****1@g****m 8
Frank Liu f****0@g****m 8
Aaron Markham a****m@f****m 8
Zach Kimberg k****z@a****m 8
Ubuntu u****u@i****l 7
Sandeep Krishnamurthy s****8@g****m 7
Alex Gladkov g****a@l****m 7
kevinthesun k****y@g****m 7
Hagay Lupesko l****o@u****m 6
SK s****k@a****m 6
root r****t@i****l 6
Jiajie Chen j****9@g****m 5
Viacheslav Kovalevskyi v****k@a****m 5
and 39 more...

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 62
  • Total pull requests: 50
  • Average time to close issues: 7 months
  • Average time to close pull requests: about 2 months
  • Total issue authors: 53
  • Total pull request authors: 19
  • Average comments per issue: 1.29
  • Average comments per pull request: 0.34
  • Merged pull requests: 24
  • Bot issues: 0
  • Bot pull requests: 2
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • lxning (3)
  • xuweidongkobe (3)
  • n0thing233 (2)
  • yangjian1218 (2)
  • kaushal-idx (2)
  • wdh234 (2)
  • nskool (2)
  • fm1ch4 (1)
  • wojciechrauk-plutoflume (1)
  • HahTK (1)
  • namannandan (1)
  • glc-froussel (1)
  • tbagrel1 (1)
  • yinsong1986 (1)
  • msameedkhan (1)
Pull Request Authors
  • maheshambule (7)
  • maaquib (7)
  • lxning (7)
  • shivamshriwas (4)
  • nskool (3)
  • frankfliu (3)
  • AnupamAS0x1 (2)
  • c2zwdjnlcg (2)
  • dependabot[bot] (2)
  • davidthomas426 (2)
  • prashantsail (2)
  • aws-taylor (1)
  • dhanainme (1)
  • vkorf (1)
  • mbercin (1)
Top Labels
Issue Labels
bug (3) feature request (1) question (1)
Pull Request Labels
dependencies (2)

Packages

  • Total packages: 2
  • Total downloads:
    • pypi 181,448 last-month
  • Total dependent packages: 1
    (may contain duplicates)
  • Total dependent repositories: 12
    (may contain duplicates)
  • Total versions: 1,139
  • Total maintainers: 1
pypi.org: multi-model-server

Multi Model Server is a tool for serving neural net models for inference

  • Versions: 1,113
  • Dependent Packages: 1
  • Dependent Repositories: 12
  • Downloads: 181,448 Last month
Rankings
Downloads: 1.3%
Stargazers count: 2.1%
Average: 3.1%
Forks count: 3.4%
Dependent repos count: 4.2%
Dependent packages count: 4.8%
Maintainers (1)
Last synced: 6 months ago
proxy.golang.org: github.com/awslabs/multi-model-server
  • Versions: 26
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 7.0%
Last synced: 6 months ago

Dependencies

frontend/modelarchive/build.gradle maven
  • com.google.code.gson:gson ${gson_version} compile
  • commons-io:commons-io 2.6 compile
  • org.apache.logging.log4j:log4j-slf4j-impl ${slf4j_log4j_version} compile
  • org.slf4j:slf4j-api ${slf4j_api_version} compile
  • commons-cli:commons-cli ${commons_cli_version} testCompile
  • org.testng:testng ${testng_version} testCompile
frontend/server/build.gradle maven
  • com.lmax:disruptor ${lmax_disruptor_version} compile
  • commons-cli:commons-cli ${commons_cli_version} compile
  • io.netty:netty-all ${netty_version} compile
  • software.amazon.ai:mms-plugins-sdk ${mms_server_sdk_version} compile
  • org.testng:testng ${testng_version} testCompile
plugins/endpoints/build.gradle maven
  • com.google.code.gson:gson ${gson_version} compile
  • software.amazon.ai:mms-plugins-sdk ${mms_server_sdk_version} compile
serving-sdk/pom.xml maven
  • junit:junit 4.13.1 test
  • org.mockito:mockito-all 1.10.19 test
tests/performance/requirements.txt pypi
  • awscli ==1.18.80 test
  • boto3 ==1.14.3 test
  • click ==7.1.2 test
  • gevent ==20.5.2 test
  • junitparser ==1.4.1 test
  • pandas ==1.0.3 test
  • pathlib ==1.0.1 test
  • tabulate ==0.8.7 test
  • termcolor ==1.1.0 test
  • tqdm ==4.40.0 test
  • vjunit * test
examples/densenet_pytorch/Dockerfile docker
  • awsdeeplearningteam/multi-model-server base-cpu-py3.6 build
examples/sockeye_translate/Dockerfile docker
  • nvidia/cuda 9.2-cudnn7-runtime-ubuntu18.04 build
frontend/build.gradle maven
frontend/cts/build.gradle maven
plugins/build.gradle maven
model-archiver/setup.py pypi
setup.py pypi