Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (17.3%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: MrNocTV
  • License: apache-2.0
  • Language: HTML
  • Default Branch: main
  • Size: 111 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created about 3 years ago · Last pushed about 3 years ago
Metadata Files
Readme · Changelog · Contributing · License · Citation · Security

README.md

[gradio](https://gradio.app)
Build & share delightful machine learning apps easily [![gradio-backend](https://github.com/gradio-app/gradio/actions/workflows/backend.yml/badge.svg)](https://github.com/gradio-app/gradio/actions/workflows/backend.yml) [![gradio-ui](https://github.com/gradio-app/gradio/actions/workflows/ui.yml/badge.svg)](https://github.com/gradio-app/gradio/actions/workflows/ui.yml) [codecov](https://app.codecov.io/gh/gradio-app/gradio) [![PyPI](https://img.shields.io/pypi/v/gradio)](https://pypi.org/project/gradio/) [![PyPI downloads](https://img.shields.io/pypi/dm/gradio)](https://pypi.org/project/gradio/) ![Python version](https://img.shields.io/badge/python-3.7+-important) [![Twitter follow](https://img.shields.io/twitter/follow/gradio?style=social&label=follow)](https://twitter.com/gradio) [Website](https://gradio.app) | [Documentation](https://gradio.app/docs/) | [Guides](https://gradio.app/guides/) | [Getting Started](https://gradio.app/getting_started/) | [Examples](demo/)

Gradio: Build Machine Learning Web Apps — in Python

Gradio is an open-source Python library that is used to build machine learning and data science demos and web applications.

With Gradio, you can quickly create a beautiful user interface around your machine learning models or data science workflow and let people "try it out" by dragging-and-dropping in their own images, pasting text, recording their own voice, and interacting with your demo, all through the browser.

Interface montage

Gradio is useful for:

  • Demoing your machine learning models for clients/collaborators/users/students.

  • Deploying your models quickly with automatic shareable links and getting feedback on model performance.

  • Debugging your model interactively during development using built-in manipulation and interpretation tools.

Quickstart

Prerequisite: Gradio requires Python 3.7 or higher, and that's all!

What Does Gradio Do?

One of the best ways to share your machine learning model, API, or data science workflow with others is to create an interactive app that allows your users or colleagues to try out the demo in their browsers.

Gradio allows you to build demos and share them, all in Python. And usually in just a few lines of code! So let's get started.

Hello, World

To get Gradio running with a simple "Hello, World" example, follow these three steps:

1. Install Gradio using pip:

```bash
pip install gradio
```

2. Run the code below as a Python script or in a Jupyter Notebook (or Google Colab):

```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()
```

3. The demo below will appear automatically within the Jupyter Notebook, or pop out in a browser on http://localhost:7860 if running from a script:

`hello_world` demo

The Interface Class

You'll notice that in order to make the demo, we created a gradio.Interface. This Interface class can wrap any Python function with a user interface. In the example above, we saw a simple text-based function, but the function could be anything from a music generator to a tax calculator to the prediction function of a pretrained machine learning model.

The core Interface class is initialized with three required parameters:

  • fn: the function to wrap a UI around
  • inputs: which component(s) to use for the input (e.g. "text", "image" or "audio")
  • outputs: which component(s) to use for the output (e.g. "text", "image" or "label")

Let's take a closer look at these components used to provide input and output.

Component Attributes

We saw some simple Textbox components in the previous examples, but what if you want to change how the UI components look or behave?

Let's say you want to customize the input text field: for example, you want it to be larger and to have a text placeholder. If you use the actual Textbox class instead of the string shortcut, you get access to much more customizability through component attributes.

```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

demo = gr.Interface(
    fn=greet,
    inputs=gr.Textbox(lines=2, placeholder="Name Here..."),
    outputs="text",
)
demo.launch()
```

`hello_world_2` demo

Multiple Input and Output Components

Suppose you had a more complex function, with multiple inputs and outputs. In the example below, we define a function that takes a string, a boolean, and a number, and returns a string and a number. Note how you pass a list of input and output components.

```python
import gradio as gr

def greet(name, is_morning, temperature):
    salutation = "Good morning" if is_morning else "Good evening"
    greeting = f"{salutation} {name}. It is {temperature} degrees today"
    celsius = (temperature - 32) * 5 / 9
    return greeting, round(celsius, 2)

demo = gr.Interface(
    fn=greet,
    inputs=["text", "checkbox", gr.Slider(0, 100)],
    outputs=["text", "number"],
)
demo.launch()
```

`hello_world_3` demo

You simply wrap the components in a list. Each component in the inputs list corresponds to one of the parameters of the function, in order. Each component in the outputs list corresponds to one of the values returned by the function, again in order.
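You can see this positional mapping without launching a UI by calling the wrapped function directly; the tuple it returns lines up with the outputs list, element by element (a plain-Python sketch, no Gradio required):

```python
def greet(name, is_morning, temperature):
    # Same function as in the example above: three parameters map to
    # the three input components ("text", "checkbox", Slider), in order.
    salutation = "Good morning" if is_morning else "Good evening"
    greeting = f"{salutation} {name}. It is {temperature} degrees today"
    celsius = (temperature - 32) * 5 / 9
    # Two return values map to the two output components ("text", "number").
    return greeting, round(celsius, 2)

text_out, number_out = greet("Sam", True, 70)
print(text_out)    # Good morning Sam. It is 70 degrees today
print(number_out)  # 21.11
```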

An Image Example

Gradio supports many types of components, such as Image, DataFrame, Video, or Label. Let's try an image-to-image function to get a feel for these!

```python
import numpy as np
import gradio as gr

def sepia(input_img):
    sepia_filter = np.array([
        [0.393, 0.769, 0.189],
        [0.349, 0.686, 0.168],
        [0.272, 0.534, 0.131],
    ])
    sepia_img = input_img.dot(sepia_filter.T)
    sepia_img /= sepia_img.max()
    return sepia_img

demo = gr.Interface(sepia, gr.Image(shape=(200, 200)), "image")
demo.launch()
```

`sepia_filter` demo

When using the Image component as input, your function will receive a NumPy array with the shape (height, width, 3), where the last dimension represents the RGB values. We'll return an image as well in the form of a NumPy array.
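As a quick sanity check of the array math (a standalone NumPy sketch; no Gradio required), you can run the same sepia transform on a tiny synthetic image:

```python
import numpy as np

# A 2x2 "image": shape (height, width, 3), RGB values in [0, 255].
img = np.array([
    [[255, 0, 0], [0, 255, 0]],
    [[0, 0, 255], [255, 255, 255]],
], dtype=float)

sepia_filter = np.array([
    [0.393, 0.769, 0.189],
    [0.349, 0.686, 0.168],
    [0.272, 0.534, 0.131],
])

# Each output channel is a linear mix of that pixel's R, G, B values.
sepia = img.dot(sepia_filter.T)
sepia /= sepia.max()  # normalize into [0, 1]

print(sepia.shape)  # (2, 2, 3) -- same spatial shape, still 3 channels
```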

You can also set the datatype used by the component with the type= keyword argument. For example, if you wanted your function to take a file path to an image instead of a NumPy array, the input Image component could be written as:

```python
gr.Image(type="filepath", shape=...)
```

Also note that our input Image component comes with an edit button 🖉, which allows for cropping and zooming into images. Manipulating images in this way can help reveal biases or hidden flaws in a machine learning model!

You can read more about the many components and how to use them in the Gradio docs.

Blocks: More Flexibility and Control

Gradio offers two classes to build apps:

1. Interface, which provides a high-level abstraction for creating the kinds of demos we've been discussing so far.

2. Blocks, a low-level API for designing web apps with more flexible layouts and data flows. Blocks allows you to do things like feature multiple data flows and demos, control where components appear on the page, handle complex data flows (e.g. outputs can serve as inputs to other functions), and update properties/visibility of components based on user interaction — still all in Python. If this customizability is what you need, try Blocks instead!

Hello, Blocks

Let's take a look at a simple example. Note how the API here differs from Interface.

```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

with gr.Blocks() as demo:
    name = gr.Textbox(label="Name")
    output = gr.Textbox(label="Output Box")
    greet_btn = gr.Button("Greet")
    greet_btn.click(fn=greet, inputs=name, outputs=output)

demo.launch()
```

`hello_blocks` demo

Things to note:

  • Blocks are made with a with clause, and any component created inside this clause is automatically added to the app.
  • Components appear vertically in the app in the order they are created. (Later we will cover customizing layouts!)
  • A Button was created, and then a click event-listener was added to this button. The API for this should look familiar! Like an Interface, the click method takes a Python function, input components, and output components.

More Complexity

Here's an app to give you a taste of what's possible with Blocks:

```python
import numpy as np
import gradio as gr

def flip_text(x):
    return x[::-1]

def flip_image(x):
    return np.fliplr(x)

with gr.Blocks() as demo:
    gr.Markdown("Flip text or image files using this demo.")
    with gr.Tabs():
        with gr.TabItem("Flip Text"):
            text_input = gr.Textbox()
            text_output = gr.Textbox()
            text_button = gr.Button("Flip")
        with gr.TabItem("Flip Image"):
            with gr.Row():
                image_input = gr.Image()
                image_output = gr.Image()
            image_button = gr.Button("Flip")

    text_button.click(flip_text, inputs=text_input, outputs=text_output)
    image_button.click(flip_image, inputs=image_input, outputs=image_output)

demo.launch()
```

`blocks_flipper` demo
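Note that the event callbacks wired up with `.click()` are ordinary Python functions, so they can be exercised on their own before you wrap them in a UI (a standalone sketch; only NumPy is needed):

```python
import numpy as np

def flip_text(x):
    # Reverse a string with an extended slice.
    return x[::-1]

def flip_image(x):
    # Mirror an image array left-to-right (flips along axis 1).
    return np.fliplr(x)

print(flip_text("gradio"))  # oidarg
print(flip_image(np.array([[1, 2], [3, 4]])))  # [[2 1] [4 3]]
```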

A lot more going on here! We'll cover how to create complex Blocks apps like this in the building-with-Blocks section.

Congrats, you're now familiar with the basics of Gradio! 🥳 Go to our next guide to learn more about the key features of Gradio.

Open Source Stack

Gradio is built with many wonderful open-source libraries; please support them as well!

huggingface python fastapi encode svelte vite pnpm tailwind

License

Gradio is licensed under the Apache License 2.0 found in the LICENSE file in the root directory of this repository.

Citation

Also check out the paper Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild, ICML HILL 2019, and please cite it if you use Gradio in your work.

@article{abid2019gradio,
  title   = {Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild},
  author  = {Abid, Abubakar and Abdalla, Ali and Abid, Ali and Khan, Dawood and Alfozan, Abdulrahman and Zou, James},
  journal = {arXiv preprint arXiv:1906.02569},
  year    = {2019},
}

See Also

Owner

  • Name: loctv
  • Login: MrNocTV
  • Kind: user
  • Location: Vietnam
  • Company: Line Technology Vietnam

Copy-paste specialist

Citation (CITATION.cff)

cff-version: 1.2.0
message: Please cite this project using these metadata.
title: "Gradio: Hassle-free sharing and testing of ML models in the wild"
abstract: >-
  Accessibility is a major challenge of machine learning (ML).
  Typical ML models are built by specialists and require
  specialized hardware/software as well as ML experience to
  validate. This makes it challenging for non-technical
  collaborators and endpoint users (e.g. physicians) to easily
  provide feedback on model development and to gain trust in
  ML. The accessibility challenge also makes collaboration
  more difficult and limits the ML researcher's exposure to
  realistic data and scenarios that occur in the wild. To
  improve accessibility and facilitate collaboration, we
  developed an open-source Python package, Gradio, which
  allows researchers to rapidly generate a visual interface
  for their ML models. Gradio makes accessing any ML model as
  easy as sharing a URL. Our development of Gradio is informed
  by interviews with a number of machine learning researchers
  who participate in interdisciplinary collaborations. Their
  feedback identified that Gradio should support a variety of
  interfaces and frameworks, allow for easy sharing of the
  interface, allow for input manipulation and interactive
  inference by the domain expert, as well as allow embedding
  the interface in iPython notebooks. We developed these
  features and carried out a case study to understand Gradio's
  usefulness and usability in the setting of a machine
  learning collaboration between a researcher and a
  cardiologist.
authors:
  - family-names: Abid
    given-names: Abubakar
  - family-names: Abdalla
    given-names: Ali
  - family-names: Abid
    given-names: Ali
  - family-names: Khan
    given-names: Dawood
  - family-names: Alfozan
    given-names: Abdulrahman
  - family-names: Zou
    given-names: James
doi: 10.48550/arXiv.1906.02569
date-released: 2019-06-06
url: https://arxiv.org/abs/1906.02569

GitHub Events

Total
Last Year

Issues and Pull Requests

Last synced: 11 months ago

All Time
  • Total issues: 0
  • Total pull requests: 3
  • Average time to close issues: N/A
  • Average time to close pull requests: 4 minutes
  • Total issue authors: 0
  • Total pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
  • MrNocTV (3)
Top Labels
Issue Labels
Pull Request Labels

Dependencies

.github/workflows/backend.yml actions
  • FedericoCarboni/setup-ffmpeg v2 composite
  • actions/cache v3 composite
  • actions/checkout v3 composite
  • actions/setup-node v3 composite
  • actions/setup-python v3 composite
  • codecov/codecov-action v3 composite
  • pnpm/action-setup v2.2.2 composite
.github/workflows/build-pr.yml actions
  • actions/checkout v3 composite
  • actions/setup-node v3 composite
  • actions/setup-python v3 composite
  • actions/upload-artifact v3 composite
  • pnpm/action-setup v2.2.2 composite
.github/workflows/build-version-docs.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
  • peter-evans/create-pull-request v4 composite
.github/workflows/check-changelog.yml actions
  • actions/checkout v2 composite
.github/workflows/check-demo-notebooks.yml actions
  • actions/checkout v2 composite
  • actions/upload-artifact v3 composite
.github/workflows/comment-pr.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
  • thollander/actions-comment-pull-request v2 composite
.github/workflows/delete-stale-spaces.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
.github/workflows/deploy-pr-to-spaces.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
  • thollander/actions-comment-pull-request v1 composite
.github/workflows/deploy-pypi.yml actions
  • EndBug/add-and-commit v9 composite
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
  • pnpm/action-setup v2.2.2 composite
  • pypa/gh-action-pypi-publish 27b31702a0e7fc50959f5ad993c78deac1bdfc29 composite
  • softprops/action-gh-release v1 composite
.github/workflows/large-files.yml actions
  • actions/checkout v2 composite
  • actionsdesk/lfs-warning v3.2 composite
.github/workflows/ui.yml actions
  • actions/checkout v3 composite
  • actions/setup-node v3 composite
  • actions/setup-python v3 composite
  • actions/upload-artifact v3 composite
  • pnpm/action-setup v2.2.1 composite
website/homepage/Dockerfile docker
  • python 3.8 build
ui/package.json npm
  • @types/three ^0.138.0 development
  • @gradio/tootils workspace:^0.0.1
  • @playwright/test ^1.27.1
  • @sveltejs/vite-plugin-svelte ^1.0.0-next.44
  • @tailwindcss/forms ^0.5.0
  • @testing-library/dom ^8.11.3
  • @testing-library/svelte ^3.1.0
  • @testing-library/user-event ^13.5.0
  • autoprefixer ^10.4.4
  • babylonjs ^5.17.1
  • babylonjs-loaders ^5.17.1
  • happy-dom ^2.49.0
  • msw ^1.0.0
  • node-html-parser ^5.3.3
  • npm-run-all ^4.1.5
  • playwright ^1.27.1
  • plotly.js-dist-min ^2.10.1
  • polka ^1.0.0-next.22
  • pollen-css ^4.6.1
  • postcss ^8.4.6
  • postcss-custom-media 8
  • postcss-nested ^5.0.6
  • postcss-prefix-selector ^1.16.0
  • prettier ^2.6.2
  • prettier-plugin-css-order ^1.3.0
  • prettier-plugin-svelte ^2.7.0
  • sirv ^2.0.2
  • sirv-cli ^2.0.2
  • svelte ^3.49.0
  • svelte-check ^2.8.0
  • svelte-i18n ^3.3.13
  • svelte-preprocess ^4.10.6
  • tailwindcss ^3.1.6
  • tinyspy ^0.3.0
  • typescript ^4.7.4
  • vite ^2.9.5
  • vitest ^0.12.7
ui/packages/_cdn-test/package.json npm
  • vite ^2.9.9 development
ui/packages/app/package.json npm
  • @gradio/accordion workspace:^0.0.1
  • @gradio/atoms workspace:^0.0.1
  • @gradio/audio workspace:^0.0.1
  • @gradio/button workspace:^0.0.1
  • @gradio/chart workspace:^0.0.1
  • @gradio/chatbot workspace:^0.0.1
  • @gradio/file workspace:^0.0.1
  • @gradio/form workspace:^0.0.1
  • @gradio/gallery workspace:^0.0.1
  • @gradio/highlighted-text workspace:^0.0.1
  • @gradio/html workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
  • @gradio/image workspace:^0.0.1
  • @gradio/json workspace:^0.0.1
  • @gradio/label workspace:^0.0.1
  • @gradio/markdown workspace:^0.0.1
  • @gradio/model3D workspace:^0.0.1
  • @gradio/plot workspace:^0.0.1
  • @gradio/table workspace:^0.0.1
  • @gradio/tabs workspace:^0.0.1
  • @gradio/theme workspace:^0.0.1
  • @gradio/upload workspace:^0.0.1
  • @gradio/upload-button workspace:^0.0.1
  • @gradio/utils workspace:^0.0.1
  • @gradio/video workspace:^0.0.1
  • d3-dsv ^3.0.1
  • mime-types ^2.1.34
  • postcss ^8.4.21
  • postcss-prefix-selector ^1.16.0
  • svelte ^3.25.1
  • svelte-i18n ^3.3.13
ui/packages/atoms/package.json npm
  • @gradio/utils workspace:^0.0.1
ui/packages/audio/package.json npm
  • @gradio/atoms workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
  • @gradio/upload workspace:^0.0.1
  • extendable-media-recorder ^7.0.2
  • extendable-media-recorder-wav-encoder ^7.0.76
  • svelte-range-slider-pips ^2.0.1
ui/packages/button/package.json npm
  • @gradio/utils workspace:^0.0.1
ui/packages/chart/package.json npm
  • @types/d3-dsv ^3.0.0 development
  • @types/d3-scale ^4.0.2 development
  • @types/d3-shape ^3.0.2 development
  • @gradio/icons workspace:^0.0.1
  • @gradio/theme workspace:^0.0.1
  • @gradio/tooltip workspace:^0.0.1
  • @gradio/utils workspace:^0.0.1
  • d3-dsv ^3.0.1
  • d3-scale ^4.0.2
  • d3-shape ^3.1.0
ui/packages/chatbot/package.json npm
  • @gradio/theme workspace:^0.0.1
  • @gradio/utils workspace:^0.0.1
ui/packages/file/package.json npm
  • @gradio/atoms workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
  • @gradio/upload workspace:^0.0.1
ui/packages/form/package.json npm
  • @gradio/atoms workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
  • @gradio/utils workspace:^0.0.1
ui/packages/gallery/package.json npm
  • @gradio/atoms workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
  • @gradio/image workspace:^0.0.1
  • @gradio/upload workspace:^0.0.1
  • @gradio/utils workspace:^0.0.1
ui/packages/highlighted-text/package.json npm
  • @gradio/theme workspace:^0.0.1
  • @gradio/utils workspace:^0.0.1
ui/packages/image/package.json npm
  • @gradio/atoms workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
  • @gradio/upload workspace:^0.0.1
  • @gradio/utils workspace:^0.0.1
  • cropperjs ^1.5.12
  • lazy-brush ^1.0.1
  • resize-observer-polyfill ^1.5.1
ui/packages/json/package.json npm
  • @gradio/atoms workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
ui/packages/label/package.json npm
  • @gradio/utils workspace:^0.0.1
ui/packages/model3D/package.json npm
  • @gradio/atoms workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
  • @gradio/upload workspace:^0.0.1
  • babylonjs ^4.2.1
  • babylonjs-loaders ^4.2.1
ui/packages/plot/package.json npm
  • @gradio/atoms workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
  • @gradio/theme workspace:^0.0.1
  • @gradio/utils workspace:^0.0.1
  • @rollup/plugin-json ^5.0.2
  • plotly.js-dist-min ^2.10.1
  • svelte-vega ^1.2.0
  • vega ^5.22.1
  • vega-lite *
ui/packages/table/package.json npm
  • @gradio/button workspace:^0.0.1
  • @gradio/upload workspace:^0.0.1
  • @gradio/utils workspace:^0.0.1
  • @types/d3-dsv ^3.0.0
  • d3-dsv ^3.0.1
  • dequal ^2.0.2
ui/packages/upload/package.json npm
  • @gradio/atoms workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
ui/packages/upload-button/package.json npm
  • @gradio/button workspace:^0.0.1
  • @gradio/upload workspace:^0.0.1
  • @gradio/utils workspace:^0.0.1
ui/packages/utils/package.json npm
  • @gradio/theme workspace:^0.0.1
ui/packages/video/package.json npm
  • @gradio/atoms workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
  • @gradio/image workspace:^0.0.1
  • @gradio/upload workspace:^0.0.1
ui/packages/workbench/package.json npm
  • @sveltejs/adapter-auto next development
  • @sveltejs/kit ^1.0.0-next.318 development
  • autoprefixer ^10.4.2 development
  • postcss ^8.4.5 development
  • postcss-load-config ^3.1.1 development
  • svelte-check ^2.2.6 development
  • svelte-preprocess ^4.10.1 development
  • tailwindcss ^3.0.12 development
  • tslib ^2.3.1 development
  • typescript ~4.5.4 development
  • @gradio/accordion workspace:^0.0.1
  • @gradio/atoms workspace:^0.0.1
  • @gradio/audio workspace:^0.0.1
  • @gradio/button workspace:^0.0.1
  • @gradio/chart workspace:^0.0.1
  • @gradio/chatbot workspace:^0.0.1
  • @gradio/file workspace:^0.0.1
  • @gradio/form workspace:^0.0.1
  • @gradio/gallery workspace:^0.0.1
  • @gradio/highlighted-text workspace:^0.0.1
  • @gradio/html workspace:^0.0.1
  • @gradio/icons workspace:^0.0.1
  • @gradio/image workspace:^0.0.1
  • @gradio/json workspace:^0.0.1
  • @gradio/label workspace:^0.0.1
  • @gradio/markdown workspace:^0.0.1
  • @gradio/model3D workspace:^0.0.1
  • @gradio/plot workspace:^0.0.1
  • @gradio/table workspace:^0.0.1
  • @gradio/tabs workspace:^0.0.1
  • @gradio/theme workspace:^0.0.1
  • @gradio/upload workspace:^0.0.1
  • @gradio/upload-button workspace:^0.0.1
  • @gradio/video workspace:^0.0.1
  • svelte >=3.44.0 <4.0.0
ui/pnpm-lock.yaml npm
  • 563 dependencies
website/homepage/package-lock.json npm
  • 178 dependencies
website/homepage/package.json npm
  • @fullhuman/postcss-purgecss ^4.0.3
  • @tailwindcss/forms ^0.5.0
  • @tailwindcss/typography ^0.5.4
  • autoprefixer ^10.4.0
  • cssnano ^5.0.8
  • postcss-cli ^9.0.1
  • postcss-hash ^3.0.0
  • tailwindcss ^3.0.24
demo/Echocardiogram-Segmentation/requirements.txt pypi
  • matplotlib *
  • numpy *
  • torch ==1.6.0
  • torchvision ==0.7.0
  • wget *
demo/altair_plot/requirements.txt pypi
  • altair *
  • vega_datasets *
demo/animeganv2/requirements.txt pypi
  • Pillow *
  • cmake *
  • gdown *
  • numpy *
  • onnxruntime-gpu *
  • opencv-python-headless *
  • scipy *
  • torch *
  • torchvision *
demo/blocks_flag/requirements.txt pypi
  • numpy *
demo/blocks_interpretation/requirements.txt pypi
  • matplotlib *
  • shap *
  • torch *
  • transformers *
demo/blocks_multiple_event_triggers/requirements.txt pypi
  • plotly *
  • pypistats *
demo/blocks_speech_text_sentiment/requirements.txt pypi
  • torch *
  • transformers *
demo/bokeh_plot/requirements.txt pypi
  • bokeh >=3.0
  • xyzservices *
demo/chicago-bikeshare-dashboard/requirements.txt pypi
  • SQLAlchemy *
  • matplotlib *
  • psycopg2 *
demo/clustering/requirements.txt pypi
  • matplotlib >=3.5.2
  • scikit-learn >=1.0.1
demo/color_generator/requirements.txt pypi
  • numpy *
  • opencv-python *
demo/color_picker/requirements.txt pypi
  • Pillow *
demo/dashboard/requirements.txt pypi
  • plotly *
demo/depth_estimation/requirements.txt pypi
  • Pillow *
  • jinja2 *
  • numpy *
  • open3d *
  • torch *
  • transformers add_dpt_redesign
demo/diffusers_with_batching/requirements.txt pypi
  • diffusers *
  • torch *
  • transformers *
demo/digit_classifier/requirements.txt pypi
  • tensorflow *
demo/english_translator/requirements.txt pypi
  • torch *
  • transformers *
demo/fake_diffusion/requirements.txt pypi
  • numpy *
demo/fraud_detector/requirements.txt pypi
  • pandas *
demo/generate_english_german/requirements.txt pypi
  • torch *
  • transformers *
demo/generate_tone/requirements.txt pypi
  • numpy *
demo/gif_maker/requirements.txt pypi
  • opencv-python *
demo/image_classification/requirements.txt pypi
  • torch *
  • torchvision *
demo/image_classifier/requirements.txt pypi
  • numpy *
  • tensorflow *
demo/image_classifier_2/requirements.txt pypi
  • pillow *
  • torch *
  • torchvision *
demo/image_classifier_interpretation/requirements.txt pypi
  • numpy *
  • tensorflow *
demo/image_segmentation/requirements.txt pypi
  • numpy *
  • scipy *
  • torch *
  • transformers *
demo/interpretation_component/requirements.txt pypi
  • shap *
  • torch *
  • transformers *
demo/lineplot_component/requirements.txt pypi
  • vega_datasets *
demo/live_dashboard/requirements.txt pypi
  • plotly *
demo/main_note/requirements.txt pypi
  • matplotlib *
  • numpy *
  • scipy *
demo/map_airbnb/requirements.txt pypi
  • plotly *
demo/musical_instrument_identification/requirements.txt pypi
  • gdown *
  • librosa ==0.9.2
  • torch ==1.12.0
  • torchaudio ==0.12.0
  • torchvision ==0.13.0
demo/native_plots/requirements.txt pypi
  • vega_datasets *
demo/neon-tts-plugin-coqui/requirements.txt pypi
  • neon-tts-plugin-coqui ==0.4.1a9
demo/ner_pipeline/requirements.txt pypi
  • torch *
  • transformers *
demo/outbreak_forecast/requirements.txt pypi
  • altair *
  • bokeh *
  • matplotlib *
  • numpy *
  • plotly *
demo/pictionary/requirements.txt pypi
  • gdown *
  • torch *
demo/plot_component/requirements.txt pypi
  • matplotlib *
  • numpy *
demo/progress/requirements.txt pypi
  • datasets *
  • tqdm *
demo/progress_component/requirements.txt pypi
  • tqdm *
demo/sales_projections/requirements.txt pypi
  • matplotlib *
  • numpy *
  • pandas *
demo/same-person-or-different/requirements.txt pypi
  • torchaudio *
demo/scatterplot_component/requirements.txt pypi
  • vega_datasets *
demo/sentiment_analysis/requirements.txt pypi
  • nltk *
demo/sine_curve/requirements.txt pypi
  • plotly *
demo/spectogram/requirements.txt pypi
  • matplotlib *
  • numpy *
  • scipy *
demo/stable-diffusion/requirements.txt pypi
  • diffusers *
  • ftfy *
  • nvidia-ml-py3 *
  • torch *
  • transformers *
demo/stock_forecast/requirements.txt pypi
  • matplotlib *
  • numpy *
demo/streaming_stt/requirements.txt pypi
  • deepspeech ==0.9.3
demo/streaming_wav2vec/requirements.txt pypi
  • torch *
  • transformers *
demo/text_analysis/requirements.txt pypi
  • spacy *
demo/text_generation/requirements.txt pypi
  • gradio *
  • torch *
demo/timeseries-forecasting-with-prophet/requirements.txt pypi
  • pandas *
  • plotly *
  • prophet ==1.1
  • pypistats *
demo/titanic_survival/requirements.txt pypi
  • numpy *
  • pandas *
  • scikit-learn *
demo/translation/requirements.txt pypi
  • gradio *
  • torch *
demo/unified_demo_text_generation/requirements.txt pypi
  • torch *
  • transformers *
demo/unispeech-speaker-verification/requirements.txt pypi
  • torchaudio *
demo/white_noise_vid_not_playable/requirements.txt pypi
  • opencv-python *
demo/xgboost-income-prediction-with-explainability/requirements.txt pypi
  • datasets *
  • matplotlib *
  • pandas *
  • shap *
  • xgboost *
requirements.txt pypi
  • Jinja2 *
  • aiofiles *
  • aiohttp *
  • altair >=4.2.0
  • fastapi *
  • ffmpy *
  • fsspec *
  • httpx *
  • markdown-it-py >=2.0.0
  • markupsafe *
  • matplotlib *
  • mdit-py-plugins <=0.3.3
  • numpy *
  • orjson *
  • pandas *
  • pillow *
  • pycryptodome *
  • pydantic *
  • pydub *
  • python-multipart *
  • pyyaml *
  • requests *
  • typing_extensions *
  • uvicorn *
  • websockets >=10.0
test/requirements.in pypi
  • IPython * test
  • altair * test
  • asyncio * test
  • black * test
  • boto3 * test
  • comet_ml * test
  • coverage * test
  • fastapi >=0.87.0 test
  • flake8 * test
  • httpx * test
  • huggingface_hub * test
  • isort * test
  • mlflow * test
  • pydantic * test
  • pytest * test
  • pytest-asyncio * test
  • pytest-cov * test
  • respx * test
  • scikit-image * test
  • shap * test
  • torch * test
  • tqdm * test
  • transformers * test
  • vega_datasets * test
  • wandb * test