custom-gradio
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (17.3%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: MrNocTV
- License: apache-2.0
- Language: HTML
- Default Branch: main
- Size: 111 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Build & share delightful machine learning apps easily
Gradio: Build Machine Learning Web Apps — in Python
Gradio is an open-source Python library that is used to build machine learning and data science demos and web applications.
With Gradio, you can quickly create a beautiful user interface around your machine learning models or data science workflow and let people "try it out" by dragging-and-dropping in their own images, pasting text, recording their own voice, and interacting with your demo, all through the browser.

Gradio is useful for:
- Demoing your machine learning models for clients/collaborators/users/students.
- Deploying your models quickly with automatic shareable links and getting feedback on model performance.
- Debugging your model interactively during development using built-in manipulation and interpretation tools.
Quickstart
Prerequisite: Gradio requires Python 3.7 or higher, that's all!
What Does Gradio Do?
One of the best ways to share your machine learning model, API, or data science workflow with others is to create an interactive app that allows your users or colleagues to try out the demo in their browsers.
Gradio allows you to build demos and share them, all in Python. And usually in just a few lines of code! So let's get started.
Hello, World
To get Gradio running with a simple "Hello, World" example, follow these three steps:
1. Install Gradio using pip:
```bash
pip install gradio
```
2. Run the code below as a Python script or in a Jupyter Notebook (or Google Colab):
```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()
```
3. The demo below will appear automatically within the Jupyter Notebook, or pop in a browser on http://localhost:7860 if running from a script:

The Interface Class
You'll notice that in order to make the demo, we created a gradio.Interface. This Interface class can wrap any Python function with a user interface. In the example above, we saw a simple text-based function, but the function could be anything from a music generator to a tax calculator to the prediction function of a pretrained machine learning model.
The core Interface class is initialized with three required parameters:
- `fn`: the function to wrap a UI around
- `inputs`: which component(s) to use for the input (e.g. `"text"`, `"image"`, or `"audio"`)
- `outputs`: which component(s) to use for the output (e.g. `"text"`, `"image"`, or `"label"`)
Let's take a closer look at these components used to provide input and output.
Components Attributes
We saw some simple Textbox components in the previous examples, but what if you want to change how the UI components look or behave?
Let's say you want to customize the input text field — for example, you want it to be larger and have a text placeholder. If you use the actual class for Textbox instead of the string shortcut, you have access to much more customizability through component attributes.
```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

demo = gr.Interface(
    fn=greet,
    inputs=gr.Textbox(lines=2, placeholder="Name Here..."),
    outputs="text",
)
demo.launch()
```

Multiple Input and Output Components
Suppose you had a more complex function, with multiple inputs and outputs. In the example below, we define a function that takes a string, boolean, and number, and returns a string and number. Take a look at how you pass a list of input and output components.
```python
import gradio as gr

def greet(name, is_morning, temperature):
    salutation = "Good morning" if is_morning else "Good evening"
    greeting = f"{salutation} {name}. It is {temperature} degrees today"
    celsius = (temperature - 32) * 5 / 9
    return greeting, round(celsius, 2)

demo = gr.Interface(
    fn=greet,
    inputs=["text", "checkbox", gr.Slider(0, 100)],
    outputs=["text", "number"],
)
demo.launch()
```

You simply wrap the components in a list. Each component in the inputs list corresponds to one of the parameters of the function, in order. Each component in the outputs list corresponds to one of the values returned by the function, again in order.
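Because the mapping is purely positional, you can sanity-check the wrapped function in plain Python before handing it to Gradio. A minimal sketch (no Gradio required):

```python
def greet(name, is_morning, temperature):
    # Same function as in the demo: three inputs, two outputs, matched by position.
    salutation = "Good morning" if is_morning else "Good evening"
    greeting = f"{salutation} {name}. It is {temperature} degrees today"
    celsius = (temperature - 32) * 5 / 9
    return greeting, round(celsius, 2)

# The returned tuple maps, in order, onto outputs=["text", "number"].
text_out, number_out = greet("Sam", True, 72)
print(text_out)    # Good morning Sam. It is 72 degrees today
print(number_out)  # 22.22
```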
An Image Example
Gradio supports many types of components, such as Image, DataFrame, Video, or Label. Let's try an image-to-image function to get a feel for these!
```python
import numpy as np
import gradio as gr

def sepia(input_img):
    sepia_filter = np.array([
        [0.393, 0.769, 0.189],
        [0.349, 0.686, 0.168],
        [0.272, 0.534, 0.131],
    ])
    sepia_img = input_img.dot(sepia_filter.T)
    sepia_img /= sepia_img.max()
    return sepia_img

demo = gr.Interface(sepia, gr.Image(shape=(200, 200)), "image")
demo.launch()
```

When using the Image component as input, your function will receive a NumPy array with the shape (height, width, 3), where the last dimension represents the RGB values. We'll return an image as well in the form of a NumPy array.
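Since the wrapped function only ever sees NumPy arrays, you can exercise it without launching a UI at all. A quick sketch using a random array as a stand-in for a real photo (no Gradio needed):

```python
import numpy as np

def sepia(input_img):
    # Same sepia transform as above: one 3x3 color matrix applied per pixel.
    sepia_filter = np.array([
        [0.393, 0.769, 0.189],
        [0.349, 0.686, 0.168],
        [0.272, 0.534, 0.131],
    ])
    sepia_img = input_img.dot(sepia_filter.T)
    sepia_img /= sepia_img.max()  # rescale so values stay in [0, 1]
    return sepia_img

img = np.random.rand(200, 200, 3)  # stand-in for a real image
out = sepia(img)
assert out.shape == (200, 200, 3)  # spatial shape and channels preserved
assert out.min() >= 0.0 and out.max() == 1.0
```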
You can also set the datatype used by the component with the type= keyword argument. For example, if you wanted your function to take a file path to an image instead of a NumPy array, the input Image component could be written as:
```python
gr.Image(type="filepath", shape=...)
```
Also note that our input Image component comes with an edit button 🖉, which allows for cropping and zooming into images. Manipulating images in this way can help reveal biases or hidden flaws in a machine learning model!
You can read more about the many components and how to use them in the Gradio docs.
Blocks: More Flexibility and Control
Gradio offers two classes to build apps:
1. Interface, which provides the high-level abstraction for creating demos that we've been discussing so far.
2. Blocks, a low-level API for designing web apps with more flexible layouts and data flows. Blocks allows you to do things like feature multiple data flows and demos, control where components appear on the page, handle complex data flows (e.g. outputs can serve as inputs to other functions), and update properties/visibility of components based on user interaction — still all in Python. If this customizability is what you need, try Blocks instead!
Hello, Blocks
Let's take a look at a simple example. Note how the API here differs from Interface.
```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

with gr.Blocks() as demo:
    name = gr.Textbox(label="Name")
    output = gr.Textbox(label="Output Box")
    greet_btn = gr.Button("Greet")
    greet_btn.click(fn=greet, inputs=name, outputs=output)

demo.launch()
```

Things to note:
- `Blocks` are made with a `with` clause, and any component created inside this clause is automatically added to the app.
- Components appear vertically in the app in the order they are created. (Later we will cover customizing layouts!)
- A `Button` was created, and then a `click` event-listener was added to this button. The API for this should look familiar! Like an `Interface`, the `click` method takes a Python function, input components, and output components.
More Complexity
Here's an app to give you a taste of what's possible with Blocks:
```python
import numpy as np
import gradio as gr

def flip_text(x):
    return x[::-1]

def flip_image(x):
    return np.fliplr(x)

with gr.Blocks() as demo:
    gr.Markdown("Flip text or image files using this demo.")
    with gr.Tabs():
        with gr.TabItem("Flip Text"):
            text_input = gr.Textbox()
            text_output = gr.Textbox()
            text_button = gr.Button("Flip")
        with gr.TabItem("Flip Image"):
            with gr.Row():
                image_input = gr.Image()
                image_output = gr.Image()
            image_button = gr.Button("Flip")

    text_button.click(flip_text, inputs=text_input, outputs=text_output)
    image_button.click(flip_image, inputs=image_input, outputs=image_output)

demo.launch()
```

A lot more going on here! We'll cover how to create complex Blocks apps like this in the building with blocks section.
Congrats, you're now familiar with the basics of Gradio! 🥳 Go to our next guide to learn more about the key features of Gradio.
Open Source Stack
Gradio is built with many wonderful open-source libraries; please support them as well!
License
Gradio is licensed under the Apache License 2.0 found in the LICENSE file in the root directory of this repository.
Citation
Also check out the paper Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild, ICML HILL 2019, and please cite it if you use Gradio in your work.
@article{abid2019gradio,
title = {Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild},
author = {Abid, Abubakar and Abdalla, Ali and Abid, Ali and Khan, Dawood and Alfozan, Abdulrahman and Zou, James},
journal = {arXiv preprint arXiv:1906.02569},
year = {2019},
}
See Also
- The Gradio Discord Bot, a Discord bot that allows you to try any Hugging Face Space that is running a Gradio demo as a Discord bot.
Owner
- Name: loctv
- Login: MrNocTV
- Kind: user
- Location: Vietnam
- Company: Line Technology Vietnam
- Website: loctv.wordpress.com
- Repositories: 4
- Profile: https://github.com/MrNocTV
- Bio: Copy-paste specialist
Citation (CITATION.cff)
cff-version: 1.2.0
message: Please cite this project using these metadata.
title: "Gradio: Hassle-free sharing and testing of ML models in the wild"
abstract: >-
Accessibility is a major challenge of machine learning (ML).
Typical ML models are built by specialists and require
specialized hardware/software as well as ML experience to
validate. This makes it challenging for non-technical
collaborators and endpoint users (e.g. physicians) to easily
provide feedback on model development and to gain trust in
ML. The accessibility challenge also makes collaboration
more difficult and limits the ML researcher's exposure to
realistic data and scenarios that occur in the wild. To
improve accessibility and facilitate collaboration, we
developed an open-source Python package, Gradio, which
allows researchers to rapidly generate a visual interface
for their ML models. Gradio makes accessing any ML model as
easy as sharing a URL. Our development of Gradio is informed
by interviews with a number of machine learning researchers
who participate in interdisciplinary collaborations. Their
feedback identified that Gradio should support a variety of
interfaces and frameworks, allow for easy sharing of the
interface, allow for input manipulation and interactive
inference by the domain expert, as well as allow embedding
the interface in iPython notebooks. We developed these
features and carried out a case study to understand Gradio's
usefulness and usability in the setting of a machine
learning collaboration between a researcher and a
cardiologist.
authors:
- family-names: Abid
given-names: Abubakar
- family-names: Abdalla
given-names: Ali
- family-names: Abid
given-names: Ali
- family-names: Khan
given-names: Dawood
- family-names: Alfozan
given-names: Abdulrahman
- family-names: Zou
given-names: James
doi: 10.48550/arXiv.1906.02569
date-released: 2019-06-06
url: https://arxiv.org/abs/1906.02569
GitHub Events
Total
Last Year
Issues and Pull Requests
Last synced: 11 months ago
All Time
- Total issues: 0
- Total pull requests: 3
- Average time to close issues: N/A
- Average time to close pull requests: 4 minutes
- Total issue authors: 0
- Total pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 3
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
- MrNocTV (3)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- FedericoCarboni/setup-ffmpeg v2 composite
- actions/cache v3 composite
- actions/checkout v3 composite
- actions/setup-node v3 composite
- actions/setup-python v3 composite
- codecov/codecov-action v3 composite
- pnpm/action-setup v2.2.2 composite
- actions/checkout v3 composite
- actions/setup-node v3 composite
- actions/setup-python v3 composite
- actions/upload-artifact v3 composite
- pnpm/action-setup v2.2.2 composite
- actions/checkout v3 composite
- actions/setup-python v3 composite
- peter-evans/create-pull-request v4 composite
- actions/checkout v2 composite
- actions/checkout v2 composite
- actions/upload-artifact v3 composite
- actions/checkout v3 composite
- actions/setup-python v3 composite
- thollander/actions-comment-pull-request v2 composite
- actions/checkout v3 composite
- actions/setup-python v3 composite
- actions/checkout v3 composite
- actions/setup-python v3 composite
- thollander/actions-comment-pull-request v1 composite
- EndBug/add-and-commit v9 composite
- actions/checkout v3 composite
- actions/setup-python v3 composite
- pnpm/action-setup v2.2.2 composite
- pypa/gh-action-pypi-publish 27b31702a0e7fc50959f5ad993c78deac1bdfc29 composite
- softprops/action-gh-release v1 composite
- actions/checkout v2 composite
- actionsdesk/lfs-warning v3.2 composite
- actions/checkout v3 composite
- actions/setup-node v3 composite
- actions/setup-python v3 composite
- actions/upload-artifact v3 composite
- pnpm/action-setup v2.2.1 composite
- python 3.8 build
- @types/three ^0.138.0 development
- @gradio/tootils workspace:^0.0.1
- @playwright/test ^1.27.1
- @sveltejs/vite-plugin-svelte ^1.0.0-next.44
- @tailwindcss/forms ^0.5.0
- @testing-library/dom ^8.11.3
- @testing-library/svelte ^3.1.0
- @testing-library/user-event ^13.5.0
- autoprefixer ^10.4.4
- babylonjs ^5.17.1
- babylonjs-loaders ^5.17.1
- happy-dom ^2.49.0
- msw ^1.0.0
- node-html-parser ^5.3.3
- npm-run-all ^4.1.5
- playwright ^1.27.1
- plotly.js-dist-min ^2.10.1
- polka ^1.0.0-next.22
- pollen-css ^4.6.1
- postcss ^8.4.6
- postcss-custom-media 8
- postcss-nested ^5.0.6
- postcss-prefix-selector ^1.16.0
- prettier ^2.6.2
- prettier-plugin-css-order ^1.3.0
- prettier-plugin-svelte ^2.7.0
- sirv ^2.0.2
- sirv-cli ^2.0.2
- svelte ^3.49.0
- svelte-check ^2.8.0
- svelte-i18n ^3.3.13
- svelte-preprocess ^4.10.6
- tailwindcss ^3.1.6
- tinyspy ^0.3.0
- typescript ^4.7.4
- vite ^2.9.5
- vitest ^0.12.7
- vite ^2.9.9 development
- @gradio/accordion workspace:^0.0.1
- @gradio/atoms workspace:^0.0.1
- @gradio/audio workspace:^0.0.1
- @gradio/button workspace:^0.0.1
- @gradio/chart workspace:^0.0.1
- @gradio/chatbot workspace:^0.0.1
- @gradio/file workspace:^0.0.1
- @gradio/form workspace:^0.0.1
- @gradio/gallery workspace:^0.0.1
- @gradio/highlighted-text workspace:^0.0.1
- @gradio/html workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/image workspace:^0.0.1
- @gradio/json workspace:^0.0.1
- @gradio/label workspace:^0.0.1
- @gradio/markdown workspace:^0.0.1
- @gradio/model3D workspace:^0.0.1
- @gradio/plot workspace:^0.0.1
- @gradio/table workspace:^0.0.1
- @gradio/tabs workspace:^0.0.1
- @gradio/theme workspace:^0.0.1
- @gradio/upload workspace:^0.0.1
- @gradio/upload-button workspace:^0.0.1
- @gradio/utils workspace:^0.0.1
- @gradio/video workspace:^0.0.1
- d3-dsv ^3.0.1
- mime-types ^2.1.34
- postcss ^8.4.21
- postcss-prefix-selector ^1.16.0
- svelte ^3.25.1
- svelte-i18n ^3.3.13
- @gradio/utils workspace:^0.0.1
- @gradio/atoms workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/upload workspace:^0.0.1
- extendable-media-recorder ^7.0.2
- extendable-media-recorder-wav-encoder ^7.0.76
- svelte-range-slider-pips ^2.0.1
- @gradio/utils workspace:^0.0.1
- @types/d3-dsv ^3.0.0 development
- @types/d3-scale ^4.0.2 development
- @types/d3-shape ^3.0.2 development
- @gradio/icons workspace:^0.0.1
- @gradio/theme workspace:^0.0.1
- @gradio/tooltip workspace:^0.0.1
- @gradio/utils workspace:^0.0.1
- d3-dsv ^3.0.1
- d3-scale ^4.0.2
- d3-shape ^3.1.0
- @gradio/theme workspace:^0.0.1
- @gradio/utils workspace:^0.0.1
- @gradio/atoms workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/upload workspace:^0.0.1
- @gradio/atoms workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/utils workspace:^0.0.1
- @gradio/atoms workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/image workspace:^0.0.1
- @gradio/upload workspace:^0.0.1
- @gradio/utils workspace:^0.0.1
- @gradio/theme workspace:^0.0.1
- @gradio/utils workspace:^0.0.1
- @gradio/atoms workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/upload workspace:^0.0.1
- @gradio/utils workspace:^0.0.1
- cropperjs ^1.5.12
- lazy-brush ^1.0.1
- resize-observer-polyfill ^1.5.1
- @gradio/atoms workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/utils workspace:^0.0.1
- @gradio/atoms workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/upload workspace:^0.0.1
- babylonjs ^4.2.1
- babylonjs-loaders ^4.2.1
- @gradio/atoms workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/theme workspace:^0.0.1
- @gradio/utils workspace:^0.0.1
- @rollup/plugin-json ^5.0.2
- plotly.js-dist-min ^2.10.1
- svelte-vega ^1.2.0
- vega ^5.22.1
- vega-lite *
- @gradio/button workspace:^0.0.1
- @gradio/upload workspace:^0.0.1
- @gradio/utils workspace:^0.0.1
- @types/d3-dsv ^3.0.0
- d3-dsv ^3.0.1
- dequal ^2.0.2
- @gradio/atoms workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/button workspace:^0.0.1
- @gradio/upload workspace:^0.0.1
- @gradio/utils workspace:^0.0.1
- @gradio/theme workspace:^0.0.1
- @gradio/atoms workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/image workspace:^0.0.1
- @gradio/upload workspace:^0.0.1
- @sveltejs/adapter-auto next development
- @sveltejs/kit ^1.0.0-next.318 development
- autoprefixer ^10.4.2 development
- postcss ^8.4.5 development
- postcss-load-config ^3.1.1 development
- svelte-check ^2.2.6 development
- svelte-preprocess ^4.10.1 development
- tailwindcss ^3.0.12 development
- tslib ^2.3.1 development
- typescript ~4.5.4 development
- @gradio/accordion workspace:^0.0.1
- @gradio/atoms workspace:^0.0.1
- @gradio/audio workspace:^0.0.1
- @gradio/button workspace:^0.0.1
- @gradio/chart workspace:^0.0.1
- @gradio/chatbot workspace:^0.0.1
- @gradio/file workspace:^0.0.1
- @gradio/form workspace:^0.0.1
- @gradio/gallery workspace:^0.0.1
- @gradio/highlighted-text workspace:^0.0.1
- @gradio/html workspace:^0.0.1
- @gradio/icons workspace:^0.0.1
- @gradio/image workspace:^0.0.1
- @gradio/json workspace:^0.0.1
- @gradio/label workspace:^0.0.1
- @gradio/markdown workspace:^0.0.1
- @gradio/model3D workspace:^0.0.1
- @gradio/plot workspace:^0.0.1
- @gradio/table workspace:^0.0.1
- @gradio/tabs workspace:^0.0.1
- @gradio/theme workspace:^0.0.1
- @gradio/upload workspace:^0.0.1
- @gradio/upload-button workspace:^0.0.1
- @gradio/video workspace:^0.0.1
- svelte >=3.44.0 <4.0.0
- 563 dependencies
- 178 dependencies
- @fullhuman/postcss-purgecss ^4.0.3
- @tailwindcss/forms ^0.5.0
- @tailwindcss/typography ^0.5.4
- autoprefixer ^10.4.0
- cssnano ^5.0.8
- postcss-cli ^9.0.1
- postcss-hash ^3.0.0
- tailwindcss ^3.0.24
- matplotlib *
- numpy *
- torch ==1.6.0
- torchvision ==0.7.0
- wget *
- altair *
- vega_datasets *
- Pillow *
- cmake *
- gdown *
- numpy *
- onnxruntime-gpu *
- opencv-python-headless *
- scipy *
- torch *
- torchvision *
- numpy *
- matplotlib *
- shap *
- torch *
- transformers *
- plotly *
- pypistats *
- torch *
- transformers *
- bokeh >=3.0
- xyzservices *
- SQLAlchemy *
- matplotlib *
- psycopg2 *
- matplotlib >=3.5.2
- scikit-learn >=1.0.1
- numpy *
- opencv-python *
- Pillow *
- plotly *
- Pillow *
- jinja2 *
- numpy *
- open3d *
- torch *
- transformers add_dpt_redesign
- diffusers *
- torch *
- transformers *
- tensorflow *
- torch *
- transformers *
- numpy *
- pandas *
- torch *
- transformers *
- numpy *
- opencv-python *
- torch *
- torchvision *
- numpy *
- tensorflow *
- pillow *
- torch *
- torchvision *
- numpy *
- tensorflow *
- numpy *
- scipy *
- torch *
- transformers *
- shap *
- torch *
- transformers *
- vega_datasets *
- plotly *
- matplotlib *
- numpy *
- scipy *
- plotly *
- gdown *
- librosa ==0.9.2
- torch ==1.12.0
- torchaudio ==0.12.0
- torchvision ==0.13.0
- vega_datasets *
- neon-tts-plugin-coqui ==0.4.1a9
- torch *
- transformers *
- altair *
- bokeh *
- matplotlib *
- numpy *
- plotly *
- gdown *
- torch *
- matplotlib *
- numpy *
- datasets *
- tqdm *
- tqdm *
- matplotlib *
- numpy *
- pandas *
- torchaudio *
- vega_datasets *
- nltk *
- plotly *
- matplotlib *
- numpy *
- scipy *
- diffusers *
- ftfy *
- nvidia-ml-py3 *
- torch *
- transformers *
- matplotlib *
- numpy *
- deepspeech ==0.9.3
- torch *
- transformers *
- spacy *
- gradio *
- torch *
- pandas *
- plotly *
- prophet ==1.1
- pypistats *
- numpy *
- pandas *
- scikit-learn *
- gradio *
- torch *
- torch *
- transformers *
- torchaudio *
- opencv-python *
- datasets *
- matplotlib *
- pandas *
- shap *
- xgboost *
- Jinja2 *
- aiofiles *
- aiohttp *
- altair >=4.2.0
- fastapi *
- ffmpy *
- fsspec *
- httpx *
- markdown-it-py >=2.0.0
- markupsafe *
- matplotlib *
- mdit-py-plugins <=0.3.3
- numpy *
- orjson *
- pandas *
- pillow *
- pycryptodome *
- pydantic *
- pydub *
- python-multipart *
- pyyaml *
- requests *
- typing_extensions *
- uvicorn *
- websockets >=10.0
- IPython * test
- altair * test
- asyncio * test
- black * test
- boto3 * test
- comet_ml * test
- coverage * test
- fastapi >=0.87.0 test
- flake8 * test
- httpx * test
- huggingface_hub * test
- isort * test
- mlflow * test
- pydantic * test
- pytest * test
- pytest-asyncio * test
- pytest-cov * test
- respx * test
- scikit-image * test
- shap * test
- torch * test
- tqdm * test
- transformers * test
- vega_datasets * test
- wandb * test