https://github.com/zenml-io/zenml

ZenML 🙏: MLOps for Reliable AI: from Classical AI to Agents. https://zenml.io.


Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • ○ CITATION.cff file
  • ✓ codemeta.json file
    Found codemeta.json file
  • ✓ .zenodo.json file
    Found .zenodo.json file
  • ○ DOI references
  • ○ Academic publication links
  • ✓ Committers with academic emails
    1 of 114 committers (0.9%) from academic institutions
  • ○ Institutional organization owner
  • ○ JOSS paper metadata
  • ○ Scientific vocabulary similarity
    Low similarity (14.0%) to scientific vocabulary

Keywords

ai automl data-science deep-learning devops-tools hacktoberfest llm llmops machine-learning metadata-tracking ml mlops pipelines production-ready pytorch tensorflow workflow zenml

Keywords from Contributors

agents langchain data-profilers transformers datacleaner developer-tools pipeline-testing jax gemini fine-tuning
Last synced: 5 months ago

Repository

ZenML 🙏: MLOps for Reliable AI: from Classical AI to Agents. https://zenml.io.

Basic Info
  • Host: GitHub
  • Owner: zenml-io
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage: https://zenml.io
  • Size: 698 MB
Statistics
  • Stars: 4,861
  • Watchers: 43
  • Forks: 535
  • Open Issues: 90
  • Releases: 142
Topics
ai automl data-science deep-learning devops-tools hacktoberfest llm llmops machine-learning metadata-tracking ml mlops pipelines production-ready pytorch tensorflow workflow zenml
Created over 5 years ago · Last pushed 6 months ago
Metadata Files
Readme Contributing License Code of conduct Security Roadmap Agents CLA

README.md


ZenML Header

Your unified toolkit for shipping everything from decision trees to complex AI agents, built on the MLOps principles you already trust.

[![PyPi][pypi-shield]][pypi-url] [![PyPi][pypiversion-shield]][pypi-url] [![PyPi][downloads-shield]][downloads-url] [![Contributors][contributors-shield]][contributors-url] [![License][license-shield]][license-url]

Features Roadmap Report Bug Sign up for ZenML Pro Blog Podcast

For the latest release, see the release notes.


ZenML is a unified MLOps framework that extends the battle-tested principles you rely on for classical ML to the new world of AI agents. It's one platform to develop, evaluate, and deploy your entire AI portfolio - from decision trees to complex multi-agent systems. By providing a single framework for your entire AI stack, ZenML enables developers across your organization to collaborate more effectively without maintaining separate toolchains for models and agents.

The Problem: MLOps Works for Models, But What About AI?

No MLOps for modern AI

You're an ML engineer. You've perfected deploying scikit-learn models and wrangling PyTorch jobs. Your MLOps stack is dialed in. But now, you're being asked to build and ship AI agents, and suddenly your trusted toolkit is starting to crack.

  • The Adaptation Struggle: Your MLOps habits (rigorous testing, versioning, CI/CD) don't map cleanly onto agent development. How do you version a prompt? How do you regression test a non-deterministic system? The tools that gave you confidence for models now create friction for agents.

  • The Divided Stack: To cope, teams are building a second, parallel stack just for LLM-based systems. Now you're maintaining two sets of tools, two deployment pipelines, and two mental models. Your classical models live in one world, your agents in another. It's expensive, complex, and slows everyone down.

  • The Broken Feedback Loop: Getting an agent from your local environment to production is a slow, painful journey. By the time you get feedback on performance, cost, or quality, the requirements have already changed. Iteration is a guessing game, not a data-driven process.

The Solution: One Framework for your Entire AI Stack

Stop maintaining two separate worlds. ZenML is a unified MLOps framework that extends the battle-tested principles you rely on for classical ML to the new world of AI agents. It's one platform to develop, evaluate, and deploy your entire AI portfolio.

```python
# Morning: Your sklearn pipeline is still versioned and reproducible.
train_and_deploy_classifier()

# Afternoon: Your new agent evaluation pipeline uses the same logic.
evaluate_and_deploy_agent()

# Same platform. Same principles. New possibilities.
```

With ZenML, you're not replacing your knowledge; you're extending it. Use the pipelines and practices you already know to version, test, deploy, and monitor everything from classic models to the most advanced agents.

See It In Action: Multi-Agent Architecture Comparison

The Challenge: Your team built three different customer service agents. Which one should go to production? With ZenML, you can build a reproducible pipeline to test them on real data and make a data-driven decision, with full observability via LangGraph, LiteLLM & Langfuse.

https://github.com/user-attachments/assets/edeb314c-fe07-41ba-b083-cd9ab11db4a7

```python
from zenml import pipeline, step
from zenml.types import HTMLString
import pandas as pd

@step
def load_real_conversations() -> pd.DataFrame:
    """Load customer service queries for testing."""
    return load_customer_queries()

@step
def train_intent_classifier(queries: pd.DataFrame):
    """Train a scikit-learn classifier alongside your agents."""
    return train_sklearn_pipeline(queries)

@step
def load_prompts() -> dict:
    """Load prompts as versioned ZenML artifacts."""
    return load_agent_prompts_from_files()

@step
def run_architecture_comparison(queries: pd.DataFrame, classifier, prompts: dict) -> tuple:
    """Test three different agent architectures on the same data."""
    architectures = {
        "single_agent": SingleAgentRAG(prompts),
        "multi_specialist": MultiSpecialistAgents(prompts),
        "langgraph_workflow": LangGraphAgent(prompts),  # Real LangGraph implementation!
    }

    # ZenML automatically versions agent code, prompts, and configurations
    # LiteLLM provides unified access to 100+ LLM providers
    # LangGraph orchestrates a multi-agent graph
    # Langfuse tracks costs, performance, and traces for full observability
    results = test_all_architectures(queries, architectures)
    mermaid_diagram = generate_langgraph_visualization()

    return results, mermaid_diagram

@step
def evaluate_and_decide(queries: pd.DataFrame, results: dict) -> HTMLString:
    """Generate beautiful HTML report with winner selection."""
    return create_styled_comparison_report(results)

@pipeline
def compare_agent_architectures():
    """Data-driven agent architecture decisions with full MLOps tracking."""
    queries = load_real_conversations()
    prompts = load_prompts()  # Prompts as versioned artifacts
    classifier = train_intent_classifier(queries)
    results, viz = run_architecture_comparison(queries, classifier, prompts)
    report = evaluate_and_decide(queries, results)

if __name__ == "__main__":
    compare_agent_architectures()  # Rich visualizations automatically appear in ZenML dashboard
```

**See the complete working example.** Prefer a smaller end-to-end template? Check out the Minimal Agent Production example, a lightweight document analysis service with pipelines, evaluation, and a simple web UI.

The Result: A clear winner is selected based on data, not opinions. You have full lineage from the test data and agent versions to the final report and deployment decision.

Development lifecycle

Get Started (5 minutes)

Architecture Overview

ZenML uses a client-server architecture with an integrated web dashboard (zenml-io/zenml-dashboard) for pipeline visualization and management:

  • Local Development: pip install "zenml[server]" - runs both client and server locally
  • Production: Deploy server separately, connect with pip install zenml + zenml login <server-url>

```bash
# Install ZenML with server capabilities
pip install "zenml[server]"

# Install required dependencies
pip install scikit-learn openai numpy

# Initialize your ZenML repository
zenml init

# Start local server or connect to a remote one
zenml login

# Set OpenAI API key (optional)
export OPENAI_API_KEY=sk-svv....
```

Your First Pipeline (2 minutes)

```python
# simple_pipeline.py
from zenml import pipeline, step
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from typing import Tuple
from typing_extensions import Annotated
import numpy as np

@step
def create_dataset() -> Tuple[
    Annotated[np.ndarray, "X_train"],
    Annotated[np.ndarray, "X_test"],
    Annotated[np.ndarray, "y_train"],
    Annotated[np.ndarray, "y_test"],
]:
    """Generate a simple classification dataset."""
    X, y = make_classification(n_samples=100, n_features=4, n_classes=2, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    return X_train, X_test, y_train, y_test

@step
def train_model(X_train: np.ndarray, y_train: np.ndarray) -> RandomForestClassifier:
    """Train a simple sklearn model."""
    model = RandomForestClassifier(n_estimators=10, random_state=42)
    model.fit(X_train, y_train)
    return model

@step
def evaluate_model(model: RandomForestClassifier, X_test: np.ndarray, y_test: np.ndarray) -> float:
    """Evaluate the model accuracy."""
    predictions = model.predict(X_test)
    return accuracy_score(y_test, predictions)

@step
def generate_summary(accuracy: float) -> str:
    """Use OpenAI to generate a model summary."""
    import openai

    client = openai.OpenAI()  # Set OPENAI_API_KEY environment variable
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Write a brief summary of a ML model with {accuracy:.2%} accuracy."
        }],
        max_tokens=50
    )
    return response.choices[0].message.content

@pipeline
def simple_ml_pipeline():
    """A simple pipeline combining sklearn and OpenAI."""
    X_train, X_test, y_train, y_test = create_dataset()
    model = train_model(X_train, y_train)
    accuracy = evaluate_model(model, X_test, y_test)
    try:
        import openai  # noqa: F401
        generate_summary(accuracy)
    except ImportError:
        print("OpenAI is not installed. Skipping summary generation.")

if __name__ == "__main__":
    result = simple_ml_pipeline()
```

Run it:

```bash
export OPENAI_API_KEY="your-api-key-here"
python simple_pipeline.py
```

Chat With Your Pipelines: ZenML MCP Server

Stop clicking through dashboards to understand your ML workflows. The ZenML MCP Server lets you query your pipelines, analyze runs, and trigger deployments using natural language through Claude Desktop, Cursor, or any MCP-compatible client.

"Which pipeline runs failed this week and why?" "Show me accuracy metrics for all my customer churn models" "Trigger the latest fraud detection pipeline with production data"

Quick Setup: 1. Download the .dxt file from zenml-io/mcp-zenml 2. Drag it into Claude Desktop settings 3. Add your ZenML server URL and API key 4. Start chatting with your ML infrastructure

The MCP (Model Context Protocol) integration transforms your ZenML metadata into conversational insights, making pipeline debugging and analysis as easy as asking a question. Perfect for teams who want to democratize access to ML operations without requiring dashboard expertise.

Learn More

Getting Started Resources

The best way to learn about ZenML is through our comprehensive documentation and tutorials:

For visual learners, start with this 11-minute introduction:

Introductory YouTube Video

Production Examples

  1. Agent Architecture Comparison - Compare AI agents with LangGraph workflows, LiteLLM integration, and automatic visualizations via custom materializers
  2. Minimal Agent Production - Document analysis service with pipelines, evaluation, and web UI
  3. E2E Batch Inference - Complete MLOps pipeline with feature engineering
  4. LLM RAG Pipeline - Production RAG with evaluation loops
  5. Agentic Workflow (Deep Research) - Orchestrate your agents with ZenML
  6. Fine-tuning Pipeline - Fine-tune and deploy LLMs

Deployment Options

For Teams: - Self-hosted - Deploy on your infrastructure with Helm/Docker - ZenML Pro - Managed service with enterprise support (free trial)

Infrastructure Requirements: - Docker (or Kubernetes for production) - Object storage (S3/GCS/Azure) - MySQL-compatible database (MySQL 8.0+ or MariaDB) - Complete requirements

Books & Resources

LLM Engineer's Handbook Cover Machine Learning Engineering with Python Cover

ZenML is featured in these comprehensive guides to production AI systems.

Join ML Engineers Building the Future of AI

Contribute: - Star us on GitHub - Help others discover ZenML - Contributing Guide - Start with good-first-issue - Write Integrations - Add your favorite tools

Stay Updated: - Public Roadmap - See what's coming next - Blog - Best practices and case studies - Slack - Talk with AI practitioners

FAQs from ML Engineers Like You

Q: "Do I need to rewrite my agents or models to use ZenML?"

A: No. Wrap your existing code in a @step. Keep using scikit-learn, PyTorch, LangGraph, LlamaIndex, or raw API calls. ZenML orchestrates your tools, it doesn't replace them.

Q: "How is this different from LangSmith/Langfuse?"

A: They provide excellent observability for LLM applications. We orchestrate the full MLOps lifecycle for your entire AI stack. With ZenML, you manage both your classical ML models and your AI agents in one unified framework, from development and evaluation all the way to production deployment.

Q: "Can I use my existing MLflow/W&B setup?"

A: Yes! ZenML integrates with both MLflow and Weights & Biases. Your experiments, our pipelines.

Q: "Is this just MLflow with extra steps?"

A: No. MLflow tracks experiments. We orchestrate the entire development process from training and evaluation to deployment and monitoring for both models and agents.

Q: "How do I configure ZenML with Kubernetes?"

A: ZenML integrates with Kubernetes through the native Kubernetes orchestrator, Kubeflow, and other K8s-based orchestrators. See our Kubernetes orchestrator guide and Kubeflow guide, plus deployment documentation.

Q: "What about cost? I can't afford another platform."

A: ZenML's open-source version is free forever. You likely already have the required infrastructure (like a Kubernetes cluster and object storage). We just help you make better use of it for MLOps.

VS Code Extension

Manage pipelines directly from your editor:

VS Code Extension in Action!
ZenML Extension

Install from VS Code Marketplace.

License

ZenML is distributed under the terms of the Apache License Version 2.0. See LICENSE for details.

Owner

  • Name: ZenML
  • Login: zenml-io
  • Kind: organization
  • Email: support@zenml.io
  • Location: Germany

Building production MLOps tooling.

Committers

Last synced: 10 months ago

All Time
  • Total Commits: 6,707
  • Total Committers: 114
  • Avg Commits per committer: 58.833
  • Development Distribution Score (DDS): 0.798
Past Year
  • Commits: 931
  • Committers: 33
  • Avg Commits per committer: 28.212
  • Development Distribution Score (DDS): 0.765
Top Committers
Name Email Commits
Hamza Tahir h****1@g****m 1,355
Alex Strick van Linschoten s****l 1,010
Baris Can Durak b****k@h****m 1,000
Michael Schuster s****i 875
Stefan Nica s****n@z****o 575
Nicholas Junge n****s@m****o 298
Alexej Penner t****r@g****m 265
Felix Altenberger f****x@z****o 201
Andrei Vishniakov 3****v 192
Safoine El Khabich 3****e 164
Hamza Tahir h****a@m****o 136
baris b****s@m****o 108
Jayesh Sharma w****h@o****m 93
benkoller k****t@b****e 77
github-actions g****s@g****m 66
Alex Strick van Linschoten 9****l 35
Christian Versloot c****t@i****l 33
Dickson Neoh d****h@g****m 24
github-actions[bot] 4****] 13
val3nt-ml v****t@g****m 13
James W. Browne j****s@z****o 11
Gabriel Martín Blázquez g****v@g****m 10
François SERRA f****a@a****m 9
Kamalesh Palanisamy k****0@g****m 9
dependabot[bot] 4****] 7
Priyadutt 6****t 7
José Lopez j****a@r****u 7
SKRohit r****8@g****m 5
ramitsurana r****a@g****m 4
jlopezpena j****a 3
and 84 more...

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 207
  • Total pull requests: 3,231
  • Average time to close issues: 2 months
  • Average time to close pull requests: 9 days
  • Total issue authors: 104
  • Total pull request authors: 79
  • Average comments per issue: 2.1
  • Average comments per pull request: 2.05
  • Merged pull requests: 2,521
  • Bot issues: 16
  • Bot pull requests: 214
Past Year
  • Issues: 71
  • Pull requests: 1,342
  • Average time to close issues: 11 days
  • Average time to close pull requests: 4 days
  • Issue authors: 36
  • Pull request authors: 38
  • Average comments per issue: 0.94
  • Average comments per pull request: 1.87
  • Merged pull requests: 986
  • Bot issues: 0
  • Bot pull requests: 151
Top Authors
Issue Authors
  • strickvl (20)
  • github-actions[bot] (16)
  • schustmi (13)
  • bcdurak (9)
  • htahir1 (8)
  • JustGitting (7)
  • christianversloot (6)
  • jlopezpena (5)
  • HitainKakkar (4)
  • AlexejPenner (4)
  • patricksavill (3)
  • francoisserra (3)
  • soubenz (3)
  • decadance-dance (2)
  • adamwawrzynski (2)
Pull Request Authors
  • schustmi (622)
  • strickvl (563)
  • avishniakov (363)
  • bcdurak (322)
  • stefannica (275)
  • htahir1 (222)
  • safoinme (152)
  • AlexejPenner (114)
  • fa9r (87)
  • wjayesh (86)
  • github-actions[bot] (82)
  • runllm-pr-agent[bot] (69)
  • dependabot[bot] (63)
  • christianversloot (54)
  • francoisserra (9)
Top Labels
Issue Labels
bug (119) enhancement (38) planned (17) cache-miss (16) CI (16) good first issue (15) internal (15) documentation (5) good-second-issue (3) run-slow-ci (3) feature (3) security (1) help wanted (1) dependencies (1) breaking-change (1) requires-frontend-changes (1)
Pull Request Labels
internal (2,631) enhancement (950) run-slow-ci (827) bug (802) documentation (559) backport (200) dependencies (155) python (64) breaking-change (55) CI (36) tests (21) security (14) codex (10) fix (9) release (8) good first issue (7) requires-frontend-changes (6) P2 (4) planned (3) P1 (2) staging-workspace (2) good-second-issue (1) feature (1)

Packages

  • Total packages: 3
  • Total downloads:
    • pypi 42,317 last-month
  • Total docker downloads: 19
  • Total dependent packages: 2
    (may contain duplicates)
  • Total dependent repositories: 44
    (may contain duplicates)
  • Total versions: 560
  • Total maintainers: 2
  • Total advisories: 13
pypi.org: zenml

ZenML: Write production-ready ML code.

  • Versions: 170
  • Dependent Packages: 2
  • Dependent Repositories: 44
  • Downloads: 38,374 Last month
  • Docker Downloads: 19
Rankings
Stargazers count: 1.3%
Dependent repos count: 2.2%
Downloads: 2.7%
Average: 2.7%
Forks count: 2.8%
Dependent packages count: 3.2%
Docker downloads count: 4.1%
Maintainers (1)
Last synced: 6 months ago
pypi.org: mseep-zenml

ZenML: Write production-ready ML code.

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 22 Last month
Rankings
Dependent packages count: 8.8%
Average: 29.3%
Dependent repos count: 49.7%
Maintainers (1)
Last synced: 6 months ago
pypi.org: zenml-nightly

ZenML: Write production-ready ML code.

  • Versions: 389
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 3,921 Last month
Rankings
Dependent packages count: 10.1%
Average: 38.4%
Dependent repos count: 66.7%
Maintainers (1)
Last synced: 6 months ago

Dependencies

.github/actions/setup_environment/action.yml actions
  • actions/cache v2.1.6 composite
  • actions/setup-python v2 composite
  • snok/install-poetry v1.3.1 composite
.github/workflows/integration-test.yml actions
  • ./.github/actions/setup_environment * composite
  • actions/checkout v2 composite
  • aws-actions/configure-aws-credentials v1 composite
  • easimon/maximize-build-space master composite
  • google-github-actions/auth v1 composite
  • google-github-actions/get-gke-credentials v0 composite
  • google-github-actions/setup-gcloud v1 composite
  • mxschmitt/action-tmate v3 composite
.github/workflows/mixpanel-test-data.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/pr_labeler.yml actions
  • JulienKode/team-labeler-action v0.1.1 composite
  • TimonVS/pr-labeler-action v3 composite
.github/workflows/publish_api_docs.yml actions
  • ./.github/actions/setup_environment * composite
  • actions/checkout v2 composite
  • actions/setup-node v2 composite
  • actions/setup-python v2 composite
.github/workflows/publish_docker_image.yml actions
  • actions/checkout v2 composite
  • google-github-actions/setup-gcloud v0 composite
.github/workflows/publish_to_pypi.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
  • snok/install-poetry v1 composite
.github/workflows/ci.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/codeql.yml actions
  • actions/checkout v3 composite
  • github/codeql-action/analyze v2 composite
  • github/codeql-action/init v2 composite
.github/workflows/image-optimiser.yml actions
  • actions/checkout v3 composite
  • calibreapp/image-actions main composite
.github/workflows/publish_helm_chart.yml actions
  • actions/checkout v3 composite
  • aws-actions/amazon-ecr-login v1 composite
  • aws-actions/configure-aws-credentials v2 composite
.github/workflows/release.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/setup-python-environment.yml actions
  • ./.github/actions/setup_environment * composite
  • actions/checkout v3 composite
  • crate-ci/typos master composite
  • gaurav-nelson/github-action-markdown-link-check v1 composite
  • mxschmitt/action-tmate v3 composite
.github/workflows/templates-test.yml actions
  • jenseng/dynamic-uses v1 composite
.github/workflows/trivy-zenml-core.yml actions
  • actions/checkout v3 composite
  • aquasecurity/trivy-action master composite
  • github/codeql-action/upload-sarif v2 composite
.github/workflows/trivy-zenserver.yml actions
  • actions/checkout v3 composite
  • aquasecurity/trivy-action master composite
  • github/codeql-action/upload-sarif v2 composite
examples/quickstart/requirements.txt pypi
  • notebook *
  • pyarrow *
  • scikit-learn <1.3
  • zenml >=0.50.0
.github/workflows/update-templates-to-examples.yml actions
  • actions/checkout v3 composite
  • actions/github-script v4 composite
  • zenml-io/template-e2e-batch/.github/actions/e2e_template_test main composite
examples/e2e/requirements.txt pypi
  • zenml *
examples/generative_chat/requirements.txt pypi
  • faiss-cpu ==
examples/label_studio_text_annotation/requirements.txt pypi
  • accelerate >=0.20.1
  • datasets *
  • evaluate *
  • pytorch_lightning *
  • scikit-learn *
  • transformers *
pyproject.toml pypi
  • Jinja2 *
  • adlfs >=2021.10.0
  • alembic ~1.8.1
  • aws-profile-manager >=0.5.0
  • azure-identity >=1.4.0
  • azure-keyvault-secrets >=4.0.0
  • azure-mgmt-containerregistry >=10.0.0
  • azure-mgmt-containerservice >=20.0.0
  • azure-mgmt-resource >=21.0.0
  • azure-mgmt-storage >=20.0.0
  • azure-storage-blob >=12.0.0
  • bandit ^1.7.5
  • black ^23.10.0
  • boto3 >=1.16.0,<=1.24.59
  • click ^8.0.1,<8.1.4
  • click-params ^0.3.0
  • cloudpickle >=2.0.0,<3
  • copier >=8.1.0
  • coverage ^5.5
  • darglint ^1.8.1
  • distro ^1.6.0
  • docker ~6.1.0
  • fastapi >=0.75,<0.100
  • fastapi-utils ~0.2.1
  • gcsfs 2022.11.0
  • gitpython ^3.1.18
  • google-cloud-container >=2.21.0
  • google-cloud-secret-manager >=2.12.5
  • google-cloud-storage >=2.9.0
  • httplib2 <0.20,>=0.19.1
  • hvac >=0.11.2
  • hypothesis ^6.43.1
  • ipinfo >=4.4.3
  • jinja2-time ^0.2.0
  • kubernetes >=18.20.0
  • mike ^1.1.2
  • mkdocs ^1.2.3
  • mkdocs-awesome-pages-plugin ^2.6.1
  • mkdocs-material ^8.1.7
  • mkdocstrings ^0.17.0
  • mlstacks 0.7.8
  • mypy 1.6.1
  • orjson ~3.8.3
  • pandas >=1.1.5
  • passlib ~1.7.4
  • pre-commit ^2.14.0
  • psutil >=5.0.0
  • pydantic <1.11, >=1.9.0
  • pyjwt 2.7.*
  • pyment ^0.3.3
  • pymysql ~1.0.2
  • pyparsing <3,>=2.4.0
  • pytest ^7.4.0
  • pytest-clarity ^1.0.1
  • pytest-mock ^3.6.1
  • pytest-randomly ^3.10.1
  • python >=3.8,<3.12
  • python-dateutil ^2.8.1
  • python-multipart ~0.0.5
  • python-terraform ^0.10.1
  • pyyaml >=6.0.1
  • rich >=12.0.0
  • ruff ^0.1.0
  • s3fs 2022.11.0
  • sqlalchemy_utils 0.38.3
  • sqlmodel 0.0.8
  • tox ^3.24.3
  • types-Markdown ^3.3.6
  • types-Pillow ^9.2.1
  • types-PyMySQL ^1.0.4
  • types-PyYAML ^6.0.0
  • types-certifi ^2021.10.8.0
  • types-croniter ^1.0.2
  • types-futures ^3.3.1
  • types-passlib ^1.7.7
  • types-protobuf ^3.18.0
  • types-psutil ^5.8.13
  • types-python-dateutil ^2.8.2
  • types-python-slugify ^5.0.2
  • types-redis ^4.1.19
  • types-requests ^2.27.11
  • types-setuptools ^57.4.2
  • types-six ^1.16.2
  • types-termcolor ^1.1.2
  • typing-extensions >=3.7.4
  • uvicorn >=0.17.5
src/zenml/integrations/gcp/orchestrators/vertex_scheduler/requirements.txt pypi
  • google-api-python-client >=1.7.8,<2
  • google-cloud-aiplatform *
.github/workflows/nightly_build.yml actions
.github/workflows/publish_to_pypi_nightly.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v2 composite
  • snok/install-poetry v1 composite
examples/nlp-case/gradio/Dockerfile docker
  • python 3.9 build
examples/nlp-case/gradio/requirements.txt pypi
  • IPython ==7.34.0
  • datasets ==2.12.0
  • gradio *
  • nltk *
  • numpy ==1.22.4
  • pandas ==1.5.3
  • scikit-learn ==1.2.2
  • session_info ==1.0.0
  • torch *
  • torchaudio *
  • torchvision *
  • transformers ==4.28.1
examples/nlp-case/requirements.txt pypi
  • accelerate *
  • gradio *
  • torchvision *
  • zenml ==0.50.0