i4-0-client-py

TAO71 I4.0 is an AI created by TAO71 in Python.

https://github.com/tao71-ai/i4.0

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (7.2%) to scientific vocabulary

Keywords

ai api artificial-intelligence chatbot chatbots client diffusers gpt4all image2text linux llama-cpp-python python python3 python311 server text2image transformers
Last synced: 4 months ago

Repository

TAO71 I4.0 is an AI created by TAO71 in Python.

Basic Info
  • Host: GitHub
  • Owner: TAO71-AI
  • License: other
  • Language: Python
  • Default Branch: main
  • Homepage: https://tao71.org
  • Size: 5.88 MB
Statistics
  • Stars: 6
  • Watchers: 3
  • Forks: 0
  • Open Issues: 0
  • Releases: 58
Topics
ai api artificial-intelligence chatbot chatbots client diffusers gpt4all image2text linux llama-cpp-python python python3 python311 server text2image transformers
Created over 2 years ago · Last pushed 5 months ago
Metadata Files
Readme · License · Code of conduct · Citation

README.md


What is this?

TAO71 I4.0 is an AI created by TAO71 and written in Python. It uses LLaMA-CPP-Python, Hugging Face Transformers, Hugging Face Diffusers, and related libraries.

[!IMPORTANT] To use I4.0 as expected on servers, you must create a Python venv. Python 3.11 (ideally 3.11.9) is recommended, but Python 3.12 should also be compatible.
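A minimal sketch of that setup on GNU/Linux; the environment name `i4-venv` is illustrative, not prescribed by the project:

```shell
# Create an isolated virtual environment for the I4.0 server.
# "i4-venv" is an example name; any path works.
python3 -m venv i4-venv

# Use the venv's interpreter and pip directly (or source bin/activate).
i4-venv/bin/python --version   # Python 3.11 recommended; 3.12 should also work
i4-venv/bin/pip --version
```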

Hardware requirements

Servers

CPU

Any x86_64 CPU with AVX or AVX2 and 2+ cores.
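On GNU/Linux you can check both requirements quickly; a sketch (reads `/proc/cpuinfo`, so it is Linux-only):

```shell
# Logical core count (the server wants 2+).
nproc

# CPU feature flags: prints "avx" / "avx2" when supported, nothing otherwise.
grep -woE 'avx2?' /proc/cpuinfo | sort -u
```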

GPU

A GPU is optional, but recommended.

For Hugging Face Transformers and Hugging Face Diffusers: any GPU with NVIDIA CUDA, ROCm, or SYCL support. For GPT4All or LLaMA-CPP-Python: any Vulkan-compatible GPU.

RAM

At least 4 GB for basic usage.

[!NOTE] This does not include the RAM required by the OS.

OS

Any GNU/Linux distribution that supports Python 3.11.

Python version

3.11

Clients

CPU

Any CPU. Because of the encryption overhead, we recommend a CPU with 2+ cores, or configuring a faster encryption algorithm.

[!WARNING] This has not been tested on some ARM CPUs.

RAM

Should work with about 384 MB, but 512 MB or 1 GB is recommended.

[!NOTE] This does not include the RAM required by the OS.

OS

Any OS that supports Python 3.11 or higher.

[!NOTE] Only tested on Windows 10, Windows 11, and Arch Linux.

Python version

3.11, 3.12 or 3.13

License

TAO71 License 111

Contributors/Helpers:

  • Alcoft (AlcoftTAO): programmer of I4.0.
  • Dinolt (DINOLT): designer of I4.0; created all of its images.

Owner

  • Name: TAO71-AI
  • Login: TAO71-AI
  • Kind: organization

Citation (CITATIONS.bib)

# This repository
@misc{I4.0,
  author = {TAO71-AI},
  title = {I4.0},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/TAO71-AI/I4.0}}
}

# GPT4All
@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}

# Transformers
@inproceedings{wolf-etal-2020-transformers,
    title = "Transformers: State-of-the-Art Natural Language Processing",
    author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = oct,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
    pages = "38--45"
}

# Datasets
@inproceedings{lhoest-etal-2021-datasets,
    title = "Datasets: A Community Library for Natural Language Processing",
    author = "Lhoest, Quentin  and
      Villanova del Moral, Albert  and
      Jernite, Yacine  and
      Thakur, Abhishek  and
      von Platen, Patrick  and
      Patil, Suraj  and
      Chaumond, Julien  and
      Drame, Mariama  and
      Plu, Julien  and
      Tunstall, Lewis  and
      Davison, Joe  and
      {\v{S}}a{\v{s}}ko, Mario  and
      Chhablani, Gunjan  and
      Malik, Bhavitvya  and
      Brandeis, Simon  and
      Le Scao, Teven  and
      Sanh, Victor  and
      Xu, Canwen  and
      Patry, Nicolas  and
      McMillan-Major, Angelina  and
      Schmid, Philipp  and
      Gugger, Sylvain  and
      Delangue, Cl{\'e}ment  and
      Matussi{\`e}re, Th{\'e}o  and
      Debut, Lysandre  and
      Bekman, Stas  and
      Cistac, Pierric  and
      Goehringer, Thibault  and
      Mustar, Victor  and
      Lagunas, Fran{\c{c}}ois  and
      Rush, Alexander  and
      Wolf, Thomas",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-demo.21",
    pages = "175--184",
    abstract = "The scale, variety, and quantity of publicly-available NLP datasets has grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves similarly for small datasets as for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets.",
    eprint={2109.02846},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
}

# Diffusers
@misc{von-platen-etal-2022-diffusers,
  author = {Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Thomas Wolf},
  title = {Diffusers: State-of-the-art diffusion models},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/diffusers}}
}

# NumPy
@ARTICLE{2020NumPy-Array,
  author  = {Harris, Charles R. and Millman, K. Jarrod and
            van der Walt, Stéfan J and Gommers, Ralf and
            Virtanen, Pauli and Cournapeau, David and
            Wieser, Eric and Taylor, Julian and Berg, Sebastian and
            Smith, Nathaniel J. and Kern, Robert and Picus, Matti and
            Hoyer, Stephan and van Kerkwijk, Marten H. and
            Brett, Matthew and Haldane, Allan and
            Fernández del Río, Jaime and Wiebe, Mark and
            Peterson, Pearu and Gérard-Marchant, Pierre and
            Sheppard, Kevin and Reddy, Tyler and Weckesser, Warren and
            Abbasi, Hameer and Gohlke, Christoph and
            Oliphant, Travis E.},
  title   = {Array programming with {NumPy}},
  journal = {Nature},
  year    = {2020},
  volume  = {585},
  pages   = {357--362},
  doi     = {10.1038/s41586-020-2649-2}
}

# Pytorch
@inproceedings{Paszke_PyTorch_An_Imperative_2019,
author = {Paszke, Adam and Gross, Sam and Massa, Francisco and Lerer, Adam and Bradbury, James and Chanan, Gregory and Killeen, Trevor and Lin, Zeming and Gimelshein, Natalia and Antiga, Luca and Desmaison, Alban and Kopf, Andreas and Yang, Edward and DeVito, Zachary and Raison, Martin and Tejani, Alykhan and Chilamkurthy, Sasank and Steiner, Benoit and Fang, Lu and Bai, Junjie and Chintala, Soumith},
booktitle = {Advances in Neural Information Processing Systems 32},
editor = {Wallach, H. and Larochelle, H. and Beygelzimer, A. and d'Alché-Buc, F. and Fox, E. and Garnett, R.},
pages = {8024--8035},
publisher = {Curran Associates, Inc.},
title = {{PyTorch: An Imperative Style, High-Performance Deep Learning Library}},
url = {http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf},
year = {2019}
}

# Tensorflow
@software{Abadi_TensorFlow_Large-scale_machine_2015,
author = {Abadi, Martín and Agarwal, Ashish and Barham, Paul and Brevdo, Eugene and Chen, Zhifeng and Citro, Craig and Corrado, Greg S. and Davis, Andy and Dean, Jeffrey and Devin, Matthieu and Ghemawat, Sanjay and Goodfellow, Ian and Harp, Andrew and Irving, Geoffrey and Isard, Michael and Jozefowicz, Rafal and Jia, Yangqing and Kaiser, Lukasz and Kudlur, Manjunath and Levenberg, Josh and Mané, Dan and Schuster, Mike and Monga, Rajat and Moore, Sherry and Murray, Derek and Olah, Chris and Shlens, Jonathon and Steiner, Benoit and Sutskever, Ilya and Talwar, Kunal and Tucker, Paul and Vanhoucke, Vincent and Vasudevan, Vijay and Viégas, Fernanda and Vinyals, Oriol and Warden, Pete and Wattenberg, Martin and Wicke, Martin and Yu, Yuan and Zheng, Xiaoqiang},
doi = {10.5281/zenodo.4724125},
license = {Apache-2.0},
month = nov,
title = {{TensorFlow, Large-scale machine learning on heterogeneous systems}},
year = {2015}
}

# Keras-NLP
@misc{kerasnlp2022,
  title={KerasNLP},
  author={Watson, Matthew and Qian, Chen and Bischof, Jonathan and Chollet, Fran\c{c}ois and others},
  year={2022},
  howpublished={\url{https://github.com/keras-team/keras-nlp}},
}

# SciPy
@ARTICLE{2020SciPy-NMeth,
  author  = {Virtanen, Pauli and Gommers, Ralf and Oliphant, Travis E. and
            Haberland, Matt and Reddy, Tyler and Cournapeau, David and
            Burovski, Evgeni and Peterson, Pearu and Weckesser, Warren and
            Bright, Jonathan and {van der Walt}, St{\'e}fan J. and
            Brett, Matthew and Wilson, Joshua and Millman, K. Jarrod and
            Mayorov, Nikolay and Nelson, Andrew R. J. and Jones, Eric and
            Kern, Robert and Larson, Eric and Carey, C J and
            Polat, {\.I}lhan and Feng, Yu and Moore, Eric W. and
            {VanderPlas}, Jake and Laxalde, Denis and Perktold, Josef and
            Cimrman, Robert and Henriksen, Ian and Quintero, E. A. and
            Harris, Charles R. and Archibald, Anne M. and
            Ribeiro, Ant{\^o}nio H. and Pedregosa, Fabian and
            {van Mulbregt}, Paul and {SciPy 1.0 Contributors}},
  title   = {{{SciPy} 1.0: Fundamental Algorithms for Scientific
            Computing in Python}},
  journal = {Nature Methods},
  year    = {2020},
  volume  = {17},
  pages   = {261--272},
  url     = {https://doi.org/10.1038/s41592-019-0686-2},
  adsurl  = {https://ui.adsabs.harvard.edu/abs/2020NatMe..17..261V},
  doi     = {10.1038/s41592-019-0686-2},
}

# Pandas
@software{The_pandas_development_team_pandas-dev_pandas_Pandas,
author = {{The pandas development team}},
doi = {10.5281/zenodo.3509134},
license = {BSD-3-Clause},
title = {{pandas-dev/pandas: Pandas}},
url = {https://github.com/pandas-dev/pandas}
}

GitHub Events

Total
  • Release event: 17
  • Delete event: 1
  • Push event: 47
  • Pull request event: 11
  • Create event: 17
Last Year
  • Release event: 17
  • Delete event: 1
  • Push event: 47
  • Pull request event: 11
  • Create event: 17

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 121 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 16
  • Total maintainers: 1
pypi.org: i4-0-client-py

Client Python bindings for I4.0.

  • Versions: 16
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 121 Last month
Rankings
Dependent packages count: 9.8%
Stargazers count: 22.6%
Average: 29.9%
Forks count: 32.0%
Dependent repos count: 55.4%
Maintainers (1)
Last synced: 4 months ago

Dependencies

LibI4/Dockerfile docker
  • python 3.9 build
LibI4/Python_AI/Extra/requirements.txt pypi
  • asyncio >=
  • discord.py >=
  • pygame >=
  • speechrecognition >=
  • voicevox-client >=
  • websockets >=
LibI4/Python_AI/OtherPythonApps/requirements.txt pypi
  • keyboard >=
  • numpy >=
  • openai-whisper >=
  • opencv-python >=
  • pyaudio >=
  • pyautogui >=
  • screeninfo >=
  • soundfile >=
  • speechrecognition >=
LibI4/Python_AI/requirements.txt pypi
  • asyncio >=
  • beautifulsoup4 >=
  • datasets >=
  • diffusers >=
  • gpt4all >=
  • keras >=
  • keras-nlp >=
  • mysql-connector-python >=
  • numpy >=
  • openai >=
  • requests >=
  • scipy >=
  • sockets >=
  • speechrecognition >=
  • tensorflow >=
  • tflite >=
  • torch >=
  • torchtext >=
  • torchvision >=
  • transformers >=
  • websockets >=
LibI4/Python_AI/requirements_amdgpu.txt pypi
  • torch >=
  • torchaudio >=
  • torchvision >=
LibI4/Python_AI/requirements_recommended.txt pypi
  • accelerate >=
  • runhouse >=
  • sacremoses >=
  • safetensors >=
  • scipy >=
  • sentencepiece >=