https://github.com/asanchezyali/ai-avatar

Zippy Talking Avatar uses Azure Cognitive Services and OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. This avatar responds to user input by generating both text and speech, offering a dynamic and immersive user experience.


Science Score: 13.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.7%) to scientific vocabulary

Keywords

ai ai-avatars azure-cognitive-services digital-human gpt langchain lip-sync llm openai talking-head talking-heads tts-api visemes
Last synced: 6 months ago

Repository

Zippy Talking Avatar uses Azure Cognitive Services and OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. This avatar responds to user input by generating both text and speech, offering a dynamic and immersive user experience.

Basic Info
  • Host: GitHub
  • Owner: asanchezyali
  • License: gpl-3.0
  • Language: TypeScript
  • Default Branch: main
  • Homepage:
  • Size: 1.75 MB
Statistics
  • Stars: 11
  • Watchers: 1
  • Forks: 3
  • Open Issues: 0
  • Releases: 0
Topics
ai ai-avatars azure-cognitive-services digital-human gpt langchain lip-sync llm openai talking-head talking-heads tts-api visemes
Created about 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme License

README.md

https://github.com/asanchezyali/zippy-ai-bot/assets/29262782/933ce0c3-434b-45f8-8c27-6a8669da0407

Zippy Talking Avatar with Azure Cognitive and Langchain

Zippy Talking Avatar uses Azure Cognitive Services and OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. This avatar responds to user input by generating both text and speech, offering a dynamic and immersive user experience. You can learn more about Zippy Talking Avatar by visiting the Kraken the Code: How to Build a Talking Avatar website.

I have made a Discord channel, Math & Code, available for resolving questions about this project's configuration and development.

How it works

Zippy seamlessly blends the power of multiple AI technologies to create a natural and engaging conversational experience:

  1. Text Input: Start the conversation by typing your message in the provided text box.
  2. OpenAI API Response Generation: Your text is forwarded to the OpenAI API, which crafts a coherent and meaningful response.
  3. Speech Synthesis: Azure Cognitive Services' text-to-speech capabilities transform the OpenAI API's response into natural-sounding audio.
  4. Viseme Generation: Azure creates accurate visemes (visual representations of speech sounds) to match the audio.
  5. Synchronized Delivery: The generated audio and visemes are delivered to Zippy, bringing the avatar to life with synchronized lip movements and spoken words.
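The synchronization in step 5 can be sketched in TypeScript. This is an illustrative example, not the project's actual code: Azure's Speech SDK reports each viseme with an audio offset in 100-nanosecond ticks, and during playback the avatar shows the latest viseme whose offset has passed. The `VisemeEvent` shape and `visemeAt` function are assumptions for this sketch.

```typescript
// Illustrative viseme scheduling: given the viseme events Azure emits during
// synthesis, pick the viseme to display at a given playback position.

interface VisemeEvent {
  visemeId: number;         // Azure viseme ID (0–21; 0 = silence/neutral)
  audioOffsetTicks: number; // offset into the audio, in 100 ns ticks
}

const TICKS_PER_MS = 10_000; // 100 ns per tick → 10,000 ticks per millisecond

function visemeAt(events: VisemeEvent[], playbackMs: number): number {
  // Events are assumed sorted by audioOffsetTicks (the SDK emits them in order).
  let current = 0;
  for (const e of events) {
    if (e.audioOffsetTicks / TICKS_PER_MS <= playbackMs) {
      current = e.visemeId;
    } else {
      break; // all later events are still in the future
    }
  }
  return current;
}

// Example timeline: neutral at 0 ms, viseme 19 at 120 ms, viseme 7 at 300 ms.
const events: VisemeEvent[] = [
  { visemeId: 0, audioOffsetTicks: 0 },
  { visemeId: 19, audioOffsetTicks: 120 * TICKS_PER_MS },
  { visemeId: 7, audioOffsetTicks: 300 * TICKS_PER_MS },
];

console.log(visemeAt(events, 50));  // 0
console.log(visemeAt(events, 150)); // 19
console.log(visemeAt(events, 400)); // 7
```

Calling a function like this on each animation frame, with the audio element's current time, keeps the mouth shape locked to the speech.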

Getting Started

Prerequisites

  1. Azure subscription - Create a free account.
  2. Create a Speech resource in the Azure portal.
  3. Your Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys. For more information about Azure AI services resources, see Get the keys for your resource.
  4. OpenAI subscription - Create one.
  5. Create a new secret key in the OpenAI portal.
  6. Node.js and npm (or yarn)

Installation

  1. Clone this repository

```bash
git clone git@github.com:Monadical-SAS/zippy-avatar-ai.git
```

  2. Navigate to the project directory

```bash
cd zippy-avatar-ai
```

  3. Install dependencies:

```bash
npm install # or yarn install
```

  4. Create a `.env.development` file in the root directory of the project and add the following environment variables:

```bash
# AZURE
NEXT_PUBLIC_SPEECH_KEY=<YOUR_AZURE_SPEECH_KEY>
NEXT_PUBLIC_SPEECH_REGION=<YOUR_AZURE_SPEECH_REGION>

# OPENAI
NEXT_PUBLIC_OPENAI_API_KEY=
```
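Next.js inlines environment variables carrying the `NEXT_PUBLIC_` prefix into the client bundle at build time, which is why these names are safe to read in browser code. A minimal sketch of how they might be consumed (the fallback and warning logic here are assumptions, not the project's actual code):

```typescript
// Sketch: reading the NEXT_PUBLIC_ variables defined in .env.development.
// Next.js replaces these process.env references with literal values at build
// time, so they are available in client-side code.
const speechKey = process.env.NEXT_PUBLIC_SPEECH_KEY ?? "";
const speechRegion = process.env.NEXT_PUBLIC_SPEECH_REGION ?? "";

if (!speechKey || !speechRegion) {
  // Fail loudly during development if the .env.development file is missing.
  console.warn("Azure Speech credentials are not configured.");
}
```

Note that anything prefixed `NEXT_PUBLIC_` ships to the browser, so these keys are visible to end users; for production, routing requests through a server-side API route would keep the secrets private.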

  5. Run the development server:

```bash
npm run dev
# or
yarn dev
```

Open http://localhost:3000 with your browser to see the result.

Additional information

Learn More

To learn more about Next.js, check out the Next.js GitHub repository - your feedback and contributions are welcome!

Deploy on Vercel

The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.

Check out our Next.js deployment documentation for more details.

Owner

  • Name: Alejandro Sánchez Yalí
  • Login: asanchezyali
  • Kind: user
  • Company: Monadical

Mathematician with experience in Software Development, Data Science and Blockchain

GitHub Events

Total
  • Watch event: 11
  • Fork event: 3
Last Year
  • Watch event: 11
  • Fork event: 3

Dependencies

poetry.lock pypi
  • azure-cognitiveservices-speech 1.34.0
pyproject.toml pypi
  • azure-cognitiveservices-speech ^1.32.1
  • python ^3.11
package-lock.json npm
  • 346 dependencies
package.json npm
  • @types/node ^20 development
  • @types/react ^18 development
  • @types/react-dom ^18 development
  • autoprefixer ^10.0.1 development
  • eslint ^8 development
  • eslint-config-next 14.0.3 development
  • postcss ^8 development
  • tailwindcss ^3.3.0 development
  • typescript ^5 development
  • microsoft-cognitiveservices-speech-sdk ^1.33.1
  • next 14.0.3
  • react ^18
  • react-dom ^18