hume-typescript-sdk
Add Hume AI to any TypeScript project
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ○ Academic publication links: not found
- ○ Academic email domains: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low (9.9%)
Keywords
Repository
Add Hume AI to any TypeScript project
Basic Info
- Host: GitHub
- Owner: HumeAI
- License: mit
- Language: TypeScript
- Default Branch: main
- Homepage: https://hume.ai
- Size: 3.45 MB
Statistics
- Stars: 65
- Watchers: 11
- Forks: 16
- Open Issues: 17
- Releases: 108
Topics
Metadata Files
README.md
Documentation
API reference documentation is available here.
Installation
npm i hume
Usage
```typescript
import { HumeClient } from "hume";

const hume = new HumeClient({ apiKey: "YOUR_API_KEY" });

const job = await hume.expressionMeasurement.batch.startInferenceJob({
  models: { face: {} },
  urls: ["https://hume-tutorials.s3.amazonaws.com/faces.zip"],
});

console.log("Running...");

await job.awaitCompletion();

const predictions = await hume.expressionMeasurement.batch.getJobPredictions(job.jobId);

console.log(predictions);
```
Namespaces
This SDK contains the APIs for expression measurement, empathic voice, and custom models. Even if you only plan to use one API to start, the others remain easily accessible should you need them later.
Each API is namespaced accordingly:
```typescript
import { HumeClient } from "hume";

const hume = new HumeClient({ apiKey: "YOUR_API_KEY" });

hume.expressionMeasurement. // APIs specific to Expression Measurement
hume.empathicVoice.         // APIs specific to Empathic Voice
```
Websockets
The SDK supports interacting with both WebSocket and REST APIs.
Request-Reply
The SDK supports a request-reply pattern for the streaming expression measurement API: you send an inference request and await the response.
```typescript
import { HumeClient } from "hume";

const hume = new HumeClient({ apiKey: "YOUR_API_KEY" });

const socket = hume.expressionMeasurement.stream.connect({
  config: { language: {} },
});

for (const sample of samples) {
  const result = await socket.sendText({ text: sample });
  console.log(result);
}
```
Empathic Voice
The SDK supports sending and receiving audio from Empathic Voice.
```typescript
import { HumeClient } from "hume";

const hume = new HumeClient({ apiKey: "<>", secretKey: "<>" });

const socket = hume.empathicVoice.chat.connect();

socket.on("message", (message) => {
  if (message.type === "audio_output") {
    const decoded = Buffer.from(message.data, "base64");
    // play the decoded audio
  }
});

// optional utility to wait for the socket to be open
await socket.tillSocketOpen();

socket.sendUserInput("Hello, how are you?");
```
Errors
When the API returns a non-success status code (4xx or 5xx response), a subclass of HumeError will be thrown:
```typescript
import { HumeError, HumeTimeoutError } from "hume";

try {
  await hume.expressionMeasurement.batch.startInferenceJob(/* ... */);
} catch (err) {
  if (err instanceof HumeTimeoutError) {
    console.log("Request timed out", err);
  } else if (err instanceof HumeError) {
    // catch-all for API errors
    console.log(err.statusCode);
    console.log(err.message);
    console.log(err.body);
  }
}
```
Retries
409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried twice with exponential backoff. You can use the maxRetries option to configure this behavior:
```typescript
await hume.expressionMeasurement.batch.startInferenceJob(..., {
  maxRetries: 0, // disable retries
});
```
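The retry behavior described above can be sketched generically. The helper below is an illustration of the retry-with-exponential-backoff pattern, not the SDK's actual internals; the delay values are assumptions:

```typescript
// Illustrative sketch: retry a promise-returning call up to maxRetries times,
// doubling the delay between attempts (500ms, 1000ms, ...).
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 2,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Out of retries: surface the last error to the caller.
      if (attempt >= maxRetries) throw err;
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

With `maxRetries = 0` the helper throws on the first failure, which mirrors what passing `maxRetries: 0` to the SDK does.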
Timeouts
By default, the SDK has a timeout of 60 seconds. You can use the timeoutInSeconds option to configure this behavior:
```typescript
await hume.expressionMeasurement.batch.startInferenceJob(..., {
  timeoutInSeconds: 10, // timeout after 10 seconds
});
```
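Conceptually, a per-request timeout races the request against a timer. The helper below is an illustrative sketch of that pattern, not the SDK's implementation:

```typescript
// Illustrative sketch: reject if a promise does not settle within `seconds`.
async function withTimeout<T>(promise: Promise<T>, seconds: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`timed out after ${seconds}s`)),
      seconds * 1000,
    );
  });
  try {
    // Whichever settles first wins: the real call or the timer.
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer);
  }
}
```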
Beta Status
This SDK is in beta, and there may be breaking changes between versions without a major version bump. We therefore recommend pinning to a specific version so that each install resolves to the same release, free of breaking changes.
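With npm, for example, you can record an exact version in package.json (the version number below is a placeholder, not a recommendation):

```shell
# --save-exact records the version without a ^ range, so future installs
# resolve to exactly this release. Replace 0.8.4 with your chosen version.
npm install hume@0.8.4 --save-exact
```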
Contributing
While we value open-source contributions to this SDK, this library is generated programmatically. Additions made directly to this library would have to be moved over to our generation code, otherwise they would be overwritten upon the next generated release. Feel free to open a PR as a proof of concept, but know that we will not be able to merge it as-is. We suggest opening an issue first to discuss with us!
On the other hand, contributions to the README are always very welcome!
Owner
- Name: Hume AI
- Login: HumeAI
- Kind: organization
- Email: dev@hume.ai
- Website: https://hume.ai/
- Twitter: hume_ai
- Repositories: 19
- Profile: https://github.com/HumeAI
- Bio: A unified platform for human understanding
Citation (CITATIONS.md)
# Citations
To cite Hume's expressive communication platform, please reference one or more of the papers relevant to your application.
| Publication | Year | Modality | BibTeX |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :--: | :------: | :---------: |
| [Self-report captures 27 distinct categories of emotion bridged by continuous gradients](https://doi.org/10.1073/pnas.1702247114) | 2017 | multi | [Cite](#1) |
| [Mapping the Passions: Toward a High-Dimensional Taxonomy of Emotional Experience and Expression](https://doi.org/10.1177/1529100619850176) | 2019 | multi | [Cite](#2) |
| [The primacy of categories in the recognition of 12 emotions in speech prosody across two cultures](https://doi.org/10.1038/s41562-019-0533-6) | 2019 | voice | [Cite](#3) |
| [Mapping 24 emotions conveyed by brief human vocalization](https://doi.org/10.1037/amp0000399) | 2019 | voice | [Cite](#4) |
| [Emotional expression: Advances in basic emotion theory](https://doi.org/10.1007%2Fs10919-019-00293-3) | 2019 | multi | [Cite](#5) |
| [What the face displays: Mapping 28 emotions conveyed by naturalistic expression](https://doi.org/10.1037/amp0000488) | 2020 | face | [Cite](#6) |
| [The neural representation of visually evoked emotion is high-dimensional, categorical, and distributed across transmodal brain regions](https://doi.org/10.1016/j.isci.2020.101060) | 2020 | multi | [Cite](#7) |
| [What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures](https://doi.org/10.1073/pnas.1910704117) | 2020 | music | [Cite](#8) |
| [GoEmotions: A Dataset of Fine-Grained Emotions](https://doi.org/10.18653/v1/2020.acl-main.372) | 2020 | text | [Cite](#9) |
| [Universal facial expressions uncovered in art of the ancient Americas: A computational approach](https://doi.org/10.1126/sciadv.abb1005) | 2020 | face | [Cite](#10) |
| [Sixteen facial expressions occur in similar contexts worldwide](https://doi.org/10.1038/s41586-020-3037-7) | 2021 | face | [Cite](#11) |
| [The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional Reactions, and Stress](https://doi.org/10.48550/arXiv.2207.05691) | 2022 | multi | [Cite](#12) |
| [The ACII 2022 Affective Vocal Bursts Workshop & Competition: Understanding a critically understudied modality of emotional expression](https://doi.org/10.48550/arXiv.2207.03572) | 2022 | voice | [Cite](#13) |
| [The ICML 2022 Expressive Vocalizations Workshop and Competition: Recognizing, Generating, and Personalizing Vocal Bursts](https://doi.org/10.48550/arXiv.2205.01780) | 2022 | voice | [Cite](#14) |
| [Intersectionality in emotion signaling and recognition: The influence of gender, ethnicity, and social class](https://doi.org/10.1037/emo0001082) | 2022 | body | [Cite](#15) |
| [How emotions, relationships, and culture constitute each other: advances in social functionalist theory](https://doi.org/10.1080/02699931.2022.2047009) | 2022 | multi | [Cite](#16) |
| [State & Trait Measurement from Nonverbal Vocalizations: A Multi-Task Joint Learning Approach](https://doi.org/10.21437/Interspeech.2022-10927) | 2022 | voice | [Cite](#17) |
## BibTeX
### <a id="1"></a>
```bibtex
@article{cowen2017self,
title={Self-report captures 27 distinct categories of emotion bridged by continuous gradients},
author={Cowen, Alan S and Keltner, Dacher},
journal={Proceedings of the national academy of sciences},
volume={114},
number={38},
pages={E7900--E7909},
year={2017},
publisher={National Acad Sciences}
}
```
### <a id="2"></a>
```bibtex
@article{cowen2019mapping,
title={Mapping the passions: Toward a high-dimensional taxonomy of emotional experience and expression},
author={Cowen, Alan and Sauter, Disa and Tracy, Jessica L and Keltner, Dacher},
journal={Psychological Science in the Public Interest},
volume={20},
number={1},
pages={69--90},
year={2019},
publisher={Sage Publications Sage CA: Los Angeles, CA}
}
```
### <a id="3"></a>
```bibtex
@article{cowen2019primacy,
title={The primacy of categories in the recognition of 12 emotions in speech prosody across two cultures},
author={Cowen, Alan S and Laukka, Petri and Elfenbein, Hillary Anger and Liu, Runjing and Keltner, Dacher},
journal={Nature human behaviour},
volume={3},
number={4},
pages={369--382},
year={2019},
publisher={Nature Publishing Group}
}
```
### <a id="4"></a>
```bibtex
@article{cowen2019mapping24,
title={Mapping 24 emotions conveyed by brief human vocalization.},
author={Cowen, Alan S and Elfenbein, Hillary Anger and Laukka, Petri and Keltner, Dacher},
journal={American Psychologist},
volume={74},
number={6},
pages={698},
year={2019},
publisher={American Psychological Association}
}
```
### <a id="5"></a>
```bibtex
@article{keltner2019emotional,
title={Emotional expression: Advances in basic emotion theory},
author={Keltner, Dacher and Sauter, Disa and Tracy, Jessica and Cowen, Alan},
journal={Journal of nonverbal behavior},
volume={43},
number={2},
pages={133--160},
year={2019},
publisher={Springer}
}
```
### <a id="6"></a>
```bibtex
@article{cowen2020face,
title={What the face displays: Mapping 28 emotions conveyed by naturalistic expression.},
author={Cowen, Alan S and Keltner, Dacher},
journal={American Psychologist},
volume={75},
number={3},
pages={349},
year={2020},
publisher={American Psychological Association}
}
```
### <a id="7"></a>
```bibtex
@article{horikawa2020neural,
title={The neural representation of visually evoked emotion is high-dimensional, categorical, and distributed across transmodal brain regions},
author={Horikawa, Tomoyasu and Cowen, Alan S and Keltner, Dacher and Kamitani, Yukiyasu},
journal={Iscience},
volume={23},
number={5},
pages={101060},
year={2020},
publisher={Elsevier}
}
```
### <a id="8"></a>
```bibtex
@article{cowen2020music,
title={What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures},
author={Cowen, Alan S and Fang, Xia and Sauter, Disa and Keltner, Dacher},
journal={Proceedings of the National Academy of Sciences},
volume={117},
number={4},
pages={1924--1934},
year={2020},
publisher={National Acad Sciences}
}
```
### <a id="9"></a>
```bibtex
@article{demszky2020goemotions,
title={GoEmotions: A dataset of fine-grained emotions},
author={Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},
journal={arXiv preprint arXiv:2005.00547},
year={2020}
}
```
### <a id="10"></a>
```bibtex
@article{cowen2020universal,
title={Universal facial expressions uncovered in art of the ancient Americas: A computational approach},
author={Cowen, Alan S and Keltner, Dacher},
journal={Science advances},
volume={6},
number={34},
pages={eabb1005},
year={2020},
publisher={American Association for the Advancement of Science}
}
```
### <a id="11"></a>
```bibtex
@article{cowen2021sixteen,
title={Sixteen facial expressions occur in similar contexts worldwide},
author={Cowen, Alan S and Keltner, Dacher and Schroff, Florian and Jou, Brendan and Adam, Hartwig and Prasad, Gautam},
journal={Nature},
volume={589},
number={7841},
pages={251--257},
year={2021},
publisher={Nature Publishing Group}
}
```
### <a id="12"></a>
```bibtex
@article{christ2022muse,
title={The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional Reactions, and Stress},
author={Christ, Lukas and Amiriparian, Shahin and Baird, Alice and Tzirakis, Panagiotis and Kathan, Alexander and M{\"u}ller, Niklas and Stappen, Lukas and Me{\ss}ner, Eva-Maria and K{\"o}nig, Andreas and Cowen, Alan and others},
year={2022}
}
```
### <a id="13"></a>
```bibtex
@article{baird2022acii,
title={The ACII 2022 Affective Vocal Bursts Workshop \& Competition: Understanding a critically understudied modality of emotional expression},
author={Baird, Alice and Tzirakis, Panagiotis and Brooks, Jeffrey A and Gregory, Christopher B and Schuller, Bj{\"o}rn and Batliner, Anton and Keltner, Dacher and Cowen, Alan},
journal={arXiv preprint arXiv:2207.03572},
year={2022}
}
```
### <a id="14"></a>
```bibtex
@article{baird2022icml,
title={The ICML 2022 Expressive Vocalizations Workshop and Competition: Recognizing, Generating, and Personalizing Vocal Bursts},
author={Baird, Alice and Tzirakis, Panagiotis and Gidel, Gauthier and Jiralerspong, Marco and Muller, Eilif B and Mathewson, Kory and Schuller, Bj{\"o}rn and Cambria, Erik and Keltner, Dacher and Cowen, Alan},
journal={arXiv preprint arXiv:2205.01780},
year={2022}
}
```
### <a id="15"></a>
```bibtex
@article{monroy2022intersectionality,
title={Intersectionality in emotion signaling and recognition: The influence of gender, ethnicity, and social class.},
author={Monroy, Maria and Cowen, Alan S and Keltner, Dacher},
journal={Emotion},
year={2022},
publisher={American Psychological Association}
}
```
### <a id="16"></a>
```bibtex
@article{keltner2022emotions,
title={How emotions, relationships, and culture constitute each other: advances in social functionalist theory},
author={Keltner, Dacher and Sauter, Disa and Tracy, Jessica L and Wetchler, Everett and Cowen, Alan S},
journal={Cognition and Emotion},
volume={36},
number={3},
pages={388--401},
year={2022},
publisher={Taylor \& Francis}
}
```
### <a id="17"></a>
```bibtex
@inproceedings{baird22_interspeech,
author={Alice Baird and Panagiotis Tzirakis and Jeff Brooks and Lauren Kim and Michael Opara and Chris Gregory and Jacob Metrick and Garrett Boseck and Dacher Keltner and Alan Cowen},
  title={{State \& Trait Measurement from Nonverbal Vocalizations: A Multi-Task Joint Learning Approach}},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={2028--2032},
doi={10.21437/Interspeech.2022-10927}
}
```
GitHub Events
Total
- Create event: 268
- Issues event: 11
- Release event: 30
- Watch event: 40
- Delete event: 218
- Issue comment event: 109
- Push event: 181
- Pull request review comment event: 12
- Pull request review event: 72
- Pull request event: 479
- Fork event: 12
Last Year
- Create event: 268
- Issues event: 11
- Release event: 30
- Watch event: 40
- Delete event: 218
- Issue comment event: 109
- Push event: 181
- Pull request review comment event: 12
- Pull request review event: 72
- Pull request event: 479
- Fork event: 12
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 9
- Total pull requests: 435
- Average time to close issues: 8 days
- Average time to close pull requests: 6 days
- Total issue authors: 7
- Total pull request authors: 14
- Average comments per issue: 0.67
- Average comments per pull request: 0.24
- Merged pull requests: 100
- Bot issues: 4
- Bot pull requests: 362
Past Year
- Issues: 8
- Pull requests: 299
- Average time to close issues: 8 days
- Average time to close pull requests: 7 days
- Issue authors: 7
- Pull request authors: 11
- Average comments per issue: 0.75
- Average comments per pull request: 0.28
- Merged pull requests: 47
- Bot issues: 3
- Bot pull requests: 268
Top Authors
Issue Authors
- dependabot[bot] (2)
- fern-api[bot] (2)
- bitnom (1)
- chikingsley (1)
- TravisBumgarner (1)
- kashifnazeer62 (1)
- twitchard (1)
Pull Request Authors
- fern-api[bot] (185)
- dependabot[bot] (177)
- twitchard (16)
- dsinghvi (15)
- zgreathouse (12)
- fern-bot (12)
- zachkrall (9)
- gregorybchris (3)
- iankelk (1)
- yinishi (1)
- francamps (1)
- chikingsley (1)
- ivaaan (1)
- armandobelardo (1)