gensim

Topic Modelling for Humans

https://github.com/piskvorky/gensim

Science Score: 77.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: scholar.google, zenodo.org
  • Committers with academic emails
    26 of 453 committers (5.7%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (17.7%) to scientific vocabulary

Keywords

data-mining data-science document-similarity fasttext gensim information-retrieval machine-learning natural-language-processing neural-network nlp python topic-modeling word-embeddings word-similarity word2vec

Keywords from Contributors

closember regression-models hypothesis-testing prediction count-model econometrics timeseries-analysis robust-estimation generalized-linear-models jax

Scientific Fields

Mathematics Computer Science - 38% confidence
Last synced: 4 months ago

Repository

Topic Modelling for Humans

Basic Info
Statistics
  • Stars: 16,150
  • Watchers: 423
  • Forks: 4,406
  • Open Issues: 429
  • Releases: 43
Topics
data-mining data-science document-similarity fasttext gensim information-retrieval machine-learning natural-language-processing neural-network nlp python topic-modeling word-embeddings word-similarity word2vec
Created almost 15 years ago · Last pushed 6 months ago
Metadata Files
Readme Changelog Contributing Funding License Citation Security

README.md

gensim – Topic Modelling in Python


Gensim is a Python library for topic modelling, document indexing and similarity retrieval with large corpora. Target audience is the natural language processing (NLP) and information retrieval (IR) community.

⚠️ Want to help out? Sponsor Gensim ❤️

⚠️ Gensim is in stable maintenance mode: we are not accepting new features, but bug and documentation fixes are still welcome! ⚠️

Features

  • All algorithms are memory-independent w.r.t. the corpus size (can process input larger than RAM, streamed, out-of-core),
  • Intuitive interfaces
    • easy to plug in your own input corpus/datastream (trivial streaming API)
    • easy to extend with other Vector Space algorithms (trivial transformation API)
  • Efficient multicore implementations of popular algorithms, such as online Latent Semantic Analysis (LSA/LSI/SVD), Latent Dirichlet Allocation (LDA), Random Projections (RP), Hierarchical Dirichlet Process (HDP) or word2vec deep learning.
  • Distributed computing: can run Latent Semantic Analysis and Latent Dirichlet Allocation on a cluster of computers.
  • Extensive documentation and Jupyter Notebook tutorials.

If this feature list left you scratching your head, you can first read more about the Vector Space Model and unsupervised document analysis on Wikipedia.
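
As a quick illustration of the streaming-corpus and transformation APIs mentioned above, here is a minimal sketch of an end-to-end LDA pipeline (the toy documents and the `num_topics` value are made up for the example):

```python
from gensim import corpora, models, similarities

# Toy corpus: in practice each document would be streamed from disk.
documents = [
    ["human", "interface", "computer"],
    ["survey", "user", "computer", "system", "response", "time"],
    ["graph", "minors", "trees", "survey"],
]

# Map each token to an integer id.
dictionary = corpora.Dictionary(documents)

# Represent each document as a sparse bag-of-words vector.
corpus = [dictionary.doc2bow(doc) for doc in documents]

# Train an online LDA model (num_topics chosen arbitrarily for the demo).
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2)

# Index the corpus in LDA topic space and run a similarity query.
index = similarities.MatrixSimilarity(lda[corpus])
query = dictionary.doc2bow(["computer", "survey"])
print(list(index[lda[query]]))  # similarity of the query to each indexed document
```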

Installation

This software depends on NumPy, a Python package for scientific computing. Please bear in mind that building NumPy from source (e.g. by installing gensim on a platform which lacks a NumPy .whl distribution) is a non-trivial task involving linking NumPy to a BLAS library.
It is recommended to provide a fast one (such as MKL, ATLAS or OpenBLAS), which can improve performance by as much as an order of magnitude. On OSX, NumPy picks up its vecLib BLAS automatically, so you don’t need to do anything special.
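
If you are unsure which BLAS your NumPy build ended up linked against, a quick way to check is the standard `numpy.show_config()` call (the exact output format varies between NumPy versions):

```python
import numpy

# Prints the BLAS/LAPACK libraries NumPy was built against,
# e.g. OpenBLAS, MKL or Accelerate/vecLib.
numpy.show_config()
```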

Install the latest version of gensim:

```bash
pip install --upgrade gensim
```

Or, if you have instead downloaded and unzipped the source tar.gz package:

```bash
tar -xvzf gensim-X.X.X.tar.gz
cd gensim-X.X.X/
pip install .
```

For alternative modes of installation, see the documentation.

Gensim is being continuously tested under all supported Python versions. Support for Python 2.7 was dropped in gensim 4.0.0 – install gensim 3.8.3 if you must use Python 2.7.

How come gensim is so fast and memory efficient? Isn’t it pure Python, and isn’t Python slow and greedy?

Many scientific algorithms can be expressed in terms of large matrix operations (see the BLAS note above). Gensim taps into these low-level BLAS libraries, by means of its dependency on NumPy. So while gensim-the-top-level-code is pure Python, it actually executes highly optimized Fortran/C under the hood, including multithreading (if your BLAS is so configured).

Memory-wise, gensim makes heavy use of Python’s built-in generators and iterators for streamed data processing. Memory efficiency was one of gensim’s design goals, and is a central feature of gensim, rather than something bolted on as an afterthought.
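
To make that concrete: a gensim corpus is simply any iterable that yields one bag-of-words vector at a time, so the full dataset never has to fit in RAM. A minimal sketch, assuming a plain-text file `my_corpus.txt` with one document per line (both the file name and the whitespace tokenisation are illustrative):

```python
from gensim import corpora

class StreamedCorpus:
    """Yield one document at a time instead of loading the whole corpus into RAM."""

    def __init__(self, path, dictionary):
        self.path = path
        self.dictionary = dictionary

    def __iter__(self):
        with open(self.path, encoding="utf-8") as f:
            for line in f:
                # One document per line, whitespace-tokenised for simplicity.
                yield self.dictionary.doc2bow(line.lower().split())

# Build the dictionary with the same streaming pattern (a separate pass over the file).
dictionary = corpora.Dictionary(
    line.lower().split() for line in open("my_corpus.txt", encoding="utf-8")
)

# The streamed corpus can now be fed to any gensim model, e.g. models.LdaModel.
corpus = StreamedCorpus("my_corpus.txt", dictionary)
```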

Documentation

Support

For commercial support, please see Gensim sponsorship.

Ask open-ended questions on the public Gensim Mailing List.

Raise bugs on GitHub, but please make sure you follow the issue template. Issues that are not bugs or fail to provide the requested details will be closed without inspection.


Adopters

| Company | Logo | Industry | Use of Gensim |
|---------|------|----------|---------------|
| RARE Technologies | rare | ML & NLP consulting | Creators of Gensim – this is us! |
| Amazon | amazon | Retail | Document similarity. |
| National Institutes of Health | nih | Health | Processing grants and publications with word2vec. |
| Cisco Security | cisco | Security | Large-scale fraud detection. |
| Mindseye | mindseye | Legal | Similarities in legal documents. |
| Channel 4 | channel4 | Media | Recommendation engine. |
| Talentpair | talent-pair | HR | Candidate matching in high-touch recruiting. |
| Juju | juju | HR | Provide non-obvious related job suggestions. |
| Tailwind | tailwind | Media | Post interesting and relevant content to Pinterest. |
| Issuu | issuu | Media | Gensim's LDA module lies at the very core of the analysis we perform on each uploaded publication to figure out what it's all about. |
| Search Metrics | search-metrics | Content Marketing | Gensim word2vec used for entity disambiguation in Search Engine Optimisation. |
| 12K Research | 12k | Media | Document similarity analysis on media articles. |
| Stillwater Supercomputing | stillwater | Hardware | Document comprehension and association with word2vec. |
| SiteGround | siteground | Web hosting | An ensemble search engine which uses different embeddings models and similarities, including word2vec, WMD, and LDA. |
| Capital One | capitalone | Finance | Topic modeling for customer complaints exploration. |


Citing gensim

When citing gensim in academic papers and theses, please use this BibTeX entry:

@inproceedings{rehurek_lrec,
      title = {{Software Framework for Topic Modelling with Large Corpora}},
      author = {Radim {\v R}eh{\r u}{\v r}ek and Petr Sojka},
      booktitle = {{Proceedings of the LREC 2010 Workshop on New
           Challenges for NLP Frameworks}},
      pages = {45--50},
      year = 2010,
      month = May,
      day = 22,
      publisher = {ELRA},
      address = {Valletta, Malta},
      note={\url{http://is.muni.cz/publication/884893/en}},
      language={English}
}

Owner

  • Name: Radim Řehůřek
  • Login: piskvorky
  • Kind: user
  • Location: Prague
  • Company: @pii-tools

Creator of Gensim. Founder and CTO at pii-tools.com. I love history and beginnings in general.

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Řehůřek"
  given-names: "Radim"
title: "Gensim: Topic modelling for humans"
version: 4.1.0
url: "https://github.com/RaRe-Technologies/gensim"
preferred-citation:
  type: conference-paper
  authors:
  - family-names: "Řehůřek"
    given-names: "Radim"
  - family-names: "Sojka"
    given-names: "Petr"
  publisher:
    name: "University of Malta"
  date-published: "2010-05-22"
  year: 2010
  month: 5
  start: 45 # First page number
  end: 50 # Last page number
  pages: 5
  title: "Software Framework for Topic Modelling with Large Corpora"
  languages: ["eng"]
  url: "http://is.muni.cz/publication/884893/en"
  conference:
    name: "Proceedings of LREC 2010 workshop New Challenges for NLP Frameworks"
    city: Valletta
    country: MT
    location: "University of Malta, Valletta, Malta"

GitHub Events

Total
  • Issues event: 25
  • Watch event: 536
  • Member event: 1
  • Issue comment event: 88
  • Push event: 5
  • Pull request review event: 3
  • Pull request review comment event: 4
  • Pull request event: 16
  • Fork event: 62
  • Create event: 1
Last Year
  • Issues event: 25
  • Watch event: 536
  • Member event: 1
  • Issue comment event: 88
  • Push event: 5
  • Pull request review event: 3
  • Pull request review comment event: 4
  • Pull request event: 16
  • Fork event: 62
  • Create event: 1

Committers

Last synced: 8 months ago

All Time
  • Total Commits: 3,854
  • Total Committers: 453
  • Avg Commits per committer: 8.508
  • Development Distribution Score (DDS): 0.697
Past Year
  • Commits: 14
  • Committers: 6
  • Avg Commits per committer: 2.333
  • Development Distribution Score (DDS): 0.429
Top Committers
Name Email Commits
Radim Řehůřek r****k@s****z 1,168
piskvorky p****y@9****5 259
Michael Penkov m@p****v 231
Lev Konstantinovskiy l****t@g****m 226
Gordon Mohr g****t@g****m 162
Chinmaya Pancholi c****3@g****m 160
Menshikh Ivan m****v@g****m 141
Christopher Corley c****y@u****u 51
Stefan Otte s****e@g****m 40
Jayant Jain j****2@g****m 39
Vít Novotný w****o@m****z 39
sebastien-j s****n@m****a 37
Parul Sethi p****i@g****m 36
Sweeney, Mack m****y@c****m 33
Federico Barrios f****s@f****r 32
Matti Lyra m****a@s****k 31
Bhargav Srinivasa b****r@g****m 30
horpto _****_@h****u 30
Jan Zikeš z****0@g****m 29
Ólavur Mortensen o****n@g****m 28
mataddy m****y@o****m 27
Mohit Rathore m****r@g****m 27
Paul Wise p****3@b****t 27
Devashish Deshpande a****2@g****m 19
Lars Buitinck l****k@e****l 18
Prakhar Pratyush e****b@g****m 17
Dave Challis s****s@g****m 15
Stephan Gabler s****r@g****m 14
Tim Emerick t****k@g****m 13
Keiran Thompson t****n@g****m 13
and 423 more...

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 85
  • Total pull requests: 63
  • Average time to close issues: 3 months
  • Average time to close pull requests: 3 months
  • Total issue authors: 70
  • Total pull request authors: 31
  • Average comments per issue: 3.85
  • Average comments per pull request: 2.4
  • Merged pull requests: 29
  • Bot issues: 0
  • Bot pull requests: 20
Past Year
  • Issues: 17
  • Pull requests: 13
  • Average time to close issues: 6 days
  • Average time to close pull requests: about 17 hours
  • Issue authors: 16
  • Pull request authors: 8
  • Average comments per issue: 0.65
  • Average comments per pull request: 0.85
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 1
Top Authors
Issue Authors
  • gojomo (6)
  • muhputre066 (4)
  • mpenkov (3)
  • yurivict (3)
  • filip-komarzyniec (2)
  • 1641004802 (2)
  • jchwenger (2)
  • Buhpufilma (2)
  • DavidNemeskey (2)
  • ppdk-data (2)
  • mspezio (2)
  • BeenKim (1)
  • bhomass (1)
  • gitlGl (1)
  • juhoinkinen (1)
Pull Request Authors
  • dependabot[bot] (26)
  • julianpollmann (7)
  • mpenkov (6)
  • Crosswind (4)
  • YoungMind1 (4)
  • fabriciorsf (4)
  • pabs3 (2)
  • jicruz96 (2)
  • filip-komarzyniec (2)
  • hechth (2)
  • gojomo (2)
  • r4plh (2)
  • chihebIA (2)
  • wittyicon29 (2)
  • Hoasker (2)
Top Labels
Issue Labels
bug (6) difficulty easy (4) awaiting reply (4) good first issue (3) difficulty medium (2) fasttext (2) housekeeping (2) documentation (2) impact MEDIUM (2) impact HIGH (2) reach HIGH (1) difficulty hard (1) feature (1) dependencies (1) security (1) reach MEDIUM (1) testing (1) performance (1) reach LOW (1) Hacktoberfest (1)
Pull Request Labels
dependencies (27) documentation (7) stale (2) bug (1) housekeeping (1)

Dependencies

requirements_docs.txt pypi
  • Pyro4 ==4.77
  • Sphinx ==3.5.2
  • annoy ==1.16.2
  • memory-profiler ==0.55.0
  • nltk ==3.4.5
  • nmslib ==2.1.1
  • pandas ==1.2.3
  • pyemd ==0.5.1
  • scikit-learn ==0.24.1
  • sphinx-gallery ==0.8.2
  • sphinxcontrib-napoleon ==0.7
  • sphinxcontrib-programoutput ==0.15
  • statsmodels ==0.12.2
  • testfixtures ==6.17.1
setup.py pypi
  • numpy (version constraint defined by the NUMPY_STR variable in setup.py)
  • scipy *
  • smart_open *