Science Score: 41.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: scholar.google
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.8%) to scientific vocabulary

Keywords

bm25 flashtext information-retrieval machine-learning natural-language-processing neural-networks neural-search nlp question-answering reader retrieval search searching semantic-search vector-search
Last synced: 6 months ago

Repository

Neural Search

Basic Info
  • Host: GitHub
  • Owner: raphaelsty
  • License: mit
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 41.6 MB
Statistics
  • Stars: 328
  • Watchers: 8
  • Forks: 15
  • Open Issues: 4
  • Releases: 23
Topics
bm25 flashtext information-retrieval machine-learning natural-language-processing neural-networks neural-search nlp question-answering reader retrieval search searching semantic-search vector-search
Created about 4 years ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.md

Cherche

Neural search

Documentation · Demo · License

Cherche enables the development of neural search pipelines that use retrievers and pre-trained language models both as retrievers and as rankers. The primary advantage of Cherche lies in its capacity to construct end-to-end pipelines. Cherche is also well suited to offline semantic search thanks to its compatibility with batch computation.

Here are some of the features Cherche offers:

Live demo of an NLP search engine powered by Cherche


Installation 🤖

To install Cherche for use with a simple retriever on CPU, such as TfIdf, Flash, Lunr, Fuzz, use the following command:

```sh
pip install cherche
```

To install Cherche for use with any semantic retriever or ranker on CPU, use the following command:

```sh
pip install "cherche[cpu]"
```

Finally, if you plan to use any semantic retriever or ranker on GPU, use the following command:

```sh
pip install "cherche[gpu]"
```

By following these installation instructions, you will be able to use Cherche with the appropriate requirements for your needs.

Documentation

Documentation is available here. It provides details about retrievers, rankers, pipelines and examples.

QuickStart 📑

Documents

Cherche helps find the right documents within a list of objects. Here is an example of a corpus.

```python
from cherche import data

documents = data.load_towns()

documents[:3]
[{'id': 0,
  'title': 'Paris',
  'url': 'https://en.wikipedia.org/wiki/Paris',
  'article': 'Paris is the capital and most populous city of France.'},
 {'id': 1,
  'title': 'Paris',
  'url': 'https://en.wikipedia.org/wiki/Paris',
  'article': "Since the 17th century, Paris has been one of Europe's major centres of science, and arts."},
 {'id': 2,
  'title': 'Paris',
  'url': 'https://en.wikipedia.org/wiki/Paris',
  'article': 'The City of Paris is the centre and seat of government of the region and province of Île-de-France.'}]
```

Retriever ranker

Here is an example of a neural search pipeline composed of a TF-IDF that quickly retrieves documents, followed by a ranking model. The ranking model sorts the documents produced by the retriever based on the semantic similarity between the query and the documents. We can call the pipeline using a list of queries and get relevant documents for each query.

```python
from cherche import data, retrieve, rank
from sentence_transformers import SentenceTransformer
from lenlp import sparse

# List of dicts
documents = data.load_towns()

# Retrieve on fields title and article
retriever = retrieve.BM25(
    key="id",
    on=["title", "article"],
    documents=documents,
    k=30,
)

# Rank on fields title and article
ranker = rank.Encoder(
    key="id",
    on=["title", "article"],
    encoder=SentenceTransformer("sentence-transformers/all-mpnet-base-v2").encode,
    k=3,
)

# Pipeline creation
search = retriever + ranker

search.add(documents=documents)

# Search documents for 3 queries.
search(["Bordeaux", "Paris", "Toulouse"])
[[{'id': 57, 'similarity': 0.69513524},
  {'id': 63, 'similarity': 0.6214994},
  {'id': 65, 'similarity': 0.61809087}],
 [{'id': 16, 'similarity': 0.59158516},
  {'id': 0, 'similarity': 0.58217555},
  {'id': 1, 'similarity': 0.57944715}],
 [{'id': 26, 'similarity': 0.6925601},
  {'id': 37, 'similarity': 0.63977146},
  {'id': 28, 'similarity': 0.62772334}]]
```

We can map ids back to the documents to access their content by adding the documents to the pipeline:

```python
search += documents

search(["Bordeaux", "Paris", "Toulouse"])
[[{'id': 57, 'title': 'Bordeaux', 'url': 'https://en.wikipedia.org/wiki/Bordeaux', 'similarity': 0.69513524},
  {'id': 63, 'title': 'Bordeaux', 'similarity': 0.6214994},
  {'id': 65, 'title': 'Bordeaux', 'url': 'https://en.wikipedia.org/wiki/Bordeaux', 'similarity': 0.61809087}],
 [{'id': 16, 'title': 'Paris', 'url': 'https://en.wikipedia.org/wiki/Paris', 'article': 'Paris received 12.', 'similarity': 0.59158516},
  {'id': 0, 'title': 'Paris', 'url': 'https://en.wikipedia.org/wiki/Paris', 'similarity': 0.58217555},
  {'id': 1, 'title': 'Paris', 'url': 'https://en.wikipedia.org/wiki/Paris', 'similarity': 0.57944715}],
 [{'id': 26, 'title': 'Toulouse', 'url': 'https://en.wikipedia.org/wiki/Toulouse', 'similarity': 0.6925601},
  {'id': 37, 'title': 'Toulouse', 'url': 'https://en.wikipedia.org/wiki/Toulouse', 'similarity': 0.63977146},
  {'id': 28, 'title': 'Toulouse', 'url': 'https://en.wikipedia.org/wiki/Toulouse', 'similarity': 0.62772334}]]
```

Retrieve

Cherche provides retrievers that filter input documents based on a query; a short usage sketch follows the list below.

  • retrieve.TfIdf
  • retrieve.BM25
  • retrieve.Lunr
  • retrieve.Flash
  • retrieve.Encoder
  • retrieve.DPR
  • retrieve.Fuzz
  • retrieve.Embedding
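
As a rough usage sketch (not from the original README): the TfIdf retriever below follows the same key/on/documents/k constructor pattern as the BM25 example in the quickstart; exact arguments may differ between Cherche versions.

```python
from cherche import data, retrieve

documents = data.load_towns()

# Sketch only: TfIdf retriever built with the same key/on/documents/k pattern
# as the BM25 example above; check the documentation for the exact signature.
retriever = retrieve.TfIdf(
    key="id",
    on=["title", "article"],
    documents=documents,
    k=10,
)

# Like pipelines, retrievers accept a single query or a batch of queries and
# return the matching document ids with a relevance score.
candidates = retriever(["Bordeaux", "Paris"])
```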

Rank

Cherche provides rankers that re-order the documents returned by retrievers.

Cherche rankers are compatible with the SentenceTransformers models available on the Hugging Face Hub; a minimal sketch follows the list below.

  • rank.Encoder
  • rank.DPR
  • rank.CrossEncoder
  • rank.Embedding
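
To illustrate the SentenceTransformers compatibility mentioned above, here is a minimal sketch reusing the rank.Encoder constructor from the quickstart with a different Hub model (the model name is only an example):

```python
from cherche import rank
from sentence_transformers import SentenceTransformer

# Same rank.Encoder constructor as in the quickstart, pointing at another
# SentenceTransformers bi-encoder from the Hugging Face Hub.
ranker = rank.Encoder(
    key="id",
    on=["title", "article"],
    encoder=SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2").encode,
    k=5,
)

# Rankers are meant to be composed with a retriever, e.g. search = retriever + ranker.
```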

Question answering

Cherche provides modules dedicated to question answering. These modules are compatible with Hugging Face's pre-trained models and fully integrated into neural search pipelines.
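
As a hedged sketch of how this is typically wired together (the qa.QA wrapper, its model and on parameters, and the document-mapping step are assumptions based on the project documentation and may differ between Cherche versions):

```python
from transformers import pipeline
from cherche import data, qa, retrieve

documents = data.load_towns()

# Lexical retriever following the quickstart pattern.
retriever = retrieve.BM25(key="id", on=["title", "article"], documents=documents, k=10)

# Assumed wiring: wrap a Hugging Face question-answering pipeline with qa.QA
# and append it to the search pipeline so answers are extracted from the
# retrieved documents. Parameter names may vary between versions.
question_answering = qa.QA(
    model=pipeline(
        "question-answering",
        model="deepset/roberta-base-squad2",
        tokenizer="deepset/roberta-base-squad2",
    ),
    on=["title", "article"],
)

# Map ids back to documents before extracting answers.
search = retriever + documents + question_answering
search(["What is the capital of France?"])
```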

Contributors 🤝

Cherche was created for/by Renault and is now available to all. We welcome all contributions.

Acknowledgements 👏

The Lunr retriever is a wrapper around Lunr.py and the Flash retriever is a wrapper around FlashText. The DPR, Encoder and CrossEncoder rankers are wrappers dedicated to using pre-trained SentenceTransformers models in a neural search pipeline.

Citations

If you use Cherche to produce results for your scientific publication, please refer to our SIGIR paper:

```bibtex
@inproceedings{Sourty2022sigir,
    author = {Raphael Sourty and Jose G. Moreno and Lynda Tamine and Francois-Paul Servant},
    title = {CHERCHE: A new tool to rapidly implement pipelines in information retrieval},
    booktitle = {Proceedings of SIGIR 2022},
    year = {2022}
}
```

Dev Team 💾

The Cherche dev team is made up of Raphaël Sourty, François-Paul Servant, Nicolas Bizzozzero, and Jose G. Moreno. 🥳

Owner

  • Name: Raphael Sourty
  • Login: raphaelsty
  • Kind: user
  • Location: Paris
  • Company: LightOn

Machine Learning @lightonai

Citation (CITATION.bib)

@inproceedings{Sourty2022sigir,
    author = {Raphael Sourty and Jose G. Moreno and Lynda Tamine and Francois-Paul Servant},
    title = {CHERCHE: A new tool to rapidly implement pipelines in information retrieval},
    booktitle = {Proceedings of SIGIR 2022},
    year = {2022}
}

GitHub Events

Total
  • Issues event: 2
  • Watch event: 11
  • Issue comment event: 1
  • Fork event: 1
Last Year
  • Issues event: 2
  • Watch event: 11
  • Issue comment event: 1
  • Fork event: 1

Committers

Last synced: over 1 year ago

All Time
  • Total Commits: 182
  • Total Committers: 6
  • Avg Commits per committer: 30.333
  • Development Distribution Score (DDS): 0.203
Past Year
  • Commits: 17
  • Committers: 4
  • Avg Commits per committer: 4.25
  • Development Distribution Score (DDS): 0.529
Top Committers
  • Raphael Sourty (r****y@g****m): 145 commits
  • Max Halford (m****5@g****m): 12 commits
  • Raphael Sourty (r****y@m****m): 10 commits
  • Raphael Sourty (r****y@l****i): 8 commits
  • NicolasBizzozzero (n****o@p****m): 6 commits
  • Devin Conathan (d****n@g****m): 1 commit
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 8
  • Total pull requests: 11
  • Average time to close issues: 4 months
  • Average time to close pull requests: 4 days
  • Total issue authors: 6
  • Total pull request authors: 4
  • Average comments per issue: 0.63
  • Average comments per pull request: 0.82
  • Merged pull requests: 11
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • fpservant (3)
  • fivejjs (1)
  • tom9358 (1)
  • delmetni (1)
  • Balogunolalere (1)
  • robinsonkwame (1)
Pull Request Authors
  • raphaelsty (8)
  • MaxHalford (2)
  • dconathan (1)
  • NicolasBizzozzero (1)
Top Labels
Issue Labels
question (1)
Pull Request Labels
enhancement (4) documentation (2)

Dependencies

setup.py pypi