Fast, Consistent Tokenization of Natural Language Text
Published in JOSS (2018)
Science Score: 95.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file: not found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 4 DOI reference(s) in README and JOSS metadata
- ✓ Academic publication links: links to joss.theoj.org
- ✓ Committers with academic emails: 1 of 13 committers (7.7%) from academic institutions
- ○ Institutional organization owner: not found
- ✓ JOSS paper metadata: published in Journal of Open Source Software
Keywords
nlp
peer-reviewed
r
r-package
rstats
text-mining
tokenizer
Keywords from Contributors
tidyverse
tidy-data
digital-history
history
spatial-data
osm-data
overpass-api
pm25
rti-micropem
data60uk
Last synced: 6 months ago
Repository
Fast, Consistent Tokenization of Natural Language Text
Basic Info
- Host: GitHub
- Owner: ropensci
- License: other
- Language: R
- Default Branch: master
- Homepage: https://docs.ropensci.org/tokenizers
- Size: 1.24 MB
Statistics
- Stars: 186
- Watchers: 16
- Forks: 25
- Open Issues: 1
- Releases: 7
Topics
nlp
peer-reviewed
r
r-package
rstats
text-mining
tokenizer
Created almost 10 years ago · Last pushed almost 2 years ago
Metadata Files
Readme
Changelog
License
README.Rmd
---
output: github_document
pagetitle: "tokenizers: Fast, Consistent Tokenization of Natural Language Text"
---
```{r, echo = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "README-"
)
```
# tokenizers
[CRAN version](https://cran.r-project.org/package=tokenizers)
[DOI](https://doi.org/10.21105/joss.00655)
[rOpenSci software review](https://github.com/ropensci/software-review/issues/33)
[CRAN downloads](https://cran.r-project.org/package=tokenizers)
[Travis CI build status](https://travis-ci.org/ropensci/tokenizers)
[Code coverage](https://codecov.io/github/ropensci/tokenizers?branch=master)
## Overview
This R package offers functions with a consistent interface to convert natural language text into tokens. It includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, Penn Treebank, and regular expressions, as well as functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The package is built on the [stringi](https://www.gagolewski.com/software/stringi/) and [Rcpp](https://www.rcpp.org/) packages for fast yet correct tokenization in UTF-8.
See the "[Introduction to the tokenizers Package](https://docs.ropensci.org/tokenizers/articles/introduction-to-tokenizers.html)" vignette for an overview of all the functions in this package.
This package complies with the standards for input and output recommended by the Text Interchange Formats. The TIF initiative was created at an rOpenSci meeting in 2017, and its recommendations are available as part of the [tif package](https://github.com/ropenscilabs/tif). See the "[The Text Interchange Formats and the tokenizers Package](https://docs.ropensci.org/tokenizers/articles/tif-and-tokenizers.html)" vignette for an explanation of how this package fits into that ecosystem.
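For instance, a corpus stored as a tif-style data.frame (one row per document, with `doc_id` and `text` columns) can be passed directly to any of the tokenizers. A minimal sketch, using a hypothetical two-document corpus:

```{r}
library(tokenizers)

# A tiny corpus in the tif data.frame format (hypothetical documents):
# one row per document, with `doc_id` and `text` columns.
corpus <- data.frame(
  doc_id = c("doc1", "doc2"),
  text = c("One fish, two fish.", "Red fish, blue fish."),
  stringsAsFactors = FALSE
)

tokens <- tokenize_words(corpus)
names(tokens)  # "doc1" "doc2" -- the doc_id values become the list names
```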
## Suggested citation
If you use this package for your research, we would appreciate a citation.
```{r}
citation("tokenizers")
```
## Examples
The tokenizers in this package have a consistent interface. They all take either a character vector of any length, a list where each element is a character vector of length one, or a data.frame that adheres to the [tif corpus format](https://github.com/ropenscilabs/tif). The idea is that each element (or row) comprises a text. Each function then returns a list of the same length as the input, where each element contains the tokens generated from the corresponding text. If the input character vector or list is named, the names are preserved, so they can serve as identifiers (demonstrated in the sketch after the code block below). For a tif-formatted data.frame, the `doc_id` field is used as the element names in the returned token list.
```{r}
library(magrittr)
library(tokenizers)
james <- paste0(
  "The question thus becomes a verbal one\n",
  "again; and our knowledge of all these early stages of thought and feeling\n",
  "is in any case so conjectural and imperfect that farther discussion would\n",
  "not be worth while.\n",
  "\n",
  "Religion, therefore, as I now ask you arbitrarily to take it, shall mean\n",
  "for us _the feelings, acts, and experiences of individual men in their\n",
  "solitude, so far as they apprehend themselves to stand in relation to\n",
  "whatever they may consider the divine_. Since the relation may be either\n",
  "moral, physical, or ritual, it is evident that out of religion in the\n",
  "sense in which we take it, theologies, philosophies, and ecclesiastical\n",
  "organizations may secondarily grow.\n"
)
names(james) <- "varieties"
tokenize_characters(james)[[1]] %>% head(50)
tokenize_character_shingles(james)[[1]] %>% head(20)
tokenize_words(james)[[1]] %>% head(10)
tokenize_word_stems(james)[[1]] %>% head(10)
tokenize_sentences(james)
tokenize_paragraphs(james)
tokenize_ngrams(james, n = 5, n_min = 2)[[1]] %>% head(10)
tokenize_skip_ngrams(james, n = 5, k = 2)[[1]] %>% head(10)
tokenize_ptb(james)[[1]] %>% head(10)
tokenize_lines(james)[[1]] %>% head(5)
```
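Because `james` was given the name `"varieties"` above, each tokenizer carries that name through to its output. A quick sketch:

```{r}
tokens <- tokenize_words(james)
names(tokens)   # "varieties" -- the input name is preserved as an identifier
length(tokens)  # 1, matching the length of the input vector
```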
The package also contains functions to count words, characters, and sentences, and these functions follow the same consistent interface.
```{r}
count_words(james)
count_characters(james)
count_sentences(james)
```
The `chunk_text()` function splits a document into smaller chunks, each with the same number of words.
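A minimal sketch using the `james` text from above; `chunk_size` (the argument documented in `?chunk_text`) sets the target number of words per chunk:

```{r}
chunks <- chunk_text(james, chunk_size = 20)
length(chunks)       # number of chunks produced
count_words(chunks)  # each chunk holds roughly 20 words
```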
## Contributing
Contributions to the package are more than welcome. One way you can help is by using this package in your own R packages for natural language processing. If you want to contribute a tokenization function to this package, it should follow the same conventions as the rest of the functions whenever it makes sense to do so. A sketch of those conventions follows.
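As an illustration only, a hypothetical `tokenize_commas()` (not part of the package) that follows those conventions might look like this: accept a character vector or list, return a list of the same length, and preserve names.

```{r}
# Hypothetical example, not part of the package: split texts on commas,
# following the package conventions for inputs, outputs, and names.
tokenize_commas <- function(x, lowercase = TRUE) {
  nm <- names(x)
  x <- as.character(x)
  if (lowercase) x <- stringi::stri_trans_tolower(x)
  out <- stringi::stri_split_fixed(x, ",")
  names(out) <- nm
  out
}

tokenize_commas(c(doc1 = "One fish, two fish", doc2 = "Red fish, blue fish"))
```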
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
------------------------------------------------------------------------
[rOpenSci](https://ropensci.org)
Owner
- Name: rOpenSci
- Login: ropensci
- Kind: organization
- Email: info@ropensci.org
- Location: Berkeley, CA
- Website: https://ropensci.org/
- Twitter: rOpenSci
- Repositories: 307
- Profile: https://github.com/ropensci
JOSS Publication
Fast, Consistent Tokenization of Natural Language Text
Published
March 28, 2018
Volume 3, Issue 23, Page 655
Authors
Dmitry Selivanov (Open Data Science)
Tags
text mining, tokenization, natural language processing
Papers & Mentions
Total mentions: 1
Dynamic Courtship Signals and Mate Preferences in Sepia plangon
- DOI: 10.3389/fphys.2020.00845
- OpenAlex ID: https://openalex.org/W3048130089
- Published: August 2020
Last synced: 4 months ago
GitHub Events
Total
- Watch event: 3
Last Year
- Watch event: 3
Committers
Last synced: 7 months ago
Top Committers
| Name | Email (masked) | Commits |
|---|---|---|
| Lincoln Mullen | l****n@l****m | 157 |
| Oliver Keyes | i****s@g****m | 13 |
| Dmitriy Selivanov | s****y@g****m | 6 |
| jrnold | j****d@g****m | 4 |
| Kenneth Benoit | k****t@l****k | 4 |
| tcharlon | c****n@p****m | 1 |
| Maëlle Salmon | m****n@y****e | 1 |
| Karthik Ram | k****m@g****m | 1 |
| Julia Silge | j****e@g****m | 1 |
| Jeroen Ooms | j****s@g****m | 1 |
| Hideaki Hayashi | h****h@g****m | 1 |
| Emil Hvitfeldt | e****t@g****m | 1 |
| ChrisMuir | c****A@g****m | 1 |
Committer Domains (Top 20 + Academic)
lse.ac.uk: 1
lincolnmullen.com: 1
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 64
- Total pull requests: 23
- Average time to close issues: 6 months
- Average time to close pull requests: 16 days
- Total issue authors: 29
- Total pull request authors: 12
- Average comments per issue: 4.41
- Average comments per pull request: 1.83
- Merged pull requests: 23
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- lmullen (24)
- dselivanov (5)
- juliasilge (2)
- fschaffner (2)
- hope-data-science (2)
- EmilHvitfeldt (2)
- alanault (2)
- randomgambit (2)
- maelle (2)
- Ironholds (2)
- rlumor (1)
- jeroen (1)
- fahadshery (1)
- statspro1 (1)
- harshakap (1)
Pull Request Authors
- Ironholds (4)
- lmullen (4)
- kbenoit (4)
- dselivanov (3)
- maelle (1)
- jrnold (1)
- hideaki (1)
- jeroen (1)
- EmilHvitfeldt (1)
- karthik (1)
- juliasilge (1)
- ChrisMuir (1)
Top Labels
Issue Labels
bug (1)
help wanted (1)
Pull Request Labels
Packages
- Total packages: 2
- Total downloads: 33,219 last-month (CRAN)
- Total docker downloads: 142,616
- Total dependent packages: 20 (may contain duplicates)
- Total dependent repositories: 39 (may contain duplicates)
- Total versions: 17
- Total maintainers: 1
proxy.golang.org: github.com/ropensci/tokenizers
- Documentation: https://pkg.go.dev/github.com/ropensci/tokenizers#section-documentation
- License: other
- Latest release: v0.3.0 (published about 3 years ago)
Rankings
- Dependent packages count: 5.4%
- Dependent repos count: 5.8%
- Average: 5.6%
Last synced: 6 months ago
cran.r-project.org: tokenizers
Fast, Consistent Tokenization of Natural Language Text
- Homepage: https://docs.ropensci.org/tokenizers/
- Documentation: http://cran.r-project.org/web/packages/tokenizers/tokenizers.pdf
- License: MIT + file LICENSE
- Latest release: 0.3.0 (published about 3 years ago)
Rankings
- Downloads: 2.3%
- Stargazers count: 2.3%
- Forks count: 2.9%
- Dependent packages count: 3.3%
- Dependent repos count: 4.2%
- Docker downloads count: 20.3%
- Average: 5.9%
Maintainers (1)
Last synced: 6 months ago
Dependencies
DESCRIPTION
cran
- R >= 3.1.3 depends
- Rcpp >= 0.12.3 imports
- SnowballC >= 0.5.1 imports
- stringi >= 1.0.1 imports
- covr * suggests
- knitr * suggests
- rmarkdown * suggests
- stopwords >= 0.9.0 suggests
- testthat * suggests
Dockerfile
docker
- rocker/shiny-verse 4.3.2 build
docker-compose.yml
docker