elastic

R client for the Elasticsearch HTTP API

https://github.com/ropensci/elastic

Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    1 of 18 committers (5.6%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (18.0%) to scientific vocabulary

Keywords

data-science database database-wrapper elasticsearch etl http json r r-package rstats

Keywords from Contributors

http-mock mock genome climate noaa spocc species gbif national-phenology-network phenology
Last synced: 6 months ago

Repository

R client for the Elasticsearch HTTP API

Basic Info
Statistics
  • Stars: 245
  • Watchers: 25
  • Forks: 58
  • Open Issues: 10
  • Releases: 15
Topics
data-science database database-wrapper elasticsearch etl http json r r-package rstats
Created over 12 years ago · Last pushed over 2 years ago
Metadata Files
Readme Contributing License

README.Rmd

elastic
=======

```{r echo=FALSE}
knitr::opts_chunk$set(
  comment = "#>",
  collapse = TRUE,
  warning = FALSE,
  message = FALSE
)
```

[![Project Status: Active – The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)
[![R-check](https://github.com/ropensci/elastic/workflows/R-check/badge.svg)](https://github.com/ropensci/elastic/actions?query=workflow%3AR-check)
[![cran checks](https://cranchecks.info/badges/worst/elastic)](https://cranchecks.info/pkgs/elastic)
[![rstudio mirror downloads](https://cranlogs.r-pkg.org/badges/elastic?color=E664A4)](https://github.com/r-hub/cranlogs.app)
[![cran version](https://www.r-pkg.org/badges/version/elastic)](https://cran.r-project.org/package=elastic)


**A general purpose R interface to [Elasticsearch](https://www.elastic.co/elasticsearch/)**


## Elasticsearch info

* [Elasticsearch home page](https://www.elastic.co/elasticsearch/)
* [API docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html)


## Compatibility

This client is developed following the latest stable releases, currently `v7.10.0`. It is generally compatible with older versions of Elasticsearch. Unlike the [Python client](https://github.com/elastic/elasticsearch-py#compatibility), we try to keep as much compatibility as possible within a single version of this client, as that's an easier setup in the R world.

## Security

You're fine running ES locally on your machine, but be careful about just throwing up ES on a server with a public IP address - make sure to think about security first.

* Elastic has paid products - but probably only applicable to enterprise users
* DIY security - there are a variety of techniques for securing your Elasticsearch installation. A number of resources are collected in a [blog post](https://recology.info/2015/02/secure-elasticsearch/) - tools include putting your ES behind something like Nginx, putting basic auth on top of it, using https, etc.
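The reverse-proxy approach from that list can be sketched as a config fragment like the following (hypothetical nginx config; the hostname, certificate paths, and htpasswd file are illustrative, not part of this package):

```
# hypothetical nginx reverse proxy: TLS + basic auth in front of a local ES
server {
  listen 443 ssl;
  server_name es.example.com;

  ssl_certificate     /etc/ssl/certs/es.example.com.pem;
  ssl_certificate_key /etc/ssl/private/es.example.com.key;

  location / {
    auth_basic           "Elasticsearch";
    auth_basic_user_file /etc/nginx/es_htpasswd;
    proxy_pass           http://127.0.0.1:9200;
  }
}
```

With something like this in place, ES itself listens only on localhost and all outside traffic goes through authenticated HTTPS.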

## Installation

Stable version from CRAN

```{r eval=FALSE}
install.packages("elastic")
```

Development version from GitHub

```{r eval=FALSE}
remotes::install_github("ropensci/elastic")
```

```{r}
library('elastic')
```

## Install Elasticsearch

* [Elasticsearch installation help](https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html)

__w/ Docker__

Pull the official elasticsearch image

```sh
# elasticsearch needs to have a version tag; we're pulling 7.10.1 here
docker pull elasticsearch:7.10.1
```

Then start up a container

```sh
docker run -d -p 9200:9200 elasticsearch:7.10.1
```

Then elasticsearch should be available on port 9200, try `curl localhost:9200` and you should get the familiar message indicating ES is on.

If you're using boot2docker, you'll need to use the IP address in place of localhost. Get it by doing `boot2docker ip`.

__on OSX__

+ Download zip or tar file from Elasticsearch [see here for download](https://www.elastic.co/downloads), e.g., `curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-darwin-x86_64.tar.gz`
+ Extract: `tar -zxvf elasticsearch-7.10.0-darwin-x86_64.tar.gz`
+ Move it: `sudo mv elasticsearch-7.10.0 /usr/local`
+ Navigate to /usr/local: `cd /usr/local`
+ Delete symlinked `elasticsearch` directory: `rm -rf elasticsearch`
+ Add shortcut: `sudo ln -s elasticsearch-7.10.0 elasticsearch` (replace version with your version)

You can also install via Homebrew: `brew install elasticsearch`

> Note: Elasticsearch 1.6 and greater requires Java 8 or greater. I downloaded Java 8 from http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html and it seemed to work great.

## Upgrading Elasticsearch

I am not totally clear on best practice here, but from what I understand, when you upgrade to a new version of Elasticsearch, place old `elasticsearch/data` and `elasticsearch/config` directories into the new installation (`elasticsearch/` dir). The new elasticsearch instance with replaced data and config directories should automatically update data to the new version and start working. Maybe if you use homebrew on a Mac to upgrade it takes care of this for you - not sure.
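Under that understanding, the directory move can be sketched as below. Throwaway temp directories stand in for the old and new install paths here so the snippet is self-contained; substitute your real install directories:

```shell
# throwaway directories standing in for old and new Elasticsearch installs
OLD=$(mktemp -d) && NEW=$(mktemp -d)
mkdir -p "$OLD/data" "$OLD/config"

# carry the old data and config directories into the new installation
cp -a "$OLD/data" "$OLD/config" "$NEW/"

ls "$NEW"
```

On first start, the new Elasticsearch instance should pick up the copied data and config directories and upgrade the data in place.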

Obviously, upgrading Elasticsearch while keeping it running is a different thing ([some help here from Elastic](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html)).

## Start Elasticsearch

* Navigate to elasticsearch: `cd /usr/local/elasticsearch`
* Start elasticsearch: `bin/elasticsearch`

I create a little bash shortcut called `es` that does both of the above commands in one step (`cd /usr/local/elasticsearch && bin/elasticsearch`).
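That shortcut can live in your shell startup file, e.g. (the install path assumed here matches the OSX steps above):

```shell
# add to ~/.bashrc (or ~/.zshrc): jump to the install dir and start ES
alias es='cd /usr/local/elasticsearch && bin/elasticsearch'
```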

## Initialization

The function `connect()` is used before doing anything else to set the connection details to your remote or local elasticsearch store. The details created by `connect()` are written to your options for the current session, and are used by `elastic` functions.

```{r}
x <- connect(port = 9200)
```
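A quick way to verify the connection details (assuming a local instance is up; `ping()` here is a method on the connection object returned by `connect()`):

```{r eval=FALSE}
# should return cluster info (name, version, etc.) if ES is reachable
x$ping()
```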

> If you're following along here with a local instance of Elasticsearch, you'll use `x` below to do more stuff.

For AWS-hosted Elasticsearch, make sure to specify `path = ""` and the correct port and transport schema pair.

```{r eval=FALSE}
connect(host = , path = "", port = 80, transport_schema = "http")
# or
connect(host = , path = "", port = 443, transport_schema = "https")
```

If you are using Elastic Cloud or an installation with authentication (X-Pack), make sure to specify `path = ""`, `user = ""`, `pwd = ""` and the correct port and transport schema pair.


```{r eval=FALSE}
connect(host = , path = "", user = "test", pwd = "1234", port = 9243, transport_schema = "https")
```


## Get some data

Elasticsearch has a bulk load API to load data in fast. The format is pretty weird though: it's sort of JSON, but would pass no JSON linter. I include a few data sets in `elastic` so it's easy to get up and running, and so when you run examples in this package they'll actually run the same way (hopefully).

I have prepared a non-exported function useful for preparing the weird format that Elasticsearch wants for bulk data loads. It is somewhat specific to PLOS data (see below), but you could modify it for your purposes. See `make_bulk_plos()` and `make_bulk_gbif()` [here](https://github.com/ropensci/elastic/blob/master/R/docs_bulk.r).

### Shakespeare data

Elasticsearch provides some data on Shakespeare plays. I've provided a subset of this data in this package. Get the path for the file specific to your machine:

```{r echo=FALSE}
library(elastic)
x <- connect()
if (x$es_ver() < 600) {
  shakespeare <- system.file("examples", "shakespeare_data.json", package = "elastic")
} else {
  shakespeare <- system.file("examples", "shakespeare_data_.json", package = "elastic")
  shakespeare <- type_remover(shakespeare)
}
```

```{r eval=FALSE}
shakespeare <- system.file("examples", "shakespeare_data.json", package = "elastic")
# If you're on Elastic v6 or greater, use this one
shakespeare <- system.file("examples", "shakespeare_data_.json", package = "elastic")
shakespeare <- type_remover(shakespeare)
```

Then load the data into Elasticsearch:

> make sure to create your connection object with `connect()`

```{r eval=FALSE}
# x <- connect() # do this now if you didn't do this above
invisible(docs_bulk(x, shakespeare))
```

If you need some big data to play with, the shakespeare dataset is a good one to start with.
You can get the whole thing and pop it into Elasticsearch (beware, it may take up to 10 minutes or so):

```sh
curl -XGET https://download.elastic.co/demos/kibana/gettingstarted/shakespeare_6.0.json > shakespeare.json
curl -XPUT localhost:9200/_bulk --data-binary @shakespeare.json
```

### Public Library of Science (PLOS) data

A dataset included in the `elastic` package is metadata for PLOS scholarly articles. Get the file path, then load:

```{r}
if (index_exists(x, "plos")) index_delete(x, "plos")
plosdat <- system.file("examples", "plos_data.json", package = "elastic")
plosdat <- type_remover(plosdat)
invisible(docs_bulk(x, plosdat))
```

### Global Biodiversity Information Facility (GBIF) data

A dataset included in the `elastic` package is data for GBIF species occurrence records. Get the file path, then load:

```{r}
if (index_exists(x, "gbif")) index_delete(x, "gbif")
gbifdat <- system.file("examples", "gbif_data.json", package = "elastic")
gbifdat <- type_remover(gbifdat)
invisible(docs_bulk(x, gbifdat))
```

GBIF geo data with a coordinates element to allow `geo_shape` queries:

```{r}
if (index_exists(x, "gbifgeo")) index_delete(x, "gbifgeo")
gbifgeo <- system.file("examples", "gbif_geo.json", package = "elastic")
gbifgeo <- type_remover(gbifgeo)
invisible(docs_bulk(x, gbifgeo))
```

### More data sets

There are more datasets formatted for bulk loading in the `sckott/elastic_data` GitHub repository.
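Besides the bundled JSON files, `docs_bulk()` also accepts data.frame input directly, which is handy for loading your own data (a sketch, assuming the connection object `x` from above and a running instance; the `mtcars` index name is just illustrative):

```{r eval=FALSE}
# bulk load a data.frame straight into a new index
if (index_exists(x, "mtcars")) index_delete(x, "mtcars")
invisible(docs_bulk(x, mtcars, index = "mtcars"))

# confirm the documents made it in
Search(x, index = "mtcars", size = 1)$hits$total
```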
## Search

Search the `plos` index and only return 1 result:

```{r}
Search(x, index = "plos", size = 1)$hits$hits
```

Search the `plos` index, and query for _antibody_, limit to 1 result:

```{r}
Search(x, index = "plos", q = "antibody", size = 1)$hits$hits
```

## Get documents

Get document with id=4:

```{r}
docs_get(x, index = 'plos', id = 4)
```

Get certain fields:

```{r}
docs_get(x, index = 'plos', id = 4, fields = 'id')
```

## Get multiple documents via the multiget API

Same index and different document ids:

```{r}
docs_mget(x, index = "plos", id = 1:2)
```

## Parsing

You can optionally get back raw `json` from `Search()`, `docs_get()`, and `docs_mget()` by setting the parameter `raw=TRUE`. For example:

```{r}
(out <- docs_mget(x, index = "plos", id = 1:2, raw = TRUE))
```

Then parse:

```{r}
jsonlite::fromJSON(out)
```

## Known pain points

* On secure Elasticsearch servers:
    * `HEAD` requests don't seem to work, not sure why
    * If you allow only `GET` requests, a number of functions that require `POST` requests obviously then won't work. A big one is `Search()`, but you can use `Search_uri()` to get around this, which uses `GET` instead of `POST`, but you can't pass a more complicated query via the body

## Screencast

A screencast introducing the package: vimeo.com/124659179

## Meta

* Please [report any issues or bugs](https://github.com/ropensci/elastic/issues)
* License: MIT
* Get citation information for `elastic` in R doing `citation(package = 'elastic')`
* Please note that this package is released with a [Contributor Code of Conduct](https://ropensci.org/code-of-conduct/). By contributing to this project, you agree to abide by its terms.

Owner

  • Name: rOpenSci
  • Login: ropensci
  • Kind: organization
  • Email: info@ropensci.org
  • Location: Berkeley, CA

GitHub Events

Total
  • Watch event: 4
  • Fork event: 1
Last Year
  • Watch event: 4
  • Fork event: 1

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 832
  • Total Committers: 18
  • Avg Commits per committer: 46.222
  • Development Distribution Score (DDS): 0.036
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Scott Chamberlain m****s@g****m 802
Devin McCabe d****e@g****m 5
nelsonSchwarz n****n@m****m 4
steven2249 s****w@b****u 3
Dawei Lang c****g@i****m 3
Maëlle Salmon m****n@y****e 2
MusTheDataGuy m****z@g****m 2
Christopher Peters c****9@g****m 1
Jeroen Ooms j****s@g****m 1
Kyle Chung k****9@g****m 1
Pieter Provoost p****t@g****m 1
Raphael Saldanha r****a@g****m 1
Shivam s****3@g****m 1
Ugo Sangiorgi u****o@u****g 1
Weldon Sams w****s@g****m 1
colin c****n@t****r 1
cphaarmeyer 4****r 1
oleksii renov f****s@g****m 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 9 months ago

All Time
  • Total issues: 89
  • Total pull requests: 14
  • Average time to close issues: 11 months
  • Average time to close pull requests: 22 days
  • Total issue authors: 49
  • Total pull request authors: 10
  • Average comments per issue: 4.0
  • Average comments per pull request: 3.0
  • Merged pull requests: 9
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • sckott (30)
  • Jensxy (5)
  • tedmoorman (3)
  • maelle (3)
  • regisoc (2)
  • Aeilert (2)
  • rfsaldanha (2)
  • aleksaschmidt (1)
  • dpmccabe (1)
  • MonaxGT (1)
  • mayankgautam (1)
  • sarthi2395 (1)
  • AMR-KELEG (1)
  • bfgiordano (1)
  • blosloos (1)
Pull Request Authors
  • maelle (2)
  • M-YD (2)
  • sckott (2)
  • rfsaldanha (2)
  • Banjio (1)
  • ColinFay (1)
  • cphaarmeyer (1)
  • orenov (1)
  • dpmccabe (1)
Top Labels
Issue Labels
bulk (9) bug (7) features (3) lowpriority (2) question (2) elasticsearch-v6 (2)
Pull Request Labels
scroll (1) docs (1)

Packages

  • Total packages: 2
  • Total downloads:
    • cran 3,066 last-month
  • Total docker downloads: 131,392
  • Total dependent packages: 3
    (may contain duplicates)
  • Total dependent repositories: 6
    (may contain duplicates)
  • Total versions: 29
  • Total maintainers: 1
proxy.golang.org: github.com/ropensci/elastic
  • Versions: 15
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 9.0%
Average: 9.6%
Dependent repos count: 10.2%
Last synced: 6 months ago
cran.r-project.org: elastic

General Purpose Interface to 'Elasticsearch'

  • Versions: 14
  • Dependent Packages: 3
  • Dependent Repositories: 6
  • Downloads: 3,066 Last month
  • Docker Downloads: 131,392
Rankings
Forks count: 1.1%
Stargazers count: 1.7%
Dependent packages count: 10.9%
Average: 11.6%
Dependent repos count: 12.0%
Downloads: 18.2%
Docker downloads count: 25.8%
Maintainers (1)
Last synced: 6 months ago

Dependencies

DESCRIPTION cran
  • R6 * imports
  • crul >= 0.9.0 imports
  • curl >= 2.2 imports
  • jsonlite >= 1.1 imports
  • utils * imports
  • testthat * suggests
.github/workflows/R-check.yml actions
  • actions/cache v2 composite
  • actions/checkout v2 composite
  • actions/upload-artifact v2 composite
  • r-lib/actions/setup-pandoc v1 composite
  • r-lib/actions/setup-r v1 composite
  • elasticsearch ${{ matrix.config.es_ver }} docker