Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 5 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (20.5%) to scientific vocabulary
Last synced: 7 months ago

Repository

Basic Info
Statistics
  • Stars: 121
  • Watchers: 9
  • Forks: 3
  • Open Issues: 10
  • Releases: 5
Created over 2 years ago · Last pushed 7 months ago
Metadata Files
Readme Changelog License

README.Rmd

---
output: github_document
---



```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)
options(rollama_seed = 42)
```

# `rollama`


[![R-CMD-check](https://github.com/JBGruber/rollama/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/JBGruber/rollama/actions/workflows/R-CMD-check.yaml)
[![Codecov test coverage](https://codecov.io/gh/JBGruber/rollama/branch/main/graph/badge.svg)](https://app.codecov.io/gh/JBGruber/rollama?branch=main)
[![CRAN status](https://www.r-pkg.org/badges/version/rollama)](https://CRAN.R-project.org/package=rollama)
[![CRAN_Download_Badge](https://cranlogs.r-pkg.org/badges/grand-total/rollama)](https://cran.r-project.org/package=rollama)
[![arXiv:10.48550/arXiv.2404.07654](https://img.shields.io/badge/DOI-arXiv.2404.07654-B31B1B?logo=arxiv)](https://doi.org/10.48550/arXiv.2404.07654)
[![say-thanks](https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg)](https://saythanks.io/to/JBGruber)


The goal of `rollama` is to wrap the Ollama API, which allows you to run different LLMs locally and create an experience similar to ChatGPT/OpenAI's API.
Ollama is very easy to deploy and handles a huge number of models.
Check out the project at [ollama.com](https://ollama.com/).


## Installation

You can install this package from CRAN:

``` r
install.packages("rollama")
```

Or you can install the development version of `rollama` from [GitHub](https://github.com/JBGruber/rollama). This version is updated more frequently and may contain bug fixes (or new bugs):

``` r
# install.packages("remotes")
remotes::install_github("JBGruber/rollama")
```

However, `rollama` is just the client package.
The models run in `Ollama`, which you need to install on your system, on a remote system, or through [Docker](https://docs.docker.com/desktop/).
The easiest way is to simply download and install the Ollama application from [their website](https://ollama.com/).
Once `Ollama` is running, you can check whether you can access it with:

```{r}
rollama::ping_ollama()
```
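
If you want to test the connection programmatically, a minimal sketch (assuming `ping_ollama()` invisibly returns `TRUE`/`FALSE` and accepts a `silent` argument; check the function documentation to confirm):

``` r
# stop early with a helpful message if the server is unreachable
if (!rollama::ping_ollama(silent = TRUE)) {
  stop("Could not reach Ollama. Is the application (or container) running?")
}
```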


### Installation of Ollama through Docker

For beginners, we recommend downloading Ollama from [their website](https://ollama.com/). However, if you are familiar with Docker, you can also run Ollama through Docker. The advantage of running things through Docker is that the application is isolated from the rest of your system, behaves the same on different systems, and is easy to download and update.
You can also get a nice web interface.
After making sure [Docker](https://docs.docker.com/desktop/) is installed, you can simply use the Docker Compose file from [this gist](https://gist.github.com/JBGruber/73f9f49f833c6171b8607b976abc0ddc).

If you don’t know how to use Docker Compose, you can follow this [video](https://www.youtube.com/watch?v=iMyCdd5nP5U) to use the compose file and start Ollama and Open WebUI.

## Example

The first thing you should do after installation is to pull one of the models from the [Ollama website](https://ollama.com/).
By calling `pull_model()` without arguments, you are pulling the (current) default model, "llama3.1 8b":

```{r lib}
library(rollama)
```
```{r eval=FALSE}
pull_model()
```
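
`pull_model()` also accepts a model name, so you can fetch a specific model instead, for example the tag used later in this README:

``` r
# pull a specific model instead of the default
pull_model("llama3.2:3b-instruct-q4_1")
```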

There are two ways to communicate with the Ollama API.
You can make single requests, which do not store any history and treat each query as the beginning of a new chat:

```{r query}
# ask a single question
query("Why is the sky blue? Answer with one sentence.")
```

With the output argument, we can specify the format of the response. Available options include "text", "list", "data.frame", "response", "httr2_response", and "httr2_request":

```{r output}
# ask a single question and specify the output format
query("Why is the sky blue? Answer with one sentence.", output = "text")
```
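
For instance, `output = "data.frame"` is handy when you plan to post-process answers. A minimal sketch:

``` r
# return the response as a data.frame for further processing
res <- query("Why is the sky blue? Answer with one sentence.",
             output = "data.frame")
str(res)
```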

Or you can use the `chat()` function, which treats all messages sent during an R session as part of the same conversation:

```{r chat}
# hold a conversation
chat("Why is the sky blue? Give a short answer.")
chat("And how do you know that? Give a short answer.")
```

If you are done with a conversation and want to start a new one, you can do that like so:

```{r new}
new_chat()
```

## Model parameters

You can set a number of model parameters, either by creating a new model with a [modelfile](https://jbgruber.github.io/rollama/reference/create_model.html) or by including the parameters in the prompt:

```{r}
query("Why is the sky blue? Answer with one sentence.", output = "text",
      model_params = list(
        seed = 42,
        num_gpu = 0)
      )
```
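
Parameter names follow the Ollama API; for example, `temperature` controls how deterministic the answers are. A sketch (assuming `temperature` is passed through unchanged, as with `seed` and `num_gpu` above):

``` r
# lower temperature makes answers more deterministic
query("Why is the sky blue? Answer with one sentence.", output = "text",
      model_params = list(
        seed = 42,
        temperature = 0.1
      ))
```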

```{r include=FALSE, results='asis'}
# print the parameter documentation from the Ollama repository
l <- readLines("https://raw.githubusercontent.com/ollama/ollama/main/docs/modelfile.md")
s <- grep("#### Valid Parameters and Values", l, fixed = TRUE)
e <- grep("### TEMPLATE", l, fixed = TRUE)
# note: `:` binds tighter than `-`, so the parentheses are needed to
# stop one line before the TEMPLATE heading
cat(l[s:(e - 1)], sep = "\n")
```


## Configuration

You can configure the server address, the system prompt, and the model used for a query or chat.
If not configured otherwise, `rollama` assumes you are using the default port (11434) of a local instance ("localhost").
Let's make this explicit by setting the option:

```{r server}
options(rollama_server = "http://localhost:11434")
```

You can change how a model answers by setting a configuration or system message in plain English (or another language supported by the model):

```{r config}
options(rollama_config = "You make short answers understandable to a 5 year old")
query("Why is the sky blue?")
```
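
To remove the system message again and return to the model's default behaviour, unset the option:

``` r
# unsetting the option removes the system message for subsequent queries
options(rollama_config = NULL)
```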

By default, the package uses the "llama3.1 8B" model. Supported models can be found on the [Ollama website](https://ollama.com/).
To download a specific model, make use of the additional information available under "Tags" on a model's page.
Change the model via the `rollama_model` option:

```{r model}
options(rollama_model = "llama3.2:3b-instruct-q4_1")
# if you don't have the model yet: pull_model("llama3.2:3b-instruct-q4_1")
query("Why is the sky blue? Answer with one sentence.")
```
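
Instead of setting a global option, `query()` and `chat()` also take a `model` argument, so you can switch models for a single call:

``` r
# use a different model for just this one request
query("Why is the sky blue? Answer with one sentence.",
      model = "llama3.2:3b-instruct-q4_1")
```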

## Easy query generation

The `make_query` function simplifies the creation of structured queries, which can, for example, be used in [annotation tasks](https://jbgruber.github.io/rollama/articles/annotation.html#the-make_query-helper-function).

Main components (check the [documentation](https://jbgruber.github.io/rollama/articles/annotation.html#the-make_query-helper-function) for more options):

- **`text`**: The text(s) to classify.
- **`prompt`**: The main prompt, e.g., a (classification) question.
- **`system`**: Optional system prompt providing context or instructions for the task.
- **`examples`**: Optional prior examples for one-shot or few-shot learning (user messages and assistant responses).


**Zero-shot Example**  
In this example, the function is used without examples:

```{r make_query}
# Create a query using make_query
q_zs <- make_query(
  text = "the pizza tastes terrible",
  prompt = "Is this text: 'positive', 'neutral', or 'negative'?",
  system = "You assign texts into categories. Answer with just the correct category."
)
# Print the query
print(q_zs)
# Run the query
query(q_zs, output = "text")
```
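
**Few-shot Example**  
For few-shot learning, you can supply prior examples. A hypothetical sketch, assuming `examples` accepts a data.frame with `text` and `answer` columns (check the annotation vignette for the exact format):

``` r
# hypothetical few-shot sketch; the structure `examples` expects is
# documented in the annotation vignette
examples <- data.frame(
  text = c("the service was fantastic", "I would not recommend this place"),
  answer = c("positive", "negative")
)
q_fs <- make_query(
  text = "the pizza tastes terrible",
  prompt = "Is this text: 'positive', 'neutral', or 'negative'?",
  system = "You assign texts into categories. Answer with just the correct category.",
  examples = examples
)
query(q_fs, output = "text")
```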

## Learn more

- [Use rollama for annotation tasks](https://jbgruber.github.io/rollama/articles/annotation.html)
- [Annotate images](https://jbgruber.github.io/rollama/articles/image-annotation.html)
- [Get text embedding](https://jbgruber.github.io/rollama/articles/text-embedding.html)
- [Use more models (GGUF format) from Hugging Face](https://jbgruber.github.io/rollama/articles/hf-gguf.html)


## Citation

Please cite the package using the [preprint](https://arxiv.org/abs/2404.07654) DOI: [10.48550/arXiv.2404.07654](https://doi.org/10.48550/arXiv.2404.07654).






Owner

  • Name: Johannes Gruber
  • Login: JBGruber
  • Kind: user
  • Location: Amsterdam/Wiesbaden
  • Company: VU Amsterdam

Post-Doc Researcher at VU Amsterdam

GitHub Events

Total
  • Create event: 8
  • Release event: 3
  • Issues event: 28
  • Watch event: 32
  • Delete event: 1
  • Issue comment event: 39
  • Push event: 89
  • Pull request review event: 2
  • Pull request event: 6
  • Fork event: 1
Last Year
  • Create event: 8
  • Release event: 3
  • Issues event: 28
  • Watch event: 32
  • Delete event: 1
  • Issue comment event: 39
  • Push event: 89
  • Pull request review event: 2
  • Pull request event: 6
  • Fork event: 1

Committers

Last synced: 11 months ago

All Time
  • Total Commits: 202
  • Total Committers: 2
  • Avg Commits per committer: 101.0
  • Development Distribution Score (DDS): 0.158
Past Year
  • Commits: 95
  • Committers: 2
  • Avg Commits per committer: 47.5
  • Development Distribution Score (DDS): 0.074
Top Committers
Name Email Commits
JBGruber J****r@g****m 170
Maximilian Weber w****a@g****m 32

Issues and Pull Requests

Last synced: 7 months ago

All Time
  • Total issues: 37
  • Total pull requests: 5
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 11 days
  • Total issue authors: 18
  • Total pull request authors: 2
  • Average comments per issue: 2.03
  • Average comments per pull request: 0.2
  • Merged pull requests: 5
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 19
  • Pull requests: 4
  • Average time to close issues: 4 days
  • Average time to close pull requests: 14 days
  • Issue authors: 9
  • Pull request authors: 2
  • Average comments per issue: 1.74
  • Average comments per pull request: 0.25
  • Merged pull requests: 4
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • JBGruber (16)
  • bshor (4)
  • reijmerniek (2)
  • sadettindemirel (2)
  • jobreu (1)
  • datapumpernickel (1)
  • michalovadek (1)
  • Koalha (1)
  • thieled (1)
  • Edouard-Legoupil (1)
  • Arthur-Zestco (1)
  • YavuzMehmet2 (1)
  • florianm (1)
  • whweve (1)
  • kasperwelbers (1)
Pull Request Authors
  • JBGruber (4)
  • textspur (4)
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 3
  • Total downloads:
    • cran 423 last-month
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 0
    (may contain duplicates)
  • Total versions: 16
  • Total maintainers: 1
proxy.golang.org: github.com/JBGruber/rollama
  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 7.0%
Last synced: 7 months ago
proxy.golang.org: github.com/jbgruber/rollama
  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 7.0%
Last synced: 7 months ago
cran.r-project.org: rollama

Communicate with 'Ollama' to Run Large Language Models Locally

  • Versions: 6
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 423 Last month
Rankings
Dependent packages count: 28.2%
Dependent repos count: 36.1%
Average: 49.6%
Downloads: 84.6%
Maintainers (1)
Last synced: 7 months ago

Dependencies

DESCRIPTION cran
  • callr * imports
  • cli * imports
  • httr2 * imports
  • purrr * imports
.github/workflows/R-CMD-check.yaml actions
  • actions/checkout v3 composite
  • r-lib/actions/check-r-package v2 composite
  • r-lib/actions/setup-pandoc v2 composite
  • r-lib/actions/setup-r v2 composite
  • r-lib/actions/setup-r-dependencies v2 composite
.github/workflows/pkgdown.yaml actions
  • JamesIves/github-pages-deploy-action v4.4.1 composite
  • actions/checkout v3 composite
  • r-lib/actions/setup-pandoc v2 composite
  • r-lib/actions/setup-r v2 composite
  • r-lib/actions/setup-r-dependencies v2 composite
.github/workflows/rhub.yaml actions
  • r-hub/actions/checkout v1 composite
  • r-hub/actions/platform-info v1 composite
  • r-hub/actions/run-check v1 composite
  • r-hub/actions/setup v1 composite
  • r-hub/actions/setup-deps v1 composite
  • r-hub/actions/setup-r v1 composite