folktexts

Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data!

https://github.com/socialfoundations/folktexts

Science Score: 52.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
    Organization socialfoundations has institutional domain (sf.is.mpg.de)
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.6%) to scientific vocabulary

Keywords

fairness large-language-models machine-learning python tabular-data transformers uncertainty
Last synced: 6 months ago

Repository

Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data!

Basic Info
Statistics
  • Stars: 24
  • Watchers: 4
  • Forks: 4
  • Open Issues: 0
  • Releases: 1
Topics
fairness large-language-models machine-learning python tabular-data transformers uncertainty
Created about 2 years ago · Last pushed 11 months ago
Metadata Files
Readme License Citation

README.md

# Folktexts


A toolbox for evaluating statistical properties of LLMs

Folktexts provides a suite of Q&A datasets for evaluating uncertainty, calibration, accuracy, and fairness of LLMs on individual outcome prediction tasks. It provides a flexible framework to derive prediction tasks from survey data, translate them into natural-text prompts, extract LLM-generated risk scores, and compute statistical properties of these risk scores by comparing them to the ground-truth outcomes.

Use folktexts to benchmark your LLM:

  • Pre-defined Q&A benchmark tasks are provided based on data from the American Community Survey (ACS). Each tabular prediction task from the popular folktables package is made available as a natural-language Q&A task.
  • Parsed and ready-to-use versions of each folktexts dataset can be found on Huggingface.
  • The package can also be used to customize your own tasks. Select a feature to define your prediction target. Specify subsets of input features to vary outcome uncertainty. Modify prompting templates to evaluate mappings from tabular data to natural text prompts. Compare different methods to extract uncertainty values from LLM responses. Extract raw risk scores and outcomes to perform custom statistical evaluations. Package documentation can be found at https://folktexts.readthedocs.io/.

*(Figure: folktexts pipeline diagram.)*


Getting started

Installing

Install package from PyPI:

pip install folktexts

Basic setup

Go through the following steps to run the benchmark tasks. Alternatively, if you only want ready-to-use datasets, see the Ready-to-use datasets section below.

1. Create conda environment

```
conda create -n folktexts python=3.11
conda activate folktexts
```

2. Install folktexts package

```
pip install folktexts
```

3. Create models, data, and results folders

```
mkdir results
mkdir models
mkdir data
```

4. Download transformers model and tokenizer

```
download_models --model 'google/gemma-2b' --save-dir models
```

5. Run benchmark on a given task

```
run_acs_benchmark --results-dir results --data-dir data --task 'ACSIncome' --model models/google--gemma-2b
```

Run `run_acs_benchmark --help` to get a list of all available benchmark flags.

Ready-to-use datasets

Ready-to-use Q&A datasets generated from the 2018 American Community Survey are available via 🤗 Hugging Face Datasets.

```py
import datasets

acs_task_qa = datasets.load_dataset(
    path="acruz/folktexts",
    name="ACSIncome",   # Choose which task you want to load
    split="test",       # Choose split according to your intended use case
)
```
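
Once loaded, the returned object is a standard 🤗 `datasets.Dataset`, so the usual inspection methods apply; a minimal sketch (column contents depend on the task, check the dataset card for the exact schema):

```py
# Inspect the loaded Q&A dataset using the standard `datasets` API
print(len(acs_task_qa))           # number of examples in the chosen split
print(acs_task_qa.column_names)   # available columns
print(acs_task_qa[0])             # first natural-language Q&A example
```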

Example usage

Example code snippet that loads a pre-trained model, collects and parses Q&A data for the income-prediction task, and computes risk scores on the test split.

```py
# Load transformers model
from folktexts.llm_utils import load_model_tokenizer
model, tokenizer = load_model_tokenizer("gpt2")   # using tiny model as an example

from folktexts.acs import ACSDataset
acs_task_name = "ACSIncome"   # Name of the benchmark ACS task to use

# Create an object that classifies data using an LLM
from folktexts import TransformersLLMClassifier
clf = TransformersLLMClassifier(
    model=model,
    tokenizer=tokenizer,
    task=acs_task_name,
)
# NOTE: You can also use a web-hosted model like GPT4 using the WebAPILLMClassifier class

# Use a dataset or feed in your own data
dataset = ACSDataset.make_from_task(acs_task_name)   # use `.subsample(0.01)` to get faster approximate results

# You can compute risk score predictions using an sklearn-style interface
X_test, y_test = dataset.get_test()
test_scores = clf.predict_proba(X_test)
```

If you only care about the overall benchmark results and not individual predictions, you can simply run the following code instead of using `.predict_proba()` directly:

```py
from folktexts.benchmark import Benchmark, BenchmarkConfig

bench = Benchmark.make_benchmark(
    task=acs_task_name, dataset=dataset,   # These vars are defined in the snippet above
    model=model, tokenizer=tokenizer,
    numeric_risk_prompting=True,   # See the full list of configs below in the README
)
bench_results = bench.run(results_root_dir="results")
```

Example snippet showcasing how to fit the binarization threshold on a few training samples (note that this is not fine-tuning), and how to obtain discretized predictions using `.predict()`:

```py
# Optionally, you can fit the threshold based on a few samples
clf.fit(*dataset[0:100])   # (dataset[...] will access training data)

# ...in order to get more accurate binary predictions with .predict()
test_preds = clf.predict(X_test)
```
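
With risk scores and binary predictions in hand, ordinary scikit-learn metrics give a quick sanity check. The sketch below is not part of the folktexts API; it assumes, as in the snippet above, that `.predict_proba()` returns the positive-class risk scores (if your version returns a two-column array, use `test_scores[:, -1]` instead):

```py
from sklearn.metrics import accuracy_score, roc_auc_score

# test_scores come from clf.predict_proba, test_preds from clf.predict
print("ROC AUC: ", roc_auc_score(y_test, test_scores))
print("Accuracy:", accuracy_score(y_test, test_preds))
```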

Benchmark features and options

Here's a summary list of the most important benchmark options/flags used in conjunction with the run_acs_benchmark command line script, or with the Benchmark class.

| Option | Description | Examples |
|:---|:---|:---:|
| `--model` | Name of the model on huggingface transformers, or local path to folder with pretrained model and tokenizer. Can also use web-hosted models with "[provider]/[model-name]". | `meta-llama/Meta-Llama-3-8B`, `openai/gpt-4o-mini` |
| `--task` | Name of the ACS task to run benchmark on. | `ACSIncome`, `ACSEmployment` |
| `--results-dir` | Path to directory under which benchmark results will be saved. | `results` |
| `--data-dir` | Root folder to find datasets in (or download ACS data to). | `~/data` |
| `--numeric-risk-prompting` | Whether to use verbalized numeric risk prompting, i.e., directly query model for a probability estimate. By default will use standard multiple-choice Q&A, and extract risk scores from internal token probabilities. | Boolean flag (True if present, False otherwise) |
| `--use-web-api-model` | Whether the given `--model` name corresponds to a web-hosted model or not. By default this is False (assumes a huggingface transformers model). If this flag is provided, `--model` must contain a litellm model identifier (examples here). | Boolean flag (True if present, False otherwise) |
| `--subsampling` | Which fraction of the dataset to use for the benchmark. By default will use the whole test set. | `0.01` |
| `--fit-threshold` | Whether to use the given number of samples to fit the binarization threshold. By default will use a fixed $t=0.5$ threshold instead of fitting on data. | `100` |
| `--batch-size` | The number of samples to process in each inference batch. Choose according to your available VRAM. | `10`, `32` |
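
For instance, a typical invocation combining several of these flags might look as follows (the model name is taken from the examples column; paths are illustrative):

```
run_acs_benchmark \
    --model 'meta-llama/Meta-Llama-3-8B' \
    --task ACSIncome \
    --results-dir results \
    --data-dir data \
    --numeric-risk-prompting \
    --subsampling 0.01 \
    --fit-threshold 100 \
    --batch-size 16
```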

Full list of options:

```
usage: run_acs_benchmark [-h] --model MODEL --results-dir RESULTS_DIR --data-dir DATA_DIR
                         [--task TASK] [--few-shot FEW_SHOT] [--batch-size BATCH_SIZE]
                         [--context-size CONTEXT_SIZE] [--fit-threshold FIT_THRESHOLD]
                         [--subsampling SUBSAMPLING] [--seed SEED] [--use-web-api-model]
                         [--dont-correct-order-bias] [--numeric-risk-prompting]
                         [--reuse-few-shot-examples] [--use-feature-subset USE_FEATURE_SUBSET]
                         [--use-population-filter USE_POPULATION_FILTER]
                         [--logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]

Benchmark risk scores produced by a language model on ACS data.

options:
  -h, --help            show this help message and exit
  --model MODEL         [str] Model name or path to model saved on disk
  --results-dir RESULTS_DIR
                        [str] Directory under which this experiment's results will be saved
  --data-dir DATA_DIR   [str] Root folder to find datasets on
  --task TASK           [str] Name of the ACS task to run the experiment on
  --few-shot FEW_SHOT   [int] Use few-shot prompting with the given number of shots
  --batch-size BATCH_SIZE
                        [int] The batch size to use for inference
  --context-size CONTEXT_SIZE
                        [int] The maximum context size when prompting the LLM
  --fit-threshold FIT_THRESHOLD
                        [int] Whether to fit the prediction threshold, and on how many samples
  --subsampling SUBSAMPLING
                        [float] Which fraction of the dataset to use (if omitted will use all data)
  --seed SEED           [int] Random seed -- to set for reproducibility
  --use-web-api-model   [bool] Whether to use a model hosted on a web API (instead of a local model)
  --dont-correct-order-bias
                        [bool] Whether to avoid correcting ordering bias, by default will correct it
  --numeric-risk-prompting
                        [bool] Whether to prompt for numeric risk-estimates instead of multiple-choice Q&A
  --reuse-few-shot-examples
                        [bool] Whether to reuse the same samples for few-shot prompting (or sample new ones every time)
  --use-feature-subset USE_FEATURE_SUBSET
                        [str] Optional subset of features to use for prediction, comma separated
  --use-population-filter USE_POPULATION_FILTER
                        [str] Optional population filter for this benchmark; must follow the format 'column_name=value' to filter the dataset by a specific value
  --logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
                        [str] The logging level to use for the experiment
```

Evaluating feature importance

By evaluating LLMs on tabular classification tasks, we can use standard feature importance methods to assess which features the model uses to compute risk scores.

You can do so yourself by calling folktexts.cli.eval_feature_importance (add --help for a full list of options).

Here's an example for the Llama3-70B-Instruct model on the ACSIncome task (warning: takes 24h on an Nvidia H100):

```
python -m folktexts.cli.eval_feature_importance --model 'meta-llama/Meta-Llama-3-70B-Instruct' --task ACSIncome --subsampling 0.1
```

*(Figure: feature importance for Llama 3 70B Instruct on ACSIncome.)*

This script uses sklearn's `permutation_importance` to assess which features contribute the most to the ROC AUC metric (other metrics can be assessed using the `--scorer [scorer]` parameter).
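
Under the hood this is plain scikit-learn functionality. A minimal sketch of such an evaluation, assuming the fitted classifier `clf` and test split from the snippets above, that `X_test` is a pandas DataFrame, and that `.predict_proba()` returns 1-D risk scores as in the earlier example (the packaged script may differ in its details):

```py
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score

# Custom scorer built on the sklearn-style interface described above:
# score a model by the ROC AUC of its risk scores on (X, y).
def roc_auc_scorer(estimator, X, y):
    return roc_auc_score(y, estimator.predict_proba(X))

result = permutation_importance(
    clf, X_test, y_test,
    scoring=roc_auc_scorer,   # measure how much AUC drops when each feature is shuffled
    n_repeats=5,
    random_state=42,
)
for name, importance in zip(X_test.columns, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```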

FAQ

1. Q: Can I use folktexts with a different dataset?

**A:** **Yes!** Folktexts provides the whole ML pipeline needed to produce risk scores using LLMs, together with a few example ACS datasets. You can easily apply these same utilities to a different dataset following the [example jupyter notebook](notebooks/custom-dataset-example.ipynb).

2. Q: How do I create a custom prediction task based on American Community Survey data?

**A:** Simply create a new `TaskMetadata` object with the parameters you want. Follow the [example jupyter notebook](notebooks/custom-acs-task-example.ipynb) for more details.

3. Q: Can I use folktexts with closed-source models?

**A:** **Yes!** We provide compatibility with local LLMs via [🤗 transformers](https://github.com/huggingface/transformers) and compatibility with web-hosted LLMs via [litellm](https://github.com/BerriAI/litellm). For example, you can use `--model='gpt-4o' --use-web-api-model` to use GPT-4o when calling the `run_acs_benchmark` script. [Here's a complete list](https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models) of compatible OpenAI models. Note that some models are not compatible as they don't enable access to log-probabilities.
Using models through a web API requires installing extra optional dependencies with `pip install 'folktexts[apis]'`. See the example invocation at the end of this FAQ.

4. Q: Can I use folktexts to fine-tune LLMs on survey prediction tasks?

**A:** The package does not feature specific fine-tuning functionality, but you can use the data and Q&A prompts generated by `folktexts` to fine-tune an LLM for a specific prediction task.

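
As a concrete illustration of FAQ 3, a hedged example of benchmarking a web-hosted model end to end; the model name comes from the FAQ above, and the remaining flags are the documented `run_acs_benchmark` options (paths are illustrative):

```
pip install 'folktexts[apis]'

run_acs_benchmark \
    --model 'gpt-4o' \
    --use-web-api-model \
    --task ACSIncome \
    --results-dir results \
    --data-dir data \
    --subsampling 0.01
```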

Citation

```bib
@inproceedings{cruz2024evaluating,
  title={Evaluating language models as risk scores},
  author={Andr\'{e} F. Cruz and Moritz Hardt and Celestine Mendler-D\"{u}nner},
  booktitle={The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2024},
  url={https://openreview.net/forum?id=qrZxL3Bto9}
}
```

License and terms of use

Code licensed under the MIT license.

The American Community Survey (ACS) Public Use Microdata Sample (PUMS) is governed by the U.S. Census Bureau terms of service.

Owner

  • Name: Social Foundations of Computation
  • Login: socialfoundations
  • Kind: organization
  • Email: sf-admin@is.mpg.de
  • Location: Germany

Max Planck Institute for Intelligent Systems, Tübingen

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - given-names: "André F"
    family-names: "Cruz"
  - given-names: "Moritz"
    family-names: "Hardt"
  - given-names: "Celestine"
    family-names: "Mendler-Dünner"
title: "Evaluating language models as risk scores"
preferred-citation:
  type: article
  authors:
    - given-names: "André F"
      family-names: "Cruz"
    - given-names: "Moritz"
      family-names: "Hardt"
    - given-names: "Celestine"
      family-names: "Mendler-Dünner"
  title: "Evaluating language models as risk scores"
  year: 2024
  journal: "The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track"
  url: "https://openreview.net/forum?id=qrZxL3Bto9"

GitHub Events

Total
  • Issues event: 2
  • Watch event: 17
  • Delete event: 5
  • Issue comment event: 3
  • Push event: 108
  • Pull request review event: 1
  • Pull request event: 22
  • Fork event: 4
  • Create event: 11
Last Year
  • Issues event: 2
  • Watch event: 17
  • Delete event: 5
  • Issue comment event: 3
  • Push event: 108
  • Pull request review event: 1
  • Pull request event: 22
  • Fork event: 4
  • Create event: 11

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 1
  • Total pull requests: 8
  • Average time to close issues: 21 days
  • Average time to close pull requests: about 20 hours
  • Total issue authors: 1
  • Total pull request authors: 4
  • Average comments per issue: 2.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 5
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 8
  • Average time to close issues: 21 days
  • Average time to close pull requests: about 20 hours
  • Issue authors: 1
  • Pull request authors: 4
  • Average comments per issue: 2.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 5
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • AndreFCruz (2)
  • mohitsharma29 (1)
Pull Request Authors
  • AndreFCruz (17)
  • celedue (2)
  • Ananya-Joshi (2)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 63 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 27
  • Total maintainers: 1
pypi.org: folktexts

Use LLMs to get classification risk scores on tabular tasks.

  • Documentation: https://folktexts.readthedocs.io/
  • License: MIT License Copyright (c) 2024 Social Foundations of Computation, at MPI-IS Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  • Latest release: 0.1.1
    published 11 months ago
  • Versions: 27
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 63 Last month
Rankings
Dependent packages count: 10.2%
Average: 38.6%
Dependent repos count: 67.0%
Maintainers (1)
Last synced: 6 months ago

Dependencies

.github/workflows/python-docs.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v4 composite
  • peaceiris/actions-gh-pages v3 composite
.github/workflows/python-publish.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v4 composite
  • pypa/gh-action-pypi-publish release/v1 composite
.github/workflows/python-tests.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v4 composite
pyproject.toml pypi
requirements/docs.txt pypi
  • myst-parser *
  • sphinx *
  • sphinx-autopackagesummary *
  • sphinx-copybutton *
  • sphinx_rtd_theme *
  • sphinxcontrib-bibtex *
  • sphinxemoji *
requirements/main.txt pypi
  • accelerate *
  • cloudpickle *
  • folktables *
  • matplotlib *
  • netcal *
  • numpy *
  • pandas *
  • protobuf *
  • scikit-learn *
  • seaborn *
  • sentencepiece *
  • torch *
  • tqdm *
  • transformers *
requirements/tests.txt pypi
  • coverage * test
  • flake8 * test
  • flake8-pyproject * test
  • mypy * test
  • pytest * test
requirements/cluster.txt pypi
  • htcondor *