symbolicai

A neurosymbolic perspective on LLMs

https://github.com/extensityai/symbolicai

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.1%) to scientific vocabulary

Keywords

large-language-models neurosymbolic-ai probabilistic-programming

Scientific Fields

Artificial Intelligence and Machine Learning Computer Science - 38% confidence
Last synced: 4 months ago

Repository

A neurosymbolic perspective on LLMs

Basic Info
  • Host: GitHub
  • Owner: ExtensityAI
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 29.4 MB
Statistics
  • Stars: 1,590
  • Watchers: 34
  • Forks: 76
  • Open Issues: 0
  • Releases: 64
Topics
large-language-models neurosymbolic-ai probabilistic-programming
Created about 3 years ago · Last pushed 4 months ago
Metadata Files
Readme Funding Citation

README.md

SymbolicAI: A neuro-symbolic perspective on LLMs

[![Documentation](https://img.shields.io/badge/Documentation-blue?style=for-the-badge)](https://extensityai.gitbook.io/symbolicai) [![Arxiv](https://img.shields.io/badge/Paper-32758e?style=for-the-badge)](https://arxiv.org/abs/2402.00854) [![DeepWiki](https://img.shields.io/badge/DeepWiki-yellow?style=for-the-badge)](https://deepwiki.com/ExtensityAI/symbolicai) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/dinumariusc.svg?style=social&label=@DinuMariusC)](https://twitter.com/DinuMariusC) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/symbolicapi.svg?style=social&label=@ExtensityAI)](https://twitter.com/ExtensityAI) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/futurisold.svg?style=social&label=@futurisold)](https://x.com/futurisold)

What is SymbolicAI?

SymbolicAI is a neuro-symbolic framework that combines classical Python programming with the differentiable, programmable nature of LLMs in a way that feels natural in Python. It's built not to stand in the way of your ambitions, and its modular design makes it easy to extend and customize: you can write your own engine, host an engine of your choice locally, or interface with tools like web search or image generation. To keep this README concise, we'll introduce the two key concepts that define SymbolicAI: primitives and contracts.

❗️NOTE❗️ The framework's name is intended to credit the foundational work of Allen Newell and Herbert Simon that inspired this project.

Primitives

At the core of SymbolicAI are Symbol objects—each one comes with a set of tiny, composable operations that feel like native Python.

```python
from symai import Symbol
```

Symbol comes in two flavours:

  1. Syntactic – behaves like a normal Python value (string, list, int, whatever you passed in).
  2. Semantic – is wired to the neuro-symbolic engine and therefore understands meaning and context.

Why is syntactic the default? Because Python operators (==, ~, &, …) are overloaded in symai. If we fired the engine for every bit-shift or comparison, code would be slow and could produce surprising side effects. Starting syntactic keeps things safe and fast; you opt in to semantics only where you need them.

How to switch to the semantic view

  1. At creation time

python S = Symbol("Cats are adorable", semantic=True) # already semantic print("feline" in S) # => True

  2. On demand with the .sem projection – the twin .syn flips you back:

python S = Symbol("Cats are adorable") # default = syntactic print("feline" in S.sem) # => True print("feline" in S) # => False

  3. Invoking dot-notation operations—such as .map() or any other semantic function—automatically switches the symbol to semantic mode:

```python
S = Symbol(['apple', 'banana', 'cherry', 'cat', 'dog'])
print(S.map('convert all fruits to vegetables'))
# => ['carrot', 'broccoli', 'spinach', 'cat', 'dog']
```

Because the projections return the same underlying object with just a different behavioural coat, you can weave complex chains of syntactic and semantic operations on a single symbol. Think of them as your building blocks for semantic reasoning. Right now, we support a wide range of primitives; check out the docs here, but here's a quick snack:

| Primitive/Operator | Category | Syntactic | Semantic | Description |
|--------------------|----------|:---------:|:--------:|-------------|
| == | Comparison | ✓ | ✓ | Tests for equality. Syntactic: literal match. Semantic: fuzzy/conceptual equivalence (e.g. 'Hi' == 'Hello'). |
| + | Arithmetic | ✓ | ✓ | Syntactic: numeric/string/list addition. Semantic: meaningful composition, blending, or conceptual merge. |
| & | Logical/Bitwise | ✓ | ✓ | Syntactic: bitwise/logical AND. Semantic: logical conjunction, inference, e.g., context merge. |
| symbol[index] = value | Iteration | ✓ | ✓ | Set item or slice. |
| .startswith(prefix) | String Helper | ✓ | ✓ | Check if a string starts with given prefix (in both modes). |
| .choice(cases, default) | Pattern Matching | | ✓ | Select best match from provided cases. |
| .foreach(condition, apply) | Execution Control | | ✓ | Apply action to each element. |
| .cluster(**clustering_kwargs?) | Data Clustering | | ✓ | Cluster data into groups semantically (uses sklearn's DBSCAN). |
| .similarity(other, metric?, normalize?) | Embedding | | ✓ | Compute similarity between embeddings. |
| ... | ... | ... | ... | ... |
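For instance, the comparison operator from the table can be used in both views on the same symbol, and the semantic-only .choice primitive can be called with the signature listed above. Treat this as a sketch rather than guaranteed output: the semantic results depend on the configured engine.

```python
from symai import Symbol

s = Symbol("Hello, World!")

# Syntactic by default: a literal comparison, no engine call.
print(s == "Hi there")        # => False

# Project into the semantic view for a fuzzy, engine-backed comparison ...
print(s.sem == "Hi there")    # => True (both are greetings)

# ... while the very same object still behaves like a plain string elsewhere.
print(s.startswith("Hello"))  # => True

# A semantic-only primitive: pick the best-fitting case for an input.
ticket = Symbol("My parcel arrived broken and I want my money back")
print(ticket.choice(cases=["refund request", "delivery delay", "product question"],
                    default="other"))  # => "refund request" (engine-dependent)
```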

Contracts

They say LLMs hallucinate—but your code can't afford to. That's why SymbolicAI brings Design by Contract principles into the world of LLMs. Instead of relying solely on post-hoc testing, contracts help build correctness directly into your design, everything packed into a decorator that operates on your defined data models and validation constraints:

```python
from symai import Expression
from symai.strategy import contract
from symai.models import LLMDataModel  # Compatible with Pydantic's BaseModel
from pydantic import Field, field_validator

# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
# Data models
# – clear structure + rich Field descriptions power
#   validation, automatic prompt templating & remedies
# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
class DataModel(LLMDataModel):
    some_field: some_type = Field(description="very descriptive field",
                                  and_other_supported_options_here="...")

    @field_validator('some_field')
    def validate_some_field(cls, v):
        # Custom basic validation logic can be added here too besides pre/post
        valid_opts = ['A', 'B', 'C']
        if v not in valid_opts:
            raise ValueError(f'Must be one of {valid_opts}, got "{v}".')
        return v

# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
# The contracted expression class
# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
@contract(
    # ── Remedies ───────────────────────────────────────────
    # pre_remedy=True,       # Try to fix bad inputs automatically
    post_remedy=True,        # Try to fix bad LLM outputs automatically
    accumulate_errors=True,  # Feed history of errors to each retry
    verbose=True,            # Nicely displays progress in terminal
    remedy_retry_params=dict(tries=3, delay=0.4, max_delay=4.0,
                             jitter=0.15, backoff=1.8, graceful=False),
)
class Agent(Expression):
    # High-level behaviour:
    # 0. prompt  – a *static* description of what the LLM must do (mandatory)
    # 1. pre     – sanity-check inputs (optional)
    # 2. act     – mutate state (optional)
    # 3. LLM     – generate expected answer (handled by SymbolicAI engine)
    # 4. post    – ensure answer meets semantic rules (optional)
    # 5. forward (mandatory)
    #    • if contract succeeded → return type-validated LLM object
    #    • else → graceful fallback answer
    # ...
```

Because we don't want to bloat this README file with long Python snippets, learn more about contracts here and here.
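For a feel of how the pieces fit together, here is a minimal, self-contained sketch of a contracted expression. The data models, prompt text, and method bodies are illustrative assumptions, and the contract_successful/contract_result attribute names follow the pattern used in the contract documentation; double-check the linked docs for the exact names and signatures in the current release:

```python
from pydantic import Field
from symai import Expression
from symai.models import LLMDataModel
from symai.strategy import contract


class Question(LLMDataModel):
    text: str = Field(description="A user question to be answered briefly.")


class Answer(LLMDataModel):
    answer: str = Field(description="A one-sentence answer to the question.")


@contract(post_remedy=True, verbose=False)
class QA(Expression):
    @property
    def prompt(self) -> str:
        # Static task description used for prompt templating.
        return "Answer the user's question in a single, factual sentence."

    def pre(self, input: Question) -> bool:
        # Reject empty questions before any LLM call is made.
        return len(input.text.strip()) > 0

    def post(self, output: Answer) -> bool:
        # Enforce the "one sentence" rule on the generated answer.
        return output.answer.count(".") <= 1

    def forward(self, input: Question, **kwargs) -> Answer:
        if self.contract_successful:
            return self.contract_result        # validated LLM output
        return Answer(answer="I don't know.")  # graceful fallback


# Usage (requires a configured neurosymbolic engine):
# qa = QA()
# print(qa(input=Question(text="What is the capital of Austria?")).answer)
```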

Installation

Core Features

To get started with SymbolicAI, you can install it using pip:

```bash
pip install symbolicai
```

Alternatively, clone the repository and set up a Python virtual environment using uv:

```bash
git clone git@github.com:ExtensityAI/symbolicai.git
cd symbolicai
uv sync --python x.xx
source ./.venv/bin/activate
```

Running symconfig will now use this Python environment.

Optional Features

SymbolicAI uses multiple engines to process text, speech and images. We also include search engine access to retrieve information from the web. To use all of them, you will also need to install the following dependencies and assign the API keys to the respective engines, e.g.:

bash pip install "symbolicai[hf]", pip install "symbolicai[llamacpp]", pip install "symbolicai[bitsandbytes]", pip install "symbolicai[wolframalpha]", pip install "symbolicai[whisper]", pip install "symbolicai[webscraping]", pip install "symbolicai[serpapi]", pip install "symbolicai[services]", pip install "symbolicai[solver]"

Or, install all optional dependencies at once:

bash pip install "symbolicai[all]"

To install dependencies exactly as locked in the provided lock file:

```bash
uv sync --frozen
```

To install optional extras via uv:

```bash
uv sync --extra all          # all optional extras
uv sync --extra webscraping  # only webscraping
```

❗️NOTE❗️ Some of these optional dependencies may require additional installation steps, and some are currently only experimentally supported and may not work as expected. If a feature is extremely important to you, please consider contributing to the project or reaching out to us.

Configuration Management

SymbolicAI now features a configuration management system with priority-based loading. The configuration system looks for settings in three different locations, in order of priority:

  1. Debug Mode (Current Working Directory)

    • Highest priority
    • Only applies to symai.config.json
    • Useful for development and testing
  2. Environment-Specific Config (Python Environment)

    • Second priority
    • Located in {python_env}/.symai/
    • Ideal for project-specific settings
  3. Global Config (Home Directory)

    • Lowest priority
    • Located in ~/.symai/
    • Default fallback for all settings

Configuration Files

The system manages three main configuration files:

  • symai.config.json: Main SymbolicAI configuration
  • symsh.config.json: Shell configuration
  • symserver.config.json: Server configuration

Viewing Your Configuration

Before using symai, we recommend inspecting your current configuration setup with the command below. It will also trigger the initial package caching and initialize the symbolicai configuration files.

```bash
symconfig

UserWarning: No configuration file found for the environment. A new configuration file has been created at /.symai/symai.config.json. Please configure your environment.
```

You must then edit the symai.config.json file; a neurosymbolic engine is required to use the symai package. Read more about how to use a neuro-symbolic engine here.

This command will show:

  • All configuration locations
  • Active configuration paths
  • Current settings (with sensitive data truncated)

Configuration Priority Example

```console
my_project/                  # Debug mode (highest priority)
└── symai.config.json        # Only this file is checked in debug mode

{python_env}/.symai/         # Environment config (second priority)
├── symai.config.json
├── symsh.config.json
└── symserver.config.json

~/.symai/                    # Global config (lowest priority)
├── symai.config.json
├── symsh.config.json
└── symserver.config.json
```

If a configuration file exists in multiple locations, the system will use the highest-priority version. If the environment-specific configuration is missing or invalid, the system will automatically fall back to the global configuration in the home directory.
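The lookup can be pictured roughly as follows. This is an illustrative sketch of the priority order described above, not the framework's actual implementation, and it uses sys.prefix as a stand-in for {python_env}:

```python
import sys
from pathlib import Path


def resolve_config(name: str = "symai.config.json") -> Path | None:
    """Return the highest-priority existing config file, or None."""
    candidates = []
    if name == "symai.config.json":
        candidates.append(Path.cwd() / name)               # 1. debug mode (cwd), this file only
    candidates.append(Path(sys.prefix) / ".symai" / name)  # 2. environment-specific config
    candidates.append(Path.home() / ".symai" / name)       # 3. global fallback in the home dir
    for path in candidates:
        if path.is_file():
            return path
    return None


print(resolve_config())  # the first existing candidate wins
```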

Best Practices

  • Use the global config (~/.symai/) for your default settings
  • Use environment-specific configs for project-specific settings
  • Use debug mode (current directory) for development and testing
  • Run symconfig to inspect your current configuration setup

Configuration File

You can specify engine properties in a symai.config.json file in your project path. This will replace the environment variables. Example of a configuration file with all engines enabled:

```json
{
    "NEUROSYMBOLIC_ENGINE_API_KEY": "<OPENAI_API_KEY>",
    "NEUROSYMBOLIC_ENGINE_MODEL": "gpt-4o",
    "SYMBOLIC_ENGINE_API_KEY": "<WOLFRAMALPHA_API_KEY>",
    "SYMBOLIC_ENGINE": "wolframalpha",
    "EMBEDDING_ENGINE_API_KEY": "<OPENAI_API_KEY>",
    "EMBEDDING_ENGINE_MODEL": "text-embedding-3-small",
    "SEARCH_ENGINE_API_KEY": "<PERPLEXITY_API_KEY>",
    "SEARCH_ENGINE_MODEL": "sonar",
    "TEXT_TO_SPEECH_ENGINE_API_KEY": "<OPENAI_API_KEY>",
    "TEXT_TO_SPEECH_ENGINE_MODEL": "tts-1",
    "INDEXING_ENGINE_API_KEY": "<PINECONE_API_KEY>",
    "INDEXING_ENGINE_ENVIRONMENT": "us-west1-gcp",
    "DRAWING_ENGINE_API_KEY": "<OPENAI_API_KEY>",
    "DRAWING_ENGINE_MODEL": "dall-e-3",
    "VISION_ENGINE_MODEL": "openai/clip-vit-base-patch32",
    "OCR_ENGINE_API_KEY": "<APILAYER_API_KEY>",
    "SPEECH_TO_TEXT_ENGINE_MODEL": "turbo",
    "SPEECH_TO_TEXT_API_KEY": "",
    "SUPPORT_COMMUNITY": true
}
```

With these steps completed, you should be ready to start using SymbolicAI in your projects.
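As a quick smoke test once your neurosymbolic engine is configured, something like the following should work. It reuses only the primitives shown earlier; the exact outputs depend on the model you configured above:

```python
from symai import Symbol

# Semantic symbols route their operations through the engine configured above.
s = Symbol("Cats are adorable", semantic=True)
print("feline" in s)  # => True, an engine-backed containment check

fruits = Symbol(['apple', 'banana', 'cherry'])
print(fruits.map('pluralize each word'))  # e.g. ['apples', 'bananas', 'cherries']
```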

❗️NOTE❗️ Our framework allows you to support us in training models for local usage by enabling the data collection feature. On application startup we show the terms of service, and you can activate or deactivate this community feature. We do not share or sell your data to third parties and only use it for research purposes and to improve your user experience. To change this setting, set the SUPPORT_COMMUNITY property to true/false in symai.config.json or via the respective environment variable.

❗️NOTE❗️By default, the user warnings are enabled. To disable them, export SYMAI_WARNINGS=0 in your environment variables.
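If you prefer doing this from Python rather than the shell, setting the variable before the first symai import should have the same effect (this assumes the flag is read at import time; the shell export above is the documented route):

```python
import os

# Must be set before symai is imported for the first time.
os.environ["SYMAI_WARNINGS"] = "0"

import symai  # noqa: E402  (import intentionally placed after the environment tweak)
```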

Running tests

Some examples of running tests locally:

```bash
# Run all tests
pytest tests

# Run mandatory tests
pytest -m mandatory
```

Be sure to have your configuration set up correctly before running the tests. You can also run the tests with coverage to see how much of the code is covered by tests:

```bash
pytest --cov=symbolicai tests
```

🪜 Next Steps

Now, there are tools like DeepWiki that provide better documentation than we could ever write, and we don’t want to compete with that; we'll correct it where it's plain wrong. Please go read SymbolicAI's DeepWiki page. There's a lot of interesting stuff in there. Last but not least, check out our paper that describes the framework in detail. If you like watching videos, we have a series of tutorials that you can find here.

📜 Citation

```bibtex
@software{Dinu_SymbolicAI_2022,
  author = {Dinu, Marius-Constantin},
  editor = {Leoveanu-Condrei, Claudiu},
  title  = {{SymbolicAI: A Neuro-Symbolic Perspective on Large Language Models (LLMs)}},
  url    = {https://github.com/ExtensityAI/symbolicai},
  month  = {11},
  year   = {2022}
}
```

📝 License

This project is licensed under the BSD-3-Clause License - refer to the docs.

Like this Project?

If you appreciate this project, please leave a star ⭐️ and share it with friends and colleagues. To support the ongoing development of this project even further, consider donating. Thank you!


We are also seeking contributors or investors to help grow and support this project. If you are interested, please reach out to us.

📫 Contact

Feel free to contact us with any questions about this project via email, through our website, or find us on Discord.

Owner

  • Name: ExtensityAI
  • Login: ExtensityAI
  • Kind: organization
  • Email: office@extensity.ai
  • Location: Austria

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Dinu"
  given-names: "Marius-Constantin"
  orcid: "https://orcid.org/0000-0002-7518-3337"
editors:
- family-names: "Leoveanu-Condrei"
  given-names: "Claudiu"
title: "SymbolicAI: A Neuro-Symbolic Perspective on Large Language Models (LLMs)"
date-released: 2022-11-30
url: "https://github.com/Xpitfire/symbolicai"

GitHub Events

Total
  • Create event: 29
  • Release event: 24
  • Issues event: 3
  • Watch event: 425
  • Delete event: 1
  • Issue comment event: 7
  • Push event: 278
  • Pull request review comment event: 3
  • Pull request review event: 3
  • Pull request event: 14
  • Fork event: 23
Last Year
  • Create event: 29
  • Release event: 24
  • Issues event: 3
  • Watch event: 425
  • Delete event: 1
  • Issue comment event: 7
  • Push event: 278
  • Pull request review comment event: 3
  • Pull request review event: 3
  • Pull request event: 14
  • Fork event: 23

Committers

Last synced: over 1 year ago

All Time
  • Total Commits: 675
  • Total Committers: 9
  • Avg Commits per committer: 75.0
  • Development Distribution Score (DDS): 0.124
Past Year
  • Commits: 549
  • Committers: 7
  • Avg Commits per committer: 78.429
  • Development Distribution Score (DDS): 0.144
Top Committers
Name Email Commits
Xpitfire d****n@h****m 591
leoentersthevoid c****v@i****m 60
Claudiu Leoveanu l****c@a****m 9
Marius-Constantin Dinu m****s@a****n 4
Robin Goupil g****n@g****m 4
Markus Hofmarcher m****r@g****m 3
Markus Hofmarcher m****s@e****i 2
Ikko Eltociear Ashimine e****r@g****m 1
Tyson Wynne 3****0 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 4 months ago

Packages

  • Total packages: 3
  • Total downloads:
    • pypi 2,360 last-month
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 1
    (may contain duplicates)
  • Total versions: 238
  • Total maintainers: 2
proxy.golang.org: github.com/ExtensityAI/symbolicai
  • Versions: 64
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 7.0%
Last synced: 4 months ago
proxy.golang.org: github.com/extensityai/symbolicai
  • Versions: 64
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 7.0%
Last synced: 4 months ago
pypi.org: symbolicai

A Neurosymbolic Perspective on Large Language Models

  • Versions: 110
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 2,360 Last month
Rankings
Downloads: 4.8%
Dependent packages count: 10.1%
Average: 12.2%
Dependent repos count: 21.6%
Maintainers (2)
Last synced: 4 months ago

Dependencies

pyproject.toml pypi
  • PyPDF2 *
  • accelerate *
  • beautifulsoup4 *
  • google-search-results *
  • ipython *
  • natsort *
  • numpy *
  • openai *
  • pandas *
  • pinecone-client *
  • python-box *
  • pyyaml *
  • rpyc *
  • scikit-learn *
  • selenium *
  • sentencepiece *
  • torch *
  • torchvision *
  • tqdm *
  • transformers *
  • webdriver-manager *
  • whisper *
  • wolframalpha *
docs/requirements.txt pypi
  • autodoc-pydantic ==1.8.0
  • autodoc-pydantic *
  • myst-nb ==0.17.2
  • myst_parser ==0.18.1
  • sphinx ==4.5.0
  • sphinx-autodocgen ==1.3
  • sphinx-book-theme ==1.0.1
  • sphinx-code-include ==1.1.1
  • sphinx-rst-builder ==0.0.3
  • sphinx_panels ==0.6.0
  • symbolicai *
  • toml ==0.10.2
setup.py pypi
environment.yml pypi
  • symbolicai *
  • sympy *