candela
A project developing a novel framework to enhance Large Language Model (LLM) reliability and transparency using a human-readable directive scaffold and blockchain anchoring for verifiable behaviour governance.
Science Score: 57.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ✓ DOI references (found 2 DOI reference(s) in README)
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 11.8%, to scientific vocabulary)
Repository
Basic Info
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 1
Metadata Files
README.md
CANDELA
Compliant Auditable Natural-language Directive Enforcement & Ledger Anchoring
Illuminating AI: An Introduction to CANDELA
Large Language Models (LLMs) are transforming the way we write, code, and search. Their power is undeniable—and so are the challenges: hallucinations, instruction drift, and opaque internals.
CANDELA addresses this with an external software layer—the Directive Guardian—that enforces a clear, human-readable, machine-parsable Directive Scaffold.
How CANDELA keeps rules consistent and verifiable
- Verifiable rule-set integrity. Before any LLM interaction, the directive scaffold (stored as `src/directives_schema.json`) is hashed with SHA-256. The digest is anchored on a public blockchain (a testnet for the PoC), making the rule-set transparent, tamper-evident, and publicly verifiable.
- Runtime verification & guided output. The Guardian loads its local copy, recomputes the hash, and verifies it against the on-chain value. Only if they match does it proceed to guide the LLM's output.
- Automated checks (selective). After generation, the Guardian can evaluate outputs against microdirectives—small, testable rules derived from the scaffold—used only where they add clarity.
- Transparent audit trails. Optionally anchor interaction-specific hashes for provenance when needed.
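The verify-then-proceed gate described above can be sketched in a few lines of Python. This is a minimal illustration, not the Guardian's actual implementation: the function names are hypothetical, and `ANCHORED_DIGEST` stands in for the value the Guardian would read from the on-chain record referenced in `docs/ANCHORS.md`.

```python
import hashlib
from pathlib import Path

# Placeholder for the digest anchored on-chain (see docs/ANCHORS.md).
ANCHORED_DIGEST = "expected-sha256-hex-digest"

def compute_scaffold_digest(path: str = "src/directives_schema.json") -> str:
    """Hash the directive scaffold exactly as stored on disk."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_scaffold(path: str = "src/directives_schema.json") -> bool:
    """Gate: only guide the LLM if the local copy matches the anchor."""
    return compute_scaffold_digest(path) == ANCHORED_DIGEST
```

Because the comparison happens before every interaction, any tampering with the local scaffold makes the digests diverge and the gate fails closed.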
Key features
- Human-readable directives. Define and manage model behaviour with clear, interpretable rules that are easy to audit.
- Blockchain anchoring. Secure directive updates with public anchoring for transparency and tamper resistance.
- Verifiable governance. Enable collaborative, decentralised oversight of model behaviour.
- Python implementation. Accessible, extensible, and easy to integrate.
By bridging the gap between human oversight and machine intelligence, CANDELA aims to raise the standard for responsible AI development. Whether you are an AI researcher, developer, or an organisation seeking robust governance over LLM behaviours, CANDELA provides tools to establish trust, transparency, and accountability.
From Rule-Checker to Anti-Slop Quality-Token Engine
The new digital pollution. LLMs can produce vast quantities of low-effort text (AI slop) that clog search, poison future training data, and crowd out human craft. Spam filters catch some abuse; few systems reward the opposite: careful, human-authored work that meets rigorous standards.
Why CANDELA is the missing piece. Built for platforms, publishers, and researchers who need auditable, tamper-evident rules for LLM behaviour. CANDELA’s insight: outputs can be provably measured against an immutable rule-set.
Proof you can check now
1) View the canonical hash and transaction in `docs/ANCHORS.md`.
2) Recompute locally:

```bash
sha256sum src/directives_schema.json
# macOS:
shasum -a 256 src/directives_schema.json
```

3) Compare to the value in `docs/ANCHORS.md` (and the linked on-chain record).
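If neither `sha256sum` nor `shasum` is available, the same digest can be recomputed with Python's standard library. A minimal, cross-platform sketch (the streaming loop is just one reasonable way to handle large files):

```python
import hashlib

def sha256_hex(path: str) -> str:
    """Stream a file through SHA-256 so it need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Call `sha256_hex("src/directives_schema.json")` from the repository root and compare the result to the value in `docs/ANCHORS.md`.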
That infrastructure enables pass/fail gating. The next step is incentives: reward high-quality human work; make slop economically self-defeating.
Implementation roadmap (high level)
- P-0 — Guardian numeric output + scoring weights — ✅
- P-1 — AI-contamination plug-in interface — ⚙ in progress
- P-2 — Token scaffolding (mint/burn) on Sepolia — ⚙ in progress
- P-3 — `candela-claim` CLI (local verify → record) — ⚙ in progress
- P-4 — Pilot cohort + public leaderboard — planned
- P-5 — DAO / multisig governance — planned
Clone, test, reproduce
~~~bash
git clone https://github.com/jebus197/CANDELA && cd CANDELA
python3 -m pytest tests
~~~
New here? Read GETTING_Started.md for a 10-minute walkthrough.
Project Overview
CANDELA develops a framework to enhance LLM reliability and transparency via a human-readable directive scaffold and blockchain anchoring for verifiable behavioural governance.
Features
- Directive Scaffold: establish and enforce clear, human-readable rules for LLM behaviour.
- Blockchain Anchoring: record behavioural updates and directives on a blockchain for auditability and tamper-resistance.
- Transparency: every change is visible and verifiable, supporting open governance and trust.
- Python-Based: 100% Python implementation for easy integration and extensibility.
Getting Started
- Clone the repository
~~~bash
git clone https://github.com/jebus197/CANDELA.git
cd CANDELA
~~~
- Set up your environment
- Python 3.8+
- (Optional) virtual environment:
~~~bash
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
~~~
- Install dependencies:
~~~bash
pip install -r requirements.txt
~~~
- Run CANDELA
- See the `docs/` folder and the Quick-Start for details.
Contributing
See CONTRIBUTING.md.
Licence
MIT — see LICENSE.
Owner
- Login: jebus197
- Kind: user
- Repositories: 1
- Profile: https://github.com/jebus197
Citation (CITATION.cff)
# CITATION.cff — CANDELA v0.2.0 (released 2025-08-05)
cff-version: 1.2.0
type: software
message: >
If you use or refer to the CANDELA project in your research,
please cite it using the metadata from this file.
title: "CANDELA: Compliant Auditable Natural-language Directive Enforcement & Ledger Anchoring"
version: "0.2.0"
date-released: "2025-08-05"
doi: 10.17605/OSF.IO/3S7BT
authors:
- family-names: Jackson
given-names: George
# orcid: "https://orcid.org/XXXX-XXXX-XXXX-XXXX"
abstract: >
CANDELA is an open-source framework that enhances the transparency and
auditability of large-language-model outputs via directive enforcement
and blockchain anchoring.
keywords:
- artificial intelligence
- large language models
- ai governance
- blockchain
- auditability
- responsible ai
license: MIT
identifiers:
- type: doi
value: 10.17605/OSF.IO/3S7BT
description: "Persistent OSF registration for CANDELA v0.2.0"
repository-code: "https://github.com/jebus197/CANDELA"
GitHub Events
Total
- Create event: 3
- Commit comment event: 1
- Release event: 1
- Push event: 79
Last Year
- Create event: 3
- Commit comment event: 1
- Release event: 1
- Push event: 79
Dependencies
- python-dotenv >=1.0
- requests >=2.0
- web3 >=6.10
- fastapi *
- pydantic *
- uvicorn *
- web3.py *