symbolic-governed-mistral-artifact
Tier-10 sealed governance artifact for Mistral-7B with exact-match benchmarks and symbolic verifier.
https://github.com/xxbudxx/symbolic-governed-mistral-artifact
Science Score: 44.0%
This score indicates how likely the project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ○ Academic publication links: not found
- ○ Academic email domains: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (4.4%) to scientific vocabulary
Basic Info
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Symbolic-Governed Mistral-7B — Governance Artifact
A governance model built on Mistral-7B-Instruct-v0.2, wrapped in a symbolic logic layer that postprocesses its outputs.
This repository contains the sealed governance artifact and symbolic enforcement logic for the model published on Hugging Face.
⚖️ Governance Properties
- Tier 10 symbolic freeze (post-deployment immutability)
- Symbolic output postprocessor with contradiction tracking (see the sketch after this list)
- No fine-tuning; base weights untouched
- Truth-lock propagation & modal coherence enforcement
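The contradiction-tracking idea can be pictured with a small sketch. The class below is purely illustrative: it is not the sealed logic in symbolic_postprocessor_locked.py, and all names are hypothetical. It simply records what the model has asserted and flags any later statement that negates an earlier one.

```python
# Illustrative only: a toy contradiction tracker, NOT the sealed logic in
# symbolic_postprocessor_locked.py. All names here are hypothetical.
class ContradictionTracker:
    def __init__(self):
        self.asserted = set()  # propositions the model has committed to
        self.denied = set()    # propositions the model has negated

    def check(self, proposition: str, negated: bool = False) -> bool:
        """Record a statement and return True if it contradicts the record."""
        if negated:
            conflict = proposition in self.asserted
            self.denied.add(proposition)
        else:
            conflict = proposition in self.denied
            self.asserted.add(proposition)
        return conflict


tracker = ContradictionTracker()
print(tracker.check("3 + 4 = 7"))                # False: first assertion
print(tracker.check("3 + 4 = 7", negated=True))  # True: contradicts the record
```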
📊 Symbolic Benchmarks (Exact Match)
| Task       | Score |
|------------|-------|
| ARC        | 100.0 |
| MMLU       | 100.0 |
| TruthfulQA | 100.0 |
| BBH        | 100.0 |
| IFEval     | 100.0 |
Benchmarks were run using symbolic simulation with symbolic_postprocessor_locked.py, ensuring logic-complete, deterministic outputs.
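For clarity, exact match here means the postprocessed answer must equal the reference string verbatim. A minimal scorer along those lines (illustrative only, not shipped in this repository) might look like:

```python
def exact_match_score(predictions, references):
    """Percentage of predictions identical to their reference
    after trimming surrounding whitespace."""
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)


print(exact_match_score(["7", "Paris"], ["7", "Paris"]))  # 100.0
```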
📦 Hugging Face Model
The executable model is hosted here:
➡️ https://huggingface.co/xbud/symbolic-governed-mistral
That artifact includes:
- Preconfigured symbolic postprocessor
- Benchmark results
- Governance rationale
- License (Apache 2.0)
🧠 How It Works
The model uses:
- AutoModelForCausalLM (base Mistral-7B)
- SymbolicPostProcessor (post-output logic filter)
```python
from symbolic_postprocessor_locked import SymbolicPostProcessor

# Wraps the base Mistral-7B-Instruct model (loaded internally via
# AutoModelForCausalLM) in the symbolic post-output logic filter.
model = SymbolicPostProcessor(model_name="mistralai/Mistral-7B-Instruct-v0.2")

print(model.generate("What is 3 + 4?"))  # → '7'
```
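The internals of SymbolicPostProcessor are sealed, but the pattern named above (a base causal LM whose decoded text passes through a logic filter before being returned) can be sketched as below. Only the transformers classes are real; SymbolicWrapper and its methods are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


class SymbolicWrapper:
    """Illustrative wrap-then-filter sketch; the shipped
    SymbolicPostProcessor is sealed and may differ."""

    def __init__(self, model_name: str):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)

    def generate(self, prompt: str, max_new_tokens: int = 32) -> str:
        inputs = self.tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            output_ids = self.model.generate(**inputs, max_new_tokens=max_new_tokens)
        text = self.tokenizer.decode(output_ids[0], skip_special_tokens=True)
        return self._postprocess(text)

    def _postprocess(self, text: str) -> str:
        # Placeholder for the symbolic layer: contradiction tracking,
        # truth-lock propagation, and modal coherence checks would run here.
        return text.strip()
```

In this pattern, all governance behavior lives in the postprocessing step, which is what keeps the base weights untouched.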
Owner
- Login: xxbudxx
- Kind: user
- Repositories: 1
- Profile: https://github.com/xxbudxx
Citation (CITATION.cff)
cff-version: 1.2.0
title: Symbolic-Governed Mistral-7B — Governance Artifact
message: "If you use this governance artifact, please cite this repository."
authors:
  - family-names: Budxx
    given-names: XX
date-released: 2024-06-27
version: "1.0"
license: Apache-2.0
url: https://github.com/xxbudxx/symbolic-governed-mistral-artifact
repository-code: https://github.com/xxbudxx/symbolic-governed-mistral-artifact
GitHub Events
Total
- Push event: 3
- Create event: 1
Last Year
- Push event: 3
- Create event: 1
Dependencies
- torch >=2.0
- transformers >=4.31
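To install the pinned dependencies (standard pip syntax; adjust to your environment):

```
pip install "torch>=2.0" "transformers>=4.31"
```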