QuantumVE
Vision Transformer embeddings enable scalable quantum SVMs with real-world accuracy gains.
Science Score: 54.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 10.5%, to scientific vocabulary)
Keywords
Repository
Vision Transformer embeddings enable scalable quantum SVMs with real-world accuracy gains.
Basic Info
Statistics
- Stars: 3
- Watchers: 2
- Forks: 5
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
QuantumVE: Quantum-Transformer Advantage Boost Over Classical ML
Breaking Discovery: Vision Transformer embeddings unlock quantum machine learning advantage! First systematic proof that embedding choice determines quantum kernel success, revealing fundamental synergy between transformer attention and quantum feature spaces.
- 📂 GitHub Repository: QuantumVE
- 📄 Research Paper: Embedding-Aware Quantum-Classical SVMs for Scalable Quantum Machine Learning
- 💻 Dataset on HuggingFace: QuantumEmbeddings
- Demo: Colab
🎯 Breakthrough Results
- 8.02% accuracy improvement on Fashion-MNIST vs classical SVMs
- 4.42% boost on MNIST dataset
- First evidence that ViT embeddings enable quantum advantage while CNN features show degradation
- 16-qubit tensor network simulation via cuTensorNet proving scalability
- Class-balanced k-means distillation for efficient quantum processing
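The class-balanced k-means distillation step mentioned above can be sketched as follows. This is a minimal illustration of the general technique (per-class k-means, keeping the real sample nearest each centroid), not the repository's exact implementation; the function name and parameters are hypothetical:

```python
import numpy as np

def distill_class_balanced(X, y, per_class=8, iters=20, seed=0):
    """Sketch of class-balanced k-means distillation (illustrative, not the repo's code).

    For each class, run k-means with `per_class` centroids over that class's
    samples, then keep the real sample nearest to each centroid. The result is
    a small, class-balanced subset suitable for expensive quantum kernels.
    """
    rng = np.random.default_rng(seed)
    keep = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        Xc = X[idx]
        # Initialise centroids from random samples of this class
        cent = Xc[rng.choice(len(Xc), per_class, replace=False)]
        for _ in range(iters):
            # Assign each sample to its nearest centroid
            d = ((Xc[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
            assign = d.argmin(1)
            for k in range(per_class):
                m = assign == k
                if m.any():  # keep old centroid if its cluster is empty
                    cent[k] = Xc[m].mean(0)
        # Keep the nearest real sample to each centroid (deduplicated)
        d = ((Xc[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
        keep.extend(idx[np.unique(d.argmin(0))])
    keep = np.array(sorted(keep))
    return X[keep], y[keep]
```

Because duplicates are removed, each class contributes at most `per_class` samples, keeping the distilled set balanced by construction.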
Project Architecture
```
QuantumVE/
├── data_processing/   # Class-balanced k-means distillation procedures
├── embeddings/        # Vision Transformer & CNN embedding extraction
├── qve/               # Core quantum-classical modules and utilities
└── scripts/           # Experimental pipelines with cross-validation
    ├── classical_baseline.py         # Traditional SVM benchmarks
    ├── cross_validation_baseline.py  # Cross-validation framework
    └── qsvm_cuda_embeddings.py       # Our embedding-aware quantum method
```
🚀 Quick Start
1. Environment Setup
```bash
# Create conda environment
conda create -n QuantumVE python=3.11 -y
conda activate QuantumVE

# Clone and install
git clone https://github.com/sebasmos/QuantumVE.git
cd QuantumVE
pip install -e .

# For Ryzen devices - install MPI
conda install -c conda-forge mpi4py openmpi
```
2. Download Pre-computed Embeddings
MNIST Embeddings:
```bash
mkdir -p data && \
wget https://huggingface.co/datasets/sebasmos/QuantumEmbeddings/resolve/main/mnist_embeddings.zip && \
unzip mnist_embeddings.zip -d data && \
rm mnist_embeddings.zip
```
Fashion-MNIST Embeddings:
```bash
mkdir -p data && \
wget https://huggingface.co/datasets/sebasmos/QuantumEmbeddings/resolve/main/fashionmnist_embeddings.zip && \
unzip fashionmnist_embeddings.zip -d data && \
rm fashionmnist_embeddings.zip
```
3. Run Experiments
Single Node:
```bash
# Classical baseline with cross-validation
python scripts/classical_baseline.py

# Cross-validation framework
python scripts/cross_validation_baseline.py

# Our embedding-aware quantum method
python scripts/qsvm_cuda_embeddings.py
```
Multi-Node with MPI:
```bash
# Run with 2 processes
mpirun -np 2 python scripts/qsvm_cuda_embeddings.py
mpirun -np 2 python scripts/cross_validation_baseline.py
```
🔬 What Makes This Work?
Our key insight: embedding choice is critical for quantum advantage. While CNN features degrade in quantum systems, Vision Transformer embeddings create a unique synergy with quantum feature spaces, enabling measurable performance gains:
- Class-balanced distillation reduces quantum overhead while preserving critical patterns
- ViT attention mechanisms align naturally with quantum superposition states
- Tensor network simulation scales to practical problem sizes (16+ qubits)
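To make the kernel side of this concrete, here is a toy fidelity quantum kernel using a simple one-qubit-per-feature angle encoding. This is an illustrative stand-in, not the paper's actual circuit or its cuTensorNet backend; for this product encoding the state overlap has a closed form:

```python
import numpy as np

def fidelity_kernel(X, Z):
    """Fidelity kernel |<phi(x)|phi(z)>|^2 for a product angle-encoding map.

    Each feature x_j is encoded on its own qubit as cos(x_j)|0> + sin(x_j)|1>,
    so the overlap factorises: K(x, z) = prod_j cos^2(x_j - z_j).
    Illustrative only; the paper's circuit and simulator differ.
    """
    diff = X[:, None, :] - Z[None, :, :]       # pairwise feature differences
    return np.prod(np.cos(diff) ** 2, axis=-1)  # fidelity per sample pair
```

The resulting Gram matrix is symmetric, has unit diagonal, and is positive semidefinite, so it can be fed directly to any precomputed-kernel SVM. The geometry of the input embedding directly shapes this matrix, which is the intuition behind embedding-aware kernel choice.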
🤝 Contributing
We welcome contributions! Help us advance quantum machine learning:
- Fork the QuantumVE repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Submit a pull request with a detailed description

Areas for contribution:
- New embedding architectures (BERT, CLIP, etc.)
- Additional quantum backends
- Performance optimizations
- Documentation improvements
🙏 Acknowledgements
This work was supported by the Google Cloud Research Credits program under award number GCP19980904.
📄 License
This project is distributed under the CC BY-NC-SA 4.0 license, as declared in CITATION.cff.
📚 Citation
Paper
```bibtex
@article{Cajas2024_QuantumVE,
  title={Embedding-Aware Quantum-Classical SVMs for Scalable Quantum Machine Learning},
  author={Cajas Ordóñez, Sebastián Andrés and Torres Torres, Luis and Bifulco, Mario and Duran, Carlos and Bosch, Cristian and Simón Carbajo, Ricardo},
  journal={arXiv preprint arXiv:2508.00024},
  year={2024},
  url={https://arxiv.org/abs/2508.00024}
}
```
Owner
- Name: Sebastian Cajas
- Login: sebasmos
- Kind: user
- Location: Paris
- Company: Université de Bordeaux - UdB
- Website: sebasmos.github.io
- Repositories: 7
- Profile: https://github.com/sebasmos
Citation (CITATION.cff)
```yaml
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.3.0
title: >-
  QuantumVE
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - family-names: "Cajas Ordóñez"
    given-names: "Sebastián Andrés"
    alias: sebasmos
  - family-names: "Torres Torres"
    given-names: "Luis"
  - family-names: "Bifulco"
    given-names: "Mario"
  - family-names: "Duran"
    given-names: "Carlos"
  - family-names: "Bosch"
    given-names: "Cristian"
  - family-names: "Simón Carbajo"
    given-names: "Ricardo"
keywords:
  - qsvm
repository-code: 'https://github.com/sebasmos/QuantumVE'
license: CC BY-NC-SA 4.0
date-released: '2025-07-28'
```
GitHub Events
Total
- Watch event: 4
- Member event: 1
- Push event: 23
- Fork event: 2
- Create event: 1
Last Year
- Watch event: 4
- Member event: 1
- Push event: 23
- Fork event: 2
- Create event: 1