https://github.com/cosmaadrian/acumen-compressor

Order-agnostic lossless compressor using BPE and Huffman Coding.

Science Score: 23.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, scholar.google
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.0%) to scientific vocabulary

Keywords

byte-pair-encoding compression huffman-coding lossless
Last synced: 5 months ago

Repository

Order-agnostic lossless compressor using BPE and Huffman Coding.

Basic Info
  • Host: GitHub
  • Owner: cosmaadrian
  • License: mit
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 24.4 KB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
byte-pair-encoding compression huffman-coding lossless
Created over 1 year ago · Last pushed over 1 year ago
Metadata Files
Readme License

README.md

Acumen 🗜️ Compressor 🗜️

Coded with love and coffee ☕ by Adrian Cosma. But I need more coffee!

Buy Me A Coffee

Description

AcumenCompressor is designed as a simple data compression tool that combines Byte Pair Encoding (BPE) with Huffman codes (HC). Unlike sliding-window compression algorithms such as LZ77 / LZSS (the basis of gzip), BPE+HC is deterministic and agnostic to the order of target elements when compressing multiple files.

Moreover, the combination of BPE+HC obtains a better compression ratio compared to gzip at the expense of computation time.

But why?

When compressing a dataset, the order of samples in the dataset is arbitrary, and using a tool like gzip will give slightly differently-sized compressed values depending on the order of examples. When conducting reproducible experiments, I found it important to have a deterministic compressed value.

For example, in the paper gzip Predicts Data-dependent Scaling Laws, the author used gzip to compress the dataset but did not take the order of elements into account. While the effect is negligible, AcumenCompressor will give the same compressed value for a dataset regardless of ordering.
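This order sensitivity can be illustrated with a small standard-library sketch (the `samples` list here is a made-up stand-in for dataset examples, not part of this project):

```python
import gzip
import random
from collections import Counter

random.seed(0)
# Twenty random "samples", joined in two different orders.
samples = [bytes(random.randrange(256) for _ in range(200)) for _ in range(20)]
perm_a = b"".join(samples)
perm_b = b"".join(reversed(samples))

# gzip's sliding window sees different contexts in each ordering,
# so the two compressed outputs differ.
gz_a, gz_b = gzip.compress(perm_a), gzip.compress(perm_b)
print(gz_a == gz_b)  # False

# The byte-frequency statistics that drive Huffman coding are
# identical for every ordering of the same samples.
print(Counter(perm_a) == Counter(perm_b))  # True
```

The compressed sizes, not just the compressed bytes, can also vary slightly between orderings, while frequency statistics depend only on the multiset of bytes.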

But how?

This project makes heavy use of a modified version of karpathy/minbpe to compute the byte pairs, and uses soxofaan/dahuffman for Huffman coding.
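For intuition, a single BPE merge step can be sketched as follows. This is not the project's (or minbpe's) actual code, just a minimal illustration of the idea: find the most frequent adjacent pair of token ids and replace it with a new token.

```python
from collections import Counter

def most_frequent_pair(ids):
    """Count adjacent pairs and return the most frequent one."""
    return Counter(zip(ids, ids[1:])).most_common(1)[0][0]

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` with the token `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

ids = list(b"aaabdaaabac")
pair = most_frequent_pair(ids)  # (97, 97), i.e. b"aa"
ids = merge(ids, pair, 256)     # new token id outside the byte range
print(ids)                      # [256, 97, 98, 100, 256, 97, 98, 97, 99]
```

Repeating this step until the vocabulary reaches `vocab_size` yields the BPE token stream, which is then entropy-coded with Huffman codes.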

Installation

Install the pypi package via pip:

```bash
pip install -U acumencompressor
```

Alternatively, install directly via git:

```bash
pip install -U git+https://github.com/cosmaadrian/acumen-compressor
```

Usage

Compressing a String

```python
import acumencompressor as ac

import string
import random
import gzip

# Make a test string.
test_corpus = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(1000))
compressor = ac.AcumenCompressor(vocab_size = 512, verbose = True)

compressed = compressor.compress(test_corpus.encode('utf-8'))
uncompressed = compressor.uncompress(compressed).decode('utf-8', errors = 'replace')

assert test_corpus == uncompressed
assert len(compressed) < len(test_corpus.encode('utf-8')), "No compression performed!!!"

compressed_gzip = gzip.compress(test_corpus.encode('utf-8'))

print("Size before compression:", len(test_corpus.encode('utf-8')))
# 1000

print("Size after compression:", len(compressed))
# 485

print("Size after gzip compression:", len(compressed_gzip))
# 695

print("Compression ratio:", round((1 - len(compressed) / len(test_corpus.encode('utf-8'))) * 100, 2), '%')
# 51.5 %

print("GZip Compression ratio:", round((1 - len(compressed_gzip) / len(test_corpus.encode('utf-8'))) * 100, 2), '%')
# 30.5 %
```

Compressing a list of strings

Use the function AcumenCompressor.compress_list() to compress a list of strings regardless of order.

```python
import acumencompressor as ac

import string
import random
import gzip
import numpy as np

corpora = [
    ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(1000))
    for _ in range(100)
]

sizes_gzip = []
sizes_ours = []
sizes_ours_orderless = []

for _ in range(10):
    random.shuffle(corpora)
    target_permutation = "".join(corpora)

    # gzip
    compressed_gzip = gzip.compress(target_permutation.encode('utf-8'))
    sizes_gzip.append(len(compressed_gzip))

    # AcumenCompressor
    compressor = ac.AcumenCompressor(vocab_size = 300, verbose = True)
    compressed_ac = compressor.compress(target_permutation.encode('utf-8'))
    sizes_ours.append(len(compressed_ac))

    # AcumenCompressor, order-agnostic
    compressor = ac.AcumenCompressor(vocab_size = 300, verbose = True)
    compressed_ac = compressor.compress_list([c.encode('utf-8') for c in corpora])
    sizes_ours_orderless.append(len(b"".join(compressed_ac)))

print('gzip Sizes:', sizes_gzip, np.mean(sizes_gzip), np.std(sizes_gzip))
print('AC Sizes:', sizes_ours, np.mean(sizes_ours), np.std(sizes_ours))
print('AC (orderless) Sizes:', sizes_ours_orderless, np.mean(sizes_ours_orderless), np.std(sizes_ours_orderless))
```

License

This repository uses the MIT License.

Owner

  • Name: Adrian Cosma
  • Login: cosmaadrian
  • Kind: user
  • Location: Bucharest, Romania
  • Company: University Politehnica of Bucharest

Mercenary Researcher


Dependencies

requirements.txt pypi
  • dahuffman *
setup.py pypi