https://github.com/root-11/tablite

multiprocessing enabled out-of-memory data analysis library for tabular data.

Science Score: 13.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.8%) to scientific vocabulary

Keywords

data-analysis data-science datatype disk etl excel filereader pandas pivot-tables python table tabular-data
Last synced: 6 months ago

Repository

multiprocessing enabled out-of-memory data analysis library for tabular data.

Basic Info
  • Host: GitHub
  • Owner: root-11
  • License: mit
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 24.6 MB
Statistics
  • Stars: 36
  • Watchers: 4
  • Forks: 7
  • Open Issues: 4
  • Releases: 99
Topics
data-analysis data-science datatype disk etl excel filereader pandas pivot-tables python table tabular-data
Created over 5 years ago · Last pushed 10 months ago
Metadata Files
Readme Changelog License

README.md

Tablite



Introduction

Tablite seeks to be the go-to library for manipulating tabular data with an API that stays as close to pure Python syntax as possible.

Even smaller memory footprint

Tablite uses NumPy's file format as a backend with strong abstraction, so that copying, appending & repeating data are handled in pages. This is imperative for incremental data processing.

Tablite tests for memory footprint. One test compares the memory footprint of 10,000,000 integers, where tablite uses < 1 MB of RAM in contrast to Python, which requires around 133.7 MB of RAM (1M lists with 10 integers each). Tablite also tests to assure that working with 1 TB of data is tolerable.
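The gap between boxed Python objects and compact typed storage can be illustrated with the stdlib alone. This sketch is not tablite's benchmark; it just compares a plain list of ints against a typed buffer (`array`), the same idea tablite exploits by storing pages as numpy arrays:

```python
# Illustrative only: per-element overhead of a Python list of ints vs a
# compact typed buffer. Every list element is a full PyObject; the array
# stores raw 8-byte machine integers.
import sys
from array import array

n = 1_000_000
py_list = list(range(n))
typed = array("q", range(n))  # "q" = signed 64-bit int

# list: pointer array + per-int object headers
list_bytes = sys.getsizeof(py_list) + sum(sys.getsizeof(i) for i in py_list)
# typed buffer: item count * item size
typed_bytes = typed.buffer_info()[1] * typed.itemsize

print(f"list: {list_bytes / 1e6:.1f} MB, typed buffer: {typed_bytes / 1e6:.1f} MB")
```

The typed buffer lands at 8 MB for a million int64 values; the list of boxed ints is several times larger.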

Tablite achieves this minimal memory footprint by using temporary storage set in config.Config.workdir as tempfile.gettempdir()/tablite-tmp. If your OS (Windows/Linux/macOS) sits on an SSD, this benefits from high IOPS and permits slices of 9,000,000,000 rows in less than a second.
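The default scratch location described above can be reconstructed with the stdlib; this sketch builds the same path without importing tablite (the attribute name config.Config.workdir comes from the text, not from this snippet):

```python
# Reconstruct tablite's default temp workdir: tempfile.gettempdir()/tablite-tmp
import tempfile
from pathlib import Path

workdir = Path(tempfile.gettempdir()) / "tablite-tmp"
workdir.mkdir(exist_ok=True)  # tablite creates this lazily; shown here explicitly
print(workdir)
```

Pointing Config.workdir at a fast SSD-backed path is what makes the paged slicing cheap.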

Multiprocessing enabled by default

Tablite uses numpy wherever possible and applies multiprocessing to bypass the GIL on all major operations. CSV import is performed in C using Nim's compiler and is as fast as the hardware allows.

All algorithms have been reworked to respect memory limits

Tablite respects the limits of free memory by measuring free memory and defining the task size before each memory-intensive task is initiated (join, groupby, data import, etc.). If you still run out of memory, try reducing config.Config.PAGE_SIZE and rerunning your program.
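The effect of the page-size knob can be sketched in a few lines. This is a hypothetical illustration of memory-aware task sizing, not tablite's scheduler; the name PAGE_SIZE is borrowed from the text:

```python
# Split a row count into contiguous chunks of at most page_size rows.
# A smaller page size yields more, smaller tasks - and a lower peak
# memory footprint per task.
def plan_tasks(total_rows: int, page_size: int) -> list[range]:
    """Return contiguous row ranges, each holding at most page_size rows."""
    return [range(start, min(start + page_size, total_rows))
            for start in range(0, total_rows, page_size)]

tasks = plan_tasks(total_rows=1_000_000, page_size=300_000)
print([len(t) for t in tasks])  # -> [300000, 300000, 300000, 100000]
```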

100% support for all python datatypes

Tablite wants to make it easy for you to work with data. tablite.Tables behave like a dict of lists:

my_table['column name'] = [ ... data ... ]

Tablite maps datatypes to native numpy types where possible and uses type mapping for non-native types such as timedelta, None, date, time… i.e. what you put in is what you get out. This is inspired by bank python.
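The "what you put in is what you get out" promise can be shown with a pure-python stand-in for the dict-of-lists behaviour (this is not tablite's storage layer, just its surface semantics):

```python
# A column-oriented table as a dict of lists, round-tripping types that
# numpy has no native representation for: None, date, timedelta, str.
from datetime import date, timedelta

table = {}  # column name -> list of values
table["mixed"] = [1, None, date(2024, 1, 1), timedelta(hours=2), "text"]

# Values come back with their original Python types intact.
assert table["mixed"][1] is None
assert table["mixed"][2] == date(2024, 1, 1)
```

Tablite's type mapping does the same round-trip while still storing pages as numpy arrays on disk.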

Light weight

Tablite is ~400 kB.

Helpful

Tablite wants you to be productive, so a number of helpers are available.

  • Table.import_file to import csv*, tsv, txt, xls, xlsx, xlsm, ods, zip and log files, with automatic type detection (see tutorial.ipynb).
  • To peek into any supported file use get_headers which shows the first 10 rows.
  • Use mytable.rows and mytable.columns to iterate over rows or columns.
  • Create multi-key .index for quick lookups.
  • Perform multi-key .sort.
  • Filter using .any and .all to select specific rows.
  • Use multi-key .lookup and .join to find data across tables.
  • Perform .groupby and reorganise data as a .pivot table with max, min, sum, first, last, count, unique, average, st.deviation, median and mode.
  • Append / concatenate tables with += which automatically sorts out the columns - even if they're not in perfect order.
  • Should your tables be similar but not identical, you can use .stack to "stack" tables on top of each other.
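The += / .stack semantics described above can be sketched in pure python over dict-of-lists tables. This is an illustration of the behaviour, not tablite's implementation; the None-padding mirrors what stacking dissimilar tables has to do:

```python
# Concatenate two column-oriented tables whose columns differ,
# padding missing columns with None and preserving column order.
def stack(a: dict, b: dict) -> dict:
    len_a = len(next(iter(a.values()), []))
    len_b = len(next(iter(b.values()), []))
    out = {}
    for col in dict.fromkeys([*a, *b]):  # dedupe, keep first-seen order
        out[col] = a.get(col, [None] * len_a) + b.get(col, [None] * len_b)
    return out

t1 = {"A": [1, 2], "B": ["x", "y"]}
t2 = {"B": ["z"], "C": [9]}
print(stack(t1, t2))
# -> {'A': [1, 2, None], 'B': ['x', 'y', 'z'], 'C': [None, None, 9]}
```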

If you're still missing something, add it to the wishlist.


Installation

Get it from PyPI.

Install: `pip install tablite`

Usage: `>>> from tablite import Table`

Build & test

Install nim >= 2.0.0, then run:

```shell
chmod +x ./build_nim.sh
./build_nim.sh
```

Should the default nim not be your desired taste, please use Nim's environment manager (atlas) and run source nim-2.0.0/activate.sh on UNIX or nim-2.0.0/activate.bat on Windows.

Install python >= 3.8, then:

```shell
python -m venv /your/venv/dir
activate /your/venv/dir
pip install -r requirements.txt
pip install -r requirements_for_testing.txt
pytest ./tests
```

Feature overview

| want to... | this way... |
|---|---|
| loop over rows | `[ row for row in table.rows ]` |
| loop over columns | `[ table[col_name] for col_name in table.columns ]` |
| slice | `myslice = table['A', 'B', slice(0, None, 15)]` |
| get column by name | `my_table['A']` |
| get row by index | `my_table[9_000_000_001]` |
| value update | `mytable['A'][2] = new_value` |
| update w. list comprehension | `mytable['A'] = [ x*x for x in mytable['A'] if x % 2 != 0 ]` |
| join | `a_join = numbers.join(letters, left_keys=['colour'], right_keys=['color'], left_columns=['number'], right_columns=['letter'], kind='left')` |
| lookup | `travel_plan = friends.lookup(bustable, (DataTypes.time(21, 10), "<=", 'time'), ('stop', "==", 'stop'))` |
| groupby | `group_by = table.groupby(keys=['C', 'B'], functions=[('A', gb.count)])` |
| pivot table | `my_pivot = t.pivot(rows=['C'], columns=['A'], functions=[('B', gb.sum), ('B', gb.count)], values_as_rows=False)` |
| index | `indices = old_table.index(*old_table.columns)` |
| sort | `lookup1_sorted = lookup_1.sort(**{'time': True, 'name': False, "sort_mode": 'unix'})` |
| filter | `true, false = unfiltered.filter([{"column1": 'a', "criteria": ">=", 'value2': 3}, ... more criteria ...], filter_type='all')` |
| find any | `any_even_rows = mytable.any(**{'A': lambda x: x % 2 == 0, 'B': lambda x: x > 0})` |
| find all | `all_even_rows = mytable.all(**{'A': lambda x: x % 2 == 0, 'B': lambda x: x > 0})` |
| to json | `json_str = my_table.to_json()` |
| from json | `Table.from_json(json_str)` |
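The groupby row above can be illustrated with a pure-python sketch of the semantics: collect values per key tuple, then aggregate. The gb.count function is stood in for by len; none of this is tablite's internals:

```python
# Group rows by a key tuple (C, B) and count values of column A per group,
# mirroring table.groupby(keys=['C', 'B'], functions=[('A', gb.count)]).
from collections import defaultdict

rows = [
    {"C": 1, "B": "a", "A": 10},
    {"C": 1, "B": "a", "A": 20},
    {"C": 2, "B": "b", "A": 30},
]

groups = defaultdict(list)
for row in rows:
    groups[(row["C"], row["B"])].append(row["A"])

counts = {key: len(vals) for key, vals in groups.items()}
print(counts)  # -> {(1, 'a'): 2, (2, 'b'): 1}
```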

API

To view the detailed API, see the api documentation.

Tutorial

To learn more see the tutorial.ipynb (Jupyter notebook)

Latest updates

See changelog.md

Credits

  • Eugene Antonov - the API documentation.
  • Audrius Kulikajevas - Edge case testing / various bugs, Jupyter notebook integration.
  • Ovidijus Grigas - various bugs, documentation.
  • Martynas Kaunas - GroupBy functionality.
  • Sergej Sinkarenko - various bugs.
  • Lori Cooper - spell checking.

Owner

  • Name: Bjorn Madsen
  • Login: root-11
  • Kind: user
  • Location: Edinburgh

GitHub Events

Total
  • Push event: 2
Last Year
  • Push event: 2

Committers

Last synced: over 1 year ago

All Time
  • Total Commits: 1,684
  • Total Committers: 12
  • Avg Commits per committer: 140.333
  • Development Distribution Score (DDS): 0.622
Past Year
  • Commits: 432
  • Committers: 6
  • Avg Commits per committer: 72.0
  • Development Distribution Score (DDS): 0.252
Top Committers
Name Email Commits
Bjorn Madsen d****n@g****m 636
root-11 b****n@g****m 467
Ratchet a****s@h****m 420
root-11 4****1 128
Eugene Antonov j****n@m****m 10
ovidijg o****s@d****m 9
Arturo Soucase a****e@d****m 5
Ovidijus Grigas O****s@d****m 3
github-actions[bot] g****] 3
ltaylor l****r@d****m 1
root-11 b****n@o****m 1
Ovidijus Grigas 3****i 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 8 months ago

All Time
  • Total issues: 45
  • Total pull requests: 123
  • Average time to close issues: 3 months
  • Average time to close pull requests: about 15 hours
  • Total issue authors: 12
  • Total pull request authors: 6
  • Average comments per issue: 3.07
  • Average comments per pull request: 0.59
  • Merged pull requests: 113
  • Bot issues: 0
  • Bot pull requests: 1
Past Year
  • Issues: 0
  • Pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 1
Top Authors
Issue Authors
  • root-11 (24)
  • realratchet (6)
  • omenSi (3)
  • ttopi (2)
  • hgldarby (2)
  • ypanagis (1)
  • bjornmadsen (1)
  • dovydasrudys (1)
  • akash-goel (1)
  • rhs3i (1)
  • danieldjewell (1)
  • cerv15 (1)
Pull Request Authors
  • realratchet (130)
  • omenSi (8)
  • root-11 (5)
  • asoucase (3)
  • Jetman80 (1)
  • dependabot[bot] (1)
Top Labels
Issue Labels
enhancement (11) bug (3) question (2) wontfix (1) documentation (1)
Pull Request Labels
dependencies (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 237 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 1
  • Total versions: 175
  • Total maintainers: 1
pypi.org: tablite

multiprocessing enabled out-of-memory data analysis library for tabular data.

  • Versions: 175
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 237 Last month
Rankings
Downloads: 6.5%
Dependent packages count: 10.1%
Forks count: 10.5%
Stargazers count: 10.8%
Average: 11.9%
Dependent repos count: 21.6%
Maintainers (1)
Last synced: 6 months ago

Dependencies

requirements.txt pypi
  • chardet ==5.0.0
  • h5py >=3.6.0
  • mplite ==1.1.0
  • numpy >=1.22.3
  • psutil >=5.9.0
  • pyexcel ==0.7.0
  • pyexcel-ods ==0.6.0
  • pyexcel-odsr ==0.6.0
  • pyexcel-xls ==0.7.0
  • pyexcel-xlsx ==0.6.0
  • pyperclip ==1.8.2
  • pyuca >=1.2
  • tqdm >=4.63.0
.github/workflows/codecov.yml actions
  • actions/checkout master composite
  • actions/setup-python master composite
  • codecov/codecov-action v1 composite
.github/workflows/publish.yml actions
  • actions/checkout v3 composite
  • pypa/gh-action-pypi-publish release/v1 composite
.github/workflows/python-test.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
requirements_for_testing.txt pypi
  • pytest * test