hyperreal

A Python package for interpretive topic modelling

https://github.com/samhames/hyperreal

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    3 of 6 committers (50.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.4%) to scientific vocabulary

Keywords from Contributors

similarity-networks social-media-analysis social-science
Last synced: 7 months ago

Repository

A Python package for interpretive topic modelling

Basic Info
  • Host: GitHub
  • Owner: SamHames
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 10.4 MB
Statistics
  • Stars: 15
  • Watchers: 3
  • Forks: 0
  • Open Issues: 10
  • Releases: 12
Created over 4 years ago · Last pushed about 2 years ago
Metadata Files
  • Readme
  • License
  • Citation

README.md

Hyperreal

Hyperreal is a Python tool for interactive qualitative analysis of large collections of documents.

Requirements

Hyperreal requires a working installation of the Python programming language.

Installation

Hyperreal can be installed using pip from the command line (Windows, macOS) by running the following command:

python -m pip install hyperreal

Usage

Hyperreal can be used in three different ways to flexibly support different use cases:

  • as a command line application
  • as a Python library
  • as a local web application

All of hyperreal's functionality is available from the Python library, but using it directly requires writing Python code. The command line interface allows quick, repeatable experimentation and automation for standard data types - for example, if you often work with Twitter data, the command line lets you rapidly apply the same workflow to many different Twitter collections. The web application is currently focused solely on creating and interactively editing models.

Command Line

The following script gives a basic example of using the command line interface for hyperreal. This will work for cases where you have a plain text file (here called corpus.txt), with each document in the collection on its own line.
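As a concrete starting point, here is a minimal (hypothetical) way to produce such a corpus.txt, one document per line; the sample text is invented for illustration:

```python
# Write a tiny sample corpus: one document per line, as the
# plaintext-corpus commands below expect. The text is made up.
documents = [
    "Apples and oranges are both fruit.",
    "Interpretive topic modelling supports qualitative analysis.",
    "Large social media collections need scalable tooling.",
]
with open("corpus.txt", "w", encoding="utf-8") as f:
    for doc in documents:
        # Newlines inside a document would split it into several
        # documents, so flatten them to spaces first.
        f.write(doc.replace("\n", " ") + "\n")
```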


```
# Create a corpus database from a plaintext file
hyperreal plaintext-corpus create corpus.txt corpus.db

# Create an index from the corpus
hyperreal plaintext-corpus index corpus.db corpus_index.db

# Create a model from that index, in this case with 128 clusters and
# only include features present in 10 or more documents.
hyperreal model corpus_index.db --min-docs 10 --clusters 128

# Use the web interface to serve the results of that modelling.
# After running this command point your web browser to http://localhost:8080
hyperreal plaintext-corpus serve corpus.db corpus_index.db
```

Library

This example script performs the same steps as the command line example.

```python
from hyperreal import corpus, index

# Create and populate the corpus with some documents.
c = corpus.PlainTextSqliteCorpus('corpus.db')

with open('corpus.txt', 'r') as f:
    # This will drop any line that has no text (such as a paragraph break).
    docs = (line for line in f if line.strip())
    c.replace_docs(docs)

# Index that corpus - note that we need to pass the corpus object for
# initialisation.
idx = index.Index('corpus_index.db', corpus=c)

# This only needs to be done once, unless the corpus changes.
idx.index()

# Create a model on this index, with 128 clusters and only including features
# that match at least 10 documents.
idx.initialise_clusters(n_clusters=128, min_docs=10)

# Refine the model for 10 iterations. Note that you can continue to refine
# the model without initialising the clusters.
idx.refine_clusters(iterations=10)

# Inspect the output of the model using the index instance (currently quite
# limited). This will print the top 10 most frequent features in each
# cluster.
for cluster_id in idx.cluster_ids:
    cluster_features = idx.cluster_features(cluster_id)
    for feature in cluster_features[:10]:
        print(feature)

# Perform a boolean query on the corpus, looking for documents that contain
# both apples AND oranges in the text field.
query = idx[('text', 'apples')] & idx[('text', 'oranges')]

# Lookup all of the documents in the corpus that match this query.
docs = idx.get_docs(query)

# 'Pivot' the features in the index with respect to all clusters in the
# model. This will show the top 10 features in each cluster that are
# similar to the query.
for cluster_detail in idx.pivot_clusters_by_query(query, top_k=10):
    print(cluster_detail)

# This will show the top 10 features for a selected set of cluster_ids.
for cluster_detail in idx.pivot_clusters_by_query(query, cluster_ids=[3, 5, 7], top_k=10):
    print(cluster_detail)
```
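The boolean query above is, conceptually, an intersection of posting lists (hyperreal depends on pyroaring for compressed bitmaps; see the dependency list below). Plain Python sets are a rough stand-in for the idea, using made-up document ids:

```python
# Hypothetical posting lists: ids of documents containing each term.
apples = {1, 4, 7, 9}
oranges = {2, 4, 9, 12}

# Boolean AND is set intersection; OR would be union, NOT a difference.
both = apples & oranges
print(sorted(both))  # [4, 9]
```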

Development

Installation

To install the development version:

  1. Clone the repository using git.
  2. From the cloned repository, use pip for an editable install:

    pip install -e .

Running Tests

The full test suite and other checks are orchestrated via tox:

```
python -m pip install -e .[test]

# To run just the test suite
pytest

# To run everything, including code formatting via black and check coverage
tox
```

Owner

  • Name: Sam Hames
  • Login: SamHames
  • Kind: user
  • Location: Australia

Citation (CITATION.cff)

cff-version: 1.2.0
title: 'Hyperreal: a tool for interpretive topic modelling'
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
doi: 10.5281/zenodo.7251335
authors:
  - given-names: Samuel
    family-names: Hames
    affiliation: The University of Queensland
    orcid: 'https://orcid.org/0000-0002-1824-2361'
  - given-names: Kateryna
    family-names: Kasianenko
    affiliation: Queensland University of Technology
    orcid: 'https://orcid.org/0000-0002-7159-5676'

GitHub Events

Total
  • Watch event: 2
Last Year
  • Watch event: 2

Committers

Last synced: about 3 years ago

All Time
  • Total Commits: 188
  • Total Committers: 6
  • Avg Commits per committer: 31.333
  • Development Distribution Score (DDS): 0.383
Top Committers
Name Email Commits
Sam Hames s****m@h****u 116
Sam Hames s****s@u****u 49
Sam Hames s****s@q****u 17
Kat Kasianenko k****o@g****m 2
MartinSchweinberger m****h@g****m 2
Sam Hames u****s@u****u 2
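The DDS figure above is consistent with the usual definition of one minus the busiest committer's share of commits; a quick check, assuming that definition and the figures from the table:

```python
# Development Distribution Score: 1 - (commits by the busiest single
# committer identity / total commits). Figures from the stats above.
total_commits = 188
top_committer_commits = 116  # largest single identity in the table

dds = 1 - top_committer_commits / total_commits
print(round(dds, 3))  # 0.383

# Average commits per committer is simply total commits over committers.
print(round(total_commits / 6, 3))  # 31.333
```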

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 36
  • Total pull requests: 27
  • Average time to close issues: 17 days
  • Average time to close pull requests: 2 days
  • Total issue authors: 2
  • Total pull request authors: 2
  • Average comments per issue: 0.67
  • Average comments per pull request: 0.04
  • Merged pull requests: 26
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 4
  • Average time to close issues: N/A
  • Average time to close pull requests: 1 day
  • Issue authors: 0
  • Pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 4
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • SamHames (30)
  • katkasian (6)
Pull Request Authors
  • SamHames (24)
  • katkasian (2)
Top Labels
Issue Labels
feature (5) documentation (2) architecture (2) infrastructure (2) bug (1) speculative (1)
Pull Request Labels

Dependencies

.github/workflows/publish.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/test_python.yml actions
  • actions/checkout v2 composite
  • actions/download-artifact v3 composite
  • actions/setup-python v2 composite
  • actions/upload-artifact v3 composite
pyproject.toml pypi
  • cherrypy >=18.6.0
  • click >=8.1.0
  • jinja2 >=3.1.0
  • lxml *
  • networkx >=3.0.0
  • pyroaring >=0.3.4
  • python-dateutil >=2.8.0
  • regex >=2022.4.24