Tissue Data Explorer: a website template for presenting tissue sample research findings

Published in the Journal of Open Source Software (JOSS), 2026

https://github.com/tacc/tissue-data-explorer

Science Score: 95.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in JOSS metadata
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
    Organization tacc has institutional domain (www.tacc.utexas.edu)
  • JOSS paper metadata
    Published in Journal of Open Source Software
Last synced: 17 days ago

Repository

Basic Info
  • Host: GitHub
  • Owner: TACC
  • License: BSD-3-Clause
  • Language: Python
  • Default Branch: main
  • Size: 4.18 MB
Statistics
  • Stars: 0
  • Watchers: 5
  • Forks: 0
  • Open Issues: 0
  • Releases: 1
Created about 1 year ago · Last pushed about 2 months ago
Metadata Files
Readme Contributing License Citation

README.md

Tissue Data Explorer

This project contains the code necessary to create a website that showcases scientific data collected on a tissue sample. Types of data handled by this example include multi-channel image stacks, volumetric map data, and 3D models. This repository includes synthetic data that can be used to stand up a demo website, as well as a configuration portal that can be used to upload project data to the site.

Prerequisites

  • Docker Desktop (version 4.43 or higher) or Docker Engine (version 28.3 or higher)
  • Docker Compose (version 2.38 or higher)

Preparing the production image

  1. Clone this repo

git clone https://github.com/TACC/tissue-data-explorer.git

  2. Add a .env file to the config_portal folder. See the section Log in credentials for configuration site for more information.

  3. Build the images. The build script as currently configured creates the production build for the linux/amd64 and linux/arm64 platforms. If neither of those platforms meets your needs, update the platforms specified in docker-compose.yaml to match the platform of your production server. See the Docker Compose documentation for more information.

Providing the value prod to the -e option of the build script will trigger a production build. You can also specify a custom volume name using the -v option.

./build.sh -e prod

  4. Replace the variables in the code snippet below with your username and the appropriate version tag, then run the commands to publish the images to Docker Hub:

docker tag tde-prod-display {username}/tissue-data-explorer-display:{tag}
docker push {username}/tissue-data-explorer-display:{tag}
docker tag tde-prod-config {username}/tissue-data-explorer-config:{tag}
docker push {username}/tissue-data-explorer-config:{tag}

  5. On the production server, pull the newly published images from Docker Hub:

docker pull {username}/tissue-data-explorer-display:{tag}
docker pull {username}/tissue-data-explorer-config:{tag}

  6. Copy the docker-compose-prod.yaml file and your .env file for the production configuration app onto the production server. You will need to make the following changes to docker-compose-prod.yaml:
  • update the variables in the image names with your username and the version tag
  • for the env_file setting in the config service, update the path of the .env file so that it is relative to the compose file. For example, if you copied docker-compose-prod.yaml and .env into the same directory, the path for env_file should be ./.env.
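For reference, the edited service entries in docker-compose-prod.yaml might look roughly like the sketch below. The username (myuser), tag (1.0.0), and restart policy shown here are placeholders; check your copy of the file for the actual service names and settings:

```yaml
services:
  display:
    image: myuser/tissue-data-explorer-display:1.0.0
    restart: unless-stopped
  config:
    image: myuser/tissue-data-explorer-config:1.0.0
    restart: unless-stopped
    env_file:
      - ./.env   # path relative to this compose file
```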
  7. Set up the shared volume. You can use the ./build/create_volume.sh script to do this. If you are starting from scratch, you can use the data in the data/start folder to see a demo version of the app.

You can copy the data/start folder and the ./build/create_volume.sh script onto the production server and make the volume there. You must provide the name of the volume you want to create as well as the path to the source dataset as inputs to ./build/create_volume.sh in that order:

./build/create_volume.sh config-data-prod ./data/start

Alternatively, if the volume is not too large, you can make the volume on another machine using the above steps, then publish it to Docker Hub and pull it onto the production server.

  8. Export the volume by opening Docker Desktop, opening the volume, clicking on "Quick export", then choosing the "Registry" option under "Local or Hub storage".
  9. On the production server, create the new volume, pull the volume image from Docker Hub, run it, and copy the data into the volume on the production server.

docker volume create config-data
docker pull {username}/config-data:{tag}
docker run -it --entrypoint /bin/sh --mount source=config-data,target=/config {username}/config-data:{tag}
# inside the container shell started above:
cp -r volume-data/* config

  10. Start the containers. The production containers will restart automatically if the server reboots, due to the Docker restart policy.

USERID=${UID} GROUPID=${GID} docker compose -f docker-compose-prod.yaml up

  11. Clean up old images by running the ./build/clean_images.sh script. This script will remove all containers and images with "tde-" in the name, then clean the Docker build cache.

Getting Started with Development

  1. Clone this repo

git clone https://github.com/TACC/tissue-data-explorer.git

  2. Add a .env file to the config_portal folder. See the section Log in credentials for configuration site for more information.

  3. Use the script ./build.sh to build a development environment. If this is your first time building a development environment, you will need to create a new volume for use with the environment by providing the -n option to the script. The default behavior is to populate your new volume with some test data:

./build.sh -n

Or, if you would prefer to use a minimal dataset, you can specify that by providing the value min to the -d option.

./build.sh -n -d min

You can also specify a custom volume name using the -v option.

Running the script builds the apps and starts the display app at localhost:8050 and the config app at localhost:8040.

Running tests locally

The script run_tests.sh in the root project folder creates docker containers for the display and configuration apps, fills them with test data, runs the tests for the display app and config app, and then deletes the test containers and test volume.

./run_tests.sh

Preparing images for display on the website

See scripts/image_prep.md for more information about how to prepare images for display on the website.

Log in credentials for configuration site

The configuration app requires a file named .env in the root config app folder that contains the app secret and the credentials of authorized configuration portal users. The app secret is saved in the SECRET_KEY variable in the .env file and should be generated in a cryptographically secure manner. The usernames and passwords for configuration portal users are stored in the ACCOUNTS variable. See the file .env.example for file syntax and location.
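One way to generate a suitably random value for SECRET_KEY is Python's standard-library secrets module (any cryptographically secure generator works; this is just one option):

```python
# Print a 64-character hex string derived from 32 bytes drawn
# from the operating system's secure randomness source.
import secrets

print(secrets.token_hex(32))
```

Paste the printed value into the SECRET_KEY entry of your .env file.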

Datasets included in the codebase

The codebase includes two small datasets that will allow you to get started with the app before loading your own data into the tool. The min dataset has the bare minimum data required to run the app without errors, and with this dataset, the app will show several pages as blank. Start with this dataset if you are going to load in your own data. The start dataset contains enough data to demo the core features of the app. Use this dataset if you want to preview the features of the app without loading your own data, or if you want to test or develop features in the app.

Serving custom reports

You can configure a page of links to any websites of your choice by uploading the list of links you want to include to the configuration portal. If you have project results reported in static HTML pages, you can customize the example configuration shown in nginx/tde.conf to serve those static HTML pages from certain routes within the app, and list those links on the reports page.
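As an illustration only (the real example lives in nginx/tde.conf; the /reports/ route and /srv/reports directory here are hypothetical), an nginx location block that serves static report pages might look like:

```nginx
# Serve pre-built static HTML reports from a directory on the server.
# Adapt the route and directory to match the links you list on the
# reports page.
location /reports/ {
    alias /srv/reports/;
    index index.html;
}
```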

Uploading large files

File uploads through the configuration portal are capped at 150 MB per file. Larger files can be added directly to the display app Docker container. An example script is provided at scripts/move-files-to-docker.sh.

Owner

  • Name: Texas Advanced Computing Center
  • Login: TACC
  • Kind: organization
  • Location: Austin, TX

JOSS Publication

Tissue Data Explorer: a website template for presenting tissue sample research findings
Published
March 23, 2026
Volume 11, Issue 119, Page 9218
Authors
James A. Labyer ORCID
Texas Advanced Computing Center, The University of Texas at Austin, Austin, TX, United States of America
Erik Ferlanti ORCID
Texas Advanced Computing Center, The University of Texas at Austin, Austin, TX, United States of America
Martha Campbell-Thompson ORCID
Department of Pathology, Immunology and Laboratory Medicine, University of Florida, Gainesville, FL, United States of America
Clayton E. Mathews ORCID
Department of Pathology, Immunology and Laboratory Medicine, University of Florida, Gainesville, FL, United States of America
Wei-Jun Qian ORCID
Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA, United States of America
James P. Carson ORCID
Texas Advanced Computing Center, The University of Texas at Austin, Austin, TX, United States of America
Editor
Sehrish Kanwal ORCID
Tags
biology spatial biology microscopy anatomy transcriptomics histology proteomics 3D models

Citation (CITATION.cff)

message: "If you use this software, please cite it using these metadata."
cff-version: 1.2.0
authors:
  - name: "James A. Labyer"
    orcid: https://orcid.org/0009-0003-8222-2079
  - name: "Erik Ferlanti"
    orcid: https://orcid.org/0000-0001-5128-1584
  - name: "Martha Campbell-Thompson"
    orcid: https://orcid.org/0000-0001-6878-1235
  - name: "Clayton E. Mathews"
    orcid: https://orcid.org/0000-0002-8817-6355
  - name: "Wei-Jun Qian"
    orcid: https://orcid.org/0000-0002-5393-2827
  - name: "James P. Carson"
    orcid: https://orcid.org/0000-0001-9009-5645
title: "Tissue Data Explorer"
version: 1.0.0

GitHub Events

Total
  • Delete event: 11
  • Member event: 1
  • Pull request event: 22
  • Issues event: 4
  • Issue comment event: 16
  • Push event: 48
  • Create event: 12
Last Year
  • Delete event: 4
  • Pull request event: 3
  • Issues event: 4
  • Issue comment event: 16
  • Push event: 19
  • Create event: 3

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 24
  • Total Committers: 1
  • Avg Commits per committer: 24.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 24
  • Committers: 1
  • Avg Commits per committer: 24.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
James j****r@g****m 24

Issues and Pull Requests

Last synced: 2 months ago

All Time
  • Total issues: 2
  • Total pull requests: 22
  • Average time to close issues: 20 days
  • Average time to close pull requests: 1 minute
  • Total issue authors: 1
  • Total pull request authors: 1
  • Average comments per issue: 2.5
  • Average comments per pull request: 0.0
  • Merged pull requests: 20
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 2
  • Pull requests: 22
  • Average time to close issues: 20 days
  • Average time to close pull requests: 1 minute
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 2.5
  • Average comments per pull request: 0.0
  • Merged pull requests: 20
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • TaylorHo (2)
Pull Request Authors
  • james-labyer (22)
Top Labels
Issue Labels
Pull Request Labels

Dependencies

.github/workflows/codeql.yml actions
  • actions/checkout v4 composite
  • github/codeql-action/analyze v3 composite
  • github/codeql-action/init v3 composite
.github/workflows/python-app.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v3 composite
  • astral-sh/ruff-action v2 composite
app/Dockerfile docker
  • python 3.13-bookworm build
poetry.lock pypi
  • bandit 1.8.3
  • blinker 1.9.0
  • certifi 2025.1.31
  • charset-normalizer 3.4.1
  • click 8.1.8
  • colorama 0.4.6
  • coverage 7.6.12
  • dash 2.18.2
  • dash-ag-grid 31.3.0
  • dash-bootstrap-components 1.6.0
  • dash-core-components 2.0.0
  • dash-html-components 2.0.0
  • dash-table 5.0.0
  • defusedxml 0.7.1
  • et-xmlfile 2.0.0
  • filetype 1.2.0
  • flask 3.0.3
  • flask-login 0.6.3
  • gunicorn 23.0.0
  • idna 3.10
  • importlib-metadata 8.6.1
  • iniconfig 2.0.0
  • itsdangerous 2.2.0
  • jinja2 3.1.5
  • markdown-it-py 3.0.0
  • markupsafe 3.0.2
  • mdurl 0.1.2
  • nest-asyncio 1.6.0
  • nh3 0.2.21
  • numpy 2.1.3
  • odfpy 1.4.1
  • openpyxl 3.1.5
  • packaging 24.2
  • pandas 2.2.3
  • pbr 6.1.1
  • pillow 11.0.0
  • plotly 5.24.1
  • pluggy 1.5.0
  • pygments 2.19.1
  • pytest 8.3.5
  • pytest-cov 6.0.0
  • python-calamine 0.3.1
  • python-dateutil 2.9.0.post0
  • python-dotenv 1.0.1
  • python-magic 0.4.27
  • pytz 2025.1
  • pywavefront 1.3.3
  • pyxlsb 1.0.10
  • pyyaml 6.0.2
  • requests 2.32.3
  • retrying 1.3.4
  • rich 13.9.4
  • ruff 0.8.6
  • setuptools 75.8.2
  • six 1.17.0
  • stevedore 5.4.1
  • tenacity 9.0.0
  • typing-extensions 4.12.2
  • tzdata 2025.1
  • urllib3 2.3.0
  • werkzeug 3.0.6
  • xlrd 2.0.1
  • xlsxwriter 3.2.2
  • zipp 3.21.0
pyproject.toml pypi
  • filetype ^1.2.0 config
  • flask-login ^0.6.3 config
  • python-calamine ^0.3.1 config
  • python-dotenv ^1.0.1 config
  • python-magic ^0.4.27 config
  • bandit ^1.8.0 develop
  • pytest ~8.3.4 develop
  • pytest-cov ~6.0.0 develop
  • dash ~2.18.2
  • dash-ag-grid ~31.3.0
  • dash-bootstrap-components ~1.6.0
  • gunicorn ~23.0.0
  • nh3 ^0.2.20
  • numpy ~2.1.3
  • pandas ^2.2.3
  • pillow ~11.0.0
  • plotly ~5.24.1
  • python ^3.13
  • pywavefront ~1.3.3
  • ruff ~0.8.2