net2vis

Automatic neural network visualizations generated in your browser!

https://github.com/viscom-ulm/net2vis

Science Score: 52.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
    Organization viscom-ulm has institutional domain (viscom.uni-ulm.de)
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.3%) to scientific vocabulary

Keywords

machine-learning-visualization neural-network-visualizations neural-networks
Last synced: 6 months ago

Repository

Automatic neural network visualizations generated in your browser!

Basic Info
  • Host: GitHub
  • Owner: viscom-ulm
  • License: MIT
  • Language: JavaScript
  • Default Branch: master
  • Homepage:
  • Size: 6.26 MB
Statistics
  • Stars: 148
  • Watchers: 7
  • Forks: 18
  • Open Issues: 19
  • Releases: 0
Topics
machine-learning-visualization neural-network-visualizations neural-networks
Created about 7 years ago · Last pushed about 3 years ago
Metadata Files
Readme License Code of conduct Citation

README.md

[Figure: Net2Vis teaser and teaser legend]

Net2Vis

:white_check_mark: Automatic Network Visualization

:white_check_mark: Levels of Abstraction

:white_check_mark: Unified Design

Created by Alex Bäuerle, Christian van Onzenoodt and Timo Ropinski.

Accessible online.

What is this?

Net2Vis automatically generates abstract visualizations for convolutional neural networks from Keras code.
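
For illustration, input of roughly the following shape is what the tool consumes. This is a hypothetical minimal snippet, not taken from the Net2Vis documentation: the function name `get_model` and the architecture are placeholders, and the Keras import is kept inside the function so the fragment stands on its own.

```python
# A hypothetical minimal Keras model of the kind you would paste into Net2Vis.
# The import is deferred into the function so the snippet is self-contained;
# actually building the model requires a TensorFlow installation.
def get_model():
    from tensorflow import keras

    model = keras.Sequential([
        keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(10, activation="softmax"),
    ])
    return model
```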

How does this help me?

Looking at publications that use neural networks, it is apparent how much their architecture visualizations differ. Most of them are handcrafted and thus lack a unified visual grammar; handcrafting such visualizations also invites ambiguities and misinterpretations.

With Net2Vis, these problems are gone. It is designed to provide an abstract network visualization while still conveying general information about individual layers. Our glyph design reflects both the number of features and the spatial resolution of the tensor, and layer types can be identified by color. Since these networks can get fairly complex, we also added the ability to group layers, compacting the network by replacing common layer sequences.
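
The glyph idea described above can be sketched in a few lines. This is a simplified illustration, not Net2Vis's actual code: the layer records, scaling scheme, and color table are all hypothetical.

```python
# Sketch of the glyph mapping: glyph height follows the tensor's spatial
# resolution, glyph width follows the number of feature channels, and each
# layer type gets a fixed color. All values below are illustrative.

# Hypothetical layer records: (layer type, spatial resolution, feature channels)
layers = [
    ("Conv2D", 128, 32),
    ("MaxPooling2D", 64, 32),
    ("Conv2D", 64, 64),
]

# Hypothetical color assignment per layer type
TYPE_COLORS = {"Conv2D": "#4e79a7", "MaxPooling2D": "#f28e2b"}

def glyph(layer_type, resolution, channels, max_resolution=128, max_channels=64):
    """Scale glyph dimensions relative to the network's extremes."""
    return {
        "height": resolution / max_resolution,  # spatial resolution -> height
        "width": channels / max_channels,       # feature count -> width
        "color": TYPE_COLORS.get(layer_type, "#999999"),
    }

glyphs = [glyph(*layer) for layer in layers]
```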

Best of all: once the application runs, you just paste your Keras code into your browser and the visualization is generated automatically from it. You can still tweak your visualizations and create abstractions before downloading them as SVG or PDF.

How can I use this?

Either go to our website, or install Net2Vis locally. Our website requires no setup, but may be slower and limited in network size depending on what you are working on. Installing locally lets you modify the functionality and may perform better than the online version.

Installation

Getting started with Net2Vis is pretty easy (assuming Python 3, tested on Python 3.6-3.8, and npm).

  1. Clone this repo.
  2. For the backend to work, Cairo and Docker need to be installed on your machine. Cairo is used for PDF conversion; Docker runs the models pasted into the browser in a (more) secure way.

For Docker, the daemon needs to be running so that the pasted code can be executed in separate containers.

For starting up the backend, the following steps are needed:

  1. Go into the backend folder: cd backend
  2. Install backend dependencies by running pip3 install -r requirements.txt
  3. Install the docker container by running docker build --force-rm -t tf_plus_keras .
  4. To start the server, issue: python3 server.py
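
The backend steps above amount to the following command sequence (a sketch assuming the repo is cloned, the Docker daemon is running, and default paths are used):

```shell
# Backend setup, sketched from the steps above.
cd backend
pip3 install -r requirements.txt            # backend dependencies
docker build --force-rm -t tf_plus_keras .  # container for pasted models
python3 server.py                           # start the backend server
```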

The frontend is a React application that can be started as follows:

  1. Go into the frontend folder: cd net2vis
  2. Install the javascript dependencies using: npm install
  3. Start the frontend application with: npm start
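
Likewise, the frontend steps above can be sketched as (assuming the repo root as working directory):

```shell
# Frontend setup, sketched from the steps above.
cd net2vis
npm install  # JavaScript dependencies
npm start    # development server
```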

Model Presets

For local installations only: if you want to replicate any of the network figures from our paper, or just want to see example visualizations, we have included all network figures from our paper for you to experiment with. To access them, simply use the following URLs:

For most of these URL endings, you will probably also find networks in the official version; however, there is no guarantee that they won't have been changed.

Citation

If you find this code useful, please consider citing us:

@article{bauerle2021net2vis,
  title={Net2vis--a visual grammar for automatically generating publication-tailored cnn architecture visualizations},
  author={B{\"a}uerle, Alex and Van Onzenoodt, Christian and Ropinski, Timo},
  journal={IEEE transactions on visualization and computer graphics},
  volume={27},
  number={6},
  pages={2980--2991},
  year={2021},
  publisher={IEEE}
}

Acknowledgements

This work was funded by the Carl-Zeiss-Scholarship for Ph.D. students.

Owner

  • Name: Visual Computing Group (Ulm University)
  • Login: viscom-ulm
  • Kind: organization
  • Location: Ulm, Germany

Citation (CITATION.cff)

# YAML 1.2
---
abstract: "To convey neural network architectures in publications, appropriate visualizations are of great importance. While most current deep learning papers contain such visualizations, these are usually handcrafted just before publication, which results in a lack of a common visual grammar, significant time investment, errors, and ambiguities. Current automatic network visualization tools focus on debugging the network itself and are not ideal for generating publication visualizations. Therefore, we present an approach to automate this process by translating network architectures specified in Keras into visualizations that can directly be embedded into any publication. To do so, we propose a visual grammar for convolutional neural networks (CNNs), which has been derived from an analysis of such figures extracted from all ICCV and CVPR papers published between 2013 and 2019. The proposed grammar incorporates visual encoding, network layout, layer aggregation, and legend generation. We have further realized our approach in an online system available to the community, which we have evaluated through expert feedback, and a quantitative study. It not only reduces the time needed to generate network visualizations for publications, but also enables a unified and unambiguous visualization design."
authors: 
  -
    affiliation: "Ulm University"
    family-names: "Bäuerle"
    given-names: Alex
    orcid: "https://orcid.org/0000-0003-3886-8799"
  -
    affiliation: "Ulm University"
    family-names: Onzenoodt
    given-names: Christian
    name-particle: van
    orcid: "https://orcid.org/0000-0002-5951-6795"
  -
    affiliation: "Ulm University"
    family-names: Ropinski
    given-names: Timo
    orcid: "https://orcid.org/0000-0002-7857-5512"
cff-version: "1.1.0"
date-released: 2021-02-08
doi: "10.1109/TVCG.2021.3057483"
message: "If you use this software, please cite it using these metadata."
title: "Net2Vis – A Visual Grammar for Automatically Generating Publication-Tailored CNN Architecture Visualizations"
...

GitHub Events

Total
  • Issues event: 2
  • Watch event: 24
  • Member event: 1
  • Issue comment event: 4
  • Fork event: 1
Last Year
  • Issues event: 2
  • Watch event: 24
  • Member event: 1
  • Issue comment event: 4
  • Fork event: 1

Dependencies

backend/Dockerfile docker
  • tensorflow/tensorflow latest build
net2vis/package-lock.json npm
  • 1763 dependencies
net2vis/package.json npm
  • @emotion/react ^11.4.1
  • @emotion/styled ^11.3.0
  • @mui/icons-material ^5.0.1
  • @mui/material ^5.0.2
  • ace-builds ^1.4.12
  • dagre ^0.8.5
  • file-saver ^2.0.5
  • http-proxy-middleware ^2.0.1
  • jquery ^3.6.0
  • jszip ^3.7.1
  • node-sass-chokidar ^1.5.0
  • npm ^7.21.0
  • npm-run-all ^4.1.5
  • prop-types ^15.7.2
  • randomstring ^1.2.1
  • react ^17.0.2
  • react-ace ^9.4.3
  • react-color ^2.19.3
  • react-dom ^17.0.2
  • react-dropzone ^11.3.4
  • react-redux ^7.2.4
  • react-router-dom ^5.2.0
  • react-scripts ^4.0.3
  • react-svg-tooltip 0.0.11
  • redux ^4.1.1
  • redux-thunk ^2.3.0
  • tinycolor2 ^1.4.2
backend/requirements.txt pypi
  • CairoSVG ==2.5.2
  • Flask ==1.1.2
  • Jinja2 ==2.11.3
  • Keras-Applications ==1.0.8
  • Keras-Preprocessing ==1.1.2
  • Markdown ==3.3.4
  • MarkupSafe ==1.1.1
  • Pillow >=8.2.0
  • Werkzeug ==1.0.1
  • absl-py ==0.12.0
  • astor ==0.8.1
  • astroid ==2.5.1
  • astunparse ==1.6.3
  • cached-property ==1.5.2
  • cachetools ==4.2.1
  • cairocffi ==1.2.0
  • certifi ==2020.12.5
  • cffi ==1.14.5
  • chardet ==4.0.0
  • click ==7.1.2
  • cssselect2 ==0.4.1
  • defusedxml ==0.7.1
  • docker ==4.4.4
  • epicbox ==1.1.0
  • flatbuffers ==1.12
  • gast ==0.4.0
  • google-auth ==1.28.0
  • google-auth-oauthlib ==0.4.3
  • google-pasta ==0.2.0
  • importlib-metadata ==3.7.3
  • isort ==5.8.0
  • itsdangerous ==1.1.0
  • lazy-object-proxy ==1.5.2
  • mccabe ==0.6.1
  • oauthlib ==3.1.0
  • onnx ==1.10.1
  • opt-einsum ==3.3.0
  • protobuf ==3.18.3
  • pyasn1 ==0.4.8
  • pyasn1-modules ==0.2.8
  • pycparser ==2.20
  • pylint ==2.7.2
  • python-dateutil ==2.8.1
  • requests ==2.25.1
  • requests-oauthlib ==1.3.0
  • rsa ==4.7.2
  • scipy ==1.6.1
  • six ==1.15.0
  • structlog ==21.1.0
  • tensorflow ==2.5.1
  • termcolor ==1.1.0
  • tinycss2 ==1.1.0
  • toml ==0.10.2
  • typed-ast ==1.4.2
  • typing-extensions ==3.7.4.3
  • uWSGI ==2.0.19.1
  • urllib3 ==1.26.5
  • webencodings ==0.5.1
  • websocket-client ==0.58.0
  • wrapt ==1.12.1
  • zipp ==3.4.1