ipfs-cid-hoarder

Client that tracks CIDs in the IPFS network, pinning and requesting them to see how long they remain accessible.

https://github.com/cortze/ipfs-cid-hoarder

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.8%) to scientific vocabulary

Keywords

dht go-ipfs ipfs kademlia-dht libp2p
Last synced: 6 months ago

Repository

Client that tracks CIDs in the IPFS network, pinning and requesting them to see how long they remain accessible.

Basic Info
  • Host: GitHub
  • Owner: cortze
  • Language: Go
  • Default Branch: master
  • Homepage:
  • Size: 16.3 MB
Statistics
  • Stars: 7
  • Watchers: 2
  • Forks: 1
  • Open Issues: 0
  • Releases: 2
Topics
dht go-ipfs ipfs kademlia-dht libp2p
Created almost 4 years ago · Last pushed over 1 year ago
Metadata Files
  • Readme
  • Citation

README.md

IPFS-CID-hoarder

An IPFS CID "crawler" that monitors the content shared in the IPFS Network. The tool serves as the data-gathering component for studying and measuring Protocol Labs' RFM 17 and RFM 1 (liveness of a document in the IPFS Network).

This tool also assisted in examining the provider record liveness of the Optimistic Provide algorithm.

Before you run

Be sure to save the logs generated by the tool to a file, because they are important for further analysis.

Motivation

In content-sharing platforms, distributed or not, the content always needs to be stored somewhere. In the IPFS network, although the content may live in more than one location, it normally starts from the IPFS client/server that holds the content and publishes the Provider Records (PRs) to the rest of the network. A PR contains the link between the CID (the content) being shared and the multiaddress where the content can be retrieved.

As explained in the Kademlia DHT paper, the PRs are shared with K=20 other peers of the network, where K corresponds to the peers closest to the CID by XOR distance. This step corresponds to the PROVIDE method of the IPFS Kad-DHT implementation, in which the client/server finds out which peers are closest to the CID and then sends them an ADD_PROVIDE message to notify them that they are inside the set of closest peers to that specific content. After this process, any other peer that walks the IPFS DHT to retrieve that CID asks for the closest peers to the CID and ends up asking one of these K=20 peers for the PR (in the ideal scenario).
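To make the XOR-distance selection concrete, the standalone sketch below hashes arbitrary IDs with SHA-256 and picks the K closest ones to a target key. It is illustrative only and is not the tool's (or go-libp2p's) actual code.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
	"sort"
)

// xorDistance returns the Kademlia XOR distance between two 256-bit keys.
func xorDistance(a, b [32]byte) *big.Int {
	d := make([]byte, 32)
	for i := range d {
		d[i] = a[i] ^ b[i]
	}
	return new(big.Int).SetBytes(d)
}

func main() {
	const k = 3 // K=20 in the real DHT; kept small here for readability
	target := sha256.Sum256([]byte("some CID"))

	peers := []string{"peerA", "peerB", "peerC", "peerD", "peerE"}
	// Sort candidate peers by the XOR distance of their hashed IDs to the target key.
	sort.Slice(peers, func(i, j int) bool {
		di := xorDistance(sha256.Sum256([]byte(peers[i])), target)
		dj := xorDistance(sha256.Sum256([]byte(peers[j])), target)
		return di.Cmp(dj) < 0
	})
	fmt.Println("closest peers:", peers[:k]) // these are the peers that would receive the ADD_PROVIDE message
}
```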

The theory looks solid: the K=20 value was initially chosen to increase the network's resilience to node churn. At the moment the overall network seems to be working fine, although there are some concerns about the impact of Hydra nodes and the churn rate.

The fact that K=20 peers keep the records means that as long as one of them is actively keeping the PR, the content should be retrievable. However, if four hours after publishing the PRs only one of the 20 peers still keeps the records, one could conclude that the network is exposed to a very high node churn rate, and therefore that the K value is no longer appropriate for the node churn and network size at that specific moment.
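As a back-of-the-envelope illustration (not a result produced by the tool): if each PR Holder independently stays reachable over the tracking window with probability p, the chance that at least one of the K holders still serves the record is 1 - (1 - p)^K. A tiny Go sketch of that calculation:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const k = 20
	// Hypothetical per-holder survival probabilities; real values have to come from measurements.
	for _, p := range []float64{0.9, 0.5, 0.1} {
		atLeastOne := 1 - math.Pow(1-p, k)
		fmt.Printf("p=%.1f -> P(at least 1 of %d holders alive) = %.6f\n", p, k, atLeastOne)
	}
}
```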

There are some concerns about the impact of the Hydra-boosters in the network, as they represent a more centralized infrastructure than the one targeted by the IPFS network. Hydra nodes are placed in the network to accelerate content discovery, and therefore the performance of IPFS. However, are they the ones keeping the IPFS network alive?

The focus of the study is to tackle these concerns by building a tool that can follow up with the peers chosen as PR Holders, bringing more insight into Provider Record liveness.

Methodology

IPFS-CID-Hoarder is the tool that will collect the data for the study (currently under development).

Flags

As explained before, the tool will have a set of inputs to configure the study:

CID Hoarder Flags:
  • Log level
  • Database endpoint
  • CID content size
  • Already published CIDs
  • The port that the ipfs-cid-hoarder will use for the hosts
  • Hydra filter for peers that correspond to Hydras, to avoid connections to them
  • A CID source that specifies where the CIDs shall be generated/published from
  • Number of CIDs that will be generated and published for the entire study
  • Batch size to reduce the overhead of pinging the CIDs
  • CID track frequency
  • A config file that contains this exact set of flags
  • Total track time
  • K value for the Kad DHT configuration

Publisher

In the publisher mode of the CID-Hoarder, the tool generates a set of CIDs, publishes them to the IPFS network, and tracks them over time. The randomness of the CIDs helps to cover the entire hash space homogeneously, which in turn helps to understand which ranges of the hash space suffer more from node churn or lack of peers.
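For illustration, a random CID can be derived from random content roughly as in the sketch below. It assumes the go-cid and go-multihash modules already listed in go.mod; the tool's actual generator may differ.

```go
package main

import (
	"crypto/rand"
	"fmt"

	"github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

// randomCID builds a CIDv1 (raw codec, SHA2-256) from `size` bytes of random content.
// Because SHA2-256 digests of random content are uniformly distributed, the resulting
// CIDs spread homogeneously across the hash space.
func randomCID(size int) (cid.Cid, []byte, error) {
	content := make([]byte, size)
	if _, err := rand.Read(content); err != nil {
		return cid.Undef, nil, err
	}
	digest, err := mh.Sum(content, mh.SHA2_256, -1)
	if err != nil {
		return cid.Undef, nil, err
	}
	return cid.NewCidV1(cid.Raw, digest), content, nil
}

func main() {
	c, _, err := randomCID(1024)
	if err != nil {
		panic(err)
	}
	fmt.Println("generated CID:", c.String())
}
```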

Given the number of CIDs and the number of workers, the tool automatically publishes the generated CIDs using that many workers.

For example, if we want to generate 1,000 CIDs with 200 workers, the tool spawns 200 workers to publish the CIDs concurrently.
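The batching behaviour can be pictured as a plain Go worker pool. This is a simplified sketch, with publish standing in for the real DHT provide call:

```go
package main

import (
	"fmt"
	"sync"
)

// publish is a placeholder for the real DHT provide call made by the hoarder.
func publish(cid string) {
	fmt.Println("published", cid)
}

func main() {
	const totalCIDs = 1000
	const workers = 200

	jobs := make(chan string, workers)
	var wg sync.WaitGroup

	// Spawn the requested number of workers; each publishes CIDs concurrently.
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for c := range jobs {
				publish(c)
			}
		}()
	}

	for i := 0; i < totalCIDs; i++ {
		jobs <- fmt.Sprintf("cid-%d", i)
	}
	close(jobs)
	wg.Wait()
}
```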

NOTE: The Provider Records WON'T be republished after 22 hours. The goal of the study is to track the theoretical record lifetime for a range of CIDs across the hash space.

Pinger

Once each CID has been generated and published to the network, or read from a file, the tool persists the recorded information to the DB and sends the CID information to the CID Ping-Orchester routine.

The CID Ping-Orchester keeps a list of CIDs ordered by their next connection time, and uses that time to determine whether a CID needs to be pinged again. If a CID's next connection time shows that the tracking interval has passed since the last ping, the orchester adds the CID info to a Ping Queue for the ping workers.
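The scheduling idea can be sketched with a standard-library min-heap keyed by the next ping time. This is illustrative only, and the type and field names below are hypothetical rather than the tool's own:

```go
package main

import (
	"container/heap"
	"fmt"
	"time"
)

type trackedCID struct {
	cid      string
	nextPing time.Time
}

// pingHeap orders tracked CIDs by their next scheduled ping time.
type pingHeap []*trackedCID

func (h pingHeap) Len() int            { return len(h) }
func (h pingHeap) Less(i, j int) bool  { return h[i].nextPing.Before(h[j].nextPing) }
func (h pingHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *pingHeap) Push(x interface{}) { *h = append(*h, x.(*trackedCID)) }
func (h *pingHeap) Pop() interface{} {
	old := *h
	n := len(old)
	item := old[n-1]
	*h = old[:n-1]
	return item
}

func main() {
	trackInterval := 30 * time.Minute
	pingQueue := make(chan *trackedCID, 16) // consumed by the ping workers

	h := &pingHeap{
		{cid: "cid-A", nextPing: time.Now().Add(-time.Minute)}, // already due
		{cid: "cid-B", nextPing: time.Now().Add(trackInterval)},
	}
	heap.Init(h)

	// Dispatch every CID whose next ping time has already passed, then reschedule it.
	for h.Len() > 0 && (*h)[0].nextPing.Before(time.Now()) {
		c := heap.Pop(h).(*trackedCID)
		pingQueue <- c
		c.nextPing = time.Now().Add(trackInterval)
		heap.Push(h, c)
	}
	close(pingQueue)
	for c := range pingQueue {
		fmt.Println("pinging PR holders of", c.cid)
	}
}
```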

Each CID Ping worker tries to establish a connection with the PR Holders of the CID read from the Ping Queue, tracking the dial time, whether the holders are active, the reported error if the connection attempt failed, and whether they still keep the PR. Furthermore, the tool also walks the DHT looking for the K closest peers at the moment of pinging the PR Holders, keeping track of the total hops needed to reach the closest peers and the minimum hops needed to know all of them, and it tries to retrieve the PR from the DHT itself.

After attempting to connect to all the PR Holders of a CID, the tool persists the Ping Results and their summary (Fetch Results) into the DB.

The tool will keep pinging PR Holders every tick until the CID Track Time is completed.

By changing the configuration parameters of the tool, we will be able to generate a complete analysis of the Provider Record Liveness, including tests with different K values that will help us understand which value of K best fits the current network churn and size.

What does the hoarder keep track of?

For each generated CID, the tool keeps track of the following (each block corresponds to a table in a PostgreSQL database):

CidInfo:
  • The CID itself
  • Generation Time
  • Number of peers requested to keep the PR
  • Creator of the CID
  • The K value
  • [] PR Holder Info
  • [] PR Ping Results
  • Provide Method Duration
  • Next ping time
  • The ping counter (how many times a CID should be pinged)

In the case of the publisher: after the generation and publication of the CID, the tool waits for the result of each ADD_PROVIDE message sent to the K closest peers.

Once we know which peers are holding the PR, the tool keeps track of the following info for each of them:

PR Holder Info:
  • Peer ID
  • Multiaddresses
  • User Agent
  • Client Type
  • Client Version

In the case of the publisher: starting from the first ADD_PROVIDE connection to the K closest peers, the tool fills in the following tables as a result of the PR holders' pinging process:

```
Fetch Results: (Summary of the K PR Ping Results)
- CID
- Fetch round
- TotalHops
- HopsToClosest
- Fetch Time
- Fetch Round Duration
- PR Holders Ping Duration
- Find-Providers Duration
- Get K Close-Peers Duration
- [] PR Ping Results
- Is Retrievable
- [] K Closest peers to the CID

PR Ping Results: (Individual ping for a Peer ID per round and CID)
- CID
- Peer ID
- Ping Round
- Fetch Time
- Fetch Duration
- Is Active
- Has Records
- Connection Error
```
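For illustration, a single PR ping record can be pictured as the Go struct below. The field names mirror the list above, but the type itself is hypothetical and is not the tool's actual Go types or SQL schema:

```go
package main

import (
	"fmt"
	"time"
)

// prPingResult mirrors the "PR Ping Results" fields listed above:
// one entry per PR Holder, per CID, per ping round.
type prPingResult struct {
	CID             string
	PeerID          string
	PingRound       int
	FetchTime       time.Time
	FetchDuration   time.Duration
	IsActive        bool
	HasRecords      bool
	ConnectionError string
}

func main() {
	r := prPingResult{
		CID:           "example-cid",
		PeerID:        "example-peer-id",
		PingRound:     3,
		FetchTime:     time.Now(),
		FetchDuration: 850 * time.Millisecond,
		IsActive:      true,
		HasRecords:    false,
	}
	fmt.Printf("%+v\n", r)
}
```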

Analysis of the results

The analyzer folder contains a set of Jupyter scripts to analyze and draw conclusions from the gathered data.

Cid distribution in hash space

Visualizes the homogeneity of CIDs in the hash space. The CID range in the Kademlia DHT is [0, 2^256 - 1], but here the CIDs are normalized to [0, 1].
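Assuming the normalization is simply the 256-bit key divided by 2^256 - 1, a minimal Go sketch of the mapping (the notebook itself works in Python):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
)

// normalize maps a 256-bit Kademlia key to the [0, 1] range by dividing by 2^256 - 1.
func normalize(key [32]byte) float64 {
	x := new(big.Float).SetInt(new(big.Int).SetBytes(key[:]))
	max := new(big.Float).SetInt(new(big.Int).Sub(
		new(big.Int).Lsh(big.NewInt(1), 256), big.NewInt(1)))
	ratio, _ := new(big.Float).Quo(x, max).Float64()
	return ratio
}

func main() {
	key := sha256.Sum256([]byte("example CID"))
	fmt.Printf("normalized position in hash space: %.6f\n", normalize(key))
}
```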

Cid pinging phase

This file tracks 3 basic things:

  • The median time of each ping round
  • Tracks the activity or online status of the PR holders
    • Total PR holders
    • Only Hydra PR holders
    • Only non-Hydra PR holders
  • Tracks whether the PR holders share the PRs
    • Total PR holders
    • Only Hydra PR holders
    • Only non-Hydra PR holders

Cid publication phase

Extracts metrics from the publication phase like:

  1. Successful PR Holders: CDF, PDF
  2. Total publication time distribution: CDF, PDF, quartile distributions
  3. Client distribution across the whole set of PR Holders
  4. Client distribution for the PR Holders of each CID

Hops analysis

Visualizes the hops taken by the getClosestPeers DHT lookup as a tree.

In degree ratio

The number of PRs that remain inside the K closest peers over the ping rounds.

Log analyzer

Important: the scripts below parse the logs generated by the CID Hoarder. Be sure to save the logs to a file so they can be read later.

PR containing multiaddress study

The script is divided into three main stages:

  1. Analyzing individually the direct reply of the PR holders for the entire study
  2. Analyzing the reply of those peers sharing the PRs during the DHT lookup over the study
  3. Analyzing the final result of the DHT lookup over the study

Retrievability and multiaddresses study

The script is divided into three main stages:

  1. Analyzing individually the direct reply of the PR holders for the entire study
  2. Analyzing the reply of those peers sharing the PRs during the DHT lookup over the study
  3. Analyzing the final result of the DHT lookup over the study

Contribution of the tool in studies

Install

Requirements

To compile the tool you will need:

  • make
  • Go 1.17 (Go 1.18 is still not supported by a few imported modules)
  • A Postgres instance (it doesn't matter whether it runs in Docker or locally)

Compilation

Download the repo, install the dependencies, and compile it yourself:

```
git clone https://github.com/cortze/ipfs-cid-hoarder.git
cd ipfs-cid-hoarder
make dependencies
make install
```

You are ready to go!

Usage

The CID-Hoarder has a set of arguments to configure each of the studies:

```
ipfs-cid-hoarder run [command options] [arguments...]

OPTIONS:
   --log-level value          verbosity of the logs that will be displayed [debug, warn, info, error] [$IPFSCIDHOARDERLOGLEVEL]
   --priv-key value           REMOVED: private key to initialize the host (to avoid generating node churn in the network) [$IPFSCIDHOARDERPRIVKEY]
   --database-endpoint value  database endpoint (e.g. /Path/to/sqlite3.db) (default: ./data/ipfs-hoarder-db.db) [$IPFSCIDHOARDERDATABASEENDPOINT]
   --port value               the port that the hosts will use in the hoarder (default: 9010) [$IPFSCIDHOARDERPORT]
   --cid-source value         defines the mode in which the tool runs [random-content-gen, bitswap]
   --cid-content-size value   PROBABLY NOT NEEDED: size in KB of the random block generated (default: 1MB) [$IPFSCIDHOARDERCIDCONTENTSIZE]
   --cid-number value         number of CIDs that will be generated and published for the study (default: 1000 CIDs) [$IPFSCIDHOARDERCIDNUMBER]
   --workers value            max number of CIDs in each generation batch (default: 250 CIDs) [$IPFSCIDHOARDERBATCHSIZE]
   --req-interval value       delay between PR Holder pings for each CID (example: '30m', '1h', '60s') (default: 30m) [$IPFSCIDHOARDERREQINTERVAL]
   --study-duration value     max time for the study to run (example: '24h', '35h', '48h') (default: 48h) [$IPFSCIDHOARDERSTUDYDURATION]
   --k value                  number of peers that we want to forward the Provider Records to (default: K=20) [$IPFSCIDHOARDERK]
   --hydra-filter value       boolean to activate the filter that avoids connections to Hydras (default: false) [$IPFSCIDHOARDERHYDRAFILTER]
   --config-file value        NOT YET TESTED: reads a config struct from the specified json file (default: config.json) [$IPFSCIDHOARDERCONFIGFILE]
   --help, -h                 show help (default: false)
```

Maintainers

@cortze

Contributing

The project is open for everyone to contribute!

Owner

  • Name: Mikel Cortes
  • Login: cortze
  • Kind: user
  • Location: Barcelona
  • Company: Barcelona Supercomputing Center

Research Engineer and Ph.D. student on distributed P2P networks: libp2p, IPFS, GossipSub, Eth CL, and more.

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: IPFS CID Hoarder
message: >-
  if you want to cite the code, please use the
  provided template
type: software
authors:
  - given-names: Mikel
    family-names: Cortes-Goicoechea
    email: cortze@protonmail.com
    affiliation: Barcelona Supercomputing Center
    orcid: 'https://orcid.org/0000-0003-3167-6014'
url: "https://github.com/cortze/ipfs-cid-hoarder"

GitHub Events

Total
  • Watch event: 2
Last Year
  • Watch event: 2

Committers

Last synced: about 1 year ago

All Time
  • Total Commits: 322
  • Total Committers: 3
  • Avg Commits per committer: 107.333
  • Development Distribution Score (DDS): 0.422
Past Year
  • Commits: 8
  • Committers: 2
  • Avg Commits per committer: 4.0
  • Development Distribution Score (DDS): 0.5
Top Committers
Name · Email · Commits
  • FotiosBistas · 8****s · 186
  • cortze · m****3@g****m · 97
  • cortze · c****e@p****m · 39

Issues and Pull Requests

Last synced: about 1 year ago

All Time
  • Total issues: 4
  • Total pull requests: 25
  • Average time to close issues: 2 months
  • Average time to close pull requests: 29 days
  • Total issue authors: 2
  • Total pull request authors: 2
  • Average comments per issue: 1.5
  • Average comments per pull request: 0.12
  • Merged pull requests: 24
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 3
  • Average time to close issues: N/A
  • Average time to close pull requests: 2 months
  • Issue authors: 0
  • Pull request authors: 2
  • Average comments per issue: 0
  • Average comments per pull request: 0.33
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • FotiosBistas (3)
  • cortze (1)
Pull Request Authors
  • cortze (22)
  • FotiosBistas (3)
Top Labels
Issue Labels
Pull Request Labels
bug (1) enhancement (1)

Packages

  • Total packages: 1
  • Total downloads: unknown
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 2
proxy.golang.org: github.com/cortze/ipfs-cid-hoarder
  • Versions: 2
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 7.0%
Forks count: 7.0%
Average: 8.1%
Stargazers count: 9.0%
Dependent repos count: 9.3%
Last synced: 6 months ago

Dependencies

go.mod go
  • github.com/beorn7/perks v1.0.1
  • github.com/btcsuite/btcd v0.22.0-beta
  • github.com/cespare/xxhash/v2 v2.1.1
  • github.com/cheekybits/genny v1.0.0
  • github.com/cpuguy83/go-md2man/v2 v2.0.1
  • github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c
  • github.com/ethereum/go-ethereum v1.10.8
  • github.com/flynn/noise v1.0.0
  • github.com/francoispqt/gojay v1.2.13
  • github.com/fsnotify/fsnotify v1.4.9
  • github.com/go-stack/stack v1.8.0
  • github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0
  • github.com/gogo/protobuf v1.3.2
  • github.com/golang/protobuf v1.5.2
  • github.com/golang/snappy v0.0.3
  • github.com/google/go-cmp v0.5.7
  • github.com/google/gopacket v1.1.19
  • github.com/google/uuid v1.3.0
  • github.com/gorilla/websocket v1.4.2
  • github.com/hashicorp/errwrap v1.0.0
  • github.com/hashicorp/go-multierror v1.1.1
  • github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d
  • github.com/huin/goupnp v1.0.2
  • github.com/ipfs/go-cid v0.0.7
  • github.com/ipfs/go-datastore v0.5.0
  • github.com/ipfs/go-ipfs-util v0.0.2
  • github.com/ipfs/go-ipns v0.1.2
  • github.com/ipfs/go-log v1.0.5
  • github.com/ipfs/go-log/v2 v2.3.0
  • github.com/ipld/go-ipld-prime v0.9.0
  • github.com/jackc/chunkreader/v2 v2.0.1
  • github.com/jackc/pgconn v1.12.1
  • github.com/jackc/pgio v1.0.0
  • github.com/jackc/pgpassfile v1.0.0
  • github.com/jackc/pgproto3/v2 v2.3.0
  • github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b
  • github.com/jackc/pgtype v1.11.0
  • github.com/jackc/pgx/v4 v4.16.1
  • github.com/jackc/puddle v1.2.1
  • github.com/jackpal/go-nat-pmp v1.0.2
  • github.com/jbenet/go-temp-err-catcher v0.1.0
  • github.com/jbenet/goprocess v0.1.4
  • github.com/kilic/bls12-381 v0.1.0
  • github.com/klauspost/compress v1.11.7
  • github.com/klauspost/cpuid/v2 v2.0.9
  • github.com/koron/go-ssdp v0.0.2
  • github.com/libp2p/go-addr-util v0.1.0
  • github.com/libp2p/go-buffer-pool v0.0.2
  • github.com/libp2p/go-cidranger v1.1.0
  • github.com/libp2p/go-conn-security-multistream v0.3.0
  • github.com/libp2p/go-eventbus v0.2.1
  • github.com/libp2p/go-flow-metrics v0.0.3
  • github.com/libp2p/go-libp2p v0.16.0
  • github.com/libp2p/go-libp2p-asn-util v0.1.0
  • github.com/libp2p/go-libp2p-autonat v0.6.0
  • github.com/libp2p/go-libp2p-blankhost v0.2.0
  • github.com/libp2p/go-libp2p-core v0.11.0
  • github.com/libp2p/go-libp2p-discovery v0.6.0
  • github.com/libp2p/go-libp2p-kad-dht v0.15.0
  • github.com/libp2p/go-libp2p-kbucket v0.4.7
  • github.com/libp2p/go-libp2p-mplex v0.4.1
  • github.com/libp2p/go-libp2p-nat v0.1.0
  • github.com/libp2p/go-libp2p-noise v0.3.0
  • github.com/libp2p/go-libp2p-peerstore v0.4.0
  • github.com/libp2p/go-libp2p-pnet v0.2.0
  • github.com/libp2p/go-libp2p-quic-transport v0.15.0
  • github.com/libp2p/go-libp2p-record v0.1.3
  • github.com/libp2p/go-libp2p-swarm v0.8.0
  • github.com/libp2p/go-libp2p-tls v0.3.1
  • github.com/libp2p/go-libp2p-transport-upgrader v0.5.0
  • github.com/libp2p/go-libp2p-yamux v0.6.0
  • github.com/libp2p/go-maddr-filter v0.1.0
  • github.com/libp2p/go-mplex v0.3.0
  • github.com/libp2p/go-msgio v0.1.0
  • github.com/libp2p/go-nat v0.1.0
  • github.com/libp2p/go-netroute v0.1.6
  • github.com/libp2p/go-openssl v0.0.7
  • github.com/libp2p/go-reuseport v0.1.0
  • github.com/libp2p/go-reuseport-transport v0.1.0
  • github.com/libp2p/go-sockaddr v0.1.1
  • github.com/libp2p/go-stream-muxer-multistream v0.3.0
  • github.com/libp2p/go-tcp-transport v0.4.0
  • github.com/libp2p/go-ws-transport v0.5.0
  • github.com/libp2p/go-yamux/v2 v2.3.0
  • github.com/lucas-clemente/quic-go v0.24.0
  • github.com/marten-seemann/qtls-go1-16 v0.1.4
  • github.com/marten-seemann/qtls-go1-17 v0.1.0
  • github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd
  • github.com/mattn/go-isatty v0.0.13
  • github.com/matttproud/golang_protobuf_extensions v1.0.1
  • github.com/miekg/dns v1.1.43
  • github.com/migalabs/armiarma v1.0.0
  • github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b
  • github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc
  • github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1
  • github.com/minio/sha256-simd v1.0.0
  • github.com/mr-tron/base58 v1.2.0
  • github.com/multiformats/go-base32 v0.0.3
  • github.com/multiformats/go-base36 v0.1.0
  • github.com/multiformats/go-multiaddr v0.4.0
  • github.com/multiformats/go-multiaddr-dns v0.3.1
  • github.com/multiformats/go-multiaddr-fmt v0.1.0
  • github.com/multiformats/go-multibase v0.0.3
  • github.com/multiformats/go-multicodec v0.4.1
  • github.com/multiformats/go-multihash v0.0.15
  • github.com/multiformats/go-multistream v0.2.2
  • github.com/multiformats/go-varint v0.0.6
  • github.com/nxadm/tail v1.4.8
  • github.com/onsi/ginkgo v1.16.4
  • github.com/opentracing/opentracing-go v1.2.0
  • github.com/pkg/errors v0.9.1
  • github.com/polydawn/refmt v0.0.0-20190807091052-3d65705ee9f1
  • github.com/prometheus/client_golang v1.11.0
  • github.com/prometheus/client_model v0.2.0
  • github.com/prometheus/common v0.30.0
  • github.com/prometheus/procfs v0.7.3
  • github.com/protolambda/bls12-381-util v0.0.0-20210720105258-a772f2aac13e
  • github.com/protolambda/zrnt v0.22.0
  • github.com/protolambda/ztyp v0.1.9
  • github.com/russross/blackfriday/v2 v2.1.0
  • github.com/sirupsen/logrus v1.8.1
  • github.com/spacemonkeygo/spacelog v0.0.0-20180420211403-2296661a0572
  • github.com/stretchr/testify v1.7.1
  • github.com/syndtr/goleveldb v1.0.1-0.20210305035536-64b5b1c73954
  • github.com/urfave/cli/v2 v2.5.1
  • github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1
  • github.com/whyrusleeping/multiaddr-filter v0.0.0-20160516205228-e903e4adabd7
  • go.opencensus.io v0.23.0
  • go.uber.org/atomic v1.9.0
  • go.uber.org/multierr v1.7.0
  • go.uber.org/zap v1.19.0
  • golang.org/x/crypto v0.0.0-20210921155107-089bfa567519
  • golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3
  • golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f
  • golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
  • golang.org/x/sys v0.0.0-20211019181941-9d821ace8654
  • golang.org/x/text v0.3.7
  • golang.org/x/tools v0.1.11-0.20220316014157-77aa08bb151a
  • golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
  • google.golang.org/protobuf v1.27.1
  • gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7
  • gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b
go.sum go
  • 1489 dependencies
analyzer/requirements.txt pypi
  • Jinja2 ==3.1.2
  • MarkupSafe ==2.1.1
  • Pillow ==9.1.1
  • Pygments ==2.12.0
  • Send2Trash ==1.8.0
  • argon2-cffi ==21.3.0
  • argon2-cffi-bindings ==21.2.0
  • asttokens ==2.0.5
  • attrs ==21.4.0
  • backcall ==0.2.0
  • base58 ==1.0.3
  • beautifulsoup4 ==4.11.1
  • bleach ==5.0.0
  • cffi ==1.15.0
  • cycler ==0.11.0
  • debugpy ==1.6.0
  • decorator ==5.1.1
  • defusedxml ==0.7.1
  • entrypoints ==0.4
  • executing ==0.8.3
  • fastjsonschema ==2.15.3
  • fonttools ==4.33.3
  • importlib-resources ==5.7.1
  • ipykernel ==6.13.0
  • ipython ==8.3.0
  • ipython-genutils ==0.2.0
  • jedi ==0.18.1
  • jsonschema ==4.5.1
  • jupyter-client ==7.3.1
  • jupyter-core ==4.10.0
  • jupyterlab-pygments ==0.2.2
  • kiwisolver ==1.4.2
  • matplotlib ==3.5.2
  • matplotlib-inline ==0.1.3
  • mistune ==0.8.4
  • morphys ==1.0
  • multihash ==0.1.1
  • nbclient ==0.6.3
  • nbconvert ==6.5.0
  • nbformat ==5.4.0
  • nest-asyncio ==1.5.5
  • notebook ==6.4.11
  • numpy ==1.22.4
  • packaging ==21.3
  • pandas ==1.4.2
  • pandocfilters ==1.5.0
  • parso ==0.8.3
  • pexpect ==4.8.0
  • pickleshare ==0.7.5
  • prometheus-client ==0.14.1
  • prompt-toolkit ==3.0.29
  • psutil ==5.9.1
  • psycopg2 ==2.9.3
  • ptyprocess ==0.7.0
  • pure-eval ==0.2.2
  • py-cid ==0.3.0
  • py-multibase ==1.0.3
  • py-multicodec ==0.2.1
  • py-multihash ==0.2.3
  • pycparser ==2.21
  • pyparsing ==3.0.9
  • pyrsistent ==0.18.1
  • python-baseconv ==1.2.2
  • python-dateutil ==2.8.2
  • pytz ==2022.1
  • pyzmq ==23.0.0
  • six ==1.16.0
  • soupsieve ==2.3.2.post1
  • stack-data ==0.2.0
  • terminado ==0.15.0
  • tinycss2 ==1.1.1
  • tornado ==6.1
  • traitlets ==5.2.1.post0
  • varint ==1.0.2
  • wcwidth ==0.2.5
  • webencodings ==0.5.1
  • zipp ==3.8.0
Dockerfile docker
  • debian buster-slim build
  • golang 1.19.7-buster build