encoda

↔️ A format converter for Stencila documents

https://github.com/stencila/encoda

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.7%) to scientific vocabulary

Keywords

converter document nodejs pandoc remark semantic

Keywords from Contributors

generative-model transformers cryptocurrencies gym-environments reinforcement-learning robustness pypy mot argument-parser jekyll
Last synced: 7 months ago

Repository

↔️ A format converter for Stencila documents

Basic Info
Statistics
  • Stars: 35
  • Watchers: 9
  • Forks: 10
  • Open Issues: 93
  • Releases: 252
Topics
converter document nodejs pandoc remark semantic
Created over 8 years ago · Last pushed 7 months ago
Metadata Files
Readme Changelog Contributing License Codemeta

README.md

Encoda

Announcement

For some time our main focus has been a rewrite of our Stencila platform (v2). We are porting over codecs and other functionality from this repo to the v2 of our Stencila platform.

This means that we are unable to be as active with contributions to this project as before. eLife have volunteered to continue development on this repository until such a time as v2 of Stencila can meet their needs. They only need the codecs that convert from JATS to JSON.

From v2.0.0 of this project we will only continue to support the conversion from JATS to JSON. If you want to use any other formats then please see if they are available in Stencila or use the v1.0.3 release.

We have chosen at this time to leave the rest of the README.md as it was before the removal of much of the code. This does mean that the README.md is out of date. We may prioritise updating it in future.

Codecs for structured, semantic, composable, and executable documents

Build status Code coverage NPM Docs

Introduction

"A codec is a device or computer program for encoding or decoding a digital data stream or signal. Codec is a portmanteau of coder-decoder. - Wikipedia

Encoda provides a collection of codecs for converting between, and composing together, documents in various formats. The aim is not to achieve perfect lossless conversion between alternative document formats; there are already several tools for that. Instead the focus of Encoda is to use existing tools to encode and compose semantic documents in alternative formats.
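The decode/encode split can be sketched in TypeScript. This is a toy illustration, not Encoda's real API: the `Paragraph` node, the `Codec` interface, and both codecs below are invented for the example, and real Encoda codecs operate on files and a much richer Stencila schema.

```typescript
// Toy model of the codec idea: each format implements decode
// (format -> document node) and encode (node -> format), so any
// two formats can be bridged through the shared node type.
interface Paragraph {
  type: "Paragraph";
  content: string[];
}

interface Codec {
  decode(content: string): Paragraph;
  encode(node: Paragraph): string;
}

// Plain text: each line becomes one content item.
const txt: Codec = {
  decode: (content) => ({ type: "Paragraph", content: content.split("\n") }),
  encode: (node) => node.content.join("\n"),
};

// A second (equally toy) format that ends its output with a newline.
const md: Codec = {
  decode: (content) => ({
    type: "Paragraph",
    content: content.trim().split("\n"),
  }),
  encode: (node) => node.content.join("\n") + "\n",
};

// Conversion is always decode -> node -> encode: adding a new format
// means writing one codec, not one converter per format pair.
function convert(content: string, from: Codec, to: Codec): string {
  return to.encode(from.decode(content));
}
```

Because every codec targets the same node type, `convert` never needs to know about specific format pairs.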

Formats

As far as possible, Encoda piggybacks on top of existing tools for parsing and serializing documents in various formats. It uses extensions to schema.org as the central data model for all documents and, for many formats, simply transforms the data model of the external tool (e.g. Pandoc types, SheetJS spreadsheet model) to that schema ("decoding") and back again ("encoding"). In this sense, you can think of Encoda as a Rosetta Stone with schema.org at its centre: because every format converts via this shared model, supporting a new format requires only one new codec rather than a converter for every pair of formats.

⚡ Tip: If a codec for your favorite format is missing below, see if there is already an issue for it and 👍 or comment. If there is no issue regarding the converter you need, feel free to create one.

| Format | Codec | Powered by | Status |
| --- | --- | --- | --- |
| **Text** | | | |
| Plain text | txt | toString | ✔ |
| Markdown | md | Remark | ✔ |
| LaTeX | latex | Pandoc | α |
| Microsoft Word | docx | Pandoc | β |
| Google Docs | gdoc | JSON | β |
| Open Document Text | odt | Pandoc | α |
| HTML | html | jsdom, hyperscript | ✔ |
| JATS XML | jats | xml-js | ✔ |
| | jats-pandoc | Pandoc | β |
| Portable Document Format | pdf | pdf-lib, Puppeteer | β |
| **Math** | | | |
| TeX | tex | mathconverter | ✔ |
| MathML | mathml | MathJax | ✔ |
| **Visualization** | | | |
| Plotly | plotly | Plotly.js | ✔ |
| Vega / Vega-Lite | vega | Vega | ✔ |
| **Bibliographic** | | | |
| Citation Style Language JSON | csl | Citation.js | ✔ |
| BibTeX | bib | Citation.js | ✔ |
| **Notebooks** | | | |
| Jupyter | ipynb | JSON | ✔ |
| RMarkdown | xmd | Remark | ✔ |
| **Spreadsheets** | | | |
| Microsoft Excel | xlsx | SheetJS | β |
| Open Document Spreadsheet | ods | SheetJS | β |
| **Tabular data** | | | |
| CSV | csv | SheetJS | β |
| Tabular Data Package | tdp | datapackage-js | α |
| **Collections** | | | |
| Filesystem Directory | dir | fs | β |
| **Data interchange, other** | | | |
| JSON | json | JSON | ✔ |
| JSON-LD | jsonld | jsonld.js | ✔ |
| JSON5 | json5 | json5 | ✔ |
| YAML | yaml | js-yaml | ✔ |
| Pandoc | pandoc | Pandoc | ✔ |
| Reproducible PNG | rpng | Puppeteer | ✔ |
| XML | xml | xml-js | ✔ |

Key

  • ✗: Not yet implemented
  • α: Alpha, initial implementation
  • β: Beta, ready for user testing
  • ✔: Ready for production use

Publishers

Several of the codecs in Encoda deal with fetching content from a particular publisher. For example, to get an eLife article and read it in Markdown:

```bash
stencila convert https://elifesciences.org/articles/45187v2 ye-et-al-2019.md
```

Some of these publisher codecs deal with metadata, e.g.

```bash
stencila convert "Watson and Crick 1953" - --from crossref --to yaml
```

```yaml
type: Article
title: Genetical Implications of the Structure of Deoxyribonucleic Acid
authors:
  - familyNames:
      - WATSON
    givenNames:
      - J. D.
    type: Person
  - familyNames:
      - CRICK
    givenNames:
      - F. H. C.
    type: Person
datePublished: '1953,5'
isPartOf:
  issueNumber: '4361'
  isPartOf:
    volumeNumber: '171'
    isPartOf:
      title: Nature
      type: Periodical
    type: PublicationVolume
  type: PublicationIssue
```

| Source | Codec | Base codec/s | Status | Coverage |
| --- | --- | --- | --- | --- |
| **General** | | | | |
| HTTP | http | Based on Content-Type or extension | β | ![][http-cov] |
| **Person** | | | | |
| ORCID | orcid | jsonld | β | ![][orcid-cov] |
| **Article metadata** | | | | |
| DOI | doi | csl | β | ![][doi-cov] |
| Crossref | crossref | jsonld | β | ![][crossref-cov] |
| **Article content** | | | | |
| eLife | elife | jats | β | ![][elife-cov] |
| PLoS | plos | jats | β | ![][plos-cov] |

Install

The easiest way to use Encoda is to install the stencila command line tool. Encoda powers stencila convert, and other commands, in that CLI. However, the version of Encoda in stencila can lag behind the version in this repo, so if you want the latest functionality, install Encoda as a Node.js package:

```bash
npm install @stencila/encoda --global
```

Use

Encoda is intended to be used primarily as a library for other applications. However, it comes with a simple command line script which allows you to use the convert function directly.

Converting files

```bash
encoda convert notebook.ipynb notebook.docx
```

Encoda will determine the input and output formats based on the file extensions. You can override these using the --from and --to options, e.g.
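That lookup can be sketched roughly as below. This is an illustration only: the alias table and the plain-text fallback are assumptions made for the example, not Encoda's actual rules, which also consider media types for remote content.

```typescript
// Guess a format name from a file path's extension, letting an
// explicit --from/--to style override win. Aliases are illustrative.
const aliases: Record<string, string> = {
  markdown: "md",
  rmd: "xmd",
  yml: "yaml",
};

function formatFromPath(path: string, override?: string): string {
  if (override) return override; // --from / --to takes precedence
  const match = /\.([^./\\]+)$/.exec(path);
  const ext = match ? match[1].toLowerCase() : "txt"; // assumed fallback
  return aliases[ext] ?? ext;
}
```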

```bash
encoda convert notebook.ipynb notebook.xml --to jats
```

You can also convert to more than one file / format (in this case the --to argument only applies to the first output file) e.g.

```bash
encoda convert report.docx report.Rmd report.html report.jats
```

Converting folders

You can decode an entire directory into a Collection. Encoda will traverse the directory, including subdirectories, decoding each file matching your glob pattern. You can then encode the Collection using the dir codec into a tree of HTML files e.g.

```bash
encoda convert myproject myproject-published --to dir --pattern '**/*.{rmd, csv}'
```
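A rough sketch of how such a glob selects files (Encoda itself delegates this to the globby package; the mini-translator below is an assumption for illustration and supports only the `**`, `*`, and `{a,b}` forms used above):

```typescript
// Convert a small subset of glob syntax (**, *, {a,b}) to a RegExp.
// Simplified: alternatives inside {...} are assumed to be plain text.
function globToRegExp(pattern: string): RegExp {
  let re = "";
  let i = 0;
  while (i < pattern.length) {
    const ch = pattern[i];
    if (pattern.startsWith("**/", i)) {
      re += "(?:.*/)?"; // any depth of directories, including none
      i += 3;
    } else if (ch === "*") {
      re += "[^/]*"; // any characters within one path segment
      i += 1;
    } else if (ch === "{") {
      const end = pattern.indexOf("}", i);
      const alts = pattern.slice(i + 1, end).split(",").map((s) => s.trim());
      re += "(?:" + alts.join("|") + ")"; // {rmd, csv} -> (?:rmd|csv)
      i = end + 1;
    } else {
      re += ch.replace(/[.+?^$()|[\]\\]/g, "\\$&"); // escape regex chars
      i += 1;
    }
  }
  return new RegExp("^" + re + "$");
}
```

For example, `**/*.{rmd, csv}` becomes `^(?:.*/)?[^/]*\.(?:rmd|csv)$`, which matches `.rmd` and `.csv` files at any depth.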

Converting command line input

You can also read content from the first argument. In that case, you'll need to specify the --from format, e.g.

```bash
encoda convert "{type: 'Paragraph', content: ['Hello world!']}" --from json5 paragraph.md
```

You can send output to the console by using - as the second argument and specifying the --to format e.g.

```bash
encoda convert paragraph.md - --to yaml
```

| Option | Description |
| --- | --- |
| `--from` | The format of the input content e.g. `--from md` |
| `--to` | The format for the output content e.g. `--to html` |
| `--theme` | The theme for the output (only applies to HTML, PDF and RPNG output) e.g. `--theme eLife`. Either a Thema theme name or a path/URL to a directory containing a `styles.css` and an `index.js` file. |
| `--standalone` | Generate a standalone document, not a fragment (default `true`) |
| `--bundle` | Bundle all assets (e.g. images, CSS and JS) into the document (default `false`) |
| `--debug` | Print debugging information |

Using with Executa

Encoda exposes the decode and encode methods of the Executa API. Register Encoda so that it can be discovered by other executors on your machine:

```bash
npm run register
```

You can then use Encoda as a plugin for Executa that provides additional format conversion capabilities. For example, you can use the query REPL on a Markdown document:

```bash
npx executa query CHANGELOG.md --repl
```

You can then use the REPL to explore the structure of the document and do things like create summary documents from it. For example, let's say, for some reason, we wanted to create a short JATS XML file with the five most recent releases of this package:

```
jmp > %format jats
jmp > %dest latest-releases.jats.xml
jmp > {type: 'Article', content: content[? type==`Heading` && depth==`1`] | [1:5]}
```

Which creates the latest-releases.jats.xml file:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.1 20151215//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
  <front>
    <title-group>
      <article-title/>
    </title-group>
    <contrib-group/>
  </front>
  <body>
    <sec>
      <title>
        <ext-link ext-link-type="uri" xlink:href="https://github.com/stencila/encoda/compare/v0.79.0...v0.80.0">0.80.0</ext-link>
        (2019-09-30)
      </title>
    </sec>
    ...
```

You can query a document in any format supported by Encoda. As another example, let's fetch a CSV file from GitHub and get the names of its columns:

```bash
npx executa query https://gist.githubusercontent.com/jncraton/68beb88e6027d9321373/raw/381dcf8c0d4534d420d2488b9c60b1204c9f4363/starwars.csv --repl
```

```
🛈 INFO encoda:http Fetching "https://gist.githubusercontent.com/jncraton/68beb88e6027d9321373/raw/381dcf8c0d4534d420d2488b9c60b1204c9f4363/starwars.csv"
jmp > columns[].name
[ 'SetID', 'Number', 'Variant', 'Theme', 'Subtheme',
  'Year', 'Name', 'Minifigs', 'Pieces', 'UKPrice',
  'USPrice', 'CAPrice', 'EUPrice', 'ImageURL', 'Owned',
  'Wanted', 'QtyOwned' ]
jmp >
```

See the %help REPL command for more examples.

Note: If you have executa installed globally, then the npx prefix above is not necessary.

Documentation

Self-hosted documentation (converted from various formats to HTML) and API documentation (generated from source code) are available at: https://stencila.github.io/encoda.

Develop

Check how to contribute back to the project. All PRs are most welcome! Thank you!

Clone the repository and install a development environment:

```bash
git clone https://github.com/stencila/encoda.git
cd encoda
npm install
```

You can manually test conversion using the current TypeScript source:

```bash
npm start -- convert simple.md simple.html
```

That can be slow because the TypeScript has to be compiled on the fly (using ts-node). Alternatively, compile the TypeScript to JavaScript first, and then run node on the dist folder:

```bash
npm run build:dist
node dist convert simple.md simple.html
```

If you are using VSCode, you can use the Auto Attach feature to attach to the CLI when running the debug NPM script:

```bash
npm run debug -- convert simple.gdoc simple.ipynb
```

A simple script to convert JATS to JSON:

```bash
cat simple-jats.xml | npm run convert-jats --silent > simple.json
```

Testing

Running tests locally

Run the test suite using:

```bash
npm test
```

Or, run a single test file e.g.

```bash
npx jest tests/xlsx.test.ts --watch
```

To display debug logs during testing set the environment variable DEBUG=1, e.g.

```bash
DEBUG=1 npm test
```

To get coverage statistics:

```bash
npm run cover
```

There's also a Makefile if you prefer to run tasks that way e.g.

```bash
make lint cover
```

Running test in Docker

You can also test this package within a Docker container:

```bash
npm run test:docker
```

Writing tests

Recording and using network fixtures

As far as possible, tests should be able to run with no network access. We use Nock Back to record and play back network requests and responses. Use the nockRecord helper function for this with the convention of starting the fixture file with nock-record- e.g.

```ts
const stopRecording = await nockRecord('nock-record-<name-of-test>.json')
// Do some things that connect to the interwebs
stopRecording()
```

Note that the HTTP fetcher implements caching so that you may need to remove the cache for the recording of fixtures to work e.g. rm -rf /tmp/stencila/encoda/cache/.

If there are changes in the URLs that your test fetches, or you want to check that your test still works against an external API that may have changed, remove the Nock recording and rerun the test, e.g.

```sh
rm src/codecs/elife/__fixtures__/nock-record-*.json
npx jest src/codecs/elife/ --testTimeout 30000
```

Contribute

We 💕 contributions! All contributions: ideas 🤔, examples 💡, bug reports 🐛, documentation 📖, code 💻, questions 💬. See CONTRIBUTING.md for more on where to start. You can also provide your feedback on the Community Forum and Gitter channel.

Contributors


Aleksandra Pawlik

💻 📖 🐛

Nokome Bentley

💻 📖 🐛

Jacqueline

📖 🎨

Hamish Mackenzie

💻 📖

Alex Ketch

💻 📖 🎨

Ben Shaw

💻 🐛

Phil Neff

🐛

Raniere Silva

📖

Lorenzo Cangiano

🐛

FAtherden-eLife

🐛 🎨

Giorgio Sironi

👀
Add a contributor

To add yourself, or someone else, to the above list, either:

1. Ask the [@all-contributors bot](https://allcontributors.org/docs/en/bot/overview) to do it for you by commenting on an issue or PR like this:

   > @all-contributors please add @octocat for bugs, tests and code

2. Use the [`all-contributors` CLI](https://allcontributors.org/docs/en/cli/overview) to do it yourself:

   ```bash
   npx all-contributors add octocat bugs, tests, code
   ```

See the list of [contribution types](https://allcontributors.org/docs/en/emoji-key).

Acknowledgments

Encoda relies on many awesome open source tools (see package.json for the complete list). We are grateful ❤ to their developers and contributors for all their time and energy. In particular, these tools do a lot of the heavy lifting 💪 under the hood.

| Tool | Use |
| --- | --- |
| Ajv | Ajv is "the fastest JSON Schema validator for Node.js and browser". Ajv is not only fast, it also has an impressive breadth of functionality. We use Ajv for the validate() and coerce() functions to ensure that ingested data is valid against the Stencila schema. |
| Citation.js | Citation.js converts bibliographic formats like BibTeX, BibJSON, DOI, and Wikidata to CSL-JSON. We use it to power the codecs for those formats and APIs. |
| Frictionless Data | datapackage-js from the team at Frictionless Data is a JavaScript library for working with Data Packages. It does a lot of the work in converting between Tabular Data Packages and Stencila Datatables. |
| Glitch Digital | Glitch Digital's structured-data-testing-tool is a library and command line tool to help inspect and test for Structured Data. We use it to check that the HTML generated by Encoda can be read by bots 🤖 |
| Pa11y | Pa11y provides a range of free and open source tools to help designers and developers make their web pages more accessible. We use pa11y to test that HTML produced by Encoda meets the Web Content Accessibility Guidelines (WCAG) and Axe rule set. |
| Pandoc | Pandoc is a "universal document converter". It's able to convert between an impressive number of formats for textual documents. Our TypeScript definitions for Pandoc's AST allow us to leverage this functionality from within Node.js while maintaining type safety. Pandoc powers our converters for Word, JATS and LaTeX. We have contributed to Pandoc, including developing its JATS reader. |
| Puppeteer | Puppeteer is a Node library which provides a high-level API to control Chrome. We use it to take screenshots of HTML snippets as part of generating rPNGs and we plan to use it for generating PDFs. |
| Remark | Remark is an ecosystem of plugins for processing Markdown. It's part of the unified framework for processing text with syntax trees, a similar approach to Pandoc but in JavaScript. We use Remark as our Markdown parser because of its extensibility. |
| SheetJs | SheetJs is a JavaScript library for parsing and writing various spreadsheet formats. We use their community edition to power converters for CSV, Excel, and Open Document Spreadsheet formats. They also have a pro version if you need extra support and functionality. |

Many thanks ❤ to the Alfred P. Sloan Foundation and eLife for funding development of this tool.

Owner

  • Name: Stencila
  • Login: stencila
  • Kind: organization
  • Email: hello@stenci.la

CodeMeta (codemeta.json)

{
  "name": "encoda",
  "softwareVersion": "0.117.0",
  "description": "Stencila plugin for document format conversion",
  "installUrl": [
    "https://www.npmjs.com/package/@stencila/encoda",
    "https://github.com/stencila/encoda/releases"
  ],
  "featureList": [
    {
      "title": "convert",
      "type": "object",
      "required": [
        "input",
        "output"
      ],
      "properties": {
        "input": {
          "description": "The URL to read content from.",
          "type": "string",
          "pattern": "^(file|https?|stdio|stdin|string)://.*"
        },
        "output": {
          "description": "The URL to write the content to.",
          "type": "string",
          "pattern": "^(file|stdio|stdout|string)://.*"
        },
        "from": {
          "description": "Format to import the document from. Defaults to the file extension (or media type, for remote URLs).",
          "type": "string",
          "enum": [
            "elife",
            "plos",
            "doi",
            "orcid",
            "http",
            "dir",
            "dar",
            "csv",
            "ods",
            "tdp",
            "xlsx",
            "docx",
            "gdoc",
            "html",
            "ipynb",
            "jats",
            "jats-pandoc",
            "latex",
            "md",
            "odt",
            "pdf",
            "txt",
            "xmd",
            "mathml",
            "tex",
            "dmagic",
            "rpng",
            "png",
            "plotly",
            "yaml",
            "pandoc",
            "json5",
            "jsonld",
            "json",
            "xml",
            "rmd"
          ]
        },
        "to": {
          "description": "Format to export the documents to. Defaults to the file extension.",
          "type": "string",
          "enum": [
            "elife",
            "plos",
            "doi",
            "orcid",
            "http",
            "dir",
            "dar",
            "csv",
            "ods",
            "tdp",
            "xlsx",
            "docx",
            "gdoc",
            "html",
            "ipynb",
            "jats",
            "jats-pandoc",
            "latex",
            "md",
            "odt",
            "pdf",
            "txt",
            "xmd",
            "mathml",
            "tex",
            "dmagic",
            "rpng",
            "png",
            "plotly",
            "yaml",
            "pandoc",
            "json5",
            "jsonld",
            "json",
            "xml",
            "rmd"
          ]
        },
        "cache": {
          "description": "Use and store cached content (for http:// URLs only).",
          "type": "boolean"
        },
        "upcast": {
          "description": "Upcast the document after it is imported?",
          "type": "boolean",
          "const": false
        },
        "downcast": {
          "description": "Downcast the document before it is exported?",
          "type": "boolean",
          "const": false
        },
        "validate": {
          "description": "Validate the document after it is imported?",
          "type": "boolean",
          "const": true
        }
      }
    },
    {
      "title": "decode",
      "description": "Decode content of a specific format into a Stencila node.",
      "required": [
        "content",
        "format"
      ],
      "properties": {
        "content": {
          "description": "The content to be decoded",
          "type": "string"
        },
        "format": {
          "description": "The format to be decoded from",
          "enum": [
            "elife",
            "plos",
            "doi",
            "orcid",
            "http",
            "dir",
            "dar",
            "csv",
            "ods",
            "tdp",
            "xlsx",
            "docx",
            "gdoc",
            "html",
            "ipynb",
            "jats",
            "jats-pandoc",
            "latex",
            "md",
            "odt",
            "pdf",
            "txt",
            "xmd",
            "mathml",
            "tex",
            "dmagic",
            "rpng",
            "png",
            "plotly",
            "yaml",
            "pandoc",
            "json5",
            "jsonld",
            "json",
            "xml",
            "rmd"
          ]
        }
      },
      "interruptible": false
    },
    {
      "title": "encode",
      "description": "Encode a Stencila node to content of a specific format.",
      "required": [
        "node",
        "format"
      ],
      "properties": {
        "node": {
          "description": "The node to be encoded"
        },
        "format": {
          "description": "The format to be encoded to",
          "enum": [
            "elife",
            "plos",
            "doi",
            "orcid",
            "http",
            "dir",
            "dar",
            "csv",
            "ods",
            "tdp",
            "xlsx",
            "docx",
            "gdoc",
            "html",
            "ipynb",
            "jats",
            "jats-pandoc",
            "latex",
            "md",
            "odt",
            "pdf",
            "txt",
            "xmd",
            "mathml",
            "tex",
            "dmagic",
            "rpng",
            "png",
            "plotly",
            "yaml",
            "pandoc",
            "json5",
            "jsonld",
            "json",
            "xml",
            "rmd"
          ]
        },
        "theme": {
          "description": "The theme for the exported content (only applies to some formats)",
          "type": "string"
        }
      },
      "interruptible": false
    },
    {
      "title": "get",
      "description": "Get a variable from a document.",
      "required": [
        "name"
      ],
      "properties": {
        "name": {
          "description": "The name of the variable.",
          "type": "string"
        }
      }
    },
    {
      "title": "import",
      "description": "Import a document from a URL.",
      "required": [
        "input"
      ],
      "properties": {
        "input": {
          "description": "The URL to read content from.",
          "type": "string",
          "pattern": "^(file|https?|stdio|stdin|string)://.*"
        },
        "format": {
          "description": "Format to import the document from. Defaults to the file extension (or media type, for remote URLs).",
          "type": "string",
          "enum": [
            "elife",
            "plos",
            "doi",
            "orcid",
            "http",
            "dir",
            "dar",
            "csv",
            "ods",
            "tdp",
            "xlsx",
            "docx",
            "gdoc",
            "html",
            "ipynb",
            "jats",
            "jats-pandoc",
            "latex",
            "md",
            "odt",
            "pdf",
            "txt",
            "xmd",
            "mathml",
            "tex",
            "dmagic",
            "rpng",
            "png",
            "plotly",
            "yaml",
            "pandoc",
            "json5",
            "jsonld",
            "json",
            "xml",
            "rmd"
          ]
        },
        "cache": {
          "description": "Use and store cached content (for http:// URLs only).",
          "type": "boolean"
        },
        "upcast": {
          "description": "Upcast the document after it is imported?",
          "type": "boolean",
          "const": false
        },
        "validate": {
          "description": "Validate the document after it is imported?",
          "type": "boolean",
          "const": true
        }
      }
    },
    {
      "title": "pull",
      "description": "Pull file/s from a URL to the file system",
      "required": [
        "input",
        "output"
      ],
      "properties": {
        "input": {
          "description": "The URL to fetch.",
          "type": "string",
          "pattern": "^(https?|file|stdio|stdin|string)://.*"
        },
        "output": {
          "description": "The file path to write to",
          "type": "string"
        }
      }
    },
    {
      "title": "read",
      "description": "Read content from a URL.",
      "required": [
        "input"
      ],
      "properties": {
        "input": {
          "description": "The URL to read content from.",
          "type": "string",
          "pattern": "^(file|https?|stdio|stdin|string)://.*"
        },
        "cache": {
          "description": "Use and store cached content (for http:// URLs only).",
          "type": "boolean"
        }
      }
    },
    {
      "title": "select",
      "description": "Select child nodes from a node.",
      "required": [
        "node",
        "query"
      ],
      "properties": {
        "node": {
          "description": "The node to select from."
        },
        "query": {
          "description": "The query to run against the node.",
          "type": "string"
        },
        "lang": {
          "description": "The language that the query is written in.",
          "enum": [
            "simplepath"
          ]
        }
      }
    },
    {
      "title": "set",
      "description": "Set a variable in a document.",
      "required": [
        "name",
        "value"
      ],
      "properties": {
        "name": {
          "description": "The name of the variable to set.",
          "type": "string"
        },
        "value": {
          "description": "The value to to set the variable to."
        }
      }
    },
    {
      "title": "validate",
      "description": "Validate a node against the Stencila Schema.",
      "required": [
        "node"
      ],
      "properties": {
        "node": {
          "description": "The node to validate."
        },
        "force": {
          "description": "Coerce the node to ensure it is valid (e.g. dropping properties)?",
          "type": "boolean",
          "const": true
        }
      }
    },
    {
      "title": "write",
      "description": "Write content to a URL.",
      "required": [
        "content",
        "output"
      ],
      "properties": {
        "content": {
          "description": "The content to write",
          "type": "string"
        },
        "output": {
          "description": "The URL to write the content to.",
          "type": "string",
          "pattern": "^(file|stdio|stdout|string)://.*"
        }
      }
    }
  ]
}

GitHub Events

Total
  • Watch event: 1
  • Delete event: 20
  • Issue comment event: 6
  • Push event: 251
  • Pull request review comment event: 9
  • Pull request review event: 12
  • Pull request event: 43
  • Fork event: 1
  • Create event: 23
Last Year
  • Watch event: 1
  • Delete event: 20
  • Issue comment event: 6
  • Push event: 251
  • Pull request review comment event: 9
  • Pull request review event: 12
  • Pull request event: 43
  • Fork event: 1
  • Create event: 23

Committers

Last synced: 11 months ago

All Time
  • Total Commits: 2,919
  • Total Committers: 26
  • Avg Commits per committer: 112.269
  • Development Distribution Score (DDS): 0.396
Past Year
  • Commits: 141
  • Committers: 5
  • Avg Commits per committer: 28.2
  • Development Distribution Score (DDS): 0.532
Top Committers
Name Email Commits
Nokome Bentley n****e@s****a 1,764
Renovate Bot b****t@r****m 462
Stencila CI Bot ci@s****a 257
Alex Ketch a****x@k****e 162
Nathan Lisgo n****n@l****k 66
Ben Shaw b****n@b****z 31
Aleksandra Pawlik a****a@s****a 27
Will Byrne w****e@g****m 25
Nokome Bentley me@n****e 22
Robert Gieseke r****g@w****e 22
A. K 7****m 13
Alexandr Sisiuc a****c@e****m 13
renovate[bot] 2****] 11
Jacqueline Wijaya j****a@g****m 9
Nokome Bentley n****e@d****z 9
dependabot[bot] 4****] 7
allcontributors[bot] 4****] 4
greenkeeper[bot] 2****] 4
Hamish Mackenzie H****e@g****m 2
Raniere Silva r****e@r****m 2
Calin Gabriel g****n@e****m 2
0xflotus 0****s@g****m 1
Fabio K. Mendes b****a@g****m 1
Alex a****x@P****l 1
Scott Aubrey s****t@a****k 1
semantic-release-bot s****t@m****t 1

Issues and Pull Requests

Last synced: 7 months ago

All Time
  • Total issues: 7
  • Total pull requests: 152
  • Average time to close issues: 11 months
  • Average time to close pull requests: 4 months
  • Total issue authors: 6
  • Total pull request authors: 9
  • Average comments per issue: 2.29
  • Average comments per pull request: 1.18
  • Merged pull requests: 58
  • Bot issues: 1
  • Bot pull requests: 122
Past Year
  • Issues: 0
  • Pull requests: 29
  • Average time to close issues: N/A
  • Average time to close pull requests: 29 days
  • Issue authors: 0
  • Pull request authors: 4
  • Average comments per issue: 0
  • Average comments per pull request: 0.21
  • Merged pull requests: 12
  • Bot issues: 0
  • Bot pull requests: 15
Top Authors
Issue Authors
  • nokome (2)
  • alex-ketch (1)
  • renovate[bot] (1)
  • rgieseke (1)
  • fred-atherden (1)
Pull Request Authors
  • renovate[bot] (118)
  • dependabot[bot] (17)
  • nlisgo (13)
  • rgieseke (10)
  • will-byrne (6)
  • nokome (4)
  • soggy-mushroom (2)
  • alex-ketch (1)
  • lsh-0 (1)
Top Labels
Issue Labels
good first issue (1) ⚠️ Medium priority (1) released (1) 🏢 eLife (1)
Pull Request Labels
released (41) dependencies (17)

Dependencies

package-lock.json npm
  • 1921 dependencies
package.json npm
  • @semantic-release/exec 6.0.2 development
  • @stencila/dev-config 3.0.4 development
  • @testing-library/dom 8.11.1 development
  • @testing-library/jest-dom 5.15.0 development
  • @types/async-lock 1.1.3 development
  • @types/content-type 1.1.5 development
  • @types/escape-html 1.0.1 development
  • @types/fs-extra 9.0.13 development
  • @types/github-slugger 1.3.0 development
  • @types/hyperscript 0.0.4 development
  • @types/jest 27.0.2 development
  • @types/js-beautify 1.13.3 development
  • @types/js-yaml 4.0.2 development
  • @types/jsdom 16.2.13 development
  • @types/json5 2.2.0 development
  • @types/jsonld 1.5.6 development
  • @types/mdast 3.0.7 development
  • @types/mime 2.0.3 development
  • @types/minimist 1.2.2 development
  • @types/node 16.11.7 development
  • @types/pa11y 5.3.3 development
  • @types/papaparse 5.3.1 development
  • @types/parse-author 2.0.1 development
  • @types/punycode 2.1.0 development
  • @types/tar 6.1.1 development
  • @types/testing-library__dom 7.5.0 development
  • @types/unist 2.0.4 development
  • callsites 4.0.0 development
  • csl-json 0.1.0 development
  • delay 5.0.0 development
  • dependency-check 4.1.0 development
  • googleapis 95.0.0 development
  • jest 27.3.1 development
  • jest-file-snapshot 0.5.0 development
  • jest-matcher-utils 27.3.1 development
  • json-schema-to-typescript 10.1.5 development
  • markdown-toc 1.2.0 development
  • nock 13.2.1 development
  • pa11y 6.1.0 development
  • structured-data-testing-tool 4.5.0 development
  • ts-jest 27.0.7 development
  • ts-node 10.4.0 development
  • typedoc 0.22.9 development
  • typescript 4.3.5 development
  • @stencila/jesta ^1.10.5
  • @stencila/logga ^4.0.0
  • @stencila/thema ^2.24.4
  • appdata-path ^1.0.0
  • asciimath2tex https://github.com/christianp/asciimath2tex/tarball/dedc42ddfdb80678bfb09864cfa76afb0a4b5f44
  • async-lock ^1.3.0
  • bin-wrapper ^4.1.0
  • citation-js ^0.5.1
  • collapse-whitespace ^1.1.7
  • content-type ^1.0.4
  • datapackage ^1.1.10
  • escape-html ^1.0.3
  • fp-ts ^2.11.5
  • fs-extra ^10.0.0
  • get-stdin ^8.0.0
  • github-slugger ^1.4.0
  • globby ^11.0.4
  • hyperscript ^2.0.2
  • is-docker ^2.2.1
  • jimp ^0.16.1
  • js-beautify ^1.14.0
  • js-yaml ^4.1.0
  • jsdom ^18.1.0
  • json5 ^2.2.0
  • jsonld ^5.2.0
  • mathjax-node ^2.1.1
  • mdast-util-compact ^3.0.0
  • mime ^3.0.0
  • minimist ^1.2.5
  • papaparse ^5.3.1
  • parse-author ^2.0.0
  • parse-full-name ^1.2.6
  • patch-package 6.4.7
  • pdf-lib ^1.17.1
  • plotly.js-dist ^1.58.5
  • png-chunk-text ^1.0.0
  • png-chunks-encode ^1.0.0
  • png-chunks-extract ^1.0.0
  • puppeteer ^11.0.0
  • remark-attr ^0.11.1
  • remark-frontmatter ^2.0.0
  • remark-generic-extensions ^1.4.0
  • remark-math ^3.0.1
  • remark-parse ^8.0.3
  • remark-stringify ^8.1.1
  • remark-sub-super ^1.0.20
  • tar ^6.1.11
  • temp-dir ^2.0.0
  • tempy ^1.0.1
  • to-vfile ^6.1.0
  • trash ^7.2.0
  • unified ^9.2.1
  • unist-util-filter ^2.0.3
  • unist-util-map ^2.0.1
  • unist-util-select ^3.0.4
  • vfile ^4.2.1
  • xlsx ^0.17.4
  • xml-js ^1.6.11