eval
Evaluate shell commands or Python code in Sphinx and MyST. Maintainers: @Freed-Wu
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ✓ Committers with academic emails: 1 of 2 committers (50.0%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (5.7%) to scientific vocabulary
Keywords
Repository
Evaluate shell commands or Python code in Sphinx and MyST. Maintainers: @Freed-Wu
Basic Info
- Host: GitHub
- Owner: sphinx-contrib
- License: gpl-3.0
- Language: Python
- Default Branch: main
- Homepage: https://sphinxcontrib-eval.readthedocs.io/
- Size: 64.5 KB
Statistics
- Stars: 0
- Watchers: 2
- Forks: 1
- Open Issues: 1
- Releases: 3
Topics
Metadata Files
README.md
sphinxcontrib-eval
Evaluate shell commands or Python code in Sphinx and MyST.
Install
See here.
Usage
Enable
docs/conf.py:

```python
extensions = [
    "sphinxcontrib.eval",
]
```
Or:

```python
extensions = [
    "myst_parser",
    "sphinxcontrib.eval",  # must be after myst_parser
]
```
Demonstration
For myst:

````markdown
```{eval-sh}
echo My OS is $OSTYPE.
```
````
For rst:

```rst
.. eval-sh::
   echo My OS is $OSTYPE.
```
Then build:

```sh
sphinx-build docs docs/_build/html
```

Result:

```text
My OS is linux-gnu.
```
NOTE: the current working directory depends on how you invoke sphinx-build. That is, if you run
`cd docs && sphinx-build . _build/html && cd -`, the CWD will be `docs`, which is
the default setting of https://readthedocs.org. So if your code structure looks like
```console
$ tree --level 1
.
├── docs
├── scripts
├── src
└── tests
```
and you want to run `scripts/*.sh`, you first need to `cd ..` from `docs` to `.`;
otherwise you have to run `../scripts/*.sh`.
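If you are unsure which directory your build runs from, one way to check (a hypothetical debugging aid, not something the extension requires) is to print the CWD from docs/conf.py, since sphinx-build executes that file:

```python
# docs/conf.py: hypothetical debugging aid showing which directory
# sphinx-build was started from; shell commands in eval-sh blocks
# should inherit this same working directory.
import os

print("sphinx-build is running from:", os.getcwd())

extensions = [
    "sphinxcontrib.eval",
]
```

With `sphinx-build docs docs/_build/html` run from the project root this prints the project root; with the `cd docs && ...` invocation above it prints `docs`.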
Advanced Usages
All of the following examples are MyST; the corresponding rst examples are similar. Click the hyperlinks of the titles and scripts to see the actual examples.
Generate API Document
Note: A more "sphinx" solution is sphinxcontrib-autofile.
Before:

````markdown
# API of Translate Shell

```{eval-rst}
.. automodule:: translate_shell
   :members:
.. automodule:: translate_shell.__main__
   :members:
... (More)
```
````
Now:

`````markdown
# API of Translate Shell

````{eval-rst}
```{eval-sh}
cd ..
scripts/generate-api.md.pl src/*/*.py
```
````
`````
Where scripts/generate-api.md.pl replaces every src/translate_shell/XXX.py with

```rst
.. automodule:: translate_shell.XXX
   :members:
```
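The real helper is a Perl script; purely as an illustration of the idea, a hypothetical Python equivalent that maps source paths to automodule blocks could look like this:

```python
#!/usr/bin/env python
"""Hypothetical Python sketch of what scripts/generate-api.md.pl does.

Map each src/translate_shell/XXX.py given on the command line to an
``automodule`` block, so Sphinx documents the module automatically.
"""
import sys
from pathlib import Path

for arg in sys.argv[1:]:
    # src/translate_shell/foo/bar.py -> translate_shell.foo.bar
    module = ".".join(Path(arg).relative_to("src").with_suffix("").parts)
    # a package's __init__.py is documented under the package name itself
    module = module.removesuffix(".__init__")
    print(f".. automodule:: {module}")
    print("   :members:")
```

Run from the project root (hence the cd .. in the eval-sh block), this prints one automodule block per source file, which the surrounding eval-rst block then renders as rst.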
Generate TODO Document
Before:

```markdown
# TODO

- https://github.com/Freed-Wu/tranlate-shell/tree/main/src/translate_shell/translators/stardict/__init__.py#L4 more stardicts.
- https://github.com/Freed-Wu/tranlate-shell/tree/main/src/translate_shell/translators/stardict/__init__.py#L5 Create different subclasses for different dict to get phonetic, explains
- https://github.com/Freed-Wu/tranlate-shell/tree/main/src/translate_shell/ui/repl.py#L33 make the last line gray like ptpython
- ...
```
Now: (notice eval-bash, because readthedocs uses dash as its default $SHELL)
````markdown
# TODO

```{eval-bash}
cd ..
shopt -s globstar
scripts/generate-todo.md.pl src/**/*.py
```
````
Where scripts/generate-todo.md.pl searches for all TODOs in the code and converts them to the correct hyperlinks.
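Again, the real script is Perl; purely as an illustration, a hypothetical Python version that scans source files for TODO comments and prints markdown list items with GitHub links might look like this (the base URL is an assumption taken from the example above):

```python
#!/usr/bin/env python
"""Hypothetical Python sketch of what scripts/generate-todo.md.pl does.

Scan the given source files for TODO comments and print one markdown
list item per hit, linking to the file and line on GitHub.
"""
import re
import sys

# Assumption: links point at the main branch of the upstream repository.
BASE = "https://github.com/Freed-Wu/translate-shell/tree/main"

print("# TODO\n")
for filename in sys.argv[1:]:
    with open(filename, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            match = re.search(r"TODO:?\s*(.*)", line)
            if match:
                text = match.group(1).strip()
                print(f"- {BASE}/{filename}#L{lineno} {text}")
```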
Generate Requirements Document
Note: A more "sphinx" solution is sphinxcontrib-requirements-txt.
Before:

```markdown
# Requirements

## completion

Generate shell completion scripts.

...
```
Now:

````markdown
# Requirements

```{eval-sh}
cd ..
scripts/generate-requirements.md.pl
```
````
Where scripts/generate-requirements.md.pl searches all requirements/*.txt files. For example, requirements/completion.txt is:

```unixconfig
#!/usr/bin/env -S pip install -r
# Generate shell completion scripts.
shtab
```
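As with the other helpers, the real script is Perl; a hypothetical Python sketch of the same idea, assuming the description lives in the leading comment lines of each requirements/*.txt (as in completion.txt above), could be:

```python
#!/usr/bin/env python
"""Hypothetical Python sketch of what scripts/generate-requirements.md.pl does.

For every requirements/*.txt, emit a section whose title is the file name
and whose body is the leading comment lines (skipping the shebang-style
first line).
"""
from pathlib import Path

print("# Requirements\n")
for txt in sorted(Path("requirements").glob("*.txt")):
    print(f"## {txt.stem}\n")
    for line in txt.read_text(encoding="utf-8").splitlines()[1:]:
        if not line.startswith("#"):
            break  # the description ends where the actual requirements begin
        print(line.lstrip("# "))
    print()
```

For completion.txt this would print the "completion" section shown in the Before example above.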
See the documentation at https://sphinxcontrib-eval.readthedocs.io/ to learn more.
Owner
- Name: sphinx-contrib
- Login: sphinx-contrib
- Kind: organization
- Website: http://www.sphinx-doc.org/
- Repositories: 78
- Profile: https://github.com/sphinx-contrib
A collection of Sphinx extensions maintained by their respective authors. It is not an official part of Sphinx.
Citation (CITATION.cff)
#!/usr/bin/env -S scripts/generate-CITATION.cff.pl pyproject.toml
---
cff-version: 1.2.0
message: If you use this software, please cite it as below.
authors:
- family-names: Wu
given-names: Zhenyu
orcid: https://orcid.org/0000-0001-6478-9993
title: "sphinxcontrib-eval: Evaluate shell command or python code in sphinx and myst"
date-released: 2022-12-10
url: "https://github.com/sphinx-contrib/eval"
GitHub Events
Total
- Push event: 6
Last Year
- Push event: 6
Committers
Last synced: 8 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Wu Zhenyu | w****u@u****u | 13 |
| Xander Harris | x****s@g****m | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 7 months ago
All Time
- Total issues: 0
- Total pull requests: 2
- Average time to close issues: N/A
- Average time to close pull requests: 20 days
- Total issue authors: 0
- Total pull request authors: 2
- Average comments per issue: 0
- Average comments per pull request: 2.5
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 1
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
- edwardtheharris (2)
- pre-commit-ci[bot] (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/upload-artifact v3 composite
- codecov/codecov-action v3 composite
- pypa/gh-action-pypi-publish release/v1 composite
- softprops/action-gh-release v1 composite
- myst-parser *
- tomli *
- myst-parser * development
- pre-commit * development
- pytest-cov * development
- tomli * development
- myst-parser *
- sphinx *