denet

a simple process monitor

https://github.com/btraven00/denet

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.4%) to scientific vocabulary
Last synced: 6 months ago

Repository

a simple process monitor

Basic Info
  • Host: GitHub
  • Owner: btraven00
  • License: gpl-3.0
  • Language: Rust
  • Default Branch: main
  • Size: 468 KB
Statistics
  • Stars: 1
  • Watchers: 0
  • Forks: 0
  • Open Issues: 1
  • Releases: 0
Created 9 months ago · Last pushed 8 months ago
Metadata Files
Readme Changelog License Citation Codeowners

README.md

denet: a streaming process monitor

denet /de.net/ v. 1. Turkish: to monitor, to supervise, to audit. 2. to track metrics of a running process.

Denet is a streaming process monitoring tool that provides detailed metrics on running processes, including CPU, memory, I/O, and thread usage. Built with Rust, with Python bindings.


Features

  • Lightweight, cross-platform process monitoring
  • Adaptive sampling intervals that automatically adjust based on runtime
  • Memory usage tracking (RSS, VMS)
  • CPU usage monitoring with accurate multi-core support
  • I/O bytes read/written tracking
  • Thread count monitoring
  • Recursive child process tracking
  • Command-line interface with colorized output
  • Multiple output formats (JSON, JSONL, CSV)
  • In-memory sample collection for Python API

  • Analysis utilities for metrics aggregation, peak detection, and resource utilization

  • Process metadata preserved in output files (pid, command, executable path)

Requirements

  • Python 3.6+ (Python 3.12 recommended for best performance)
  • Rust (for development)
  • pixi (for development only)

Installation

```bash
pip install denet      # Python package
cargo install denet    # Rust binary
```

Usage

Understanding CPU Utilization

CPU usage is reported in a top-compatible format where 100% represents one fully utilized CPU core:

  • 100% = one core fully utilized
  • 400% = four cores fully utilized
  • Child processes are tracked separately and aggregated for total resource usage
  • Process trees are monitored by default, tracking all child processes spawned by the main process

This is consistent with standard tools like top and htop. For example, a process using 3 CPU cores at full capacity will show 300% CPU usage, regardless of how many cores your system has.
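When comparing runs across machines, it can be useful to normalize these top-style percentages against the machine's total capacity. A minimal sketch (the helper name is hypothetical, not part of denet):

```python
import os

def machine_fraction(cpu_percent, num_cores=None):
    # Convert top-style CPU percent (100% = one fully utilized core)
    # into a fraction of total machine capacity.
    # e.g. 300% (three busy cores) on a 4-core machine -> 0.75
    cores = num_cores or os.cpu_count() or 1
    return cpu_percent / (100.0 * cores)

print(machine_fraction(300.0, num_cores=4))  # 0.75
```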

Command-Line Interface

```bash
# Basic monitoring with colored output
denet run sleep 5

# Output as JSON (actually JSONL format with metadata on first line)
denet --json run sleep 5 > metrics.json

# Write output to a file
denet --out metrics.log run sleep 5

# Custom sampling interval (in milliseconds)
denet --interval 500 run sleep 5

# Specify max sampling interval for adaptive mode
denet --max-interval 2000 run sleep 5

# Monitor existing process by PID
denet attach 1234

# Monitor for just 10 seconds
denet --duration 10 attach 1234

# Quiet mode (suppress process output)
denet --quiet --json --out metrics.jsonl run python script.py

# Monitor a CPU-intensive workload (shows aggregated metrics for all children)
denet run python cpu_intensive_script.py

# Disable child process monitoring (only track the parent process)
denet --no-include-children run python multiprocess_script.py
```

Python API

Basic Usage

```python
import json

import denet

# Create a monitor for a process
monitor = denet.ProcessMonitor(
    cmd=["python", "-c", "import time; time.sleep(10)"],
    base_interval_ms=100,    # Start sampling every 100ms
    max_interval_ms=1000,    # Sample at most every 1000ms
    store_in_memory=True,    # Keep samples in memory
    output_file=None,        # Optional file output
    include_children=True,   # Monitor child processes (default True)
)

# Let the monitor run automatically until the process completes.
# Samples are collected at the specified sampling rate in the background.
monitor.run()

# Access all collected samples after process completion
samples = monitor.get_samples()
print(f"Collected {len(samples)} samples")

# Get summary statistics
summary_json = monitor.get_summary()
summary = json.loads(summary_json)
print(f"Average CPU usage: {summary['avg_cpu_usage']}%")
print(f"Peak memory: {summary['peak_mem_rss_kb']/1024:.2f} MB")
print(f"Total time: {summary['total_time_secs']:.2f} seconds")
print(f"Sample count: {summary['sample_count']}")
print(f"Max processes: {summary['max_processes']}")

# Save samples to different formats
monitor.save_samples("metrics.jsonl")          # Default JSONL
monitor.save_samples("metrics.json", "json")   # JSON array format
monitor.save_samples("metrics.csv", "csv")     # CSV format

# JSONL files include a metadata line at the beginning with process info:
# {"pid": 1234, "cmd": ["python"], "executable": "/usr/bin/python", "t0_ms": 1625184000000}
```
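Outside of denet's own helpers, the JSONL output is easy to consume with the standard library alone. A minimal sketch that splits the metadata header from the sample records (the metadata keys follow the example above; the sample field shown is illustrative):

```python
import io
import json

def read_jsonl(stream):
    # First non-empty line is the metadata header; the rest are samples.
    records = [json.loads(line) for line in stream if line.strip()]
    return records[0], records[1:]

# Demo with an in-memory file standing in for metrics.jsonl
demo = io.StringIO(
    '{"pid": 1234, "cmd": ["python"], "executable": "/usr/bin/python", "t0_ms": 1625184000000}\n'
    '{"cpu_usage": 12.5}\n'
)
metadata, samples = read_jsonl(demo)
print(metadata["pid"], len(samples))  # 1234 1
```

For real files, pass an open file handle instead of the `StringIO` demo object.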

```python
# For more controlled execution with monitoring, use execute_with_monitoring:
import json
import subprocess

import denet

# Execute a command with monitoring and capture the result
exit_code, monitor = denet.execute_with_monitoring(
    cmd=["python", "script.py"],
    base_interval_ms=100,
    max_interval_ms=1000,
    store_in_memory=True,    # Store samples in memory
    output_file=None,        # Optional file output
    write_metadata=False,    # Write metadata as first line to output file (default False)
    include_children=True,   # Monitor child processes (default True)
)

# Access collected metrics after execution
samples = monitor.get_samples()
print(f"Collected {len(samples)} samples")
print(f"Exit code: {exit_code}")

# Generate and print summary
summary_json = monitor.get_summary()
summary = json.loads(summary_json)
print(f"Average CPU usage: {summary['avg_cpu_usage']}%")
print(f"Peak memory: {summary['peak_mem_rss_kb']/1024:.2f} MB")

# Save samples to a file (includes metadata line in JSONL format)
monitor.save_samples("metrics.jsonl", "jsonl")  # First line contains process metadata
```

Adaptive Sampling

Denet uses an intelligent adaptive sampling strategy to balance detail and efficiency:

  1. First second: Samples at the base interval rate (fast sampling for short processes)
  2. 1-10 seconds: Gradually increases from base to max interval
  3. After 10 seconds: Uses the maximum interval rate

This approach ensures high-resolution data for short-lived processes while reducing overhead for long-running ones.
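The schedule above can be sketched as a pure function of elapsed time. The linear ramp between 1 and 10 seconds is an assumption; the README does not specify the exact interpolation denet uses:

```python
def sampling_interval_ms(elapsed_secs, base_ms=100, max_ms=1000):
    # First second: sample at the base interval.
    if elapsed_secs <= 1.0:
        return base_ms
    # After 10 seconds: sample at the maximum interval.
    if elapsed_secs >= 10.0:
        return max_ms
    # 1-10 seconds: linearly ramp from base to max (assumed).
    frac = (elapsed_secs - 1.0) / 9.0
    return base_ms + frac * (max_ms - base_ms)

print(sampling_interval_ms(0.5))   # 100
print(sampling_interval_ms(5.5))   # 550.0
print(sampling_interval_ms(30.0))  # 1000
```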

Analysis Utilities

The Python API includes utilities for analyzing metrics:

```python
import json
import subprocess

import denet

# Load metrics from a file (automatically skips the metadata line)
metrics = denet.load_metrics("metrics.jsonl")

# If you want to include the metadata in the results:
metrics_with_metadata = denet.load_metrics("metrics.jsonl", include_metadata=True)

# Access the executable path from metadata
# (the first item is metadata when include_metadata=True)
executable_path = metrics_with_metadata[0]["executable"]

# Direct command execution with monitoring
exit_code, monitor = denet.execute_with_monitoring(["python", "script.py"])

# Execute with metadata written to the output file. The first line will
# contain metadata such as:
# {"pid": 1234, "cmd": ["python", "script.py"], "executable": "/usr/bin/python", "t0_ms": 1625184000000}
exit_code, monitor = denet.execute_with_monitoring(
    cmd=["python", "script.py"],
    output_file="metrics.jsonl",
    write_metadata=True,
)

# execute_with_monitoring also accepts subprocess.run arguments:
exit_code, monitor = denet.execute_with_monitoring(
    cmd=["python", "script.py"],
    base_interval_ms=100,
    store_in_memory=True,
    # Any subprocess.run arguments can be passed through:
    timeout=30,                # Process timeout in seconds
    stdout=subprocess.PIPE,    # Capture stdout
    stderr=subprocess.PIPE,    # Capture stderr
    cwd="/path/to/workdir",    # Working directory
    env={"PATH": "/usr/bin"},  # Environment variables
)

# Aggregate metrics to reduce data size
aggregated = denet.aggregate_metrics(metrics, window_size=5, method="mean")

# Find peaks in resource usage
cpu_peaks = denet.find_peaks(metrics, field="cpu_usage", threshold=50)
print(f"Found {len(cpu_peaks)} CPU usage peaks above 50%")

# Get comprehensive resource utilization statistics
stats = denet.resource_utilization(metrics)
print(f"Average CPU: {stats['avg_cpu']}%")
print(f"Total I/O: {stats['total_io_bytes']} bytes")

# Convert between formats
csv_data = denet.convert_format(metrics, to_format="csv")
with open("metrics.csv", "w") as f:
    f.write(csv_data)

# Save metrics with custom options
denet.save_metrics(metrics, "data.jsonl", format="jsonl", include_metadata=True)

# Analyze process tree patterns
tree_analysis = denet.process_tree_analysis(metrics)

# Example: analyze CPU usage from a multi-process workload.
# See scripts/analyze_cpu.py for a detailed CPU analysis example.
```
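To illustrate what windowed aggregation does, here is a stdlib-only sketch of a mean aggregation like the `aggregate_metrics(..., window_size=5, method="mean")` call above (denet's exact semantics may differ; the field name is illustrative):

```python
def aggregate(samples, field, window_size=5):
    # Group consecutive samples into fixed-size windows and average each one,
    # shrinking the series by roughly a factor of window_size.
    out = []
    for i in range(0, len(samples), window_size):
        window = samples[i:i + window_size]
        out.append(sum(s[field] for s in window) / len(window))
    return out

data = [{"cpu_usage": v} for v in [10, 20, 30, 40, 50, 60]]
print(aggregate(data, "cpu_usage"))  # [30.0, 60.0]
```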

Development

For detailed developer documentation, including project structure, development workflow, testing, and release process, see Developer Documentation.

License

GPL-3

Acknowledgements

  • sysinfo - Rust library for system information
  • PyO3 - Rust bindings for Python

Owner

  • Name: btraven
  • Login: btraven00
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it using the metadata below."
title: "denet"
version: "0.4.2"
date-released: 2025-06-19
authors:
  - family-names: Traven
    given-names: B
repository-code: "https://github.com/btraven00/denet"
url: "https://crates.io/crates/denet"
abstract: "Denet is a Rust-based streaming process-monitoring tool providing detailed metrics on running processes (CPU, memory, I/O, thread usage)."
keywords:
  - Rust
  - process monitoring
  - streaming
license: "GPL-3.0-or-later"

GitHub Events

Total
  • Release event: 3
  • Watch event: 1
  • Delete event: 3
  • Issue comment event: 9
  • Public event: 1
  • Push event: 67
  • Pull request event: 13
  • Create event: 6
Last Year
  • Release event: 3
  • Watch event: 1
  • Delete event: 3
  • Issue comment event: 9
  • Public event: 1
  • Push event: 67
  • Pull request event: 13
  • Create event: 6

Packages

  • Total packages: 2
  • Total downloads:
    • pypi 478 last-month
    • cargo 1,783 total
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 0
    (may contain duplicates)
  • Total versions: 12
  • Total maintainers: 2
pypi.org: denet

A streaming process monitoring tool

  • Versions: 6
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 478 Last month
Rankings
Dependent packages count: 9.0%
Average: 30.0%
Dependent repos count: 51.0%
Maintainers (1)
Last synced: 7 months ago
crates.io: denet

a simple process monitor

  • Versions: 6
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 1,783 Total
Rankings
Dependent repos count: 21.4%
Dependent packages count: 28.4%
Average: 48.2%
Downloads: 94.8%
Maintainers (1)
Last synced: 7 months ago

Dependencies

.github/workflows/publish.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v5 composite
  • dtolnay/rust-toolchain stable composite
  • pypa/gh-action-pypi-publish release/v1 composite
.github/workflows/release-please.yml actions
  • googleapis/release-please-action v4 composite
.github/workflows/test.yml actions
  • actions/cache v4 composite
  • actions/checkout v4 composite
  • actions/download-artifact v4 composite
  • actions/setup-python v5 composite
  • actions/upload-artifact v4 composite
  • codecov/codecov-action v3 composite
  • dtolnay/rust-toolchain stable composite
Cargo.lock cargo
  • 104 dependencies
Cargo.toml cargo
  • once_cell 1.18 development
  • clap 4.5
  • colored 2.1
  • crossterm 0.27
  • ctrlc 3.4
  • pyo3 0.18
  • serde 1.0
  • serde_json 1.0
  • sysinfo 0.29.11
  • tokio 1
pyproject.toml pypi