Scarlet

Scarlet: Scalable Anytime Algorithms for Learning Fragments of Linear Temporal Logic - Published in JOSS (2024)

https://github.com/rajarshi008/scarlet

Science Score: 93.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: springer.com, zenodo.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software
Last synced: 6 months ago

Repository

Tool for learning linear temporal logic formulas using a combinatorial approach

Basic Info
  • Host: GitHub
  • Owner: rajarshi008
  • License: MIT
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 13.1 MB
Statistics
  • Stars: 14
  • Watchers: 3
  • Forks: 2
  • Open Issues: 2
  • Releases: 1
Created almost 5 years ago · Last pushed about 1 year ago
Metadata Files
Readme License

README.md


We consider the problem of learning explainable models in Linear Temporal Logic (LTL) from system executions partitioned into positive (or desirable) and negative (or undesirable) ones.
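To make the learning problem concrete: a formula separates the sample if it holds on every positive execution and on no negative one. Below is a small illustrative evaluator for a fragment of LTL (atoms, `!`, `&`, `F`, `G`) over finite traces; it is only a sketch of the underlying semantics, not Scarlet's implementation, whose precise definitions are in the paper.

```python
# Hedged sketch: checking whether a small LTL formula holds on a
# finite trace. Formulas are nested tuples, e.g. ('G', ('!', 'p')).
# This is an illustration of the problem setting, not Scarlet's code.

def holds(formula, trace, props, i=0):
    """trace: list of 0/1 tuples (one per time step);
    props: the proposition names, in the order used inside each letter."""
    op = formula if isinstance(formula, str) else formula[0]
    if op in props:                                   # atomic proposition
        return trace[i][props.index(op)] == 1
    if op == "!":
        return not holds(formula[1], trace, props, i)
    if op == "&":
        return holds(formula[1], trace, props, i) and holds(formula[2], trace, props, i)
    if op == "F":                                     # finally: at some future position
        return any(holds(formula[1], trace, props, j) for j in range(i, len(trace)))
    if op == "G":                                     # globally: at all future positions
        return all(holds(formula[1], trace, props, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

trace = [(0, 0), (0, 1), (1, 0)]                      # letters over p, q
print(holds(("F", "p"), trace, ["p", "q"]))           # True: p eventually holds
print(holds(("G", ("!", "p")), trace, ["p", "q"]))    # False: p holds at position 2
```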

Our paper "Scalable Anytime Algorithms for Learning Fragments of Linear Temporal Logic", presenting the algorithms behind Scarlet, was published at TACAS 2022.


Input File Format

The input files consist of system executions, formally termed traces, partitioned into positives and negatives and separated by `---`. Each trace is a sequence of letters separated by `;`. Each letter represents the truth values of the atomic propositions. An example of a trace is `1,0,1;0,0,0`, which consists of two letters, each defining the values of three propositions (named `p,q,r` by default). An example input file looks like the following:

```
0,0,0;0,1,1;1,0,0;0,0,1;0,1,0
1,1,0;1,0,1;1,0,0;1,1,1;1,0,1
1,1,0;0,1,1;1,1,1;1,0,0;1,0,1
---
1,0,0;1,0,0;0,1,0;1,1,0;1,1,1
1,0,0;1,0,0;0,1,0;1,1,0;1,0,0
0,0,1;1,0,0;1,1,0;1,1,1;1,0,0
```

For Scarlet to work, you must save your input files with the extension `.trace`.
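The format above is easy to process programmatically. The following is a minimal sketch of a parser for it, assuming only the conventions described in this README (traces as `;`-separated letters of comma-separated 0/1 values, with `---` separating positives from negatives); it is not Scarlet's own reader.

```python
# Minimal sketch of a parser for the .trace format described above.

def parse_trace_file(text):
    """Return (positives, negatives); each trace is a list of
    tuples of ints, one tuple per letter."""
    positives, negatives = [], []
    current = positives
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line == "---":          # switch from positive to negative traces
            current = negatives
            continue
        trace = [tuple(int(v) for v in letter.split(","))
                 for letter in line.split(";")]
        current.append(trace)
    return positives, negatives

sample = "0,0,0;0,1,1;1,0,0\n1,1,0;1,0,1;1,0,0\n---\n1,0,0;0,1,0;1,1,1"
pos, neg = parse_trace_file(sample)
print(len(pos), len(neg))   # 2 1
print(pos[0][1])            # second letter of first positive trace: (0, 1, 1)
```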

There are two ways of using Scarlet: via its Python API or via its command-line interface. We provide instructions for both below.

Python API

Python Installation

You can install the tool as a Python package using pip as follows:

python3 -m pip install Scarlet-ltl

Basic API Usage

We now describe the basic usage of the Scarlet Python API:

```
from Scarlet.ltllearner import LTLlearner
learner = LTLlearner(input_file = input_file_path)
learner.learn()
```

Note that the Python API does not provide any example input files. Hence, you must create your trace file in the format described in the previous section and provide it to the API.

API Parameters

You can call the LTLlearner class with additional parameters as follows:

  • input_file = the path of the input trace file, e.g., = 'example.trace'
  • timeout = for specifying the timeout, default = 900
  • csvname = the name of the output csv file, e.g., = 'result.csv'
  • thres = the bound on the loss function for noisy data; default = 0 for perfect classification; must be a number between zero and one
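The `thres` bound can be understood as follows (a hedged illustration; the exact loss Scarlet optimizes is defined in the accompanying paper): a candidate formula is acceptable when the fraction of misclassified traces in the sample is at most the threshold.

```python
# Illustrative sketch of what a loss bound for noisy data means:
# a positive trace is misclassified if the formula fails on it,
# a negative trace is misclassified if the formula holds on it.
# This is an interpretation aid, not Scarlet's internal loss code.

def loss(verdicts_pos, verdicts_neg):
    """verdicts_*: lists of booleans, True = formula holds on the trace."""
    errors = verdicts_pos.count(False) + verdicts_neg.count(True)
    return errors / (len(verdicts_pos) + len(verdicts_neg))

# 1 of 10 traces misclassified -> loss 0.1
l = loss([True] * 4 + [False], [False] * 5)
print(l)           # 0.1
print(l <= 0.0)    # False: rejected with the default thres = 0
print(l <= 0.2)    # True: accepted with thres = 0.2
```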

Generation of Trace Files using API

You can also generate trace files from given LTL formulas following the instructions below:

Installation of Dependencies

For generating benchmarks from a given set of LTL formulas, we rely on the Python package LTLf2DFA, which uses MONA as its backend. As a result, one needs to install MONA first in order to use this procedure (instructions can be found on the MONA website).

Creating Input Formula File

For generating benchmarks, you have to create an input file named formulas.txt in the same directory where the venv folder is located. The formula file should contain a list of formulas (in prefix notation) along with the alphabet. An example of this file is as follows:

```
G(!(p));p
|(G(!(p)),F(&(p, F(q))));p,q
G(->(q, G(!(p))));p,q
```
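Each line pairs a prefix-notation formula with its alphabet, separated by `;`. As an aid for reading this notation, here is a small sketch that converts it to infix form; it handles only the operators appearing in the example above and is not Scarlet's actual parser.

```python
# Hedged sketch: reading one line of the formula file. The part before
# ';' is a formula in prefix notation; the part after lists the alphabet.
import re

UNARY = {"G", "F", "X", "!"}
BINARY = {"&", "|", "->", "U"}

def tokenize(s):
    # '->' must be matched before single-character operators
    return re.findall(r"->|[A-Za-z_]\w*|[!&|(),]", s)

def to_infix(tokens):
    tok = tokens.pop(0)
    if tok in UNARY:
        tokens.pop(0)                  # consume '('
        arg = to_infix(tokens)
        tokens.pop(0)                  # consume ')'
        return f"{tok}({arg})"
    if tok in BINARY:
        tokens.pop(0)                  # consume '('
        left = to_infix(tokens)
        tokens.pop(0)                  # consume ','
        right = to_infix(tokens)
        tokens.pop(0)                  # consume ')'
        return f"({left} {tok} {right})"
    return tok                         # atomic proposition

line = "|(G(!(p)),F(&(p, F(q))));p,q"
formula, alphabet = line.split(";")
print(to_infix(tokenize(formula)))    # (G(!(p)) | F((p & F(q))))
print(alphabet.split(","))            # ['p', 'q']
```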

Generating from a given Formula File

```
from Scarlet.genBenchmarks import SampleGenerator
generator = SampleGenerator(formula_file= "formulas.txt")
generator.generate()
```

API Generation Parameters

You can call the SampleGenerator class with additional parameters as follows:

  • formula_file = the path of the file containing LTL formulas, e.g., = 'formulas.txt'
  • samplesizes = list of sample sizes, i.e., the number of positive and negative traces (separated by a comma) in each sample, default = [(10,10),(50,50)]
  • trace_lengths = the length range for each trace in the samples, default = [(6,6)]
  • output_folder = the name of the folder in which samples are generated

The package for this API is available on PyPI as Scarlet-ltl.

Command Line Usage

CMD Installation

To build from source, use the following commands:

```
git clone https://github.com/rajarshi008/Scarlet.git
cd Scarlet
source ./installation.sh
```

CMD Running

Run using `python -m Scarlet.ltllearner`. By default, this will run Scarlet on `example.trace`, located inside the Scarlet folder. For easy testing, one can replace `example.trace` with a trace file of choice (see All_benchmarks for more example traces).

CMD Parameters

There are a variety of arguments that one can use to run Scarlet, as listed below:

| Argument | Meaning |
|----------|---------|
| `-i` | Specifies the name of the input file (located inside the Scarlet folder); default is `example.trace`. |
| `-v` | Outputs a detailed log of Scarlet's execution. |
| `-vv` | Outputs an even more detailed log of Scarlet's execution. |
| `-t` | Specifies the timeout; default is 900 secs (the best formula found until the timeout is written to `result.csv`, located in the Scarlet folder). |
| `-o` | Specifies the name of the output csv file; default is `results.csv`. |
| `-l` | Specifies the bound on the loss function for noisy data; default is 0 for perfect classification. |
| `-h` | Outputs the help. |

Generation of Trace Files using CMD

For generating benchmarks from a given set of LTL formulas, we rely on the Python package LTLf2DFA, which uses MONA as its backend. As a result, one needs to install MONA first in order to use this procedure (instructions can be found on the MONA website).

After the installation, for generating samples one simply needs to run `python -m Scarlet.genBenchmarks`. By default, this generates samples that are separable using the formulas provided in formulas.txt.

CMD Generation Parameters

You can run the command with the following arguments:

| Argument | Meaning |
|----------|---------|
| `-f` | Specifies the file containing the formulas (in prefix notation). |
| `-s` | List of sample sizes, i.e., the number of positive and negative traces (separated by a comma) in each sample. |
| `-l` | Specifies the length range for each trace in the samples. |
| `-o` | Specifies the name of the folder in which samples are generated. |
| `-h` | Outputs the help. |

The formula file should contain a list of formulas (in prefix notation) along with the alphabet (see formulas.txt) and should be located inside the Scarlet folder to be used for generating the sample.

Testing

  • For testing all features of the tool, use `python -m pytest Scarlet` (installation of MONA is required).
  • For testing only the LTL learning features, use `python -m pytest Scarlet/tests/test_learner`.
  • For testing only the trace file generation features, use `python -m pytest Scarlet/tests/test_generator` (installation of MONA is required).

Report an issue

If you find any issue, please create a GitHub issue with specific steps to reproduce the bug.

Contribute

Contributions are welcome! Please first create an issue describing your proposed contribution; then you can create a pull request.

Owner

  • Name: Rajarshi Roy
  • Login: rajarshi008
  • Kind: user
  • Location: Kaiserslautern
  • Company: Max Planck Institute for Software Systems

I am a PhD student working on Formal Methods and Machine Learning.

JOSS Publication

Scarlet: Scalable Anytime Algorithms for Learning Fragments of Linear Temporal Logic
Published
January 09, 2024
Volume 9, Issue 93, Page 5052
Authors
Ritam Raha ORCID
University of Antwerp, Antwerp, Belgium, CNRS, LaBRI and Université de Bordeaux, France
Rajarshi Roy ORCID
Max Planck Institute for Software Systems, Kaiserslautern, Germany
Nathanaël Fijalkow ORCID
CNRS, LaBRI and Université de Bordeaux, France
Daniel Neider ORCID
TU Dortmund University, Dortmund, Germany, Center for Trustworthy Data Science and Security, University Alliance Ruhr, Germany
Editor
Adi Sinn ORCID
Tags
linear temporal logic (LTL) Explainable AI (XAI) specification mining Formal Methods

GitHub Events

Total
  • Issues event: 1
  • Push event: 1
Last Year
  • Issues event: 1
  • Push event: 1

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 138
  • Total Committers: 8
  • Avg Commits per committer: 17.25
  • Development Distribution Score (DDS): 0.232
Past Year
  • Commits: 2
  • Committers: 1
  • Avg Commits per committer: 2.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Rajarshi Roy r****8@g****m 106
Ritam Raha r****a@J****n 9
Ritam r****8@g****m 8
Ritam Raha r****a@R****l 6
Nathanael Fijalkow n****w@g****m 5
Rajarshi Roy r****y@C****l 2
Matthew Fernandez m****z@g****m 1
Rajarshi Roy r****i@l****g 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 11
  • Total pull requests: 1
  • Average time to close issues: 3 months
  • Average time to close pull requests: 25 days
  • Total issue authors: 3
  • Total pull request authors: 1
  • Average comments per issue: 1.09
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • JDRomano2 (7)
  • Smattr (3)
  • SimplisticCode (1)
Pull Request Authors
  • Smattr (1)
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 14 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 4
  • Total maintainers: 2
pypi.org: scarlet-ltl

A package for learning LTL formulas from a sample consisting of traces partitioned into positive and negative

  • Homepage: https://github.com/rajarshi008/Scarlet
  • Documentation: https://scarlet-ltl.readthedocs.io/
  • License: Copyright (c) 2022 University of Antwerp, Belgium & LaBRI, University of Bordeaux, France Ritam Raha <ritam.raha18@gmail.com> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  • Latest release: 0.0.4
    published about 2 years ago
  • Versions: 4
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 14 Last month
Rankings
Dependent packages count: 6.6%
Downloads: 19.7%
Average: 22.2%
Stargazers count: 23.3%
Forks count: 30.5%
Dependent repos count: 30.6%
Maintainers (2)
Last synced: 6 months ago

Dependencies

requirements.txt pypi
  • graphviz *
  • lark *
  • ltlf2dfa *
.github/workflows/draft-pdf.yml actions
  • actions/checkout v2 composite
  • actions/upload-artifact v1 composite
  • openjournals/openjournals-draft-action master composite