https://github.com/SimonBlanke/search-data-explorer
Visualize search-data from your gradient-free-optimization run.
Science Score: 13.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found codemeta.json file)
- ○ .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity of 13.3% to scientific vocabulary)
Keywords
Repository
Visualize search-data from your gradient-free-optimization run.
Basic Info
Statistics
- Stars: 3
- Watchers: 3
- Forks: 1
- Open Issues: 0
- Releases: 1
Topics
Metadata Files
README.md
Search Data Explorer
Visualize optimization search-data via plotly in a streamlit dashboard
The Search-Data-Explorer is a simple application specialized in visualizing search-data generated by Gradient-Free-Optimizers or Hyperactive. It is designed as an easy-to-use tool to gain insights into multi-dimensional data, as commonly found in optimization.
I created this package because I needed a convenient tool to visually analyse search-data during the development of gradient-free-optimization algorithms. My goal for this package is to help users gain insight into the search-data and its corresponding objective-function and search-space. Building on this insight can help improve the selection of the search-space, compare models used in the objective-function, or explain the behaviour of the optimization algorithm.
Disclaimer
This project is in an early development stage and is only tested manually. If you encounter bugs or have suggestions for improvements, please open an issue.
Installation
```console
pip install search-data-explorer
```
How to use
The Search Data Explorer has a very simple API, shown in the examples below. Alternatively, you can run the command "search-data-explorer [file]" to open the Search Data Explorer without writing a Python script.
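For example, assuming the search-data was previously saved to a file named search_data.csv (the filename here is only illustrative), the dashboard can be opened straight from the shell:
```console
search-data-explorer search_data.csv
```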
search-data requirements
The Search Data Explorer is used by loading the search-data with a few lines of code. Search-data that is loaded from a file must follow the pattern below. The columns can have any names, but must include a score column, which is always included in search-data from Gradient-Free-Optimizers or Hyperactive.
| first column name | another column name | ... | score |
|---|---|---|---|
| 0.756 | 0.1 | 0.2 | -3 |
| 0.823 | 0.3 | 0.1 | -10 |
| ... | ... | ... | ... |
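As a minimal sketch of this format (the parameter column names x, y, z are only placeholders), such search-data can be built and saved with pandas:

```python
import pandas as pd

# hypothetical search-data: arbitrary parameter columns plus the required "score" column
search_data = pd.DataFrame(
    {
        "x": [0.756, 0.823],
        "y": [0.1, 0.3],
        "z": [0.2, 0.1],
        "score": [-3, -10],
    }
)

# save to disk so it can later be opened with the Search Data Explorer
search_data.to_csv("search_data.csv", index=False)

# loading it back yields a dataframe that follows the pattern above
loaded = pd.read_csv("search_data.csv")
print(loaded.columns)  # Index(['x', 'y', 'z', 'score'], dtype='object')
```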
Examples
Load search-data by passing dataframe
You can pass the search-data directly if you do not want to save it to disk and only want to explore it once after the optimization has finished.
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

from search_data_explorer import SearchDataExplorer


def parabola_function(para):
    loss = para["x"] * para["x"] + para["y"] * para["y"] + para["z"] * para["z"]
    return -loss


search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
    "z": np.arange(-10, 10, 0.1),
}

# generate search-data for this example with gradient-free-optimizers
opt = RandomSearchOptimizer(search_space)
opt.search(parabola_function, n_iter=1000)

search_data = opt.search_data

# open the Search-Data-Explorer
sde = SearchDataExplorer()
sde.open(search_data)  # pass search-data
```
Load search-data by passing path to file
If you already have a search-data file on disk, you can pass its path to the Search-Data-Explorer.
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

from search_data_explorer import SearchDataExplorer


def parabola_function(para):
    loss = para["x"] * para["x"] + para["y"] * para["y"] + para["z"] * para["z"]
    return -loss


search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
    "z": np.arange(-10, 10, 0.1),
}

# generate search-data for this example with gradient-free-optimizers
opt = RandomSearchOptimizer(search_space)
opt.search(parabola_function, n_iter=1000)

search_data = opt.search_data
search_data.to_csv("search_data.csv", index=False)

# open the Search-Data-Explorer
sde = SearchDataExplorer()
sde.open("search_data.csv")  # pass path to file on disk
```
Load search-data by browsing for file
You can also open the Search-Data-Explorer without passing a file or path. In this case, you can browse for the file via a menu inside the Search-Data-Explorer.
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

from search_data_explorer import SearchDataExplorer


def parabola_function(para):
    loss = para["x"] * para["x"] + para["y"] * para["y"] + para["z"] * para["z"]
    return -loss


search_space = {
    "x": np.arange(-10, 10, 0.1),
    "y": np.arange(-10, 10, 0.1),
    "z": np.arange(-10, 10, 0.1),
}

# generate search-data for this example with gradient-free-optimizers
opt = RandomSearchOptimizer(search_space)
opt.search(parabola_function, n_iter=1000)

search_data = opt.search_data
search_data.to_csv("search_data.csv", index=False)

# open the Search-Data-Explorer
sde = SearchDataExplorer()
sde.open()  # start without passing anything and browse for the file inside the Search-Data-Explorer
```
Owner
- Name: Simon Blanke
- Login: SimonBlanke
- Kind: user
- Location: Germany
- Company: Modis GmbH
- Website: linkedin.com/in/simon-blanke
- Twitter: blanke_simon
- Repositories: 11
- Profile: https://github.com/SimonBlanke
Physicist, software developer for driving assistance systems and machine learning enthusiast.
GitHub Events
Committers
Last synced: 8 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Simon Blanke | s****e@y****m | 101 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 1
- Total pull requests: 0
- Average time to close issues: 1 day
- Average time to close pull requests: N/A
- Total issue authors: 1
- Total pull request authors: 0
- Average comments per issue: 4.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- zhaihaojie (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- matplotlib *
- numpy *
- pandas *
- plotly *
- streamlit *
- actions/checkout v2 composite
- actions/setup-python v2 composite
- codecov/codecov-action v1 composite