building-inspection-toolkit
Building Inspection Toolkit
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to researchgate.net, mdpi.com, zenodo.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.3%) to scientific vocabulary
Keywords
Repository
Building Inspection Toolkit
Basic Info
Statistics
- Stars: 23
- Watchers: 3
- Forks: 6
- Open Issues: 6
- Releases: 0
Topics
Metadata Files
README.md
Building Inspection Toolkit
Building Inspection Toolkit helps you with machine learning projects in the field of damage recognition on built structures, currently with a focus on bridges made of reinforced concrete.
- DataHub: Curated open-source datasets with fixed train/val/test splits (these are often missing) and cleaned annotations, built for PyTorch.
- Metrics: Useful metrics you can use and report to make results easier to compare.
- Pre-trained Models: Strong baseline models for different datasets. See bikit-models for more details.
The Datasets
Open-source data
| Name                    | Type                      | Unique images | train/val/test split |
|-------------------------|---------------------------|---------------|----------------------|
| CDS [Web]               | 2-class single-target Clf | 1,028         | bikit-version        |
| BCD [Paper] [Data]      | 2-class single-target Clf | 6,069         | modified-version     |
| SDNET [Web]             | 2-class single-target Clf | 56,092        | bikit-version        |
| MCDS [Paper] [Data]     | 8-class multi-target Clf  | 2,612         | bikit-version        |
| CODEBRIM [Paper] [Data] | 6-class multi-target Clf  | 7,730         | original-version     |
Bikit datasets
Different versions
For some datasets, different versions exist. This may be because the original authors already provide several versions (e.g. CODEBRIM) or because other authors have updated a dataset (e.g. Bukhsh for MCDS).
Splits
We provide carefully selected train/val/test splits. We introduce splits when none are available and update existing splits where we think this is useful.
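As a sketch of the underlying idea (hypothetical file names, not bikit's actual split files): a fixed split can be produced once with a seeded shuffle, so every user gets identical train/val/test partitions.

```python
import random

# Hypothetical illustration: derive a reproducible train/val/test split
# from a fixed seed (bikit ships ready-made splits with each dataset instead).
files = [f"img_{i:04d}.jpg" for i in range(10)]
rng = random.Random(42)          # fixed seed -> identical split on every run
rng.shuffle(files)
train, val, test = files[:6], files[6:8], files[8:]
print(len(train), len(val), len(test))  # -> 6 2 2
```

Because the shuffle is seeded, re-running this script always yields the same disjoint partitions, which is what makes reported numbers comparable across papers.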
Overview
| name | Note |
| ----------------------------|-------------------------------------------------------------------------------------------------------|
| dacl1k | 6-class multi-target dataset. |
| cds                         | Original dataset with bikit's train/val/test splits                                                     |
| bcd                         | Original dataset with modified train/val/test splits (the original train set was split into train/val)  |
| sdnet                       | Original dataset with bikit's train/val/test splits; many wrong labels                                  |
| sdnet_binary                | Binarized version of sdnet (crack, no crack); many wrong labels                                         |
| sdnet_bikit                 | Cleaned wrong labels                                                                                    |
| sdnet_bikit_binary          | Binarized version of sdnet (crack, no crack); cleaned wrong labels                                      |
| mcds_Bukhsh                 | 10-class dataset that Bukhsh et al. created from the 3-step dataset by Hüthwohl et al. (retaining its wrong labels); with bikit's train/val/test splits |
| mcds_bikit                  | 8-class dataset we created from Hüthwohl et al. that avoids the wrong labels; with bikit's train/val/test splits |
| codebrim-classif-balanced | Original dataset with original train/val/test splits: Underrepresented classes are oversampled. |
| codebrim-classif | Original dataset with original train/val/test splits. |
| meta2 | 6-class multi-target dataset based on codebrim-classif, and mcdsbikit. |
| meta3 | 6-class multi-target dataset based on bcd, codebrim-classif, and mcdsbikit. |
| meta4 | 6-class multi-target dataset based on bcd, codebrim-classif, mcdsbikit, and sdnetbikitbinary. |
| meta2+dacl1k | 6-class multi-target dataset based on dacl1k, codebrim-classif, and mcdsbikit. |
| meta3+dacl1k | 6-class multi-target dataset based on dacl1k, bcd, codebrim-classif, and mcdsbikit. |
| meta4+dacl1k | 6-class multi-target dataset based on dacl1k, bcd, codebrim-classif, mcdsbikit, and sdnetbikitbinary. |
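The meta* datasets above combine several source datasets into one shared 6-class multi-target label space. A minimal sketch of that combination step, assuming the sources have already been mapped to the same label vector (random stand-in tensors here, not the real bikit data):

```python
import torch
from torch.utils.data import ConcatDataset, TensorDataset

# Stand-ins for two source datasets already mapped to a common
# 6-class multi-target label space (random tensors, not real data).
ds_a = TensorDataset(torch.rand(4, 3, 224, 224), torch.zeros(4, 6))
ds_b = TensorDataset(torch.rand(2, 3, 224, 224), torch.ones(2, 6))

meta = ConcatDataset([ds_a, ds_b])   # samples drawn from both sources
print(len(meta))                     # -> 6
img, label = meta[5]                 # index 5 falls into ds_b
print(label)                         # -> tensor of six ones
```

`ConcatDataset` keeps per-source ordering, so a meta dataset's split is simply the union of its sources' splits.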
Usage
Data
List and download data
```python
from bikit.utils import list_datasets, download_dataset

# List all datasets
all_datasets = list_datasets(verbose=0)
all_datasets.keys()

# Download data
DATASET_NAME = "mcds_bikit"
download_dataset(DATASET_NAME)
```
Use BikitDataset
```python
from bikit.utils import download_dataset
from bikit.datasets import BikitDataset
from torch.utils.data import DataLoader
from torchvision import transforms

# Select a dataset:
name = "mcds_bikit"
download_dataset(name)  # equals `download_dataset("mcds_Bukhsh")`
my_transform = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])

# Use return_type 'pt' (default) or 'np'
train_dataset = BikitDataset(name, split="train", transform=my_transform, return_type="pt")
train_loader = DataLoader(dataset=train_dataset, batch_size=64, shuffle=False, num_workers=0)

# Use it in your training loop
for i, (imgs, labels) in enumerate(train_loader):
    print(i, imgs.shape, labels.shape)
    break
```
Metrics
- For single-target problems (like `cds`, `bcd`, `sdnet_bikit_binary`) metrics will follow (#TODO).
- For multi-target problems (like `sdnet_bikit`, `mcds_bikit` or `meta3`) we use the Exact Match Ratio (`EMR_mt`) and class-wise Recall (`Recalls_mt`).
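To make the two metric definitions concrete, here is a plain-Python sketch on toy 3-class data (not the bikit implementation): Exact Match Ratio counts a sample as correct only when every label is right, while class-wise recall is computed per label column.

```python
# Toy multi-target predictions (already thresholded to 0/1) and targets.
preds  = [[1, 0, 1], [1, 1, 0], [0, 0, 1]]
target = [[1, 0, 1], [1, 0, 0], [0, 0, 0]]

# Exact Match Ratio: fraction of samples where ALL labels match.
emr = sum(p == t for p, t in zip(preds, target)) / len(target)

# Class-wise recall: TP / (TP + FN), one value per label column.
recalls = []
for c in range(len(target[0])):
    tp = sum(p[c] == 1 and t[c] == 1 for p, t in zip(preds, target))
    fn = sum(p[c] == 0 and t[c] == 1 for p, t in zip(preds, target))
    recalls.append(tp / (tp + fn) if (tp + fn) else 0.0)

print(emr)      # only the first sample matches on all labels -> 0.333...
print(recalls)  # -> [1.0, 0.0, 1.0]
```

EMR is strict by design: one wrong label anywhere makes the whole sample count as wrong, which is why class-wise recall is reported alongside it.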
```python
!pip install torchmetrics
from bikit.metrics import EMR_mt, Recalls_mt
import torch

my_emr = EMR_mt(use_logits=False)
my_recalls = Recalls_mt()

# Fake data
preds0 = torch.tensor([[.9, 0.1, 0.9, 0.1, 0.9, 0.1],
                       [.8, 0.2, 0.9, 0.2, 0.9, 0.2],
                       [.7, 0.9, 0.2, 0.2, 0.2, 0.2]])
preds1 = torch.tensor([[.0, 0.1, 0.9, 0.1, 0.9, 0.1],
                       [.8, 0.2, 0.9, 0.2, 0.9, 0.2],
                       [.7, 0.9, 0.2, 0.9, 0.2, 0.9]])
target = torch.tensor([[1, 0, 1, 0, 0, 1],
                       [1, 1, 0, 0, 1, 0],
                       [1, 1, 0, 1, 0, 1]])

# Batch 0
my_emr(preds0, target), my_recalls(preds0, target)
print(my_emr.compute(), my_recalls.compute())

# Batch 1
my_emr(preds1, target), my_recalls(preds1, target)
print(my_emr.compute(), my_recalls.compute())

# Reset at end of epoch
my_emr.reset(), my_recalls.reset()
print(my_emr, my_recalls)
```
Models
List models
```python
from bikit.utils import list_models

# List all models
all_models = list_models(verbose=0)
all_models.keys()
```
Model Inference
```python
from bikit.utils import load_model, get_metadata, load_img_from_url
from bikit.models import make_prediction
from matplotlib import pyplot as plt
import numpy as np

# Download and load model
model, metadata = load_model("MCDS_bikit_ResNet50_dhb", add_metadata=True)

img = load_img_from_url("https://github.com/phiyodr/building-inspection-toolkit/raw/master/bikit/data/11001990.jpg")
plt.imshow(np.asarray(img))
prob, pred = make_prediction(model, img, metadata, print_predictions=True, preprocess_image=True)
> Crack [██████████████████████████████████████ ] 95.79%
> Efflorescence [ ] 0.56%
> ExposedReinforcement [ ] 0.18%
> General [ ] 0.61%
> NoDefect [ ] 1.31%
> RustStaining [ ] 0.44%
> Scaling [ ] 0.05%
> Spalling [ ] 0.86%
> Inference time (CPU): 207.96 ms
```
Installation
```bash
pip install git+https://github.com/phiyodr/building-inspection-toolkit
pip install patool
pip install efficientnet-pytorch
pip install torchmetrics
```
Misc
PyTest
Install dependencies first
```bash
pip3 install -U -r requirements.txt -r test_requirements.txt
```
Run PyTest
```bash
cd bridge-inspection-toolkit/
pytest
```
Citation
Use the "Cite this repository" tool in the About section of this repo to cite us :)
Owner
- Name: Philipp
- Login: phiyodr
- Kind: user
- Location: Munich
- Company: Bundeswehr University Munich
- Twitter: phiyodr
- Repositories: 23
- Profile: https://github.com/phiyodr
Hi, I'm Philipp, a statistician and PhD candidate in CS from Germany :)
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Rösch"
  given-names: "Philipp J."
  orcid: "https://orcid.org/0000-0000-0000-0000"
- family-names: "Flotzinger"
  given-names: "Johannes"
  orcid: "https://orcid.org/0000-0000-0000-0000"
title: "Building Inspection Toolkit"
version: 0.1.4
date-released: 2022-02-04
url: "https://github.com/phiyodr/building-inspection-toolkit/"
```
GitHub Events
Total
- Watch event: 6
- Fork event: 2
Last Year
- Watch event: 6
- Fork event: 2
Issues and Pull Requests
Last synced: 8 months ago
All Time
- Total issues: 8
- Total pull requests: 3
- Average time to close issues: 1 day
- Average time to close pull requests: 16 days
- Total issue authors: 4
- Total pull request authors: 1
- Average comments per issue: 1.0
- Average comments per pull request: 1.33
- Merged pull requests: 3
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- jfltzngr (5)
- gachiemchiep (1)
- alexandre2r (1)
- mir-abir-hossain (1)
Pull Request Authors
- SeTruphe (3)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- Pillow *
- efficientnet_pytorch *
- matplotlib *
- numpy >1.20
- opencv-python-headless *
- pandas *
- pathlib *
- patool *
- requests *
- torch *
- torchmetrics *
- torchvision *
- tqdm *
- Pillow *
- efficientnet_pytorch *
- matplotlib *
- numpy *
- opencv-python-headless *
- pandas *
- pathlib *
- patool *
- requests *
- torch *
- torchmetrics *
- torchvision *
- tqdm *
- codecov * test
- coveralls * test
- pytest * test
- pytest-cov * test
- sphinx * test
- sphinx_bootstrap_theme * test
- travis-sphinx * test
- actions/checkout v2 composite
- github/codeql-action/analyze v1 composite
- github/codeql-action/autobuild v1 composite
- github/codeql-action/init v1 composite