https://github.com/lromul/argus-tgs-salt
Kaggle | 14th place solution for TGS Salt Identification Challenge
Science Score: 13.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found codemeta.json file)
- ○ .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity to scientific vocabulary: 9.9%)
Keywords
Repository
Kaggle | 14th place solution for TGS Salt Identification Challenge
Basic Info
Statistics
- Stars: 76
- Watchers: 3
- Forks: 19
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
Argus solution TGS Salt Identification Challenge
Source code of 14th place solution for TGS Salt Identification Challenge by Argus team (Ruslan Baikulov, Nikolay Falaleev).
Solution
We used PyTorch 0.4.1 with the Argus framework, which simplifies experiments with different architectures and lets us focus on deep learning trials rather than on writing training and testing scripts for the neural networks.
Data preprocessing
The original 101x101 px images were padded to 148x148 px with biharmonic inpainting from the skimage package. This “padding” performed better for us than reflection or zero padding.
A random crop to the input size of 128x128 px, a left-right flip, and a random linear color augmentation (adjusting brightness and contrast) were applied.
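As a rough illustration of this preprocessing, here is a minimal sketch using skimage's `inpaint_biharmonic` and plain NumPy for the crop and flip. The function names and the symmetric padding split are assumptions for illustration, not the authors' code.

```python
import numpy as np
from skimage.restoration import inpaint_biharmonic

def pad_with_biharmonic(img, target=148):
    """Pad a 101x101 float image (values in [0, 1]) to target x target,
    filling the new border with biharmonic inpainting."""
    before = (target - img.shape[0]) // 2
    after = target - img.shape[0] - before
    padded = np.pad(img, (before, after), mode='constant')
    # Mark the border pixels that should be reconstructed by the inpainting.
    unknown = np.ones_like(padded, dtype=bool)
    unknown[before:before + img.shape[0], before:before + img.shape[1]] = False
    return inpaint_biharmonic(padded, unknown)

def random_crop_flip(img, mask, size=128, rng=np.random):
    """Random 128x128 crop plus an optional left-right flip of image and mask."""
    y = rng.randint(0, img.shape[0] - size + 1)
    x = rng.randint(0, img.shape[1] - size + 1)
    img, mask = img[y:y + size, x:x + size], mask[y:y + size, x:x + size]
    if rng.rand() < 0.5:
        img, mask = img[:, ::-1].copy(), mask[:, ::-1].copy()
    return img, mask
```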
Model design

After a series of experiments, we settled on a U-Net-like architecture with an SE-ResNeXt50 encoder. The standard decoder blocks were enriched with custom-built FPN-style layers. In addition to the segmentation task, a classification branch (empty tile vs. tile containing salt) was added to the basic network architecture.
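For orientation only, below is a simplified sketch of this kind of network: an SE-ResNeXt50 encoder taken from the pretrainedmodels package (which appears in the repo's dependencies), a plain upsample-and-concatenate decoder standing in for the custom FPN-style blocks, and a classification head on the deepest encoder features. Layer names, channel sizes, and the decoder design are assumptions, not the authors' implementation.

```python
# Simplified sketch only: SE-ResNeXt50 encoder, plain upsampling decoder,
# and an empty/contains-salt classification head. Not the authors' exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F
import pretrainedmodels  # provides se_resnext50_32x4d (listed in dependencies)

class SaltNetSketch(nn.Module):
    def __init__(self, pretrained='imagenet'):
        super().__init__()
        enc = pretrainedmodels.se_resnext50_32x4d(pretrained=pretrained)
        self.stem = nn.Sequential(enc.layer0, enc.layer1)  # 1/4 scale, 256 ch
        self.enc2, self.enc3, self.enc4 = enc.layer2, enc.layer3, enc.layer4
        self.class_head = nn.Linear(2048, 1)  # empty vs. contains-salt logit

        def up_block(in_ch, out_ch):
            # Plain upsample + conv; the real solution uses FPN-style blocks.
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode='nearest'),
                nn.Conv2d(in_ch, out_ch, 3, padding=1),
                nn.ReLU(inplace=True))

        self.dec4 = up_block(2048, 1024)
        self.dec3 = up_block(1024 + 1024, 512)
        self.dec2 = up_block(512 + 512, 256)
        self.dec1 = up_block(256 + 256, 64)
        self.seg_head = nn.Conv2d(64, 1, 1)

    def forward(self, x):                      # x: (N, 3, 128, 128)
        s1 = self.stem(x)                      # (N, 256, 32, 32)
        s2 = self.enc2(s1)                     # (N, 512, 16, 16)
        s3 = self.enc3(s2)                     # (N, 1024, 8, 8)
        s4 = self.enc4(s3)                     # (N, 2048, 4, 4)
        cls_logit = self.class_head(
            F.adaptive_avg_pool2d(s4, 1).view(x.size(0), -1))
        d = self.dec4(s4)                      # (N, 1024, 8, 8)
        d = self.dec3(torch.cat([d, s3], 1))   # (N, 512, 16, 16)
        d = self.dec2(torch.cat([d, s2], 1))   # (N, 256, 32, 32)
        d = self.dec1(torch.cat([d, s1], 1))   # (N, 64, 64, 64)
        seg_logit = self.seg_head(
            F.interpolate(d, scale_factor=2, mode='nearest'))  # (N, 1, 128, 128)
        return seg_logit, cls_logit
```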
Model training
- Loss: Lovasz hinge loss with elu + 1
- Optimizer: SGD with LR 0.01, momentum 0.9, weight decay 0.0001 (a sketch of this setup follows the training stages below)
Train stages:
1. EarlyStopping with patience 100; ReduceLROnPlateau with patience=30, factor=0.64, min_lr=1e-8; Lovasz * 0.75 + BCE empty * 0.25.
2. Cosine annealing learning rate 300 epochs, 50 per cycle; Lovasz * 0.5 + BCE empty * 0.5.
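For concreteness, here is a minimal sketch of the stage-1 loss and optimizer settings listed above. The Lovasz hinge follows the structure of Maxim Berman's public reference implementation, with the usual relu on the sorted errors replaced by elu + 1 as described; flattening over the whole batch, the helper names, and the ReduceLROnPlateau mode are illustrative assumptions, and EarlyStopping is assumed to be handled by the training framework's callbacks.

```python
import torch
import torch.nn.functional as F
from torch.optim import SGD
from torch.optim.lr_scheduler import ReduceLROnPlateau

def lovasz_grad(gt_sorted):
    """Gradient of the Lovasz extension w.r.t. sorted errors (Berman et al.)."""
    p = len(gt_sorted)
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(0)
    union = gts + (1. - gt_sorted).cumsum(0)
    jaccard = 1. - intersection / union
    if p > 1:
        jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]
    return jaccard

def lovasz_hinge_elu(logits, labels):
    """Lovasz hinge on flattened 1D logits/labels, with elu + 1 instead of relu."""
    signs = 2. * labels - 1.
    errors = 1. - logits * signs
    errors_sorted, perm = torch.sort(errors, dim=0, descending=True)
    grad = lovasz_grad(labels[perm])
    return torch.dot(F.elu(errors_sorted) + 1., grad)

def stage1_loss(seg_logits, cls_logits, masks, is_empty):
    """Lovasz * 0.75 + BCE(empty) * 0.25, as in training stage 1."""
    seg = lovasz_hinge_elu(seg_logits.view(-1), masks.float().view(-1))
    cls = F.binary_cross_entropy_with_logits(cls_logits.view(-1),
                                             is_empty.float().view(-1))
    return 0.75 * seg + 0.25 * cls

def make_optimizer(model):
    """SGD and the stage-1 LR schedule described above."""
    optimizer = SGD(model.parameters(), lr=0.01, momentum=0.9,
                    weight_decay=0.0001)
    # mode='max' assumes the monitored quantity is a validation metric;
    # this is an assumption, not taken from the repository.
    scheduler = ReduceLROnPlateau(optimizer, mode='max', patience=30,
                                  factor=0.64, min_lr=1e-8)
    return optimizer, scheduler
```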
Post-processing
The final submission averaged the results of two training runs: SE-ResNeXt50 on 5 random folds, and SE-ResNeXt50 on 6 mosaic-based folds (tiles from similar mosaics were placed in the same fold) without the second training stage.
Mosaic-based post-processing. We used Vicens Gaitan’s kernel, but not on the raw input dataset: we applied it to images after histogram matching to an average histogram, which helped us assemble more tiles into mosaics. In addition to extrapolating tiles with vertical masks from the train subset onto neighbouring tiles, we automatically detected small missed corners and inpainted them with a polygon bounded by a smooth curve. Holes in masks were also filled with OpenCV.
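Two of the building blocks mentioned here, histogram matching to a reference image and filling holes in binary masks with OpenCV, can be sketched as follows; the mosaic assembly, tile extrapolation, and corner inpainting are more involved and are not reproduced. Function names and the flood-fill-based hole filling are illustrative assumptions, not the repository's actual code.

```python
import cv2
import numpy as np
from skimage.exposure import match_histograms

def match_to_reference(img, reference):
    """Match a tile's intensity histogram to a reference (e.g. dataset-average) image."""
    return match_histograms(img, reference)

def fill_holes(mask):
    """Fill interior holes in a binary 0/255 uint8 mask.

    Assumes the top-left pixel is background, so flood filling from (0, 0)
    marks everything reachable from outside the salt regions.
    """
    flooded = mask.copy()
    h, w = mask.shape
    flood_mask = np.zeros((h + 2, w + 2), np.uint8)   # border required by floodFill
    cv2.floodFill(flooded, flood_mask, (0, 0), 255)   # fill the outside background
    holes = cv2.bitwise_not(flooded)                  # pixels not reached = holes
    return cv2.bitwise_or(mask, holes)
```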
Results
Example of the whole mosaic post-processing. Green/blue - salt/empty regions from the train dataset; red - predicted mask; yellow - inpainted by the post-processing (used in the final submission).
The results are based on step-by-step improvement of the pipeline, the post-processing, and fair cross-validation. The final results were achieved with carefully selected architectures, without heavy ensembles of neural nets or second-order models. Reasonable cross-validation with the evaluation metric prevented us from overfitting to the public leaderboard.
For more details on the data pre- and post-processing, as well as the experiments conducted with neural nets, check out the blog post.
Quick setup and start
Requirements
- Nvidia drivers, CUDA >= 9, cuDNN >= 7
- Docker, nvidia-docker
A Dockerfile is provided to build an image with CUDA and cuDNN support.
Preparations
Clone the repo, build docker image.
```bash
git clone https://github.com/lRomul/argus-tgs-salt.git
cd argus-tgs-salt
make build
```
Download and extract the dataset:
- extract train images and masks into data/train/
- extract test images into data/test/
The folder structure should be:
```
argus-tgs-salt
├── data
│   ├── mosaic
│   ├── test
│   └── train
├── docker
├── mosaic
├── notebooks
├── pipeline
├── src
└── unused
```
Run
Run the docker container:
```bash
make run
```
Start the full pipeline with post-processing:
```bash
./run_pipeline.sh
```
The final submission file will be at:
data/predictions/mean-005-0.4/submission.csv
Owner
- Name: Ruslan Baikulov
- Login: lRomul
- Kind: user
- Location: Moscow, Russia
- Repositories: 5
- Profile: https://github.com/lRomul
Deep Learning Engineer
GitHub Events
Total
- Watch event: 1
Last Year
- Watch event: 1
Committers
Last synced: 10 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Ruslan Baikulov | r****3@g****m | 5 |
Issues and Pull Requests
Last synced: 10 months ago
All Time
- Total issues: 1
- Total pull requests: 0
- Average time to close issues: 3 days
- Average time to close pull requests: N/A
- Total issue authors: 1
- Total pull request authors: 0
- Average comments per issue: 9.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- zxshi (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- floydhub/pytorch 0.4.1-gpu.cuda9cudnn7-py3.34 build
- cffi *
- pretrainedmodels *
- pycocotools ==2.0.0
- pytorch-argus ==0.0.5
- scikit-optimize *
- shapely *
- torchsummary *
- tqdm *