force-2020-machine-learning-competition

The results, code, and data for the FORCE 2020 Machine Learning competition, released after the competition concluded in October 2020.

https://github.com/bolgebrygg/force-2020-machine-learning-competition

Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.9%) to scientific vocabulary
Last synced: 7 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: bolgebrygg
  • Language: Lasso
  • Default Branch: master
  • Homepage:
  • Size: 412 MB
Statistics
  • Stars: 152
  • Watchers: 9
  • Forks: 97
  • Open Issues: 61
  • Releases: 0
Created over 5 years ago · Last pushed over 3 years ago
Metadata Files
Readme Citation

readme.md

FORCE 2020 Machine Learning Competition

For citation please use: Bormann P., Aursand P., Dilib F., Dischington P., Manral S. 2020. FORCE Machine Learning Competition. https://github.com/bolgebrygg/Force-2020-Machine-Learning-competition


Link to the contest: https://www.npd.no/en/force/events/machine-learning-contest-with-wells-and-seismic/

Sponsors

Lithology prediction

The objective of the lithology prediction competition was to correctly predict lithology labels from the provided well logs, NPD lithostratigraphy, and well X/Y positions.

The competition was scored using a penalty matrix: some label mistakes are penalized more than others; see the starter notebook and the penalty matrix for details.
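To illustrate how penalty-matrix scoring works, here is a minimal sketch. The matrix values and class count below are made up for illustration; the actual competition matrix and class labels are in the starter notebook under lithology_competition/data.

```python
import numpy as np

# Hypothetical 3-class penalty matrix: A[i, j] is the penalty for
# predicting class j when the true class is i. The diagonal is zero
# (correct predictions cost nothing); confusions judged geologically
# worse carry larger penalties.
A = np.array([
    [0.0, 2.0, 3.5],
    [2.0, 0.0, 2.0],
    [3.5, 2.0, 0.0],
])

def penalty_score(y_true, y_pred, penalty_matrix):
    """Negative mean penalty over all samples.

    A perfect prediction scores 0; worse predictions score lower
    (more negative), matching the negative scores on the leaderboard.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Fancy indexing looks up the penalty for each (true, predicted) pair.
    return -penalty_matrix[y_true, y_pred].mean()

print(penalty_score([0, 1, 2, 2], [0, 1, 2, 2], A))  # perfect predictions
print(penalty_score([0, 1, 2, 2], [2, 1, 2, 0], A))  # two costly confusions
```

Under this convention, leaderboard scores closer to 0 are better, which is why the final results below are small negative numbers.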

All datasets used in the competition and the starter notebook can be found under lithology_competition/data

Petrel-ready files and standard well log LAS files, all CSV file data, predictions, and submitted models and weights can also be found here, along with a host of other free geoscience subsurface data. The folder is called FORCE 2020 lithofacies prediction from well logs competition (https://drive.google.com/drive/folders/0B7brcf-eGK8CRUhfRW9rSG91bW8)
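As a sketch of the kind of workflow the competition data supports, the snippet below trains a simple classifier on a synthetic stand-in for the well-log CSVs. The column names (GR, RHOB, NPHI, X, Y, LITHOLOGY) are illustrative log mnemonics and coordinates, not the exact competition schema; substitute the real column names from the files under lithology_competition/data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a competition CSV: a few log curves plus
# well position, with a toy 2-class lithology label derived from GR.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "GR": rng.normal(60, 20, n),        # gamma ray
    "RHOB": rng.normal(2.4, 0.2, n),    # bulk density
    "NPHI": rng.normal(0.25, 0.08, n),  # neutron porosity
    "X": rng.uniform(4e5, 6e5, n),      # easting
    "Y": rng.uniform(6.4e6, 6.8e6, n),  # northing
})
df["LITHOLOGY"] = (df["GR"] > 60).astype(int)

features = ["GR", "RHOB", "NPHI", "X", "Y"]
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(df[features], df["LITHOLOGY"])

# In-sample accuracy on the synthetic data (real evaluation used the
# competition's penalty matrix on held-out wells, not plain accuracy).
accuracy = (model.predict(df[features]) == df["LITHOLOGY"]).mean()
print(accuracy)
```

In the real competition, evaluation was on hidden wells with the penalty matrix, so a proper pipeline would hold out whole wells rather than score in-sample as done here.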

Results of final scoring

A total of 329 teams signed up for the competition, and 148 teams submitted predictions on the open test dataset to enter the competition leaderboard. At the end of the competition, the top 30 teams on the leaderboard were invited to submit their pre-trained models for scoring on a hidden dataset. Of these teams, 13 submitted code that was easily runnable by the organizers, giving the final scores below.

Description and analysis of the results

A geological and organisational summary write-up of the results (https://docs.google.com/document/d/13XAftsBVHIm01ZN0lP56Q4hZ9hgdYR1G_6KeV2DdzOA/edit?usp=sharing)

| Team | Leaderboard score | Leaderboard rank | Final test score | Final rank |
|---|---|---|---|---|
| Olawale Ibrahim | -0.5118 | 24 | -0.4690 | 1 |
| GIR Team | -0.5037 | 11 | -0.4792 | 2 |
| Lab.ICA-Team / Smith A. | -0.4943 | 6 | -0.4954 | 3 |
| H3G (Haoyuan Zhang, Harry Brandsen, Gregory Barrere, Helena Nandi Formentin) | -0.509 | 17 | -0.5045 | 4 |
| ISPL Team | -0.4885 | 2 | -0.5084 | 5 |
| Jiampiers C. | -0.5014 | 9 | -0.5087 | 6 |
| José Bermúdez | -0.5052 | 14 | -0.5091 | 7 |
| Bohdan Pavlyshenko | -0.5112 | 22 | -0.5171 | 8 |
| Jeremy Zhao | -0.5264 | 31 | -0.5173 | 9 |
| Campbell Hutcheson | -0.505 | 13 | -0.5221 | 10 |
| David P. | -0.4775 | 1 | -0.5256 | 11 |
| SoftServe Team | -0.4936 | 3 | -0.5263 | 12 |
| Dapo Awolayo | -0.5121 | 25 | -0.9441 | 13 |

Mapping faults on seismic FORCE 2020 competition

A total of 80 teams signed up for the competition, but only 5 submitted a valid scored fault cube in the end. We were a bit surprised by the low rate of submissions. It is most likely explained by the fact that the seismic dataset was not one of the shining examples where ML fault models perform wonderfully, but rather a standard seismic cube with some migration issues.

Sparveon won the competition, followed by Equinor and Woodside.

A geological summary write up of the competition can be found here: https://docs.google.com/document/d/1DURjbg2o5C4N5QUaK0bsUGHUb6QLKdhGuKjpZqTkm9g/edit?usp=sharing
A slidepack comparing the results can be found here: https://drive.google.com/file/d/1r0k4MU22QmhsxgZ7BJZ8sYveDRQwJlne/view?usp=sharing
All the submitted scored blind cubes and samples of the training data can be accessed here: https://drive.google.com/drive/folders/1Hu4VJN9xLOWixSMdf2xN6fRk0zmOz-1J?usp=sharing


Owner

  • Name: Peter
  • Login: bolgebrygg
  • Kind: user
  • Location: Norway

Geologist working for the subsurface industry

GitHub Events

Total
  • Watch event: 19
  • Fork event: 3
Last Year
  • Watch event: 19
  • Fork event: 3

Dependencies

lithology_competition/code/BohdanP/requirements.txt pypi
  • lightgbm ==3.0.0
  • numpy ==1.18.1
  • pandas ==1.0.1
  • scikit-learn ==0.22.1
lithology_competition/code/CampbellH/requirements.txt pypi
  • Jinja2 ==2.11.2
  • MarkupSafe ==1.1.1
  • Pillow ==8.2.0
  • PyYAML ==5.3.1
  • Pygments ==2.7.2
  • QtPy ==1.9.0
  • Send2Trash ==1.5.0
  • argon2-cffi ==20.1.0
  • async-generator ==1.10
  • attrs ==20.2.0
  • backcall ==0.2.0
  • bleach ==3.2.1
  • blis ==0.4.1
  • catalogue ==1.0.0
  • certifi ==2020.6.20
  • cffi ==1.14.3
  • chardet ==3.0.4
  • cycler ==0.10.0
  • cymem ==2.0.3
  • decorator ==4.4.2
  • defusedxml ==0.6.0
  • entrypoints ==0.3
  • fastai ==2.0.16
  • fastcore ==1.2.4
  • fastprogress ==1.0.0
  • future ==0.18.2
  • idna ==2.10
  • ipykernel ==5.3.4
  • ipython ==7.18.1
  • ipython-genutils ==0.2.0
  • ipywidgets ==7.5.1
  • jedi ==0.17.2
  • joblib ==0.17.0
  • jsonschema ==3.2.0
  • jupyter ==1.0.0
  • jupyter-client ==6.1.7
  • jupyter-console ==6.2.0
  • jupyter-core ==4.6.3
  • jupyterlab-pygments ==0.1.2
  • kiwisolver ==1.2.0
  • lab ==6.2
  • matplotlib ==3.3.2
  • mistune ==0.8.4
  • murmurhash ==1.0.2
  • nbclient ==0.5.1
  • nbconvert ==6.0.7
  • nbformat ==5.0.8
  • nest-asyncio ==1.4.2
  • notebook ==6.1.4
  • numpy ==1.19.2
  • packaging ==20.4
  • pandas ==1.1.3
  • pandocfilters ==1.4.3
  • parso ==0.7.1
  • pexpect ==4.8.0
  • pickleshare ==0.7.5
  • plac ==1.1.3
  • preshed ==3.0.2
  • prometheus-client ==0.8.0
  • prompt-toolkit ==3.0.8
  • ptyprocess ==0.6.0
  • pycparser ==2.20
  • pyparsing ==2.4.7
  • pyrsistent ==0.17.3
  • python-dateutil ==2.8.1
  • pytz ==2020.1
  • pyzmq ==19.0.2
  • qtconsole ==4.7.7
  • requests ==2.24.0
  • scikit-learn ==0.23.2
  • scipy ==1.5.3
  • simplejson ==3.17.2
  • six ==1.15.0
  • spacy ==2.3.2
  • srsly ==1.0.2
  • terminado ==0.9.1
  • testpath ==0.4.4
  • thinc ==7.4.1
  • threadpoolctl ==2.1.0
  • torch ==1.6.0
  • torchvision ==0.7.0
  • tornado ==6.0.4
  • tqdm ==4.51.0
  • traitlets ==5.0.5
  • txt2tags ==3.7
  • urllib3 ==1.25.11
  • wasabi ==0.8.0
  • wcwidth ==0.2.5
  • webencodings ==0.5.1
  • widgetsnbextension ==3.5.1
  • xgboost ==1.2.1