https://github.com/ai-forever/ner-bert
BERT-NER (nert-bert) with google bert https://github.com/google-research.
Science Score: 10.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ○ codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (7.3%) to scientific vocabulary
Keywords
Repository
Basic Info
Statistics
- Stars: 408
- Watchers: 18
- Forks: 100
- Open Issues: 3
- Releases: 0
Topics
Metadata Files
README.md
0. Papers
There are two solutions based on this architecture:
1. BSNLP 2019 ACL workshop: a solution and paper for the multilingual shared task.
2. The second-place solution and paper for the Dialogue AGRR-2019 task.
Description
This repository contains a solution to the NER task based on a PyTorch reimplementation of Google's TensorFlow repository for the BERT model, which was released together with the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
This implementation can load any pre-trained TensorFlow checkpoint for BERT (in particular Google's pre-trained models).
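Loading a TensorFlow checkpoint usually means converting it to PyTorch weights first. A minimal sketch, assuming the pytorch-pretrained-bert conversion utility is available (the repository may wrap this step differently, and the paths below are placeholders):

```
# Sketch only: convert a Google TF BERT checkpoint to PyTorch weights.
# Assumes the pytorch-pretrained-bert package; all paths are placeholders.
from pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch import (
    convert_tf_checkpoint_to_pytorch,
)

convert_tf_checkpoint_to_pytorch(
    "multi_cased_L-12_H-768_A-12/bert_model.ckpt",    # TF checkpoint prefix
    "multi_cased_L-12_H-768_A-12/bert_config.json",   # model config
    "multi_cased_L-12_H-768_A-12/pytorch_model.bin",  # output PyTorch weights
)
```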
The old version is kept in the "old" branch.
2. Usage
2.1 Create data
```
from modules.data import bert_data

# Build the train/valid datasets and the label vocabulary (idx2labels).
data = bert_data.LearnData.create(
    train_df_path=train_df_path,
    valid_df_path=valid_df_path,
    idx2labels_path="/path/to/vocab",
    clear_cache=True
)
```
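The expected dataframe layout is not shown in this excerpt. As a purely hypothetical illustration (the column names and tag scheme below are assumptions, not taken from the repository; check its data preparation examples for the real format), a training CSV could be built like this:

```
import pandas as pd

# Hypothetical layout: one sentence per row, whitespace-aligned tags and tokens.
# The column names "labels"/"text" and the IO tag names are assumptions.
rows = [
    {"labels": "O O B_ORG", "text": "He joined Gazprom"},
    {"labels": "B_PER O O", "text": "Ivanov arrived yesterday"},
]
pd.DataFrame(rows).to_csv("/path/to/train.csv", index=False)
```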
2.2 Create model
```
from modules.models.bert_models import BERTBiLSTMAttnCRF

# The output layer size equals the number of NER labels.
model = BERTBiLSTMAttnCRF.create(len(data.train_ds.idx2label))
```
2.3 Create Learner
```
from modules.train.train import NerLearner

num_epochs = 100
# t_total is the total number of optimization steps (epochs * batches per epoch).
learner = NerLearner(
    model, data, "/path/for/save/best/model",
    t_total=num_epochs * len(data.train_dl))
```
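The training call itself is not listed as a numbered step in this excerpt; before predicting, the learner presumably has to be trained first. A minimal sketch, assuming NerLearner exposes a fit method (the exact signature may differ):

```
# Assumption: NerLearner has a fit(...) method; argument names may differ.
learner.fit(epochs=num_epochs)
```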
2.4 Predict
```
from modules.data.bert_data import get_data_loader_for_predict

# Restore the best saved checkpoint and run inference on a new dataframe.
learner.load_model()
dl = get_data_loader_for_predict(data, df_path="/path/to/df/for/predict")
# preds are label sequences over BERT sub-tokens; see 2.5 for mapping them back.
preds = learner.predict(dl)
```
2.5 Evaluate
```
from sklearn_crfsuite.metrics import flat_classification_report
from modules.analyze_utils.utils import bert_labels2tokens, voting_choicer
from modules.analyze_utils.plot_metrics import get_bert_span_report
from modules.analyze_utils.main_metrics import precision_recall_f1

# Map BERT sub-token predictions back to the original tokens.
pred_tokens, pred_labels = bert_labels2tokens(dl, preds)
true_tokens, true_labels = bert_labels2tokens(dl, [x.bert_labels for x in dl.dataset])

# Token-level classification report.
tokens_report = flat_classification_report(true_labels, pred_labels, digits=4)
print(tokens_report)

results = precision_recall_f1(true_labels, pred_labels)
```
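The snippet above imports get_bert_span_report and voting_choicer but never calls them. A hedged sketch of how a span-level report might be produced with them (the argument list is an assumption and may differ from the repository's actual API):

```
# Assumption: get_bert_span_report aggregates sub-token predictions into entity
# spans using voting_choicer; the signature shown here is a guess.
span_report = get_bert_span_report(dl, preds, fn=voting_choicer)
print(span_report)
```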
3. Results
We did not search for the best parameters and obtained the following results.
| Model | Data set | Dev F1 tok | Dev F1 span | Test F1 tok | Test F1 span |
|---|---|---|---|---|---|
| OURS | | | | | |
| M-BERTCRF-IO | FactRuEval | - | - | 0.8543 | 0.8409 |
| M-BERTNCRF-IO | FactRuEval | - | - | 0.8637 | 0.8516 |
| M-BERTBiLSTMCRF-IO | FactRuEval | - | - | 0.8835 | 0.8718 |
| M-BERTBiLSTMNCRF-IO | FactRuEval | - | - | 0.8632 | 0.8510 |
| M-BERTAttnCRF-IO | FactRuEval | - | - | 0.8503 | 0.8346 |
| M-BERTBiLSTMAttnCRF-IO | FactRuEval | - | - | 0.8839 | 0.8716 |
| M-BERTBiLSTMAttnNCRF-IO | FactRuEval | - | - | 0.8807 | 0.8680 |
| M-BERTBiLSTMAttnCRF-fitBERT-IO | FactRuEval | - | - | 0.8823 | 0.8709 |
| M-BERTBiLSTMAttnNCRF-fitBERT-IO | FactRuEval | - | - | 0.8583 | 0.8456 |
| BERTBiLSTMCRF-IO | CoNLL-2003 | 0.9629 | - | 0.9221 | - |
| B-BERTBiLSTMCRF-IO | CoNLL-2003 | 0.9635 | - | 0.9229 | - |
| B-BERTBiLSTMAttnCRF-IO | CoNLL-2003 | 0.9614 | - | 0.9237 | - |
| B-BERTBiLSTMAttnNCRF-IO | CoNLL-2003 | 0.9631 | - | 0.9249 | - |
| Current SOTA | | | | | |
| DeepPavlov-RuBERT-NER | FactRuEval | - | - | - | 0.8266 |
| CSE | CoNLL-2003 | - | - | 0.931 | - |
| BERT-LARGE | CoNLL-2003 | 0.966 | - | 0.928 | - |
| BERT-BASE | CoNLL-2003 | 0.964 | - | 0.924 | - |
Owner
- Name: AI Forever
- Login: ai-forever
- Kind: organization
- Location: Armenia
- Repositories: 60
- Profile: https://github.com/ai-forever
Creating ML for the future. AI projects you already know. We are a non-profit organization with members from all over the world.
GitHub Events
Total
- Watch event: 5
- Fork event: 2
Last Year
- Watch event: 5
- Fork event: 2
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| king-menin | l****t@m****u | 64 |
| Ubuntu | l****s@s****t | 51 |
| Natalia Evlampieva | K****n | 8 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 9 months ago
All Time
- Total issues: 33
- Total pull requests: 0
- Average time to close issues: about 1 month
- Average time to close pull requests: N/A
- Total issue authors: 27
- Total pull request authors: 0
- Average comments per issue: 1.67
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- MNCTTY (3)
- EricAugust (2)
- OmGraja (2)
- karakiz (1)
- pfecht (1)
- Ian-peace (1)
- lucky630 (1)
- g-jing (1)
- possible1402 (1)
- dingmiaomiao (1)
- Nic-Ma (1)
- sloth2012 (1)
- chzuo (1)
- lz-chen (1)
- mhrihab (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- bson *
- matplotlib *
- nltk *
- numpy *
- pandas *
- rusenttokenize *
- scikit-learn *
- sklearn-crfsuite *
- torch *
- tqdm *