deepke
[EMNLP 2022] An Open Toolkit for Knowledge Graph Extraction and Construction
Science Score: 64.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ✓ Committers with academic emails: 5 of 34 committers (14.7%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (11.3%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: zjunlp
- License: mit
- Language: Python
- Default Branch: main
- Homepage: http://deepke.zjukg.cn/
- Size: 121 MB
Statistics
- Stars: 4,088
- Watchers: 46
- Forks: 726
- Open Issues: 1
- Releases: 9
Metadata Files
README.md
English | 简体中文
A Deep Learning Based Knowledge Extraction Toolkit
for Knowledge Graph Construction
DeepKE is a knowledge extraction toolkit for knowledge graph construction, supporting cnSchema, low-resource, document-level and multimodal scenarios for entity, relation and attribute extraction. We provide documents, an online demo, a paper, slides and a poster for beginners.
- ❗Want to use Large Language Models with DeepKE? Try DeepKE-LLM and OneKE, have fun!
- ❗Want to train supervised models? Try Quick Start. We provide the NER models (e.g., LightNER (COLING'22), W2NER (AAAI'22)), relation extraction models (e.g., KnowPrompt (WWW'22)), relational triple extraction models (e.g., ASP (EMNLP'22), PRGC (ACL'21), PURE (NAACL'21)), and release off-the-shelf models at DeepKE-cnSchema, have fun!
- We recommend using Linux; on Windows, please use `\\` in file paths.
- If HuggingFace is inaccessible, please consider using wisemodel or modelscope.
If you encounter any issues during the installation of DeepKE and DeepKE-LLM, please check Tips or promptly submit an issue, and we will assist you with resolving the problem!
Table of Contents
- Table of Contents
- What's New
- Prediction Demo
- Model Framework
- Quick Start
- Tips
- To do
- Reading Materials
- Related Toolkit
- Citation
- Contributors
- Other Knowledge Extraction Open-Source Projects
What's New
- `June, 2025` We integrate the MCP service tools into DeepKE, enabling knowledge extraction through large language models (LLMs) as tool callers for lightweight models.
- `December, 2024` We open-source the OneKE knowledge extraction framework, supporting multi-agent knowledge extraction across various scenarios.
- `April, 2024` We release a new bilingual (Chinese and English) schema-based information extraction model called OneKE based on Chinese-Alpaca-2-13B.
- `Feb, 2024` We release a large-scale (0.32B tokens) high-quality bilingual (Chinese and English) Information Extraction (IE) instruction dataset named IEPile, along with two models trained with IEPile: baichuan2-13b-iepile-lora and llama2-13b-iepile-lora.
- `Sep, 2023` A bilingual Chinese-English Information Extraction (IE) instruction dataset called InstructIE was released for the Instruction-based Knowledge Graph Construction Task (Instruction-based KGC), as detailed here.
- `June, 2023` We update DeepKE-LLM to support knowledge extraction with KnowLM, ChatGLM, the LLaMA series, the GPT series, etc.
- `Apr, 2023` We have added new models, including CP-NER (IJCAI'23), ASP (EMNLP'22), PRGC (ACL'21), PURE (NAACL'21), provided event extraction capabilities (Chinese and English), and offered compatibility with higher versions of Python packages (e.g., Transformers).
- `Feb, 2023` We have supported using LLMs (GPT-3) with in-context learning (based on EasyInstruct) and data generation, and added an NER model, W2NER (AAAI'22).
Previous News
* `Nov, 2022` Add data [annotation instructions](https://github.com/zjunlp/DeepKE/blob/main/README_TAG.md) for entity recognition and relation extraction, automatic labelling of weakly supervised data ([entity extraction](https://github.com/zjunlp/DeepKE/tree/main/example/ner/prepare-data) and [relation extraction](https://github.com/zjunlp/DeepKE/tree/main/example/re/prepare-data)), and optimize [multi-GPU training](https://github.com/zjunlp/DeepKE/tree/main/example/re/standard).
* `Sept, 2022` The paper [DeepKE: A Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population](https://arxiv.org/abs/2201.03335) has been accepted by the EMNLP 2022 System Demonstration Track.
* `Aug, 2022` We have added [data augmentation](https://github.com/zjunlp/DeepKE/tree/main/example/re/few-shot/DA) (Chinese, English) support for [low-resource relation extraction](https://github.com/zjunlp/DeepKE/tree/main/example/re/few-shot).
* `June, 2022` We have added multimodal support for [entity](https://github.com/zjunlp/DeepKE/tree/main/example/ner/multimodal) and [relation extraction](https://github.com/zjunlp/DeepKE/tree/main/example/re/multimodal).
* `May, 2022` We have released [DeepKE-cnschema](https://github.com/zjunlp/DeepKE/blob/main/README_CNSCHEMA.md) with off-the-shelf knowledge extraction models.
* `Jan, 2022` We have released the paper [DeepKE: A Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population](https://arxiv.org/abs/2201.03335).
* `Dec, 2021` We have added a `dockerfile` to create the environment automatically.
* `Nov, 2021` The demo of DeepKE, supporting real-time extraction without deploying and training, has been released.
* The documentation of DeepKE, containing details such as source code and datasets, has been released.
* `Oct, 2021` `pip install deepke`; the codes of deepke-v2.0 have been released.
* `Aug, 2019` The codes of deepke-v1.0 have been released.
* `Aug, 2018` The DeepKE project started and the codes of deepke-v0.1 have been released.

Prediction Demo
There is a demonstration of prediction. The GIF file is created by Terminalizer. Get the code.

Model Framework
- DeepKE contains a unified framework for named entity recognition, relation extraction and attribute extraction, the three knowledge extraction functions.
- Each task can be implemented in different scenarios. For example, we can achieve relation extraction in standard, low-resource (few-shot), document-level and multimodal settings.
- Each application scenario comprises three components: Data (including Tokenizer, Preprocessor and Loader), Model (including Module, Encoder and Forwarder), and Core (including Training, Evaluation and Prediction).
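The three-component layout described above can be sketched as plain Python classes. This is an illustrative sketch only; the class names mirror the Data/Model/Core components named in the text and are not DeepKE's actual API.

```python
# Illustrative sketch of the Data -> Model -> Core pipeline described above.
# These classes are hypothetical stand-ins, NOT DeepKE's real interfaces.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DataComponent:
    """Data: Tokenizer -> Preprocessor -> Loader."""
    tokenize: Callable[[str], List[str]]
    preprocess: Callable[[List[str]], List[str]]

    def load(self, sentences: List[str]) -> List[List[str]]:
        return [self.preprocess(self.tokenize(s)) for s in sentences]

class ModelComponent:
    """Model: Module + Encoder + Forwarder (trivial placeholder)."""
    def forward(self, tokens: List[str]) -> int:
        return len(tokens)  # placeholder "prediction": token count

@dataclass
class Core:
    """Core: glue for Training, Evaluation and Prediction."""
    data: DataComponent
    model: ModelComponent

    def predict(self, sentences: List[str]) -> List[int]:
        return [self.model.forward(t) for t in self.data.load(sentences)]

core = Core(
    data=DataComponent(tokenize=str.split,
                       preprocess=lambda ts: [t.lower() for t in ts]),
    model=ModelComponent(),
)
print(core.predict(["DeepKE extracts knowledge", "Hello world"]))  # [3, 2]
```

The point of the separation is that each scenario (standard, few-shot, document-level, multimodal) can swap one component without touching the others.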
Quick Start
DeepKE-LLM
In the era of large models, DeepKE-LLM uses a completely separate set of environment dependencies.
```bash
conda create -n deepke-llm python=3.9
conda activate deepke-llm
cd example/llm
pip install -r requirements.txt
```
Please note that the requirements.txt file is located in the example/llm folder.
DeepKE-MCP-Tools
We integrate the MCP (Model Context Protocol) service tools into DeepKE, enabling knowledge extraction through large language models (LLMs) as tool callers for lightweight models.
- The MCP service has been deployed and is accessible at URL.
- For local deployment, refer to the README for detailed operational procedures.
DeepKE
- DeepKE supports `pip install deepke`. Here we take fully supervised relation extraction as an example.
- DeepKE supports both manual and Docker-image environment configuration; you can choose the appropriate way to build.
- It is highly recommended to install DeepKE in a Linux environment.

🔧 Manual Environment Configuration

Step1 Download the basic code
```bash
git clone --depth 1 https://github.com/zjunlp/DeepKE.git
```
Step2 Create a virtual environment using Anaconda and enter it.
```bash
conda create -n deepke python=3.8
conda activate deepke
```
- Install DeepKE with source code
```bash
pip install -r requirements.txt
python setup.py install
python setup.py develop
```
- Install DeepKE with `pip` (NOT recommended!)

```bash
pip install deepke
```

- Please make sure that the pip version is <= 24.0
Step3 Enter the task directory
```bash
cd DeepKE/example/re/standard
```
Step4 Download the dataset, or follow the annotation instructions to obtain data
```bash
wget 121.41.117.246:8080/Data/re/standard/data.tar.gz
tar -xzvf data.tar.gz
```
Many types of data formats are supported, and details are in each part.
Step5 Training (Parameters for training can be changed in the conf folder)
We support visual parameter tuning by using wandb.
```bash
python run.py
```
Step6 Prediction (Parameters for prediction can be changed in the conf folder)
Modify the path of the trained model in predict.yaml. The absolute path of the model needs to be used, such as xxx/checkpoints/2019-12-03_17-35-30/cnn_epoch21.pth.
```bash
python predict.py
```
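Since predict.yaml requires the absolute path of a specific checkpoint, a small helper can locate the newest `.pth` file for you. This is a hypothetical convenience script, not part of DeepKE; it simply prints the path to paste into predict.yaml.

```python
# Hypothetical helper (NOT part of DeepKE): find the most recently modified
# *.pth checkpoint under a directory, so its absolute path can be copied
# into conf/predict.yaml.
from pathlib import Path
from typing import Optional

def latest_checkpoint(root: str) -> Optional[Path]:
    """Return the newest *.pth file under `root` by mtime, or None."""
    candidates = sorted(Path(root).rglob("*.pth"),
                        key=lambda p: p.stat().st_mtime)
    return candidates[-1].resolve() if candidates else None

ckpt = latest_checkpoint("checkpoints")
if ckpt is not None:
    print(ckpt)  # paste this absolute path into predict.yaml
```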
- ❗NOTE: if you encounter any errors, please refer to the Tips or submit a GitHub issue.
🐳Building With Docker Images
Step1 Install the Docker client
Install Docker and start the Docker service.
Step2 Pull the docker image and run the container
```bash
docker pull zjunlp/deepke:latest
docker run -it zjunlp/deepke:latest /bin/bash
```
The remaining steps are the same as Step 3 and onwards in Manual Environment Configuration.
- ❗NOTE: You can refer to the Tips to speed up installation
Requirements
DeepKE
- python == 3.8
- torch>=1.5,<=1.11
- hydra-core==1.0.6
- tensorboard==2.4.1
- matplotlib==3.4.1
- transformers==4.26.0
- jieba==0.42.1
- scikit-learn==0.24.1
- seqeval==1.2.2
- opt-einsum==3.3.0
- wandb==0.12.7
- ujson==5.6.0
- huggingface_hub==0.11.0
- tensorboardX==2.5.1
- nltk==3.8
- protobuf==3.20.1
- numpy==1.21.0
- ipdb==0.13.11
- pytorch-crf==0.7.2
- tqdm==4.66.1
- openai==0.28.0
- Jinja2==3.1.2
- datasets==2.13.2
- pyhocon==0.3.60
Introduction of Three Functions
1. Named Entity Recognition
Named entity recognition seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, etc.
The data is stored in `.txt` files. Some instances follow (users can label data with the tools Doccano or MarkTool, or use Weak Supervision with DeepKE to obtain data automatically):
| Sentence | Person | Location | Organization |
| :---: | :---: | :---: | :---: |
| 本报北京9月4日讯记者杨涌报道:部分省区人民日报宣传发行工作座谈会9月3日在4日在京举行。 | 杨涌 | 北京 | 人民日报 |
| 《红楼梦》由王扶林导演,周汝昌、王蒙、周岭等多位专家参与制作。 | 王扶林,周汝昌,王蒙,周岭 | | |
| 秦始皇兵马俑位于陕西省西安市,是世界八大奇迹之一。 | 秦始皇 | 陕西省,西安市 | |
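Labelled sentences like those above are commonly serialized as character-level BIO tags for NER training. The sketch below is illustrative only (not DeepKE's actual data loader), and the example sentence is a hypothetical one built from entities in the first table row; it uses the first occurrence of each entity for simplicity.

```python
# Illustrative sketch (NOT DeepKE's loader): convert a sentence plus its
# labelled entities into character-level BIO tags, a common NER data format.
from typing import List, Tuple

def to_bio(sentence: str, entities: List[Tuple[str, str]]) -> List[str]:
    tags = ["O"] * len(sentence)
    for surface, etype in entities:
        start = sentence.find(surface)  # simplification: first occurrence only
        if start < 0:
            continue
        tags[start] = f"B-{etype}"
        for i in range(start + 1, start + len(surface)):
            tags[i] = f"I-{etype}"
    return tags

# Hypothetical sentence using entities from the first table row above.
print(to_bio("杨涌在北京工作", [("杨涌", "PER"), ("北京", "LOC")]))
# ['B-PER', 'I-PER', 'O', 'B-LOC', 'I-LOC', 'O', 'O']
```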
Read the detailed process in the specific README.
We support LLM and provide the off-the-shelf model, DeepKE-cnSchema-NER, which will extract entities in cnSchema without training.
Step1 Enter `DeepKE/example/ner/standard`. Download the dataset.

```bash
wget 121.41.117.246:8080/Data/ner/standard/data.tar.gz
tar -xzvf data.tar.gz
```
Step2 Training
The dataset and parameters can be customized in the `data` folder and `conf` folder respectively.

```bash
python run.py
```

Step3 Prediction

```bash
python predict.py
```

- FEW-SHOT

Step1 Enter `DeepKE/example/ner/few-shot`. Download the dataset.

```bash
wget 121.41.117.246:8080/Data/ner/few_shot/data.tar.gz
tar -xzvf data.tar.gz
```
Step2 Training in the low-resource setting

The directory where the model is loaded and saved, as well as the configuration parameters, can be customized in the `conf` folder.

```bash
python run.py +train=few_shot
```

Users can modify `load_path` in `conf/train/few_shot.yaml` to use an existing loaded model.

Step3 Add `- predict` to `conf/config.yaml`, modify `load_path` as the model path and `write_path` as the path where the predicted results are saved in `conf/predict.yaml`, and then run:

```bash
python predict.py
```

- MULTIMODAL

Step1 Enter `DeepKE/example/ner/multimodal`. Download the dataset.

```bash
wget 121.41.117.246:8080/Data/ner/multimodal/data.tar.gz
tar -xzvf data.tar.gz
```
We use RCNN-detected objects and visual grounding objects from the original images as visual local information, where RCNN detection is performed via faster_rcnn and visual grounding via onestage_grounding.
Step2 Training in the multimodal setting

- The dataset and parameters can be customized in the `data` folder and `conf` folder respectively.
- To start with the model trained last time, modify `load_path` in `conf/train.yaml` as the path where that model was saved. The path for saving logs generated in training can be customized by `log_dir`.

```bash
python run.py
```

Step3 Prediction

```bash
python predict.py
```
2. Relation Extraction
Relation extraction is the task of extracting semantic relations between entities from unstructured text.
The data is stored in `.csv` files. Some instances follow (users can label data with the tools Doccano or MarkTool, or use Weak Supervision with DeepKE to obtain data automatically):
| Sentence | Relation | Head | Head offset | Tail | Tail offset |
| :---: | :---: | :---: | :---: | :---: | :---: |
| 《岳父也是爹》是王军执导的电视剧,由马恩然、范明主演。 | 导演 | 岳父也是爹 | 1 | 王军 | 8 |
| 《九玄珠》是在纵横中文网连载的一部小说,作者是龙马。 | 连载网站 | 九玄珠 | 1 | 纵横中文网 | 7 |
| 提起杭州的美景,西湖总是第一个映入脑海的词语。 | 所在城市 | 西湖 | 8 | 杭州 | 2 |
- ❗NOTE: If there are multiple entity types for one relation, entity types can be prefixed with the relation as inputs.
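The offset columns are character indices of the head and tail entities within the sentence. A small stdlib-only sanity check, shown here on the first example row from the table above, makes that convention concrete:

```python
# Illustrative sanity check: Head offset / Tail offset are character indices
# of the head and tail entities within the sentence.
def check_offsets(sentence: str, head: str, head_off: int,
                  tail: str, tail_off: int) -> bool:
    return (sentence[head_off:head_off + len(head)] == head
            and sentence[tail_off:tail_off + len(tail)] == tail)

sent = "《岳父也是爹》是王军执导的电视剧,由马恩然、范明主演。"
print(check_offsets(sent, head="岳父也是爹", head_off=1,
                    tail="王军", tail_off=8))
# True
```

Running the same check over a whole `.csv` file is a cheap way to catch mis-annotated offsets before training.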
Read the detailed process in the specific README.
We support LLM and provide the off-the-shelf model, DeepKE-cnSchema-RE, which will extract relations in cnSchema without training.
Step1 Enter the `DeepKE/example/re/standard` folder. Download the dataset.

```bash
wget 121.41.117.246:8080/Data/re/standard/data.tar.gz
tar -xzvf data.tar.gz
```
Step2 Training
The dataset and parameters can be customized in the `data` folder and `conf` folder respectively.

```bash
python run.py
```

Step3 Prediction

```bash
python predict.py
```

- FEW-SHOT

Step1 Enter `DeepKE/example/re/few-shot`. Download the dataset.

```bash
wget 121.41.117.246:8080/Data/re/few_shot/data.tar.gz
tar -xzvf data.tar.gz
```
Step2 Training

- The dataset and parameters can be customized in the `data` folder and `conf` folder respectively.
- To start with the model trained last time, modify `train_from_saved_model` in `conf/train.yaml` as the path where that model was saved. The path for saving logs generated in training can be customized by `log_dir`.

```bash
python run.py
```

Step3 Prediction

```bash
python predict.py
```

- DOCUMENT

Step1 Enter `DeepKE/example/re/document`. Download the dataset.

```bash
wget 121.41.117.246:8080/Data/re/document/data.tar.gz
tar -xzvf data.tar.gz
```
Step2 Training

- The dataset and parameters can be customized in the `data` folder and `conf` folder respectively.
- To start with the model trained last time, modify `train_from_saved_model` in `conf/train.yaml` as the path where that model was saved. The path for saving logs generated in training can be customized by `log_dir`.

```bash
python run.py
```

Step3 Prediction

```bash
python predict.py
```

- MULTIMODAL

Step1 Enter `DeepKE/example/re/multimodal`. Download the dataset.

```bash
wget 121.41.117.246:8080/Data/re/multimodal/data.tar.gz
tar -xzvf data.tar.gz
```
We use RCNN-detected objects and visual grounding objects from the original images as visual local information, where RCNN detection is performed via faster_rcnn and visual grounding via onestage_grounding.
Step2 Training

- The dataset and parameters can be customized in the `data` folder and `conf` folder respectively.
- To start with the model trained last time, modify `load_path` in `conf/train.yaml` as the path where that model was saved. The path for saving logs generated in training can be customized by `log_dir`.

```bash
python run.py
```

Step3 Prediction

```bash
python predict.py
```
3. Attribute Extraction
Attribute extraction is to extract attributes for entities in unstructured text.
The data is stored in `.csv` files. Some instances follow:
| Sentence | Att | Ent | Ent offset | Val | Val offset |
| :---: | :---: | :---: | :---: | :---: | :---: |
| 张冬梅,女,汉族,1968年2月生,河南淇县人 | 民族 | 张冬梅 | 0 | 汉族 | 6 |
| 诸葛亮,字孔明,三国时期杰出的军事家、文学家、发明家。 | 朝代 | 诸葛亮 | 0 | 三国时期 | 8 |
| 2014年10月1日许鞍华执导的电影《黄金时代》上映 | 上映时间 | 黄金时代 | 19 | 2014年10月1日 | 0 |
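Each row maps to an (entity, attribute, value) triple, with the offset columns as character indices into the sentence. The sketch below (illustrative only, not DeepKE's API) validates and builds a triple from the first example row above:

```python
# Illustrative: an attribute-extraction row as an (entity, attribute, value)
# triple; Ent offset / Val offset are character indices into the sentence.
def row_to_triple(sentence, ent, ent_off, att, val, val_off):
    assert sentence[ent_off:ent_off + len(ent)] == ent, "bad entity offset"
    assert sentence[val_off:val_off + len(val)] == val, "bad value offset"
    return {"entity": ent, "attribute": att, "value": val}

triple = row_to_triple("张冬梅,女,汉族,1968年2月生,河南淇县人",
                       ent="张冬梅", ent_off=0, att="民族",
                       val="汉族", val_off=6)
print(triple)  # {'entity': '张冬梅', 'attribute': '民族', 'value': '汉族'}
```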
Read the detailed process in the specific README.
Step1 Enter the `DeepKE/example/ae/standard` folder. Download the dataset.

```bash
wget 121.41.117.246:8080/Data/ae/standard/data.tar.gz
tar -xzvf data.tar.gz
```
Step2 Training
The dataset and parameters can be customized in the `data` folder and `conf` folder respectively.

```bash
python run.py
```

Step3 Prediction

```bash
python predict.py
```
4. Event Extraction
- Event extraction is the task of extracting the event type, event trigger words and event arguments from unstructured text.
- The data is stored in `.tsv` files; some instances are as follows:
| Sentence | Event type | Trigger | Role | Argument |
|---|---|---|---|---|
| 据《欧洲时报》报道,当地时间27日,法国巴黎卢浮宫博物馆员工因不满工作条件恶化而罢工,导致该博物馆也因此闭门谢客一天。 | 组织行为-罢工 | 罢工 | 罢工人员 | 法国巴黎卢浮宫博物馆员工 |
| | | | 时间 | 当地时间27日 |
| | | | 所属组织 | 法国巴黎卢浮宫博物馆 |
| 中国外运2019年上半年归母净利润增长17%:收购了少数股东股权 | 财经/交易-出售/收购 | 收购 | 出售方 | 少数股东 |
| | | | 收购方 | 中国外运 |
| | | | 交易物 | 股权 |
| 美国亚特兰大航展13日发生一起表演机坠机事故,飞行员弹射出舱并安全着陆,事故没有造成人员伤亡。 | 灾害/意外-坠机 | 坠机 | 时间 | 13日 |
| | | | 地点 | 美国亚特兰 |
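An extracted event groups several role/argument rows under one trigger. The first example row above can be rendered as a structured record like this (illustrative only, not DeepKE's exact output format):

```python
# Illustrative: one extracted event as a trigger plus a role -> argument map.
event = {
    "event_type": "组织行为-罢工",
    "trigger": "罢工",
    "arguments": {
        "罢工人员": "法国巴黎卢浮宫博物馆员工",
        "时间": "当地时间27日",
        "所属组织": "法国巴黎卢浮宫博物馆",
    },
}

def format_event(e: dict) -> str:
    """Render an event record as a single readable line."""
    args = "; ".join(f"{role}={arg}" for role, arg in e["arguments"].items())
    return f"[{e['event_type']}] trigger={e['trigger']} | {args}"

print(format_event(event))
```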
Read the detailed process in the specific README.
Step1 Enter the `DeepKE/example/ee/standard` folder. Download the dataset.

```bash
wget 121.41.117.246:8080/Data/ee/DuEE.zip
unzip DuEE.zip
```

Step2 Training

The dataset and parameters can be customized in the `data` folder and `conf` folder respectively.

```bash
python run.py
```

Step3 Prediction

```bash
python predict.py
```
Tips
1. Using the nearest mirror (e.g., THU in China) will speed up the installation of Anaconda; the aliyun mirror in China will speed up `pip install`.
2. When encountering `ModuleNotFoundError: No module named 'past'`, run `pip install future`.
3. Downloading pretrained language models online is slow. We recommend downloading pretrained models before use and saving them in the `pretrained` folder. Read the README.md in each task directory to check the specific requirements for saving pretrained models.
4. The old version of DeepKE is in the deepke-v1.0 branch; users can switch branches to use it. The old version has been completely migrated to standard relation extraction (example/re/standard).
5. If you want to modify the source code, install DeepKE from source; otherwise the modifications will not take effect. See the issue.
6. More related low-resource knowledge extraction works can be found in Knowledge Extraction in Low-Resource Scenarios: Survey and Perspective.
7. Make sure to use the exact versions of the requirements in requirements.txt.
To do
In next version, we plan to release a stronger LLM for KE.
Meanwhile, we will offer long-term maintenance to fix bugs, resolve issues and meet new requests. If you have any problems, please open an issue.
Reading Materials
Data-Efficient Knowledge Graph Construction, 高效知识图谱构建 (Tutorial on CCKS 2022) [slides]
Efficient and Robust Knowledge Graph Construction (Tutorial on AACL-IJCNLP 2022) [slides]
PromptKG Family: a Gallery of Prompt Learning & KG-related Research Works, Toolkits, and Paper-list [Resources]
Knowledge Extraction in Low-Resource Scenarios: Survey and Perspective [Survey][Paper-list]
Related Toolkit
Doccano、MarkTool、LabelStudio: Data Annotation Toolkits
LambdaKG: A library and benchmark for PLM-based KG embeddings
EasyInstruct: An easy-to-use framework to instruct Large Language Models
Citation
Please cite our paper if you use DeepKE in your work.
```bibtex
@inproceedings{EMNLP2022_Demo_DeepKE,
  author = {Ningyu Zhang and
            Xin Xu and
            Liankuan Tao and
            Haiyang Yu and
            Hongbin Ye and
            Shuofei Qiao and
            Xin Xie and
            Xiang Chen and
            Zhoubo Li and
            Lei Li},
  editor = {Wanxiang Che and
            Ekaterina Shutova},
  title = {DeepKE: {A} Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population},
  booktitle = {{EMNLP} (Demos)},
  pages = {98--108},
  publisher = {Association for Computational Linguistics},
  year = {2022},
  url = {https://aclanthology.org/2022.emnlp-demos.10}
}
```
Contributors
Ningyu Zhang, Haofen Wang, Fei Huang, Feiyu Xiong, Liankuan Tao, Xin Xu, Honghao Gui, Zhenru Zhang, Chuanqi Tan, Qiang Chen, Xiaohan Wang, Zekun Xi, Xinrong Li, Haiyang Yu, Hongbin Ye, Shuofei Qiao, Peng Wang, Yuqi Zhu, Xin Xie, Xiang Chen, Zhoubo Li, Lei Li, Xiaozhuan Liang, Yunzhi Yao, Jing Chen, Yuqi Zhu, Yujie Luo, Shumin Deng, Wen Zhang, Guozhou Zheng, Huajun Chen
Community Contributors: Shuo Shen, Zhoutian Shao, Wei Hu, thredreams, eltociear, Ziwen Xu, Rui Huang, Xiaolong Weng
Other Knowledge Extraction Open-Source Projects
Owner
- Name: ZJUNLP
- Login: zjunlp
- Kind: organization
- Location: China
- Website: http://zjukg.org
- Repositories: 19
- Profile: https://github.com/zjunlp
A NLP & KG Group of Zhejiang University
Citation (CITATION.cff)
cff-version: "1.0.0"
message: "If you use this toolkit, please cite it using these metadata."
title: "deepke"
repository-code: "https://github.com/zjunlp/DeepKE"
authors:
- family-names: Zhang
given-names: Ningyu
- family-names: Xu
given-names: Xin
- family-names: Tao
given-names: Liankuan
- family-names: Yu
given-names: Haiyang
- family-names: Ye
given-names: Hongbin
- family-names: Qiao
given-names: Shuofei
- family-names: Xie
given-names: Xin
- family-names: Chen
given-names: Xiang
- family-names: Li
given-names: Zhoubo
- family-names: Li
given-names: Lei
- family-names: Liang
given-names: Xiaozhuan
- family-names: Yao
given-names: Yunzhi
- family-names: Deng
given-names: Shumin
- family-names: Wang
given-names: Peng
- family-names: Zhang
given-names: Wen
- family-names: Zhang
given-names: Zhenru
- family-names: Tan
given-names: Chuanqi
- family-names: Chen
given-names: Qiang
- family-names: Xiong
given-names: Feiyu
- family-names: Huang
given-names: Fei
- family-names: Zheng
given-names: Guozhou
- family-names: Chen
given-names: Huajun
preferred-citation:
type: article
title: "DeepKE: A Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population"
authors:
- family-names: Zhang
given-names: Ningyu
- family-names: Xu
given-names: Xin
- family-names: Tao
given-names: Liankuan
- family-names: Yu
given-names: Haiyang
- family-names: Ye
given-names: Hongbin
- family-names: Qiao
given-names: Shuofei
- family-names: Xie
given-names: Xin
- family-names: Chen
given-names: Xiang
- family-names: Li
given-names: Zhoubo
- family-names: Li
given-names: Lei
- family-names: Liang
given-names: Xiaozhuan
- family-names: Yao
given-names: Yunzhi
- family-names: Deng
given-names: Shumin
- family-names: Wang
given-names: Peng
- family-names: Zhang
given-names: Wen
- family-names: Zhang
given-names: Zhenru
- family-names: Tan
given-names: Chuanqi
- family-names: Chen
given-names: Qiang
- family-names: Xiong
given-names: Feiyu
- family-names: Huang
given-names: Fei
- family-names: Zheng
given-names: Guozhou
- family-names: Chen
given-names: Huajun
journal: "http://arxiv.org/abs/2201.03335"
year: 2022
GitHub Events
Total
- Issues event: 105
- Watch event: 564
- Issue comment event: 240
- Push event: 29
- Pull request review event: 1
- Pull request event: 16
- Fork event: 52
Last Year
- Issues event: 105
- Watch event: 564
- Issue comment event: 240
- Push event: 29
- Pull request review event: 1
- Pull request event: 16
- Fork event: 52
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Eric | z****0@v****m | 366 |
| tlk-dsg | 4****3@q****m | 312 |
| Xin Xu | x****s@z****n | 270 |
| guihonghao | 1****8@q****m | 221 |
| leo | y****0@1****m | 72 |
| TimelordRi | 3****i | 56 |
| pillow-xi | 7****i | 56 |
| Shuofei Qiao | 9****1@q****m | 51 |
| lilei | 4****n | 37 |
| peng2001 | 1****7@q****m | 28 |
| Auro | 9****g | 27 |
| Flow3rDown | 1****2@q****m | 15 |
| shengyumao | s****u@z****n | 14 |
| chen-jing | 4****3 | 11 |
| ningyu zhang | n****g@n****l | 10 |
| LesileZ | 8****Z | 9 |
| hunxuewangzi | 8****i | 9 |
| rolnan | 2****3@q****m | 7 |
| xzw | 5****4@q****m | 5 |
| Ivy | h****r@z****n | 5 |
| LiamLYYY | 9****Y | 5 |
| Wangxh-07 | 6****7 | 4 |
| TuBG | 3****8@q****m | 4 |
| HRHRHRHR666 | 3****6@q****m | 4 |
| xxu2020 | 7****0@u****m | 3 |
| Shumin Deng | 2****m@z****n | 2 |
| Alexzhuan | l****6@1****m | 2 |
| Guspan Tanadi | 3****i | 2 |
| Yixin Ou | 6****t | 2 |
| njcx-ai | c****j@o****m | 2 |
| and 4 more... | ||
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 437
- Total pull requests: 25
- Average time to close issues: 5 days
- Average time to close pull requests: 2 days
- Total issue authors: 268
- Total pull request authors: 10
- Average comments per issue: 4.81
- Average comments per pull request: 0.4
- Merged pull requests: 17
- Bot issues: 0
- Bot pull requests: 7
Past Year
- Issues: 63
- Pull requests: 9
- Average time to close issues: 9 days
- Average time to close pull requests: 2 days
- Issue authors: 47
- Pull request authors: 3
- Average comments per issue: 4.76
- Average comments per pull request: 0.11
- Merged pull requests: 9
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- nTjing (14)
- HuiGe88 (9)
- whwususu (8)
- BeimingCharles (7)
- linxianwang (7)
- qinglongheu (7)
- zhousuyu1111 (5)
- NEUtangyu (5)
- 1304150468 (5)
- JoshonSmith (4)
- liujc196 (4)
- EightMonth (4)
- zmt1002 (4)
- Lpover (4)
- MdcGIt (4)
Pull Request Authors
- dependabot[bot] (7)
- LiamLYYY (5)
- R10836 (4)
- guspan-tanadi (4)
- HRHRHRHR666 (2)
- zxlzr (2)
- xzwyyd (2)
- TuBG (1)
- GoooDte (1)
- eltociear (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 2
- Total downloads: pypi 508 last-month
- Total dependent packages: 0 (may contain duplicates)
- Total dependent repositories: 2 (may contain duplicates)
- Total versions: 112
- Total maintainers: 7
pypi.org: deepke
DeepKE is a knowledge extraction toolkit for knowledge graph construction supporting low-resource, document-level and multimodal scenarios for entity, relation and attribute extraction.
- Homepage: https://github.com/zjunlp/deepke
- Documentation: https://deepke.readthedocs.io/
- License: MIT
- Latest release: 2.2.7 (published over 2 years ago)
Rankings
pypi.org: mseep-deepke
DeepKE is a knowledge extraction toolkit for knowledge graph construction supporting low-resource, document-level and multimodal scenarios for entity, relation and attribute extraction.
- Homepage: https://github.com/zjunlp/deepke
- Documentation: https://mseep-deepke.readthedocs.io/
- License: MIT
- Latest release: 2.2.7 (published 8 months ago)
Rankings
Maintainers (1)
Dependencies
- deepke *
- hydra-core ==1.0.6
- jieba ==0.42.1
- matplotlib ==3.4.1
- scikit-learn ==0.24.1
- tensorboard ==2.4.1
- torch ==1.5
- transformers ==4.5.0
- pytorch ==1.7.0
- tensorboardX ==2.4
- transformers ==3.4.0
- deepke *
- hydra-core ==1.0.6
- matplotlib ==3.4.1
- pytorch-transformers ==1.2.0
- seqeval ==0.0.5
- torch ==1.5.0
- tqdm ==4.31.1
- hydra-core ==1.0.6
- opt-einsum ==3.3.0
- torch ==1.8.1
- transformers ==4.7.0
- ujson *
- hydra-core ==1.0.6
- torch ==1.5
- transformers ==3.4.0
- deepke *
- hydra-core ==1.0.6
- jieba ==0.42.1
- matplotlib ==3.4.1
- scikit-learn ==0.24.1
- tensorboard ==2.4.1
- torch ==1.5
- transformers ==4.5.0
- hydra-core ==1.0.6
- pyld ==2.0.3
- pytorch_transformers ==1.2.0
- torch ==1.10
- transformers ==3.4.0
- huggingface_hub ==0.2.1
- hydra-core ==1.0.6
- jieba ==0.42.1
- matplotlib ==3.4.1
- opt-einsum ==3.3.0
- pytorch-transformers ==1.2.0
- scikit-learn ==0.24.1
- seqeval ==1.2.2
- tensorboard ==2.4.1
- torch >=1.5,<=1.10
- tqdm ==4.60.0
- transformers ==3.4.0
- ujson *
- wandb ==0.12.7
- huggingface_hub ==0.2.1
- hydra-core ==1.0.6
- jieba ==0.42.1
- matplotlib ==3.4.1
- opt-einsum ==3.3.0
- pytorch-crf ==0.7.2
- pytorch-transformers ==1.2.0
- scikit-learn ==0.24.1
- seqeval ==1.2.2
- tensorboard ==2.4.1
- torch ==1.11
- tqdm ==4.60.0
- transformers ==3.4.0
- ujson *
- wandb ==0.12.7
- ubuntu 18.04 build
- beautifulsoup4 ==4.9.3
- bs4 ==0.0.1
- deepke *
- elasticsearch ==7.17.1
- hydra-core ==1.0.6
- ipdb ==0.13.9
- lxml ==4.9.1
- sentencepiece ==0.1.95
- stanza ==1.2
- tensorboardx ==2.4
- torch ==1.8.0
- transformers ==3.1.0
- hydra *
- jinja2 *
- openai *
- beautifulsoup4 ==4.6.0
- morfessor ==2.0.4
- nltk ==3.6.6
- pattern ==3.6
- polyglot ==16.07.04
- pycld2 *
- pyicu *
- stanfordcorenlp *
- tqdm *
