Science Score: 54.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ○ Academic links in README: not found
- ✓ Committers with academic emails: 2 of 5 committers (40.0%) from academic institutions
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: unable to calculate
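The page does not state how the indicators above are combined into the 54.0% score, so the weighting is unknown. The sketch below is hypothetical: it assumes equal weights purely to illustrate the idea, and the fact that it yields about 44.4% rather than 54.0% shows the real scorer must weight indicators unequally.

```python
# Hypothetical sketch of indicator-based scoring; equal weights are an
# assumption, not the service's actual formula.

INDICATORS = {
    "citation_cff": True,        # CITATION.cff file found
    "codemeta_json": True,       # codemeta.json file found
    "zenodo_json": True,         # .zenodo.json file found
    "doi_references": False,
    "academic_readme_links": False,
    "academic_committers": True, # 2 of 5 committers (40.0%)
    "institutional_owner": False,
    "joss_metadata": False,
    "vocab_similarity": False,   # could not be calculated
}

def science_score(indicators):
    """Return the fraction of positive indicators as a percentage."""
    hits = sum(indicators.values())
    return 100.0 * hits / len(indicators)

print(f"{science_score(INDICATORS):.1f}%")  # prints "44.4%" under equal weighting
```

Since the reported score is 54.0%, the actual implementation evidently assigns different weights to different indicators (or scores some of them on a continuous scale, e.g. the 40.0% academic-committer ratio).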
Last synced: 7 months ago
Repository
Basic Info
- Host: GitHub
- Owner: zhaochenyang20
- Language: HTML
- Default Branch: main
- Size: 5.68 MB
Statistics
- Stars: 1
- Watchers: 1
- Forks: 1
- Open Issues: 1
- Releases: 0
Created over 1 year ago · Last pushed 8 months ago
Metadata Files
Citation
Owner
- Name: 仿生语言模型会生成赛博博客吗? ("Do Bionic Language Models Generate Cyber Blogs?")
- Login: zhaochenyang20
- Kind: user
- Location: Peking
- Company: Tsinghua University
- Website: https://zhaochenyang20.vercel.app/
- Twitter: ChenytangZhao
- Repositories: 11
- Profile: https://github.com/zhaochenyang20
Incoming CS Ph.D. student in NLP.
Citation (citation.md)
# How To Cite Me
Please search for the paper's title on Google Scholar and cite it with the BibTeX entry there, since Google Scholar takes longer to index citations of arXiv preprints.
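Copied BibTeX entries are easy to mangle (a dropped header line or an unbalanced brace will break the bibliography). As a hypothetical aid, not part of the original page, a minimal well-formedness check before pasting an entry into a `.bib` file might look like:

```python
import re

def check_bibtex(entry: str) -> bool:
    """Rough sanity check: entry has an "@type{key," header and balanced braces."""
    entry = entry.strip()
    if not re.match(r"@\w+\{[^,]+,", entry):
        return False  # missing the "@article{key," header line
    return entry.count("{") == entry.count("}")

good = """@article{zhao2024self,
  title={Self-guide: Better task-specific instruction following via self-synthetic finetuning},
  year={2024}
}"""
print(check_bibtex(good))  # True
```

This only catches gross copy-paste damage; a real BibTeX parser would be needed to validate field names and nested braces.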
## MiniCPM
**Summary**: "MiniCPM offers a resource-efficient Small Language Model (SLM) family that rivals larger models, leveraging a novel Warmup-Stable-Decay (WSD) learning rate scheduler for effective data-model scaling, continuous training, and domain adaptation."
**Suggested Topics for Citation**: Small Language Models, Data-Model Scaling, Learning Rate Schedulers, Resource-Efficient Machine Learning
```tex
@article{hu2024minicpm,
title={Minicpm: Unveiling the potential of small language models with scalable training strategies},
author={Hu, Shengding and Tu, Yuge and Han, Xu and He, Chaoqun and Cui, Ganqu and Long, Xiang and Zheng, Zhi and Fang, Yewei and Huang, Yuxiang and Zhao, Weilin and others},
journal={arXiv preprint arXiv:2404.06395},
year={2024}
}
```
## Prompt2Model
**Summary**: "Prompt2Model introduces a novel approach to generate deployable models from natural language instructions, enabling users to create tailored models without the need for extensive training expertise."
**Suggested Topics for Citation**: Prompt Engineering, Data Generation, Natural Language Processing, Deployable Models
```tex
@article{viswanathan2023prompt2model,
title={Prompt2model: Generating deployable models from natural language instructions},
author={Viswanathan, Vijay and Zhao, Chenyang and Bertsch, Amanda and Wu, Tongshuang and Neubig, Graham},
journal={arXiv preprint arXiv:2308.12261},
year={2023}
}
```
## TeacherLM
**Summary**: "TeacherLM-7.1B enhances small model capabilities by annotating reasoning and common errors in NLP tasks, enabling models to learn underlying concepts, with its data augmentation boosting performance across 58 NLP datasets."
**Suggested Topics for Citation**: Small Language Models, Data Augmentation, Instructional Annotation, NLP Model Training
```tex
@article{he2023teacherlm,
title={Teacherlm: Teaching to fish rather than giving the fish, language modeling likewise},
author={He, Nan and Lai, Hanyu and Zhao, Chenyang and Cheng, Zirui and Pan, Junting and Qin, Ruoyu and Lu, Ruofan and Lu, Rui and Zhang, Yunchen and Zhao, Gangming and others},
journal={arXiv preprint arXiv:2310.19019},
year={2023}
}
```
## Self-Guide
**Summary**: "SELF-GUIDE empowers language models to self-generate task-specific data, allowing for substantial performance improvements in both classification and generation tasks without external model reliance."
**Suggested Topics for Citation**: Self-Supervised Learning, Task-Specific Finetuning, Small Language Models, Data Synthesis in NLP, Self-Improvement
```tex
@article{zhao2024self,
title={Self-guide: Better task-specific instruction following via self-synthetic finetuning},
author={Zhao, Chenyang and Jia, Xueying and Viswanathan, Vijay and Wu, Tongshuang and Neubig, Graham},
journal={arXiv preprint arXiv:2407.12874},
year={2024}
}
```
## Internet of Agents
**Summary**: "The Internet of Agents (IoA) framework enables seamless collaboration among diverse agents, using an Internet-inspired design to support scalable, flexible multi-agent interactions, surpassing current baselines in adaptability and performance."
**Suggested Topics for Citation**: Multi-Agent Systems, LLM-Based Collaboration, Autonomous Agents, Distributed AI Frameworks
```tex
@article{chen2024internet,
title={Internet of agents: Weaving a web of heterogeneous agents for collaborative intelligence},
author={Chen, Weize and You, Ziming and Li, Ran and Guan, Yitong and Qian, Chen and Zhao, Chenyang and Yang, Cheng and Xie, Ruobing and Liu, Zhiyuan and Sun, Maosong},
journal={arXiv preprint arXiv:2407.07061},
year={2024}
}
```
## Configurable Foundation Models
**Summary**: "This paper introduces 'configurable foundation models,' modular LLMs composed of functional 'bricks,' enabling flexible and efficient task-specific inference through modular assembly, with empirical insights into neuron specialization and functional layering."
**Suggested Topics for Citation**: Modular Language Models, Efficiency in LLMs, Configurable Foundation Models, Functional Specialization
```tex
@article{xiao2024configurable,
title={Configurable Foundation Models: Building LLMs from a Modular Perspective},
author={Xiao, Chaojun and Zhang, Zhengyan and Song, Chenyang and Jiang, Dazhi and Yao, Feng and Han, Xu and Wang, Xiaozhi and Wang, Shuo and Huang, Yufei and Lin, Guanyu and others},
journal={arXiv preprint arXiv:2409.02877},
year={2024}
}
```
## CoPS
**Summary**: "CoPS (Cross-Task Experience Sharing) enhances sequential reasoning in agent systems by leveraging cross-task experiences with a pessimism-based selection strategy, improving adaptability and efficiency in resource-constrained scenarios."
**Suggested Topics for Citation**: Sequential Reasoning, Experience Sharing in LLMs, Cross-Task Learning, Agent Generalization
```tex
@article{yang2024cops,
title={CoPS: Empowering LLM Agents with Provable Cross-Task Experience Sharing},
author={Yang, Chen and Zhao, Chenyang and Gu, Quanquan and Zhou, Dongruo},
journal={arXiv preprint arXiv:2410.16670},
year={2024}
}
```
## Spatial Inequities
**Summary**: This study examines the spatial inequities in freight truck crash severity across Los Angeles using deep counterfactual inference models. By analyzing socioeconomic factors, road infrastructure, and environmental conditions, the research reveals significant disparities in crash severity among different communities. The findings inform targeted policy interventions to enhance transportation safety and equity.
**Suggested Topics for Citation**: AI for Science, Spatial Justice, Transport Geography, Counterfactual Inference, Traffic Safety, Socioeconomic Disparities
```tex
@article{wang2024navigating,
title={Navigating Spatial Inequities in Freight Truck Crash Severity via Counterfactual Inference in Los Angeles},
author={Wang, Yichen and Yin, Hao and Yang, Yifan and Zhao, Chenyang and Wang, Siqin},
journal={arXiv preprint arXiv:2411.17554},
year={2024}
}
```
GitHub Events
Total
- Watch event: 1
- Push event: 15
- Fork event: 1
Last Year
- Watch event: 1
- Push event: 15
- Fork event: 1
Committers
Last synced: 10 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| zhaochen20 | z****0@g****m | 28 |
| Chayenne | z****0@o****m | 4 |
| zhaochenyang20 | z****0@M****n | 2 |
| zhaochenyang20 | z****0@v****u | 1 |
| zhaochenyang20 | z****0@d****u | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 7 months ago
All Time
- Total issues: 1
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 1
- Total pull request authors: 0
- Average comments per issue: 0.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 1
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 1
- Pull request authors: 0
- Average comments per issue: 0.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- PrinceSajjadHussain (1)