https://github.com/atsyplenkov/detect-chatgpt

ChatGPT Excess Words Checker

Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file: found
  • .zenodo.json file: found
  • DOI references: 1 DOI reference(s) found in README
  • Academic publication links: links to arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity: low similarity (10.4%) to scientific vocabulary
Last synced: 6 months ago

Repository

ChatGPT Excess Words Checker

Basic Info
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 2
  • Releases: 1
Created over 1 year ago · Last pushed 6 months ago
Metadata Files
Readme

README.md

ChatGPT Excess Words Checker

Streamlit App · Project Status: Active – The project has reached a stable, usable state and is being actively developed.

The use of Large Language Models (LLMs) in text generation has already deeply influenced our lives and, unfortunately, the scientific literature. Kobak et al. (2024) estimated that approximately 10% of all 2024 abstracts in the PubMed database were processed with LLMs, while Liang et al. (2024) found that up to 16.9% of peer reviews in AI conference proceedings had been substantially modified by LLMs. Much of this likely happens with the help of ChatGPT, which holds 76% of the global generative AI market (Van Rossum, 2024).

From our perspective, when a publishable text has undergone something beyond spell-checking or minor writing updates with the help of an LLM, the question every researcher, scientist, teacher, or anyone who cares about this topic should ask is: "Why should I bother reading something that nobody could be bothered to write?"

How does it work?

AI-generated text is hard to detect reliably. Recent studies (e.g., Elkhatat et al., 2023) and Kaggle competitions show that, without manual human review, it is currently nearly impossible to identify AI-generated text with confidence. Popular AI-detection services such as GPTZero, ZeroGPT, and Copyleaks behave as black boxes and invite blind reliance on their results; instead, we suggest examining the text of interest closely yourself.

One way to detect the potential overuse of LLMs for text paraphrasing and generation is excess word analysis. Kobak et al. (2024) analysed 14 million PubMed abstracts from the 2010–2024 period and found that the frequency of certain words increased in a statistically significant way. For example, "delves," "crucial," and "insights" appeared far more often in abstracts after the public release of ChatGPT in November 2022. The authors attributed the elevated frequency of these words to LLM use.

Such a list of indicator words can serve as a red flag. It is not a silver bullet, however: according to our analysis of Kobak's dataset, the list is highly domain-specific (in the current version, PubMed-specific). The tool may therefore produce false positives since, as Kobak et al. note, the list includes words that are common in academic writing generally, such as "these," "research," and "findings."
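As a rough sketch of how such an excess-words check might work (the word list and function names below are illustrative assumptions, not the app's actual list or implementation):

```python
import re
from collections import Counter

# Illustrative indicator words only; the real list derived from
# Kobak et al. (2024) is much larger and PubMed-specific.
INDICATOR_WORDS = {"delves", "crucial", "insights", "pivotal", "showcasing"}

def excess_word_report(text: str) -> dict:
    """Count how often known LLM indicator words appear in `text`."""
    tokens = re.findall(r"[a-z]+", text.lower())
    hits = Counter(t for t in tokens if t in INDICATOR_WORDS)
    total = len(tokens)
    return {
        "total_words": total,
        "indicator_hits": dict(hits),
        "indicator_share": sum(hits.values()) / total if total else 0.0,
    }

report = excess_word_report(
    "This study delves into crucial insights about pivotal mechanisms."
)
```

A high `indicator_share` would warrant a closer manual look rather than a verdict, for exactly the false-positive reasons discussed above.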

Call to action

Since there is evidence that such lists of indicator words are domain- and LLM-specific, we invite you to get in touch and create a similar study for the Earth Science domain.

Acknowledgments

We would like to thank @yorko for the fruitful discussion and relevant links on previous research. Additional kudos to @FareedKhan-dev for publishing his app as open source; it provided the momentum for creating this one.

Owner

  • Name: Anatolii Tsyplenkov
  • Login: atsyplenkov
  • Kind: user
  • Location: New Zealand
  • Company: @manaakiwhenua

Scientist-Geomorphologist and Research Software Engineer, fond of all things geospatial

GitHub Events

Total
  • Push event: 2
Last Year
  • Push event: 2

Committers

Last synced: 8 months ago

All Time
  • Total Commits: 13
  • Total Committers: 1
  • Avg Commits per committer: 13.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 13
  • Committers: 1
  • Avg Commits per committer: 13.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • atsyplenkov (a****v@f****m): 13 commits
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 8 months ago

All Time
  • Total issues: 2
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 2
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • atsyplenkov (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels