paper_analizer
PaperAnalizer takes research papers and processes them, creating a word cloud based on keywords found in the abstracts, a list of all links found in the selected papers, and a file reporting the number of figures per paper along with their total.
Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ✓ DOI references: Found 1 DOI reference(s) in README
- ✓ Academic publication links: Links to zenodo.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (13.0%) to scientific vocabulary
Keywords
Repository
Basic Info
- Host: GitHub
- Owner: anastmur
- License: apache-2.0
- Language: Python
- Default Branch: main
- Homepage: https://paper-analizer.readthedocs.io/en/latest/
- Size: 37.5 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 2
Topics
Metadata Files
README.md
PaperAnalizer
Table of Contents
Introduction
PaperAnalizer takes research papers and processes them, creating a word cloud based on keywords found in the abstracts, a list of all links found in the selected papers, and a file reporting the number of figures per paper along with their total.
A more thorough explanation of what the code does can be found in the rationale.md file.
Requirements
Python
The code requires Python ^3.10 (version 3.10 or later), which must be installed on your system to use PaperAnalizer.
Dependencies
Dependencies can be installed with Poetry: go to the root directory of the repository and run:
poetry install
Alternatively, install all dependencies with pip using the requirements.txt file in the root directory of the repository:
pip install -r requirements.txt
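After installing, a quick sanity check can confirm that the core dependencies resolved correctly. This helper is not part of the repository; the import names below are assumptions based on the dependency list:

```python
from importlib.util import find_spec

def missing_deps(names):
    """Return the module names that cannot be found on this system."""
    return [n for n in names if find_spec(n) is None]

# Core import names taken from requirements.txt (package names and
# import names can differ; these are assumptions).
CORE_DEPS = ["wordcloud", "matplotlib", "numpy", "requests"]

if __name__ == "__main__":
    missing = missing_deps(CORE_DEPS)
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All core dependencies found.")
```

If anything is reported missing, re-run the Poetry or pip installation step above.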
Grobid
PaperAnalizer connects to a Grobid server to analyze the papers, so you must install Grobid 0.8.0. You should use one of the available Docker images to run Grobid:
Full image: https://hub.docker.com/r/grobid/grobid
Light image: https://hub.docker.com/r/lfoppiano/grobid/
Running Grobid
To run Grobid use either:
docker run --rm --gpus all --init --ulimit core=0 -p 8070:8070 grobid/grobid:0.8.0
Or:
docker run --rm --init --ulimit core=0 -p 8070:8070 lfoppiano/grobid:0.8.0
Use whichever command matches the image you downloaded.
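Before processing papers, it can help to confirm the Grobid container is actually answering. Grobid exposes a health-check endpoint at /api/isalive; a minimal check (not part of the repository), assuming the default port 8070:

```python
from urllib.request import urlopen
from urllib.error import URLError

def isalive_url(host="localhost", port=8070):
    """Build the URL of Grobid's health-check endpoint."""
    return f"http://{host}:{port}/api/isalive"

def grobid_is_up(host="localhost", port=8070, timeout=3):
    """Return True if a Grobid server answers on the given host/port."""
    try:
        with urlopen(isalive_url(host, port), timeout=timeout) as resp:
            return resp.read().strip() == b"true"
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    print("Grobid reachable:", grobid_is_up())
```

If this prints False, check that the Docker container from the commands above is still running and that port 8070 is not taken by another process.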
How to use
Steps
To run PaperAnalizer correctly, follow these steps:
- Put the papers you want to analyze, in PDF format, in the papers/ folder found in the root directory of the repository. Some example papers are already there.
- Run a Grobid Server by using the commands described in Running Grobid.
- Run the main.py script with Python.
Results
After running the main.py script, a word cloud image based on the keywords found in the abstracts will open; it can be saved or simply closed. Two files will be created in the root directory of the repository:
- noffigures.txt: the number of figures found per paper and their total.
- listoflinks.txt: a list of all links found in the papers.
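The figure counting and link extraction behind noffigures.txt and listoflinks.txt can be sketched with the standard library, assuming Grobid's TEI XML output; the real main.py logic may differ:

```python
import xml.etree.ElementTree as ET

# Grobid emits documents in the TEI namespace.
TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def count_figures(tei_xml):
    """Count <figure> elements in a Grobid TEI document."""
    root = ET.fromstring(tei_xml)
    return len(root.findall(".//tei:figure", TEI_NS))

def extract_links(tei_xml):
    """Collect http(s) URLs from any element's target attribute."""
    root = ET.fromstring(tei_xml)
    return [el.get("target") for el in root.iter()
            if el.get("target", "").startswith("http")]

# Tiny hand-written TEI fragment for illustration only.
SAMPLE = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body>
    <figure><head>Figure 1</head></figure>
    <figure><head>Figure 2</head></figure>
    <p><ptr target="https://example.org/data"/></p>
  </body></text>
</TEI>"""

if __name__ == "__main__":
    print("figures:", count_figures(SAMPLE))
    print("links:", extract_links(SAMPLE))
```

Running these helpers over one TEI file per input paper, then summing the counts and concatenating the link lists, yields the contents of the two output files.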
Owner
- Name: AMT
- Login: anastmur
- Kind: user
- Repositories: 1
- Profile: https://github.com/anastmur
CodeMeta (codemeta.json)
{
  "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
  "@type": "SoftwareSourceCode",
  "license": "https://spdx.org/licenses/Apache-2.0",
  "codeRepository": "git+https://github.com/anastmur/paper_analizer",
  "downloadUrl": "https://github.com/anastmur/paper_analizer",
  "issueTracker": "https://github.com/anastmur/paper_analizer/issues",
  "name": "Paper Analizer",
  "version": "0.0.1-alpha",
  "description": "The code takes papers selected by the user and then: identifies how many figures there are in the papers, lists all the links found in the papers and generates a Word Cloud with the keywords.",
  "applicationCategory": "Research",
  "developmentStatus": "wip",
  "keywords": [
    "paper",
    "research",
    "keyword",
    "figure",
    "links",
    "academia",
    "scholar"
  ],
  "programmingLanguage": [
    "Python 3"
  ],
  "operatingSystem": [
    "Linux",
    "Windows"
  ],
  "softwareRequirements": [
    "Python 3",
    "https://pypi.org/project/wordcloud/"
  ],
  "author": [
    {
      "@type": "Person",
      "givenName": "Anastasia",
      "familyName": "Muran Trus",
      "email": "anastasia.muran.trus@alumnos.upm.es"
    }
  ]
}
GitHub Events
Total
Last Year
Dependencies
- actions/checkout v3 composite
- actions/setup-python v3 composite
- certifi 2024.2.2
- charset-normalizer 3.3.2
- contourpy 1.2.0
- cycler 0.12.1
- fonttools 4.49.0
- grobid-client-python 0.0.8
- idna 3.6
- kiwisolver 1.4.5
- matplotlib 3.8.3
- numpy 1.26.4
- packaging 23.2
- pillow 10.2.0
- pyparsing 3.1.2
- python-dateutil 2.9.0.post0
- requests 2.31.0
- six 1.16.0
- urllib3 2.2.1
- wordcloud 1.9.3
- grobid-client-python ^0.0.8
- python ^3.10
- wordcloud ^1.9.3
- grobid-client-python ==0.0.8
- wordcloud ==1.9.3