scrapegraph-ai
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ○ Academic publication links: not found
- ○ Academic email domains: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (15.9%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: j-mo
- License: MIT
- Language: Python
- Default Branch: main
- Size: 5.82 MB
Statistics
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
🕷️ ScrapeGraphAI: You Only Scrape Once
ScrapeGraphAI is a web scraping Python library that uses LLMs and direct graph logic to create scraping pipelines for websites, documents, and XML files. Just say which information you want to extract and the library will do it for you!
🚀 Quick install
The reference page for ScrapeGraphAI is available on the official PyPI page: pypi.
```bash
pip install scrapegraphai
```
You will also need to install Playwright for JavaScript-based scraping:
```bash
playwright install
```
Note: it is recommended to install the library in a virtual environment to avoid conflicts with other libraries 🐱
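For example, a minimal setup in a virtual environment on Linux/macOS might look like this (the environment name `.venv` is just a convention, not required by the library):

```shell
# Create and activate an isolated environment, then install the library
python3 -m venv .venv
source .venv/bin/activate
pip install scrapegraphai
playwright install  # downloads the browser binaries for JavaScript-based scraping
```

On Windows, replace the `source` line with `.venv\Scripts\activate`.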
🔍 Demo
Official Streamlit demo:
Try it directly on the web using Google Colab:
Follow the procedure at the following link to set up your OpenAI API key: link.
📖 Documentation
The documentation for ScrapeGraphAI can be found here.
Also check out the Docusaurus documentation.
💻 Usage
You can use the SmartScraper class to extract information from a website using a prompt.
The SmartScraper class is a direct graph implementation that uses the most common nodes present in a web scraping pipeline. For more information, please see the documentation.
Case 1: Extracting information using Ollama
Remember to download the model on Ollama separately!
```python
from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "model": "ollama/mistral",
        "temperature": 0,
        "format": "json",  # Ollama needs the format to be specified explicitly
        "base_url": "http://localhost:11434",  # set Ollama URL
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "base_url": "http://localhost:11434",  # set Ollama URL
    }
}

smart_scraper_graph = SmartScraperGraph(
    prompt="List me all the articles",
    # also accepts a string with the already downloaded HTML code
    source="https://perinim.github.io/projects",
    config=graph_config
)

result = smart_scraper_graph.run()
print(result)
```
Case 2: Extracting information using Docker
Note: before using the local model, remember to create the Docker container!
```bash
docker-compose up -d
docker exec -it ollama ollama pull stablelm-zephyr
```
You can use any model available on Ollama, or your own model, instead of stablelm-zephyr.
```python
from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "model": "ollama/mistral",
        "temperature": 0,
        "format": "json",  # Ollama needs the format to be specified explicitly
        # "model_tokens": 2000,  # set context length arbitrarily
    },
}

smart_scraper_graph = SmartScraperGraph(
    prompt="List me all the articles",
    # also accepts a string with the already downloaded HTML code
    source="https://perinim.github.io/projects",
    config=graph_config
)

result = smart_scraper_graph.run()
print(result)
```
Case 3: Extracting information using an OpenAI model
```python
from scrapegraphai.graphs import SmartScraperGraph

OPENAI_API_KEY = "YOUR_API_KEY"

graph_config = {
    "llm": {
        "api_key": OPENAI_API_KEY,
        "model": "gpt-3.5-turbo",
    },
}

smart_scraper_graph = SmartScraperGraph(
    prompt="List me all the articles",
    # also accepts a string with the already downloaded HTML code
    source="https://perinim.github.io/projects",
    config=graph_config
)

result = smart_scraper_graph.run()
print(result)
```
Case 4: Extracting information using Groq
```python
import os

from scrapegraphai.graphs import SmartScraperGraph
from scrapegraphai.utils import prettify_exec_info

groq_key = os.getenv("GROQ_APIKEY")

graph_config = {
    "llm": {
        "model": "groq/gemma-7b-it",
        "api_key": groq_key,
        "temperature": 0
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "temperature": 0,
        "base_url": "http://localhost:11434",
    },
    "headless": False
}

smart_scraper_graph = SmartScraperGraph(
    prompt="List me all the projects with their description and the author.",
    source="https://perinim.github.io/projects",
    config=graph_config
)

result = smart_scraper_graph.run()
print(result)
```
Case 5: Extracting information using Azure
```python
import os

from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
from scrapegraphai.graphs import SmartScraperGraph

llm_model_instance = AzureChatOpenAI(
    openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"]
)

embedder_model_instance = AzureOpenAIEmbeddings(
    azure_deployment=os.environ["AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME"],
    openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
)

graph_config = {
    "llm": {"model_instance": llm_model_instance},
    "embeddings": {"model_instance": embedder_model_instance}
}

smart_scraper_graph = SmartScraperGraph(
    prompt="""List me all the events, with the following fields:
    company_name, event_name, event_start_date, event_start_time,
    event_end_date, event_end_time, location, event_mode, event_category,
    third_party_redirect, no_of_days, time_in_hours, hosted_or_attending,
    refreshments_type, registration_available, registration_link""",
    source="https://www.hmhco.com/event",
    config=graph_config
)
```
Case 6: Extracting information using Gemini
```python
from scrapegraphai.graphs import SmartScraperGraph

GOOGLE_APIKEY = "YOUR_API_KEY"

# Define the configuration for the graph
graph_config = {
    "llm": {
        "api_key": GOOGLE_APIKEY,
        "model": "gemini-pro",
    },
}

# Create the SmartScraperGraph instance
smart_scraper_graph = SmartScraperGraph(
    prompt="List me all the articles",
    source="https://perinim.github.io/projects",
    config=graph_config
)

result = smart_scraper_graph.run()
print(result)
```
The output for all six cases will be a dictionary with the extracted information, for example:
```python
{
    'titles': [
        'Rotary Pendulum RL'
    ],
    'descriptions': [
        'Open Source project aimed at controlling a real life rotary pendulum using RL algorithms'
    ]
}
```
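Since the result is a plain Python dictionary, it can be post-processed with standard tools. As a sketch, assuming the parallel `'titles'`/`'descriptions'` shape shown in the example above (the exact keys depend on your prompt):

```python
# Example result in the shape shown above
result = {
    'titles': ['Rotary Pendulum RL'],
    'descriptions': ['Open Source project aimed at controlling a real life '
                     'rotary pendulum using RL algorithms'],
}

# Pair each extracted title with its description
articles = [
    {"title": t, "description": d}
    for t, d in zip(result["titles"], result["descriptions"])
]

for article in articles:
    print(f"{article['title']}: {article['description']}")
```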
🤝 Contributing
Feel free to contribute and join our Discord server to discuss improvements with us and give us suggestions!
Please see the contributing guidelines.
📈 Roadmap
Check out the project roadmap here! 🚀
Want to visualize the roadmap in a more interactive way? Check out the markmap visualization by copy-pasting the markdown content into the editor!
❤️ Contributors
🎓 Citations
If you have used our library for research purposes please quote us with the following reference:
```bibtex
@misc{scrapegraph-ai,
  author = {Marco Perini and Lorenzo Padoan and Marco Vinciguerra},
  title = {Scrapegraph-ai},
  year = {2024},
  url = {https://github.com/VinciGit00/Scrapegraph-ai},
  note = {A Python library for scraping leveraging large language models}
}
```
Authors
| Author | Contact Info |
|--------------------|----------------------|
| Marco Vinciguerra | |
| Marco Perini | |
| Lorenzo Padoan | |
📜 License
ScrapeGraphAI is licensed under the MIT License. See the LICENSE file for more information.
Acknowledgements
- We would like to thank all the contributors to the project and the open-source community for their support.
- ScrapeGraphAI is meant to be used for data exploration and research purposes only. We are not responsible for any misuse of the library.
Owner
- Name: Joe Monastiero
- Login: j-mo
- Kind: user
- Location: East Bay, CA
- Company: SiBi
- Website: http://fractional-financial.com
- Repositories: 1
- Profile: https://github.com/j-mo
Citation (citation.cff)
cff-version: 0.0.1
message: "If you use Scrapegraph-ai in your research, please cite it using these metadata."
authors:
- family-names: Perini
given-names: Marco
- family-names: Padoan
given-names: Lorenzo
- family-names: Vinciguerra
given-names: Marco
title: Scrapegraph-ai
version: v0.0.10
date-released: 2024-01-10
url: https://github.com/VinciGit00/Scrapegraph-ai
license: MIT
GitHub Events
Total
- Watch event: 1
Last Year
- Watch event: 1
Dependencies
- actions/checkout v4 composite
- github/codeql-action/init v3 composite
- actions/checkout v4 composite
- actions/dependency-review-action v4 composite
- actions/checkout v3 composite
- actions/setup-python v3 composite
- actions/checkout v3 composite
- actions/setup-python v5 composite
- actions/cache v2 composite
- actions/checkout v4.1.1 composite
- actions/setup-node v4 composite
- actions/setup-python v5 composite
- cycjimmy/semantic-release-action v4.1.0 composite
- python 3.11-slim build
- ollama/ollama latest
- pytest 8.0.0 develop
- sphinx 7.1.2 docs
- sphinx-rtd-theme 2.0.0 docs
- beautifulsoup4 4.12.3
- faiss-cpu 1.8.0
- free-proxy 1.1.1
- google 3.0.0
- graphviz 0.20.1
- html2text 2020.1.16
- langchain 0.1.14
- langchain-aws ^0.1.2
- langchain-google-genai 1.0.1
- langchain-groq 0.1.3
- langchain-openai 0.1.1
- minify-html 0.15.0
- pandas 2.0.3
- playwright ^1.43.0
- python ^3.9
- python-dotenv 1.0.1
- tiktoken >=0.5.2,<0.6.0
- tqdm 4.66.3
- pytest ==8.0.0 development
- sphinx ==7.1.2 development
- sphinx-rtd-theme ==2.0.0 development
- beautifulsoup4 ==4.12.3
- faiss-cpu ==1.8.0
- free-proxy ==1.1.1
- google ==3.0.0
- graphviz ==0.20.1
- html2text ==2020.1.16
- langchain ==0.1.14
- langchain-aws ==0.1.2
- langchain-google-genai ==1.0.1
- langchain-groq ==0.1.3
- langchain-openai ==0.1.1
- minify-html ==0.15.0
- pandas ==2.0.3
- playwright ==1.43.0
- python-dotenv ==1.0.1
- tiktoken >=0.5.2,<0.6.0
- tqdm ==4.66.3