obsei

Obsei is a low-code, AI-powered automation tool. It can be used in various business flows like social listening, AI-based alerting, brand image analysis, comparative studies, and more.

https://github.com/obsei/obsei

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.6%) to scientific vocabulary

Keywords

anonymization artificial-intelligence business-process-automation customer-engagement customer-support issue-tracking-system low-code lowcode natural-language-processing nlp process-automation python sentiment-analysis social-listening social-network-analysis text-analysis text-analytics text-classification workflow workflow-automation

Keywords from Contributors

transformers cryptocurrency
Last synced: 4 months ago

Repository

Obsei is a low-code, AI-powered automation tool. It can be used in various business flows like social listening, AI-based alerting, brand image analysis, comparative studies, and more.

Basic Info
  • Host: GitHub
  • Owner: obsei
  • License: apache-2.0
  • Language: Python
  • Default Branch: master
  • Homepage: https://obsei.com/
  • Size: 16.3 MB
Statistics
  • Stars: 1,298
  • Watchers: 30
  • Forks: 172
  • Open Issues: 34
  • Releases: 14
Topics
anonymization artificial-intelligence business-process-automation customer-engagement customer-support issue-tracking-system low-code lowcode natural-language-processing nlp process-automation python sentiment-analysis social-listening social-network-analysis text-analysis text-analytics text-classification workflow workflow-automation
Created about 5 years ago · Last pushed 5 months ago
Metadata Files
Readme Contributing License Code of conduct Citation Security

README.md





Note: Obsei is still in the alpha stage, so use it carefully in production. Also, since it is under constant development, the master branch may contain breaking changes. Please use a released version.


Obsei (pronounced "Ob see" | /əb-'sē/) is an open-source, low-code, AI-powered automation tool. Obsei consists of -

  • Observer: Collects unstructured data from various sources like tweets from Twitter, subreddit comments on Reddit, page posts' comments from Facebook, app store reviews, Google reviews, Amazon reviews, news, websites, etc.
  • Analyzer: Analyzes the collected unstructured data with various AI tasks like classification, sentiment analysis, translation, PII detection, etc.
  • Informer: Sends the analyzed data to various destinations like ticketing platforms, data storage, a dataframe, etc., so that the user can take further action and perform analysis on the data.

All the Observers can store their state in databases (Sqlite, Postgres, MySQL, etc.), making Obsei suitable for scheduled jobs or serverless applications.
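As a rough illustration of what this state persistence enables (a minimal sketch using only the Python standard library, with hypothetical helpers `last_synced`/`save_sync_time` rather than Obsei's actual state-store API), a scheduled job can checkpoint its last lookup time so each run fetches only new data:

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative checkpoint store (NOT Obsei's actual state-store API):
# a scheduled job records when each source was last fetched.
conn = sqlite3.connect("obsei_state.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS source_state (source TEXT PRIMARY KEY, last_synced TEXT)"
)

def last_synced(source_name: str):
    """Return the ISO timestamp of the previous run, or None on the first run."""
    row = conn.execute(
        "SELECT last_synced FROM source_state WHERE source = ?", (source_name,)
    ).fetchone()
    return row[0] if row else None

def save_sync_time(source_name: str) -> None:
    """Record the current UTC time as this source's checkpoint."""
    conn.execute(
        "INSERT INTO source_state (source, last_synced) VALUES (?, ?) "
        "ON CONFLICT(source) DO UPDATE SET last_synced = excluded.last_synced",
        (source_name, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

print(last_synced("twitter"))  # None on the first run
save_sync_time("twitter")      # later runs can then fetch only newer data
```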

Obsei diagram

Future direction -

  • Text, Image, Audio, Documents and Video oriented workflows
  • Collect data from every possible private and public channels
  • Add every possible workflow to an AI downstream application to automate manual cognitive workflows

Use cases

Obsei's use cases include, but are not limited to -

  • Social listening: Monitoring social media posts, comments, customer feedback, etc.
  • Alerting/Notification: Auto-alerts for events such as customer complaints, qualified sales leads, etc.
  • Automatic customer-issue creation based on customer complaints on social media, email, etc.
  • Automatic assignment of proper tags to tickets based on the content of the customer complaint, for example login issues, sign-up issues, delivery issues, etc.
  • Extraction of deeper insights from feedback on various platforms
  • Market research
  • Creation of datasets for various AI tasks
  • Many more, based on creativity 💡

Installation

Prerequisite

Install the following (if not present already) -

Install Obsei

You can install Obsei either via pip or Conda, based on your preference. To install the latest released version -

```shell
pip install obsei[all]
```

Install from the master branch (if you want to try the latest features) -

```shell
git clone https://github.com/obsei/obsei.git
cd obsei
pip install --editable .[all]
```

Note: The all option will install all the dependencies, which might not be needed for your workflow. Alternatively, the following options are available to install minimal dependencies as per need -

  • pip install obsei[source]: To install dependencies related to all observers
  • pip install obsei[sink]: To install dependencies related to all informers
  • pip install obsei[analyzer]: To install dependencies related to all analyzers (this installs PyTorch as well)
  • pip install obsei[twitter-api]: To install dependencies related to the Twitter observer
  • pip install obsei[google-play-scraper]: To install dependencies related to the Play Store review scrapper observer
  • pip install obsei[google-play-api]: To install dependencies related to the official Google Play Store review API based observer
  • pip install obsei[app-store-scraper]: To install dependencies related to the Apple App Store review scrapper observer
  • pip install obsei[reddit-scraper]: To install dependencies related to the Reddit post and comment scrapper observer
  • pip install obsei[reddit-api]: To install dependencies related to the official Reddit API based observer
  • pip install obsei[pandas]: To install dependencies related to the TSV/CSV/Pandas based observer and informer
  • pip install obsei[google-news-scraper]: To install dependencies related to the Google News scrapper observer
  • pip install obsei[facebook-api]: To install dependencies related to the official Facebook page post and comments API based observer
  • pip install obsei[atlassian-api]: To install dependencies related to the official Jira API based informer
  • pip install obsei[elasticsearch]: To install dependencies related to the Elasticsearch informer
  • pip install obsei[slack-api]: To install dependencies related to the official Slack API based informer

You can also mix multiple dependencies together in a single installation command. For example, to install the dependencies for the Twitter observer, all analyzers, and the Slack informer, use the following command -

```shell
pip install obsei[twitter-api,analyzer,slack-api]
```

How to use

Expand the following steps and create a workflow -

Step 1: Configure Source/Observer
Twitter
```python
from obsei.source.twitter_source import TwitterCredentials, TwitterSource, TwitterSourceConfig

# initialize twitter source config
source_config = TwitterSourceConfig(
    keywords=["issue"],  # Keywords, @user or #hashtags
    lookup_period="1h",  # Lookup period from current time, format: <number><d|h|m> (day|hour|minute)
    cred_info=TwitterCredentials(
        # Enter your twitter consumer key and secret. Get it from https://developer.twitter.com/en/apply-for-access
        consumer_key="",
        consumer_secret="",
        bearer_token="",
    )
)

# initialize tweets retriever
source = TwitterSource()
```

Youtube Scrapper

```python
from obsei.source.youtube_scrapper import YoutubeScrapperSource, YoutubeScrapperConfig

# initialize Youtube source config
source_config = YoutubeScrapperConfig(
    video_url="https://www.youtube.com/watch?v=uZfns0JIlFk",  # Youtube video URL
    fetch_replies=True,  # Fetch replies to comments
    max_comments=10,  # Total number of comments and replies to fetch
    lookup_period="1Y",  # Lookup period from current time, format: <number><d|h|m|M|Y> (day|hour|minute|month|year)
)

# initialize Youtube comments retriever
source = YoutubeScrapperSource()
```

Facebook

```python
from obsei.source.facebook_source import FacebookCredentials, FacebookSource, FacebookSourceConfig

# initialize facebook source config
source_config = FacebookSourceConfig(
    page_id="110844591144719",  # Facebook page id, for example this one for Obsei
    lookup_period="1h",  # Lookup period from current time, format: <number><d|h|m> (day|hour|minute)
    cred_info=FacebookCredentials(
        # Enter your facebook app_id, app_secret and long_term_token. Get it from https://developers.facebook.com/apps/
        app_id="<facebook_app_id>",
        app_secret="",
        long_term_token="",
    )
)

# initialize facebook post comments retriever
source = FacebookSource()
```

Email

```python
from obsei.source.email_source import EmailConfig, EmailCredInfo, EmailSource

# initialize email source config
source_config = EmailConfig(
    # List of IMAP servers for most commonly used email providers
    # https://www.systoolsgroup.com/imap/
    # Also, if you're using a Gmail account then make sure you allow less secure apps on your account -
    # https://myaccount.google.com/lesssecureapps?pli=1
    # Also enable IMAP access -
    # https://mail.google.com/mail/u/0/#settings/fwdandpop
    imap_server="imap.gmail.com",  # Enter IMAP server
    cred_info=EmailCredInfo(
        # Enter your email account username and password
        username="<email_username>",
        password=""
    ),
    lookup_period="1h"  # Lookup period from current time, format: <number><d|h|m> (day|hour|minute)
)

# initialize email retriever
source = EmailSource()
```

Google Maps Reviews Scrapper

```python
from obsei.source.google_maps_reviews import OSGoogleMapsReviewsSource, OSGoogleMapsReviewsConfig

# initialize Outscrapper Maps review source config
source_config = OSGoogleMapsReviewsConfig(
    # Collect API key from https://outscraper.com/
    api_key="",
    # Enter Google Maps link or place id
    # For example below is for the "Taj Mahal"
    queries=["https://www.google.co.in/maps/place/Taj+Mahal/@27.1751496,78.0399535,17z/data=!4m5!3m4!1s0x39747121d702ff6d:0xdd2ae4803f767dde!8m2!3d27.1751448!4d78.0421422"],
    number_of_reviews=10,
)

# initialize Outscrapper Maps review retriever
source = OSGoogleMapsReviewsSource()
```

AppStore Reviews Scrapper

```python
from obsei.source.appstore_scrapper import AppStoreScrapperConfig, AppStoreScrapperSource

# initialize app store source config
source_config = AppStoreScrapperConfig(
    # Need two parameters: app_id and country.
    # app_id can be found at the end of the url of the app in the app store.
    # For example - https://apps.apple.com/us/app/xcode/id497799835
    # 497799835 is the app_id for xcode and us is the country.
    countries=["us"],
    app_id="497799835",
    lookup_period="1h"  # Lookup period from current time, format: <number><d|h|m> (day|hour|minute)
)

# initialize app store reviews retriever
source = AppStoreScrapperSource()
```

Play Store Reviews Scrapper

```python
from obsei.source.playstore_scrapper import PlayStoreScrapperConfig, PlayStoreScrapperSource

# initialize play store source config
source_config = PlayStoreScrapperConfig(
    # Need two parameters: package_name and country.
    # package_name can be found at the end of the url of the app in the play store.
    # For example - https://play.google.com/store/apps/details?id=com.google.android.gm&hl=en&gl=US
    # com.google.android.gm is the package_name for Gmail and us is the country.
    countries=["us"],
    package_name="com.google.android.gm",
    lookup_period="1h"  # Lookup period from current time, format: <number><d|h|m> (day|hour|minute)
)

# initialize play store reviews retriever
source = PlayStoreScrapperSource()
```

Reddit

```python
from obsei.source.reddit_source import RedditConfig, RedditSource, RedditCredInfo

# initialize reddit source config
source_config = RedditConfig(
    subreddits=["wallstreetbets"],  # List of subreddits
    # Reddit account username and password
    # You can also enter reddit client_id and client_secret or refresh_token
    # Create credential at https://www.reddit.com/prefs/apps
    # Also refer https://praw.readthedocs.io/en/latest/getting_started/authentication.html
    # Currently Password Flow, Read Only Mode and Saved Refresh Token Mode are supported
    cred_info=RedditCredInfo(
        username="",
        password=""
    ),
    lookup_period="1h"  # Lookup period from current time, format: <number><d|h|m> (day|hour|minute)
)

# initialize reddit retriever
source = RedditSource()
```

Reddit Scrapper

Note: Reddit heavily rate-limits scrapers, so use this to fetch small amounts of data over long periods.

```python
from obsei.source.reddit_scrapper import RedditScrapperConfig, RedditScrapperSource

# initialize reddit scrapper source config
source_config = RedditScrapperConfig(
    # Reddit subreddit, search etc rss url. For proper url refer following link -
    # https://www.reddit.com/r/pathogendavid/comments/tv8m9/pathogendavids_guide_to_rss_and_reddit/
    url="https://www.reddit.com/r/wallstreetbets/comments/.rss?sort=new",
    lookup_period="1h"  # Lookup period from current time, format: <number><d|h|m> (day|hour|minute)
)

# initialize reddit retriever
source = RedditScrapperSource()
```

Google News

```python
from obsei.source.google_news_source import GoogleNewsConfig, GoogleNewsSource

# initialize Google News source config
source_config = GoogleNewsConfig(
    query="bitcoin",
    max_results=5,
    # To fetch full article text enable fetch_article flag
    # By default google news gives title and highlight
    fetch_article=True,
    # proxy="http://127.0.0.1:8080"
)

# initialize Google News retriever
source = GoogleNewsSource()
```

Web Crawler

```python
from obsei.source.website_crawler_source import TrafilaturaCrawlerConfig, TrafilaturaCrawlerSource

# initialize website crawler source config
source_config = TrafilaturaCrawlerConfig(
    urls=["https://obsei.github.io/obsei/"]
)

# initialize website text retriever
source = TrafilaturaCrawlerSource()
```

Pandas DataFrame

```python
import pandas as pd
from obsei.source.pandas_source import PandasSource, PandasSourceConfig

# Initialize your Pandas DataFrame from your sources like csv, excel, sql etc
# In the following example we are reading a csv which has two columns: title and text
csv_file = "https://raw.githubusercontent.com/deepset-ai/haystack/master/tutorials/small_generator_dataset.csv"
dataframe = pd.read_csv(csv_file)

# initialize pandas source config
source_config = PandasSourceConfig(
    dataframe=dataframe,
    include_columns=["score"],
    text_columns=["name", "degree"],
)

# initialize pandas source
source = PandasSource()
```

Step 2: Configure Analyzer

Note: To run transformers in an offline mode, check [transformers offline mode](https://huggingface.co/transformers/installation.html#offline-mode).

Some analyzers support GPU; to utilize it, pass the device parameter. Possible values of the device parameter (default value auto):

  1. auto: GPU (cuda:0) will be used if available, otherwise CPU will be used
  2. cpu: CPU will be used
  3. cuda:{id}: GPU will be used with the provided CUDA device id
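For example, to pin an analyzer to the CPU (the same zero-shot classification analyzer configured in the next step, just with a different device value):

```python
from obsei.analyzer.classification_analyzer import ZeroShotClassificationAnalyzer

# Identical to the analyzer below, but forcing CPU inference instead of auto device selection
text_analyzer = ZeroShotClassificationAnalyzer(
    model_name_or_path="typeform/mobilebert-uncased-mnli",
    device="cpu"
)
```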

Text Classification
Text classification: Classify text into user provided categories.

```python
from obsei.analyzer.classification_analyzer import ClassificationAnalyzerConfig, ZeroShotClassificationAnalyzer

# initialize classification analyzer config
# It can also detect sentiments if "positive" and "negative" labels are added.
analyzer_config = ClassificationAnalyzerConfig(
    labels=["service", "delay", "performance"],
)

# initialize classification analyzer
# For supported models refer https://huggingface.co/models?filter=zero-shot-classification
text_analyzer = ZeroShotClassificationAnalyzer(
    model_name_or_path="typeform/mobilebert-uncased-mnli",
    device="auto"
)
```

Sentiment Analyzer

Sentiment Analyzer: Detects the sentiment of the text. Text classification can also perform sentiment analysis, but if you don't want to use a heavy-duty NLP model, use the less resource-hungry, dictionary-based VADER sentiment detector.

```python
from obsei.analyzer.sentiment_analyzer import VaderSentimentAnalyzer

# Vader does not need any configuration settings
analyzer_config = None

# initialize vader sentiment analyzer
text_analyzer = VaderSentimentAnalyzer()
```

NER Analyzer

NER (Named-Entity Recognition) Analyzer: Extracts information and classifies named entities mentioned in text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.

```python
from obsei.analyzer.ner_analyzer import NERAnalyzer

# NER analyzer does not need configuration settings
analyzer_config = None

# initialize ner analyzer
# For supported models refer https://huggingface.co/models?filter=token-classification
text_analyzer = NERAnalyzer(
    model_name_or_path="elastic/distilbert-base-cased-finetuned-conll03-english",
    device="auto"
)
```

Translator

```python
from obsei.analyzer.translation_analyzer import TranslationAnalyzer

# Translator does not need analyzer config
analyzer_config = None

# initialize translator
# For supported models refer https://huggingface.co/models?pipeline_tag=translation
analyzer = TranslationAnalyzer(
    model_name_or_path="Helsinki-NLP/opus-mt-hi-en",
    device="auto"
)
```

PII Anonymizer

```python
from obsei.analyzer.pii_analyzer import PresidioEngineConfig, PresidioModelConfig, \
    PresidioPIIAnalyzer, PresidioPIIAnalyzerConfig

# initialize pii analyzer's config
analyzer_config = PresidioPIIAnalyzerConfig(
    # Whether to return only pii analysis or anonymize text
    analyze_only=False,
    # Whether to return detail information about anonymization decision
    return_decision_process=True
)

# initialize pii analyzer
analyzer = PresidioPIIAnalyzer(
    engine_config=PresidioEngineConfig(
        # spacy and stanza nlp engines are supported
        # For more info refer
        # https://microsoft.github.io/presidio/analyzer/developing_recognizers/#utilize-spacy-or-stanza
        nlp_engine_name="spacy",
        # Update desired spacy model and language
        models=[PresidioModelConfig(model_name="en_core_web_lg", lang_code="en")]
    )
)
```

Dummy Analyzer

Dummy Analyzer: Does nothing. It is simply used to transform the input (TextPayload) into the output (TextPayload), adding user-supplied dummy data.

```python
from obsei.analyzer.dummy_analyzer import DummyAnalyzer, DummyAnalyzerConfig

# initialize dummy analyzer's configuration settings
analyzer_config = DummyAnalyzerConfig()

# initialize dummy analyzer
analyzer = DummyAnalyzer()
```

Step 3: Configure Sink/Informer
Slack
```python
from obsei.sink.slack_sink import SlackSink, SlackSinkConfig

# initialize slack sink config
sink_config = SlackSinkConfig(
    # Provide slack bot/app token
    # For more detail refer https://slack.com/intl/en-de/help/articles/215770388-Create-and-regenerate-API-tokens
    slack_token="",
    # To get channel id refer https://stackoverflow.com/questions/40940327/what-is-the-simplest-way-to-find-a-slack-team-id-and-a-channel-id
    channel_id="C01LRS6CT9Q"
)

# initialize slack sink
sink = SlackSink()
```

Zendesk

```python
from obsei.sink.zendesk_sink import ZendeskSink, ZendeskSinkConfig, ZendeskCredInfo

# initialize zendesk sink config
sink_config = ZendeskSinkConfig(
    # provide zendesk domain
    domain="zendesk.com",
    # provide subdomain if you have one
    subdomain=None,
    # Enter zendesk user details
    cred_info=ZendeskCredInfo(
        email="",
        password=""
    )
)

# initialize zendesk sink
sink = ZendeskSink()
```

Jira

```python
from obsei.sink.jira_sink import JiraSink, JiraSinkConfig

# For testing purposes you can start a jira server locally
# Refer https://developer.atlassian.com/server/framework/atlassian-sdk/atlas-run-standalone/

# initialize Jira sink config
sink_config = JiraSinkConfig(
    url="http://localhost:2990/jira",  # Jira server url
    # Jira username & password for a user who has permission to create issues
    username="",
    password="",
    # Which type of issue to be created
    # For more information refer https://support.atlassian.com/jira-cloud-administration/docs/what-are-issue-types/
    issue_type={"name": "Task"},
    # Under which project the issue is to be created
    # For more information refer https://support.atlassian.com/jira-software-cloud/docs/what-is-a-jira-software-project/
    project={"key": "CUS"},
)

# initialize Jira sink
sink = JiraSink()
```

ElasticSearch

```python
from obsei.sink.elasticsearch_sink import ElasticSearchSink, ElasticSearchSinkConfig

# For testing purposes you can start an Elasticsearch server locally via docker:
# docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" elasticsearch:8.5.0

# initialize Elasticsearch sink config
sink_config = ElasticSearchSinkConfig(
    # Elasticsearch server
    hosts="http://localhost:9200",
    # Index name; it will be created if it does not exist
    index_name="test",
)

# initialize Elasticsearch sink
sink = ElasticSearchSink()
```

Http

```python
from obsei.sink.http_sink import HttpSink, HttpSinkConfig

# For testing purposes you can create a mock http server via postman
# For more details refer https://learning.postman.com/docs/designing-and-developing-your-api/mocking-data/setting-up-mock/

# initialize http sink config (Currently only POST call is supported)
sink_config = HttpSinkConfig(
    # provide http server url
    url="https://localhost:8080/api/path",
    # Here you can add headers you would like to pass with request
    headers={
        "Content-type": "application/json"
    }
)

# To modify or convert the payload, create a convertor class
# Refer obsei.sink.dailyget_sink.PayloadConvertor for example

# initialize http sink
sink = HttpSink()
```

Pandas DataFrame

```python
from pandas import DataFrame
from obsei.sink.pandas_sink import PandasSink, PandasSinkConfig

# initialize pandas sink config
sink_config = PandasSinkConfig(
    dataframe=DataFrame()
)

# initialize pandas sink
sink = PandasSink()
```

Logger

This is useful for testing and dry running the pipeline.

```python
import logging
import sys

from obsei.sink.logger_sink import LoggerSink, LoggerSinkConfig

logger = logging.getLogger("Obsei")
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

# initialize logger sink config
sink_config = LoggerSinkConfig(
    logger=logger,
    level=logging.INFO
)

# initialize logger sink
sink = LoggerSink()
```

Step 4: Join and create workflow

`source` will fetch data from the selected source, then feed it to the `analyzer` for processing, whose output we feed into a `sink` to get notified at that sink.

```python
# Uncomment if you want logger
# import logging
# import sys
# logger = logging.getLogger(__name__)
# logging.basicConfig(stream=sys.stdout, level=logging.INFO)

# This will fetch information from the configured source, ie twitter, app store etc
source_response_list = source.lookup(source_config)

# Uncomment if you want to log source response
# for idx, source_response in enumerate(source_response_list):
#     logger.info(f"source_response#'{idx}'='{source_response.__dict__}'")

# This will execute the analyzer (Sentiment, classification etc) on source data with the provided analyzer_config
analyzer_response_list = text_analyzer.analyze_input(
    source_response_list=source_response_list,
    analyzer_config=analyzer_config
)

# Uncomment if you want to log analyzer response
# for idx, an_response in enumerate(analyzer_response_list):
#     logger.info(f"analyzer_response#'{idx}'='{an_response.__dict__}'")

# Analyzer output is added to segmented_data
# Uncomment to log it
# for idx, an_response in enumerate(analyzer_response_list):
#     logger.info(f"analyzed_data#'{idx}'='{an_response.segmented_data.__dict__}'")

# This will send analyzed output to the configured sink, ie Slack, Zendesk etc
sink_response_list = sink.send_data(analyzer_response_list, sink_config)

# Uncomment if you want to log sink response
# for sink_response in sink_response_list:
#     if sink_response is not None:
#         logger.info(f"sink_response='{sink_response}'")
```
Step 5: Execute workflow

Copy the code snippets from Steps 1 to 4 into a python file, for example example.py, and execute the following command -

```shell
python example.py
```
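For instance, a minimal end-to-end example.py wiring together the Google News observer, the zero-shot classification analyzer, and the logger sink from the snippets above might look like this (the model and query are just the ones used earlier in this README; you will need the relevant extras installed, e.g. obsei[google-news-scraper,analyzer]):

```python
import logging
import sys

from obsei.analyzer.classification_analyzer import (
    ClassificationAnalyzerConfig,
    ZeroShotClassificationAnalyzer,
)
from obsei.sink.logger_sink import LoggerSink, LoggerSinkConfig
from obsei.source.google_news_source import GoogleNewsConfig, GoogleNewsSource

logger = logging.getLogger("Obsei")
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

# Observer: fetch a few news articles about bitcoin
source_config = GoogleNewsConfig(query="bitcoin", max_results=5)
source = GoogleNewsSource()

# Analyzer: zero-shot classify each article into custom labels
analyzer_config = ClassificationAnalyzerConfig(labels=["positive", "negative"])
text_analyzer = ZeroShotClassificationAnalyzer(
    model_name_or_path="typeform/mobilebert-uncased-mnli", device="auto"
)

# Informer: print the analyzed payloads via the logger
sink_config = LoggerSinkConfig(logger=logger, level=logging.INFO)
sink = LoggerSink()

source_response_list = source.lookup(source_config)
analyzer_response_list = text_analyzer.analyze_input(
    source_response_list=source_response_list, analyzer_config=analyzer_config
)
sink.send_data(analyzer_response_list, sink_config)
```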

Demo

We have a minimal Streamlit-based UI that you can use to test Obsei.

Screenshot

Watch UI demo video

Introductory and demo video


(Note: Sometimes the Streamlit demo might not work due to rate limiting; in such cases, use the docker image locally.)

To test locally, just run

```shell
docker run -d --name obsei-ui -p 8501:8501 obsei/obsei-ui-demo
```

You can find the UI at http://localhost:8501.

To run Obsei workflow easily using GitHub Actions (no sign ups and cloud hosting required), refer to this repo.

Companies/Projects using Obsei

Here are some companies/projects (alphabetical order) using Obsei. To add your company/project to the list, please raise a PR or contact us via email.

  • Oraika: Contextually understand customer feedback
  • 1Page: Giving a better context in meetings and calls
  • Spacepulse: The operating system for spaces
  • Superblog: A blazing fast alternative to WordPress and Medium
  • Zolve: Creating a financial world beyond borders
  • Utilize: No-code app builder for businesses with a deskless workforce

Articles

Sr. No. | Title | Author
--- | --- | ---
1 | AI based Comparative Customer Feedback Analysis Using Obsei | Reena Bapna
2 | LinkedIn App - User Feedback Analysis | Himanshu Sharma

Tutorials

Sr. No. | Workflow
--- | ---
1 | Observe app reviews from Google Play Store, Analyze them by performing text classification, and then Inform them on the console via logger (PlayStore Reviews → Classification → Logger)
2 | Observe app reviews from Google Play Store, PreProcess text via various text-cleaning functions, Analyze them by performing text classification, Inform them to a Pandas DataFrame, and store the resultant CSV in Google Drive (PlayStore Reviews → PreProcessing → Classification → Pandas DataFrame → CSV in Google Drive)
3 | Observe app reviews from Apple App Store, PreProcess text via various text-cleaning functions, Analyze them by performing text classification, Inform them to a Pandas DataFrame, and store the resultant CSV in Google Drive (AppStore Reviews → PreProcessing → Classification → Pandas DataFrame → CSV in Google Drive)
4 | Observe news articles from Google News, PreProcess text via various text-cleaning functions, Analyze them by performing text classification while splitting the text into small chunks, and later compute the final inference using a given formula (Google News → Text Cleaner → Text Splitter → Classification → Inference Aggregator)

💡 Tip: Handle large text classification via Obsei ![](https://raw.githubusercontent.com/obsei/obsei-resources/master/gifs/Long_Text_Classification.gif)

Documentation

For detailed installation instructions, usages and examples, refer to our documentation.

Support and Release Matrix

  • Tests (Linux/Mac/Windows): low coverage, as it is difficult to test 3rd-party libs
  • PIP (Linux/Mac/Windows): fully supported
  • Conda: not supported

Discussion forum

Discussion about Obsei takes place on the community forum.

Changelogs

Refer to releases for changelogs.

Security Issue

For any security issue, please contact us via email.

Stargazers over time

Stargazers over time

Maintainers

This project is being maintained by Oraika Technologies. Lalit Pagaria and Girish Patel are maintainers of this project.

License

  • Copyright holder: Oraika Technologies
  • Overall license is Apache 2.0; you can read the License file.
  • Multiple other secondary permissive or weak copyleft licenses (LGPL, MIT, BSD, etc.) apply to third-party components; refer to Attribution.
  • To keep the project commercially friendly, we avoid third-party components with strong copyleft licenses (GPL, AGPL, etc.).

Attribution

This could not have been possible without these open-source software projects.

Contribution

First off, thank you for even considering contributing to this package; every contribution, big or small, is greatly appreciated. Please refer to our Contribution Guideline and Code of Conduct.

Thanks so much to all our contributors

Owner

  • Name: oraika-oss
  • Login: obsei
  • Kind: organization
  • Email: contact@oraika.com
  • Location: India

Home of open source projects undertaken by Oraika Technologies Private Limited

Citation (CITATION.cff)

# YAML 1.2
---
authors: 
  -
    family-names: Pagaria
    given-names: Lalit

cff-version: "1.1.0"
license: "Apache-2.0"
message: "If you use this software, please cite it using this metadata."
repository-code: "https://github.com/obsei/obsei"
title: "Obsei - a low code AI powered automation tool"
version: "0.0.10"
...

GitHub Events

Total
  • Issues event: 1
  • Watch event: 111
  • Delete event: 2
  • Issue comment event: 4
  • Push event: 2
  • Pull request event: 10
  • Fork event: 16
  • Create event: 4
Last Year
  • Issues event: 1
  • Watch event: 111
  • Delete event: 2
  • Issue comment event: 4
  • Push event: 2
  • Pull request event: 10
  • Fork event: 16
  • Create event: 4

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 402
  • Total Committers: 15
  • Avg Commits per committer: 26.8
  • Development Distribution Score (DDS): 0.095
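    (DDS is the fraction of commits not made by the top committer; with the committer counts below, 1 − 364/402 ≈ 0.095.)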
Top Committers
Name Email Commits
lalitpagaria p****t@g****m 364
dependabot[bot] 4****]@u****m 11
Girish Patel g****s@g****m 8
Akarsh Gajbhiye a****e@g****m 4
Shahrukh Khan s****1@g****m 3
Salil Mishra m****3@g****m 2
pyup.io bot g****t@p****o 2
Udit Dashore t****3@g****m 1
Kumar Utsav k****v@g****m 1
Chenxi Liu 9****m@u****m 1
namanjuneja771 7****1@u****m 1
reenabapna 8****a@u****m 1
cnarte 5****e@u****m 1
sanjaybharkatiya 6****a@u****m 1
Jatin Arora j****n@u****p 1
Committer Domains (Top 20 + Academic)

Packages

  • Total packages: 2
  • Total downloads:
    • pypi 154 last-month
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 3
    (may contain duplicates)
  • Total versions: 21
  • Total maintainers: 1
proxy.golang.org: github.com/obsei/obsei
  • Versions: 7
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.4%
Average: 6.6%
Dependent repos count: 6.9%
Last synced: 4 months ago
pypi.org: obsei

Obsei is an automation tool for text analysis needs

  • Versions: 14
  • Dependent Packages: 0
  • Dependent Repositories: 3
  • Downloads: 154 Last month
Rankings
Dependent repos count: 9.0%
Dependent packages count: 10.1%
Average: 11.1%
Downloads: 14.3%
Maintainers (1)
Last synced: 4 months ago

Dependencies

sample-ui/requirements.txt pypi
  • obsei master
  • streamlit *
  • trafilatura *
.github/workflows/build.yml actions
  • actions/cache v3.2.4 composite
  • actions/checkout v3.3.0 composite
  • actions/setup-python v4 composite
.github/workflows/pypi_publish.yml actions
  • actions/checkout v3.3.0 composite
  • actions/setup-python v4 composite
.github/workflows/release_draft.yml actions
  • release-drafter/release-drafter v5 composite
.github/workflows/sdk_docker_publish.yml actions
  • actions/checkout v3.3.0 composite
  • docker/build-push-action v4 composite
  • docker/login-action v1 composite
  • docker/metadata-action v4.3.0 composite
  • docker/setup-buildx-action v1 composite
  • docker/setup-qemu-action v1 composite
.github/workflows/ui_docker_publish.yml actions
  • actions/checkout v3.3.0 composite
  • docker/build-push-action v4 composite
  • docker/login-action v1 composite
  • docker/metadata-action v4.3.0 composite
  • docker/setup-buildx-action v1 composite
  • docker/setup-qemu-action v1 composite
Dockerfile docker
  • python 3.10-slim-bullseye build
sample-ui/Dockerfile docker
  • python 3.10-slim-bullseye build
binder/requirements.txt pypi
  • trafilatura *
pyproject.toml pypi
  • SQLAlchemy >= 1.4.44
  • beautifulsoup4 >= 4.9.3
  • dateparser >= 1.1.3
  • mmh3 >= 3.0.0
  • pydantic >= 1.10.2
  • python-dateutil >= 2.8.2
  • pytz >= 2022.6
  • requests >= 2.26.0