roboradiology

AI-driven X-ray diagnostics

https://github.com/noahbakayou/roboradiology

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.8%) to scientific vocabulary
Last synced: 6 months ago

Repository

AI-driven X-ray diagnostics

Basic Info
  • Host: GitHub
  • Owner: NoahBakayou
  • License: agpl-3.0
  • Language: Python
  • Default Branch: main
  • Size: 15 MB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 2
  • Releases: 0
Created almost 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme Contributing License Citation

README.md

RoboRadiology

Inspiration

According to the CIA, the average number of doctors for every one million people in the East African Community countries is 103. With recent advances in deep learning and computer vision, we sought to leverage these technologies to help people who lack medical resources with the early detection and classification of brain tumors. The potential impact on patient outcomes and healthcare efficiency served as a driving force behind our project.

What it does

Our project utilizes a YOLO (You Only Look Once) model trained to detect and classify tumors in medical images of the brain. Upon uploading an image through the web interface, the system processes the image using the fine-tuned YOLO model to identify the presence of tumors.

Once tumors are detected, the system highlights them with bounding boxes and labels them with confidence percentages. This visual representation gives users immediate feedback on the location and nature of the tumors within the image.
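The README does not show the inference call itself; a minimal sketch of this step, assuming the ultralytics Python API (a listed dependency) and a hypothetical fine-tuned weights file `tumor_best.pt`, might look like:

```python
def format_detection(label, confidence):
    """Label text drawn next to each bounding box, e.g. 'malignant 92%'."""
    return f"{label} {confidence:.0%}"

def detect_tumors(image_path, weights="tumor_best.pt", conf_threshold=0.25):
    """Run the fine-tuned model and return (label, confidence, xyxy) tuples."""
    from ultralytics import YOLO  # heavy dependency kept local to this step

    model = YOLO(weights)
    detections = []
    for result in model.predict(image_path, conf=conf_threshold):
        for box in result.boxes:
            label = result.names[int(box.cls)]
            detections.append(
                (label, float(box.conf), [float(v) for v in box.xyxy[0]])
            )
    return detections
```

The weights filename and confidence threshold are illustrative; the repository's actual values may differ.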

Furthermore, the system generates a concise report using the OpenAI API, summarizing the findings in natural language. It also incorporates an interactive feature in which a virtual AI doctor communicates the results to the user. The AI doctor provides a personalized response based on the detected tumors, offering reassurance or further guidance depending on the severity of the findings. This interaction aims to alleviate patient anxiety and enhance the overall user experience.
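The repository's exact prompt and model choice are not documented; a hedged sketch of the report step, assuming the `openai` v1 client and an illustrative model name, could look like:

```python
def build_report_prompt(detections):
    """Turn (label, confidence) pairs into a plain-language summary request."""
    if not detections:
        return ("No tumors were detected in this scan. "
                "Write a brief, reassuring summary for the patient.")
    findings = "\n".join(
        f"- {label} tumor, confidence {conf:.0%}" for label, conf in detections
    )
    return ("Summarize the following brain-scan findings for a patient "
            "in clear, non-technical language:\n" + findings)

def generate_report(detections, model="gpt-4o-mini"):
    """Call the OpenAI API (requires OPENAI_API_KEY in the environment)."""
    from openai import OpenAI  # openai>=1.0 client; kept local

    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_report_prompt(detections)}],
    )
    return response.choices[0].message.content
```

The model name and prompt wording here are assumptions, not taken from the project.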

How we built it

We began by collecting and preprocessing a comprehensive dataset of brain X-rays containing tumors of varying types. We trained the YOLO model to detect tumors and classify them as benign or malignant, fine-tuning it iteratively to optimize its performance and accuracy.
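The training setup is not spelled out in the README; a minimal sketch under the assumption that the ultralytics API is used with a two-class (benign/malignant) dataset config might be:

```python
def dataset_config(root, classes=("benign", "malignant")):
    """Minimal YOLO dataset config dict, normally written out as e.g. tumors.yaml."""
    return {
        "path": root,
        "train": "images/train",
        "val": "images/val",
        "names": {i: name for i, name in enumerate(classes)},
    }

def finetune(data_yaml="tumors.yaml", epochs=100, imgsz=640):
    """Fine-tune a pretrained checkpoint on the tumor dataset."""
    from ultralytics import YOLO  # heavy dependency kept local

    model = YOLO("yolov5su.pt")  # pretrained YOLOv5 checkpoint as starting point
    model.train(data=data_yaml, epochs=epochs, imgsz=imgsz)
    return model
```

The checkpoint, epoch count, and image size are illustrative defaults, not the values the team actually used.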

Next, we integrated the OpenAI API to generate human-readable reports based on the model's predictions. These reports provide clinicians with concise summaries of the tumor characteristics.

To facilitate user interaction and accessibility, we developed a user-friendly web interface using Flask. This interface allows users to upload medical images and receive real-time tumor detection and classification results. The intuitive design enhances the user experience, making it accessible to both healthcare professionals and patients.
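The upload-and-predict flow described above can be sketched as a single Flask route; the endpoint name and the `detect_tumors` helper are hypothetical stand-ins for the project's actual code:

```python
import os
import tempfile

from flask import Flask, jsonify, request

app = Flask(__name__)

def detect_tumors(path):
    """Placeholder for the fine-tuned YOLO inference wrapper (assumption)."""
    raise NotImplementedError

@app.route("/predict", methods=["POST"])
def predict():
    upload = request.files.get("image")
    if upload is None or upload.filename == "":
        return jsonify(error="no image uploaded"), 400
    # Save the upload to a temporary file before handing it to the model.
    fd, path = tempfile.mkstemp(suffix=os.path.splitext(upload.filename)[1])
    os.close(fd)
    upload.save(path)
    try:
        return jsonify(detections=detect_tumors(path))
    finally:
        os.remove(path)
```

Using a temporary file rather than a database sidesteps the hosting constraint the team describes in the next section.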

Challenges we ran into

Throughout the development process, we encountered several challenges that tested our problem-solving skills and determination. One significant hurdle was our initial attempt to host the model online. As part of this process, we needed to upload images to a database for processing. However, finding a suitable and cost-effective solution for hosting a free database proved to be challenging. This obstacle compelled us to reconsider our hosting strategy and explore alternative approaches.

Additionally, sourcing an appropriate dataset and acquiring the necessary hardware for training posed another challenge. The availability of high-quality medical imaging datasets is limited, and obtaining access to reliable hardware for training deep learning models can be resource-intensive. Despite these obstacles, we persevered in our search and eventually found a suitable dataset. To address hardware constraints, we leveraged Google Colab's A100 GPU, which provided the computational power required for training our model efficiently.

By overcoming these challenges through collaboration and resourcefulness, we successfully trained our model and continued progressing toward our project goals.

Accomplishments that we're proud of

One of our most significant accomplishments is successfully creating a virtual doctor that informs patients in their own language. This innovative feature not only enhances the user experience but also fosters a sense of comfort and understanding for individuals receiving medical diagnoses. By developing this personalized interaction, we've achieved a more empathetic and accessible approach to healthcare technology, empowering users to engage with the system confidently.

What we learned

Throughout the development process, we gained a deeper understanding of convolutional neural networks (CNNs), particularly the You Only Look Once (YOLO) model, for object detection tasks. We also honed our skills in data preprocessing, model training, and evaluation. Furthermore, integrating OpenAI's API for generating medical reports enabled us to explore the intersection of AI and healthcare.

Our overall goal

Our overarching goal is to bridge the gap in healthcare accessibility, particularly in underserved regions where medical resources are scarce. By leveraging cutting-edge AI technologies, we aim to democratize healthcare by providing faster diagnostics and expert assistance in medical imaging interpretation. Our specific objectives include expediting medical diagnosis to reduce wait times, aiding healthcare professionals in confirming diagnoses, and empowering patients with accessible healthcare solutions.

What's next?

Moving forward, we have identified several key areas for development and expansion. For instance, we are looking to expand our capabilities by incorporating smartphone-based imaging alongside traditional methodologies such as X-rays. Acquiring access to diverse and high-quality medical imaging datasets is essential for improving the robustness and accuracy of our AI models. We aim to further refine and expand our AI models to detect a broader range of medical issues beyond brain tumors. By leveraging state-of-the-art algorithms and advanced training techniques, we hope to enhance the versatility and efficacy of our system in detecting multiple types of abnormalities across different anatomical regions.

Owner

  • Login: NoahBakayou
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
preferred-citation:
  type: software
  message: If you use YOLOv5, please cite it as below.
  authors:
  - family-names: Jocher
    given-names: Glenn
    orcid: "https://orcid.org/0000-0001-5950-6979"
  title: "YOLOv5 by Ultralytics"
  version: 7.0
  doi: 10.5281/zenodo.3908559
  date-released: 2020-5-29
  license: AGPL-3.0
  url: "https://github.com/ultralytics/yolov5"

Dependencies

.github/workflows/ci-testing.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v5 composite
  • slackapi/slack-github-action v1.25.0 composite
.github/workflows/codeql-analysis.yml actions
  • actions/checkout v4 composite
  • github/codeql-action/analyze v3 composite
  • github/codeql-action/autobuild v3 composite
  • github/codeql-action/init v3 composite
.github/workflows/docker.yml actions
  • actions/checkout v4 composite
  • docker/build-push-action v5 composite
  • docker/login-action v3 composite
  • docker/setup-buildx-action v3 composite
  • docker/setup-qemu-action v3 composite
.github/workflows/format.yml actions
  • ultralytics/actions main composite
.github/workflows/greetings.yml actions
  • actions/first-interaction v1 composite
.github/workflows/links.yml actions
  • actions/checkout v4 composite
  • nick-invision/retry v3 composite
.github/workflows/stale.yml actions
  • actions/stale v9 composite
utils/docker/Dockerfile docker
  • pytorch/pytorch 2.0.0-cuda11.7-cudnn8-runtime build
utils/google_app_engine/Dockerfile docker
  • gcr.io/google-appengine/python latest build
pyproject.toml pypi
requirements.txt pypi
  • Pillow >=9.4.0
  • PyYAML >=5.3.1
  • gitpython >=3.1.30
  • matplotlib >=3.3
  • numpy >=1.23.5
  • opencv-python >=4.1.1
  • pandas >=1.1.4
  • psutil *
  • requests >=2.23.0
  • scipy >=1.4.1
  • seaborn >=0.11.0
  • setuptools >=65.5.1
  • thop >=0.1.1
  • torchvision >=0.9.0
  • tqdm >=4.64.0
  • ultralytics >=8.0.232
  • wheel >=0.38.0
utils/google_app_engine/additional_requirements.txt pypi
  • Flask ==2.3.2
  • gunicorn ==19.10.0
  • pip ==23.3
  • werkzeug >=3.0.1