https://github.com/astrazeneca/multimodal-python-course

The purpose of the code is to facilitate a comprehensive understanding of multimodal data science applications within the medical domain. The code supports the delivery of a workshop designed to introduce researchers to the rapidly evolving field of multimodal data science.

Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, medrxiv.org, ncbi.nlm.nih.gov, nature.com
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (7.2%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: AstraZeneca
  • License: Apache-2.0
  • Language: Jupyter Notebook
  • Default Branch: main
  • Size: 11.2 MB
Statistics
  • Stars: 9
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created almost 2 years ago · Last pushed 7 months ago
Metadata Files
Readme License

README.md

Navigating the Multimodal Map: Insights into Foundation Models

Venue


Online training course run by the NextGen Data Scientists, AstraZeneca

Trainers

Sylwia Majchrowska, Ricardo Mokhtari

Course structure and links

| Day | Title | Activity | Materials |
|:---:|:-----:|:--------:|:---------:|
| 0 | Troubleshooting software installations | Preparation | Introduction and installations |
| 1 | SAM Concept Cove | Session | Materials |
| 2 | Multimodal data handling | Session | Materials |

References

  1. LangSAM Code
  2. Grounding DINO Code, Paper
  3. SAM Code Paper
  4. Attention illustrated blog
  5. Attention video
  6. Another attention video
  7. Cross attention
  8. Visualise a transformer
  9. The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification
  10. MMML Tutorial - ICML 2023
  11. Multimodal data fusion – analysis
  12. Fusion of Multi-Modal Data Stream for Clinical Event Prediction - Imon Banerjee, PhD
  13. Data-Efficient Multimodal Fusion on a Single GPU
  14. Integrated multimodal artificial intelligence framework for healthcare applications
  15. Inferring multimodal latent topics from electronic health records
  16. Multimodal Risk Prediction with Physiological Signals, Medical Images and Clinical Notes
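Several of the references above (illustrated attention blog, attention videos, cross attention) cover the attention mechanism that underpins the multimodal models discussed in the course. As a minimal illustrative sketch (not taken from the course materials), scaled dot-product cross-attention lets tokens from one modality attend over tokens from another:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries from one
    modality attend over keys/values from another modality."""
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)  # (n_q, n_kv) similarities
    weights = softmax(scores, axis=-1)        # each query's weights sum to 1
    return weights @ values                   # (n_q, d_v) fused features

rng = np.random.default_rng(0)
text_tokens = rng.standard_normal((4, 8))   # e.g. 4 text-query tokens
image_tokens = rng.standard_normal((6, 8))  # e.g. 6 image patch tokens
out = cross_attention(text_tokens, image_tokens, image_tokens)
print(out.shape)  # (4, 8)
```

Each output row is a weighted mixture of image features, weighted by how strongly the corresponding text token attends to each image patch; this is the basic fusion primitive behind models like Grounding DINO.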

Owner

  • Name: AstraZeneca
  • Login: AstraZeneca
  • Kind: organization
  • Location: Global

Data and AI: Unlocking new science insights

GitHub Events

Total
  • Watch event: 3
  • Push event: 1
Last Year
  • Watch event: 3
  • Push event: 1