mmkit-features

A multimodal architecture to build multimodal knowledge graphs with flexible multimodal feature extraction and dynamic multimodal concept generation

https://github.com/dhchenx/mmkit-features

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.1%) to scientific vocabulary

Keywords

multimodal-data multimodal-feature multimodal-knowledge-graph
Last synced: 6 months ago

Repository

A multimodal architecture to build multimodal knowledge graphs with flexible multimodal feature extraction and dynamic multimodal concept generation

Basic Info
Statistics
  • Stars: 10
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
multimodal-data multimodal-feature multimodal-knowledge-graph
Created over 4 years ago · Last pushed almost 3 years ago
Metadata Files
Readme · License · Citation

README.md

MMKit-Features: Multimodal Feature Extraction Toolkit

Traditional knowledge graphs (KGs) are usually composed of entities, relationships, and attributes. However, they are not designed to effectively store or represent multimodal data. This limitation prevents them from capturing and integrating information from different modalities, such as text, images, and audio, in a meaningful and holistic way.

The MMKit-Features project proposes a multimodal architecture to build multimodal knowledge graphs with flexible multimodal feature extraction and dynamic multimodal concept generation.

Project Goal

  • To efficiently extract, store, and fuse various multimodal features from multimodal datasets;
  • To achieve generative adversarial network (GAN)-based multimodal knowledge representation dynamically in multimodal knowledge graphs;
  • To provide a common deep learning-based architecture to enhance multimodal knowledge reasoning in real-world applications.

Installation

You can install this toolkit from our PyPI package:

pip install mmkit-features
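
After installation, a quick import of the classes used in the usage example below confirms the package is available. Note that the PyPI package is named mmkit-features, while the Python import name is mmkfeatures:

# Quick sanity check: the import paths below are taken from the usage example.
from mmkfeatures.fusion.mm_features_lib import MMFeaturesLib
from mmkfeatures.fusion.mm_features_node import MMFeaturesNode

print(MMFeaturesLib.__name__, MMFeaturesNode.__name__)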

Design Science Framework

Figure 1: Multimodal Computational Sequence

Figure 2: GAN-based Multimodal Concept Generation

Modalities

  1. Text/Language modality
  2. Image modality
  3. Video modality
  4. Audio modality
  5. Cross-modality among the above (see the storage sketch after this list)
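
Regardless of modality, features are stored as plain numpy arrays inside the same node structure used in the usage example below. A minimal sketch follows; the feature shapes are illustrative assumptions, not formats required by the toolkit:

import numpy as np
from mmkfeatures.fusion.mm_features_node import MMFeaturesNode

# A text node holding one sentence-embedding row (768 dims is an assumption).
text_node = MMFeaturesNode("doc_001")
text_node.set_item("name", "doc_001")
text_node.set_item("features", np.random.rand(1, 768))
text_node.set_item("intervals", np.array([[0, 1]]))

# An image node holding one CNN descriptor row (2048 dims is an assumption).
image_node = MMFeaturesNode("img_001")
image_node.set_item("name", "img_001")
image_node.set_item("features", np.random.rand(1, 2048))
image_node.set_item("intervals", np.array([[0, 1]]))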

Usage

A toy example showing how to build a multimodal feature (MMF) library is given below:

from mmkfeatures.fusion.mm_features_lib import MMFeaturesLib
from mmkfeatures.fusion.mm_features_node import MMFeaturesNode
import numpy as np

if __name__ == "__main__":
    # 1. create an empty multimodal features library with root and dataset names
    feature_lib = MMFeaturesLib(root_name="test features", dataset_name="test_features")
    # 2. set short names for each feature dimension for convenience
    feature_lib.set_features_name(["feature1", "feature2", "feature3"])
    # 3. set a list of content IDs
    content_ids = ["content1", "content2", "content3"]
    # 4. for each ID, assign a group of features with intervals to the corresponding content
    features_dict = {}
    for id in content_ids:
        mmf_node = MMFeaturesNode(id)
        mmf_node.set_item("name", str(id))
        mmf_node.set_item("features", np.array([[1, 2, 3]]))
        mmf_node.set_item("intervals", np.array([[0, 1]]))
        features_dict[id] = mmf_node
    # 5. set the library's data
    feature_lib.set_data(features_dict)
    # 6. save the features to disk for future use
    feature_lib.save_data("test6_feature.csd")
    # 7. check the structure of the lib file, which is in h5py format
    feature_lib.show_structure("test6_feature.csd")
    # 8. take a glance at the feature content within the dataset
    feature_lib.show_sample_data("test6_feature.csd")
    # 9. Finally, we have constructed a simple multimodal knowledge base.
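
Because the saved .csd library is an HDF5 file (step 7 above prints its structure via the toolkit), it can also be inspected directly with h5py. This generic walk makes no assumptions about the internal group layout:

import h5py

# Print the path and type of every group/dataset in the saved library.
with h5py.File("test6_feature.csd", "r") as f:
    f.visititems(lambda name, obj: print(name, "->", obj))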

Further instructions for the toolkit can be found here.

Applications

Here are some examples of applying our work in real-world settings, with code and documentation.

1. Multimodal Features Extractors (see the audio sketch after this list)

2. Multimodal Feature Library (MMFLib)

3. Multimodal Knowledge Bases

4. Multimodal Indexing and Querying
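
As a sketch of how an extractor's output could feed the MMF library, the snippet below computes MFCC audio features with librosa (a declared dependency) and stores them using the node API from the usage example. The file name, sample rate, and MFCC settings are assumptions, and the toolkit's own extractor wrappers may expose a different API:

import numpy as np
import librosa
from mmkfeatures.fusion.mm_features_lib import MMFeaturesLib
from mmkfeatures.fusion.mm_features_node import MMFeaturesNode

# Load audio and compute MFCCs; "speech.wav" is a placeholder path.
y, sr = librosa.load("speech.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # shape: (frames, 13)

# One [start, end] interval per feature row, in frame units here.
intervals = np.array([[i, i + 1] for i in range(mfcc.shape[0])], dtype=float)

node = MMFeaturesNode("speech_001")
node.set_item("name", "speech_001")
node.set_item("features", mfcc)
node.set_item("intervals", intervals)

lib = MMFeaturesLib(root_name="audio features", dataset_name="audio_features")
lib.set_features_name([f"mfcc_{i}" for i in range(13)])
lib.set_data({"speech_001": node})
lib.save_data("audio_features.csd")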

Credits

The project includes source code from several open-source contributors. Their projects are listed below:

  1. A2Zadeh/CMU-MultimodalSDK
  2. aishoot/SpeechFeatureExtraction
  3. antoine77340/videofeatureextractor
  4. jgoodman8/py-image-features-extractor
  5. v-iashin/Video Features

License

The mmkit-features project is provided by Donghua Chen under the MIT license.

Citation

Please cite this project if you use it in your research.

Chen, D. (2023). MMKit-Features: Multimodal Features Extraction Toolkit (Version 0.0.2) [Computer software].

Owner

  • Name: Donghua Chen
  • Login: dhchenx
  • Kind: user
  • Location: Beijing, China
  • Company: Department of Artificial Intelligence, University of International Business and Economics

His research focuses on Natural Language Processing, Knowledge Modeling, Big Data Analysis, and Artificial Intelligence in Medical Informatics.

GitHub Events

Total
  • Watch event: 4
Last Year
  • Watch event: 4

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 17
  • Total Committers: 1
  • Avg Commits per committer: 17.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • Donghua Chen (d****n@1****m): 17 commits
Committer Domains (Top 20 + Academic)
  • 126.com: 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Packages

  • Total packages: 2
  • Total downloads: 54 last month (PyPI)
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 2
    (may contain duplicates)
  • Total versions: 6
  • Total maintainers: 1
pypi.org: mmkit-features

A multimodal architecture to build multimodal knowledge graphs with flexible multimodal feature extraction and dynamic multimodal concept generation.

  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 35 last month
Rankings
Dependent packages count: 10.1%
Dependent repos count: 21.6%
Stargazers count: 21.6%
Average: 26.8%
Forks count: 29.8%
Downloads: 50.9%
Maintainers (1)
Last synced: 6 months ago
pypi.org: mmk-features

Extract and fuse multimodal features for deep learning

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 19 last month
Rankings
Dependent packages count: 10.1%
Dependent repos count: 21.6%
Stargazers count: 21.6%
Average: 28.6%
Forks count: 29.8%
Downloads: 60.0%
Maintainers (1)
Last synced: 6 months ago

Dependencies

setup.py pypi
  • ffmpeg-python *
  • h5py *
  • librosa *
  • nexusformat *
  • numpy *
  • opencv-python *
  • requests *
  • sklearn *
  • spacy *
  • tensorflow *
  • torch *
  • torchvision *
  • tqdm *
  • validators *