https://github.com/ai-forever/ghost

A new one shot face swap approach for image and video domains

Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
    Links to: ieee.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.1%) to scientific vocabulary

Keywords

computer-vision deep-face-swap deep-learning deepfake face-swap faceswap ghost ghost-faceswap ghost-swap pytorch
Last synced: 5 months ago

Repository

A new one shot face swap approach for image and video domains

Basic Info
  • Host: GitHub
  • Owner: ai-forever
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 91.7 MB
Statistics
  • Stars: 1,474
  • Watchers: 27
  • Forks: 292
  • Open Issues: 70
  • Releases: 6
Topics
computer-vision deep-face-swap deep-learning deepfake face-swap faceswap ghost ghost-faceswap ghost-swap pytorch
Created about 4 years ago · Last pushed 12 months ago
Metadata Files
Readme License

README.md

[Paper] [Habr]

👻 GHOST: Generative High-fidelity One Shot Transfer

Our paper "GHOST—A New Face Swap Approach for Image and Video Domains" has been published on IEEE Xplore.

Google Colab Demo

GHOST Ethics

Deepfake refers to face-swapping algorithms where the source and target can be an image or a video. Researchers have investigated sophisticated generative adversarial networks (GANs), autoencoders, and other approaches to establish precise and robust algorithms for face swapping. However, the achieved results are far from perfect in terms of human and visual evaluation. In this study, we propose a new one-shot pipeline for image-to-image and image-to-video face swapping: GHOST (Generative High-fidelity One Shot Transfer).

Deepfake synthesis methods have improved considerably in quality in recent years. Research solutions have been wrapped in easy-to-use APIs, software, and plugins aimed at people with little technical knowledge. As a result, almost anyone can make a deepfake image or video with just a short list of simple operations. At the same time, many people with malicious intent can use this technology to produce harmful content. Wide distribution of such content over the web leads to caution, disfavor, and other negative feedback toward deepfake synthesis and face swap research.

As a group of researchers, we are not trying to denigrate celebrities and statesmen or to demean anyone. We are computer vision researchers, we are engineers, we are activists, we are hobbyists, we are human beings. To this end, we feel that it's time to come out with a standard statement of what this technology is and isn't as far as we researchers are concerned.

  • GHOST is not for creating inappropriate content.
  • GHOST is not for changing faces without consent or with the intent of hiding its use.
  • GHOST is not for any illicit, unethical, or questionable purposes.
  • GHOST exists to experiment with and discover AI techniques, for social or political commentary, for movies, and for any number of ethical and reasonable uses.

We are very troubled by the fact that GHOST can be used for unethical and disreputable purposes. However, we support the development of tools and techniques that can be used ethically, as well as providing education and hands-on experience in AI for anyone who wants to learn. Now and in the future, we take a zero-tolerance approach toward anyone using this software for unethical purposes and will actively discourage any such uses.

Image Swap Results

Video Swap Results

Installation

  1. Clone this repository
```bash
git clone https://github.com/sberbank-ai/sber-swap.git
cd sber-swap
git submodule init
git submodule update
```
  2. Install dependent packages
```bash
pip install -r requirements.txt
```
If it is not possible to install onnxruntime-gpu, try onnxruntime instead (see the provider check sketch after this list).

  3. Download weights
```bash
sh download_models.sh
```
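
If you are unsure whether the GPU build of onnxruntime was actually picked up, a quick check like the one below can help. This is a minimal sketch using the standard onnxruntime API, not part of the repository:

```python
# Minimal sketch: check which execution providers onnxruntime exposes.
# If "CUDAExecutionProvider" is missing, the CPU-only package is installed
# (or CUDA is not visible), and inference will fall back to the CPU.
import onnxruntime as ort

providers = ort.get_available_providers()
print("Available providers:", providers)

if "CUDAExecutionProvider" not in providers:
    print("onnxruntime-gpu is not active; expect slower, CPU-only inference.")
```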

Usage

  1. Colab Demo, or use the jupyter notebook SberSwapInference.ipynb locally.
  2. Face Swap On Video

Swap to one specific person in the video. You must set the face from the target video (for example, a crop from any frame).

```bash
python inference.py --source_paths {PATH_TO_IMAGE} --target_faces_paths {PATH_TO_IMAGE} --target_video {PATH_TO_VIDEO}
```

Swap to multiple people in the video. You must set multiple source faces and the corresponding faces from the target video.

```bash
python inference.py --source_paths {PATH_TO_IMAGE PATH_TO_IMAGE ...} --target_faces_paths {PATH_TO_IMAGE PATH_TO_IMAGE ...} --target_video {PATH_TO_VIDEO}
```

  3. Face Swap On Image

You may set the target face, and then the source face will be swapped onto this person; or you may skip this parameter, and then the source face will be swapped onto any person in the image.

```bash
python inference.py --target_path {PATH_TO_IMAGE} --image_to_image True
```
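
For batch jobs it can be convenient to drive inference.py from Python rather than typing the commands by hand. The sketch below only assembles the CLI calls documented above; the file paths are hypothetical placeholders:

```python
# Minimal sketch: run the documented inference.py CLI from Python.
# All paths below are hypothetical placeholders; the flags mirror the
# commands shown above.
import subprocess

def swap_on_video(source_img, target_face_img, target_video):
    """Swap one source face onto one specific person in a video."""
    subprocess.run(
        [
            "python", "inference.py",
            "--source_paths", source_img,
            "--target_faces_paths", target_face_img,
            "--target_video", target_video,
        ],
        check=True,
    )

def swap_on_image(target_img):
    """Swap the source face onto any person found in a single image."""
    subprocess.run(
        ["python", "inference.py", "--target_path", target_img, "--image_to_image", "True"],
        check=True,
    )

if __name__ == "__main__":
    swap_on_video("examples/source.jpg", "examples/target_face.jpg", "examples/target.mp4")
```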

Training

We also provide the training code for the face swap model as follows:

  1. Download the VGGFace2 dataset.
  2. Crop and align faces with our detection model.
```bash
python preprocess_vgg.py --path_to_dataset {PATH_TO_DATASET} --save_path {SAVE_PATH}
```
  3. Start training.
```bash
python train.py --run_name {YOUR_RUN_NAME}
```

We provide a lot of different options for training; more information about each option can be found in train.py. If you would like to use wandb logging of the experiments, you should log in to wandb first (for example with `wandb login`; a minimal sketch of doing this from Python follows below).
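
The following sketch shows one way to authenticate with wandb programmatically and then launch training. It assumes the standard wandb client; the run name is a hypothetical placeholder:

```python
# Minimal sketch: authenticate with wandb, then launch train.py.
# "my_ghost_finetune" is a hypothetical run name; the train.py flag is
# taken from the steps above.
import subprocess
import wandb

wandb.login()  # prompts for / reuses your API key, equivalent to `wandb login` on the CLI

subprocess.run(
    ["python", "train.py", "--run_name", "my_ghost_finetune"],
    check=True,
)
```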

Tips

  1. For the first epochs, we suggest not using the eye detection loss or the scheduler if you train from scratch.
  2. When finetuning, you can vary the loss coefficients to make the output look more similar to the source identity or, vice versa, to preserve the features and attributes of the target face (see the sketch after this list).
  3. You can change the backbone of the attribute encoder and the number of blocks in the AAD ResBlk using the `--backbone` and `--numblocks` parameters.
  4. During the finetuning stage, you can use our pretrained weights for the generator and discriminator, located in the weights folder. We provide weights for models with a U-Net backbone and 1-3 blocks in the AAD ResBlk. The main model architecture contains 2 blocks in the AAD ResBlk.
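
To make tip 2 concrete, the trade-off is usually expressed as a weighted sum of the individual loss terms. The snippet below is only an illustrative sketch with made-up names and coefficients, not the repository's actual training code:

```python
# Illustrative sketch only: the loss names and coefficients are made up
# to show how shifting the weights trades identity similarity against
# preservation of the target's attributes.
def total_loss(identity_loss, attribute_loss, reconstruction_loss,
               w_id=10.0, w_attr=5.0, w_rec=1.0):
    # Raising w_id pushes the result toward the source identity;
    # raising w_attr / w_rec preserves more of the target's features.
    return w_id * identity_loss + w_attr * attribute_loss + w_rec * reconstruction_loss
```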

Cite

If you use our model in your research, we would appreciate it if you used the following citation:

### BibTeX Citation

```
@article{9851423,
  author={Groshev, Alexander and Maltseva, Anastasia and Chesakov, Daniil and Kuznetsov, Andrey and Dimitrov, Denis},
  journal={IEEE Access},
  title={GHOST—A New Face Swap Approach for Image and Video Domains},
  year={2022},
  volume={10},
  number={},
  pages={83452-83462},
  doi={10.1109/ACCESS.2022.3196668}
}
```

### General Citation

A. Groshev, A. Maltseva, D. Chesakov, A. Kuznetsov and D. Dimitrov, "GHOST—A New Face Swap Approach for Image and Video Domains," in IEEE Access, vol. 10, pp. 83452-83462, 2022, doi: 10.1109/ACCESS.2022.3196668.

Owner

  • Name: AI Forever
  • Login: ai-forever
  • Kind: organization
  • Location: Armenia

Creating ML for the future. AI projects you already know. We are a non-profit organization with members from all over the world.

GitHub Events

Total
  • Issues event: 6
  • Watch event: 234
  • Issue comment event: 7
  • Pull request event: 2
  • Fork event: 42
Last Year
  • Issues event: 6
  • Watch event: 234
  • Issue comment event: 7
  • Pull request event: 2
  • Fork event: 42

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 35
  • Total Committers: 5
  • Avg Commits per committer: 7.0
  • Development Distribution Score (DDS): 0.571
Past Year
  • Commits: 1
  • Committers: 1
  • Avg Commits per committer: 1.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
AlexanderGroshev N****g@y****u 15
danyache d****v@n****u 11
NastyaMittseva n****a@m****u 5
Andrey Kuznetsov k****y@g****m 3
Denis d****v@g****m 1
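
The Development Distribution Score above follows the usual ecosyste.ms-style definition: one minus the share of commits made by the top committer. With the commit counts listed in the table it can be reproduced directly (a small illustrative calculation, assuming that definition):

```python
# Illustrative check of the DDS figures above, assuming
# DDS = 1 - (commits by the top committer / total commits).
all_time_commits = [15, 11, 5, 3, 1]          # per-committer counts from the table
dds_all_time = 1 - max(all_time_commits) / sum(all_time_commits)
print(round(dds_all_time, 3))                 # 0.571

past_year_commits = [1]                       # single committer, single commit
dds_past_year = 1 - max(past_year_commits) / sum(past_year_commits)
print(round(dds_past_year, 3))                # 0.0
```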
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 5 months ago

All Time
  • Total issues: 89
  • Total pull requests: 23
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 2 days
  • Total issue authors: 69
  • Total pull request authors: 11
  • Average comments per issue: 2.03
  • Average comments per pull request: 0.09
  • Merged pull requests: 11
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 5
  • Pull requests: 5
  • Average time to close issues: about 3 hours
  • Average time to close pull requests: N/A
  • Issue authors: 5
  • Pull request authors: 3
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.2
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • netrunner-exe (6)
  • ak3389 (4)
  • bmc84 (3)
  • ihorrible (3)
  • Rhenm091619 (3)
  • loboere (2)
  • epicstar7 (2)
  • Jcpancho303 (2)
  • Funkybo0dah (2)
  • andriken (2)
  • adamgeddon1686 (2)
  • talhaty (1)
  • alcanunsal (1)
  • osmankaya (1)
  • alvinlee001 (1)
Pull Request Authors
  • AlexanderGroshev (6)
  • Danyache (3)
  • what-in-the-nim (2)
  • NastyaMittseva (2)
  • UnusualNick (2)
  • AyeshaIrshad1337 (2)
  • quereste (2)
  • thepirat000 (1)
  • johndpope (1)
  • mst-rajatmishra (1)
  • thegenerativegeneration (1)
Top Labels
Issue Labels
Pull Request Labels

Dependencies

apex/requirements.txt pypi
  • PyYAML >=5.1
  • cxxfilt >=0.2.0
  • numpy >=1.15.3
  • pytest >=3.5.1
  • tqdm >=4.28.1
apex/requirements_dev.txt pypi
  • Sphinx >=3.0.3 development
  • flake8 >=3.7.9 development
requirements.txt pypi
  • dill *
  • insightface ==0.2.1
  • kornia ==0.5.4
  • mxnet-cu101mkl *
  • numpy *
  • onnx ==1.9.0
  • onnxruntime-gpu ==1.4.0
  • opencv-python *
  • requests ==2.25.1
  • scikit-image *
  • torch ==1.6.0
  • torchvision ==0.7.0
  • wandb *
apex/examples/docker/Dockerfile docker
  • $BASE_IMAGE latest build
apex/setup.py pypi