Deepfakes Implementation with Pytorch


What is Deepfakes

Basically, Deepfakes is an unsupervised machine learning algorithm.
It trains one shared encoder and two decoders, one for person A and one for person B; the loss is calculated as the difference between the ground-truth image and the decoded image.
For example, it trains an encoder (e) to extract person A's features, and a decoder (dA) to decode A's features and produce a fake image of A (fA). Comparing the original image with fA drives the encoder and decoder to reconstruct A well.
Similarly, the same encoder (e) also extracts B's features, and a second decoder (dB) decodes B's features to produce a fake image of B (fB).
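The shared-encoder / two-decoder idea can be sketched in PyTorch as follows (the layer sizes and network shapes here are illustrative assumptions, not the exact architecture from the repo):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared feature extractor used for both person A and person B."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-person decoder that turns shared features back into a face image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()        # e: shared by both identities
decoder_A = Decoder()      # dA: reconstructs person A
decoder_B = Decoder()      # dB: reconstructs person B

x_A = torch.rand(1, 3, 64, 64)          # stand-in for a batch of A's faces
fake_A = decoder_A(encoder(x_A))        # fA
loss_A = nn.functional.l1_loss(fake_A, x_A)   # difference from ground truth
```

The same `encoder` is reused with `decoder_B` on B's faces, which is what forces the shared feature space.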

Think about it: what happens if we take an image of A, encode it, and decode it with (dB)?

Here is the magic:

Basic Concept

This basic concept of Deepfakes can also be found at understanding Deepfakes, or in the Chinese-language tutorial Good Tutorial of Chinese version Deepfakes. In this tutorial, I show how to implement Deepfakes with PyTorch and swap my face into one of Trump's presentation videos!

5 Steps to Use My Code

Find my code at My Code and git clone it into an empty directory.


Requirements:

python == 3.6
pytorch >= 0.4.0 or 1.0

1. Construct Dataset

First of all, we need to construct our dataset.

$ cd #current directory#
$ mkdir train
$ cd train

Put the videos of A and B into train/, for example trump.mp4 and me.mp4, where A is Trump and B is myself.

2. Crop frames from videos

In the train/ directory, make two subdirectories to store all the frames of the videos:
$ mkdir personA
$ mkdir personB

Make sure to change the Video_Path and save_path parameters in the Python file. Run it twice: once to crop the Trump video into the personA directory, and once to crop my video into the personB directory.

3. Collect faces from frames

In the train/ directory, make two subdirectories to store the faces from all frames:
$ mkdir personA_face
#(To save faces of person A from personA)
$ mkdir personB_face
#(To save faces of person B from personB)


Use dlib to crop faces from the frames and save them to personA_face and personB_face.

Make sure to change the Image_Folder and OutFace_Folder parameters in the Python file. Run it twice: once to crop Trump's face into the personA_face directory, and once to crop my face into the personB_face directory.

4. Train Model
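Training alternates between the two decoders, with both optimizers updating the shared encoder. A minimal sketch of the loop, using tiny linear stand-ins instead of the repo's convolutional nets (the checkpoint path and keys are assumptions):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny linear stand-ins for the shared encoder and the two decoders.
encoder = nn.Linear(64, 16)
decoder_A = nn.Linear(16, 64)
decoder_B = nn.Linear(16, 64)

# Each optimizer updates the shared encoder plus one decoder.
opt_A = torch.optim.Adam(list(encoder.parameters()) + list(decoder_A.parameters()), lr=1e-3)
opt_B = torch.optim.Adam(list(encoder.parameters()) + list(decoder_B.parameters()), lr=1e-3)
criterion = nn.L1Loss()

batch_A = torch.rand(8, 64)   # stand-ins for batches of cropped faces
batch_B = torch.rand(8, 64)

history = []
for step in range(200):
    # Update the shared encoder + decoder_A on person A's faces.
    loss_A = criterion(decoder_A(encoder(batch_A)), batch_A)
    opt_A.zero_grad()
    loss_A.backward()
    opt_A.step()

    # Update the shared encoder + decoder_B on person B's faces.
    loss_B = criterion(decoder_B(encoder(batch_B)), batch_B)
    opt_B.zero_grad()
    loss_B.backward()
    opt_B.step()

    history.append(loss_A.item())

# Save all three networks so step 5 can reload them (path is an assumption):
# torch.save({"e": encoder.state_dict(), "dA": decoder_A.state_dict(),
#             "dB": decoder_B.state_dict()}, "model.pth")
```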


5. Load Model and Output a Video with My Face and Trump's Body
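The swap itself is the trick from the "Basic Concept" section: encode a frame of person A, then decode it with person B's decoder. A minimal sketch, again with tiny linear placeholders (the checkpoint path and keys are assumptions):

```python
import torch
import torch.nn as nn

encoder = nn.Linear(64, 16)
decoder_B = nn.Linear(16, 64)
# In practice, load the trained weights first, e.g.:
# ckpt = torch.load("model.pth")
# encoder.load_state_dict(ckpt["e"])
# decoder_B.load_state_dict(ckpt["dB"])

face_A = torch.rand(1, 64)               # stand-in for a cropped frame of A
with torch.no_grad():
    swapped = decoder_B(encoder(face_A))  # B's appearance, A's pose/expression
```

Each swapped face is then pasted back into its original frame, and the frames are re-assembled into a video (for example with OpenCV's VideoWriter).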



Hanqing Guo
PhD Candidate

My research interests include speech recognition, signal processing, machine learning, and IoT applications.