Faces Animation

The idea of this project is to create animations of people who died on the Eastern Front (World War II). We still have their photographs, but often no other materials have been preserved, and even a single photograph is already a lot.

With modern machine learning techniques, we can try to create videos of these people and make them say something. In this project the speech itself will not be meaningful, but we can at least test the animation results.
Data is the heart of any ML application. In this case, we need to collect photos of WWII participants. To automate this process, the Pamyat Naroda site was parsed. It contains about 2.3 million records (in Russian) about war participants, including photos, birth dates, dates of death, military ranks, etc.
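A rough sketch of the collection step is shown below. The actual Pamyat Naroda endpoint, query parameters, and response format are not described here, so everything about the request and the JSON structure is an assumption for illustration only.

```python
import requests

# Hypothetical scraper sketch: the real Pamyat Naroda endpoint, parameters,
# and response format are assumptions and will need to be adapted.
BASE_URL = "https://pamyat-naroda.ru/heroes/"  # assumed entry point

def fetch_page(page: int) -> list[dict]:
    """Download one page of search results and return its records as dicts."""
    response = requests.get(BASE_URL, params={"page": page}, timeout=30)
    response.raise_for_status()
    return response.json().get("records", [])  # assumed JSON structure

def collect_records(max_pages: int = 100) -> list[dict]:
    """Iterate over result pages and accumulate all records."""
    records = []
    for page in range(1, max_pages + 1):
        records.extend(fetch_page(page))
    return records
```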

I downloaded ~300k unique records. The data was then filtered so that only wartime deaths remain (death date from 1939 to 1946), leaving 40k records out of the initial 300k. About half of them have photos. The downloaded face photos were cropped to fit the animation model's input format.
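A minimal sketch of the filtering and cropping step is below. The column names and file paths are assumptions, and OpenCV's bundled Haar cascade is used here as a simple stand-in for whatever face detector was actually used; 256x256 is the input size expected by the first-order-model vox checkpoint.

```python
import cv2
import pandas as pd

# Assumed column names ("death_date", "photo_path"); adjust to the real schema.
records = pd.read_csv("records.csv", parse_dates=["death_date"])
wartime = records[records["death_date"].dt.year.between(1939, 1946)]

# OpenCV's bundled Haar cascade as a simple face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def crop_face(photo_path: str, size: int = 256):
    """Detect the largest face and crop/resize it to size x size."""
    image = cv2.imread(photo_path)
    if image is None:
        return None
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
    return cv2.resize(image[y:y + h, x:x + w], (size, size))
```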
Now we can apply the animation model (first-order-model, for example). Generating a single 10-second video takes about one minute. This could be optimized, but that is out of the scope of this project.
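One way to run the animation is to call the demo script from the AliaksandrSiarohin/first-order-model repository. The flag names below follow that repository's README, but check your checkout for the exact interface; the checkpoint file name is an assumption.

```python
import subprocess

def animate(source_image: str, driving_video: str, output: str = "result.mp4") -> None:
    """Animate a single cropped face photo with a driving video via the demo script."""
    subprocess.run(
        [
            "python", "demo.py",
            "--config", "config/vox-256.yaml",
            "--checkpoint", "vox-cpk.pth.tar",   # assumed checkpoint location
            "--source_image", source_image,
            "--driving_video", driving_video,
            "--result_video", output,
            "--relative", "--adapt_scale",
        ],
        check=True,
    )
```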

The data and cloud functions are hosted on Google Cloud. The site is built and hosted on Tilda. Each time the page is refreshed, random records are shown. Clicking on a photo takes you to the person's page on the Pamyat Naroda site.
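A minimal sketch of how such a cloud function could serve random records is shown below, assuming an HTTP Cloud Function in Python and the records stored as a JSON file in Cloud Storage; the bucket and file names are assumptions.

```python
import json
import random

from google.cloud import storage

def random_records(request):
    """HTTP Cloud Function: return a few random records as JSON."""
    client = storage.Client()
    blob = client.bucket("faces-animation-data").blob("records.json")  # assumed names
    records = json.loads(blob.download_as_text())
    sample = random.sample(records, k=min(5, len(records)))
    return (
        json.dumps(sample, ensure_ascii=False),
        200,
        {"Content-Type": "application/json"},
    )
```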
There are some artifacts, but the overall quality is quite good. The model could be further improved by retraining on more relevant videos and by additional pre/post-processing.
Comments and Questions