SPARK: Self-supervised Personalized Real-time Monocular Face Capture

SIGGRAPH Asia 2024

SPARK creates a 3D face reconstruction from multiple unconstrained portrait videos of a person and enables real-time tracking on new unseen videos.

1. Multi-video Face Avatar Reconstruction

SPARK reconstructs a relightable face avatar from multiple monocular videos.


Examples of the multi-video avatar reconstruction stage, shown for two training frames.

2. Personalized Real-time Tracking

Leveraging the same videos and the estimated avatar, SPARK learns to precisely track unseen footage of the person in real-time.

Abstract

Feedforward monocular face capture methods seek to reconstruct posed faces from a single image of a person. By leveraging large image datasets of human faces, current state-of-the-art approaches can regress parametric 3D face models in real-time across a wide range of identities, lighting conditions and poses. These methods, however, suffer from a clear limitation: the underlying parametric face model provides only a coarse estimate of the face shape, limiting their practical applicability in tasks that require precise 3D reconstruction (aging, face swapping, digital make-up, ...).

In this paper, we propose a method for high-precision 3D face capture that takes advantage of a collection of unconstrained videos of a subject as prior information. Our proposal builds on a two-stage approach. We first reconstruct a detailed 3D face avatar of the person, capturing both precise geometry and appearance from the video collection. We then take the encoder from a pre-trained monocular face reconstruction method, substitute its decoder with our personalized model, and perform transfer learning on the video collection. Using our pre-estimated image formation model, we obtain a more precise self-supervision objective, enabling improved expression and pose alignment. The result is a trained encoder that efficiently regresses pose and expression parameters in real-time from previously unseen images and, combined with our personalized geometry model, yields more accurate, high-fidelity mesh inference.

Through extensive qualitative and quantitative evaluation, we showcase the superiority of our final model over state-of-the-art baselines, and demonstrate its generalization to unseen poses, expressions and lighting.

Method

Illustration of our two-stage adaptation process. In stage 1, we rely on a collection of different video sources of the same person to build a personalized geometry decoder through inverse rendering. In stage 2, the 3DMM of a generalizable feedforward face capture network is swapped with the new decoder, and the encoder is fine-tuned by reconstructing the same adaptation video frames, leveraging the pre-estimated reflectance function of each video.
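
To make the stage-2 adaptation concrete, below is a minimal PyTorch-style sketch of the transfer learning described above. Everything here is an illustrative assumption rather than the paper's implementation: the module names (PretrainedEncoder, PersonalizedDecoder), the tensor sizes, and especially the render placeholder, which stands in for the pre-estimated per-video image formation model.

import torch
import torch.nn as nn

class PretrainedEncoder(nn.Module):
    # Stand-in for the encoder of a generic feedforward face-capture network.
    def __init__(self, n_pose=6, n_expr=50):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
        self.pose_head = nn.Linear(256, n_pose)
        self.expr_head = nn.Linear(256, n_expr)

    def forward(self, img):
        feat = self.backbone(img)
        return self.pose_head(feat), self.expr_head(feat)

class PersonalizedDecoder(nn.Module):
    # Stand-in for the stage-1 avatar: maps pose/expression to a dense mesh.
    def __init__(self, n_pose=6, n_expr=50, n_verts=5000):
        super().__init__()
        self.n_verts = n_verts
        self.mlp = nn.Sequential(nn.Linear(n_pose + n_expr, 256), nn.ReLU(),
                                 nn.Linear(256, n_verts * 3))

    def forward(self, pose, expr):
        out = self.mlp(torch.cat([pose, expr], dim=-1))
        return out.view(-1, self.n_verts, 3)

def render(verts, albedo):
    # Placeholder for the pre-estimated image formation model; a real
    # implementation would rasterize the mesh under the reflectance
    # function estimated for each video in stage 1.
    flat = verts.flatten(1)
    return (albedo * flat[:, : 3 * 64 * 64]).view(-1, 3, 64, 64)

encoder = PretrainedEncoder()    # weights would come from the generic tracker
decoder = PersonalizedDecoder()  # frozen personalized avatar from stage 1
for p in decoder.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
frames = torch.rand(8, 3, 64, 64)  # stand-in for adaptation video frames
albedo = torch.tensor(0.5)         # stand-in per-video reflectance

for step in range(100):
    pose, expr = encoder(frames)         # regress pose + expression
    verts = decoder(pose, expr)          # personalized dense mesh
    pred = render(verts, albedo)         # re-render via image formation model
    loss = (pred - frames).abs().mean()  # photometric self-supervision
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The design choice this mirrors is that only the encoder receives gradient updates: the personalized decoder and the reflectance estimates are frozen products of stage 1, which is what turns photometric reconstruction into a precise self-supervision signal for pose and expression.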

Visual Effects

The precise tracking from SPARK can be used for face editing and other visual effects applications, which would otherwise require manual modelling or 3D scanning sessions of the actor.
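
For context on how such effects would consume the tracker's output: each new frame costs one encoder and decoder forward pass, yielding a personalized mesh that an editing pipeline can deform and re-render. A hypothetical continuation reusing the stand-in modules from the sketch above, with a toy geometric edit:

encoder.eval()
with torch.no_grad():
    frame = torch.rand(1, 3, 64, 64)   # stand-in for an unseen video frame
    pose, expr = encoder(frame)        # real-time parameter regression
    mesh = decoder(pose, expr)         # (1, 5000, 3) personalized mesh
    offset = torch.zeros_like(mesh)
    offset[..., 2] = 0.01              # toy edit: push vertices forward
    composite = render(mesh + offset, albedo)  # re-render for compositing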

BibTeX (non-definitive)

@inproceedings{baert2024spark,
  author    = {Baert, Kelian and Bharadwaj, Shrisha and Castan, Fabien and Maujean, Benoit and Christie, Marc and Abrevaya, Victoria and Boukhayma, Adnane},
  title     = {SPARK: Self-supervised Personalized Real-time Monocular Face Capture},
  booktitle = {SIGGRAPH Asia 2024 Conference Papers (SA Conference Papers '24), December 3--6, 2024, Tokyo, Japan},
  doi       = {10.1145/3680528.3687704},
  isbn      = {979-8-4007-1131-2},
  year      = {2024},
}