Exploratory action

XGAN

Interpretable Representation Learning for Video GANs

Despite remarkable progress in generative adversarial networks (GANs), such networks currently operate as black boxes. XGAN aims to pierce the black box of GANs for video generation by proposing strategies to interpret the latent space via (a) the design of interpretable architectures and (b) the analysis of symmetric functions in the input and output of patch-based generation.

Towards (a), we will design original architectures streamlined to generate high-quality videos, since only then is an analysis of interpretability meaningful, and to allow for analysis of the latent motion representation.

An orthogonal strategy, pursued in (b), tackles the question "How do GANs encode different semantics in latent space?". Here, we intend to find correspondences between simple functions applied to the GAN input and to its output.
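To make the idea of an input/output correspondence concrete, here is a minimal sketch using a toy linear generator (an assumption for illustration only; real GAN generators are nonlinear, where such correspondences would have to be found empirically). For a linear map, a simple latent-space edit f has an exactly matching output-space edit g:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generator: a linear map from an 8-dim latent
# code to a flat 64-"pixel" patch. This is NOT a real GAN; it only
# illustrates what an input/output function correspondence means.
W = rng.normal(size=(64, 8))
G = lambda z: W @ z

z = rng.normal(size=8)          # a latent code
d = rng.normal(size=8)          # a simple latent-space edit direction

f = lambda z: z + d             # simple function applied to the GAN input
g = lambda x: x + W @ d         # candidate corresponding function on the output

# For a linear generator the correspondence G(f(z)) == g(G(z)) holds exactly.
print("max deviation:", np.max(np.abs(G(f(z)) - g(G(z)))))
```

For a trained nonlinear generator, one would instead search for an output-space function g (e.g., a translation or color shift) that approximately commutes with the latent edit f in this sense.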

Inria teams involved
STARS

Contacts

Antitza Dantcheva

Scientific leader