Normalizing Flows Across Dimensions

Abstract

Real-world data with underlying structure, such as pictures of faces, is hypothesized to lie on a low-dimensional manifold. This manifold hypothesis has motivated state-of-the-art generative algorithms that learn low-dimensional data representations. Unfortunately, normalizing flows, a popular class of generative models, cannot take advantage of this: they are based on successive invertible variable transformations that, by design, are incapable of learning lower-dimensional representations. In this paper, we introduce noisy injective flows (NIF), a generalization of normalizing flows that can map across dimensions. NIF explicitly map the latent space to a learnable manifold in a high-dimensional data space using injective transformations. We further employ an additive noise model to account for deviations from the manifold and identify a stochastic inverse of the generative process. Empirically, we demonstrate that a simple application of our method to existing flow architectures can significantly improve sample quality and yield separable data embeddings.
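
To make the generative direction concrete, here is a minimal sketch of the two ingredients the abstract names: an injective map from a low-dimensional latent space onto a manifold in a higher-dimensional data space, plus additive noise to model deviations from that manifold. This is illustrative only; it assumes a fixed tall linear layer followed by an elementwise nonlinearity as the injective map and isotropic Gaussian noise, whereas the paper learns the map with flow layers. All names (`injective_map`, `sample`, `W`, `b`, `sigma`) are hypothetical.

```python
import numpy as np

def injective_map(z, W, b):
    """Hypothetical injective map g: R^d_latent -> R^d_data (d_data > d_latent).
    A full-column-rank linear layer is injective, and the elementwise tanh
    bijection preserves injectivity."""
    h = W @ z + b        # W has shape (d_data, d_latent)
    return np.tanh(h)

def sample(z, W, b, sigma, rng):
    """Generative direction of a noisy injective flow (sketch): map the
    latent z onto the manifold, then add Gaussian noise around it."""
    x_manifold = injective_map(z, W, b)
    noise = sigma * rng.standard_normal(x_manifold.shape)
    return x_manifold + noise

rng = np.random.default_rng(0)
d_latent, d_data = 2, 5
W = rng.standard_normal((d_data, d_latent))
b = rng.standard_normal(d_data)
z = rng.standard_normal(d_latent)
x = sample(z, W, b, sigma=0.1, rng=rng)
print(x.shape)  # (5,) -- a point near the 2-D manifold in 5-D space
```

Because the noise model makes the generative process stochastic, inverting it is no longer a deterministic change of variables; this is why the paper identifies a stochastic inverse of the generative process rather than an exact one.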

Publication
In the Second Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models (ICML 2020)
Abhinav Agrawal
M.S. and Ph.D. student

Currently using deep learning to scale approximate probabilistic inference.
