Paper Detail

Paper ID L.10.4
Paper Title Stochastic Bottleneck: Rateless Auto-Encoder for Flexible Dimensionality Reduction
Authors Toshiaki Koike-Akino, Ye Wang, Mitsubishi Electric Research Laboratories (MERL), United States
Session L.10: Learning Theory I
Presentation Lecture
Track Statistics and Learning Theory
Abstract We propose a new concept of rateless auto-encoders (RL-AEs) that enable a flexible latent dimensionality, which can be seamlessly adjusted for varying distortion requirements. Instead of a deterministic bottleneck architecture, the proposed RL-AEs use an over-complete representation that is stochastically regularized with weighted dropouts. Our RL-AEs employ monotonically increasing dropout rates across the latent representation nodes, so that the latent variables become sorted by importance, as in principal component analysis (PCA). This is motivated by the rateless property of conventional PCA, where the least important principal components can be discarded to realize variable-rate dimensionality reduction with gracefully degrading distortion. The proposed stochastic bottleneck framework enables seamless rate adaptation with high reconstruction performance, without requiring a predetermined latent dimensionality at training time. We experimentally demonstrate that the proposed RL-AEs achieve variable dimensionality reduction while retaining nearly optimal distortion compared to conventional fixed-dimension AEs.
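
The abstract describes the mechanism concretely enough to sketch: an over-complete latent layer regularized with node-wise dropout whose rate increases monotonically with the node index, so that trailing latent dimensions can later be discarded. Below is a minimal PyTorch sketch of that idea; the class name StochasticBottleneck, the linear rate schedule, and the p_min/p_max endpoints are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class StochasticBottleneck(nn.Module):
    """Regularizes an over-complete latent vector with node-wise dropout
    whose rate increases monotonically with the node index, so earlier
    latent variables carry more importance (a PCA-like ordering).

    Illustrative sketch only: the linear dropout schedule and the
    p_min/p_max endpoints are assumptions, not the paper's settings.
    """

    def __init__(self, latent_dim: int, p_min: float = 0.0, p_max: float = 0.9):
        super().__init__()
        # Dropout rate grows from p_min at node 0 to p_max at the last
        # node: node 0 is almost always kept, the tail is dropped often.
        self.register_buffer("drop_rate", torch.linspace(p_min, p_max, latent_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return z  # no stochastic masking at inference
        keep = 1.0 - self.drop_rate
        # Per-node Bernoulli mask, rescaled (inverted dropout) so the
        # expected activation of each node is unchanged.
        mask = torch.bernoulli(keep.expand_as(z))
        return z * mask / keep

# At inference, variable-rate dimensionality reduction amounts to
# discarding (zeroing) the trailing, least important latent nodes
# before decoding:
z = torch.randn(8, 64)        # a batch of 64-dim latent vectors
z_truncated = z.clone()
z_truncated[:, 16:] = 0.0     # keep only the 16 most important dims
```

Because the dropout schedule trains earlier nodes to be reliable and later nodes to be expendable, the same trained decoder can be fed latent vectors truncated to any length, which is what makes the rate adaptation seamless.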
