
binding-problem

List of papers on disentangled representation learning.

2019

  • Learning Disentangled Representations with Reference-Based Variational Autoencoders (Ruiz, Martinez et al.) [paper]
  • Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs (Watters, Burgess, Lerchner) [paper](https://arxiv.org/abs/1901.07017)

2018

  • Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions (Sjoerd van Steenkiste, Michael Chang, Klaus Greff, Jürgen Schmidhuber) [paper]
  • Curiosity Driven Exploration of Learned Disentangled Goal Spaces [paper]
  • Recurrent World Models Facilitate Policy Evolution (David Ha, Jürgen Schmidhuber) [paper]
  • Hyperprior Induced Unsupervised Disentanglement of Latent Representations (Jan, Ansari and Soh) [paper]
  • A Spectral Regularizer for Unsupervised Disentanglement (Dec, Ramesh et al.) [paper]
  • Disentangling Disentanglement (Dec, Mathieu et al.) [paper]
  • Recent Advances in Autoencoder-Based Representation Learning (Dec, Tschannen et al.) [paper]
  • Visual Object Networks: Image Generation with Disentangled 3D Representation (Dec, Zhu et al.) [paper]
  • Towards a Definition of Disentangled Representations (Dec, Higgins et al.) [paper]
  • Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations (Dec, Locatello et al.) [paper]
  • Learning Deep Representations by Mutual Information Estimation and Maximization (Aug, Hjelm et al.) [paper]
  • Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies (Aug, Achille et al.) [paper]
  • Learning to Decompose and Disentangle Representations for Video Prediction (Hsieh et al.) [paper]
  • Insights on Representational Similarity in Neural Networks with Canonical Correlation (Jun, Morcos et al.) [paper]
  • Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects (Jun, Kosiorek et al.) [paper]
  • Neural Scene Representation and Rendering (Jun, Eslami et al.) [paper]
  • Image-to-image translation for cross-domain disentanglement (May, Gonzalez-Garcia et al.) [paper]
  • Learning Disentangled Joint Continuous and Discrete Representations (May, Dupont) [paper] [code]
  • DGPose: Disentangled Semi-supervised Deep Generative Models for Human Body Analysis (Apr, de Bem et al.) [paper]
  • Structured Disentangled Representations (Apr, Esmaeili et al.) [paper]
  • Understanding disentangling in β-VAE (Apr, Burgess et al.) [paper]
  • On the importance of single directions for generalization (Mar, Morcos et al.) [paper]
  • Unsupervised Representation Learning by Predicting Image Rotations (Mar, Gidaris et al.) [paper]
  • Disentangled Sequential Autoencoder (Mar, Li & Mandt) [paper]
  • Isolating Sources of Disentanglement in Variational Autoencoders (Mar, Chen et al.) [paper] [code] -- &&
  • Disentangling by Factorising (Feb, Kim & Mnih) [paper]
  • Disentangling the Independently Controllable Factors of Variation by Interacting with the World (Feb, Bengio's group) [paper]
  • On the Latent Space of Wasserstein Auto-Encoders (Feb, Rubenstein et al.) [paper]
  • Auto-Encoding Total Correlation Explanation (Feb, Gao et al.) [paper]
  • Fixing a Broken ELBO (Feb, Alemi et al.) [paper] -- &
  • Learning Disentangled Representations with Wasserstein Auto-Encoders (Feb, Rubenstein et al.) [paper]
  • Rethinking Style and Content Disentanglement in Variational Autoencoders (Feb, Shu et al.) [paper]
  • A Framework for the Quantitative Evaluation of Disentangled Representations (Feb, Eastwood & Williams) [paper]
  • Disentangling factors of variation by mixing them [paper] -- &&

2017

  • Neural Expectation Maximization (Klaus Greff et al.) [paper]
  • The β-VAE's Implicit Prior (Dec, Hoffman et al.) [paper]
  • The Multi-Entity Variational Autoencoder (Dec, Nash et al.) [paper]
  • Learning Independent Causal Mechanisms (Dec, Parascandolo et al.) [paper]
  • Variational Inference of Disentangled Latent Concepts from Unlabeled Observations (Nov, Kumar et al.) [paper]
  • Neural Discrete Representation Learning (Nov, van den Oord et al.) [paper]
  • Disentangled Representations via Synergy Minimization (Oct, Ver Steeg et al.) [paper]
  • Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data (Sep, Hsu et al.) [paper] [code]
  • Experiments on the Consciousness Prior (Sep, Bengio & Fedus) [paper]
  • The Consciousness Prior (Sep, Bengio) [paper]
  • Disentangling Motion, Foreground and Background Features in Videos (Jul, Lin et al.) [paper]
  • SCAN: Learning Hierarchical Compositional Visual Concepts (Jul, Higgins et al.) [paper] -- &&
  • DARLA: Improving Zero-Shot Transfer in Reinforcement Learning (Jul, Higgins et al.) [paper]
  • Unsupervised Learning via Total Correlation Explanation (Jun, Ver Steeg) [paper] [code]
  • PixelGAN Autoencoders (Jun, Makhzani & Frey) [paper] -- &
  • Emergence of Invariance and Disentanglement in Deep Representations (Jun, Achille & Soatto) [paper] -- &
  • A Simple Neural Network Module for Relational Reasoning (Jun, Santoro et al.) [paper] -- &
  • Learning Disentangled Representations with Semi-Supervised Deep Generative Models (Jun, Siddharth et al.) [paper]
  • Unsupervised Learning of Disentangled Representations from Video (May, Denton & Birodkar) [paper]

2016

  • Tagger: Deep Unsupervised Perceptual Grouping (Klaus Greff, Jürgen Schmidhuber et al.) [paper](https://papers.nips.cc/paper/6067-tagger-deep-unsupervised-perceptual-grouping.pdf)
  • Deep Variational Information Bottleneck (Dec, Alemi et al.) [paper] -- & (Didn't understand some parts of it)
  • β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework (Nov, Higgins et al.) [paper] [code] -- &&
  • Disentangling factors of variation in deep representations using adversarial training (Nov, Mathieu et al.) [paper]
  • Information Dropout: Learning Optimal Representations Through Noisy Computation (Nov, Achille & Soatto) [paper]
  • InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (Jun, Chen et al.) [paper] -- &&
  • Building Machines That Learn and Think Like People (Apr, Lake et al.) [paper]
  • Attend, Infer, Repeat: Fast Scene Understanding with Generative Models (Mar, Eslami et al.) [paper]
  • Understanding Visual Concepts with Continuation Learning (Feb, Whitney et al.) [paper]
  • Disentangled Representations in Neural Models (Feb, Whitney) [paper]

Older work

  • Deep Convolutional Inverse Graphics Network (2015, Kulkarni et al.) [paper] -- &
  • Learning to Disentangle Factors of Variation with Manifold Interaction (2014, Reed et al.) [paper]
  • Representation Learning: A Review and New Perspectives (2013, Bengio et. al.) [paper]
  • Disentangling Factors of Variation via Generative Entangling (2012, Desjardins et al.) [paper]
  • Transforming Auto-encoders (2011, Hinton et al.) [paper] -- &&
  • Learning Factorial Codes By Predictability Minimization (1992, Schmidhuber) [paper]
  • Self-Organization in a Perceptual Network (1988, Linsker) [paper]
