This document summarizes two papers presented at NIPS 2018 on anomaly detection and out-of-distribution detection. The first paper proposes a simple unified framework that uses geometric transformations and Dirichlet density estimation to detect anomalies and adversarial examples. The second paper introduces a method that uses an ensemble of neural networks to detect out-of-distribution samples and adversarial attacks, reporting state-of-the-art results on CIFAR-10 and SVHN and against FGSM attacks; it also explores applications to class-incremental learning.
1. The document discusses probabilistic modeling and variational inference. It introduces concepts like Bayes' rule, marginalization, and conditioning.
2. An equation for the evidence lower bound (ELBO) is derived: the log marginal likelihood of the data decomposes into the Kullback-Leibler divergence between the approximate and true posteriors plus the ELBO itself, which in turn is an expected log-likelihood term minus a KL regularizer (written out in the sketch after this list).
3. Variational autoencoders are discussed, where the approximate posterior is parameterized by a neural network (the encoder) and optimized to maximize the evidence lower bound, with the latent variables modeled as Gaussian distributions (a minimal code sketch follows the list).
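The relationships referenced in items 1 and 2 can be written compactly as follows. The notation (x for observations, z for latents, q_phi for the approximate posterior, p_theta for the model) is standard VAE notation assumed here, not taken verbatim from the source:

```latex
% Bayes' rule and marginalization: the posterior over latents z given data x,
% and the evidence p(x) obtained by integrating z out.
p_\theta(z \mid x) = \frac{p_\theta(x \mid z)\, p_\theta(z)}{p_\theta(x)},
\qquad
p_\theta(x) = \int p_\theta(x \mid z)\, p_\theta(z)\, dz

% Decomposition of the log evidence: KL to the true posterior plus the ELBO.
\log p_\theta(x)
  = \mathrm{KL}\!\big(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x)\big)
  + \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big]
  - \mathrm{KL}\!\big(q_\phi(z \mid x) \,\|\, p_\theta(z)\big)}_{\text{ELBO}}
```

Because the first KL term is non-negative, the ELBO lower-bounds the log evidence, so maximizing it over the variational parameters tightens the bound while fitting the model.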
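A minimal PyTorch-style sketch of the setup in item 3, assuming an MLP encoder/decoder, a Bernoulli likelihood, and the usual reparameterization trick; the layer sizes and names are illustrative, not taken from the papers:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Toy VAE: Gaussian approximate posterior q(z|x) from an encoder network,
    Bernoulli likelihood p(x|z) from a decoder network."""
    def __init__(self, x_dim=784, z_dim=20, h_dim=400):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.enc_mu = nn.Linear(h_dim, z_dim)      # mean of q(z|x)
        self.enc_logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(z|x)
        self.dec = nn.Linear(z_dim, h_dim)
        self.dec_out = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.enc_mu(h), self.enc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps with eps ~ N(0, I): keeps sampling differentiable
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.dec(z))
        return torch.sigmoid(self.dec_out(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def negative_elbo(x, x_recon, mu, logvar):
    # Expected log-likelihood term (Bernoulli reconstruction loss) ...
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # ... plus KL(q(z|x) || N(0, I)), which has a closed form for Gaussians.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl  # minimizing this maximizes the ELBO
```

Training then amounts to minimizing negative_elbo over minibatches with a standard optimizer such as Adam.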