DPM

Differentiable Probabilistic Models


Table of Contents

  1. Distributions
  2. Transforms
  3. Criterion
  4. Models
  5. Monte Carlo

Distributions

  1. Arcsine
  2. Asymmetric Laplace
  3. Bernoulli
  4. Beta
  5. Binomial
  6. Categorical
  7. Cauchy
  8. ChiSquare
  9. Conditional Model
    • Uses a neural network to map an input to the parameters of a distribution (see the first sketch after this list).
    • Sampling -> run the network on the conditioning value to build the distribution, then sample from it.
    • Log Prob -> build the distribution conditioned on X, then evaluate log_prob at Z.
  10. Convolution - Sum of component distributions; supports sampling only.
  11. Data Distribution - Randomly sample from a given set of data.
  12. Dirac Delta
  13. Dirichlet
  14. Exponential
  15. Fisher-Snedecor (F-Distribution)
  16. Gamma
  17. Generator
    • Samples from a latent distribution and passes the samples through a neural network to generate a distribution; train it with the adversarial losses listed under Criterion.
  18. Geometric
  19. Gumbel Softmax (Relaxed Categorical)
  20. Gumbel
  21. Half Cauchy
  22. Half Normal
  23. Hyperbolic Secant
  24. Kumaraswamy
  25. Langevin
    • Adds Langevin dynamics to the sampling methods (see Wikipedia).
  26. Laplace
  27. Log Cauchy
  28. Log Laplace
  29. Log Normal
  30. Logistic
  31. Logit Normal
  32. Negative Binomial
  33. Normal (Multivariate)
  34. Normal (Independent)
  35. Pareto
  36. Poisson
  37. Rayleigh
  38. Relaxed Bernoulli
  39. Student T
  40. Transform Distribution
    • Composes a list of transforms on a base distribution.
    • Example: Exp(Normal) ~ LogNormal (see the sketch after this list).
  41. Uniform
  42. Weibull
  43. Mixture Model
    • Uses a categorical distribution with static (fixed) weights to pick among sub-models.
  44. Gumbel Mixture Model
    • Uses the Gumbel Softmax as a differentiable approximation to the categorical distribution, allowing the mixture weights to be learned (see the sketch after this list).
  45. Infinite Mixture Model
    • Student's t-distribution written as a mixture model.
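
A rough illustration of the Conditional Model idea in plain PyTorch (a sketch, not DPM's actual API; the class and method names below are made up for exposition):

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

class ConditionalNormal(nn.Module):
    """Hypothetical conditional model: a network maps x to Normal parameters."""

    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))

    def dist(self, x):
        # Run the network to create the conditional distribution.
        mu, log_sigma = self.net(x).chunk(2, dim=-1)
        return Normal(mu, log_sigma.exp())

    def sample(self, x):
        return self.dist(x).rsample()      # sample z ~ p(z | x)

    def log_prob(self, x, z):
        return self.dist(x).log_prob(z)    # evaluate log p(z | x)
```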
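
The Exp(Normal) ~ LogNormal example can be checked directly with PyTorch's own transform machinery (used here in place of DPM's classes):

```python
import torch
from torch.distributions import LogNormal, Normal, TransformedDistribution
from torch.distributions.transforms import ExpTransform

# Exp applied to a standard Normal gives a standard LogNormal.
exp_normal = TransformedDistribution(Normal(0.0, 1.0), [ExpTransform()])

x = torch.tensor([0.5, 1.0, 2.0])
print(exp_normal.log_prob(x))            # matches the line below
print(LogNormal(0.0, 1.0).log_prob(x))
```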
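
For the Gumbel Mixture Model, the key trick is replacing the hard categorical draw with a relaxed one-hot sample, so gradients flow back to the mixture weights. A minimal sketch with torch.distributions.RelaxedOneHotCategorical (the temperature and the Normal components are arbitrary choices):

```python
import torch
from torch.distributions import Normal, RelaxedOneHotCategorical

logits = torch.zeros(3, requires_grad=True)                  # learnable weights
components = Normal(torch.tensor([-2.0, 0.0, 2.0]), torch.ones(3))

def rsample_mixture(n, temperature=0.5):
    # Relaxed (Gumbel-Softmax) one-hot weights: differentiable w.r.t. logits.
    w = RelaxedOneHotCategorical(torch.tensor(temperature),
                                 logits=logits).rsample((n,))
    x = components.rsample((n,))     # one draw per component
    return (w * x).sum(-1)           # soft mixture of the component draws
```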

Transforms

  1. Affine
  2. Exp
  3. Expm1
  4. Gumbel
  5. Identity
  6. InverseTransform (Inverts a transform)
  7. Kumaraswamy
  8. Log
  9. Logit
  10. NICE
  11. Planar
  12. Power
  13. Radial
  14. Reciprocal
  15. Sigmoid
  16. SinhArcsinh
  17. Softplus
  18. Softsign
  19. Square
  20. Tanh
  21. Weibull
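
All of these transforms share one contract: a forward map, an inverse, and a log absolute determinant of the Jacobian, which is what makes Transform Distribution's log_prob work via the change-of-variables formula. A minimal sketch of that contract for Exp (the interface is illustrative, not DPM's exact one):

```python
import torch

class Exp:
    """y = exp(x); log|dy/dx| = x."""
    def forward(self, x):
        return x.exp()
    def inverse(self, y):
        return y.log()
    def log_abs_det_jacobian(self, x):
        return x

def transformed_log_prob(base_log_prob, transform, y):
    # log p_Y(y) = log p_X(f^{-1}(y)) - log|f'(f^{-1}(y))|
    x = transform.inverse(y)
    return base_log_prob(x) - transform.log_abs_det_jacobian(x)
```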

Criterion

  1. Divergences
    1. Cross Entropy
    2. Perplexity
    3. Exponential
    4. Forward KL Divergence (sketched, together with Reverse KL, after this list)
      • P Model -> Sampling (rsample)
      • Q Model -> PDF Function (log_prob)
    5. Reverse KL Divergence
      • P Model -> PDF Function (log_prob)
      • Q Model -> Sampling + PDF Function
    6. Jensen-Shannon Divergence (JS)
      • P Model -> PDF + Sampling
      • Q Model -> PDF + Sampling
  2. Adversarial
    1. GAN Loss
    2. MMGAN Loss
    3. WGAN Loss
    4. LSGAN Loss
  3. Variational
    1. ELBO
      • Implements SVI with the ELBO loss (see the sketch after this list).
      • Requires a Conditional Model to learn (the variational posterior), in addition to the P and Q models.
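
The sampling/log_prob requirements listed above fall straight out of the Monte Carlo estimators for each divergence. A sketch with plain torch.distributions (the Normals are stand-ins for arbitrary P and Q models):

```python
import torch
from torch.distributions import Normal

p = Normal(0.0, 1.0)                              # P model
q = Normal(torch.tensor(0.5), torch.tensor(2.0))  # Q model (e.g. learnable)
n = 10_000

# Forward KL(P || Q): sample from P, score under both.
x = p.sample((n,))
forward_kl = (p.log_prob(x) - q.log_prob(x)).mean()

# Reverse KL(Q || P): rsample from Q (differentiable), score under both.
z = q.rsample((n,))
reverse_kl = (q.log_prob(z) - p.log_prob(z)).mean()
```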
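
Likewise, the ELBO reduces to a reparameterized expectation. A toy sketch under an assumed model p(z) = Normal(0, 1), p(x | z) = Normal(z, 1), with q(z | x) standing in for the Conditional Model:

```python
import torch
from torch.distributions import Normal

x = torch.tensor(1.5)                          # an observation
mu = torch.tensor(0.0, requires_grad=True)     # variational parameters
log_sigma = torch.tensor(0.0, requires_grad=True)

q = Normal(mu, log_sigma.exp())                # q(z | x)
z = q.rsample((1000,))                         # reparameterized samples
elbo = (Normal(z, 1.0).log_prob(x)             # log p(x | z)
        + Normal(0.0, 1.0).log_prob(z)         # log p(z)
        - q.log_prob(z)).mean()                # - log q(z | x)
(-elbo).backward()                             # minimize the negative ELBO
```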

Models

  1. Regression
    1. Linear Regression (Normal)
    2. L1 Regression (Laplace)
    3. Ridge Regression (Normal + Normal prior on weights, i.e. Bayesian Linear Regression)
    4. Lasso Regression (Normal + Laplace Prior on weights)
    5. Poisson Regression (Poisson)
    6. Negative Binomial Regression (Generalized Poisson)
  2. Classification
    1. Logistic Regression (Bernoulli)
    2. Bayesian Logistic Regression (Bernoulli + Normal Prior on weights)
    3. Softmax Regression (Categorical)
    4. Gaussian Naive Bayes
    5. Bernoulli Naive Bayes
    6. Multinomial Naive Bayes
    7. Linear Discriminant Analysis (Shared Covariance)
    8. Gaussian Discriminant Analysis (Multivariate Normal)
  3. Clustering
    1. Gaussian Mixture Model
  4. Decomposition
    1. Functional PCA
    2. Dynamic SVD-based (the projection size can be updated)
    3. EM PPCA
    4. Variational PPCA
  5. Unconstrained Matrix Factorization
  6. Generative Adversarial Networks
    1. GAN
    2. MMGAN
    3. WGAN
    4. LSGAN
  7. Variational Auto-Encoder (VAE)
  8. Ordinal Models (For Ordinal Targets)
    1. OrdinalLayer (layer that converts a real value into target probabilities; see the sketch after this list)
    2. OrdinalModel (wraps a predictor + an OrdinalLayer in one module)
    3. OrdinalLoss (a reminder to use NLLLoss, since the ordinal probabilities are already normalized and should not be passed through a softmax)
    4. Functional CDFs
      1. exp_cdf -> uses the exponential function as a CDF
      2. erf_cdf -> uses the error function as a CDF
      3. tanh_cdf -> uses the hyperbolic tangent as an approximate CDF
    5. Distribution CDFs
      1. normal_cdf
      2. laplace_cdf
      3. cauchy_cdf
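
The ordinal pieces fit together like this: the predictor emits a real-valued score, the layer converts it into class probabilities as differences of a CDF evaluated at cutpoints, and NLLLoss consumes the resulting log probabilities (no softmax). A sketch of that construction (the cutpoints, shapes, and normal_cdf helper are illustrative):

```python
import torch
import torch.nn.functional as F

def normal_cdf(x):
    return 0.5 * (1 + torch.erf(x / 2 ** 0.5))

def ordinal_log_probs(eta, cutpoints):
    # P(y = k) = F(c_k - eta) - F(c_{k-1} - eta), with c_0 = -inf, c_K = +inf.
    cdf = normal_cdf(cutpoints - eta.unsqueeze(-1))              # (batch, K-1)
    upper = torch.cat([cdf, torch.ones_like(eta).unsqueeze(-1)], dim=-1)
    lower = torch.cat([torch.zeros_like(eta).unsqueeze(-1), cdf], dim=-1)
    return (upper - lower).clamp_min(1e-12).log()

eta = torch.randn(8)                      # real-valued predictor outputs
cuts = torch.tensor([-1.0, 0.0, 1.0])     # K - 1 = 3 cutpoints -> K = 4 classes
targets = torch.randint(0, 4, (8,))
loss = F.nll_loss(ordinal_log_probs(eta, cuts), targets)  # NLLLoss, as advised
```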

Monte Carlo

  1. Approximations (Integration, Expectation, Variance, etc.)
  2. Inverse Transform Sampling (sketched below)
  3. Rejection Sampling (and Mode Sampling)
  4. Metropolis
  5. Metropolis-Hastings (sketched below)
  6. Simulated Annealing
  7. Metropolis-Adjusted Langevin Algorithm (MALA)
  8. Hamiltonian Monte Carlo (HMC)
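
Two of the techniques above are compact enough to sketch inline. Inverse transform sampling pushes uniform draws through the inverse CDF; for the Exponential distribution this is closed-form:

```python
import torch

def sample_exponential(n, rate=1.0):
    # F(x) = 1 - exp(-rate * x)  =>  F^{-1}(u) = -log(1 - u) / rate
    u = torch.rand(n)                 # u ~ Uniform(0, 1)
    return -torch.log1p(-u) / rate
```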
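
And Metropolis-Hastings with a symmetric random-walk proposal (so the Hastings correction cancels) needs only an unnormalized log density; the two-mode target below is an arbitrary example:

```python
import torch

def metropolis_hastings(log_prob, init, steps=5000, scale=0.5):
    x = init.clone()
    samples = []
    for _ in range(steps):
        proposal = x + scale * torch.randn_like(x)    # symmetric proposal
        log_alpha = log_prob(proposal) - log_prob(x)  # acceptance ratio in log space
        if torch.log(torch.rand(())) < log_alpha:
            x = proposal
        samples.append(x.clone())
    return torch.stack(samples)

# Unnormalized mixture of Normals at -2 and +2.
log_target = lambda x: torch.logsumexp(torch.stack([
    -0.5 * (x - 2.0) ** 2,
    -0.5 * (x + 2.0) ** 2]), dim=0).sum()

chain = metropolis_hastings(log_target, torch.zeros(1))
```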