Spark library for generalized K-Means clustering. Supports general Bregman divergences. Suitable for clustering probabilistic data, time series data, high dimensional data, and very large data.
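The Bregman divergences this library generalizes over share one definition: D_φ(x, y) = φ(x) − φ(y) − ⟨∇φ(y), x − y⟩ for a strictly convex function φ. This is not the library's Spark API, just a minimal scalar sketch of the quantity; `phi` and `grad_phi` are illustrative parameter names:

```python
def bregman_divergence(phi, grad_phi, x, y):
    """Scalar Bregman divergence D_phi(x, y) = phi(x) - phi(y) - grad_phi(y) * (x - y)."""
    return phi(x) - phi(y) - grad_phi(y) * (x - y)

# Choosing phi(x) = x^2 recovers the squared Euclidean distance,
# which makes classical K-Means a special case of Bregman clustering.
d = bregman_divergence(lambda x: x * x, lambda y: 2 * y, 3.0, 1.0)  # (3 - 1)^2 = 4
```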
Maximum entropy and minimum divergence models in Python
Trending algorithm based on the article "Trending at Instagram"
🐍 🔬 Fast Python implementation of various Kullback-Leibler divergences for 1D and 2D parametric distributions. Also provides optimized code for kl-UCB indexes
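For 1D parametric families, KL divergences like those this library computes often have closed forms. A sketch (not the library's own code) of the well-known Gaussian case, KL(N(μ₁, σ₁²) ‖ N(μ₂, σ₂²)) = log(σ₂/σ₁) + (σ₁² + (μ₁ − μ₂)²)/(2σ₂²) − 1/2:

```python
import math

def kl_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form KL divergence KL(N(mu1, sigma1^2) || N(mu2, sigma2^2))."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)
```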
Code, data, and tutorials for "Sense organ control in moths to moles is a gamble on information through motion"
Code for Variable Selection in Black Box Methods with RelATive cEntrality (RATE) Measures
Using entities from NER on GOV.UK content to power personalisation.
Kullback-Leibler projections for Bayesian model selection in Python
Non-Negative Matrix Factorization for Gene Expression Clustering
Methods for computational information geometry
NLP implementations including information-theoretic measures of distributional similarity, text preprocessing using shell commands, a Naive Bayes text categorization model, and Cocke-Younger-Kasami parsing.
Mode Selection/Covering of GANs (TensorFlow 2)
This project implements in Python some common statistical methods used in data analysis, including entropy, mutual information, the Kolmogorov–Smirnov test, Kullback-Leibler divergence (KLD), and A/B tests (Mann-Whitney U and t-tests).
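For discrete distributions, the KL divergence mentioned above is KL(p ‖ q) = Σᵢ pᵢ log(pᵢ/qᵢ). A minimal sketch (not this project's actual code), assuming both inputs are probability vectors over the same support and q is strictly positive wherever p is:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as sequences of probabilities.

    Terms with p_i == 0 contribute zero by the convention 0 * log 0 = 0.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Note the asymmetry: kl_divergence(p, q) generally differs from kl_divergence(q, p), which is why forward and reverse KL behave differently in applications like the GAN losses listed below.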
Experiments with the three PPO algorithms (PPO, clipped PPO, and PPO with KL penalty) proposed by John Schulman et al., run on the 'CartPole-v1' environment.
Basic GANs with a variety of loss functions, as an exercise for my thesis with Prof. Randy Paffenroth: KL, reverse-KL, JS, and Wasserstein GAN.
Sequential KMeans algorithm implementation
PyTorch implementations of the beta divergence loss.
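The beta divergence interpolates between several familiar losses. A plain-Python sketch of one common element-wise parameterization (not the repo's PyTorch implementation), assuming positive inputs:

```python
import math

def beta_divergence(x, y, beta):
    """Element-wise beta divergence d_beta(x || y) under a common parameterization:
    beta = 0 -> Itakura-Saito, beta = 1 -> generalized KL, beta = 2 -> squared error / 2.
    Assumes x > 0 and y > 0.
    """
    if beta == 0:
        return x / y - math.log(x / y) - 1
    if beta == 1:
        return x * math.log(x / y) - x + y
    return (x ** beta + (beta - 1) * y ** beta
            - beta * x * y ** (beta - 1)) / (beta * (beta - 1))
```

At beta = 2 this reduces to (x − y)² / 2, so tuning beta trades off sensitivity to large versus small magnitudes.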
A collection of statistics algorithms.
TATTER (Two-sAmple TesT EstimatoR) is a tool for performing two-sample hypothesis tests.