LIME generates perturbed samples around the observation to be explained and calculates a similarity score for each one via a distance metric between the perturbed sample and the original observation. It then runs the perturbed samples through the complex model's predictor and, using the similarity scores as weights, identifies the features that best describe the predictions. The best-describing features and the similarity scores are used to fit a simpler model, e.g. a weighted linear regression. The fitted feature weights of the simpler model provide an explanation of the local behaviour of the complex model.
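The steps above can be sketched in a few lines of numpy. This is an illustrative assumption-laden simplification, not the lime package's actual implementation: the black-box model, the Gaussian perturbation, the kernel width, and all names are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_predict(X):
    # Stand-in for the complex model: a fixed logistic function in which
    # feature 0 raises the score and feature 1 lowers it.
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 0.5 * X[:, 1])))

def explain_locally(x, predict_fn, n_samples=500, kernel_width=0.75):
    # 1. Perturb the observation by adding Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Similarity score: exponential kernel over Euclidean distance.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 3. Run the perturbed samples through the complex model's predictor.
    y = predict_fn(Z)
    # 4. Fit a weighted linear regression (normal equations, with intercept).
    A = np.hstack([np.ones((n_samples, 1)), Z])
    W = np.diag(w)
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return coef[1:]  # per-feature local weights (intercept dropped)

x0 = np.array([0.2, -0.1])
weights = explain_locally(x0, black_box_predict)
```

The signs of the returned weights mirror the local behaviour of the black box: the weight for feature 0 comes out positive and the weight for feature 1 negative, matching the slopes of the logistic function around `x0`.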
The project at https://github.com/marcotcr/lime is about explaining what machine learning classifiers (or models) are doing.
At the moment, it supports explaining individual predictions for text classifiers, classifiers that act on tables (numpy arrays of numerical or categorical data), and image classifiers, via a package called lime (short for Local Interpretable Model-agnostic Explanations).
In this use case, the LIME technique is applied to classifiers that attempt to identify individual customers who have a high probability of defaulting on their next credit card payment.