How to properly calculate inference for Bayesian net? #1746
-
I'm trying to do inference like this:
Is there an easier/better way? The problem with the first approach is that it is pretty slow compared to the existing SMILE solution, and I don't understand why. (We'd like to migrate from SMILE to open-source.) I'd appreciate any pointers, thank you!
Replies: 2 comments 9 replies
-
@ferencbartok I am not exactly sure what you are trying to compute in the example above. Could you elaborate a bit, please? If you are looking to compute the (posterior) marginal distributions over all variables given some evidence, you could do something like:

```python
from pgmpy.utils import get_example_model
from pgmpy.inference import VariableElimination

model = get_example_model('alarm')
infer = VariableElimination(model)

observation = {'CVP': 'LOW', 'PAP': 'HIGH'}
for node in model.nodes():
    if node not in observation:
        infer.query([node], evidence=observation)
```

By default, pgmpy tries to compute the joint distribution over all the variables in the query. I imagine that SMILE is faster in this case because it might be using an approximate Belief Propagation algorithm (and is written in C++), which computes the marginals over all the nodes together. Variable Elimination isn't designed to do that, and our Belief Propagation algorithm currently doesn't have the functionality to access these marginals. If you could explain a bit more about what you are trying to compute, I can try to modify the Belief Propagation algorithm to do that.
-
Thank you! About the reversed issue: I've checked a few of my tests based on your code, and the probabilities given the exact states are correct. But why does this happen when I set everything up the same way, the only difference being that I use ApproxInference instead of VariableElimination? Everything else is the same. About the second question: the rejection sampling approach seems promising, but I have two problems:
This works: