gaussian binary tree inference_gym collider model #1349

Open · wants to merge 2 commits into base: main

Conversation

@gisilvs (Contributor) commented Jun 3, 2021

No description provided.

The google-cla bot added the cla: yes (declares that the user has signed the CLA) label on Jun 3, 2021.
@gisilvs (Contributor, Author) commented Jun 3, 2021

@davmre

nodes = []
# in the "root" layer (or inverse root, as it is a reversed tree) we have
# 2**num_layers nodes (with depth 2 --> 4 nodes, depth 4 --> 16 nodes)
for i in range(2 ** num_layers):
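
The body of this loop is truncated in this excerpt. Purely as a hypothetical sketch (the node names and the append are illustrative, not the PR's actual code), the per-node construction it refers to might look like:

for i in range(2 ** num_layers):
  # Hypothetical body, for illustration only: one Root-wrapped scalar
  # Normal per node. The PR's actual loop body is not shown above.
  nodes.append((yield Root(
      tfd.Normal(loc=initial_loc,
                 scale=initial_scale,
                 name='node_{}_{}'.format(num_layers, i)))))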
@davmre (Contributor) commented Jun 3, 2021

It would be more efficient to write each layer as a single distribution with batch shape:

layer = yield Root(
  tfd.Normal(loc=initial_loc * tf.ones([2 ** num_layers]),
             scale=initial_scale,
             name='layer_{}'.format(num_layers)))
for l in range(num_layers - 1, 0, -1):
  layer = coupling_link(layer) if coupling_link else layer
  layer = yield tfd.Normal(loc=layer[..., :-1:2] - layer[..., 1::2],
                           scale=nodes_scale,
                           name='layer_{}'.format(l))

We'd need to be sure that the CF code does the right thing with batched distributions (treating them equivalently to a list of independent distributions), but that's something we'd need anyway.
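
For concreteness, here is a self-contained sketch of how this suggestion could be exercised. The parameter values and the coupling_link placeholder below are made up for illustration, not taken from the PR:

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
Root = tfd.JointDistributionCoroutine.Root

# Illustrative values only; not taken from the PR.
num_layers = 3
initial_loc = 0.
initial_scale = 1.
nodes_scale = 0.5
coupling_link = None  # e.g. a nonlinearity in a coupled variant.

def model():
  # Root layer: a single Normal with batch shape [2 ** num_layers].
  layer = yield Root(
      tfd.Normal(loc=initial_loc * tf.ones([2 ** num_layers]),
                 scale=initial_scale,
                 name='layer_{}'.format(num_layers)))
  for l in range(num_layers - 1, 0, -1):
    layer = coupling_link(layer) if coupling_link else layer
    # Each node is the difference of an adjacent pair of parents, so the
    # batch dimension halves at every level.
    layer = yield tfd.Normal(loc=layer[..., :-1:2] - layer[..., 1::2],
                             scale=nodes_scale,
                             name='layer_{}'.format(l))

joint = tfd.JointDistributionCoroutine(model)
draw = joint.sample(seed=42)
print([tuple(x.shape) for x in draw])  # e.g. [(8,), (4,), (2,)] for num_layers = 3

Each draw is then one batched Normal per layer rather than a list of scalar nodes, with the batch dimension halving at every level.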
