Confidence Maps Offset #1611

Open
roomrys opened this issue Nov 28, 2023 · Discussed in #1608 · 1 comment
Labels
bug Something isn't working

Comments

roomrys (Collaborator) commented Nov 28, 2023

Discussed in #1608

Originally posted by daskandalis November 25, 2023
I've often observed that the validation confidence maps are highly offset. Is this just a display issue? It's highly directional and the green (MAP?) estimate lies outside the confidence bound.

[screenshot: validation example with the confidence map overlay visibly offset from the predicted points]

I'm wondering though if it's related to this printout:

INFO:sleap.nn.training: Input shape: (1088, 1920, 1).

The first dimension is 1088 but I think it should be 1080, the height of the image (unless the input is padded in height but not width for the convolutions?).
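For context, 1088 is exactly what you get by rounding 1080 up to the next multiple of the model's "max_stride" (64), while 1920 is already a multiple of 64, so only the height grows. A quick sketch of that rounding rule (an illustration, not SLEAP's actual implementation):

import math

def round_up_to_stride(size: int, stride: int) -> int:
    # Hypothetical helper illustrating the pad-to-stride rule.
    return math.ceil(size / stride) * stride

print(round_up_to_stride(1080, 64))  # 1088 -> matches the reported input height
print(round_up_to_stride(1920, 64))  # 1920 -> width needs no padding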

Incidentally, is it possible to access the variance of the estimated position from the outputs? The measurement error would be useful in filtering.
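The models don't output a variance directly; the confidence maps are the closest thing. As a rough, hypothetical sketch (not part of the SLEAP API), one could treat a normalized map as a distribution over pixels and take its second moment. Note that since the training targets are fixed-sigma Gaussians, this mostly recovers the target sigma rather than a true measurement error:

import numpy as np

def confmap_spread(cm: np.ndarray) -> tuple[float, float]:
    """Hypothetical helper: estimate (var_y, var_x) of a single-channel
    confidence map by treating the normalized map as a pixel distribution."""
    p = cm / cm.sum()
    yy, xx = np.mgrid[0 : cm.shape[0], 0 : cm.shape[1]]
    mu_y, mu_x = (p * yy).sum(), (p * xx).sum()
    return ((p * (yy - mu_y) ** 2).sum(), (p * (xx - mu_x) ** 2).sum())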

Model parameters:
"unet": {
"stem_stride": null,
"max_stride": 64,
"output_stride": 4,
"filters": 32,
"filters_rate": 1.5,
"middle_block": true,
"up_interpolate": true,
"stacks": 1
}

EDIT: or maybe it comes from translation in the augmentation step?


This looks to be a visualization issue. The red predicted points should be centered on the peak value of the green 2D Gaussian (which depicts confidence values at pixel locations). While the red predicted and green ground-truth points seem to be correctly overlaid on the image, the confidence maps (from which the predicted points are derived) are not being displayed correctly. We must be missing some offset in the visualization of the confidence maps.
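To illustrate the intended relationship with a standalone example (synthetic data, not SLEAP code): a predicted point is the peak of its confidence map channel, so when both are drawn in the same coordinate system the red point should land on the brightest pixel of the Gaussian:

import numpy as np

# Build a synthetic confidence map: a 2D Gaussian centered at an assumed point.
h, w, sigma = 64, 64, 1.5
cy, cx = 40.0, 22.0  # assumed ground-truth location
yy, xx = np.mgrid[0:h, 0:w]
cm = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma**2))

# The predicted point is the argmax of the map; in a correct overlay it sits
# exactly on the brightest pixel.
peak = np.unravel_index(np.argmax(cm), cm.shape)
print(peak)  # (40, 22)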

Note: it seems you have a green point labeled as visible that should be invisible (the one up above the animal).

Thanks for reporting this! - I'll convert this discussion to an issue.
Liezl

talmo added the bug (Something isn't working) label Dec 1, 2023
talmo (Collaborator) commented Jan 5, 2024

Right, I think we add padding to the image to avoid truncation when downsampling --> upsampling, but not to the confidence maps, which might result in weird aspect ratio issues?
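To make the suspected mismatch concrete, here is a hypothetical walk-through of the numbers from this issue, assuming the confidence maps are computed on the padded input while the overlay extent (see the plotting code quoted below) is derived from the raw image height:

img_h, img_w = 1080, 1920        # raw frame
pad_h, pad_w = 1088, 1920        # padded to max_stride = 64
output_stride = 4

cm_h, cm_w = pad_h // output_stride, pad_w // output_stride  # (272, 480)

# visualize_example() passes output_scale = cms.shape[0] / img.shape[0]:
output_scale = cm_h / img_h      # 272 / 1080 ≈ 0.2519 instead of 0.25

# plot_confmaps() stretches the maps to shape / output_scale pixels:
extent_h = cm_h / output_scale   # 1080.0, but the maps actually span 1088 px
extent_w = cm_w / output_scale   # ≈ 1905.9, but the maps span 1920 px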

Relevant plotting code:

sleap/sleap/nn/viz.py, lines 81 to 96 in 16241e0:

def plot_confmaps(confmaps: np.ndarray, output_scale: float = 1.0):
    """Plot confidence maps reduced over channels."""
    ax = plt.gca()
    return ax.imshow(
        np.squeeze(confmaps.max(axis=-1)),
        alpha=0.5,
        origin="upper",
        vmin=0,
        vmax=1,
        extent=[
            -0.5,
            confmaps.shape[1] / output_scale - 0.5,
            confmaps.shape[0] / output_scale - 0.5,
            -0.5,
        ],
    )
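For context on the extent argument, a minimal standalone matplotlib example (random stand-in arrays, unrelated to SLEAP's code): imshow places pixel centers at integer coordinates, so an array spanning N pixels runs from -0.5 to N - 0.5, and extent pins the array's corners to exactly those data coordinates:

import matplotlib.pyplot as plt
import numpy as np

base = np.random.rand(1080, 1920)   # stand-in for the displayed frame
overlay = np.random.rand(272, 480)  # stand-in for low-resolution confmaps

plt.imshow(base, cmap="gray", origin="upper")
# Stretch the 272x480 overlay across the full frame. If the overlay was
# actually computed on a 1088-pixel-tall padded input, pinning it to 1080
# rows is exactly the kind of mismatch discussed above.
plt.imshow(
    overlay, alpha=0.5, origin="upper",
    extent=[-0.5, 1920 - 0.5, 1080 - 0.5, -0.5],
)
plt.show()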

Called from here during training (for single-instance models; other model types plot a bit differently):

sleap/sleap/nn/training.py, lines 1068 to 1122 in 16241e0:

def _setup_visualization(self):
    """Set up visualization pipelines and callbacks."""
    # Create visualization/inference pipelines.
    self.training_viz_pipeline = self.pipeline_builder.make_viz_pipeline(
        self.data_readers.training_labels_reader, self.keras_model
    )
    self.validation_viz_pipeline = self.pipeline_builder.make_viz_pipeline(
        self.data_readers.validation_labels_reader, self.keras_model
    )

    # Create static iterators.
    training_viz_ds_iter = iter(self.training_viz_pipeline.make_dataset())
    validation_viz_ds_iter = iter(self.validation_viz_pipeline.make_dataset())

    inference_layer = SingleInstanceInferenceLayer(
        keras_model=self.keras_model,
        input_scale=self.config.data.preprocessing.input_scaling,
        pad_to_stride=self.config.data.preprocessing.pad_to_stride,
        peak_threshold=0.2,
        return_confmaps=True,
    )

    def visualize_example(example):
        img = example["image"].numpy()
        preds = inference_layer(tf.expand_dims(img, axis=0))
        cms = preds["confmaps"].numpy()[0]
        pts_gt = example["instances"].numpy()[0]
        pts_pr = preds["instance_peaks"].numpy()[0][0]
        scale = 1.0
        if img.shape[0] < 512:
            scale = 2.0
        if img.shape[0] < 256:
            scale = 4.0
        fig = plot_img(img, dpi=72 * scale, scale=scale)
        plot_confmaps(cms, output_scale=cms.shape[0] / img.shape[0])
        plot_peaks(pts_gt, pts_pr, paired=True)
        return fig

    self.visualization_callbacks.extend(
        setup_visualization(
            self.config.outputs,
            run_path=self.run_path,
            viz_fn=lambda: visualize_example(next(training_viz_ds_iter)),
            name=f"train",
        )
    )
    self.visualization_callbacks.extend(
        setup_visualization(
            self.config.outputs,
            run_path=self.run_path,
            viz_fn=lambda: visualize_example(next(validation_viz_ds_iter)),
            name=f"validation",
        )
    )
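One possible direction, sketched under the assumption that the returned confmaps cover the padded frame (untested, and not a confirmed fix): inside visualize_example, derive output_scale from the padded height rather than the raw image height when calling plot_confmaps:

import math

# Hypothetical: pad the raw height up to the model's max stride, then use
# that padded height to scale the overlay so the maps keep their true span.
max_stride = 64  # would come from the model/preprocessing config in practice
padded_h = math.ceil(img.shape[0] / max_stride) * max_stride
plot_confmaps(cms, output_scale=cms.shape[0] / padded_h)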
