
Write SNIC segmentation to vector with attributes #7

Open
hanamthang opened this issue Jan 19, 2021 · 6 comments

Comments

@hanamthang

Dear Moritz,

Great works on pysnic!

We would like to run SNIC on a remote sensing image and compare it to SLIC. I ran SLIC in SAGA GIS with attributes holding the average spectral values for each segment, which is useful for post-classification with machine learning models.

My question is: after running SNIC, can we write the "segmentation" array to a vector format with attributes that can be used for further post-classification?

Regards,
Thang

@MoritzWillig
Owner

MoritzWillig commented Jan 19, 2021

Hey Thang,
looking at the examples in the repo, there is no such demo code yet.

The pysnic operations are performed on plain Python lists because that is faster in pure Python. For further image-processing steps I would, however, recommend switching back to numpy arrays (or any other native/hardware-accelerated library).

If you want to extract the individual feature vectors of each segment for further processing, you have to extract each segment into a separate array. The basic problem is that the segments contain different numbers of pixels, so we have to pick them one by one.

If you take the minimal.py example and append the following code, it should print the average feature values per segment and also fill them back into the segment area:

```python
# to numpy: segmentation is a 2D label map, segmentation.shape = (400, 600)
segmentation = np.array(segmentation)

# create a boolean mask per segment
actual_number_of_segments = len(centroids)
segment_masks = [segmentation == segment_idx for segment_idx in range(actual_number_of_segments)]
# select pixels per segment: a list of (n, 3) arrays, n = number of pixels in that segment
segment_vectors = [color_image[segment_masks[segment_idx], :] for segment_idx in range(actual_number_of_segments)]

# post-process the individual segment vectors (n, 3)
post_image = np.empty_like(color_image)
for segment_idx in range(actual_number_of_segments):
    average = np.mean(segment_vectors[segment_idx], 0)

    print(f"Segment {segment_idx} average: {average}")
    post_image[segment_masks[segment_idx], :] = average

fig = plt.figure("Post-processed image")
plt.imshow(post_image)
plt.show()
```
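
As an aside, if there are many segments, the per-segment means can also be computed in a fully vectorized way with np.bincount instead of looping over masks. This is only a sketch with toy stand-in data; it assumes segmentation is a 2D label map and color_image a matching (H, W, 3) array:

```python
import numpy as np

# Toy stand-ins: a 2D label map and a matching (H, W, 3) color image.
segmentation = np.array([[0, 0, 1],
                         [0, 1, 1]])
color_image = np.arange(18, dtype=float).reshape(2, 3, 3)

labels = segmentation.ravel()
pixels = color_image.reshape(-1, 3)

# per-segment pixel counts and per-channel sums
counts = np.bincount(labels)
sums = np.stack([np.bincount(labels, weights=pixels[:, c]) for c in range(3)], axis=1)

# (number_of_segments, 3) mean color per segment
means = sums / counts[:, None]
print(means)
```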

If you have any further comments or questions feel free to ask.

Best,
Moritz

@hanamthang
Author

Hi Moritz,

Yes, that is all I need. Thank you so much for the code. It may seem easy to you, but it saved me days of coding. I can now write each dimension of post_image to a separate GTiff file for further analysis, with spectral information for each pixel/segment.

Just a minor question: do you know of any libraries that can write the numpy array post_image to a vector format? Currently I write the array to GTiff and then convert it back to a vector format in SAGA. I would like to run a machine learning classification on the vector data; the SHP format will be enough in this case.

Regards,
Thang

@MoritzWillig
Owner

I haven't done any projects with SHP or vector-format conversion yet, so I hope I understand you correctly.

If you want to convert the segmentation to a vector format, you probably want to extract the superpixel boundaries/shapes? Have a look at the polygons.py example for how to extract the boundary graph from the segmentation. Not passing the curve_simplification parameter to polygonize will return pixel-exact borders. Don't get confused by the function returning multiple graphs: usually there is only one, but we have to handle the case where segments are fully enclosed by others.

I haven't looked that much into libraries for further processing the data into shapes. However, I think that converting the graph into a half-edge structure (e.g. OpenMesh should provide helpful functionality) should make it reasonably easy to write your own code to extract the shapes.

Also, if speed isn't much of a concern to you and no enclosed superpixels occur with your parameters (*1), you can do the following (it should work, but I haven't tested it yet):

  1. Segment the image.
  2. Pad the segmentation with a 1-pixel border of value -1.
  3. Adjust the seed positions for the padded image (adding 1 to x and y).
  4. Call trace_isles([], seeds, segmentation). This returns a list of (vertices, edges) for each superpixel border.
  5. Reverse the coordinate shift caused by the padding (subtracting 1 from all positions).

(*1) This would be problematic because trace_isles only traces the first border it encounters. If there are enclosed superpixels, the returned edge list contains only either the inner or the outer border.
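
For illustration, steps 2, 3 and 5 in numpy might look roughly like this (a sketch with a toy label map and made-up seed positions; the trace_isles call itself is only indicated, since I haven't run it here):

```python
import numpy as np

# Toy 2D label map standing in for the SNIC segmentation.
segmentation = np.array([[0, 0, 1],
                         [0, 1, 1]])
seeds = [[1, 0], [2, 1]]  # hypothetical (x, y) seed positions

# step 2: pad with a 1-pixel border of value -1
padded = np.pad(segmentation, 1, mode="constant", constant_values=-1)

# step 3: shift the seeds into the padded coordinate frame
padded_seeds = [[x + 1, y + 1] for x, y in seeds]

# step 4 (not run here): isles = trace_isles([], padded_seeds, padded.tolist())
# step 5: subtract 1 from every vertex position returned by trace_isles

print(padded.shape, padded_seeds)
```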

For writing the SHP format, I found this StackExchange question: https://gis.stackexchange.com/questions/113799/how-to-read-a-shapefile-in-python - Fiona and pyshp both look good to me for reading and writing data.

Best,
Moritz

@CaoZhonglei

Dear Moritz,

Great works on pysnic!

I want to use SNIC for moving-object segmentation in video sequences, and SNIC as a preprocessing step is very helpful for my research. My question is: if I want to extract the texture or color features of each pixel within a segmented superpixel, how do I implement that programmatically?

Regards,
Cao

@MoritzWillig
Owner

Hi Cao,
I guess you can use the code posted above (#7 (comment)). segment_vectors contains the pixel values of each superpixel, which are helpful for analyzing the value distribution (mean, variance, ...). Actually, this code might be a bit cleaner:

```python
u, indices = np.unique(segmentation, return_inverse=True)
color_flat = color_image.reshape(-1, 3)
segment_vectors = [color_flat[np.where(indices == segment_idx)] for segment_idx in range(actual_number_of_segments)]
```

If you want to extract the superpixels as smaller image patches, e.g. for analysing texture structures, you need to do some more work:

  • Unravel the flattened indices.
  • Take the index min/max per dimension to get the patch size.
  • Extract values from superpixel indices and fill the rest of the patch with zeros/NaNs/...
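
The steps above might look roughly like this (a sketch with toy data; it assumes a 2D label map and a matching color image, and uses np.nonzero, which yields the unravelled row/column indices directly):

```python
import numpy as np

# Toy stand-ins: a 2D label map and a matching (H, W, 3) color image.
segmentation = np.array([[0, 0, 1],
                         [0, 1, 1]])
color_image = np.arange(18, dtype=float).reshape(2, 3, 3)

patches = []
for label in np.unique(segmentation):
    ys, xs = np.nonzero(segmentation == label)      # unravelled pixel indices
    y0, y1 = ys.min(), ys.max() + 1                 # bounding box per dimension
    x0, x1 = xs.min(), xs.max() + 1
    patch = np.full((y1 - y0, x1 - x0, 3), np.nan)  # NaN-filled patch
    patch[ys - y0, xs - x0] = color_image[ys, xs]   # copy the superpixel's pixels
    patches.append(patch)

print(patches[0].shape)
```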

I don't have time to provide a working example at the moment, but maybe I can have a look at it later. Feel free to ask if there are still questions or if I misunderstood your problem.

@CaoZhonglei

Hi Moritz,

Yes, that is all I need. Thank you so much for your code.

I run the program on the CPU; a 720×480 picture takes about 5 seconds, which is bad for tasks that require real-time performance, such as video segmentation.

I think that for a video we can exploit the correlation between consecutive frames, so we don't need to grid-initialize every frame. We only need grid seed points for the first frame; for subsequent frames we can use the cluster centers from the superpixel segmentation of the previous frame as initial seed points.

So my plan is: can I use the seed points obtained from the superpixel segmentation of the previous frame to initialize the seed points for the next frame, instead of your grid-based initial seeding? This should improve the efficiency of video segmentation.

Can you give me some advice on how to achieve this?
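
What I have in mind is roughly this (a sketch; I am assuming each returned centroid exposes an (x, y) position, which should be checked against what snic actually returns):

```python
# Sketch: turn previous-frame centroid positions into integer seed
# points for the next frame. The centroid layout is an assumption.
def centroids_to_seeds(positions):
    return [[int(round(x)), int(round(y))] for x, y in positions]

print(centroids_to_seeds([(12.3, 7.8), (40.6, 7.2)]))  # [[12, 8], [41, 7]]
```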

Regards,
Cao
