
Learn from each other #2

Open
kallaballa opened this issue Feb 23, 2022 · 7 comments

Comments


kallaballa commented Feb 23, 2022

Hi! I created a project very similar to yours. How about we find a channel to talk about our projects and approaches? Anyway, I rendered a comparison of our algorithms. I used the following command line to generate the animation.

poppy -p20 -c5 -f60 images/*

[animation: side-by-side comparison]

On the left is the animation from your readme, and on the right a rendering done with Poppy. Poppy is still a very young project.

@jankovicsandras (Owner)

Hi Amir,

Thanks for reaching out. Your project looks great! I've only skimmed it so far, but I'll try to test it when I have some time.

Poppy looks much more advanced than my projects.
I have some questions:
- Does Poppy calculate Lucas-Kanade Optical Flow ( https://docs.opencv.org/3.4/d4/dee/tutorial_optical_flow.html ) on the whole series of input keyframes, or just pairs of keyframes?
- How does the feature point detection work? I saw the Canny edge detector and work on intensity/grayscale (good idea!), but I've just skimmed the source.

I recommend checking out https://github.com/jankovicsandras/autoimagemorphjs , it's less than 200 lines of (very dense) JavaScript, without any advanced math.

These are some of the different ideas in autoimagemorphjs vs. autoimagemorph (Python):

  • Instead of calculating the triangle-to-triangle transformations on every pixel in the triangles, it's enough to calculate the vertices and do a simple linear walk/scan between them. I mean, solving the homography is mathematically correct, but it might be wasteful, because it doesn't utilize the fact that the pixels are neighbors and are only dx, dy away. autoimagemorphjs uses only +-*/ vector math, because I'm too stupid for matrices. :D
  • autoimagemorphjs uses a very naive "sum of absolute RGBA differences between this pixel and the neighbors" to get the feature points; this could be improved by more advanced algorithms and by working with intensity (HSL Lightness?).
  • Matching the triangles on the start and end keyframes was a big question, so I solved this by cheating: using almost regular triangle grids, where the triangles are already matched. This could be optimized by non-uniform grids, but I don't know how yet. :)
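To make the naive feature-point scoring from the second bullet concrete, here is a rough NumPy sketch of the idea (my own reconstruction, not the autoimagemorphjs code; `feature_score` and `top_points` are hypothetical names):

```python
import numpy as np

def feature_score(img):
    """Sum of absolute RGBA differences between each pixel and its
    4-connected neighbors; high scores mark candidate feature points.
    (np.roll wraps around at the image border, which is fine for a sketch.)"""
    img = img.astype(np.int32)
    score = np.zeros(img.shape[:2], dtype=np.int32)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(img, (dy, dx), axis=(0, 1))
        score += np.abs(img - shifted).sum(axis=2)
    return score

def top_points(img, n=16):
    """Return the (row, col) coordinates of the n strongest responses."""
    s = feature_score(img)
    flat = np.argsort(s, axis=None)[::-1][:n]
    return np.column_stack(np.unravel_index(flat, s.shape))
```

As the bullet notes, this favors high local contrast; a real improvement would score on intensity (e.g. HSL Lightness) rather than raw RGBA channels.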

Some random ideas:

  • I think background separation might improve quality (doing the morphing on the foreground and background parts of the image separately). Maybe separating to not just 2 layers, but more.

https://docs.opencv.org/3.4/d1/dc5/tutorial_background_subtraction.html
https://en.wikipedia.org/wiki/Active_contour_model

Keep up the good work!

@kallaballa (Author)

Thanks for the great answer! I'll get back to you in a couple of days, because I am currently so involved in that project that everything I say now may be void by then. :D

Btw., at the moment morphing van Gogh looks like this:
[animation: van Gogh morph]


kallaballa commented Mar 17, 2022

This has taken me way longer than I thought it would :). Anyway, I am working on a walk-through of how Poppy works.

> Hi Amir,
>
> Thanks for reaching out. Your project looks great! I've only skimmed it so far, but I'll try to test it when I have some time.
>
> Poppy looks much more advanced than my projects. I have some questions:
>
> - Does Poppy calculate Lucas-Kanade Optical Flow ( https://docs.opencv.org/3.4/d4/dee/tutorial_optical_flow.html ) on the whole series of input keyframes, or just pairs of keyframes?

It doesn't do that at all anymore. I found faster alternatives to achieve what I needed.

> - How does the feature point detection work? I saw the Canny edge detector and work on intensity/grayscale (good idea!), but I've just skimmed the source.

This part has become kind of lengthy to explain, but at the core of it is a neat trick I found to remove the background quite well on still images, with no inter-frame information required. The idea is to incrementally apply more blurred versions of your image to a MOG2 background subtractor. That way you end up with a mask that pretty much nails the areas of interest.
Then I use that mask to extract the areas of interest from the original image and add some contour information.
In the end, I use ORB to extract features from images prepared like that.

> I recommend checking out https://github.com/jankovicsandras/autoimagemorphjs , it's less than 200 lines of (very dense) JavaScript, without any advanced math.
>
> These are some of the different ideas in autoimagemorphjs vs. autoimagemorph (Python):
>
> - Instead of calculating the triangle-to-triangle transformations on every pixel in the triangles, it's enough to calculate the vertices and do a simple linear walk/scan between them. I mean, solving the homography is mathematically correct, but it might be wasteful, because it doesn't utilize the fact that the pixels are neighbors and are only dx, dy away. autoimagemorphjs uses only +-*/ vector math, because I'm too stupid for matrices. :D

Very nice idea! I'd very much like to get rid of solving the homography.

> - autoimagemorphjs uses a very naive "sum of absolute RGBA differences between this pixel and the neighbors" to get the feature points; this could be improved by more advanced algorithms and by working with intensity (HSL Lightness?).
> - Matching the triangles on the start and end keyframes was a big question, so I solved this by cheating: using almost regular triangle grids, where the triangles are already matched. This could be optimized by non-uniform grids, but I don't know how yet. :)
>
> Some random ideas:

I use them for contour-aware blending, using masks on a Laplacian pyramid.

> - I think background separation might improve quality (doing the morphing on the foreground and background parts of the image separately). Maybe separating to not just 2 layers, but more.

It did!

> https://docs.opencv.org/3.4/d1/dc5/tutorial_background_subtraction.html
> https://en.wikipedia.org/wiki/Active_contour_model

> Keep up the good work!

Thx!

@kallaballa (Author)

Btw., here's a video of all the Poppy demos (https://vimeo.com/679551761) and a video you might particularly like (https://vimeo.com/687432685) :)

@kallaballa (Author)

> This part has become kind of lengthy to explain, but at the core of it is a neat trick I found to remove the background quite well on still images, with no inter-frame information required. The idea is to incrementally apply more blurred versions of your image to a MOG2 background subtractor. That way you end up with a mask that pretty much nails the areas of interest. Then I use that mask to extract the areas of interest from the original image and add some contour information. In the end, I use ORB to extract features from images prepared like that.

The steps above visualized:
[images: visualization of steps 0–3]

@kallaballa (Author)

Oh... btw :)

[animation: van Gogh morph]

@jankovicsandras (Owner)

Thanks for the updates, I see you're making good progress with Poppy! :)

I like the van Gogh Vimeo video, but this gif looks even better.

My observation is that several "effects" often happen at once during morphing (in Poppy, autoimagemorphjs, and others): alpha blending, movement/stretching, blur, etc.
I think (and this is just my subjective preference, you don't have to agree :) ) that movement/stretching/rotation looks best; it is the essence of the illusion, though of course some alpha blending is also required.
With autoimagemorphjs, too much or very noticeable alpha blending happens when the feature grid is either too small or too big. Of course, deciding the best feature grid for each image is a difficult problem, and it's out of scope for autoimagemorphjs for now.
