Visual fields file format #95

Open
simwes opened this issue Feb 17, 2022 · 5 comments

simwes commented Feb 17, 2022

In the documentation, the visual fields file format contains one array for each parameter. The documentation states:

Each container holds multiple arrays, each shaped Nx2x2x512 for N frames, 2 eyes and 2 depth-layers per eye.

Question: in the array shape, what does 512 stand for?

Thank you

@mooch443 (Owner) commented:

Hey! The simulated visual field has a fixed angular width for each individual. This is measured in degrees, currently 130° in each direction, giving individuals a FOV of 260° (let me know if this is not enough for you). These 260° have to be mapped to a fixed number of probes, which is currently 512, so for each eye and depth layer you get roughly 1.97 array entries per degree.
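To make the mapping concrete, here is a minimal sketch in Python (assuming the arrays are loaded as NumPy arrays with the documented (N, 2, 2, 512) shape; the loading step and file name are hypothetical):

```python
import numpy as np

FIELD_RESOLUTION = 512   # probes per eye and depth layer
FOV_DEGREES = 260.0      # -130° .. +130° around the eye's viewing direction

# Hypothetical loading step -- adjust to however you export the data.
depth = np.load("visual_field.npz")["depth"]   # shape (N, 2, 2, 512)

print(FIELD_RESOLUTION / FOV_DEGREES)          # ~1.97 probes per degree

def index_to_relative_angle(i):
    """Angle of probe index i (0..511) in degrees, relative to the eye:
    -130° at index 0, approaching +130° at index 511."""
    return i / FIELD_RESOLUTION * FOV_DEGREES - FOV_DEGREES / 2.0
```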

Sorry for the incomplete docs, hope this clears it up.
Will add this to the list :-)
Thanks for your report!

@mooch443 mooch443 self-assigned this Feb 27, 2022
@mooch443 mooch443 added this to To do in Update documentation via automation Feb 27, 2022
@mooch443 mooch443 added the documentation Improvements or additions to documentation label Feb 27, 2022

simwes commented Mar 24, 2022

Hi,
Thank you for your answer. I have two questions:

  • What is the difference between the first and the second layer?
  • How exactly do you scale the depth?

Thank you!


mooch443 commented Mar 25, 2022

Hey,

so if you look at the picture below, the grey lines are the second layer. The focal individual in the middle shoots "rays", conceptually, and the first individual that is hit by the ray is in the first layer. If it hits another individual afterward, then this individual will be in the second layer.
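If you just want to pull the two layers out of the exported arrays, a minimal sketch (assuming the documented Nx2x2x512 shape, with axis 1 being the eye and axis 2 the depth layer; the loading step and file name are hypothetical):

```python
import numpy as np

# depth: shape (N, 2, 2, 512) -- frames, eyes, depth layers, probes
depth = np.load("visual_field.npz")["depth"]   # hypothetical export file

first_layer  = depth[:, :, 0, :]   # first individual hit by each ray
second_layer = depth[:, :, 1, :]   # next individual hit behind the first one
```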

The second question is a good one. I have had someone else ask recently and I really need to put together a working example for this. Essentially, this is how it is calculated:

// rp is the absolute position of a collision candidate
// e.pos is the absolute position of an eye
double d = (SQR(double(rp.x) - double(e.pos.x)) + SQR(double(rp.y) - double(e.pos.y)));
// so d is the squared distance (rp.x - e.pos.x)^2 + (rp.y - e.pos.y)^2 in pixels

(see Application/src/tracker/tracking/VisualField.cpp:368)

Meaning you would basically take the square root, and that should be the Euclidean distance in pixel coordinates. You will need the angle, too, if you want to calculate other things: for index i in the 512-entry 1D array, the angle in degrees is i / 512 * 260 - 130, spanning the 260° field of view. Each eye is rotated away from the straight-ahead viewing direction by 60°, with a negative offset for the left eye and a positive offset for the right.
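A minimal sketch of that conversion (assuming the stored depth values are the squared pixel distances d from the snippet above, and that eye index 0 is the left eye; both are assumptions on my part):

```python
import numpy as np

FIELD_RESOLUTION = 512
FOV_DEGREES = 260.0
EYE_OFFSET_DEGREES = 60.0   # eyes sit at -60° (left) / +60° (right) from straight ahead

def probe_angle_degrees(i, eye):
    """Angle of probe index i relative to the individual's heading;
    eye = 0 for left, 1 for right (assumed convention)."""
    relative = i / FIELD_RESOLUTION * FOV_DEGREES - FOV_DEGREES / 2.0
    offset = -EYE_OFFSET_DEGREES if eye == 0 else EYE_OFFSET_DEGREES
    return relative + offset

def probe_distance_pixels(d_squared):
    """Euclidean distance in pixels from the stored squared distance."""
    return float(np.sqrt(d_squared))
```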

Let's hope this is correct. Let me know if that works for you!


simwes commented Mar 28, 2022

Hi, thank you very much for the clarification. However, I still have some problems interpreting the values of the depth array from the visual field. In the picture below you can see a snapshot of the fish's visual field for two time frames (959 and 966):

[Figure 1: screenshot of the fish's visual field at frames 959 and 966]

I need to extract the distance between the fish's eye and an object in its visual field, plus the related angle (see the black arrow in the image). From the array called depth, I have plotted the first-layer values for each of the time frames shown in the snapshot above:

[Figure 2: first-layer depth values plotted for frames 959 and 966]

My questions:

  • How do I scale the depth values in Figure 2 so that they are in pixel units?
  • How do I obtain the angle so I can plot the depth in polar coordinates?

Thank you very much for your help!


mooch443 commented Aug 8, 2022

I am a bit late with my response, but in the hope that this is either still relevant or will help people in the future - here goes.

The angle of each datapoint index is calculated as follows:

index = (angle0 - fov_range.start) / len * T(VisualField::field_resolution);

So that means to get the angle from an index:

index = (angle0 - fov_range.start) / 260 * 512
// rearranging for angle0 (fov_range spans the 260° field of view):
angle0 = index * 260 / 512 + fov_range.start
// relative to the eye's viewing direction this is index * 260 / 512 - 130, as in the earlier comment

And just to be clear: the angle0 in these equations is the fish_angle + eye.angle, so it should be the absolute angle (with 0° being the horizontal X-axis).

Of course, that does not give you the angle marked in your pictures directly, but it does give you the angle for each dot in your Figure 2, meaning that the angular range is narrower in frame 966 than in 959, which makes sense. If you want the full angular range, you would have to add data from the second eye as well - it seems like you only plotted one eye.

To get the distance, I refer you to the previous message: essentially sqrt(d) should do the trick - although, to be precise with regard to your figure, it is sqrt(d) of the outermost point in your graph.
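Putting the pieces together, here is a minimal sketch of how one eye's first-layer depth values for a single frame could be converted to polar coordinates (angle in degrees, distance in pixels). It assumes the documented (N, 2, 2, 512) shape, that the stored values are the squared pixel distances d from above, and that the probe angle relative to the eye is index / 512 * 260 - 130; the loading step and file name are hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt

FIELD_RESOLUTION = 512
FOV_DEGREES = 260.0

depth = np.load("visual_field.npz")["depth"]    # hypothetical export, shape (N, 2, 2, 512)

frame, eye, layer = 959, 0, 0                   # first layer of one eye
d_squared = depth[frame, eye, layer, :]         # squared pixel distances per probe

indices = np.arange(FIELD_RESOLUTION)
angles_deg = indices / FIELD_RESOLUTION * FOV_DEGREES - FOV_DEGREES / 2.0
distances_px = np.sqrt(d_squared)               # Euclidean distances in pixels

ax = plt.subplot(projection="polar")
ax.scatter(np.deg2rad(angles_deg), distances_px, s=2)
plt.show()
```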
