
Native Desktop example location #734

Open
invicted-ceo opened this issue Nov 8, 2018 · 4 comments

Comments

@invicted-ceo

I'm having trouble finding the desktop application after compiling. I'm compiling with the commands from the README, as follows for macOS:
cmake -H. -B_builds -GXcode -DHUNTER_STATUS_DEBUG=ON -DDRISHTI_BUILD_EXAMPLES=ON
cmake --build _builds --config Release

When I look in _builds/ I find a few executables, but nothing that seems immediately usable. Is there documentation on just getting the desktop applications running? I'd like to run the examples on some test videos.

Do I need to download the model files externally from the resource page?

@ruslo
Collaborator

ruslo commented Nov 8, 2018

Can you try searching for the facefilter-desktop file in the _builds directory?

@swkonz

swkonz commented Nov 9, 2018

Having the same issue. I found the facefilter-desktop file in _builds, but I'm having trouble running it now. Which model file should I use? When I run it with a few different model files I get the same error message:
Failed to read a video frame with requested dimensions, received 1920x1080 expected 0x0
I'm passing a .mov video file. The videos are limited to just the eyes, so I don't necessarily need the full face fitting, just the eye model fitting. Any advice would be appreciated.

@headupinclouds
Collaborator

> Is there documentation on just getting the desktop applications running?

There is a readme w/ instructions for this console app: https://github.com/elucideye/drishti/tree/master/src/app/hci

The desktop facefilter app should be fairly similar. I noticed that one isn't currently installed by CMake, and it should be; I'll fix that. As ruslo mentioned, it should be in the build tree. I can adapt that readme for the facefilter app.

You can download the models manually as shown in that readme. They are also installed internally as part of the build process to support the tests.

@headupinclouds
Collaborator

headupinclouds commented Nov 9, 2018

> I'm passing a .mov video file. The videos are limited to just the eyes, so I don't necessarily need the full face fitting, just the eye model fitting. Any advice would be appreciated

The facefilter app is geared toward selfie video, and it assumes a full face is visible without any strong FOV-related clipping. It runs: (1) face detection (using accelerated ACF models); (2) coarse landmarks at low resolution to localize the eyes; (3) eye models. From what you describe, I don't think it will work for you as-is.

If you have video that only contains eyes, you will need to adapt this to get it to work. FWIW, the eye models will work on eye crop images if you already have reasonable bounding boxes. You can use the installed drishti-eye console application on eye crop images with a 4:3 aspect ratio and padding similar to what is shown in the README.

One example of calling the model through the SDK can be seen in the following test:

```cpp
TEST_F(EyeSegmenterTest, ImageValid) // NOLINT (TODO)
{
    for (auto iter = getFirstValid(); iter != m_images.end(); iter++)
    {
        // Make sure image has the expected size:
        EXPECT_EQ(iter->second.image.getCols(), iter->first);

        drishti::sdk::Eye eye;
        int code = (*m_eyeSegmenter)(iter->second.image, eye, iter->second.isRight);

        // Sanity check on each model:
        EXPECT_EQ(code, 0);
        checkValid(eye, iter->second.storage.size());

        // Ground truth comparison for reasonable resolutions
        if ((iter->first > 128) && m_eye)
        {
            const float threshold = (iter->first == m_eye->getRoi().width) ? m_scoreThreshold : 0.5;
            ASSERT_GT(detectionScore(eye, *m_eye), threshold);
        }
    }
}
```

If you want to run on video and you don't have a full face visible, then you can probably run an object detector directly to find bounding boxes for the eyes, and then run the eye model regression on those using the SDK.

The ACF repo does have a few eye detection models trained on high-resolution images from unsplash.com. Those might work for you. See:

https://github.com/elucideye/acf/blob/fe2738dc5d086092c9a708d15cce06369300c7a2/CMakeLists.txt#L241-L255

https://github.com/elucideye/acf/releases/download/v0.0.0/acf_unsplash_60x40_eye_any_color_d4.cpb

https://github.com/elucideye/acf/releases/download/v0.0.0/acf_unsplash_60x40_eye_any_gray_d4.cpb
That detector is used in this repository via the acf package.
