On the robustness of the headtrackr #29

Open
nicholaswmin opened this issue Jul 13, 2014 · 10 comments
@nicholaswmin

Hi,

I'm trying to use headtrackr to get ONLY the X position of the head.
I'm using it in a small game I've built, which looks pretty nice so far.

At the moment I ask my user to present his full face so headtrackr can lock onto his head, and then I ask him to tilt the screen a bit so that only the face from the chin up is visible (I want to hide the neck, since low-cut tops create color interference).

I would like to ask 2 things:

What is the best advice I should give my users to make my game as robust/accurate/fast as possible? What calibration recommendations do you suggest I present, and what are the "perfect" conditions for headtrackr to work at its best?

The goal is to make the head tracking as robust as possible across different lighting environments.
I also need the head-position detection to be as predictable as humanly possible (sometimes the head tracker goes all nuts on me and starts swinging right and left, losing its center).

At the moment I only advise that, during head tracking, the user ensure both sides of his face are evenly and brightly illuminated.
As a second calibration step, I advise the user to tilt his laptop screen up until the neck is out of the frame.

Second thing:

Of course, any advice on parameters I might pass when starting headtrackr is welcome. (Should I use the facetracking x position or the headtracking x position? Should I calculate angles, etc.?)
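
For reference, here's roughly how I read the X position right now. A minimal sketch: the event name is the one from the headtrackr README, while the element IDs are mine:

```js
// Minimal setup: grab only the x coordinate of the tracked face.
// <video id="inputVideo"> and <canvas id="inputCanvas"> are assumed to exist.
var videoInput = document.getElementById('inputVideo');
var canvasInput = document.getElementById('inputCanvas');

var htracker = new headtrackr.Tracker();
htracker.init(videoInput, canvasInput);
htracker.start();

// facetrackingEvent fires with the position of the face on the canvas.
document.addEventListener('facetrackingEvent', function (event) {
  var faceX = event.x; // x position (in pixels) of the face center
  // feed faceX into the game loop here
});
```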

Thanks in advance, man, and thanks for the work!

@nicholaswmin nicholaswmin changed the title The robustness of headtrackr On the robustness of the headtrackr Jul 13, 2014
@auduno
Owner

auduno commented Jul 15, 2014

Hi, glad you like headtrackr!

headtrackr relies a lot on facial colors to track the face, so to get good results it's most important that the light is even and that there are no skin-colored objects in the background. I usually get the best results if I face a window or another light source, so that there are no shadows on my face.

Regarding precision, have you had a look at clmtrackr, which I've also made? It has much more precise facial tracking than headtrackr, but might be slower on some systems.

@nicholaswmin
Author

Yep, I definitely did, but clmtrackr is nowhere near the tracking speed of the camshift algorithm.

My game relies on my players moving their heads rapidly right and left, and clmtrackr loses track in that case.

On the other hand, headtrackr keeps up to speed just fine, but it loses its "focus" somewhat easily.

Is there any possibility of mixing the two algorithms? I'm aware that anything other than camshift is not available in real time, but maybe some on-the-fly corrections, by re-running Viola-Jones at more regular intervals, could improve accuracy.


@auduno
Owner

auduno commented Jul 15, 2014

Yeah, it might help to stop and start headtrackr at regular intervals (via stop() and start()) if it loses focus often. I don't really know of any other algorithms that are as fast as camshift but more precise, unfortunately.
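
Something like this, for instance. A rough sketch: htracker is assumed to be an initialized tracker instance, and the interval is arbitrary:

```js
// Naive approach: force a full face re-detection every 10 seconds
// by stopping and restarting the tracker.
setInterval(function () {
  htracker.stop();
  htracker.start();
}, 10000);
```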

@nicholaswmin
Author

I already do this, but I have no way of automatically detecting whether focus was truly lost. Plus, the delay in re-detecting the face is counter-intuitive.

Maybe a simple algorithm could be built that takes as input the events already emitted by headtrackr (head width/height changing rapidly might do), then restarts Viola-Jones in the background via a web worker and passes the new detection as a message to the original headtrackr thread.
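
Something like the following heuristic is what I have in mind for the trigger (just a sketch: htracker is assumed to be the running tracker instance, and the 50% threshold is a guess):

```js
// Heuristic: if the tracked box's width or height jumps by more than
// 50% between consecutive frames, treat the track as lost and restart.
var lastWidth = null;
var lastHeight = null;

document.addEventListener('facetrackingEvent', function (event) {
  if (lastWidth !== null) {
    var dw = Math.abs(event.width - lastWidth) / lastWidth;
    var dh = Math.abs(event.height - lastHeight) / lastHeight;
    if (dw > 0.5 || dh > 0.5) {
      // Dimensions changed too fast: probably lost the face.
      htracker.stop();
      htracker.start(); // triggers a fresh Viola-Jones detection
    }
  }
  lastWidth = event.width;
  lastHeight = event.height;
});
```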

I'm no scientist, nor experienced enough to be suggesting things, but I'm trying to find ways to improve this.

I don't really think anything other than camshift can be used, as you said yourself; I just have a hunch that there's room for improvement.


@Neon22

Neon22 commented Jul 15, 2014

Side note: clmtrackr seems to be using an old fork of numeric.js, which is now faster.
http://numericjs.com/wordpress/?p=79

@auduno
Owner

auduno commented Jul 15, 2014

I actually thought about looking at the dimensions of the box in order to detect when tracking fails, but in my experiments this restarted the tracking too often when it shouldn't. If you run the detection in the background, though, the slowdown/lag of face detection might not be an issue, so it's certainly worth a try.
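
For what it's worth, the plumbing for that background detection could look something like this. Purely a sketch: detector.js and detectFace() are hypothetical placeholders for whichever Viola-Jones implementation gets bundled into the worker, and headtrackr currently has no public API for reseeding the camshift window with the result:

```js
// main thread: ship the current frame to a background worker for re-detection
var detectionWorker = new Worker('detector.js'); // hypothetical worker script

function requestRedetection(canvas) {
  var frame = canvas.getContext('2d').getImageData(0, 0, canvas.width, canvas.height);
  // transfer the pixel buffer to the worker instead of copying it
  detectionWorker.postMessage(
    { width: frame.width, height: frame.height, pixels: frame.data.buffer },
    [frame.data.buffer]
  );
}

detectionWorker.onmessage = function (event) {
  var rect = event.data; // {x, y, width, height} of the re-detected face
  // reseeding camshift with rect is the part headtrackr would need to expose
};

// ----- detector.js (runs in the worker) -----
// onmessage = function (event) {
//   var rect = detectFace(event.data); // placeholder Viola-Jones detector
//   postMessage(rect);
// };
```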

@Neon22: where did you find an old version of numeric.js in clmtrackr? I thought I was already using version 1.2.6, but it might be that I forgot to remove it somewhere.

@Neon22

Neon22 commented Jul 15, 2014

Actually, I think it's just the link pointing to sloisel rather than spirozh (https://github.com/spirozh/numeric). Sorry for the misdirect.

@nicholaswmin
Author

In general, there should be some telltale signs that the tracking has failed:

  • Maybe one could make the detection stricter, telling the user to change position when the background doesn't have enough contrast for an accurate detection, or detect whether the face has shadows and tell the user to light it evenly. The UI factor can play a role here.

Some time ago I remember seeing you mention somewhere that hue/saturation are not implemented, yet the camshift algorithm does take those into account.

Is this still the case?


@auduno
Owner

auduno commented Jul 25, 2014

The original camshift paper mentions using only hue and saturation to track the face, but in my experiments I found that this didn't work as well as just using RGB, so I ended up using only RGB information to track the face. So hue and saturation are not implemented.
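
For context, the RGB variant boils down to histogram back-projection over quantized RGB bins, roughly like this (an illustrative sketch, not the actual headtrackr code):

```js
// Build a quantized RGB histogram (4 bits per channel -> 4096 bins)
// from the pixels inside the face rectangle.
// rect = {x, y, width, height} in integer pixel coordinates.
function buildHistogram(imageData, rect) {
  var hist = new Float32Array(4096);
  var data = imageData.data;
  for (var y = rect.y; y < rect.y + rect.height; y++) {
    for (var x = rect.x; x < rect.x + rect.width; x++) {
      var i = (y * imageData.width + x) * 4;
      var bin = ((data[i] >> 4) << 8) | ((data[i + 1] >> 4) << 4) | (data[i + 2] >> 4);
      hist[bin]++;
    }
  }
  return hist;
}

// Back-project: each pixel's "face probability" is its bin's frequency.
// Camshift then mean-shifts the search window toward the densest region.
function backProject(imageData, hist) {
  var probs = new Float32Array(imageData.width * imageData.height);
  var data = imageData.data;
  for (var p = 0; p < probs.length; p++) {
    var i = p * 4;
    var bin = ((data[i] >> 4) << 8) | ((data[i + 1] >> 4) << 4) | (data[i + 2] >> 4);
    probs[p] = hist[bin];
  }
  return probs;
}
```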

@nicholaswmin
Author

Very well.

If I get some time off, I'll try testing what happens with web workers doing async face detection in the background, and I'll let you know how it went. I have an idea in mind, but I doubt I'll find the time and interest to do it.

Until then, thanks a lot for the info, and for the lib of course.

