
OpenFace License Information. No Commercial Use Without Purchase, $18k Per Year #27

Open
Hunanbean opened this issue Aug 3, 2020 · 20 comments

Comments

@Hunanbean

Hunanbean commented Aug 3, 2020

I just learned that OpenFace, which this project depends on, cannot be used for commercial purposes without purchasing a license.
Important points from the license:

The non-exclusive commercial license requires a non-refundable annual royalty

(USD $10,000 for OpenFace Light,
USD $15,000 for OpenFace Landmark,
USD $18,000 for OpenFace Full Suite; see each offering for details).
The license is non-negotiable.

Information required to complete the license:

Legal company name
State/country of incorporation
Type of corporation
Principal address of corporation
Name and title of person who will sign the license
Title of the person who will sign the license
Address for notices, including name and email of person to be notified

https://cmu.flintbox.com/#technologies/5c5e7fee-6a24-467b-bb5f-eb2f72119e59

Thank you again for making this. I just felt this is an important point to note, as I was rather far into using this when I discovered that I had to pay someone else to be able to use it commercially.

@NumesSanguis
Owner

I see, I guess the documentation could point this out more clearly.
There is a file called LICENSE_note.txt, which states: "FACSvatar integrates many projects, which can contain different licenses."
It also refers to the documentation, where a page called License info can be found. That page states more clearly that there are commercial restrictions.
I should state somewhere prominently: "SOME MODULES USE NON-COMMERCIAL LICENSES, please check this page for more info: ".

https://cmu.flintbox.com/#technologies/5c5e7fee-6a24-467b-bb5f-eb2f72119e59

Thank you for the link. That page describes the licensing terms better than their GitHub page. It did not exist (at least as far as I'm aware) when I made that license page. I'll update the documentation with that one.

The idea of FACSvatar is that it is not dependent on a specific module. As long as someone develops a tracker that outputs AU values (based on FACS), it can be used as a drop-in replacement for OpenFace. Everything else will continue to work as normal.
Personally, I have not needed another tracker yet (and OpenFace was the best one I could find at the time).
I would welcome pull requests for new trackers though ^-^
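To illustrate what a drop-in tracker could look like, here is a minimal sketch of a module publishing AU values as JSON over ZeroMQ. The socket address, topic name, and JSON keys below are assumptions for illustration only, not FACSvatar's actual message schema; check the FACSvatar docs for the real format.

```python
# Hypothetical sketch of a drop-in tracker module publishing FACS AU values.
# Port, topic, and message keys are illustrative assumptions.
import json
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5570")  # assumed address/port

for _ in range(300):  # ~10 seconds of frames at 30 FPS
    frame_msg = {
        "timestamp": time.time(),
        "au_r": {"AU01": 0.3, "AU06": 1.2, "AU12": 2.0},  # AU intensities (0-5)
    }
    pub.send_multipart([b"facs.tracker", json.dumps(frame_msg).encode("utf-8")])
    time.sleep(1 / 30)
```

Any tracker that emits messages in the agreed format could then replace OpenFace without touching the rest of the pipeline.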

@NumesSanguis
Owner

NumesSanguis commented Aug 3, 2020

Also note that the FACSvatar code itself is licensed under LGPL. This allows commercial usage, but any changes to the code need to be published. The L in LGPL means that only FACSvatar code changes need to be published, not any code that interfaces with FACSvatar. So you can combine it with commercial closed-source code.

Publishing your code would mean more functionality, making it useful for more people, leading to more people contributing, which increasingly improves the project. So please do publish what you make :)

@Hunanbean
Author

Thank you for the clarifications. I will have to research a bit further to make sure that StrongTrack outputs actual AU values and not something project-specific. But as far as I know, you can set it up to output the tracking information in a variety of formats. It is still in the early stages, but it does have proven functionality. I am also awaiting clarification of his license, but from what he has said in his demonstration videos, it is freely usable for commercial purposes.

StrongTrack on GitHub

Thank you, and be well!

@Hunanbean
Author

Update about StrongTrack: I must have been flustered by the OpenFace license when I was checking the StrongTrack license. Rob in Motion has a clearly stated GNU General Public License v3.0 on his GitHub. I have zero programming experience, so hopefully someone will be kind enough to get these two awesome projects working together.

@NumesSanguis
Owner

Thank you for linking that project, it's interesting. For now it seems to do only lip tracking, which OpenFace also does (not sure which one does it better), but full face tracking is coming. License-wise it's somewhat better for commercial projects, but just as a warning: the GPL is quite hard to manage in a commercial project (as opposed to FACSvatar's LGPL), unless you're planning to distribute all your project files.

Personally, I'm not using this project for commercial purposes, so honestly OpenFace is enough for me. More supported trackers are of course better for the health of this project, so it would be great if someone created support for it.

@Hunanbean
Author

It is very confusing territory. The code of a given program may be licensed under GPL instead of LGPL (or any of a myriad other licenses), but what about the output of that software? This is a subject that even highly paid lawyers have trouble deciphering. (A Google search will turn up many videos of lawyers saying that the language of the GPL is at times undecipherable, and also contradictory.) For example, Blender is under the GPL, but anything I create with it is fully mine. I would think that is the general intent of the people who release under the GPL. But again, I think in many cases people release under the GPL, intending to say "hey, use this for whatever", without realising the effect of that license on other software that either derives from it or uses its output. So I guess the pertinent question is: if I am using GPL 3.0 software, do I fully own the content I create with it? It may be worthwhile for people to watch what Linus Torvalds has to say about GPL 3.0 in general. It is not good.

But my point is, even though StrongTrack is GPL, the output of that program should be readily and legally usable to send data to FACSvatar without issue. It seems people are following the pack and releasing under GPLv3 without really understanding the limitations this puts on, for example, indie game developers.

I really do not know, because I surely cannot comprehend all these licenses.

@Hunanbean
Author

Regarding StrongTrack: Rob in Motion has stated that he is actively working on extending the type and amount of what it can track. I get the impression that when he is done, it will be comparable to OpenFace's ability.

@Hunanbean
Author

Further information: I went ahead and contacted the lawyers for OpenFace and posed the following question.

"I just want to use the Output from this tracker to make my game characters move their face. does this really require a commercial license?
I do not want to include any code, or use this for advanced analysis. I just want to use it to make my indie game characters move their face. I can run the output through an open software called FACSvatar to animate my characters faces. Does this require a commercial license? yes, i will be selling the game. No, your core technology would not be included. Only the Output tracking point numbers."

The response I received stated that yes, this would fall under commercial license requirements.
Just wanted to clarify that for others, as I was not sure that it would, since I am not using any code from the project in my project.
I also wrote back pointing out the advancements in the open-source community, such as StrongTrack, and suggested that they may wish to lower the price of their license since they are moments away from 'free' competition. I do not know if there will be a response.

@NumesSanguis
Owner

@Hunanbean Sorry to hear that :/
I doubt they're going to lower their price as long as there is no good FACS competitor.
Also, it doesn't seem that StrongTrack is going to target that specific system of describing the face...
Maybe once StrongTrack does robust face tracking, the output will be translatable to FACS though?

Just out of interest, and given the chance of another FACS tracker coming out, around what time period are you planning to release your indie game?

@Hunanbean
Author

Hunanbean commented Aug 11, 2020 via email

@NumesSanguis
Owner

I'm in the process of rewriting the documentation, although some other stuff has grabbed my attention. I thought my mention of the license was clear enough, but it seems it can be better. So until the new documentation is clearer, we can leave this issue open.


Good luck with your game! If I come across another FACS tracker, I'll let you know.

@Hunanbean
Author

Hunanbean commented Aug 14, 2020 via email

@fire

fire commented Aug 30, 2021

Proposing a Google MediaPipe Face Mesh integration.

@NumesSanguis
Owner

NumesSanguis commented Aug 31, 2021

@fire I think you're talking about this solution right?:
https://google.github.io/mediapipe/solutions/face_mesh.html

Thanks for the suggestion! I've been wanting to use Google's face mesh since it was part of Android ARCore, but it was too hard to separate the code. Now that it's a Python package, it seems much easier to use.
Just one thing I noticed from the video with the child: the tracking is still symmetric across the face. That means it cannot track one eyebrow raised while the other eyebrow is not.

There is still work to be done before it can be used though. MediaPipe Face Mesh only gives the coordinates of the tracking points, which are not in the FACS format yet. So someone would need to make a mapping from this tracker to FACS. This is not trivial, and I'm not actively working on this project anymore.
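As a rough starting point, a minimal sketch of extracting the raw face mesh coordinates (assuming the mediapipe and opencv-python packages; "face.jpg" is a hypothetical input, and this produces only raw points, not AU values):

```python
# Minimal sketch: extract Face Mesh landmark coordinates from one image.
# Converting these raw (x, y, z) points to FACS AU values is the missing step.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1)

image = cv2.imread("face.jpg")  # hypothetical input image
results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    print(len(landmarks), "landmarks")                      # 468 mesh points
    print(landmarks[0].x, landmarks[0].y, landmarks[0].z)   # normalized coordinates
```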

Why FACS? This project wants to stay tracker and model agnostic, and FACS is a great description of human facial muscles. An image explaining this can be found in the v0.4.0 branch:
https://github.com/NumesSanguis/FACSvatar/tree/v0.4.0

@fire

fire commented Aug 31, 2021

Can you sketch a way to map from Google MediaPipe Face Mesh to FACS? I have no idea where to start.

@NumesSanguis
Owner

Mapping would mean that you translate the values of the mesh to muscle contraction/relaxation (Action Unit - AU). Here you can find some basic information on the Facial Action Coding System (FACS):
https://facsvatar.readthedocs.io/en/v0.4.0/avatars/facs_theory.html

Mapping is a research project in itself, and the most common way is to find a database with images/videos that have FACS annotations (trained humans looked at the face and provided a value between 1 and 5 for every AU in the face). Then you would train a model that learns to map the mesh points to these values.
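A minimal sketch of that training step, assuming you already have a FACS-annotated dataset flattened into arrays. The random arrays and the ridge-regression choice here are illustrative assumptions only, not how OpenFace actually does AU detection:

```python
# Sketch: learn a mapping from flattened face-mesh coordinates to AU intensities.
# X and Y are placeholders; in practice they would come from a FACS-annotated
# dataset (frames with human-coded AU intensity values).
import numpy as np
from sklearn.linear_model import Ridge

n_frames, n_landmarks, n_aus = 1000, 468, 17
X = np.random.rand(n_frames, n_landmarks * 3)   # placeholder (x, y, z) per landmark
Y = np.random.rand(n_frames, n_aus) * 5          # placeholder AU intensities (0-5)

model = Ridge(alpha=1.0).fit(X, Y)               # simple multi-output regression

new_frame = np.random.rand(1, n_landmarks * 3)
au_values = model.predict(new_frame)             # one predicted intensity per AU
```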

A paper by OpenFace on how they did Facial Action Unit detection:

@Hunanbean
Author

Based on my experience creating a full set of CMU-compatible visemes for use with the Papagayo-NG lip-sync software, I would be willing to handle the mapping, contingent on there being a clear path to do so. What I mean is, I am not a programmer, so that portion would have to be laid out ahead of time, but as far as accurately transposing the visual information goes, I believe it is something I could do a good job of.

@NumesSanguis
Owner

NumesSanguis commented Sep 1, 2021

@Hunanbean Thanks for your offer! That's actually a different step in the process, but also very important. The process is as follows:

  1. Tracking software generates 3D tracking dots -->
  2. Convert the tracking dots to the FACS format (a value per AU) -->
  3. Map the AU values onto a 3D model.

What @fire is asking for is step 2, but what you are willing to help with is step 3. The advantage of this 3-step process (as opposed to directly mapping the tracking dots to a 3D model) is that once you've completed step 3, any tracker with mapped AU values can be used to animate that model.
Similarly, if you have a tracker producing AU values (which are 3D-model agnostic), it can be matched to various 3D models as long as they are FACS compatible. Mapping FACS onto a model that is not FACS compatible (like MB-Lab) is also much more straightforward than mapping 3D tracking points.

As a bonus, the FACS format lets you do additional things like interpolation, exaggerating certain parts of the facial expression, and even AI applications.
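For step 3, a minimal sketch of what driving a model could look like inside Blender. The "FaceMesh" object name and the "AU06"/"AU12" shape-key names are assumptions for illustration; they depend entirely on how the model's shape keys are set up:

```python
# Sketch: drive a 3D model's shape keys from AU values inside Blender.
# Object and shape-key names are hypothetical; the model must expose
# FACS-named shape keys (or a mapping to its own names).
import bpy

au_values = {"AU06": 0.4, "AU12": 0.8}  # e.g. received from a tracker, scaled 0-1

obj = bpy.data.objects["FaceMesh"]
shape_keys = obj.data.shape_keys.key_blocks

for au, value in au_values.items():
    if au in shape_keys:
        shape_keys[au].value = value  # shape-key values typically range 0.0-1.0
```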

@Hunanbean
Author

If tracking information is generated, I think I can take that information and convert/match it to FACS values, depending on the type of information the tracker outputs. But I would need to see the output data from the tracker to verify it is something I can do. Basically, I would need it to be at a point where it is kind of a template to fill in: for example, the tracker outputs w = .2, q = .5, g = .1, and then I manually interpret those values as AU12 = .x, AU17 = .whatever, etc., until a viable conversion database is made. I am willing to do the work, but again, I need to make sure I can.
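A minimal sketch of what such a hand-built conversion table could look like; the tracker parameter names ("w", "q", "g") and the weights are made up purely for illustration and would be filled in manually:

```python
# Sketch: hand-authored table mapping hypothetical tracker outputs to AU values.
# Entries are placeholders, to be tuned by comparing tracker output against
# the facial expression it describes.
CONVERSION_TABLE = {
    "w": {"AU12": 0.9, "AU06": 0.3},  # e.g. a mouth-corner parameter
    "q": {"AU01": 1.0},               # e.g. an inner-brow parameter
    "g": {"AU17": 0.7},               # e.g. a chin parameter
}

def tracker_to_facs(tracker_frame: dict) -> dict:
    """Combine weighted contributions of tracker parameters into AU values."""
    aus = {}
    for param, value in tracker_frame.items():
        for au, weight in CONVERSION_TABLE.get(param, {}).items():
            aus[au] = aus.get(au, 0.0) + weight * value
    return aus

print(tracker_to_facs({"w": 0.2, "q": 0.5, "g": 0.1}))
```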

@NumesSanguis
Owner

@Hunanbean @fire I did a quick exploration of MediaPipe Face Mesh. As this issue is about OpenFace licensing, I created a separate issue for this feature request.

Use Google's project MediaPipe Face Mesh as FACS tracker: #33
