HoloLens 2 sample from workshop #246

Open
mrcfschr opened this issue Jul 21, 2022 · 1 comment

Comments

@mrcfschr

At around the 2-hour mark in Platform for Situated Intelligence Workshop | Day 2, a HoloLens 2 sample is shown that allows the user to look at an object and see a tag with its name in a different language next to it.
I would really like to try out this sample and learn from it, but I couldn't find the code anywhere in the psi repo or the psi samples repo.
If I missed it, could someone please point me to it? If it's not public yet, would it be possible to share the code?

Thank you!

@danbohus
Contributor

We have not released that demonstrator as a sample, as it relies on some psi components (specifically for the translation service) that we have not released yet. Unfortunately, we do not have plans to release this in the immediate future.

However, if you're looking to learn more about how to use HoloLens with psi, good starting points are the HoloLens sample app and the Mixed Reality Overview. In addition, the What Is That sample app implements much of the functionality shown in the HoloLens workshop demo. In fact, a good learning experience might be to try to reconstruct the workshop demo based on these two samples.

The What Is That sample app shows how to compute the intersection of the hand-pointing direction with the scene mesh, project that point back into the RGB image, crop around it, and send the cropped image for object detection. One would have to replace the Kinect sensor component with the HoloLens sensing components, and use the gaze ray (as opposed to the hand direction) to intersect with the mesh. What's missing then is passing the results of the object detection to a translation service component; the repo contains examples of how to wrap other Cognitive Services for speech and vision, and a similar pattern can be used to wrap the Translator service (a sketch of one possible wrapper is shown below).

Finally, the HoloLens sample shows how to render things on device, and there are rendering components in the repo, such as TextStereoKitRenderer, that can be used to render the floating text / notes.
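To make that last step more concrete, here is a minimal sketch of what a translation wrapper component might look like, written as a psi `ConsumerProducer` that calls the Azure Translator v3.0 REST endpoint. The class name, constructor parameters, and JSON handling are assumptions for illustration only; this is not code from the psi repo or from the unreleased workshop components.

```csharp
using System.Net.Http;
using System.Text;
using System.Text.Json;
using Microsoft.Psi;
using Microsoft.Psi.Components;

/// <summary>
/// Sketch of a translation component: consumes object labels (strings) and produces
/// translated labels by calling the Azure Translator v3.0 REST API.
/// </summary>
public class TranslatorComponent : ConsumerProducer<string, string>
{
    private static readonly HttpClient Http = new HttpClient();
    private readonly string key;       // Azure Translator subscription key (hypothetical configuration)
    private readonly string region;    // Azure resource region
    private readonly string toLanguage;

    public TranslatorComponent(Pipeline pipeline, string key, string region, string toLanguage = "es")
        : base(pipeline)
    {
        this.key = key;
        this.region = region;
        this.toLanguage = toLanguage;
    }

    protected override void Receive(string label, Envelope envelope)
    {
        // The Translator v3.0 API expects a JSON array of { "Text": ... } items.
        var uri = $"https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to={this.toLanguage}";
        using var request = new HttpRequestMessage(HttpMethod.Post, uri)
        {
            Content = new StringContent(
                JsonSerializer.Serialize(new[] { new { Text = label } }),
                Encoding.UTF8,
                "application/json"),
        };
        request.Headers.Add("Ocp-Apim-Subscription-Key", this.key);
        request.Headers.Add("Ocp-Apim-Subscription-Region", this.region);

        // Blocking call for simplicity; a real component would likely use an async pattern.
        var response = Http.SendAsync(request).GetAwaiter().GetResult();
        response.EnsureSuccessStatusCode();
        var json = response.Content.ReadAsStringAsync().GetAwaiter().GetResult();

        // The response is an array of results, each with a "translations" array.
        using var doc = JsonDocument.Parse(json);
        var translated = doc.RootElement[0].GetProperty("translations")[0].GetProperty("text").GetString();

        // Post the translation on the output stream with the originating time of the input label.
        this.Out.Post(translated, envelope.OriginatingTime);
    }
}
```

Once the detection results are available as a stream of label strings, they could be wired in with something like `detectedLabels.PipeTo(translator)`, and the translator's output stream could then drive whatever component renders the floating tag on device.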

If you go down this path and run into specific questions about how to do something, don't hesitate to post again on this issue.
