ARCore-NLP-persistent-augmentation

This sample project lets users match voice commands to 3D models and load them dynamically into the augmented world. The voice command is processed with StanfordNLP, while the models are fetched and loaded dynamically from the Google Poly API. The AR interaction is implemented with ARCore and Sceneform.

Additionally, thanks to Cloud Anchors, the augmented experience is persistent: the user can add models to a 3D room and retrieve them at any time. The 3D room stores details such as asset IDs (corresponding to Google Poly assets) and cloud anchor IDs, making it possible to recover the state of several models throughout the room.

Table of contents

1. Demo

2. Installation tutorial

3. Architecture details

4. Limitations

1. Demo

Watch the video

2. Installation tutorial

  • Download or clone the repository
  • Add your own Google Poly API key in the AndroidManifest.xml file:
            <meta-data
                android:name="com.google.android.ar.API_KEY"
                android:value="YOUR_KEY_HERE" />
    You can create your own Poly API key using the official documentation.
  • Make sure you can run the app on a physical device that supports ARCore. You can check the devices available here.
  • Install the app and grant the Audio permission when prompted, as shown below:

Provide Audio permissions

  • Enjoy!
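For the audio permission prompt to appear at all, the permission must be declared in the app's manifest. The exact declaration is standard Android (this is a general illustration, not copied from the project's manifest):

```xml
<!-- Required so the app can record the user's voice commands -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```

On Android 6.0+ this is a dangerous permission, so the app must also request it at runtime before starting speech capture.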

3. Architecture details

How the flows map to the modules

Module A. Transform and process voice commands

Module A
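In the project, Module A relies on StanfordNLP to pick out the object the user asked for. As a rough, hypothetical illustration of the idea only (the real module uses proper part-of-speech tagging, and none of these names come from the project), a naive keyword extractor could look like this:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class CommandParser {
    // Words that carry no object information in a typical voice command.
    private static final List<String> STOP_WORDS = Arrays.asList(
            "please", "add", "place", "put", "a", "an", "the", "here");

    // Returns candidate search keywords for the asset lookup,
    // e.g. "Please place a blue chair here" -> [blue, chair].
    public static List<String> extractKeywords(String command) {
        return Arrays.stream(command.toLowerCase(Locale.ROOT).split("\\s+"))
                .filter(word -> !STOP_WORDS.contains(word))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(extractKeywords("Please place a blue chair here"));
    }
}
```

StanfordNLP's tagger is far more robust than a stop-word list (it can distinguish nouns and adjectives regardless of phrasing), but the output is the same in spirit: a set of keywords describing the requested model.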

Module B. Visualize corresponding assets and perform selection

Module B
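Module B turns the extracted keywords into a Poly API query. A minimal sketch of building such a request URL, using the documented assets.list endpoint and its keywords, format, and key parameters (the key value is a placeholder, and this is an illustration rather than the project's code):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class PolyRequestBuilder {
    private static final String BASE_URL = "https://poly.googleapis.com/v1/assets";

    // Builds a Poly assets.list request searching for assets that match
    // the given keywords, restricted to a format Sceneform can handle.
    public static String buildListUrl(String keywords, String apiKey)
            throws UnsupportedEncodingException {
        return BASE_URL
                + "?keywords=" + URLEncoder.encode(keywords, "UTF-8")
                + "&format=OBJ"
                + "&key=" + apiKey;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildListUrl("blue chair", "YOUR_KEY_HERE"));
    }
}
```

The JSON response lists matching assets with their asset IDs and download URLs, which the app can then fetch and hand to Sceneform for rendering.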

Module C. Persistent AR experience

Module C
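The 3D room that Module C persists boils down to a mapping between cloud anchor IDs and Poly asset IDs, so that each anchor can be re-resolved and its model re-downloaded in a later session. A minimal sketch of such a record and its local serialization (the class, field names, and line-based format here are hypothetical, not taken from the project):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class RoomState {
    // cloud anchor ID -> Google Poly asset ID
    private final Map<String, String> placedModels = new LinkedHashMap<>();

    public void addModel(String cloudAnchorId, String polyAssetId) {
        placedModels.put(cloudAnchorId, polyAssetId);
    }

    // Serializes the room as "anchorId:assetId" lines so it can be stored
    // locally and restored on the next session.
    public String serialize() {
        return placedModels.entrySet().stream()
                .map(e -> e.getKey() + ":" + e.getValue())
                .collect(Collectors.joining("\n"));
    }

    public static RoomState deserialize(String data) {
        RoomState room = new RoomState();
        if (data.isEmpty()) return room;
        for (String line : data.split("\n")) {
            String[] parts = line.split(":", 2);
            room.addModel(parts[0], parts[1]);
        }
        return room;
    }

    public Map<String, String> getPlacedModels() {
        return placedModels;
    }
}
```

On restore, each stored cloud anchor ID is passed to the Cloud Anchors resolve call, and the paired asset ID tells the app which Poly model to attach at the resolved pose.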

4. Limitations

  • Anchors persist for at most 24 hours, a limit imposed by the Cloud Anchors API
  • The 3D room state is persisted only locally, on the device
