ARKit 6.0

Since Apple’s introduction of ARKit, we have been consistently working to create various AR experiences leveraging the power of ARKit, RealityKit & SceneKit. Our goal is to improve the utility of mobile devices through AR experiences.

This demo, created using ARKit:

  • Creates geo-localized AR experiences using ARWorldMap
  • Detects objects and images
  • Marks specific objects and creates 3D renders in a point cloud
  • Shares information locally while maintaining the ARWorld session
  • Detects the user's sitting posture and provides feedback on it
  • Detects the user's standing posture along with joint angles

Features in this demo:

  • Image tracking
  • Face tracking
  • Sitting posture tracking
  • Standing posture tracking
  • Applying a live filter on detected surfaces
  • Save and load maps
  • Detect objects
  • Environmental texturing

Prerequisites

Before we dive into the implementation details, let's take a look at the prerequisites.

  • Latest Xcode
  • Latest iOS version
  • Physical iPhone (iPhone X or newer is recommended for performance)

Body tracking with angle detection

Body tracking is an essential ARKit feature that lets you track a person in the physical environment and visualize their motion by applying the same body movements to a virtual character. We can either create our own model to mimic the user's movements or use the "biped_robot" model provided by Apple.

In this demo we detect two types of posture:

  1. Sitting posture: the demo detects the angle between the knee and spine joints and focuses on the user's sitting posture. According to some reports, sitting applies pressure to the spine joints, and sitting with reliable support at an angle of roughly 90 degrees or more applies less pressure on them, so the overlay turns green for good posture and red otherwise.
RPReplay_Final1709301576.1.MP4
RPReplay_Final1709272419.1.MP4
  2. Standing posture: this demo detects the user's movement along with joint angles. A skeleton built from cylinders (bones) and spheres (joints) mimics the user's movement, and the angle at each joint is calculated from its two neighbouring joints (a minimal sketch of that calculation follows below). This use case can serve various body-tracking purposes and can be useful for exercise-related applications.
RPReplay_Final1714027152.1.MP4
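
The joint-angle calculation itself is not shown above; as a rough sketch (assuming ARKit's body-tracking APIs, ARBodyAnchor and ARSkeleton3D, with illustrative joint names), the angle at a joint can be computed from the positions of its two neighbouring joints:

import ARKit
import simd

// Sketch only: the angle (in degrees) formed at `joint` by the bones running
// towards `parent` and `child`. Joint names must match ARKit's skeleton names.
func angle(at joint: ARSkeleton.JointName,
           between parent: ARSkeleton.JointName,
           and child: ARSkeleton.JointName,
           in bodyAnchor: ARBodyAnchor) -> Float? {
    let skeleton = bodyAnchor.skeleton
    guard let p = skeleton.modelTransform(for: parent),
          let j = skeleton.modelTransform(for: joint),
          let c = skeleton.modelTransform(for: child) else { return nil }
    // Joint positions in the body anchor's coordinate space.
    let pPos = simd_make_float3(p.columns.3)
    let jPos = simd_make_float3(j.columns.3)
    let cPos = simd_make_float3(c.columns.3)
    // Angle between the two bone directions meeting at `joint`.
    let v1 = simd_normalize(pPos - jPos)
    let v2 = simd_normalize(cPos - jPos)
    let cosine = max(-1, min(1, simd_dot(v1, v2)))
    return acos(cosine) * 180 / .pi
}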

Applying a live filter on a rectangular surface

Suppose we detect a rectangle in the camera feed and want to apply a live filter to that rectangular surface. This can be achieved using the Vision and ARKit frameworks; for the live filter we use StyleTransferModel, an ML model provided by Apple.

Here, we use ARKit's ARImageAnchor to track a rectangular image, then identify the rectangular surface with VNRectangleObservation by finding its four corners. The detected image's pixel buffer is converted to the dimensions required by the ML model and passed through it, which returns the altered frames for real-time display. You can see the video reference below.

RPReplay_Final1717345626.3.MP4
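
For reference, a minimal sketch of the rectangle-detection step (assuming the Vision framework; the style-transfer and perspective-correction steps are omitted) could look like this:

import Vision
import CoreVideo

// Sketch only: finds the most prominent rectangle in a camera frame and hands
// back its VNRectangleObservation (four corner points in normalized image
// coordinates), which can then be cropped, resized and fed to the ML model.
func detectRectangle(in pixelBuffer: CVPixelBuffer,
                     completion: @escaping (VNRectangleObservation?) -> Void) {
    let request = VNDetectRectanglesRequest { request, _ in
        let rectangles = request.results as? [VNRectangleObservation]
        completion(rectangles?.first)
    }
    request.minimumConfidence = 0.8
    request.maximumObservations = 1
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
    try? handler.perform([request])
}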

Face tracking and loading live 3D content

Tracking and visualizing faces is a key ARKit feature that lets you track the user's face along with their expressions and simultaneously mimic those expressions with a 3D model. There are many other possible use cases for face tracking once you harness this capability of ARKit.

Here, in this tutorial, we have added some basic face-tracking functionality along with mimicking the user's facial expressions.

RPReplay_Final1717350037.1.mov
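
A minimal sketch of the face-tracking setup (assuming a SceneKit ARSCNView named sceneView; the blend-shape key shown is just one example) might look like this:

import ARKit

// Sketch only: start a face-tracking session and read blend-shape values
// (0...1 coefficients describing the user's expression) from the face anchor.
func startFaceTracking(on sceneView: ARSCNView) {
    guard ARFaceTrackingConfiguration.isSupported else { return }
    let configuration = ARFaceTrackingConfiguration()
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    // Example: how far the jaw is open; coefficients like this drive the 3D model's expression.
    let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
    print("jawOpen: \(jawOpen)")
}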

Image recognition and tracking

“A photo is worth a thousand words” - words are fine, but ARKit 2 turns a photo into thousands of stories.

Among the new developer features introduced at WWDC 2018, image detection and tracking is one of the coolest. Imagine having the capability to replace any static image you see, or to deliver contextual information on top of it.

Image detection was introduced in ARKit 1.5, but the functionality and maturity of the framework were limited. With this release, you can build amazing AR experiences. Take a look at the demo below:


  • Class : AVImageDetection.swift
let configuration = ARImageTrackingConfiguration()
// Physical width is specified in meters.
let warrior = ARReferenceImage(UIImage(named: "DD")!.cgImage!,
                               orientation: .up,
                               physicalWidth: 0.90)
configuration.trackingImages = [warrior]
configuration.maximumNumberOfTrackedImages = 1
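
To start tracking (assuming a SceneKit ARSCNView named sceneView, as used elsewhere in this demo), run the session with this configuration:

sceneView.session.run(configuration)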

You can perform a custom action after an image is detected. We have added a GIF in place of the detected image.

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        let node = SCNNode()
        if let imageAnchor = anchor as? ARImageAnchor {
            // A reference image was detected: cover it with a plane that shows our view (the GIF).
            let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
            plane.firstMaterial?.diffuse.contents = UIColor(white: 1, alpha: 0.8)
            let material = SCNMaterial()
            material.diffuse.contents = viewObj // the view holding the GIF
            plane.materials = [material]
            let planeNode = SCNNode(geometry: plane)
            // SCNPlane is vertical by default; rotate it to lie flat over the detected image.
            planeNode.eulerAngles.x = -.pi / 2
            node.addChildNode(planeNode)
        } else {
            // For any other anchor, place the 3D scene content only once.
            if isFirstTime == true {
                isFirstTime = false
            } else {
                return node
            }
            let plane = SCNPlane(width: 5, height: 5)
            plane.firstMaterial?.diffuse.contents = UIColor(white: 1, alpha: 1)
            let planeNode = SCNNode(geometry: plane)
            planeNode.eulerAngles.x = .pi
            let shipScene = SCNScene(named: "art.scnassets/Sphere.scn")!
            let shipNode = shipScene.rootNode.childNodes.first!
            shipNode.position = SCNVector3Zero
            shipNode.position.z = 0.15
            planeNode.addChildNode(shipNode)
            node.addChildNode(planeNode)
        }
        return node
    }

New Image Detection

Save and load maps

ARKit 2 comes with the revolutionary ARWorldMap, which allows persistent and multiuser AR experiences. In simpler words, ARWorldMap not only lets you render AR objects, it also builds awareness of your user's physical space. This means you can detect and standardise real-world features in your iOS app.

You can then use these standardised features to place virtual content (funny GIFs, anyone?) in your application.

We are going to build something like this

Let's dive into the technical implementation.

First, we get the current world map from the user's iPhone using .getCurrentWorldMap(). We save this session to capture the spatial awareness and initial anchors that we are going to share with another iPhone user.

  • Class : AVSharingWorldMapVC.swift
sceneView.session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap
        else { print("Error: \(error!.localizedDescription)"); return }
        guard let data = try? NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
        else { fatalError("can't encode map") }
        self.multipeerSession.sendToAllPeers(data)
}

Once you get the session and world-map anchors from the first iPhone user, the app uses the Multipeer Connectivity framework to push that information over a peer-to-peer network to the second iPhone user.
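
The sendToAllPeers call used when saving the map is part of the demo's MultipeerSession helper; a minimal sketch of what it could look like (assuming an MCSession with connected peers) is:

import MultipeerConnectivity

// Sketch only: broadcast the archived ARWorldMap data to every connected peer.
// .reliable gives ordered, lossless delivery, which the world map needs.
func sendToAllPeers(_ data: Data, using session: MCSession) {
    guard !session.connectedPeers.isEmpty else { return }
    do {
        try session.send(data, toPeers: session.connectedPeers, with: .reliable)
    } catch {
        print("Error sending data to peers: \(error.localizedDescription)")
    }
}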

The code below shows how the second iPhone user receives the session sent over Multipeer Connectivity. The second user is notified through a receiver handler; once the notification arrives, we can get the session data and render it in the AR world.

func receivedData(_ data: Data, from peer: MCPeerID) {
        if let unarchived = try? NSKeyedUnarchiver.unarchivedObject(ofClasses: [ARWorldMap.classForKeyedArchiver()!], from: data), let worldMap = unarchived as? ARWorldMap {
            // Run the session with the received world map.
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = .horizontal
            configuration.initialWorldMap = worldMap
            sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
            mapProvider = peer
        }
}

If you wish to get more hands-on experience with multiuser AR, Apple has a demo project just for that. You can download the demo here.

New AR world sharing 1 New AR world sharing 2

Object Scanning & Detection

The new ARKit gives you the ability to scan 3D objects in the real world, creating a map of the scanned object that can be loaded when the object comes into view in the camera.

While scanning, we can get the current status of the scanned object from the following notification handler:

  • Class : ScanObjectsVC.swift
 @objc func scanningStateChanged(_ notification: Notification)

After a successful scan, we can create and share a reference object, which will be used to detect the object later on.

  • Class : Scan.swift
 func createReferenceObject(completionHandler creationFinished: @escaping (ARReferenceObject?) -> Void) {
        guard let boundingBox = scannedObject.boundingBox, let origin = scannedObject.origin else {
            print("Error: No bounding box or object origin present.")
            creationFinished(nil)
            return
        }
        // Extract the reference object based on the position & orientation of the bounding box.
        sceneView.session.createReferenceObject(
            transform: boundingBox.simdWorldTransform,
            center: float3(), extent: boundingBox.extent,
            completionHandler: { object, error in
                if let referenceObject = object {
                    // Adjust the object's origin with the user-provided transform.
                    self.scannedReferenceObject =
                        referenceObject.applyingTransform(origin.simdTransform)
                    self.scannedReferenceObject!.name = self.scannedObject.scanName
                    creationFinished(self.scannedReferenceObject)
                } else {
                    print("Error: Failed to create reference object. \(error!.localizedDescription)")
                    creationFinished(nil)
                }
        })
    }
Object.Scanning.mp4

Similar to ARWorldMap, ARKit creates an ARReferenceObject that can be saved and loaded in another session.

  • Class : AVReadARObjectVC.swift
let configuration = ARWorldTrackingConfiguration()

// ARReferenceObject(archiveURL:) is used when the ARReferenceObject is stored in the local Documents directory.
configuration.detectionObjects = Set([try ARReferenceObject(archiveURL: objectURL!)])

// ARReferenceObject.referenceObjects(inGroupNamed:bundle:) is used when the reference objects are stored in an asset catalog resource group.
configuration.detectionObjects = ARReferenceObject.referenceObjects(inGroupNamed: "", bundle: .main)!

sceneView.session.run(configuration)

When an object is recognized, this delegate method will be called:

   func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? 

You can perform your custom action in this delegate method, for example as in the sketch below.
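
For illustration, a minimal sketch (not the project's exact implementation) of handling a recognized object in that delegate method:

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // Only handle anchors created for detected reference objects.
    guard let objectAnchor = anchor as? ARObjectAnchor else { return nil }
    // The name assigned when the reference object was created (see Scan.swift above).
    print("Detected object: \(objectAnchor.referenceObject.name ?? "unknown")")
    // Any child content added to this node will stay anchored to the detected object.
    return SCNNode()
}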

New Object detection

Environment Texturing

In previous versions of ARKit, 3D objects placed in the real world didn't have the ability to gather much information about the world around them. This left objects looking unrealistic and out of place. Now, with environment texturing, objects can reflect the world around them, giving them a greater sense of realism and place. When the user scans the scene, ARKit records and maps the environment onto a cube map. This cube map is then placed over the 3D object, allowing it to reflect back the environment around it.

What's even cooler is that Apple uses machine learning to generate the parts of the cube map that can't be recorded by the camera, such as overhead lights or other aspects of the scene. This means that even if a user isn't able to scan the entire scene, the object will still look as if it exists in that space, because it can reflect objects that aren't directly in the scene.

To enable environment texturing, we simply set the configuration's environmentTexturing property to .automatic.
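
A minimal sketch (assuming a world-tracking session on an ARSCNView named sceneView):

let configuration = ARWorldTrackingConfiguration()
// Let ARKit generate environment probe textures automatically.
configuration.environmentTexturing = .automatic
sceneView.session.run(configuration)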

New Env Texturing

Apple has created a project that can be used to scan 3D objects; it can be downloaded here.

Inspired

This demo was created from Apple's ARKit 2 sample demo.