Save iOS ARFrame and Point Cloud

This project improves the usability of the sample code from WWDC20 session 10611: Explore ARKit 4. The sample code is preserved on the original branch, and the unmodified WWDC20 code can be checked out at the first commit. The original project places points in the real world using the scene's depth data to visualize the shape of the physical environment.

Video walkthrough

Check out the walkthrough video on YouTube.

Usability features

This project adds the following features:

  • Add buttons to start/pause recording.

  • Save ARFrame raw data asynchronously at a chosen rate while recording. The selected data include:

    // Custom struct for pulling the necessary data out of ARFrames
    struct ARFrameDataPack {
        var timestamp: Double
        var cameraTransform: simd_float4x4
        var cameraEulerAngles: simd_float3
        var depthMap: CVPixelBuffer
        var smoothedDepthMap: CVPixelBuffer
        var confidenceMap: CVPixelBuffer
        var capturedImage: CVPixelBuffer
        var localToWorld: simd_float4x4
        var cameraIntrinsicsInversed: simd_float3x3
    }

    The sampling rate is controlled by a slider: one of every n new frames is saved. The current sampling rate is embedded in the filename, i.e. {timestamp}_{samplingRate}.[json|jpeg]. Note that every frame still contributes to the point cloud and the AR display; in other words, all the points shown in AR are written to the PLY point cloud file, but only 1/n of the frames are saved as JSON and JPEG. (A sketch of this skip logic follows the feature list below.)

    The captured images are stored in JPEG format, and the remaining data are encoded into JSON files whose format is specified below (see the Codable sketch after this list for how the simd types can be encoded).

    struct DataPack: Codable {
        var timestamp: Double
        var cameraTransform: simd_float4x4 // The position and orientation of the camera in world coordinate space.
        var cameraEulerAngles: simd_float3 // The orientation of the camera, expressed as roll, pitch, and yaw values.
        var depthMap: [[Float32]]
        var smoothedDepthMap: [[Float32]]
        var confidenceMap: [[UInt8]]
        var localToWorld: simd_float4x4
        var cameraIntrinsicsInversed: simd_float3x3
    }

    The files can be retrieved in Finder over a USB connection. This raw data makes it possible to leverage photogrammetry techniques for various tasks.

  • Save the point cloud in PLY format when the recording is stopped.

  • Add a low-memory warning and a file-saving progress indicator.
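
The frame-skipping itself is a small piece of ARSessionDelegate logic. Below is a minimal sketch built around the ARFrameDataPack struct above; saveAsync(_:) is a hypothetical writer, deepCopy() is the CVPixelBuffer extension sketched under Discussions, and localToWorld is a simplified stand-in (the sample derives it from the camera's view matrix and the display orientation):

    import ARKit

    final class FrameSampler: NSObject, ARSessionDelegate {
        var isRecording = false
        var samplingRate = 10            // save one of every n frames
        private var frameCount = 0

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            guard isRecording else { return }
            frameCount += 1
            guard frameCount % samplingRate == 0,
                  let depth = frame.sceneDepth,
                  let smoothed = frame.smoothedSceneDepth,
                  let depthCopy = depth.depthMap.deepCopy(),
                  let smoothedCopy = smoothed.depthMap.deepCopy(),
                  let confidenceCopy = depth.confidenceMap?.deepCopy(),
                  let imageCopy = frame.capturedImage.deepCopy()
            else { return }

            // Copy only the needed fields out of the ARFrame so the frame
            // itself is released as soon as this method returns.
            let pack = ARFrameDataPack(
                timestamp: frame.timestamp,
                cameraTransform: frame.camera.transform,
                cameraEulerAngles: frame.camera.eulerAngles,
                depthMap: depthCopy,
                smoothedDepthMap: smoothedCopy,
                confidenceMap: confidenceCopy,
                capturedImage: imageCopy,
                localToWorld: frame.camera.transform,   // simplified stand-in
                cameraIntrinsicsInversed: frame.camera.intrinsics.inverse
            )
            DispatchQueue.global(qos: .utility).async { [weak self] in
                self?.saveAsync(pack)
            }
        }

        private func saveAsync(_ pack: ARFrameDataPack) {
            // Encode `pack` to JSON/JPEG and write it to disk (elided).
        }
    }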
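
One wrinkle in the JSON encoding above: the simd matrix types do not conform to Codable out of the box (vector types such as simd_float3 already do). DataPack therefore needs conformances along these lines; this is a sketch, and the project may encode the matrices differently:

    import Foundation
    import simd

    // Encode/decode a 4x4 matrix as an array of its four columns.
    extension simd_float4x4: Codable {
        public init(from decoder: Decoder) throws {
            var container = try decoder.unkeyedContainer()
            self.init(columns: (try container.decode(simd_float4.self),
                                try container.decode(simd_float4.self),
                                try container.decode(simd_float4.self),
                                try container.decode(simd_float4.self)))
        }

        public func encode(to encoder: Encoder) throws {
            var container = encoder.unkeyedContainer()
            try container.encode(columns.0)
            try container.encode(columns.1)
            try container.encode(columns.2)
            try container.encode(columns.3)
        }
    }

    // simd_float3x3 needs an analogous extension with three simd_float3
    // columns; after that, JSONEncoder().encode(dataPack) works as usual.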

Discussions

  • This answer has been very helpful for exporting the point cloud in PLY format (a sketch of the format follows this list).

  • According to this answer, releasing the memory pool backing each ARFrame is important. If the session's currentFrame is passed into time-consuming async tasks, such as converting data formats or saving files to disk, the memory pool used by the ARFrame stays retained and no new frames can be written to it. In that case, a warning arises:

    ARSessionDelegate is retaining XX ARFrames.
    

    To solve this problem, copy the needed data into a custom struct and pass that to the async tasks. To deep copy a CVPixelBuffer, try this code (a sketch also follows this list).

  • To be notified about low memory, see this doc (a sketch follows this list).

  • Similar projects
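
On the PLY side, the container format is simple enough to write by hand once the points are collected. A minimal sketch of an ASCII PLY writer; PLYPoint and writePLY are illustrative names, not the project's actual renderer types:

    import Foundation

    struct PLYPoint {
        var x, y, z: Float      // world-space position
        var r, g, b: UInt8      // sampled color
    }

    func writePLY(_ points: [PLYPoint], to url: URL) throws {
        // Header: declares the vertex count and per-vertex properties.
        var ply = """
        ply
        format ascii 1.0
        element vertex \(points.count)
        property float x
        property float y
        property float z
        property uchar red
        property uchar green
        property uchar blue
        end_header

        """
        // Body: one line per point, matching the property order above.
        for p in points {
            ply += "\(p.x) \(p.y) \(p.z) \(p.r) \(p.g) \(p.b)\n"
        }
        try ply.write(to: url, atomically: true, encoding: .ascii)
    }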
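
For the deep copy, something along these lines works (a sketch, not necessarily the code linked above): create a second buffer with the same geometry and pixel format, then copy the bytes plane by plane. It assumes source and copy share the same bytes-per-row; copy row by row if they can differ:

    import CoreVideo
    import Foundation

    extension CVPixelBuffer {
        func deepCopy() -> CVPixelBuffer? {
            // Allocate a new buffer matching this one's geometry and format.
            var copyOut: CVPixelBuffer?
            CVPixelBufferCreate(kCFAllocatorDefault,
                                CVPixelBufferGetWidth(self),
                                CVPixelBufferGetHeight(self),
                                CVPixelBufferGetPixelFormatType(self),
                                CVBufferGetAttachments(self, .shouldPropagate),
                                &copyOut)
            guard let copy = copyOut else { return nil }

            CVPixelBufferLockBaseAddress(self, .readOnly)
            CVPixelBufferLockBaseAddress(copy, [])
            defer {
                CVPixelBufferUnlockBaseAddress(copy, [])
                CVPixelBufferUnlockBaseAddress(self, .readOnly)
            }

            if CVPixelBufferIsPlanar(self) {
                for plane in 0..<CVPixelBufferGetPlaneCount(self) {
                    memcpy(CVPixelBufferGetBaseAddressOfPlane(copy, plane),
                           CVPixelBufferGetBaseAddressOfPlane(self, plane),
                           CVPixelBufferGetBytesPerRowOfPlane(self, plane)
                               * CVPixelBufferGetHeightOfPlane(self, plane))
                }
            } else {
                memcpy(CVPixelBufferGetBaseAddress(copy),
                       CVPixelBufferGetBaseAddress(self),
                       CVPixelBufferGetBytesPerRow(self)
                           * CVPixelBufferGetHeight(self))
            }
            return copy
        }
    }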
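
For the low-memory warning itself, the standard UIKit hook is enough. A sketch assuming the recording UI lives in a view controller:

    import UIKit

    class RecordingViewController: UIViewController {
        override func didReceiveMemoryWarning() {
            super.didReceiveMemoryWarning()
            // Pause capture so queued saves can drain, then tell the user.
            // (Alternatively, observe
            // UIApplication.didReceiveMemoryWarningNotification.)
            let alert = UIAlertController(title: "Low Memory",
                                          message: "Recording paused.",
                                          preferredStyle: .alert)
            alert.addAction(UIAlertAction(title: "OK", style: .default))
            present(alert, animated: true)
        }
    }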
