
VR Support


Though the VR support is still in its infancy, it is far enough along to experiment with. To get VR working, you need a device that works with OpenHMD, so make sure that the OpenHMD examples/simple program works before you move on.

Setup

You need to build the vrbridge, a separate process that gets access to the VR subprotocol. The reason for the separation is to make debugging easier, and to isolate something as volatile as device access in a process of its own.

For scripting, the most important functions right now are vr_setup, vr_metadata and vr_map_limb.

The basic setup is simple:

vr_setup("args_go_here", function(source, status)
    print(status.kind)
end)

To debug the bridge itself, you can either just attach to the process, or, if you need control at the initialization stage, set the argument string to "debug". The bridge will then spinlock on a volatile flag after launch, so you can attach to the process and release the flag when you are ready.

For vr_setup to work, you need to reference the bridge binary in the ext_vr key of the 'arcan' appl (or arcan_lwa if you are running nested for ease of debugging) in the configuration database.
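Assuming the bridge binary was built and installed as arcan_vr (the name and path here are just examples), the key can be set with the arcan_db tool along the lines of:

arcan_db add_appl_kv arcan ext_vr /usr/bin/arcan_vr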

Setting up a suitable 3D pipeline

A basic 3D pipeline for normal (non-VR) rendering would look something like this:

function setup_scene(near, far, fov, aspect)
-- create a camera and step it backwards so the scene at origin is in view
    local camera = null_surface(1, 1)
    camtag_model(camera, near, far, fov, aspect, true, false)
    forward3d_model(camera, -10.0)
-- loading, configuring other drawables goes here
    local cube = build_3dbox(1, 1, 1)
-- temporary single-colour surface used as the backing store for the cube
    local col = fill_surface(32, 32, 0, 255, 0)
    image_sharestorage(col, cube)
    delete_image(col)
    show_image(cube)
    return camera, cube
end

This creates a camera object attached to the global rendertarget; being the first to do so, it becomes the qualified view used when there are 3D models in the scene that should be processed. It starts out at the world origin, looking down the Z axis. In the minimal scene here, we create a temporary full-green texture and set it as the backing store for the cube.
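For a quick sanity check before involving VR at all, the scene can be driven from the normal appl entry points. The appl name 'vrtest' below is just a placeholder, and spinning the cube from the clock pulse is only there to show that the 3D pipeline is alive:

local camera, cube
local ang = 0

function vrtest()
    camera, cube = setup_scene(0.1, 100.0, 45, VRESW / VRESH)
end

-- advance the cube rotation a little on every logic tick
function vrtest_clock_pulse()
    ang = (ang + 1) % 360
    rotate3d_model(cube, 0, 0, ang)
end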

If we take the VR handler from the earlier example and extend it with setup_scene, we get something like:

vr_setup("args_go_here",
function(source, status)
    if status.kind == "limb_added" and status.name == "neck" then
        local camera, model = setup_scene(0.1, 100.0, 45, 1.333)
        vr_map_limb(source, camera, status.id, false, true)
    end
end)

Now, when the VR bridge is started and it discovers the HMD (exposed as the neck limb), we create an association between the orientation of the limb and the orientation of our camera. In this way, multiple VR sensors can be mapped to objects without exposing the high samplerate sensor data to the scripting layer in ways that could cause uneven performance and introduce latency.
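The same handler structure extends to other limbs and to losing tracking. The following is only a sketch: the "limb_removed" kind and non-neck limbs (e.g. hand controllers) follow the same pattern, but the exact limb names depend on the device, and the two boolean arguments to vr_map_limb are assumed to toggle position and orientation mapping as in the camera case.

local limb_map = {}

vr_setup("args_go_here",
function(source, status)
    if status.kind == "limb_added" then
        if status.name == "neck" then
            local camera, model = setup_scene(0.1, 100.0, 45, 1.333)
            vr_map_limb(source, camera, status.id, false, true)
            limb_map[status.id] = camera
        else
-- any other limb: represent it with a small box that follows both
-- position and orientation
            local box = build_3dbox(0.1, 0.1, 0.1)
            show_image(box)
            vr_map_limb(source, box, status.id, true, true)
            limb_map[status.id] = box
        end
    elseif status.kind == "limb_removed" and limb_map[status.id] then
        delete_image(limb_map[status.id])
        limb_map[status.id] = nil
    end
end)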

Distortion Pipeline

The normal way to do VR rendering is as two separate passes, one for the left eye and one for the right, followed by a combining stage where distortion is applied.

function setup_pipeline()
-- two offscreen buffers, each covering half of the output display
    local leye = alloc_surface(0.5 * VRESW, VRESH)
    local reye = alloc_surface(0.5 * VRESW, VRESH)
    show_image({leye, reye})
    move_image(reye, VRESW * 0.5, 0)
    local cube = build_3dbox(1, 1, 1)
    local col = fill_surface(32, 32, 0, 255, 0)
    image_sharestorage(col, cube)
    delete_image(col)
    show_image(cube)
-- the right eye pass links to the left one so both share the same set of objects
    define_rendertarget(leye, {cube}, RENDERTARGET_DETACH, RENDERTARGET_NOSCALE, -1, RENDERTARGET_FULL)
    define_linktarget(reye, leye, RENDERTARGET_DETACH, RENDERTARGET_NOSCALE, -1, RENDERTARGET_FULL)
    image_shader(leye, "distortion_left")
    image_shader(reye, "distortion_right")
    return leye, reye
end

The shader setup here is just an incomplete placeholder: shaders first need to be built for the two labels, and their uniforms set to match the data from the HMD display. This is explained in the "HMD Parameters" section further below.
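As a starting point, something like the following registers a pass-through shader under one of the labels; the actual lens distortion math and the HMD-derived uniforms still need to be filled in. map_tu0 and texco are the engine's built-in sampler and texture coordinate symbols:

local distortion_frag = [[
uniform sampler2D map_tu0;
varying vec2 texco;
void main()
{
/* lens distortion would remap texco here based on the HMD parameters */
    gl_FragColor = texture2D(map_tu0, texco);
}
]]

-- build_shader(vertex, fragment, label), nil picks the default vertex stage
local shid = build_shader(nil, distortion_frag, "distortion_left")
image_shader(leye, shid)
-- per-display uniforms (lens center, distortion coefficients, ...) would be
-- set here once the values from the HMD metadata are known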

Another thing to note is that we now define two separate offscreen rendering passes. The rendertargets take some extra arguments to indicate that we want a full pipeline, with a depth buffer and so on, as the default is set up for normal 2D processing only. This time around, the camera setup is still incomplete; we need to do some more mapping.

function setup_hmd(lpipe, rpipe, source, hmd)
    local c_l = null_surface(1, 1)
    local c_r = null_surface(1, 1)
-- shared anchor for the head position that both eye cameras follow
    local pos = null_surface(1, 1)
    local separation = 0.0
    link_image(c_l, pos)
    link_image(c_r, pos)
-- flip Y as the offscreen rendertargets are stored vertically inverted
    scale3d_model(c_l, 1.0, -1.0, 1.0)
    scale3d_model(c_r, 1.0, -1.0, 1.0)
-- offset each eye by half the inter-pupillary distance (placeholder value)
    nudge3d_model(c_l, -separation, 0, 0)
    nudge3d_model(c_r, separation, 0, 0)
    local md = hmd.metadata
    local lfd = (md.left_fov * 180 / math.pi) + md.fov_delta
    local rfd = (md.right_fov * 180 / math.pi) + md.fov_delta
    camtag_model(c_l, md.near, md.far, lfd, md.left_ar, true, false, 0, lpipe)
    camtag_model(c_r, md.near, md.far, rfd, md.right_ar, true, false, 0, rpipe)
    vr_map_limb(source, c_l, hmd.id, false, true)
    vr_map_limb(source, c_r, hmd.id, false, true)
end

vr_setup("arguments go here",
function(source, status)
    if status.kind == "limb_added" and status.name == "neck" then
        local lpipe, rpipe = setup_pipeline()
        setup_hmd(lpipe, rpipe, source, status)
    end
end)

So that is most of the boilerplate (sans error handling) for setting up a first person avatar object, positioning two "eyes" that follow the HMD orientation, and rendering a separate pass for each.

HMD Parameters

Incomplete, this section should go through the contents of the HMD metadata structure.