
Volume rendering #120

Open

neurolabusc opened this issue Feb 4, 2021 · 15 comments

Comments

@neurolabusc

The recent commits have started work on volume rendering, using the shader in bis_3dvolrenutils.js.

While volume rendering can be done with WebGL1, the 3D textures of WebGL2 make this much more efficient. This is timely, as Chrome, Edge and Firefox all support WebGL2, and the Safari Technology Preview reveals that WebGL2 will no longer be disabled by default.

Can I suggest a couple of features that are illustrated here:

My suggestions are described in Engel et al.'s Real-Time Volume Graphics.

  • The step size and opacity correction should be based on the resolution of your image. With WebGL2 you can get this from textureSize. In my example code this is dynamically adjusted to maintain a target frame rate (a sketch of such a loop follows this list).
  • If you are only interested in orthographic renderings (which I prefer, as size is not impacted by depth), all the rays are parallel. Therefore, there is no need to compute view_ray for each fragment. It can be done in the vertex shader (as in my example, just eight times) or as a uniform (once). I admit this probably has virtually no impact on performance.
  • Adding a stochastic jitter to the ray start will reduce the wood-grain aliasing effects. Each ray's start point is offset randomly in the range of 0..1 steps. In my example, this is done with wang_hash. You can remove that to see the impact.
  • Each call to sample_3d_texture() is nested in 6 if conditionals. GPU shaders are poor at conditionals, and these should be avoided in your inner loop. In my code, the ray is clipped to the volume bounds before the loop, so only samples within the texture are read and no conditional is needed.
  • The add_lighting function in your shader requires 6 expensive texture lookups and will yield low-precision gradients. Precomputed gradients will be better quality and require only a single texture lookup. My code shows how to create a 3D gradient texture, which is computed once and retained for all subsequent renderings.
  • Beyond simple ambient+diffuse+specular lighting models, my example shows how one can use MatCaps.
  • Early ray termination accelerates rendering. In my example a ray stops once it is 95% opaque: if (color.a >= 0.95).
  • While not demonstrated in my code, these ray casting shaders can easily be extended with other features, as shown in MRIcroGL. For example, once you know the textureSize, one can leverage the fact that you are using linear interpolation to have the ray traverse quickly in open space and switch to finer sampling at the first hit (so rays that hit a target benefit from early termination, and those that do not benefit from rapid traversal). It is also easy to apply arbitrary clip planes and CT-specific optimizations.
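
To make the step-size, jitter and early-termination points concrete, here is a minimal sketch of such an inner loop (WebGL2 GLSL; the uniform and varying names are placeholders, not the names used in the bisweb shader):

#version 300 es
// Hypothetical WebGL2 (GLSL ES 3.00) fragment shader sketch; names are illustrative only.
precision highp float;
precision highp sampler3D;

uniform sampler3D u_volume;   // scalar intensity volume
uniform sampler2D u_colormap; // 1D colormap stored as a 2D texture
uniform float u_quality;      // ~1.0 = roughly one sample per voxel

in vec3 v_rayStart;           // front-face position in texture coordinates
in vec3 v_rayDir;             // normalized ray direction in texture coordinates
out vec4 fragColor;

void main() {
    // Step size derived from the volume resolution (WebGL2 textureSize).
    vec3 dims = vec3(textureSize(u_volume, 0));
    float refStep = 1.0 / max(dims.x, max(dims.y, dims.z)); // ~one voxel
    float stepSize = refStep * u_quality;
    float opacityCorrection = stepSize / refStep;

    // Stochastic jitter: offset the ray start by 0..1 steps to hide wood-grain artifacts.
    float jitter = fract(sin(dot(gl_FragCoord.xy, vec2(12.9898, 78.233))) * 43758.5453);
    vec3 pos = v_rayStart + v_rayDir * stepSize * jitter;

    vec4 acc = vec4(0.0);
    for (int i = 0; i < 1024; i++) {
        float val = texture(u_volume, pos).r;
        vec4 src = texture(u_colormap, vec2(val, 0.5));
        src.a = 1.0 - pow(1.0 - src.a, opacityCorrection); // opacity correction for this step size
        src.rgb *= src.a;
        acc += (1.0 - acc.a) * src; // front-to-back compositing
        if (acc.a >= 0.95) break;   // early ray termination
        pos += v_rayDir * stepSize;
        // exit when the ray leaves the unit cube (better still: clip the ray before the loop)
        if (any(lessThan(pos, vec3(0.0))) || any(greaterThan(pos, vec3(1.0)))) break;
    }
    fragColor = acc;
}

The hash-based jitter above is just a stand-in for wang_hash; any per-fragment pseudo-random value works.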

I am happy to generate pull requests to improve your volume rendering methods if you wish. BioImage Suite is an outstanding project, and enhancing the volume rendering will have a great impact.

@XeniosP
Collaborator

XeniosP commented Feb 4, 2021

Chris: Let's talk in person. I'd love to pursue this and I am thrilled to see you working on this!

@pieper
Collaborator

pieper commented Feb 4, 2021

I'm also thrilled to see this! There are a couple of other use-cases and features I could add to the wishlist if you are interested (I don't want to be a distraction).

@XeniosP
Collaborator

XeniosP commented Feb 4, 2021

Steve: go for it. This is all good.

@pieper
Collaborator

pieper commented Feb 4, 2021

Okay here are some of my wishlist items:

  • I'd love to be able to re-use shader snippets across desktop vtk (slicer), vtkjs, and other environments. We have the PrismRendering code here that implements some of those. This is better supported now in VTK 9 with the vtkShaderProperty class.
  • I'm interested in rendering segmentations and compositing them with reference images. We use vtkjs for this in OHIF for IDC with ray casting to do MPR and overlay segmentations with screen space calculation of outline thickness and opacity.
  • VTK desktop now supports multi-volume compositing (composite per-step in the ray cast) but the rendering doesn't look as nice as Chris's, maybe due to lack of MatCap or else there are other optimizations to be done.
  • I have a prototype renderer that supports nonlinear spatial transforms applied during the rendering and it would be great to integrate that with better lighting and other effects.
  • I'd like to have better volume rendering of segmentations (i.e. calculating the isosurface in the ray cast with a labelmap and lookup table).

It would be amazing if we could pool our efforts somehow. I've been actively working on putting some of this into Slicer.

@neurolabusc
Author

@pieper for compositing segmented images, do MRIcroGL's mosaic, glass brain and clip plane styles match what you want? You can try these out with MRIcroGL by choosing the items "basic", "clip", and "mosaic" from the Scripting/Templates menu. These show different ways that an overlay image can be superimposed on a background image. The shader will show overlays beneath the surface of the background image, but they become more transparent with depth beneath the surface. Below is an example of the "Clip" script. The "overlayDepth" slider allows the user to interactively choose the rate of transparency decay.

If this is what you are looking for, this is easy to implement. If it is not what you want, can you provide bitmaps of what you are looking for?

[image: overlay]
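
For concreteness, the depth-dependent fading could be expressed as something like this (purely illustrative, not MRIcroGL's actual code; u_overlayDepth stands in for the slider value):

// Hypothetical GLSL: fade an overlay sample by its depth beneath the background surface.
uniform float u_overlayDepth; // plays the role of the "overlayDepth" slider described above

vec4 fadeOverlay(vec4 overlaySample, float depthBelowSurface) {
    // exponential decay: fully visible at the surface, fading with depth
    overlaySample.a *= exp(-u_overlayDepth * depthBelowSurface);
    return overlaySample;
}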

@XeniosP
Collaborator

XeniosP commented Feb 4, 2021

@neurolabusc please go ahead and send pull requests, I'd love to see what you have done and integrate it in.

@pieper
Collaborator

pieper commented Feb 4, 2021

Hi @neurolabusc - that looks close to what I mean, but it's not clear to me that any of those modes are exactly what I mean. I'm thinking of something like a FreeSurfer aseg file, where each voxel in the labelmap is the index into a color lookup table that provides RGBA. Every ray step would need to look at the neighborhood of voxels to calculate the gradient / surface normal based on whether the neighbor shares the same index value. There's some discussion here.
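
A rough sketch of that per-step normal estimate (illustrative names only; u_labels would be the integer labelmap texture, sampled with nearest-neighbor filtering):

// Hypothetical GLSL: estimate a surface normal from an indexed labelmap by testing whether
// each of the six face neighbors shares the center voxel's label.
uniform highp isampler3D u_labels; // integer labelmap (e.g. a FreeSurfer aseg)

vec3 labelNormal(vec3 loc, vec3 voxel) { // voxel = one texel in texture coordinates
    int c = texture(u_labels, loc).r;
    vec3 n;
    n.x = float(texture(u_labels, loc - vec3(voxel.x, 0.0, 0.0)).r == c)
        - float(texture(u_labels, loc + vec3(voxel.x, 0.0, 0.0)).r == c);
    n.y = float(texture(u_labels, loc - vec3(0.0, voxel.y, 0.0)).r == c)
        - float(texture(u_labels, loc + vec3(0.0, voxel.y, 0.0)).r == c);
    n.z = float(texture(u_labels, loc - vec3(0.0, 0.0, voxel.z)).r == c)
        - float(texture(u_labels, loc + vec3(0.0, 0.0, voxel.z)).r == c);
    return (length(n) > 0.0) ? normalize(n) : vec3(0.0); // points from inside toward outside
}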

@neurolabusc
Author

neurolabusc commented Feb 4, 2021

@pieper the MRIcroGL cutout script shows a similar render. However, I always precompute gradients instead of looking at the neighborhood at each ray step. That method requires at least six expensive texture lookups with each step, and eight lookups for a decent estimate (the sobel8). The precomputed gradients allow a single texture() call to get a better quality estimate. The RGBA of the gradient texture stores the XYZ gradient direction along with the gradient magnitude. Engel discusses these in sections 5.5 (Pre-Computed Gradients) and 5.6 (On-the-Fly Gradients).
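
For illustration, reading such a precomputed gradient texture could look like this (a sketch with placeholder names, not the bisweb or MRIcroGL uniforms):

// Hypothetical GLSL: one fetch replaces the six-to-eight per-step neighborhood reads.
uniform sampler3D u_gradient; // RGB = gradient direction packed into 0..1, A = gradient magnitude

vec4 fetchGradient(vec3 loc) {
    vec4 g = texture(u_gradient, loc);
    g.rgb = normalize(g.rgb * 2.0 - 1.0); // unpack direction from 0..1 back to -1..1
    return g;                             // g.a holds the gradient magnitude
}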

While the results are the same from the user's perspective, I find one-pass shaders like Will Usher's and Philip Rideout's much simpler to implement than the two-pass method of PRISM. There is no need for an extra frame buffer: one can infer the back face of a unit cube if you know the front face and the ray direction.
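
A minimal sketch of that back-face inference, assuming a unit texture-coordinate cube (illustrative, not the PRISM or bisweb code):

// Hypothetical GLSL: given the rasterized front-face position and the ray direction, the exit
// point of the unit cube can be computed analytically, so no second pass or extra framebuffer
// is needed to obtain the back face.
vec3 inferBackFace(vec3 frontPos, vec3 rayDir) {
    // near-zero direction components should be nudged by a small epsilon in practice
    vec3 invDir = 1.0 / rayDir;
    vec3 tFar = max((vec3(0.0) - frontPos) * invDir, (vec3(1.0) - frontPos) * invDir);
    float tExit = min(tFar.x, min(tFar.y, tFar.z));
    return frontPos + rayDir * tExit;
}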

[image: template]
[image: cutout]

@pieper
Collaborator

pieper commented Feb 4, 2021

Yes, pre-computed makes sense. The colored cortex looks like just the thing, but I'm not seeing the labelmap case on your page - I would love to try to replicate that in VTK / Slicer.

@neurolabusc
Author

@pieper

  1. The problem is that most label maps do not precisely match the edge of the background image. We cannot generate good gradients from the indexed integer atlases (below right), because each voxel is either 100% or 0% air (whereas typical scans have intermediate intensity at tissue boundaries, reflecting partial volume effects). We can use an anatomical scan to derive these, but that requires the atlas to match the brain boundary perfectly. In regions where a voxel is coded in the anatomical scan but not the atlas, we will get artifacts (below left). The obvious solution is to dilate the atlas so that each voxel in the anatomical scan is associated with some region. However, this does mean that the atlas used for visualization is somewhat different from the one used for analyses and distributed by the atlas creators.

[image: aicha_blendx]

  2. I enjoyed the article. The depth peeling in particular looks great interactively. The image below shows how airways can be easily visualized.
    [image: air22]

I do think having common shaders across tools will be really useful for development. My current GLSL shaders use __VERSION__ > 300 to allow the same shader to run on OpenGL 2.1 (which remains popular on servers) and the more modern OpenGL 3.3 Core. I suspect we could do something similar for WebGL2.
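
As a rough illustration of that kind of version guard (the macro names are just examples):

// Hypothetical GLSL preamble shared between GLSL 1.20 (OpenGL 2.1) and GLSL 3.30 / ES 3.00 (WebGL2).
#if __VERSION__ >= 300
    #define TEXTURE3D(tex, pos) texture(tex, pos)
    out vec4 FragColor;
    #define FRAGCOLOR FragColor
#else
    #define TEXTURE3D(tex, pos) texture3D(tex, pos)
    #define FRAGCOLOR gl_FragColor
#endif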

@pieper
Collaborator

pieper commented Feb 5, 2021

Hi @neurolabusc - I agree with your point about indexed label maps and artifacts; let me suggest an approach.

First, let's stipulate that if a structural scan threshold defines the surface, you can use it as the alpha channel and the dilated label map to define the color; that may or may not be what you want. That's what I did for this microCT:
https://dl.dropboxusercontent.com/s/l04gdww0l3ftqs1/RGBA-mouse-segment-render.mp4?dl=0

But in addition, I'd like to be able to render a smoothed surface of the segmentation independent of the structural reference volume.

What I'd like to see is effectively performing the isosurface generation as part of the ray casting in the shader. That is, fitting a local surface based on the neighboring voxel values. This would be effectively the same as what goes on in current surface mesh generation pipelines. I agree this requires extra texture fetches but that's generally cheap and it is a constant overhead independent of surface complexity.

The advantage of this would be that you could get realtime updates of complex geometries during interactive operations just by updating the texture (e.g. this would be useful in Slicer's Segment Editor, which currently runs a CPU surface generation pipeline that bogs down as the surfaces become complex).

Do you know if such an implementation exists? I'm pretty sure I can implement this but would be happy if there were a starting point available.

@neurolabusc
Author

neurolabusc commented Feb 5, 2021

First of all, a major advantage of pre-computing your gradients is that you can apply a smoothing blur prior to the Sobel operator. At some stage this also blurs out the surfaces, which you can see in the top right image above. I pre-compute my gradients on the GPU, not the CPU. A clever optimization is that GPU 3D texture reads compute trilinear interpolation in hardware, so a carefully placed texture read samples 8 voxels with a chosen weighting between them. This extends the 1D (weighted sample of 2 pixels) and 2D (4 pixels) methods described here. You can see my WebGL blur shader here, which is run prior to the Sobel shader for gradient estimation. You can run my demo and change the blur width (dX/dY/dZ).
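
A minimal sketch of that trick (illustrative names, not the actual bisweb or MRIcroGL blur shader):

// Hypothetical GLSL: a 3D blur that exploits hardware trilinear filtering. Each fetch placed
// between voxel centers returns a weighted average of 8 voxels, so 8 fetches touch up to 64
// voxel values that would otherwise require 64 nearest-neighbor reads.
uniform sampler3D u_intensity;
uniform vec3 u_texelSize;  // 1.0 / volume dimensions
uniform float u_blurWidth; // e.g. the dX/dY/dZ adjustment mentioned above

float blurredSample(vec3 loc) {
    vec3 d = u_texelSize * u_blurWidth;
    float v = 0.0;
    v += texture(u_intensity, loc + vec3(+d.x, +d.y, +d.z)).r;
    v += texture(u_intensity, loc + vec3(+d.x, +d.y, -d.z)).r;
    v += texture(u_intensity, loc + vec3(+d.x, -d.y, +d.z)).r;
    v += texture(u_intensity, loc + vec3(+d.x, -d.y, -d.z)).r;
    v += texture(u_intensity, loc + vec3(-d.x, +d.y, +d.z)).r;
    v += texture(u_intensity, loc + vec3(-d.x, +d.y, -d.z)).r;
    v += texture(u_intensity, loc + vec3(-d.x, -d.y, +d.z)).r;
    v += texture(u_intensity, loc + vec3(-d.x, -d.y, -d.z)).r;
    return v * 0.125;
}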

You can also use the anatomical scan for normals, but ignore voxels where there is no color in the atlas. You want to make sure to reuse normals. There is still an issue with the fact that integer atlases do not handle partial volume, so you get some staircase artifacts on surfaces that are parallel to the rays.

[image: blender]

With regard to getting a nice isosurface, you might want to try the latest release of MRIcroGL. All the GLSL shaders are text files in the /Resources/Shader folder. You can interactively edit them with a text editor, which allows rapid prototyping of new effects. The //pref section of each shader allows you to define new uniforms that the user can adjust with a slider. For example, the line:

blend|float|0.0|0.5|1

will create a uniform named 'blend' with an initial value of 0.5 that the user can adjust between 0.0 and 1.0 with a slider in the user interface.

The image below shows the included "tomography" shader, which is designed to deal with the fact that bone in CT is extremely bright, leading to stair-stepping with typical volume rendering. The two renderings below use the same shader, but with surfaceHardness varied from 0.0 (left, pure volume rendering) to 1.0 (right, where the surface color is 100% driven by the sample with the strongest gradient magnitude). The blur applied to the gradients really helps out here.
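
As a toy illustration of that surface/volume mixing (not the actual Tomography shader; the names are made up):

// Hypothetical GLSL: blend the front-to-back composited color with a "surface" color taken
// from the sample along the ray that had the strongest gradient magnitude.
uniform float u_surfaceHardness; // 0.0 = pure volume rendering, 1.0 = surface color only

vec4 surfaceMix(vec4 volumeColor, vec4 surfaceColor, float bestGradMag) {
    float w = u_surfaceHardness * step(0.001, bestGradMag); // only mix if a surface was found
    vec4 c = mix(volumeColor, surfaceColor, w);
    c.a = volumeColor.a; // keep the composited opacity
    return c;
}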

[image: isosurface22]

Here is the AICHA atlas with the Tomography shader and full surfaceHardness:

[image: AICHA22]

@pieper
Collaborator

pieper commented Feb 5, 2021

Thanks for the info Chris, those shaders look great. I'll go off and see about getting the effect I want in Slicer/VTK, but I'll also keep an eye on your work here with BIS. Let's keep thinking about how we might factor some of the code out to work in multiple environments.

@neurolabusc
Author

Here would be my proposal for updating the shader, which is now very similar to FSLeyes.

  • Since files get combined in the build folder, it is a bit hard to make pull requests. Perhaps there is an easier way to modify this project, but at the moment I am editing the /build/bislib.js file and reloading the web page in my browser. If there is a simpler way, please tell me.
  • Currently, there is a check box for "mip" - my proposed shader has three modes: "isosurface", "mip" and "volume". The first two are similar to your solutions. Below are screenshots of my "isosurface" and "volume" modes. Perhaps it is worth making this a pull-down selection instead of a check box.
  • Since this new shader uses large step sizes before the first hit, and early ray termination, the "Quality" slider has very little impact on rendering speed (except in the mip mode). Perhaps remove this GUI decoration and always compute best quality.
  • The scalp stripped images included with BioImageSuite have no partial volume near the air-tissue boundary. You will get better results by including sample images with a bit more information at the edges. For the images below, I use the images that come with MRIcroGL.
  • Stochastic jittering of the ray start minimizes wood grain artifacts.
  • Note that voxels are only sampled inside the bounding box, so the inner loop does not have lots of conditionals. Also, the artifact seen with oblique views, a clip plane, and high resolution is resolved.
  • The variables darkScale and dark are kludges: intensity adjustments should be handled in the 1D texture u_cmdata. Like all of this code, the current implementation is fully compatible with your previous shader, so you can test and compare. For the same reason, this is still a two-pass shader, generating twice as many fragments as needed (with half emitting discard, which in my experience can have a small or a huge penalty depending on the implementation). If you are happy with everything, these can be streamlined in the future.

[image: iso]
[image: vol]

const volume_fragment_shader=`
// ---------------------------------------------------------------------------------------
// <script src="https://threejs.org/examples/js/shaders/VolumeShader.js"></script>
// ---------------------------------------------------------------------------------------
//
// * @author Almar Klein / http://almarklein.org
// *
// * & Shaders to render 3D volumes using raycasting.
// * The applied techniques are based on similar implementations in the Visvis and Vispy projects.
// * This is not the only approach, therefore it's marked 1.
// Updated by Chris Rorden, 2021


precision highp float;
precision mediump sampler3D;

uniform float u_opacity; // added xp
uniform float u_stepsize; // added xp

uniform vec3 u_boundsmin; // added xp
uniform vec3 u_boundsmax; // added xp

uniform vec3 u_size;
uniform int u_renderstyle;
uniform float u_renderthreshold;
uniform vec2 u_clim;

uniform sampler3D u_data;
uniform sampler2D u_cmdata;
uniform vec3 u_spacing;

varying vec3 v_position;
varying vec4 v_nearpos;
varying vec4 v_farpos;

vec4 apply_colormap(float val) {
    val = (val - u_clim[0]) / (u_clim[1] - u_clim[0]);
    return texture2D(u_cmdata, vec2(val, 0.5));
}

vec4 add_lighting6(float val, vec3 loc, vec3 step, vec3 view_ray)
{
    // 6 sample: central limit
    // Calculate color by incorporating lighting
    // View direction
    vec3 V = normalize(view_ray);
    
    // calculate normal vector from gradient
    vec3 N;
    float val1, val2;
    val1 = texture(u_data, loc + vec3(-step[0], 0.0, 0.0)).r;
    val2 = texture(u_data, loc + vec3(+step[0], 0.0, 0.0)).r;
    N[0] = val1 - val2;
    val = max(max(val1, val2), val);
    val1 = texture(u_data, loc + vec3(0.0, -step[1], 0.0)).r;
    val2 = texture(u_data, loc + vec3(0.0, +step[1], 0.0)).r;
    N[1] = val1 - val2;
    val = max(max(val1, val2), val);
    val1 = texture(u_data, loc + vec3(0.0, 0.0, -step[2])).r;
    val2 = texture(u_data, loc + vec3(0.0, 0.0, +step[2])).r;
    N[2] = val1 - val2;
    val = max(max(val1, val2), val);
    
    float gm = length(N); // gradient magnitude
    N = normalize(N);
    
    // Flip normal so it points towards viewer
    float Nselect = float(dot(N, V) > 0.0);
    N = (2.0 * Nselect - 1.0) * N;  // ==  Nselect * N - (1.0-Nselect)*N;
    
    // Init colors
    vec4 ambient_color = vec4(0.0, 0.0, 0.0, 0.0);
    vec4 diffuse_color = vec4(0.0, 0.0, 0.0, 0.0);
    vec4 specular_color = vec4(0.0, 0.0, 0.0, 0.0);
    // note: could allow multiple lights
    for (int i=0; i<1; i++)
    {
        const vec4 ambient_color0 = vec4(0.0, 0.0, 0.0, 0.0);
        const vec4 specular_color0 = vec4(0.0, 0.0, 0.0, 0.0);
        const float shininess0 = 40.0;
        // Get light direction (make sure to prevent zero division)
        vec3 L = normalize(view_ray);  //lightDirs[i];
        float lightEnabled = float( length(L) > 0.0 );
        L = normalize(L + (1.0 - lightEnabled));
        
        // Calculate lighting properties
        float lambertTerm = clamp(dot(N, L), 0.0, 1.0);
        vec3 H = normalize(L+V); // Halfway vector
        float specularTerm = pow(max(dot(H, N), 0.0), shininess0);
        
        // Calculate mask
        float mask1 = lightEnabled;
        
        // Calculate colors
        ambient_color +=  mask1 * ambient_color0;  // * gl_LightSource[i].ambient;
        diffuse_color +=  mask1 * lambertTerm;
        specular_color += mask1 * specularTerm * specular_color0;
    }
    
    // Calculate final color by combining the different components
    vec4 final_color;
    vec4 color = apply_colormap(val);
    final_color = color * (ambient_color + diffuse_color) + specular_color;
    final_color.a = color.a;
    return final_color;
}

vec2 intersectAABB(vec3 rayOrigin, vec3 rDir, vec3 boxMin, vec3 boxMax) {
//https://gist.github.com/DomNomNom/46bb1ce47f68d255fd5d
//https://prideout.net/blog/old/blog/index.html@p=64.html
	vec3 rayDir = rDir;
	float tiny = 0.0001;
	if (abs(rDir.x) < tiny) rayDir.x = tiny;
	if (abs(rDir.y) < tiny) rayDir.y = tiny;
	if (abs(rDir.z) < tiny) rayDir.z = tiny;
    vec3 tMin = (boxMin - rayOrigin) / rayDir;
    vec3 tMax = (boxMax - rayOrigin) / rayDir;
    vec3 t1 = min(tMin, tMax);
    vec3 t2 = max(tMin, tMax);
    float tNear = max(max(t1.x, t1.y), t1.z);
    float tFar = min(min(t2.x, t2.y), t2.z);
    return vec2(tNear, tFar);
}

void cast_iso2(vec4 samplePos, vec4 deltaDir, vec3 dir, float len) {
//isosurface rendering
	while (samplePos.a <= len) {
		float val = texture(u_data, samplePos.xyz).r;
		if (val > u_renderthreshold) {
			gl_FragColor = add_lighting6(val, samplePos.xyz, 1.5 / u_size, dir);
			return;
		}
		samplePos += deltaDir; //advance ray position	
	}
}

void cast_vol2(vec4 samplePos, vec4 deltaDir, vec3 dir, float len, float opacityCorrection) {
//volume rendering
	//float low_threshold = u_renderthreshold - 0.02 * (u_clim[1] - u_clim[0]);
	vec4 colAcc = vec4(0.0,0.0,0.0,0.0); //accumulated color
	float darkScale = u_renderthreshold / 1.0;
	while (samplePos.a <= len) {
		float val = texture(u_data, samplePos.xyz).r;
		if (val > u_renderthreshold) {
			vec4 colorSample = apply_colormap(val);
			colorSample.a *= u_opacity;
			colorSample.a = 1.0-pow((1.0 - colorSample.a), opacityCorrection);
			//darkening should be applied to 1D texture u_cmdata, this is a kludge: 
			float dark = darkScale * mix(1.0, 0.0, (val- u_renderthreshold)/(1.0-u_renderthreshold));
			colorSample.rgb *= (1.0 - dark);
			colorSample.rgb *= colorSample.a;
			colAcc = (1.0 - colAcc.a) * colorSample + colAcc;
			if ( colAcc.a > 0.95 ) //early ray termination
				break;
		}
		samplePos += deltaDir; //advance ray position	
	}
	colAcc.a = colAcc.a/0.95;
	gl_FragColor = colAcc;
}

void cast_mip2(vec4 samplePos, vec4 deltaDir, vec3 dir, float len) {
//maximum intensity projection rendering
	float low_threshold = u_renderthreshold - 0.02 * (u_clim[1] - u_clim[0]);
	float mx = texture(u_data, samplePos.xyz).r;
	while (samplePos.a <= len) {
		float val = texture(u_data, samplePos.xyz).r;
		mx = max(mx, val);
		samplePos += deltaDir; //advance ray position	
	}
	if (mx < low_threshold) return;
	gl_FragColor = apply_colormap(mx);
}

void cast_ray(vec3 frontPos, vec3 backPos) {
	//gl_FragColor = vec4(frontPos, 1.0); return; //frontface of unit cube
	//gl_FragColor = vec4(backPos, 1.0); return; //backface of unit cube
	gl_FragColor = vec4(0.0);  // init transparent
	vec3 dir = backPos - frontPos;
	//length: distance ray travels from front to back of volume
	float len = length(dir); //in unit cube, from 0..sqrt(3)
	float lenVox = length((u_size *frontPos) - (u_size * backPos)); //in voxel cube, from 0..sqrt(dim[i]^2+dim[j]^2+dim[k]^2)
	float sliceSize = len / lenVox; //e.g. if ray length is 1.0 and traverses 50 voxels, each voxel is 0.02 in unit cube 
	float stepSize = sliceSize * 0.5 * u_stepsize; //quality: larger step is faster traversal, but fewer samples
	float opacityCorrection = stepSize/sliceSize;
	dir = normalize(dir);
	vec4 deltaDir = vec4(dir.xyz * stepSize, stepSize);
	vec4 samplePos = vec4(frontPos.xyz, 0.0); //ray position
	//start: clipBox
	vec2 startEnd = intersectAABB(frontPos, dir, u_boundsmin, u_boundsmax);
	startEnd.x = max(0.0, startEnd.x);
	len = min(len, startEnd.y);
	samplePos += (vec4(dir.xyz, 1.0) * startEnd.x);
	//end: clipBox
	//start: fast Pass
	float rayStart = samplePos.a;
	float stepSizeFast = sliceSize * 1.9;
	vec4 deltaDirFast = vec4(dir.xyz * stepSizeFast, stepSizeFast);
	//val = (val - u_clim[0]) / (u_clim[1] - u_clim[0]);
	float low_threshold = u_clim[0];
	while (samplePos.a <= len) {
		float val = texture(u_data, samplePos.xyz).r;
		if (val > low_threshold) break;
		//if (val > 0.00001) break;
		samplePos += deltaDirFast; //advance ray position	
	}
	if (samplePos.a > len) return;
	if ((samplePos.a == rayStart) && (u_renderstyle == 1)) { //isosurface is not hollow at clip plane
		float val = texture(u_data, samplePos.xyz).r;
		gl_FragColor = apply_colormap(val);
		return;
	}
	samplePos -= deltaDirFast; //start slow traversal just prior to first hit
	//end: fast Pass
	float rand = fract(sin(gl_FragCoord.x * 12.9898 + gl_FragCoord.y * 78.233) * 43758.5453);
	samplePos += (deltaDir * rand); //stochastic jittering: advance ray random fraction of step size
	if (u_renderstyle == 1)
		cast_iso2( samplePos, deltaDir, dir, len);
	else if (u_renderstyle == 2)
		cast_mip2( samplePos, deltaDir, dir, len);
	else
		cast_vol2( samplePos, deltaDir, dir, len, opacityCorrection);
}

void main() {
    // Normalize clipping plane info
    vec3 farpos_in = v_farpos.xyz / v_farpos.w;
    vec3 nearpos_in = v_nearpos.xyz / v_nearpos.w;
    //
    // Normalize to spacing now -- this allows for non 1mm sized images
    // XP Jan 2018
    vec3 farpos=      farpos_in*u_spacing;
    vec3 nearpos =    nearpos_in *u_spacing;
    vec3 s_position = v_position*u_spacing;
    
    // Calculate unit vector pointing in the view direction through this fragment.
    vec3 view_ray = normalize(nearpos.xyz - farpos.xyz);

    
    // Compute the (negative) distance to the front surface or near clipping plane.
    // v_position is the back face of the cuboid, so the initial distance calculated in the dot
    // product below is the distance from near clip plane to the back of the cuboid
    float distance = dot(nearpos - s_position, view_ray);
    distance = max(distance, min((-0.5 - s_position.x) / view_ray.x,
                                 (u_size.x - 0.5 - s_position.x) / view_ray.x));
    distance = max(distance, min((-0.5 - s_position.y) / view_ray.y,
                                 (u_size.y - 0.5 - s_position.y) / view_ray.y));
    distance = max(distance, min((-0.5 - s_position.z) / view_ray.z,
                                 (u_size.z - 0.5 - s_position.z) / view_ray.z));
    
    // Now we have the starting position on the front surface
    vec3 front = s_position + view_ray * distance;
    
    // Decide how many steps to take
    int nsteps = int(-distance / u_stepsize + 0.5);
    if ( nsteps < 1 ) {
    	//<- could be avoided with one pass volume render
        discard;
    }
    
    // Get starting location and step vector in texture coordinates
    vec3 step = ((s_position - front) / u_size) / float(nsteps);
    vec3 start_loc = front / u_size;
    cast_ray(start_loc, s_position / u_size);
}`;

@XeniosP
Collaborator

XeniosP commented Feb 8, 2021

@neurolabusc The whole project gets "built" using gulp, so bislib.js is created using webpack from all the component js files. I would be happy to describe the process for you -- but basically, once you install all the prerequisites, the development cycle boils down to

typing gulp

-- this runs a webserver + webpack in watch mode that looks for changes to the js code and rebuilds the combined bislib.js file, which you access from localhost:8080

The volume rendering code is fed images from

js/webcomponents/bisweb_orthogonalviewer.js

which then calls code in

js/coreweb/bis_3dVolume.js

which then calls the shaders.

Making changes to the GUI to add three modes is actually fairly straightforward. I can do that.

Xenios
