
Please give me some advice #56

Open
vinkovsky opened this issue Apr 28, 2021 · 50 comments

Comments

@vinkovsky

Hi Erich! I admire your professionalism and would like to better understand your code. I know three.js well as a library, but have no idea how a shader program works. The rendering topic is very interesting to me, since I myself am a CGI artist. Could you please suggest a roadmap? What do I need to know to write my own shader program? What topics of mathematics, geometry, or physics do I need to work with? Many thanks!

@erichlof
Owner

erichlof commented Apr 28, 2021

Hi @vinkovsky !

Great question! I'll try to be as helpful as I can, given the limits of this text message board.

About 7 years ago, I was in the same 'boat' as you. I knew a fair amount about three.js, and having done 3D game programming with OpenGL and C on Windows 98 back in the late 1990s/early 2000s, I knew just enough math to be dangerous (ha ha). More precisely, I knew through trial and error what math algos or math routines I needed at various points in my simple games to get them working and looking somewhat correct. I spent several months getting to know vectors and vector operations like the dot product, and matrices, which I did not fully comprehend, but I knew what I needed to use and in what situation (which I just copied and pasted from other math libraries out there at the time). Now I'm thankful we have great libraries like three.js and Babylon.js and glMatrix.js, which have everything you could possibly need to do 3D graphics/games/even basic physics simulation, right at our fingertips after simply downloading!

Everything was going fine for me using the stock opengl/webgl libraries to do my simple 3d games and projects, until 1 day when working on my 3D Asteroid Patrol game with three.js, I needed a billboard sprite explosion effect when your ship was destroyed in the game. I knew the math to correctly keep the sprite billboard always facing the camera, but how do you make fire or a realistic ball of fire out of three.js primitives? Spoiler alert: you can't, lol. You need a shader effect for special effects like that, and thus began my epic journey into the world of shaders!

I first visited glslsandbox.com and Shadertoy.com and just poked around until I found a decent explosion, sun with flares, or fire ball effect, and then literally copied and pasted the glsl shader code into my project and attached it to a three.js plane as a three.js shader material, and voila! All of a sudden I had a cool explosion fireball animation with alpha transparency. The next problem came when I needed a different color of explosion and a different rate of animation of the fireball expansion, depending on the situation during game play. I went in and started changing some of the numbers in this mysterious new language, glsl, and before long, I stumbled on to where the color was set and where the animation speed was set. But so far as the effect itself, there were all these crazy math and perlin noise functions going on. It was basically a magic black box that I could safely tinker with, as long as I didn't get too deep, because then the shader would crash or the effect would go away and turn into something else, which I also did not understand, lol!

Fast forward a couple of years and I came across this Shadertoy, which converged on a realistic looking image:
smallpt-based path tracer

I was intrigued - I later found out that this shader was inspired by the path tracer 'smallpt' by Kevin Beason. I researched his code for months and found out he in turn got most of his routines from Peter Shirley in his book called Realistic Ray Tracing (now out of print, but I have since bought a used copy). Peter Shirley is quite famous in the rendering world for this book, and more recently, his great Ray Tracing in One Weekend series of 3 digital books - and now he works for the cutting edge RTX division of NVIDIA. I started my own project here using snippets of that Shadertoy demo, conversions from Kevin Beason's C++ code from smallpt, and my basic familiarity with three.js, and then I have been filling in the gaps and improving it little by little for over 6 years now! Ray Tracing can be a never ending rabbit hole, lol!

However, if you don't want to go down that path of ray tracing like I did just yet, and merely want to get acquainted with how shaders work and how to do the basics with their language, glsl, then I highly recommend watching this great video from the Art of Code channel on YouTube:
Intro to shaders

Martin is a naturally gifted instructor who explains shaders in a unique and sometimes humorous way, all the while coding every single line on screen in real time so you can immediately see the results.

Once you watch this video, I would highly recommend going to Shadertoy.com or glslsandbox.com and just starting from the stock default template- try out some random numbers, don't be afraid to break the shader - just mess around with something different (someone else's shader that they already wrote) each time you visit the shader site, and pretty soon you'll realize what variables sort of do what. Try stuff like changing the background color, changing the spacing of the effect, make everything smaller, everything bigger, speed up time, slow down time, stuff like that... until you feel you know what your small changes are doing to the screen output.

After that, there's a lifetime of shaders on both of these sites to keep you busy learning (and sometimes leave you in awe, wondering how they did that!). Some of them, from the great 'iq' (Inigo Quilez) for instance, are pure magic - I still have no idea how they were implemented!

One final thought about shaders in general that tripped me up for months back when I started this journey: programming shaders for the GPU is not like CPU programming in Javascript or three.js or C, or whatever. Yes, I suppose that it is imperative programming like C (as opposed to object oriented programming in C++ or functional programming in Haskell), but how the shader operates as an entire screen output unit is nothing like normal single-threaded CPU programming.

My mental model of shader (pixel fragment shader to be precise) programming is the following: imagine if you had at your disposal a tiny dust-grain sized desktop computer dedicated to each and every pixel on the screen. That's thousands to millions of little bitty desktops, each assigned to its own pixel. Each pixel's 'computer' can be accessed through its ID, or gl_FragCoord in glsl. That's how we can assign a different color or routine for each individual pixel, by looking up their screen-space ID. You can, for instance, assign a ray direction based on how far each pixel is from the center of the screen as I do here on this repo, or you could also use this ID number's x and y value as a floating point number and have that seed a random function. That way, the color or effect varies as smoothly as you desire, as you go from top to bottom, or left to right of the screen. The advantages of this architecture are: you only have to write 1 overall program, then behind the scenes, three.js passes around your master source code to each and every tiny computer on every pixel of your screen, each with their easily accessible unique x and y ID number. And amazingly, all these tiny computers execute their own copy of your master shader code at the same time, 60 times a second! A truly parallel way of thinking. A famous quote from the 1970s goes something like, "Ray Tracing is an embarrassingly parallel operation - my dream ray tracer would have tiny little processors for each and every pixel, to be executed in lockstep and in unison, producing an instantaneous image". Well, that's why many people, including myself, have gone to the GPU in modern times for tracing scenes faster and faster. NVIDIA's RTX is almost there, with thousands of processors running in parallel on their GPUs - one day I'm hoping for that historical quote to become reality - millions, even billions, of tiny parallel processors, all for the same screen's output.
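
To make that concrete, here is a minimal glslsandbox-style fragment shader (not code from this repo - just an illustrative sketch) where every pixel runs the exact same main(), and the only thing that differs between pixels is their own gl_FragCoord:

#ifdef GL_ES
precision mediump float;
#endif

uniform float time;      // supplied by glslsandbox.com
uniform vec2 resolution; // screen size in pixels, supplied by glslsandbox.com

void main()
{
    // this pixel's unique screen-space ID, remapped to the 0.0 - 1.0 range
    vec2 uv = gl_FragCoord.xy / resolution.xy;

    // how far this pixel sits from the center of the screen (this could just as easily seed a ray direction)
    float distFromCenter = length(uv - 0.5);

    // the color varies smoothly across the screen and pulses with time
    vec3 color = vec3(uv.x, uv.y, 0.5 + 0.5 * sin(time)) * (1.0 - distFromCenter);

    gl_FragColor = vec4(color, 1.0);
}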

Now for the disadvantages of this architecture: since you have all these tiny computers operating on their own pixels (so to speak; in reality the GPU works on small groups of pixels at a time - 2x2 'quads' gathered into larger 'warps' - but that's a digression), you would think that you could have pixels remember what they just computed on the previous frame (some sort of memory), or ask them to look over and see what their neighbor pixels are working on - but such is not the case. It's as if they have blinders on (tunnel-vision) and run through your shader blazingly fast, but then all data is lost after it shows on screen. You have to use tricks like ping pong buffers for temporary pixel 'memory' storage and built-in screen-space glsl derivative functions (dFdx, dFdy) in order for a pixel to peek over at its neighbor pixel horizontally and vertically to see what's happening and compare notes (I currently use this technique for edge detection in my de-noiser, to keep edges of objects and corners as sharp as possible with minimal blurring and blending).
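
As a rough illustration of that derivative trick (this is not the repo's de-noiser code - the names and the checkerboard 'scene' are made up for the example), a pixel can compare a value against its immediate horizontal and vertical neighbors like this:

#extension GL_OES_standard_derivatives : enable // needed for dFdx/dFdy in WebGL1
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 resolution;

void main()
{
    vec2 uv = gl_FragCoord.xy / resolution.xy;

    // a stand-in 'scene value' that changes abruptly in places (a simple checkerboard)
    float sceneValue = mod(floor(uv.x * 8.0) + floor(uv.y * 8.0), 2.0);

    // screen-space derivatives: how much sceneValue differs from the neighboring
    // pixel to the right (dFdx) and the neighboring pixel above (dFdy)
    float edge = abs(dFdx(sceneValue)) + abs(dFdy(sceneValue));

    // draw edges white and flat regions dark gray - a de-noiser could instead
    // use 'edge' to decide how much blurring/blending is safe at this pixel
    gl_FragColor = vec4(vec3(edge > 0.0 ? 1.0 : 0.15), 1.0);
}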

A final disadvantage, and the main one in my opinion, is that since in reality there are 'warps' operating on blocks of pixels at the same time, too many conditional statements like 'if' statements and branching can potentially cripple your performance. In CPU land, you can have hundreds or thousands of if statements and not notice any detriment to performance, but 1 too many if statements on a GPU or webgl shader, and it could crash or not even compile! GPUs like doing the same sort of computation, or at least roughly the same amount of work, inside each tiny block of pixels. So for instance, even if you have a complex Mandelbrot fractal running, it will go at 60 fps all day every day (even on mobile), because every processor is doing roughly the same famous fractal algorithm. If, however, you give the GPU a Monte Carlo randomized path tracer, which I have crazily done (ha!), then one pixel's ray might bounce off of a glass surface, but its immediate neighbor inside the same 'warp' might pass on through that same glass surface to get the transparent effect, thus taking a completely different path than its neighbor - too much of this kind of activity and GPU divergence starts showing up to slow down your framerate. Over the years I have had to find a delicate balance between robustness of my path tracer (able to randomly sample any kind of scene and surfaces), vs. giving each tiny processor roughly the same amount of work to chew on so they're all done at about the same time - no stragglers holding up the line for everybody else!

Sorry I got into the weeds there a little, but I am really passionate about graphics, shaders, and ray/path tracing - it is a seemingly endless journey! Hope I could help, or at least point you in the right direction. Please feel free to clarify or ask questions related to your shader journey here on this thread. I'll be happy to assist if I am able!

-Erich

@vinkovsky
Author

vinkovsky commented Apr 30, 2021

WOW! Thanks so much Erich, what a detailed and thorough answer! I agree with you that the best way to learn something is to plunge headlong into an unknown topic. At one time, this is how I learned three.js - practically without looking at the documentation, I copied examples and changed some parameters. I really appreciate your work and will definitely study the books you mentioned.

I have a few questions about your project. I've seen several examples where you can change the material on an object, but can materials be combined with each other? For example, using a diffuse texture together with a normal map texture or with a metalness map.

P.S. Sorry if I made mistakes, I am not a native speaker

EDIT:

Oh sorry, I hadn't seen the demo of the animated BVH model.

@erichlof
Owner

erichlof commented Apr 30, 2021

@vinkovsky
No need to apologize for grammar - you actually have quite an impressive command of English (which is the most non-intuitive language on Earth, so I sympathize). I couldn't even begin to ask a question outside of English (my native and only language, embarrassingly), so I respect your efforts - and I understand what you are saying! :-D

In regard to the material parameters, yes, you can either have very simple single-type materials like in most of my demos, or layered PBR materials. For example, if I were to create the famous old Cornell Box, I would assign a simple DIFF (diffuse) material to the walls, floor, and ceiling of the room, as well as DIFF (diffuse) for the little short cube on the right, while the tall mirror box on the left would receive a SPEC (specular) material. The quad light near the ceiling would receive a LIGHT (emissive) material. To see all the possible choices for simple, singular materials in action, take a look at my Switching_Materials demo, in particular the 'bounces' for loop inside its fragment shader.

Using these singular materials might seem too easy, but since we are using path tracing, modeled on the real-world physical optics of light interacting with objects, these simplistic, idealized materials will produce very realistic results. But this of course depends on the assumptions you're making about the world you are rendering, and what your final project goals are. If you, like we are here on this repo, want to just get photo-realistic images to the screen as fast as possible (60 FPS), then these simple singular materials should do the job most of the time. But we must make assumptions about the environment we are rendering; we have to assume that it is a sterile, perfect-atmosphere environment (impossible in the real world, except when you're in outer space, lol), with idealized surfaces without flaws, dust, or scratches in the materials (also impossible on our 'messy' planet). But since things are moving fast and we are using correct optics laws and stuff like the Fresnel Equations, things will look photo-realistic, as long as you don't get your 'nose' right up to the surface, ha.

If however you're wanting to capture these tiny, detailed imperfections for a final beauty-image render (non real-time), or if you're an animation studio rendering out individual animation frames to be viewed in a major motion picture, like Pixar, then these singular, simplistic, idealized materials will fall short. This is where multi-layered materials and physically-based (PBR) materials come into play. Rather than specify some crazy math function for scratches and bumps, rust, metallic vs. diffuse as our eyes move across the surface (you could do it mathematically I guess, but it would be extremely difficult to come up with a multivariable calculus function for such a rough, complex surface), usually people just go out into the real world with a camera and take loads of high-detail pictures and then take out the lighting (I'm not sure how), and separate the material components as they change across the surface (for example, a corroding metal pipe in an abandoned factory that was painted white many years ago, but is now starting to reveal some rusty red metal parts underneath as it decays). Usually the CG photographers/artists separate the components of the material into what they call Albedo Map, Normal Map, Emissive Map, Roughness Map, Specular/Gloss Map, Metalness Map, and Ambient Occlusion Map (which we don't need, because we get Ambient Occlusion (AO) for free as a natural by-product of path tracing, ha!). Don't let these technical names scare you - they are just textures that are layered right on top of each other on your 3d model. Plus, you don't typically have to use ALL of them, just the main ones that are relevant to your final rendering needs. For instance, I always use the Albedo (diffuse) map, as this gives the overall color of the object. In the rusty old painted pipe example, this texture would randomly go from white paint color to red-brown rusty color as your eyes move across the surface. Then I always use the Normal Map, which gives a sense of bumpiness or small cavities. So in the pipe example, the normals would be a uniform rgb color (x,y,z direction) where the white paint is still intact, but maybe with a slight cracking/chipping bumpiness pattern where the paint is peeling and chipping, and then turn crazily random at the brown rusty parts. Then I would definitely need the Metalness map, because that would indicate to the path tracer where to use my idealized COAT vs SPEC (shiny white paint vs. metal underneath). Lastly I might use the glossiness map, as I would need to differentiate between parts of the pipe that are still looking good with a shiny coat of white paint (COAT in our path tracer), vs. worn, corroded parts that have lost their shine over the years of decay, maybe DIFF for those parts. You may have noticed that we are reverting back to our old DIFF, COAT and SPEC which I just said wouldn't be enough, but it turns out if you use these correctly with the textures as they vary across the surface (at each pixel's ray level), then physically it works perfectly and looks correct!

But in the end, the nice thing is that you don't have to think about all the physical real-world processes (like how a corroding factory pipe would look after x amount of years) - you just have to link up the provided textures with the idealized materials modeled in our path tracer. In other words, every fancy material, no matter how complex or layered, gets boiled down or funneled down to a single, basic idealized material model at that pixel's level that its ray is calculating inside the path tracer. Specifically, the Albedo Map gets a vec3(r,g,b) color, the Normal Map gets a vec3(x,y,z) surface normal which is then normalized (unit length) and used for bouncing or reflecting secondary rays, the Emissive Map (if you had a TV screen texture material for example) gets a simple LIGHT material with an emission color property vec3(r,g,b) inside the path tracer, the Metalness Map is binary (either metal or not) and gets SPEC if it is, DIFF/COAT if not, and the Gloss/Specular Map gets COAT or REFR (glass) with varying amounts of IoR (index of refraction, resistance to light rays passing through transparently) applied to the glass or plastic clear coat on top of the diffuse surface, based on what the gloss texture says to do.
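
Here is a loose sketch of that 'funneling' idea in glsl (the uniform names, constants, and helper function are illustrative guesses rather than this repo's exact code, and the tangent-space transform for the normal map is omitted for brevity):

uniform sampler2D tAlbedoMap;
uniform sampler2D tNormalMap;
uniform sampler2D tEmissiveMap;
uniform sampler2D tMetalnessMap;

const int LIGHT = 0; // emissive
const int DIFF  = 1; // diffuse
const int SPEC  = 2; // specular / metal
const int COAT  = 3; // diffuse base underneath a clear coat

void applyPBRMaps(vec2 uv, inout vec3 surfaceColor, inout vec3 surfaceNormal, inout int hitType)
{
    surfaceColor = texture2D(tAlbedoMap, uv).rgb; // overall color at this spot on the surface

    // normal maps store xyz in the 0..1 range; remap to -1..+1 and re-normalize
    surfaceNormal = normalize(texture2D(tNormalMap, uv).rgb * 2.0 - 1.0);

    vec3 emission   = texture2D(tEmissiveMap, uv).rgb;
    float metalness = texture2D(tMetalnessMap, uv).r; // treated as binary: metal or not

    if (emission != vec3(0.0))
        hitType = LIGHT; // emissive texels behave like tiny light sources
    else if (metalness > 0.5)
        hitType = SPEC;  // metal parts get the idealized specular metal material
    else
        hitType = COAT;  // non-metal parts get diffuse-under-clear-coat (or plain DIFF)
}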

To see how I used these PBR layered materials, have a look at my BVH_Animated_Model demo. In particular, first look at the .js setup file, which loads in all the maps (textures) that came with the Damaged Helmet model. Then look at the 'bounces' for loop inside the CalculateRadiance() function, where I extract the various maps' texture data and assign materials accordingly.
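
On the setup side, loading such maps and handing them to the path tracing shader looks roughly like this in three.js (the file paths and uniform names here are made-up placeholders - check the actual BVH_Animated_Model .js setup file for the real ones):

// hypothetical sketch - the real setup file does this with its own names and paths
const textureLoader = new THREE.TextureLoader();

pathTracingUniforms.tAlbedoMap    = { value: textureLoader.load('models/DamagedHelmet/albedo.jpg') };
pathTracingUniforms.tNormalMap    = { value: textureLoader.load('models/DamagedHelmet/normal.jpg') };
pathTracingUniforms.tEmissiveMap  = { value: textureLoader.load('models/DamagedHelmet/emissive.jpg') };
pathTracingUniforms.tMetalnessMap = { value: textureLoader.load('models/DamagedHelmet/metalRoughness.jpg') };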

Don't be discouraged - this is a fairly advanced topic - where you are trying to use layered materials inside a ray/path tracer. I only recently came to understand the basics of doing all this, but that was 4 or 5 years after studying how to do the simple idealized model materials, like LIGHT, DIFF, SPEC, REFR and COAT. I still have to go back and look at my old demo's source code to remind myself how to load in and use all the various maps - it is something easily forgotten if you don't work on it every day, ha! Hope this helped. :-)

-Erich

P.S. To get your 'feet wet': for seeing how to just apply a Diffuse (albedo) color texture, take a look at Billiard_Table_Fragment.glsl.
That shows how to just stick a texture on a surface, like the blue table cloth and wood grain on the cue stick and the table's wooden side rails.
Likewise, for a very simplistic use of 1 PBR texture only, a Normal Map, take a look at the classic Whitted_TheCompleatAngler_Fragment.glsl source. It shows how I just use a normal map that I simply Googled and downloaded, and then applied it to the orbiting yellow sphere, in order to make a square tiled indentation pattern effect.

@vinkovsky
Author

Hi Erich! Thank you for such a visual explanation! This is what I want to achieve. I have already started to study the code from your animated BVH model example; of course, it looks very complicated to me. I should probably start with something simpler - as you said, pay attention to the pool table. At least the direction of development is clear to me now, and I thank you again for your support.

@erichlof
Owner

erichlof commented May 1, 2021

@vinkovsky
No problem! Glad to be of assistance. I just remembered you had asked what kind of math and CG skills you needed in order to make realistic graphics with shaders. I have 2 older great online resources for learning the math and theory behind traditional ray tracing and path tracing (Monte Carlo style, like on this repo).

The first resource is Scratchapixel - in particular, if you scroll down the webpage you'll see the section called Volume 1: Foundations of 3D Rendering. This would be an excellent place to start learning the theory behind rendering. This Scratchapixel website is not complete (there are missing chapters everywhere, and I don't know if the author will ever complete them), but this Volume 1: Foundations of 3D Rendering is the only fully complete set, so that's why I recommend that first. If you are curious about Monte Carlo numerical methods and why we have to use them for path tracing sampling, then you'll also find some fairly complete sections on that under the Mathematics and Physics for Computer Graphics section, which is not complete, but complete enough for our purposes! What I like about this online learning website is that the author goes through all the details of the theory behind all CG problems, then proceeds to put everything into actual code that you could conceivably run yourself. I say 'conceivably', because he uses C/C++, so you would need a development environment set up like Microsoft Visual Studio Community (free) to be able to compile and execute the C/C++ example code. I'm actually glad he chose C/C++, because by the time you get to glsl, it looks a lot like C, which has strong variable typing, upfront declarations, gets fully compiled before execution, and no garbage collection. So you can actually use some of the math routines/algos as-is, with minimal changes, to get them to work in your future shaders.

The other resource is directly related to what we do here on this repo, which was inspired by Kevin Beason's brilliant old non-realtime path tracer in just 100 lines of C/C++ code called 'smallpt' (stands for smallPathTracer). Now, looking at his compact 100 lines, which form a complete Monte Carlo path tracer, it would be hard to decipher what's going on, especially if you have not written ray tracers before. But a CG professor by the name of David Cline took Kevin's 100 C/C++ lines, unpacked them, and then made a complete Google slide presentation which takes each line, 1 by 1, and first analyzes the mathematical theory behind what Kevin is doing, then analyzes how he got that theory into actual code that runs efficiently on a computer. Here's a link to the presentation and here's a link to Kevin's project page which has other cool older projects as well - Kevin Beason's website. By the way, although Kevin magically got this complete path tracing renderer code down to just 100 lines, the algos and math he uses are directly from the older book I told you about a couple of posts ago - it is Peter Shirley's Realistic Ray Tracing: 2nd Edition, which is now out of print but you can still find used copies at vintage online bookstores. There are many gold nuggets and gems inside this old classic book that I have directly used here on my project as well.

Best of luck on your rendering journey! Let me know if you have any other questions or concerns.
-Erich

@vinkovsky
Author

Hi Erich! I keep looking into your rendering engine and here is my progress. I was able to pass variables to the shader to disable the environment while still keeping the reflections. The next goal is to be able to use multiple textures on an object like emissive texture, roughness, metalness, etc. Thanks a lot for your tips and detailed explanations!

three.js.PathTracing.Renderer.-.HDRI.Environment.-.Google.Chrome.2022-04-01.17-22-36.mp4

@vinkovsky
Author

Now I'm learning how your volumetric light example works, and maybe I can create an image similar to my avatar, which was created in the Redshift renderer :D

T02T76FCCRH-U034DMSA7MM-09cf9ca66182-512

@vinkovsky
Author

vinkovsky commented Apr 1, 2022

I think I set up the lighting incorrectly - I got reflections from the quad light on the model but lost the light from the sun.
image
image

Here is my code

HDRI_Environment_Fragment.zip

@erichlof
Owner

erichlof commented Apr 2, 2022

Hello @vinkovsky !

Yes, I could see the problem once I checked the 'if (hitType == COAT)' section of your code. You were accidentally overwriting the dirToLight variable when calling the different sampling functions. In Monte Carlo-style path tracing, we are only allowed to choose one random outcome on each iteration. So, you can either sample the Sun (HDRI environment map), OR the Quad light, OR the Sphere light - on each bounce pass. Since we are only picking 1, but there are 3 total light options which would make the scene brighter, we must up-weight the light source that we DID end up randomly choosing. So, in code:
weight *= 3.0; // 3.0 = number of Lights
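
Expanded a little, the idea looks like this (a sketch only - the function and variable names here are illustrative, and your actual shader's sampling functions will differ):

float lightRoll = rng(); // one uniform random number in [0, 1) per bounce

if (lightRoll < 1.0 / 3.0)
    dirToLight = sampleSunDirection();                          // option 1: the HDRI / Sun
else if (lightRoll < 2.0 / 3.0)
    dirToLight = sampleQuadLight(x, nl, quads[0], weight);      // option 2: the quad light
else
    dirToLight = sampleSphereLight(x, nl, spheres[0], weight);  // option 3: the sphere light

weight *= 3.0; // we only sampled 1 of the 3 lights, so up-weight the one we chose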

Here is the corrected and commented glsl fragment shader file:
HDRI_Environment_Fragment.glsl

And here are a couple of screenshots of the more correct results (hopefully what you were looking for):

MultipleLights1

MultipleLights2

Hope this helps! :-)
-Erich

@erichlof
Owner

erichlof commented Apr 2, 2022

@vinkovsky
Oh and about the volumetric avatar rendering (pink/purple beams of light through fog), that would be really cool! Just a word of warning though - my current Volumetric rendering glsl file is set up to find glass sphere caustics for my demo. If you render something like the pic you posted earlier, you won't need any of my 'caustic hunting' code. Also, Volumetric rendering is an advanced topic - it took me a while to wrap my head around how you light all the particles in the fog. Plus, the 'equiAngular' sampling function is shader code I lifted from a fairly recent research paper by the team at Arnold renderer (I believe) - I'm not entirely sure how it works, LOL. Best of luck though! If you have any problems, just post here and I'll try to help the best that I can. :-)

@vinkovsky
Author

vinkovsky commented Apr 3, 2022

Hello @erichlof! Thanks for your corrections - I guessed it was a random choice of light. When I looked at the geometry example, I saw this line

lightChoice = spheres[int(rand() * N_LIGHTS)];

I was able to add volumetric lighting to my HDRI experiment demo and had a few questions. Is there a way to use fog on top of the environment map? I think I need to somehow pass the color of the environment in the line

accumCol += Fog_color * vHitEmission * geomTerm * trans/pdf;

Again, I'm not sure if I've correctly distributed the particles around the rectangular light source. I used part of the code from the sampleQuadLight function.

particlePos = cameraRayOrigin + xx * cameraRayDirection;

float randParticleChoice = rng();
vec3 randPointOnLight;
randPointOnLight.x = mix(quads[0].v0.x, quads[0].v2.x, clamp(rng(), 0.1, 0.9));
randPointOnLight.y = quads[0].v0.y;
randPointOnLight.z = mix(quads[0].v0.z, quads[0].v2.z, clamp(rng(), 0.1, 0.9));

if (randParticleChoice < 0.5) {
    lightVec = randPointOnLight - particlePos;
} else {
    lightVec = spheres[0].position - particlePos;
}

d = length(lightVec);

I think using SceneIntersect twice slowed down rendering. Can it be accelerated?

And another off-topic question: what IDE are you using to format the code?

image

image

My monkey code

HDRI_Environment_Fragment.glsl.zip

@erichlof
Owner

erichlof commented Apr 3, 2022

@vinkovsky
Those screenshots are starting to look really cool! That's a neat scene idea that I haven't thought of before.

Yes, that line from the geometry showcase demo that picks a random light is basically doing the same math as the one I showed in my last post for your multiple light sources demo. The only reason it is able to be squashed into a 1-liner like that is because the geometry demo has 3 very similar types of lights, all spheres with just different colors. If the lights are very different though, like in your scene, you have to do it the longer way, as I showed you recently. But it all basically boils down to flipping a coin or rolling dice to decide which light source to sample for each ray bounce loop iteration.

Yes it should be possible to have an HDRI in the background, and also to sample the HDRI light source (like the Sun in the symmetrical garden outdoor HDRI) - all while having fog on top of that. I'll have to take a moment and look over your code and my old fog particles sampling code (haven't touched it in a while), but hopefully I can come up with a solution that will give you multiple shafts of light, or shafts of shadows from each light source, assuming that's the effect you're wanting for this scene.

About my IDE, I really like Visual Studio Code now and exclusively use that. It has many handy extensions like code formatting, GLSL syntax highlighting, JS-linting, Live Server (for quick page refreshing when you make any edits at all), and much much more. Plus it's entirely free to download.

Will be back soon hopefully with some solutions to the fog and multiple different types of lights! :-)

@erichlof
Owner

erichlof commented Apr 4, 2022

@vinkovsky

Success! :-D

volumetric1

volumetric2

The first image shows just a black background. The colored fog (colored blue and magenta by the 2 lights) fades into black as it gets farther away - very cool in my opinion!
The second image correctly shows the HDRI in the background, partly obscured by the pink and bluish fog. Although it could be useful in some instances I guess, I don't really think it has the same dramatic effect on the viewer as the first image with no HDRI (just fading to black). The shafts of light vs shafts of colored light are not as pronounced with a bright HDRI like this.

I wrote some instructions in the shader for you, showing what to comment out in order to switch between HDRI background and plain black background (resulting in either of the 2 screenshots above).

With some minor modifications, I was able to sample both area lights through the fog. The fog density, L1 and L2 colors for the lights make a big difference in this situation, so you might have to play around with these values quite a bit, in order to get the look that you're wanting (like the David statue with pink and purple lasers in fog that you shared earlier).

Also, I noticed a bug in my original volume rendering code - eHitType and vHitType were declared but not initialized. This is why the background HDRI was not showing up for you. Took me a while to figure out what was wrong, ha! Updated your example and my old Volumetric Demo here on my repo as well - with the new fixed lines as well as comments warning future me (lol).

Here's a link to the new HDRI_Environment Fragment Shader

Hope you can get a little closer to your target image now! Happy volume rendering! ;-)

@vinkovsky
Author

vinkovsky commented Apr 6, 2022

Hello @erichlof! Sorry for the late reply - this looks amazing! Many thanks. I noticed that the fog around the rect light is not correct; how can I change its behavior?

Screenshot 2022-04-06 172904

@vinkovsky
Author

If I use a double-sided rectangle, I get this

image

@erichlof
Owner

erichlof commented Apr 6, 2022

@vinkovsky

Hi! Sorry, I forgot to mention that for the rectangle (Quad in this scene), the first parameter in the struct is the normal to its own surface. Right now it is set to vec3(0,-1,0), which points straight down. Take a look at:
Line 715

This will work so long as you don't change the vertices. If you do, however, a new normal must be given (or computed - see the small sketch after the list below). Sorry that this is not automatic; I intended these simple quads to be aligned with the walls, floor, and mainly ceiling of box-shaped rooms. Therefore, the normals are easy to imagine:
on ceiling: vec3(0,-1,0) / on floor: vec3(0,1,0)
on right wall: vec3(-1,0,0) / on left wall: vec3(1,0,0)
on front wall: vec3(0,0,-1) / on back wall: vec3(0,0,1)
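
If you ever do move the vertices around, the normal could be computed from them instead of hard-coded - something like this (assuming the Quad struct stores its corners as v0 through v3; adjust to whatever your struct actually holds):

// the cross product of two edges gives a vector perpendicular to the quad's surface
vec3 quadNormal = normalize( cross(quads[0].v1 - quads[0].v0, quads[0].v3 - quads[0].v0) );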

I didn't intend for this simple quad to be arbitrarily rotated. Now if you need to be able to spin it around in any random direction, I will need to replace the simple Quad with the more robust Rectangle struct:
https://github.com/erichlof/THREE.js-PathTracing-Renderer/blob/gh-pages/shaders/Kajiya_TheRenderingEquation_Fragment.glsl#L25
https://github.com/erichlof/THREE.js-PathTracing-Renderer/blob/gh-pages/shaders/Kajiya_TheRenderingEquation_Fragment.glsl#L38
https://github.com/erichlof/THREE.js-PathTracing-Renderer/blob/gh-pages/shaders/Kajiya_TheRenderingEquation_Fragment.glsl#L354

I will start trying to add this, as I feel it would be helpful in this situation.
Will return soon! :)

@vinkovsky
Author

vinkovsky commented Apr 6, 2022

Thanks for the clarification, Erich! I just didn't fully understand the meaning of this line :D It would be cool if the rectangular light had a target point

@erichlof
Owner

erichlof commented Apr 6, 2022

@vinkovsky

Alright!

rectangleLight

rectangle2

I removed the Quad geometry and light, and replaced it with Rectangle geometry and light, allowing you to point it in any direction by specifying a target direction (same as its own normal). Also, you can specify U and V measurements of the horizontal and vertical dimensions of the rectangle (when you define it in the SetupScene function).

Make sure you normalize the target direction (normal) of your rectangle, otherwise the intersection and sampling calcs might be incorrect: in this line of code, where the 'normalize(vec3(1.0,-0.1,-0.1))' is. This is the pointing/target direction of the rectangle. You can easily point it towards something else in the scene, for instance:
vec3 target = vec3(0,5,0); // or model position, or scene center: vec3(0,0,0), or anything you want to target
vec3 rectTargetDirection = target - rectangles[0].position; // creates a vector pointing from rectangle towards your target
rectTargetDirection = normalize(rectTargetDirection); //important, if you use it anywhere as a parameter, it must be normalized to unit length of 1.

As mentioned previously, the fogDensity, L1 and L2 values make a big difference in the dramatic effect. My screenshot shows a slightly less aggressive, softer light setup, but if you want full-on shafts of dark shadows coming from the dragon, you might have to make the lights much brighter and the fog a little less dense (so that the shafts last longer all the way to the bottom of the scene).

Here's the new HDRI_Environment_Fragment.glsl file. Hopefully you can just copy and paste this one into your project, replacing the older .glsl file.

P.S. Make sure to also re-download PathTracingCommon.js and replace your old file, as I have changed some of the Rectangle functions for the whole pathtracing library.

Good luck!
-Erich

@vinkovsky
Author

vinkovsky commented Apr 8, 2022

Thank you so much, Erich! Looks very cool! Is it possible to make the rect area light double-sided?

@vinkovsky
Author

vinkovsky commented Apr 8, 2022

And another question: how do I rotate it?
image

@erichlof
Owner

erichlof commented Apr 8, 2022

Sure! With some minor changes, the rectangle area light now is double-sided:
Here's the updated HDRI_Environment_Fragment.glsl file.

@erichlof
Owner

erichlof commented Apr 8, 2022

Ooh, that's starting to look cool!

Rotation of a rectangular area light is possible, but that will require more plumbing. I'll see if I can throw something together.

@erichlof
Owner

erichlof commented Apr 8, 2022

@vinkovsky

Ok I'm back! - I had to make several changes to the Rectangle code to get this working.

rotatedLight

Here's the updated HDRI_Environment_Fragment.glsl file.

The new code allows you to rotate the rectangle light along the X axis, Y axis, and Z axis. Since we are now rotating it in this manner, the older 'point-to-target' code doesn't really work anymore. I briefly tried it, but the pointing direction messes up the later user-specified rotation calculations - matrix math, doh! The new way is to specify the amounts, if any, you would like to rotate the rectangle in any or all of the axes.
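
For reference, rotating about a single axis is just a small matrix multiply - here is what a rotation about the Y axis looks like in glsl (a generic sketch of the math, not the exact code in the updated file):

float a = radians(45.0); // rotation amount about the Y axis

// GLSL's mat3 constructor takes columns; this is the standard Y-axis rotation matrix
mat3 rotY = mat3( cos(a), 0.0, -sin(a),
                  0.0,    1.0,  0.0,
                  sin(a), 0.0,  cos(a) );

vec3 point = vec3(10.0, 0.0, 0.0);
vec3 rotatedPoint = rotY * point; // rotating about X, Y, and Z just chains such matrices together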

I included a lot of comments and notes so hopefully you can just start using this.

One heads-up though: I had to make the rectangle's vec3 position a global variable, so that it may be accessed by the sampling and intersection functions as well as the rotation code parts. If you now look at the bottom of the file in SetupScene(), where Rectangle is defined, it has a position of vec3(0,0,0), or the origin. This is intended and correct, because its real actual position is now a global shader variable called rectanglePosition. To move the light around in your scene, change this global variable rectanglePosition and not anything else. The old rectangles[0].position is no longer valid or used for anything anymore.

Good luck and have fun!

@vinkovsky
Author

@erichlof

Awesome! Thank you very much, I will post my final render soon 😀

@vinkovsky
Author

vinkovsky commented Apr 12, 2022

Hello Erich! How difficult is it to create the bloom effect in this scene?

Screenshot 2022-04-12 at 23 08 03

@vinkovsky
Author

Also, my model consists of triangular polygons; how can I make it smoother?

@erichlof
Owner

erichlof commented Apr 12, 2022

@vinkovsky
Looking good! About bloom, typically that is a post-process effect that doesn't actually physically occur, but it is an artifact of camera over-exposure. My best advice (if you don't have access to Photoshop after-effects and such) is to increase the fogDensity to something thicker like 0.01 (it was at 0.00001 previously). Then make the rectangle light thin and very tall so that the top and bottom edges are outside of the image frame. The effect will be much like a light saber at night in the Star Wars films, or a bright neon light at night in a polluted futuristic city (I'm reminded of a 'Punk-Noir' Blade Runner type of atmosphere). Here's the best that I can do, while keeping everything physically plausible and not cheating with Photoshop (ha):

bloom

I've created a new fragment shader file that has these changes. Be careful not to just copy and paste everything, as I have isolated the pink laser light and am not sampling the blue light at all, just for demo purposes. But maybe you can get some settings from the file and apply them and mess around with them.
Light 'bloom' effect updated shader

It does take some tweaking! :)

@erichlof
Owner

erichlof commented Apr 12, 2022

About the triangulated look of your model, that is directly related to the vertex normals supplied by the model creator (and their model creation software). If it is appearing faceted, the vertex normals might not be smooth enough and instead what you are seeing is pure face normals (which you don't want because it will look like a low-poly retro style). If your model doesn't have smooth enough vertex normals, I believe there is a built in three.js function helper to create these for you at model loading time (at the startup of the webpage). I'll take a look and see if I can find it for you...

Update: Ok I found the helper function, but I'm not sure if it's going to fix your normals issue. Try to add the following at the beginning of my initSceneData() function (around line 215 or so) in the accompanying .js file to your demo - is it still called HDRI_Environment.js ?

if (modelMesh.geometry.attributes.normal === undefined)
{
    modelMesh.geometry.computeVertexNormals();
}

If your model doesn't have vertex normals, this three.js helper function will create them at start-up.

@vinkovsky
Author

I believe the problem is in the model itself - calculating vertex normals has no effect. After increasing the intensity of the fog, it looks good, thank you. Are you planning to add post-processing to your renderer in the future?

Screenshot 2022-04-13 at 14 18 14

@vinkovsky
Author

I also wanted to ask: is it possible to use the three.js perspective camera and orbit controls in your project? If so, what should I pay attention to during the adaptation process?

@vinkovsky
Author

Sorry, I'm asking too many questions in this thread.

@erichlof
Owner

erichlof commented Apr 14, 2022

@vinkovsky
Don't worry about asking questions! I also learn from trying to help you with your issues/desires in your rendering.

Yes it is possible to have an orbit style camera, but at the moment the way I have it set up is more of a first person camera. Yes it is already using perspective camera, but with custom rotation and flying controls. Three.js has 2 main types of cameras: Orthographic and Perspective. I've never needed Orthographic, so I don't plan on adding that anytime soon. But Perspective is the one you and I are using now with this path tracing project codebase, so no need to change anything there.

Let me see if I can hack something together to get you very close to a Three.js OrbitControls-type of camera. I think it just extends the camera's position along the +Z axis, but not letting the WASD keys move the position - instead the act of rotating with an extended Z (imagine a boom camera pole, or a "selfie stick") that is always connected to either the target (like your model), or to the scene origin ( Vector3(0,0,0) ), will move the camera position in an arc automatically. I did this for my AntiGravity pool game when lining up a billiards shot, so I think it should work without too much trouble.

Sorry to hear about the model normals fix not working for you - is it possible to send me a sharing link here to your David model itself, so I can download it? Where did you get it from originally, if I may ask?

@erichlof
Owner

erichlof commented Apr 14, 2022

Ok I implemented an OrbitControls camera for you. Here's the new setup js file - replace your old HDRI_Environment.js file with this new one:
HDRI_Environment.js
Note: It loads the Stanford Bunny model (because that's all I have), so you will have to change that line of code to your David model path on your computer. Also, if you wanted your David 'modelPositionOffset' and 'modelScale' variables to be recorded from your old file, make sure to write them down before you replace the file. ;-)

You will also need to copy the InitCommon.js file and replace your old InitCommon.js file - as I have found a better way to offer this OrbitControls option and have updated the file on my repo here. (See, I'm learning and improving too, ha):
InitCommon.js

Using an OrbitControls camera now, the mouse wheel serves a different function - it 'dollies' the entire camera forward and backward along the camera's Z axis. If you need a different FOV (which used to be mouse wheel), I have now hooked it up to the period or '>' and comma or '<' keys: the period/> key increases the FOV, while the comma/< key decreases the FOV. The camera's aperture and focus distance still work as normal, with the 4 arrow keys.

@vinkovsky
Author

vinkovsky commented Apr 15, 2022

Hi Erich! I bought the David model for 3ds Max on the 3ddd website. You can download the converted glTF here.

The new way of moving the camera around the object is awesome! I would also like to know how to disable the cursor lock when translating. Also, three.js provides two ways to pan the camera around an object; .screenSpacePanning is important to me.

@erichlof
Owner

erichlof commented Apr 15, 2022

Thanks for the David model! I will place it in my test scene now so we can both be on sort of the same page. I'll let you know if I find any solutions to the vertex normals issue.

Glad you like my brand of OrbitControls! Yes, I'm pretty sure I can make the camera pan, but it might be easier to use the WASD keys. I already have listeners for those set up. I think three.js' Orbit Controls use the right mouse button drag to pan. I could eventually implement that too, but it would require adding more listeners and a drag action-detector algo, which I don't have currently in the engine.

Speaking of drag, if you want to avoid the PointerLock (and show the mouse cursor), I can remove those lines of code for you, but then the same drag issue will arise when doing the normal camera orbiting. How will the app know that you are wanting to orbit or pan, if the PointerLock is not engaged and you are just moving the mouse around? The correct way would be to have the cursor always visible, and the camera does orbiting/panning only if you click and drag the mouse (left button/right button). But since I use first person style cameras for all the demos, I just rely on key presses, since the mouse pointer is usually invisible and locked.

I'll see if I can throw something together that satisfies your needs, but at the same time not requiring too much refactoring and plumbing work on my end, ha.

Talk to you soon!

@erichlof
Owner

erichlof commented Apr 17, 2022

Sorry for the delay. It being Easter weekend and all, I haven't had as much time as usual to work on this. I successfully got the David model in my scene now and I also implemented panning with the orbit controls camera. Now I'm working on adding a couple of GUI sliders so that we can control the fog density as well as the 2 laser light brightnesses individually. This will help in faster iteration time when trying to match your old avatar rendering.
More updates coming soon!

@erichlof
Copy link
Owner

erichlof commented Apr 19, 2022

@vinkovsky

Here's as close as I can get by just eyeballing it:

StatueOfDavid

I believe I have the camera angles and the lasers very close to your old avatar pic, but the thing that I can't seem to match is the dramatic, crisp light and shadow on David's face. On my renderer, due to the particles of fog, it seems to wash out his facial features and over-expose the body of the statue toward the white end of the spectrum. Your old rendering seems to keep the blue-purple darkness of the statue. I've messed around with all the new settings, but I just can't seem to hit on the right combo. More troubling still is that there may not be a right combo at all - due to how my renderer works vs. how the old one that you used worked under the hood (so far as ray tracing is concerned).

But in any case, I have completely overhauled the js and the glsl files, so that now the GUI more finely controls the lights and the fog of the scene. Also, WASD now pans the camera left and right, up and down, while the mouse wheel dollies the camera position back and forth, more like a physical camera on a movie set.

Here is the new JS file,
and the new Fragment Shader file.

As previously discussed, the only change I could not do is unlocking the cursor while manipulating the orbit camera, because that would have required a lot more internal plumbing overhaul. But hopefully, the new Orbit Camera that I implemented will be intuitive enough to use while searching for that perfect frame/shot camera angle.

Hope this all helps!

@vinkovsky
Author

vinkovsky commented Apr 20, 2022

Hi Erich! I am in no way rushing you, happy holidays! You have already done a lot for me.

Your changes look very cool!!! I think the main reason David's face looks so bright is that the light from the lights is too diffuse. For example, in the Corona renderer, you can adjust the direction of the light rays more flexibly.

image

image

Making the rectangular light one-sided and rotating it in front of David's face without illuminating him would be better. Can this be done without a global code rewrite?

BTW, the rest of the picture is very similar, it looks just amazing!

image

Panning works great! Based on your code, I will try to set up the native three.js OrbitControls.

How did you manage to fix the shading of the triangulated model?

@vinkovsky
Author

vinkovsky commented Apr 20, 2022

Perfect for my new avatar :D

image

P.s.

I also switched to Cineon tone mapping for a more contrasty image.

@vinkovsky
Author

vinkovsky commented Apr 20, 2022

Rendering on my iPhone 11. Pretty fast!

Upload.from.GitHub.for.iOS.MOV

image

@erichlof
Owner

erichlof commented Apr 20, 2022

Wow! Those renderings are looking amazing! I'm so glad you were able to dial in the appropriate settings to bring back the contrast and shadow details to David's face. It's so much better now, congrats!

And equally impressive is how well it runs on your iPhone 11! That image and video you posted is why I started this three.js path tracing project in the first place - I had the dream of being able to experience real time progressive rendering on all devices, even my phone when I'm on the go!

Yes about the lights, (thanks for posting those images of Corona renderer), I agree that one-sided panel lights will work better for your scene. At the moment, I'm not sure how to narrow or widen the angle of total illumination from the rectangle (like in your first Corona pic), but that has inspired me to try and see if I can get something similar in the near future for my project. It probably works similar to how a spot light controls its output angle - should be doable. Anyway, for now I will make the rectangle lights one-sided for you (don't worry, it doesn't require many changes at all).

Good to hear that you like the new orbit-style controls! Yes, if you look at three.js' source code for their Orbit Controls, you should be able to see how they implement mouse dragging for orbiting/panning (remember, you would have to disable pointer lock), and for your iPhone 11, you could also add a 2-finger swipe detection to control panning. I may eventually add all of this to a 2nd camera (Orbit Camera) option for my path tracing renderer here, but I'm currently working on other parts of the codebase.

About the normals and triangulation issues you were having, but that are now gone (yay!), that is actually somewhat of a mystery. When I loaded in your David model that you shared, I really couldn't see any triangulation issues on my system. However, with each new version of the Gists (source files) that I'm sharing with you, I have been removing some of the old code from my outdoor Dragon HDRI environment demo. So maybe something left over from that was messing up the current fog and laser light environment lighting that we have now. That's one of those happy bug-fixes that just magically take care of themselves, ha!

Will return soon with the 1-sided rectangle lights adjustment...

@erichlof
Owner

Whoo-hoo!

DavidStatue

Here's the new Fragment shader file that has 1-sided rectangles.

By the way, I didn't know if you knew or not, but you can change the length and width of the laser rectangle lights in the shader. Just find their definitions inside the SetupScene function towards the bottom of the file. You can see some of my changes in order to achieve this most recent render:
list of changes

I just didn't have time to add those rectangle U and V lengths as sliders to the GUI. But maybe you can see now how I hook up certain sliders/parameters inside the JS setup file and matching Fragment shader GLSL file. For instance, you can look at lightPower as a good example of how to add your own sliders.

@vinkovsky
Author

Thanks! I will try to implement it.

Why don't you use MRT rendering in your PT?

https://threejs.org/examples/?q=webgl2_multiple_rendertargets#webgl2_multiple_rendertargets

@vinkovsky
Author

Speaking of which, an orthographic camera would be a cool tool for this project. It would allow rendering images like this one

image_processing20200711-21856-15438nr

@erichlof
Owner

erichlof commented Apr 20, 2022

Hello!

About MRT, I do use render targets, but as more of an internal function. In a nutshell, three.js (and all WebGL-based software) are rasterization APIs (as opposed to NVIDIA's RTX hardware-supported ray tracing API). Forgive me if you already know this, but in the typical WebGL pipeline, 3D Triangles are supplied to the scene, then they are projected to the screen as 2D triangles, and then any pixels on the screen that happen to cover these flattened, projected 2D triangles are sent to the fragment shader for any color calculations that we want done on them. Since I am forced to use this basic rasterization system of at least providing 1 triangle (or 2), I chose to provide 2 very large triangles back-to-back, that form a full screen quad (having just 2 triangles, our vertex shader is almost non-existent!). Since every pixel on the screen will cover 1 of these 2 huge triangles, every pixel gets sent to the fragment shader (or pixel shader), where all the PT magic happens!

Getting back to render targets, I use the path tracing glsl fragment shader (the heart of PT) to perform ray tracing/path tracing on every pixel in parallel (well, almost, because of how GPUs are designed), and then save these results of the 1st frame of animation (samples=1) to a Render Target. Then I have another very simple 'copy' shader which pulls from this render target (as a screen-size texture, basically), and copies it into another Render Target. This copied Render Target is fed back through to the first path tracing shader, again as a large screen-sized texture. On the 2nd frame of animation (samples=2), the path tracing shader now has its own color history available for each pixel. It then blends the past history color with the fresh new color that it just got done calculating (through PT), and keeps doing this over and over. Pretty soon, after repeatedly blending with itself, the image will magically converge! So I do rely on multiple Render Targets, but maybe not the way you're asking about.
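
In three.js terms, the blend-and-repeat cycle described above looks roughly like this (a simplified sketch with illustrative names - it assumes the full-screen quad scenes and the path tracing / copy / output ShaderMaterials and their uniforms have already been created, the way the real setup files do):

// two float render targets: one to path trace into, one holding the accumulated history
const renderTargetA = new THREE.WebGLRenderTarget(screenWidth, screenHeight, { type: THREE.FloatType });
const renderTargetB = new THREE.WebGLRenderTarget(screenWidth, screenHeight, { type: THREE.FloatType });

function renderFrame()
{
    // 1) path tracing pass: read last frame's history, blend it with the freshly traced colors, write into A
    pathTracingUniforms.tPreviousTexture.value = renderTargetB.texture;
    renderer.setRenderTarget(renderTargetA);
    renderer.render(pathTracingScene, quadCamera);

    // 2) copy pass: copy A into B, so B becomes the 'history' texture for the next frame
    screenCopyUniforms.tPathTracedImageTexture.value = renderTargetA.texture;
    renderer.setRenderTarget(renderTargetB);
    renderer.render(screenCopyScene, quadCamera);

    // 3) output pass: tone-map / gamma-correct the accumulated image and draw it to the canvas
    screenOutputUniforms.tImageTexture.value = renderTargetB.texture;
    renderer.setRenderTarget(null);
    renderer.render(screenOutputScene, quadCamera);
}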

The ortho camera would be a cool addition indeed! Thanks for the suggestion - I feel like I could add something like that without too much trouble. It's all in the ray generation part of the 'main' shader function inside the large PathTracingCommon.js file. If I could make the camera rays go straight out (instead of fanning out like in the Perspective camera), it should do the trick. I'll play around with it. Thanks for the ortho pic - quite cool!

@erichlof
Owner

erichlof commented Apr 21, 2022

I successfully got the Orthographic camera working!

Orthographic

Orthographic2

Orthographic3

It still needs polishing - certain areas of the scene get unintentionally clipped, just because of the way the Ortho camera position and its view clip planes work. The neat thing is that it only required 3 lines of code, ha. That's the beauty of Ray Tracing vs. traditional Rasterization. Something that would have required a whole new file and dozens to hundreds of lines (to implement Orthographic view matrices and such) in traditional Rasterization (WebGL, OpenGL, DirectX, etc), can be done with our Ray Tracing system in a couple of lines. ;-)
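
For anyone curious, the gist of the change is in how each pixel's camera ray gets built (a generic sketch of the idea with illustrative variable names, not the literal 3 changed lines):

// pixelPos ranges from -1 to +1 across the screen (derived from gl_FragCoord and the resolution)
// camRight, camUp, camForward are the camera's basis vectors

// Perspective: every ray starts at the camera position and fans outward through its own pixel
rayOrigin    = cameraPosition;
rayDirection = normalize( pixelPos.x * camRight * uULen + pixelPos.y * camUp * uVLen + camForward );

// Orthographic: rays start spread out across the camera's plane and all travel straight ahead
rayOrigin    = cameraPosition + pixelPos.x * camRight * orthoWidth + pixelPos.y * camUp * orthoHeight;
rayDirection = camForward;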

Haven't figured out how to expose this new feature yet - maybe as part of the setup options on each demo's JS file. Default would be Perspective camera of course, but with a simple flag, it would turn into an Orthographic camera. What's interesting about this whole Ortho thing is that it fools around with your eyes - ha. The path tracing part gives you basically visual reality (like in real life environments), but the Ortho projection is something that none of us have ever truly experienced, because of the way our optic biology is built. The images above, as well as all the other scenes I've tried it out on, look incredibly real and unreal (surreal?) at the same time!

@vinkovsky
Author

Hi Erich! Thanks for the explanation of MRT - I had simply looked through the InitCommon file and noticed several lines of code with RenderTarget. Now I understand it better.

Your ortho camera is awesome! Could you please share the code? Maybe I will try to make a picture like the one I dropped above.

@erichlof
Owner

erichlof commented Apr 22, 2022

Ok I just implemented the Orthographic camera option for the whole repo! (Well, currently for the static scenes, not the dynamic moving scenes - coming soon!)

Here's the list of changes to path tracing 'main' that is used by all static-scene demos. I made changes to only 3 lines (marked with a red color on left side of page): changes to shader main function

This is one of those beautiful Ray Tracing cases where you get awesome functionality for only a handful of lines of code - the other case being the blurring effect of the camera's Depth of Field feature - like 5 lines of code, ha! 😁

A couple of notes on using this new feature:

  • You need to replace your InitCommon.js file with my new one I just posted (contains code for switching camera modes by pressing O for Ortho and P for Perspective)
  • You need to replace your PathTracingCommon.js file with my new one (has the small shader changes discussed above)

It will work across all demos that are static. The reason it doesn't yet function on the dynamic moving scenery demos is because they do not use the path tracing 'main' shader function which I just updated with the new ortho code. Rather, they use a custom 'main' shader function that is tailor-made for each dynamic situation. I will have to go and manually change all those, one by one. But it won't be too much work - it's just a couple of lines for each demo.

Also, in the case of your David demo scene, we disabled useGenericInput so you could operate the OrbitControls using your custom key presses. If you want to use Orthographic camera mode with a demo like yours that has non-generic custom input handlers, then all you need to do is add the following line to your initSceneData() function inside your demo's accompanying JS setup file:

changeToOrthographicCamera = true;

It's as simple as that! Doing this will begin your non-generic scene in Ortho mode. If ever you want to have Perspective mode, simply remove this line of code - because the default for all demos is Perspective mode.

Have fun!

@erichlof
Owner

erichlof commented Apr 22, 2022

@vinkovsky
Quick Update: I just now changed the InitCommon.js file again. Make sure to get the latest version. I added the Ortho option to mobile platforms also, so now you can use the GUI checkbox on your phone or tablet to easily toggle Ortho/Perspective modes.
You no longer need to add that line of code I told you about in the last post. I made it available to all demos by default. The only time you have to do anything extra like that now, is if you do not want ortho camera as an option for the end user. Now we just simply press O for ortho mode and P for Perspective mode. I'm happy with how well it turned out!

Also, I forgot to add the 'O' and 'P' keypress handlers to your custom HDRI js setup file (the one rendering David statue), since we aren't using the default generic input that the other demos have. Here's the updated file
Again, if there are any GUI settings for the area lights that you have now, maybe write them down, so when you replace the js file with my new one, you can add back in your correct GUI settings!

Enjoy!

@vinkovsky
Author

Hello Erich! Amazing work! Exactly what I need. I'll try to create an isometric image soon. At the same time, I will study how texturing works. Thank you!
