Just wondering how you handled something #68

Open · Pjbomb2 opened this issue May 17, 2022 · 14 comments

Pjbomb2 commented May 17, 2022

Hey, sorry for opening an issue for this, but I don't know how else to contact you. I've been trying to roughly copy what you have with the volume rendering, for rendering things like crepuscular rays, but I've been having a major issue that I'm out of ideas for how to solve, so I figured I should ask you. My issue is that the crepuscular rays seem to brighten the objects behind them, in a way that makes them clearer, and being closer to a light source makes the entire environment around me physically brighter. I was wondering if you ran into this kind of thing, and if so, how you handled it? Thank you! And sorry for the trouble, but this project is absolutely amazing!

erichlof (Owner) commented May 18, 2022

@Pjbomb2
Hello and thanks for the kind words! No worries about posting issues here on the GitHub repo: I actually prefer it to private messages because that way, other people having similar issues can possibly benefit from our discussions. So all is good!

About the crepuscular (ah, such a flowery scientific word) rays, I'm going to have to clarify with you a little further so I know exactly what the problem is. To make sure I am understanding correctly: is the problem that you do not want objects behind the crepuscular rays (let's just call them 'light shafts' from here on, ha) getting brighter?

If that is indeed what is happening, then there might be a small error in your shader code. If you look at my Volumetric Rendering Demo with the two spheres in a red and blue Cornell Box surrounded by blueish fog, the light shafts are actually not a physical entity. It's the shadows from objects blocking the light source that give the pretty shaft effects. If you remove the light blockers (spheres in my case) from the scene altogether, your scene should have a slight light gray, or blue, or yellow (whichever mood you're trying to set) fog around the entirety of the scene. It's exactly like classic OpenGL fog: the closer you are to an object, the clearer (and possibly brighter from color saturation) the object will be. As you pull the camera backwards, the farther you get from the object, the more desaturated and hazy (and possibly darker) the object will become. If you have this fog mechanic working, you are ready to add the shadow shafts (and the resulting crepuscular ray effects).
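Roughly, that classic fog falloff looks like this (a minimal sketch with illustrative names, not my demo's exact code):

```glsl
// minimal sketch of classic exponential distance fog (Beer-Lambert falloff);
// fogColor and fogDensity are illustrative parameters, not the demo's exact names
vec3 applyFog(vec3 surfaceColor, float distToSurface, vec3 fogColor, float fogDensity)
{
	// transmittance: 1.0 right at the camera, falling toward 0.0 with distance
	float transmittance = exp(-distToSurface * fogDensity);
	// nearby objects stay clear and saturated; distant ones fade into the fog
	return mix(fogColor, surfaceColor, transmittance);
}
```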

When you place the blocker near the light source, shafts of dark areas should appear underneath the blocking object. These shadow shafts will start small and tight, hugging the bottom of the blocking objects, and then will increase in their radius as your eye follows them down into the depths of the scene.

It's a little bit of an optical illusion: normally when we see 'God' rays coming dramatically through openings in cloud cover, our eyes 'see' the light areas that are not shadowed, our attention is drawn to those, and they become almost real objects themselves. But in reality they are just part of the whole scene's background fog lighting, and it's actually the shadow shafts that make the separate areas of light vs. dark, so that the shaft effects are even visible.

In other words, nothing should be getting unusually brighter or clearer just because it happens to be in or behind a light shaft (which is not a real thing, remember). The only thing that should happen is that if objects are directly in the shadowed volume (or dark shaft), they will not be able to see the light source directly, and therefore will indeed appear darker than other objects that can see the light source directly.

Let me know if this sums up the issue you're facing, or if not, please clarify further, possibly with a PRTSC screenshot from your computer, posted here in the comments. Additionally if you like, you can create a GitHub Gist and copy and paste your shader there with a public link so I can read through it.

I will say that Volumetric Rendering was one of the most challenging topics for me personally, just to make everything work and look right without any obvious artifacts. The Volumetric sampling function I use, I copy-pasted from a Shadertoy demo (ha), whose author copy-pasted it from a Solid Angle team (famous for their Arnold renderer) research paper (double ha). I don't fully understand how this Volumetric 'equiangular light sampling' function does its magic, but it works like a charm and it's freely available, so I use it! 😉
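For reference, that equiangular routine from the Shadertoy/paper lineage is short; here's roughly what it looks like (parameter names here are my own):

```glsl
// equiangular light sampling (after Kulla & Fajardo, Solid Angle):
// picks a distance along the camera ray with pdf proportional to
// 1 / (squared distance to the light), so samples cluster near the light
float sampleEquiAngular(float u, float maxDistance, vec3 rayOrigin, vec3 rayDirection,
                        vec3 lightPos, out float pdf)
{
	// signed distance along the ray to the point closest to the light
	float delta = dot(lightPos - rayOrigin, rayDirection);
	// distance from the light to that closest point
	float D = length(rayOrigin + delta * rayDirection - lightPos);
	// angles subtended by the ray segment [0, maxDistance], seen from the light
	float thetaA = atan(0.0 - delta, D);
	float thetaB = atan(maxDistance - delta, D);
	// uniformly sample the angle, then map back to a distance along the ray
	float t = D * tan(mix(thetaA, thetaB, u));
	pdf = D / ((thetaB - thetaA) * (D * D + t * t));
	return delta + t;
}
```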

Looking forward to discussing this issue with you further. 🙂
-Erich

Pjbomb2 (Author) commented May 18, 2022

Thank you so much for your response!
But this is what I'm talking about:
[GIF: ezgif com-gif-maker (50)]
The objects that are behind sun rays get brightened and clearer. I'm trying to evaluate both parts at once: the chance that a ray hits a particle and the chance that it hits a surface, with the particle ray being evaluated by NEE and the main ray getting evaluated normally.
Oh, also, yes! I'm in the same boat with equiangular sampling: it's magic, but it works like a charm, so that's what I'm using, with 1 sample per ray.
Though I also don't see much reference to modifying ray throughput in your code.

Pjbomb2 (Author) commented May 18, 2022

Or rather, here's a better example:
[screenshot]

You can see it's not just brightening the area; it's making it clearer.
However, I only have access to direct lighting channels, indirect lighting channels, and throughput; I don't accumulate total color.
This effect holds true if you fly into the sunbeam:
[screenshot]

erichlof (Owner) commented May 18, 2022

Oh! That's a cool use-case project you have there! Are you using the volumetric lighting inside some kind of engine, like Unity or Godot? Or is this captured from a browser with WebGL? I really like the scenery geometry and the light shafts are a perfect touch for this kind of environment. Reminds me of an id Software Quake-type game - good work so far!

Thank you for posting the helpful video and screenshots. I can see better now the issue you are trying to solve. As I mentioned before, Volumetric Rendering is not my strongest area, but I might have some tips for you.

The 1st screenshot with the tapering horizontal light shaft shows 2 issues:

  1. The beam itself should spread out like a flashlight as your eye starts at the small window on the right, going left, deep into the room. Instead, the beam is somehow getting more narrow, which isn't physically plausible. To fix, remember that with path tracing (and ray tracing in general), we must work backwards from the camera, then prioritize the areas of the scene where there is any chance that the rays will find light. The camera rays will start from the camera position, go through each pixel on the view plane (your device screen), then will randomly sample particle density (fog density) as they make their way towards the back of the room. In my shader code, these random spots along the ray are called vRayOrigin (the dust particle's position out in the scene) and vRayDirection (the direction it will eventually take towards the light source). In other words, there should be camera rays that find dust particles at the top and bottom of your image too. Then they will trace back to the available light source, which should be a random spot on the rectangular window opening on the right side of the scene. Chances are that particles directly above and below the window opening won't be able to successfully hit the window, and will remain in shadow. But as our eyes travel left in the image, the chances are greater and greater that particles can find their way back to the light window with just one straight sample ray. This should give the flashlight or spotlight effect if done correctly.

  2. The main issue that you're writing about: the fact that objects behind the light beam volume (not in it) are getting a weird global ambient lightening that is over-saturated, and additionally is missing many shadows. I could be wrong, but I think this has to do with the order you render your scene, and the ray bounce budget. Just looking over my volumetric rendering demo shader code bounces loop, I first gather the dust/light particles on each iteration. This will render shafts of light and dark only, no geometry. Then right after that, I path trace as normal. If normal direct lighting is successful for a geometry surface, then that surface is lit accordingly. Also note that at this very moment, you have to take into account the dust particles or fog which the viewer must look through, which will de-saturate and darken slightly any objects that are farther out from the camera. This important small step is what I believe is missing in your shader code.
    Similarly, if the normal path tracing direct light ray runs into any geometry (like a wall or the big staircase) on its way to the window, the surface is left dark and in shadow (just like a normal path tracing scene).
    In theory (ha), these steps should all come together to form a correctly fanning-out light beam, with objects getting more obscured and desaturated as they move farther from the camera; whether or not they happen to be in a light shaft doesn't matter. It's just like old-school OpenGL fog for all scene objects. Btw, you can adjust how thick the fog is with the density parameter. (See the sketch of this loop ordering right after this list.)
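Sketched out, the ordering looks something like this (all the helper names here - SceneIntersect, sampleParticleLighting, directLighting, sampleBRDF - are hypothetical stand-ins, not my demo's exact functions):

```glsl
const int MAX_BOUNCES = 4;
const float FOG_DENSITY = 0.05; // illustrative value; tweak per scene

vec3 traceWithFog(vec3 rayOrigin, vec3 rayDirection)
{
	vec3 accumulatedColor = vec3(0.0);
	vec3 throughput = vec3(1.0);

	for (int bounce = 0; bounce < MAX_BOUNCES; bounce++)
	{
		float t = SceneIntersect(rayOrigin, rayDirection); // distance to nearest surface

		// 1) gather in-scattered light from fog particles along this segment
		//    (this alone paints the light and shadow shafts)
		accumulatedColor += throughput * sampleParticleLighting(rayOrigin, rayDirection, t);

		// 2) attenuate everything behind this fog segment (Beer's Law), so
		//    distant surfaces come out darker and more desaturated
		throughput *= exp(-t * FOG_DENSITY);

		// 3) then shade the surface as usual: NEE direct light, next bounce
		vec3 hitPoint = rayOrigin + t * rayDirection;
		accumulatedColor += throughput * directLighting(hitPoint);

		rayOrigin = hitPoint;
		rayDirection = sampleBRDF(hitPoint);
	}
	return accumulatedColor;
}
```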

Hopefully this makes sense, and maybe you could check over your shader's ray-bounces loop to see if all these elements are in place. Is there a way you could share the shader code with me in a GitHub Gist? I like Gist a lot for this kind of thing, and it's easy to create a Gist (+ icon in upper right corner) and then copy, paste, and get a shareable link that you can post here in the comments section.

I think that would help a lot, to be able to see the bounces loop myself. Talk to you soon!

Pjbomb2 (Author) commented May 18, 2022

This does make sense, thank you! The scene in the screenshot I got from SketchUp; you can find the exact link on my GitHub, but it's around 1 million polys.
Currently I'm aiming for real-time, which means things have to converge FAST.
Currently I have the evaluation of the shadow ray in an entirely separate kernel, as I'm doing wavefront path tracing, so I can't know immediately whether or not it successfully hit. Instead, I send a value with the shadow ray that gets added to either the indirect or direct lighting in the event of a successful light hit. However, the package of light I send with the light ray happens after the main shading evaluation for that bounce, and therefore can't modify the throughput of NEE rays or the main shading evaluation. Should I change this to its own kernel that runs before, so I can accumulate the changes in the lighting buffers and modify the throughput before doing any other shading that bounce?
Thank you so much!!

erichlof (Owner) commented May 18, 2022

@Pjbomb2
Cool! About the kernel order, hmm... I'm not sure whether changing it so that the dust-particle lighting happens first would fix the issue or not. Looking back over my ray-bounces loop in the shader, the dust-particle light is added, and then the usual NEE direct light is added to any surfaces that are seen by a particular camera ray. Since lighting is additive, I'm not sure it will make a difference which one you add first. The thing that has to happen, though, is that when objects are lit in the normal manner with NEE direct lighting, you multiply the camera ray's throughput by the fog transmittance (Beer's Law). The code for this is just a couple of lines in my demo's shader code. But if you're restricted on which passes you can touch in the rendering pipeline, maybe the kernel re-ordering is worth a try (if it's not too time-consuming an experiment).
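Those couple of lines boil down to something like this (illustrative variable names, not my shader's exact ones):

```glsl
// inside the bounces loop, before adding NEE direct light:
// d = distance this camera ray traveled through the fog to reach the surface
throughput *= exp(-d * FOG_DENSITY); // Beer's Law transmittance
accumulatedColor += throughput * directLightContribution;
```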

What back-end engine (Three.js, Unity, custom?) and what back-end language are you using to render all this geometry? Does this run in the browser, or is it a native C++ app?

erichlof (Owner) commented May 18, 2022

One more note, if you are doing real-time rendering (like for a game), then you may want to consider some sort of simple de-noising. In all of my demos, I use a simple custom-made one that you can feel free to use/copy/learn from. It's located in the ScreenOutput_Fragment.glsl file, which is in my shaders folder. I do a simple box-blur filter for all diffuse surfaces, and then for specular surfaces like glass, metal, etc., keep those at sharp resolution (no filter applied). The tricky part is handling the boundaries between diffuse (blurring is desired) and specular (blurring is to be avoided). For this, an edge detector is required - which I also have in place. The edge detector is in the main() function at the very bottom of the PathTracingCommon.js file in my js folder. It takes some tweaking on a scene-by-scene basis, but it makes a huge difference, especially for real-time dynamic scenes.
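The core idea boils down to something like this (a rough sketch, not the actual file contents; the buffer names and the specular flag channel here are assumptions):

```glsl
// edge-aware box blur: smooth diffuse pixels, keep specular pixels sharp;
// uPixelBuffer, uResolution, and the specular flag stored in .a are assumed names
vec4 denoise(sampler2D uPixelBuffer, vec2 uv, vec2 uResolution)
{
	vec4 center = texture(uPixelBuffer, uv);
	if (center.a > 0.5) // specular/glass/metal pixel: no filtering
		return center;

	vec4 sum = vec4(0.0);
	float count = 0.0;
	for (int y = -1; y <= 1; y++)
	for (int x = -1; x <= 1; x++)
	{
		vec4 s = texture(uPixelBuffer, uv + vec2(float(x), float(y)) / uResolution);
		if (s.a > 0.5) continue; // edge detector: don't blur across a specular boundary
		sum += s;
		count += 1.0;
	}
	return sum / count;
}
```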

Speaking of the old Quake series again, my inspiration for this was NVIDIA's insane real-time Quake RTX demo from a couple of years ago. Their real-time denoiser is much, much more sophisticated, in that it re-uses diffuse pixel samples and re-projects them for the next frame when the camera moves. I believe Minecraft RTX also does something similar with its shaders. I'm not at that level of wizardry yet, so I just have my simple custom denoiser that makes a good impact without a lot of extra algos and code. But someday, I would love to be able to do what they did with Quake RTX. That would look incredible with your scene!

Pjbomb2 (Author) commented May 18, 2022

OK so, sorry for the delayed reply, I got distracted. This is using Unity only for things like mesh and texture loading; everything else is custom and can be found on my GitHub under the repository "Compute-Shader-Unity-PathTracer". I've tried to make a path tracer that performs well, is completely compute-based (so no RTX hardware used; it'll even run on integrated graphics, I have tested this), and is real-time. Currently my denoising options are a-trous for static scenes, but that messes with color due to the multiply by albedo at the end of it, and SVGF for dynamic scenes (though I either need to tune it heavily or find something else, as it takes 13 ms at 1080p and is still very, very noisy for scenes with lots of lights/fireflies, but when it works, god is it magical). Currently my best-performing scene is Intel's new Sponza, around 5 million triangles with the curtains, running at a steady 40 fps at 1080p on my system (3080 mobile).
Also noted! I will take a look at your denoiser! I remember when I was looking for denoisers I tried to find where yours was but couldn't.
Also, Quake uses ASVGF, an improved variant of SVGF, and I think Minecraft also uses ASVGF.
I should take a video of what the SVGF filter looks like and also put that on my mess of a repo.
And thank you so much for your help! I suppose at this point the last obvious thing I could try is reorganizing the fog into its own kernel, as that will let me see which rays got interrupted and which didn't before the shading stage, though at this point I'm not sure what to do with that information once I have it.
Thank you!!

erichlof (Owner) commented

Wow, that sounds great! Although I've only dabbled in it, I really like Unity's streamlined interface/editor. It's really cool that you are using the strengths of Unity for engine-related stuff, and then using real-time path tracing shaders on top for realistic rendering. I wish I could help further, but as I mentioned, I don't have very much experience with Unity, let alone their shader system. But feel free to share a shader whenever you want (even if it's not in GLSL, I can sort of make it out).

I would love to incorporate the ASVGF you mentioned into my renderer, but before I could do any porting, I would have to understand how it works at a detailed level in order to make a successful js/GLSL port. In any case, hopefully you can check out how I did my simple denoiser here. I literally started from 0. Like most things on this project, I feel a need to understand the algos/code before I drop stuff into the codebase. Since there were no tutorials on how to make GLSL denoisers, I rolled my own! Lol.
If you need any explanations of the denoiser and how it works under the hood, feel free to post here, or you could start up a new issue thread - either is totally fine by me.

Best of luck - please keep me informed if you find a solution to the volumetric lighting issue!

-Erich

Pjbomb2 (Author) commented May 18, 2022

Thank you so much! I'll try the additional kernel. As for me, I've never been able to go off papers; I've usually had to have code to look at to implement something, but ASVGF is truly quite magical.
I'll let you know of any progress with the volume things, but there are a few other things I wanna fix, such as point lights making directional lights darker (something to do with the MIS weights for NEE).
But overall, thank you so much! I'll be sure to come back if I have any other questions!

Pjbomb2 (Author) commented May 19, 2022

[screenshot]
Well, I got to here.
It's better, but it's washing out stuff (which is correct, right?). More importantly though, the rays are super powerful, even though in the equation
exp(-(hit.t * sigma_t))
sigma_t is like 0.00000000001.
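For scale, assuming hit.t spans even a few thousand units:

```glsl
// with sigma_t = 1e-11, the extinction term is essentially a no-op:
float transmittance = exp(-5000.0 * 0.00000000001); // = exp(-5e-8) ~= 0.99999995
// so at this magnitude it's the in-scattering term, not extinction,
// that controls how strong the beam looks
```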

erichlof (Owner) commented May 19, 2022

That's looking better! I'm not sure what values are good general values for their (Solid Angle team's) equations.

Your cool images have inspired me to try a little experiment: I will try setting up a very simple scene, but with lighting effects that resemble your first horizontal-beam screenshot with the small window opening on the right wall. I may just put 2 spheres on the ground at the back of a Cornell Box room, then place a small square opening on the right side of the box, and a strong light source just outside the window. I'll place dusty fog around the scene and see if I can get the result to look how I imagine it would be. I'll also check that the beam in front of my room test scene does not overly wash out or artificially brighten the 2 sphere objects behind it.

Will be back in a day or two with hopefully some results. In other words, I'm going to put the Solid Angle team's equations to the test! Ha! 😁

erichlof (Owner) commented May 19, 2022

@Pjbomb2

Back with good news!
[Image: LightShafts0]

[Image: LightShafts1]

[Image: LightShafts2]

[Image: LightShafts3]

The results are promising in that they match what I was seeing in my mind's eye before I created the scene.

The 1st image shows how the light beam needs to fan out like a flashlight or spotlight.

The 2nd image shows 2 things: that the beam itself is an optical illusion - it is just part of the background fog and should match the background exactly. And since the camera is pulled back from the scene, you can see the white and yellow spheres have been increasingly obscured and desaturated with dust particles.

The 3rd image shows that the beam does not alter the lighting or shadows for the scene objects behind the beam. This is physically correct behavior, as far as I can tell. All the beam should do to objects behind it is maybe tint them ever so slightly - in my scene's case, a hazy blueish color. But it's almost imperceptible.

The last image shows the camera placed directly in the light beam's volume. Again, the background objects may get a slight blueish tint, but reflections, color, and shadows are all intact.

BTW, the reason for the extra rectangle panel light over the spheres is to give them very visible contact shadows (and a reflection highlight in the yellow sphere).

So in conclusion, I guess the team at Solid Angle (Arnold renderer) knew their stuff and therefore deserve whatever pay they're getting! LOL

Here's a live link to try it out for yourself:
Light Shafts Demo

And the main shader source code file is here

Let me know if you have any questions about the shader. I really didn't alter it that much from my old demo (just took out volumetric caustics and replaced sphere lights with rectangle lights).

-Erich

Pjbomb2 (Author) commented May 19, 2022

Oooh, thank you! I think I mostly fixed the volumes, not sure on my end. One thing though is that my light source was technically infinitely far away, as it's a directional light, so it's much more concentrated. About to update my GitHub again, as I'm mostly happy with it.
The biggest problem right now is that I don't think the colors are attenuating correctly with the density of the fog in mine.
I moved fog into its own buffer, detached from the rest, and it seems to have really helped, as it's no longer multiplied by albedo. The way I ended up having to implement fog density is by multiplying the actual value added to the fog buffer by a small value.
Thank you so much!
