More samples per frame option #64

Open
tom-adsfund opened this issue Feb 16, 2022 · 60 comments

@tom-adsfund

On high-end graphics cards, a bottleneck is the 60fps cap. As part of the "more abstractions", it would be good to have an option to adjust the number of samples calculated per frame. This could then be experimented with, maybe making it adaptive depending on attained frame rate.

@erichlof
Owner

Hi @tom-adsfund
This is actually something I think that could be added without too much trouble. From the beginning of this project, I've had limited system specs (laptop with integrated graphics, mobile devices, etc.). I'm actually ok with continuing to use these underpowered devices to develop on - it makes me think outside the box and come up with not-so-obvious solutions to rendering/real time problems so that most everyone, no matter what hardware they're on, can enjoy real time path tracing in the browser.

However, for users like yourself who have more modern GPUs, the 60fps cap of WebGL2 doesn't allow the full potential of path tracing on more powerful hardware. In addition to a SPF (samples per frame) option, I would also like to add a max specular and max diffuse bounces option.

The ultimate goal here would be to have something like the Blender Cycles side panel, where users can use the sliders to adjust the rendering quality vs speed for their particular device.

The only problem I can foresee at the moment with adding these sorts of options is that I don't want to add 'if' statements to my shaders in an attempt to make them more generalized and abstracted. GPUs don't like divergence, and I already have it in spades, ha (it's necessary for any path tracer) - I don't want to bog things down further, especially in the super-tight bounces for-loop in each shader.

So that means we would have to devise a pre-compilation system (basically #defines and #ifdefs) that builds the shader with the user's specifications up front. This is what three.js does with its normal WebGL renderer, but I've never really studied it closely enough to imitate it.

If we can bypass those concerns, I see no problem adding your suggestion, and even more fine tuning options.
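
To make the pre-compilation idea a bit more concrete, here is a rough sketch of what the baked-in options could look like on the GLSL side. The define names and the function shape are made up for this example (they are not actual code from the repo); the JavaScript setup would simply prepend the #define lines, built from the user's chosen settings, to the shader source before compiling, so no new runtime 'if' statements are introduced:

```glsl
// Hypothetical defines prepended by the JS setup code, based on the user's
// quality settings (names are illustrative only):
#define MAX_SPECULAR_BOUNCES 4
#define MAX_DIFFUSE_BOUNCES 2

vec3 calculateRadianceSketch()
{
	vec3 accumCol = vec3(0.0);
	int diffuseBounces = 0;

	// The loop bound is a compile-time constant, so the compiler can optimize it
	// and no per-option branching is added inside the tight bounces loop.
	for (int bounces = 0; bounces < MAX_SPECULAR_BOUNCES + MAX_DIFFUSE_BOUNCES; bounces++)
	{
		// ... intersect the scene, sample the lights, choose the next ray ...

		if (diffuseBounces >= MAX_DIFFUSE_BOUNCES)
			break; // comparison against a constant - uniform across the whole warp
	}

	return accumCol;
}
```

On the JavaScript side this would just be string concatenation of the '#define' lines onto the shader source before it is handed to WebGL, similar in spirit to how three.js assembles its built-in shader programs.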

@tom-adsfund
Author

Given the limits of the shader language, I think the best option would be to have a shader generator in JavaScript, and that way you can compose a spec in some higher-level way and then produce the shader code from that spec. This allows a powerful decoupling where you can also test the specs for viability and give users feedback etc. Even more advanced would then be to generate specs depending on client machines.

I find there's a benefit to working with both high-powered and low-powered machines, because the higher-powered ones allow rapid testing and exploration. Also, today's "high-end" hardware rapidly becomes more widely available, and there are at least as many benefits to catering to the high-end market (more powerful apps).

@tom-adsfund
Author

I've just been testing the latest version with some upweighting controls, and while everything is extremely good, this frame rate limit makes it essentially impossible to determine whether upweighting improves on the current situation. I'm left waiting as the frames cycle for the updates.

Hopefully this gives an impression of the speed (unfortunately GitHub only allows videos up to 10MB, so I had to scale it down):

frame-rate-issue.mp4

@erichlof
Owner

erichlof commented Feb 17, 2022

@tom-adsfund

Yes, that looks pretty smooth - although it's hard to tell whether the up-weighting scheme is making a big impact or not.

I don't know if I mentioned it, or you may have seen my comments in other threads, but this demo is the only one in the entire repo not made by me. It was submitted years ago by n2k3. I don't believe he's working on it anymore, but I've tried my best to update it and maintain it, as there have been tons of changes to my project, dependencies, and path tracing algos since then.

Since it appears you've already been working with that demo, I'm hesitant to say it, but I would suggest testing with one of my demos instead (I don't know how invested you are in that particular demo at this juncture) - that demo doesn't follow my usual pattern of file organization, dependencies, and init code.

I say this also because I don't know how many revisions down the line I can keep going in and fixing errors that crop up with every change to my repo - for instance, with the recent start-at-black fix, it automatically just worked for my demos repo-wide, but I had to go into his source code and fix the errors by hand. Otherwise, this demo would have stopped working many years ago.

Lastly, I haven't had the time or motivation to go in and add by hand all the recent real-time denoising efforts that just work on all of my own demos. I think those might help with the perceived smoothness and noise suppression.

If you're needing a glTF model demo, any of my BVH or HDRI demos have a hopefully clear and consistent loading, processing, and rendering pattern across the board.

@tom-adsfund
Author

Yeah, I was highlighting how it's not really possible to know without the frame rate issue being "fixed". I don't know if it's obvious why that is, but having the faster sampling would highlight the improvement when moving, for example.

I only started with that demo because it was one of the few with a control for the pixel ratio(!!)

I'll happily move to any other one. But I do need a control to allow more sampling per frame.

@erichlof
Owner

I'm currently working on looping multisamples per frame and when I get something working, I will make a special test demo for you on the repo (but it won't have a public-facing clickable link, like all the other demos). Multi-sampling is a little tricky, simply because my denoiser has been set up with 1 SPP per frame in mind. But I'm confident I can get a little test scene for you to experiment with.
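
Roughly, the plan is an outer loop in the fragment shader that traces several complete paths per pixel each frame and averages them before the result goes on to the blending/denoising stage. A minimal sketch (the define and function names here are illustrative, not the final code):

```glsl
// SAMPLES_PER_FRAME would be set from the GUI slider and baked in as a #define:
#define SAMPLES_PER_FRAME 6

vec3 multiSamplePixel()
{
	vec3 accumColor = vec3(0.0);

	for (int s = 0; s < SAMPLES_PER_FRAME; s++)
	{
		// Each iteration would trace one complete path for this pixel with a
		// different random seed, e.g.:
		// accumColor += CalculateRadiance(rayOrigin, rayDirection, seed);
	}

	// Average the samples before handing the result to the frame-blending /
	// denoising stage (which, until now, has assumed 1 spp per frame).
	return accumColor / float(SAMPLES_PER_FRAME);
}
```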

@erichlof
Owner

As promised, here is a new custom demo that lets you dynamically change the pixel resolution (pixelRatio in three.js), as well as dynamically change the number of samples per frame:
MultiSamples Per Frame Demo

I chose the Geometry Showcase demo as the scene starting point, partly because it loads super-quickly for fast developer/tester iteration times, and partly because its collection of shapes (curves and straight edges), lights (multiple area lights), and materials (the most common materials encountered in the wild) makes it a good representative of a generalized scene and setup. Lastly, I chose this scene over a BVH one because the amount of code is significantly reduced. This way, you can quickly navigate to a part of the code that I added or that you are interested in, and you should be able to immediately see how I did it. The 3 files of this demo are MultiSamples_Per_Frame.html (just a shell), MultiSamples_Per_Frame.js (setup / GUI handling), and MultiSamples_Per_Frame_Fragment.glsl (the heart of the path tracing demo).

The Pixel Resolution slider can go anywhere from 0.3 (pretty chunky) to 1.0 (glorious full resolution). In the past I've used 0.5 as the default for my demos, but I recently found that 0.75 offers a little better quality (less noticeable noise patterns) and is still able to keep the frame rate up somewhat. 0.75 is now the page-start default across the repo. As everyone's GPUs and mobile devices get faster in the future, I would like 1.0 to be the ultimate goal and default.

As far as multi-samples per frame go, I included a similar GUI slider to choose between 1 and 20 samples per pixel, per frame. 1 to 2 samples is fast but too noisy to be usable without my custom denoiser (which was unfortunately designed around 1 spp per frame). 6 to 10 seems like a nice balance between quality and performance. From 10 to 20 we start to see Monte Carlo's curse of diminishing returns - I can't see much of a difference between 16 and 20. However, the difference between 16 and 20 is noticeable in the drop in frame rate, at least on my humble laptop with integrated graphics. I have to shrink the browser window down to postage-stamp size (ha) to run 20 SPP at 1.0 full resolution. But boy does it look good though! 😄

Interested to see what kind of performance you can get on your setup.
-Erich

@tom-adsfund
Author

I've always loved that demo!!

Awesome, I'll work on it now.

@tom-adsfund
Author

It's hard to show the quality with a 10MB limit... but it's amazing.

multi-frame.mp4

@tom-adsfund
Author

Notice the sample counts in these screenshots:

multi-sample-upweight
multi-sample-upweight2

@tom-adsfund
Author

So,

I've found the limits of the Tesla hardware: you're realistically looking at 1080p with roughly 6 samples per frame, as in the images above (which are bigger than 1080p).

And I think the upweighting makes a strong perceptible difference to the quality. Without it, you get a mushiness to the image at first.

I thought that demo was the moving one... if you can set that one up I'll try it.

@erichlof
Owner

Whoo hoo! That video looks like each frame was pre-rendered offline - except that it wasn't and you only had to wait a fraction of a second for each frame to finish! (lol). Thanks for posting the example pics and videos. It really helps communication here on a GitHub thread.

If you don't mind me asking, what are your system specs? CPU/GPU? And can you get 30-60 fps even with higher sample counts? 0.75-1.0 resolution?

@tom-adsfund
Author

One NVIDIA Tesla V100 16GB, 90GB of regular memory.

This is last-generation hardware; the A100 is the latest. With that, I assume you'd be able to do 1440p in real time.

The framerate definitely goes down with higher settings. I really want to see the animated demo with the settings I showed in video.

Also, just to say: I had to struggle with the movement controls again...!!

@passariello

passariello commented Feb 17, 2022

Hi, is it possible to remove the echo effect during transformation? ... I'm trying to think of a different way to mix the two different positions while avoiding the echo... especially from the yellow light reflection (I know that merging frames is there to avoid a black refresh... but...)
Many thanks for your hard work!!

@tom-adsfund
Author

@passariello My guess is that the echo will easily be removed by tweaking the parameters. There will be many improvements like that to make.

@erichlof
Owner

erichlof commented Feb 18, 2022

@passariello
Yes that echo (I think the traditional CG name for it is 'motion blur', or 'ghosting') comes from the previous animation frame being blended with the current animation frame.
There are 2 ways around excessive motion blur. The first is to simply have enough FPS (like 50 to 60 preferably), where the previous image gets cleared so fast that the eye cannot see the ghosting before the renderer has drawn the next frame. In this case, a simple half/half blend will be perfect. For example,
finalPixelColor = previousPixelColor * 0.5 + currentPixelColor * 0.5;

If frame rate cannot be kept at those speeds, the 0.5 strategy above will still result in ghosting (as seen in the video), so the 0.5 blending weights need to be adjusted. The more previousPixelColor you have, the more ghosting but less noisy the image will be. On the other hand, the more currentPixelColor you have, the faster the update of the screen, but it might show more distracting noise. Since the weights need to add up to 1.0, maybe something like
finalPixelColor = previousPixelColor * 0.3 + currentPixelColor * 0.7;
might do the trick. It is very much a subjective choice, and it's up to each user's taste which weights look good.
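
In shader form, the whole blend stage is just a weighted average of the previous and current images. Here is a minimal sketch of such a blending fragment shader - the uniform and texture names are illustrative, not the repo's actual ones:

```glsl
#version 300 es
precision highp float;

uniform sampler2D previousTexture;       // accumulated image from the last frame
uniform sampler2D currentTexture;        // freshly path-traced samples for this frame
uniform float previousFrameBlendWeight;  // e.g. 0.5 at a solid 60 fps, lower it if ghosting appears

out vec4 fragColor;

void main()
{
	vec2 uv = gl_FragCoord.xy / vec2(textureSize(currentTexture, 0));
	vec3 previousPixelColor = texture(previousTexture, uv).rgb;
	vec3 currentPixelColor  = texture(currentTexture, uv).rgb;

	// The two weights sum to 1.0, so overall brightness stays constant frame to frame.
	vec3 finalPixelColor = previousPixelColor * previousFrameBlendWeight
	                     + currentPixelColor  * (1.0 - previousFrameBlendWeight);

	fragColor = vec4(finalPixelColor, 1.0);
}
```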

Personally speaking, I prefer just a little blur when moving the camera fast, as this mimics real physical cameras that can't keep their shutter speeds up with the quick movements. But too much motion blur can be as distracting as the raw noise. It really takes some experimentation on a personal basis.

On that note, I will soon try to add a slider for both of the weights that control the frame blending. Then the end users can simply dial in the animated look that they want.

@erichlof
Owner

erichlof commented Feb 18, 2022

@tom-adsfund
Thanks for the specs and reports - and yes, I'll be happy to add a similar multi-sample version of GameEngine_PathTracer.html. I believe this is what you were referring to when you said 'the scene that moves'. It contains the exact same shapes, but a handful of them are moving around.

Just a note about that: this will require further fine-tuning of the 2 weights discussed in my previous reply. This is because, unlike in the static scenes such as Geometry_Showcase (which you were just experimenting with), the progressive samples never quite settle down and converge. There has to be a steady stream of incoming currentPixelColor samples on every frame; otherwise, major ghosting accumulates on the moving objects over time. I tend to go with 0.8/0.2 or maybe 0.7/0.3 and just live with the slight ghosting. When you leave the camera still, this helps settle down the room's background diffuse walls, floor, and ceiling, which are being continuously sampled to achieve global illumination.

A more sophisticated approach, used in custom real-time path-traced shaders for games like Minecraft RTX, is one where any surface that has been sitting still for even a couple of frames is allowed to settle down and no new samples are taken - therefore, no noise. The 3rd-person player character that is always moving, though, has to be handled with a different strategy, kind of like my edge-detecting Gaussian blur and noise filter in this repo. I would like to copy the more sophisticated code/algos someday, but theirs is proprietary and closed-source. NVIDIA's even uses deep-learning, AI-trained denoising/image reconstruction to achieve real-time sample noise suppression on dynamic objects. It is pure sorcery!

Will be back soon with the dynamic scene demo for you!

P.S. By the way, sorry for the late replies - sometimes there might be a lag between when you post a question and when I respond. I promise to always respond, but this is, after all, my passion hobby; life happens and I must tend to various things. I will respond eventually though! 🙂

@erichlof
Owner

@tom-adsfund
Oh I forgot - sorry that you're having issues with my controls. If I may ask, what is it that you are running up against when using my control scheme? Is it something that doesn't work correctly, or is it something that you would like to add, or maybe something that you would like a slider/setting for more fine control? I'll be glad to take a look at my controls and keyboard/mouse handling and see if there's anything that would make them more useful or satisfying.

@tom-adsfund
Author

Yeah, given what you've said, I think I can solve that ghosting problem in a more robust way. I'll do it as part of testing the moving demo (which is the one you said).

The controls I'm talking about are the mouse controls, which on desktop go crazy if you go past a certain distance from where you started. And so I spend almost a minute trying to get the view back to something of any interest. Having sliders would be much better generally for fine control, as you say.

@tom-adsfund
Author

Less echo:

less-echo.1080-00.00.08.883-00.00.13.616.mp4

I do it using the distance between the two pixels, which in shader language is just distance(pix1,pix2)!
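
Roughly, the idea looks like this - a simplified sketch only, with an illustrative falloff constant rather than the exact weighting I'm using:

```glsl
// Distance-based blend: when the current sample differs a lot from the history
// (camera moved, object passed by), favor the current sample; when they are
// close, favor the accumulated history. The 0.9 and 0.25 constants are
// illustrative only.
vec3 blendWithHistory(vec3 previousPixelColor, vec3 currentPixelColor)
{
	float d = distance(previousPixelColor, currentPixelColor);

	// larger difference -> smaller history weight -> less ghosting
	float previousWeight = 0.9 * exp(-d / 0.25);

	return previousPixelColor * previousWeight
	     + currentPixelColor * (1.0 - previousWeight);
}
```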

@tom-adsfund
Author

@erichlof I'm probably going to wait until the port to WebGPU and Node Materials until I put more into all of this. I'd be interested in the motion demo, but my main interest will be when we can use this renderer with general Three.js scenes.

@erichlof
Owner

erichlof commented Feb 18, 2022

@tom-adsfund
Ok I'll look into providing some camera fine controls. In the meantime, here is the Dynamic moving test scene you requested:
MultiSPF Dynamic Scene Demo

The noise is a lot more pesky on this demo, simply because the diffuse surfaces can never settle down completely (ghosting issue talked about before).

I added another slider so you can control the previousFrameBlendWeight amount directly. Adjusting it applies the inverse to the current pixel color: currentPixelColor *= (1.0 - previousFrameBlendWeight). This ensures that the two weights add up to 1.0, as required by Monte Carlo-style integration.

Interested to see what kind of quality you can get on your better hardware setup. Enjoy!
-Erich

@erichlof
Owner

@tom-adsfund
Regarding controls, I have 2 global variables in place that scale the speed of movement and rotation of the camera. These are camFlightSpeed and cameraRotationSpeed. Both should be defined/adjusted on a per-scene basis because one scene might be a tiny room, while another might be an entire mountain range.

I just realized that camFlightSpeed and cameraRotationSpeed are inconsistent: camFlightSpeed is defined with the keyword 'let' in each and every demo's js init file, while cameraRotationSpeed is also defined with 'let', but globally, only once, in the large InitCommon.js file. I wasn't aware of this - apologies for the confusion. I will address it right away; it's just that I need to update that 1 line across the dozens of demos' respective js init files (which is easy, but annoying, lol).

I think defining both of these variables once, in the common init file that all demos/scenes use, would be the best plan and the least amount of code. As a proof of concept, I will retroactively place 2 new sliders in your 2 test demos, so you can see how you like it and whether that fixes the camera wackiness on your system. Just a side hint: if you zoom in quite a bit with the mouse wheel, to an FOV of 30 or below, the camera rotation does become very, very sensitive. I've always just dealt with it, but I realize some users like yourself might be trying to capture a certain small part of the scene and want the camera to stay smooth, even at high zoom levels. Hopefully the new global camera-control variables, combined with sliders for each, will solve the problem.

Will be back with those changes soon!

@tom-adsfund
Author

So I won't make videos trying to show the real quality: it's not really worth the time trying to fit something into 10MB.

But here's a clip of my version that highlights that it's blending all the time, including when the camera moves, and avoids all noise (pretty hard to see with the video):

smooth-moving.1080.mp4

My summary after playing with it for about an hour is:

Most importantly: with the current WebGL setup and that card, you can get very high quality, but only at an impractically small window size.

If you can live with noise, you can get a large size and see lighting effects you wouldn't get elsewhere, and there is a level of quality with that, but it's not really end-user friendly.

There's plenty of room to tweak the settings of my distance-based setup to presumably get exactly the desired level of clarity and motion blur, but I won't do that until the general Three.js integration.

I'd be very interested to see the performance gains from WebGPU which are supposed to be great. Maybe that will make the Tesla GPU practical for good sizes.

@tom-adsfund
Author

And to give an impression of what that video should look like:

Screenshot from 2022-02-18 22-47-51

@erichlof
Owner

erichlof commented Feb 19, 2022

@tom-adsfund
Wow it looks very nice on your system - thanks for posting!

In an effort to give more fine-grained camera controls to users, I have made cameraFlightSpeed and cameraRotationSpeed more consistent with each other, in that they are now both defined once globally in InitCommon.js. If the end user wants a different value for these than the defaults (as defined in InitCommon.js), then they can simply go in the accompanying js file for each demo/scene (or their personal custom scene), and set these variables to their desired values.

For your test scenes, I went back and added these variables as sliders in the GUI, in both the MultiSamples_Per_Frame test demo and the MultiSPF_Dynamic_Scene test demo. I tried to give the sliders a wide but useful range. Hopefully this will allow you very fine-grained control over the camera/mouse manipulations. Let me know if this helps in that department.

Thanks!

@tom-adsfund
Author

@erichlof I've made a 2.4GB recording of some trials I was doing, do you know a good way I can share that with you?

@tom-adsfund
Author

@erichlof Trailer (lol):

movie.mp4

@passariello

(quoting @erichlof's earlier reply above about motion blur / 'ghosting' and the previous/current frame blending weights)

Probably, to reduce the echo (ghosting and motion blur are usually z-depth-based or separate options... I think "echo" is the more appropriate term here, since it comes from baking frames together)... we need a "time exposure" or "sampling exposure timer" for animation. Blur is not a good thing to have in the final render; usually a velocity channel is used for post-production. Baking is the key.

:)

@passariello

Also, some camera options like f-stop and exposure are probably necessary. Usually blur and depth are handled in post-processing. A z-channel would be very welcome in the future for professional use, for exporting from high-end apps to the web.
Channels:

  1. velocity Map
  2. Zmap (or depth)
  3. Normal Map
  4. Fake occlusion and cavity
  5. ID map

My suggestion is to focus first on Arch, Design, and Prototyping, to have a product (like a React component) that can be used in web productions.
An embed system, like an iframe, would also be very useful and would bring life, money, and interest to your project.
Please let me know if you'd like to discuss - I have some ideas and I really want to help you.
dariopassariello@gmail.com

@erichlof
Owner

erichlof commented Feb 19, 2022

@tom-adsfund
About video sharing, what's the limit on something like Dropbox? Or what about as a huge email attachment? I don't mind downloading, I have a pretty fast connection.

P.S. Did you get a chance to try the new camera fine-controls sliders on both of the test scene demos that I recently added?

@tom-adsfund
Author

If you have a fast connection, we can send using croc?

Yeah, the controls were useful in a way, but they didn't solve the problem where, once you go past a certain point in one direction on a large desktop screen, the camera just gets yanked to the top or bottom of the screen.

Anyway, not really important to me now: my biggest request is I want things to be modular.

@tom-adsfund
Author

The demo and the video I mentioned were done without the upweighting thing I've been talking about, but I did use upweighting today and it allows much bigger areas. So I can send a smaller video of that too.

@erichlof
Owner

Croc sounds fine, although I have never used it - I need to follow the install steps for Windows, which I think include installing something like Scoop or Chocolatey. I guess let me try to get this setup on my end first, then I'll notify you here when things are ready.

@tom-adsfund
Author

There is a binary download in the releases. I guess those other things are to make it update.

@erichlof
Owner

Oh ok - even better! I'm not much of a command line kind of guy, ha.

@erichlof
Owner

I'm still puzzled about your mouse camera rotation snapping unexpectedly, or jumping when you go past a certain point. I'm not using anything out of the ordinary under the hood for the camera rotation controls - I am just using a three.js object with quaternions, as they do in their examples. Unless three.js has a corner-case bug for large displays, I'm not sure why this is happening.

Is it possible to capture a short video of a couple seconds as this happens? Can you make it repeat the problem on command?

@erichlof
Owner

I'm away from my main computer at the moment, I'm typing these replies from my phone - so it might be several hours before I get croc going on my system. Will do it soon as I can though. 🙂

@tom-adsfund
Author

We'll probably have to do it tomorrow then.

If you have a large enough screen, click on one of the controls, that will capture the mouse, then start trying to move, and it will start going crazy. But like I say, I'm not going to waste more time on that.

@erichlof
Owner

Oh I see, it's when you click on the GUI controls. I will look into this - maybe I can tell the browser's mouse capture not to engage when you're interacting with or clicking on the controls. Might just be a simple mouse-target element check and a one-line if-statement fix.

@erichlof
Owner

Ok about doing the file transfer tomorrow. Also understood about not wanting to go much further before WebGPU port is operational. Yes I too am excited to see the possible performance gains!

@erichlof
Owner

@tom-adsfund
I have croc set up on my laptop now. So I think I'm ready to do the video file transfer when you are. ;-)

@erichlof
Owner

I put that string of characters into the command prompt line where it says Enter receive code: (after I double-clicked on croc.exe).
It said connecting... but then the command line window closed abruptly. Do I need to do anything else now?

@erichlof
Owner

I copied and pasted the phrase 'croc 1570-change-roman-mike' and placed it in the little black command prompt window after the prompt phrase 'Enter receive code:' Should I do it without the word 'croc' at the beginning?

@erichlof
Owner

Ha, ok

@erichlof
Owner

erichlof commented Feb 20, 2022

It's working! Yay!
Estimated time is around 15 minutes; it's already at 10%.

@tom-adsfund
Author

Yeah, good test of my internet upload..!

@erichlof
Owner

I'm spiking at around 3 mb/s - good test of my 'supposed' download speeds as advertised by my ISP, ha

@tom-adsfund
Author

I've done tests since the video and I can get much crisper rendering by using the upweighting, but the video I'm sending shows that your work has paid off. I really look forward to it being integrated well with Three.js so we can enjoy rapid 3D development with path-traced rendering.

@tom-adsfund
Author

The upload has stopped, you're supposed to be able to resume it...

@erichlof
Owner

erichlof commented Feb 20, 2022

yeah I noticed it paused on my end too. Does croc send files in big chunks? Or do you think it just tries to send the whole thing as one big stream?

I'm currently at 59% - but there's no prompt on my end to type anything

my computer will stay on all evening (it's almost 6 pm here in Texas), so I can wait however long it takes :-)

@tom-adsfund
Author

It's in chunks apparently.

@tom-adsfund
Author

Just cancel it with CTRL-C, or whatever Windows uses now. Then start again, because I assume it keeps a partial file on disk.

@erichlof
Owner

we're back up!

@tom-adsfund
Author

Wow, that's good to see. I've got the other video ready for when this is finished.

@erichlof
Owner

looks like everything came through just fine! I have to sit down to dinner atm, but will be back later tonight to review the video and comment. Thanks for sending the files! Be back pretty soon ;)

@erichlof
Owner

Wow the quality on those videos is awesome! If somehow we were able to construct a real life scene made up of these weird mathematical shapes, take an HD video of it, then put it beside your videos, I don't think I could tell the difference between the rendered one and the real life one. That is a testament to the power and elegant simplicity of Monte Carlo path tracing.

That's why I think that this style of rendering is the future of ultra-realistic graphic displays, whether it be on the traditional 2D computer screen, or on AR/VR devices.

Thanks for turning me on to croc file sharing also. Brings me back to the ol' Napster days (ha). I love the simplicity, ease of use, and that it is open source. If this is a taste of Web3, then I want a part of that action!

Thank you for sharing your videos. Looking forward to what's to come!

@Wis76

Wis76 commented May 24, 2022

Being a ray tracing sucker since the Amiga 500 and 1200 days (I used Lightwave back then), I stumbled on this page by chance while searching for some random real-time ray tracing demos, and I've been hooked on your tracer ever since.
Is there any chance you'll add the "more samples per frame" to all the other demos and let it go above 20?
Thanks, have a nice day.

@erichlof
Owner

erichlof commented May 26, 2022 via email

@Wis76

Wis76 commented Jun 8, 2022

Hey Erich,

sorry for the late reply.
Thank you so much for the answer!
I played a bit with the new experimental setups, and here are my results with a Ryzen 3600 and an RTX 3070 Ti (all with pixel_resolution 1 at 1080p):

MultiSamples-per-Frame Demo:
Without moving the camera
75 fps for 16 samples (my actual monitor refresh rate, going to 17 drops the fps)
62 fps for 20 samples
32 fps for 40 samples
13 fps for 100 samples

MultiSPF Dynamic Scene Demo:
Without moving the camera (fps range)
69-75 fps for 16 samples
47-64 fps for 20 samples
24-33 fps for 40 samples
10-13 fps for 100 samples

Hope this helps.
I'm positive that by using the NVIDIA denoising algorithm your path tracer would look fantastic!
