Simplify High-Res capturing #947

Open
johannesvollmer opened this issue Jan 15, 2024 · 2 comments

johannesvollmer commented Jan 15, 2024

There is an example that shows how to render to a high-res texture and display a downscaled version. It would be nice to have this as core functionality, so that users don't have to deal with all the rendering details themselves.

As a first step, I quickly threw together the code from the examples into a separate struct, which you can add to your model right now:

// see https://github.com/nannou-org/nannou/blob/91cd548f8d92cfb8ebcd7bcb2069575acba66088/examples/draw/draw_capture_hi_res.rs
use std::path::PathBuf;
use nannou::prelude::*;
use nannou::window::Window;

struct HighResCapturer {
    // The texture that we will draw to.
    texture: wgpu::Texture,
    // Create a `Draw` instance for drawing to our texture.
    draw: nannou::Draw,
    // The type used to render the `Draw` vertices to our texture.
    renderer: nannou::draw::Renderer,
    // The type used to capture the texture.
    texture_capturer: wgpu::TextureCapturer,
    // The type used to resize our texture to the window texture.
    texture_reshaper: wgpu::TextureReshaper,

    upscale: u32,
}

impl HighResCapturer {
    pub fn new(window: &Window, format: wgpu::TextureFormat, upscale: u32) -> Self {
        let texture_size = [window.rect().w() as u32 * upscale, window.rect().h() as u32 * upscale];

        // Retrieve the wgpu device.
        let device = window.device();

        // Create our custom texture.
        let sample_count = window.msaa_samples();
        let texture = wgpu::TextureBuilder::new()
            .size(texture_size)
            // Our texture will be used as the RENDER_ATTACHMENT for our `Draw` render pass.
            // It will also be SAMPLED by the `TextureCapturer` and `TextureReshaper`.
            .usage(wgpu::TextureUsages::RENDER_ATTACHMENT | wgpu::TextureUsages::TEXTURE_BINDING)
            // Use nannou's default multisampling sample count.
            .sample_count(sample_count)
            .format(format)
            // Build it!
            .build(device);

        let draw = nannou::Draw::new();
        let descriptor = texture.descriptor();
        let renderer = nannou::draw::RendererBuilder::new()
            .build_from_texture_descriptor(device, descriptor);

        // Create the texture capturer.
        let texture_capturer = wgpu::TextureCapturer::default();

        // Create the texture reshaper.
        let texture_view = texture.view().build();
        let texture_sample_type = texture.sample_type();
        let dst_format = Frame::TEXTURE_FORMAT;

        let texture_reshaper = wgpu::TextureReshaper::new(
            device,
            &texture_view,
            sample_count,
            texture_sample_type,
            sample_count,
            dst_format,
        );

        HighResCapturer {
            texture,
            draw,
            renderer,
            texture_capturer,
            texture_reshaper,
            upscale,
        }
    }

    pub fn draw(&mut self, window: &Window, mut view: impl FnMut(&Draw, Rect)) {
        // First, reset the `draw` state.
        let draw = &self.draw;
        draw.reset();

        // Create a `Rect` for our texture to help with drawing.
        let [w, h] = self.texture.size();
        let r = geom::Rect::from_w_h(w as f32, h as f32);

        view(draw, r);

        // Render our drawing to the texture.
        let device = window.device();
        let ce_desc = wgpu::CommandEncoderDescriptor { label: Some("texture renderer"), };
        let mut encoder = device.create_command_encoder(&ce_desc);
        self.renderer.render_to_texture(device, &mut encoder, draw, &self.texture);

        window.queue().submit(Some(encoder.finish()));
    }

    pub fn try_save(&self, window: &Window, path: impl Into<PathBuf>) {
        let device = window.device();
        let ce_desc = wgpu::CommandEncoderDescriptor { label: Some("texture capture") };
        let mut encoder = device.create_command_encoder(&ce_desc);

        // Take a snapshot of the texture. The capturer will do the following:
        //
        // 1. Resolve the texture to a non-multisampled texture if necessary.
        // 2. Convert the format to non-linear 8-bit sRGBA ready for image storage.
        // 3. Copy the result to a buffer ready to be mapped for reading.
        let snapshot = self.texture_capturer.capture(device, &mut encoder, &self.texture);

        // Submit the commands for our drawing and texture capture to the GPU.
        window.queue().submit(Some(encoder.finish()));

        // Submit a function for writing our snapshot to a PNG.
        //
        // NOTE: It is essential that the commands for capturing the snapshot are `submit`ted before we
        // attempt to read the snapshot - otherwise we will read a blank texture!
        let path = path.into();

        snapshot
            .read(move |result| {

                let image = result.expect("failed to map texture memory").to_owned();
                image.save(&path).expect("failed to save texture to png image");

                println!("Saved {:?}", path);
            })
            .unwrap();
    }

    pub fn view_downscaled(&self, frame: Frame) {
        // TODO: keep aspect ratio by drawing into a rect??

        // Sample the texture and write it to the frame.
        let mut encoder = frame.command_encoder();

        self.texture_reshaper
            .encode_render_pass(frame.texture_view(), &mut *encoder);
    }
}
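
For reference, here is roughly how the struct above could be wired into a typical nannou app. This is only a sketch against the current app/window API; the `Model` struct, the 4x upscale factor, the `S` key binding, and the drawing code are illustrative choices, not part of the proposal:

fn main() {
    nannou::app(model).update(update).run();
}

struct Model {
    capturer: HighResCapturer,
}

fn model(app: &App) -> Model {
    let window_id = app
        .new_window()
        .view(view)
        .key_pressed(key_pressed)
        .build()
        .unwrap();
    let window = app.window(window_id).unwrap();

    // Draw at 4x the window resolution, using the same format as the hi-res example.
    let capturer = HighResCapturer::new(&window, wgpu::TextureFormat::Rgba16Float, 4);
    Model { capturer }
}

fn update(app: &App, model: &mut Model, _update: Update) {
    // Run the user's drawing code against the high-res texture every frame.
    model.capturer.draw(&app.main_window(), |draw, rect| {
        draw.background().color(BLACK);
        draw.ellipse()
            .w_h(rect.w() * 0.5, rect.h() * 0.5)
            .color(PLUM);
    });
}

fn key_pressed(app: &App, model: &mut Model, key: Key) {
    // Save a high-res PNG of the current texture when `S` is pressed.
    if key == Key::S {
        let path = app.exe_name().unwrap() + ".png";
        model.capturer.try_save(&app.main_window(), path);
    }
}

fn view(_app: &App, model: &Model, frame: Frame) {
    // Blit the downscaled texture to the window.
    model.capturer.view_downscaled(frame);
}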

Of course, this needs a lot of polishing before it would be feasible to add it to nannou.

I'm interested in contributing, but I'll need some guidance. Does anyone have an idea where this could be integrated? Maybe into the Builder and App structs?
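
To make the question concrete, window Builder integration could look something like this. Purely hypothetical, none of these methods exist today:

// Purely hypothetical API, sketched only to discuss where this could live:
let window_id = app
    .new_window()
    .high_res_capture(4) // draw at 4x resolution, display downscaled
    .view(view)
    .build()
    .unwrap();

// ...and saving could then go through the window or app:
app.main_window().save_high_res_frame("frame.png");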

johannesvollmer (Author) commented:

Given #946, this issue should probably be treated as low priority, to be tackled after any large refactoring related to Bevy rendering.

altunenes commented:

This would be really helpful. :-)
