
jpeg 2000 support #44

Open
kwon-young opened this issue Mar 14, 2022 · 14 comments

@kwon-young

Hello,

I found this project through Hacker News, and the combination of Qt + Vulkan + a node-based editor is something I had been searching for for a long time.

I'm just dropping by to ask if you could add JPEG 2000 support as an input (and maybe output) format?

Also, I tried to use the editor with a relatively large image, 19370627_1-METS_P8_DENSE_MST_BIN, and the UI has a lot of trouble keeping up.

(screenshot attached)

@ttddee
Owner

ttddee commented Mar 14, 2022

Hi!

JPEG 2000 is not a problem and should be quick to implement, because the I/O library we use supports it out of the box.

As for the performance with this large image:

It works pretty well on my system with a little bit of lag. That is to be expected, since the current version is not yet optimized for performance.

What's your OS, graphics card and driver?

@ttddee ttddee self-assigned this Mar 14, 2022
@ttddee ttddee added the enhancement New feature or request label Mar 14, 2022
@kwon-young
Author

Hello,

Just to be sure: I've only tested the AppImage and did not compile the software from source.
If you think it will make a difference, I can retry by compiling from source.

Here is my laptop configuration:

Operating System: Fedora Linux 35
KDE Plasma Version: 5.24.2
KDE Frameworks Version: 5.91.0
Qt Version: 5.15.2
Kernel Version: 5.16.12-200.fc35.x86_64 (64-bit)
Graphics Platform: Wayland
Processors: 16 × Intel® Core™ i9-10885H CPU @ 2.40GHz
Memory: 62.5 GiB of RAM
Graphics Processor: Mesa Intel® UHD Graphics

I also have a discrete nvidia gpu that I can use using optimus:

$ nvidia-smi
Tue Mar 15 09:58:58 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro T2000 wi...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   61C    P8     1W /  N/A |      5MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1760      G   /usr/libexec/Xorg                   4MiB |
+-----------------------------------------------------------------------------+

@ttddee
Owner

ttddee commented Mar 15, 2022

No, I don't think compiling it yourself will make a difference.

As it is now, Cascade chooses the first viable GPU it finds to run on. So my guess is that it runs on your integrated GPU (Intel UHD). That might slow things down because it is a lot less powerful than a discrete GPU.

You can verify this by looking in the file Cascade.log. There should be a line that looks like this:

[VULKAN] Physical device [0]: name 'GeForce GTX 1080 Ti' version 457.36.0

That tells you which GPU is being used.

There are two things here that need improving:

  1. Like I stated above, the renderer will be optimized to run better on low-end GPUs, even with large images.
  2. In a case like yours, the user should be able to decide which GPU to run on.

Can you post the contents of Cascade.log, or attach it, so I can take a look?

Thanks!

@kwon-young
Author

So, here is the log output from running Cascade without and with __NV_PRIME_RENDER_OFFLOAD.

Without:

[INFO] Cascade Image Editor - v0.1.9
[VULKAN] QIODevice::read (QFile, ":ads/stylesheets/default.css"): device not open
[INFO] Creating Vulkan instance
[VULKAN] Vulkan init (libvulkan.so)
[VULKAN] Supported Vulkan instance layers: QVector(QVulkanLayer("VK_LAYER_NV_optimus" 1 1.3.194 "NVIDIA Optimus layer"), QVulkanLayer("VK_LAYER_MESA_device_select" 1 1.2.73 "Linux device selection layer"))
[VULKAN] Supported Vulkan instance extensions: QVector(QVulkanExtension("VK_KHR_device_group_creation" 1), QVulkanExtension("VK_KHR_display" 23), QVulkanExtension("VK_KHR_external_fence_capabilities" 1), QVulkanExtension("VK_KHR_external_memory_capabilities" 1), QVulkanExtension("VK_KHR_external_semaphore_capabilities" 1), QVulkanExtension("VK_KHR_get_display_properties2" 1), QVulkanExtension("VK_KHR_get_physical_device_properties2" 2), QVulkanExtension("VK_KHR_get_surface_capabilities2" 1), QVulkanExtension("VK_KHR_surface" 25), QVulkanExtension("VK_KHR_surface_protected_capabilities" 1), QVulkanExtension("VK_KHR_wayland_surface" 6), QVulkanExtension("VK_KHR_xcb_surface" 6), QVulkanExtension("VK_KHR_xlib_surface" 6), QVulkanExtension("VK_EXT_acquire_xlib_display" 1), QVulkanExtension("VK_EXT_debug_report" 10), QVulkanExtension("VK_EXT_debug_utils" 2), QVulkanExtension("VK_EXT_direct_mode_display" 1), QVulkanExtension("VK_EXT_display_surface_counter" 1), QVulkanExtension("VK_EXT_acquire_drm_display" 1))
[VULKAN] Enabling Vulkan instance layers: ()
[VULKAN] Enabling Vulkan instance extensions: ("VK_EXT_debug_report", "VK_KHR_surface", "VK_KHR_xcb_surface")
[VULKAN] QVulkanWindow init
[INFO] Creating renderer
[VULKAN] 3 physical devices
[VULKAN] Physical device [0]: name 'Intel(R) UHD Graphics (CML GT2)' version 21.3.7
[VULKAN] Physical device [1]: name 'llvmpipe (LLVM 13.0.0, 256 bits)' version 0.0.1
[VULKAN] Physical device [2]: name 'Quadro T2000 with Max-Q Design' version 510.188.192
[INFO] Found integrated GPU.
[INFO] Found discrete GPU.
[VULKAN] Using physical device [2]
[VULKAN] queue family 0: flags=0xf count=16 supportsPresent=1
[VULKAN] queue family 1: flags=0xc count=2 supportsPresent=0
[VULKAN] queue family 2: flags=0xe count=8 supportsPresent=1
[VULKAN] Using queue families: graphics = 0 present = 0

With:

[INFO] Cascade Image Editor - v0.1.9
[VULKAN] QIODevice::read (QFile, ":ads/stylesheets/default.css"): device not open
[INFO] Creating Vulkan instance
[VULKAN] Vulkan init (libvulkan.so)
[VULKAN] Supported Vulkan instance layers: QVector(QVulkanLayer("VK_LAYER_NV_optimus" 1 1.3.194 "NVIDIA Optimus layer"), QVulkanLayer("VK_LAYER_MESA_device_select" 1 1.2.73 "Linux device selection layer"))
[VULKAN] Supported Vulkan instance extensions: QVector(QVulkanExtension("VK_KHR_device_group_creation" 1), QVulkanExtension("VK_KHR_display" 23), QVulkanExtension("VK_KHR_external_fence_capabilities" 1), QVulkanExtension("VK_KHR_external_memory_capabilities" 1), QVulkanExtension("VK_KHR_external_semaphore_capabilities" 1), QVulkanExtension("VK_KHR_get_display_properties2" 1), QVulkanExtension("VK_KHR_get_physical_device_properties2" 2), QVulkanExtension("VK_KHR_get_surface_capabilities2" 1), QVulkanExtension("VK_KHR_surface" 25), QVulkanExtension("VK_KHR_surface_protected_capabilities" 1), QVulkanExtension("VK_KHR_wayland_surface" 6), QVulkanExtension("VK_KHR_xcb_surface" 6), QVulkanExtension("VK_KHR_xlib_surface" 6), QVulkanExtension("VK_EXT_acquire_xlib_display" 1), QVulkanExtension("VK_EXT_debug_report" 10), QVulkanExtension("VK_EXT_debug_utils" 2), QVulkanExtension("VK_EXT_direct_mode_display" 1), QVulkanExtension("VK_EXT_display_surface_counter" 1), QVulkanExtension("VK_EXT_acquire_drm_display" 1))
[VULKAN] Enabling Vulkan instance layers: ()
[VULKAN] Enabling Vulkan instance extensions: ("VK_EXT_debug_report", "VK_KHR_surface", "VK_KHR_xcb_surface")
[VULKAN] QVulkanWindow init
[INFO] Creating renderer
[VULKAN] 3 physical devices
[VULKAN] Physical device [0]: name 'Quadro T2000 with Max-Q Design' version 510.188.192
[VULKAN] Physical device [1]: name 'Intel(R) UHD Graphics (CML GT2)' version 21.3.7
[VULKAN] Physical device [2]: name 'llvmpipe (LLVM 13.0.0, 256 bits)' version 0.0.1
[INFO] Found discrete GPU.
[VULKAN] Using physical device [0]
[VULKAN] queue family 0: flags=0xf count=16 supportsPresent=1
[VULKAN] queue family 1: flags=0xc count=2 supportsPresent=0
[VULKAN] queue family 2: flags=0xe count=8 supportsPresent=1
[VULKAN] Using queue families: graphics = 0 present = 0

So it seems it automatically uses the NVIDIA GPU every time?

@kwon-young
Author

Also, when I said that:

The UI has a lot of trouble keeping up

I meant that the image view does not refresh correctly, or as often as it should.
When I tweak a setting, I expect the image to update instantly, but instead the image view is often in a broken state, and zooming in or out often corrects the view.

@ttddee
Owner

ttddee commented Mar 15, 2022

Yes it does. That's good news.

How is the performance when you use other nodes, apart from Rotate?

Just looking at the rotate shader (haven't touched it in a while), there is definitely some room for improvement.

@kwon-young
Author

How is the performance when you use other nodes, apart from Rotate?

Performance-wise it's okay, similar to the Rotate shader, but the view does not update correctly: it often shows a broken state, the previous state, or an empty black view instead of the image.

@ttddee
Owner

ttddee commented Mar 16, 2022

What do you mean by "broken state"? Can you post a screenshot?

@kwon-young
Author

I managed to trigger this by just zooming in and out.

Screenshot_20220316_105726

@ttddee
Owner

ttddee commented Mar 16, 2022

OK, weird. I can't reproduce this on any of my systems.

I'll have to investigate some more. Can you do me a favor and post that screenshot as a new issue together with your system specs?

We'll use this issue here for the jpeg2000 support. I am working on that right now.

@ttddee
Owner

ttddee commented Mar 16, 2022

I have added preliminary support for jpeg2000.

Here is an AppImage for testing: https://github.com/ttddee/Cascade/releases/download/nightly/Cascade-Image-Editor-x86_64-v0-2-0-dev-e31cf87.AppImage

Valid extensions for input are: .jp2, .j2k, .j2c

For output the extension is: .jp2

I have noticed that it is slow for large files. The reason is probably that the bundled version of OpenJPEG does not multithread correctly: OIIO recommends at least version 2.4, while this build uses 2.3.

Upgrading the OpenJPEG dependency should improve performance in the next release.

@kwon-young
Author

I'm getting a segmentation fault when trying to read a big jp2 image:

18850101_1_low.jp2.zip

@ttddee
Owner

ttddee commented Mar 16, 2022

How are you creating that image?

Does this image segfault too?

sample1.jp2.zip

@kwon-young
Author

The image was converted from TIFF to JP2 using ImageMagick.

sample1.jp2 does not segfault for me.
