diff --git a/README.md b/README.md
index 1e88eef..2dfd460 100644
--- a/README.md
+++ b/README.md
@@ -71,6 +71,10 @@ The proposed Q-Bench includes three realms for low-level vision: perception (A1)
 - We are open to **submission-based evaluation** for the two tasks. The details for submission is as follows.
 - For assessment (A3), as we use **public datasets**, we provide an abstract evaluation code for arbitrary MLLMs for anyone to test.
+## Release
+- [2/10] 🔥 We are releasing the extended [Q-bench+](https://github.com/Q-Future/Q-Bench/blob/master/Q_Bench%2B.pdf), which challenges MLLMs with both single images and image pairs on low-level vision. More details coming soon!
+- [1/16] 🔥 Our work "Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision" has been accepted by **ICLR 2024 as a Spotlight Presentation**.
+
 ## Close-source MLLMs (GPT-4V-Turbo, Gemini, Qwen-VL-Plus, GPT-4V)