Hi there 👋

My name is Haojun Jiang (蒋昊峻) [My Google Scholar]. I am a fourth-year Ph.D. student in the Department of Automation at Tsinghua University, advised by Prof. Gao Huang. Before that, I received my B.E. degree in Automation from Tsinghua University. 😄

Beginning in May 2023, I shifted my focus to research on autonomous medical ultrasound systems. I am committed to making ultrasound examinations more intelligent and autonomous, thereby freeing doctors' hands and improving medical efficiency.

If you are also passionate about research in this area, please feel free to reach out using the contact details below. I am always open to a fruitful exchange of ideas.

I’m currently working on vision and language.

😄 Projects

  • Probe Guidance for Echocardiography (TTE, in progress).
  • The World's First Autonomous Echocardiography (TTE) Robotic System (in progress).
  • Towards Expert-level Autonomous Ultrasonography Using AI-Driven Robotic System (Under Review).
  • Cross-Modal Adapter [Paper][Code]
  • Pseudo-Q CVPR'22 [Paper][Code]
  • CondenseNetV2 CVPR'21 [Paper][Code]
  • AdaFocus ICCV'21 [Paper][Code]

😄 Awesome Collections

  • Awesome Parameter Efficient Transfer Learning [Repo]
  • Awesome Autonomous Medical Ultrasound System [Repo]

💬 News

[2024/04]: Our intelligent autonomous ultrasound robot won the Silver Award at the Tsinghua Challenge Cup Entrepreneurship Competition.
[2024/03]: I was selected for the Tsinghua University 'Qi Chuang' Student Entrepreneurship Talent Program (20/60000) due to our innovative prototype in the field of intelligent autonomous ultrasound robots.
[2024/01]: Our intelligent autonomous ultrasound robot won the First Prize (1/245) at the Global Artificial Intelligence and Robotics Innovation Competition organized by the Guoqiang Research Institute of Tsinghua University.
[2023/12]: Our intelligent autonomous ultrasound robot has been selected as one of the Top-10 in the Tsinghua Medical-Engineering Innovation Competition.
[2023/07]: Deep Incubation: Training Large Models by Divide-and-Conquering was accepted by ICCV 2023! The paper is available on arXiv.
[2023/05]: I will be shifting my focus to the research of autonomous medical ultrasound systems.
[2023/01]: Text4Point is now available on arXiv. This work proposes a novel Text4Point framework for constructing language-guided 3D point cloud models. The key idea is to use 2D images as a bridge connecting the point cloud and language modalities.
[2022/12]: A curated list on Parameter Efficient Transfer Learning in computer vision and multimodal learning was created.
[2022/12]: Deep Incubation: Training Large Models by Divide-and-Conquering is now available on arXiv. This work explores a novel Modular Training paradigm that divides a large model into smaller modules, trains them independently, and reassembles the trained modules to obtain the target model.
[2022/11]: Cross-Modal Adapter is now available on arXiv. This work explores adapter-based parameter-efficient transfer learning for the text-video retrieval domain. It reduces the number of fine-tuned parameters by 99.6% without performance degradation.
[2022/09]: An introduction to Parameter Efficient Transfer Learning was given at the BAAI dynamic neural network seminar.
[2022/07]: Glance and Focus Networks for Dynamic Visual Recognition was accepted by TPAMI (IF=24.31)!
[2022/07]: AI Time invited me to give a talk about Pseudo-Q.
[2022/04]: An introduction to 3D Visual Grounding was given at the BAAI dynamic neural network seminar.
[2022/04]: A curated list on 3D Vision and Language was created.
[2022/03]: Pseudo-Q and AdaFocusV2 were accepted by CVPR 2022!
[2021/07]: AdaFocus was accepted by ICCV 2021!
[2021/03]: CondenseNetV2 was accepted by CVPR 2021!
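
For readers new to the adapter idea behind the Cross-Modal Adapter news item above, here is a minimal sketch of a generic bottleneck adapter: a small down-projection/up-projection pair added with a residual connection, so only the adapter's few parameters are fine-tuned while the backbone stays frozen. This is an illustrative toy in NumPy, not the actual Cross-Modal Adapter code; the dimensions and function names are assumptions.

```python
import numpy as np

def make_adapter(d_model, bottleneck, rng):
    """Build a toy bottleneck adapter: x + up(relu(down(x))).

    Only W_down and W_up would be trained; the surrounding
    (frozen) backbone is not shown. The up-projection is
    zero-initialized so the adapter starts as an identity map
    and cannot disturb the pretrained model at step 0.
    """
    W_down = rng.standard_normal((d_model, bottleneck)) * 0.02
    W_up = np.zeros((bottleneck, d_model))

    def adapter(x):
        h = np.maximum(x @ W_down, 0.0)  # ReLU bottleneck, shape (..., bottleneck)
        return x + h @ W_up              # residual connection back to d_model

    return adapter, (W_down, W_up)

rng = np.random.default_rng(0)
adapter, params = make_adapter(d_model=768, bottleneck=64, rng=rng)

x = rng.standard_normal((4, 768))  # a batch of 4 token features
y = adapter(x)

# Trainable parameters: 2 * 768 * 64 = 98,304 per adapter — a tiny
# fraction of a full transformer layer, which is the source of the
# large reduction in fine-tuned parameters that adapter methods report.
n_trainable = sum(p.size for p in params)
```

Because the up-projection starts at zero, `y` equals `x` at initialization, which is a common trick for inserting adapters into a pretrained network without degrading it before training begins.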

🌱 Academic Services

  • Conference Reviewer: CVPR, ICCV, ECCV

📫 Contact

Please include a brief note about the reason for reaching out when you contact me.

  • E-mail: jhj20 at mails.tsinghua.edu.cn
  • WeChat: LebronJames5Champ

✨ GitHub Stats

Haojun's GitHub stats
