
Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation

This repository contains the implementation of our IJCAI 2024 (AI for Social Good track) paper, "Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation" by Riyad Bin Rafiq, Weishi Shi, and Mark V. Albert.

Abstract: Hand gestures can provide a natural means of human-computer interaction and enable people who cannot speak to communicate efficiently. Existing hand gesture recognition methods heavily depend on pre-defined gestures; however, motor-impaired individuals require new gestures tailored to each individual's gesture motion and style. Gesture samples collected from different persons exhibit distribution shifts due to their health conditions, the severity of their disability, arm motion patterns, etc. In this paper, we introduce the Latent Embedding Exploitation (LEE) mechanism in our replay-based Few-Shot Continual Learning (FSCL) framework, which significantly improves the performance of fine-tuning a model on out-of-distribution data. Our method produces a diversified latent feature space by leveraging a preserved latent embedding, known as gesture prior knowledge, along with intra-gesture divergence derived from two additional embeddings. Thus, the model can capture the latent statistical structure of highly variable gestures with limited samples. We conduct an experimental evaluation using the SmartWatch Gesture and the Motion Gesture datasets. The proposed method achieves an average test accuracy of 57.0%, 64.6%, and 69.3% using one, three, and five samples, respectively, for six different gestures. Our method helps motor-impaired persons leverage wearable devices, and their unique styles of movement can be learned and applied in human-computer interaction and social communication.
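For readers who prefer code, the toy sketch below illustrates the general flavor of replay-based few-shot fine-tuning with preserved embeddings. It is not the paper's actual LEE implementation: the encoder, shapes, sample counts, and the way embeddings are combined are hypothetical simplifications chosen only to make the sketch self-contained and runnable in TensorFlow. See main.py in this repository for the actual training and evaluation pipeline.

# Illustrative toy sketch only -- NOT the paper's LEE implementation.
# It shows replay-based few-shot fine-tuning in which a frozen encoder's
# preserved ("prior") embeddings are reused alongside embeddings of a
# handful of new user samples. All names and shapes are hypothetical.
import numpy as np
import tensorflow as tf

NUM_GESTURES, EMB_DIM, SEQ_LEN, CHANNELS = 6, 64, 50, 6

# A frozen encoder, assumed pre-trained on existing gesture data.
encoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, CHANNELS)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(EMB_DIM),
])
encoder.trainable = False

# Preserved prior embeddings replayed during fine-tuning (random here).
prior_embeddings = np.random.randn(NUM_GESTURES * 10, EMB_DIM).astype("float32")
prior_labels = np.repeat(np.arange(NUM_GESTURES), 10)

# A few new samples (e.g., 1/3/5 shots per gesture) from a new user.
new_x = np.random.randn(NUM_GESTURES * 5, SEQ_LEN, CHANNELS).astype("float32")
new_labels = np.repeat(np.arange(NUM_GESTURES), 5)
new_embeddings = encoder(new_x).numpy()

# Fine-tune only a small classifier head on the combined embedding set.
head = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(EMB_DIM,)),
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
head.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
head.fit(np.concatenate([prior_embeddings, new_embeddings]),
         np.concatenate([prior_labels, new_labels]),
         epochs=20, batch_size=16, verbose=0)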

Requirements

The required packages are listed in the requirements.txt file. Python 3.10.x is recommended.

How to run

To install the dependencies in a Python 3 environment, run:

pip install -r requirements.txt

Training and Evaluation

To train the model and evaluate it in the few-shot continual learning setting, run the following command:

python main.py --subject_id ID --gesture_map ORDER

Here, the command-line arguments ID and ORDER take one of the following values:

ID = < 101 / 103 / 104 / 105 / 107 / 108 / 109 / 110 / 111 / 112 / 113 / 114 >
ORDER = < order1 / order2 / order3 / order4 / order5 >

For example, to train and evaluate the model for subject_id 107 with the gesture-class order order3, run the following command:

python main.py --subject_id 107 --gesture_map order3
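To run every subject and order combination in sequence, a small driver script can be convenient. The following is a minimal sketch (not part of the repository); it assumes only that main.py accepts the --subject_id and --gesture_map flags documented above and that the runs are independent of one another.

# Convenience sketch: run the evaluation for every subject/order pair above.
import subprocess

subject_ids = [101, 103, 104, 105, 107, 108, 109, 110, 111, 112, 113, 114]
orders = [f"order{i}" for i in range(1, 6)]

for sid in subject_ids:
    for order in orders:
        subprocess.run(
            ["python", "main.py", "--subject_id", str(sid), "--gesture_map", order],
            check=True,  # stop on the first failing run
        )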

Citation

@article{rafiq2024wearable,
  title={Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation},
  author={Rafiq, Riyad Bin and Shi, Weishi and Albert, Mark V},
  journal={arXiv preprint arXiv:2405.08969},
  year={2024}
}
