
Tiny Federated Wireless Foundation Models: A PyTorch Implementation

The figure below illustrates the overall architecture of our proposed Tiny Federated Wireless Foundation Models framework.


Figure 1: A depiction of the system model illustrating federated fine-tuning of pruned WFM, incorporating a ViT-based encoder and task-specific heads.

This repository provides the official PyTorch implementation of the paper:

Tiny Federated Wireless Foundation Models for Resource-Constrained Devices
Mohammad Hallaq, Fazal Muhammad Ali Khan, Ahmed Aboulfotouh, Syed Ali Hassan, Kapal Dev, Mohammad Tabrez Quasim, and Hatem Abou-Zeid
Accepted at the IEEE Internet of Things Journal (IoT-J), 2025.


📖 Overview

This project presents a complete pipeline for compressing and federating Vision Transformer (ViT)-based wireless foundation models. It is designed to enable efficient deployment on low-power, resource-constrained IoT devices.

Key components include:

  • 🔧 Structured pruning of ViT backbones
  • 🧠 Task-specific head construction for wireless sensing and communication tasks
  • 🌐 Federated fine-tuning under both IID and non-IID client data distributions

Note: This implementation is built upon RadioMAE, which itself is based on the original Facebook MAE repository.


📚 Citation

If you find this work useful, please consider citing:

@article{hallaq2025structured,
  title     = {Tiny Federated Wireless Foundation Models for Resource-Constrained Devices},
  author    = {Mohammad Hallaq and
               Fazal Muhammad Ali Khan and
               Ahmed Aboulfotouh and
               Syed Ali Hassan and
               Kapal Dev and
               Mohammad Tabrez Quasim and
               Hatem Abou-Zeid},
  journal   = {IEEE Internet of Things Journal},
  year      = {2025}
}

🛠️ Getting Started

Step 1: Prune the Vision Transformer (ViT)

Apply structured pruning to the ViT encoder. You can choose the pruning ratio depending on your desired trade-off between model size and performance.

python compress_vit.py --pruning_ratio 0.85 --save_dir compressed_vit

The following figure shows the structured pruning pipeline used to compress the ViT backbone.


Figure 2: An illustration of the proposed block-wise pruning strategy.
The process consists of three steps:
(1) Prune each encoder block individually to its maximum extent and measure the resulting loss increase;
(2) Estimate the importance of each block based on loss degradation;
(3) Map the normalized importance scores to pruning ratios and prune each block accordingly.
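
To make these three steps concrete, here is a minimal sketch of the block-wise procedure in PyTorch. It is an illustration, not the code in compress_vit.py: model.blocks, eval_loss, and prune_block_to_max are hypothetical stand-ins, and the linear mapping from importance to pruning ratio is an assumption.

import copy
import torch

def blockwise_pruning_ratios(model, eval_loss, target_ratio=0.85):
    """Sketch of the block-wise pruning strategy in Figure 2.

    Assumes `model.blocks` is the list of ViT encoder blocks and
    `eval_loss(model)` is a hypothetical helper returning the
    validation loss of a (possibly pruned) model.
    """
    base_loss = eval_loss(model)

    # Step 1: prune each block individually to its maximum extent
    # and measure the resulting loss increase.
    deltas = []
    for i in range(len(model.blocks)):
        probe = copy.deepcopy(model)
        prune_block_to_max(probe.blocks[i])  # hypothetical helper
        deltas.append(eval_loss(probe) - base_loss)

    # Step 2: blocks whose removal degrades the loss more are more
    # important; normalize the loss increases into importance scores.
    importance = torch.tensor(deltas)
    importance = importance / importance.sum()

    # Step 3: map normalized importance to per-block pruning ratios,
    # so less important blocks are pruned more aggressively.
    ratios = target_ratio * (1.0 - importance) / (1.0 - importance).mean()
    return ratios.clamp(0.0, 0.95).tolist()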

Step 2: Build a Task-Specific Model

Attach a lightweight head to the pruned encoder for your target task. Supported tasks include:

  • sensing → Human Activity Recognition
  • radio → Radio Signal Identification

python build_task_model.py \
  --pruned_vit compressed_vit/pruned_ViT_with_85pct.pth \
  --task sensing \
  --save_dir compressed_vit/compressed_sensing
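
For intuition, the sketch below shows one plausible shape of the resulting task model: a pruned ViT encoder with a small linear head on top. The class-token pooling and single-layer head are assumptions about the head design, and the checkpoint path and class count in the usage comment are illustrative placeholders.

import torch
import torch.nn as nn

class TaskModel(nn.Module):
    """Pruned ViT encoder + lightweight task-specific head (sketch)."""

    def __init__(self, encoder, embed_dim, num_classes):
        super().__init__()
        self.encoder = encoder               # pruned ViT backbone
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        feats = self.encoder(x)              # assumed shape: (B, tokens, D)
        return self.head(feats[:, 0])        # assumed class-token pooling

# Illustrative usage -- names and class count are placeholders:
# encoder = torch.load("compressed_vit/pruned_ViT_with_85pct.pth")
# model = TaskModel(encoder, embed_dim=encoder.embed_dim, num_classes=6)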

📦 Download Pruned Models

Alternatively, you can skip Steps 1 and 2 and directly download the pruned task-specific models for the two tasks below:

Task                              Download Link
🧍 Human Activity Recognition     Download
📡 Radio Signal Identification    Download

Step 3: Federated Fine-Tuning

Fine-tune the compressed model using federated learning. Configure task, partitioning strategy, and other federated learning parameters as needed:

python federated_finetuning.py \
  --pruned_model_path compressed_vit/compressed_sensing/pruned_ViT_for_sensing_task.pth \
  --partitioning non-iid \
  --log_dir has_output_dir \
  --task sensing
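
Conceptually, this stage follows standard FedAvg: each client fine-tunes the compressed model on its local data shard, and the server averages the resulting weights. The sketch below is a minimal, equal-weight version under that assumption; the local optimizer, learning rate, and number of local epochs are illustrative, not the settings used in federated_finetuning.py.

import copy
import torch

def fedavg_round(global_model, client_loaders, local_epochs=1, lr=1e-3):
    """One federated round: local fine-tuning + weight averaging (sketch)."""
    client_states = []
    for loader in client_loaders:            # one DataLoader per client shard
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        local.train()
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
        client_states.append(local.state_dict())

    # Server step: equal-weight average of client parameters.
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(0)
    global_model.load_state_dict(avg)
    return global_model

As a general rule, an IID partition shuffles the dataset uniformly across clients, while a non-IID partition gives each client a label-skewed shard (e.g., class proportions drawn from a Dirichlet distribution).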

Step 4: Testing the Performance

Finally, evaluate the performance of the federated fine-tuned, pruned model:

python pruned_eval_sensing.py \
  --model_dir has_output_dir
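
Under the hood, this evaluation amounts to standard top-1 accuracy on the held-out test split. A minimal sketch of that loop, with placeholder loader and device handling:

import torch

@torch.no_grad()
def evaluate(model, test_loader, device="cpu"):
    """Top-1 accuracy of the fine-tuned, pruned model (sketch)."""
    model.eval().to(device)
    correct = total = 0
    for x, y in test_loader:
        preds = model(x.to(device)).argmax(dim=1)
        correct += (preds == y.to(device)).sum().item()
        total += y.numel()
    return correct / total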
