The figure below illustrates the overall architecture of our proposed Tiny Federated Wireless Foundation Models framework.
Figure 1: The system model, illustrating federated fine-tuning of a pruned wireless foundation model (WFM) with a ViT-based encoder and task-specific heads.
This repository provides the official PyTorch implementation of the paper:
Tiny Federated Wireless Foundation Models for Resource-Constrained Devices
Mohammad Hallaq, Fazal Muhammad Ali Khan, Ahmed Aboulfotouh, Syed Ali Hassan, Kapal Dev, Mohammad Tabrez Quasim, and Hatem Abou-Zeid
This project presents a complete pipeline for compressing and federating Vision Transformer (ViT)-based wireless foundation models. It is designed to enable efficient deployment on low-power, resource-constrained IoT devices.
Key components include:
- 🔧 Structured pruning of ViT backbones
- 🧠 Task-specific head construction for wireless sensing and communication tasks
- 🌐 Federated fine-tuning under both IID and non-IID client data distributions
Note: This implementation is built upon RadioMAE, which itself is based on the original Facebook MAE repository.
If you find this work useful, please consider citing:
@article{hallaq2025structured,
  title  = {Tiny Federated Wireless Foundation Models for Resource-Constrained Devices},
  author = {Mohammad Hallaq and Fazal Muhammad Ali Khan and Ahmed Aboulfotouh and Syed Ali Hassan and Kapal Dev and Mohammad Tabrez Quasim and Hatem Abou-Zeid},
  year   = {2025}
}
Apply structured pruning to the ViT encoder. You can choose the pruning ratio depending on your desired trade-off between model size and performance.
python compress_vit.py --pruning_ratio 0.85 --save_dir compressed_vit
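As a quick sanity check, you can count how many parameters remain in the pruned checkpoint. This is only a sketch: it assumes the file saved by `compress_vit.py` is a plain `state_dict` (or wraps one under a `"model"` key), which may differ from the actual save format.

```python
import torch

# Hypothetical sanity check -- assumes the checkpoint is a plain state_dict;
# adjust the key lookup if compress_vit.py wraps it differently.
ckpt = torch.load("compressed_vit/pruned_ViT_with_85pct.pth", map_location="cpu")
state = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt.state_dict()
n_params = sum(v.numel() for v in state.values() if torch.is_tensor(v))
print(f"~{n_params / 1e6:.1f}M parameters remain after pruning")
```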
The following figure shows the structured pruning pipeline used to compress the ViT backbone.
Figure 2: An illustration of the proposed block-wise pruning strategy.
The process consists of three steps:
1. Prune each encoder block individually to its maximum extent and measure the resulting increase in loss.
2. Estimate the importance of each block based on its loss degradation.
3. Map the normalized importance scores to pruning ratios and prune each block accordingly.
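For intuition, here is a minimal sketch of how step 3 might look. The mapping function, the example loss deltas, and the clamping threshold are all illustrative assumptions, not the paper's exact procedure.

```python
import torch

def blockwise_pruning_ratios(loss_increases, global_ratio=0.85):
    """Sketch of step 3: map per-block loss degradation to pruning ratios.

    loss_increases holds the loss delta measured when each encoder block
    is pruned to its maximum extent in isolation (step 1). Blocks whose
    removal hurts the loss most are treated as most important (step 2)
    and are therefore pruned the least.
    """
    deltas = torch.tensor(loss_increases, dtype=torch.float32)
    importance = deltas / deltas.sum()          # normalized importance scores
    raw = 1.0 - importance                      # invert: important -> prune less
    ratios = raw * (global_ratio / raw.mean())  # rescale to the global budget
    return ratios.clamp(max=0.95).tolist()      # illustrative safety cap

# Hypothetical loss increases for a 12-block ViT encoder.
print(blockwise_pruning_ratios([0.8, 0.5, 0.4, 0.3, 0.3, 0.2,
                                0.2, 0.2, 0.1, 0.1, 0.1, 0.05]))
```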
Attach a lightweight head to the pruned encoder for your target task. Supported tasks include:
- `sensing` → Human Activity Recognition
- `radio` → Radio Signal Identification
python build_task_model.py \
--pruned_vit compressed_vit/pruned_ViT_with_85pct.pth \
--task sensing \
--save_dir compressed_vit/compressed_sensing
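Conceptually, the task model is just the pruned encoder with a small classification head on top. The sketch below is illustrative only: the embedding dimension, number of classes, and encoder output shape are placeholder assumptions, not values taken from `build_task_model.py`.

```python
import torch.nn as nn

class TaskModel(nn.Module):
    """Illustrative sketch: pruned ViT encoder + lightweight task head."""

    def __init__(self, encoder, embed_dim=768, num_classes=6):
        super().__init__()
        self.encoder = encoder            # pruned ViT backbone
        self.head = nn.Sequential(        # lightweight task-specific head
            nn.LayerNorm(embed_dim),
            nn.Linear(embed_dim, num_classes),
        )

    def forward(self, x):
        feats = self.encoder(x)           # assumed shape: (batch, embed_dim)
        return self.head(feats)
```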
Instead of following steps 1 and 2, you can directly download the pruned task-specific models for the two tasks below:
| Task | Download Link |
|---|---|
| 🧍 Human Activity Recognition | Download |
| 📡 Radio Signal Identification | Download |
Fine-tune the compressed model using federated learning. Configure task, partitioning strategy, and other federated learning parameters as needed:
python federated_finetuning.py \
--pruned_model_path compressed_vit/compressed_sensing/pruned_ViT_for_sensing_task.pth \
--partitioning non-iid \
--log_dir has_output_dir \
--task sensing
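Under the hood, federated fine-tuning alternates local client updates with server-side aggregation. The following is a minimal FedAvg-style sketch, assuming one `DataLoader` per client (built with either IID or non-IID partitioning); it is not the repository's exact training loop.

```python
import copy
import torch
import torch.nn.functional as F

def fedavg_round(global_model, client_loaders, local_epochs=1, lr=1e-3):
    """One illustrative FedAvg round: local fine-tuning, then weight averaging."""
    client_states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)  # each client starts from the global model
        opt = torch.optim.AdamW(local.parameters(), lr=lr)
        local.train()
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                F.cross_entropy(local(x), y).backward()
                opt.step()
        client_states.append(local.state_dict())

    # Server step: average the client weights into the global model.
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```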
Finally, test the performance of the pruned model after federated fine-tuning:
python pruned_eval_sensing.py \
--model_dir has_output_dir
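For reference, evaluation boils down to a standard accuracy loop over a held-out test set. Here is a minimal sketch, assuming a classification head and a test `DataLoader`; `pruned_eval_sensing.py` may report additional metrics.

```python
import torch

@torch.no_grad()
def evaluate(model, test_loader):
    """Illustrative top-1 accuracy over the test set."""
    model.eval()
    correct = total = 0
    for x, y in test_loader:
        preds = model(x).argmax(dim=1)    # predicted class per sample
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total
```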