News Release

Frontiers of Computer Science

Peer-Reviewed Publication

Higher Education Press


Federated learning (FL) is a promising privacy-preserving paradigm, yet real-world applications face the challenge of model heterogeneity. Most personalized FL (PFL) methods require uniform architectures, hindering deployment across diverse hardware. While federated distillation (FD) allows cross-architecture collaboration, current strategies often apply uniform aggregation, ignoring individual client needs. This blind knowledge transfer leads to performance degradation and noise interference, posing a significant bottleneck for heterogeneous systems.

To address this, a team from Harbin Institute of Technology, Shenzhen (HIT Shenzhen) developed FedPD, a framework based on partial distillation. Unlike traditional methods, FedPD evaluates the relevance between global ensemble knowledge and each client's local features, enabling selective knowledge transfer. By filtering out conflicting information and retaining only beneficial knowledge, the framework keeps local optimization precise. Adaptive weight adjustment further balances local experience against global insights, allowing clients to maximize performance gains while preserving the unique characteristics of their own data within the federated network.
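The selective-transfer idea described above can be sketched in a few lines. The following is a minimal illustration, not the published FedPD algorithm: it assumes a per-sample relevance score (here, cosine similarity between local and global predicted distributions), a hypothetical threshold `tau` for filtering conflicting global knowledge, and relevance itself as the adaptive mixing weight. All function and parameter names are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over logits.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def selective_distillation_targets(local_logits, global_logits, tau=0.5):
    """Hypothetical sketch of partial distillation.

    For each sample, score how well the global ensemble prediction
    agrees with the local model (cosine similarity of distributions).
    Global knowledge below the threshold `tau` is treated as
    conflicting and filtered out; otherwise it is blended into the
    local distribution with a relevance-proportional weight.
    """
    p_local = softmax(local_logits)
    p_global = softmax(global_logits)

    # Per-sample cosine similarity as a relevance score.
    num = (p_local * p_global).sum(axis=1)
    denom = np.linalg.norm(p_local, axis=1) * np.linalg.norm(p_global, axis=1)
    relevance = num / denom

    keep = relevance >= tau                           # filter conflicting knowledge
    alpha = np.where(keep, relevance, 0.0)[:, None]   # adaptive transfer weight

    # Blended distillation target: local belief nudged toward the
    # global ensemble only where the global knowledge is relevant.
    targets = (1.0 - alpha) * p_local + alpha * p_global
    return targets, keep
```

In a training loop, `targets` would serve as the soft labels for a distillation loss (for example, KL divergence), so filtered samples fall back to purely local supervision. The threshold and weighting scheme are design choices; the actual FedPD relevance measure may differ.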

Experiments show that FedPD outperforms existing FD and PFL algorithms on heterogeneous datasets and remains robust under non-IID conditions. Evaluations across diverse combinations of client architectures further demonstrate its flexibility in large-scale networks. This efficient paradigm lowers barriers to collaboration and supports the development of inclusive, distributed AI ecosystems, offering a robust path for decentralized learning.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.