Securing blockchain-based distributed learning for heterogeneous clients through knowledge distillation and zero-knowledge proofs
Publication Type
Conference Paper
Date Issued
2025-10
Language
English
Author(s)
Nkamdjin Njike, Ghislain
Hoang, Anh-Tu
Start Page
151
End Page
160
Citation
IEEE International Conference on Blockchain, Blockchain 2025
Contribution to Conference
Publisher DOI
Publisher
IEEE
ISBN of container
979-8-3315-9016-1
979-8-3315-9015-4
Abstract
Distributed learning (DL) is gaining popularity because it enables clients (e.g., AI agents) to improve their machine learning (ML) models’ performance by exchanging knowledge without revealing private datasets. State-of-the-art DL approaches primarily focus on transferring knowledge between heterogeneous clients with diverse model architectures, connecting clients with those that can improve their models, and protecting data privacy. However, they overlook the threat of malicious clients, which can degrade model performance by sharing inaccurate knowledge or by excluding high-performing clients from the training procedure. Therefore, we introduce the Zero-Knowledge Blockchain-Based Knowledge Distillation Learning Framework (zkBKD). In zkBKD, heterogeneous clients communicate with a blockchain network to discover high-performing clients, verify zero-knowledge proofs (ZKPs) to ensure the correctness of the knowledge shared by other clients, and vote to eliminate malicious clients. We analyze security and privacy risks and show that zkBKD prevents membership, poisoning, and collusion attacks. We conduct extensive experiments on two standard datasets with heterogeneous clients spanning four model architectures. The experimental results demonstrate that zkBKD improves the average model accuracy across all clients by a relative 25.71%; even lightweight models such as ResNet-2 achieve up to a 103.35% accuracy gain compared to independent training.
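The following is a minimal, hypothetical Python sketch of one zkBKD-style round as described in the abstract: each client broadcasts distilled knowledge (soft predictions on a public batch) together with a proof of correct computation, the proof is verified before the knowledge is aggregated, and clients whose proofs fail verification are voted out. A salted hash commitment stands in for a real zero-knowledge proof, and all names (Client, commit, run_round, the majority voting rule) are illustrative assumptions, not the paper's actual implementation.

import hashlib
import json

def commit(logits, salt):
    # Placeholder "proof": a salted commitment to the shared logits.
    # In zkBKD this would be a real ZKP of correct computation.
    return hashlib.sha256(json.dumps(logits).encode() + salt).hexdigest()

class Client:
    def __init__(self, name, honest=True):
        self.name = name
        self.honest = honest
        self.salt = hashlib.sha256(name.encode()).digest()  # public here

    def predict(self, x):
        # Stand-in for a forward pass of this client's own architecture;
        # heterogeneous clients may each use a different model.
        return [0.5 + 0.01 * x, 0.5 - 0.01 * x]

    def share_knowledge(self, public_batch):
        # Knowledge distillation: share soft predictions on a public batch
        # instead of raw private data.
        logits = [self.predict(x) for x in public_batch]
        proof = commit(logits, self.salt)
        if not self.honest:
            # A poisoning client tampers with the logits after proving,
            # so the proof no longer matches what it broadcasts.
            logits = [[1.0, 0.0] for _ in logits]
        return logits, proof

def run_round(clients, public_batch):
    accepted, votes = {}, {c.name: 0 for c in clients}
    for c in clients:
        logits, proof = c.share_knowledge(public_batch)
        if commit(logits, c.salt) == proof:
            # Proof verifies: the knowledge is accepted for aggregation.
            accepted[c.name] = logits
        else:
            # Proof fails: every peer votes against the sender.
            votes[c.name] += len(clients) - 1
    # Clients voted against by a majority are excluded from future rounds.
    excluded = {n for n, v in votes.items() if v > len(clients) // 2}
    return accepted, excluded

if __name__ == "__main__":
    clients = [Client("a"), Client("b"), Client("mallory", honest=False)]
    accepted, excluded = run_round(clients, public_batch=[1.0, 2.0])
    print("aggregating knowledge from:", sorted(accepted))
    print("voted out:", sorted(excluded))

In this toy run, the tampering client "mallory" fails verification and is voted out, while the honest clients' knowledge is aggregated, mirroring the poisoning defense the abstract describes.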
Subjects
distributed learning
knowledge distillation
zero-knowledge proofs
blockchain
DDC Class
005.7: Data in Computer Systems