TUHH Open Research
Securing blockchain-based distributed learning for heterogeneous clients through knowledge distillation and zero-knowledge proofs

Publication Type
Conference Paper
Date Issued
2025-10
Language
English
Author(s)
Nkamdjin Njike, Ghislain
Data Engineering E-19  
Hoang, Anh-Tu 
Christian Doppler Forschungsgesellschaft  
Schulte, Stefan  
Christian Doppler Forschungsgesellschaft  
TORE-URI
https://hdl.handle.net/11420/60373
Start Page
151
End Page
160
Citation
IEEE International Conference on Blockchain, Blockchain 2025
Contribution to Conference
IEEE International Conference on Blockchain, Blockchain 2025  
Publisher DOI
10.1109/blockchain67634.2025.00029
Publisher
IEEE
ISBN of container
979-8-3315-9016-1
979-8-3315-9015-4
Abstract
Distributed learning (DL) is gaining popularity because it enables clients (e.g., AI agents) to improve the performance of their machine learning (ML) models by exchanging knowledge without revealing private datasets. State-of-the-art DL approaches focus primarily on transferring knowledge between heterogeneous clients with diverse model architectures, connecting clients with peers that can improve their models, and protecting data privacy. However, they overlook the threat of malicious clients, which can degrade model performance by sharing inaccurate knowledge or by excluding high-performing clients from the training procedure. Therefore, we introduce the Zero-Knowledge Blockchain-Based Knowledge Distillation Learning Framework (zkBKD). In zkBKD, heterogeneous clients communicate with a blockchain network to discover high-performing clients, verify zero-knowledge proofs (ZKPs) to ensure the correctness of the knowledge shared by other clients, and vote to eliminate malicious clients. We analyze security and privacy risks and show that zkBKD prevents membership, poisoning, and collusion attacks. We conduct extensive experiments on two standard datasets across heterogeneous clients with four model architectures. The experimental results demonstrate that zkBKD yields a relative improvement of 25.71% in the average model accuracy across all clients; even lightweight models such as ResNet-2 achieve up to a 103.35% accuracy gain compared to independent training.
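The following is a minimal sketch, in PyTorch with synthetic data, of one distillation round in the style the abstract describes: heterogeneous clients publish knowledge on a shared public batch, peers verify it before distilling from it, and unverifiable knowledge is excluded. The helper names (make_client, share_knowledge, verify_proof, distill), the logit-averaging aggregation, and the stand-in proof check are illustrative assumptions; the paper's actual zero-knowledge proof system, on-chain peer discovery, and voting protocol are not reproduced here.

# Minimal sketch of one zkBKD-style round, based only on the abstract's
# description. verify_proof is a hypothetical stand-in, NOT an actual ZKP.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

NUM_CLASSES, FEATURES = 10, 32
public_x = torch.randn(64, FEATURES)  # shared public reference batch

def make_client(width):
    """Heterogeneous clients: same task, different architectures."""
    return nn.Sequential(nn.Linear(FEATURES, width), nn.ReLU(),
                         nn.Linear(width, NUM_CLASSES))

clients = [make_client(w) for w in (16, 32, 64, 128)]

def share_knowledge(model):
    """A client publishes soft knowledge (logits) on the public batch."""
    with torch.no_grad():
        return model(public_x)

def verify_proof(logits):
    """Placeholder for ZKP verification (assumption: a sanity check that
    rejects malformed knowledge). In the paper, a verified ZKP attests
    that the knowledge was computed correctly without revealing the
    client's private data."""
    return torch.isfinite(logits).all()

def distill(student, teacher_logits, T=2.0, steps=20, lr=1e-2):
    """Student absorbs accepted knowledge via KL-divergence distillation."""
    opt = torch.optim.SGD(student.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.kl_div(F.log_softmax(student(public_x) / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction="batchmean") * T * T
        loss.backward()
        opt.step()

# One round: every client shares; peers verify, then distill. In the full
# framework a failed verification would trigger a vote to eliminate the
# offending client; here it is simply excluded from aggregation.
shared = [share_knowledge(c) for c in clients]
accepted = [k for k in shared if verify_proof(k)]
avg_knowledge = torch.stack(accepted).mean(dim=0)  # aggregate accepted logits
for c in clients:
    distill(c, avg_knowledge)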
Subjects
distributed learning
knowledge distillation
zero-knowledge proofs
blockchain
DDC Class
005.7: Data