Title: Data poisoning detection in federated learning
Authors: Khuu, Denise-Phi; Sober, Michael Peter; Schallmoser, Dominik; Fischer, Mathias; Schulte, Stefan
Published in: Proceedings of the ACM Symposium on Applied Computing (SAC 2024)
Date issued: 2024-04-08
Date available: 2024-07-22
ISBN: 9798400702433
DOI: 10.1145/3605098.3635896
Handle: https://hdl.handle.net/11420/48476
Language: en
Type: Conference Paper
Keywords: data poisoning; detection; federated learning; label-flipping attacks; shapley additive explanation
Collection: MLE@TUHH
DDC classification: Computer Science, Information and General Works::005: Computer Programming, Programs, Data and Security

Abstract:
Federated Learning (FL) is an emerging machine learning paradigm in which multiple clients collaboratively train a model without exposing their local datasets. Under this paradigm, numerous clients share the responsibility of model training instead of a centralized server. However, this also enables clients of an FL system to send malicious model updates. An adversary could, e.g., train the local model with incorrect data to insert an adversary-defined objective into the model or to cause a severe drop in accuracy. We show that a small number of adversaries can considerably reduce model performance after only one round of FL. Using Shapley Additive Explanation (SHAP) values as indicators, we propose a detection algorithm that pairs SHAP values with Support Vector Machines (SVMs) to derive classifiers that effectively differentiate malicious from honest clients.
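The abstract describes training an SVM on per-client SHAP values to separate malicious from honest clients. The sketch below is a hypothetical illustration of that idea, not the authors' implementation: the "SHAP attribution vectors" are synthetic stand-ins (randomly generated, with poisoned clients drawn from a shifted distribution), and the feature dimensionality and class sizes are arbitrary assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for per-client SHAP attribution vectors (assumption):
# honest clients cluster around one attribution profile, while clients that
# poisoned their labels shift the model's feature attributions.
n_features = 10
honest = rng.normal(loc=0.0, scale=0.1, size=(80, n_features))
malicious = rng.normal(loc=0.5, scale=0.1, size=(20, n_features))

X = np.vstack([honest, malicious])
y = np.array([0] * 80 + [1] * 20)  # 0 = honest client, 1 = malicious client

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# An RBF-kernel SVM learns the boundary between the two attribution profiles.
clf = SVC(kernel="rbf").fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

On cleanly separated synthetic profiles like these, the classifier attains near-perfect accuracy; real SHAP vectors from an FL system would of course overlap more.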