Please use this identifier to cite or link to this item: http://theses.ncl.ac.uk/jspui/handle/10443/6614
Title: Security of distributed and federated deep learning systems
Authors: Alqattan, Duaa Salman M
Issue Date: 2025
Publisher: Newcastle University
Abstract: Distributed and federated deep learning (DL) systems, operating across the client-edge-cloud continuum, have transformed real-time data processing in critical domains like smart cities, healthcare, and the industrial Internet of Things (IoT). By distributing DL training and inference tasks across multiple nodes, these systems enhance scalability, reduce latency, and improve efficiency. However, this decentralisation introduces significant security challenges, particularly concerning the availability and integrity of DL systems during training and inference. This thesis tackles these challenges through three main contributions.

• Edge-based Detection of Early-stage IoT Botnets: The first contribution employs Modular Neural Networks (MNN), a distributed DL approach, to develop an edge-based system for detecting early-stage IoT botnet activities and preventing DDoS attacks. By harnessing parallel computing on Multi-Access Edge Computing (MEC) servers, the system delivers rapid and accurate detection, ensuring uninterrupted service availability. This addresses the research gap in detecting early-stage IoT botnet activities as faults in network communication, enabling preventive measures before attacks escalate. Key findings include a significant reduction in false-negative rates and faster detection times (as low as 16 milliseconds), enabling early intervention in large-scale IoT environments.

• Security Assessment of Hierarchical Federated Learning (HFL): The second contribution is a security assessment of HFL, evaluating its resilience against data and model poisoning attacks during training and adversarial data manipulation during inference. Defence mechanisms such as Neural Cleanse (NC) and Adversarial Training (AT) are explored to improve model integrity in privacy-sensitive environments. This addresses the gap in systematically assessing the security vulnerabilities of HFL systems, particularly in detecting and mitigating targeted attacks in multi-level architectures. Key findings highlight that while HFL enhances scalability and recovery from untargeted attacks, it remains vulnerable to targeted backdoor attacks, especially in higher-level architectures, necessitating stronger defence mechanisms.

• Analysis of HFL Dynamics Under Attack: The third contribution examines HFL dynamics under attack using a Model Discrepancy score to analyse discrepancies in model updates (an illustrative sketch of such a score follows below). This study sheds light on the impact of adversarial attacks and data heterogeneity, providing insights for more robust aggregation methods in HFL. This addresses the gap in understanding the dynamics of HFL under adversarial attacks through model discrepancy phenomena. Key findings reveal that increased hierarchy and data heterogeneity can obscure the detection of malicious activity, emphasising the need for advanced aggregation methods tailored to complex, real-world scenarios.

Overall, this thesis enhances the security, availability, and integrity of distributed and federated DL systems by proposing novel detection and assessment methods, ultimately laying the foundation for more resilient DL-driven infrastructures.
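Illustrative sketch: the abstract does not define the Model Discrepancy score, so the Python sketch below only illustrates the general idea under stated assumptions: client updates are averaged in two levels (edge, then cloud), and each client's discrepancy is taken as the cosine distance between its update and its edge-level mean. The function names, the cosine-distance choice, and the toy "poisoned" update are illustrative assumptions, not the thesis's actual method.

```python
# Illustrative sketch only: two-level (edge -> cloud) averaging of client
# updates with a per-client discrepancy score, here assumed to be the
# cosine distance between a client's update and its edge-level mean.
# The Model Discrepancy score used in the thesis may be defined differently.
import numpy as np


def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity between two flattened update vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0
    return 1.0 - float(np.dot(a, b) / denom)


def hierarchical_average(client_updates_per_edge):
    """Average client updates at each edge server, then average the
    edge-level models at the cloud (plain two-level FedAvg-style mean)."""
    edge_means = [np.mean(updates, axis=0) for updates in client_updates_per_edge]
    cloud_mean = np.mean(edge_means, axis=0)
    return edge_means, cloud_mean


def discrepancy_scores(client_updates_per_edge):
    """Cosine distance of every client update from its edge-level mean;
    unusually large scores may indicate poisoned or highly non-IID clients."""
    edge_means, _ = hierarchical_average(client_updates_per_edge)
    return [
        [cosine_distance(update, edge_mean) for update in updates]
        for updates, edge_mean in zip(client_updates_per_edge, edge_means)
    ]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two edge servers, three clients each; one client's update is sign-flipped
    # to mimic a crude model-poisoning attempt (hypothetical toy data).
    benign = [rng.normal(size=10) * 0.1 + 1.0 for _ in range(5)]
    poisoned = -benign[0]
    edges = [np.stack(benign[:2] + [poisoned]), np.stack(benign[2:])]
    for edge_id, scores in enumerate(discrepancy_scores(edges)):
        print(f"edge {edge_id}: " + ", ".join(f"{s:.2f}" for s in scores))
```

In this toy run the sign-flipped client stands out with a score near 2, while benign clients stay near 0; the thesis's point that hierarchy and data heterogeneity can blur such signals would correspond to benign non-IID clients also drifting away from their edge-level mean.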
Description: PhD Thesis
URI: http://hdl.handle.net/10443/6614
Appears in Collections: School of Computing

Files in This Item:
Alqattan D S M 2025.pdf (Thesis, 2.33 MB, Adobe PDF)
dspacelicence.pdf (Licence, 43.82 kB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.