Dissertation Announcement for Xingyu Li – 3/10/2023 at 4:00 PM

February 24, 2023

Dissertation Title: Secure and Efficient Federated Learning

Date: 03/10/2023 4:00 PM

Location: Simrall Conference Room 228

Candidate: Xingyu Li

Degree: Doctor of Philosophy in Electrical and Computer Engineering

Committee Members: Dr. Bo Tang, Dr. Jenny Du, Dr. John Ball, Dr. Yong Fu

Abstract:
In the past decade, machine learning has advanced rapidly, driven largely by the availability of large datasets for training. However, gathering a sufficient amount of data on a central server can be difficult. Moreover, with the rise of mobile networking and the large volumes of data generated by IoT devices, privacy and security have become major concerns, prompting government regulations such as GDPR, HIPAA, CCPA, and ADPPA. Under these circumstances, traditional centralized machine learning faces a fundamental problem: sensitive data must be kept locally for privacy reasons, making it difficult to achieve the desired learning outcomes. Federated learning (FL) addresses this by training a globally shared model through the exchange of locally computed model updates rather than the raw data.
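For readers unfamiliar with this training pattern, the sketch below illustrates a generic FedAvg-style round, not the specific algorithms proposed in this dissertation: each client trains on its own private data, and only the resulting model weights are sent to the server, which aggregates them by a weighted average. The linear model, simulated client datasets, and hyperparameters are hypothetical placeholders chosen for illustration.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    """One client's local training: a few gradient-descent steps on a
    simple linear model (an illustrative stand-in for a real model)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=5):
    """Server loop: broadcast the global model, collect locally computed
    updates, and average them weighted by each client's data size.
    Raw data never leaves the clients."""
    for _ in range(rounds):
        local_weights, sizes = [], []
        for data, labels in clients:
            local_weights.append(local_update(global_w, data, labels))
            sizes.append(len(labels))
        total = sum(sizes)
        global_w = sum(n / total * w for w, n in zip(local_weights, sizes))
    return global_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three simulated clients, each holding its own private dataset
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        clients.append((X, y))
    w = federated_averaging(np.zeros(2), clients, rounds=20)
    print("Recovered weights:", w)
```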

Despite its success as a natural solution for machine learning on IoT devices, FL still faces challenges in security and performance. These include high communication costs between IoT devices and the central server, the potential for sensitive-information leakage and reduced model precision caused by the aggregation process in the distributed IoT network, and performance concerns arising from the heterogeneity of data and devices in the network.

In this dissertation, I present practical and effective techniques, backed by strong theoretical support, to address these challenges. To optimize communication resources, I introduce a new multi-server FL framework called MS-FedAvg. To enhance security, I propose a robust defense algorithm called LoMar. To address data heterogeneity, I present FedLGA, and for device heterogeneity, I propose FedSAM.