
Learning with Distributed Systems: Adversary-Resilience and Neural Networks

Wednesday, October 30, 2019
12:00 pm - 1:00 pm
Speaker: Lili Su

In this talk, I will first discuss how to secure Federated Learning (FL) against adversarial faults. FL is a distributed learning paradigm proposed by Google, whose goal is to enable the cloud (i.e., the learner) to train a model without collecting the training data from users' mobile devices. Compared with traditional learning, FL suffers from serious security issues, and several practical constraints call for new security strategies. Towards quantitative and systematic insights into the impact of these security issues, we formulated and studied the problem of Byzantine-resilient Federated Learning. We proposed two robust learning rules that secure gradient descent against Byzantine faults. The estimation error achieved under our more recently proposed rule is order-optimal in the minimax sense.
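The abstract does not spell out the two learning rules. Purely as a flavor of what Byzantine-resilient aggregation can look like (the function names and the coordinate-wise median rule here are illustrative, not necessarily the speaker's method), a minimal NumPy sketch:

```python
import numpy as np

def robust_aggregate(gradients):
    """Aggregate worker gradients by coordinate-wise median.

    A classic Byzantine-resilient alternative to plain averaging:
    as long as strictly fewer than half of the workers are faulty,
    each coordinate of the output lies between honest reports.
    """
    return np.median(np.stack(gradients), axis=0)

def robust_gd_step(w, gradients, lr=0.1):
    # One gradient-descent step using the robust aggregate
    # instead of the (easily corrupted) mean.
    return w - lr * robust_aggregate(gradients)

# Two honest workers report the true gradient; one Byzantine worker lies.
true_grad = np.array([1.0, -2.0, 0.5])
reports = [true_grad, true_grad, np.array([100.0, 100.0, -100.0])]
w = np.zeros(3)
w_new = robust_gd_step(w, reports, lr=0.1)
# The median ignores the outlier, so the step follows the true gradient.
```

With a simple mean, the single Byzantine report would drag the update arbitrarily far; the median caps its influence.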
Then, I will briefly present our recent results on neural networks, covering both biological and artificial neural networks. Notably, our results on artificial neural networks (i.e., training over-parameterized 2-layer neural networks) improve on the state of the art. In particular, we showed that nearly-linear network over-parameterization is sufficient for the global convergence of gradient descent.
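The precise over-parameterization bound is the subject of the talk; purely to illustrate the setting (dimensions, step size, and the choice to train only the hidden layer are made-up assumptions, following a common setup in this literature), here is a toy wide 2-layer ReLU network trained with plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n samples in d dimensions, hidden width m.
# Over-parameterization results concern how large m must be
# relative to n; here m is simply "large" for 8 samples.
n, d, m = 8, 3, 256
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

W = rng.normal(size=(m, d)) / np.sqrt(d)          # trained hidden weights
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)  # fixed output weights

def loss(W):
    # Mean squared error of the 2-layer ReLU network.
    preds = np.maximum(X @ W.T, 0.0) @ a
    return 0.5 * np.mean((preds - y) ** 2)

loss0 = loss(W)
lr = 0.2
for _ in range(300):
    Z = X @ W.T                                # pre-activations, (n, m)
    r = (np.maximum(Z, 0.0) @ a - y) / n       # scaled residuals, (n,)
    mask = (Z > 0).astype(float)               # ReLU gradient mask
    W -= lr * ((mask * np.outer(r, a)).T @ X)  # gradient step on W only
lossT = loss(W)
# lossT is well below loss0: gradient descent drives the loss down.
```

In the over-parameterized regime the weights barely move during training ("lazy" dynamics), which is what makes global convergence guarantees of this kind tractable.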

Contact: Dina Khalilova