Learning Two-Layer Neural Networks with Symmetric Inputs
Sponsor(s): Mathematics
Deep learning has been extremely successful in practice. However, existing guarantees for learning neural networks are limited even when the network has only two layers: they require strong assumptions either on the input distribution or on the norm of the weight vectors. In this talk we give a new algorithm that is guaranteed to learn a two-layer neural network under much milder assumptions on the input distribution. Our algorithm works whenever the input distribution is symmetric, which means that the two inputs x and -x have the same probability.
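As a quick illustration of the symmetry condition (a minimal sketch, not part of the talk's algorithm): any distribution can be made symmetric by flipping the sign of each sample with probability 1/2, since a sample and its negation then occur with equal probability. The NumPy helper below, symmetrize, is hypothetical and only demonstrates the assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def symmetrize(X):
    """Flip the sign of each row of X independently with probability 1/2,
    making the empirical input distribution symmetric (x and -x equally likely)."""
    signs = rng.choice([-1.0, 1.0], size=(X.shape[0], 1))
    return signs * X

# Example: a Gaussian with nonzero mean is not symmetric; after sign flips it is.
X = rng.normal(loc=2.0, scale=1.0, size=(10000, 3))
X_sym = symmetrize(X)
print(X.mean(axis=0))      # roughly [2, 2, 2]
print(X_sym.mean(axis=0))  # roughly [0, 0, 0], as expected under symmetry
```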
Based on joint work with Rohith Kuditipudi, Zhize Li, and Xiang Wang.
Contact: Kristen Gerondelis