Two recent studies examine over-parameterized neural networks. The first, 'Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis', finds that over-parameterized networks, known for their predictive power, are nevertheless robust against adversarial examples, challenging the notion that over-parameterization leads to vulnerability. The second, 'Just How Flexible are Neural Networks in Practice?' by R. Shwartz-Ziv, M. Goldblum, A. Bansal, C. B. Bruss, Y. LeCun, and A. G. Wilson (New York University, 2024), examines the flexibility of neural networks in practice: standard optimizers find minima where networks can only fit training sets with significantly fewer samples than they have parameters, and factors such as architecture and optimizer play crucial roles in determining flexibility and performance.
Just How Flexible are Neural Networks in Practice? https://t.co/Geirm2BQsI
[LG] Just How Flexible are Neural Networks in Practice? R Shwartz-Ziv, M Goldblum, A Bansal, C. B Bruss, Y LeCun, A G Wilson [New York University] (2024) https://t.co/1YamryORkz - Standard optimizers find minima where neural networks can only fit training sets with significantly… https://t.co/GrjvQpeDx4
We often determine whether a neural network is over- or under-parameterized by counting parameters. In practice, how much data we can fit depends on many factors: architecture, optimizer, etc. So just how flexible are neural networks in practice? 🧵 Paper: https://t.co/oqkrqBLHRM
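For context, the parameter-counting test mentioned above is a one-liner. Here is a minimal sketch, assuming PyTorch; the MLP architecture and the 50,000-sample training set size are illustrative assumptions, not from the paper:

```python
import torch.nn as nn

def count_trainable_parameters(model: nn.Module) -> int:
    # The usual bookkeeping proxy for over-/under-parameterization:
    # simply count trainable weights.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical example: a small MLP for 32x32 RGB inputs, 10 classes.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

n_params = count_trainable_parameters(model)
n_train = 50_000  # assumed CIFAR-10-sized training set

# The rule of thumb the thread questions: "over-parameterized" iff
# parameters exceed training samples. The paper's point is that what a
# network can actually fit also depends on architecture and optimizer.
print(f"params={n_params:,}  samples={n_train:,}  ratio={n_params / n_train:.1f}")
```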
Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis. https://t.co/uFdU04Ep7G
Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis ◼ Over-parameterized neural networks, known for their predictive power, may seem vulnerable to adversarial examples. However, this study shows they are robust against such… https://t.co/kV1bnVdlVx
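For readers unfamiliar with the term, an adversarial example is an input perturbed slightly so that a model misclassifies it. Below is a minimal sketch of the classic one-step FGSM attack (Goodfellow et al.), assuming PyTorch and image inputs scaled to [0, 1]; the tweets do not specify which attacks the study evaluates, so this is background only, not the paper's protocol:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 0.03) -> torch.Tensor:
    """Craft adversarial examples with the fast gradient sign method:
    step each input by eps in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    # Clamp back to the valid [0, 1] image range after perturbing.
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

# Usage (hypothetical model and data batch):
#   x_adv = fgsm_attack(model, x, y)
#   robust_acc = (model(x_adv).argmax(dim=1) == y).float().mean()
```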