Preetum Nakkiran
Apple ML Research
Deep double descent: Where bigger models and more data hurt
P Nakkiran, G Kaplun, Y Bansal, T Yang, B Barak, I Sutskever
International Conference on Learning Representations (ICLR), 2020
SGD on Neural Networks Learns Functions of Increasing Complexity
P Nakkiran, G Kaplun, D Kalimeris, T Yang, B Edelman, H Zhang, B Barak
Advances in Neural Information Processing Systems, 3491-3501, 2019
Having Your Cake and Eating It Too: Jointly Optimal Erasure Codes for I/O, Storage, and Network-bandwidth
KV Rashmi, P Nakkiran, J Wang, NB Shah, K Ramchandran
FAST, 81-94, 2015
Adversarial robustness may be at odds with simplicity
P Nakkiran
arXiv preprint arXiv:1901.00532, 2019
Automatic gain control and multi-style training for robust small-footprint keyword spotting with deep neural networks
R Prabhavalkar, R Alvarez, C Parada, P Nakkiran, TN Sainath
2015 IEEE International Conference on Acoustics, Speech and Signal …, 2015
Optimal regularization can mitigate double descent
P Nakkiran, P Venkat, S Kakade, T Ma
arXiv preprint arXiv:2003.01897, 2020
Compressing deep neural networks using a rank-constrained topology
P Nakkiran, R Alvarez, R Prabhavalkar, C Parada
More data can hurt for linear regression: Sample-wise double descent
P Nakkiran
arXiv preprint arXiv:1912.07242, 2019
The deep bootstrap framework: Good online learners are good offline generalizers
P Nakkiran, B Neyshabur, H Sedghi
International Conference on Learning Representations (ICLR) 2021, 2020
Revisiting model stitching to compare neural representations
Y Bansal, P Nakkiran, B Barak
Advances in neural information processing systems 34, 225-236, 2021
Computational Limitations in Robust Classification and Win-Win Results
A Degwekar, P Nakkiran, V Vaikuntanathan
Proceedings of the Thirty-Second Conference on Learning Theory 99, 994-1028, 2019
Distributional generalization: A new kind of generalization
P Nakkiran, Y Bansal
arXiv preprint arXiv:2009.08092, 2020
General strong polarization
J Błasiok, V Guruswami, P Nakkiran, A Rudra, M Sudan
Journal of the ACM (JACM) 69 (2), 1-67, 2022
Limitations of neural collapse for understanding generalization in deep learning
L Hui, M Belkin, P Nakkiran
arXiv preprint arXiv:2202.08384, 2022
Rank-constrained neural networks
RA Guevara, P Nakkiran
US Patent 9,767,410, 2017
A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Examples are Just Bugs, Too
P Nakkiran
Distill 4 (8), e00019, 2019
Benign, tempered, or catastrophic: Toward a refined taxonomy of overfitting
N Mallinar, J Simon, A Abedsoltan, P Pandit, M Belkin, P Nakkiran
Advances in Neural Information Processing Systems 35, 1182-1195, 2022
Automatic gain control for speech recognition
R Alvarez, P Nakkiran
US Patent App. 14/727,741, 2016
Predicting positive and negative links with noisy queries: Theory & practice
CE Tsourakakis, M Mitzenmacher, KG Larsen, J Błasiok, B Lawson, ...
arXiv preprint arXiv:1709.07308, 2017
Learning rate annealing can provably help generalization, even for convex problems
P Nakkiran
arXiv preprint arXiv:2005.07360, 2020