Preetum Nakkiran
Deep double descent: Where bigger models and more data hurt
P Nakkiran, G Kaplun, Y Bansal, T Yang, B Barak, I Sutskever
International Conference on Learning Representations (ICLR), 2020
Having your cake and eating it too: Jointly optimal erasure codes for I/O, storage, and network-bandwidth
KV Rashmi, P Nakkiran, J Wang, NB Shah, K Ramchandran
13th USENIX Conference on File and Storage Technologies (FAST 15), 81-94, 2015
Compressing deep neural networks using a rank-constrained topology
P Nakkiran, R Alvarez, R Prabhavalkar, C Parada
Automatic gain control and multi-style training for robust small-footprint keyword spotting with deep neural networks
R Prabhavalkar, R Alvarez, C Parada, P Nakkiran, TN Sainath
2015 IEEE International Conference on Acoustics, Speech and Signal …, 2015
SGD on Neural Networks Learns Functions of Increasing Complexity
P Nakkiran, G Kaplun, D Kalimeris, T Yang, B Edelman, H Zhang, B Barak
Advances in Neural Information Processing Systems, 3491-3501, 2019
Adversarial robustness may be at odds with simplicity
P Nakkiran
arXiv preprint arXiv:1901.00532, 2019
Optimal regularization can mitigate double descent
P Nakkiran, P Venkat, S Kakade, T Ma
arXiv preprint arXiv:2003.01897, 2020
More data can hurt for linear regression: Sample-wise double descent
P Nakkiran
arXiv preprint arXiv:1912.07242, 2019
Computational Limitations in Robust Classification and Win-Win Results
A Degwekar, P Nakkiran, V Vaikuntanathan
Proceedings of the Thirty-Second Conference on Learning Theory 99, 994-1028, 2019
General strong polarization
J Błasiok, V Guruswami, P Nakkiran, A Rudra, M Sudan
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing …, 2018
Predicting positive and negative links with noisy queries: Theory & practice
CE Tsourakakis, M Mitzenmacher, KG Larsen, J Błasiok, B Lawson, ...
arXiv preprint arXiv:1709.07308, 2017
A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Examples Are Just Bugs, Too
P Nakkiran
Distill 4 (8), e00019, 2019
Fundamental limits on communication for oblivious updates in storage networks
P Nakkiran, NB Shah, KV Rashmi
2014 IEEE Global Communications Conference, 2363-2368, 2014
Automatic gain control for speech recognition
R Alvarez, P Nakkiran
US Patent App. 14/727,741, 2016
Learning rate annealing can provably help generalization, even for convex problems
P Nakkiran
arXiv preprint arXiv:2005.07360, 2020
The Deep Bootstrap Framework: Good Online Learners are Good Offline Generalizers
P Nakkiran, B Neyshabur, H Sedghi
arXiv preprint arXiv:2010.08127, 2020
Rank-constrained neural networks
RA Guevara, P Nakkiran
US Patent 9,767,410, 2017
Optimal oblivious updates in distributed storage networks
P Nakkiran, NB Shah, KV Rashmi, A Sahai, K Ramchandran
Accessed: Jul, 2016
Optimal systematic distributed storage codes with fast encoding
P Nakkiran, KV Rashmi, K Ramchandran
2016 IEEE International Symposium on Information Theory (ISIT), 430-434, 2016
Distributional generalization: A new kind of generalization
P Nakkiran, Y Bansal
arXiv preprint arXiv:2009.08092, 2020