Amit Daniely
Verified email at mail.huji.ac.il - Homepage
Title · Cited by · Year
Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity
A Daniely, R Frostig, Y Singer
Advances in neural information processing systems 29, 2016
Cited by 350 · 2016
SGD learns the conjugate kernel class of the network
A Daniely
Advances in neural information processing systems 30, 2017
Cited by 184 · 2017
Strongly adaptive online learning
A Daniely, A Gonen, S Shalev-Shwartz
International Conference on Machine Learning, 1405-1411, 2015
Cited by 174 · 2015
Complexity theoretic limitations on learning halfspaces
A Daniely
Proceedings of the forty-eighth annual ACM symposium on Theory of Computing …, 2016
Cited by 146 · 2016
Complexity theoretic limitations on learning DNF’s
A Daniely, S Shalev-Shwartz
Conference on Learning Theory, 815-830, 2016
Cited by 123 · 2016
From average case complexity to improper learning complexity
A Daniely, N Linial, S Shalev-Shwartz
Proceedings of the forty-sixth annual ACM symposium on Theory of computing …, 2014
Cited by 109 · 2014
Depth separation for neural networks
A Daniely
Conference on Learning Theory, 690-696, 2017
Cited by 92 · 2017
Optimal learners for multiclass problems
A Daniely, S Shalev-Shwartz
Conference on Learning Theory, 287-316, 2014
Cited by 82 · 2014
Multiclass learnability and the ERM principle
A Daniely, S Sabato, S Ben-David, S Shalev-Shwartz
Proceedings of the 24th Annual Conference on Learning Theory, 207-232, 2011
Cited by 81 · 2011
Multiclass learnability and the ERM principle.
A Daniely, S Sabato, S Ben-David, S Shalev-Shwartz
J. Mach. Learn. Res. 16 (1), 2377-2404, 2015
Cited by 78 · 2015
Learning parities with neural networks
A Daniely, E Malach
Advances in Neural Information Processing Systems 33, 20356-20365, 2020
Cited by 75 · 2020
The implicit bias of depth: How incremental learning drives generalization
D Gissin, S Shalev-Shwartz, A Daniely
arXiv preprint arXiv:1909.12051, 2019
Cited by 59 · 2019
Multiclass learning approaches: A theoretical comparison with implications
A Daniely, S Sabato, S Shalev-Shwartz
Advances in Neural Information Processing Systems 25, 2012
Cited by 53 · 2012
A PTAS for agnostically learning halfspaces
A Daniely
Conference on Learning Theory, 484-502, 2015
Cited by 49 · 2015
Learning economic parameters from revealed preferences
MF Balcan, A Daniely, R Mehta, R Urner, VV Vazirani
Web and Internet Economics: 10th International Conference, WINE 2014 …, 2014
Cited by 48 · 2014
Clustering is difficult only when it does not matter
A Daniely, N Linial, M Saks
arXiv preprint arXiv:1205.4891, 2012
Cited by 48 · 2012
More data speeds up training time in learning halfspaces over sparse vectors
A Daniely, N Linial, S Shalev-Shwartz
Advances in Neural Information Processing Systems 26, 2013
Cited by 44 · 2013
On the practically interesting instances of MAXCUT
Y Bilu, A Daniely, N Linial, M Saks
arXiv preprint arXiv:1205.4893, 2012
Cited by 44 · 2012
Neural networks learning and memorization with (almost) no over-parameterization
A Daniely
Advances in Neural Information Processing Systems 33, 9007-9016, 2020
Cited by 30 · 2020
Most ReLU Networks Suffer from Adversarial Perturbations
A Daniely, H Shacham
Advances in Neural Information Processing Systems 33, 6629-6636, 2020
Cited by 29 · 2020
Articles 1–20