Roman Novak
Google Brain
Verified email at google.com
Title · Cited by · Year
Deep Neural Networks as Gaussian Processes
J Lee, Y Bahri, R Novak, SS Schoenholz, J Pennington, J Sohl-Dickstein
International Conference on Learning Representations (ICLR) 2018, 2017
410 · 2017
Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
J Lee, L Xiao, SS Schoenholz, Y Bahri, R Novak, J Sohl-Dickstein, ...
Advances in Neural Information Processing Systems (NeurIPS) 32, 8570–8581, 2019
294 · 2019
Sensitivity and Generalization in Neural Networks: an Empirical Study
R Novak, Y Bahri, DA Abolafia, J Pennington, J Sohl-Dickstein
International Conference on Learning Representations (ICLR) 2018, 2018
213 · 2018
Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes
R Novak, L Xiao, J Lee, Y Bahri, G Yang, J Hron, D Abolafia, J Pennington, ...
International Conference on Learning Representations (ICLR) 2019, 2018
125 · 2018
Neural Tangents: Fast and Easy Infinite Neural Networks in Python
R Novak, L Xiao, J Hron, J Lee, AA Alemi, J Sohl-Dickstein, ...
International Conference on Learning Representations (ICLR) 2020, 2019
41* · 2019
Exploring the Neural Algorithm of Artistic Style
Y Nikulin, R Novak
arXiv preprint arXiv:1602.07188, 2016
22 · 2016
Improving the Neural Algorithm of Artistic Style
R Novak, Y Nikulin
arXiv preprint arXiv:1605.04603, 2016
21 · 2016
Iterative Refinement for Machine Translation
R Novak, M Auli, D Grangier
Bay Area Machine Learning Symposium (BayLearn) 2017, 2016
16 · 2016
Finite versus infinite neural networks: an empirical study
J Lee, SS Schoenholz, J Pennington, B Adlam, L Xiao, R Novak, ...
Neural Information Processing Systems (NeurIPS) 2020, 2020
12 · 2020
On the infinite width limit of neural networks with a standard parameterization
J Sohl-Dickstein, R Novak, SS Schoenholz, J Lee
arXiv preprint arXiv:2001.07301, 2020
8 · 2020
Infinite attention: NNGP and NTK for deep attention networks
J Hron, Y Bahri, J Sohl-Dickstein, R Novak
International Conference on Machine Learning (ICML) 2020, 2020
6 · 2020
Wide neural networks of any depth evolve as linear models under gradient descent
J Lee, L Xiao, SS Schoenholz, Y Bahri, R Novak, J Sohl-Dickstein, ...
Journal of Statistical Mechanics: Theory and Experiment 2020 (12), 124002, 2020
5 · 2020
Exact posterior distributions of wide Bayesian neural networks
J Hron, Y Bahri, R Novak, J Pennington, J Sohl-Dickstein
ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning; BayLearn 2020, 2020
1 · 2020