Kenji Kawaguchi
Presidential Young Professor, National University of Singapore
Verified email at nus.edu.sg - Homepage
Cited by
Deep learning without poor local minima
K Kawaguchi
Advances in Neural Information Processing Systems (NeurIPS), 586-594, 2016
Interpolation consistency training for semi-supervised learning
V Verma, K Kawaguchi, A Lamb, J Kannala, A Solin, Y Bengio, ...
Neural Networks 145, 90-106, 2022
Generalization in Deep Learning
K Kawaguchi, LP Kaelbling, Y Bengio
Cambridge University Press, 2022
Adaptive activation functions accelerate convergence in deep and physics-informed neural networks
AD Jagtap, K Kawaguchi, GE Karniadakis
Journal of Computational Physics 404, 109136, 2020
How Does Mixup Help With Robustness and Generalization?
L Zhang*, Z Deng*, K Kawaguchi*, A Ghorbani, J Zou
International Conference on Learning Representations (ICLR), 2021
Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks
AD Jagtap*, K Kawaguchi*, GE Karniadakis
Proceedings of the Royal Society A 476 (2239), 20200334, 2020
Theory of Deep Learning III: explaining the non-overfitting puzzle
T Poggio, K Kawaguchi, Q Liao, B Miranda, L Rosasco, X Boix, J Hidary, ...
Massachusetts Institute of Technology, CBMM Memo No. 073, 2018
GraphMix: Improved Training of GNNs for Semi-Supervised Learning
V Verma, M Qu, K Kawaguchi, A Lamb, Y Bengio, J Kannala, J Tang
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2021
Bayesian optimization with exponential convergence
K Kawaguchi, LP Kaelbling, T Lozano-Pérez
Advances in Neural Information Processing Systems (NeurIPS) 28, 2809-2817, 2015
Combined scaling for zero-shot transfer learning
H Pham*, Z Dai*, G Ghiasi*, K Kawaguchi*, H Liu, AW Yu, J Yu, YT Chen, ...
Neurocomputing 555, 126658, 2023
Depth Creates No Bad Local Minima
H Lu, K Kawaguchi
arXiv preprint arXiv:1702.08580, 2017
Towards domain-agnostic contrastive learning
V Verma, T Luong, K Kawaguchi, H Pham, Q Le
International Conference on Machine Learning (ICML), 10530-10541, 2021
Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy
A Lamb, V Verma, K Kawaguchi, A Matyasko, S Khosla, J Kannala, ...
Neural Networks, 2022
Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions
AD Jagtap, Y Shin, K Kawaguchi, GE Karniadakis
Neurocomputing 468, 165-180, 2022
Effect of depth and width on local minima in deep learning
K Kawaguchi, J Huang, LP Kaelbling
Neural computation 31 (7), 1462-1498, 2019
Depth with Nonlinearity Creates No Bad Local Minima in ResNets
K Kawaguchi, Y Bengio
Neural Networks 118, 167-174, 2019
Elimination of all bad local minima in deep learning
K Kawaguchi, L Kaelbling
Artificial Intelligence and Statistics (AISTATS), 853-863, 2020
When Do Extended Physics-Informed Neural Networks (XPINNs) Improve Generalization?
Z Hu, AD Jagtap, GE Karniadakis, K Kawaguchi
SIAM Journal on Scientific Computing 44 (5), A3158-A3182, 2022
Optimization of graph neural networks: Implicit acceleration by skip connections and more depth
K Xu*, M Zhang, S Jegelka, K Kawaguchi*
International Conference on Machine Learning (ICML), 11592-11602, 2021
Gradient descent finds global minima for generalizable deep neural networks of practical sizes
K Kawaguchi, J Huang
2019 57th Annual Allerton Conference on Communication, Control, and …, 2019
Articles 1–20