Deep reinforcement learning in large discrete action spaces. G Dulac-Arnold, R Evans, H van Hasselt, P Sunehag, T Lillicrap, J Hunt, et al. arXiv preprint arXiv:1512.07679, 2015. Cited by 749.
On the effectiveness of interval bound propagation for training verifiably robust models. S Gowal, K Dvijotham, R Stanforth, R Bunel, C Qin, J Uesato, et al. arXiv preprint arXiv:1810.12715, 2018. Cited by 529.
A dual approach to scalable verification of deep networks. K Dvijotham, R Stanforth, S Gowal, TA Mann, P Kohli. Conference on Uncertainty in Artificial Intelligence (UAI), 2018. Cited by 467.
Uncovering the limits of adversarial training against norm-bounded adversarial examples. S Gowal, C Qin, J Uesato, T Mann, P Kohli. arXiv preprint arXiv:2010.03593, 2020. Cited by 331.
Data augmentation can improve robustness. SA Rebuffi, S Gowal, DA Calian, F Stimberg, O Wiles, TA Mann. Advances in Neural Information Processing Systems 34, 29935-29948, 2021. Cited by 302.
Fixing data augmentation to improve adversarial robustness. SA Rebuffi, S Gowal, DA Calian, F Stimberg, O Wiles, T Mann. arXiv preprint arXiv:2103.01946, 2021. Cited by 269.
Improving robustness using generated data. S Gowal, SA Rebuffi, O Wiles, F Stimberg, DA Calian, TA Mann. Advances in Neural Information Processing Systems 34, 4218-4233, 2021. Cited by 264.
Scalable verified training for provably robust image classification. S Gowal, KD Dvijotham, R Stanforth, R Bunel, C Qin, J Uesato, et al. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019. Cited by 186.
Robust reinforcement learning for continuous control with model misspecification. DJ Mankowitz, N Levine, R Jeong, Y Shi, J Kay, A Abdolmaleki, et al. arXiv preprint arXiv:1906.07516, 2019. Cited by 123.
An alternative surrogate loss for PGD-based adversarial testing. S Gowal, J Uesato, C Qin, PS Huang, T Mann, P Kohli. arXiv preprint arXiv:1910.09338, 2019. Cited by 84.
Adaptive skills adaptive partitions (ASAP). DJ Mankowitz, TA Mann, S Mannor. Advances in Neural Information Processing Systems 29, 2016. Cited by 72.
Approximate value iteration with temporally extended actions. TA Mann, S Mannor, D Precup. Journal of Artificial Intelligence Research 53, 375-438, 2015. Cited by 69.
Soft-robust actor-critic policy-gradient. E Derman, DJ Mankowitz, TA Mann, S Mannor. arXiv preprint arXiv:1803.04848, 2018. Cited by 63.
Scaling up approximate value iteration with options: Better policies with fewer iterations. T Mann, S Mannor. International Conference on Machine Learning, 127-135, 2014. Cited by 63.
Achieving robustness in the wild via adversarial mixing with disentangled representations. S Gowal, C Qin, PS Huang, T Cemgil, K Dvijotham, T Mann, P Kohli. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. Cited by 61.
"How hard is my MDP?" The distribution-norm to the rescue. OA Maillard, TA Mann, S Mannor. Advances in Neural Information Processing Systems 27, 2014. Cited by 61.
A Bayesian approach to robust reinforcement learning. E Derman, D Mankowitz, T Mann, S Mannor. Uncertainty in Artificial Intelligence, 648-658, 2020. Cited by 53.
Learning robust options. D Mankowitz, T Mann, PL Bacon, D Precup, S Mannor. Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018. Cited by 48.
Beyond greedy ranking: Slate optimization via List-CVAE. R Jiang, S Gowal, TA Mann, DJ Rezende. arXiv preprint arXiv:1803.01682, 2018. Cited by 47.
Defending against image corruptions through adversarial augmentations. DA Calian, F Stimberg, O Wiles, SA Rebuffi, A Gyorgy, T Mann, S Gowal. arXiv preprint arXiv:2104.01086, 2021. Cited by 46.