Amir-massoud Farahmand
Vector Institute, University of Toronto
Verified email at vectorinstitute.ai
Title · Cited by · Year
Error propagation for approximate policy and value iteration
A Farahmand, C Szepesvári, R Munos
Advances in Neural Information Processing Systems (NeurIPS), 568-576, 2010
Cited by 184 · 2010
Regularized Policy Iteration
A Farahmand, M Ghavamzadeh, S Mannor, C Szepesvári
Advances in Neural Information Processing Systems 21 (NeurIPS 2008), 441-448, 2009
Cited by 160 · 2009
Learning from Limited Demonstrations
B Kim, A Farahmand, J Pineau, D Precup
Advances in Neural Information Processing Systems (NeurIPS), 2859-2867, 2013
Cited by 103 · 2013
Manifold-adaptive dimension estimation
A Farahmand, C Szepesvári, JY Audibert
Proceedings of the 24th International Conference on Machine Learning (ICML …, 2007
Cited by 96 · 2007
Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
American Control Conference (ACC), 725-730, 2009
Cited by 91* · 2009
Regularized policy iteration with nonparametric function spaces
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Journal of Machine Learning Research (JMLR) 17 (1), 4809-4874, 2016
Cited by 72 · 2016
Robust jacobian estimation for uncalibrated visual servoing
A Shademan, A Farahmand, M Jägersand
IEEE International Conference on Robotics and Automation (ICRA), 5564-5569, 2010
Cited by 70 · 2010
Model Selection in Reinforcement Learning
AM Farahmand, C Szepesvári
Machine Learning 85 (3), 299-332, 2011
Cited by 57 · 2011
Value-aware loss function for model-based reinforcement learning
A Farahmand, A Barreto, D Nikovski
Artificial Intelligence and Statistics (AISTATS), 1486-1494, 2017
Cited by 56 · 2017
Global visual-motor estimation for uncalibrated visual servoing
A Farahmand, A Shademan, M Jagersand
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS …, 2007
Cited by 51* · 2007
Action-Gap Phenomenon in Reinforcement Learning
AM Farahmand
Neural Information Processing Systems (NeurIPS), 2011
Cited by 45 · 2011
Regularization in Reinforcement Learning
AM Farahmand
Department of Computing Science, University of Alberta, 2011
Cited by 38 · 2011
Model-based and model-free reinforcement learning for visual servoing
A Farahmand, A Shademan, M Jagersand, C Szepesvári
IEEE International Conference on Robotics and Automation (ICRA), 2917-2924, 2009
Cited by 34* · 2009
Deep reinforcement learning for partial differential equation control
A Farahmand, S Nabi, DN Nikovski
American Control Conference (ACC), 3120-3127, 2017
Cited by 31 · 2017
Approximate MaxEnt Inverse Optimal Control and its Application for Mental Simulation of Human Interactions
DA Huang, AM Farahmand, KM Kitani, JA Bagnell
AAAI Conference on Artificial Intelligence (AAAI), 2015
Cited by 30 · 2015
Attentional network for visual object detection
K Hara, MY Liu, O Tuzel, A Farahmand
arXiv preprint arXiv:1702.01478, 2017
Cited by 28 · 2017
Iterative Value-Aware Model Learning
A Farahmand
Advances in Neural Information Processing Systems (NeurIPS), 9072-9083, 2018
Cited by 27 · 2018
Interaction of Culture-based Learning and Cooperative Co-evolution and its Application to Automatic Behavior-based System Design
AM Farahmand, MN Ahmadabadi, C Lucas, BN Araabi
IEEE Transactions on Evolutionary Computation 14 (1), 23-57, 2010
Cited by 24 · 2010
Regularized fitted Q-iteration: Application to planning
AM Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Recent Advances in Reinforcement Learning, 55-68, 2008
Cited by 24* · 2008
Bellman Error Based Feature Generation using Random Projections on Sparse Spaces
MM Fard, Y Grinberg, A Farahmand, J Pineau, D Precup
Advances in Neural Information Processing Systems (NeurIPS), 3030-3038, 2013
Cited by 20* · 2013
Articles 1–20