Wendelin Böhmer
Autonomous learning of state representations for control: An emerging field aims to autonomously learn state representations for reinforcement learning agents from their real …
W Böhmer, JT Springenberg, J Boedecker, M Riedmiller, K Obermayer
KI-Künstliche Intelligenz 29 (4), 353-362, 2015
Cited by 48, 2015
Multi-agent common knowledge reinforcement learning
C Schroeder de Witt, J Foerster, G Farquhar, P Torr, W Boehmer, ...
Advances in Neural Information Processing Systems 32, 9927-9939, 2019
Cited by 37, 2019
The effect of novelty on reinforcement learning
A Houillon, RC Lorenz, W Boehmer, MA Rapp, A Heinz, J Gallinat, ...
Progress in brain research 202, 415-439, 2013
Cited by 29, 2013
Construction of Approximation Spaces for Reinforcement Learning.
W Böhmer, S Grünewälder, Y Shen, M Musial, K Obermayer
Journal of Machine Learning Research 14 (7), 2013
Cited by 27, 2013
Deep coordination graphs
W Böhmer, V Kurin, S Whiteson
International Conference on Machine Learning, 980-991, 2020
Cited by 26, 2020
Neural systems for choice and valuation with counterfactual learning signals
MJ Tobia, R Guo, U Schwarze, W Böhmer, J Gläscher, B Finckh, ...
NeuroImage 89, 57-69, 2014
Cited by 25, 2014
Generating feature spaces for linear algorithms with regularized sparse kernel slow feature analysis
W Böhmer, S Grünewälder, H Nickisch, K Obermayer
Machine Learning 89 (1-2), 67-86, 2012
Cited by 21, 2012
Regularized sparse kernel slow feature analysis
W Böhmer, S Grünewälder, H Nickisch, K Obermayer
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2011
Cited by 18, 2011
Deep Multi-Agent Reinforcement Learning for Decentralized Continuous Cooperative Control
C Schroeder de Witt, B Peng, PA Kamienny, P Torr, W Böhmer, ...
arXiv preprint arXiv:2003.06709, 2020
Cited by 14*, 2020
Generalized off-policy actor-critic
S Zhang, W Boehmer, S Whiteson
arXiv preprint arXiv:1903.11329, 2019
Cited by 14, 2019
Autonomous learning of state representations for control
W Böhmer, JT Springenberg, J Boedecker, M Riedmiller, K Obermayer
KI-Künstliche Intelligenz, 1-10, 2015
Cited by 13, 2015
Optimistic exploration even with a pessimistic initialisation
T Rashid, B Peng, W Boehmer, S Whiteson
arXiv preprint arXiv:2002.12174, 2020
Cited by 9, 2020
Exploration with unreliable intrinsic reward in multi-agent reinforcement learning
W Böhmer, T Rashid, S Whiteson
arXiv preprint arXiv:1906.02138, 2019
Cited by 9, 2019
Interaction of instrumental and goal-directed learning modulates prediction error representations in the ventral striatum
R Guo, W Böhmer, M Hebart, S Chien, T Sommer, K Obermayer, ...
Journal of Neuroscience 36 (50), 12650-12660, 2016
Cited by 9, 2016
The impact of non-stationarity on generalisation in deep reinforcement learning
M Igl, G Farquhar, J Luketina, W Boehmer, S Whiteson
arXiv preprint arXiv:2006.05826, 2020
Cited by 7, 2020
Multitask soft option learning
M Igl, A Gambardella, J He, N Nardelli, N Siddharth, W Böhmer, ...
Conference on Uncertainty in Artificial Intelligence, 969-978, 2020
Cited by 6, 2020
Multi-agent hierarchical reinforcement learning with dynamic termination
D Han, W Boehmer, M Wooldridge, A Rogers
Pacific Rim International Conference on Artificial Intelligence, 80-92, 2019
Cited by 6, 2019
Regression with linear factored functions
W Böhmer, K Obermayer
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2015
Cited by 6, 2015
Towards structural generalization: Factored approximate planning
W Böhmer, K Obermayer
ICRA Workshop on Autonomous Learning, 2013
Cited by 6, 2013
Non-deterministic policy improvement stabilizes approximated reinforcement learning
W Böhmer, R Guo, K Obermayer
arXiv preprint arXiv:1612.07548, 2016
Cited by 4, 2016