Minqi Jiang
Other names: 蒋旻骐
Research Scientist at Google DeepMind
Verified email at ucl.ac.uk
Title · Cited by · Year
Prioritized level replay
M Jiang, E Grefenstette, T Rocktäschel
International Conference on Machine Learning, 4940-4950, 2021
Cited by 144 · 2021
Evolving Curricula with Regret-Based Environment Design
J Parker-Holder*, M Jiang*, M Dennis, M Samvelyan, J Foerster, ...
International Conference on Machine Learning, https://accelagent.github.io, 2022
Cited by 93 · 2022
Replay-Guided Adversarial Environment Design
M Jiang*, M Dennis*, J Parker-Holder, J Foerster, E Grefenstette, ...
NeurIPS 2021, 2021
Cited by 84 · 2021
MiniHack the Planet: A sandbox for open-ended reinforcement learning research
M Samvelyan, R Kirk, V Kurin, J Parker-Holder, M Jiang, E Hambro, ...
NeurIPS 2021 Datasets and Benchmarks, 2021
Cited by 77 · 2021
Improving intrinsic exploration with language abstractions
J Mu, V Zhong, R Raileanu, M Jiang, N Goodman, T Rocktäschel, ...
NeurIPS 2022, 2022
Cited by 55 · 2022
Exploration via Elliptical Episodic Bonuses
M Henaff, R Raileanu, M Jiang, T Rocktäschel
NeurIPS 2022, 2022
Cited by 27 · 2022
Motion-responsive user interface for real-time language translation
AJ Cuthbert, JJ Estelle, MR Hughes, S Goyal, MS Jiang
US Patent 9,355,094, 2016
Cited by 27 · 2016
MAESTRO: Open-ended environment design for multi-agent reinforcement learning
M Samvelyan, A Khan, M Dennis, M Jiang, J Parker-Holder, J Foerster, ...
International Conference on Learning Representations 2023, 2023
Cited by 22 · 2023
JaxMARL: Multi-agent RL environments in JAX
A Rutherford, B Ellis, M Gallici, J Cook, A Lupu, G Ingvarsson, T Willi, ...
arXiv preprint arXiv:2311.10090, 2023
Cited by 19* · 2023
Insights from the NeurIPS 2021 NetHack Challenge
E Hambro, S Mohanty, D Babaev, M Byeon, D Chakraborty, ...
NeurIPS 2021 Competitions and Demonstrations Track, 41-52, 2022
Cited by 18 · 2022
WordCraft: An Environment for Benchmarking Commonsense Agents
M Jiang, J Luketina, N Nardelli, P Minervini, PHS Torr, S Whiteson, ...
Language in Reinforcement Learning Workshop at ICML 2020, 2020
Cited by 18 · 2020
General intelligence requires rethinking exploration
M Jiang, T Rocktäschel, E Grefenstette
Royal Society Open Science 10 (6), 230539, 2023
Cited by 15 · 2023
Rainbow Teaming: Open-ended generation of diverse adversarial prompts
M Samvelyan, SC Raparthy, A Lupu, E Hambro, AH Markosyan, M Bhatt, ...
arXiv preprint arXiv:2402.16822, 2024
Cited by 13 · 2024
Grounding Aleatoric Uncertainty for Unsupervised Environment Design
M Jiang, M Dennis, J Parker-Holder, A Lupu, H Küttler, E Grefenstette, ...
NeurIPS 2022, 2022
Cited by 13 · 2022
Resolving causal confusion in reinforcement learning via robust exploration
C Lyle, A Zhang, M Jiang, J Pineau, Y Gal
Self-Supervision for Reinforcement Learning Workshop-ICLR 2021, 2021
Cited by 9 · 2021
A Study of Global and Episodic Bonuses for Exploration in Contextual MDPs
M Henaff, M Jiang, R Raileanu
International Conference on Machine Learning 2023, 2023
Cited by 8 · 2023
Grid-to-Graph: Flexible Spatial Relational Inductive Biases for Reinforcement Learning
Z Jiang, P Minervini, M Jiang, T Rocktäschel
AAMAS 2021 (Oral), 2021
Cited by 8 · 2021
minimax: Efficient Baselines for Autocurricula in JAX
M Jiang, M Dennis, E Grefenstette, T Rocktäschel
Second Agent Learning in Open-Endedness Workshop, 2023
Cited by 6 · 2023
Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design
MT Jackson, M Jiang, J Parker-Holder, R Vuorio, C Lu, G Farquhar, ...
NeurIPS 2023, 2023
Cited by 6 · 2023
Stabilizing Unsupervised Environment Design with a Learned Adversary
I Mediratta, M Jiang, J Parker-Holder, M Dennis, E Vinitsky, T Rocktäschel
CoLLAs 2023 (Oral), 2023
Cited by 6 · 2023
Articles 1–20