Laura Ruis
Title
Cited by
Year
A Benchmark for Systematic Generalization in Grounded Language Understanding
L Ruis, J Andreas, M Baroni, D Bouchacourt, BM Lake
NeurIPS 2020, 2020
165 · 2020
Debating with More Persuasive LLMs Leads to More Truthful Answers
A Khan, J Hughes, D Valentine, L Ruis, K Sachan, A Radhakrishnan, ...
ICML 2024 (best paper award), 2024
84 · 2024
The Goldilocks of Pragmatic Understanding: Fine-Tuning Strategy Matters for Implicature Resolution by LLMs
L Ruis, A Khan, S Biderman, S Hooker, T Rocktäschel, E Grefenstette
NeurIPS 2023 (spotlight), 2024
77* · 2024
ChatArena: Multi-Agent Language Game Environments for Large Language Models
Y Wu, Z Jiang, A Khan, Y Fu, L Ruis, E Grefenstette, T Rocktäschel
GitHub repository, 2023
17 · 2023
Improving systematic generalization through modularity and augmentation
L Ruis, B Lake
CogSci 2022, 2022
12 · 2022
Insertion-Deletion Transformer
L Ruis, M Stern, J Proskurnia, W Chan
EMNLP WNGT, 2019
12 · 2019
Procedural Knowledge in Pretraining Drives Reasoning in Large Language Models
L Ruis, M Mozes, J Bae, SR Kamalakara, D Talupuru, A Locatelli, R Kirk, ...
ICLR 2025, 2024
4 · 2024
What do Large Language Models ‘think’ determines occupational prestige?
R De Vries, M Hill, L Ruis
SocArXiv, 2024
2 · 2024
Do LLMs selectively encode the goal of an agent's reach?
L Ruis, A Findeis, H Bradley, HA Rahmani, KW Choe, E Grefenstette, ...
ToM ICML 2023, 2023
2 · 2023
Towards Reliable Evaluation of Behavior Steering Interventions in LLMs
I Pres, L Ruis, ES Lubana, D Krueger
MINT NeurIPS 2024 (spotlight), 2024
1 · 2024
Developing an occupational prestige scale using Large Language Models
R De Vries, MJ Hill, L Ruis
Workshop on Socially Responsible Language Modelling Research, 2024
1 · 2024
Investigating Non-Transitivity in LLM-as-a-Judge
Y Xu, L Ruis, T Rocktäschel, R Kirk
arXiv preprint arXiv:2502.14074, 2025
2025
Developing an occupational prestige scale using Large Language Models
M Hill, R de Vries, L Ruis
2024
What do Large Language Models ‘think’ determines occupational prestige?
M Hill, R de Vries, L Ruis
2024
What Kind of Pretraining Data Do Large Language Models Rely on When Doing Reasoning?
L Ruis, M Mozes, J Bae, SR Kamalakara, D Gnaneshwar, A Locatelli, ...
The Thirteenth International Conference on Learning Representations (ICLR), 2025
Articles 1–15