Dara Bahri
Research Scientist, Google Research
Verified email at google.com
Title · Cited by · Year
Efficient transformers: A survey
Y Tay, M Dehghani, D Bahri, D Metzler
ACM Computing Surveys (CSUR), 2020
Cited by 379 · 2020
Berkeley advanced reconstruction toolbox
M Uecker, F Ong, JI Tamir, D Bahri, P Virtue, JY Cheng, T Zhang, M Lustig
Proc. Intl. Soc. Mag. Reson. Med 23 (2486), 2015
Cited by 309 · 2015
Long range arena: A benchmark for efficient transformers
Y Tay, M Dehghani, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
arXiv preprint arXiv:2011.04006, 2020
Cited by 193 · 2020
Synthesizer: Rethinking self-attention for transformer models
Y Tay, D Bahri, D Metzler, DC Juan, Z Zhao, C Zheng
International conference on machine learning, 10183-10192, 2021
Cited by 163 · 2021
Sparse Sinkhorn attention
Y Tay, D Bahri, L Yang, D Metzler, DC Juan
International Conference on Machine Learning, 9438-9447, 2020
Cited by 130 · 2020
Are Pre-trained Convolutions Better than Pre-trained Transformers?
Y Tay, M Dehghani, J Gupta, D Bahri, V Aribandi, Z Qin, D Metzler
arXiv preprint arXiv:2105.03322, 2021
Cited by 40 · 2021
ExT5: Towards extreme multi-task scaling for transfer learning
V Aribandi, Y Tay, T Schuster, J Rao, HS Zheng, SV Mehta, H Zhuang, ...
arXiv preprint arXiv:2111.10952, 2021
Cited by 34 · 2021
Deep k-NN for noisy labels
D Bahri, H Jiang, M Gupta
International Conference on Machine Learning, 540-550, 2020
Cited by 33 · 2020
Charformer: Fast character transformers via gradient-based subword tokenization
Y Tay, VQ Tran, S Ruder, J Gupta, HW Chung, D Bahri, Z Qin, ...
arXiv preprint arXiv:2106.12672, 2021
Cited by 31 · 2021
Diminishing returns shape constraints for interpretability and regularization
M Gupta, D Bahri, A Cotter, K Canini
Advances in neural information processing systems 31, 2018
Cited by 22 · 2018
Transformer memory as a differentiable search index
Y Tay, VQ Tran, M Dehghani, J Ni, D Bahri, H Mehta, Z Qin, K Hui, Z Zhao, ...
arXiv preprint arXiv:2202.06991, 2022
Cited by 21 · 2022
Rethinking search: Making domain experts out of dilettantes
D Metzler, Y Tay, D Bahri, M Najork
ACM SIGIR Forum 55 (1), 1-27, 2021
Cited by 17 · 2021
StructFormer: Joint unsupervised induction of dependency and constituency structure from masked language modeling
Y Shen, Y Tay, C Zheng, D Bahri, D Metzler, A Courville
arXiv preprint arXiv:2012.00857, 2020
Cited by 17 · 2020
OmniNet: Omnidirectional representations from transformers
Y Tay, M Dehghani, V Aribandi, J Gupta, PM Pham, Z Qin, D Bahri, ...
International Conference on Machine Learning, 10193-10202, 2021
Cited by 15 · 2021
HyperGrid transformers: Towards a single model for multiple tasks
Y Tay, Z Zhao, D Bahri, D Metzler, DC Juan
Cited by 14 · 2021
Encased cantilevers for low-noise force and mass sensing in liquids
D Ziegler, A Klaassen, D Bahri, D Chmielewski, A Nievergelt, F Mugele, ...
2014 IEEE 27th International Conference on Micro Electro Mechanical Systems …, 2014
Cited by 14 · 2014
Unifying Language Learning Paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, D Bahri, T Schuster, HS Zheng, ...
arXiv preprint arXiv:2205.05131, 2022
Cited by 9 · 2022
SCARF: Self-supervised contrastive learning using random feature corruption
D Bahri, H Jiang, Y Tay, D Metzler
arXiv preprint arXiv:2106.15147, 2021
Cited by 9 · 2021
Rethinking search: Making experts out of dilettantes
D Metzler, Y Tay, D Bahri, M Najork
arXiv preprint arXiv:2105.02274, 2021
Cited by 9 · 2021
Efficient transformers: A survey. arXiv 2020
Y Tay, M Dehghani, D Bahri, D Metzler
arXiv preprint arXiv:2009.06732, 2020
Cited by 9 · 2020