Wenhui Wang
Microsoft Research
Verified email at microsoft.com
Title / Cited by / Year
Unified language model pre-training for natural language understanding and generation
L Dong, N Yang, W Wang, F Wei, X Liu, Y Wang, J Gao, M Zhou, HW Hon
Advances in neural information processing systems 32, 2019
Cited by 1829 · 2019
MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers
W Wang, F Wei, L Dong, H Bao, N Yang, M Zhou
Advances in Neural Information Processing Systems 33, 5776-5788, 2020
Cited by 1177 · 2020
Gated self-matching networks for reading comprehension and question answering
W Wang, N Yang, F Wei, B Chang, M Zhou
Proceedings of the 55th Annual Meeting of the Association for Computational …, 2017
Cited by 858 · 2017
Image as a foreign language: BEiT pretraining for vision and vision-language tasks
W Wang, H Bao, L Dong, J Bjorck, Z Peng, Q Liu, K Aggarwal, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 731* · 2023
Kosmos-2: Grounding multimodal large language models to the world
Z Peng, W Wang, L Dong, Y Hao, S Huang, S Ma, F Wei
arXiv preprint arXiv:2306.14824, 2023
Cited by 541 · 2023
Language is not all you need: Aligning perception with language models
S Huang, L Dong, W Wang, Y Hao, S Singhal, S Ma, T Lv, L Cui, ...
Advances in Neural Information Processing Systems 36, 72096-72109, 2023
Cited by 436 · 2023
UniLMv2: Pseudo-masked language models for unified language model pre-training
H Bao, L Dong, F Wei, W Wang, N Yang, X Liu, Y Wang, J Gao, S Piao, ...
International conference on machine learning, 642-652, 2020
Cited by 426 · 2020
InfoXLM: An information-theoretic framework for cross-lingual language model pre-training
Z Chi, L Dong, F Wei, N Yang, S Singhal, W Wang, X Song, XL Mao, ...
arXiv preprint arXiv:2007.07834, 2020
Cited by 347 · 2020
VLMo: Unified vision-language pre-training with mixture-of-modality-experts
H Bao, W Wang, L Dong, Q Liu, OK Mohammed, K Aggarwal, S Som, ...
Advances in Neural Information Processing Systems 35, 32897-32912, 2022
Cited by 328 · 2022
MiniLMv2: Multi-head self-attention relation distillation for compressing pretrained transformers
W Wang, H Bao, S Huang, L Dong, F Wei
arXiv preprint arXiv:2012.15828, 2020
Cited by 219 · 2020
Graph-based dependency parsing with bidirectional LSTM
W Wang, B Chang
Proceedings of the 54th Annual Meeting of the Association for Computational …, 2016
Cited by 185 · 2016
Multiway attention networks for modeling sentence pairs
C Tan, F Wei, W Wang, W Lv, M Zhou
IJCAI, 4411-4417, 2018
Cited by 153 · 2018
Cross-lingual natural language generation via pre-training
Z Chi, L Dong, F Wei, W Wang, XL Mao, H Huang
Proceedings of the AAAI conference on artificial intelligence 34 (05), 7570-7577, 2020
Cited by 147 · 2020
The era of 1-bit LLMs: All large language models are in 1.58 bits
S Ma, H Wang, L Ma, L Wang, W Wang, S Huang, L Dong, R Wang, J Xue, ...
arXiv preprint arXiv:2402.17764, 2024
Cited by 141 · 2024
LongNet: Scaling transformers to 1,000,000,000 tokens
J Ding, S Ma, L Dong, X Zhang, S Huang, W Wang, N Zheng, F Wei
arXiv preprint arXiv:2307.02486, 2023
Cited by 129 · 2023
Language models are general-purpose interfaces
Y Hao, H Song, L Dong, S Huang, Z Chi, W Wang, S Ma, F Wei
arXiv preprint arXiv:2206.06336, 2022
Cited by 105 · 2022
Learning to ask unanswerable questions for machine reading comprehension
H Zhu, L Dong, F Wei, W Wang, B Qin, T Liu
arXiv preprint arXiv:1906.06045, 2019
Cited by 55 · 2019
Harvesting and refining question-answer pairs for unsupervised QA
Z Li, W Wang, L Dong, F Wei, K Xu
arXiv preprint arXiv:2005.02925, 2020
Cited by 52 · 2020
Adapt-and-distill: Developing small, fast and effective pretrained language models for domains
Y Yao, S Huang, W Wang, L Dong, F Wei
arXiv preprint arXiv:2106.13474, 2021
Cited by 47 · 2021
Consistency regularization for cross-lingual fine-tuning
B Zheng, L Dong, S Huang, W Wang, Z Chi, S Singhal, W Che, T Liu, ...
arXiv preprint arXiv:2106.08226, 2021
Cited by 47 · 2021