Weize Chen
Verified email at mails.tsinghua.edu.cn
Title · Cited by · Year
Parameter-efficient fine-tuning of large-scale pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
Nature Machine Intelligence 5 (3), 220-235, 2023
Cited by 445 · 2023
Communicative agents for software development
C Qian, X Cong, C Yang, W Chen, Y Su, J Xu, Z Liu, M Sun
arXiv preprint arXiv:2307.07924, 2023
Cited by 329 · 2023
Chateval: Towards better llm-based evaluators through multi-agent debate
CM Chan, W Chen, Y Su, J Yu, W Xue, S Zhang, J Fu, Z Liu
arXiv preprint arXiv:2308.07201, 2023
Cited by 204 · 2023
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
arXiv preprint arXiv:2203.06904, 2022
Cited by 204 · 2022
Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents
W Chen, Y Su, J Zuo, C Yang, C Yuan, C Qian, CM Chan, Y Qin, Y Lu, ...
arXiv preprint arXiv:2308.10848, 2023
Cited by 124 · 2023
Fully hyperbolic neural networks
W Chen, X Han, Y Lin, H Zhao, Z Liu, P Li, M Sun, J Zhou
arXiv preprint arXiv:2105.14686, 2021
Cited by 90 · 2021
Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors
W Chen, Y Su, J Zuo, C Yang, C Yuan, CM Chan, H Yu, Y Lu, YH Hung, ...
The Twelfth International Conference on Learning Representations, 2023
Cited by 50 · 2023
Exploring low-dimensional intrinsic task subspace via prompt tuning
Y Qin, X Wang, Y Su, Y Lin, N Ding, Z Liu, J Li, L Hou, P Li, M Sun, J Zhou
arXiv preprint arXiv:2110.07867, 2021
Cited by 37 · 2021
Gact: Activation compressed training for generic network architectures
X Liu, L Zheng, D Wang, Y Cen, W Chen, X Han, J Chen, Z Liu, J Tang, ...
International Conference on Machine Learning, 14139-14152, 2022
Cited by 25 · 2022
Exploring mode connectivity for pre-trained language models
Y Qin, C Qian, J Yi, W Chen, Y Lin, X Han, Z Liu, M Sun, J Zhou
arXiv preprint arXiv:2210.14102, 2022
Cited by 19 · 2022
Exploring universal intrinsic task subspace via prompt tuning
Y Qin, X Wang, Y Su, Y Lin, N Ding, J Yi, W Chen, Z Liu, J Li, L Hou, P Li, ...
arXiv preprint arXiv:2110.07867, 2021
Cited by 16 · 2021
Experiential co-learning of software-developing agents
C Qian, Y Dang, J Li, W Liu, W Chen, C Yang, Z Liu, M Sun
arXiv preprint arXiv:2312.17025, 2023
Cited by 15 · 2023
D-bot: Database diagnosis system using large language models
X Zhou, G Li, Z Sun, Z Liu, W Chen, J Wu, J Liu, R Feng, G Zeng
Proceedings of the VLDB Endowment 17 (10), 2514-2527, 2024
Cited by 14 · 2024
Cross-lingual contrastive learning for fine-grained entity typing for low-resource languages
X Han, Y Luo, W Chen, Z Liu, M Sun, B Zhou, H Fei, S Zheng
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
Cited by 12 · 2022
Quantifying similarity between relations with fact distribution
W Chen, H Zhu, X Han, Z Liu, M Sun
arXiv preprint arXiv:1907.08937, 2019
Cited by 10 · 2019
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
CoRR abs/2203.06904, 2022
Cited by 9 · 2022
Internet of agents: Weaving a web of heterogeneous agents for collaborative intelligence
W Chen, Z You, R Li, Y Guan, C Qian, C Zhao, C Yang, R Xie, Z Liu, ...
arXiv preprint arXiv:2407.07061, 2024
Cited by 5 · 2024
Iterative Experience Refinement of Software-Developing Agents
C Qian, J Li, Y Dang, W Liu, YF Wang, Z Xie, W Chen, C Yang, Y Zhang, ...
arXiv preprint arXiv:2405.04219, 2024
Cited by 5 · 2024
Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication
W Chen, C Yuan, J Yuan, Y Su, C Qian, C Yang, R Xie, Z Liu, M Sun
arXiv preprint arXiv:2402.18439, 2024
Cited by 3 · 2024
Different tunes played with equal skill: Exploring a unified optimization subspace for parameter-efficient tuning
J Yi, W Chen, Y Qin, Y Lin, N Ding, X Han, Z Liu, M Sun, J Zhou
Findings of the Association for Computational Linguistics: EMNLP 2022, 3348-3366, 2022
Cited by 3 · 2022
Articles 1–20