Title, authors, and venue | Cited by | Year |
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ... arXiv preprint arXiv:2203.06904, 2022. | 103 | 2022 |
Parameter-efficient fine-tuning of large-scale pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ... Nature Machine Intelligence 5 (3), 220-235, 2023. | 73 | 2023 |
Tool learning with foundation models. Y Qin, S Hu, Y Lin, W Chen, N Ding, G Cui, Z Zeng, Y Huang, C Xiao, ... arXiv preprint arXiv:2304.08354, 2023. | 65 | 2023 |
Fully hyperbolic neural networks. W Chen, X Han, Y Lin, H Zhao, Z Liu, P Li, M Sun, J Zhou. arXiv preprint arXiv:2105.14686, 2021. | 49 | 2021 |
Communicative agents for software development. C Qian, X Cong, C Yang, W Chen, Y Su, J Xu, Z Liu, M Sun. arXiv preprint arXiv:2307.07924, 2023. | 34 | 2023 |
ChatEval: Towards better LLM-based evaluators through multi-agent debate. CM Chan, W Chen, Y Su, J Yu, W Xue, S Zhang, J Fu, Z Liu. arXiv preprint arXiv:2308.07201, 2023. | 26 | 2023 |
Exploring low-dimensional intrinsic task subspace via prompt tuning. Y Qin, X Wang, Y Su, Y Lin, N Ding, Z Liu, J Li, L Hou, P Li, M Sun, J Zhou. arXiv preprint arXiv:2110.07867, 2021. | 26 | 2021 |
AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents. W Chen, Y Su, J Zuo, C Yang, C Yuan, C Qian, CM Chan, Y Qin, Y Lu, ... arXiv preprint arXiv:2308.10848, 2023. | 18 | 2023 |
GACT: Activation compressed training for generic network architectures. X Liu, L Zheng, D Wang, Y Cen, W Chen, X Han, J Chen, Z Liu, J Tang, ... International Conference on Machine Learning, 14139-14152, 2022. | 12 | 2022 |
Exploring mode connectivity for pre-trained language models. Y Qin, C Qian, J Yi, W Chen, Y Lin, X Han, Z Liu, M Sun, J Zhou. arXiv preprint arXiv:2210.14102, 2022. | 6 | 2022 |
Quantifying similarity between relations with fact distribution. W Chen, H Zhu, X Han, Z Liu, M Sun. arXiv preprint arXiv:1907.08937, 2019. | 6 | 2019 |
Exploring Universal Intrinsic Task Subspace via Prompt Tuning. Y Qin, X Wang, Y Su, Y Lin, N Ding, J Yi, W Chen, Z Liu, J Li, L Hou, P Li, ... arXiv preprint arXiv:2110.07867, 2021. | 4 | 2021 |
GACT: Activation compressed training for general architectures. X Liu, L Zheng, D Wang, Y Cen, W Chen, X Han, J Chen, Z Liu, J Tang, ... arXiv preprint arXiv:2206.11357, 2022. | 3 | 2022 |
Cross-Lingual Contrastive Learning for Fine-Grained Entity Typing for Low-Resource Languages. X Han, Y Luo, W Chen, Z Liu, M Sun, B Zhou, H Fei, S Zheng. Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022. | 3 | 2022 |
Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Parameter-Efficient Tuning. J Yi, W Chen, Y Qin, Y Lin, N Ding, X Han, Z Liu, M Sun, J Zhou. Findings of the Association for Computational Linguistics: EMNLP 2022, 3348-3366, 2022. | 2 | 2022 |
Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models. W Chen, X Xu, X Han, Y Lin, R Xie, Z Liu, M Sun, J Zhou. arXiv preprint arXiv:2310.12818, 2023. | | 2023 |
Knowledge Representation Learning and Knowledge-Guided NLP. X Han, W Chen, Z Liu, Y Lin, M Sun. Representation Learning for Natural Language Processing, 273, 2023. | | 2023 |
Ten Key Problems of Pre-trained Models: An Outlook of Representation Learning. N Ding, W Chen, Z Zhang, S Hu, G Cui, Y Yao, Y Qin, Z Zeng, X Han, ... Representation Learning for Natural Language Processing, 491, 2023. | | 2023 |
Stochastic Bridges as Effective Regularizers for Parameter-Efficient Tuning. W Chen, X Han, Y Lin, Z Liu, M Sun, J Zhou. arXiv preprint arXiv:2305.17670, 2023. | | 2023 |