Chen Liang
Title
Cited by
Year
Phi-3 technical report: A highly capable language model locally on your phone
M Abdin, J Aneja, H Awadalla, A Awadallah, AA Awan, N Bach, A Bahree, ...
arXiv preprint arXiv:2404.14219, 2024
Cited by 541 · 2024
BOND: BERT-assisted open-domain named entity recognition with distant supervision
C Liang, Y Yu, H Jiang, S Er, R Wang, T Zhao, C Zhang
Proceedings of the 26th ACM SIGKDD international conference on knowledge …, 2020
Cited by 276 · 2020
LoftQ: LoRA-fine-tuning-aware quantization for large language models
Y Li, Y Yu, C Liang, P He, N Karampatziakis, W Chen, T Zhao
arXiv preprint arXiv:2310.08659, 2023
Cited by 121 · 2023
PLATON: Pruning large transformer models with upper confidence bound of weight importance
Q Zhang, S Zuo, C Liang, A Bukharin, P He, W Chen, T Zhao
International conference on machine learning, 26809-26823, 2022
Cited by 79 · 2022
Less is more: Task-aware layer-wise distillation for language model compression
C Liang, S Zuo, Q Zhang, P He, W Chen, T Zhao
International Conference on Machine Learning, 20852-20867, 2023
Cited by 66 · 2023
Super tickets in pre-trained language models: From model compression to improving generalization
C Liang, S Zuo, M Chen, H Jiang, X Liu, P He, T Zhao, W Chen
arXiv preprint arXiv:2105.12002, 2021
Cited by 60 · 2021
LoSparse: Structured compression of large language models based on low-rank and sparse approximation
Y Li, Y Yu, Q Zhang, C Liang, P He, W Chen, T Zhao
International Conference on Machine Learning, 20336-20350, 2023
Cited by 58 · 2023
MoEBERT: from BERT to mixture-of-experts via importance-guided adaptation
S Zuo, Q Zhang, C Liang, P He, T Zhao, W Chen
arXiv preprint arXiv:2204.07675, 2022
Cited by 50 · 2022
Multi-domain neural machine translation with word-level adaptive layer-wise domain mixing
H Jiang, C Liang, C Wang, T Zhao
arXiv preprint arXiv:1911.02692, 2019
Cited by 35 · 2019
A fully convolutional tri-branch network (FCTN) for domain adaptation
J Zhang, C Liang, CCJ Kuo
2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018
Cited by 26 · 2018
HomoDistil: Homotopic task-agnostic distillation of pre-trained transformers
C Liang, H Jiang, Z Li, X Tang, B Yin, T Zhao
arXiv preprint arXiv:2302.09632, 2023
Cited by 25 · 2023
Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
L Ren, Y Liu, Y Lu, Y Shen, C Liang, W Chen
arXiv preprint arXiv:2406.07522, 2024
Cited by 17 · 2024
No parameters left behind: Sensitivity guided adaptive learning rate for training large transformer models
C Liang, H Jiang, S Zuo, P He, X Liu, J Gao, W Chen, T Zhao
arXiv preprint arXiv:2202.02664, 2022
Cited by 15 · 2022
Self-training with differentiable teacher
S Zuo, Y Yu, C Liang, H Jiang, S Er, C Zhang, T Zhao, H Zha
arXiv preprint arXiv:2109.07049, 2021
Cited by 15 · 2021
Adversarial regularization as Stackelberg game: An unrolled optimization approach
S Zuo, C Liang, H Jiang, X Liu, P He, J Gao, W Chen, T Zhao
arXiv preprint arXiv:2104.04886, 2021
Cited by 10 · 2021
Module-wise adaptive distillation for multimodality foundation models
C Liang, J Yu, MH Yang, M Brown, Y Cui, T Zhao, B Gong, T Zhou
Advances in Neural Information Processing Systems 36, 2024
Cited by 8 · 2024
CAMERO: Consistency regularized ensemble of perturbed language models with weight sharing
C Liang, P He, Y Shen, W Chen, T Zhao
arXiv preprint arXiv:2204.06625, 2022
Cited by 6 · 2022
Adversarial training as Stackelberg game: An unrolled optimization approach
S Zuo, C Liang, H Jiang, X Liu, P He, J Gao, W Chen, T Zhao
arXiv preprint arXiv:2104.04886, 2021
Cited by 6 · 2021
ARCH: Efficient adversarial regularized training with caching
S Zuo, C Liang, H Jiang, P He, X Liu, J Gao, W Chen, T Zhao
arXiv preprint arXiv:2109.07048, 2021
Cited by 3 · 2021
Token-wise curriculum learning for neural machine translation
C Liang, H Jiang, X Liu, P He, W Chen, J Gao, T Zhao
arXiv preprint arXiv:2103.11088, 2021
Cited by 3 · 2021