Zhengyan Zhang
Verified email at mails.tsinghua.edu.cn - Homepage
Cited by
Graph neural networks: A review of methods and applications
J Zhou, G Cui, S Hu, Z Zhang, C Yang, Z Liu, L Wang, C Li, M Sun
AI Open 1, 57-81, 2020
ERNIE: Enhanced Language Representation with Informative Entities
Z Zhang, X Han, Z Liu, X Jiang, M Sun, Q Liu
arXiv preprint arXiv:1905.07129, 2019
KEPLER: A unified model for knowledge embedding and pre-trained language representation
X Wang, T Gao, Z Zhu, Z Zhang, Z Liu, J Li, J Tang
arXiv preprint arXiv:1911.06136, 2019
TransNet: Translation-Based Network Representation Learning for Social Relation Extraction
C Tu, Z Zhang, Z Liu, M Sun
IJCAI, 2864-2870, 2017
A unified framework for community detection and network representation learning
C Tu, X Zeng, H Wang, Z Zhang, Z Liu, M Sun, B Zhang, L Lin
IEEE Transactions on Knowledge and Data Engineering 31 (6), 1051-1065, 2018
CPM: A large-scale generative Chinese pre-trained language model
Z Zhang, X Han, H Zhou, P Ke, Y Gu, D Ye, Y Qin, Y Su, H Ji, J Guan, F Qi, ...
AI Open 2, 93-99, 2021
Train No Evil: Selective Masking for Task-guided Pre-training
Y Gu, Z Zhang, X Wang, Z Liu, M Sun
arXiv preprint arXiv:2004.09733, 2020
CPM-2: Large-scale Cost-effective Pre-trained Language Models
Z Zhang, Y Gu, X Han, S Chen, C Xiao, Z Sun, Y Yao, F Qi, J Guan, P Ke, ...
arXiv preprint arXiv:2106.10715, 2021
Know what you don't need: Single-Shot Meta-Pruning for attention heads
Z Zhang, F Qi, Z Liu, Q Liu, M Sun
AI Open 2, 36-42, 2021
Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning
C Si, Z Zhang, F Qi, Z Liu, Y Wang, Q Liu, M Sun
arXiv preprint arXiv:2012.15699, 2020
Pre-Trained Models: Past, Present and Future
X Han, Z Zhang, N Ding, Y Gu, X Liu, Y Huo, J Qiu, L Zhang, W Han, ...
AI Open, 2021
Contextual Knowledge Selection and Embedding towards Enhanced Pre-Trained Language Models
YS Su, X Han, Z Zhang, P Li, Z Liu, Y Lin, J Zhou, M Sun
arXiv preprint arXiv:2009.13964, 2020
Red Alarm for Pre-trained Models: Universal Vulnerabilities by Neuron-Level Backdoor Attacks
Z Zhang, G Xiao, Y Li, T Lv, F Qi, Y Wang, X Jiang, Z Liu, M Sun
arXiv preprint arXiv:2101.06969, 2021
COSINE: Compressive network embedding on large-scale information networks
Z Zhang, C Yang, Z Liu, M Sun, Z Fang, B Zhang, L Lin
IEEE Transactions on Knowledge and Data Engineering, 2020
Knowledge Inheritance for Pre-trained Language Models
Y Qin, Y Lin, J Yi, J Zhang, X Han, Z Zhang, Y Su, Z Liu, P Li, M Sun, ...
arXiv preprint arXiv:2105.13880, 2021
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger
F Qi, M Li, Y Chen, Z Zhang, Z Liu, Y Wang, M Sun
arXiv preprint arXiv:2105.12400, 2021
SHUOWEN-JIEZI: Linguistically Informed Tokenizers For Chinese Language Model Pretraining
C Si, Z Zhang, Y Chen, F Qi, X Wang, Z Liu, M Sun
arXiv preprint arXiv:2106.00400, 2021
Adversarial Language Games for Advanced Natural Language Intelligence
Y Yao, H Zhong, Z Zhang, X Han, X Wang, C Xiao, G Zeng, Z Liu, M Sun
arXiv preprint arXiv:1911.01622, 2019
CSS-LM: A Contrastive Framework for Semi-supervised Fine-tuning of Pre-trained Language Models
Y Su, X Han, Y Lin, Z Zhang, Z Liu, P Li, J Zhou, M Sun
arXiv preprint arXiv:2102.03752, 2021