Samira Abnar
An evolutionary algorithm for forming mixed groups of learners in web based collaborative learning environments
S Abnar, F Orooji, F Taghiyareh
2012 IEEE International Conference on Technology Enhanced Education (ICTEE), 1-6, 2012
Experiential, distributional and dependency-based word embeddings have complementary roles in decoding brain activity
S Abnar, R Ahmed, M Mijnheer, W Zuidema
Cognitive Modeling and Computational Linguistics (CMCL) 2018, 2018
Blackbox meets blackbox: Representational Similarity and Stability Analysis of Neural Language Models and Brains
S Abnar, L Beinborn, R Choenni, W Zuidema
2nd BlackBoxNLP workshop, 2019
Authorship identification using dynamic selection of features from probabilistic feature set
H Zamani, HN Esfahani, P Babaie, S Abnar, M Dehghani, A Shakery
International Conference of the Cross-Language Evaluation Forum for European …, 2014
Expanded n-grams for semantic text alignment
S Abnar, M Dehghani, H Zamani, A Shakery
CLEF 2014 Working Notes (eds. Cappellato et al.), 2014
The healing power of poison: Helpful non-relevant documents in feedback
M Dehghani, S Abnar, J Kamps
Proceedings of the 25th ACM International on Conference on Information and …, 2016
Meta Text Aligner: Text Alignment Based on Predicted Plagiarism Relation
S Abnar, M Dehghani, A Shakery
Conference of the CLEF Association, CLEF’15 Toulouse, France, September 8–11 …, 2015
Robust Evaluation of Language-Brain Encoding Experiments
L Beinborn, S Abnar, R Choenni
CICLing, 2019
How the brain gives meaning to words
R Ahmed, S Abnar, W Zuidema
Unpublished Bachelor thesis, Artificial Intelligence, University of Amsterdam, 2017
Combining experiential and distributional semantic data to predict neural activity patterns
M Mijnheer, S Abnar, W Zuidema, S van Splunter
Unpublished Bachelor thesis, Artificial Intelligence, University of Amsterdam, 2017
Transferring Inductive Biases through Knowledge Distillation
S Abnar, M Dehghani, W Zuidema
arXiv preprint arXiv:2006.00555, 2020
Quantifying Attention Flow in Transformers
S Abnar, W Zuidema
arXiv preprint arXiv:2005.00928, 2020
A Comparison of Architectures and Pretraining Methods for Contextualized Multilingual Word Embeddings
N van der Heijden, S Abnar, E Shutova
AAAI, 9090-9097, 2020
From Attention in Transformers to Dynamic Routing in Capsule Nets
S Abnar, 2019
Incremental Reading for Question Answering
S Abnar, T Bedrax-Weiss, T Kwiatkowski, WW Cohen
CL Workshop@NeurIPS, 2018