Heysem Kaya
Assistant Professor of Social and Affective Computing, Utrecht University
Verified email at uu.nl
Title · Cited by · Year
Video-based emotion recognition in the wild using deep transfer learning and score fusion
H Kaya, F Gürpınar, AA Salah
Image and Vision Computing 65, 66-75, 2017
Cited by 272 · 2017
AVEC 2018 workshop and challenge: Bipolar disorder and cross-cultural affect recognition
F Ringeval, B Schuller, M Valstar, R Cowie, H Kaya, M Schmitt, ...
Proceedings of the 2018 on audio/visual emotion challenge and workshop, 3-13, 2018
Cited by 196 · 2018
Local and Global Learning Methods for Predicting Power of a Combined Gas & Steam Turbine
H Kaya, P Tüfekci, FS Gürgen
Proceedings of the International Conference on Emerging Trends in Computer …, 2012
Cited by 171 · 2012
The INTERSPEECH 2021 computational paralinguistics challenge: COVID-19 cough, COVID-19 speech, escalation & primates
BW Schuller, A Batliner, C Bergler, C Mascolo, J Han, I Lefter, H Kaya, ...
arXiv preprint arXiv:2102.13468, 2021
Cited by 139 · 2021
Modeling, recognizing, and explaining apparent personality from videos
HJ Escalante, H Kaya, AA Salah, S Escalera, Y Güçlütürk, U Güçlü, ...
IEEE Transactions on Affective Computing 13 (2), 894-911, 2020
Cited by 134 · 2020
Predicting CO and NOx emissions from gas turbines: novel data and a benchmark PEMS
H Kaya, P Tüfekci, E Uzun
Turkish Journal of Electrical Engineering & Computer Sciences 27, 4783-4796, 2019
Cited by 109 · 2019
Ensemble CCA for continuous emotion prediction
H Kaya, F Çilli, AA Salah
Proceedings of the 4th International Workshop on Audio/Visual Emotion …, 2014
Cited by 101 · 2014
Kernel ELM and CNN based facial age estimation
F Gürpınar, H Kaya, H Dibeklioğlu, AA Salah
Proceedings of the IEEE conference on computer vision and pattern …, 2016
Cited by 87 · 2016
Efficient and effective strategies for cross-corpus acoustic emotion recognition
H Kaya, AA Karpov
Neurocomputing 275, 1028-1034, 2018
Cited by 86 · 2018
CCA based feature selection with application to continuous depression recognition from acoustic speech features
H Kaya, F Eyben, AA Salah, B Schuller
2014 IEEE International Conference on Acoustics, Speech and Signal …, 2014
Cited by 78 · 2014
Fisher vectors with cascaded normalization for paralinguistic analysis
H Kaya, AA Karpov, AA Salah
Sixteenth Annual Conference of the International Speech Communication …, 2015
Cited by 77 · 2015
Contrasting and combining least squares based learners for emotion recognition in the wild
H Kaya, F Gürpınar, S Afshar, AA Salah
Proceedings of the 2015 ACM on international conference on multimodal …, 2015
Cited by 76 · 2015
Emotion, age, and gender classification in children’s speech by humans and machines
H Kaya, AA Salah, A Karpov, O Frolova, A Grigorev, E Lyakso
Computer Speech & Language 46, 268-283, 2017
Cited by 75 · 2017
Multi-modal score fusion and decision trees for explainable automatic job candidate screening from video CVs
H Kaya, F Gürpınar, AA Salah
Proceedings of the IEEE conference on computer vision and pattern …, 2017
Cited by 74 · 2017
Combining Deep Facial and Ambient Features for First Impression Estimation
F Gürpınar, H Kaya, AA Salah
ECCV Workshops, 2016
Cited by 66 · 2016
Combining modality-specific extreme learning machines for emotion recognition in the wild
H Kaya, AA Salah
Proceedings of the 16th international conference on multimodal interaction …, 2014
Cited by 61 · 2014
The Turkish audio-visual bipolar disorder corpus
E Çiftçi, H Kaya, H Güleç, AA Salah
2018 First Asian Conference on Affective Computing and Intelligent …, 2018
Cited by 59 · 2018
Multimodal fusion of audio, scene, and face features for first impression estimation
F Gürpınar, H Kaya, AA Salah
2016 23rd International conference on pattern recognition (ICPR), 43-48, 2016
Cited by 55 · 2016
EmoChildRu: Emotional Child Russian Speech Corpus
E Lyakso, O Frolova, E Dmitrieva, A Grigorev, H Kaya, AA Salah, ...
SPECOM 2015 / LNAI 9319, 134-141, 2015
Cited by 54 · 2015
Ensembling end-to-end deep models for computational paralinguistics tasks: ComParE 2020 Mask and Breathing Sub-challenges
M Markitantov, D Dresvyanskiy, D Mamontov, H Kaya, W Minker, ...
INTERSPEECH 2020, 2072-2076, 2020
Cited by 42 · 2020
Articles 1–20