Shrikanth (Shri) Narayanan
University Professor and Niki & Max Nikias Chair in Engineering, University of Southern California
Verified email at sipi.usc.edu
Title · Cited by · Year
IEMOCAP: Interactive emotional dyadic motion capture database
C Busso, M Bulut, CC Lee, A Kazemzadeh, E Mower, S Kim, JN Chang, ...
Language resources and evaluation 42, 335-359, 2008
Cited by 3817 · 2008
The Geneva minimalistic acoustic parameter set (GeMAPS) for voice research and affective computing
F Eyben, KR Scherer, BW Schuller, J Sundberg, E André, C Busso, ...
IEEE transactions on affective computing 7 (2), 190-202, 2015
Cited by 1827 · 2015
Toward detecting emotions in spoken dialogs
CM Lee, SS Narayanan
IEEE transactions on speech and audio processing 13 (2), 293-303, 2005
Cited by 1298 · 2005
Analysis of emotion recognition using facial expressions, speech and multimodal information
C Busso, Z Deng, S Yildirim, M Bulut, CM Lee, A Kazemzadeh, S Lee, ...
Proceedings of the 6th international conference on Multimodal interfaces …, 2004
Cited by 1205 · 2004
Acoustics of children’s speech: Developmental changes of temporal and spectral parameters
S Lee, A Potamianos, S Narayanan
The Journal of the Acoustical Society of America 105 (3), 1455-1468, 1999
Cited by 1166 · 1999
A system for real-time Twitter sentiment analysis of 2012 US presidential election cycle
H Wang, D Can, A Kazemzadeh, F Bar, S Narayanan
Proceedings of the ACL 2012 system demonstrations, 115-120, 2012
Cited by 1041 · 2012
Environmental sound recognition with time–frequency audio features
S Chu, S Narayanan, CCJ Kuo
IEEE Transactions on Audio, Speech, and Language Processing 17 (6), 1142-1158, 2009
Cited by 896 · 2009
The INTERSPEECH 2010 paralinguistic challenge
B Schuller, S Steidl, A Batliner, F Burkhardt, L Devillers, C Müller, ...
Proc. INTERSPEECH 2010, Makuhari, Japan, 2794-2797, 2010
Cited by 773 · 2010
Method of using a natural language interface to retrieve information from one or more data resources
E Levin, S Narayanan, R Pieraccini
US Patent 6,173,279, 2009
Cited by 596 · 2009
The Vera am Mittag German audio-visual emotional speech database
M Grimm, K Kroschel, S Narayanan
2008 IEEE international conference on multimedia and expo, 865-868, 2008
Cited by 564 · 2008
Emotion recognition using a hierarchical binary decision tree approach
CC Lee, E Mower, C Busso, S Lee, S Narayanan
Speech communication 53 (9-10), 1162-1171, 2011
Cited by 546 · 2011
An approach to real-time magnetic resonance imaging for speech production
S Narayanan, K Nayak, S Lee, A Sethy, D Byrd
The Journal of the Acoustical Society of America 115 (4), 1771-1776, 2004
Cited by 421 · 2004
Primitives-based evaluation and estimation of emotions in speech
M Grimm, K Kroschel, E Mower, S Narayanan
Speech communication 49 (10-11), 787-800, 2007
Cited by 416 · 2007
Paralinguistics in speech and language—state-of-the-art and the challenge
B Schuller, S Steidl, A Batliner, F Burkhardt, L Devillers, C Müller, ...
Computer Speech & Language 27 (1), 4-39, 2013
Cited by 409 · 2013
Analysis of emotionally salient aspects of fundamental frequency for emotion detection
C Busso, S Lee, S Narayanan
IEEE transactions on audio, speech, and language processing 17 (4), 582-596, 2009
Cited by 405 · 2009
A review of speaker diarization: Recent advances with deep learning
TJ Park, N Kanda, D Dimitriadis, KJ Han, S Watanabe, S Narayanan
Computer Speech & Language 72, 101317, 2022
Cited by 373 · 2022
System and method for providing a compensated speech recognition model for speech recognition
RC Rose, S Parthasarathy, AE Rosenberg, SS Narayanan
US Patent 7,451,085, 2008
Cited by 355 · 2008
Behavioral signal processing: Deriving human behavioral informatics from speech and language
S Narayanan, PG Georgiou
Proceedings of the IEEE 101 (5), 1203-1233, 2013
Cited by 342 · 2013
Emotion recognition based on phoneme classes.
CM Lee, S Yildirim, M Bulut, A Kazemzadeh, C Busso, Z Deng, S Lee, ...
Interspeech, 889-892, 2004
Cited by 335 · 2004
System and method of performing user-specific automatic speech recognition
B Gajic, SS Narayanan, S Parthasarathy, RC Rose, AE Rosenberg
US Patent 9,058,810, 2015
Cited by 328 · 2015