Introduction: Correctly perceiving emotions in speech is at the core of communication. To identify spoken emotions, a listener must identify both the semantics (lexical meaning) and the prosody (tone of speech) of the utterance, and integrate the two. This may be challenging for people with tinnitus (PwT), due to changes in auditory-sensory factors and the associated cognitive cost.
Method: The study used a novel tool, the Test for Rating of Emotions in Speech (T-RES). Twenty-two PwT and 24 controls (matched on age, gender, education, and vocabulary) were presented with 30 spoken sentences. The emotional valence of prosody and semantics appeared in different combinations from trial to trial, drawing on four discrete emotions (anger, happiness, sadness, and neutral). Listeners were asked to rate the emotion expressed by the speaker (as if heard over the phone) on three emotional scales (anger, happiness, and sadness) in three separate blocks. Spoken sentences were presented 40 dB above individual auditory thresholds (pure-tone average, PTA) in a sound-attenuated booth.
Results: Replicating previous studies, controls placed greater weight on the prosodic information than on the lexical information. For example, the lexically happy sentence "This is the happiest day," spoken with angry prosody, was rated higher on the anger scale than on the happiness scale. This was not the case for PwT, who assigned equal relative weight to prosody and semantics; thus, the same sentence was rated as equally angry and happy.
Conclusions: The findings have implications for social communication with PwT and suggest possible paths for rehabilitation. Our data also highlight the cognitive toll associated with tinnitus.