M1 paid internship (3 months, January – March 2020)
Master Subject 1:
Spectrographic interpretations: Can we ‘read’ emotions from spectrograms of human vocalisations?
Laboratory:
Equipe de Neuro-Ethologie Sensorielle ENES / CRNL
University of Lyon / Saint-Etienne,
CNRS UMR5292, INSERM UMR_S 1028
23 rue Michelon
42023 Saint-Etienne cedex 2
France
Supervisors:
Dr. Katarzyna PISANSKI, University of Lyon / Saint-Etienne, ENES (kasiapisanski@gmail.com)
Prof. David REBY, University of Lyon / Saint-Etienne, ENES (dreby@me.com)
Prof. Nicolas MATHEVON, University of Lyon / Saint-Etienne, ENES (mathevon@univ-st-etienne.fr)
Description of the Project:
While surprisingly understudied in humans, nonverbal vocalisations such as laughter, screams, roars, and cries are frequently produced across a range of social and interpersonal contexts (Anikin et al., 2018). They are observed in every human culture and are evolutionarily ancient – probably predating speech and language – showing clear parallels with the affective vocalisations of other mammals, including primates (Bryant & Aktipis, 2014). Studying human vocalisations can therefore provide novel insight into the evolution and social functions of vocal behaviour.
Form-function analyses of human vocalisations reveal that their acoustic structure (form) maps onto their purported evolved or social function. For example, babies’ cries recorded in a painful context (vaccination) tend to be louder and higher pitched than cries recorded in a merely distressing context (bathing), and are characterized by relatively more spectral nonlinearities such as deterministic chaos and subharmonics (Koutseff et al., 2018). All of these acoustic features can contribute to the cries’ ‘unpleasant’ quality, and are thought to function to elicit immediate attention and aid from a caregiver, who will be highly motivated to stop the aversive crying (for a review, see Pisanski & Bryant, 2016).
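For illustration, a minimal sketch of this kind of acoustic measurement is shown below, using parselmouth (a Python interface to Praat). The file name and pitch search range are hypothetical placeholders; nonlinear phenomena such as chaos and subharmonics are typically assessed by inspecting spectrograms rather than computed automatically.

```python
# Minimal sketch: extracting pitch and intensity measures from one
# recording with parselmouth (a Python interface to Praat).
# "cry.wav" and the pitch search range are hypothetical placeholders.
import parselmouth

snd = parselmouth.Sound("cry.wav")

# Fundamental frequency (f0) contour; infant cries are high pitched,
# so the search range is set well above adult-speech defaults.
pitch = snd.to_pitch(pitch_floor=150.0, pitch_ceiling=1000.0)
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]  # keep voiced frames only

# Intensity contour (a rough proxy for loudness)
intensity = snd.to_intensity()

print(f"mean f0: {f0.mean():.1f} Hz, max f0: {f0.max():.1f} Hz")
print(f"mean frame intensity: {intensity.values.mean():.1f} dB")
```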
While there is good empirical evidence that naïve human listeners can gauge motivational and emotional states from audio recordings of human vocalisations and speech, it has not been tested whether they can do the same based only on a visual representation of such sounds (i.e., a voice spectrogram). Given that humans possess deep-rooted cross-modal associations between sound and other modalities, including vision (see Spence, 2011, for a review), we predict that listeners will perform well in such a task even with no prior experience reading spectrograms. This project will therefore test whether men and women can assess the motivational and emotional content (e.g., pain level) of human vocalisations or speech using only the corresponding spectrogram of the sound.
The successful candidate will be responsible for performing acoustic analysis of vocal stimuli, preparing the acoustic and visual stimuli (vocalisations and spectrograms) and experimental platform for playback/rating experiments, and conducting these experiments with human participants (raters).
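As an illustration of how the visual stimuli might be produced, the sketch below renders one recording as a greyscale spectrogram image. It assumes a WAV file, and the file names and display parameters (window length, overlap, colour map) are placeholders rather than the lab's actual settings.

```python
# Illustrative sketch: rendering a vocalisation as a spectrogram image
# for use as a visual stimulus. File names and display parameters are
# hypothetical placeholders.
import matplotlib.pyplot as plt
from scipy.io import wavfile

rate, samples = wavfile.read("vocalisation_01.wav")
if samples.ndim > 1:          # keep one channel if the file is stereo
    samples = samples[:, 0]

fig, ax = plt.subplots(figsize=(6, 4))
ax.specgram(samples, NFFT=1024, Fs=rate, noverlap=512, cmap="gray_r")
ax.set_xlabel("Time (s)")
ax.set_ylabel("Frequency (Hz)")
fig.savefig("vocalisation_01_spectrogram.png", dpi=150)
plt.close(fig)
```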
Profile of the candidate:
The candidate must have some foundation in bioacoustics and acoustic analysis; experience producing and reading spectrograms would be particularly useful. Experience with the Praat acoustic analysis software, and/or knowledge of human or animal voice production and perception or of animal behaviour, are additional assets. The candidate should also have very good writing skills and knowledge of statistical analysis.
Strong motivation for data collection and analysis, rigour in conducting experimental protocols, and the capacity to work autonomously are essential. The student will also contribute to the joint activities of the ENES laboratory.
Publications related to the project:
Anikin, A., Bååth, R., & Persson, T. (2018). Human non-linguistic vocal repertoire: Call types and their meaning. Journal of Nonverbal Behavior, 42(1), 53-80.
Bryant, G. A., & Aktipis, C. A. (2014). The animal nature of spontaneous human laughter. Evolution and Human Behavior, 35(4), 327-335.
Koutseff, A., Reby, D., Martin, O., Levrero, F., Patural, H., & Mathevon, N. (2018). The acoustic space of pain: cries as indicators of distress recovering dynamics in pre-verbal infants. Bioacoustics, 27(4), 313-325.
Pisanski, K., & Bryant, G. A. (2016). The evolution of voice perception. Oxford, UK: Oxford University Press.
Spence, C. (2011). Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics, 73(4), 971-995.
M2 paid internship (6 months, January – June 2020)
Master Subject 2:
Do men and women laugh differently? Investigating the role of acoustic cues to gender in human vocalisations
Laboratory:
Equipe de Neuro-Ethologie Sensorielle ENES / CRNL
University of Lyon / Saint-Etienne,
CNRS UMR5292, INSERM UMR_S 1028
23 rue Michelon
42023 Saint-Etienne cedex 2
France
Supervisors:
Dr. Katarzyna PISANSKI, University of Lyon / Saint-Etienne, ENES (kasiapisanski@gmail.com)
Prof. David REBY, University of Lyon / Saint-Etienne, ENES (dreby@me.com)
Prof. Nicolas MATHEVON, University of Lyon / Saint-Etienne, ENES (mathevon@univ-st-etienne.fr)
Prof. Greg BRYANT, University of California, Los Angeles (UCLA) (gabryant@ucla.edu)
Description of the Project:
While surprisingly understudied in humans, nonverbal vocalisations such as laughter, screams, roars, and cries are frequently produced across a range of social and interpersonal contexts (Anikin et al., 2018). They are observed in every human culture and are evolutionarily ancient – probably predating speech and language – showing clear parallels with the affective vocalisations of other mammals, including primates (Bryant & Aktipis, 2014). Studying human vocalisations can therefore provide novel insight into the evolution and social functions of vocal behaviour.
Laughter is one of the most commonly produced and most extensively studied human nonverbal vocalisations. It functions as a social tool: it may be used to communicate positive regard, humour, or even sarcasm, and can help to form and reinforce social bonds or to signal those bonds to bystanders (Scott et al., 2014). However, compared to speech, little is known about the indexical information embedded within the laughter signal itself, such as cues to a person’s age, sex, or relative level of masculinity and femininity.
This project will examine whether cues to gender attributes (i.e., sex, masculinity/femininity) are present in human laughter. Using archived online audio-video databases of laughter, we will compare the acoustic structure of men’s and women’s laughs to test whether men laugh with a more ‘masculine’ acoustic profile than women do, after controlling for intrinsic sexual dimorphism in men’s and women’s voice frequencies.
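As a rough illustration of how such a comparison might be set up, the sketch below fits a mixed-effects model (with the Python libraries pandas and statsmodels) testing whether speaker sex predicts an acoustic masculinity measure of laughs once habitual voice pitch is controlled for. The data file, column names, and the masculinity measure itself are hypothetical placeholders, not the project's actual dataset or analysis plan.

```python
# Hypothetical sketch of the planned comparison: does speaker sex predict
# an acoustic 'masculinity' score for laughs once habitual voice pitch
# (mean speech f0) is controlled for? File and column names are
# illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf

laughs = pd.read_csv("laughs.csv")  # one row per laugh bout
# assumed columns: speaker_id, speaker_sex ('M'/'F'),
#                  masculinity_score, mean_speech_f0_hz

model = smf.mixedlm(
    "masculinity_score ~ C(speaker_sex) + mean_speech_f0_hz",
    data=laughs,
    groups=laughs["speaker_id"],  # random intercept per speaker
).fit()
print(model.summary())
```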
The successful candidate will be responsible for collating a database of men’s and women’s laughs, performing acoustic analysis of the stimuli, preparing the acoustic stimuli and experimental platform for playback experiments, and conducting playback experiments with human participants (listeners).
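One routine step in preparing such playback stimuli is loudness normalisation, sketched below with parselmouth (a Python interface to Praat); the directory names and the 70 dB target are hypothetical placeholders.

```python
# Possible stimulus-preparation step: normalising every laugh recording
# to the same average intensity with parselmouth. Directory names and
# the 70 dB target are hypothetical.
import pathlib
import parselmouth

out_dir = pathlib.Path("laughs_normalised")
out_dir.mkdir(exist_ok=True)

for wav in sorted(pathlib.Path("laughs_raw").glob("*.wav")):
    snd = parselmouth.Sound(str(wav))
    snd.scale_intensity(70.0)  # target average intensity (dB)
    snd.save(str(out_dir / wav.name), "WAV")
```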
Profile of the candidate:
The candidate must have a solid foundation in at least one of the following: bioacoustics, voice production and perception, evolutionary/experimental psychology, or animal/human behaviour. He or she should have very good writing skills and knowledge of statistical analysis, as well as experience in acoustic analysis (e.g., with the Praat software).
Strong motivation for both online and lab-based data collection, rigour in conducting experimental protocols, and the capacity to work autonomously are essential. The student will also contribute to the joint activities of the ENES laboratory.
Publications related to the project:
Anikin, A., Bååth, R., & Persson, T. (2018). Human non-linguistic vocal repertoire: Call types and their meaning. Journal of Nonverbal Behavior, 42(1), 53-80.
Bryant, G. A., & Aktipis, C. A. (2014). The animal nature of spontaneous human laughter. Evolution and Human Behavior, 35(4), 327-335.
Bryant, G. A., Fessler, D. M., Fusaroli, R., Clint, E., Amir, D., Chávez, B., ... & Fux, M. (2018). The perception of spontaneous and volitional laughter across 21 societies. Psychological Science, 29(9), 1515-1525.
Owren, M. J., & Bachorowski, J. A. (2003). Reconsidering the evolution of nonlinguistic communication: The case of laughter. Journal of Nonverbal Behavior, 27(3), 183-200.
Scott, S. K., Lavan, N., Chen, S., & McGettigan, C. (2014). The social life of laughter. Trends in Cognitive Sciences, 18(12), 618-620.
Simpson, A. P. (2009). Phonetic differences between male and female speech. Language and Linguistics Compass, 3(2), 621-640.