Wednesday, 11 December 2024

M2 paid internship (Up to 6 months, early 2025)

Human nonverbal vocalisations across cultures
Laboratories:
Laboratoire Dynamique Du Langage
Université Lumière Lyon 2
CNRS UMR5596
DDL – MSH, 14 Avenue Berthelot
69363 Lyon CEDEX 07, France
ENES Bioacoustics Research Lab, Lyon Neuroscience Research Center (CRNL)
University of Lyon / Saint-Etienne,
CNRS UMR5292, INSERM UMR_S 1028
Campus Métare, Bâtiment K, 21 rue du Dr Paul Michelon
42100 Saint-Etienne, France
Supervisors:
Aitana Garcia Arasco, PhD student, Dynamique Du Langage (DDL) laboratory, University
of Lyon 2 & ENES lab, Jean Monnet University, St Etienne (aitana.garcia-arasco@cnrs.fr)
Dr. Kasia Pisanski, ENES lab, Jean Monnet University, St Etienne / DDL lab, University of
Lyon 2 (katarzyna.pisanski@cnrs.fr)
Prof. David Reby, ENES lab, Jean Monnet University (dreby@me.com)
Anticipated start date and duration:
January or February 2025 (with some flexibility), up to 6 months.
Description of the project:
This project aims to gain new insights into the diversity of non-verbal vocalisations (such as
screams, cries and laughter) across cultures, their acoustic forms and functions from an
evolutionary perspective, and ultimately the role that they may have played in the emergence
of emotional interjections (like wow! or ouch!).
There are more than 7,000 languages in the world, and although the potential space of
possible speech sounds across all these languages is enormous, any given language makes
use of only a small portion of it. Compared to speech, nonverbal vocal signals can exploit a
much broader acoustic space because they are not constrained by linguistic rules.
Despite their ubiquity in human social communication and their ostensible roots in animal
affective calls, human non-verbal vocalisations such as laughter, screams and moans remain
remarkably understudied in our species, especially across cultures. It has been shown that
people can correctly classify several emotions from nonverbal vocalisations even when they
are produced by speakers from a different culture. However, preliminary studies point to a
potential in-group advantage, in which accuracy increases as a function of cultural similarity
between speaker and listener. If so, does this mean that the acoustic structure of volitional
(i.e. simulated) non-verbal vocalisations may, at least to some degree, differ across cultures?
Among the factors that may modulate this in-group advantage, we hypothesise that
phylogenetic proximity between languages and geographic proximity between cultures play
an important role.
For this project we have more than 15,000 non-verbal vocalisations and interjections (i.e.
words like ay! or ouch!) recorded from participants in 15 different countries, including
speakers from 7 major language families (Indo-European, Uralic, Afro-Asiatic, Turkic,
Sino-Tibetan, Austronesian and Altaic). These vocalisations were collected either via an online
crowdsourcing platform (Prolific) or directly in the field (Mongolia and Japan). We asked the
participants to imagine themselves in 16 fictional scenarios (e.g., pain, amusement, fear) and
produce both a non-verbal vocalisation and an interjection according to the emotional context.
Thus, we have volitional (i.e. simulated) non-verbal vocalisations and their verbal
counterparts for the same situations. Our aim now is to test whether the acoustic structure of
vocalisations and interjections varies across cultures depending on the emotional context.
Finally, we aim to use perception studies to test whether these differences are salient to
listeners, in order to better understand the in-group advantage found in previous studies. If
time allows, we also aim to test whether non-verbal vocalisations and interjections produced
in the same situation occupy a similar acoustic space across cultures.
Main missions of the successful candidate:
- To conduct acoustic analyses (e.g. pitch, loudness, formants, non-linear
phenomena) of a large dataset of nonverbal vocalisations and verbal utterances from
more than 15 different cultures using Praat and soundgen in R.
- To test the functionality of non-verbal vocalisations by analysing whether their acoustic
structure varies depending on the context in which they were produced.
- To test whether phylogenetic distance between languages explains variability in the
acoustic structure of non-verbal vocalisations.
- To participate in the design and realisation of playback experiments using the vocal
stimuli already collected, in order to test whether listener accuracy varies in line with a
hypothesised “in-group” advantage.
- If time allows, to analyse whether the acoustic structure of non-verbal vocalisations
is consistent with that of their verbal counterparts (i.e. interjections and verbal
utterances) when they are produced in the same context.
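To give candidates a concrete flavour of what a basic acoustic measurement involves, the sketch below estimates fundamental frequency (pitch) by autocorrelation. It is purely illustrative: the project's actual analyses use Praat and soundgen in R, and the function name `estimate_f0`, the synthetic tone, and the use of NumPy are stand-ins for illustration, not part of the project's pipeline.

```python
import numpy as np

def estimate_f0(signal, sr, fmin=75.0, fmax=500.0):
    """Rough fundamental-frequency (pitch) estimate via autocorrelation."""
    sig = signal - np.mean(signal)
    # Autocorrelation at non-negative lags only
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sr / fmax)   # smallest lag = highest candidate pitch
    lag_max = int(sr / fmin)   # largest lag = lowest candidate pitch
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sr / best_lag

# Synthetic 220 Hz tone (0.5 s) as a stand-in for a recorded vocalisation
sr = 16000
t = np.arange(sr // 2) / sr
tone = np.sin(2 * np.pi * 220.0 * t)
f0 = estimate_f0(tone, sr)
```

Dedicated tools such as Praat refine this basic idea with windowing, interpolation, and voicing decisions, which is why the missions above rely on them rather than hand-rolled code.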
Profile of the candidate:
We are looking for a candidate with a background in bioacoustics, psychology, biology and/or
linguistics. This project is embedded in the field of bioacoustics; thus, experience with
acoustic analysis software (e.g. Praat, Audacity, Raven or soundgen), as well as solid
programming and statistical skills (e.g. R, Python), is highly desirable. A rigorous, patient and
autonomous attitude will be essential given the nature of the analyses and the large amount of
data we have.
How to apply?
Interested candidates should send a cover letter and CV to Aitana Garcia Arasco
(aitana.garcia-arasco@cnrs.fr) and Katarzyna Pisanski (katarzyna.pisanski@cnrs.fr)
before December 18, 2024. You can also contact us with questions or to discuss the project.
Publications related to the project
Anikin, A., Bååth, R., and Persson, T. (2018). Human non-linguistic vocal repertoire: Call types and their meaning.
J. Nonverbal Behav. 42, 53–80.
Bremner, A. J., Caparos, S., Davidoff, J., de Fockert, J., Linnell, K. J., & Spence, C. (2013). “Bouba” and “Kiki” in
Namibia? A remote culture make similar shape–sound matches, but different shape–taste matches to Westerners.
Cognition, 126(2), 165-172.
Briefer, E. F. (2012). Vocal expression of emotions in mammals: mechanisms of production and evidence. Journal
of Zoology, 288(1), 1-20.
Bryant, G. A., Fessler, D. M., Fusaroli, R., Clint, E., Amir, D., Chávez, B., ... & Zhou, Y. (2018). The perception of
spontaneous and volitional laughter across 21 societies. Psychological science, 29(9), 1515-1525.
Bryant, G. A., Fessler, D. M., Fusaroli, R., Clint, E., Aarøe, L., Apicella, C. L., ... & Zhou, Y. (2016). Detecting
affiliation in colaughter across 24 societies. Proceedings of the National Academy of Sciences, 113(17), 4682-4687.
Cordaro, D. T., Keltner, D., Tshering, S., Wangchuk, D., & Flynn, L. M. (2016). The voice conveys emotion in ten
globalized cultures and one remote village in Bhutan. Emotion, 16(1), 117.
Cowen, A. S., Laukka, P., Elfenbein, H. A., Liu, R., & Keltner, D. (2019). The primacy of categories in the recognition
of 12 emotions in speech prosody across two cultures. Nature human behaviour, 3(4), 369-382.
D’Onofrio, A. (2013). Phonetic detail and dimensionality in sound–shape correspondences: Refining the bouba-kiki
paradigm. Language and Speech, 57, 367–393.
Elfenbein, H. A., & Ambady, N. (2002). Is there an in-group advantage in emotion recognition?
Elfenbein, H. A., & Ambady, N. (2003). Universals and cultural differences in recognizing emotions. Current
directions in psychological science, 12(5), 159-164.
Fitch, W. T. (2018). The biology and evolution of speech: A comparative analysis. Annual Review of Linguistics,
4(1), 255–279
Hawk, S. T., Van Kleef, G. A., Fischer, A. H., & Van Der Schalk, J. (2009). "Worth a thousand words": absolute
and relative decoding of nonlinguistic affect vocalizations. Emotion, 9(3), 293.
Johansson, N. E., Anikin, A., Carling, G., & Holmer, A. (2020). The typology of sound symbolism: Defining macro-
concepts via their semantic and phonetic features. Linguistic Typology, 24(2), 253–310.
Kamiloğlu, R. G. (2023). Positive emotions in the voice: Towards an ethological understanding.
Kleisner, K., Leongómez, J. D., Pisanski, K., Fiala, V., Cornec, C., Groyecka-Bernard, A., ... & Akoko, R. M. (2021).
Predicting strength from aggressive vocalizations versus speech in African bushland and urban communities.
Philosophical Transactions of the Royal Society B, 376(1840), 20200403.
Koutseff A, Reby D, Martin O, Levrero F, Patural H, Mathevon N. (2018). The acoustic space of pain: cries as
indicators of distress recovering dynamics in pre-verbal infants. Bioacoustics. 27(4):313–325.
doi:10.1080/09524622.2017.1344931
Laukka, P., & Elfenbein, H. A. (2021). Cross-cultural emotion recognition and in-group advantage in vocal
expression: A meta-analysis. Emotion Review, 13(1), 3-11.
Laukka, P., Elfenbein, H. A., Söder, N., Nordström, H., Althoff, J., Chui, W., ... & Thingujam, N. S. (2013). Cross-
cultural decoding of positive and negative non-linguistic emotion vocalizations. Frontiers in Psychology, 4, 353.
Lev-Ari, S., & McKay, R. (2022). The sound of swearing: Are there universal patterns in profanity?. Psychonomic
Bulletin & Review, 1-12.
Matsumoto, D. (2002). Methodological requirements to test a possible in-group advantage in judging emotions
across cultures: comment on Elfenbein and Ambady (2002) and evidence.
Morton, E. S. (1977). On the occurrence and significance of motivation-structural rules in some bird and mammal
sounds. The American Naturalist, 111(981), 855-869.
Pisanski, K., & Bryant, G. A. (2019). The evolution of voice perception. In N. S. Eidsheim & K. Meizel (Eds.), The
Oxford handbook of voice studies (pp. 268–300). Oxford University Press.
Pisanski, K., Bryant, G. A., Cornec, C., Anikin, A., & Reby, D. (2022). Form follows function in human nonverbal
vocalisations. Ethology Ecology & Evolution, 34(3), 303-321.
Pisanski, K., Cartei, V., McGettigan, C., Raine, J., & Reby, D. (2016). Voice modulation: a window into the origins
of human vocal control?. Trends in cognitive sciences, 20(4), 304-318.
Raine J, Pisanski K, Bond R, Simner J, Reby D. (2019). Human roars communicate upper-body strength more
effectively than do screams or aggressive and distressed speech. PLoS ONE. 14(3):e0213034.
doi:10.1371/journal.pone.0213034
Sauter, D. A. (2013). The role of motivation and cultural dialects in the in-group advantage for emotional
vocalizations. Frontiers in psychology, 4, 814.
Sauter, D. A., Eisner, F., Ekman, P., & Scott, S. K. (2010). Cross-cultural recognition of basic emotions through
nonverbal emotional vocalizations. Proc. Natl Acad. Sci. USA, 107, 2408–2412. doi:10.1073/pnas.0908239106
Scherer, K. R., Banse, R., & Wallbott, H. G. (2001). Emotion inferences from vocal expression correlate across
languages and cultures. Journal of Cross-cultural psychology, 32(1), 76-92