Summary: Researchers propose a framework to help AI systems recognize the complex range of emotions evoked when people listen to music.

Music has been of great importance throughout human history, and emotions have always been the ultimate reason for all musical creations. When writing a song, a composer tries to express a particular feeling, perhaps making concert-goers laugh, cry or even shiver.

We use music on a day-to-day basis to regulate our emotions or revive a memory. Hence, knowing how to recognize the emotions that music produces has been, and will continue to be, very important. Major music platforms such as Spotify or Deezer use classifications, generated by artificial intelligence (AI) algorithms, based on the emotions that music arouses in its listeners.

However, not all people agree on these emotions, neither the ones that music arouses in us nor the ones we perceive in the music itself when listening to it. A song like “Happy Birthday” can express “happiness” because it is in a major key and has a fast tempo, but it can generate “sadness” if it reminds us of a person who is no longer with us. Each of us perceives music in a very personal way, and this perception can be influenced by general aspects such as musical preferences, cultural background, the language of the song, and so on.

Defining this aspect is important because an AI algorithm needs to know what is called the “ground truth” or “labels”: the basis on which the algorithm “learns”. For example, for a photo of a golden labrador on Instagram, it is highly likely that we would all agree that the label should be “dog”. But with a symphony by Beethoven, the labels can range from “happy” to “nostalgic”, depending on the listener and the context.
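The “ground truth” problem can be made concrete with a small, purely illustrative sketch. In the Python snippet below, the per-listener labels and the simple frequency-based aggregation are assumptions made for illustration, not data or methods from the study; it only contrasts a case where annotators agree (the labrador photo) with one where they do not (the Beethoven symphony), and shows why a single majority label can hide how listeners actually differ.

```python
from collections import Counter

def label_distribution(annotations):
    """Turn a list of per-listener labels into relative frequencies."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Hypothetical annotations from five listeners (illustrative values only).
photo_labels = ["dog", "dog", "dog", "dog", "dog"]                # full agreement
symphony_labels = ["happy", "nostalgic", "calm", "happy", "sad"]  # wide disagreement

print(label_distribution(photo_labels))     # {'dog': 1.0}
print(label_distribution(symphony_labels))  # {'happy': 0.4, 'nostalgic': 0.2, 'calm': 0.2, 'sad': 0.2}

# A majority vote collapses the disagreement into a single "ground truth" label...
majority = Counter(symphony_labels).most_common(1)[0][0]
print(majority)  # 'happy', even though three of the five listeners chose something else

# ...whereas keeping the whole distribution preserves how listeners actually differ,
# which is closer in spirit to the human-centered perspective the researchers describe.
```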
In a recent publication in the journal IEEE Signal Processing Magazine, researchers from the Music Technology Group (MTG) at Pompeu Fabra University, together with scientists from Academia Sinica in Taiwan, the University of Hong Kong and Durham University in the United Kingdom, among others, propose a new conceptualization framework that helps to characterize music in terms of emotions and thus to build models that are better adapted to people’s characteristics.

“Recognition of emotions in music is one of the most complex tasks in musical description and computational modeling”, explains doctoral student Juan Sebastián Gómez Cañón, first author of the study. “People’s opinions vary greatly, and it is difficult to find the reasons why a section of a song arouses a certain emotion. It is a very subjective task, and tackling it with artificial intelligence algorithms still requires a great deal of research.”

The main goal of the research was to create a guide to how current music emotion recognition (MER) systems operate. To address the problem of subjectivity, the authors propose an approach that places the human being at the center of the system’s design. The work has also allowed them to identify areas where the field needs to go into greater depth, such as the accessibility of open-source data, the reproducibility of experiments, the relevance of people’s cultural context, and the need to study the ethical implications of possible applications of MER.

Gómez adds that “most of the research on music and emotions has been carried out by and for people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) countries. It is crucial to go further in order to evaluate non-Western traditional music, collect data from diverse listeners and democratize this research across the world’s different musical cultures”.