Computer program creates music based on emotions – ScienceDaily


A group of researchers from the University of Granada (UGR) has developed Inmamusys, software that can create music according to the emotions it is meant to arouse in the listener. Using artificial intelligence (AI) techniques, the program can play original, royalty-free and inspiring music continuously.

UGR researchers Miguel Delgado, Waldo Fajardo and Miguel Molina set out to design software that would allow someone who knows nothing about composition to create music. The AI-based system they designed, called Inmamusys (an acronym for Intelligent Multiagent Music System), is capable of composing and playing music in real time.

If successful, this prototype, which was recently described in the journal Expert Systems with Applications, seems likely to bring big changes to the intrusive and repetitive canned music played in public places.

Miguel Molina, lead author of the study, says that although the repertoire of such canned music is very limited, the new invention can be used to create a pleasant, non-repetitive musical environment for anyone who has to remain within earshot of it throughout the day.

Everyone’s ears have suffered the effects of canned music played on repeat, whether in workplaces, in hospital environments, or during calls to directory enquiry numbers. On this basis, the research team decided that it would be “very interesting to design and build an intelligent system capable of automatically generating music, ensuring the right degree of emotionality (in order to manage the environment it creates) and originality (ensuring that the composed tunes are not repeated, and are original and endless).”

Inmamusys embeds the knowledge needed to compose emotional music through the use of AI techniques. During the design and development of the system, the researchers worked on abstract representations of the concepts needed to handle emotions and feelings. To achieve this, says Molina, “we have designed a modular system that includes, among other things, a two-tier multi-agent architecture.”
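The article does not give implementation details, but a two-tier multi-agent design of this general kind can be sketched as follows. Everything here is an illustrative assumption, not the published Inmamusys architecture: the class names, the emotion-to-parameter mapping, and the random-walk melody rule are all hypothetical.

```python
import random

# Hypothetical sketch of a two-tier multi-agent composer.
# Top tier: an agent that turns a requested emotion into musical parameters.
# Bottom tier: an agent that composes a note sequence from those parameters.
# The emotion mapping and melody rule below are assumptions for illustration.

EMOTION_PARAMS = {
    # emotion -> (scale intervals above the tonic, tempo in BPM)
    "joy":     ([0, 2, 4, 5, 7, 9, 11], 140),  # major scale, fast
    "sadness": ([0, 2, 3, 5, 7, 8, 10], 70),   # natural minor, slow
}

class EmotionAgent:
    """Top tier: translates a requested emotion into musical parameters."""
    def plan(self, emotion):
        scale, tempo = EMOTION_PARAMS[emotion]
        return {"scale": scale, "tempo": tempo}

class MelodyAgent:
    """Bottom tier: composes a note sequence from the planned parameters."""
    def __init__(self, seed=None):
        self.rng = random.Random(seed)

    def compose(self, params, length=8, tonic=60):  # MIDI 60 = middle C
        # Random walk over the scale degrees, so repeated calls
        # produce fresh, non-repeating melodies in the same mood.
        degree, notes = 0, []
        for _ in range(length):
            step = self.rng.choice([-1, 0, 1])
            degree = max(0, min(len(params["scale"]) - 1, degree + step))
            notes.append(tonic + params["scale"][degree])
        return notes

def compose_for(emotion, seed=None):
    params = EmotionAgent().plan(emotion)
    return params["tempo"], MelodyAgent(seed).compose(params)

tempo, melody = compose_for("joy", seed=1)
print(tempo, melody)
```

The two tiers stay decoupled: the top agent knows about emotions but nothing about notes, while the bottom agent generates notes without knowing which emotion requested them, which is the usual motivation for layering agents this way.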

A survey was used to evaluate the system, and the results showed that users were able to identify the type of music composed by the computer. “A person without any musical knowledge can use this artificial composer, since the user has nothing to do but decide on the type of music.”

Beneath the system’s ease of use, Miguel Molina reveals, a complex framework is at work that allows the computer to imitate a faculty as human as creativity. Besides creativity, music also requires specific knowledge.

According to Molina, creativity is “usually something that is done by human beings, although they don’t understand how they do it. In reality, there are many processes involved in musical creation and, unfortunately, we still do not understand many of them. Others are so complex that they cannot be analyzed, despite the enormous power of current computing tools. Nowadays, thanks to advances in computer science, there are areas of research – such as artificial intelligence – that seek to replicate human behavior. One of the most difficult facets to replicate is creativity.”

Goodbye to copyright payments

Commercial development of this prototype will not only change the way research is conducted into the relationship between computers and emotions, the ways of interacting with music, and the structures by which music will be composed in the future. It will also serve, according to the authors of the study, to reduce costs.

According to the researchers, “Music is very prevalent in our leisure and work environments, and many of the places we visit have canned-music systems. Playing these pieces of music involves copyright payments. Our system will make those copyright payments a thing of the past.”

Source of the story:

Materials provided by the SINC platform. Note: Content may be edited for style and length.


Gordon K. Morehouse