Generated Therapeutic Music Based on the ISO Principle
This paper presents an emotion-driven music generation model designed to support an intelligent music-therapy system informed by the ISO principle [1]. Following this principle, the system's primary objective is to swiftly generate music that matches a patient's current emotional state. To achieve this, we fine-tune a pre-trained audio model on an emotion recognition dataset and use it to annotate a large ABC notation dataset. With these annotated ABC notations, we train a sequence generation model that generates music on the fly, conditioned on the recognized emotions, thereby efficiently tailoring musical compositions to the emotional needs of patients in a therapeutic context.
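The two-stage pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the function names, the toy nearest-prototype classifier, and the emotion control token prepended to the ABC header are all assumptions introduced for clarity.

```python
# Hypothetical sketch of the pipeline: (1) an emotion recognizer labels
# audio, (2) the predicted label conditions an ABC-notation sequence
# generator via a control token. All names and logic are illustrative.

def recognize_emotion(valence: float) -> str:
    """Stand-in for the fine-tuned audio model: returns the emotion
    whose prototype valence is nearest to the input (toy logic)."""
    prototypes = {"happy": 0.8, "sad": 0.2, "calm": 0.5}
    return min(prototypes, key=lambda e: abs(prototypes[e] - valence))

def build_generation_prompt(emotion: str) -> str:
    """Prepend an emotion control token to an ABC-notation header,
    the kind of conditioning a sequence model could be trained on."""
    return f"<{emotion}>\nX:1\nK:C\n"

# A sequence model would then continue this prompt with note tokens.
prompt = build_generation_prompt(recognize_emotion(0.75))
```

In a real system, the recognizer would be the fine-tuned audio model and the generator a trained sequence model; the sketch only shows how an emotion label could flow between the two stages as a conditioning signal.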