Today, computer-aided analysis of music recordings and environmental sounds relies on so-called Artificial Intelligence (AI) algorithms, which are trained for specific tasks and fields of application using Machine Learning or Deep Learning approaches and specialized audio data sets. The goal of the seminar at the Music University Weimar (winter term 2022/23) was to introduce practical approaches to audio signal processing as well as these AI-based methods on the basis of the programming language Python, whose fundamentals were also taught. In the second half of the seminar, these approaches and methods were applied to concrete examples. The focus was on the recognition and classification of auditory events and textures in the context of questions from music analysis and soundscape research.
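By way of illustration, the following minimal Python sketch (not taken from the seminar materials) shows the kind of feature extraction on which such classification tasks typically build; it assumes the librosa and numpy packages and uses a bundled librosa example recording as a stand-in for real data:

    import librosa
    import numpy as np

    # Load a bundled librosa example recording (downloaded on first use);
    # replace this with the path to your own audio file.
    y, sr = librosa.load(librosa.example('trumpet'))

    # Mel-frequency cepstral coefficients (MFCCs), a common input
    # representation for machine-learning classifiers of audio events.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Summarize each coefficient over time to obtain one feature vector per clip.
    feature_vector = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
    print(feature_vector.shape)  # (26,)

A feature vector of this kind could then be passed, together with labels, to a classical classifier (e.g. a random forest), or replaced entirely by a deep neural network operating on spectrograms, as covered in the Machine Learning and Deep Learning parts of the seminar.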
On this website, Jakob Abeßer's teaching materials are made available, and student project reports (in German) are presented. Updated and additional teaching materials are available on the Machine Listening Lectures website.
Presentation slides:
AIAA 0 Introduction
AIAA 1 Installation Python
AIAA 2 Audio Processing
AIAA 3 Research Projects
AIAA 4 Machine Learning
AIAA 5 Deep Learning
… and the corresponding Jupyter Notebooks:
AIAA_2_Audio_Processing
AIAA_4_Machine_Learning
AIAA_5_Deep_Learning
For installation of the Jupyter Notebooks, please see the instructions here.
Please use the following yml file as the project-specific Python environment: aiaa.yml.
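If you work with conda, the environment can typically be created with conda env create -f aiaa.yml and then activated with conda activate followed by the environment name defined in the yml file (presumably aiaa), after which the notebooks can be opened with jupyter notebook.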
Some more helpful materials:
Python - Introduction
AIAA_1_Python
AIAA_6_Research_Project_Useful_Hints, html
Student project reports (in German):
Automatische Erkennung von Vogelgesang (Automatic Recognition of Birdsong), Sophie Krüger
Automatische Erkennung von Musikinstrumenten (Automatic Recognition of Musical Instruments), Ronja Hoffmann
Rhythmus-Analyse mit KI-Systemen (Rhythm Analysis with AI Systems), Nicklas Koppe