This set of slides briefly describes what we have been working on in the Yating Music AI team at Taiwan AI Labs. We presented this work as two demo papers at the 20th annual conference of the International Society for Music Information Retrieval (ISMIR 2019).
Learning to Generate Jazz & Pop Piano Music from Audio via MIR Techniques
2. Music AI Research (Common Approach)
• Algorithmic composition
  – MIDI in, MIDI out
• Limitations
  – Lacks expressivity: cannot be directly listened to
  – Some music genres are not a "written language"
[Figure: NLU vs. NLG analogy; music encoding used by OpenAI's MuseNet model]
3. Music AI Research (at Taiwan AI Labs)
• Audio in, audio out (a rough sketch of this chain follows below)
  – audio → audio: source separation (SS) [denoising]
  – audio → score: music transcription (MT) [ASR]
  – score → score: composition [NLG]
  – score → audio: synthesis [TTS]
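For illustration only, here is a minimal Python sketch of how the four stages above could be chained into an audio-in, audio-out pipeline; the function names and placeholder bodies are assumptions made for this slide, not the actual Taiwan AI Labs models.

```python
# Minimal sketch of the audio-in / audio-out idea above. All function names and
# placeholder bodies are hypothetical illustrations, not the real systems.
from typing import List, Tuple

Note = Tuple[int, float, float, int]  # (pitch, onset_sec, offset_sec, velocity)

def separate_sources(audio: List[float]) -> List[float]:
    """audio -> audio: source separation (isolate the instrument of interest)."""
    return audio  # placeholder: a real model would denoise / isolate the piano stem

def transcribe(audio: List[float]) -> List[Note]:
    """audio -> score: music transcription (estimate notes from audio)."""
    return [(60, 0.0, 0.5, 80)]  # placeholder output: a single middle C

def compose(score: List[Note]) -> List[Note]:
    """score -> score: composition (continue or rearrange the transcribed score)."""
    return score + [(64, 0.5, 1.0, 80)]  # placeholder: append one more note

def synthesize(score: List[Note]) -> List[float]:
    """score -> audio: synthesis (render the generated score back to audio)."""
    return [0.0] * 100  # placeholder waveform

def audio_in_audio_out(audio: List[float]) -> List[float]:
    """Chain the four MIR stages listed on this slide."""
    return synthesize(compose(transcribe(separate_sources(audio))))
```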
4. The "MIR4generation" Pipeline
• The transcription model predicts the pitch, onset/offset timing (in absolute time), and velocity (dynamics)
• The beat/downbeat model provides the underlying metrical grid (in symbolic time) of the music
• Together, these allow the composition model to learn to generate expressive music (see the sketch below)
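As a rough illustration of this alignment step, the sketch below quantises absolute-time note onsets onto a beat grid while keeping velocity; the data layouts (note tuples, beat times in seconds) are assumed for the example and are not the papers' actual encoding.

```python
import bisect

# Hypothetical sketch: align absolute-time note onsets from a transcription model
# with beat times from a beat/downbeat tracker, yielding symbolic-time events
# (pitch, beat index, sub-beat slot, velocity) a composition model could train on.
def to_symbolic_time(notes, beat_times, subdivisions=4):
    """notes: list of (pitch, onset_sec, velocity); beat_times: ascending seconds."""
    events = []
    for pitch, onset, velocity in notes:
        i = bisect.bisect_right(beat_times, onset) - 1       # beat containing the onset
        i = max(0, min(i, len(beat_times) - 2))               # clamp to a valid interval
        beat_len = beat_times[i + 1] - beat_times[i]
        frac = (onset - beat_times[i]) / beat_len             # position within the beat
        sub = min(round(frac * subdivisions), subdivisions)   # quantised sub-beat slot
        events.append((pitch, i, sub, velocity))              # velocity kept for dynamics
    return events

beats = [0.0, 0.52, 1.03, 1.55]            # e.g. output of a beat/downbeat tracker
notes = [(60, 0.05, 90), (64, 0.78, 70)]   # (pitch, onset in seconds, velocity)
print(to_symbolic_time(notes, beats))      # [(60, 0, 0, 90), (64, 1, 2, 70)]
```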
5. Demo
• We currently focus on piano music
• https://soundcloud.com/yating_ai/sets/ismir-2019-submission/
9. Jazz Piano Trio Generation: JazzRNN
• Given an 8-bar chord sequence, generate an 8-bar piano performance with an RNN model
• Use a Transformer-like model to extend it
• Use another RNN model to improvise the bass part, given the AI-generated piano and a random human-made drum loop (a rough sketch of this flow follows below)
• https://ailabs.tw/human-interaction/ai-jazz-bass-player/
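The sketch below is a hypothetical outline of this three-stage flow; all function names, stubs, and data formats are placeholders rather than the actual JazzRNN system.

```python
import random

# Rough sketch of the three-stage flow above. piano_rnn, transformer_extend and
# bass_rnn are stand-in stubs; the real JazzRNN / Transformer-like / bass models
# and their data formats differ.
def piano_rnn(chords):
    """Stub: turn an 8-bar chord sequence into (chord, pitch, bar) piano events."""
    return [(chord, random.choice([60, 64, 67]), bar) for bar, chord in enumerate(chords)]

def transformer_extend(piano_events, extra_bars=8):
    """Stub: a Transformer-like model that continues the piano part."""
    last_bar = piano_events[-1][2]
    return piano_events + [(None, 62, last_bar + 1 + i) for i in range(extra_bars)]

def bass_rnn(piano_events, drum_loop):
    """Stub: improvise a bass line given the AI piano and a human-made drum loop."""
    return [(pitch - 24, bar) for _, pitch, bar in piano_events]

chords = ["Dm7", "G7", "Cmaj7", "Cmaj7", "Fm7", "Bb7", "Ebmaj7", "Ebmaj7"]  # 8 bars
piano = transformer_extend(piano_rnn(chords))      # 8 generated bars + 8 extended bars
bass = bass_rnn(piano, drum_loop="drum_loop.wav")  # hypothetical drum-loop file
print(len(piano), len(bass))                       # 16 16
```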
10. Jamming with Yating
• https://www.youtube.com/watch?v=9ZIJrr6lmHg
• Yeh et al., "Learning to generate Jazz and Pop piano music from audio via MIR techniques," ISMIR-LBD 2019
• Hsiao et al., "Jamming with Yating: Interactive demonstration of a music composition AI," ISMIR-LBD 2019