Generative AI-generated music tracks - Gen AI model


Slide Content

GEN AI: AI-GENERATED MUSIC TRACKS
(Develop a Generative AI Model Capable of Composing Original Music Pieces in Various Genres)
PRESENTED BY
J.MUTHU MENAKA - 710422105012
E.PREAM MOORTHI - 710422105018
S.PRAVEEN - 710422105017
R.SABARISHWARAN - 710422105501

Abstract: In the last few years, the usage of artificial intelligence has grown very fast. It is applied in photography, videography, computer vision, and many more areas than we can cover in this paper, but the point is that AI is used everywhere. AI music generation is one of these applications, in which music is generated by AI with the help of machine learning models.

System Requirements

HARDWARE REQUIREMENTS
1. CPU: Quad-core processor (e.g., Intel i5 or AMD Ryzen 5)
2. RAM: 8 GB
3. Storage: 500 GB SSD (for faster load times)
4. GPU: Integrated graphics can work, but a dedicated GPU is better for more complex models.

SOFTWARE REQUIREMENTS
1. OpenAI MuseNet
2. AIVA
3. JukeBox
4. Magenta (from Google)

TOOLS AND VERSIONS
1. OpenAI MuseNet
2. AIVA (Artificial Intelligence Virtual Artist)
3. Google Magenta
4. Jukedeck (now part of TikTok)

CHARACTERISTICS OF GENRE-SPECIFIC GENERATION The goal of this system is to generate novel, contextually appropriate, and emotionally resonant music pieces that reflect the stylistic conventions and nuances of different genres: classical, jazz, electronic, pop, and more. The core of the proposed system is a multi-layered neural network architecture that blends reinforcement learning with deep learning and transformer models. A minimal sketch of the supervised core is given below.
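The deck names the blend but does not show the network itself. As an illustrative sketch only (layer sizes chosen here for illustration, assuming the integer-encoded note sequences produced by preprocess_data on the code slide below), the supervised core could be a stacked LSTM ending in a softmax over the note vocabulary:

import tensorflow as tf

def build_model(sequence_length, vocab_size):
    # Stacked LSTM over integer-encoded note/chord sequences;
    # the softmax head predicts the next symbol in the sequence.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(sequence_length - 1, 1)),  # matches X from preprocess_data
        tf.keras.layers.LSTM(256, return_sequences=True),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.LSTM(256),
        tf.keras.layers.Dense(vocab_size, activation='softmax'),
    ])
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
    return model

The reinforcement-learning component the slide mentions would sit on top of this as a reward-driven fine-tuning stage; the deck does not specify that part further.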

FLOW CHART

CODE IMPLEMENTATION

import numpy as np
import music21 as m21
import tensorflow as tf

sequence_length = 50  # length of each training window (value not specified in the slides)

# Data Preparation
def preprocess_data(music_files):
    # Parse the music files (e.g., MIDI) with music21
    midi_data = [m21.converter.parse(file) for file in music_files]

    # Flatten every score into a list of note/chord symbols
    notes_and_chords = []
    for score in midi_data:
        for element in score.flatten().notes:
            if isinstance(element, m21.note.Note):
                notes_and_chords.append(str(element.pitch))
            elif isinstance(element, m21.chord.Chord):
                notes_and_chords.append('.'.join(str(n) for n in element.normalOrder))

    # Map each unique note/chord symbol to an integer
    element_to_int = {elem: i for i, elem in enumerate(sorted(set(notes_and_chords)))}

    # Convert notes and chords to numerical sequences
    input_sequences = []
    for i in range(0, len(notes_and_chords) - sequence_length, 1):
        sequence_in = notes_and_chords[i:i + sequence_length]
        input_sequences.append([element_to_int[elem] for elem in sequence_in])

    # Pad sequences to a fixed length
    input_sequences = tf.keras.preprocessing.sequence.pad_sequences(
        input_sequences, maxlen=sequence_length, padding='post')

    # Create input and output sequences: all but the last element is the
    # input, the last element is the prediction target
    X, y = input_sequences[:, :-1], input_sequences[:, -1]

    # Reshape input to [samples, time steps, features]
    X = np.reshape(X, (X.shape[0], X.shape[1], 1))
    return X, y, element_to_int
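A brief usage sketch (the midi_songs folder and glob pattern are hypothetical, not from the deck):

import glob

music_files = glob.glob('midi_songs/*.mid')  # hypothetical folder of training MIDI files
X, y, element_to_int = preprocess_data(music_files)
print(X.shape)  # (num_samples, sequence_length - 1, 1)
print(y.shape)  # (num_samples,)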

SAMPLE OUTPUT y is the next note (the target value) after the sequence. You need to ensure the shape of y matches the output dimensions of the model, e.g., one-hot encoded vectors if you're using a softmax output.
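For example, the one-hot/softmax pairing described above could look like this (a sketch using the element_to_int vocabulary and y returned by preprocess_data):

import tensorflow as tf

vocab_size = len(element_to_int)

# One-hot encode the integer targets so they match a softmax output layer
y_onehot = tf.keras.utils.to_categorical(y, num_classes=vocab_size)  # shape: (num_samples, vocab_size)

# The model's final layer must then be Dense(vocab_size, activation='softmax'),
# trained with loss='categorical_crossentropy' (or keep integer y and use
# loss='sparse_categorical_crossentropy', as in the model sketch earlier).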

CONCLUSION The development of a generative AI model for composing original music across various genres has the potential to revolutionize the way music is created, consumed, and experienced. By leveraging advanced machine learning techniques such as deep learning, reinforcement learning, and transformer models, AI can autonomously generate complex compositions that capture the nuances of diverse musical styles, from classical and jazz to contemporary pop, electronic, and experimental genres.

THANK YOU