MUSIC GENERATION USING DEEP LEARNING PPT.pptx


About This Presentation

This presentation covers building a Streamlit application for music generation using deep learning. To run such an application, every element of the Python script must be set up correctly, and file paths must be handled carefully, since they are specific to your system.
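
As a rough illustration (an assumed layout, not the author's actual script), a Streamlit front end for such a generator might look like the sketch below. For simplicity it just synthesizes the uploaded MIDI file; a real app would first extend the file with model-generated notes.

import io
import numpy as np
import pretty_midi
import streamlit as st
from scipy.io import wavfile

st.title("Music Generation with Deep Learning")

uploaded = st.file_uploader("Upload a seed MIDI file", type=["mid", "midi"])
if uploaded is not None and st.button("Synthesize"):
    # Load the uploaded bytes as a MIDI object and render it to audio.
    pm = pretty_midi.PrettyMIDI(io.BytesIO(uploaded.read()))
    audio = pm.synthesize(fs=22050)  # simple sine synthesis; no soundfont needed
    buf = io.BytesIO()
    wavfile.write(buf, 22050, audio.astype(np.float32))
    st.audio(buf.getvalue(), format="audio/wav")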


Slide Content

INTRODUCTION: MUSIC GENERATOR
In recent years, deep learning has revolutionized various fields, and one fascinating application is the generation of music. Deep-learning-based music generators have demonstrated an ability to compose original pieces, mimic different genres, and even collaborate with human musicians. This innovative approach combines the power of neural networks with the complexity and creativity inherent in music composition.

LIBRARIES USED
- pretty_midi: A useful library with functions and classes for easily handling, parsing, and modifying MIDI data.
- TensorFlow (RNN-LSTM): Recurrent neural networks are well suited to learning and modelling sequential data; because they have an internal memory, they can retain important information from earlier inputs. This makes RNNs a natural fit for the music generation task.
- GPT-2: The GPT-2 model offers excellent performance and stability in text generation, and the same model can also be applied to music generation.
- FluidSynth: A software synthesizer for generating customized audio from MIDI. It uses SoundFont instrument files (.sf2) to play MIDI notes.
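
As a quick illustration of how pretty_midi is typically used, the sketch below loads a MIDI file and prints its notes; the path "example.mid" is a placeholder, not a file from the presentation.

import pretty_midi

pm = pretty_midi.PrettyMIDI("example.mid")

# Each instrument holds Note objects with pitch, velocity, start and end times.
for instrument in pm.instruments:
    print(instrument.name, "-", len(instrument.notes), "notes")
    for note in instrument.notes[:5]:
        name = pretty_midi.note_number_to_name(note.pitch)
        print(f"  {name}: start={note.start:.2f}s end={note.end:.2f}s")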

MUSIC REPRESENTATION
- Sheet music: A visual representation of musical notes in the form of symbols denoting pitch, chords, etc.
- Piano roll: Another popular and simple visual representation of musical notes in the form of bars, with each bar describing a specific note. This representation is widely used in modern DAWs for music production.
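
For reference, pretty_midi can convert a file directly into this piano-roll form. A short sketch (placeholder path again) that extracts the matrix and lists the pitches sounding in the first second:

import numpy as np
import pretty_midi

pm = pretty_midi.PrettyMIDI("example.mid")
roll = pm.get_piano_roll(fs=100)  # 128 pitches x time frames (100 frames per second)
print(roll.shape)

# Pitches that sound at any point during the first second:
active = np.nonzero(roll[:, :100].any(axis=1))[0]
print([pretty_midi.note_number_to_name(p) for p in active])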

MUSIC REPRESENTATION
- MIDI (Musical Instrument Digital Interface): A digital standard for storing information about a piece of music in the form of notes, durations, timings, pitch, etc., which can be fed to a digital music synthesizer to create sound.
- WAV or MP3: Modern file formats for storing recorded audio in the form of a bitstream.
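
The two representations connect naturally: FluidSynth can render a MIDI file into a WAV bitstream. A minimal sketch, assuming the pyfluidsynth bindings are installed, with placeholder paths for the MIDI file and the .sf2 soundfont:

import pretty_midi
from scipy.io import wavfile

pm = pretty_midi.PrettyMIDI("example.mid")
audio = pm.fluidsynth(fs=44100, sf2_path="font.sf2")  # float waveform at 44.1 kHz
wavfile.write("output.wav", 44100, audio)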

RECURRENT NEURAL NETWORKS
Recurrent Neural Networks (RNNs) are a type of artificial neural network designed for sequential data processing. Unlike traditional feedforward neural networks, RNNs have connections that form directed cycles, allowing them to maintain a hidden state representing information about previous inputs. This recurrent nature makes RNNs well suited to tasks involving sequential or time-dependent data, such as natural language processing, speech recognition, and time series analysis.
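
A minimal sketch of such a model in TensorFlow/Keras for next-note prediction. The context length (25 notes) and the three prediction heads (pitch, step, duration) are common choices for this task, assumed here rather than taken from the slides.

import tensorflow as tf

seq_length = 25                                 # notes of context per training example
inputs = tf.keras.Input(shape=(seq_length, 3))  # each step: (pitch, step, duration)
x = tf.keras.layers.LSTM(128)(inputs)

outputs = {
    "pitch": tf.keras.layers.Dense(128, name="pitch")(x),      # logits over 128 MIDI pitches
    "step": tf.keras.layers.Dense(1, name="step")(x),          # time since the previous note
    "duration": tf.keras.layers.Dense(1, name="duration")(x),  # how long the note sounds
}

model = tf.keras.Model(inputs, outputs)
model.compile(
    optimizer="adam",
    loss={
        "pitch": tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        "step": "mse",
        "duration": "mse",
    },
)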

METHODOLOGY
1. Data Collection: Gather a diverse dataset of musical pieces in a suitable format (MIDI, audio files, etc.). This dataset serves as the training ground for the neural network.
2. Data Preprocessing: Clean and preprocess the musical data. This may involve normalizing tempo, transposing to a consistent key, and converting the data into a format suitable for deep learning, such as a MIDI representation (see the sketch after this list).
3. Choosing a Model Architecture: Select a deep learning architecture suitable for music generation. Recurrent Neural Networks (RNNs), especially variants like Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU), are commonly used due to their ability to capture sequential dependencies.
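
A sketch of step 2: turning a MIDI file into the numeric (pitch, step, duration) sequence consumed by the model above. The feature set is a common convention, assumed rather than specified in the slides.

import numpy as np
import pretty_midi

def midi_to_notes(path):
    """Return an array of (pitch, step, duration) rows, sorted by start time."""
    pm = pretty_midi.PrettyMIDI(path)
    notes = sorted(pm.instruments[0].notes, key=lambda n: n.start)
    rows, prev_start = [], notes[0].start
    for n in notes:
        rows.append([n.pitch, n.start - prev_start, n.end - n.start])
        prev_start = n.start
    return np.array(rows, dtype=np.float32)  # shape: (num_notes, 3)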

METHODOLOGY
4. Input Representation: Represent the musical data in a way the model can understand. This could involve encoding musical notes, durations, and other relevant features into a numerical format.
5. Model Training: Train the chosen deep learning model on the preprocessed dataset. During training, the model learns the patterns and structures present in the input musical data.
6. Generating Music: Once the model is trained, use it to generate new music. Provide an initial seed or context to guide the generation process, and let the model create a sequence of musical events (a sampling sketch follows this list).
7. Evaluation and Iteration: Evaluate the generated music on criteria such as coherence, creativity, and adherence to a particular style. Iterate on the model and training process to improve the quality of the generated music.
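
A sketch of step 6: sample the next note from the trained model's outputs, with a temperature parameter controlling how adventurous the pitch choices are. Here model is the LSTM sketched earlier; all names are illustrative.

import numpy as np
import tensorflow as tf

def predict_next_note(model, context, temperature=1.0):
    """context: (seq_length, 3) array of (pitch, step, duration) rows."""
    preds = model.predict(context[np.newaxis, ...], verbose=0)
    logits = preds["pitch"][0] / temperature  # higher temperature = more random
    pitch = tf.random.categorical(logits[np.newaxis, :], num_samples=1)[0, 0].numpy()
    step = max(0.0, float(preds["step"][0, 0]))          # times cannot be negative
    duration = max(0.0, float(preds["duration"][0, 0]))
    return int(pitch), step, duration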


THANKS