Automatic Music Generation
Presented By:
GUNJAL PRATIK PRAKASH(24070149008)
BARGUJE VEDANT KRUSHNA(24070149031)
BIRWADKAR PRAJWAL SUNIL(24070149003)
Content
• Introduction
• Literature Study
• Methodology
• Implementation
• Conclusion & Future Work
Introduction
• Algorithmic music generation is a difficult problem that has been studied
extensively in recent decades. Two common techniques are Markov models
and graph-based energy-minimization algorithms, both of which produce
carefully planned melodic characteristics.
• Although these techniques can yield unique compositions, the music they
produce frequently features repetitive sequences and lacks the thematic
patterns common to most musical works. Recent advances in recurrent
network architectures, together with the growth of computing power, now
make it possible to train models on large-scale corpora to produce novel
music.
• The best-known recurrent network for modeling long-term dependencies is
the Long Short-Term Memory (LSTM) network, introduced by Hochreiter and
Schmidhuber (5) in 1997. Gated Recurrent Units (GRU), introduced by Cho
et al., have also been used to successfully model long-term dependencies
in a number of general sequence-modeling applications.
• We believe that applying LSTM and GRU networks to algorithmic music
generation can produce pieces that sound distinctive and are musically
cohesive, while more accurately capturing the long-term thematic
structure of musical compositions.
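To illustrate the gating mechanism that lets an LSTM retain long-term dependencies, here is a minimal single-step LSTM cell in NumPy. The dimensions and random weights are purely illustrative assumptions, not part of the presented system:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b stack the four gates
    (input i, forget f, output o, candidate g) along axis 0."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b       # pre-activations for all gates
    i = sigmoid(z[0:H])              # input gate
    f = sigmoid(z[H:2*H])            # forget gate
    o = sigmoid(z[2*H:3*H])          # output gate
    g = np.tanh(z[3*H:4*H])          # candidate cell state
    c = f * c_prev + i * g           # cell state carries long-term memory
    h = o * np.tanh(c)               # hidden state (the step's output)
    return h, c

# Toy dimensions: 3-dimensional input (e.g. a note embedding), 4 hidden units.
rng = np.random.default_rng(0)
n_in, n_h = 3, 4
W = rng.standard_normal((4 * n_h, n_in)) * 0.1
U = rng.standard_normal((4 * n_h, n_h)) * 0.1
b = np.zeros(4 * n_h)
h, c = np.zeros(n_h), np.zeros(n_h)
for x in rng.standard_normal((5, n_in)):  # run five time steps
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The forget gate `f` decides how much of the previous cell state survives each step, which is what allows information to persist across many time steps.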
Literature Study:
1. FLUX that Plays Music (Zhengcong Fei, Mingyuan Fan, Changqian Yu, and
   Junshi Huang; 1 September 2024)
   Take-away: FluxMusic uses Transformers to generate music from text,
   transforming text descriptions into mel-spectrogram representations
   with attention mechanisms.
2. Automatic Music Generator Using Recurrent Neural Network (Zayed
   University, Abu Dhabi, United Arab Emirates; 9 December 2019)
   Take-away: The generated music was listenable and interesting; the
   best-scoring model was the double-stacked-layer GRU, with a score of
   6.85 out of 10.
3. Generating Music by Fine-Tuning Recurrent Neural Networks with
   Reinforcement Learning (Tayba Asgher, Dept. of Computer Science, Riphah
   International University, Lahore, Pakistan; 2016)
   Take-away: Given models that can be trained to generate
   pleasant-sounding melodies, using RL to fine-tune RNN models could be
   promising for a number of applications.
4. Music Generation by Deep Learning – Challenges and Directions
   (Christopher Lueg, University of Technology, Sydney, Australia;
   16 October 2018)
   Take-away: Deep learning architectures and techniques for generating
   music (as well as other artistic content) are a growing research area,
   but open challenges remain, such as control, structure, and creativity.
Conclusion & Future Work
• Several feature enhancements can be considered. First, the model
architecture can be refined by exploring different RNN variants, such as
stacked or bidirectional LSTM/GRU layers, with the aim of improving the
quality of the generated music. Additionally, fine-tuning the model
through hyperparameter adjustments, such as the learning rate, batch
size, or sequence length, can further improve performance.
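One concrete place the sequence-length hyperparameter appears is in windowing the note stream into fixed-length training examples. A minimal sketch (the note names are illustrative, not from the presented dataset):

```python
def make_windows(notes, seq_len):
    """Slice a note sequence into (input window, next-note target) pairs;
    seq_len is the sequence-length hyperparameter mentioned above."""
    X, y = [], []
    for i in range(len(notes) - seq_len):
        X.append(notes[i:i + seq_len])   # seq_len consecutive notes as input
        y.append(notes[i + seq_len])     # the note that follows is the target
    return X, y

# Toy melody of seven notes.
notes = ["C4", "E4", "G4", "C5", "G4", "E4", "C4"]
X, y = make_windows(notes, seq_len=3)
print(X[0], "->", y[0])  # ['C4', 'E4', 'G4'] -> 'C5'
print(len(X))            # 4
```

Longer windows expose more context per example but yield fewer examples per piece, which is the trade-off being tuned.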
• Another avenue is conditional generation: the RNN can be trained to
generate music in specific styles or genres by incorporating genre labels
or genre-specific features. Integrating additional data sources, such as
lyrics or artist-specific patterns, can also diversify and enrich the
generated music.
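One simple way to incorporate a genre label, as suggested above, is to append a one-hot genre vector to the input at every time step. A sketch under illustrative assumptions (the genre names and feature dimensions are hypothetical):

```python
import numpy as np

GENRES = ["classical", "jazz", "pop"]  # hypothetical genre labels

def one_hot_genre(genre):
    """Encode a genre name as a one-hot vector."""
    v = np.zeros(len(GENRES))
    v[GENRES.index(genre)] = 1.0
    return v

def condition_inputs(note_vectors, genre):
    """Concatenate the same genre one-hot onto every time step,
    so the RNN sees the target style at each input."""
    g = one_hot_genre(genre)
    return np.array([np.concatenate([x, g]) for x in note_vectors])

# Toy input: 4 time steps of 2-dimensional note features.
notes = np.ones((4, 2))
conditioned = condition_inputs(notes, "jazz")
print(conditioned.shape)  # (4, 5)
print(conditioned[0])     # [1. 1. 0. 1. 0.]
```

At generation time, swapping the genre vector steers the model toward a different style without retraining the architecture.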
Thank you