A great forum of talented individuals. The first lecture I attended was Carmine Cella speaking about the mathematical generation of musical ideas. To be honest, most of it went over my head and seemed too advanced for me to understand. My interest was piqued, however, by his description of the "reduction" or "simplification" or "filtration" process (I forget exactly which word he used) that Orchidea uses to sort and reshape musical ideas when orchestrating.

At one point he showed a chart that reminded me of Punnett squares from biology, especially since he referred to some of the combined ideas his algorithm produced as "offspring." That struck me as a compelling musical idea, since I had never thought of composing around musical material in such a "natural" way. How could natural selection be applied to music? What would dominant traits look like? I wish he had gone into more detail about how his algorithms work, though most likely that would have gone over my head too. Regardless, I am now thinking about how I might build my own algorithms for generative music and what that could look like when applied to games; I've sketched a toy version of the "offspring" idea below.

I also got an idea from Professor Ruzanka: use the music created in my games as information to drive procedurally generated terrain. What would this look like in Unity...? A rough mock-up of that mapping follows as well.
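To make the "offspring" idea concrete for myself, here is a minimal sketch of how I might cross two melodic parents and keep the fitter child. To be clear, this is my own speculation about a genetic-algorithm-style process, not Cella's actual method or anything from Orchidea; the crossover, mutation, and contour-based fitness function are all assumptions I'm making for illustration.

```python
# Toy "Punnett square" for melodies: cross two parent motifs, mutate, keep the best.
# My own speculation, not Cella's algorithm or anything from Orchidea.
import random

def crossover(parent_a, parent_b):
    """One-point crossover of two pitch sequences (MIDI note numbers)."""
    point = random.randint(1, min(len(parent_a), len(parent_b)) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(motif, rate=0.1):
    """Nudge random notes up or down a semitone."""
    return [n + random.choice([-1, 1]) if random.random() < rate else n for n in motif]

def fitness(motif, target_contour):
    """Score how well the motif's up/down contour matches a desired shape.
    A 'dominant trait' here is simply whatever the fitness function rewards."""
    contour = [1 if b > a else -1 if b < a else 0 for a, b in zip(motif, motif[1:])]
    return sum(1 for c, t in zip(contour, target_contour) if c == t)

if __name__ == "__main__":
    parent_a = [60, 62, 64, 65, 67, 69, 71, 72]   # C major ascent
    parent_b = [72, 71, 69, 67, 65, 64, 62, 60]   # descent
    target = [1, 1, -1, -1, 1, 1, -1]             # an arch-like contour to select for

    population = [mutate(crossover(parent_a, parent_b)) for _ in range(20)]
    best = max(population, key=lambda m: fitness(m, target))
    print("best offspring:", best, "score:", fitness(best, target))
```

Running this repeatedly, or looping it over generations, would be the "natural selection" part; the interesting (and hard) question is what a musically meaningful fitness function actually looks like.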
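And for the terrain idea, a rough mock-up of one possible mapping: pitch becomes elevation, note duration becomes how far that elevation stretches, and a cheap smoothing pass keeps the ground from stepping abruptly between notes. In Unity this would end up as C# feeding something like TerrainData.SetHeights, but since the mapping is the part I care about, I'm sketching it in Python; the note format and parameters are just assumptions for illustration.

```python
# Mock-up of music-to-terrain: pitch -> elevation, duration -> horizontal extent.
# Conceptual sketch only; a real Unity version would be C# writing into a heightmap.

def notes_to_heights(notes, width=128, low=36, high=84):
    """notes: list of (midi_pitch, duration_in_beats). Returns `width` heights in [0, 1]."""
    total_beats = sum(d for _, d in notes)
    heights = []
    for pitch, duration in notes:
        normalized = (pitch - low) / (high - low)              # pitch -> elevation
        span = max(1, round(width * duration / total_beats))   # duration -> extent
        heights.extend([normalized] * span)
    heights = heights[:width] + [heights[-1]] * max(0, width - len(heights))
    # simple moving-average smoothing so the terrain doesn't step between notes
    return [sum(heights[max(0, i - 2):i + 3]) / len(heights[max(0, i - 2):i + 3])
            for i in range(width)]

if __name__ == "__main__":
    melody = [(60, 1), (64, 1), (67, 2), (72, 4)]  # rising arpeggio, held at the top
    row = notes_to_heights(melody)
    print(min(row), max(row))  # the elevations rise with the melody
```

Each generated phrase could produce one row (or strip) of the heightmap, so the terrain would literally be built out of whatever the game's music is doing at the time.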