Samhain Presentation
I recently gave another presentation for my PhD. Here's what I said.
My aims and objectives for my doctoral research have been to integrate various types of machine learning and AI applications into my workflow, to see how this influences me and what effect it has on my creative practice.
This is helping me to explore the theme of collaboration with these systems and to compare it with my past and present experiences of collaborating with people. I have around 25 years of experience in various types of creative partnership, and I wanted to contrast that with these new ways of working collaboratively with machine learning and AI.
As I come near the end of my studies and write my dissertation, I have been looking back to my first year, the plans I had then, and how things have changed over that time. In my first year I had the idea of creating a software app for my PhD project. It was suggested that I not do that, as I would be extremely busy coding for the whole 4 years. Now, since the advent of ChatGPT and similar tools, I could, if I wanted to, create an application in minutes with vibe coding, using natural language to describe the type of application I want and its functionality. What once seemed like a multi-year commitment can now be prototyped in a day.
As I now write my dissertation, I have tended to focus broadly on the effect AI is having on my understanding of my creative and collaborative processes. As well as discussing what I have learned about the practical process of making art and music with machine learning and AI tools, I am also thinking and writing about my work through the lens of authors such as Yuval Noah Harari, Byung-Chul Han and Mark Amerika, who deal with the social impact and philosophy of this new technology. At the moment I am revising and reworking the text I have accumulated over the four years. I have submitted drafts of all my chapters, and I am now adding more detail, elaborating on my successes and struggles in adapting my practice to include machine learning and the trials and tribulations of that integration.
I've been asking myself: to what extent is collaborating with this technology similar to, or different from, working with humans?
This year, in June, I presented my second major PhD work, the audiovisual album Postcards. I was lucky enough to be selected by Crash Ensemble to create 7 music videos for their Postcards project, a climate-crisis-centered visual album featuring music from 7 of their new Crash Work composers. Postcards was released earlier this year, on the 9th of January, on RTE Culture online.
I also created a short documentary film, Synthetic Hallucinations, for my social media channels, to explain the work I have been doing to a more general audience, making sure to use language that could be understood by all. Its main focus was a 10-minute narration in my own AI-cloned voice (voiceovers are difficult, and cloning my own voice saved me a lot of time!), set against clips and stills of all the projects I have been doing over this time.
Additionally, my time-lapse installation, my first major PhD work from my second year, was exhibited at 126Gallery in Galway in January this year and, more recently, during this year's Clifden Arts Festival. The material for the time-lapse now spans four and a half years of photographs, and I will continue to expand the piece for as long as I live where I currently do. It has been interesting to observe this installation continuing to evolve with each showing.
Other performances include an instrumental work I composed for an animation by Cartoon Saloon, presented at an Irish Composers Collective concert during the Kilkenny alternative arts festival this summer, and a graphic score created in collaboration with guitarist Barry Halpin, which was performed in September at the Kirkos venue in Dublin. The projects above, in which I have collaborated with both humans and machines, have given me many experiences and autoethnographic journal materials that will be compared and contrasted in my final submission.
Some additional context on the theoretical side of my research:
Over the past year, I've been exploring the history of machine learning and collaboration, seeking examples that resonate with my current work. This context has helped me understand how ideas of technological collaboration have evolved, from simple tool-like applications to more embodied partnerships. The work of figures like David Cope and George Lewis offers compelling historical examples for thinking ahead.
George Lewis, a composer and trombonist, pioneered the use of AI in music through works like Rainbow Family (1984), in which live musicians improvised in real time with interactive computer systems. His Voyager system (1985–1987), developed at IRCAM in Paris, was groundbreaking in its attempt to treat machines as creative partners.
David Cope's Experiments in Musical Intelligence (EMI), begun in 1980, analyzed classical compositions and generated new works by identifying genre-specific patterns. Using early Markov models, EMI predicted musical sequences statistically, producing coherent pieces that raised questions about creativity, authorship, and machine collaboration.
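For anyone curious what that kind of Markov-style sequence prediction looks like in practice, here is a minimal sketch in Python. It is only an illustration of the general technique, built on an invented toy set of note sequences, and not a reconstruction of Cope's actual EMI software.

```python
import random
from collections import defaultdict

# Toy "corpus": invented note sequences standing in for analysed melodies.
melodies = [
    ["C4", "D4", "E4", "G4", "E4", "D4", "C4"],
    ["C4", "E4", "G4", "C5", "G4", "E4", "C4"],
    ["D4", "E4", "F4", "A4", "G4", "F4", "E4"],
]

# First-order Markov model: for each note, record which notes follow it.
transitions = defaultdict(list)
for melody in melodies:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

def generate(start="C4", length=8):
    """Generate a new melody by repeatedly sampling the next note
    from the continuations observed in the training sequences."""
    sequence = [start]
    for _ in range(length - 1):
        options = transitions.get(sequence[-1])
        if not options:  # no observed continuation for this note
            break
        sequence.append(random.choice(options))
    return sequence

print(generate())
```

Each run produces a slightly different sequence, statistically shaped by the source material, which hints at why such output can sound coherent while still raising the authorship questions mentioned above.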
What intrigues me is how these systems respond, occasionally surprise me, and take me in directions I would not usually explore alone. Working with AI involves experimentation, curation, and mutual responsiveness, but the question remains: how much is it really like collaborating with humans?
It seems that everyone now has a strong opinion for or against AI. One of the main worries is its potential to replace workers, increasing productivity without increasing wages or improving working conditions. In the arts we are no strangers to this type of discourse. And while machine learning and AI are very broad fields with many benefits, we also need to acknowledge the excessive energy consumption and environmental impact they are causing.
Moreover, the internet is rapidly overflowing with AI-generated slop, and with OpenAI's recent release of the video generator Sora this has only accelerated. There is a worry that large language models, the technology behind ChatGPT and similar tools, will soon run out of human-created knowledge to train on and will end up training on their own slop, which will likely lead to a degradation in the output of these systems.
There are those who believe super-advanced AI will bring about a utopian society, fix our greatest problems, and change us forever for the better, letting us live longer and lead less laborious lives; on the other side are dystopian believers who think a Terminator-style apocalyptic ending is on the cards. There is some evidence to support this concern: many leading researchers working on AI warn of its dangers and its potential to cause a major disaster. For example, the "Godfather of AI", Geoffrey Hinton, has been particularly vocal about halting progress on superintelligent AI as it is increasingly integrated into all kinds of systems, including autonomous weapons.
Looking back now, even in just four years, things have changed at a very rapid pace. Early AI music examples were poor by today's standards, but that is no longer the case: AI-generated music can now be indistinguishable from human-created music, and AI visual art is also becoming difficult to tell apart. This is not necessarily a step in the right direction for creatives. Reportedly, roughly 30% of music now uploaded to Spotify is fully AI-generated. Many will lose work to these systems, and the frustration among creatives is real.
However, over the course of this research, I have gathered concrete examples from my own practice of AI becoming an essential tool, and I feel that I have personally kept my artistic agency. While it can assist with idea generation, image upscaling, and repetitive tasks, it lacks the subtlety and embodied gesture that human partners bring to artistic work. My early experiments with chatbot collaboration revealed some parallels to working with people but ultimately felt more like input-output processing than genuine co-creation. Prompting has become a skill I've developed, learning to use longer, more specific text to shape outputs that reflect my style and imagination. Still, the process requires curation and interpretation, and the results often feel more like responses than dialogue.
This journey has also raised deeper questions about productivity, burnout, and the future of creative labor. AI tools make us faster, but do they make us happier, or more human? Many fellow music producers share my concern about AI's impact on the industry, which was part of my motivation to study it. Funnily enough, I now feel ready to explore forms of art that don't rely on software, especially given the environmental toll of large-scale AI infrastructure. Looking ahead, I believe collaboration with AI may deepen as systems become more embodied, perhaps through humanoid robots, but for now I see AI as an evolving tool with potential, not yet a partner with presence.
I would like to thank Dr. Eoin Callery and Dr. Óscar Mascareñas for their guidance through this PhD. It has been extremely helpful, thanks!