Thoughts Firing Actions
I’ll not lie, I was feeling so happy with my grade from Unit 2. Before receiving it, I was worried that I had completely messed up my thinking. I knew there was still a depth lacking in my work, but I just couldn’t quite put my finger on it. Then I received my feedback! It put beautifully into words exactly what I was feeling: there had to be something deeper to what I was trying to achieve. It really isn’t enough to simply use AI; the investigation has to go much further.
I decided that I would try to dig a little deeper and ‘go under the bonnet’ of AI. What exactly are AI and machine learning? How easy are they to master? Can a non-programmer access them? Could there be a creative use for AI beyond simply tapping into the vast wealth of human work it was trained on, work used without its creators ever being asked?
I decided that in order to move forward I needed to train a model of my own. I wasn’t sure where to start, so I asked my old friend ChatGPT. The bot suggested checking out Teachable Machine and RunwayML, which I subsequently did.
It was absolutely fascinating. Below is a video of how quickly and easily I trained a model to recognise when I had a hand in front of my face. It will even work with other faces, although the accuracy drops. But while the process worked, I wasn’t sure how I could use it as part of my project work.
Machine Recognises a Hand
In the video below I trained my model to react when I placed a hand in front of my face. This can be seen in the two output bars just below my face. If you would like to try this out for yourself, you can access my model from here.
It’s pretty cool and has a number of real-world applications. For example, ensuring that only your cat comes through the cat flap and not the dog!
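For anyone else curious about what’s happening ‘under the bonnet’, here is a toy sketch of the basic idea behind a two-class image classifier like the one above. It is a deliberate simplification and an assumption on my part: Teachable Machine actually fine-tunes a neural network in the browser, whereas this sketch just compares each frame to a mean ‘prototype’ of each class and turns distances into confidence scores, like the two output bars in the video.

```python
import numpy as np

def train(examples_by_class):
    """Compute one mean 'prototype' vector per class label."""
    return {label: np.mean(images, axis=0)
            for label, images in examples_by_class.items()}

def predict(model, image):
    """Return a confidence score per class, like the two output bars."""
    # Distance from the frame to each class prototype; closer = more likely.
    dists = {label: np.linalg.norm(image - proto)
             for label, proto in model.items()}
    # Turn distances into scores that sum to 1 (softmax over -distance).
    scores = np.exp(-np.array(list(dists.values())))
    scores /= scores.sum()
    return dict(zip(dists.keys(), scores))

# Fake 8x8 greyscale 'frames': bright pixels stand in for 'hand in front
# of face', dark pixels for 'no hand'. Real input would be camera frames.
rng = np.random.default_rng(0)
hand = [rng.normal(0.8, 0.05, 64) for _ in range(10)]
no_hand = [rng.normal(0.2, 0.05, 64) for _ in range(10)]

model = train({"hand": hand, "no_hand": no_hand})
result = predict(model, rng.normal(0.8, 0.05, 64))  # a new bright frame
print(max(result, key=result.get))  # the frame is classified as "hand"
```

A real classifier learns much richer features than raw pixel brightness, which is why the accuracy drops when my model sees other faces: it only has my examples to compare against.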
Although this was an interesting experiment, I wondered if it would be possible to train a model to output images based solely on my inputs. I was aware of exactly.Ai, who provide a service to do this, but I hadn’t had much luck with it previously. Therefore, on ChatGPT’s recommendation, I decided to try RunwayML. After one or two glitches I managed to train a character model on a set of ‘selfies’ that I had taken over the years. The short video below shows a number of images generated from that training set.
The AI Imposter and Me
The video below shows a selection of the selfie images that I used to train my character model, set against the AI output. The text prompt was simply my name, as my images had been attached to that label. I used a variety of styles to generate the outputs, including cinematic, 35mm and normal. The last few images are AI generations only. This model was trained on 29 images in total. I think that, above all else, the model did recognise that I have relatively long, thick hair.
Conclusion
What I concluded from my little journey into machine learning was that I can accept my images being animated by AI, because those images are still mine. In this scenario, all the AI does is remove the ‘heavy lifting’ from the animation process. However, I cannot accept AI text-to-image generation, because ultimately the models used in that process derive from the work of others.
Moving forwards, I now feel clear that all imagery should be mine and made by me, with the exception of AI animation that uses only my own images. This will allow me to continue with my desire to produce a combined human, digital and AI piece of art. I am now feeling much happier.