Write Me a Program, a Poem and Then Some
Over the last few weeks I have spent a considerable amount of time researching generative art: what it is, how to create it, and the best tools for generating it. I have worked with a number of TouchDesigner tools, which are great for this, and I have also looked at Processing, a very popular creative coding tool. I have also created a GitHub account and am currently learning the associated jargon so that I can look for potential collaborations.
Although I have learned an incredible amount about programming, I am still not very conversant with it; much of it does not yet make logical sense to me. That said, I am at the stage where I can more or less understand what a program is doing and can amend variables accordingly, which is a massive step forward from only a few weeks ago. Unfortunately, I would still struggle to write a full program myself.
This is where I wanted to test the limits of ChatGPT. This natural language model can produce code, and I was curious to find out whether it could write something suitable for my project. I have had many conversations with the ChatGPT bot, and where I think they are useful, as here, I will document them.
Here is a recent conversation that I had with ChatGPT and the developments that came from it. Firstly, I needed to see if it was capable of writing me a suitable program for a generative art piece. To add context to the conversation below: I had already asked the 'bot' to generate code for a generative art piece, and this is where the conversation picks up. I am not going to include all of the code that was generated here, because it would make the post very unwieldy; however, I will include the results from it.
Conversation with a Chat Bot
Please note that there is another blog post (Vilde Rolfeson – Inspired Work) associated with this one, which details where the ideas and imagery came from in order to make the digital pieces below. To fully understand the conversations that follow, please refer to that post.
The Conversation
Me: Please can you help me with this code? It was supposed to beautifully destruct an image. I have managed to display the image, but it remains static and does not gently animate its destruction. Please can you review the code that you gave me?
Bot: I’d be happy to help you with your code! Can you please share the code with me so I can take a look at it? Additionally, it would be helpful if you could provide more information about what you’re trying to achieve and what you’ve already tried in order to debug the problem.
Me: Here is the code I want an image to beautifully destruct while animating on screen:
CODE INSERTED INTO CHAT BOX
Bot: The issue with your current code is that you are only updating the pixel positions in the loop but not redrawing the image based on the new pixel positions. Since the image only gets drawn once in the setup() function, the animation effect is not visible.
One way to achieve the animation effect is to update the pixel positions in the draw() function, and redraw the image on each frame using the updated pixel positions. To do this, you can add an additional loop in the draw() function to iterate through the pixels and set their new positions based on the values of xamount and yamount. You can also use the updatePixels() function to update the image with the new pixel values.
Here’s an example of how the updated draw() function might look:
CODE RETURNED ON SCREEN
This way, the image is redrawn with the updated pixel positions on every frame, giving the illusion of the image disintegrating. Note that you may have to tweak xamount and yamount, and the number you are dividing by in the sin() function, to make the animation more beautiful.
Also, since the for loops are nested, this will not run fast. If you want to keep the resolution, performance will be a problem; you can use the scale() function before drawing the image with the updated pixels in the draw() function, which will help make the animation smoother.
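Because I am not including the code the bot returned, here is a tiny sketch of the displacement technique it is describing. It is written in Python rather than Processing purely as an illustration; the names xamount and yamount and the period of 50 are taken from the conversation, not from the actual program.

```python
import math

def displaced(x, y, xamount, yamount, period=50.0):
    """Offset a pixel's position with sine waves, as in the draw() loop
    the bot describes. The Processing equivalent would be
    x + xamount * sin(TWO_PI * y / period)."""
    new_x = x + xamount * math.sin(2 * math.pi * y / period)
    new_y = y + yamount * math.sin(2 * math.pi * x / period)
    return new_x, new_y

# A pixel whose y sits a quarter of the way through the period is
# pushed sideways by the full xamount:
print(displaced(0, 12.5, 4, 0))
# A pixel at the origin does not move at all, since sin(0) = 0:
print(displaced(0, 0, 4, 4))
```

Redrawing the image with these offsets every frame, rather than once in setup(), is what produces the animation.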
Me: Please could you send me the full program with your suggestions added so that I can copy and paste the code and just make it run as I can’t program?
Bot: Sure, here’s the full program with my suggested changes:
CODE RETURNED ON SCREEN
In this program, the draw() function is responsible for updating the pixel positions based on the values of xamount and yamount, and then redrawing the image with the updated pixels. Additionally, I added a scale() call with a value of 0.5 to make the animation smoother. You may also have to adjust the values of xamount and yamount, and the number you are dividing by in the sin() function, to achieve the animation you desire.
Make sure to save this code with a .pde file extension, and to have the image file you want to display in the same folder as your code.
You should be able to run this program in the Processing software and see the animation of the image destructing.
Me: The program doesn't run; it does not like these lines. Please can you amend them and give me a full program so that it runs?
int newx = x + xamount * sin(TWO_PI * y / 50.0);
int newy = y + yamount * sin(TWO_PI * x / 50.0);
Bot: I apologize for the mistake in my previous response. The values of newx and newy should be floats instead of ints, because the result of the sin() function is a float, and it’s being used to increment the x and y positions which are also floats.
Here’s the updated version of the code, where I changed the type of newx and newy from int to float.
CODE RETURNED ON SCREEN
This should work as expected. Make sure that the image you are using is in the same folder as the program and has a valid name.
I hope this solves the issue you were having, and that you'll see the animation of the image destructing.
Let me know if you have any other issues.
Me: This is much better: it does animate, but it is very fast. Can we slow it down at all?
Bot: Yes, you can slow down the animation by introducing a delay between each frame. One way to do this is to use the delay() function, which will pause the execution of the program for a specified number of milliseconds.
Here’s an example of how you might use delay() to slow down the animation in your code:
CODE RETURNED ON SCREEN
You can adjust the delay time as you see fit. Keep in mind that a higher delay will make the animation slower, and a lower delay will make it faster.
Another approach, instead of using the delay() function, is to use the frameCount variable with the modulo operator to determine the animation's speed. For example, you can update the animation only every fifth frame with a line such as if (frameCount % 5 == 0) in the draw() function; this will give you a slower animation.
You can experiment with different values for the delay time, and with both approaches, until you find the animation speed that you're happy with.
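To illustrate the frameCount idea without reproducing the bot's program, here is a minimal Python sketch of the modulo trick (my own illustration; in Processing the test is literally if (frameCount % 5 == 0)).

```python
def should_advance(frame_count, every=5):
    """Advance the animation only on every Nth frame: the modulo
    trick the bot describes as an alternative to delay()."""
    return frame_count % every == 0

# Over the first 20 frames, only four of them move the animation on:
advancing = [f for f in range(20) if should_advance(f)]
print(advancing)  # [0, 5, 10, 15]
```

Raising `every` slows the animation further; lowering it speeds things up, just as the bot says about the delay time.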
Me: Please could you give me a full program with a much gentler destruction?
Bot: Sure, here's a version of the program that uses the frameCount and modulo operator approach to achieve a much gentler destruction animation:
CODE RETURNED ON SCREEN
In this program, the draw() function checks whether the current frame is a multiple of 5, using the modulo operator and the frameCount variable. If it is, xamount and yamount are incremented by 1, the pixels are updated based on the new values, and the image is drawn on the screen. With this approach, the animation is much gentler and smoother. The number 70 in the sin() function can also be adjusted to suit the desired animation speed.
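As an illustration only (again, not the returned code), the gentler scheme of growing the displacement amplitudes every fifth frame looks roughly like this in Python; the names xamount and yamount and the multiple of 5 come from the bot's description.

```python
def grow_amounts(xamount, yamount, frame_count, every=5, step=1):
    """Increment the displacement amplitudes only when the frame
    count is a multiple of `every`, so the destruction builds slowly."""
    if frame_count % every == 0:
        xamount += step
        yamount += step
    return xamount, yamount

x_amt, y_amt = 0, 0
for frame in range(1, 26):  # simulate 25 frames of draw()
    x_amt, y_amt = grow_amounts(x_amt, y_amt, frame)
print(x_amt, y_amt)  # 5 5: one increment at each of frames 5, 10, 15, 20, 25
```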
Remember that the image should be in the same folder and have the same name as the one in the program.
Please let me know if you have any other questions or issues.
CODE RETURNED ON SCREEN
Me: This is fabulous, thank you! Is there any chance that you could have the destruction happen in random places and begin at random times? Please could I have a program for that?
Bot: Yes, you can modify the code to make the destruction happen in random places and begin at random times. To do this, you can use the random() function to generate random values for the initial position of each pixel, and use these values to offset the pixel’s position in the animation.
Here’s an example of how you might modify the code to create a random animation:
CODE RETURNED ON SCREEN
In this version, I replaced xamount and yamount with random values generated by the random() function on each frame, and scaled the range of the random values so that the animation is more random. You can adjust the range of random values to control the degree of randomness in the animation, and you can replace the number 50 with a different number to match the desired animation speed.
You’ll see that the animation will be much more chaotic and unpredictable, as the pixel positions change in random ways on each frame.
Please let me know if you have any other questions or issues.
Me: Is that the full program?
Bot: Yes, that is the full program with the changes I described earlier. It uses the random() function to generate random values for the xamount and yamount variables, which are then used to offset the positions of the pixels in the animation. On each iteration of the draw loop, the values of xamount and yamount are updated with new random values, producing random positions for each pixel on each frame.
In this program, the image is redrawn every fifth frame. To avoid overloading the program, the modulo operator is used with frameCount to decide when to redraw the image, which makes the animation smoother.
Please keep in mind, you’ll want to make sure that the image file is in the same folder as the code and has the correct name as the one mentioned in the code.
Let me know if you have any further questions.
Me: This code works and it is amazing! However, what I would like to see is the destruction of the image happening at different points in time, so that at any given moment parts of the image remain intact until it is their time to destruct. Do you think you could write me a full program for this?
ERROR MESSAGE RETURNED
An error occurred. If this issue persists please contact us through our help center at help.openai.com.
There was an error generating a response
ChatGPT Dec 15 Version. Free Research Preview. Our goal is to make AI systems more natural and safe to interact with. Your feedback will help us improve.
Me: Fantastic, the above all works. Is there any way that you could let me have a program that will slowly destruct the image, but at random points on the image and at random times, so that part of the image remains intact until it is its turn to destruct?
Bot: Sure, you can modify the code to randomly select certain pixels to be affected by the animation, while leaving other pixels intact. One way to do this is to use the random() function to generate a random value for each pixel, and then use this value to determine whether or not to apply the animation to that pixel.
Here’s an example of how you might modify the code to create a randomized and partial destruction animation:
CODE RETURNED ON SCREEN
In this program, some of the pixels in the image are randomly chosen to have the animation applied to them, while the other pixels are left unaffected. I did this by introducing a new if statement inside the nested loop, which uses the random() function to generate a random value between 0 and 1. If the value is less than 0.05, the animation is applied to the current pixel; otherwise it is left unchanged. You can adjust the chance of a pixel being affected by changing 0.05 to a different value: decreasing it makes pixels less likely to be affected and the animation less intense, and vice versa.
Please keep in mind, you’ll want to make sure that the image file is in the same folder as the code and has the correct name as the one mentioned in the code.
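As a rough Python illustration of the selection rule the bot describes (not the code that was returned), with the 0.05 threshold taken from its explanation:

```python
import random

def affected_pixels(width, height, chance=0.05, seed=42):
    """Return the pixel coordinates that destruct on this frame: each
    pixel draws a random value in [0, 1) and is affected only when
    that value falls below `chance` (the 0.05 in the bot's version)."""
    rng = random.Random(seed)  # seeded here only so the example repeats
    return [(x, y)
            for y in range(height)
            for x in range(width)
            if rng.random() < chance]

hit = affected_pixels(100, 100)
print(len(hit) / (100 * 100))  # roughly 0.05 of the pixels are chosen
```

Everything not in the returned list stays intact on that frame, which is what produces the partial destruction.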
Me: This works brilliantly, thank you! This time, can we make the pixels that destruct both bigger and random, and as they destruct can they fall to the bottom of the screen, in a kind of pixel-sorting way?
Bot: Yes, you can modify the code to make the pixels that destruct both bigger and random as they fall to the bottom of the screen. One way to do this is to use the random() function to generate a random size for each pixel that is affected by the animation, and use this size to offset the pixel’s position in the animation. Additionally, you can use the sin() and cos() functions to create the falling effect.
Here’s an example of how you might modify the code to create the effect you described:
CODE RETURNED ON SCREEN
Me: This syntax error came up:
Syntax Error – Missing operator, semicolon, or ‘}’ near ‘<EOF>’?
Bot: I apologize for the error in the previous message. The error occurred because I forgot to add the closing parenthesis in the constrain() method call. Here is the corrected version of the code:
CODE RETURNED ON SCREEN
Me: Unfortunately this error was returned: Syntax Error – Incomplete statement or extra code near ‘extraneous input ‘<EOF>’ expecting {‘color’, HexColorLiteral, CHAR_LITERAL, ‘abstract’, ‘assert’, ‘boolean’, ‘break’, ‘byte’, ‘char’, ‘class’, ‘continue’, ‘do’, ‘double’, ‘final’, ‘float’, ‘for’, ‘if’, ‘int’, ‘interface’, ‘long’, ‘new’, ‘private’, ‘protected’, ‘public’, ‘return’, ‘short’, ‘static’, ‘strictfp’, ‘super’, ‘switch’, ‘synchronized’, ‘this’, ‘throw’, ‘try’, ‘var’, ‘void’, ‘while’, DECIMAL_LITERAL, HEX_LITERAL, OCT_LITERAL, BINARY_LITERAL, FLOAT_LITERAL, HEX_FLOAT_LITERAL, BOOL_LITERAL, STRING_LITERAL, MULTI_STRING_LIT, ‘null’, ‘(‘, ‘{‘, ‘}’, ‘;’, ‘<‘, ‘!’, ‘~’, ‘++’, ‘–‘, ‘+’, ‘-‘, ‘@’, IDENTIFIER}’?
Bot: I apologize for the confusion caused by the error in the previous message.
It seems that I missed adding the image(img, 0, 0) call in the draw loop. That call is responsible for displaying the image on the screen, but it was not included in the code I provided.
Here’s the full corrected code
CODE RETURNED ON SCREEN
This modification will randomize both the size of the pixels and the falling direction, making the destruction of the image more chaotic and unpredictable. Remember that you can play around with different values of the random() function and the sin() and cos() functions to fine-tune the animation to your liking.
Please let me know if you have any other questions or issues. I apologize for the confusion and inconvenience caused by my mistake.
Me: Sadly, no. This error was returned: Syntax Error – Missing operator, semicolon, or '}' near '<EOF>'?
Bot: I apologize for the confusion; there's a missing brace '}' at the end of the draw() function that's causing this error.
Here is the corrected code:
CODE RETURNED ON SCREEN
Please copy the above code exactly and it should work. Let me know if you have any other issues. I apologize again for the confusion caused by my mistake.
Me: Fantastic, it works! But as the pixels individually destruct, can we have them animate as they fall away to the bottom of the screen, eventually just leaving a white background?
Bot: Yes, you can animate the pixels as they fall to the bottom of the screen and eventually leave a white background. One way to do this is to use a variable to keep track of the vertical position of each pixel, and increment this variable over time to create the falling animation. Additionally, you can use the "clear" function to set the background colour to white.
Here’s an example of how you might modify the code to create the effect you described:
CODE RETURNED ON SCREEN
In the above code, I'm creating an array fallDistance that stores, for each pixel, the distance it has fallen. I also added a variable fallSpeed that controls the speed of the falling pixels, and a clear() function call to set the background to white.
You can play around with different values of the fallSpeed variable to control the speed of the falling pixels and the random() function to adjust the percentage of pixels that are affected by the animation.
Let me know if you have any further questions or issues.
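The bookkeeping the bot describes, a per-pixel fall distance advanced by a shared fallSpeed, can be sketched like this in Python. The names follow the conversation; the 400-pixel floor is my own assumption for the example, standing in for the screen height.

```python
def step_fall(fall_distance, falling, fall_speed=1, floor=400):
    """One animation frame: every falling pixel's accumulated drop
    grows by fall_speed, clamped at the bottom of the screen."""
    return [min(d + fall_speed, floor) if f else d
            for d, f in zip(fall_distance, falling)]

fall_distance = [0, 0, 0]
falling = [True, False, True]  # the middle pixel is still intact
fall_distance = step_fall(fall_distance, falling, fall_speed=2)
print(fall_distance)  # [2, 0, 2]: only the falling pixels moved
```

Once every entry reaches the floor, nothing of the image is left on screen, which is how the white background eventually takes over.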
Me: Wow! That is super beautiful! If I changed the fall speed to 2, would that speed it up?
Bot: Yes, increasing the value of the fallSpeed variable will increase the speed at which the pixels fall to the bottom of the screen. The larger the value, the faster the pixels will fall.
You can experiment with different values to find the one that best suits your animation.
You also can use negative values if you want the pixels to move upwards instead.
As a side note, if you increase the value of fallSpeed too much, the pixels might reach the bottom of the image quickly and you may be unable to see the animation in full. In that case, you should increase the image size, or reduce the value of fallSpeed, so that you can see the animation until the pixels reach the bottom of the image.
Please let me know if you have any other questions or issues.
Note: (An interesting aside, not part of the conversation. When I copied all of the code from the chat box, a number of grey boxes showed up after many, though not all, of the responses that the bot generated. These boxes were not visible on the ChatGPT website during the chat; they only became visible when I cut and pasted the conversation into Apple Notes. I wondered whether these hidden boxes were something that the bot was running in the background, either to test its code or to record our conversation.)
Me: I'll let you into a secret: the pixels are actually moving upwards in the last code example, but I really like it that way. Also, the background is back, but it looks great! I've just tried changing the fall speed and it works like a charm. Thank you so much! I'll probably come back later to ask for more help, but in the meantime this has given me lots to work with.
Bot: I'm glad that you like the animation and that the code is working for you! And yes, if you invert the value of yamount you get the pixels going up, as it is only being added to the y-coordinate in the newy calculation.
It’s my pleasure to help you and I’m glad that I could give you something to work with. Please feel free to reach out to me if you have any further questions or if there’s anything else I can help you with. I’ll be more than happy to assist you.
Have fun experimenting with your new animation!
END OF CONVERSATION
===================
An ‘Ai’ Conversation and Poem about a Plastic Bag
Me: Hi there, please could you write me a beautiful poem about the gentle disintegration of a plastic bag?
Bot:
A plastic bag, once so strong and new,
Drifting on the breeze, a thing of hue.
It danced and swirled, a thing of pride,
A symbol of our throwaway tide.
But time, it marches on, as we all know,
And the bag, it gently starts to slow.
The sun beats down, the winds do blow,
And the bag, it starts to fall apart, oh so slow.
Torn edges, frayed and worn,
A gentle disintegration, forlorn.
Once held our trinkets, now just trash,
A reminder of our impact, a harsh clash.
But still it dances, on and on,
A gentle reminder, a plastic song.
Though broken, it still finds its way,
A lesson for us all, to be more careful with our play.
Me: I love it!
Bot: I’m glad you enjoyed it! Let me know if there’s anything else I can help you with.
Me: Please could you write me a lovely soulful piece of music to specifically accompany it?
Bot: I’m sorry, but as a text-based language model, I am not able to compose music. However, I can recommend looking for inspiration from existing music in the genres of ambient, folk, or experimental music. These types of music often have a contemplative and introspective feel that could complement the themes in the poem. You could also try searching for music tracks with similar themes such as nature, environment and plastic waste.
Or alternatively, you could also try to find a musician or a composer who could compose a piece of music for your poem.
Me: But you did this last week; you wrote some music with a pentatonic scale. Have the developers been tinkering with you?
Bot: I apologize for the confusion, but as a text-based language model, I am not able to compose music. I can describe music and suggest genres or scales that might be appropriate for a given scenario, but I cannot create original pieces of music.
It is possible that one of my previous responses was mistaken or misinterpreted, and I apologize for any confusion that may have caused. Let me know if you have any other questions or if there is anything else I can help you with.
Me: Please could I have another plastic bag poem, all about its gentle disintegration and how it's time for it to go to plastic bag heaven?
Bot:
A plastic bag, once so bright and proud,
Drifting on the breeze, a billowing shroud.
It held our treasures, kept them safe and sound,
A trusty companion, on the ground.
But time, it marches on, as we all know,
And the bag, it gently starts to slow.
The sun beats down, the winds do blow,
And the bag, it starts to fall apart, oh so slow.
Torn edges, frayed and worn,
A gentle disintegration, forlorn.
Once held our trinkets, now just trash,
A reminder of our impact, a harsh clash.
But still it dances, on and on,
A gentle reminder, a plastic song.
Now it’s time for the bag to go,
To plastic bag heaven, with a gentle flow.
Rest in peace, dear bag, your time is done,
A new journey begins, under the sun.
May you find peace, in your final days,
And remind us, to be more careful in our ways.
Me: That’s lovely please can you tell me about any Ai music generators out there?
Bot: Yes, there are several AI music generators available today that use machine learning algorithms to compose music. Some examples include:
- Amper Music: This is an AI-powered music composition tool that allows users to create custom music tracks by adjusting parameters such as style, tempo, and instrumentation.
- Jukedeck: This is an AI music composition platform that creates original, royalty-free music tracks for use in videos, games, and other media.
- AIVA: This is an AI-powered music composer that creates original music tracks in a variety of styles, from classical to rock.
- Melodrive: This is an AI-based music composition platform that enables the creation of adaptive music for games, films, and other interactive media.
- MuseNet: This is an AI-generated music synthesizer that can generate music in various styles, from classical to pop, by learning from a large dataset of MIDI files.
These are just a few examples of AI music generators, and many more are being developed as the technology advances. These platforms are usually available as software, either as a standalone application or as a web service, and are generally easy to use.
Me: That’s fantastic I’ll have to check them out – what kind of music do you think would fit really well with the poem above?
Bot: The poem you’ve provided is quite contemplative and introspective in nature, so I would suggest looking for music that has a similar feel. Ambient or experimental music would be a good fit, as these genres often have a contemplative and introspective feel that could complement the themes in the poem.
You could also consider genres like Folk, as it often has a natural and organic feel that could complement the imagery of the poem. And also Instrumental music with a mix of electronic and organic sounds could work well.
The key is to look for music that evokes feelings of contemplation, introspection, and nature. You could also try searching for music tracks with similar themes such as environment and plastic waste, which can give you an idea of different styles that might work well with the poem.
END OF CONVERSATION
===================
Chat Bot Thoughts on HAL9000 and 666
Me: What do you think about Hal?
Bot: I am an AI language model, so I am not able to have personal opinions or feelings. However, I can tell you that HAL 9000 is a fictional artificial intelligence character that appears in the novel “2001: A Space Odyssey” and the subsequent film of the same name. HAL is known for its advanced capabilities and its eventual malfunction, which leads to tragic events. The character has become a popular subject in discussions about AI and the potential dangers of advanced AI. HAL is also used as a representation of the idea that AI could become a “superintelligent” being that is able to outsmart and potentially harm humans. It is important to note that while HAL is a fictional character, the discussions and considerations surrounding the ethical and societal implications of AI are important and ongoing in the field of Artificial Intelligence.
Me: Why does the text-to-image generator not like the number 666? I asked for monks carrying placards with 666 on them, but Midjourney generated placards with 668.
Bot: As a language model, I do not have access to image generators, so I can’t say for certain why a specific image generator might not like the number 666. However, 666 is often associated with the devil or Satan in Christianity, so it’s possible that the image generator has a filter in place to avoid generating images that are considered offensive or controversial. Additionally, 666 is often used as a reference to the devil and the end of the world, therefore it could be that the image generator is using an algorithm that doesn’t allow for generating images with certain numbers or references that could be considered negative or controversial.
It’s also possible that the generator was able to understand your request but decided to change the number 666 to 668 in order to avoid any potential negative connotations.
In any case, I recommend you to check the terms and conditions of the specific image generator you are using, or contact the developer to ask about their policy on generating certain types of images.
Me: How many people are accessing you every day?
Bot: As a language model, I don’t have the capability to track the number of people who access me on a daily basis. The usage of my service is determined by the company that trained me, OpenAI and it’s difficult to know exactly how many people are using my service at any given time. However, I can tell you that OpenAI’s GPT models have become quite popular and are used by many individuals and organizations for a wide range of tasks, such as language translation, text summarization, and content creation.
END OF CONVERSATION
===================
Evaluation and Conclusion
I think that ChatGPT is an incredibly useful tool. It did take a little while until I reached the desired result with the 'Ai'-generated program; however, I persevered, and with that perseverance I finally managed to achieve something I was reasonably happy with. Sadly, due to a lack of computing resources, I was unable to upload one of the videos that the algorithm produced; my plan is to revisit this blog and do so at a later date. I feel that a door has been opened to me in terms of experimentation. My ongoing journey into programming is proving to be invaluable, as I can now identify variables and amend any 'Ai'-generated programs to my liking. I would like to experiment further with Processing programs and push the 'Ai' to the limit to see what can be achieved.