Artificial Intelligence and Generative Art Experiments (ongoing)
Experimental, artificial intelligence, generative art.

Here is some work I’ve created while experimenting with generative art. While DALL·E and its peers were cool for a hot second, they quickly felt limiting and restrictive as tools. Case in point: it is rather easy to tell that an artwork was created with Midjourney or the like.

I’ve opted instead for running various Stable Diffusion models inside Google Colab’s Python environment. The setup was not user-friendly and often downright frustrating, but I ended up with some impressive results along the way. Below are two music videos and a typography project I worked on using these tools.


“Meilleur et mieux” by KNLO — Lyric Video


The typography was designed in Apple Motion and laid out in a 3D environment, with movement that matched the pace of the background imagery, which was generated entirely with Stable Diffusion and the Deforum notebook. This Python notebook provided a workflow for generating each image from a prompt combined with the pixels of another image (e.g. the previous frame in the sequence). Different variables made it possible to create a sense of motion within the frames. The idea was to create a visually interesting backdrop for the lyrics without requiring any footage, as the artist didn’t want to shoot any more videos of himself that year. Everything in the video was created by me, from the type and motion design, to the prompts and the image generation, to the editing and compositing.
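For anyone curious how that frame chaining works, here is a minimal sketch of the idea using Hugging Face’s diffusers library rather than the Deforum notebook itself. The model name, the zoom step, and the strength value are illustrative assumptions, not the exact settings I used.

```python
# Minimal sketch of Deforum-style frame chaining: each new frame is
# generated from the prompt plus the pixels of the previous frame.
# Model, strength, and zoom step are illustrative, not my real settings.
import os
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("seed_frame.png").convert("RGB")  # starting image
prompt = "abstract painted clouds, warm palette, film grain"
os.makedirs("frames", exist_ok=True)

for i in range(120):
    # A slight crop-and-resize zoom between frames creates a sense of motion.
    w, h = frame.size
    crop = frame.crop((8, 8, w - 8, h - 8)).resize((w, h))
    # Low strength keeps most of the previous frame's pixels, so
    # consecutive frames stay visually coherent.
    frame = pipe(prompt=prompt, image=crop, strength=0.35,
                 guidance_scale=7.5).images[0]
    frame.save(f"frames/{i:05d}.png")
```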


“In Other News” by Zhao — Music Video

This was the first video I created using Stable Diffusion and Deforum. It evokes the dreamlike feeling of the music and spoken-word piece by Zhao, who was interested in a combination of Tarot and the I Ching, and in letting the computer make mistakes in its interpretations of the prompts. The prompts were intentionally vague, with occasional words alluding to the lyrics and a baseline prompt that kept the visuals consistent.

The process involved a lot of trial and error and Excel sheets, and I developed it from scratch because there weren’t many tools available for this type of work at the time.
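The spreadsheets essentially held a prompt schedule: a fixed baseline appended to a wandering fragment keyed to frame numbers. A hypothetical Python sketch of that structure (the frame numbers and wording here are made up for illustration):

```python
# Sketch of a keyframed prompt schedule, the kind of table I kept in
# spreadsheets. Frame numbers and prompt text are purely illustrative.
BASELINE = "dreamlike, soft focus, 35mm film"

KEYFRAMES = {
    0:   "a hand of tarot cards scattered on water",
    90:  "hexagram lines dissolving into smoke",
    210: "a city seen through falling coins",
}

def prompt_for(frame: int) -> str:
    """Return the most recent keyframed fragment plus the fixed baseline."""
    key = max(k for k in KEYFRAMES if k <= frame)
    return f"{KEYFRAMES[key]}, {BASELINE}"

# Frame 120 falls between keyframes 90 and 210, so it uses the 90 fragment.
print(prompt_for(120))
```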



36 Days of Type Design Challenge — 2023



36 Days of Type was a yearly social design challenge: create one letterform each day for 36 days, working through the 26 letters of the alphabet and the 10 digits. I wanted to try using generative art for it, but early experiments proved that AI didn’t understand letters (at least not at the time; today there are plenty of generative tools able to typeset complete sentences).
Instead, I found a way to generate images based on grayscale z-depth maps, a sort of reverse of the logic behind portrait photos on the iPhone. I designed the letters in grayscale, using the lightness of each pixel to indicate its distance from the camera, as a reverse depth map, and fed these maps to the Stable Diffusion model as a starting point. From there, I could prompt just about anything and the desired letter shape prevailed through each iteration. My idea was to design midcentury modern lamps, and I used that as part of my prompts in most of the examples above; the full set of depth maps I created is shown alongside them for reference.
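A sketch of that depth-driven setup, using the diffusers depth-to-image pipeline as a stand-in for the Colab notebook I actually used. The file names, the prompt, and the strength value are illustrative assumptions.

```python
# Sketch of driving Stable Diffusion with a hand-drawn depth map,
# roughly the technique described above. File names, prompt, and
# strength are illustrative; the pipeline normalizes the depth itself.
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

# The grayscale letterform: lighter pixels read as closer to the camera.
letter = Image.open("letter_A_depthmap.png").convert("L").resize((512, 512))
depth = torch.from_numpy(np.array(letter)).float() / 255.0
depth = depth.unsqueeze(0)  # shape (1, H, W), as the pipeline expects

image = pipe(
    prompt="a midcentury modern lamp, studio photography",
    image=letter.convert("RGB"),  # also serves as the init image
    depth_map=depth,
    strength=0.9,  # high strength lets the prompt repaint the content
).images[0]
image.save("letter_A_lamp.png")
```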






An added bonus of the depth maps was the ability to use them as z-depth information for video as well. Music was generated using Google’s MusicLM as the cherry on top.
© MMXXIV Over the Breaks · Recent Work by Nik Brovkin · All rights reserved