Aindri H Patra

Creative Computers

Over the summer I took a class called “Computing in the Arts”. The goal of the class was to make connections between psychology, art, and computers. We would essentially have to write code, in a language of our choice, to produce poetry, music, or art.

At the most basic level, we used a statistical method known as the Markov model. A Markov model is essentially a matrix of transition probabilities: for each possible current event, it gives the probability of each event that could come next, so future states depend only on the current state. For instance, in a poem, there could be a 75% chance that a verb follows a noun and a 10% chance that a noun follows a verb.
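
To make this concrete, here is a minimal sketch of a Markov-chain line generator in Python. The parts of speech, the transition probabilities, and the tiny vocabulary are all made up for illustration; they are not from the class assignment.

```python
import random

# Hypothetical transition probabilities between parts of speech.
# Each row says: given the current state, how likely is each next state?
transitions = {
    "noun": {"verb": 0.75, "noun": 0.10, "adjective": 0.15},
    "verb": {"noun": 0.60, "adjective": 0.30, "verb": 0.10},
    "adjective": {"noun": 0.90, "adjective": 0.10},
}

# Tiny made-up vocabulary for each part of speech.
words = {
    "noun": ["moon", "river", "machine"],
    "verb": ["whispers", "paints", "dreams"],
    "adjective": ["silver", "restless", "quiet"],
}

def generate_line(length=6, state="noun"):
    """Walk the Markov chain, choosing each next state from the current one."""
    line = []
    for _ in range(length):
        line.append(random.choice(words[state]))
        next_states = list(transitions[state])
        probs = [transitions[state][s] for s in next_states]
        state = random.choices(next_states, weights=probs)[0]
    return " ".join(line)

print(generate_line())
```

A real version would estimate the transition matrix from an actual corpus of poems instead of hard-coding it.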

Using this model, we can randomize different features of the piece of art in order to develop one final piece of work. This is done completely by the computer: art without a person. Another such computer model is the Generative Adversarial Network (GAN).

First, let's take a look at the pictures below:

Can you tell which was not made by a human?

When I presented this image to my class this summer, only one person could tell which was made by the computer. Even then, he admitted that he completely guessed (the picture he chose was the first letter of his first name).

The answer is F. This piece was generated by an art robot named Cloudpainter, trained to replicate the style of 20th-century American abstract expressionism.

The other 5 pieces were painted by humans.

When we look at artificial intelligence and neural networks, which are algorithms meant to mimic the way humans think, we see that they have had various limitations. The main issue is a “lack of imagination” in deep neural networks (DNNs):

  • DNNs need large sets of labeled data, which forces humans to explicitly define what each data sample represents.

  • They need high-quality data, and labeling all of the training data is expensive and time consuming.

  • Deep learning is meant for classification, not creation.

    • Its purpose is to understand data.


A GAN tries to fix this by using two neural networks to create and refine data. Its entire purpose is to generate new data from the old.

  • Instead of mapping raw data to a specific output, it moves backward: the generator traces back from the output to generate the input data.

  • Its purpose is to trick the system into thinking the input it provides is authentic.

These two networks are called the generative network and the discriminative network.

The generator in a GAN generates new instances of data after being trained on input. Its job is to “fool” the discriminator by increasing its rate of error: essentially, to make it difficult for the discriminator to tell which image came from the training set and which was produced by the generator.

The discriminator tries to detect the fake generated data. Its goal is to minimize classification error: to successfully distinguish the fake image from the real image in the training set every time.

As a result, both neural networks improve. The generator creates more convincing fakes thanks to the discriminator's feedback. In turn, the discriminator's job of distinguishing between images becomes harder, and it gets better at doing so.
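
This back-and-forth is often summarized as a single minimax objective. The formula below is the standard formulation from the original GAN paper, not anything specific to the class or to Cloudpainter: the discriminator D tries to maximize the value, while the generator G tries to minimize it.

\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

Here D(x) is the discriminator's probability that x is real, and G(z) is an image generated from random noise z.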

Taking a closer look at the GAN algorithm, let's examine each of the steps (a small code sketch follows the list):

  1. The generator takes in a set of random numbers and uses them to generate an image.

  2. The generated image is then fed into the discriminator along with the stream of images from the real dataset.

  3. The discriminator takes both real and fake images and returns a probability between 0 and 1, where 1 means authentic and 0 means fake.

  4. The discriminator is in a feedback loop with the ground-truth images.

  5. The generator itself is in a feedback loop with the discriminator.
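
Putting those five steps together, here is a minimal sketch of the training loop in PyTorch. The tiny fully connected networks, the random stand-in “dataset”, and all of the sizes are placeholders chosen only to show the structure of the loop; this is not the actual code used by Cloudpainter or in the class.

```python
import torch
import torch.nn as nn

# Placeholder sizes, chosen only for illustration.
noise_dim, data_dim, batch_size = 16, 64, 32

# Generator: random numbers in, fake "image" (here just a vector) out.
G = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: a sample in, probability that it is authentic out.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(200):
    # Stand-in for a batch of real training images.
    real = torch.randn(batch_size, data_dim)

    # Steps 1-2: the generator turns noise into fakes, which are fed to D alongside real data.
    noise = torch.randn(batch_size, noise_dim)
    fake = G(noise)

    # Steps 3-4: the discriminator scores both, aiming for 1 on real and 0 on fake.
    d_loss = loss_fn(D(real), torch.ones(batch_size, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(batch_size, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Step 5: the generator is updated using the discriminator's feedback,
    # trying to make D label its fakes as real (1).
    g_loss = loss_fn(D(fake), torch.ones(batch_size, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

In a real GAN, the two networks would usually be convolutional and the real batch would come from an image dataset, but the alternating update pattern stays the same.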


Let's look at GANs in the context of Cloudpainter:



This is the process of Cloudpainter, the AI art machine that created the abstract painting among the six shown earlier.


1: Generative Adversarial Networks (GANs) that imagine portraits.

2: Convolutional Neural Networks (CNNs) that apply a style to imagined portraits.

3: Visual Feedback Loops (VFLs) that paint and analyze one brushstroke at a time.


However, there are limitations to GANs:

  • “Pseudo-imaging”: is it really using “imagination/creativity”?

  • They still need a lot of training data.

  • They can’t create new things; they can only combine what they already know in new ways.

  • It is hard to maintain a “balance” between the generator and discriminator:

    • If the discriminator is too weak, it will accept anything the generator produces, no matter how accurate it actually is.

    • If the discriminator is much stronger, it will constantly reject the generator’s results, forming an endless loop of disappointing data.

    • The generator and discriminator need to be constantly optimized against each other.


While GANs do a good job of addressing the limitations of creativity in computers, the incorporation of machine learning technology into the creation of visual art raises many questions about the future of creativity.


  • An AI-generated portrait recently sold for $432,500 at auction. In this case, and in the cases of other artwork created by machines, who “owns” the rights to the work? The person who wrote the code? The company that implemented it? The machine itself?



  • If most people cannot distinguish between machine-made and human-made art, is “creativity” really unique to humans?

Just some things to think about :))
