Q&A: Robbie Barrat on training neural networks to create art

June 12, 2018, 9:00 a.m.

When Robbie Barrat isn’t conducting research at the Center for Biomedical Informatics Research, he spends his spare time training neural networks to create art.

Eighteen-year-old Barrat has attracted a lot of attention with his work. In the last few months, he has guest lectured in a Continuing Studies class on artificial intelligence and neuroscience, and the art his computers have produced has appeared on the cover of Bloomberg Businessweek.

Prior to coming to Stanford, Barrat taught himself to code and trained a neural network to rap in the style of Kanye West. He worked as an intern on artificial intelligence in autonomous vehicles at NVIDIA after a Quartz article about his project attracted the attention of a company executive. He has yet to attend college as a student.

The Daily’s Joe Dworetzky sat down with Barrat last week to talk about his work.


The Stanford Daily (TSD): Tell me a little bit about your background.

Robbie Barrat (RB): I’m a recent high school graduate from West Virginia. I came to California pretty much immediately after [high school graduation] to work at NVIDIA. The company noticed a project I published where I taught a neural network to write rap songs like Kanye West, and it offered me an internship straight out of high school because of that. I’ve been in California since. Recently, I left NVIDIA to work in a bioinformatics lab at Stanford.


TSD: How did you get into artificial intelligence?

RB: It was self-taught, I guess, because there [are] no classes for that stuff in West Virginia. [My high school] had a computer class, but it was supposed to be computer repair, [and] the football coach was the teacher. [AI] was just what seemed really neat: getting computers to try and be creative and make art. I had to learn how to do stuff with AI first, but getting computers to make art has always been my goal.


TSD: Why did you make art your goal?

RB: Trying to see if computers can be creative has always been super interesting to me. I really want to work on projects that deal more with the fundamental theory of trying to get computers to come up with creative solutions to problems, etcetera. Art is just a nice medium to work with until then.

I had tried a very similar project previously, but I didn’t really know much about what I was doing, so the results were only 128 by 128 pixels. They were very small, like postage stamp-sized landscape paintings. But now I’ve gotten a little better at programming AI, and I am able to generate high-res landscapes and high-res nude portraits.


TSD: Can you describe the process through which you have computers making art?

RB: A neural network is basically an algorithm that learns from data, right? Traditionally, if you think about programming, you think about a programmer coding rules into the computer, but neural networks are the opposite. The neural network will just look at a bunch of data and then figure out the rules. If a neural network looks at a ton of landscape paintings, it’ll figure out that typically there’s ground and grass at the bottom, trees in the middle and sky at the top. It’ll figure out the general structure and the rules that people typically follow when they make a landscape painting.

To make paintings I had to train two neural networks: a generator network and a discriminator network. Together, the two networks in this setup are called a GAN, which stands for “generative adversarial network.” The generator network tries to make paintings, and then the discriminator network judges the paintings and tells the generator how well it’s doing at making landscape paintings. The discriminator looks at real paintings and at the output of the generator as it tries to tell the difference between the two.

At the beginning of their training, both are really, really bad at their jobs. The discriminator is bad at differentiating, and the generator is just making noise. Then the generator gets the discriminator’s feedback, and the discriminator is trained too, because it has access to the dataset [of real paintings] and the fake paintings that the generator made. Over time, the generator and discriminator are trying to fool each other. The generator is trying to make paintings the discriminator classifies as real, and the discriminator is trying to get better at telling real from fake before that can happen.
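For readers who want to see the shape of this adversarial setup, here is a minimal sketch of one GAN training step in Python with the PyTorch library. Everything in it, from the layer sizes to the 64-by-64 resolution and the learning rates, is an illustrative assumption for a toy example, not Barrat’s actual code, which involved much larger models trained for about two weeks.

    # A toy GAN: all sizes and hyperparameters here are illustrative
    # assumptions, not Barrat's actual configuration.
    import torch
    import torch.nn as nn

    latent_dim = 100  # length of the random noise vector the generator starts from

    # Generator: turns random noise into a fake "painting"
    # (a flattened 3 x 64 x 64 image, i.e. 12,288 numbers).
    G = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, 3 * 64 * 64), nn.Tanh(),  # pixel values in [-1, 1]
    )

    # Discriminator: scores an image; a high score means "looks real."
    D = nn.Sequential(
        nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),  # raw logit; the loss below applies the sigmoid
    )

    loss_fn = nn.BCEWithLogitsLoss()
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

    def training_step(real_images):
        """One round of the fooling game. real_images is a batch of
        flattened real paintings, shape (batch, 3 * 64 * 64)."""
        batch = real_images.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # 1) Train the discriminator: real paintings should score as real,
        #    the generator's current fakes as fake.
        fakes = G(torch.randn(batch, latent_dim))
        d_loss = (loss_fn(D(real_images), real_labels)
                  + loss_fn(D(fakes.detach()), fake_labels))  # detach: leave G alone
        opt_D.zero_grad()
        d_loss.backward()
        opt_D.step()

        # 2) Train the generator: it is rewarded when the just-updated
        #    discriminator classifies its fakes as real.
        g_loss = loss_fn(D(fakes), real_labels)
        opt_G.zero_grad()
        g_loss.backward()
        opt_G.step()

The detach() call is what keeps the game adversarial: during the discriminator’s turn it learns from the generator’s fakes without passing any gradient back to the generator, so each network improves only on its own move.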


TSD: How long must you run datasets through the networks?

RB: To generate good high-res landscape paintings probably takes about two weeks. The discriminator looks at a collection of about 30,000 paintings over and over again. It probably looked over the whole dataset like at least 10,000 times, so it’s looked at a few hundred million paintings in total.


TSD: I understand you trained separate networks for landscapes and nude portraits. Were the processes of generating these paintings different?

RB: [When generating landscapes, the neural network] actually went through a bout of generating dark images. The whole training process took about two weeks, but a week and a half in, it was just generating these really, really dark paintings constantly. But after that it started generating these super bright, saturated paintings again, so it was able to fight through its blue period.

The nudes turned out kind of horrible, but in a good way. The generator is just trying to get a good score from the discriminator. It figured out that the discriminator only looks at low-level, local features like folds of fat and belly buttons. If the generator feeds the discriminator a painting with all of these features in it, then it’ll pass as real, and the generator doesn’t have to organize them in the shape of a body. The generator fooled the discriminator and was able to just feed it these awful blobs of flesh with tendrils poking out, and they passed. The discriminator said, yeah, that’s a nude portrait. At that point, learning kind of stopped, because the generator didn’t have to get any better.

When the nude portrait thing didn’t work, I was kind of pissed off. But the results were still pretty cool. If [the generator] made good nude portraits, that would have been exciting for like 10 minutes, because we’ve already seen so many nude portraits; people have been painting those for the better part of history. [The neural network] is making a new style of painting, even if it’s by error. I think that’s really interesting, and I would definitely try to pass that off as being creative.


TSD: Do you consider yourself an artist?

RB: I like to think that I am. I don’t know any traditional art at all; I don’t know how to paint or sketch or anything. Instead of working with paint or something, my medium is AI.


TSD: What are your next steps? What are you going to do with yourself?

RB: I don’t know. I do want to go to [college]. I’ve been looking at art school, but the problem with that is that they usually don’t offer math classes. I need math classes; I need to know math to do my work. I want to try to get AI to make its own novel types of art instead of just imitating stuff.


This transcript has been lightly edited and condensed.


Contact Joseph Dworetzky at duret ‘at’ stanford.edu.

Despite being older than any two other Daily staffers combined, Joe remains awake late enough at night to contribute political cartoons and occasional articles to The Daily. He is a Fellow in the 2018 DCI cohort.
