
We’re Letting a Neural Network Loose In an Art Museum’s Collections Database

Students in Professor Matias del Campo’s Space, Time, and Artificial Intelligence class are asking questions about the relationship between artificial intelligence, architecture, and creativity. Can artificial intelligence dream new visions of architecture into being? 

Written by Olivia Ordoñez

Students learn to test this question by creating an artificial neural network, a computer process meant to emulate certain aspects of the human brain (in this case, dreaming and hallucinating) to solve problems that traditional algorithms struggle with. Then, using a data set of images from UMMA's collection, the neural net will learn the various features of these images and use them as the basis to produce its own architectural structures and styles.

 

When a neural net produces its own images, scientists often call this dreaming, in part because we lack a better word for it.

 

Even though much about how the human brain works remains a mystery, as Professor del Campo explains, "What we know is that certain things that happen in our brain [like dreaming] can be manifested as mathematical problems that we can apply to a neural network." Recently, del Campo says, "Artworks that are coming out of these ideas are starting to really gain traction in terms of recognition as pieces of art."

 

In 2018, the Paris-based art collective Obvious trained an algorithm on Western art portraits from the Middle Ages to the present and watched the neural network produce art based on this collection.

 

Hugo Caselles-Dupré, a member of the art collective, explained the art generation process to Christie's auction house as follows: "The algorithm is composed of two parts. On one side is the Generator, on the other the Discriminator. We fed the system with a data set of 15,000 portraits painted between the 14th century to the 20th. The Generator makes a new image based on the set, then the Discriminator tries to spot the difference between a human-made image and one created by the Generator. The aim is to fool the Discriminator into thinking that the new images are real-life portraits. Then we have a result."
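What Caselles-Dupré describes is a generative adversarial network (GAN). As a rough, hypothetical illustration of that two-player game (this is not Obvious's actual code), here is a toy GAN in Python where the "portraits" are replaced by simple numbers clustered around a target value, and the Generator learns to produce numbers the Discriminator can't tell apart from the real ones:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Squash a score into a 0-1 'how real does this look?' probability."""
    x = max(-60.0, min(60.0, x))  # clamp to avoid overflow in exp()
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: a 1-D stand-in for portrait images -- numbers near 4.0.
def sample_real(n):
    return [random.gauss(4.0, 0.5) for _ in range(n)]

# Generator G(z) = w_g*z + b_g turns random noise z into a "fake" sample.
# Discriminator D(x) = sigmoid(w_d*x + b_d) guesses whether x is real.
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr, n = 0.03, 64

for step in range(3000):
    xs = sample_real(n)
    zs = [random.gauss(0, 1) for _ in range(n)]
    gs = [w_g * z + b_g for z in zs]

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. get better at spotting the Generator's fakes.
    gwd = sum((1 - sigmoid(w_d * x + b_d)) * x for x in xs) / n \
        + sum(-sigmoid(w_d * g + b_d) * g for g in gs) / n
    gbd = sum(1 - sigmoid(w_d * x + b_d) for x in xs) / n \
        + sum(-sigmoid(w_d * g + b_d) for g in gs) / n
    w_d += lr * gwd
    b_d += lr * gbd

    # Generator step: gradient ascent on log D(fake),
    # i.e. get better at fooling the Discriminator.
    zs = [random.gauss(0, 1) for _ in range(n)]
    ups = [(1 - sigmoid(w_d * (w_g * z + b_g) + b_d)) * w_d for z in zs]
    w_g += lr * sum(u * z for u, z in zip(ups, zs)) / n
    b_g += lr * sum(ups) / n

# The Generator's fakes should now average near the real data's 4.0.
fake_mean = sum(w_g * random.gauss(0, 1) + b_g for _ in range(5000)) / 5000
print(round(fake_mean, 2))
```

The same adversarial loop, scaled up to convolutional networks and 15,000 portrait images, is what produced the Belamy paintings.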

 

In this case, the results were a series of portraits that Obvious characterized as a look at a historical, but fictional, family: the Belamys. One image, Portrait of Edmond Belamy, was auctioned at Christie's in October 2018 and sold for $432,500, a sum far higher than estimators had predicted.

Professor del Campo is well attuned to the wide-reaching implications of Obvious's success and the reactions of the art world.

 

"Human agency is not anymore like a pyramid, where human agency is at the top and everything else is below that. But rather, we have now a plateau where there's different players on the same level: you have human agency and AI agency. There's a lot of aspects of agency and authorship involved here," del Campo said. 

 

As del Campo reminds us, "This whole problem is far deeper than you think. It's not only about 'Can an AI do a piece of art?'" 

 

He asks his students to consider the questions of art and agency in the course. Who can be said to have "created" or designed the results of the neural net? The neural net itself? The people that coded the neural net? The architects and artists that designed the pieces in UMMA's collection fed to the neural net? The students that had the idea to feed that specific data set to that particular neural net?

 

But you don't have to be a student in Professor del Campo's class to have run into an algorithm trained on millions of photos. In fact, if you've logged into a Google system since the early 2000s, chances are you've helped train such an algorithm yourself.

 

According to Google, which bought reCAPTCHA from Carnegie Mellon University in 2009, "CAPTCHAs play an important role in keeping the internet spam-free and making everyone's experience a little better." If you tried to buy concert tickets online in the early 2000s, you might remember waiting for the exact on-sale moment to refresh Ticketmaster's website, only to find out the tickets had sold out in minutes. In cases like this, scalpers use bots to swarm a website, buy up all available tickets, and then list them for resale.

 

Enter the CAPTCHA. By making you prove you're a human before you can check out with tickets, CAPTCHAs help genuinely interested human buyers have a chance at seeing Taylor Swift from the front row (side note: remember when we could go to live concerts?).

 

Google also says, "reCAPTCHA also makes positive use of this human effort by channeling the time spent solving CAPTCHAs into digitizing text, annotating images, and building machine learning databases."

 

Let's dig in.

A reCAPTCHA test showing hard-to-parse words

Remember the CAPTCHA tests like the one above? These images are generated from scans of books. Every time a user confirms the words, they are doing two things: 

(1) Because computers struggle to identify the characters in the photograph, users prove to the site they’re trying to access that they are, in fact, human, because they can identify characters that a computer couldn’t.

(2) At the same time, Google feeds the user-generated CAPTCHA solution to its text-recognition algorithms in order to provide it with more data and confirmed knowledge points. With every solved CAPTCHA, the algorithm gets better at recognizing the features of optical characters, even when they’re highly distorted.
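The classic two-word reCAPTCHA did both jobs at once: one "control" word with a known answer verified the user was human, while answers to the unknown scanned word were pooled across users until enough of them agreed. A hypothetical sketch of that consensus scheme (the function names and vote threshold here are invented for illustration):

```python
from collections import Counter

def check_human(answer, known_word):
    """Step (1): the control word. A match suggests a human typed it."""
    return answer.strip().lower() == known_word

def label_unknown(answers, min_votes=3):
    """Step (2): pool many users' transcriptions of the unknown word.

    Returns the consensus transcription once at least min_votes users
    agree, otherwise None (the word needs more answers).
    """
    word, count = Counter(a.strip().lower() for a in answers).most_common(1)[0]
    return word if count >= min_votes else None

# One user passes the control word...
print(check_human("Overlooks ", "overlooks"))  # -> True

# ...and four users' attempts at a blurry scanned word reach consensus,
# even though one of them misread it.
print(label_unknown(["morpheus", "morpheus", "morpheus", "morphens"]))  # -> morpheus
```

Each consensus label then becomes a confirmed training example for the text-recognition algorithm.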

After years of users like you providing reCAPTCHA with millions of data points for these algorithms, computers have actually become better than human beings at recognizing textual characters. So, Google and reCAPTCHA have moved on to new challenges. 

Now, you’re much more likely to encounter a reCAPTCHA test that asks you to identify a crosswalk, fire hydrants, traffic lights, storefronts, or even minivans. Three-dimensional objects are, currently, harder for machines to recognize than text, but as the algorithms get more data, and more successfully solved puzzles, to integrate into their models, they get better at recognizing our everyday objects as well. 

reCAPTCHA tests featuring fire hydrants, sidewalks, and traffic lights might be used to improve Google Maps, helping the program better chart our built environment, or as data points for self-driving cars, making them safer on the road with each successfully identified crosswalk.

Of course, the more data we help train neural nets with, the easier we make it for them to surveil us with things like facial recognition, too.

Like Professor del Campo’s students, who dream up new architectural styles with the help of their neural nets, you, too, participate in bringing machine intelligence closer to that of a human’s every time you prove you’re a human on the internet.

But even as we train neural nets to better recognize our world, AIs that look, talk, and act like humans are still a far-off dream.

"What our body can do is absolutely incredible in terms of complexity," del Campo said. "Trying to do this synthetically is something we're really quite far from."

So while you may not have to wonder whether your neighbor is an android anytime soon, it's clear that algorithms and machine learning are profoundly changing our world. Professor del Campo and his students will be at the forefront of using these technologies to push the boundaries of what is possible when we use neural nets to help design our world. 

Stay tuned for how Professor del Campo, his students, neural nets, and the UMMA collection start to redefine authorship and agency to dream new architectural possibilities into being. Soon, you may be working, sleeping, and living in the house that AI built.
