Professor del Campo is well attuned to the wide-reaching implications of Obvious's success and the reactions of the art world.
"Human agency is not anymore like a pyramid, where human agency is at the top and everything else is below that. But rather, we have now a plateau where there's different players on the same level: you have human agency and AI agency. There's a lot of aspects of agency and authorship involved here," del Campo said.
As del Campo reminds us, "This whole problem is far deeper than you think. It's not only about 'Can an AI do a piece of art?'"
He asks his students to consider these questions of art and agency in the course. Who can be said to have "created" or designed the results of the neural net? The neural net itself? The people who coded the neural net? The architects and artists who designed the pieces in UMMA's collection fed to the neural net? The students who had the idea to feed that specific data set to that particular neural net?
But you don't have to be a student in Professor del Campo's class to have run into an algorithm trained on millions of photos. In fact, if you've logged into a Google system since the early 2000s, chances are you've helped train such an algorithm yourself.
According to Google, which bought reCAPTCHA from Carnegie Mellon in 2009, "CAPTCHAs play an important role in keeping the internet spam-free and making everyone's experience a little better." If you tried to buy concert tickets online in the early 2000s, you might remember waiting for the exact on-sale moment to refresh Ticketmaster's website, only to find out the tickets had sold out in minutes. In cases like this, scalpers use spam bots to attack a website, buy up all the available tickets, and then list them for resale.
Enter the CAPTCHA. By requiring you to prove you're a human before you can check out with tickets, CAPTCHAs give genuinely interested human buyers a chance at seeing Taylor Swift from the front row (side note: remember when we could go to live concerts?).
Google also says that "reCAPTCHA also makes positive use of this human effort by channeling the time spent solving CAPTCHAs into digitizing text, annotating images, and building machine learning databases."
Let's dig in.
Remember CAPTCHA tests like the one above? These images are generated from scans of books. Every time a user confirms the words, they are doing two things:
(1) Because computers struggle to identify the characters in the photograph, users prove to the site they’re trying to access that they are, in fact, human, because they can identify characters that a computer couldn’t.
(2) At the same time, Google feeds the user-generated CAPTCHA solution to its text-recognition algorithms in order to provide it with more data and confirmed knowledge points. With every solved CAPTCHA, the algorithm gets better at recognizing the features of optical characters, even when they’re highly distorted.
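The two-step loop described above can be sketched in a few lines of Python. This is a minimal illustration of the classic two-word reCAPTCHA idea, not Google's actual implementation: the user is shown one "control" word the computer already knows (to prove humanity) alongside one word the text-recognition algorithm failed on (to harvest a human-confirmed label). All class and variable names here are hypothetical.

```python
# Hypothetical sketch of the two-word reCAPTCHA flow; not Google's real API.
from collections import Counter

class WordCaptcha:
    def __init__(self, control_word, unknown_word_id):
        self.control_word = control_word        # word the computer already solved
        self.unknown_word_id = unknown_word_id  # scanned word the OCR failed on
        self.votes = Counter()                  # answers collected from verified humans

    def submit(self, control_answer, unknown_answer):
        """Return True if the user passes the human check."""
        # (1) Prove humanity: the user must correctly read the known word.
        if control_answer.strip().lower() != self.control_word:
            return False
        # (2) Harvest training data: record this human's reading of the
        # word the text-recognition algorithm could not decipher.
        self.votes[unknown_answer.strip().lower()] += 1
        return True

    def consensus(self, threshold=3):
        """Once enough humans agree, treat the answer as a confirmed label."""
        if self.votes:
            word, count = self.votes.most_common(1)[0]
            if count >= threshold:
                return word
        return None

captcha = WordCaptcha("morning", unknown_word_id="scan_0042")
for _ in range(3):
    captcha.submit("morning", "upon")  # three verified humans give the same reading
print(captcha.consensus())             # prints: upon
```

Aggregating answers across many users, rather than trusting any single one, is what lets the system turn noisy human input into reliable training labels.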
After years of users feeding reCAPTCHA millions of data points for these algorithms, computers have actually become better than human beings at recognizing textual characters. So Google and reCAPTCHA have moved on to new challenges.
Now, you’re much more likely to encounter a reCAPTCHA test that asks you to identify a crosswalk, fire hydrants, traffic lights, storefronts, or even minivans. Three-dimensional objects are, currently, harder for machines to recognize than text, but as the algorithms get more data, and more successfully solved puzzles, to integrate into their models, they get better at recognizing our everyday objects as well.
ReCAPTCHAs of fire hydrants, sidewalks, and traffic lights might be used to improve Google Maps, helping the program better chart our built environment, or as data points for self-driving cars, making them safer on the road with each successfully identified crosswalk.
Of course, the more data we help train neural nets with, the easier we make it for them to surveil us with things like facial recognition, too.
Like Professor del Campo’s students, who dream up new architectural styles with the help of their neural nets, you, too, participate in bringing machine intelligence closer to that of a human’s every time you prove you’re a human on the internet.
But even as we train neural nets to better recognize our world, AIs that look, talk, and act like humans are still a far-off dream.
"What our body can do is absolutely incredible in terms of complexity," del Campo said. "Trying to do this synthetically is something we're really quite far from."
So while you may not have to wonder whether your neighbor is an android anytime soon, it's clear that algorithms and machine learning are profoundly changing our world. Professor del Campo and his students will be at the forefront, using neural nets to push the boundaries of what is possible when we design our world with machines.
Stay tuned for how Professor del Campo, his students, neural nets, and the UMMA collection start to redefine authorship and agency to dream new architectural possibilities into being. Soon, you may be working, sleeping, and living in the house that AI built.