How Van Gogh is Training Neural Networks to Dream of Electric Poppies

Taking advantage of our digital age, we gathered a panel of Curriculum/Collection instructors and other field experts to discuss the possibilities of combining art practice and creation with emerging machine learning, or “artificial intelligence,” technologies.

Curriculum/Collection instructors Matias del Campo (Architecture 509 Space, Time, and Artificial Intelligence), Ivo Dinov (Health Sciences 650 Data Science and Predictive Analytics), and Maegan Fairchild (Philosophy 298 Metaphysics: Art and Ontology), along with Dave Choberka, UMMA Curator for University Learning and Programs, and Kathleen Creel (Postdoctoral Fellow, Stanford University), discussed issues of art, authorship, and creation. 

In the first video, our panelists discussed questions related closely to creation: If an algorithm produces a composite portrait based on the work of thousands of artists, who is the “artist”? And how is this challenging our traditional understandings of creation? What are some of the exciting possibilities of using machine learning to dissect larger questions of meaning and authorship?

In the second video, Creel asks the panelists how the training data these neural networks are fed shapes the kind of art they’re capable of (re)producing; whether the technology has evolved to the point where it can produce art that a given artist would recognize, or even mistake, as their own; and to what extent it can be used in art restoration. What do Google DeepDream, Jackson Pollock, and Vincent Van Gogh all have in common? Watch to find out.

Related Link: The Artist's Sanction in Contemporary Art