On Friday evening, the Gray Area Foundation for the Arts and Research at Google will host "DeepDream," a special exhibit and auction in San Francisco featuring artwork made using neural networks.
To create the works on display during Friday's event, artists first trained neural networks on natural images, teaching them to distinguish objects and parse them into high-level components. Once trained, the networks were told to “imagine” new images based on the rules and associations they had learned.
Google's open-source DeepDream software represents one technique for generating new images with a trained neural network, and it's behind several of the pieces in the exhibit. Essentially, DeepDream starts by showing an initial image to the network, which visually parses and interprets it. In a series of repeated steps, the image's pixels are then adjusted incrementally, via gradient ascent on the network's activations, to amplify that initial interpretation, often with surreal results.
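For the curious, here is a minimal sketch of that loop, written in PyTorch as a modern stand-in (Google's released DeepDream code was a Caffe-based notebook); the layer, step size, iteration count, and file name are illustrative placeholders, not the exhibit artists' actual settings:

```python
# A minimal DeepDream-style loop, sketched in PyTorch (an assumption;
# Google's released code was a Caffe-based notebook). Layer, step size,
# iteration count, and "input.jpg" are illustrative placeholders.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)   # only the image's pixels should change

# Capture the activations of one intermediate layer with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(target=output))

preprocess = T.Compose([
    T.Resize(512),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("input.jpg")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):                       # the "series of repeated steps"
    model(img)
    loss = activations["target"].norm()   # how strongly the layer responds
    loss.backward()
    with torch.no_grad():
        # Gradient ascent: nudge the pixels so the layer responds even
        # more strongly, enhancing the network's initial interpretation.
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```

Un-normalizing and clamping `img` at the end would yield a viewable picture; each pass amplifies whatever patterns the chosen layer already detects, which is why eyes and animal-like shapes tend to bloom out of ordinary photographs.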
Expanding the possibilities, neurons closer to the input image respond to simple features, while neurons deeper in the network respond to more complex ones. So, "depending on the depth of the neural-network layer that’s targeted, different forms and features are obtained, often leading to interesting new recombinations of knowledge elements the network has learned," the exhibit's presenters noted.
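As a rough illustration of that depth choice, again assuming the torchvision GoogLeNet from the sketch above, retargeting the loop is just a matter of hooking a different named layer; the layer names below are real, while the shallow-versus-deep characterizations are general tendencies rather than guarantees:

```python
# Choosing the depth to "dream" at, assuming the same torchvision GoogLeNet
# as in the sketch above.
import torchvision.models as models

model = models.googlenet(weights="IMAGENET1K_V1").eval()

def target_layer(net, name, store):
    """Hook the named layer so the dream loop amplifies its features."""
    layer = dict(net.named_modules())[name]
    return layer.register_forward_hook(
        lambda module, inputs, output: store.update(target=output))

activations = {}
handle = target_layer(model, "inception3a", activations)  # shallow: edges, textures
# ...run the dream loop from the sketch above, then retarget deeper:
handle.remove()
handle = target_layer(model, "inception5b", activations)  # deep: object-like forms
```

In practice, shallow targets tend to produce brushstroke-like textures and patterns, while deep targets hallucinate recognizable objects.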
The work of 10 artists will be included in Friday's exhibit, and organizers will auction off the entire limited-edition collection of more than two dozen pieces to support the Gray Area Foundation's work in bridging the worlds of art and technology.