Neural Networks and the Digital Hallucination

By Chris Iverach-Brereton - June 20, 2016

Artificial Neural Networks. The name alone conjures up every science fiction trope about artificial intelligence, but unlike so much "technobabble" that appears in popular culture, Artificial Neural Networks (or ANNs for short) are a very real thing, used every day by millions of people who know nothing about them.

Rather than explain exactly what ANNs are - something Welch Labs and Computerphile have both done a reasonable job of already - I want to talk about some personal experiences I've had working with them. Because honestly, ANNs are really cool, and I want more people to get excited about them for what they are and what they can do, not just for the fact that they make a great buzzword for time-travelling assassin robots.

Machine Learning

There's something intrinsically awesome and terrifying about the idea of computers that can learn. Science fiction is overflowing with stories of benevolent artificial intelligences that help reinforce our own humanity or serve as our protectors and friends (e.g. Data from Star Trek, Andrew from Bicentennial Man, Robbie the Robot from Forbidden Planet). But for every "good" AI story, there's a dark parallel: computers learning our own weaknesses and overthrowing human civilization (e.g. Skynet from the Terminator franchise, the Cylons from Battlestar Galactica, GLaDOS from the Portal games). Fortunately such existential threats are - at the moment, anyway - highly unlikely. Computers, for all their raw number-crunching power, are pretty stupid sometimes.

Part of this stupidity is due to human error. When I was doing my undergraduate degree in Computer Science, one of my profs told us an anecdote about a massive 1980s American military research project into using ANNs to create automated weapon systems. The goal was to produce a camera system that could accurately detect tanks hidden in the forest.

The scientists collected 100 photographs of camouflaged tanks surrounded by trees and another 100 photographs of stands of trees without tanks among them. They set aside 50 photographs from each set to test with later, and trained their system on the other half of the photos.
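As an aside, this "hold some photos back" step is the classic train/test split. A minimal sketch of the idea - with invented file names and labels, since I obviously don't have the Pentagon's photos - might look like this:

```python
# A toy illustration of the holdout split described above. The file names and
# labels are made up; the point is that the test photos never touch training.
from sklearn.model_selection import train_test_split

photos = ["photo_%03d.jpg" % i for i in range(200)]
labels = [1] * 100 + [0] * 100  # 1 = tank hidden in the trees, 0 = just trees

train_photos, test_photos, train_labels, test_labels = train_test_split(
    photos, labels, test_size=0.5, stratify=labels, random_state=42)

# Train only on (train_photos, train_labels); report accuracy only on the rest.
```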

The neural network, after being given the training photographs, eventually picked out a pattern, and was able to easily identify the presence of tanks in both the training data and the 100 photographs initially set aside for testing purposes. Success confirmed! The scientists sent their software off to the Pentagon.

It is at this point that I should point out a very fun quirk of most machine learning algorithms: there is no easy way to tell what pattern the system has identified - only that there is a pattern, and that the system has correlated that pattern with the intended result.

Well, as it turns out, our Pentagon-hired scientists' system had a fatal flaw. When the Pentagon tested the system with their own photographs, they found that it was incapable of identifying tanks with any useful accuracy. A coin flip would have done just as well at determining whether there was a tank in a stand of trees.

As it turns out, in the scientists' original 200 photographs, the 100 pictures of tanks had all been taken on cloudy days, while the 100 pictures of empty woods had all been taken on sunny days. The system had learned to identify the colour of the sky, not the presence of tanks. It had literally learned that "if the sky is grey, there's a tank in the woods."

Points for identifying the obvious pattern, but unless your enemy happens to choose a cloudy day to attack, your automated weapon systems aren't likely to fire many shots. The story, as told by my professor, concludes with the scientists admitting that they goofed and the project being de-funded, and it has served as a warning for years to come that allowing computers to autonomously control the firing mechanism on weapons of any kind is ultimately probably a Very Bad Idea©.

On the other end of the spectrum, we have seen some fantastic breakthroughs in machine learning and artificial intelligence. Chess, long considered one of the holy grails of artificial intelligence, was all but conquered as long ago as 1997, when Deep Blue beat Garry Kasparov, the world chess champion, in a match played under standard tournament conditions. Earlier this year AlphaGo beat one of the world's premier Go players, Lee Sedol, under tournament conditions. Outside the realm of board games, IBM's Watson defeated two of the best Jeopardy! players ever in a multi-day match in 2011, demonstrating an impressive ability to process natural language.

Inventing Patterns

Machine learning algorithms, including ANNs, are very good at identifying patterns where patterns exist. But they can also find patterns where no actual pattern exists. Just as humans will see faces in random arrangements of circles and lines, computer systems can latch onto apparent patterns that are really just background noise.

My personal favourite example of this phenomenon - with the dial cranked up to 11 - is Google's Deep Dream.

Google uses ANNs to help classify images automatically, identifying each picture's contents so that when you perform a Google Image Search the right images appear for the right search keywords. But to make sure these classification systems were working properly, engineers at Google designed a piece of software that basically ran the ANN backwards: instead of feeding an image in and getting keywords out, they would feed a keyword in and the ANN would produce an image corresponding to its internal model of what that keyword meant. In a way, the system was being asked to produce an image of its Platonic ideal of whatever keyword you fed into it.
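Here's a rough sketch of what "running the network backwards" means in practice: start from random noise and nudge the pixels, by gradient ascent, toward whatever makes the network most confident about a chosen keyword. This isn't Google's actual code - the pretrained model, class index, and step counts below are just illustrative assumptions:

```python
# Sketch only: gradient ascent on the *input image* to maximise one class score.
# The classifier and the target class are arbitrary stand-ins.
import torch
from torchvision import models

model = models.googlenet(pretrained=True).eval()
target_class = 207  # an arbitrary ImageNet class index

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from random noise
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(img)[0, target_class]  # how strongly the net "sees" that keyword
    (-score).backward()                  # ascend the score by descending its negative
    optimizer.step()
    img.data.clamp_(0, 1)                # keep the pixels in a displayable range

# `img` now approximates the network's internal idea of the chosen keyword.
```

The real tools pile on tricks like image jitter and smoothing so the result looks less like static, but the core of it is just this loop.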

In essence, Google's engineers were trying to avoid the same fate as our misguided Pentagon scientists: by running the ANN backwards, they could verify that the system hadn't learned to correlate sky colour with the presence of a tank, or made some other mistake that would be equally obvious to a human.

This system produced some strange results. But there's a certain artistry to these digital hallucinations. 

This tool was further extended to allow the system itself to scan an image and enhance the patterns and features it identified. By taking the enhanced image and running it through the network again, you can reinforce those detected features, gradually morphing what was once a picture of a leaf into some kind of psychedelic bird-insect creature.
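In code, that feedback loop is surprisingly small. Here's a hedged sketch of the idea - the stock VGG network, layer choice, and step sizes are my own assumptions, not the actual Deep Dream recipe:

```python
# Sketch only: instead of a class score, boost whatever activations a chosen
# layer already produces for the current image, then feed the result back in.
import torch
from torchvision import models

vgg = models.vgg16(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
LAYER = 20  # an arbitrary mid-level convolutional layer

def dream_step(img, n_steps=20, lr=0.01):
    img = img.clone().requires_grad_(True)
    for _ in range(n_steps):
        acts = img
        for i, module in enumerate(vgg):
            acts = module(acts)
            if i == LAYER:
                break
        loss = acts.norm()  # "how strongly does this layer light up?"
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
    return img.detach()

# Each pass exaggerates the features the network detected in the previous one,
# which is how a leaf gradually turns into a psychedelic bird-insect creature.
img = torch.rand(1, 3, 224, 224)
for _ in range(10):
    img = dream_step(img)
```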

This process of iteratively finding and enhancing the patterns in an image became the genesis of Deep Dream: a new kind of "visual remix" tool for artists, as well as a way for machine learning engineers to check in on their networks and make sure they're learning properly.

Dreaming Videos

While running individual images through Deep Dream makes for some interesting visuals, the real craziness happens when you use Deep Dream on a video. Seeing the random shapes twist and squirm across the frame is like experiencing a hallucination without the need for pharmaceuticals.

For Burning Man 2015 our company was asked to help make an interactive video installation. Users would come in and take a short video of themselves in one of several booths we made, and the videos of everyone who came through would play on giant screens located around the exhibit's central space. To add to the Burning Man vibe, I wrote a program using an open-source version of Deep Dream that would take random user videos and enhance them. And by "enhance" I mean "make them look like something out of a Lovecraftian nightmare."
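The program itself was conceptually simple: pull the frames out of a video, hallucinate on each one, and stitch them back together. Here's a hedged sketch of that pipeline - `deep_dream_frame` is a hypothetical stand-in for whichever open-source Deep Dream implementation you use, and the file names are made up:

```python
# Rough sketch of the frame-by-frame pipeline. `deep_dream_frame` is a
# hypothetical helper wrapping an open-source Deep Dream implementation.
import glob
import os
import subprocess

os.makedirs("frames", exist_ok=True)
os.makedirs("dreamed", exist_ok=True)

# 1. Explode the source video into numbered PNG frames.
subprocess.run(["ffmpeg", "-i", "booth_video.mp4", "frames/%05d.png"], check=True)

# 2. Hallucinate on every frame individually.
for path in sorted(glob.glob("frames/*.png")):
    dreamed = deep_dream_frame(path)  # returns a PIL-style image (assumption)
    dreamed.save(path.replace("frames/", "dreamed/"))

# 3. Reassemble the processed frames, copying the original audio back in.
subprocess.run(["ffmpeg", "-framerate", "30", "-i", "dreamed/%05d.png",
                "-i", "booth_video.mp4", "-map", "0:v", "-map", "1:a?",
                "-c:a", "copy", "dreamed_video.mp4"], check=True)
```

The catch is that Deep Dream can take anywhere from seconds to minutes per frame, which is why processing whole videos eats so much time.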

The installation was a huge success, and everyone involved had a great time. But I wanted to do more. 

When our company was approached to make some content to play between sets at the SpaceLand 2016 concert, I knew I had to break out Deep Dream again. I chose three music videos from bands that were performing at the show and ran them through my Burning Man program. Unfortunately, time was against me, so I had to work with pretty low-resolution videos in order to get them processed before the show, but in the end the results looked pretty good, especially when overlaid with the original videos.
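For the overlays, one simple approach (an illustration, not necessarily the exact command I used) is to average each original clip with its Deep Dream version using ffmpeg's blend filter, so the hallucination shimmers on top of the recognizable footage:

```python
# Sketch: blend the original and the Deep Dream version 50/50 with ffmpeg.
# Assumes both videos have the same resolution and length.
import subprocess

subprocess.run(["ffmpeg", "-i", "original.mp4", "-i", "dreamed_video.mp4",
                "-filter_complex", "[0:v][1:v]blend=all_mode=average",
                "overlaid.mp4"], check=True)
```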

Deep Dream Gallery

The following images are individual frames from the videos I made for SpaceLand 2016. The original frame is on the left, with the frame run through Deep Dream on the right. 

Still #1 from Ghost Twin's "Here We Are The Night"

Still #2 from Ghost Twin's "Here We Are The Night"

Deep Art

An interesting new development in the world of ANNs for artistic expression is the use of neural networks to simulate and re-create the artistic style of arbitrary images. This technique, originally published by researchers at the University of Tübingen, is perhaps most widely known through the commercial website DeepArt.io. DeepArt lets users upload pairs of images: a content image and a style image. DeepArt's servers then re-draw the content image in the style of the style image.
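Under the hood, the technique optimizes a new image to match the deep features of the content photo and the feature correlations (Gram matrices) of the style image. Here's a heavily compressed sketch of that idea - the layer indices and weights are typical choices, not the paper's or DeepArt's exact recipe:

```python
# Sketch only: optimize one image toward the content photo's features and the
# style image's Gram matrices, using a frozen pretrained VGG as the "eye".
import torch
from torchvision import models

vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYER = 21                 # a deep conv layer captures "what" is in the image
STYLE_LAYERS = [0, 5, 10, 19, 28]  # several depths capture texture at different scales

def features(img):
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i == CONTENT_LAYER or i in STYLE_LAYERS:
            feats[i] = x
    return feats

def gram(f):
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)  # channel correlations = "style"

def style_transfer(content, style, steps=300, style_weight=1e6):
    target = content.clone().requires_grad_(True)  # start from the content photo
    c_feats, s_feats = features(content), features(style)
    opt = torch.optim.Adam([target], lr=0.02)
    for _ in range(steps):
        t_feats = features(target)
        content_loss = torch.mean((t_feats[CONTENT_LAYER] - c_feats[CONTENT_LAYER]) ** 2)
        style_loss = sum(torch.mean((gram(t_feats[i]) - gram(s_feats[i])) ** 2)
                         for i in STYLE_LAYERS)
        loss = content_loss + style_weight * style_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return target.detach()
```

Usage is just `stylized = style_transfer(content_tensor, style_tensor)` with both images preprocessed into 1x3xHxW tensors; the style_weight knob controls how far the result drifts from the photo toward the painting.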

Open-source versions of these style-transfer networks are also available for anyone to download and run on their own computer, though a certain degree of technical expertise is required.

Unlike Deep Dream, with its sometimes grotesque, hallucination-like output, DeepArt and its kin are capable of producing images that are genuinely pleasing. Depending on the style image you use, you can produce images that look like impressionist paintings, works of the great masters, or digital remixes.

Given how cool the moving hallucinations of Deep Dream videos looked, I was really curious to see how Deep Art would work on a video. The idea of a shimmering, living canvas just seemed like an awesome visual effect, and I couldn't find many examples of it online. So I modified my Burning Man video script, but instead of running every frame through Deep Dream, I ran each one through Neural Style.

So far I've only run one short video through this procedure. Neural Style takes a very long time to run, and my computer lacks a GPU with the oomph to really do it well. Rendering a short (~1:30) video took the better part of a week. With better hardware I can probably bring that way down.

However, despite the fairly low resolution I had to use, the initial results look pretty promising. For the style image I used Munch's "The Scream." I definitely want to do more with this kind of artistic rendering. With powerful enough hardware I'm hoping I can actually create a real-time video effect. Imagine a magic mirror made out of a webcam and a monitor, where you can choose in real time which famous artist's work your reflection should be drawn like - anything from Caravaggio to Da Vinci to Van Gogh, or random textures like bricks and circuit boards. There are so many possibilities, and I can't wait to start trying them all out!

Deep Art Gallery

The following images were created by me using the Neural Style library.

Chris, the original image

Chris vs Picasso's Self-Portrait

Chris vs Judith Beheading Holofernes

Chris vs Woman with a Hat

Chris vs The Matrix

Chris vs Escher's Sphere

Chris vs Seated Nude

Chris vs Starry Night

Chris vs Mona Lisa

Chris vs Circuit Board

Chris vs The Scream

Chris vs Escher's Infinite Staircase