The world welcomed the news of Google Deep Dream’s open-sourcing a few weeks ago. It intrigued all kinds of people, from scientists to artists, and invited everyone to run images through the software. The results are creepily trippy images that seem to conjure dog faces and eyes out of thin air.
No, it’s not magic. It is, however, a great example of an artificial neural network (ANN) at work. An ANN approximates functions and “learns” from millions of training inputs, gradually adjusting its network parameters until the desired classifications are achieved.
One thing researchers were not counting on was that a network trained to discern among images could also produce them. The network draws on everything it has learned to generate a single desired object, say a banana or a measuring cup, out of random noise.
Google Deep Dream uses a number of layers of neurons, each communicating with other neurons. The image is processed and iterated layer after layer, until one final result, your very own acid-inspired image, is produced. This process is called Inceptionism.
At every layer, the ANN extracts higher-level features of the image. One layer might focus on interpreting edges, while the next looks for basic shapes, like leaves. The highest layers put together those various outputs and produce one final interpretation.
For this reason, eyes and dog faces might eventually be “recognized” in a seemingly blank wall or a person’s shoulder, because the ANN interpreted the edges and shapes it found there as exactly those things.
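The layer hierarchy described above can be sketched as a chain of functions, each consuming the previous layer’s output. This is a deliberately toy illustration, not Deep Dream’s real architecture: the layer names and operations here are hypothetical stand-ins for learned convolutional filters.

```python
import numpy as np

# Toy sketch of the layer hierarchy: edges -> shapes -> interpretation.
# All three "layers" are hypothetical; a real network learns its filters.

def edge_layer(img):
    # Lowest layer: respond to horizontal intensity changes (edges).
    return np.abs(np.diff(img, axis=1))

def shape_layer(edges):
    # Middle layer: pool edge responses over each row into coarser evidence.
    return edges.mean(axis=1)

def interpretation_layer(shapes):
    # Top layer: one score summarizing how strongly features were detected.
    return float(shapes.mean())

stripy = np.tile([0.0, 1.0], (8, 4))   # 8x8 image full of vertical edges
score = interpretation_layer(shape_layer(edge_layer(stripy)))
# A flat image produces a score of 0; the stripy one scores high.
```

A real network stacks dozens of such stages, and each one is learned from data rather than hand-written, but the composition idea is the same.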
Once you feed Deep Dream an image, you allow the software to interpret it in its own way. The ANNs involved typically stack 10 to 30 layers and can interpret the same input in very different ways.
If you go deeper and run the process in reverse, some incredible images can be formed. Starting from a random-noise image, the final generated picture can be as complicated as it is beautiful, and it is the result of pure neural functions.
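The reversed process above is, at its core, gradient ascent: instead of adjusting the network to fit the image, you adjust the image to excite the network. The sketch below is a minimal, hypothetical stand-in for that idea, using a single linear “layer” in place of a trained deep network so the gradient is trivial to compute.

```python
import numpy as np

# Minimal sketch of "dreaming" from noise via gradient ascent.
# A single random matrix w stands in for a trained layer's feature detector;
# real Deep Dream backpropagates through many convolutional layers instead.

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))  # hypothetical learned feature detector

def activation(img):
    # How strongly the "layer" responds to the image.
    return float(np.sum(img * w))

def dream(steps=100, lr=0.1):
    img = rng.normal(scale=0.01, size=(8, 8))  # start from random noise
    for _ in range(steps):
        grad = w           # d(activation)/d(img) for this linear layer
        img += lr * grad   # gradient *ascent*: amplify what the layer sees
    return img

noise = rng.normal(scale=0.01, size=(8, 8))
dreamed = dream()
# The dreamed image excites the layer far more than raw noise does.
```

With a real network, the same loop run at different layers yields different imagery: low layers amplify textures and strokes, high layers amplify whole objects like dogs and eyes.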