This is the technology behind the Google Photos Assistant


Google Photos is one of those applications that leaves you open-mouthed, for several reasons. Not only does it keep a cloud backup of all your photos, it lets you search your photos by their content and even takes the liberty of creating improved versions of your photos, as well as animations.

A week ago we showed you 23 examples of what the Google Photos Assistant can do with your photos. Best of all, you don't have to do anything except turn on backup and wait for Google to work its magic.

But how does this magic work? How does Google manage to tell your photos of your cat apart from those of your café con leche? How does it know when to put a photo in a frame with an artistic effect, and when a collage might be more interesting?

Patterns, patterns, patterns…

Dog With Hat

Google Photos would not be half of what it is without the support of Google's visual recognition technology, which lets you perform complex searches within your gallery. It is not an infallible technology, but as time passes and more people use it, it becomes more and more accurate.

On the other side of the cloud, Google's servers have no eyes to see your holiday photos at the beach, but they can recognize certain patterns and act accordingly. The technology goes by the obvious name of pattern recognition, and it is the basis of many other services we use every day, such as voice recognition or weather forecasting.

For pattern recognition, Google uses neural networks that extract several layers of information from each photograph. Low-level layers extract basic information, such as the photograph's fundamental characteristics or its general edges, while high-level layers detect more sophisticated features and even entire objects.
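
To make that layering concrete, here is a minimal sketch in Python with PyTorch (purely illustrative, not Google's actual model) of how stacked convolutional layers progress from low-level edge detectors to high-level object detectors:

```python
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    """Toy convolutional network: each stage sees larger, more abstract patterns."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            # Low-level layers: respond to edges, colors and simple textures
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Mid-level layers: combine edges into corners, curves and object parts
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # High-level layers: respond to whole objects ("dog", "beach", ...)
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One 224x224 RGB photo in, one score per label out.
logits = TinyImageClassifier()(torch.randn(1, 3, 224, 224))
```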

Outline of pattern recognition (Wikipedia)

Google still has no idea what your beach photo actually depicts, but it is more than likely that the neural network can successfully pick out some of the objects and features that make it up: a blue sky, the sea, sand and several people.

Training the machine

But where do these neural networks get the initial information to learn how to recognize certain objects? The truth is that in this we do not differ much from machines: all it takes is training.

If we developed image recognition software without resorting to neural networks, we would probably use a series of rules to recognize a motorcycle: it is a vehicle with two wheels, a seat, a handlebar and a body thicker than that of a bicycle. And the algorithm would probably work properly, as long as it was given a perfect photo of a motorcycle.

Neural networks allow the system to "learn" what a motorcycle is. Not just the two-wheeled kind, but also three-wheelers with a sidecar, in a thousand and one shapes and colors, with a rider aboard or parked, seen in profile, head-on, partially hidden behind another car… Programming all those variables by hand would be impossible, but luckily the neural network can learn to recognize them.
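
As a toy illustration of that learning process, the following sketch trains the TinyImageClassifier from the sketch above on hypothetical labeled examples; the network is never given rules for "motorcycle", only photos and answers:

```python
import torch
import torch.nn as nn

model = TinyImageClassifier(num_classes=2)   # motorcycle / not a motorcycle
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a real labeled dataset: random "photos" and random labels.
photos = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(photos), labels)   # how wrong were the predictions?
    loss.backward()                         # propagate the error backwards
    optimizer.step()                        # nudge the weights to do better
```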

TV or Monitor

In this regard Google has it easy, because hundreds of millions of people use its services every day, training the machine whether they mean to or not. For example, every time you search Google Images for a "dog with cowboy hat" and open one of the results, you may be giving it a hint that the image shows exactly that. Likewise, the neural network has the content of the entire web to feed its knowledge indefinitely.

Other ways of training the system are rather more direct. From Google Photos itself you can delete results that do not match a search by selecting them and choosing Remove from the results, although for now this option only appears in the automatically suggested searches under the "Things" section.

Cat and Dog. Wait a minute… this is not a dog

In June 2015, Google engineers surprised us by visually showing how their neural networks see the world. By making adjustments to the code, the neural network is not only able to detect patterns, but also to generate images from the patterns it has learned.

Besides serving to create downright scary images (and even music videos), this technology is useful for verifying that the system is learning concepts properly.
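
Google published this technique under the name DeepDream. The core idea, sketched below with torchvision's pretrained GoogLeNet as a stand-in for Google's internal networks (and an arbitrarily chosen layer), is to run gradient ascent on the image itself so the patterns a layer has learned become visible:

```python
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()

activations = {}
def hook(module, inputs, output):
    activations["layer"] = output

# Amplify whatever patterns this mid-level layer has learned to detect.
model.inception4c.register_forward_hook(hook)

image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(50):
    optimizer.zero_grad()
    model(image)
    loss = -activations["layer"].mean()   # gradient *ascent* on the activations
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)               # keep pixel values valid
```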

Dreams

An example Google used is a dumbbell. By forcing the neural network to express its concept of a dumbbell, they found that the artificial intelligence considered it should always be accompanied by a muscular arm holding it.

The vast majority of dumbbell images the system had picked up during its training included an arm, which resulted in flawed training.
Dumbbell

In short, Google's image recognition technology gives Google Photos a rough idea of what is happening in each snapshot. That information is vital for the Assistant to generate those automatic creations that surprise us so much, but on its own it would not be enough in many cases. And that is where metadata comes in.

Metadata, the missing ingredient

You are probably aware of the amount of information embedded in each picture you take with your mobile camera. The most obvious is the date and time, but Google can often also form a rough yet reliable idea of the location of each photo.
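
Any program can read this embedded EXIF information. Here is a minimal sketch using the Pillow library (a recent Pillow version is assumed, and photo.jpg is a hypothetical file):

```python
from PIL import Image, ExifTags

with Image.open("photo.jpg") as img:
    exif = img.getexif()

# Date and time the photo was taken (EXIF tag "DateTime").
print("Taken:", exif.get(ExifTags.Base.DateTime))

# GPS coordinates live in a sub-directory of the EXIF data.
gps = exif.get_ifd(ExifTags.IFD.GPSInfo)
print("GPS:", dict(gps))
```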

Metadata

Metadata gives context to the information obtained by the image recognition technology. A beach photo that is part of a rapid burst of shots will look great as an animated GIF, while several shots taken over a short time interval may look better in a collage.

The Google Photos creations that rely most heavily on metadata are collages (photos of people taken on the same day) and animations (generated from similar photos taken within a short time). However, other less obvious ones, such as panoramic images, also take it into account. For example, the panoramic image effect is obtained by joining several photos of the same place, but only if they were also taken at around the same time.
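
A minimal sketch of this kind of timestamp-based grouping (a hypothetical heuristic, not Google's actual logic) might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Photo:
    name: str
    taken: datetime

def group_bursts(photos, max_gap=timedelta(seconds=10)):
    """Group consecutive photos whose timestamps are close together."""
    photos = sorted(photos, key=lambda p: p.taken)
    groups, current = [], [photos[0]]
    for photo in photos[1:]:
        if photo.taken - current[-1].taken <= max_gap:
            current.append(photo)    # same burst: keep accumulating
        else:
            groups.append(current)   # gap too large: start a new group
            current = [photo]
    groups.append(current)
    return groups

shots = [Photo(f"img_{i}.jpg", datetime(2016, 2, 2, 12, 0, i * 3)) for i in range(5)]
shots.append(Photo("img_later.jpg", datetime(2016, 2, 2, 15, 30)))
for group in group_bursts(shots):
    kind = "animation candidate" if len(group) >= 3 else "single photo"
    print(kind, [p.name for p in group])
```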

Finally, luck

To keep its creations from becoming too repetitive, sometimes Google Photos simply takes a gamble. A photo of a building or monument? Let's try a black-and-white effect. A photo of an outdoor landscape? Let's see how it looks with a retro filter and a frame…

These filters are more of a "let's see if it sticks" affair, and they surely also form part of the neural network's training. The more users keep a given combination of photo type and effect, the more satisfactory that combination will be marked in the network, whereas if nobody likes seeing pictures of their cat in black and white, then it is probably not a good combination.
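
As a toy illustration (hypothetical, not Google's implementation), that feedback could be accumulated as simple keep/discard counts per combination of photo type and effect:

```python
from collections import defaultdict

stats = defaultdict(lambda: {"kept": 0, "shown": 0})

def record_feedback(photo_type, effect, kept):
    s = stats[(photo_type, effect)]
    s["shown"] += 1
    s["kept"] += int(kept)

def keep_rate(photo_type, effect):
    s = stats[(photo_type, effect)]
    return s["kept"] / s["shown"] if s["shown"] else 0.0

record_feedback("monument", "black_and_white", kept=True)
record_feedback("cat", "black_and_white", kept=False)
print(keep_rate("monument", "black_and_white"))  # 1.0 -> keep suggesting it
print(keep_rate("cat", "black_and_white"))       # 0.0 -> probably a bad idea
```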

One of the best uses of your data

It is clear that the internet giants like, and very much so, to have your personal data. The more the merrier. It allows them to build a more accurate profile of you, with which to offer you better services and, yes, better-targeted advertising that performs better.

However, it is in services such as Google Now and Google Photos where users finally get something in return for being exhaustively analyzed by Google's big data machinery, this time in the form of beautiful compositions and effects on our pictures. If you are going to keep building a profile of everything I do online, at least let it be to offer me things like this.

In Engadget Android | 23 examples of what you can get with the Google Photos Assistant


The article This is the technology behind the Google Photos Assistant was originally published in Engadget Android by Ivan Ramirez.



