There’s a lot of data to be had — and Google wants it.
It’s never been easier to make an informed, market-based decision about going solar.
A great introduction to how networks work.
T-t-that’s pretty good actually.
Google announced it will get all its 2017 energy from solar and wind.
Can’t wait to see what they come up with.
It all happened because they let the program learn without interference.
You don’t need to change the world for deep learning to have a meaningful impact in your life.
That adds up to a lot of money.
In a rather hilarious turn of events, Google Translate has been producing some translations that most people would find a tad inaccurate. When going from Ukrainian to Russian, the word “Russia” would show “Mordor,” “Russians” translated to “occupiers,” and Russia’s foreign minister Sergey Lavrov became “sad little horse.”
Google’s Rachel Potvin took the stage @scale and hinted at just how many lines of code Google uses: a staggering 2 billion.
Ever wondered how Google Maps can predict traffic jams?
Connecting traditionally underserved areas of the world to the Internet is already high on the list of many tech giants, and back in 2013, Google also stepped up to the challenge. Its solution was, in classic Google fashion, ambitious, simple, and light.
Self-driving cars have a promising future, and leading the pack technologically is Google, now a household name that has long transcended its status as a search engine. First and foremost, Google is a technology company, and its interests align with anything cutting edge, whether information technology or hardware (smart homes, smart appliances, cars). Since 2011, when Google first showcased its extremely successful self-driving Prius, later swapped for a Lexus, the company has been making rapid progress. But Google rarely experiments just for the sake of it. Part of its philosophy is turning disruptive technology into a product and getting it out to people. So how do you go about self-driving cars, considering the auto business is one of the riskiest in the world? Well, just as it did with Android for smartphones, Google could partner with leading automakers and supply the technology. Indeed, Google confirmed this January that it had held talks with General Motors, Ford, Toyota, Daimler and Volkswagen. But some highly interesting documents gathered by The Guardian suggest a possible alternate route: Google might actually build its own cars, from scratch.
Facial recognition and motion tracking are already old news. The next level is describing what you do or what’s going on – for now, only in still pictures. Meet NeuralTalk, a deep learning image processing algorithm developed by Stanford engineers that uses processes similar to those in the human brain to decipher and interpret photos. The software can easily describe, for instance, a band of people dressed up as zombies. It’s remarkably effective and freaking creepy at the same time.
In his book “Do Androids Dream of Electric Sheep?”, one of my favorite writers, Philip K. Dick, explores what sets humans apart from androids. The theme is more relevant today than ever, considering the great leaps in artificial intelligence coming out of major tech labs around the world, like Google’s. Take, for instance, how the company employs advanced artificial neural networks to zap through a gazillion images, interpret them, and return the right one when you make a query using the search engine. Though nothing like a human brain, the network uses 10-30 stacked layers of artificial neurons, each layer incrementally refining the previous one’s output until the final layer produces an “answer.” While not dead-on, the network seems to return better results than anything we’ve seen before, and as a by-product, it can also “dream.” These artificial dreams output some fascinating images, to say the least, going from virtually nothing (white noise) to something straight out of a surrealist painting. Who says computers can’t be creative?
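To give a feel for what “stacked layers, each refining the previous one’s output” means, here is a toy sketch in Python/NumPy. This is not Google’s actual network – the layer count, width, and random weights are all illustrative assumptions – but it shows the basic idea: an input vector passes through a stack of layers, and each one transforms the activations handed up by the layer below until a final feature vector comes out.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied after each layer
    return np.maximum(0, x)

def forward(image_vector, num_layers=10, width=16):
    """Toy stand-in for a deep network: each layer transforms the
    previous layer's output, refining the representation step by step.
    Weights are random here; a real network learns them from data."""
    activation = image_vector
    for _ in range(num_layers):
        weights = rng.normal(scale=0.1, size=(width, activation.shape[0]))
        activation = relu(weights @ activation)
    return activation  # the final layer's "answer"

# A fake 64-dimensional "image" pushed through 10 stacked layers
features = forward(rng.normal(size=64), num_layers=10, width=16)
print(features.shape)
```

In a trained network, the early layers would respond to simple patterns (edges, textures) and later layers to increasingly abstract ones, which is exactly what Google exploited to make the “dreaming” images: amplifying what a chosen layer responds to.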
Self-driving cars were one of those technologies we sort of visualized as part of the future, much like jetpacks or hoverboards… but it seems the future is already here, at least for driverless cars (you’ve got a lot to prove, Lexus!). Google’s cars are already hitting the streets in California.
Don’t you just hate it when you’re looking for support for a service or app you bought, only to be greeted by some monosyllabic robot? Okay, that can happen just as well with outsourced tech support, but at least you know you’re talking to a real person. Well, that might change sooner than you think. The singularity is getting closer by the moment. Just take a look at Google’s new chatbot, which according to its developers has moderate “natural language understanding.” In other words, it can roll with the punches and continue a conversation on its own, without following predefined question-answer pairs. Of course, after a while you can still tell it’s not human (it fails the Turing test), but that doesn’t mean it isn’t entertaining. Have a look at how it answers “what’s the purpose of life?”.
Swiping your phone’s touchscreen might disappear just as quickly as it emerged, if Google has its way. When its new technology hits the shelves, you won’t even have to touch a screen again. Here’s why. It’s called Project Soli, and it uses radar waves to detect precise finger movements – or, as Google calls them, “micromotions.” The technology would