World on auto-pilot: Machine learning and the future

You’ve probably heard of artificial intelligence and machine learning before. The prospect came to the fore recently with Google’s appointment of John Giannandrea as the head of the company’s Search program, a move that may kickstart a new era for Google Search, as explained in this article.

But machine learning is already being used much more widely all over the world. Believe it or not, you may be looking at a world on auto-pilot in the future, with machines making intelligent decisions. How, you ask? Read on.

In Search of results...

The best-known example of machine learning in Search is Google’s RankBrain algorithm (mentioned in the article linked above), which the company has discussed publicly. Google gets millions of search queries every day, and RankBrain’s task is simple -- to deal with the queries that the Search algorithms haven’t encountered before. Machine learning is, of course, centred on the idea that machines take in data and learn from it, getting better as more data is fed to them.
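RankBrain’s real internals are not public, but the idea of dealing with a never-before-seen query by matching it to similar known ones can be sketched with a simple (hypothetical) word-overlap similarity:

```python
# Hypothetical sketch: map an unseen query to the closest known one.
# This is only an illustration of similarity-based matching, not how
# RankBrain actually works.

def similarity(a, b):
    """Jaccard overlap between the word sets of two queries."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def closest_known_query(unseen, known_queries):
    """Pick the known query most similar to the unseen one."""
    return max(known_queries, key=lambda q: similarity(unseen, q))

known = ["best budget smartphone", "laptop battery life tips", "android camera apps"]
print(closest_known_query("cheap budget smartphone deals", known))
# → best budget smartphone
```

A production system would use learned vector representations of queries rather than raw word overlap, but the matching-by-similarity principle is the same.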

It’s not just Google either; Microsoft is in the game too, as is Apple. In an email interview with Digit, Microsoft India General Manager, Application and Services Group, Russ Arun, said that machine learning will help make pivotal advances in Search as well as in data sciences and analytics. Arun also said that Microsoft has been using machine learning models alongside traditional statistical models, social listening (monitoring digital media channels to devise better strategies for maximum consumer impact) and search analysis for its Bing predictions.

“To give you an idea of ways in which we use algorithms that are not just related to the ranking of web search results, let us look at the NCAA Basketball championship from April 2015. Bing analysed 9.2 quintillion possible outcomes of the bracket and other related data to make its selections. The search engine also crunched more than a decade of NCAA historical data to help drive its predictions.”

Simulating the human brain...

These computer systems, modelled on the human brain, are at the core of any machine learning venture. In practical usage, these neural networks can also be layered. “Neural networks are designed to replicate nodes in the human brain. More advanced systems have these neural nets in layers. So one layer of neural nets may identify features that need to be analysed, while another layer may decide how to classify them for ranking. As you throw more and more ‘layers’ into this system and it continuously learns over time to optimise outputs, you are moving the system into deep learning.”
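The layering described above can be sketched in a few lines: each layer takes the previous layer’s outputs as its inputs. The weights here are random placeholders (an untrained toy, not a real model):

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One neural-net layer: weighted sums squashed through a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Two stacked layers: the first could learn to detect features,
# the second to classify or rank them, as described above.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b2 = [0.0] * 2

features = layer([0.5, -0.2, 0.8], w1, b1)   # layer 1: feature detection
scores = layer(features, w2, b2)             # layer 2: classification
print(scores)
```

Stacking many such layers, and training the weights on data, is what moves a system into deep learning.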

So, neural networks can essentially replicate the human brain and, just like the brain itself, become more intelligent as they are fed more data. There’s a definite problem with neural networks though, in the sense that they take control away from humans. While most would quickly think of a doomsday scenario here, what we’re talking about is the machine learning algorithm itself.

Traditionally, humans have written algorithms and then tweaked them over time based on how they performed. Machine learning changes this.

With machine learning, machines learn on the job, but why they react a certain way is often impossible for humans to determine. Hence, tweaking an algorithm can become a process of trial and error. Ambarish Mitra, Founder and Global CEO, Blippar, explained that while this is true, machine learning has so far been more useful than human input. “Humans tend to, over a long period of time, corrupt the data more than what machines have done,” said Mitra.

This is echoed by Microsoft’s Russ Arun, who said, “The neural network changes the weights during the learning phase. This change/learning is a function of inputs that are continuously provided to the neural network. However, we don’t explicitly change any of the weights which encode the learning manually. So even an engineer who codes the neural networks cannot say with certainty how much an outcome is affected by human tweaks versus the neural network’s learning capabilities.”
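The point about weights changing during the learning phase, without anyone editing them by hand, can be sketched with the simplest possible case: one weight, one training example, updated by gradient descent on squared error (a minimal illustration, not Microsoft’s actual setup):

```python
# One weight, one input, gradient descent on squared error.
w = 0.0                      # the weight no engineer edits by hand
x, target = 2.0, 1.0         # training input and desired output
lr = 0.1                     # learning rate

for _ in range(50):
    prediction = w * x
    error = prediction - target
    w -= lr * error * x      # weight update driven purely by the data

print(round(w * x, 3))
# → 1.0
```

After training, the weight encodes what the data taught it; nobody chose its final value, which is exactly why tracing an outcome back to a specific human decision is so hard.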

On the other hand, while Mitra says that machines will perform well as long as they’re taught the right way, he agrees that there will come a time when human dissatisfaction will creep in. “It’ll be a call,” he said, “but there’ll certainly be a risk, because the environment will genuinely be out of control since it continues to teach itself.” Mitra also said that people inside the world of machine learning sometimes find it difficult to understand their own algorithms.

To bring Internet ON things...

I started this article with the premise that the world may soon be on auto-pilot, and augmented reality is perhaps the best example of how. Blippar (mentioned above), for example, is an augmented reality app that uses machine learning algorithms to improve its capabilities as more and more people use it.
CEO Mitra spoke at the Surge Summit in Bangalore recently about something he calls the Internet ON Things. The idea, as Mitra himself explains, is that you can’t put a chip inside everything, but with a smartphone you can still connect things to the Internet.

How? To explain this, Mitra talks about something he calls the light Internet, which is basically a condensed form of the entire Internet.

To understand, imagine pointing your phone at a storefront and getting information about the products there. This works by recognising the things in the store and then delivering information about them. For example, when you open the Digit Magazine, you’ll find certain parts of it are interactive, allowing you to point your phone’s camera at them to be directed to the video review or other content on this website.

When you pointed your phone at the magazine’s page, two systems came into play -- the first was a machine learning algorithm, while the other was an image mapping system. The image mapping system sends a skeleton of the image (the page) to the server, which then finds an exact copy of that skeleton and sends the information related to it back to your phone.
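Blippar’s actual pipeline isn’t public, but the skeleton lookup described above can be sketched as a coarse fingerprint matched against a server-side index. Everything here, including the fingerprint scheme, is a hypothetical stand-in:

```python
# Hedged sketch of "skeleton" matching: reduce an image to a coarse
# fingerprint, then look it up in a server-side index of known pages.

def skeleton(pixels):
    """Reduce an image (a list of brightness values) to a fingerprint:
    1 where a pixel is brighter than the image's average, else 0."""
    avg = sum(pixels) / len(pixels)
    return tuple(1 if p > avg else 0 for p in pixels)

# "Server side": fingerprints of known magazine pages -> content to deliver
server_index = {
    skeleton([10, 200, 30, 220]): "video review link for page 42",
    skeleton([250, 20, 240, 10]): "interactive feature for page 7",
}

# "Phone side": capture the page, fingerprint it, look it up
captured = [12, 198, 33, 215]   # slightly different lighting, same page
print(server_index.get(skeleton(captured), "no match"))
# → video review link for page 42
```

Because the fingerprint only keeps relative brightness, the slightly different lighting of the captured image still resolves to the same server entry, which is why a skeleton is cheaper and more robust to transmit than the full image.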

“This entire system takes two seconds on an optimum connection, while from a technical architecture point of view, it takes 600 milliseconds,” explains Mitra. Of course, the whole loop can change based on the file size of the content packet.

So it seems the image mapping system did all the work, right? That’s only partly true. While it was responsible for sending the information, the image mapping system alone won’t make the app intuitive. That’s the machine learning system’s job. For example, if the magazine is lying on a wooden table, the machine learning algorithm takes that data and learns from it for the future.

So, the next time someone uses a similar setup, the system finds it easier to ignore the background and isolate just the part of the image that the image mapping system needs.

Augmented reality has been taken further by projects like Google’s Project Tango as well. At MWC, Lenovo and Google announced an advancement in Project Tango that lets your phone map the room or building you’re in.

The voice of reason...

Google Now, Siri and Cortana come to mind. Ever wondered why Google Now has always been so far ahead of Apple’s Siri? Or why Microsoft took so much time to bring Cortana to its devices? The answer is in one word: data. Google already had far more data than the others, which allowed its machine learning algorithms to perform better than Apple’s. Microsoft took its time bringing Cortana to market, and the assistant has been really impressive so far; that too comes down to amassing data and bringing its algorithms up to speed.

Microsoft’s Arun says that the company uses simple rule-based systems to augment the machine learning systems used for Cortana. “Yes, simple rules that are mostly based on decision tree approaches and grammars can also be used to create simple voice assistants, but they are not effective with conversations that require back and forth. They are also not general enough to handle all types of conversations and will be more susceptible to errors when compared to those generated with ML techniques,” explains Arun.
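The “simple rules” Arun describes can be sketched as a keyword matcher. It handles single commands, but as the second query shows, it has no memory of the conversation, which is exactly the back-and-forth weakness he points out (the rules and replies here are made up):

```python
# Sketch of a rule-based assistant: hand-written keyword rules, no state.

RULES = [
    (("weather",), "It looks sunny today."),
    (("alarm", "set"), "Alarm set for 7 AM."),
    (("play", "music"), "Playing your favourites."),
]

def rule_based_reply(utterance):
    """Return the first reply whose keywords all appear in the utterance."""
    words = set(utterance.lower().split())
    for keywords, reply in RULES:
        if all(k in words for k in keywords):
            return reply
    return "Sorry, I didn't understand."

print(rule_based_reply("set an alarm please"))
# → Alarm set for 7 AM.
print(rule_based_reply("what about tomorrow?"))  # follow-up fails: no context
# → Sorry, I didn't understand.
```

An ML-based assistant would instead learn from conversation data and carry context across turns, which is what makes it more general and less error-prone than hand-written rules.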

Machine learning and chill...

One of the most common uses of machine learning you see around you is in suggestions: app suggestions, and the same inside apps. For example, Facebook’s friend suggestions are based on such algorithms, as are your News Feed and what the social network shows you on it.

Similarly, Netflix, arguably the most popular streaming service worldwide, places huge emphasis on suggesting content you would like. There are, of course, many other examples, but machine learning can keep improving the suggestions made to you on Netflix, Google and so on. In fact, in the case of Google’s suggested Search results, it already has. Another example is the app Duolingo, which uses machine learning to teach languages. Duolingo does this by teaching one subset of its users in a certain manner, while another subset tries a different method. The company then takes the data from both subsets to determine the best way to teach a language.
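The Duolingo-style experiment described above boils down to comparing outcomes across the two subsets. A minimal sketch, with made-up quiz scores standing in for real learning data:

```python
# Two groups taught differently; their quiz scores decide which method wins.

def better_method(scores_a, scores_b):
    """Return the label of the group with the higher average score."""
    avg_a = sum(scores_a) / len(scores_a)
    avg_b = sum(scores_b) / len(scores_b)
    return "A" if avg_a >= avg_b else "B"

group_a = [72, 80, 65, 90]   # taught with method A
group_b = [85, 88, 79, 91]   # taught with method B
print(better_method(group_a, group_b))
# → B
```

A real system would run a proper statistical test rather than compare raw averages, but the loop is the same: try variants on different users, measure, and keep what works.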

In essence, the world around you is already largely on auto-pilot; you just don’t realise it yet. It’ll be a while before augmented reality, Search and other implementations of machine learning become mainstream, but given how fast data is flowing, it won’t be very long.

