
Machine Learning In Mobile Devices: What To Expect

The capabilities of the mobile devices we carry every day have been increasing rapidly over the past few years, and this process doesn’t seem to be slowing down. On one hand, today’s smartphones have remarkable computing power; on the other, they have numerous ways to collect huge amounts of data about us. Combined in the right way, these two aspects open up a vast realm of possibilities for the new generation of apps and services.

[Image: Now on Tap in action. Image credit: The Verge]

The key to the future of the mobile experience appears to be machine learning, arguably the most prominent branch of artificial intelligence research. The most widely used definition of the concept is the one by Arthur Samuel, who described it as a “field of study that gives computers the ability to learn without being explicitly programmed.”

When applied to today’s mobile devices, this definition means that machine learning algorithms could process, find patterns in, and make sense of the data gathered by various sensors in order to significantly change and improve the way we interact with our gadgets. In fact, this is already happening (think Google Now, Siri, or SwiftKey), but the future prospects of machine learning on mobile devices are indeed breathtaking.

Let’s have a glimpse of what to expect in the near future.

Talk To Your Phone

Although voice recognition and natural language processing are already being used in a number of smartphone applications like Siri or Google Now, the user experience with these is still far from perfect. With the development of machine learning algorithms, our phones—or tablets, or smart watches—will become able not only to understand our input with near-perfect accuracy, but also to grasp its context.

One example of working in context is Now on Tap, Google’s service built on top of Google Now, which understands what you’re doing on the phone and processes your query using this knowledge. Here’s a great demo of the feature, in which the user asks “Who is the lead singer?” while listening to a song by Twenty One Pilots.
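The idea behind such contextual query handling can be reduced to a toy sketch: an under-specified question is grounded in whatever entity is currently on screen. The function and field names below are hypothetical; a real assistant uses a full NLP pipeline rather than string matching.

```python
# Toy sketch of context-aware query resolution: a vague query like
# "Who is the lead singer?" is grounded in the entity currently on screen.
# All names here are hypothetical simplifications.

def resolve_query(query: str, screen_context: dict) -> str:
    """Rewrite an under-specified query using on-screen context."""
    entity = screen_context.get("entity")
    if entity and "lead singer" in query.lower():
        return f"lead singer of {entity}"
    return query  # nothing to ground, pass the query through unchanged

# While a Twenty One Pilots song is playing:
context = {"entity": "Twenty One Pilots", "type": "music_artist"}
print(resolve_query("Who is the lead singer?", context))
# -> lead singer of Twenty One Pilots
```

The essential point is that the assistant never needs the user to name the band: the phone already knows what is on screen and injects it into the query.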

The context, however, could be much wider than the phone itself. Using data from cameras, GPS, and the accelerometer, the phone could understand you much better in the future.

Machine learning technologies are also key for developing automated translation services, including real-time spoken translation like the one Microsoft is trialling in Skype. Imagine being able to understand everyone and everything around you in any country on Earth—that’s something to look forward to, isn’t it?

Ubiquitous Fitness

Apps and devices that help you achieve your fitness goals are numerous across platforms and activity types; however, there’s still plenty of room for improvement.

One of the main ways machine learning can revolutionise the fitness app experience is continuous tracking of activities without any need for user input. Current apps mostly track steps and sometimes heart rate continuously, but you’d still need to change the settings when going for a run or a swim.

With advanced machine learning algorithms that have access to data from numerous sensors, we might reach a point where your phone faultlessly determines whatever activity you’re doing at any given time, without any additional instructions from your side.
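A minimal sketch of what such automatic activity recognition could look like: classify short accelerometer windows by comparing two hand-picked features (mean and standard deviation of acceleration magnitude) against per-activity centroids. The labels and centroid values below are illustrative only; real systems fuse many sensors and use far richer models.

```python
import math

def features(window):
    """window: list of (x, y, z) accelerometer samples -> (mean, std) of magnitude."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return mean, math.sqrt(var)

# Illustrative centroids "learned" from labelled data (hypothetical values, in g):
CENTROIDS = {
    "still":   (1.0, 0.05),  # gravity only, almost no jitter
    "walking": (1.1, 0.4),
    "running": (1.4, 1.0),   # strong, irregular motion
}

def classify(window):
    """Nearest-centroid classification in the 2-D feature space."""
    mean, std = features(window)
    return min(CENTROIDS,
               key=lambda k: (CENTROIDS[k][0] - mean) ** 2
                             + (CENTROIDS[k][1] - std) ** 2)

# A high-magnitude, high-variance window should look like running:
window = [(0, 0, 0.5), (2, 1, 1), (0.3, 0.4, 0), (1.5, 1.5, 1.5)]
print(classify(window))  # -> running
```

In this toy version the "training" has already happened offline (the centroids); on the device, classification is just a cheap distance computation, which is exactly why continuous background recognition is feasible on battery-powered hardware.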

User Authentication

One of the most futuristic-sounding forecasts about machine learning in the mobile space concerns the ways your smartphone could confirm your identity. Passwords and PIN codes are becoming obsolete with the integration of fingerprint scanners, but there are even more ways to authenticate a user.

The obvious one is face recognition, which, again, is based on machine learning techniques. It can be used not only for user authentication but also to improve context for other apps by understanding, for example, who appears in the recent photos taken by the phone’s owner.
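At its core, face verification typically boils down to comparing embedding vectors: an upstream model (not shown here) maps a face image to a fixed-length vector, and two faces match if their vectors are similar enough. The 4-dimensional vectors and the 0.8 threshold below are purely illustrative; production embeddings usually have 128 or more dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def same_person(enrolled, candidate, threshold=0.8):
    """Accept the unlock attempt if the embeddings are close enough."""
    return cosine(enrolled, candidate) >= threshold

owner = [0.2, 0.9, -0.1, 0.4]               # embedding enrolled at setup time
unlock_attempt = [0.25, 0.85, -0.05, 0.38]  # similar face, slight variation
stranger = [-0.7, 0.1, 0.6, -0.3]

print(same_person(owner, unlock_attempt))  # -> True
print(same_person(owner, stranger))        # -> False
```

The threshold trades convenience against security: raising it rejects more strangers but also more poorly-lit photos of the owner.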

Another interesting way to make sure the phone is being used by its owner is to analyse accelerometer patterns, i.e. the way the user holds and handles the phone. Typing patterns can be used for this purpose as well, as Coursera already does on desktop.
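Typing-pattern (keystroke dynamics) verification can be sketched in the same spirit: compare the user's inter-key timing vector against an enrolled average profile. The timing values and tolerance below are illustrative and not taken from any real system, including Coursera's.

```python
# Sketch of keystroke-dynamics verification: even if an impostor types the
# correct pass-phrase, their typing rhythm is unlikely to match the owner's.

def mean_abs_deviation(profile, sample):
    """Average absolute difference between two timing vectors (seconds)."""
    return sum(abs(p - s) for p, s in zip(profile, sample)) / len(profile)

def matches_owner(profile, sample, tolerance=0.05):
    return mean_abs_deviation(profile, sample) <= tolerance

# Enrolled inter-key intervals for a pass-phrase, averaged over many sessions:
owner_profile = [0.12, 0.20, 0.15, 0.30, 0.18]

genuine = [0.11, 0.22, 0.14, 0.31, 0.17]   # same rhythm, small jitter
impostor = [0.25, 0.08, 0.30, 0.12, 0.40]  # correct keys, wrong rhythm

print(matches_owner(owner_profile, genuine))   # -> True
print(matches_owner(owner_profile, impostor))  # -> False
```

On a phone, the same idea extends naturally to accelerometer data: how the device tilts and shakes while being typed on is another behavioural signature of its owner.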

Looking Forward

While anticipating the revolution in mobile experience brought by advanced machine learning techniques coupled with sensor data, it’s useful to remember that this future is only possible if an adequate level of data security is reached. The more information about us our phones and other devices collect, the more important it is to protect it.

However, this definitely isn’t a reason to avoid new features that require more information: it’s all about being aware of how your data is used and whether it’s safe with the companies and service providers you’ve shared it with.

For app developers including ourselves, this means that security and data protection should always be the main priority in the development process.

Not sure on how to best build your next-gen mobile application? Talk to us today!



February 1, 2016

Tags: android, google now, ios, machine learning, mobile development, mobile experience, siri