Not long ago, half the world was glued to their screens watching the presentation of the new iPhone X, which claims to be “innovative”. The fans who recently lined up for days to be among the first iPhone X owners only add hype to the whole “innovative AI technology” trend. But what is it really about?
Mobile Processing Power is Finally Ready for AI Technology
The new iPhone is powered by the A11 Bionic processor, which is optimized for AI. Its “neural engine” is designed to work with Apple’s Core ML developer tools, which give app developers easy access to the power of machine learning.
Thanks to this engine, the iPhone gained its new facial recognition and augmented reality features.
AI technology is becoming increasingly central to smartphones, powering everything from speech recognition to tiny software tweaks. But to date, AI features on mobile devices have mostly been powered by the cloud. This saves your phone’s battery by not overusing its processor, but it’s less convenient (you need an internet connection for it to work) and less secure (your personal data is sent off to far-away servers).
Before we enter the era when every mobile app starts doing AI magic right on our phones, let’s take a look at popular mobile apps that have already been using AI technology for certain tasks and have succeeded in gaining users’ trust.
Spotify

Spotify, the largest on-demand music service in the world, has a history of pushing technological boundaries and using big data, artificial intelligence, and machine learning to drive success.
According to Erik Bernhardsson, who worked on machine learning at Spotify from 2008 to 2015, roughly 90% of the value of good recommendations comes from collaborative filtering, with deep learning models contributing the extra 10%.
The Discover Weekly service is entirely powered by collaborative filtering, in particular a few extensions to word2vec that the machine learning team has built.
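As a rough illustration of what collaborative filtering means (not Spotify’s actual word2vec-based system), here is a toy item-to-item recommender: tracks a user hasn’t heard are scored by how similar they are to tracks the user already plays. The play-count matrix and function names are invented for this example.

```python
import numpy as np

# Toy play-count matrix: rows = users, columns = tracks (invented data).
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
    [0, 1, 5, 4],
], dtype=float)

def item_similarity(matrix):
    """Cosine similarity between track columns."""
    norms = np.linalg.norm(matrix, axis=0)
    norms[norms == 0] = 1.0
    normalized = matrix / norms
    return normalized.T @ normalized

def recommend(user_index, matrix, top_n=2):
    """Score unheard tracks by similarity to tracks the user already plays."""
    sim = item_similarity(matrix)
    user = matrix[user_index]
    scores = sim @ user          # weight each track by similarity to listened ones
    scores[user > 0] = -np.inf   # never re-recommend what's already played
    ranked = np.argsort(scores)[::-1]
    return [i for i in ranked if np.isfinite(scores[i])][:top_n]

print(recommend(0, plays))  # user 0's only unheard track is index 2
```

Real systems replace the similarity matrix with learned embeddings (word2vec treats each playlist like a “sentence” of track IDs), but the scoring idea is the same.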
Spotify acquired four startups in 2017 alone that are meant to help the music giant provide better search and recommendations, better connect artists with licensing agreements, and integrate audio detection. This means we are about to see and experience even more progressive AI tech from Spotify.
Shazam

If you capture 20 seconds of a song with Shazam, whether it’s the intro, a verse, or the chorus, the app creates a fingerprint for the recorded sample, consults its database, and uses its music recognition algorithm to tell you exactly which song you are listening to. But how does Shazam really work?
Shazam’s algorithm was revealed to the world by its inventor, Avery Li-Chun Wang, in 2003. Here are the fundamentals of this music recognition algorithm.
Shazam holds an extensive catalog of songs with detailed “spectrograms” that capture the various frequencies a song emits. Once the user tags a song, the application takes that signal, cross-references it against the database, and returns a match. Artificial intelligence is used here to take data that would be useless by itself (the song’s signal), provide context, and match it against the database to produce something useful for the user.
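To make the fingerprinting idea concrete, here is a heavily simplified sketch in the spirit of Wang’s peak-pair hashing: pick the strongest frequency in each short frame, then hash pairs of neighboring peaks. The frame sizes and hash scheme are invented for illustration and are far cruder than Shazam’s real algorithm.

```python
import numpy as np

def fingerprint(signal, frame_size=256, hop=128):
    """Toy landmark fingerprint: the strongest frequency bin per frame,
    hashed as (peak, next_peak, frame_gap) tuples -- a simplified take
    on the peak-pair hashing described in Wang's 2003 paper."""
    peaks = []
    for start in range(0, len(signal) - frame_size, hop):
        frame = signal[start:start + frame_size] * np.hanning(frame_size)
        spectrum = np.abs(np.fft.rfft(frame))
        peaks.append(int(np.argmax(spectrum)))
    # Pairing peaks makes the hashes robust to noise and partial matches.
    return {(f1, f2, 1) for f1, f2 in zip(peaks, peaks[1:])}

# Two recordings of the same 440 Hz tone (8 kHz sample rate) should
# share fingerprint hashes even when one of them is noisy.
t = np.arange(8000) / 8000.0
tone = np.sin(2 * np.pi * 440 * t)
noisy = tone + 0.05 * np.random.default_rng(0).normal(size=tone.size)
overlap = len(fingerprint(tone) & fingerprint(noisy)) / len(fingerprint(tone))
print(f"hash overlap: {overlap:.0%}")
```

Matching a catalog then reduces to counting how many hashes a query shares with each stored song, which is fast even across millions of tracks.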
MSQRD a.k.a. Facebook
Of course, Facebook is an industry leader, and AI technology is encoded in its DNA. Facebook uses deep neural networks for targeted advertising, deciding which adverts to show to which users. They’ve also developed a text understanding engine named DeepText to extract meaning from the words we post.
Facebook also uses a deep learning application called DeepFace to recognize people in photos.
Even the task of deciding which processes can be improved by AI and deep learning is handled by machines. The Facebook team has implemented a system called Flow, which uses deep learning analysis to run simulations of 300,000 machine learning models every month, allowing engineers to test ideas and pinpoint opportunities for efficiency.
But in this article we want to focus on the technology behind MSQRD, a mobile app acquired by Facebook in 2016. Originally, MSQRD was an app that allowed users to put filters on their videos so they looked like Leonardo DiCaprio, Barack Obama, various animals, or a zombie. The key to the whole thing is 3D face tracking: the app tracks dozens of reference points on your face and then maps the movement of the 3D model or effect onto your face’s movements.
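The mapping step can be sketched as fitting a transform from the effect’s anchor points to the landmarks the tracker found in the camera frame. The following is a generic least-squares affine fit with made-up coordinates, not MSQRD’s proprietary algorithm:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform that maps mask anchor points (src)
    onto tracked face landmarks (dst). Both are (N, 2) arrays."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])       # each row: [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs                               # (3, 2) transform matrix

# Hypothetical anchor points on the mask texture (two eyes and the chin)...
mask_pts = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 90.0]])
# ...and where the tracker located those same features in the camera frame.
face_pts = np.array([[210.0, 120.0], [250.0, 118.0], [232.0, 170.0]])

T = fit_affine(mask_pts, face_pts)
warped = np.hstack([mask_pts, np.ones((3, 1))]) @ T
print(np.allclose(warped, face_pts))  # three point pairs give an exact fit
```

A production face filter does this with many more landmarks, a full 3D head model, and per-frame smoothing, but the core operation is still “fit a transform, warp the overlay.”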
The team claims to have accumulated enormous expertise in face detection and tracking by building a proprietary self-learning face tracking algorithm as well as a framework for creating special effects. The code implementing the app’s mathematical algorithms is so well optimized that it achieves stellar performance on both modern and outdated mobile devices.
Flo Period Tracker

A major breakthrough in the healthtech domain has recently been made by Flo Period Tracker, the first period tracking app to publicly announce using artificial intelligence to improve cycle predictions. Flo became the most downloaded app worldwide in its category within months of introducing neural networks into its prediction algorithm. That’s no surprise, considering around 30% of women around the globe face the challenge of irregular periods. Let’s dig into how the technology works.
The Flo team found a source of new features in the uniqueness of its users. Women manually log their mood, physical activity, and symptoms such as headaches, fatigue, or acne, which sometimes form a stable pattern that repeats on certain days of a cycle. These patterns are so individual that no human could write enough rules to capture them all, yet they may be so evident and stable for a particular woman that by analyzing them a neural network can make a better prediction. For that reason, the data science team working on the project developed a machine learning algorithm that captures the unique menstrual cycle patterns of every woman.
Technically, this is realized as a two-step process. In the first step, unique patterns are recognized by individual-level machine learning models. In the second step, those patterns are transformed into features for the neural network. Thus, the output of one algorithm becomes an additional input feature for another.
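The two-step idea can be sketched as simple feature stacking. In this toy version the per-user “model” is just a trailing mean of recent cycle lengths, a placeholder for whatever Flo actually trains, and all names and data are invented:

```python
import numpy as np

# Hypothetical cycle-length history per user (days); real inputs are richer.
history = {
    "user_a": [28, 29, 28, 30, 28],
    "user_b": [31, 35, 30, 34, 32],
}

def individual_model(cycles):
    """Step 1: a per-user model captures that user's own pattern.
    Here it's a trailing mean -- a stand-in for the real model."""
    return float(np.mean(cycles[-3:]))

def build_features(cycles):
    """Step 2: the individual model's output becomes one more feature
    for the shared predictor, alongside population-level features."""
    return np.array([
        individual_model(cycles),   # per-user pattern (the stacked feature)
        float(np.mean(cycles)),     # overall mean cycle length
        float(np.std(cycles)),      # irregularity
    ])

for user, cycles in history.items():
    print(user, build_features(cycles))
```

The design point is the stacking itself: the shared neural network never needs to learn each woman’s idiosyncrasies from scratch, because the individual-level model has already distilled them into a single input.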
Model and philanthropist Natalia Vodianova has recently made a $5 million investment in the app, which got us thinking that the Flo team is about to bring even more groundbreaking innovations to its product.
Uber

Uber’s main goal is to get riders to their destinations faster, and automated reasoning is one of the most powerful applications of AI technology for helping consumers reach that goal more easily. In this case, AI algorithms mimic how the human mind quickly weighs up risks, benefits, and costs to make decisions. The Uber application takes the entered address and automatically provides drivers with the best route given the time of day and known traffic congestion.
This is all possible thanks to an algorithm that takes millions of bits of data from other drivers who have traveled similar roads and learns from their trips.
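One way to picture this is a shortest-path search whose edge weights are learned from historical trip times rather than map distances. The road graph, trip data, and averaging scheme below are invented for illustration; Uber’s real routing models are far more elaborate:

```python
import heapq
from collections import defaultdict

# Hypothetical road graph: each edge maps to travel times (minutes)
# observed on past trips; the "learned" weight is just their average.
trip_times = {
    ("A", "B"): [4, 5, 6],
    ("B", "C"): [3, 3, 4],
    ("A", "C"): [12, 15],
    ("C", "D"): [2, 2, 3],
}

def learned_weights(times):
    return {edge: sum(ts) / len(ts) for edge, ts in times.items()}

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm over the historically learned edge weights."""
    adj = defaultdict(list)
    for (u, v), w in graph.items():
        adj[u].append((v, w))
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in adj[node]:
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

cost, path = fastest_route(learned_weights(trip_times), "A", "D")
print(path, round(cost, 2))  # the detour via B beats the "direct" A-C road
```

The interesting part is that the graph structure stays fixed while the weights keep improving: every completed trip refines the averages, so the same search automatically starts preferring roads that drivers actually get through quickly.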
Uber has recently revealed that they are using artificial intelligence to figure out how much customers are willing to pay for their journey.
According to Bloomberg, Uber’s new system uses machine learning to estimate fares for groups of customers based on sociological factors, as well as the destination, time of day, and current location.
The estimated cost used to be based on time, distance, and geographic demand. However, Uber’s AI system, called “route-based pricing”, now adds estimated wealth to the mix.