With well over a billion people using iPhones and other Apple devices, Siri has mammoth ground to cover when it comes to understanding a wide variety of accents, dialects, and speech difficulties. It also has to keep getting smarter to keep up with a constantly changing and evolving set of features, apps, and tasks.
To do all of this, Siri relies on artificial intelligence and natural language processing built around three components: a conversational interface, personal context awareness, and service delegation.
The first part of the Siri experience is getting it to understand what you're asking for in the first place. Straight word-for-word voice recognition is only the starting point; deciphering what you actually mean is a whole lot harder and involves statistics and machine learning. This is largely what separates Siri from the average automated phone system and makes it smarter.
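To make the idea concrete, here is a deliberately tiny sketch of statistical intent matching: each known intent is scored by how many of its cue words appear in a transcribed request, and the best-scoring intent wins. This is a toy illustration of the general technique, not Apple's actual implementation, and every intent name and cue word below is invented for the example.

```python
# Toy intent matcher (illustrative only -- not Apple's actual system).
# Each intent is scored by how many of its cue words appear in the
# transcribed request; the highest-scoring intent is chosen.
INTENT_CUES = {
    "weather":  {"weather", "forecast", "rain", "temperature"},
    "music":    {"play", "song", "album", "podcast"},
    "call":     {"call", "dial", "phone"},
    "navigate": {"navigate", "directions", "route", "drive"},
}

def classify(transcript: str) -> str:
    words = set(transcript.lower().split())
    # Count overlapping cue words for each intent.
    scores = {intent: len(words & cues) for intent, cues in INTENT_CUES.items()}
    best = max(scores, key=scores.get)
    # Fall back to "unknown" when nothing matched at all.
    return best if scores[best] > 0 else "unknown"

print(classify("what is the weather forecast for tomorrow"))  # weather
print(classify("play my favorite song"))                      # music
```

A real assistant replaces the hand-written cue sets with models trained on millions of utterances, but the core step, turning noisy transcribed words into a probable intent, is the same.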
Once Siri has a grasp of what you want it to do, it can communicate with other apps on the device and perform the task. For example, if you ask for the weather forecast, Siri queries the built-in Weather app for that information. The same goes for playing a specific song or podcast, calling a contact, or sending a text message.
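That hand-off from a recognized intent to the app that can fulfill it is the "service delegation" part. A minimal sketch of the pattern, with invented handler names standing in for the real apps (this is not Apple's API):

```python
# Toy service-delegation sketch (illustrative only -- handler names are
# invented stand-ins, not Apple's APIs). A recognized intent is routed
# to the one handler registered for it.
def get_weather(query):
    return "Forecast: sunny"            # stand-in for the Weather app

def play_media(query):
    return f"Now playing: {query}"      # stand-in for the Music app

HANDLERS = {"weather": get_weather, "music": play_media}

def delegate(intent, query):
    handler = HANDLERS.get(intent)
    # Unrecognized intents get a graceful fallback response.
    return handler(query) if handler else "Sorry, I can't help with that."

print(delegate("weather", "tomorrow"))
print(delegate("music", "road trip mix"))
```

In the real system, third-party apps register the intents they can handle (via Apple's SiriKit framework), and Siri routes the request to whichever app claims it.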
You can also use Siri to request a route, either by saying "Hey Siri, navigate to" followed by your destination, or simply by stating where you want to go. Once directions appear, you can tap a mode of transportation, or tap the "Share ETA" button to share your estimated arrival time with contacts.