Google’s mission is to “create a more helpful Google for you,” said Rick Osterloh, Google’s senior vice president for Devices and Services, at the Made by Google event in October 2019. One way the company furthers that integration of services is by continuously improving its artificial intelligence (AI)-powered Google Assistant, originally launched in 2016.
To help smartphone users, AI fans, and enterprise decision-makers understand the features of Google Assistant, and why it matters, we’ve put together the most important details and resources in this cheat sheet. This article will be updated and refreshed as new, relevant information becomes public.
SEE: Google Pixel 5: Cheat sheet (TechRepublic)
Executive summary
- What is Google Assistant? Google Assistant is a conversational, voice-activated digital assistant created by Google that can perform actions on behalf of a user and provide contextual information.
- Why does Google Assistant matter? Google Assistant is important because it is the linchpin in Google’s AI-first strategy for the future, and will likely come to define how users interact with almost all of Google’s core products.
- Who does Google Assistant affect? Consumers who are invested in the Google ecosystem will be affected by this, as will other smartphone manufacturers who may be prompted to create or acquire their own competing AI product.
- When was Google Assistant released? Google Assistant was first unveiled at the 2016 Google I/O developer conference in May 2016, and it was first available on the Google Pixel phone on Oct. 20, 2016. At the 2017 Google I/O conference, Google Assistant support for iPhone was announced. At the 2019 Made by Google event, new features and improvements were announced. Additional features and improvements were announced at the 2021 Google I/O event.
- How do I use Google Assistant? Google Assistant is available on a few premium Android handsets, the iPhone, Google Home, Google’s smart chat app Allo, Android TV, some Wear OS smartwatches and devices, and certain cars with Android Auto integration.
SEE: IoT security: A guide for IT leaders (TechRepublic Premium)
What is Google Assistant?
Google Assistant is a voice-activated virtual assistant, first introduced by Google at the 2016 I/O conference in California. Much like Amazon’s Alexa, Apple’s Siri, or Microsoft’s Cortana, the Google Assistant provides contextual information and performs actions such as booking a restaurant reservation or sending a message on behalf of the user. Smartphone users can also type requests to Google Assistant if they don’t want to use voice input.
To perform its functions, Google Assistant relies on artificial intelligence (AI) technologies such as natural language processing and machine learning to understand what the user is saying, and to make suggestions or act on that language input. Google Assistant is the foundational piece of Google’s “AI-first” strategy that CEO Sundar Pichai discussed at a 2016 Made by Google event.
SEE: How to use Google Assistant as a text translator (TechRepublic)
Before Google Assistant, Google created another digital assistant known as Google Now, and for a time the two coexisted with some subtle differences: Google Now operated within an app available on Android and iOS, while Google Assistant was exclusive to certain products like Google Allo and certain smartphones. Google Now has since been phased out, and its functionality has been integrated into Google’s other products.
The goal of Google Assistant, Pichai previously said, is to “build a personal Google for each and every user.” Google Assistant is the connective tissue among all of Google’s core software and hardware products.
SEE: All of TechRepublic’s cheat sheets and smart person’s guides
Why does Google Assistant matter?
Google Assistant matters more for what it represents in terms of the future of Google’s approach to consumer products than for what features it has or what it can do. As noted above, Google believes that, just as the tech world moved from web to mobile, the next stage in that evolution is to move from mobile to AI.
As hardware becomes more commoditized, smartphone manufacturers must compete on what they can provide through next-generation software and AI. They must also build out an ecosystem of products and devices that share access to this AI. Google Assistant is embedded in various Android phones, works in Google’s smart chat app Allo, and is a key part of the Google Home smart speakers and various device apps.
Other companies are following a similar trajectory. Samsung purchased Viv Labs and eventually created Bixby, its own personal assistant. Microsoft put Cortana in the Xbox One and Windows 10 to bring voice commands to gamers, and Apple has integrated its personal assistant, Siri, into most of its products.
Who does Google Assistant affect?
The advent of Google Assistant affects consumers in the Google ecosystem. Smartphone users who purchase an eligible Android phone, or fans of smart home devices like Google Home, will be able to use Google Assistant to stay more connected and automate many parts of their daily lives. A single Google Home with Google Assistant can recognize multiple users’ questions or commands, provide proactive assistance, and offer hands-free phone calls. The Google Assistant can also push visual responses from a Google Home request to a user’s smartphone or certain connected televisions and smart displays, while also being able to stream music.
However, because Google Assistant works with SmartThings and is also available on the iPhone and in some Android Auto-enabled vehicles, the technology has a fairly broad reach. Users can even link a Chromecast device to their Google Assistant.
SEE: Google plots Embedded Google Assistant SDK, tools for developers (ZDNet)
Developers will also be affected by Google Assistant, as it opens up a whole new realm of possibilities for AI-powered services. Google has also released a Google Assistant SDK, aimed at hardware vendors and developers who can use it to integrate Google Assistant into their products.
Other smart home device manufacturers will also be affected by Google Assistant, as it adds another major player to the market. Amazon Alexa, despite its strong lead with the Amazon Echo and Dot, is facing major competition in home automation from Google Home as the ecosystem continues to grow.
When was Google Assistant released?
Google first announced Google Assistant in May 2016 at the 10th annual Google I/O developer event at the Shoreline Amphitheater in Mountain View, California. In September 2016, one of the first previews of Google Assistant came with the launch of Google Allo, the smart messaging app.
The first phone that took advantage of Google Assistant, the Google Pixel, was unveiled on Oct. 4, 2016, the same day it opened for preorders, and it arrived in stores on Oct. 20, 2016.
Google Home, which also utilizes Google Assistant, became publicly available on Nov. 4, 2016.
Google Assistant continues to be updated regularly with new features and integrations.
How do I use Google Assistant?
Users who want to take advantage of Google Assistant must purchase one of the products that offers the digital personal assistant as a feature: Google Home, one of the various smartphones it supports, a Wear OS device, an Android TV, an Android Auto vehicle with Assistant support, or the chat app Google Allo. Once a user has begun using Google Assistant with their Google account, regular use helps improve the quality of the information it provides.
Users trigger Google Assistant to start listening by saying “Hey Google” or “OK Google.” In the past, users had to say one of these phrases every time they wanted to trigger the assistant, but a feature called Continued Conversation, unveiled at the 2018 I/O developer conference, allows for actual back-and-forth conversations: Assistant picks up on cues in the conversation and keeps listening for commands.
While Google Assistant could initially perform only a single task at a time, ahead of Google I/O 2018 the company announced a feature called Routines that allows the Assistant to handle a string of multiple actions with a single voice command. One example would be saying “Hey Google, I am home,” which could trigger lights to come on and music to start playing.
Google Assistant can open apps, make suggestions, tell you the weather, play music, set reminders, and more, all of which is typically possible with other digital assistants. But with a technology called Duplex, unveiled during the 2018 I/O keynote, Google Assistant can also make calls and schedule appointments for you. During the keynote, a live demo had Assistant call a hair salon to book an appointment on a user’s behalf. On the phone, Assistant sounded like a real person, responded appropriately to verbal cues, and was able to book a specific service within a certain time frame.
Google Assistant supports 30 languages and has six different voices to choose from, including those of award-winning musician John Legend and actor and comedian Issa Rae. Many of the newest actions available for Google Assistant can be found within the Explore feature.
In September 2020, Google announced a new Hold for Me feature for Pixel devices, which enables Google Assistant to monitor customer service holds. Instead of having to sit and wait for someone to pick up the phone, users can let Assistant monitor the line and alert them when a person picks up. The feature launched exclusively on the Pixel 5 and Pixel 4A, with availability on older Pixel devices to follow. Google hasn’t said whether the feature will come to non-Pixel devices later on.
Developers and businesses who wish to work with the embeddable SDK can sign up through Google’s Assistant SDK developer site.
SEE: Hiring Kit: Android Developer (TechRepublic Premium)
On May 19, 2021, Google announced several new features and improvements for developers and users at its virtual Google I/O event. First announced was App Actions for mobile, which lets Android developers enable their apps to fulfill queries from Google Assistant users, integrating Android apps with Google Assistant. The functionality lets users jump to the most interesting and useful points in an app using voice commands. Developers can enable App Actions via Android Studio by mapping user intents to specific features or steps within their apps. App Actions now supports Custom Intents, so developers can build App Actions that match their app’s unique functionality; with Built-In Intents, App Actions surface an app’s actions and content across Android and Google.
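On the app side, an App Action typically fulfills the user’s request through a deep link into an Activity. Below is a minimal Kotlin sketch of that fulfillment step; the app name, URI scheme and query parameter are hypothetical, used only to illustrate how an Activity might read the parameters Assistant passes along.

```kotlin
import android.content.Intent
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

// Hypothetical Activity in an example notes app that fulfills an App Action.
// Assistant launches it with a deep link declared for the matching intent,
// e.g. "Hey Google, search ExampleNotes for groceries" might arrive as
// examplenotes://search?query=groceries (scheme and parameter are made up).
class SearchActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        handleDeepLink(intent)
    }

    override fun onNewIntent(intent: Intent) {
        super.onNewIntent(intent)
        handleDeepLink(intent)
    }

    private fun handleDeepLink(intent: Intent) {
        // Pull the search term out of the deep-link URI, if one was provided.
        val query = intent.data?.getQueryParameter("query") ?: return
        showResultsFor(query)
    }

    private fun showResultsFor(query: String) {
        // App-specific UI code would render the search results here.
    }
}
```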
Also announced was the Capabilities API, a new framework API for declaring support for common and vertical intents. It provides an Android-friendly way of declaring the Built-In Intents an app supports, and it became available in beta on May 19, 2021. With Android 12, developers can also take part in creating Android Shortcuts.
Through Android Shortcuts, users can set up a personal voice command that triggers a shortcut in a developer’s app. Google is also offering a new Shortcuts Jetpack module, which can publish a shortcut whenever a user takes the corresponding action in an app. Developers can describe the set of deep links their app supports directly in the shortcuts XML, and they can use the new Shortcuts Jetpack module to push an unlimited number of shortcuts to the Android system and Google Assistant.
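As a rough illustration of how that push might look, the Kotlin sketch below uses the Jetpack ShortcutManagerCompat API to publish a dynamic shortcut and bind it to a Built-In Intent. The shortcut ID, label, deep-link URI and the specific capability shown are assumptions for the example, not details from Google’s announcement.

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri
import androidx.core.content.pm.ShortcutInfoCompat
import androidx.core.content.pm.ShortcutManagerCompat

// Hypothetical: after the user searches for something in the example app,
// push a dynamic shortcut so Assistant can offer it as a voice shortcut.
fun pushSearchShortcut(context: Context, query: String) {
    val deepLink = Intent(
        Intent.ACTION_VIEW,
        Uri.parse("examplenotes://search?query=$query") // made-up scheme
    )
    val shortcut = ShortcutInfoCompat.Builder(context, "search_$query")
        .setShortLabel("Search for $query")
        .setIntent(deepLink)
        // Bind the shortcut to a Built-In Intent so Assistant can match
        // spoken requests to it; the capability and parameter names here
        // follow the actions.intent.* pattern but are illustrative.
        .addCapabilityBinding(
            "actions.intent.GET_THING",
            "thing.name",
            listOf(query)
        )
        .build()
    // pushDynamicShortcut manages the system's per-app shortcut limit itself.
    ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)
}
```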
Widgets also got easier to build and use, letting users get more done right from the home screen, according to Google. On Android 12, widgets offer an interactive, customized view into an app with a consistent design, look and feel. Widgets can now reach users by voice through Assistant on a number of new surfaces, including mobile, the lock screen and Android Auto.
Voice-forward interactions help users complete simple actions, with text-to-speech (TTS) support for multi-turn flows. Developers can map specific Built-In Intents to widgets using the Capabilities API, which lets users invoke widgets from Google Assistant and optimize them for voice. The goal is to make it as easy as possible to integrate Android apps with Google Assistant, according to the company.
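For context, a widget itself is a conventional Android component; the Kotlin sketch below shows a minimal provider of the kind that could be surfaced this way. The class name, layout and view IDs are hypothetical, and the voice binding that lets Assistant invoke the widget is declared separately through the Capabilities API rather than in this code.

```kotlin
import android.appwidget.AppWidgetManager
import android.appwidget.AppWidgetProvider
import android.content.Context
import android.widget.RemoteViews

// Hypothetical home-screen widget for the example notes app. Assistant-side
// invocation is configured via the app's declared capabilities, not here.
class NotesWidgetProvider : AppWidgetProvider() {

    override fun onUpdate(
        context: Context,
        appWidgetManager: AppWidgetManager,
        appWidgetIds: IntArray
    ) {
        for (widgetId in appWidgetIds) {
            // R.layout.widget_notes and R.id.widget_title are assumed to
            // exist in the example project.
            val views = RemoteViews(context.packageName, R.layout.widget_notes)
            views.setTextViewText(R.id.widget_title, "Recent notes")
            appWidgetManager.updateAppWidget(widgetId, views)
        }
    }
}
```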
SEE: 5 Internet of Things (IoT) innovations (free PDF) (TechRepublic)
For smart displays, Google highlighted native Assistant development with Interactive Canvas for games, stories and education. The faster, simpler and lower-latency Canvas API now offers support for client-side TTS, natural language understanding and storage. The feature is optional for developers to use and will be “available soon” in developer preview.
Also coming soon are release management capabilities: developers will be able to manage releases in the console by launching in stages, such as launching in one country or to a percentage of users.
Google also announced improvements to the user experience on smart displays so that users can have full, immersive experiences: it removed the interruption to TTS when users tap on the smart display, launched full-screen canvas capabilities and made improvements to the Media API.
Finally, Google announced the upcoming availability of on-device CVC entry and on-device credit card entry on smart displays. Both make on-device transactions much easier, reducing the chance that users will have to be redirected to their mobile devices.
Editor’s note: This article was written by Conner Forrest and updated by Brandon Vigliarolo and Kristen Lotze.