Finally, the day Google had been preparing for, to share its exciting products, has arrived. On May 7, at Google I/O 2019 in Mountain View, California, CEO Sundar Pichai announced a host of improvements to the company's products that are interesting and productive at the same time.
Well, here goes the list...
Google Assistant is soon going to become dramatically faster, processing your voice requests up to 10 times faster than the previous iteration. To achieve this, the company has made the AI models that listen to and interpret your speech run directly on the device, rather than sending audio to remote servers for processing, which consumes extra time.
In addition, Google Assistant's contextual understanding has been improved. This means you need not say "Hey Google" again and again for multiple requests. You can just continue asking queries (regarding any app) in rapid succession.
Further, the Assistant will have a voice-enabled driving mode in which every feature can be accessed with simple voice commands. All you need to do is enable it by saying "Hey Google, let's drive!" On hearing these words, the Assistant brings up a driving navigation view on your dashboard. Once you enter your destination, it displays the required directions.
It also provides personalized shortcuts to your music player (placed at the bottom), frequent and regular contacts, and podcasts you left in the middle, and it reads out missed calls and notifications that you receive on your smartphone. So, you need not take your eyes off the road!
Further, the company also showcased the Assistant's ability to compose and send emails through voice input.
However, you can't access the next-gen Google Assistant right now; it will take some time to reach your phone.
Google Lens is also getting many fantastic updates, of which the most notable one is the ability to identify popular dishes on a restaurant's physical menu card when you point your mobile camera at it. Users can also tap on a particular food item to see how it looks and read its user reviews.
What seems most interesting about the new Google Lens is that, when you point your camera at the bill, it can calculate your tip and even split the bill.
The other amazing feature of Google Lens will be the ability to read text out aloud, in addition to capturing and translating it. Once you point your camera at text (in any language), Lens translates it to your set language as usual and reads the text aloud, highlighting the words as they are spoken. Also, if you want to know the definition of a particular word, you can simply tap on it.
The feature, which is expected to arrive in 'Google Go' initially, will be really helpful to those who find reading hard.
Google Search will soon get AR support, which lets users view and interact with 3D objects directly from the search results and place them in the real world. For instance, if you search for an animal on Google, the results will show a new 'View in 3D' button that displays a 3D model of the animal, along with an option to view it in AR (placing it in your physical surroundings).
Don't you think the feature would be really helpful while examining a human organ or shopping for a product, for instance? Definitely!
(Image Courtesy: Twitter & Google)