The built-in Android feature, powered by artificial intelligence and augmented reality, adds a host of new tools to make your lockdown life easier
The ongoing lockdowns and isolations around the world have prompted technologists to take multi-faceted approaches to work-from-home living. One of these is to converge the offline and online ecosystems of productivity, and Google Lens has been one of the facilitators.
According to a May 7 blog post by Lou Wang (Group Product Manager for Google Lens and AR), “You can already use Lens to quickly copy and paste text from paper notes and documents to your phone to save time. Now, when you select text with Lens, you can tap ‘copy to computer’ to quickly paste it on another signed-in device with Chrome. This is great for quickly copying handwritten notes (if you write neatly!) and pasting it on your laptop without having to retype them all. Copying text to your computer requires the latest version of Chrome, and for both devices to be signed into the same Google account.”
However, it is up to the user where they want that text pasted, be it the browser’s search bar or a Google document (Docs, Sheets or Slides). The handwritten-to-typed text facility has many potential use cases: transferring meeting notes into an email for colleagues, sharing brainstormed lesson plans with fellow teachers and students, or digitising drafts for authors who prefer writing their stories by hand.
Other new features
In the same blog post, Wang writes that Google Lens has added a feature to help users learn new words and how to pronounce them. This is aimed at people who are making the most of their lockdown time by learning a new language. “If you’re using the extra time at home to pick up a new language, you can already use Lens to translate words in Spanish, Chinese and more than 100 other languages, by pointing your camera at the text,” he writes. According to Google Trends, searches for ‘learn a new language’ have more than doubled. Pronunciation is important too, and Wang continues, “You can also use Lens to practice words or phrases that are difficult to say. Select the text with Lens and tap the new Listen button to hear it read out loud — and finally figure out how to say ‘hipopótamo!’”
Google has also made sure that tools like G Suite are more accessible to professionals, students and teachers working from home. Google Lens’ third new feature now helps you understand unfamiliar concepts by linking to verified sources online: “If you come across a word or phrase you don’t understand in a book or newspaper, like ‘gravitational waves,’ Google Lens can help. Now, with in-line Google Search results, you can select complex phrases or words to quickly learn more.”
Existing features in Lens include scanning and translating text, scanning a physical product to bring up similar products and where they can be purchased, scanning dish names on a menu to see what they might look like, scanning a physical landmark or building to get its history and opening hours, and even identifying flora and fauna.
Into the Lens
Rajan Patel, Google’s Director of Augmented Reality, penned a Google AI blog post on September 4, 2019 about Google Lens’ inner workings and its early days. He writes, “To build a universal tool that can reliably capture high-quality images with minimal lag, we made Lens in Google Go (a light app version of Google, using minimum data) an early adopter of a new Android support library called CameraX. Available in Jetpack — a suite of libraries, tools, and guidance for Android developers — CameraX is an abstraction layer over the Android Camera2 API that resolves device compatibility issues so developers don’t have to write their own device-specific code.”
The text recognition tool is more intricate, of course; the AI needs to make sense of the shapes and letters that constitute words, sentences and paragraphs before applying optical character recognition.
A two-step process follows, using the Hough Transform and Text Flow, which together help the AI understand different fonts. Understandably, there can be a margin of error on Lens’ part, and to correct these mistakes, Patel explains, “Lens in Google Go uses the context of surrounding words to make corrections. It also utilises the Knowledge Graph to provide contextual clues, such as whether a word is likely a proper noun and should not be spell-corrected.”
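To give a flavour of the Hough Transform step Patel mentions, the sketch below is a toy illustration, not Google’s implementation: a minimal, pure-Python voting scheme that finds the dominant straight line through a row of “text” pixels, which is the basic mechanism the Hough Transform uses to group detected characters into a line of text. The image, function name and parameters are all invented for the example.

```python
import math

# A tiny 7x7 binary "image": a horizontal run of text-like pixels on row 3.
image = [[1 if y == 3 else 0 for x in range(7)] for y in range(7)]

def hough_dominant_line(img, angle_steps=180):
    """Classic Hough voting: every foreground pixel votes for all
    (rho, theta) lines that pass through it; the busiest bin wins."""
    votes = {}
    for y, row in enumerate(img):
        for x, pixel in enumerate(row):
            if not pixel:
                continue
            for t in range(angle_steps):
                theta = math.pi * t / angle_steps
                # Distance from the origin to the line through (x, y) at angle theta
                rho = round(x * math.cos(theta) + y * math.sin(theta))
                votes[(rho, t)] = votes.get((rho, t), 0) + 1
    return max(votes, key=votes.get)

rho, t = hough_dominant_line(image)
# The winning bin has theta near 90 degrees (a horizontal line) at
# distance rho = 3 from the origin -- the text baseline on row 3.
print(rho, t)
```

In a real OCR pipeline the same voting idea operates on detected character centres rather than raw pixels, letting the system recover text lines even when the page is photographed at an angle.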
Given that this framework dates from Lens’ early days, Google has likely upgraded the technology to support the new features announced on May 7.
One can access Google Lens by opening the built-in Camera app on an Android device, heading to ‘More’ and then selecting ‘Lens.’ Those with iPhones can download and install the Google app, which has Lens as an in-app feature. However, the Listen tool is yet to roll out on iOS.