Five Google features developed in Israel

Google Photo: Shutterstock ASAP Creative

The annual Google I/O developers conference heard about features for flood prediction and for helping deaf people and those with speech and reading difficulties.

"I did volunteer work with people with disabilities for years. I saw how, with the aid of technology, their lives could be changed for the better," said Google Israel software engineer Sapir Caduri, who led the development of a feature that will help deaf people communicate through telephone calls. The feature, called Live Relay, was presented last week at the annual Google I/O developers conference. Caduri was speaking at a press conference at Google's offices, where Israeli engineers presented the developments they had led that were shown at the conference.

Live Relay turns the caller's voice into text. The user can respond with text, which the device converts into speech for the caller. The feature is still in development, and no launch date has been given. "I wanted to combine my hobby of making technology accessible to people with disabilities with my work at Google," Caduri says. "Telephone calls are something very personal, so the speech and text decoding takes place on the telephone itself. At no point is information sent to Google's servers," she adds.
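The relay loop Caduri describes can be pictured schematically. In the minimal sketch below, `transcribe_on_device` and `synthesize_on_device` are hypothetical placeholders, not Google's actual APIs; the point is simply that both conversions run locally and the user's typed reply goes back to the caller as audio.

```python
# Illustrative sketch of a Live Relay-style loop. The on-device ASR/TTS
# functions are hypothetical stand-ins, not Google's real implementation.

def transcribe_on_device(audio_chunk: bytes) -> str:
    """Hypothetical on-device speech-to-text; audio never leaves the phone."""
    # Placeholder: a real implementation would run a local ASR model.
    return audio_chunk.decode("utf-8", errors="ignore")

def synthesize_on_device(text: str) -> bytes:
    """Hypothetical on-device text-to-speech."""
    # Placeholder: a real implementation would run a local TTS model.
    return text.encode("utf-8")

def relay_turn(caller_audio: bytes, user_reply: str) -> tuple:
    """One round of the relay: caller speech becomes text for the user,
    and the user's typed reply becomes speech for the caller."""
    caller_text = transcribe_on_device(caller_audio)
    reply_audio = synthesize_on_device(user_reply)
    return caller_text, reply_audio

text, audio = relay_turn(b"Hi, is this a good time to talk?", "Yes, please go ahead.")
```

Because both stages stay on the handset, no audio or transcript needs to reach a server, which matches the privacy point Caduri makes.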

Google VP engineering Yossi Matias, managing director of Google's R&D center in Israel, explained that the model that makes it possible to use artificial intelligence (AI) on the device itself to decode speech and text was developed in Israel, calling it an important breakthrough. Matias added that designing for people with disabilities ultimately benefits the general public. "Such a feature also makes it possible to answer phone calls during a meeting and in noisy conditions," he said.

Matias also said that the feature was developed on the basis of another feature developed in Israel and launched several months ago - Call Screen, which asks unidentified callers what they want and displays a transcript of the answer to the user, who can then decide whether to accept the call. "Since Call Screen was announced, it has become one of the most popular features on Pixel devices," Matias says.

Another feature developed in Israel and presented at the conference that helps people with disabilities, and likewise runs its decoding on the device itself, is Live Caption, which generates automatic captions for any video or audio on a smartphone, including clips filmed by the user, similar to an existing option on YouTube. Michal Ramanovitz, a software engineer on the team that led Live Caption, explained that the feature is activated through the device's volume button. Matias added, "This redefines a telephone's mute. It's not that I don't want to know what the telephone is saying; I just don't want the noise."

Google's accessibility efforts also extend to people with reading difficulties, in developing countries, for example. Another feature presented at the conference whose development was led by Israelis is Google Lens for the Google Go app, designed for developing countries. The app lets users photograph signs, for example, analyzes the text in the picture, and reads it aloud; it can even translate the text into other languages.
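The pipeline described here - photograph, text extraction, optional translation, reading aloud - can be sketched as a simple chain of steps. Everything below is a toy placeholder for illustration only: `extract_text` stands in for an OCR model, and `TINY_DICTIONARY` stands in for a real translation system.

```python
# Schematic of the Lens-in-Google-Go flow: photo -> text -> translate -> speak.
# All components are hypothetical placeholders, not Google's actual models.

TINY_DICTIONARY = {"salida": "exit", "peligro": "danger"}  # toy es->en lookup

def extract_text(image_of_sign: str) -> str:
    # Placeholder OCR: a real app would run a vision model on the photo.
    return image_of_sign

def translate(text: str) -> str:
    # Toy word-by-word lookup standing in for a translation model;
    # unknown words pass through unchanged.
    return " ".join(TINY_DICTIONARY.get(w.lower(), w) for w in text.split())

def read_sign_aloud(image_of_sign: str) -> str:
    # A real app would hand the final string to a text-to-speech engine.
    text = extract_text(image_of_sign)
    return f"[speaking] {translate(text)}"

print(read_sign_aloud("Salida"))  # -> "[speaking] exit"
```

The value of chaining the steps this way is that each stage can be swapped independently, e.g. adding a language without touching the OCR stage.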

Another project in the research stage is Euphonia, aimed at adapting Google's speech recognition for people with speech difficulties caused by diseases such as ALS, which the technology currently has trouble understanding. Dotan Emanuel, who heads a research team in Google's AI division, said that Google Israel had begun working on the project a year ago. "It's sometimes hard for systems to understand non-standard speech, but the patient's family understands it. We thought that if they are capable of adjusting themselves, our models can do it, too," Emanuel explained. "We started collecting recordings of patients with speech disabilities and training personal models for them. The patients involved often also suffer from motor disabilities, so activating home devices with a smart speaker, for example, could help them, but the system doesn't recognize their voice instructions." Emanuel expressed hope that enough data would be collected to train a more general voice recognition model, but added, "Until that happens, we can help quite a few people individually, and that's a worthy goal."

While Google is trying to make its products accessible to as many groups as possible, including people with disabilities and people who have trouble reading and writing, users in Israel still cannot use Google's speech recognition capabilities, which do not yet work in Hebrew. When Matias was asked whether the voice services would be available soon, he answered, "Our goal is to eventually make it work in all languages, but there is more information in English, so we start from there."

Besides services based on voice and text analysis, Google's flood prediction tool, whose development was led by Israeli engineers, was also presented, and an update for it was announced at last week's conference. "In crises like natural disasters and terrorist attacks, it's important for people to have access to information in order to take care of themselves," Matias said. "Nine years ago, when the Carmel forest fire started, I was at our development center in Haifa. There wasn't much information on the Internet, and I didn't know whether the office had to be evacuated. The police had information, and I asked the team to make sure that everyone could find it with a search. Two years ago, a team from Israel and the rest of the world launched a search product called SOS Alerts, which makes important information available in search less than an hour after the start of a significant event. Since we launched it, it has been activated in 250 events, and the information has been viewed two billion times."

Staying with crises, Matias said that Google had learned that floods affect over 250 million people worldwide, and that advance warning could protect them better; this led to the launch of the flood prediction tool.

Sella Nevo, who heads the Google AI research team leading the company's flood prediction efforts, said, "Floods are the most common and deadliest natural disaster. They affect hundreds of millions of people around the world and kill thousands or tens of thousands of people annually, but one third to one half of the fatalities and economic damage can be prevented by advance warning. The problem is that in developing countries, where most of the flood fatalities take place, the ability to provide precise warning is lacking. At Google, we take information from optical satellite photographs and other sources, predict how water will move on the land, and generate accurate assessments. We give active warnings on Android to users in dangerous places, show safe and dangerous places on Google Maps, and make the information about the event available in search. We did the first pilot last year in one city in India, and this year we're expanding the coverage to millions of people along the Ganges and Brahmaputra Rivers, two of the world's biggest rivers."
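As a rough illustration of the inundation-mapping idea Nevo describes, the toy model below marks grid cells as flooded when they are both below a forecast water level and connected to the river. The real system uses far more sophisticated hydraulic modeling over satellite-derived elevation data; this only shows the basic principle that low ground floods when water can reach it.

```python
# Toy inundation map: breadth-first flood fill from river cells over an
# elevation grid. A simplified illustration, not Google's actual model.
from collections import deque

def inundation_map(elevation, river_cells, water_level):
    """Return the set of (row, col) cells that end up under water."""
    rows, cols = len(elevation), len(elevation[0])
    flooded = set()
    queue = deque(river_cells)
    while queue:
        r, c = queue.popleft()
        if (r, c) in flooded or not (0 <= r < rows and 0 <= c < cols):
            continue
        if elevation[r][c] > water_level:
            continue  # ground higher than the water stays dry
        flooded.add((r, c))
        # water spreads to the four neighboring cells
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return flooded

dem = [
    [5, 1, 4],
    [5, 1, 1],
    [5, 5, 2],
]
# A river enters at cell (0, 1); forecast water level is 2.
print(sorted(inundation_map(dem, [(0, 1)], 2)))  # -> [(0, 1), (1, 1), (1, 2), (2, 2)]
```

Note that the low-lying cell at (2, 2) floods only because a connected path of low ground leads to it from the river; an equally low but isolated basin would stay dry in this model.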

Nevo explained that the focus right now is on large floods from rivers overflowing their banks, but that in the future, they hoped to predict flash floods that occur in the Negev, for example.

Published by Globes, Israel business news - en.globes.co.il - on May 16, 2019

© Copyright of Globes Publisher Itonut (1983) Ltd. 2019
