"We delete a million phony accounts a day"

Guy Rosen photo: PR

Facebook VP product management Guy Rosen talks to "Globes" about the effort to keep election campaigning clean, but admits: "The problem is unsolvable."

The Israeli election campaign ending on Tuesday is the first in which a significant part of the campaigning has taken place on social networks and in cyberspace. It can be regarded as the first test for Israeli democracy, and for public discourse in the country in general, since the foreign efforts to influence the 2016 US presidential election.

Following the US election, in which the influence of bots, phony profiles, fabricated reports, and fake news in general was first widely recognized, most of the criticism was directed at Facebook, the world's leading social network. The company, which has 2.3 billion users worldwide, was attacked for failing to detect efforts by Russian groups to influence the election. Facebook, which had held so much promise for democracy - it allowed hitherto unheard groups to have their say and provided a forum that helped inspire the Arab Spring - rapidly became a new threat to the prevailing form of government in the West.

Since then, Facebook has embarked on a new path, recognized the weaknesses of its platform, and begun to implement solutions wherever possible. The company publicizes these efforts and argues that regulation is needed in order to protect elections against threats. Facebook VP product management Guy Rosen, the highest-ranking Israeli at Facebook, tells "Globes" about the company's efforts to prevent external influence on the elections in Israel, responds to unfavorable comparisons between Facebook and Google, and describes the new tools that have been launched. "We're spending more on security this year than all the revenue we made in the year the company held its IPO," Rosen says. Facebook's revenue that year, 2012, totaled $5 billion.

Rosen has been a member of Facebook's global management for the past two years. His job includes oversight of content on the platform and security during election campaigns. He reports directly to Facebook founder Mark Zuckerberg. "A few years ago, there weren't many people working on questionable content. Artificial intelligence wasn't ready for the type of work needed. Since then, we have been switching from reactive work, in which we waited for someone to report a problem, to proactive work, in which we look for things before anyone reports them, or even sees them," Rosen says. "The company has completely changed its attitude to these matters. This is very important to us. This year alone, hundreds of millions of Facebook users will be voting in elections," he explains.

Rosen joined Facebook in 2013, when the US company acquired Israeli company Onavo, of which Rosen was a cofounder, for $150 million. The startup became the foundation of Facebook's R&D center in Israel, and its activity underpinned a controversial VPN app that collected information about usage patterns in apps and on websites. The app reportedly helped Facebook spot trends that led it to acquire competitors or develop features identical to successful ones those competitors had built. In late February, Facebook announced that it would discontinue the app.

"We went a step further than Google"

Facebook has employed a series of measures in the Israeli election campaign, including transparency tools for political messages: advertisers are required to obtain authorization before buying an ad, while surfers are given details about the ads and the range of prices paid for them. Facebook launched fact-checking tools for posts, and even flew a delegation of professional teams to Israel to supervise activity on the platform in the days preceding the election. If a given post proves to be fake news, Facebook reduces its distribution and marks it, unless it was distributed by a politician, in which case it is treated as news.

"Globes": Google completely banned personalized advertising in Israel during the election campaign, but you chose not to take such a drastic step.

Rosen: "We actually took it a step further - we applied transparency to political advertising," Rosen claims. "Developing detection and transparency tools for political advertisements takes a lot of work, and I think that not every company in the industry is willing to dedicate itself to making such an effort. I think that allowing advertising is important, and the ones who can benefit from it are the small parties, which don't have so much money, because that's what online advertising can do. The issue of transparency in ads is one of the bases of our attitude to election campaigns. It has two aspects: preventing foreign intervention by verifying and identifying the people behind an advertisement, and transparency for the surfers, who can see who is responsible for the advertisement. I'm proud that Israel is the fifth country in the world in which we launched this service. It requires adaptation for each location. We accelerated the process because elections were held ahead of schedule, and launched it in mid-March."

The use of phony users is part of the political game in the current election campaign. Last week, "Yedioth Ahronoth" reported that hackers Noam Rotem and Yuval Adam had exposed a network of bots on Twitter, which they believe the Likud was using. The Likud denied any connection to the bot network, but it cannot be ruled out that various groups are operating such networks without our being aware of it.

According to Rosen, Facebook has an advantage in combating fake users: people have to use their real names on the platform. "We always take a look when such activity is exposed elsewhere in order to see whether it exists on Facebook. It's an important matter. In recent years, we have been developing artificial intelligence systems that are closing more than a million phony accounts a day, most of them as soon as they are registered. In addition, we employ security researchers, who also look at the small number of sophisticated networks that slip through the filter and try to distribute content in ways that are not allowed. In many places around the world, we remove such networks of pages and accounts on Facebook and Instagram, including an Iranian network targeting an Israeli audience that we took down in January. This is Sisyphean work; there will always be efforts to get around it."

Is your investment in elections and safety on the platform in general a result of the realization that you have responsibility in this matter, or the realization that you won't survive as a business unless you pay attention to criticism?

"I and the other people working on this are doing it out of a feeling of responsibility. The ways that we measure success, set priorities, and direct the work are not business-oriented. We think about what influences people, where they will be hurt, is where the entire society is liable to be hurt. The effort is a long-term one, and we're making process little by little, but the problems are actually unsolvable. We also regard ourselves as obligated to exercise responsibility, and to be transparent about the work that's being done. That's why we issue the transparency report every six months. I really like delving into the numbers and understanding that there are things that work better and other things that we should have improved. Facebook invests in this because you can't be this size and not do such things."

The algorithms on the social networks create an echo chamber effect that exposes us more and more to views like our own. Isn't this in itself dangerous ahead of an election?

"On WhatsApp, for example, there is no algorithm, and I myself get all sorts of political messages from my family on WhatsApp that I have already sent to check whether they're real. But this means that at bottom, it's a matter of human behavior - people send things that interest them, that they think support their opinions. This exists without any algorithm or feed. What's important on Facebook is knowing how to combine the topic with fact-checking, which is another of our cornerstones before elections. We also started working in Israel in March with the '"Globes" whistle' (full disclosure: the 'whistle' checks facts for Facebook for payment). When the fact-checker refutes a report, we reduce its distribution, so that it doesn't go viral."

"Technology can't solve all the problems"

In addition to election campaigns, Rosen is responsible for supervising content on the platform. Facebook was recently severely criticized following the massacre in New Zealand, which was broadcast live by the shooter. Facebook did not block the clip in real time, and its distribution continued until after the event ended. In a post on Facebook's official blog, Rosen explained that the company's artificial intelligence system did not automatically block the clip because of the way systems of this type work: they require many examples in order to spot such cases. Events like the one in New Zealand are rare, Rosen wrote, so there is not enough data to train the system to identify them.

The explanation that you published illustrates a paradox. There are few examples of the most extreme content, the kind that should be taken off the network. How do you solve this?

"It was a horrifying event. Two things happened there, and it is important to distinguish between them: the broadcast itself and the subsequent mass distribution of the clip, which was not related to the platform on which the original clip was broadcast. Where the broadcast is concerned, artificial intelligence does terrific work, but it isn't perfect. 97% of the graphically violent content that exists today on Facebook is detected automatically before anyone reports it. There is still 3% that is not spotted, because it is less common. Fortunately and unfortunately, there are not many such horrifying clips, so it's important for us to also rely on the human factor and reporting mechanisms. During the broadcast, this clip had 200 views, and not a single report. We believe that many of the views were by people who supported the attack.

"We're doing several things about the broadcast, including broadcasting restrictions, who should and shouldn't be allowed to broadcast. We've already made an initial improvement in the system of reporting clips after the live broadcast, because we didn't respond quickly enough to the report we got. We'll also continue researching the technological aspect, but I don't want to give people the illusion that technology will solve everything.

"It's an area in which we have to mainly cooperate as an industry, because in the end, this event affected all of the companies."

Rosen adds, "We have a regular process for learning lessons that we have applied in recent years whenever there was an event, like the investigatory culture of the air force in Israel, because we want to be an organization that learns. I and the people working in this sphere are dealing with a system on the scale of two billion people, so anything I do, even a slight improvement, affects something real happening in the world. It's crazy. That's what gets me up in the morning."

Published by Globes, Israel business news - en.globes.co.il - on April 8, 2019

© Copyright of Globes Publisher Itonut (1983) Ltd. 2019
