"Sorry I'm late," Dr. Chris Brauer apologizes. "I was preparing a bot for a bank, and it got a little crazy. We had to correct it."
"Globes": How does a bot go crazy?
Brauer: "When you give a learning machine too much freedom, or when you let the wrong people work on it, you get an unpredictable, inefficient machine that is sometimes racist."
This statement began the meeting with Brauer, who owns a creative media consulting firm and founded the Centre for Creative and Social Technologies at Goldsmiths, University of London. He will address next week's Globes Israel Business Conference in Tel Aviv. He immediately explains: "A bot is actually software that learns how to respond through interactions with its surroundings. We teach it how to respond to a given set of situations, and it is then supposed to generalize from these examples and respond to new situations. It receives feedback on its decisions - whether it was right - and improves its next decision according to that feedback."
This is similar to how a child is taught to recognize a dog, so that the definition will include all types of dogs, but not all the other animals with four legs and a tail. First he is shown a dog and told, "This is a dog," and then he is allowed to point at dogs, cats, and ferrets in the street. Only when he correctly points at a dog is he told that he was right, and his ability to identify a dog improves.
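The feedback loop Brauer describes can be sketched in a few lines of code. The following toy perceptron learns "dog" versus "not dog" from invented feature vectors; the features, examples, and update rule are purely illustrative, not anything Brauer's bots actually use:

```python
# Each animal is (has_four_legs, has_tail, barks); label 1 means "dog".
# The examples and features are invented for illustration.
examples = [
    ((1, 1, 1), 1),  # dog
    ((1, 1, 0), 0),  # cat: four legs and a tail, but no bark
    ((1, 1, 0), 0),  # ferret
    ((0, 0, 0), 0),  # bird
]

weights = [0.0, 0.0, 0.0]
bias = 0.0

def predict(features):
    s = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if s > 0 else 0

# The feedback loop: guess, get told whether the guess was right,
# and nudge the weights when it was wrong.
for _ in range(10):  # a few passes over the examples
    for features, label in examples:
        error = label - predict(features)  # 0 if right, +/-1 if wrong
        weights = [w + error * x for w, x in zip(weights, features)]
        bias += error

print(predict((1, 1, 1)))  # a barking, four-legged animal -> 1 (dog)
print(predict((1, 1, 0)))  # four legs and a tail, no bark -> 0
```

After a few rounds of feedback the machine has, in effect, discovered that barking is what separates dogs from the other four-legged animals with tails - exactly the kind of generalization, good or bad, that Brauer says depends on who gives the feedback.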
When the pound fell for no real reason
"Every bot has different degrees of freedom," Brauer says. "It can be restricted by setting many hard and fast rules in advance about what it is and isn't allowed to do, but then you get a rather hidebound bot that does not benefit from all the advantages of machine learning. On the other hand, if you allow it too free a hand, the bot is liable to make generalizations that we don't like." One example is Google's bot, which mistakenly labeled certain people as animals.
"We also have to decide who is entitled to teach the bot," Brauer continues. "If we let an entire community of participants prepare the bot and give it feedback, we get a very effective and powerful bot within a short time and with little effort. This, however, is like sending a child to a school where you know nothing about the teachers or the study plan. Sometimes, the community will teach the bot undesirable things, and sometimes it does this deliberately. That's what happened, for example, when Microsoft wanted to teach its bot to speak like a 10-year-old child. Microsoft sent it to talk with real little girls, but someone took advantage of this by deliberately teaching the bot terrible words that destroyed it rather quickly."
People are dangerous to machines, and machines are dangerous to people.
"Absolutely. Machines were responsible, for example, for the drop in the pound following the Brexit events, and the process by which they did this is not completely clear to all those involved to this day. It is clear, however, that the pound fell sharply without people having made an active decision that this should be the pound's response to Brexit. It simply happened one day all of a sudden because of the machines. Only when they investigated this did they discover that the fall had occurred right around the time when a certain report was published in the "Financial Times." No one thought that this report said anything previously unknown, but for some reason, it was news to this machine.
"The mystery is that we don't know what in this report caused the machines to sell the pound at the same moment, what information was in the report, or what was the wording that drove the machines to sell. In a world in which machines are responsible for 90% of trading, they don't wait for approval from us, the human beings. They act first, and don't even explain afterwards."
New experts in machine relations
Brauer says that such incidents created a need for "machine relations experts" - people whose job is to try to predict how certain actions by a person will affect how machines making decisions about him or her will act.
For example, Brauer now works with a public relations company. The job of such a company is to issue announcements to newspapers written in a manner that will grab the attention of human readers, and especially the attention of journalists whose job is to process these reports and use them as a basis for newspaper stories. This, however, is changing. Today, a sizeable proportion of press releases pass through some kind of machine filter before they get to the journalists. In the future, this will be the norm. "Because of the large amount of information and the need to process it at super-human speed, we have to delegate a large proportion of our decisions to machines," Brauer explains. "A journalist who doesn't let a machine sort press releases for him, and insists on sorting them by himself, will not produce the output required of him."
The public relations firms will therefore have to write press releases that catch the attention of a machine, not a journalist. In the near future, people will jump through hoops trying to understand the machine reading the press releases in order to tailor them to it. Later, the machine will also be doing the writing, or at least will process the press release into a language that other machines like. Bit by bit, people are losing control of the process, and even their understanding of it - the transparency.
This is true not only for journalists. For example, take profiles on dating websites. Machines are already deciding which profiles will see which people surfing the site. In the future, there will be advisors for helping you devise a profile for the website that a computer, not necessarily your prince charming, will like, because if you don't do this, the computer won't send prince charming the profile. You can hope as much as you want that your special beauty will shine out, and the right man will spot it, but not in the new era. If it doesn't pass the machine, it won't get you a date.
That's also how it will be in looking for a job when a machine is the one matching employers and CVs, or between entrepreneurs and investors. Even today, when you throw a joke out into space (Facebook or Twitter space, of course), and you want someone to hear it, it first has to please a machine.
"We're talking about a search engine optimization (SEO) world. Up until now, we have optimized our websites for the search engines' benefit. Tomorrow, it will be the entire world," Brauer declares.
To get back to the "Financial Times" Brexit story, public relations firms also have to speak with machines, and journalists also have to realize that they're talking with machines, and that the stories they write activate many machines whose actions are not necessarily rational.
"That's right. A reporter must know that what he writes can directly set machines in motion, in contrast with human readers, who are supposed to exercise some kind of judgment. The press may be more influential in such a world, if that's what it wants."
That sounds frightening.
"I'd like people to begin designing the machines so that we will at least be able to retrospectively understand what led them to make a given decision. There should be documentation for every machine, an 'anti-machine' that will follow and report what's happening in real time, so that people can intervene and tell the algorithm, 'I saw you, algorithm! I know what you tried to do!' I want to believe that in 2025, there will be no more events like Brexit, in which months afterwards, we still haven't understood why the machines acted the way they did."
People are superior to machines
The world of 2025 will be the subject of one of the sessions at the Israel Business Conference taking place in December, in which Brauer will take part. A former consultant on developing technologies for PricewaterhouseCoopers who now owns his own consulting company (he is also director of Creative Industries at investment bank Clarity Capital), Brauer counts the need to flatter machines as only one of his technological predictions.
"The Internet of Things is expected to substantially alter the energy industry," Brauer says. "We are seeing a change in the direction of much better adaptation of energy consumption to the consumers' real needs, and differential energy pricing at times when it is in short supply. For example, the dishwasher will operate itself at times when energy is cheap and available, and will be aware of availability in real time, because all the devices will be connected, not just for the user's convenience in a smart home. When it is working, this dishwasher will also be able to consume energy from 15 different suppliers, and to automatically switch between them. It will change the energy companies' market from top to bottom, because like all of us, they too will be marketing to your machine, not to you."
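The supplier-switching behavior Brauer predicts for the connected dishwasher can be sketched as a simple selection rule. The supplier names, prices, and dictionary-of-quotes interface below are all hypothetical, invented only to illustrate the idea of an appliance choosing among real-time offers:

```python
def cheapest_supplier(quotes):
    """Pick the cheapest currently available energy supplier.

    quotes: dict mapping a supplier name to its current price per kWh,
    or None if that supplier has no capacity available right now.
    Returns the name of the cheapest available supplier, or None if
    nothing is available (the appliance would then wait and retry).
    """
    available = {name: price for name, price in quotes.items() if price is not None}
    if not available:
        return None
    return min(available, key=available.get)

# A hypothetical snapshot of real-time quotes the appliance might see:
quotes = {
    "GridCo": 0.31,
    "SolarShare": 0.12,
    "NightRate": None,  # not available at this moment
}
print(cheapest_supplier(quotes))  # -> SolarShare
```

In practice the appliance would re-run this decision as quotes change, which is exactly why, as Brauer says, the energy companies end up marketing to your machine rather than to you.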
Will the machines leave work for people, other than as machine relations managers, of course?
"We have always known that technology increases output. This happens mainly in places where decisions are deterministic, for example in medicine, where treatment takes place in the framework of a clear protocol. In such a world, there is ostensibly no need for a doctor, or at least not for many doctors. The few that remain will be the technology controllers, or will be consulted only in the difficult cases that a machine can't solve. You can see that the new technology improves employees' output. Instagram attained a billion-dollar value with only 12 employees, and they reach the same number of people as 'The New York Times'.
"People see this, and are fearful, but I say, 'Let's regard this period as our emergence from slavery.' You could say that up until now, because we didn't have the technology we really wanted, too many people worked in imitation of machines, and that detracted from their ability to be human beings. Now we can let the machines be machines, and people will prosper in all sorts of places where creativity is needed that is beyond a machine's capability. People will flourish when they are able to think critically, with values and nuances, about every good database they get from machines. People will do what they were always meant to do."
Is everyone cut out for these professions? Will they have enough work?
"I don't believe that any person is only capable of thinking like a machine. Our society has made them develop this way by pushing people into doing a machine's work. We're now learning how to change education and the enterprise environment so that all people will be able to do the work of people."
In order to prove his point, Brauer, together with a senior "Financial Times" journalist (not the one who pushed down the pound; a different journalist, named Sara O'Connor), examined an algorithm that writes newspaper stories. "The machine issued quite a good summary that included all the important facts, but it missed the real story - what all the human readers agreed was the story after reading Sara's article. That's what a good reporter does - sees the contexts that are not immediately accessible, asks the right question, and fills in what's missing. This, at least, will characterize the reporter of the future, and it will be the same with all the other professions. Anyone who rises above all of them today in professional creativity, whether it's a politician or an accountant, will be the model for how this profession will appear in the future."
And all humans will have to work with machines in order to achieve the output expected of them.
"Anyone who doesn't will be useless. They will have no place in the hyper-productive future."
Published by Globes [online], Israel business news - www.globes-online.com - on December 4, 2016
© Copyright of Globes Publisher Itonut (1983) Ltd. 2016