Prof. Yoav Shoham, in February, you published an article in "Fortune" magazine in which you described the way organizations currently work with AI as "Prompt and Pray," meaning they have to pray that AI will provide a reliable answer. You said your biggest concern was that AI was "unreliable." What did you mean?
"What I mean is that you can't depend on it, especially for complex tasks. In the business world, if you’re brilliant 95% of the time but talk nonsense 5% of the time, you’re dead."
How does this manifest in practice?
"There are companies testing hundreds of AI projects, but in the end, only a small number-just 6%, according to AWS (Amazon Web Services, H.W. and A.G.)-are actually launched. The main reason is inaccuracy of the models: they’re brilliant, but they sometimes generate complete nonsense. You can’t even recognize it, because within the large text you receive, there are errors and fabricated content hide."
Your language model, Jamba, is the only large AI model developed in Israel. Large language models are the basis for many well-known chatbots like ChatGPT. Does Jamba never make mistakes?
"We’re talking about almost zero errors. At the organizations using Jamba, we’ve come across almost nothing bizarre. For example, we work with One Zero, a digital bank. They have an AI-based customer service system and want to provide reliable automated responses to customers. Think of questions like, 'What is my checking account balance?' or 'What is the best investment option for me?'"
But Jamba seems to be lagging behind. We don’t see it at the top of AI model rankings, like Hugging Face’s. The top positions are taken by models from OpenAI, Meta, Anthropic, Alibaba, and French company Mistral.
"We work with enterprises, and they don’t pay much attention to these rankings. We didn’t invest in an AI chatbot for the general public, which would have helped strengthen our branding, the way those companies did. That built their value. Did we make a mistake by not launching a chatbot? I don’t know. It’s very expensive and requires a great deal of work. So, the question is whether we want to raise a few hundred million more dollars to do it. So far, AI21 hasn’t invested much in branding, and we’re working to change that."
Last week, you launched the Maestro system, which helps organizations reduce errors even when using competing models like OpenAI’s GPT and Anthropic’s Claude Sonnet. Is this a major strategic direction for AI21?
"Yes, this is the future of the industry. The various models will always have some degree of inaccuracy. So what’s missing is an AI system that adds oversight and increases reliability."
Would you consider AI21 a success? Is it on the road to profitability?
"I'm never satisfied. The day I stop worrying, my investors should worry. At any rate, it’s too early to say whether this is a success or not. As for profitability-most companies you talk to in this field don’t even know how to spell the word."
What’s your goal?
"My goal is to create the smartest AI in the world, that enterprises can use reliably, just as they trust traditional technology. If every organization in the world uses our software, that will be a success."
Your competitors, like OpenAI and Anthropic, raise billions of dollars every year and are valued at tens of billions. That hasn’t happened for AI21. To date, you’ve raised only a few hundred million.
"Money is important, but it's not everything. We’re not terribly interested in creating a model that can draw a donkey on the moon. We don’t play the game of throwing every piece of data we can find onto a mass of processors and seeing what happens, and launching a chatbot that’s just a crowd-pleaser along the way. Our technology, which is highly intelligent, doesn’t require that kind of wastefulness. The amount we’ve raised so far-$336 million-is still money. But it’s not billions."
"Neural networks are simpler than the human brain"
Two years ago, Prof. Geoffrey Hinton, a pioneer in neural networks, the basis of generative AI, left Google to warn about the dangers of AI. You know him personally. What’s your opinion?
"I have great respect for Geoff, but I disagree with him. Can machines compete with our intelligence? That’s a very open question. My opinion is that no one really knows. It’s worth thinking about. But from there to death notices for the human race, I think that’s a stretch."
What’s your guess?
"My guess is that in the short term, over the next 20 years, machines will increasingly empower us, allowing us to do things we couldn’t do before. We’ll stop doing some tasks because machines will do them better. That’s the nature of technology. In the long term, say, 200 years, the question will be about where the boundary between human and machine lies. Will there be brain-machine communication, or will machines be integrated into our bodies? That’s the real question-not whether machines will replace us, but where we draw the line between us and them."
On a recent "Globes" podcast, Lemonade CEO Daniel Schreiber said, ‘There is nothing the human brain can do that silicon can’t.’ Do you agree?
"Does a machine think the way we do? No one can give you a definitive answer. The human brain is an incredibly complex system. The more we study it, the more we realize how little we actually know about it. The functioning of artificial neural networks is far simpler than what happens in the human brain."
Still, what dangers inherent in AI concern you?
"My biggest concern is not that AI is too smart, but that it’s too dumb. It does amazing things, but it can also be as dumb as a rock. When it makes mistakes, they’re not minor-it talks absolute nonsense.
"What worries me, on the social level, is the erosion of the information layer that is the foundation of liberal democracy. I don’t like terms like 'fake news' or 'post-truth,' but we live in an age of information overload. Some of it is innocent and right, some is innocent and wrong, and more and more of it is not innocent. It’s therefore harder and harder to know what’s true.
"What really scares me is AI’s ability to mimic people’s voices and images. This strikes at something very fundamental in us. When I look at you now, I know it’s you. When I see a familiar face on TV, I recognize them. But I can’t trust that anymore. That keeps me up at night."
About a year ago, Hinton’s student and one of the founders of OpenAI, Ilya Sutskever, established a company called Safe Superintelligence that has offices in Tel Aviv. Its goal is to prevent AI from advancing to dangerous places. Do you think there’s a need for a company like that?
"It was clear that, after leaving OpenAI Sutskever, wouldn’t sit at home and knit. What does he know how to do? He knows how to do AI and train networks at a very high cost. The company he founded is not a philanthropic venture; it’s a for-profit business. So, we need to see what it does in practice-I assume it will be something interesting."
What Israel can learn from France
"You advise the national AI program. The State Comptroller determined that Israel has no AI strategy, no supercomputer, and the budget is very small - only one billion shekels until 2027. We’ve declined in the international rankings in this field.
"You have to take these rankings with a grain of salt. I know this well because I built the world’s leading AI index, the Global AI Vibrancy Rankings. Regarding the state comptroller’s report, I’ll say: we have a good starting point. We have excellent researchers in academia and good AI companies.
"I distinguish between companies that produce AI and those that consume AI-both are important. We have more companies that consume AI than ones that produce it. This is the case worldwide. In Israel, there is the beginning of an AI infrastructure, such as large data centers and research infrastructure at the universities. But if it stops there, it won’t be enough."
Let’s focus, for a moment, on the Israeli researchers.
"If you count at the number of researchers in Israeli academic institutions who focus on core AI, you won’t reach 100. At Stanford University alone (where Shoham taught for 28 years. - A.G. & H.W.), there are 100 researchers working in this field. Israel needs a larger number. How many? I don’t know, but at least double. There are efforts underway to increase this number. The total budget approved so far for the national AI program is one billion shekels. That is still a tiny budget."
How much is needed?
"Ten times more. I say this humbly because resources in Israel are limited, and it’s not my place to determine national priorities. We will never surpass the US, Saudi Arabia, or the UAE in the number of graphic processors, but still, the available resources need to grow. Israel must establish the right connections with organizations worldwide to ensure access to computing power and to contribute our expertise to the global AI community."
The United Arab Emirates is investing vast sums in purchasing GPUs, building server farms, and developing solar and nuclear power plants.
"This is exactly what we’re not able to do. We don’t have the funds or energy for this. We do have supplemental resources: algorithmic capability, and the ability to develop technologies based on it. These are things we’re very good at, and these are the things we must excel at."
The Prime Minister’s Office intends to establish a new AI authority, apparently because it views the programs that were in place to date, including the National AI Program, as insufficient and unsuitable. What do you think?
"Today there is an active, functioning program, and it’s important to understand this. The TELEM Forum that leads it includes the Israel Council for Higher Education (Vatat), the Directorate of Defense Research & Development (Mafat), the Israel Innovation Authority, and the Ministry of Finance, and the Ministry of Science. The Forum exists, and in my opinion the more government backing and the more resources, the better. By the way, a good example of a country that has succeeded in advancing AI is France, and we can learn from them."
What was France’s strategy?
"President Macron decided that AI was a super-important area, and took care to allocate resources and raise its profile internationally. For a country to succeed in this area, relevant professionals must take the lead. You have to make it easy for them, meaning, without bureaucratic constraints, and no politics."
Published by Globes, Israel business news - en.globes.co.il - on March 30, 2025.
© Copyright of Globes Publisher Itonut (1983) Ltd., 2025.