Israeli Startup Develops 'Ultimate Truth Machine' – Claims Assad Wasn't Behind Chemical Attack

Who shot down Malaysia Airlines Flight 17? Does Donald Trump wear a wig? Israeli entrepreneur Saar Wilf says his company Rootclaim knows the answers to these and many more questions – with a probability of close to 100%

Saar Wilf. The force behind the Israeli startup that’s building the ultimate truth machine.
Eyal Toueg

"Intelligent and rational people can look at the same information and reach totally contradictory conclusions, and everyone is absolutely certain their conclusion is the right one," says Israeli entrepreneur Saar Wilf, whose latest venture claims to be able to find answers to "the issues that interest society."

The startup's goal is to transform people's understanding of complex and controversial matters. And with everyone from heads of state to the media accusing each other of propagating fake news, the timing couldn't have been better.

The 42-year-old's company, Rootclaim, looks at any event – for instance, a murder case or the 2013 chemical weapons attack in Syria – objectively, using mathematical models of probability theory instead of relying on intuition.

Is the media really helping us make sense of the world?

After selling his first and second startups (the second, Fraud Sciences, sold to PayPal in 2008 for around $169 million), Wilf has worked with companies involved in computer vision technologies, algorithms and artificial intelligence. He currently invests in about 15 companies, including Rootclaim, with the money he made earlier in his career. "Mainly [I want] the ability to finance other projects and set up things," he says. "I don’t buy yachts."

Rootclaim doesn't purport to "reach a perfect solution," but Wilf believes it's possible for its algorithm to be correct with 90 percent certainty.

"They use the term 'less wrong,'" he says. "This is a company that wants to change the way we understand the world around us; the way we communicate information and insights, and the ways we cooperate."

The contradictory conclusions "should not be happening in a system that draws conclusions in a measured, calibrated manner," says Wilf. "In fact, each of us holds a number of beliefs about the world that we’re certain are correct."

And they’re not?

They’re not. It’s strange that this happens, and the question is why.

What’s the answer?

The wreckage of Malaysia Airlines Flight 17 in 2014. 'We haven't seen one group that has accused both the Russians of the Malaysian plane attack and the Syrian opposition of the chemical attacks.'
AFP

We have some tough bugs in our brains, which don’t allow us to deal with complexity and uncertainty.

It’s funny that we also argue about things that have already happened.

Wherever there’s a lot of information mixed in with uncertainty – there’s an argument.

Three riddles from Rootclaim.

For example, take the 2005 disengagement of Israel from Gaza. Half the people say it harmed Israel and that we’ve suffered from terror attacks and wars ever since, while the other half says it was inevitable and that we’d be in a worse security and diplomatic situation now without it.

That's an excellent example. To assess whether the disengagement had a positive effect, you can't just examine how many missiles and casualties we've had since then. That's a one-dimensional way of looking at a super-complex system. Terror is influenced by everything that happens around the world. For example, the decision to bring in foreign laborers had a profound impact on the Palestinian economy and society in the 1990s – and that is only one parameter. To analyze an issue as complex as the disengagement, you have to examine at least 100 parameters before the analysis reaches a 90% level of certainty. It's months of work.

Soldiers lead an Israeli family out of their home in Kfar Darom, Gaza, during the disengagement, August 18, 2005.
Nir Kafri

What does it involve?

Gathering data, building complicated mathematical models, calculating probabilities. The public debate is nonsense. The brain can’t grapple with such large amounts of information and uncertainty, so everyone attaches different weight to facts and filters information in a way that suits them – and then different conclusions emerge.

The limitations of the human brain

Last month, after the terror attack at the Israeli settlement of Har Adar, MK Bezalel Smotrich (Habayit Hayehudi) tweeted: "It's the usual ritual: As soon as we start talking of renewing diplomatic negotiations, we pay the painful price. Hope is the force that drives murderous Arab terror. Every time we awaken it, it rises up and attacks us. Only removing hope will end terrorism."

It’s the same thing as with the disengagement issue. These are intuitive statements that try to placate the listener.

A man holds the body of a dead child among bodies of people activists say were killed by nerve gas in the Ghouta region outside of Damascus, Syria, August 21, 2013.
REUTERS

The approach from the left is also usually simplistic – that an economic and diplomatic horizon will help stop the terror.

People are complex. To analyze what increases a person's chances of deciding he's willing to sacrifice his life in order to kill others, you have to look at dozens of factors, analyze how each correlates with a person's determination, and analyze the interdependencies between them. Then, perhaps, you can say something smart. Thinking that a single factor – such as concessions, or poverty – determines what people do is childish. Human intuition is not enough here; deep analysis is required.

But that’s how we make decisions here.

Particularly at the level of elected officials. Discussions rely more on data at the professional levels, but they also ultimately suffer from the limitations of the human brain.

And you think you can solve this?

Yes – but let's not exaggerate: we can't reach a perfect solution. I do argue, though, that it's possible, using the available research methodologies, to move humanity to decisions that are 90% valid. This will improve things and minimize errors. At Rootclaim, they use the term 'less wrong.'

Maybe such a debate on political decisions could be held in a courtroom and the judges would decide.

The legal system is one of the places where limitations in understanding probabilities cause enormous damage.

Do judges make mistakes?

Despite their experience and training, judges’ brains repeatedly lead them to make unreasonable decisions.

Can you give us an example?

[In 2013], an Israeli district court found a man called Nissim Hadad guilty “beyond reasonable doubt” of sodomizing a baby. In a minute’s calculation, you could show that the probability of guilt was less than 1%.

Less than 1%?

Yes, this is a well-known failure in human decision-making called the 'prosecutor's fallacy.' It's a failure in which we focus only on the strength of the evidence while ignoring the general incidence of the phenomenon. Probabilistically, the rarer or unlikelier the phenomenon we're looking at, the stronger the evidence needed before we can be convinced that it has really occurred this time.
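
To make the fallacy concrete, here is a minimal sketch in Python. All the numbers are invented for illustration – they are assumptions, not figures from any real case:

```python
# Prosecutor's fallacy, sketched with Bayes' theorem (all numbers are invented).
# H: the rare crime actually occurred.  E: forensic evidence points to it.

base_rate = 1e-7        # assumed prior P(H) for a very rare event
p_e_given_h = 0.99      # assumed P(E | H): the evidence appears if the crime occurred
p_e_given_not_h = 1e-3  # assumed P(E | not H): the evidence appears by error anyway

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * base_rate + p_e_given_not_h * (1 - base_rate)
posterior = p_e_given_h * base_rate / p_e

print(f"P(guilt | evidence) = {posterior:.2%}")  # ~0.01%, despite 'strong' evidence
```

The evidence is strong on its own terms, yet the posterior stays tiny because the prior is so low – which is exactly the step the fallacy skips.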

And in Hadad’s case?

It's quite simple. Raping a baby is a very rare event, with a probability of perhaps one in tens of millions. The judges focused on the evidence – a pathology report according to which the baby suffered an injury that could have been caused by rape.

That’s strong evidence.

Yes, it’s a tangible piece of forensic evidence – the kind that courts love.

So where’s the problem?

They forgot to consider the rarity of such events. The chances of human error in the pathological examination were much higher than the chances that Hadad wanted to rape a baby. That's even before considering that nothing in his past indicated such serious deviancy, or the bizarre claim that he chose to commit the crime in a gym with glass doors.

Who carried out the chemical attack in Ghouta on August 21, 2013?

The probability is low, but could he have done it anyway?

Everything is possible. It's also possible that I raped that baby – it's all a question of probabilities. But the chances that it was him were one in a thousand, and a judge who is an expert in assessing this likelihood said Hadad was guilty at a 95% level of certainty (the definition of reasonable doubt). That's an example of a brain bug.
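
Wilf's 'one in a thousand' and the court's 95% can be put side by side in odds form. The prior and the strength of the evidence below are my assumed reading of the magnitudes he cites, purely for illustration:

```python
# Contrasting the computed probability with the conviction threshold.
# Both inputs are assumed readings of the magnitudes cited in the interview.
prior_odds = 1 / 10_000_000  # such a crime occurring at all: ~one in tens of millions
likelihood_ratio = 10_000    # assumed strength of the forensic evidence

posterior_odds = prior_odds * likelihood_ratio  # = 1/1,000
p_guilt = posterior_odds / (1 + posterior_odds)
print(f"P(guilt) ≈ {p_guilt:.1%}, far below the 95% 'reasonable doubt' bar")
```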

In the end, the Supreme Court acquitted Hadad.

But only after he'd been in jail for four and a half years – and he was acquitted only due to reasonable doubt, by a judicial vote of 2-1. It's not that they believed he was innocent. If the probability that he raped the baby was one in a thousand, he should have been unconditionally acquitted.

But the legal system doesn’t work according to numbers and statistics.

Therein lies the problem.

Should it use those methods?

Certainly. I say “certainly” since I know it’s possible. There are excellent mathematical tools for this.

Do you want a computer to replace judges?

No, a computer shouldn’t be a judge – but it could help in making decisions. If that happened, far fewer innocent people would be in jail and far fewer guilty people would be walking around free – and in general there would be far fewer mistakes. Hadad was eventually released, but there are similar cases where people have not been acquitted.

Like who?

Suleiman al-Abeid, who was convicted [in 1993] of murdering Hanit Kikos; the five people convicted of murdering Danny Katz [in 1983]; the four convicted of murdering Dafna Carmon [in 1982]. They were all convicted due to the same failure of relying on moderately strong evidence while ignoring the very low probability that they were guilty. These were all people who were very unlikely to have committed murder, which means very strong evidence was required to be convinced of their guilt. In all these cases it was 100 times more likely that they were innocent and that there was some mistake in interpreting the evidence.

And they are all still in jail now.

Some of those convicted of murdering Danny Katz have been released, but the others remain in prison.

That’s disturbing, even shocking.

Very much so. The system tends to convict weaker people – this is one of its limitations. Everyone knows this, but no one does anything about it. It's true that weaker segments of society are more involved in violent crimes. But even taking that into account, it's clear that false convictions are more common among them. This is especially noticeable with people who don't have a good lawyer, don't know the language or don't know how to deal with pressure.

So was that baby raped or not?

No, probably not. The issue comes down to two possibilities: either a totally ordinary person decided to rape a baby in a public gym with glass doors – perhaps a one-in-a-billion event, perhaps without precedent – or the baby was injured or had internal damage that a doctor misdiagnosed as penetration.

The doctor erred.

Doctors usually make the right diagnosis, but they aren’t right 999,999 times out of a million. It’s a rare mistake, but raping a baby is rarer – making this an issue that’s easy to decide. Perhaps there was a rape, but there is a hypothesis that is 1,000 times more likely.
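
The comparison can be written out directly. A back-of-the-envelope sketch, where both rates are illustrative assumptions rather than measured figures:

```python
# Weighing the two competing hypotheses (both rates are illustrative assumptions).
p_misdiagnosis = 1e-6  # assumed chance a doctor misreads an injury as penetration
p_baby_rape = 1e-9     # assumed chance of the crime itself, a far rarer event

ratio = p_misdiagnosis / p_baby_rape
print(f"misdiagnosis is ~{ratio:,.0f} times more likely")  # ~1,000, as argued above
```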

So how were the judges convinced that the baby was raped?

That's the whole thing: people are convinced by things that are easy to understand. It's related to the way our brain processes information. We like focusing on one piece of evidence whose analysis serves as the proof. But that almost never happens.

What doesn’t happen, a smoking gun?

Yes, there is no alternative but to simply weigh the evidence presented by all sides. The fantasy of one piece of evidence that resolves everything doesn't exist – but that doesn't mean you don't know anything. There are ways of arriving at certainty that are no less valid; you don't grope in the dark. The fact that this way of thinking is intuitively unnatural, that it's not innate, doesn't mean it's wrong. It's like quantum mechanics: it's true, but our intuition can't grasp it. There are lots of things like that.

Here’s an example that may clarify this: Daniel Maoz, who murdered his parents [in 2011], chose a delusional line of defense in which he blamed his twin brother, who had the same DNA. Anyone reading this immediately understands that this was unlikely, because he described a series of events, each one of which was improbable – and the probability of all of them occurring was close to zero. It could have happened, but it was highly unlikely.
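
The Maoz example is simple compounding: independent improbable claims multiply. A toy sketch, with each per-claim probability invented purely for illustration:

```python
# A defense story built from several independently unlikely claims
# (each probability below is invented purely for illustration).
claim_probs = [0.05, 0.02, 0.1, 0.05]

story_prob = 1.0
for p in claim_probs:
    story_prob *= p  # independent claims: probabilities multiply

print(f"P(entire story is true) ≈ {story_prob:.0e}")  # ≈ 5e-06 – 'close to zero'
```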

Why is it easier to understand his lie?

Because that was an extreme case that was easy for the human brain to analyze. But what happens when there's evidence on both sides, and each piece has multiple interpretations? That's what usually happens. Here our brain fails under the weight of all these possibilities, and a more structured approach is therefore required. That way we can reach accurate insights even where intuition fails. It's like how our brain is not bad at understanding the physical behavior of the everyday objects around us, but when we want higher precision, or wish to understand the physics of very small or very large objects, we have to set intuition aside and rely on mathematical models. Mathematics is not magic – it's just a way of organizing our thinking when reality is too complex.

How well do you know all these legal cases to be making such unambiguous statements?

I know them well. The stories we analyze give us a deep understanding of the material. In some cases, I believe it’s the deepest acquaintance anyone has ever had.

It's a wig, it's his hair, it’s a transplant

In practice, what do you do?

Every analysis begins with collecting all the evidence, from all sides, with everything open to criticism and the public providing complementary information on our website. When we have all the information, we do a probability analysis. The process is structured so that you can't filter out or ignore information: anyone can upload evidence, and all evidence is analyzed in the same manner.
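
As a rough sketch of what such a structured analysis could rest on – this is my illustration in odds form, not Rootclaim's actual model – each piece of evidence contributes a likelihood ratio, and none can be skipped:

```python
# Evidence aggregation in odds form (an illustration, not Rootclaim's actual model).
def combine(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Multiply prior odds by P(evidence | H) / P(evidence | not H) for each item."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr  # every item enters the same way; nothing is filtered out
    return odds

# Illustrative run: odds of 1-to-5 against the hypothesis, then three evidence items.
odds = combine(prior_odds=0.2, likelihood_ratios=[8.0, 0.5, 3.0])
print(f"posterior odds = {odds:.1f}, i.e. P(H) ≈ {odds / (1 + odds):.0%}")  # ≈ 71%
```

Note that an item can also cut against the hypothesis (a likelihood ratio below 1), which is what makes filtering unnecessary.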

Do others filter information?

Yes, that’s where a lot of people fail – especially agencies with biased motivations like the police or state prosecutors, or ones with political bias such as intelligence agencies.

But you’re not sitting in the courtroom. Some of the judge’s impressions are formed by direct observation – the defendant stuttering, a look, a choice of words, a silence.

I don't know about the ability of judges to detect lies – I don't know of any research in this area, and I wouldn't be surprised if it was an illusion. But the same question could be asked about secret information held by intelligence agencies. Here, the answer is interesting: It turns out that the importance of additional information is negligible compared to the importance of the two principles I mentioned – equal treatment of evidence, and probabilistic analysis instead of intuition. Even when there is very reliable confidential information, correct analysis of open information can be more powerful and decisive.

Do you have an example in this area?

The media claimed that the army's [elite signals intelligence] 8200 Unit intercepted a conversation in which Syrian government officials took responsibility for the chemical attack that happened during the civil war there [in 2013]. Without knowing anything about that conversation, I would bet a considerable amount that it was far less incriminating than believed.

How do you explain the apparent taking of responsibility?

It’s far more likely that they misinterpreted the conversation – for example, the speakers could have been voicing their opinions without having any concrete evidence. This is more likely than all the open information lining up so strongly in one direction, and the secret information aligning with another result.

What do you mean by aligning with another result? Do you guys at Rootclaim not think Assad was the one who attacked using chemical weapons?

With a fairly high certainty, this was an attack by the opposition forces.

How do you know?

We don't have a secret recording – we're relying on evidence that's known to everyone. This is one of the triggers that helped me develop the system. I saw how the discussions on blogs were far more serious than what governments and intelligence agencies published. And this is true both for the Russians, who said the opposition groups were responsible, and the Americans, who blamed Assad.

Both the Russians and Americans distorted reality?

Both sides published basic errors regarding the timing, angles and range of the ordnance. In contrast, independent researchers produced strong evidence. Why? Because they are open to criticism. And yet, even when the discussion was at a high level, every side was convinced it was right. What's funny is that the people who are right about the chemical attack in Syria are wrong about the downing of the Malaysia Airlines plane over Ukraine in July 2014.

Why?

Because they all come from the same political angle, supporting either Russia or the United States. It shows how many analyses are biased – bias is one of the main reasons for poor analysis – and that's why we haven't seen one group that has accused both the Russians of the Malaysian plane attack and the Syrian opposition of the chemical attacks.

And what about the plane? Who shot that down?

The Ukrainians claim the Russians did it; the Russians claim the Ukrainians did. The answer: Pro-Russian forces in Ukraine hit it by accident.

What you’re doing is interesting, but to me it sounds like a black box.

It's not a black box – it's mathematical models of probability theory. It's a complex system, true; it's not simple. But any mathematician will understand what it does.

Did you also check on President Donald Trump’s hair?

Yes, it’s not a wig and it’s not natural – it’s a transplant.