Hamas’ campaign to spy on Israeli soldiers won admiration from the cybersecurity experts familiar with it, and for good reason. The operators of the fake social-media profiles communicated in fluent, idiomatic Hebrew with targeted soldiers and officers for a year or more to win their trust. Only after establishing a connection did operatives recommend apps to download. They even took advantage of the popularity of the World Cup to plant malware in an app offering live streaming and updates of the soccer tournament.
These capabilities didn’t come out of the blue. Back in 2012, Israel was engulfed in a minor panic over a computer virus (misleadingly nicknamed the Benny Gantz virus, for the army’s then-chief of staff) spread by email to soldiers, police officers and even Foreign Ministry personnel. Since then, more and more such operations have been discovered, with names like Arid Viper or DustySky.
They never displayed great technical sophistication, and none were equivalent to the Stuxnet virus that destroyed many of Iran’s nuclear centrifuges or the malware developed by the U.S. National Security Agency. Israeli researchers even discovered several embarrassing mistakes that led to the exposure of infrastructure relating to Hamas’ cyberattack corps or key operatives in it.
Nevertheless, when modest technical capabilities are combined with sophisticated social engineering and a wide variety of tools readily available on the web, we’re essentially facing the digital equivalent of incendiary kites from the Gaza Strip. Israel may be able to stop missiles in mid-flight, but a technology developed thousands of years ago continues to set Israeli fields ablaze.
The ‘advantage’ of smartphones
For years, Hamas operatives focused on trying to hack computers. But for the past two years they have increasingly targeted smartphones, which offer some important advantages to them.
According to the Israeli information technology firm Check Point Software Technologies, only 3 percent of organizations worldwide have suitable protections in place against attacks on mobile phones. And here, we’re talking not about organizational phones, but personal phones, which aren’t given the same protections as the defense establishment’s core systems.
In other words, a soldier might operate a critical system which is protected in every possible way. But in her pocket, or on her desk, sits a device equipped with cameras, microphones, motion sensors and GPS.
We’re the weak link
In cyberespionage, these capabilities are added to one of the axioms of cybersecurity: No matter how many walls you build, in the end, someone will open the gate to the Trojan horse. We are flawed beings incapable of thinking up a password more complicated than 123456 — and even then we forget it, despite stubbornly using it on every possible account.
This is the first lesson that leaps to mind when you hear about soldiers who fell into the Facebook traps of fictitious characters, with pictures taken from physical fitness gurus’ Instagram accounts. This is something that won’t change, and we’ve already seen how organizations responsible for cybersecurity (whether as attackers or defenders) suffered embarrassing break-ins and leaks.
The problem is much bigger
Perhaps a little suspicion could have saved the soldiers from falling into Hamas’ trap, but what’s really impressive about this attack is the way it exploited the infrastructure provided by giant technology companies.
Google’s Android system has been criticized for years as less secure than Apple’s iOS, and Google has tried repeatedly to prove that it’s fixing the problem by beefing up its protections. Last year, for instance, it unveiled Google Play Protect — a platform that provides improved protection for users, both on their devices and Google Play, the company’s digital app store.
In previous cases, like one exposed by the Kaspersky Lab cybersecurity firm in 2017, the attackers used malware downloaded from external sites. But in the current case, everything happened under Google’s watchful eye.
The company promises that all Android apps undergo stringent security checks before being allowed in Google Play. Yet an app like Golden Cup managed to get into the store a month ago with an insane list of permissions — like access to the phone’s camera, microphone and contacts — for an app that merely promises to provide updates on the World Cup. Only about a week later was it removed.
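For readers unfamiliar with how Android apps request such access: every app declares its permissions up front in a manifest file, which both Google’s review process and the user’s install screen can inspect. The sketch below is a hypothetical illustration only — the actual Golden Cup manifest is not public, and the package name is invented — but the permission identifiers are the real Android ones for the kinds of access described above.

```xml
<!-- Hypothetical sketch of an AndroidManifest.xml for a "World Cup updates"
     app requesting far more access than its stated purpose needs.
     Package name is invented; permission names are standard Android ones. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.worldcup">
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
</manifest>
```

A sports-updates app has no plausible need for the camera, microphone or contact list — which is why such a permission list should have raised a flag during store review.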
Like Google, Facebook has promised to wage war on fake profiles, but this time, too, it failed to identify the problematic ones. And this is despite the fact that at least one took its pictures from a female yoga and physical fitness guru with at least 500,000 followers on Instagram. That’s not exactly difficult to detect.
The tech giants are doubly guilty
As noted, it’s very easy to blame the users, but we’re just a tiny cog in a world that for years has made its money with the aid of practices meant to help us fall into such traps. We aren’t talking, heaven forbid, about malicious hackers hiding out in Gaza’s tunnels or on Revolutionary Guards bases in Iran, but about transparent offices in Silicon Valley, Tel Aviv’s Ramat Hahayal and all the other places where companies like Google and Facebook operate.
Google and Facebook forbid developers to use their information for undeclared purposes and demand transparency. But a report issued last week by Norway’s data protection commissioner revealed the duplicity of the tech giants’ policies.
“The combination of privacy intrusive defaults and the use of dark patterns, nudge users of Facebook and Google, and to a lesser degree Windows 10, toward the least privacy friendly options to a degree that we consider unethical,” it said.