AI Is the Next National Security Frontier, but Israel May Be Losing Its Edge

A new national defense study recommends setting up an agency like the National Cyber Directorate to take charge of integrating artificial intelligence into the defense establishment and make sure Israel doesn't fall behind

Sagi Cohen
Facial recognition technologies that use artificial intelligence as part of police work highlight the risks and potential of AI. Credit: Bloomberg

Developing a national strategy for artificial intelligence, including its ethical aspects, is critical for Israel’s future security, a study published last week by the Institute for National Security Studies argued.

“Proper management of the field of artificial intelligence in Israel holds great potential for preserving and improving national security,” wrote Dr. Liran Antebi, an INSS research fellow, in the Hebrew-language study, which was prepared with assistance from experts in the high-tech industry, the defense establishment, the government and academia.

Titled “Artificial intelligence and Israeli national security,” the study starts from the assumption that AI will eventually be of decisive importance worldwide in terms of both economics and security, especially if predictions that AI’s capabilities will someday exceed human ones prove accurate.  

“Artificial intelligence will create a new industrial revolution of the greatest scope in history,” Antebi wrote. And this will naturally widen the gaps between countries with high technological capabilities and those that are left behind.

Drones & big data

Dr. Liran Antebi, author of “Artificial intelligence and Israeli national security,” says Israel's security requires a real AI policy. Credit: Eyal Toueg

The study detailed numerous military applications, both extant and future, for AI. One example is autonomous weapons systems like robots and drones that are capable of searching for, identifying and attacking targets independently, with almost no human involvement. 

But the revolution won’t take place only on the battlefield, the study noted. Other examples include intelligence systems capable of processing vast quantities of video footage to identify targets autonomously; autonomous vehicles; drone swarms; improved logistics systems; cyberwarfare and cyber-defense; planning, decision making, command and control; and brain-machine interface (controlling machines and computers via the brain).

Therefore, the study argued, Israel must define AI as a strategic goal. To keep Israel from being left behind, decision makers must become familiar with the field and set policies that will enable it to cope with the enormous competition that is emerging and preserve its competitive advantage. 


The study’s main recommendation is to draft a national strategy and then set up an agency to manage its implementation, based on a multiyear plan that includes funding allocations.

“A field this important shouldn’t be left to market forces,” Antebi wrote. “Israel can’t allow itself to delay, because a failure in this field may well have serious ramifications.”

In recent years, two committees have been established with the goal of developing a national strategy and attendant policies for AI. The first was headed by Orna Berry and the second by Isaac Ben-Israel and Eviatar Matania. But the latter committee’s preliminary conclusions were harshly criticized by several government agencies after being reported in the media. 

Antebi argued that it’s essential to set up an operative agency similar to the National Cyber Directorate, with a special emphasis on integrating AI into the defense establishment.

Many countries – primarily China, the United States and some European states – have already developed national strategies for AI and allocated funding for them, the study noted. As one example, Antebi cited the Joint Artificial Intelligence Center set up by the U.S. Department of Defense in 2018 to coordinate efforts to develop and apply AI systems. 

In 2019, U.S. President Donald Trump signed an executive order establishing the American AI Initiative, whose goal is to promote AI technology. The Defense Department said it would invest $2 billion in projects in this field by 2023.

Haaretz has also reported in the past that the United Arab Emirates is trying to position itself as a leader in this field and has even appointed an AI minister.

National security multiplier 

Elbit's Hermes 900 drone. Credit: Elbit Systems

The study argued that Israel must encourage greater integration of AI capabilities in the defense establishment – the army, other security agencies and defense industries – in fields such as cyber, drones and intelligence.

“Israel has a comparative advantage in technological fields, among other things in unmanned vehicles and cyber, which are significant defense fields,” it said. “Integrating them with AI as a force multiplier could greatly assist Israel in preserving and augmenting its national security, both through military means and due to its other economic and international ramifications.”

Nevertheless, the study warned, the army and the defense establishment are having trouble keeping up with changes in the field. This is primarily due to the small amount of defense funding earmarked specifically for AI, as well as the difficulty of retaining high-quality personnel due to competition from the private sector.

Moreover, Antebi noted, there is bureaucratic resistance within the defense establishment to rapid technological change – a problem typical of many large organizations. This is evident in its reliance on “legacy” systems that it has used for many years. Such systems are very hard to replace.

She therefore recommended creating structural models that will enable the defense establishment to keep up with the pace of change – something that will require the system to be more flexible. 

She also advised investing in training designated personnel and allocating funding to incentivize talented people to stay in the army. In addition, she wrote, it’s necessary to train people who aren’t high-tech experts to ensure that the backbone of the army’s chain of command is familiar with AI.

A competitive advantage

Antebi’s study poses a challenge that may come as a surprise to many people. The defense establishment, she wrote, “does almost no independent research and development that creates a basis for future capabilities.” Instead, it relies on technology developed by commercial companies and academia.

Consequently, she recommended that the defense establishment invest more resources in basic research in general, and particularly in research and development in areas of AI where Israel already has a competitive advantage, like drones and cyber.

Another recommendation was to set up an orderly system for monitoring and analyzing the progress different players have made on AI and encouraging information sharing within the defense establishment.

Israeli army soldiers working on artificial intelligence for the IDF. Credit: IDF Spokesman's Unit

The prospect of AI being integrated into the defense establishment naturally raises quite a few fears. We’re all familiar with the horrific scenarios from science fiction films – smart systems that get out of control and act for their own reasons. That isn’t likely to happen anytime soon, but the study did warn against integrating AI into the army too quickly, without any ability to understand the system and the factors that lead it to make decisions.

For instance, albeit in the context of the police rather than the army, it has become clear in recent years that existing facial recognition technologies discriminate against minorities. Very troubling scenarios would obviously arise if smart weapons or intelligence systems were encoded with the same biases against minorities.

The study also discussed the moral dilemmas inherent in war in this regard. It noted that some people believe AI would be able to make better and more accurate decisions during combat, since it wouldn’t be influenced by fear, fatigue or other emotions (like hatred) that affect human beings. But others argue that without human emotions, it’s impossible to make proper moral decisions about the use of the military, such as refraining from attacking civilian targets and not employing disproportionate force against the enemy.

Therefore, the study recommended that standards and supervisory mechanisms be based on ensuring safety – i.e., on ensuring that any AI tools developed comply with existing norms and principles. This should include drafting a code of ethics for the defense establishment’s use of AI, Antebi said.

She also recommended focusing research and development into AI on tools that assist people rather than replace them, until the safety and reliability of the technology has been proven. Finally, she advised that the administrative and legal questions arising from the use of AI systems be addressed as soon as possible.
