
AI, a vulnerability?

The Issue

In a recent article, Navy’s New Unmanned Plan To Build In Critical New Data-Sharing Effort, Paul McLeary raised the discussion of AI powering unmanned naval ships. Having had several problems with the Lockheed Martin-supplied Littoral Combat Ships [Popular Mechanics, USNI News], Chief of Naval Operations Adm. Mike Gilday has expressed strong support for fewer configurations, with more proven technology, guided by AI. Specifically, Gilday wants to do the work now to make sure that the command-and-control systems on all vessels, manned or unmanned, are connected and integrated, rather than having to patch systems together later. This program is called Project Overmatch [USNI, Breaking Defense]. The suggested expansion of the US Navy’s projected 2030 fleet from 355 manned ships to about 500 manned and unmanned vessels means roughly 145 additional hulls, close to 30% of the fleet, and those additions will be largely unmanned and controlled from or through Project Overmatch.

Compared with China’s approximately 335 manned ships as of 2019, according to RAND, and a projected 425 by 2030, the US is playing catch-up in this area. China’s large workforce and near-limitless domestic funding allow it to rapidly field new capabilities, including naval ships, and it has also been rapidly growing its electronics and AI capabilities. This will naturally shape its naval command capability and, quite logically, result in its own fleet of unmanned ships.

What does this mean for these new navies? Given both nations’ propensity for electronic warfare, or more broadly that of the communist and democratic states, it clearly introduces new vulnerabilities and attack vectors to the fleet. Nor is it just the fleets: the land and air warfare components integrated into these systems are exposed as well. Unmanned assets mean on-board AI, and a strong communications channel will be required.

The Situation

AI and communications are two things that are inherently firmware- and software-defined in modern theatres. They are also the ones with dubious supply chains and difficult-to-identify vulnerabilities. When considering ways to eliminate an enemy’s ability to conduct war, shutting down their war machinery ranks high on the to-do list. The US Navy is taking the approach of lightly manned ships that can also operate unmanned. But this also presents an opportunity: capture rather than destroy.

“In the practical art of war, the best thing of all is to take the enemy’s country whole and intact; to shatter and destroy it is not so good. So, too, it is better to recapture an army entire than to destroy it, to capture a regiment, a detachment or a company entire than to destroy them.” – Sun Tzu

This lends itself to the emerging character of modern warfare: the next war will not be fought with bullets and bombs, but with bits and bytes. Being able to affect a system’s AI or its communications backbone will greatly reduce the effectiveness of the asset. Remotely shutting down a vessel through its AI, or rendering it inert by disabling its AI and onboard systems, becomes a very attractive way to capture enemy technology and military assets. These attack vectors therefore need to be heavily defended.

Creating an AI that can defend itself from attack means inserting a few basic defence-in-depth principles into our practice of AI development, as has been done in software development for some time. I think this is an often-overlooked aspect of AI/ML. Everyone has been so enamoured with the amazing things AI can do, from identifying breeds of dogs, to making your favourite video game enemies seem alive, to spotting a downed pilot in a sea of blue from a satellite image, that they forgot to build defensive coding measures into its creation.

Even basic principles such as Assume Breach, Defence in Depth, and Zero Trust need to be injected into the entire AI lifecycle. Models not only need to be threat modelled and pentested; the material used to train them, and its provenance, need to be secured as well. The deployment process should use the best possible practices to ensure supply chain integrity and tamper-proof update procedures. The impact could be extreme, especially if it gets to the point of not only rendering AI-controlled assets inert, but in fact turning them against their original controllers. There are what I think of as four main categories of attack against AI-controlled assets from the AI perspective.

These types of attacks take several forms:

  • Attack the learning system of the AI to ‘teach it’ to fail through false positives and false negatives
  • Confuse it by presenting it with situations outside its learning parameters or that undermine its situational understanding
  • Attack the underlying systems running the AI
  • Attack the peripherals such as sensors, power and cooling to disrupt the hardware and underlying platform

There are many examples of this already happening; Tay [Wikipedia, IEEE] and Tesla [CNBC, Forbes] are two of the more prominent. IBM Research even used AI to create detection-avoiding malware back in 2018. Called DeepLocker, it is “a new breed of highly targeted and evasive attack tools powered by AI”, effectively undetectable AI that can be used to attack AI.

AI systems used to control any kind of weapons platform will be the target of countless attacks. These attacks will be sophisticated and potentially long term, and they can begin as early in the AI’s service life as its inception. Many of these inception-time attacks may even be accidental: if an AI is designed for a particular purpose and its training data is flawed or skewed, the resulting model will be overfit to certain situations.
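One practical guard against inception-time tampering is to treat the training corpus as a controlled artifact. The sketch below is a minimal illustration, with hypothetical file and manifest names, of refusing to start a training run unless every data file matches a hash recorded in a manifest produced when the data was vetted.

```python
# Minimal sketch: verify training-data integrity against a manifest recorded
# when the dataset was vetted. File names and manifest format are illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_set(data_dir: str, manifest_path: str) -> bool:
    """True only if the directory contains exactly the files listed in the
    manifest and every file matches its recorded hash."""
    manifest = json.loads(Path(manifest_path).read_text())  # {"filename": "sha256", ...}
    data_files = {p.name: p for p in Path(data_dir).iterdir() if p.is_file()}

    if set(data_files) != set(manifest):
        return False  # missing or unexpected files: possible tampering
    return all(sha256_of(path) == manifest[name] for name, path in data_files.items())

if __name__ == "__main__":
    if not verify_training_set("training_data", "training_manifest.json"):
        raise SystemExit("Training data failed its provenance check; aborting the run.")
```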

Attack AI Learning

In a recent project I worked on, we were building a model to identify commercial aircraft in satellite imagery using synthetic data [Microsoft, Wikipedia]; the RarePlanes data was not fit for purpose and the real data we had access to was too scarce. We discovered that if you train a model to identify commercial aircraft from satellite imagery, and your training data is 60% Boeing 737-800s with the other 40% split across 12 other aircraft types, the model will tend to match whatever it sees to the 737-800 first. Even though this kind of distribution is common at domestic and international airports, it produces many false positives in the target imagery: the model essentially assumes that most aircraft it sees will be 737-800s and is biased toward that result. When training, you need a more even distribution, or at least to sample what you have more evenly, as sketched below.
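One way to get that more even distribution without discarding data is to reweight how samples are drawn during training. Below is a minimal sketch, assuming a PyTorch-style dataset with per-sample class labels (the names are illustrative, not from the actual project), that oversamples under-represented aircraft types so each batch sees classes roughly uniformly.

```python
# Minimal sketch: counter a skewed class distribution (e.g. ~60% 737-800)
# by oversampling under-represented aircraft types during training.
from collections import Counter
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def make_balanced_loader(dataset, labels, batch_size=32):
    """labels: one class id per sample in `dataset`, in the same order."""
    counts = Counter(labels)                              # e.g. {"737-800": 6000, "A320": 500, ...}
    class_weight = {c: 1.0 / n for c, n in counts.items()}
    sample_weights = torch.tensor([class_weight[y] for y in labels], dtype=torch.double)

    sampler = WeightedRandomSampler(sample_weights,
                                    num_samples=len(labels),
                                    replacement=True)
    # Each batch now draws classes roughly uniformly instead of 60% one type.
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```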

As for confusing or guiding models once they are trained, this is exactly what happened to Tay. Tay, Microsoft’s experimental chatbot, was designed to learn as it interacted with people on Twitter. However, as the Internet is prone to doing, in less than 16 hours the twitterverse turned Tay into a racist, anti-Semitic rogue, one that learned what expected behaviour looked like and then emulated it. Tay was launched on March 23, 2016 and had to be turned off on March 25, 2016, with the website associated with the experiment taken down as well.
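The defensive lesson from Tay is that anything a model learns from in the field is untrusted input. A minimal sketch of that idea, with a toy stand-in for a real moderation model, is to quarantine user-supplied examples before they are allowed anywhere near an online-learning update.

```python
# Minimal sketch: quarantine user-supplied text before it can feed an
# online-learning update. BLOCKLIST and toxicity_score are toy stand-ins
# for a real moderation model or service.
BLOCKLIST = {"exampleslur", "anotherslur"}  # placeholder terms, not a real list

def toxicity_score(text: str) -> float:
    """Toy stand-in: fraction of tokens that hit the blocklist. A production
    system would call a trained moderation classifier instead."""
    tokens = text.lower().split()
    return sum(t in BLOCKLIST for t in tokens) / max(len(tokens), 1)

def gate_training_examples(candidates, threshold=0.0):
    """Split candidate examples into (accepted, quarantined); only accepted
    items ever reach the model's training queue."""
    accepted, quarantined = [], []
    for text in candidates:
        (quarantined if toxicity_score(text) > threshold else accepted).append(text)
    return accepted, quarantined
```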

Attack its situational awareness

A feasible attack vector would be to let the AI learn that something it was initially taught to treat as a threat can be ignored under its rules of engagement. For example, an adversary repeatedly comes close to the vessel but does not engage it [Navy Times, Independent UK]. This lulls the AI, and in fact humans, into complacency: ‘An enemy aircraft in my immediate vicinity is not a threat. This is typical behaviour, as evidenced by my recent experience.’ Then, during one of these typically mundane close calls, the adversary launches an attack at a range too close to intercept.
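One hedge against this kind of habituation is to monitor the model’s own outputs for drift. The sketch below is purely illustrative, assuming the onboard classifier emits a threat score for each close-approach event; it raises an alarm when the rolling average of those scores sags well below the accredited baseline.

```python
# Minimal sketch: watch for "learned complacency" by tracking how the threat
# score assigned to close-approach events drifts over time. The thresholds
# and the notion of a threat_score stream are illustrative assumptions.
from collections import deque
from statistics import mean

class ComplacencyMonitor:
    def __init__(self, window=200, baseline_mean=0.8, max_drop=0.2):
        self.recent = deque(maxlen=window)   # rolling window of recent scores
        self.baseline_mean = baseline_mean   # expected mean score for close approaches
        self.max_drop = max_drop             # tolerated downward drift

    def observe(self, threat_score: float) -> bool:
        """Record one close-approach score; return True if the drift alarm fires."""
        self.recent.append(threat_score)
        if len(self.recent) < self.recent.maxlen:
            return False                     # not enough history yet
        return mean(self.recent) < self.baseline_mean - self.max_drop
```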

Attack underlying systems

All AI must run on some form of computing platform, and these platforms are subject to the same issues as any computing platform operating in harsh environments. One of the emerging attacks on the physical platform is the EMP weapon. According to Dr. Peter Vincent Pry in his paper China: EMP Threat, China is already very actively developing its EMP capabilities, as are the USA and other NATO countries. It has created systems for attacking adversaries’ power grids, as well as the electronics on various platforms. This could take out the computers, even ‘disconnected’ ones, that run the AI, which could render a fully autonomous ship inoperable and ripe for capture.

Attack the peripherals

Another way to affect a situational AI is to attack its ability to make sense of the world around it. This is how two Tesla Model S cars were successfully attacked. One was made to accelerate to 85 mph in a 35 mph zone by tricking the Mobileye EyeQ3 system into reading a 35 MPH sign as 85 MPH; this was as simple as using a small strip of black tape to extend the middle bar of the 3. Humans still read it as a 3, but the Mobileye sensor misclassified it as an 8. A similar tape trick, applied to lane markings, coerced the other car into changing lanes into oncoming traffic.
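The tape trick is a physical adversarial example; its digital cousin is easy to reproduce in the lab, which makes it a cheap way to probe a perception model’s robustness before fielding it. Below is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch; the model, inputs, and epsilon are assumptions for illustration, not a recipe tied to any of the systems discussed above.

```python
# Minimal sketch: FGSM nudges an input image just enough to try to flip a
# classifier's output, making it a cheap robustness probe for perception models.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (an NCHW tensor)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()   # step in the loss-increasing direction
    return perturbed.clamp(0, 1).detach()

def is_robust(model, image, true_label, epsilon=0.01) -> bool:
    """Crude check: does the prediction survive a small adversarial nudge?"""
    adv = fgsm_attack(model, image, true_label, epsilon)
    return model(adv).argmax(dim=1).item() == true_label.item()
```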

Solutions?

There are already papers and books being produced on how to handle this. One from the Belfer Center by Marcus Comiter goes so far as to suggest an “AI Security Compliance” regime. For certain, threat and vulnerability assessments for assets that rely on AI need to extend to the AI and its supply chain, just as they have for the engines, guns, guidance, and C&C systems. Stringent as compliance standards for typical IT systems are, they often overlook what goes into the AI itself: they will check the platform, or the tooling used to create it, such as your Jupyter notebooks, but they do not typically check the AI model itself.
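Extending compliance to the model itself can start with something as mundane as refusing to load an unaccredited artifact. The sketch below assumes an Ed25519 detached signature produced at accreditation time and the Python cryptography package; the file names and key handling are illustrative only.

```python
# Minimal sketch: treat the trained model like any other supply-chain artifact
# and verify a detached signature before loading it.
from pathlib import Path
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_model_artifact(weights_path: str, sig_path: str, pubkey_bytes: bytes) -> bool:
    """Return True only if the weights file matches its detached Ed25519 signature."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        public_key.verify(Path(sig_path).read_bytes(), Path(weights_path).read_bytes())
        return True
    except InvalidSignature:
        return False

# Usage: refuse to deploy unless verify_model_artifact("model.pt", "model.pt.sig", key) is True.
```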

There are things that can be done to protect against attacks on the physical layers of the asset: armour, EM shielding, countermeasures, and so on. But we still have a weak spot when it comes to attacks on situational awareness and on the peripherals, as mentioned above. Hardening those components ultimately means making better AI. Perhaps the answer is using AI to create AI, then turning around and using AI to attack AI in what we can hope is comprehensive testing.

There have been great strides in using AI to build faster AI, but what about better AI? Azalia Mirhoseini over at Google Brain has been looking into this very topic; she discusses her work on deep reinforcement learning in this video interview. Perhaps this is the area we need to be looking at.

I would like to hear your thoughts on how the industry could approach making more secure, reliable, and resilient AI (without creating Skynet and unleashing it on the human race). Please leave them in the comments below (with all appropriate professional decorum and respect).

RH
