According to this article, Hugo Teso, a German computer security expert, determined that the very latest communication systems and software approved by the FAA and its counterparts worldwide are “unencrypted and unauthenticated.” The end result is that he was able to write Android software to reprogram Boeing jets’ avionics from a mobile phone.
This is a somewhat surprising result, considering that the software and systems in question are the subject of years of certification review by sizable committees of extremely risk-averse individuals. I would have expected the committee-intensive nature of the process to slow development and innovation but to increase security, since this should be precisely the kind of thing a reviewer looks for.
I was recently out at Robinson Helicopter Company for recurrent training. The CEO was asked why the company had not released any helicopters with glass cockpit instruments (LCD screens instead of mechanical gauges and gyros). Such instruments are actually approved for aftermarket installation, the CEO said, but the FAA keeps asking for more and more paperwork to justify letting the factory install them. Keep in mind that this is for a helicopter that cannot legally be flown in instrument conditions, so the pilot can and does fly safely simply by looking out the window. An instrument failure in a Robinson helicopter has no safety consequence.
[According to the CEO, about three years ago the FAA simply stopped acting. Its former glacial pace changed to something more like plate tectonics. He didn’t have an explanation for this, but I note that it roughly coincides with the collapse of the private aviation industry from 2008-2010. An FAA employee is now probably paid about 8 times per hour what he or she might earn in the private sector (the salaries are not 8X higher, of course, but consider the actual number of working hours demanded), so the consequences of being fired are enormous. The easiest way to avoid being fired is to avoid acting. If you don’t approve something, you can’t be blamed when it turns out not to work.]
A single dissenting voice can hold up an aircraft design for years. The new Robinson R66, for example, caught the attention of a Canadian government worker (story). He looked at the 400 psi hydraulic system on the Robinson and said “There was a failure in a Sikorsky’s 2000 psi system a few years ago. Prove to me that the Robinson system doesn’t need some extra redundancy so as to avoid a situation like that.” Robinson pointed out that the R66’s hydraulic flight control assistance system was virtually identical to the one flying uneventfully in about 5000 R44s worldwide, but this was unavailing. The Canadian dissenter did not hold up U.S. certification, but he managed to get the Europeans and Russians to deny certification, and that has cost Robinson perhaps $100 million in sales thus far (80 percent of Robinson’s sales are to foreign countries). Supposedly 2013 will be the year when the helicopter is finally certified worldwide, three years later than in the U.S., due to this one Canadian guy. Note that the original Canadian dissenter eventually took a closer look and apologized for making such a big fuss, but of course it is Robinson that bears all of the costs.
It seems reasonable to expect that a couple of trailblazing developers, excited to get their new protocols and systems into the hands of users, would leave open a security hole. But why doesn’t adding layers upon layers of review by committee and years of delay result in one committee member raising his or her hand to ask “Shouldn’t this be encrypted?”
I haven’t seen any security people I trust vouch for Teso’s work. It seems to be very high on publicity and low on actually working.
I think most of the people doing the auditing are auditors because they don’t know how to make the actual product. (Conversely, you have people trying so hard to find holes unlikely to be spotted that they miss the easy stuff. “Not encrypted? What are you, high? Keep looking for real holes.”)
In our company, problems discovered in the outside world (at other, much larger insurance companies) are brought up and usually fixed in a sane manner. Problems brought up by internal auditors (or, god forbid, external auditors) have us frantically scrambling for months to patch a non-solution to a problem that doesn’t really exist. For instance, we have company-wide problems with moving stuff from QA to PROD. We have a crappy system in place in, yes, Lotus Notes, for “sign offs”, but basically these are rubber-stamping managers who couldn’t care less about a few-line code change. So, when crud code makes it into production, what’s our company’s solution? More approvals. When our IT change management process (upgrading servers, databases, SAN, etc.), in which changes are approved by committee only twice a week, blows up in our face, the solution: an approval process that runs only once a week.
Since I’m married to a professional cryptographer, and I’m a GA nut (but _not_ a cryptographer)… I have to agree with Dan. I’ve not heard of anyone credible saying Teso’s work is bulletproof, by any means.
But it does raise some interesting questions, like: should ADS-B have cryptography in it? If it does, how would you solve the key distribution problem? What is the safety consequence of the loss or corruption of a key? What about a crypto researcher extracting a private key from a transponder they bought on eBay? What about the crypto algorithm itself being compromised? How would you do key rotation or revocation in such an environment (and would you)? What are the potential attacks, and can they be nullified through non-cryptographic means?
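To make the key-distribution question concrete, here is a minimal sketch (my own illustration, not anything from an actual ADS-B proposal) of what authenticating a position report with a fleet-wide shared-key HMAC might look like. The message layout and the 24-bit truncated tag are hypothetical, picked to suggest ADS-B’s tiny message budget:

```python
import hmac, hashlib, os, struct

# Hypothetical fleet-wide shared secret: every transponder and ground
# station would somehow need a copy -- which is exactly the
# key-distribution problem raised above.
SHARED_KEY = os.urandom(32)

def sign_position_report(icao_addr: int, lat: float, lon: float, alt_ft: int) -> bytes:
    """Pack a (made-up) position report and append a truncated HMAC tag."""
    msg = struct.pack(">Iffi", icao_addr, lat, lon, alt_ft)
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:3]  # 24-bit tag
    return msg + tag

def verify_position_report(frame: bytes) -> bool:
    """Recompute the tag over the message; reject the frame on mismatch."""
    msg, tag = frame[:-3], frame[-3:]
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:3]
    return hmac.compare_digest(tag, expected)

frame = sign_position_report(0xA1B2C3, 42.36, -71.01, 3500)
assert verify_position_report(frame)
```

Even this toy version shows why the questions above are hard: a symmetric key shared across the whole fleet means extracting it from any one transponder (the eBay scenario) breaks the entire system, and a tag short enough to fit in the message is short enough to brute-force.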
For a technology like ADS-B, is it safer to be permissive in what you accept (including potentially falsified aircraft position reports), or restrictive (potentially rejecting accurate position reports)?
I saw someone claim on a forum that the integrity of ADS-B position data is double-checked against signal timing data at the ground stations (i.e., they receive your signal at multiple stations and triangulate), and if the timing doesn’t match the claimed position, it is flagged as potentially wrong. If that sort of physics-based anti-spoofing technique is really employed, it is likely more reliable, and harder to spoof over the long term, than crypto.
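For what it’s worth, the timing cross-check described there is easy to sketch: given receive timestamps at several ground stations, you can compare the measured time differences of arrival against those predicted from the claimed position. This is my own toy illustration (flat geometry, made-up station coordinates), not actual ground-station code:

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Hypothetical ground-station positions (flat x/y coordinates in metres).
STATIONS = [(0.0, 0.0), (50_000.0, 0.0), (0.0, 50_000.0)]

def predicted_tdoas(claimed_pos):
    """Signal time of flight from the claimed position to each station,
    expressed relative to the first station (time difference of arrival)."""
    times = [math.dist(claimed_pos, s) / C for s in STATIONS]
    return [t - times[0] for t in times]

def position_is_plausible(claimed_pos, rx_timestamps, tolerance_s=1e-6):
    """Flag a report whose measured TDOAs disagree with its claimed position."""
    measured = [t - rx_timestamps[0] for t in rx_timestamps]
    return all(abs(m - p) < tolerance_s
               for m, p in zip(measured, predicted_tdoas(claimed_pos)))
```

A spoofer transmitting from one place while claiming a position a few kilometres away would be off by many microseconds over baselines like these, far above receiver timing noise, which is why a physics check like this is so hard to fool.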
I really don’t think it is as simple as “they didn’t put crypto in the standard, and hence they are idiots.”
I don’t believe Teso’s claims.
He is claiming that by faking an ACARS message (which is possible) he can take control of an airplane’s autopilot via the FMS and drive said airplane around with an Android app.
I can imagine that a badly formed ACARS message might be able to crash an FMS, but it’s not possible that any ACARS message could take over the flight controls. I’d be surprised if write access to the FMS software is even possible without throwing a hardware switch in a black box.
It’s a pretty big stretch to claim to be able to drive an airliner with ACARS messages, too big to take without actual evidence, which Teso has yet to provide.
The claim (at https://www.nruns.com/fileadmin/downloads/n.runs_Vortrag_Aircraft_Hacking_by_Hugo_Teso.pdf) seems to be that they hacked the Honeywell and Rockwell Collins FMS PC trainers. There _are_ big differences between simulated FMSs on a PC and the actual certified equipment… Why include protection and redundancy in a simulator? I’d guess that the avionics manufacturers are looking at this closely, but probably not panicking.
@Jeff – All PC trainers have an imaginary airplane in them that the student can drive around with a mouse, keyboard, or any of a large number of USB joysticks, throttles, etc. In many cases the whole simulation is built on a program called X-Plane, which has such public interfaces.
In my opinion, all this Android app does is hook into what is probably a published interface to the PC simulator.
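For context, X-Plane really does expose a simple UDP interface for exactly this sort of remote control. As a rough illustration of how little exploitation would be required, here is approximately what writing a dataref over that interface looks like (the DREF packet layout and the port 49000 default come from X-Plane’s documented UDP protocol, but treat this as my sketch, not a reproduction of Teso’s app):

```python
import socket, struct

XPLANE_ADDR = ("127.0.0.1", 49000)  # X-Plane's default UDP listening port

def set_dataref(path: str, value: float) -> None:
    """Send a DREF packet: b'DREF\\x00' + 4-byte float + 500-byte dataref path."""
    payload = (b"DREF\x00"
               + struct.pack("<f", value)
               + path.encode("ascii").ljust(500, b"\x00"))
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, XPLANE_ADDR)

# Swing the simulated autopilot heading bug to 270 degrees -- no avionics
# exploit involved, just the simulator's published interface.
set_dataref("sim/cockpit/autopilot/heading_mag", 270.0)
```

If the “hacked” FMS trainer sits on top of an interface like this, driving the simulated airplane from an Android phone is unremarkable.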