From the article:
“Reuters reports that last week’s computer glitch at a California air traffic control center that led officials to halt takeoffs at Los Angeles International Airport was caused by a U-2 spy plane still in use by the US military, passing through air space monitored by the Los Angeles Air Route Traffic Control Center that appears to have overloaded ERAM, a computer system at the center. According to NBC News, computers at the center began operations to prevent the U-2 from colliding with other aircraft, even though the U-2 was flying at an altitude of 60,000 feet and other airplanes passing through the region’s air space were miles below.”
One of the commenters on the post suggested the aircraft was flying VFR-on-Top, an IFR clearance that lets a pilot choose a VFR altitude rather than filing a fixed one. ERAM apparently assumed such flights would stay below 18,000 feet, but the U-2 was at 60,000 ft. Essentially every aircraft below that ceiling was “in the way,” and the system overloaded attempting to parcel out commands to move all of those aircraft out of the way.
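To make the failure mode concrete, here’s a toy sketch of what that kind of blow-up might look like. This is not the real ERAM logic; it just assumes a conflict checker that, lacking usable altitude data for a track, conservatively treats it as conflicting at every altitude:

```python
# Toy illustration of the failure mode -- NOT real ERAM code.
# Assumption: a conflict checker that can't rule out a conflict
# when a track's altitude is unknown.

def potential_conflicts(target, others, separation_ft=1000):
    """Return every aircraft the target might conflict with.

    If the target's altitude is unknown (say, a VFR-on-Top flight
    with no usable filed altitude), nothing can be ruled out, so
    every nearby aircraft gets flagged -- the combinatorial
    blow-up described above.
    """
    conflicts = []
    for other in others:
        if (target["altitude"] is None
                or abs(target["altitude"] - other["altitude"]) < separation_ft):
            conflicts.append(other)
    return conflicts

# Ten aircraft stacked in the low-to-mid 30,000s.
traffic = [{"id": f"AC{i}", "altitude": 30000 + 500 * i} for i in range(10)]

# Same U-2, with and without a usable altitude in its track.
u2_no_alt = {"id": "U2", "altitude": None}
u2_known = {"id": "U2", "altitude": 60000}

print(len(potential_conflicts(u2_no_alt, traffic)))  # every aircraft flagged
print(len(potential_conflicts(u2_known, traffic)))   # no conflicts at all
```

With the altitude known, the U-2 conflicts with nothing; with it unknown, it conflicts with everything, and the work the system has to do scales with the entire traffic picture rather than a handful of nearby flights.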
I’ve been doing a lot of reading about system design and implementation and the thinking behind design considerations in certain areas. I try to design code that works in many scenarios, but often you get a bit narrow-minded and make an assumption about the entry point of a bit of code. I wonder, though, whether the assumptions I’ve made in a certain design spec will be known, remembered, or even implemented years later. In this case, clearly someone didn’t know about this specific consideration of the system. Nor was there an opportunity to interrupt the feedback loop that was generated — I don’t know if anyone tried, as it’s not mentioned in the article.
There’s a strange balance to systems and input design. Do you assume the user is wrong and let the automated logic in the computer take over? Or do you assume the computer is wrong and rely on (potentially unnecessary) human intervention?