Many of you have seen the media reports concerning the controversy over our National Aviation Operations Monitoring Service (NAOMS) project. I have not been pleased with the way this issue has been treated in the press, and I doubt that you have been, either. Because of this concern, I wanted to take a bit of your time and explain the issue more fully than has so far been done.
The NAOMS project began in 1998 with the goal of developing new methods for aviation system safety analysis. Because incidents posing a threat to aviation safety are relatively rare events, useful analysis requires the acquisition of a large, statistically meaningful database that is representative of the system. Accordingly, the NAOMS project selected a contractor to develop a survey methodology to acquire such data. This phase of the effort required about two years to complete.
It was recognized that the type of data sought -- information on incidents, infractions, and mistakes potentially affecting aviation safety -- carries with it significant potential risk for the person reporting such information. Accordingly, the project team offered anonymity to those taking part in the survey, and one of the contractor's responsibilities was to "anonymize" the dataset. Data collection using the methodology began in April 2001 and ended in December 2004, during which time the project team conducted approximately 24,000 surveys of commercial airline pilots and approximately 5,000 surveys of general aviation pilots.
There has been some confusion between NAOMS and the Aviation Safety Reporting System (ASRS), established by statute in 1976, funded by the Federal Aviation Administration (FAA), and managed by NASA since that time.
There are certain similarities between the two efforts. Like NAOMS, the ASRS relies upon voluntary reports by pilots to gather information relevant to aviation safety. Like NAOMS, ASRS offers anonymity to those who voluntarily report infractions or other incidents. However, ASRS differs from NAOMS in at least one crucial respect. With ASRS, a pilot who voluntarily reports an incident is protected, by statute, from an enforcement action by the FAA. This immunity (which does not extend to accidents or criminal offenses) is compromised if the matter should come to the attention of authorities by other means prior to being revealed by the pilot himself. Thus, pilots have a strong incentive under the law to report incidents promptly and voluntarily through the ASRS channel.
It might be expected that a large, comprehensive dataset concerning a topic as critical as that of aviation safety would attract interest from the public and the media, and this proved to be the case with the NAOMS data.
NASA received a Freedom of Information Act (FOIA) request from a reporter representing the Associated Press for the data obtained in the NAOMS survey.
The present controversy stems from our denial of that request. In response to the appeal letter from the AP, NASA cited concerns for "public confidence" and for the "commercial welfare" of air carriers as the supporting basis for the exemption cited in denying the request for the data. With that, we gave the unfortunate impression that NASA was putting airline commercial interests ahead of public safety. As I have stated on many occasions, I regret that impression. When the matter was brought to my attention, I corrected it immediately, and publicly stated that we would release the requested data if possible, and as soon as possible. I have made this point many times.
However, it does not follow that the data can be released to the public in unredacted form. We have specific legal obligations under FOIA not to release voluntarily supplied information of a "commercial confidential" nature. Such information can include anything that makes it possible to identify a specific pilot or organization. And again, survey respondents were promised that their responses would be anonymous, meaning not traceable to individuals.
To address the question of whether and when we could release the requested data, we conducted an initial internal review of the responses. We concluded that the survey responses in their "raw" format make it likely that the identities of at least some of the individual respondents could be derived from facts such as career flight hours, make and model of aircraft flown, airports named, specific events reported, and the time period associated with those facts.
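The re-identification risk described above can be illustrated with a toy sketch. The records, field names, and values below are entirely fabricated for illustration and have no connection to the NAOMS dataset; the point is only that fields which are harmless in isolation can, in combination, single out one respondent.

```python
# Toy illustration (fabricated records, NOT NAOMS data) of how a
# combination of individually innocuous survey fields can isolate
# a single respondent, which is why raw responses required redaction.

responses = [
    {"id": 1, "flight_hours": 12000, "aircraft": "B737", "airport": "ORD"},
    {"id": 2, "flight_hours": 12000, "aircraft": "B737", "airport": "DFW"},
    {"id": 3, "flight_hours": 4500,  "aircraft": "A320", "airport": "ORD"},
    {"id": 4, "flight_hours": 12000, "aircraft": "A320", "airport": "ORD"},
]

def matching(records, **criteria):
    """Return the records consistent with a set of externally known facts."""
    return [r for r in records
            if all(r[k] == v for k, v in criteria.items())]

# Any single field still leaves several candidates...
print(len(matching(responses, flight_hours=12000)))       # prints 3
# ...but the combination of fields isolates exactly one respondent.
print(len(matching(responses, flight_hours=12000,
                   aircraft="B737", airport="ORD")))      # prints 1
```

In a real dataset the identifying combinations are harder to anticipate, which is why an independent review, rather than a quick field-by-field scrub, was judged necessary.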
In brief, the study contractor failed to redact the data appropriately in at least some cases. Thus, to ensure that NASA fully complied with the law, we determined that an independent review of the data was necessary in order to prevent the compromise of protected information.
Because there was intense public and Congressional interest in this inherently sensitive matter, we promised the Congress that we would complete an initial release by the end of 2007. We took that promise seriously. A team selected and led by Bryan O'Connor, Chief of the Office of Safety and Mission Assurance, examined the data, developed an initial redaction strategy, and directed the contractor in the execution of that strategy.
The result was our initial release, provided on 31 December 2007. As I have stated many times, we will in the coming year be examining the data more carefully to develop a redaction strategy which allows the release of the maximum amount of additional data that can be provided without compromising anonymity or confidential commercial information. It will not be quick, and it will not be cheap. However, I believe it is mandatory in the present situation, where there is public interest in data which has been gathered by NASA at taxpayer expense.
Some have said that the initial release date of 31 December was chosen because it was a "slow news day". That is not the case. It was the earliest date we could achieve.
Some have said that the necessary data redaction was overly conservative, that more data could have been released. This is absolutely true, but it could not have been done by the end of the year. Again, I have promised the Congress that a more refined effort will be undertaken. However, a careful examination of the data to ensure that no inappropriate release of information occurs will require many months. We will make every practicable effort to expedite the work.
It has been claimed that NAOMS was to be established as a "permanent survey" by 2004 and, therefore, that NASA "cut" the project. NAOMS project briefings in 2000 and beyond do reference a "permanent survey" to be in place by 2004. However, the responsibility to implement such a survey belongs with an operational aviation organization, which NASA is not.
NASA's goal with NAOMS was to develop and demonstrate a capability, and to provide others with the knowledge required to use it, if so desired, over the long term (e.g., as a "permanent survey"). This was not, and is not, a NASA responsibility. Indeed, in a 2002 briefing to senior program management, the NAOMS team requested an extension of funding to 2005, explaining that "opportunities for hand-off will be explored" in order to accomplish a "permanent survey." It was the intention and responsibility of the NAOMS project to transition the survey to an operational entity, and the project was provided funds in both FY2005 and FY2006 to do so.
Early NAOMS briefings also indicated an intention to collect data not just from pilots, but from cabin crew, maintenance crew, and air traffic controllers as well. All of these things were supposed to begin before 2004. The fact that the project team did not accomplish this in the planned time period does not imply that their funding was cut.
NASA has been criticized for not "doing more" with the NAOMS data, though again this was never our intention. However, in responding to this criticism I must note that the survey methodology was not subjected to formal peer review prior to its implementation, and the data were not validated after collection. Such validation would normally be performed by comparing the survey results, where possible and appropriate, with independently gathered information or with results otherwise believed to be well established. If a new research tool agrees with well-established results in the areas they share, confidence increases in the validity of its results in areas where no comparable data exist.
When the NAOMS project became the subject of public controversy, the Aeronautics Research Mission Directorate conducted a cursory examination of the data and of the briefings provided by the NAOMS project team. Numerous inconsistencies were found. As one example, which I cited in Congressional testimony, the NAOMS team reported a rate of engine failures some four times higher than the accepted value, based on data accumulated over long periods of time by independent means. I am sure I don't have to point out that the aviation community takes engine failures very seriously; the rate of such incidents is considered to be quite well known. A new survey citing a failure rate substantially different from the accepted value must demonstrate the highest possible standards of validation and verification in order to be considered credible. Otherwise, it is the new survey, and not the accepted data, that will be regarded as suspect. As Carl Sagan was fond of saying, "extraordinary claims require extraordinary evidence."
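For illustration only, and using made-up numbers rather than the actual NAOMS or industry figures, the kind of discrepancy check described above can be sketched as follows: under a simple Poisson model for rare events, the standard error of an observed count n is roughly the square root of its expected value, so a survey rate several times the accepted baseline lies far outside ordinary sampling noise.

```python
# Illustrative comparison of a survey-derived event rate against an
# independently established baseline. All numbers here are assumptions
# chosen for the example, not actual NAOMS or industry statistics.

import math

accepted_rate = 1.0e-5     # assumed baseline: events per flight hour
surveyed_hours = 2.0e6     # assumed flight hours covered by the survey
reported_events = 80       # assumed events reported by respondents

survey_rate = reported_events / surveyed_hours
expected_events = accepted_rate * surveyed_hours   # 20 events under baseline

# Under a Poisson model, the discrepancy in standard-deviation units:
z = (reported_events - expected_events) / math.sqrt(expected_events)

print(f"survey rate is {survey_rate / accepted_rate:.1f}x the accepted value")
print(f"discrepancy is {z:.1f} standard deviations under a Poisson model")
```

A gap of this size cannot plausibly be sampling error, so either the survey is measuring something different from the baseline statistic or its methodology is flawed; distinguishing the two is exactly what validation against independent data is for.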
Such inconsistencies cast doubt on the validity of the larger dataset, and are one consequence of the lack of appropriate initial peer review. Accuracy and completeness cannot be retroactively peer-reviewed into the dataset. At this point, the product exists in its final state. Funding was provided in 2007 for the contractor to prepare a final report; NASA has received that report. It is posted on our website. The project is ended and it is my intention that the contract will be closed as of 31 January 2008.
We have been requested by the Congress in legislative report language to perform an assessment of the NAOMS methodology. We are presently working with the National Research Council to initiate this assessment. It will focus on the methodology, its potential limitations, likely sources of error and estimates of their magnitude, the potential utility of the data, and recommendations for such use.
Some have stated that NASA attempted to "destroy" the NAOMS data. This is completely untrue. Because of its inherently confidential nature, we did seek to protect the data by asking all contractors having access to it to return their copies of the dataset to NASA, where it would be preserved, or to destroy those copies. With the contract closed and the project ended, NASA cannot legally allow independent contractors to retain confidential data. But there is not now, and never was, any intention to destroy the NAOMS data. Further to this point, the unredacted raw data has been furnished to the Congress.
Finally, we have been criticized for releasing the data in PDF format. This is the standard form in which we release data publicly when no particular format has been specified or requested. However, it is true that the sheer volume of NAOMS survey data makes PDF files cumbersome to work with. Accordingly, I have made an exception to our standard practice in this case, and both the initially redacted data and all subsequent releases will be published on our website in Excel format.
I think that's about it. As usual in such circumstances, there are lessons to be learned, remembered, and applied. The NAOMS case demonstrates again, if such demonstrations were needed, the importance of peer review, scientific integrity, admitting mistakes when they are made, correcting them as best we can, and keeping our word, despite the criticism that can ensue.
Michael D. Griffin