NASA's Contributions to Aeronautics, Volume 2

CASE 11

Introducing Synthetic Vision to the Cockpit

Robert A. Rivers

The evolution of flight has been marked by steady advances in instrumentation to improve safety and efficiency. Providing revolutionary enhancements to aircraft instrument panels for improved situational awareness, efficiency of operation, and mitigation of hazards has been a NASA priority for over 30 years. NASA’s heritage of research in synthetic vision has generated useful concepts, demonstrations of key technological breakthroughs, and prototype systems and architectures.


Case-11 Cover Image: NASA synthetic vision research promises to increase flight safety by giving pilots perfect positional and situation awareness, regardless of weather or visibility conditions. Richard P. Hallion.

The connection of the National Aeronautics and Space Administration (NASA) to improving instrument displays dates to the advent of instrument flying, when James H. Doolittle conducted his “blind flying” experiments with the Guggenheim Flight Laboratory in 1929, in the era of the Ford Tri-Motor transport.[1] Doolittle became the first pilot to take off, fly, and land entirely by instruments, his visibility being totally obscured by a canvas hood. At the time of this flight, Doolittle was already a world-famous airman, who had earned a doctorate in aeronautical engineering from the Massachusetts Institute of Technology and whose research on accelerations in flight constituted one of the most important contributions to interwar aeronautics. His formal association with the National Advisory Committee for Aeronautics (NACA) Langley Aeronautical Laboratory began in 1928. In the late 1950s, Doolittle became the last Chairman of the NACA and helped guide its transition into NASA.

The capabilities of air transport aircraft increased dramatically between the era of the Ford Tri-Motor of the late 1920s and the jetliners of the late 1960s. Passenger capacity increased thirtyfold, range by a factor of ten, and speed by a factor of five.[2] But little changed in one basic area: cockpit presentations and the pilot-aircraft interface. As NASA Ames Research Center test pilot George E. Cooper noted at a seminal November 1971 conference held at Langley Research Center (LaRC) on technologies for future civil aircraft:

Controls, selectors, and dial and needle instruments which were in use over thirty years ago are still common in the majority of civil aircraft presently in use. By comparing the cockpit of a 30-year-old three-engine transport with that of a current four-engine jet transport, this similarity can be seen. However, the cockpit of the jet transport has become much more complicated than that of the older transport because of the evolutionary process of adding information by more instruments, controls, and selectors to provide increased capability or to overcome deficiencies. This trend toward complexity in the cockpit can be attributed to the use of more complex aircraft systems and the desire to extend the aircraft operating conditions to overcome limitations due to environmental constraints of weather (e.g., poor visibility, low ceiling, etc.) and of congested air traffic. System complexity arises from adding more propulsion units, stability and control augmentation, control automation, sophisticated guidance and navigation systems, and a means for monitoring the status of various aircraft systems.[3]

Assessing the state of available technology, human factors, and potential improvement, Cooper issued a bold challenge to NASA and the larger aeronautical community, noting: “A major advance during the 1970s must be the development of more effective means for systematically evaluating the available technology for improving the pilot-aircraft interface if major innovations in the cockpit are to be obtained during the 1980s.”[4] To illustrate his point, Cooper included two drawings, one representative of the dial-intensive contemporary jetliner cockpit presentation and the other of what might be achieved with advanced multifunction display approaches over the next decade.


The pilot-aircraft interface, as seen by NASA pilot George E. Cooper, circa 1971. Note the predominance of gauges and dials. NASA.


Cooper’s concept of an advanced multifunction electronic cockpit. Note the flightpath “highway in the sky” presentation. NASA.

At the same conference, Langley Director Edgar M. Cortright noted that, in the 6 years from 1964 through 1969, airline costs incurred by congestion-caused terminal area traffic delays had risen from less than $40 million to $160 million per year. He said that it was “symptomatic of the inability of many terminals to handle more traffic,” but that “improved ground and airborne electronic systems, coupled with acceptable aircraft characteristics, would improve all-weather operations, permit a wider variety of approach paths and closer spacing, and thereby increase airport capacity by about 100 percent if dual runways were provided.”[5] Langley avionics researcher G. Barry Graves noted the potential for revolutionary breakthroughs in cockpit avionics to improve the pilot-aircraft interface and take aviation operations and safety to a new level, particularly the use of “computer-centered digital systems for both flight management and advanced control applications, automated communications, [and] systems for wide-area navigation and surveillance.”[6]

But this early work generated little immediate response from the aviation community, as requisite supporting technologies were not sufficiently mature to permit their practical exploitation. It was not until the 1980s, when the pace of computer graphics and simulation development accelerated, that a heightened interest developed in improving pilot performance in poor visibility conditions. Accordingly, researchers increasingly studied the application of artificial intelligence (AI) to flight deck functions, working closely with professional pilots from the airlines, military, and flight-test community. While many exaggerated claims were made—given the relative immaturity of the computer and AI field at that time—researchers nevertheless recognized, as Sheldon Baron and Carl Feehrer wrote, “one can conceive of a wide range of possible applications in the area of intelligent aids for flight crew.”[7] Interviews with pilots revealed that “descent and approach phases accounted for the greatest amounts of workload when averaged across all system management categories,” stimulating efforts to develop what was then popularly termed a “pilot’s associate” AI system.[8]

In this growing climate of interest, John D. Shaughnessy and Hugh P. Bergeron’s Single Pilot Instrument Flight Rules (SPIFR) project constituted a notable first step, inasmuch as SPIFR’s novel “follow-me box” showed promise as an intuitive aid for inexperienced pilots flying in instrument conditions. Subsequently, Langley’s James J. Adams conducted simulator evaluations of the display, confirming its potential.[9] Building on these “follow-me box” developments, Langley’s Eric C. Stewart developed a concept for portraying an aircraft’s current and future desired positions. He created a synthetic display similar to the scene a driver experiences while driving a car, combining it with highway-in-the-sky (HITS) displays.[10] This so-called “E-Z Fly” project was incorporated into Langley’s General-Aviation Stall/Spin Program, a major contemporary study to improve the safety of general-aviation (GA) pilots and passengers. Numerous test subjects, from nonpilots to highly experienced test pilots, evaluated Stewart’s HITS implementation concept. NASA flight-test reports illustrated both the challenges and the opportunities that the HITS/E-Z Fly combination offered.[11]

E-Z Fly decoupled the flight controls of a Cessna 402 twin-engine, general-aviation aircraft simulated in Langley’s GA Simulator, and HITS provided pathway guidance to the pilot. This decoupling, while making the simulated airplane “easy to fly,” also reduced its responsiveness. Providing this level of HITS technology in a low-end GA aircraft posed a range of technical, economic, implementation, and operational challenges. As stated in a flight-test report, “The concept of placing inexperienced pilots in the National Airspace System has many disadvantages. Certainly, system failures could have disastrous consequences.”[12] Nevertheless, the basic technology was sound and helped set the stage for future projects. By the early 1990s, NASA Langley was developing the infrastructure to support wide-ranging research into synthetically driven flight deck displays for GA, commercial and business aircraft (CBA), and NASA’s High-Speed Civil Transport (HSCT).[13] The initial limited idea of easing the workload for low-time pilots would lead to sophisticated display systems that would revolutionize the flight deck. Ultimately, in 1999, a dedicated, well-funded Synthetic Vision Systems Project was created under NASA’s Aviation Safety Program (AvSP), headed by Daniel G. Baize. Inspired by Langley researcher Russell V. Parrish, researchers accomplished a number of comprehensive and successful GA and CBA flight and simulation experiments before the project ended in 2005. These complex, highly organized, and efficiently interrelated experiments pushed the state of the art in aircraft guidance, display, and navigation systems.

Significant work on synthetic vision systems and sensor fusion issues was also undertaken at the NASA Johnson Space Center (JSC) in the late 1990s, as researchers grappled with the challenge of developing displays for ground-based pilots to control the proposed X-38 reentry test vehicle. As subsequently discussed, through a NASA-contractor partnership, they developed a highly efficient sensor fusion technique whereby real-time video signals could be blended with synthetically derived scenes using a laptop computer. After cancellation of the X-38 program, JSC engineer Jeffrey L. Fox and Michael Abernathy of Rapid Imaging Software, Inc. (RIS), which had developed the sensor fusion technology for the X-38 program under a small business contract, continued to expand these initial successes, together with Michael L. Coffman of the Federal Aviation Administration (FAA). Later joined by astronaut Eric C. Boe and the author (formerly a project pilot on a number of LaRC Synthetic Vision Systems (SVS) programs), this partnership accomplished four significant flight-test experiments using JSC and FAA aircraft, motivated by a unifying belief in the value of SVS technology for increasing flight safety and efficiency.

Synthetic Vision Systems research at NASA continues today at various levels. After the SVS project ended in 2005, almost all team members continued building upon its accomplishments, transitioning to the new Integrated Intelligent Flight Deck Technologies (IIFDT) project, “a multi-disciplinary research effort to develop flight deck technologies that mitigate operator-, automation-, and environment-induced hazards.”[14] IIFDT constituted both a major element of NASA’s Aviation Safety Program and a crucial underpinning of the Next Generation Air Transportation System (NGATS), and it was itself dependent upon the maturation of SVS begun within the project that concluded in 2005. While much work remains to be done to fulfill the vision, expectations, and promise of NGATS, the principles and practicality of SVS and its application to the cockpit have been clearly demonstrated.[15] The following account traces SVS research, as seen from the perspective of a NASA research pilot who participated in key efforts that demonstrated its potential and value for professional civil, military, and general-aviation pilots alike.

Synthetic Vision: An Overview

NASA’s early research in SVS concepts almost immediately influenced broader perceptions of the field. Working with NASA researchers who reviewed and helped write the text, the Federal Aviation Administration crafted a definition of SVS published in Advisory Circular 120-29A, describing it as “a system used to create a synthetic image (e.g., typically a computer generated picture) representing the environment external to the airplane.” In 2000, NASA Langley researchers Russell V. Parrish, Daniel G. Baize, and Michael S. Lewis gave a more detailed definition as “a display system in which the view of the external environment is provided by melding computer-generated external topography scenes from on-board databases with flight display symbologies and other information from on-board sensors, data links, and navigation systems. These systems are characterized by their ability to represent, in an intuitive manner, the visual information and cues that a flight crew would have in daylight Visual Meteorological Conditions (VMC).”[16] This definition can be expanded further to include sensor fusion, which provides the capability to blend, in real time and in varying proportions, the synthetically derived imagery with video or infrared sensor signals. The key requirements of SVS, as stated above, are to provide the pilot with an intuitive, equivalent-to-daylight VMC capability in all-weather conditions at any time on a tactical level (with present and near-future time and position portrayed on a head-up display [HUD] or primary flight display [PFD]) and far improved situation awareness on a strategic level (with future time and position portrayed on a navigation display [a NAV display, or ND]).
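As a concrete illustration of such blending, the sketch below mixes a registered synthetic frame with a live sensor frame on a per-pixel weighted basis. It is a minimal sketch, not NASA’s or RIS’s implementation; the frame sizes, the alpha value, and the function name are illustrative assumptions, and only NumPy is assumed.

```python
import numpy as np

def fuse_frames(synthetic: np.ndarray, sensor: np.ndarray,
                alpha: float) -> np.ndarray:
    """Blend two registered 8-bit frames of identical geometry.

    alpha = 1.0 shows the pure synthetic scene, 0.0 the pure sensor
    image; intermediate values mix the two in varying proportions.
    """
    if synthetic.shape != sensor.shape:
        raise ValueError("frames must be registered to the same geometry")
    mixed = alpha * synthetic.astype(np.float32) \
            + (1.0 - alpha) * sensor.astype(np.float32)
    return mixed.clip(0, 255).astype(np.uint8)

# Stand-in frames: lean toward the synthetic scene in poor visibility.
synthetic = np.zeros((480, 640, 3), dtype=np.uint8)   # rendered scene
video = np.full((480, 640, 3), 128, dtype=np.uint8)   # camera frame
display_frame = fuse_frames(synthetic, video, alpha=0.7)
```

In practice, the blend ratio would be varied in real time, weighting the sensor image when it is usable and the synthetic scene when it is not.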

In the earliest days of proto-SVS development during the 1980s and early 1990s, the state of the art of graphics generators limited terrain portrayal to stroke-generated line segments forming polygons to represent terrain features. Superimposing HITS symbology on these displays was not difficult, but the improvement in situational awareness (SA) was somewhat limited by the low-fidelity terrain rendering. In fact, the superposition on basic PFDs of HITS projected flight paths, terminating in a rectilinear runway presentation at the end of the approach segment, inspired the development of improved terrain portrayal by suggesting that terrain, too, could be presented as simple polygons. The development of raster graphics generators and texturing capabilities allowed these simple polygons to be filled, producing more realistic scenes. Aerial and satellite photography providing “photo-realistic” quality images emerged in the mid-1990s, along with improved synthetic displays enhanced by constantly improving databases. With vastly improved graphics generators (reflecting increasing computational power), the early concept of co-displaying the desired vertical and lateral pathway guidance ahead of the airplane in a three-dimensional perspective has evolved from the crude representations of just two decades ago to the present high-resolution, photo-realistic, elevation-based three-dimensional displays, replete with overlaid pathway guidance, that provide the pilot with an unobstructed view of the world. Effectively, then, the goal of synthetically providing the pilot with a daylight VMC view in all weather has been achieved.[17]
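The pathway portrayal itself rests on ordinary perspective projection: waypoints defined in three dimensions ahead of the aircraft are mapped onto the two-dimensional display. The sketch below illustrates the idea with a simple pinhole model and hypothetical waypoints in body axes (x forward, y right, z down); it illustrates the geometry only, not any particular NASA implementation, and only NumPy is assumed.

```python
import numpy as np

def project_path(waypoints_body: np.ndarray, focal: float) -> np.ndarray:
    """Project pathway points from body axes onto display coordinates.

    Returns (horizontal, vertical) pairs in display units; points at or
    behind the eye point (x <= 0) are dropped rather than drawn.
    """
    ahead = waypoints_body[waypoints_body[:, 0] > 0]
    screen_x = focal * ahead[:, 1] / ahead[:, 0]   # right of boresight
    screen_y = -focal * ahead[:, 2] / ahead[:, 0]  # above boresight
    return np.column_stack([screen_x, screen_y])

# A hypothetical pathway 100 m below the aircraft at growing distances.
path = np.array([[d, 0.0, 100.0] for d in (200.0, 400.0, 800.0, 1600.0)])
print(project_path(path, focal=1000.0))
# Successive points (-500, -250, -125, -62.5 display units below center)
# climb toward the horizon, giving the converging "roadway" appearance.
```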


Elevation-based generic primary flight display used on a NASA SVS test in 2005. NASA.

Though the expressions Synthetic Vision Systems, External Vision Systems (XVS), and Enhanced Vision Systems (EVS) have often been used interchangeably, each is distinct. Strictly speaking, SVS has come to mean computer-generated imagery from onboard databases combined with precise Global Positioning System (GPS) navigation. SVS joins terrain, obstacle, and airport images with spatial and navigational inputs from a variety of sensor and reference systems to produce a realistic depiction of the external world. EVS and XVS employ imaging sensor systems such as television, millimeter wave, and infrared, integrated with display symbologies (altitude/airspeed tapes on a PFD or HUD, for example) to permit all-weather, day-night operations.[18]

Confusion in terminology, particularly in the early years, has characterized the field, including use of multiple terms. For example, in 1992, the FAA completed a flight test investigating millimeter wave and infrared sensors for all-weather operations under the name “Synthetic Vision Technology Demonstration.”[19] SVS and EVS are often combined as one expression, SVS/EVS, and the FAA has coined another term as well, EFVS, for Enhanced Flight Vision System. Industry has undertaken its own developments, with its own corporate names and nuances. A number of avionics companies have implemented various forms of SVS technologies in their newer flight deck systems, and various airframe manufacturers have obtained certification of both an Enhanced Vision System and a Synthetic Vision System for their business and regional aircraft. But much still remains to be done, with NASA, the FAA, and industry having yet to fully integrate SVS and EVS/EFVS technology into a comprehensive architecture furnishing Equivalent Visual Operations (EVO), blending infrared-based EFVS with SVS and millimeter-wave sensors, thereby creating an enabling technology for the FAA’s planned Next Generation Air Transportation System.[20]

The underlying foundation of SVS is a complete navigation and situational awareness system. A Synthetic Vision System integrates worldwide terrain, obstacle, and airport databases; real-time presentation of immediate tactical hazards (such as weather); an Inertial Navigation System (INS) and GPS navigation capability; advanced sensors for monitoring the integrity of the database and for object detection; presentation of traffic information; and a real-time synthetic vision display, with advanced pathway or equivalent guidance, effectively affording the aircrew a projected highway-in-the-sky ahead of them.[21] Two enabling technologies were necessary for SVS to be developed: increased computer storage capacity and a global, real-time, highly accurate navigation system. The former has been steadily developing over the past four decades, and the latter became available with the advent of GPS in the 1980s. These enabling technologies utilized or improved upon the Electronic Flight Information System (EFIS), or glass cockpit, architecture pioneered by NASA Langley in the 1970s and first flown on Langley’s Boeing 737 Advanced Transport Operating System (ATOPS) research airplane. The research accomplishments of this airplane, Boeing’s first production 737, in its two decades of NASA service are legendary: they included demonstration of the first glass cockpit in a transport aircraft, evaluation of transport aircraft fly-by-wire technology, the first GPS-guided blind landing, the development of wind shear detection systems, and the first SVS-guided landings in a transport aircraft.[22]
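One way to picture this integration is as the composition of each display frame from the component subsystems named above. The following sketch is a minimal illustration under that reading; every class, field, and function name in it is hypothetical, standing in for, rather than reproducing, any actual SVS architecture.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class NavState:
    """Blended INS/GPS navigation solution."""
    lat: float
    lon: float
    alt_ft: float
    heading_deg: float

@dataclass
class SvsFrame:
    """One composed synthetic-vision display frame."""
    terrain_view: str                           # rendered database scene
    obstacles: List[str]                        # nearby towers, buildings
    traffic: List[Tuple[float, float]]          # data-linked traffic
    pathway: List[Tuple[float, float, float]]   # HITS waypoints ahead
    integrity_ok: bool                          # sensor check of database

def compose_frame(nav, terrain_db, obstacle_db, route,
                  traffic_feed, integrity_check) -> SvsFrame:
    """Assemble one display frame from the component systems."""
    return SvsFrame(
        terrain_view=terrain_db(nav),
        obstacles=obstacle_db(nav),
        traffic=list(traffic_feed),
        pathway=route(nav),
        integrity_ok=integrity_check(nav),
    )

# Stub callables standing in for the real subsystems.
frame = compose_frame(
    NavState(lat=37.1, lon=-76.4, alt_ft=2500.0, heading_deg=90.0),
    terrain_db=lambda nav: f"terrain near {nav.lat:.1f}, {nav.lon:.1f}",
    obstacle_db=lambda nav: ["tower 2 nm ahead"],
    route=lambda nav: [(nav.lat, nav.lon + 0.02, 2300.0)],
    traffic_feed=[(37.2, -76.3)],
    integrity_check=lambda nav: True,
)
print(frame.terrain_view, frame.integrity_ok)
```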

The development of GPS satellite navigation enabled the evolution of SVS. GPS began as an Air Force–Navy effort to build a satellite-based navigation system that could meet the needs of fast-moving aircraft and missile systems, something the older TRANSIT system, developed in the late 1950s, could not. After early studies by a variety of organizations, foremost of which was the Aerospace Corporation, the Air Force formally launched the GPS research and development program in October 1963, issuing hardware design contracts 3 years later. Known initially as the Navstar GPS system, the concept involved a constellation of 24 satellites orbiting about 10,900 nautical miles (20,200 kilometers) above Earth, each transmitting a continual radio signal containing a precise time stamp from an onboard atomic clock. By recording the arrival times of signals from at least four satellites and comparing them with the embedded time stamps, a receiver could solve for its position and altitude to high accuracy. The first satellite was launched in 1978, and the constellation of 24 satellites was complete in 1995. Originally intended only for use by the Department of Defense, GPS was opened for civilian use (though to a lesser degree of precision) by President Ronald Reagan after a Korean Air Lines Boeing 747 commercial airliner strayed miles into Soviet territory and was shot down by Soviet interceptors in 1983. The utility of the GPS satellite network expanded dramatically in 2000, when the United States cleared civilian GPS users to receive the same level of precision as military forces, thus increasing civilian GPS accuracy tenfold.[23]
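The four-satellite requirement follows from the unknowns: three position coordinates plus the receiver’s own clock offset. The sketch below is a minimal Gauss-Newton solution of the resulting pseudorange equations; the satellite positions and clock bias are illustrative numbers, not real ephemeris data, and only NumPy is assumed.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_fix(sat_pos, pseudoranges, iters=10):
    """Solve for receiver position (x, y, z) and clock bias b.

    Each measured pseudorange satisfies rho_i = |sat_i - p| + C*b,
    so four satellites give four equations in the four unknowns.
    """
    x = np.zeros(4)  # initial guess: Earth's center, zero clock bias
    for _ in range(iters):
        p, b = x[:3], x[3]
        ranges = np.linalg.norm(sat_pos - p, axis=1)
        residuals = pseudoranges - (ranges + C * b)
        # Jacobian rows: d(rho_i)/dp = -(sat_i - p)/|sat_i - p|, d/db = C
        J = np.hstack([-(sat_pos - p) / ranges[:, None],
                       np.full((len(sat_pos), 1), C)])
        x += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x[:3], x[3]

# Illustrative geometry: satellites ~20,200 km up, receiver on the surface.
truth = np.array([6_371_000.0, 0.0, 0.0])   # receiver position, meters
bias = 1e-3                                 # 1 ms receiver clock error
sats = np.array([[26_571_000.0, 0.0, 0.0],
                 [20_000_000.0, 17_500_000.0, 0.0],
                 [20_000_000.0, 0.0, 17_500_000.0],
                 [20_000_000.0, -12_000_000.0, 12_000_000.0]])
rho = np.linalg.norm(sats - truth, axis=1) + C * bias
pos, b = solve_fix(sats, rho)
print(pos, b)  # recovers the surface position and the 1 ms clock bias
```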

Database quality was essential for the development of SVS. The 1990s saw giant strides taken when dedicated NASA Space Shuttle missions digitally mapped 80 percent of Earth’s land surface and almost 100 percent of the land between 60 degrees north and south latitude. At the same time, radar mapping from airplanes contributed to the digital terrain database, providing sufficient resolution for SVS en route and specific terminal area requirements. The Shuttle Endeavour Radar Topography Mission in 2000 produced topographical maps far more precise than previously available. Digital terrain databases are being produced by commercial and government organizations worldwide.[24]

With the enabling technologies maturing in the 1990s, and with its prior experience in developing glass cockpit systems, NASA Langley was poised to develop the concept of SVS as a highly effective tool for pilots to operate aircraft more effectively and safely. This did not happen directly but was the culmination of experience gained by Langley research engineers and pilots on NASA’s Terminal Area Productivity (TAP) and High-Speed Research (HSR) programs in the mid- to late 1990s. By 1999, when the SVS project of the AvSP was initiated and funded, Langley had an experienced core of engineers and research pilots eager to push the state of the art.

TAP, HSR, and the Early Development of SVS

In 1993, responding to anticipated increases in air travel demand, NASA established a Terminal Area Productivity program to increase airliner throughput at the Nation’s airports by at least 12 percent over existing levels of service. TAP consisted of four interrelated subelements: air traffic management, reduced separation operations, integration between aircraft and air traffic control (ATC), and Low Visibility Landing and Surface Operations (LVLASO).[25]

Of the four Agency subelements, the Low Visibility Landing and Surface Operations project assigned to Langley held the greatest significance for SVS research. A joint research effort of Langley and Ames Research Centers, LVLASO was intended to explore technologies that could improve the safety and efficiency of surface operations, including landing rollout, turnoff, and inbound and outbound taxi; making better use of existing runways; and thus obviating the need for expensive new facilities and the rebuilding and modification of older ones.[26] Steadily increasing numbers of surface accidents at major airports imparted particular urgency to the LVLASO effort; in 1996 alone, there were 287 incidents, and the early years of the 1990s had witnessed 5 fatal accidents.[27]

LVLASO researchers developed a system concept including two technologies: Taxiway Navigation and Situational Awareness (T-NASA) and Rollout Turnoff (ROTO). T-NASA used the HUD and the NAV display’s moving map functions to provide the pilot with taxi guidance and data-linked air traffic control instructions, and ROTO used the HUD to guide the pilot in braking levels and situation awareness for the selected runway turnoff. LVLASO also incorporated surface surveillance concepts to provide taxi traffic alerting with cooperative, transponder-equipped vehicles. LVLASO connected with later SVS research through its airport database and GPS requirements.

In July and August 1997, NASA Langley flight researchers undertook two sequential series of air and ground tests at Atlanta International Airport, using a NASA Boeing 757-200 series twin-jet narrow-body transport equipped with Langley-developed experimental cockpit displays. These displays permitted surface operations in visibility conditions down to a runway visual range (RVR) of 300 feet. Test crews included NASA pilots for the first series of tests and experienced airline captains for the second. Altogether, it was the first time that SVS had been demonstrated at a major airport using a large commercial jetliner.[28]

LVLASO results encouraged Langley to continue its research on integrating surface operation concepts into its SVS flight environment studies. Langley’s Wayne H. Bryant led the LVLASO effort, assisted by a number of key researchers, including Steven D. Young, Denise R. Jones, Richard Hueschen, and David Eckhardt.[29] When SVS became a focused project under AvSP in 1999, these talented researchers joined their colleagues from the HSR External Vision Systems project.[30] While LVLASO technologies were being developed, NASA was in the midst of one of the largest aeronautics programs in its history, the High-Speed Research Program. SVS research was a key part of this program as well.

After sporadic research into advancing the state of the art in high-speed aerodynamics in the 1970s, the United States began to look at both supersonic and hypersonic cruise technologies more seriously in the mid-1980s. Responding to a White House Office of Science and Technology Policy call for research into promoting long-range, high-speed aircraft, NASA awarded contracts to Boeing Commercial Airplanes and Douglas Aircraft Company in 1986 for market and technology feasibility studies of a potential High-Speed Civil Transport. The speed spectrum for these studies spanned the supersonic to hypersonic regions, and the areas of study included economic, environmental, and technical considerations. At the same time, LaRC conducted its own feasibility studies led by Charles M. Jackson, Chief of the High-Speed Research Division; his deputy, Wallace C. Sawyer; Samuel M. Dollyhigh; and A. Warner Robbins. These and follow-on studies concluded by 1988 that the most favorable candidate, considering all factors investigated, was a Mach 2 to Mach 3.2 HSCT with transpacific range.[31]

NASA created the High-Speed Research program in 1990 to investigate technical challenges involved with developing a Mach 2+ HSCT. Phase I of the HSR program was to determine if major environmental obstacles could be overcome, including ozone depletion, community noise, and sonic boom generation. NASA and its indus