NASA's Contributions to Aeronautics, Volume 2, by the National Aeronautics and Space Administration


CASE 4

Human Factors Research: Meshing Pilots with Planes

Steven A. Ruffin

The invention of flight exposed human limitations. Altitude effects endangered early aviators. As the capabilities of aircraft grew, so did the challenges for aeromedical and human factors researchers. Open cockpits gave way to pressurized cabins. Wicker seats perched on the leading edge of frail wood-and-fabric wings were replaced by robust metal seats and, eventually, sophisticated rocket-boosted ejection seats. Casual cloth work clothes and hats gave way to increasingly complex protective suits.


Case-4 Cover Image: A Langley Research Center human factors research engineer inspects the interior of a light business aircraft after a simulated crash to assess the loads experienced during accidents and develop means of improving survivability. NASA.

As Mercury astronaut Alan B. Shepard, Jr., lay flat on his back, sealed in a metal capsule perched high atop a Redstone rocket on the morning of May 5, 1961, many thoughts probably crossed his mind: the pride he felt at becoming America’s first man in space; the possibility that the powerful rocket beneath him would blow him sky high . . . in a bad way; or perhaps an even greater fear that he would “screw the pooch” by doing something to embarrass himself—or far worse—jeopardize the U.S. space program.

After lying there nearly 4 hours and suffering through several launch delays, however, Shepard was by his own admission not thinking about any of these things. Rather, he was consumed with an issue much more down to earth: his bladder was full, and he desperately needed to relieve himself. Because exiting the capsule was out of the question at this point, he literally had no place to go. The designers of his modified Goodrich U.S. Navy Mark IV pressure suit had provided for nearly every contingency imaginable, but not this; after all, the flight was only scheduled to last a few minutes.

Finally, Shepard was forced to make his need known to the controllers below. As he candidly described later, “You heard me, I’ve got to pee. I’ve been in here forever.”[1] Despite the unequivocal reply of “No!” to his request, Shepard’s bladder gave him no alternative but to persist. Historic flight or not, he had to go—and now.


Mercury 7 astronaut Alan B. Shepard, Jr., preparing for his historic flight of May 5, 1961. His gleaming silver pressure suit had all the bells and whistles . . . except for one. NASA.

When the powers below finally accepted that they had no choice, they gave the suffering astronaut a reluctant thumbs up: so, “pee,” he did . . . all over his sensor-laden body and inside his gleaming silver spacesuit. And then, while the world watched—unaware of this behind-the-scenes drama—Shepard rode his spaceship into history . . . drenched in his own urine.

This inauspicious moment should have been something of an epiphany for the human factors scientists who worked for the newly formed National Aeronautics and Space Administration (NASA). It graphically pointed out the obvious: human requirements—even the most basic ones—are not optional; they are real, and accommodations must always be made to meet them. But NASA’s piloted space program had advanced so far technologically in such a short time that this was only one of many lessons that the Agency’s planners had learned the hard way. There would be many more in the years to come.

As described in Tom Wolfe’s book The Right Stuff and the film of the same name, the first astronauts were considered by many of their contemporary non-astronaut pilots—including the ace who first broke the sound barrier, U.S. Air Force test pilot Chuck Yeager—to be little more than “spam in a can.”[2] In fact, Yeager’s commander in charge of all the test pilots at Edwards Air Force Base had made it known that he didn’t particularly want his top pilots volunteering for the astronaut program; he considered it a “waste of talent.”[3] After all, these new astronauts—more like lab animals than pilots—had little real function in the early flights other than to survive, and sealed as they were in their tiny metal capsules with no realistic means of escape, the cynical “spam in a can” metaphor was not entirely inappropriate.

But all pilots appreciated the dangers faced by this new breed of American hero: based on the space program’s much-publicized recent history of one spectacular experimental launch failure after another, it seemed like a morbidly fair bet to most observers that the brave astronauts, sitting helplessly astride 30 tons of unstable and highly explosive rocket fuel, had a realistic chance of becoming something akin to America’s most famous canned meat dish. It was indeed a dangerous job, even for the 7 overqualified test-pilots-turned-astronauts who had been so carefully chosen from more than 500 actively serving military test pilots.[4] Clearly, piloted space flight had to become considerably more human-friendly if it were to become the way of the future.

NASA had existed less than 3 years before Shepard’s flight. On July 29, 1958, President Dwight D. Eisenhower signed into law the National Aeronautics and Space Act of 1958, and chief among its provisions was the establishment of NASA. Expanding on the act’s stated purpose of conducting research into the “problems of flight within and outside the earth’s atmosphere” was an objective to develop vehicles capable of carrying—among other things—“living organisms” through space.[5]

Because this official directive clearly implied the intention of sending humans into space, NASA was from its inception charged with formulating a piloted space program. Consequently, within 3 years of its creation, the budding space agency successfully launched its first human, Alan Shepard, into space on Mercury mission MR-3. Encapsulated in his Freedom 7 spacecraft, he lifted off from Cape Canaveral, FL, and flew to an altitude of just over 116 miles before splashing down into the Atlantic Ocean 302 miles downrange.[6] It was only a 15-minute suborbital flight and, as related above, not without problems, but it accomplished its objective: America officially had a piloted space program.

This was no small accomplishment. Numerous major technological barriers had to be surmounted during this short time before even this most basic of piloted space flights was possible. Among these obstacles, none was more challenging than the problems associated with maintaining and supporting human life in the ultrahostile environment of space. Thus, from the beginning of the Nation’s space program and continuing to the present, human factors research has been vital to NASA’s comprehensive research program.

The Science of Human Factors

To be clear, however, NASA did not invent the science of human factors. Not only was the term in use long before NASA existed; the concept it describes has existed since the beginning of mankind. Human factors research encompasses nearly all aspects of science and technology and has therefore been described by several different names. In simplest terms, human factors studies the interface between humans and the machines they operate. One of the pioneers of this science, Dr. Alphonse Chapanis, provided a more inclusive and descriptive definition: “Human factors discovers and applies information about human behavior, abilities, limitations, and other characteristics to the design of tools, machines, systems, tasks, jobs, and environments for productive, safe, comfortable, and effective human use.”[7] The goal of human factors research, therefore, is to reduce error while increasing productivity, safety, and comfort in the interaction between humans and the tools with which they work.[8]

As already suggested, the study of human factors involves a myriad of disciplines. These include medicine, physiology, applied psychology, engineering, sociology, anthropology, biology, and education.[9] These in turn interact with one another and with other technical and scientific fields, as they relate to behavior and usage of technology. Human factors issues are also described by many similar—though not necessarily synonymous—terms, such as human engineering, human factors engineering, human factors integration, human systems integration, ergonomics, usability, engineering psychology, applied experimental psychology, biomechanics, biotechnology, man-machine design (or integration), and human-centered design.[10]

The Changing Human Factors Dimension Over Time

The consideration of human factors in technology has existed since the first man shaped a wooden spear with a sharp rock to help him grasp it more firmly. It therefore stands to reason that the dimension of human factors has changed over time with advancing technology—a trend that has accelerated throughout the 20th century and into the current one.[11]

Man’s earliest requirements for using his primitive tools and weapons gave way during the Industrial Revolution to more refined needs in operating more complicated tools and machines. During this period, the emergence of more complex machinery necessitated increased consideration of the needs of the humans who were to operate this machinery—even if it was nothing more complicated than providing a place for the operator to sit, or a handle or step to help this person access instruments and controls. In the years after the Industrial Revolution, human factors concerns became increasingly important.[12]

The Altitude Problem

The interface between humans and technology was no less important for those early pioneers, who, for the first time in history, were starting to reach for the sky. Human factors research in aeronautics did not, however, begin with the Wright brothers’ first powered flight in 1903; it began more than a century earlier.

Much of this early work dealt with the effects of high altitude on humans. At greater heights above the Earth, barometric pressure decreases and the air becomes thinner, so each breath delivers less oxygen. In humans operating high above sea level without supplemental oxygen, this translates to a medical condition known as hypoxia. The untoward effects of hypoxia, or altitude sickness, had been known for centuries—long before man ever took to the skies. It was a well-known affliction among ancient explorers traversing high mountains, hence the still commonly used term mountain sickness.[13]
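The falloff of oxygen with altitude can be sketched quantitatively. The short Python example below uses the isothermal barometric formula to estimate how total pressure, and with it the partial pressure of oxygen, declines with height; the constant-temperature assumption and all function names here are illustrative choices for this sketch, not anything given in the text.

```python
import math

# Illustrative isothermal model of the atmosphere (assumed constants,
# not from the original text).
P0 = 101_325.0        # sea-level pressure, Pa
M = 0.0289644         # molar mass of dry air, kg/mol
G = 9.80665           # gravitational acceleration, m/s^2
R = 8.31446           # universal gas constant, J/(mol*K)
T = 288.15            # assumed uniform temperature, K (15 deg C)
O2_FRACTION = 0.2095  # oxygen fraction of dry air, constant with altitude

def pressure_pa(altitude_m: float) -> float:
    """Ambient pressure at a given altitude (isothermal approximation)."""
    return P0 * math.exp(-M * G * altitude_m / (R * T))

def oxygen_partial_pressure_pa(altitude_m: float) -> float:
    """Oxygen partial pressure: the mixing fraction stays constant,
    so available oxygen falls in step with total pressure."""
    return O2_FRACTION * pressure_pa(altitude_m)

if __name__ == "__main__":
    for feet in (0, 10_000, 20_000, 36_000):
        meters = feet * 0.3048
        print(f"{feet:>6} ft: total {pressure_pa(meters) / 1000:6.1f} kPa, "
              f"O2 {oxygen_partial_pressure_pa(meters) / 1000:5.1f} kPa")
```

In this simple model, total pressure at roughly 20,000 feet is about half its sea-level value, and the oxygen available per breath is halved with it, which is consistent with why unprotected flyers at such altitudes succumbed to hypoxia.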

The world’s first aeronauts—the early balloonists—soon noticed this phenomenon when ascending to higher altitudes; eventually, some of the early flying scientists began to study it. As early as 1784, American physician John Jeffries ascended to more than 9,000 feet over London with French balloonist Jean Pierre Blanchard.[14] During this flight, they recorded changes in temperature and barometric pressure and became perhaps the first to record an “aeromedical” problem, in the form of ear pain associated with altitude changes.[15] Another early flying doctor, British physician John Shelton, also wrote of the detrimental effects of high-altitude flight on humans.[16]

During the 1870s—with mankind’s first powered, winged human flight still decades in the future—French physiologist Paul Bert conducted important research on the manner in which high-altitude flight affects living organisms. Using the world’s first pressure chamber, he studied the effects of varying barometric pressure and oxygen levels on dogs and later humans—himself included. He conducted 670 experiments at simulated altitudes of up to 36,000 feet. His findings clarified the effects of high-altitude conditions on humans and established the requirement for supplemental oxygen at higher altitudes.[17] Later studies by other researchers followed, so that by the time piloted flight in powered aircraft became a reality at Kitty Hawk, NC, on December 17, 1903, the scientific community already had a substantial amount of knowledge concerning the physiology of high-altitude flight. Even so, there was much more to be learned, and additional research in this important area would continue in the decades to come.

Early Flight and the Emergence of Human Factors Research

During the early years of 20th century aviation, it became apparent that maintaining human life and function at high altitude was only one of many human factors challenges associated with powered flight. Aviation received its first big technological boost during the World War I years of 1914–1918.[18] Accompanying this advancement was a new set of human-related problems associated with flight.[19] As a result of the massive, nearly overnight wartime buildup, there were suddenly tens of thousands of newly trained pilots worldwide, flying on a daily basis in aircraft far more advanced than anyone had ever imagined possible. In the latter stages of the war, aeronautical know-how had become so sophisticated that aircraft capabilities had surpassed those of their human operators. These Great War pilots, flying open-cockpit aircraft capable of altitudes occasionally exceeding 20,000 feet, began to routinely suffer from altitude sickness and frostbite.[20] They were also experiencing pressure-induced ear, sinus, and dental pain, as well as motion sickness and vertigo.[21] In addition, these early open-cockpit pilots endured the effects of ear-shattering noise, severe vibration, noxious engine fumes, extreme acceleration or gravitational g forces, and a constant hurricane-force wind blast to their faces.[22] And as if these physical challenges were not bad enough, these early pilots also suffered devastating injuries from crashes in aircraft equipped with practically no basic safety features.[23] Less obvious, but still a very real human problem, these early high flyers were exhibiting an array of psychological problems, to which these stresses undoubtedly contributed.[24] Indeed, though proof of the human limitations in flying during this period was hardly needed, the British found early in the war that only 2 percent of aviation fatalities came at the hands of the enemy, while 90 percent were attributed to pilot deficiencies; the remainder came from structural and engine failure and a variety of lesser causes.[25] By the end of World War I, it was painfully apparent to flight surgeons, psychologists, aircraft designers, and engineers that much additional work was needed to improve the human-machine interface associated with piloted flight.

Because of the many flight-related medical problems observed in airmen during the Great War, much of the human factors research accomplished during the following two decades leading to the Second World War focused largely on the aeromedical aspects of flight. Flight surgeons, physiologists, engineers, and other professionals of this period devoted themselves to developing better life-support equipment and other protective gear to improve safety and efficiency during flight operations. Great emphasis was also placed on improving pilot selection.[26]

Of particular note during the interwar period of the 1920s and 1930s were several piloted high-altitude balloon flights conducted to further investigate conditions in the upper part of the Earth’s atmosphere known as the stratosphere. Perhaps the most ambitious and fruitful of these was the 1935 joint U.S. Army Air Corps/National Geographic Society flight that lifted off from a South Dakota Black Hills natural geological depression known as the “Stratobowl.” The two Air Corps officers, riding in a sealed metal gondola—much like a future space capsule—with a virtual laboratory full of scientific monitoring equipment, traveled to a record altitude of 72,395 feet.[27] Little did they know it at the time, but the data they collected while aloft would be put to good use decades later by human factors scientists in the piloted space program. This included information about cosmic rays, the distribution of ozone in the upper atmosphere, and the spectra and brightness of sun and sky, as well as the chemical composition, electrical conductivity, and living spore content of the air at that altitude.[28]

Although the U.S. Army Air Corps and Navy conducted the bulk of the human factors research during the interwar period of the 1920s and 1930s, another important contributor was the National Advisory Committee for Aeronautics (NACA). Established in 1915 with only a minuscule $5,000 budget and an ambitious mission to “direct and conduct research and experimentation in aeronautics, with a view to their practical solution,”[29] the NACA became one of this country’s leading aeronautical research agencies and remained so until its replacement in 1958 by the newly established NASA. The work that the NACA accomplished during this era in design engineering and life-support systems, in cooperation with the U.S. military and other agencies and institutions, contributed greatly to information and technology that would become vital to the piloted space program, still decades—and another World War—in the future.[30]

World War II and the Birth of Human Factors Engineering

During World War II, human factors was pushed into even greater prominence as a science. During this wartime period of rapidly advancing military technology, greater demands were being placed on the users of this technology. Success or failure depended on such factors as the operators’ attention span, hand-eye coordination, situational awareness, and decision-making skills. These demands made it increasingly challenging for operators of the latest military hardware—aircraft, tanks, ships, and other complex military machinery—to operate their equipment safely and efficiently.[31] Thus, the need for greater consideration of human factors issues in technological design became more obvious than ever before; as a consequence, the discipline of human engineering emerged.[32] This branch of human factors research is involved with finding ways of designing “machines, operations, and work environments so that they match human capacities and limitations.” Or, to put it another way, it is the “engineering of machines for human use and the engineering of human tasks for operating machines.”[33]

During World War II, no area of military technology had a more critical need for both human factors and human engineering considerations than did aviation.[34] Many of the biomedical problems afflicting airmen in the First World War had by this time been addressed, but new challenges had appeared. Most noticeable were the increased physiological strains on air crewmen, who were now flying faster, higher, for longer periods of time, and—because of wartime demands—more aggressively than ever before. High-performance World War II aircraft were capable of cruising several times faster than their predecessors in the previous war and were routinely approaching the speed of sound in steep dives. Because of these higher speeds, they were also exerting more than enough gravitational g force during turns and pullouts to render pilots almost instantly unconscious. In addition, some of these advanced aircraft could climb high into the stratosphere to altitudes exceeding 40,000 feet and offered more hours of flight endurance than their human operators possessed. Because of this phenomenal increase in aircraft technology, human factors research focused heavily on addressing the problems of high-performance flight.[35]

The other aspect of the human factors challenge coming into play involved human engineering concerns. Aircraft of this era were exhibiting a rapidly escalating degree of complexity that made flying them—particularly under combat conditions—nearly overwhelming. Because of this combination of challenges to the mortals charged with operating these aircraft, human engineering became an increasingly vital aspect of aircraft design.[36]

During these wartime years, high-performance military aircraft were still crashing at an alarmingly high rate, in spite of rigorous pilot training programs and structurally well-designed aircraft. It was eventually accepted that not all of these accidents could be adequately explained by the standard default excuse of “pilot error.” Instead, it became apparent that many of these crashes were more a result of “designer error” than operator error.[37] Military aircraft designers had to do more to help the humans charged with operating these complex, high-performance aircraft. Thus, not only was there a need during these war years for greater human safety and life support in the increasingly hostile environment aloft, but the crews also needed better-designed cockpits to help them perform the complex tasks necessary to carry out their missions and safely return.[38]

In earlier aircraft of this era, design and placement of controls and gauges tended to be purely engineer-driven; that is, they were constructed to be as light as possible and located wherever designers could most conveniently place them, using the shortest connections and simplest attachments. Because the needs of the users were not always taken into account, cockpit designs tended not to be as user-friendly as they should have been. This also meant that there was no attempt to standardize the cockpit layout between different types of aircraft. This contributed to longer and more difficult transitions to new aircraft with different instrument and control arrangements. This disregard for human needs in cockpit design resulted in decreased aircrew efficiency and performance, greater fatigue, and, ultimately, more mistakes.[39]

An example of this lack of human consideration in cockpit design was one that existed in an early model Boeing B-17 bomber. In this aircraft, the flap and landing gear handles were similar in appearance and proximity, and therefore easily confused. This unfortunate arrangement had already inducted several pilots into the dreaded “gear-up club,” when, after landing, they inadvertently retracted the landing gear instead of the intended flaps. To address this problem, a young Air Corps physiologist and Yale psychology Ph.D. named Alphonse Chapanis proved that the incidence of such pilot errors could be greatly reduced by more logical control design and placement. His ingeniously simple solution of moving the controls apart from one another and attaching different shapes to the various handles allowed pilots to determine by touch alone which control to activate. This fix—though not exactly rocket science—was all that was needed to end a dangerous and costly problem.[40]

As a result of a host of human-operator problems, such as those described above, wartime aircraft design engineers began routinely working with industrial and engineering psychologists and flight surgeons to optimize human utilization of this technology. Thus was born in aviation the concept of human factors in engineering design, a discipline that would become increasingly crucial in the decades to come.[41]

The Jet Age: Man Reaches the Edge of Space

By the end of the Second World War, aviation was already well into the jet age, and man was flying yet higher and faster in his quest for space. During the years after the end of the war, human factors research continued to evolve in support of this movement. A multiplicity of human and animal studies were conducted during this period by military, civilian, and Government researchers to learn more about such problems as acceleration and deceleration, emergency egress from high-speed jet aircraft, explosive decompression, pressurization of suits and cockpits, and the biological effects of various types of cosmic rays. In addition, a significant amount of work concentrated on instrument design and cockpit display.[42]

During the years leading up to America’s space program, humans were already operating at the edge of space. This was made possible in large part by the cutting-edge performance of the NACA–NASA high-speed, high-altitude rocket “X-planes”—progressing from the Bell X-1, in which Chuck Yeager became the first person to officially break the sound barrier, on October 14, 1947, to the phenomenal hypersonic X-15 rocket plane, which introduced man to true space flight.[43]

These unique experimental rocket-propelled aircraft, developed and flown from 1946 through 1968, were instrumental in helping scientists understand how best to sustain human life during high-speed, high-altitude flight.[44] One of the more important human factors developments employed in the first of this series, the Bell X-1 rocket plane, was the T-1 partial pressure suit designed by Dr. James Henry of the University of