NASA's Contributions to Aeronautics, Volume 1. National Aeronautics and Space Administration.

CASE 5

Toward Transatmospheric Flight: From V-2 to the X-51

T.A. Heppenheimer

The expansion of high-speed aerothermodynamic knowledge enabled the attainment of hypersonic speeds, that is, flight at speeds of Mach 5 and above. Blending the challenges of space flight and flight within the atmosphere, this capability led to the emergence of the field of transatmospherics: systems that would operate in the upper atmosphere, transitioning from lifting flight to ballistic flight, and back again. NACA–NASA research proved essential to mastery of this field, from the earliest days of blunt-body reentry theory to the advent of increasingly sophisticated transatmospheric concepts, such as the X-15, the Shuttle, the X-43A, and the X-51.

Case-5 Cover Image: X-15 research pilot (and, subsequently, Gemini and Apollo astronaut) Neil A. Armstrong, wearing the X-15’s Clark MC-2 full-pressure suit, 1960. NASA.

On December 7, 1995, the entry probe of the Galileo spacecraft plunged downward into the atmosphere of Jupiter. It sliced into the planet’s hydrogen-rich envelope at a gentle angle and entered at Mach 50, with its speed of 29.5 miles per second being four times that of a return to Earth from the Moon. The deceleration peaked at 228 g’s, equivalent to slamming from 5,000 mph to a standstill in a single second. Yet the probe survived. It deployed a parachute and transmitted data from its onboard instruments for nearly an hour, until overwhelmed by the increasing pressures it encountered within the depths of the Jovian atmosphere.[1]
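
That comparison is easy to verify with a little arithmetic. The short Python sketch below converts 228 g's into the speed shed in one second; the standard-gravity and mph conversion constants are reference values assumed here, not figures from the text.

```python
# Back-of-envelope check: is 228 g's roughly a stop from 5,000 mph in one second?
# Assumed reference constants (not from the source text).
G0 = 9.80665          # m/s^2, standard gravity
MPH_TO_MS = 0.44704   # 1 mph expressed in m/s

decel = 228 * G0                              # peak deceleration, m/s^2
speed_shed_in_1s_mph = decel / MPH_TO_MS      # speed lost in one second, in mph

print(f"228 g's = {decel:.0f} m/s^2")
print(f"Speed lost in one second: {speed_shed_in_1s_mph:.0f} mph")
# Output is about 5,000 mph lost per second, matching the text's comparison.
```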

The Galileo probe offered dramatic proof of how well the National Aeronautics and Space Administration (NASA) had mastered the field of hypersonics, particularly the aerothermodynamic challenges of double-digit high-Mach atmospheric entries. That level of performance was impressive, and it had been foreshadowed by earlier programs, equally impressive for their time, such as Mercury, Gemini, Apollo, Pioneer, and Viking. But NASA had, arguably, an even greater challenge before it: developing the technology of transatmospheric flight—the ability to transit, routinely, from flight within the atmosphere to flight out into space, and to return again. It was a field where challenge and contradiction readily mixed: a world of missiles, aircraft, spacecraft, rockets, ramjets, and combinations of all of these, some crewed by human operators, some not.

Transatmospheric flight requires mastery of hypersonics, flight at speeds of Mach 5 and higher in which aerodynamic heating predominates over other concerns. Since the field's inception after the Second World War, three problems have largely driven its development.

First, the advent of the nuclear-armed intercontinental ballistic missile (ICBM) during the 1950s brought the science of reentry physics to the forefront and made thermal protection an urgent problem. Missile nose cones had to be protected against the enormous heat of atmospheric entry. This challenge was resolved by 1960.

Associated derivative problems were dealt with as well, including that of protecting astronauts during demanding entries from the Moon. Maneuvering hypersonic entry became a practical reality with the Martin SV-5D Precision Recovery Including Maneuvering Entry (PRIME) in 1967. In 1981, the Space Shuttle introduced reusable thermal protection—the “tiles”—that enabled its design as a “cool” aluminum airplane rather than one with an exotic hot structure. Then in 1995, the Galileo mission met demands considerably greater than those of a return from the Moon.

A second and contemporary problem, during the 1950s, involved the expectation that flight speeds would increase essentially without limit. This hope lay behind the unpiloted air-launched Lockheed X-7, which used a ramjet engine and ultimately reached Mach 4.31. There also was the rocket-powered and air-launched North American X-15, the first transatmospheric aircraft. One X-15 achieved Mach 6.70 (4,520 mph) in October 1967, setting a record for winged hypersonic flight that stood until the flight of the Space Shuttle Columbia in 1981. The X-15 introduced reaction thrusters for aircraft attitude control, and they subsequently became standard on spacecraft, beginning with Project Mercury. But the X-15 also relied on aerodynamic controls in the atmosphere, notably a “rolling tail” whose all-moving surfaces combined the functions of elevators and ailerons, and it had to transition between these controls and its reaction thrusters as it passed to and from space flight. The flight control system that managed this transition later flew aboard the Space Shuttle. The X-15 also brought the first spacesuit that was flexible when pressurized rather than being rigid like an inflated balloon; it too became standard. In aviation, the X-15 was the first aircraft to use a simulator as a basic development tool, and the simulator became a critical instrument for pilot training. Since then, simulators have entered general use and today are employed with all aircraft.[2]

A third problem, emphasized during the era of President Ronald Reagan’s Strategic Defense Initiative (SDI) in the 1980s, involved the prospect that hypersonic single-stage-to-orbit (SSTO) air-breathing vehicles would shortly replace the Shuttle and other multistage rocket-boosted systems. This concept depended upon the scramjet, a variant of the ramjet engine that sustained a supersonic internal airflow to run cool. But while scramjets indeed outperformed conventional ramjets and rockets, their immaturity and higher drag made their early application as space access systems impossible. The abortive National Aero-Space Plane (NASP) program consumed roughly a decade of development time. It ballooned enormously in size, weight, complexity, and cost as time progressed and still lacked, in the final stages, the ability to reach orbit. Yet while NASP faltered, it gave a major boost to computational fluid dynamics, which uses supercomputers to study airflows in aviation. This represents another form of simulation that is also entering general use. NASP also supported the introduction of rapid-solidification techniques in metallurgy. These enhance alloys’ temperature resistance, resulting in such achievements as a new type of titanium that can withstand 1,500 degrees Fahrenheit (°F).[3] Out of this work have come more practical and achievable concepts, as evidenced by the NASA X-43 program and the multiparty X-51A program of the present.

To the present era, practical applications of hypersonics have been almost exclusively in reentry and thermal protection. Military hypersonics, while attracting great interest across a range of mission areas such as surveillance, reconnaissance, and global strike, has remained the stuff of warhead and reentry shape research. Ambitious concepts for transatmospheric aircraft have received little support outside the laboratory environment. Concepts for global-ranging hypersonic “cruisers” withered in the face of the cheaper and more easily achievable rocket.

Moving Beyond the V-2: John Becker Births American Hypersonics

During the Second World War, Germany held global leadership in high-speed aerodynamics. The most impressive expression of its technical interest and competence in high-speed aircraft and missile design was the V-2 terror weapon, which introduced the age of the long-range rocket. It had a range of over 200 miles at a speed of approximately Mach 5.[4] A longer-range experimental variant tested in 1945, the A-4b, sported swept wings and flew at 2,700 mph, reentering and leveling off in the upper atmosphere for a supersonic glide to its target. In its one semi-successful flight, it completed a launch and reentry, though one wing broke off during its terminal Mach 4+ glide.[5] One appreciates the ambitious nature and technical magnitude of the German achievement given that the far wealthier and more technically advantaged United States pursued a vigorous program in piloted rocket planes all through the 1950s without matching the basic performance sought with the A-4b.

Key to the German success was a strong academic-industry partnership and, particularly, a highly advanced complex of supersonic wind tunnels. The noted tunnel designer Carl Wieselsberger (who died of cancer during the war) introduced a blow-down design that initially operated at Mach 3.3 and later reached Mach 4.4. The latter instrument supported supersonic aerodynamic and dynamic stability studies of various craft, including the A-4b. German researchers had ambitious plans for even more advanced tunnels, including an Alpine complex capable of attaining Mach 10. This tunnel work inspired American emulation after the war and, in particular, stimulated establishment of the Air Force’s Arnold Engineering Development Center at Tullahoma, TN.[6]

The German A-4b, being readied for a test flight, January 1945. USAF.

At war’s end, America had nothing comparable to the investment Germany had made in high-speed flight, either in rockets or in wind tunnels and other specialized research facilities. The best American wartime tunnel reached only Mach 2.5. As a stopgap, the Navy seized a German facility, transported it to the United States, and ran it at Mach 5.18, but only beginning in 1948.[7] Even so, aerodynamicist John Becker, a young and gifted engineer working at the National Advisory Committee for Aeronautics (NACA) Langley Laboratory, took the initiative in launching Agency research in hypersonics. He used the V-2 as his rationale. In an August 1945 memo to Langley’s chief of research, written 3 days before the United States atom-bombed Hiroshima, he noted that planned NACA facilities were to reach no higher than Mach 3. With the V-2 having already flown at Mach 5, he declared, this capability was clearly inadequate.

The layout of the Langley 11-inch hypersonic tunnel advocated by John V. Becker. NASA.

He outlined an alternative design concept for “a supersonic tunnel having a test section four-foot square and a maximum test Mach number of 7.0.”[8] A preliminary estimate indicated a cost of $350,000. This was no mean sum, equivalent six decades later to approximately $4.2 million. Becker sweetened his proposal’s appeal by suggesting that Langley begin modestly with a small demonstration wind tunnel. It could be built for roughly one-tenth of this sum and would operate in the blow-down mode, passing flow through a 1-foot-square test section. If it proved successful and useful, a larger tunnel could follow. His well-reasoned proposal received approval from the NACA’s Washington office later in 1945, and out of this emerged the Langley 11-Inch Hypersonic Tunnel. Slightly later, Alfred J. Eggers began designing a hypersonic tunnel at the NACA’s West Coast Ames Aeronautical Laboratory, though this tunnel, with a 10-inch by 14-inch test section, used continuous, not blow-down, flow. Langley’s was first. When the 11-inch tunnel first demonstrated successful operation (to Mach 6.9) on November 26, 1947, American aeronautical science entered the hypersonic era. This was slightly over a month after Air Force test pilot Capt. Charles E. Yeager first flew faster than sound in the Bell XS-1 rocket plane.[9]
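
For readers curious how such cost equivalences work out, the brief sketch below takes the two dollar figures and the six-decade span from the text and computes the overall price multiplier and the constant annual inflation rate that would produce it; the constant-rate compounding model is an illustrative assumption, not the author's method.

```python
# Implied inflation behind "$350,000 then, roughly $4.2 million six decades later"
# (dollar figures and span from the text; constant-rate compounding is an assumption).
cost_1945 = 350_000
cost_later = 4_200_000
years = 60

factor = cost_later / cost_1945                 # overall price-level multiplier, about 12x
avg_annual_rate = factor ** (1 / years) - 1     # constant rate that compounds to that multiplier

print(f"Overall factor: {factor:.1f}x")
print(f"Implied average annual inflation: {avg_annual_rate:.1%}")   # roughly 4.2 percent per year
```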

Though ostensibly a simple demonstration model for a larger tunnel, the 11-inch tunnel itself became an important training and research tool that served to study a wide range of topics, including nozzle development and hypersonic flow visualization. It made practical contributions to aircraft development as well. Research with the 11-inch tunnel led to a key discovery incorporated on the X-15: a wedge-shaped vertical tail markedly increased directional stability, eliminating the need for very large stabilizing surfaces. So useful was it that it remained in service until 1973, staying active even after its larger successor, the Continuous Flow Hypersonic Tunnel (CFHT), entered service in 1962. The CFHT had a 31-inch test section and reached Mach 10 but took a long time to become operational. Even after entering service, it operated much of the time in a blow-down mode rather than in continuous flow.[10]

Emergent Hypersonic Technology and the Onset of the Missile Era

The ballistic missile and atomic bomb became realities within a year of each other. At a stroke, the expectation arose that one might increase the range of the former to intercontinental distance and, by installing an atomic tip, generate a weapon—and a threat—of almost incomprehensible destructive power. But such visions ran afoul of perplexing technical issues involving rocket propulsion, guidance, and reentry. Engineers knew they could do something about propulsion, but guidance posed a formidable challenge. MIT’s Charles Stark Draper was seeking inertial guidance, but he couldn’t approach the Air Force requirement, which set an allowed miss distance of only 1,500 feet at a range of 5,000 miles for a ballistic missile warhead.[11]

Reentry posed an even more daunting prospect. A reentering 5,000-mile-range missile would encounter temperatures of some 9,000 kelvins, hotter than the solar surface, while its kinetic energy could vaporize five times its weight in iron.[12] Studies at the Rand Corporation encouraged Air Force and industry interest in such missiles. Convair engineers, working under Karel J. “Charlie” Bossart, began development of the Atlas ICBM in 1951. Even with this seemingly rapid implementation of the ballistic missile idea, time scales remained long term. As late as October 1953, the Air Force declared that it would not complete research and development until “sometime after 1964.”[13]
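
The claim that the kinetic energy could vaporize several times the vehicle's weight in iron can be checked to order of magnitude. The sketch below assumes a reentry speed of roughly 7 km/s for a 5,000-mile-range missile and textbook thermal properties for iron; all numerical values are illustrative assumptions, not figures from the text, and depending on the assumed speed and on how much of the heating of iron one counts, the multiple comes out at several times the vehicle's own weight, the same order of magnitude as the figure quoted.

```python
# Order-of-magnitude check: kinetic energy per kg of a reentering long-range missile
# versus the energy needed to heat and vaporize one kg of iron.
# All numbers below are illustrative assumptions, not values from the text.
v = 7_000.0                 # m/s, assumed reentry speed for a ~5,000-mile-range missile

ke_per_kg = 0.5 * v**2      # J per kg of vehicle mass, about 24.5 MJ/kg

# Rough energy to take iron from room temperature to vapor (J/kg), textbook-style values:
heat_solid  = 0.6e3 * (1811 - 300)    # warm the solid to its melting point (average c_p assumed)
heat_fusion = 0.25e6                  # latent heat of fusion
heat_liquid = 0.8e3 * (3134 - 1811)   # warm the liquid to its boiling point
heat_vapor  = 6.1e6                   # latent heat of vaporization
iron_total  = heat_solid + heat_fusion + heat_liquid + heat_vapor

print(f"Kinetic energy: {ke_per_kg/1e6:.1f} MJ per kg of vehicle")
print(f"Iron, heat plus vaporize: {iron_total/1e6:.1f} MJ per kg of iron")
print(f"Ratio: {ke_per_kg/iron_total:.1f}")
# About 3 under these assumptions; several times the vehicle's own weight,
# the same order of magnitude as the text's figure of five.
```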

Matters changed dramatically in the immediate aftermath of the Castle Bravo nuclear test of March 1, 1954, which demonstrated a weaponizable 15-megaton H-bomb, fully 1,000 times more powerful than the atomic bomb that had devastated Hiroshima less than a decade previously. The “Teapot Committee,” chaired by the Hungarian émigré mathematician John von Neumann, had anticipated success with Bravo and with similar tests. Echoing Bruno Augenstein of the Rand Corporation, the Teapot group recommended that the Atlas miss distance be relaxed “from the present 1,500 feet to at least two, and probably three, nautical miles.”[14] This was feasible because the new H-bomb had such destructive power that a “miss” of that distance seemed irrelevant. The Air Force leadership concurred, and only weeks after the Castle Bravo shot, in May 1954, Vice Chief of Staff Gen. Thomas D. White granted Atlas the service’s highest developmental priority.

Extract of text from NACA Report 1381 (1953), in which H. Julian Allen and Alfred J. Eggers postulated using a blunt-body reentry shape to reduce surface heating of a reentry body. NASA.

But there remained the thorny problem of reentry. Until then, most people had expected an ICBM nose cone to possess the needle-nose sharpness of futurist and science fiction imagination. The realities of aerothermodynamic heating at near-orbital speeds dictated otherwise. In 1953, NACA Ames aerodynamicists H. Julian Allen and Alfred Eggers concluded that an ideal reentry shape should be bluntly rounded, not sharply streamlined. A sharp nose produced a very strong attached shock wave, resulting in high surface heating. In contrast, a blunt nose generated a detached shock standing much farther off the nose surface, allowing the airflow to carry away most of the heat. What heating remained could be alleviated via radiative cooling or by using hot structures and high-temperature coatings.[15]
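
The advantage of bluntness can be illustrated with a commonly used engineering scaling for stagnation-point convective heating, in which heating varies with the square root of density over nose radius times the cube of velocity. The sketch below compares a hypothetical sharp tip with a hypothetical blunt nose at the same flight condition; the scaling law stands in for, and is not, the analysis Allen and Eggers actually performed, and its proportionality constant cancels in the comparison.

```python
# Illustration of why bluntness helps: stagnation-point convective heating is commonly
# modeled as proportional to sqrt(density / nose_radius) * velocity**3. The unknown
# proportionality constant cancels when comparing two noses at the same flight condition.
# The nose radii below are hypothetical values chosen for illustration.
import math

def relative_heating(nose_radius_m, density=1.0, velocity=1.0):
    """Heating up to an unspecified constant factor; only ratios are meaningful."""
    return math.sqrt(density / nose_radius_m) * velocity**3

sharp_radius = 0.02   # m, a needle-like tip (hypothetical)
blunt_radius = 0.50   # m, a rounded nose cone (hypothetical)

ratio = relative_heating(sharp_radius) / relative_heating(blunt_radius)
print(f"Sharp nose heats roughly {ratio:.0f}x faster than the blunt nose")  # about 5x here
```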

There was need for experimental verification of blunt body theory, but the hypersonic wind tunnel, previously so useful, was suddenly inadequate, much as the conventional wind tunnel a decade earlier had been inadequate for obtaining a full understanding of transonic flows. As the slotted-throat tunnel had answered that earlier need, so now a new research tool, the shock tube, emerged for hypersonic studies. Conceived by Arthur Kantrowitz, a Langley veteran working at Cornell, the shock tube enabled far closer simulation of hypersonic pressures and temperatures. From the outset, Kantrowitz aimed at orbital velocity, writing in 1952 that “it is possible to obtain shock Mach numbers in the neighborhood of 25 with reasonable pressures and shock tube sizes.”[16]

Despite the advantages of blunt body design, the hypersonic environment remained so extreme that it was still necessary to furnish thermal protection to the nose cone. The answer was ablation: covering the nose with a lightweight coating that melts and flakes away, carrying off the heat. Wernher von Braun’s U.S. Army team invented ablation while working on the Jupiter intermediate-range ballistic missile (IRBM), though General Electric scientist George Sutton, working on Air Force programs, made particularly notable contributions. The Air Force went on to build and successfully protect a succession of ICBMs: Atlas, Titan, and Minuteman.[17]

A Jupiter IRBM launches from Cape Canaveral on May 18, 1958, on an ablation reentry test. U.S. Army.

Flight tests were critical for successful nose cone development, and they began in 1956 with launches of the multistage Lockheed X-17. It rose high into the atmosphere before firing its final test stage back at Earth, ensuring a high heat load, as the test nose cone would typically attain velocities of at least Mach 12 at only 40,000 feet. This was half the speed of a satellite, at an altitude typically traversed by today’s subsonic airliners. In the pre-ablation era, the warheads typically burned up in the atmosphere, making the X-17 effectively a flying shock tube whose nose cones lived only long enough to return data by telemetry. Yet out of such limited beginnings (analogous to the rudimentary test methodologies of the early transonic and supersonic era just a decade previously) came a technical base that swiftly resolved the reentry challenge.[18]
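
The comparison with satellite speed is straightforward to check. The sketch below uses standard reference values for the speed of sound at 40,000 feet and for low-Earth-orbit velocity; both are assumed here rather than taken from the text.

```python
# Checking "Mach 12 at 40,000 feet is about half the speed of a satellite."
# Assumed reference values (not from the text).
SOUND_SPEED_40KFT = 295.0    # m/s, approximate speed of sound at 40,000 ft
ORBITAL_SPEED_LEO = 7_800.0  # m/s, approximate low-Earth-orbit speed

mach = 12
flight_speed = mach * SOUND_SPEED_40KFT   # about 3.5 km/s

print(f"Mach {mach} at 40,000 ft is about {flight_speed/1000:.1f} km/s")
print(f"Fraction of orbital speed: {flight_speed/ORBITAL_SPEED_LEO:.2f}")  # about 0.45, i.e. roughly half
```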

Tests followed with various Army and Air Force ballistic missiles. In August 1957, a Jupiter-C (an uprated Redstone) returned a nose cone after a flight of 1,343 miles. President Dwight D. Eisenhower subsequently showed it to the public during a TV appearance that sought to bolster American morale a month after Sputnik had shocked the world. Two Thor-Able flights went to 5,500 miles in July 1958, though their nose cones both were lost at sea. But the agenda also included Atlas, which first reached its full range of 6,300 miles in November 1958. Two nose cones built by GE, the RVX-1 and –2, flew subsequently as payloads. An RVX-2 flew 5,000 miles in July 1959 and was recovered, thereby becoming the largest object yet to be brought back. Attention now turned to a weaponized nose cone shape, GE’s Mark 3. Flight tests began in October, with this nose cone entering operational service the following April.[19]

Success in reentry now was a reality, yet there was much more for the future. The early nose cones were symmetric, which gave good ballistic characteristics but made no provision for significant aerodynamic maneuver and cross-range. The military sought both as a means of achieving greater operational flexibility. An Air Force experimental uncrewed lifting body design, the Martin SV-5D (X-23) PRIME, flew three flights between December 1966 and April 1967, lofted over the Pacific Test Range by modified Atlas boosters. The first flew 4,300 miles, maneuvering in pitch (but not in cross-range), and missed its target aim point by only 900 feet. The third mission demonstrated a turning cross-range of 800 miles, with the SV-5D impacting within 4 miles of its aim point; it subsequently was recovered.[20]

Other challenges remained. These included piloted return from the Moon, reusable thermal protection for the Shuttle, and planetary entry into the Jovian atmosphere, which was the most demanding of all. Even so, by the time of PRIME in 1967, the reentry problem had been resolved, manifested by the success of both ballistic missile nose cone development and the crewed spacecraft effort. The latter was arguably the most significant expression of hypersonic competency until the return to Earth from orbit by the Space Shuttle Columbia in 1981.

Transitioning from the Supersonic to the Hypersonic: X-7 to X-15

During the 1950s and early 1960s, aviation advanced from flight at high altitude and Mach 1 to flight in orbit at Mach 25. Within the atmosphere, a number of these advances stemmed from the use of the ramjet, at a time when turbojets could barely pass Mach 1 but ramjets could aim at Mach 3 and above. Ramjets needed an auxiliary rocket stage as a booster, which brought their general demise after high-performance afterburning turbojets succeeded in catching up. But in the heady days of the 1950s, the ramjet stood on the threshold of becoming a mainstream engine. Many plans and proposals existed to take advantage of their power for a variety of aircraft and missile applications.

The burgeoning ramjet industry included Marquardt and Wright Aeronautical, though other firms such as Bendix developed them as well. There were also numerous hardware projects. One was the Air Force-Lockheed X-7, an air-launched high-speed propulsion, aerodynamic, and structures testbed. Two were surface-to-air ramjet-powered missiles: the Navy’s ship-based Mach 2.5+ Talos and the Air Force’s Mach 3+ Bomarc. Both went on to years of service, with the Talos flying “in anger” as a MiG-killer and antiradiation SAM-killer in Vietnam. The Air Force also was developing a 6,300-mile-range Mach 3+ cruise missile, the North American SM-64 Navaho, and a Mach 3+ interceptor fighter, the Republic XF-103. Neither entered the operational inventory. The Air Force canceled the troublesome Navaho in July 1957, weeks after the first flight of its rival, Atlas, but some flight hardware remained, and Navaho flew in test as far as 1,237 miles, though this was a rare success. The XF-103 was to fly at Mach 3.7 using a combined turbojet-ramjet engine. It was to be built largely of titanium, at a time when this metal was little understood; it thus lived for 6 years without approaching flight test. Still, its engine was built and underwent test in December 1956.[21]

The steel-structured X-7 proved surprisingly and consistently productive. The initial concept of the X-7 dated to December 1946 and constituted a three-stage vehicle. A B-29 (later a B-50) served as a “first stage” launch aircraft; a solid rocket booster functioned as a “second stage,” accelerating it to Mach 2, at which point the ramjet took over. First flying in April 1951, the X-7 family completed 100 missions between 1955 and program termination in 1960. After achieving its Mach 3 design goal, the program kept going. In August 1957, an X-7 reached Mach 3.95 with a 28-inch-diameter Marquardt ramjet. The following April, the X-7 attained Mach 4.31 (2,881 mph) with a more powerful 36-inch Marquardt ramjet. This established an air-breathing propulsion record that remains unsurpassed for a conventional subsonic-combustion ramjet.[22]

At the same time that the X-7 was edging toward the hypersonic frontier, the NACA, Air Force, Navy, and North American Aviation had a far more ambitious project underway: the hypersonic X-15. This was Round Two, following the earlier Round One research airplanes that had taken flight faster than sound. The concept of the X-15 was first proposed by Robert Woods, a cofounder and chief engineer of Bell Aircraft (manufacturer of the X-1 and X-2), at three successive meetings of the NACA’s influential Committee on Aerodynamics between October 1951 and June 1952. It was a time when speed was king, when ambitious technology-pushing projects were flying off the drawing board. These included the Navaho, the X-2, the XF-103, and the first supersonic operational fighters, the Century series: the F-100, F-101, F-102, F-104, and F-105.[23]

Some contemplated even faster speeds. Walter Dornberger, former commander of the Nazi research center at Peenemünde and by then a senior Bell Aircraft Corporation executive, was advocating BoMi, a proposed skip-gliding “Bomber-Missile” intended for Mach 12. Dornberger supported Woods in his recommendations, which were adopted by the NACA’s Executive Committee in July 1952. This gave them the status of policy, while the Air Force added its own support. This was significant because the Air Force budget was some 300 times larger than that of the NACA.[24] The NACA alone lacked funds to build the X-15, but the Air Force could do so easily. It also covered the program’s massive cost overruns, which took the airframe from $38.7 million to $74.5 million and the large engine from $10 million to $68.4 million, nearly as much as the airframe.[25]

The Air Force had its own test equipment at its Arnold Engineering Development Center (AEDC) at Tullahoma, TN, an outgrowth of the Theodore von Kármán technical intelligence mission that Army Air Forces Gen. Henry H. “Hap” Arnold had sent into Germany at the end of the Second World War. The AEDC, with brand-new ground test and research facilities, took care to complement, not duplicate, the NACA’s research facilities. It specialized in air-breathing and rocket-engine testing. Its largest installation accommodated full-size engines and provided continuous flow at Mach 4.75. But the X-15 was to fly well above this, to over Mach 6, highlighting the national shortfall in hypersonic test facilities that existed at the time of the program’s creation.[26]

While the Air Force had the deep pockets, the NACA—specifically Langley—conducted the research that furnished the basis for a design. This took the form of a 1954 feasibility study conducted by John Becker, assisted by structures expert Norris Dow, rocket expert Maxime Faget, configuration and controls specialist Thomas Toll, and test pilot James Whitten. They began by assuming that during reentry the vehicle would point its nose in the direction of flight. This proved unworkable: the heating was too severe. Becker then considered whether the vehicle might alleviate the problem by using lift, obtained by raising the nose, and found that the thermal environment became far more manageable. He concluded that the craft should enter with its nose high, presenting its flat undersurface to the atmosphere. The Allen-Eggers paper was in print, and he later wrote that “it was obvious to us that what we were seeing here was a new manifestation of H.J. Allen’s ‘blunt-body’ principle.”[27]

To address the rigors of the daunting aerothermodynamic environment, Norris Dow selected Inconel X (a nickel alloy from International Nickel) as the temperature-resistant superalloy that was to serve for the aircraft structure. Dow began by ignoring heating and calculated the skin gauges needed only from considerations of strength and stiffness. Then he determined the thicknesses needed to serve as a heat sink. He found that the thicknesses that would suffice for the latter were nearly the same as those that would serve merely for structural strength. This meant that he could design his airplane and include heat sink as a bonus, with little or no additional weight. Inconel X was a wise choice: with a density of 0.30 pounds per cubic inch, a tensile strength of over 200,000 pounds per square inch (psi), and a yield strength of 160,000 psi, it was robust, and its melting temperature of over 2,500 °F gave ample margin against the anticipated 1,200 °F surface temperatures.[28]
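
Dow's heat-sink reasoning can be illustrated with a simple lumped energy balance: a skin of thickness t that absorbs a per-area heat load Q while its temperature rises by delta-T requires t = Q / (rho c delta-T). The sketch below uses the density quoted in the text but a representative specific heat and a purely hypothetical heat load, so the resulting gauge is illustrative only and is not the value Dow computed.

```python
# Heat-sink skin sizing as a lumped energy balance: all absorbed heat goes into raising
# the skin's temperature, so required thickness t = Q / (rho * c * dT).
# Density follows the text (0.30 lb/in^3); the specific heat and the heat load are
# hypothetical values chosen for illustration, not figures from the X-15 design study.
LB_PER_IN3_TO_KG_PER_M3 = 27_679.9

rho = 0.30 * LB_PER_IN3_TO_KG_PER_M3   # ~8,300 kg/m^3, Inconel X density from the text
c = 450.0                              # J/(kg K), representative specific heat (assumed)
dT = (1200 - 70) * 5.0 / 9.0           # allowed temperature rise, 70 F to 1,200 F, in kelvins

Q = 10e6                               # J/m^2, hypothetical integrated heat load per unit area

t = Q / (rho * c * dT)                 # required heat-sink thickness, m
print(f"Required skin thickness: {t*1000:.1f} mm ({t/0.0254:.3f} in)")
```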

Work at Langley also addressed the important issue of stability. Just then, in 1954, the topic was at the forefront because inadequate stability had nearly cost the life of the test pilot Chuck Yeager. On the previous December 12, he had flown the X-1A to Mach 2.44 (approximately 1,650 mph). This exceeded the plane’s stability limits; it went out of control and plunged out of the sky. Only Yeager’s skill as a pilot had saved him and his airplane. The problem of stability would be far more severe at higher speeds.