NASA's Contributions to Aeronautics, Volume 1 by National Aeronautics & Space Administration

CASE 8

NASA and Computational Structural Analysis

David C. Aronstein

NASA research has been pivotal in supporting computational methods for structural analysis and design, particularly through the NASTRAN program. NASA Centers have evolved structural analysis programs tailored to their own needs, such as assessing high-temperature aerothermodynamic structural loading for high-performance aircraft. NASA-developed structural tools have been adopted throughout the aerospace industry and are available on the Agency Web site.

Case-8 Cover Image: NASTRAN model of the X-29A aircraft. NASA.

The field of computer methods in structural analysis is wide-ranging, as are the contributions of the National Aeronautics and Space Administration (NASA) to it. Nearly every NASA Center has a structural analysis group in some form. These groups conduct research and assist industry in grappling with a broad spectrum of problems. This paper is an attempt to show both aspects: the origins, evolution, and application of NASA Structural Analysis System (NASTRAN), and the variety and depth of other NASA activities and contributions to the field of computational structural methods.

In general terms, the goal of structural analysis is to establish that a product has the required strength and stiffness—structural integrity—to perform its function throughout its intended life. Its strength must exceed the loads to which the product is subjected, by some safety margin, the value of which depends on the application.
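The margin-of-safety bookkeeping implied here can be made concrete. The short sketch below, a hypothetical illustration rather than any standard code, applies the customary factor of 1.5 used for crewed-aircraft ultimate loads; the function name and stress values are invented for the example:

```python
# Conventional margin-of-safety check: the applied stress is multiplied
# by a factor of safety (1.5 is customary for crewed-aircraft ultimate
# loads), and the resulting margin must be non-negative.
def margin_of_safety(allowable, applied, factor_of_safety=1.5):
    return allowable / (applied * factor_of_safety) - 1.0

print(margin_of_safety(allowable=60000.0, applied=30000.0))  # +0.33, acceptable
```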

With aircraft, loads derive from level flight, maneuvering flight, gusts, landings, engine thrust and torque, vibration, temperature and pressure differences, and other sources. Load cases may be specified by a regulatory agency, by the customer, and/or by company practice and experience. Many of the loads depend on the weight of the aircraft, and the weight in turn depends on the design of the structure. This makes the structural design process iterative. Because of this, and also because a large fraction of an aircraft’s weight is not actually accounted for by primary structure, initial weight estimates are usually based on experience rather than on a detailed buildup of structural material. A sizing process must be performed to reconcile the predicted empty weight, and its relationship to the assumed maximum gross weight, with the required payload, fuel, and mission performance.[1]
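The iterative character of sizing can be illustrated with a toy fixed-point loop. The empty-weight fraction relation and all coefficients below are illustrative assumptions, not values from this chapter:

```python
# Toy aircraft sizing loop: iterate gross weight W0 until the empty-weight
# estimate is consistent with the assumed gross weight. The empirical
# empty-weight fraction We/W0 = a * W0**c is a common textbook form; the
# coefficients here are invented for illustration.
def size_aircraft(w_payload, w_fuel, a=1.02, c=-0.06, tol=1.0):
    w0 = 2.0 * (w_payload + w_fuel)       # initial guess for gross weight [lb]
    for _ in range(100):                  # fixed-point iteration
        we = a * w0 ** c * w0             # empty weight implied by the assumed W0
        w0_new = w_payload + w_fuel + we  # gross weight required to carry it all
        if abs(w0_new - w0) < tol:        # converged: the guess reproduces itself
            return w0_new
        w0 = w0_new
    raise RuntimeError("sizing loop did not converge")

print(size_aircraft(w_payload=2500.0, w_fuel=4000.0))
```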

After the sizing process has converged, the initial design is documented in the form of a three-view drawing with supporting data. From there, the process is approximately as follows:

  • The weights group generates an initial estimate of the weights of the major airframe components.
  • The loads group analyzes the vehicle at the defined condition(s) to determine forces, bending moments, etc., in the major components and interfaces.
  • The structures group defines the primary load paths and sizes the primary structural members to provide the required strength.
  • Secondary load paths, etc., are defined to the required level of detail.

Process details vary between different organizations, but at some point, the structural definition reaches a level of maturity to enable a check of the initial weight estimate. Then the whole design may be iterated, if required. Iteration may also be driven by maturing requirements or by evolution in other aspects of the design, e.g., aerodynamics, propulsion, etc.

Structural Analysis Prior to Computers

Basic principles of structural analysis—static equilibrium, trusses, and beam theory—were known long before computers, or airplanes, existed. Bridges, towers and other buildings, and ships were designed by a combination of experience and some amount of analysis—more so as designs became larger and more ambitious during and after the Industrial Revolution.

With airplanes came much greater emphasis on weight minimization. Massive overdesign was no longer an acceptable means to achieve structural integrity. More rigorous analysis and structural sizing were required. Simplifications allowed the analysis of primary members under simple loading conditions:

  • Slender beams: axial load, shear, bending, torsion.
  • Trusses: members carry axial load only, joined to other such members at ends.
  • Simple shells: pressure loading.
  • Semi-monocoque (skin and stringer) structures: shear flow, etc.
  • Superposition of loading conditions.

With these simplifications, primary structural members could be sized appropriately to the expected loads. In the days of wood, wire, and fabric, many aircraft structures could be analyzed as trusses: externally braced biplane wings; fuselage structures consisting of longerons, uprights, and cross braces, with diagonal braces or wires carrying torsion; landing gears; and engine mounts. As early as the First World War and in the 1920s, researchers were working to cover every required aspect of the problem: general analysis methods, analysis of wings, horizontal and vertical tails, gust loads, test methods, etc. The National Advisory Committee for Aeronautics (NACA) contributed significantly to the building of this early body of methodology.[2]

Structures with redundancy—multiple structural members capable of sharing one or more loading components—may be desirable for safety, but they posed new problems for analysis. Redundant structures cannot be analyzed by force equilibrium alone. A conservative simplification, often practiced in the early days of aviation, was to analyze the structure with redundant members missing. A more precise solution would require the consideration of displacements and “compatibility” conditions: members that are connected to one another must deform in such a manner that they move together at the point of connection. Analysis was feasible but time-consuming. Large-scale solutions to redundant (“statically indeterminate”) structure problems would become practical with the aid of computers. Until then, more simplifications were made, and specific types of solutions—very useful ones—were developed.
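A minimal illustration of why compatibility is needed: two parallel members of different stiffness sharing one load. Equilibrium alone gives one equation for two unknown member forces; the requirement that the members deflect together supplies the second. The stiffness values and load below are invented for illustration:

```python
# Statically indeterminate example: members of stiffness k1 and k2 act
# in parallel under a load P. Equilibrium gives F1 + F2 = P (one equation,
# two unknowns); compatibility (equal deflection, F1/k1 = F2/k2) supplies
# the missing equation.
k1, k2, P = 500.0, 300.0, 1000.0   # member stiffnesses [lb/in] and applied load [lb]
F1 = P * k1 / (k1 + k2)            # the stiffer member attracts more of the load
F2 = P - F1
delta = F1 / k1                    # common deflection of both members
print(F1, F2, delta)               # 625.0 375.0 1.25
```

Dropping the more flexible member entirely, the conservative early practice described above, would instead put the whole 1,000 pounds into a single load path.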

While these analysis methods were being developed, there was a lot of airplane building going on without very much analysis at all. In the “golden age of aviation,” many airplanes were built in garages or at small companies that lacked the resources for extensive analysis. “In many cases people who flew the airplanes were the same people who carried out the analysis and design. They also owned the company. There was very little of what we now call structural analysis. Engineers were brought in and paid—not to design the aircraft—but to certify that the aircraft met certain safety requirements.”[3]

Through the 1930s, as aircraft structures began to be formed out of aluminum, the semi-monocoque or skin-and-stringer structure became prevalent, and analysis methods were developed to suit. “In the 1930s, ’40s, and ’50s, techniques were being developed to analyze specific structural components, such as wing boxes and shear panels, with combined bending, torsion, and shear loads and with stiffeners on the skins.”[4]

A number of exact solutions to the differential equations for stress and strain in a structural member were known, but these generally exist only for very simple geometric shapes and very limited sets of loading conditions and boundary conditions. Exact solutions were of little practical value to the aircraft designer or stress analyst. Instead, “free body diagrams” were used to analyze structures at selected locations, or “stations.” The structure was considered to be cut by a theoretical plane at the station of interest. All loads, applied and inertial, on the portion of the aircraft outboard of the cut had to be borne (reacted) by the structure at the cut.

In principle, this allowed the stress at any point in the structure to be analyzed—given the time to make an arbitrarily large number of these theoretical cuts through the aircraft. In practice, free body diagrams were used to analyze the structure at key locations—selected fuselage stations, the root, and selected stations of wings and tail surfaces. Structural members were left constant, or tapered appropriately, according to experience and judgment, between the analyzed sections. For major projects such as airliners or bombers, the analysis would be more thorough, and consequently, major design organizations had rooms full of people whose jobs were to perform the required calculations.
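The bookkeeping of such a cut is easy to automate. The sketch below assumes a triangular running load over a 200-inch semispan (both invented for illustration) and sums everything outboard of a chosen station to recover the shear and bending moment at the cut:

```python
# Free-body cut at a wing station: everything outboard of the cut must be
# reacted at the cut. Summing the outboard running load gives shear; summing
# load times moment arm gives bending moment.
import numpy as np

y = np.linspace(0.0, 200.0, 201)          # spanwise stations [in], root to tip
dy = y[1] - y[0]
w = 12.0 * (1.0 - y / 200.0)              # running load [lb/in], triangular

def cut_loads(y_cut):
    out = y >= y_cut                      # keep only the portion outboard of the cut
    shear = np.sum(w[out] * dy)           # net vertical force reacted at the cut
    moment = np.sum(w[out] * (y[out] - y_cut) * dy)  # bending moment about the cut
    return shear, moment

print(cut_loads(0.0))     # root station: full semispan shear and bending moment
print(cut_loads(100.0))   # a mid-span station
```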

The NACA also utilized this brute-force approach to large calculations, and the people who performed the calculations—overwhelmingly women—were called “computers.” Annie J. Easley, who worked at the NASA Lewis (now Glenn) Research Center starting in 1955, recalls:

. . . we were called computers until we started to get the machines, and then we were changed over to either math technicians or mathematicians. . . . The engineers and the scientists are working away in their labs and their test cells, and they come up with problems that need mathematical computation. At that time, they would bring that portion to the computers, and our equipment then were the huge calculators, where you’d put in some numbers and it would clonk, clonk, clonk out some answers, and you would record them by hand. Could add, subtract, multiply, and divide. That was pretty much what those big machines, those big desktop machines, could do. If we needed to find a logarithm or an exponential, we then pulled out the tables.[5]

After World War II, with jet engines pushing aircraft into ever more demanding flight regimes, the analytical community sought to keep up. The NACA continued to improve the methodologies for calculating loads on various parts of an aircraft, and some of the reports generated during that time are still used by industry practitioners today. NACA Technical Report (TR) 1007, for horizontal tail loads in pitch maneuvers, is a good example, although it does not cover all of the conditions required by recent airworthiness regulations.[6]

For structural analysis, energy methods and matrix methods began to receive more attention. Energy methods work as follows: one first expresses the deflection of a member as a set of assumed shape functions, each multiplied by an (initially unknown) coefficient; expresses the total potential energy (the strain energy less the work done by the applied loads) in terms of these unknown coefficients; and finally, finds the values of the coefficients that make that energy stationary. If the shape functions, from which the solution is built, satisfy the boundary conditions of the problem, then so does the final solution.
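A one-term example makes the procedure concrete. The sketch below applies the Rayleigh-Ritz idea to a tip-loaded cantilever, using the single assumed shape w(x) = c*x**2, which satisfies the clamped-end boundary conditions; the section properties and load are invented for illustration:

```python
# One-term Rayleigh-Ritz solution for a tip-loaded cantilever beam.
# Assumed shape w(x) = c*x**2 satisfies w(0) = w'(0) = 0.
E, I, L, P = 10.0e6, 4.0, 100.0, 500.0     # psi, in^4, in, lb

# Total potential energy: Pi(c) = integral of (E*I/2)*(w'')**2 dx - P*w(L)
#                                = 2*E*I*L*c**2 - P*L**2*c    (since w'' = 2c)
# Stationarity dPi/dc = 0 gives the best one-term coefficient:
c = P * L / (4.0 * E * I)

tip = c * L**2                              # approximate tip deflection w(L)
exact = P * L**3 / (3.0 * E * I)            # exact slender-beam result
print(tip, exact, tip / exact)              # one term recovers 75% of the exact value
```

Even this single term recovers 75 percent of the exact tip deflection; adding shape functions improves the estimate, and restricting each shape function to a small piece of the structure is precisely the step that leads toward finite elements.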

Energy methods were not new. The concept of energy minimization was introduced by Lord Rayleigh in the late 19th century and extended by Walter Ritz in two papers of 1908 and 1909.[7] Rayleigh and Ritz were particularly concerned with vibrations. Carlo Alberto Castigliano, an Italian engineer, published a dissertation in 1873 that included two important theorems for applying energy principles to forces and static displacements in structures.[8] However, in the early works, the shape functions were continuous over the domain of interest. The idea of breaking up (discretizing) a complex structure into many simple elements for numerical solution would lead to the concept of finite elements, but for this to be useful, computing technology needed to mature.

The Advent of Direct Analog Computers

The first computers were analog computers. Direct analog computers are networks of physical components (most commonly, electrical components: resistors, capacitors, inductances, and transformers) whose behavior is governed by the same equations as some system of interest that is being modeled. Direct analog computers were used in the 1950s and 1960s to solve problems in structural analysis, heat transfer, fluid flow, and other fields.

Representation of structural elements by analog circuits. NASA.

The method of analysis and the needs that were driving the move from classical idealizations such as slender-beam theory toward computational methods are well stated in the following passage, from an NACA-sponsored paper by Stanley Benscoter and Richard MacNeal (subsequently a cofounder of the MacNeal Schwendler Corporation [MSC] and member of the NASTRAN development team):

The theory is expressed entirely in terms of first-order difference equations in order that analogous electrical circuits can be readily designed and solutions obtained on the Caltech analog computer. . . . In the process of designing thin supersonic wings for minimum weight it is found that a convenient construction with aluminum alloy consists of a rather thick skin with closely spaced spars and no stringers. Such a wing deflects in the manner of a plate rather than as a beam. Internal stress distributions may be considerably different from those given by beam theory.[9]

Their implementation of analog circuitry for bending loads is illustrated here and serves as an example of the direct analog modeling of structures.[10]

Direct analog computing had its advocates well into the 1960s. “For complex problems [direct analog] computers are inherently faster than digital machines since they solve the equations for the several nodes simultaneously, while the digital machines solve them sequentially. Direct analogs have, moreover, the advantage of visualization; computer setups as well as programming are more closely related to the actual problem and are based primarily on physical insight rather than on numerical skills.”[11]

The advantages came at a price, however. It could take weeks, in some cases, to set up an analog computer to solve a particular type of problem. And there was no way to store a problem to be revisited at a later date. These drawbacks may not have seemed so important when there was no other recourse available, but they became more and more apparent as the programmable digital computer began to mature.

Hybrid direct-analog/digital computers were hypothesized in the 1960s: essentially a direct analog computer controlled by a digital computer capable of storing and executing program instructions. This would have overcome some of the drawbacks of direct analog computers.[12] However, this possibility was most likely overtaken by the rapid progress of digital computers. At the time these hybrid analog/digital computers were still being contemplated, NASTRAN was already in development.

A different type of analog computer—the active-element, or indirect, analog—consisted of operational amplifiers that performed arithmetic operations. These solved programmed mathematical equations, rather than mimicking a physical system. Several NACA locations—including Langley, Ames, and the Flight Research Center (now Dryden Flight Research Center)—used analog computers of this type for flight simulation. Ames installed its first analog computer in 1947.[13] The Flight Research Center flight simulators used analog computers exclusively from 1955 to 1964 and in combination with digital computers until 1975.[14] This type of analog computer can be thought of as simply a less precise, less reliable, and less versatile predecessor to the digital computer.

Digital Computation Triggers Automated Structural Analysis

In 1946, the ENIAC, “commonly accepted as the first successful high-speed electronic digital computer,” became operational at the University of Pennsylvania.[15] It took up as much floor space as a medium-sized house and had to be “programmed” by physically rearranging its control connections. Many advances followed rapidly: storing instructions in memory, conditional control transfer, random access memory, magnetic core memory, and the transistor-circuit element. With these and other advances, digital computers progressed from large and ungainly experimental devices to programmable, useful, commercially available (albeit expensive) machines by the mid-1950s.[16]

Simple example of discretized structure and single element. NASA.

The FORTRAN programming language was also developed in the mid-1950s and rapidly gained acceptance in technical communities. This was a “high level language,” which allowed programming instructions to be written in terms that an engineer or analyst could understand; a compiler handled the translation into “machine language” that the computer could understand. International Business Machines (IBM) developed the original FORTRAN language and also some of the early practical digital computers. Other early digital computers were produced by Control Data Corporation (CDC) and UNIVAC. These developments made it possible to take the new methods of structural analysis that were emerging and implement them in an automated, repeatable manner.

The essence of these new methods was to treat a structure as a finite number of discrete elastic elements, rather than as a continuum. Reactions (forces and moments) and deflections are only calculated at specific points, called “nodes.” Elements connect the nodes. The stress and strain fields in the regions between the nodes do not need to be solved in the global analysis. They only need to be solved when developing the element-level solution, and once this is done for a particular type of element, that element is available as a prepackaged building block. Complex shapes and structures can then be built up from the simple elements. A simple example—using straight beam elements to model a curved beam structure—is illustrated here.

To find, for example, the relationship between the displacements of the nodes and the corresponding reactions, one could do the following (called the unit displacement method). First, a hypothetical unit displacement of one node in one degree of freedom (d.o.f.) only is assumed. This displacement is transposed into the local element coordinate systems of all affected elements. (In the corresponding figure, this would entail the relatively simple transformation between global horizontal and vertical displacements, and element axial and transverse displacements. The angular displacements would require no transformation, except in some cases a sign change.) The predetermined element stiffness matrices are used to find the element-level reactions. The element reactions are then translated back into global coordinates and summed to give the total structure reactions—to the single hypothetical displacement. This set of global reactions, plus zeroes for all forces unaffected by the assumed displacement, constitutes one column in the “stiffness matrix.” By repeating the exercise for every degree of freedom of every node, the stiffness matrix can be built. Then the reactions to any set of nodal displacements may be found by multiplying the stiffness matrix by the displacement vector, i.e., the ordered list of displacements. This entails difficult bookkeeping but simple math.
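The column-by-column construction just described can be written out directly. The following sketch assembles the global stiffness matrix of a two-element axial bar by applying one hypothetical unit displacement per degree of freedom; the element stiffness matrix is standard bar theory, and the section properties are illustrative:

```python
# Unit-displacement assembly of a global stiffness matrix for a two-element
# axial bar (3 nodes, 1 degree of freedom each). Element stiffness is the
# standard bar result (EA/L)*[[1,-1],[-1,1]].
import numpy as np

EA, Le = 1.0e7, 50.0                        # axial stiffness [lb], element length [in]
k_e = (EA / Le) * np.array([[1.0, -1.0],
                            [-1.0, 1.0]])
elements = [(0, 1), (1, 2)]                 # node connectivity
ndof = 3

K = np.zeros((ndof, ndof))
for j in range(ndof):                       # one hypothetical unit displacement per d.o.f.
    u = np.zeros(ndof)
    u[j] = 1.0
    reactions = np.zeros(ndof)
    for a, b in elements:
        f = k_e @ np.array([u[a], u[b]])    # element-level reactions
        reactions[a] += f[0]                # summed back into the global system
        reactions[b] += f[1]
    K[:, j] = reactions                     # this reaction set is column j of K
print(K)
```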

It is more common in engineering, however, to have to find unknown displacements and stresses from known applied forces. This answer is not possible to obtain so directly. (That is, if the process just described seems direct to you. If it does, you are probably an engineer. If it seems too trivial to have even mentioned, then you are probably a mathematician.)

Instead, after the stiffness matrix is found, it must be inverted to obtain the flexibility matrix. The inversion of large matrices is a science in itself. But it can be done, using a computer, if one has time to wait. Most of the science lies in improving the efficiency of the process. Another important output is the stress distribution throughout the structure. But this problem has already been solved at the element level for a hypothetical set of element nodal displacements. Scaling the generic stress distribution by the actual displacements, for all elements, yields the stress state throughout the structure.
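The inversion step amounts to solving the matrix equation K u = f once the supported degrees of freedom are removed. The sketch below does this for the two-element bar assembled above, with one end clamped and a force on the other; modern codes typically factor the matrix rather than form the full inverse, but the effect is the same. Values are again illustrative:

```python
# Displacements from known forces: solve K*u = f for the unconstrained
# degrees of freedom of the two-element bar (node 0 clamped, 1,000 lb
# applied at the tip node).
import numpy as np

k = 1.0e7 / 50.0                    # EA/L of each element, as before
K = np.array([[ k,   -k,   0.0],
              [-k,  2*k,   -k ],
              [0.0,  -k,    k ]])

free = [1, 2]                       # drop the clamped node's row and column
f = np.array([0.0, 1000.0])         # applied force vector at the free d.o.f.
u = np.linalg.solve(K[np.ix_(free, free)], f)   # linear solve, no explicit inverse
print(u)                            # [0.005 0.01] inches of axial displacement
```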

There are, of course, many variations on this theme and many complexities that cannot be addressed here. The important point is that we have gone from an insoluble differential equation to a soluble matrix arithmetic problem. This, in turn, has enabled a change from individual analyses by hand of local portions of a structure to a modeling effort followed by an automated calculation of the stresses and deflections of the entire structure.

Pioneering papers on discretization of structures were published by Alexander Hrennikoff in 1941 at the Massachusetts Institute of Technology and by Richard Courant in 1943 at the mathematics institute he founded at New York University that would later bear his name. These papers did not lead to immediate application, in part perhaps because they were ahead of the necessary computational technology and in part because they were still somewhat theoretical and had not yet developed a well-formed practical implementation. The first example of what we now call the finite element method (FEM) is commonly considered to be a paper by M.J. Turner (Boeing), R.W. Clough (University of California at Berkeley, Civil Engineering Department), H.C. Martin (University of Washington, Aeronautical Engineering Department), and L.J. Topp in 1956.[17] This paper presented a method for plane stress problems, using triangular elements. John Argyris at the University of Stuttgart, Germany, also made important early contributions. The term “finite element method” was actually coined by Clough in 1960. The Civil Engineering Department at Berkeley became a major center of early finite element methods development.[18]

By the mid-1960s, aircraft companies, computing companies, universities, and Government research centers were beginning to explore the possibilities—although the method allegedly suffered some initial lack of interest in the academic world, because it bypassed elegant mathematical solutions in favor of numerical brute force.[19] However, the practical value could not long be ignored. The following insightful comment, made by a research team at the University of Denver in 1966 (working under NASA sponsorship), sums up the expectation of the period: “It is certain that this concept is going to become one of the most important tools of engineering in the future as structures become more complex and computers more versatile and available.”[20]

NASA Spawns NASTRAN, Its Greatest Computational Success

The project to develop a general-purpose finite element structural analysis system was conceived in the midst of this rapid expansion of finite element research in the 1960s. The development, and subsequent management, enhancement, and distribution, of the NASA Structural Analysis System, or NASTRAN, unquestionably constitutes NASA’s greatest single contribution to computerized structural analysis—and arguably the single most influential contribution to the field from any source. NASTRAN is the workhorse of structural analysis: there may be more advanced programs in use for certain applications or in certain proprietary or research environments, but NASTRAN is the most capable general-purpose, generally available, program for structural analysis in existence today, even more than 40 years after it was introduced.

Origins of NASTRAN

In the early 1960s, structures researchers from the various NASA Centers were gathering annually at Headquarters in Washington, DC, to exchange ideas and coordinate their efforts. They began to realize that many organizations—NASA Centers and industry—were independently developing computer programs to solve similar types of structural problems. There were several drawbacks to this situation. Effort was being duplicated needlessly. There was no compatibility of input and output formats, or consistency of naming conventions. The programs were only as versatile as the developers cared to make them; the inherent versatility of the finite element method was not being exploited. More benefit might be achieved by pooling resources and developing a truly general-purpose program. Thomas G. Butler of the Goddard Space Flight Center (GSFC), who led the team that developed NASTRAN between 1965 and 1970, recalled in 1982:

NASA’s Office of Advanced Research and Technology (OART) under Dr. Raymond Bisplinghoff sponsored a considerable amount of research in the area of flight structures through its operating centers. Representatives from the centers who managed research in structures convened annually to exchange ideas. I was one of the representatives from Goddard Space Flight Center at the meeting in January 1964. . . . Center after center described research programs to improve analysis of structures. Shells of different kinds were logical for NASA to analyze at the time because rockets are shell-like. Each research concentrated on a different aspect of shells. Some were closed with discontinuous boundaries. Other shells had cutouts. Others were noncircular. Others were partial spans of less than 360°. This all seemed quite worthwhile if the products of the research resulted in exact closed-form solutions. However, all of them were geared toward making some simplifying assumption that made it possible to write a computer program to give numerical solutions for their behavior. . . . Each of these computer programs required data organization different from every other. . . . Each was intended for exploring localized conditions rather than complete shell-like structures, such as a whole rocket. My reaction to these programs was that . . . technology was currently available to give engineering solutions to not just localized shells but to whole, highly varied structures. The method was finite elements.[21]

Doug Michel led the meetings at NASA Headquarters. Butler, Harry Runyan of Langley Research Center, and probably others proposed that NASA develop its own finite element program, if a suitable one could not be found already existing. “The group thought this was a good idea, and Doug followed up with forming the Ad Hoc Group for Structural Analysis, which was headed by Tom Butler of Goddard,” recalled C. Thomas Modlin, Jr., who was one of the representatives from what is now Johnson Space Center.[22] The committee included representatives from all of the NASA Centers that had any significant activity in structural analysis methods at the time, plus an adjunct member from the U.S. Air Force at Wright-Patterson Air Force Base, as listed in the accompanying table.[23]

CENTER                          REPRESENTATIVE(S)
Ames                            Richard M. Beam and Perry P. Polentz
Flight Research (now Dryden)    Richard J. Rosecrans
Goddard                         Thomas G. Butler (Chair) and Peter A. Smidinger
Jet Propulsion Laboratory       Marshall E. Alper and Robert M. Bamford
Langley                         Herbert J. Cunningham
Lewis                           William C. Scott and