risk assessment is most crucial. The inset photograph
(courtesy of NASA) was taken during actual landing operations
of the Space Shuttle Discovery.
In such advanced designs, measurements that define the environment on a component, such as pressures and temperatures, are difficult to obtain experimentally. More emphasis on analysis is required, as well as allowances for variations in parameters to account for estimated rather than measured loading. Further, new developments such as the Advanced Launch System (ALS) will require analytically quantified assessments of risk during the design phase, when extensive testing is not available. These factors have been the impetus for evolving a probabilistic methodology for use in designing for reduced risk and uncertainty by calculating component reliability, which is defined as the complement of the failure rate.
Analysis techniques as currently practiced are based on design methodologies, with load factors and safety factors determined from more than four decades of experience, and are commonly gathered under the heading of deterministic analysis. The word deterministic, in the strict sense,
is used when the outcome of an experiment is certain. For
example, if we have a two-headed coin, then the outcome of
repeated experiments is always heads, a certainty. On the other
hand, if we flip a regular coin, then there is a 50/50 probability
that it can be heads or tails and, therefore, the result is
probabilistic. The word deterministic is used in structural
design to convey that extreme values, real or hypothetical,
have been used in the design and that no probabilities need
to be considered.
As such, deterministic structural analysis means that: (1) a load or condition is based on a set of design operating points and load factors to account for variability in the load definition, ensuring that the maximum load ever seen in test or flight is accounted for; (2) a minimum strength or fatigue allowable ensures that material variations are accounted for; and (3) a safety factor covers the unknowns in analysis, loads, fabrication, or human error. The product of the design load, limit factors, and safety factors must always be less than the strength or fatigue allowable to assure safe operation. In actual practice, a design may have to meet several analysis conditions, such as strength, fatigue, deflection, buckling, or burst, so the basic design approach typically uses maximum loads and minimum strength conditions in the analysis.
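In code form, this deterministic bookkeeping is a one-line comparison. The sketch below uses entirely hypothetical numbers for the load, factors, and strength; it only illustrates the structure of the check.

```python
# Deterministic design check: the factored load must not exceed the
# minimum strength. All numbers here are hypothetical, for illustration.

def deterministic_margin(design_load, limit_factor, safety_factor, min_strength):
    """Margin of safety: positive means the factored load is covered."""
    factored_load = design_load * limit_factor * safety_factor
    return min_strength / factored_load - 1.0

margin = deterministic_margin(design_load=100.0,   # max load seen in test or flight
                              limit_factor=1.25,   # covers load-definition variability
                              safety_factor=1.4,   # covers unknowns and human error
                              min_strength=200.0)  # lower-bound material allowable
print(f"margin of safety = {margin:+.2f}")  # +0.14 here; negative fails the check
```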
It is important to understand that the load factors are based on hardware failures that were initially analyzed by this same method, with factors that specified that none of the hardware was to fail. So for a specific duct or component, the analysis may be very conservative, but the methodology has no way of quantifying that conservatism. Thus, the overall need is for no failures, even though some of the parts are actually overdesigned.
As we have indicated, Pratt & Whitney Rocketdyne has experienced fewer than a handful of flight failures using a deterministic analysis approach. Mistakes do occur, in analysis or manufacturing (human error), in system integration and operation, or in loading conditions that were unknown until the engine was tested. But these oversights have typically been found in design reviews, structural audits, or quality checks, and during the extensive ground testing and certification of the engine prior to flight.
Newer propulsion system designs require high reliability right from the beginning of the program. Experimental demonstrations of reliability, such as 99% with a 90% confidence level, require a large number of tests without failure. Probabilistic analysis tools provide an analytical measure of reliability from the design stage.
As an alternative to the deterministic approach, a probabilistic structural design approach considers the uncertainty in the situation in a more structured manner. The major element in the probabilistic approach is that design variables are not seen as single values, nor are they weighted toward an upper or lower bound condition. Instead, the actual distribution or variation of the parameters is represented. A distribution can be thought of as a histogram of discrete values of the parameters or as a mathematical model that represents a smooth description of the variation. At each value of the variation, the number of occurrences in the distribution is plotted as the ordinate. When the area under the histogram or curve is normalized to a value of one, the function is called the probability density function. The parameters are termed random variables. The distribution functions, then, are used to determine the probability of occurrence of a given value of the random variables.
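As a minimal illustration of that normalization step (using NumPy and an arbitrary stand-in sample, not engine data), a histogram rescaled so that its total area equals one is an empirical probability density function:

```python
import numpy as np

# Stand-in data: 10,000 observations of a random variable (not engine data).
rng = np.random.default_rng(0)
samples = rng.normal(loc=50.0, scale=5.0, size=10_000)

# Histogram of occurrences; density=True rescales bin heights so that the
# total area (height times bin width) equals one, i.e., an empirical PDF.
density, edges = np.histogram(samples, bins=40, density=True)
print(f"area under histogram = {np.sum(density * np.diff(edges)):.3f}")  # ~1.000
```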
The value of a random variable changes from part to part or during a test firing in an undetermined manner; for example, the peak amplitude of the random vibration loads present in every rocket engine varies. In a probabilistic analysis, the variations of several random variables, such as loads, geometries, and material properties, are all accounted for simultaneously.
A comparison of an engine duct analyzed using both methodologies helps point out the differences between the two approaches.
Loads on a typical duct include pressures, temperatures, end displacements, and vibration. In the design process, pressures and temperatures are chosen to represent the maximum values that can occur during ground testing or flight. The duct differential end displacements are based on upper bound displacement envelopes that can occur from fabrication tolerances, installation, and intentional movement of the engine to steer the vehicle, while vibration loads are based on engine tests where measurements are taken close to the ends of the ducts. The selected vibration loads are an upper bound envelope of a series of tests that cover the engine operating power levels. These upper bound loads are increased by a limit load factor (one or greater) and applied to static and dynamic analytical models of the duct. The limit load factors are usually retained throughout the life of the hardware. With sufficient test data, conservatism in these factors can be reduced. Loads are always increased if test data show that the design conditions are low; the same limit load factor is again utilized.
Structural models of the duct are used in both the static and dynamic analyses, and are typically based on the nominal geometry of the hardware. The resulting displacements, loads, and stresses are then directly compared to structural material allowables such as ultimate strength or fatigue. The material allowables are lower bound values of the available material test data. If a statistically significant amount of test data is not available, the minimum values are lowered, based on an experience factor. The ratio of the material allowables to the calculated responses must be no less than the specified safety factor.
Probabilistic analysis of the SSME High Pressure Oxidizer Turbopump discharge duct: The vibration level applied at the ends of the duct and the system damping can vary from firing to firing and from build to build. Considering these variations, this example illustrates the computed variation that can be expected in the bending moment at a typical location, along with a probability statement. The graph also ranks the variables that contribute most to the variation.
The deterministic engineering analysis takes
a pessimistic view of the loading and the material strengths
on which to base any calculated safety factors. Yet as conservative
as this sounds, there are still occasional failures in ducting.
If we consider the duct analysis from a probabilistic
approach, we find the following: The same loads are considered,
but they are chosen based on a nominal condition and a distribution
parameter such as the standard deviation. For example, the
dynamic loads are based on a mean response value with a standard
deviation of 10 to 20% of the mean value. (The mean value
is typically only two-thirds of the maximum value used in
the deterministic analysis.) The structural response has mean
values and distributions for each of the individual loading
variables. Weld joint parameters such as weld offset and weld
stress concentration factor are also specified as distributions.
Material properties are likewise furnished as distributions and compared to the stress response.
The resulting answer in our analysis appears
in the form of a distribution of the potential for failure
rather than a simple factor of safety. Additional information is also available, such as the sensitivity of the overall failure probability to each variable in the analysis. Hence, an equivalent to
the factor of safety can be obtained when one specifies the
allowed failure rate of the duct. If only one or two variables
are considered, the difference in the deterministic and probabilistic
analysis on a factor of safety basis is typically small. But
when many random variables are considered, the effect of the
distributions is that they do not all occur together at their
max-max condition except in a few extreme cases. This often
results in a much higher indicated factor of safety using
the probabilistic approach.
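A quick simulation shows why the variables rarely land on their max-max condition together: if each of five independent variables exceeds its 90th percentile only 10% of the time, the joint event occurs at a rate of about 0.1 to the fifth power. The sketch below, with generic standard-normal stand-ins for the design variables, checks this by counting:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vars, n_sims = 5, 1_000_000

# Five generic, independent standard-normal design variables;
# "extreme" here means exceeding the 90th percentile (z ~ 1.2816).
x = rng.standard_normal((n_sims, n_vars))
all_extreme = np.all(x > 1.2816, axis=1).mean()

print(f"P(one variable extreme)        ~ 0.10")
print(f"P(all {n_vars} extreme simultaneously) ~ {all_extreme:.1e} (theory 1.0e-05)")
```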
Thus, the success of deterministic product
design prompts the obvious question: Why change to a new methodology
like probabilistic design, seemingly a "design for uncertainty"?
Basically, more reliable products are required, and quantitative
methods for managing risk are needed. Associated needs are
lower cost, lower weight and the requirement to build quality
into the design.
The fact is that we have significantly improved
our analysis tools in the last few years, but we still cannot
predict operating conditions, loads, or material strengths
with 100% certainty. In reality, treating the design conditions, material limits, and load factors as single values is an assumption of convenience that has been consistent with our design analysis tools. Current analysis methods remain deterministic, so the loads and requirements are necessarily defined in a deterministic manner. Yet in truth, they all have variability or uncertainty.
For critical situations, we have considered this variation
by using sensitivity analysis and qualitative judgments about
the acceptability of a design.
Probabilistic design, by contrast, allows
for both the variations actually inherent in hardware and
engine operation and a quantification of the answer in terms
that are communicable on an engineering basis. Knowing the
inherent risk of failure has become critical if we are to
meet the design requirements of the hardware. The skills and methodologies must be developed to assess this risk of failure,
minimize it within the design constraints (i.e., cost or weight)
and understand which features of the design are the dominant
cause of the risk.
The probabilistic approach, using an assessment
of risk rather than a factor of safety, can help level the
conservatism in each part while still maintaining the required
safety of the hardware. Obviously, the last thing that is
desired is a decrease in safety, measurable or otherwise.
And yet, a better reliability factor can be obtained if the
probabilistic approach is used during the design phase. The
sensitivity of the component to variations in the critical
variable can be reduced by intelligently using the sensitivity
information furnished by a probabilistic analysis. This can
increase the reliability of the part, often with only minor
changes in the design. Thus, the best approach to defining this risk, other than hundreds of tests and flights, is through a probabilistic analysis and risk assessment.
A common misconception is that a probabilistic analysis requires extensive test data to allow accurate quantification of the variables. This is not the case. The optimum probabilistic design starts in the preliminary design phase and extends throughout the design, development, and flight phases of the hardware. In the early phases of a design, nominals, limits, and uncertainties are based on prior experience. As test data become available, these estimates are updated and the actual characterizations of engine and load variables, as well as response variables, are validated.
The reliability design process can be divided
into conceptual design, preliminary design, detail design,
development test and flight. The conceptual design process
- in a classical deterministic sense - includes defining operating
characteristics and configurations and the use of simplistic
design guidelines for definition of configuration. At this
stage the initial sizing is based on deterministic analysis
for primary loads and the design features incorporate the
essence of the load-carrying features of the hardware. Many
design details are ignored that will later be accounted for.
Where possible, simple approximations are utilized, based
on past experience, that allow the designer and analyst the
opportunity to visualize the overall operation and essence
of the design. Design for reliability at this stage adds in
reliability allocation for the engine and its components.
A reliability allocation is the reliability value that the
piece part must have if the overall system is to meet its
reliability goal. Therefore, an engine reliability of 99% may mean a piece part is required to have a reliability of 0.9995.
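For a series system, where the failure of any part fails the engine, the allocation follows from multiplying part reliabilities. The sketch below reproduces the arithmetic behind the 0.9995 figure; the part count of 20 is a hypothetical stand-in.

```python
# Series-system allocation: the engine reliability is the product of its
# part reliabilities, so n equally weighted parts each need R**(1/n).
# The part count of 20 is a hypothetical stand-in.
engine_goal = 0.99
n_parts = 20

part_allocation = engine_goal ** (1.0 / n_parts)
print(f"required per-part reliability = {part_allocation:.5f}")  # ~0.99950
```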
The preliminary design typically entails a
deterministic design analysis, a failure modes and effects
analysis, and the definition of a critical hardware list.
The new features in a preliminary design include screening
hardware for probabilistic analysis and establishing firm
hardware reliability estimates. In a probabilistic design methodology,
it is at this point that the initial decisions are made as
to which variables are crucial; an initial quantification
of uncertainty is also determined. In addition, the design
must be critically reviewed to define possible failure modes
and how they relate to the uncertainty of variables.
Calculating the reliability of critical components involves several stages of analysis, considering the variations of fundamental variables such as inlet pressures and temperatures. Engine system models predict component load variability. These variations are combined with other component level uncertainties to calculate probabilistic response using complex finite element models. The stress variations are combined with material strength variations to calculate reliability, considering several possible failure modes.
Detail design using the deterministic analysis will proceed using our standard approaches with maximum loads, minimum properties, and associated deterministic models and failure techniques. Factors of safety or life factors are calculated for the multiple conditions required: ultimate, yield, buckling, deflection, low cycle fatigue, and high cycle fatigue. (This is necessary until we mature our understanding and "feel" for probabilistic methodology; the deterministic analysis must be our baseline analysis technique.) In addition, an approximate probabilistic estimate is made for a lower bound failure value. This lower bound value will include an analytical probability estimate plus an additional factor to allow for human errors and other variations not included in the basic calculations. In a design that requires quantified reliability, all elements of the design must have a reliability estimate. For non-critical items where high reliability is readily obtained, a simple reliability estimator will be utilized. Non-critical items include fail-safe redundant elements, simple geometries and loadings, and items that are tolerant of the operating environment. Typically, these items will have reliabilities very close to 1.0. Critical items are those that have (1) potentially catastrophic failure modes, (2) complex geometries, and (3) sensitive operating environments. These components require a detailed probabilistic analysis, considering (1) component load distributions, (2) geometric tolerances and variations, (3) material property variation, (4) failure model characterization such as ultimate load, buckling, or fatigue, and (5) allowances for human error, model error, fabrication, and assembly. Using this detailed evaluation, loads, responses, and damage assessments will be quantified as a component reliability that has considered the sensitivities and uncertainties of the hardware.
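One standard way to fold stress and strength variations into a reliability number is the stress-strength interference model. The sketch below assumes, purely for illustration, that both quantities are normally distributed, in which case the failure probability has a closed form; the means and standard deviations are hypothetical.

```python
import math

# Stress-strength interference with normal distributions (illustrative values).
# Failure occurs when stress exceeds strength; for independent normals the
# margin M = strength - stress is normal, and reliability R = P(M > 0).
mu_strength, sd_strength = 200.0, 10.0
mu_stress, sd_stress = 150.0, 15.0

z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
reliability = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
print(f"z = {z:.2f}, reliability = {reliability:.5f}")
```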
The results of the design process are used to fabricate hardware and to define development and flight-testing. The probabilistic method uses reliability estimates and sensitivity factors to define tests for validation of these calculations. Potentially, fewer exploratory tests are required when more directed test requirements are specified. The approximated load and response distributions are quantified as engine test measurements are collected. Additional analyses are required when measured data does not reflect initial distributions or when new information not originally considered is obtained. Estimated reliabilities are gradually improved and replaced with fact-based quantities. Unlike the deterministic approach, where safety factors are not quantified, the test data can be directly used to validate the calculated reliabilities.
Rather than get involved with the mathematical details of the methodology, it is more constructive to discuss the breadth of applications and some existing hardware examples that have been or are in the process of being analyzed. The primary concern of the discussion so far has been structural applications. The methodologies, nevertheless, are applicable throughout the design analysis process, including various phases of engine models, aerodynamic loading, thermal analysis, mechanical vibration, structural dynamics, damage assessment, and fatigue and fracture mechanics analysis. A brief discussion of work done relative to fracture mechanics, the SSME bearing cartridge, and the combustion chamber liner analysis is furnished to give a flavor of how the methodology is applied.
A study of dye penetrant and component applications has recently been completed. Different penetrants detect different size flaws with differing reliability, so it is important to specify the correct type for specific hardware. Too sensitive a penetrant results in so much information that it is hard to sort out the critical flaws, running the cost of inspection and analysis well beyond actual needs. Conversely, a penetrant that is too insensitive results in missed critical-sized flaws. Thus, the ideal solution is to use a dye penetrant that is sensitive enough to find critical flaws reliably, yet does not reveal smaller, noncritical flaws. If such a discriminating penetrant does not exist, a penetrant in combination with a flaw acceptance program can be used to detect critical flaws while allowing the inspector to ignore flaws, perhaps smaller than a certain length, which are deemed noncritical for the component being inspected.
The initial approach to the problem, the deterministic analysis, took a conservative view of every aspect of the problem: the worst possible flaw location and orientation, the largest flaw length, the worst flaw shape, the lower bound NDE (nondestructive evaluation) detection limit for the specific dye penetrant, and the lowest material properties. A sensitivity analysis using this approach on allowable flaw sizes from selected welds in the SSME weld data could not be used to quantify the selection criteria.
It was recognized that the analysis was conservative, but the deterministic approach had no good way to quantify this conservatism. This led to a probabilistic analysis that utilized the distributional data available. A Monte Carlo simulation was used to quantify the relative reliability in using one type of penetrant over another. To simplify the flaw acceptance procedure, penetrants were assigned allowable flaw lengths. In order to maintain reliability while accepting flaws, the least sensitive penetrant that does not significantly increase risk was used. Distributions were characterized for the simulation, using as random variables flaw length, flaw shape, probability of detection of flaws as a function of length, inspector flaw length estimates, and flaw growth material properties. The remaining variables were taken as deterministic (single value, conservative) quantities. This procedure has led to a prototype for realistically deciding which penetrant to use for a component.
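A stripped-down version of such a simulation is sketched below. The lognormal flaw-size population, the logistic probability-of-detection (POD) curves, and all parameter values are stand-ins rather than the characterized distributions described above; what the sketch preserves is the structure: sample flaw sizes, sample detection outcomes per penetrant, and count the critical flaws that escape.

```python
import numpy as np

rng = np.random.default_rng(2)
n_flaws = 1_000_000
critical_length = 2.0  # hypothetical critical flaw length, mm

# Stand-in flaw-length population (lognormal), in mm.
lengths = rng.lognormal(mean=0.0, sigma=0.6, size=n_flaws)

def pod(length, a50, slope):
    """Hypothetical logistic probability-of-detection curve vs. flaw length."""
    return 1.0 / (1.0 + np.exp(-slope * (length - a50)))

for name, a50 in [("sensitive penetrant", 0.5), ("coarse penetrant", 1.5)]:
    detected = rng.random(n_flaws) < pod(lengths, a50, slope=4.0)
    escaped = np.mean((lengths >= critical_length) & ~detected)
    print(f"{name}: P(critical flaw escapes detection) ~ {escaped:.2e}")
```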
Another application of probabilistic fracture mechanics involves ALS engine concepts. The design philosophy allows for inherent material variations in the manufacturing process for castings, including voids, flaws, and different material grades. The effects of these defects will be covered by material allowable curves where possible. For fracture critical hardware, such as pressure vessels and rotating machinery, whose failure can be catastrophic, a probabilistic analysis will be performed in addition to the standard analysis.
The probabilistic analysis of the SSME HPOTP (high pressure oxidizer turbopump) bearing cartridge is an example of the application of probabilistic methodology for risk assessment. One of the components of the HPOTP has a resonant condition within the operating speed range of the turbopump. Four times the shaft speed (4N) has the possibility of coinciding with the natural frequency of the cartridge at a specific power level. The Phase II engine cartridge frequency match occurs near the 100% power level; since the engine is not intentionally operated at these power levels except during the first ten seconds of flight, the cartridges have sufficient life to meet mission goals. However, the speed at a given power level changes from engine to engine and from test to test. Furthermore, for a given engine, test, and power level, the speed changes from one time slice to the next. In addition, the natural frequency of the bearing cartridge also changes from one cartridge to another. The system is observed to possess very small damping, corresponding to the cartridge "tilt mode" in which it resonates. Consequently, the chance that the system frequency will match the turbopump exciting frequency for a long time during a flight is very small.
In addressing the phenomenon, the deterministic analysis takes a conservative approach. It first finds the maximum dwell time (i.e., the time the pump stays at a given speed). Then it assumes that the natural frequency of the system coincides with the forcing function (4N) speed where the maximum dwell time is observed, and accumulates damage accordingly. As a result of this conservative assumption, it shows a limited life for the cartridge.
An initial probabilistic analysis has been made to see whether similar results are obtained. Six random variables were considered: cartridge natural frequency, damping, and four variables to describe the pump speed (4N level) during a duty cycle operation. The initial results showed that the cartridge has a high reliability, since the probability of several worst case events occurring simultaneously, based on measured data, is very small. The analysis also identified damping as a main driver. This has led to an ongoing review of the damping that has been determined from strain-gauged test results of bearing cartridges during engine operation.
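A toy version of the frequency-coincidence part of this calculation, with entirely hypothetical frequency distributions and resonant bandwidth, shows why the joint probability comes out small:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims = 1_000_000

# Hypothetical distributions (Hz): the cartridge natural frequency varies
# from cartridge to cartridge, and the 4N excitation varies test to test
# and from one time slice to the next.
f_natural = rng.normal(2000.0, 30.0, n_sims)
f_excite = rng.normal(1900.0, 40.0, n_sims)

# With very small damping, significant response occurs only while the
# excitation dwells inside a narrow band around resonance (assumed 10 Hz).
in_band = np.abs(f_excite - f_natural) < 10.0
print(f"P(4N excitation inside the resonant band) ~ {in_band.mean():.4f}")
```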
Pratt & Whitney Rocketdyne has had an ongoing IR&D task to address probabilistic thermal analysis. It is apparent that the same techniques that are being developed for structural analysis are applicable to thermal analysis. By the end of the year, we will have implemented a technique for thermal analysis that utilizes the probabilistic response code being developed for structural analysis. The SSME main combustion chamber liner is being used as a part of this study, with seven random variables under consideration: hot and cold heat transfer coefficients, curvature enhancement, hot spot conditions, conductance of the superalloy materials, flow resistance, and hot wall thickness.
Pratt & Whitney Rocketdyne has technology contracts in the area of probabilistic load model development (analytical methods and procedures that describe the physics of the problem) with NASA-Lewis Research Center, and is a subcontractor to Southwest Research Institute on a NASA-Lewis Research Center contract to develop probabilistic structural analysis methods (PSAM). These "models" quantify the variations in loading environment that are observed in practice and provide a framework for predicting the load variations for future engines; the effort is referred to as composite load spectra (CLS). For example, the magnitude of nozzle side loads generated during ground testing is highly variable and is best characterized on a probabilistic footing. It might take several firings before the maximum strains are observed in a nozzle component. Other examples include the engine vibration environment, which varies from pump to pump, and turbine temperatures due to ignition spike. These are just examples; one can say all performance or load variables have inherent variations, some large and some small.
The probabilistic load model is a composite of several models, such as an engine system model, a component interface model and individual component load scaling models. In the most general sense, the composite load model provides a tool for predicting the variations in component loads that can be expected, given the variations of primitive variables such as inlet pressures, temperatures and mixture ratios in a rocket engine. Further, the model has provisions to include local variables that can dominate a component's environment, such as heat shield gap and seal leakage. So in a sense, the engine system can be visualized as a complex filter, serving as a tool for predicting the variations in output, given the variations in input. The interaction of random variables can be very complex in an engine and probabilistic tools are of great help when one's intuitions tend to be misleading.
The probabilistic structural analysis contract takes off from where the composite load model contract effort ends. The PSAM contract's thrust is the structural analysis portion of the design process. Probabilistic theories have been developed in the past in specific areas of structural analysis, such as random vibrations. Generally, those advancements addressed only the randomness in loads. What is new in PSAM is that many system parameters can be treated as random in addition to the identified random loads. The random system parameters can include, but are not limited to, mass, stiffness, material properties, damping, and boundary conditions. PSAM provides the tools, in computer code form, to evaluate the probabilistic structural response. The mathematical model that predicts the structural response can be either simplistic approximate equations or finite-element solutions from large finite-element models, although it should be noted that the majority of rocket engine components in the detailed design phase are analyzed using finite-element analysis tools, owing to the complex geometry, loading conditions, and material behavior. In short, PSAM has taken the conventional deterministic tools that we use and cast them in the probabilistic domain, with enough generality for use in practical applications.
Except for some simple functional forms, all probabilistic methods are approximate, since an exact closed-form probability calculation is impractical. The most common of the approximate methods is the Monte Carlo simulation, or some variation of it. This technique essentially performs repeated numerical experiments that represent the total spectrum of the population of the problem. The challenge is to run enough simulations to accurately calculate the probabilities when small probabilities are involved. The mean value is adequately represented with a reasonable number of simulations. The methodology is very general, with no simplifications or further assumptions. Increasingly accurate answers for the complete distribution are obtained with increasing numbers of simulations, and the method is perfectly suitable with today's computer speeds if 20,000 to 100,000 simulations can be done in a reasonable amount of computer time. This method is used mainly where each function evaluation consists of computing results from a few equations. On the other hand, the method is not suitable if each function evaluation takes hours of supercomputer time, as is the case in a large finite-element analysis.
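The sampling burden is easy to quantify: the one-sigma error of a Monte Carlo probability estimate from N trials is sqrt(p(1-p)/N), so rare-event probabilities demand very large N. A minimal illustration, with a hypothetical failure probability:

```python
import math

# One-sigma sampling error of a Monte Carlo probability estimate.
def std_error(p, n):
    return math.sqrt(p * (1.0 - p) / n)

p = 1e-4  # a rare failure probability (hypothetical)
for n in (1_000, 20_000, 100_000, 10_000_000):
    print(f"N = {n:>10,}: p = {p} +/- {std_error(p, n):.1e}")
# Near N = 20,000 the error is comparable to p itself; resolving rare
# probabilities tightly takes millions of trials, while means converge fast.
```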
The challenge for the probabilistic structural analysis methods development was to apply and improve probability estimation methods that require fewer function evaluations in the algorithm. One such technique proposed in PSAM is an advanced fast probability integration technique. In this method, certain simplifying assumptions are made about the function that represents the failure conditions; more precisely, the function is linearized in the form of a Taylor series. If the failure function is nonlinear, there could be significant errors in the linearization. In such cases, the algorithm provides for corrective iterations to improve the probability estimates. The method is computationally efficient if point estimates of the probability are needed. That is, rather than asking for the complete cumulative distribution function of the response variable, if the question is, "What is the response level for a given probability?" then the method is very efficient.
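A bare-bones version of the linearization idea, a mean-value first-order estimate rather than the full iterative fast probability integration algorithm, is sketched below with a hypothetical failure function and input distributions:

```python
import math

# Mean-value first-order estimate of P(g < 0) for a failure function g.
# Illustrative model: g = strength - load * factor, independent normal inputs.
means = {"strength": 200.0, "load": 100.0, "factor": 1.4}
sds = {"strength": 10.0, "load": 8.0, "factor": 0.05}

def g(v):
    return v["strength"] - v["load"] * v["factor"]

# Linearize g about the mean point (first-order Taylor series) using
# numerical partial derivatives, then propagate the input variances.
g0 = g(means)
var_g = 0.0
for name, mu in means.items():
    shifted = dict(means)
    h = 1e-6 * max(1.0, abs(mu))
    shifted[name] = mu + h
    dg_dx = (g(shifted) - g0) / h
    var_g += (dg_dx * sds[name]) ** 2

beta = g0 / math.sqrt(var_g)  # reliability index
p_fail = 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))
print(f"beta = {beta:.2f}, P(failure) ~ {p_fail:.1e}")
```

When the failure function is strongly nonlinear, this single linearization can err noticeably, which is exactly where the corrective iterations mentioned above come in.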
The composite load spectra effort primarily uses a third simulation method that is a compromise in computational effort between a conventional Monte Carlo simulation and a fast probability integration-type solution. The CLS methodology is a discrete probability distribution (DPD) approach, in which the individual distributions are lumped into essentially a histogram of constant probability levels. These lumped distributions are then used in a reduced number of individual simulations to define a response or failure distribution. In addition to the basic simulations, the DPD method requires significant calculations relative to the interactions of the random distributions. The number of actual simulations, though, is reduced to 20 to 50 for each variable, versus thousands for Monte Carlo, yet accuracy similar to Monte Carlo has been demonstrated. Several examples of probabilistic structural analysis have been completed at Pratt & Whitney Rocketdyne in the past few years as part of the verification efforts and to demonstrate the methodology.
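The lumping step can be sketched directly: each input distribution is collapsed into a small set of equally probable representative values, and the response is then evaluated over combinations of those values instead of thousands of raw samples. The two-variable example below is a simplified stand-in for the CLS implementation, with hypothetical load and strength distributions:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

def lump(samples, n_levels=20):
    """Collapse a distribution into n equally probable representative values."""
    probs = (np.arange(n_levels) + 0.5) / n_levels
    return np.quantile(samples, probs)

# Hypothetical inputs, each lumped to 20 constant-probability levels.
load = lump(rng.normal(100.0, 15.0, 100_000))
strength = lump(rng.normal(130.0, 12.0, 100_000))

# Enumerate the 20 x 20 = 400 equally probable combinations (versus
# thousands of raw Monte Carlo trials) to build the failure estimate.
p_fail = np.mean([s < l for l, s in product(load, strength)])
print(f"P(strength < load) ~ {p_fail:.4f}")  # ~0.06 for these stand-ins
```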
There are many reasons to believe that probabilistic structural analysis tools can be used to design more reliable products. Important results from a probabilistic analysis are the sensitivity responses. These quantities, sometimes called importance factors, allow a quantitative ranking of the importance of each random variable relative to the scatter in structural response. These results can be used to tighten the allowable tolerances of key variables to reduce scatter, and to loosen tolerances on other variables to reduce production costs with little effect on reliability. They will also point the designers to areas where more data are required to obtain reliable products. While it is granted that a probabilistic structural analysis requires more computational effort than a deterministic analysis, it also provides much greater information about the reliability of a design. Probabilistic models as tools are very general, so the concept can easily be extended to cover other disciplines as well, such as heat transfer and fluid dynamics. With increasing emphasis on product reliability, which requires statistical concepts in manufacturing and quality control, the evolution of analytical tools to include probability and statistics in their methodology is natural and complementary. The new design approach will help in building an even more reliable product.
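As a closing sketch, one crude stand-in for such importance factors is the correlation between each sampled input and the computed response; the variable names and response model below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Hypothetical random inputs and a simple response model.
inputs = {
    "pressure": rng.normal(100.0, 10.0, n),
    "thickness": rng.normal(5.0, 0.25, n),
    "modulus": rng.normal(200.0, 4.0, n),
}
response = inputs["pressure"] / inputs["thickness"] + 0.01 * inputs["modulus"]

# Rank variables by |correlation| with the response, a crude importance factor.
ranked = sorted(inputs, key=lambda k: -abs(np.corrcoef(inputs[k], response)[0, 1]))
for name in ranked:
    r = np.corrcoef(inputs[name], response)[0, 1]
    print(f"{name:>9}: importance ~ {abs(r):.2f}")
```

A ranking like this is what lets a designer decide where tightening a tolerance buys reliability and where loosening one saves cost.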