Design of Experiments

Ronald Morgan Shewchuk · 116 slides · February 25, 2017

About This Presentation

Process/product optimization with minimum experimental trials


Slide Content

Introduction
In previous presentations we reviewed the technique of response surface regression analysis to evaluate historical data and develop a modeling equation for an output variable which we want to optimize. Historical data is great insofar as it is "free"; that is, the data has already been generated as part of routine production operations. Historical data, however, is often compiled over a long period of time by many different personnel, leading to uncertainty in the data. This uncertainty manifests itself in uncertainty in the analysis, conclusions and recommendations. In cases where response surface regression analysis of historical data fails to identify correlations with adjusted R² values in excess of 85%, the technique of Design of Experiments (DOE) may prove useful. Design of Experiments is a set of controlled experiments conducted in randomized order. The number of experiments required depends on the DOE design we select.

Definitions
Experiments are costly, especially if they are conducted in the production plant rather than in the pilot plant or laboratory, so we want to select a DOE design which minimizes the number of trials conducted without compromising the integrity of the data analysis, and without producing discrepant product. There are some common terms used in Design of Experiments which are defined below.
Factor: A controlled or uncontrolled input variable.
Level: A specific value or setting for a factor.
Response Variable: An output which is measured or observed.
Effect: The change in the response variable that occurs as experimental conditions change.
Interaction: Synergistic effect which occurs when the effect of one factor on the response variable depends on the setting of another factor or factors.

Definitions
Two-way Interactions: Second order effects on the response variable resulting from the interaction of two factors (e.g. AB, AC, BD, etc.).
Three-way Interactions: Third order effects on the response variable resulting from the interaction of three factors (e.g. ABC, ACD, BCD, etc.).
Repetition: Running several samples during one experimental setup.
Replication: Duplicating the entire experiment in a time sequence with different setups between each run.
Randomization: Technique used to randomize the order of the experimental runs or the assignment of experimental units to the different factor-level combinations.
Resolution: The degree to which the design of the experiment can differentiate levels of interactions.

Definitions
Residual: The difference between an observed value and the predicted value of the modeling equation.
Run: A single setup in a DOE from which data is gathered. For example, a three-factor full factorial DOE with two levels will have 2³ = 8 runs.
Trial: Used interchangeably with Run.
Treatment Combination: Used interchangeably with Run.
Orthogonal Design: An experimental design is said to be orthogonal if each factor is tested in such a way that it can be evaluated independently of the other factors.
Aliasing: Occurs when two factors or interaction terms are set at identical levels throughout the experiment.
Confounding: Two factors are considered to be confounded when their test profiles contain the same pattern of test settings, making it impossible to evaluate the two factors independently.

Taguchi Loss Function
We have learned, during our review of Cause and Effect Diagrams, that any process will have input variables which may be categorized as:
Controllable (C) – variables which must be held constant and require standard operating procedures to ensure consistency.
Noise (N) – variables which are not controlled and thus introduce variation into the process.
Experimental (X) – key process variables which must be tested to identify their optimal settings.
These variables act upon the process to influence the response variables which measure process and/or product performance. This may be visualized as in Figure 9.1.

Figure 9.1 Process Input/Output with Variable Categories (Controllable Variables (C), Noise Variables (N), Experimental Variables (X), Process, Response Variables (Y))

Taguchi Loss Function
We have also learned, during our review of Statistical Process Control, the importance of centering the process and reducing variation to maximize process capability. Genichi Taguchi, a Japanese scientist, also recognized this importance and furthermore theorized that there is a quadratic relationship between the financial loss of the process and the distance the process is off target. This relationship is represented by the Taguchi Loss Function described in Eqn 9.1 and graphically depicted in Figure 9.2. The financial loss typically results from inspection costs, scrap, rework, increased cycle time, increased inventories, design changes, reduced customer satisfaction, reduced market share, etc.

Taguchi Loss Function
L = k(y – T)²   (Eqn 9.1)
where L = monetary loss, y = response variable, T = target value of the response variable, and k = monetary constant.
Figure 9.2 Taguchi Loss Function
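
The following is a minimal Python sketch of Eqn 9.1 for readers who want to explore the loss function numerically; the constant k, the target T and the sample response values are illustrative assumptions, not values from the slides.

# Taguchi loss function, Eqn 9.1: L = k * (y - T)^2
def taguchi_loss(y: float, target: float, k: float) -> float:
    """Monetary loss for a response y versus target T with monetary constant k."""
    return k * (y - target) ** 2

# Example: assume k = 2.0 monetary units per squared deviation and a target of 10.0.
for y in (9.0, 10.0, 11.5):
    print(y, taguchi_loss(y, target=10.0, k=2.0))   # loss grows quadratically off target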

Choosing an Experimental Design
The steps involved in conducting a Design of Experiments are summarized in Fig 9.3. The type of experimental design selected will depend upon your objectives. Screening DOE designs are used to identify the key process input variables (KPIVs) that influence the process mean or variation. Modeling DOE designs build predictive equations for your output response variable as a function of the key process input variables. Optimizing DOE designs locate the sweet spot of the process within the modeling equation, optimizing the absolute value of the output response variable while minimizing variation due to input noise variables. As with any decision process, there are trade-offs. Screening DOEs are subject to variable confounding and missing variable interaction effects but require fewer runs, which results in lower experimental cost. Modeling DOEs provide increased knowledge of the process by preventing confounding and accounting for all variable interactions but require many runs, since all factor combinations must be tested.

Figure 9.3 Steps for Conducting a Design of Experiments
Planning Phase
1. State the problem
2. Define the objectives
3. Select the output response variables
4. Select the input factors to be explored
5. Select the factor levels
6. Select the DOE design
7. Determine the sample size
8. Determine the number of replicates
9. Define the experimental plan (i.e. procedures, operators, measurement system, etc.)
Execution Phase
10. Conduct the experiment
11. Collect the data
Analysis Phase
12. Identify significant factors and interactions
13. Fit and finalize the modeling equation
14. Validate results
15. Evaluate conclusions for impact on the current process and downstream customers
16. Conduct cost-benefit analysis of implementing conclusions
17. Consider next-generation DOE to further optimize the process

Choosing an Experimental Design
Optimizing DOEs provide a high level of process knowledge but require a large number of replicates to ensure that the true sweet spot of the process has been located. These design cost-benefits may be visualized as in Figure 9.4. A common mistake of novice experimenters is to select a comprehensive DOE design with multiple factors, levels and replicates. The seven-factor, three-level, three-replicate DOE is bound to scare off the most open-minded Operations Manager. If your process has multiple input variables and you don't know which variables are important, it is best to begin with a screening DOE design. Once you reduce the factor set, your team can conduct a modeling DOE to develop a predictive equation for the output response variable. This predictive equation can then be used in response surface designs to optimize the settings of the reduced factor set. If your process is well defined and/or you want to explore the effects of only a handful of input variables, then a modeling DOE design is probably a good starting point. The DOE design decision tree is shown in Figure 9.5.

Figure 9.4 Considerations in Selecting an Experimental Design (DOE objectives: Screening, Modeling, Optimizing; experimental design types: Resolution III, IV and V Fractional Factorial, Plackett-Burman, Taguchi, Full Factorial, Central Composite, Box-Behnken, Response Surface Methodology; axes: number of key process input variables, high (6-15) to low (2-5), process knowledge and cost, low to high)

Figure 9.5 Decision Tree for Choosing an Experimental Design

Resolution
Experimenters typically use the term resolution to describe the discriminating power of a selected design. The resolution, R, of a 2-level fractional factorial design is equivalent to the shortest interaction term in the defining relation. The resolution of a 2^(k-q) fractional factorial design, where k is the number of input factors and q is the fraction magnitude (q = 1 for a ½ fraction, q = 2 for a ¼ fraction, q = 3 for a ⅛ fraction), will depend on the number of input factors, the fraction magnitude and the number of runs conducted. These influences are summarized in the Minitab resolution table of Figure 9.6. The higher the resolution of the design, the higher its discriminating power. An R II design has main effects aliased with other main effects; consequently, it is not a recommended experimental approach. R III designs do not alias main effects with each other but do alias main effects with 2-way interactions. Thus, R III designs are typically used in screening DOEs to reduce the number of input factors.

Figure 9.6 Minitab 2-Level Fractional Factorial Design Resolution Table

Resolution
R IV designs do not alias main effects with 2-way interactions but do alias 2-way interactions with other 2-way interactions. R IV designs are typically used for building modeling equations where resource limitations preclude the use of an R V design. R V designs do not alias main effects with each other or with 2-way interactions, and 2-way interactions are not aliased with other 2-way interactions. Main effects are aliased with 4-way interactions (which are rare) and 2-way interactions are aliased with 3-way interactions. In general, R V designs are well suited to building modeling equations which are devoid of significant interaction issues. Resolution attributes are summarized in Figure 9.7. You may have noticed the term Full along the leading diagonal of the resolution table in Figure 9.6. This stands for Full Factorial. Full Factorial designs do not contain aliasing; consequently, their resolution is typically referred to as Full Factorial.
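
To make the link between the defining relation and resolution concrete, here is a small Python sketch for an illustrative 2^(4-1) half-fraction generated by D = ABC, so that the defining relation is I = ABCD; the generator choice is an assumption for illustration, not taken from the slides. Each effect is aliased with its product against the defining word, with repeated letters cancelling, so main effects pair with 3-way interactions and 2-way interactions pair with each other, i.e. a Resolution IV design.

from itertools import combinations

defining_word = set("ABCD")        # defining relation I = ABCD for the assumed generator D = ABC

def alias(effect: str) -> str:
    """Multiply an effect by the defining word; repeated letters cancel (symmetric difference)."""
    return "".join(sorted(set(effect) ^ defining_word))

factors = "ABCD"
effects = ["".join(c) for r in (1, 2) for c in combinations(factors, r)]
for e in effects:
    print(f"{e} is aliased with {alias(e)}")   # e.g. A with BCD, AB with CD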

Figure 9.7 2-Level Design Resolution Attributes

Coding and Uncoding Input Variables
Mathematical analysis of DOE data requires that the experimental design be created in coded input variables. This coding avoids the complications caused by different variable units and scales by standardizing the inputs. Let's say a microbiologist wanted to evaluate the effect of three factors on a targeted cell culture growth – temperature, nutrient concentration and incubation time. The low test setting for each factor is typically designated as –1, the high test setting as +1, and the center point (if the experimental design is three-level) is designated as 0. This may be visualized graphically as in Figure 9.8. Conversion from uncoded to coded input variables may be accomplished by Eqn 9.2, and conversion back from coded to uncoded input variables by Eqn 9.3.
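
Since Eqn 9.2 and Eqn 9.3 appear only as slide graphics, the Python sketch below uses the standard linear coding convention, stated here as an assumption: the low setting maps to -1, the high setting to +1, and the midpoint to 0. The temperature range in the example is illustrative.

def code(x: float, low: float, high: float) -> float:
    """Uncoded to coded (standard linear coding): -1 at low, +1 at high."""
    center = (high + low) / 2.0
    half_range = (high - low) / 2.0
    return (x - center) / half_range

def uncode(coded: float, low: float, high: float) -> float:
    """Coded back to engineering units."""
    center = (high + low) / 2.0
    half_range = (high - low) / 2.0
    return center + coded * half_range

# Example: an assumed 30-40 C temperature range for the microbiologist's factor.
print(code(35.0, 30.0, 40.0))     # 0.0  (center point)
print(uncode(+1.0, 30.0, 40.0))   # 40.0 (high setting)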

Figure 9.8 Relationship Between Coded and Uncoded Input Variables

Coding and Uncoding Input Variables

Full Factorial Designs
Let us begin our journey into designed experiments by considering a three-factor, two-level, two-replicate DOE as described in Case Study XII.
Case Study XII: Full Factorial DOE Analysis of Adhesive Shear Strength
Raul Sanchez, the Production Manager of a large automotive parts and assembly manufacturer in Veracruz, has just received a disturbing call from the Quality Control Manager. The lap shear strength measurements of the last lot of AB4356 subassembly parts have indicated a low shear strength average and a high standard deviation. This has resulted in a Cpk below 1.0, which prevents the Quality Control Manager from certifying the lot and releasing the shipment. Raul has been expecting this call. He has been monitoring the SPC charts for this quality characteristic for the last several weeks and has noticed a downward trend with increased variation.
The AB4356 subassembly includes a Buna-N rubber strap fixtured to an acrylonitrile butadiene styrene (ABS) housing with a cyanoacrylate adhesive. Raul meets with his production team and they prepare a Cause and Effect Diagram of the possible sources of low shear strength. There are many potential causes, but the group decides to focus on three factors – temperature, humidity and surface roughness of the ABS substrate. Temperature and humidity are noise factors (the plant is not climate controlled). The substrate surface roughness can be increased or decreased within certain limits depending on choice of supplier. The group opts for a three-factor, two-level, two-replicate Full Factorial DOE design to avoid confounding and the potential to miss interaction effects. The steps for creating the experimental design are captured in the screen shots of Figure 9.9.
The group reviews historical production records and selects the low and high test conditions for the DOE based upon the measured minimum and maximum values for the three factors over the last six months. The low and high values are summarized below.
                                  Low     High
Temperature (°C)                  21      30
Relative Humidity (%)             39      87
ABS Surface Roughness Ra (µm)     0.29    1.05
The Quality Control Manager graciously agrees to allocate the resources of the environmental control chamber for the required four days of testing and to perform the 24 hr lap shear strength testing per ISO 4587. The shear strength results are entered into the design sheet and the DOE analysis is conducted as in Figure 9.10.
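
For readers who want to reproduce the design matrix outside Minitab, the following is a minimal Python sketch of the same 2³ full factorial with two replicates and a randomized run order; the column names mirror the case-study factors and the random seed is an arbitrary assumption.

import itertools
import random
import pandas as pd

levels = {"Temp": (21, 30), "RH": (39, 87), "Ra": (0.29, 1.05)}   # low/high settings from the case study

runs = []
for rep in (1, 2):                                   # two replicates of the corner points
    for combo in itertools.product(*levels.values()):
        runs.append(dict(zip(levels, combo), Replicate=rep))

random.seed(42)
random.shuffle(runs)                                 # randomize the run order
design = pd.DataFrame(runs)
design.insert(0, "RunOrder", range(1, len(design) + 1))
print(design)                                        # 16 rows: 2^3 corner points x 2 replicates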

Figure 9.9 Steps for Creating a Full Factorial Experimental Design. Open a new worksheet. Click on Stat → DOE → Factorial → Create Factorial Design on the top menu.

Figure 9.9 Steps for Creating a Full Factorial Experimental Design. Click the radio button for 2-level factorial (default generators) in the dialogue box and select 3 for the Number of factors. Click Designs.

Figure 9.9 Steps for Creating a Full Factorial Experimental Design. Select Full Factorial Design in the dialogue box. Number of center points per block = 0. Number of replicates for corner points = 2. Number of blocks = 1. Click OK. Then click Factors.

Figure 9.9 Steps for Creating a Full Factorial Experimental Design. Enter Temp for the name of Factor A, RH for Factor B and Ra for Factor C in the dialogue box. Leave the Low and High values in their coded variable format. Click OK. Then click OK one more time.

Figure 9.9 Steps for Creating a Full Factorial Experimental Design. The session window indicates that you have created a full factorial design with three factors and sixteen runs. Click Window → Worksheet on the top menu to display the randomized DOE template.

Figure 9.9 Steps for Creating a Full Factorial Experimental Design. Enter Shear Strength for the name of column eight. Enter the measured shear strength results for each experiment in the appropriate cells of the design sheet.

Figure 9.10 Steps for Analyzing a Full Factorial Experimental Design. Click on Stat → DOE → Factorial → Analyze Factorial Design on the top menu.

Figure 9.10 Steps for Analyzing a Full Factorial Experimental Design. Select C8 Shear Strength for the Response variable in the dialogue box. Click Graphs.

Figure 9.10 Steps for Analyzing a Full Factorial Experimental Design. Select Normal and Pareto for Effects Plots in the dialogue box. Click OK. Then click OK one more time.

Figure 9.10 Steps for Analyzing a Full Factorial Experimental Design. The Pareto Chart of the Standardized Effects of Shear Strength indicates that humidity has the greatest impact on shear strength, followed by a minor effect of surface roughness. The red vertical line at a standardized effect of 2.31 is the threshold above which we would reject the null hypothesis that there are no significant effects, at an alpha level of 0.05. Click Window → Effects Plot for Shear Strength on the top menu.

Figure 9.10 Steps for Analyzing a Full Factorial Experimental Design. The Normal Plot of the Standardized Effects of Shear Strength indicates that B Relative Humidity and C Ra are significant factors. The further the points are from the blue normal line, the more significant the factor. Click Window → Session on the top menu.

Figure 9.10 Steps for Analyzing a Full Factorial Experimental Design. The session window displays the ANOVA results. Factors with P-values less than 0.05 are considered significant. Humidity is the most important factor, with a minor contribution from surface roughness. Temperature is not a significant factor. There are no significant factor interactions.
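
An equivalent analysis can be sketched in Python with statsmodels, assuming the coded -1/+1 design columns Temp, RH and Ra plus a numeric ShearStrength response sit in a DataFrame named design (the column and variable names are illustrative assumptions).

import statsmodels.api as sm
import statsmodels.formula.api as smf

# Full model: main effects plus all 2-way interactions and the 3-way interaction.
model = smf.ols("ShearStrength ~ Temp * RH * Ra", data=design).fit()
anova = sm.stats.anova_lm(model, typ=2)
print(anova)   # terms with p-values below 0.05 are considered significant, as in the Minitab session window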

Figure 9.10 Steps for Analyzing a Full Factorial Experimental Design. Click on Stat → DOE → Factorial → Factorial Plots on the top menu.

Figure 9.10 Steps for Analyzing a Full Factorial Experimental Design. Click on Main Effects Plot in the dialogue box. Click Setup.

Figure 9.10 Steps for Analyzing a Full Factorial Experimental Design. Select C8 Shear Strength for Responses in the dialogue box. Include A: Temp, B: RH and C: Ra for Factors to Include in Plots. Click OK. Then click OK one more time.

Figure 9.10 Steps for Analyzing a Full Factorial Experimental Design. The greater the slope of the line in the Main Effects Plot for Shear Strength, the greater that factor's influence on shear strength. The plot indicates that the mean shear strength is maximized at high humidity.

Case Study XII: Full Factorial DOE Analysis of Adhesive Shear Strength – Output Interpretation
Raul has identified that the primary driver of lap shear strength is a noise variable which is outside his control. He now has the data and analysis results to support a capital expense proposal to convert one of the small storage rooms into a humidity-controlled staging room. Raul is optimistic that his boss will support the proposal since she is a chemist and will recognize that the DOE results are supported by the reaction chemistry of the adhesive: cyanoacrylate adhesives are catalyzed by water. If she refuses, Raul will have to explore other adhesive alternatives which are insensitive to ambient humidity. Raul, employing the skills of a seasoned chess player, surmises that he could probably use a screening DOE to make this selection, with one of the input factors being adhesive type.
Plackett-Burman Screening Designs
The Plackett-Burman designs were developed by R.L. Plackett and J.P. Burman in 1946 while working for the British Ministry of Supply. These designs permit the evaluation of multiple factors simultaneously in order to identify the vital few from the trivial many. Plackett-Burman designs are of resolution III, meaning that main effects are not aliased with one another but may be aliased with two-way interactions. They are an effective design approach for screening a large number of factors while minimizing the number of experiments conducted.

Plackett-Burman Screening Designs
Consider the case where you have ten factors to be evaluated for impact on a quality characteristic. A full factorial DOE with two levels and two replicates would require 2¹⁰ · 2 = 2,048 experiments. A Plackett-Burman screening DOE would require only 24 experiments. Let us apply the technique of a screening DOE and analysis in Case Study XIII.
Case Study XIII: Screening DOE Analysis of Drug Delivery Factors
Shanti Chopra is a researcher with an upstart pharmaceutical company. He has just been assigned to the SPES1564 development program, a revolutionary drug intended to reduce the mortality rate caused by pancreatic cancer. Shanti's first assignment is to develop an effective drug delivery system for SPES1564 which will target pancreatic tumors. Shanti has many ideas, but he must choose wisely since validation testing in mice typically takes fourteen days. He decides to begin with the ten factors identified below, utilizing two levels and two replicates.
Peptide Coupler A, Peptide Coupler B, Peptide Coupler C, Peptide Coupler D, Ligand, Block Copolymer Molecular Weight, Micelle Size, Aptamer A, Aptamer B, pH
Shanti will track the % reduction in pancreatic tumor size in orthotopically-implanted mice via fluorescence imaging. The steps in creating the 12-run Plackett-Burman screening DOE are captured in Figure 9.11. Experimental results are entered into the design sheet and the DOE analysis conducted as in Figure 9.12.
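
A comparable 12-run Plackett-Burman matrix can be generated outside Minitab; the sketch below uses the pyDOE2 package (pip install pyDOE2), and the exact call should be treated as an assumption if your pyDOE2 version differs. The factor names are taken from the case study; stacking the matrix twice represents the two replicates.

import numpy as np
import pandas as pd
from pyDOE2 import pbdesign

factors = ["Peptide Coupler A", "Peptide Coupler B", "Peptide Coupler C",
           "Peptide Coupler D", "Ligand", "Block Copolymer Molecular Weight",
           "Micelle Size", "Aptamer A", "Aptamer B", "pH"]

base = pbdesign(len(factors))                                     # 12 runs x 10 coded (-1/+1) columns
design = pd.DataFrame(np.vstack([base, base]), columns=factors)   # 2 replicates -> 24 runs
print(design.shape)                                               # (24, 10)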

Figure 9.11 Steps for Creating a Plackett-Burman Screening Experimental Design. Open a new worksheet. Click on Stat → DOE → Factorial → Create Factorial Design on the top menu.

Figure 9.11 Steps for Creating a Plackett-Burman Screening Experimental Design. Click the radio button for Plackett-Burman design in the dialogue box and select 10 for the Number of factors. Click Designs.

Figure 9.11 Steps for Creating a Plackett-Burman Screening Experimental Design. Select 12 for the Number of runs in the dialogue box. Enter 2 for the Number of replicates. Click OK. Then click Factors.

Figure 9.11 Steps for Creating a Plackett-Burman Screening Experimental Design. Enter the names of the ten factors in the dialogue box. The data type should be indicated as Numeric for all ten factors. Leave the values for Low and High as the coded values –1 and 1 respectively. Click OK. Then click Options.

Figure 9.11 Steps for Creating a Plackett-Burman Screening Experimental Design. Uncheck the selection for Randomize runs, just for this case, in the dialogue box. This will facilitate our data entry into the design worksheet. Click OK. Then click OK one more time.

Figure 9.11 Steps for Creating a Plackett-Burman Screening Experimental Design. The worksheet is populated with the factor settings for the 12 runs with two replicates.

Figure 9.11 Steps for Creating a Plackett-Burman Screening Experimental Design. Scroll to the right of the design sheet and label the first open column % Reduction Tumor Sz.

Figure 9.11 Steps for Creating a Plackett-Burman Screening Experimental Design. Conduct the experiments in randomized order and enter the % reduction in tumor size measurements in the response variable column.

Figure 9.12 Steps for Analyzing a Plackett-Burman Experimental Design. Click on Stat → DOE → Factorial → Analyze Factorial Design on the top menu.

Figure 9.12 Steps for Analyzing a Plackett-Burman Experimental Design. Select C15 % Reduction Tumor Sz for the Response variable in the dialogue box. Click Graphs.

Figure 9.12 Steps for Analyzing a Plackett-Burman Experimental Design. Select Normal and Pareto for Effects Plots in the dialogue box. Click OK. Then click OK one more time.

Figure 9.12 Steps for Analyzing a Plackett-Burman Experimental Design. The Pareto Chart of the Standardized Effects of % Reduction in Tumor Size indicates that Peptide Coupler C, pH, Aptamer A and Micelle Size have the greatest impact on tumor size. The red vertical line at a standardized effect of 2.16 is the threshold above which we would reject the null hypothesis that there are no significant effects, at an alpha level of 0.05. Click Window → Effects Plot for % Reduction Tumor Sz on the top menu.

Figure 9.12 Steps for Analyzing a Plackett-Burman Experimental Design. The Normal Plot of the Standardized Effects of % Reduction in Tumor Size indicates that Peptide Coupler C, pH, Aptamer A and Micelle Size are significant factors. The further the points are from the blue normal line, the more significant the factor. Click Window → Session on the top menu.

Figure 9.12 Steps for Analyzing a Plackett-Burman Experimental Design. The session window displays the ANOVA results. Factors with P-values less than 0.05 are considered significant. Notice that factor interactions are absent; Plackett-Burman designs do not have the discriminating power to identify factor interactions.

Figure 9.12 Steps for Analyzing a Plackett-Burman Experimental Design. Click on Stat → DOE → Factorial → Factorial Plots on the top menu.

Figure 9.12 Steps for Analyzing a Plackett-Burman Experimental Design. Click on Main Effects Plot in the dialogue box. Click Setup.

Figure 9.12 Steps for Analyzing a Plackett-Burman Experimental Design. Select C15 % Reduction Tumor Sz for Responses in the dialogue box. Click on >> to include all factors in the plots. Click OK. Then click OK one more time.

Figure 9.12 Steps for Analyzing a Plackett-Burman Experimental Design. The greater the slope of the line in the Main Effects Plot, the greater that factor's influence on tumor size reduction. The plot indicates that tumor size reduction is maximized with a high concentration of Peptide Coupler C, a high concentration of Aptamer A, low pH, and small micelle size. Shanti has effectively reduced his factor set from ten to four. He can now proceed to a modeling DOE with increased discriminating power.

Central Composite Modeling Designs
Central Composite designs were developed by G.E. Box and K.B. Wilson in 1951; consequently, they are sometimes referred to as Box-Wilson designs. These designs are the most effective and efficient second order modeling approaches where factors are purely quantitative. The design includes a factorial portion, a center point portion and an axial portion. The factorial portion consists of 2^k runs, where k is the number of factors. The number of center points required to maintain orthogonality may be calculated from the formula recommended by Peter John (7): nc = 4(√nF + 1) – 2k, where nF is the number of runs in the 2-level factorial portion of the design. This formula tends to result in a large number of center points. You may cut back the number, recognizing that the primary purpose of the center points is to get an estimate of the pure experimental error. The number of axial points will typically equal 2k.

Central Composite Modeling Designs
Alpha, α, is a parameter of Central Composite designs representing the axial point distance from the center of the design. It may be calculated from the relation α = [nF]^¼. The factorial portion of the design may be built for any resolution, thus providing a balance between discriminating power and the number of runs conducted. Central Composite designs can be run sequentially to save resources. For example, the factorial and center point portions of the design can be tested first. A linear model can then be built from the two-level portion to predict the center point results. If the linear model is not validated at the center points, then the axial points can be added to complete the quadratic model. Let us apply the technique of a modeling design in Case Study XIV to extend the fine work that Shanti has completed in the previous case study.

Case Study XIV: Modeling DOE Analysis of Drug Delivery Factors
Shanti Chopra, a researcher with an upstart pharmaceutical company, has been assigned to develop an effective drug delivery system for SPES1564 which will target pancreatic tumors. His previous screening DOE has identified four important factors. Shanti would like to identify a modeling equation to permit further optimization through Response Surface Methodology. Since his factors are all quantitative, he elects to use a Central Composite design (CCD) with four factors, three levels and three replicates. nF = 2⁴ = 16. α = [16]^¼ = 2. nc = 4(√16 + 1) – 2(4) = 12. This is too many center points, so Shanti elects to cut back to three, to be consistent with the number of replicates for each trial. The levels of the four factors to be tested are summarized below.
                      (–α)    (–1)    (0)     (+1)    (+α)    Unit
Peptide Coupler C     12      18      24      30      36      µg/ml
pH                    3.2     3.8     4.4     5.0     5.6     dimensionless
Micelle Size          60      110     160     210     260     nm
Aptamer A             50      62.5    75      87.5    100     µg/ml
Shanti will track the % reduction in pancreatic tumor size over fourteen days in orthotopically-implanted mice via fluorescence imaging as his response variable. The steps in creating the Central Composite DOE are captured in Figure 9.13. Experimental results are entered into the design sheet and the DOE analysis conducted as in Figure 9.14.
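
A quick Python check of the design sizing quoted above, using the formulas from the preceding slides (k is the number of factors; the axial-point count 2k is as stated earlier).

from math import sqrt

k = 4
n_F = 2 ** k                        # 16 runs in the factorial (cube) portion
alpha = n_F ** 0.25                 # axial distance alpha = nF^(1/4) = 2.0
n_c = 4 * (sqrt(n_F) + 1) - 2 * k   # Peter John's recommended center points = 12
n_axial = 2 * k                     # 8 axial (star) points
print(n_F, alpha, n_c, n_axial)     # 16 2.0 12.0 8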

Figure 9.13 Steps for Creating a Central Composite Experimental Design. Open a new worksheet. Click on Stat → DOE → Response Surface → Create Response Surface Design on the top menu.

Figure 9.13 Steps for Creating a Central Composite Experimental Design. Select the radio button for Central composite in the dialogue box. Select 4 for the Number of factors. Click Designs.

Figure 9.13 Steps for Creating a Central Composite Experimental Design. Select Full design with one block in the dialogue box. Select the radio button for Custom Number of Center Points and enter 3 for the Cube block. Leave the value of Alpha at its default value of 2.000. Enter 3 for the Number of replicates. Click OK. Then click Factors.

Figure 9.13 Steps for Creating a Central Composite Experimental Design. Select the radio button for Cube points and enter the names of the four factors in the dialogue box. Leave the values for Low and High as the coded values –1 and 1 respectively. Click OK. Then click Options.

Figure 9.13 Steps for Creating a Central Composite Experimental Design. Uncheck the selection for Randomize runs, just for this case, in the dialogue box. This will facilitate our data entry into the design worksheet. Click OK. Then click OK one more time.

Figure 9.13 Steps for Creating a Central Composite Experimental Design. The worksheet is populated with the factor settings for the 27 runs with 3 replicates.

Figure 9.13 Steps for Creating a Central Composite Experimental Design. Scroll to the right of the design sheet and label the first open column % Reduction Tumor Sz.

Figure 9.13 Steps for Creating a Central Composite Experimental Design. Conduct the experiments in randomized order and enter the % reduction in tumor size measurements in the appropriate cell of the response variable column.

Figure 9.14 Steps for Analyzing a Central Composite Experimental Design. Click on Stat → DOE → Response Surface → Analyze Response Surface Design on the top menu.

Figure 9.14 Steps for Analyzing a Central Composite Experimental Design. Select C9 % Reduction Tumor Sz for the Response variable in the dialogue box. Click OK.

Figure 9.14 Steps for Analyzing a Central Composite Experimental Design. The session window displays the response surface regression analysis results. Factors with P-values less than 0.05 are considered significant. This means that pH, Micelle Size, Aptamer A and Peptide Coupler C squared are important factors. All other quadratic terms and factor interactions are not significant.

Figure 9.14 Steps for Analyzing a Central Composite Experimental Design. Let's proceed to simplify the regression model. Click on Stat → DOE → Response Surface → Analyze Response Surface Design on the top menu.

Figure 9.14 Steps for Analyzing a Central Composite Experimental Design. Click on Terms in the dialogue box.

Figure 9.14 Steps for Analyzing a Central Composite Experimental Design. Clear all selected terms from the dialogue box by clicking <<. Reselect A: Peptide Coupler C, B: pH, C: Micelle Size, D: Aptamer A and AA, which stands for factor A squared. Notice that we had to include the A term, since AA is significant, to avoid violating the hierarchy rule. Click OK. Then click OK one more time.

Figure 9.14 Steps for Analyzing a Central Composite Experimental Design. The session window displays the reduced regression model analysis results. The reduced model has an adjusted R² of 91.58%, which means that it explains 91.58% of the variation in the reduction in tumor size. The modeling equation with coded coefficients is shown in Eqn 9.4.

Response Surface Methodology
Shanti now has a modeling equation which will form the basis of finding the local optimum within the design space via the technique of Response Surface Methodology. Response Surface Methodology is an iterative process by which experiments are conducted along the pathway of steepest ascent (i.e. the direction of greatest improvement in the response variable) in order to identify the factor settings which optimize the response variable. This may be visualized in Figure 9.15. Response Surface Methodology allows us to see the contours of improvement so that we may select factor settings that are well centered within the response variable plateau. This leads to robust process operation which is resistant to noise variables, as opposed to running the process in an unstable region, akin to skateboarding on a handrail.
Ŷ = 58.304 – 1.733B – 2.517C + 5.136D – 7.745A² – 0.079A   (Eqn 9.4)

Figure 9.15 Response Surface Methodology Pathway of Steepest Ascent. Consider the surface plots of Figure 9.16 for Shanti's reduced regression model of Case Study XIV. These have been generated in Minitab by selecting Stat → DOE → Response Surface → Surface Plots on the top menu.

Response Surface Methodology
Shanti has included four factors in his modeling equation; consequently there are six two-factor surface plots and hence six pathways of steepest ascent for the response variable. But which pathway should we take? If Shanti's modeling equation were first order, that is, of the form Ŷ = C1 + C2x1 + C3x2 + C4x3 + … + Ck+1xk, we could use the analytical approach outlined in Figure 9.17 to answer this question. But Shanti's modeling equation includes the quadratic term A², and its form does not permit solution for the maxima using partial derivatives and simultaneous equations. Thus, we must use the procedure indicated in Figure 9.18. Let us apply these steps in Case Study XV to locate the optimum factor settings for Shanti Chopra's targeted pancreatic cancer drug development.
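
For the first-order case, a common analytical shortcut (not necessarily the exact procedure of Figure 9.17, which is not reproduced in this extract) is to move each coded factor in proportion to its coefficient; the coefficients and step size below are illustrative assumptions, not Shanti's values.

import numpy as np

coef = np.array([2.0, -1.0, 0.5])         # illustrative C2, C3, C4 for coded factors x1, x2, x3
direction = coef / np.linalg.norm(coef)   # unit vector along the path of steepest ascent
step = 0.5                                # chosen step size in coded units
print(direction * step)                   # next trial point, relative to the current center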

Figure 9.16 Surface Plots for Case Study XIV Factors (six panels, one for each pair of factors)

Figure 9.17 Optimization via Response Surface Methodology – 1st Order Modeling Equations

Figure 9.18 Optimization via Response Surface Methodology – 2nd Order Modeling Equations

Case Study XV: Optimization of Drug Delivery Factors by Response Surface Methodology
Shanti Chopra, a researcher with a pharmaceutical company, has been assigned to develop an effective drug delivery system for SPES1564 which will target pancreatic tumors. His previous screening DOE has identified four important factors and his modeling DOE has identified a quadratic relationship with the concentration of Peptide Coupler C. Shanti would like to use the technique of Response Surface Methodology to determine the values of Peptide Coupler C concentration (A), pH (B), Micelle Size (C) and Aptamer A concentration (D) which maximize the % reduction in pancreatic tumor size over fourteen days in orthotopically-implanted mice.
Shanti's previous DOE using a Central Composite Design has resulted in the following modeling equation:
Ŷ = 58.304 – 1.733B – 2.517C + 5.136D – 7.745A² – 0.079A
Factors A² and D have the largest absolute value coefficients. Ŷ is maximized if factors B and C are each set to the coded value of –2 (their lower extremity of the CCD). The resulting surface plot is shown in Figure 9.19, with a theoretical maximum reduction in tumor size of 77%.
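
The quoted theoretical maximum can be checked numerically by evaluating the modeling equation with B and C at their coded lower extremes (-2) and D at its upper extreme (+2), and scanning the quadratic in A over the coded range; a short Python sketch follows (the grid resolution is an arbitrary assumption).

import numpy as np

def y_hat(A, B, C, D):
    """Eqn 9.4, reduced CCD model in coded units."""
    return 58.304 - 1.733*B - 2.517*C + 5.136*D - 7.745*A**2 - 0.079*A

A = np.linspace(-2, 2, 4001)                 # scan factor A across the coded design range
y = y_hat(A, B=-2, C=-2, D=2)
print(A[y.argmax()], y.max())                # peak near A = -0.005, Y-hat approximately 77.08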

Figure 9.19 Surface Plot Generation from Central Composite Design Modeling Equation. Open the Minitab file which contains the reduced regression modeling equation identified by the Central Composite Design. Click on Stat → DOE → Response Surface → Contour/Surface Plot on the top menu.

Figure 9.19 Surface Plot Generation from Central Composite Design Modeling Equation. Check Surface plot in the dialogue box. Click Setup.

Figure 9.19 Surface Plot Generation from Central Composite Design Modeling Equation. Click the radio button for Select a pair of factors for a single plot in the dialogue box. Choose A: Peptide Coupler C for the X Axis and D: Aptamer A for the Y Axis. Choose coded units. Click Settings.

Figure 9.19 Surface Plot Generation from Central Composite Design Modeling Equation. Enter –2 for pH and –2 for Micelle Size. Click OK. Then click OK two more times.

Figure 9.19 Surface Plot Generation from Central Composite Design Modeling Equation. The Surface Plot is generated for the predicted value of % Reduction in Tumor Size for Peptide Coupler C vs. Aptamer A.

Figure 9.19 Surface Plot Generation from Central Composite Design Modeling Equation. Click on Stat → DOE → Response Surface → Response Optimizer on the top menu.

Figure 9.19 Surface Plot Generation from Central Composite Design Modeling Equation. Select C9 % Reduction Tumor Sz for the response variable to optimize in the dialogue box. Click Setup.

Figure 9.19 Surface Plot Generation from Central Composite Design Modeling Equation. Select Maximize for the Goal of the response optimizer in the dialogue box. Enter 0 for the lower limit and 100 for the target. Click OK. Then click Options.

Figure 9.19 Surface Plot Generation from Central Composite Design Modeling Equation. Enter starting values of 0, –2, –2 and 2 for Peptide Coupler C, pH, Micelle Size and Aptamer A respectively in the dialogue box. Click OK. Then click OK one more time.

Figure 9.19 Surface Plot Generation from Central Composite Design Modeling Equation. The optimization plot indicates a predicted maximum Reduction in Tumor Size of 77.0762%. Shanti is now ready to conduct a 2² full factorial DOE to identify the pathway of steepest ascent for factors B and C. The steps for creating the DOE template are outlined in Figure 9.20.

Figure 9.20 Steps for Generating a 2² Full Factorial Design – Climbing DOE Number One. Open a new worksheet. Click on Stat → DOE → Factorial → Create Factorial Design on the top menu.

Figure 9.20 Steps for Generating a 2² Full Factorial Design – Climbing DOE Number One. Click the radio button for 2-level factorial design in the dialogue box. Select 2 for the Number of factors. Click Designs.

Figure 9.20 Steps for Generating a 2² Full Factorial Design – Climbing DOE Number One. Enter 3 for the Number of replicates for corner points. Click OK. Then click Factors.

Figure 9.20 Steps for Generating a 2² Full Factorial Design – Climbing DOE Number One. Enter pH for the name of factor A and Micelle Size for the name of factor B. Both factor data types are numeric. Click OK. Then click Options.

Figure 9.20 Steps for Generating a 2² Full Factorial Design – Climbing DOE Number One. Uncheck the selection for Randomize runs, just for this case, in the dialogue box. This will facilitate our data entry into the design worksheet. Click OK. Then click OK one more time.

Figure 9.20 Steps for Generating a 2² Full Factorial Design – Climbing DOE Number One. The worksheet is populated with the factor settings for the 4 runs with 3 replicates.

Case Study XV: Optimization of Drug Delivery Factors by Response Surface Methodology
Shanti can now proceed to run the first climbing DOE at the factor levels summarized below. Remember that our starting point is 0, –2, –2, 2 for Peptide Coupler C concentration, pH, Micelle Size and Aptamer A concentration respectively. We will be striking our ice pick into virgin territory for pH and Micelle Size in an attempt to incrementally climb the % reduction in tumor size mountain.
                      (–1)    (+1)    (Constant)    Unit
Peptide Coupler C                     24            µg/ml
pH                    2.6     3.2                   dimensionless
Micelle Size          10      60                    nm
Aptamer A                             100           µg/ml
Experimental results are entered into the design sheet and the DOE analysis conducted as in Figure 9.21.

Figure 9.21 Steps for Analyzing 2² Full Factorial Climbing DOE Number One. Enter % Reduction Tumor Sz as the name for column 7.

Figure 9.21 Steps for Analyzing 2² Full Factorial Climbing DOE Number One. Conduct the experiments in randomized order and enter the % reduction in tumor size measurements in the appropriate cell of the response variable column.

Figure 9.21 Steps for Analyzing 2² Full Factorial Climbing DOE Number One. Click on Stat → DOE → Factorial → Analyze Factorial Design on the top menu.

Figure 9.21 Steps for Analyzing 2² Full Factorial Climbing DOE Number One. Select C7 % Reduction Tumor Sz for the Response variable in the dialogue box. Click OK.

Figure 9.21 Steps for Analyzing 2² Full Factorial Climbing DOE Number One. The session window summarizes the regression analysis and ANOVA results for the Ŷ model.

Case Study XV: Optimization of Drug Delivery Factors by Response Surface Methodology
The % reduction in tumor size may be predicted from the following equation:
Ŷ = 73.042 – 3.992B + 4.108C + 0.875BC
Ŷ will be maximized at coded values of –1 and +1 for pH (B) and Micelle Size (C) respectively. We have moved pH in the right direction, but we have moved micelle size in the wrong direction. That is, % reduction in tumor size has suffered by reducing micelle size from the baseline level of 60 nm.
If we substitute –1 for B and +1 for C in the above equation, the predicted maximum reduction in tumor size is 80.267%. Our new anchor point on the DOE mountain in the original coded units is 0, –3, –2, 2 for Peptide Coupler C concentration, pH, Micelle Size and Aptamer A concentration respectively.
Shanti can now proceed to his second climbing DOE at the factor levels summarized below.
                      (–1)    (+1)    (Constant)    Unit
Peptide Coupler C                     24            µg/ml
pH                    2.0     2.6                   dimensionless
Micelle Size          60      110                   nm
Aptamer A                             100           µg/ml
Experimental results are entered into the design sheet and the DOE analysis conducted as in Figure 9.22.
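
The 80.267% prediction is easy to verify by substituting the best corner into the fitted model; a minimal Python check follows (the function name is just a convenient label).

def y_hat_climb1(B, C):
    """Climbing DOE number one model: pH (B) and Micelle Size (C) in coded units."""
    return 73.042 - 3.992*B + 4.108*C + 0.875*B*C

print(y_hat_climb1(B=-1, C=+1))   # 80.267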

Figure 9.22 Steps for Analyzing 2² Full Factorial Climbing DOE Number Two. Create the 2-factor, 3-replicate full factorial DOE template. Conduct the experiments in randomized order and enter the % Reduction in Tumor Size measurements in the appropriate cells of the design sheet.

Figure 9.22 Steps for Analyzing 2² Full Factorial Climbing DOE Number Two. Click on Stat → DOE → Factorial → Analyze Factorial Design on the top menu.

Figure 9.22 Steps for Analyzing 2² Full Factorial Climbing DOE Number Two. Select C7 % Reduction Tumor Sz for the Response variable in the dialogue box. Click OK.

Figure 9.22 Steps for Analyzing 2² Full Factorial Climbing DOE Number Two. The session window summarizes the regression analysis and ANOVA results for the Ŷ model.

Case Study XV: Optimization of Drug Delivery Factors by Response Surface Methodology
The % reduction in tumor size may be predicted from the following equation:
Ŷ = 73.883 + 11.083B + 3.783C + 1.517BC
Ŷ will be maximized at coded values of +1 and +1 for pH (B) and Micelle Size (C) respectively. We have moved pH in the wrong direction, but we have moved micelle size in the right direction. That is, % reduction in tumor size would have been maximized if we had left the pH at the baseline level of 2.6 identified in climbing DOE number one.
The predicted maximum reduction in tumor size is 90.266%. Our new anchor point on the DOE mountain in the original coded units is 0, –3, –1, 2 for Peptide Coupler C concentration, pH, Micelle Size and Aptamer A concentration respectively.
It has taken one month to climb the response surface of this product development and move the response variable from the benchmark level of 77% pancreatic tumor size reduction to 90%. Shanti is satisfied with the outcome and can now proceed to conduct confirmatory runs (step 14 in Figure 9.18) at the optimized factor levels summarized below.
                      Level    Unit
Peptide Coupler C     24       µg/ml
pH                    2.6      dimensionless
Micelle Size          110      nm
Aptamer A             100      µg/ml
This case demonstrates the power of Response Surface Methodology to identify factor settings which optimize the output response variable. It is well suited to product development and to commissioning new processes. For existing processes, especially those producing at capacity to meet market demand, the technique of Evolutionary Operation is better suited.
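
The same kind of substitution check applies to the second climbing model; the short sketch below (mirroring the earlier helper, with an illustrative name) confirms the 90.266% figure at B = +1 and C = +1.

def y_hat_climb2(B, C):
    """Climbing DOE number two model: pH (B) and Micelle Size (C) in coded units."""
    return 73.883 + 11.083*B + 3.783*C + 1.517*B*C

print(y_hat_climb2(B=+1, C=+1))   # 90.266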

References
Anderson, Mark J. and Whitcomb, Patrick J., DOE Simplified – Practical Tools for Effective Experimentation, Productivity Press, New York, NY, 2000.
Barker, Thomas B., Quality by Experimental Design, Marcel Dekker, New York, NY, 1994.
Barnett, E. Harvey, Introduction to Evolutionary Operation, Industrial and Engineering Chemistry, Vol. 52, No. 6, 1960, 500-503.
Box, George E.P., Evolutionary Operation: A Method for Increasing Industrial Productivity, Applied Statistics, 1957, 81-101.
Box, George E.P. and Draper, Norman R., Evolutionary Operation: A Statistical Method for Process Improvement, John Wiley & Sons, New York, NY, 1969.
Francis, Febe et al., Use of Response Surface Methodology for Optimizing Process Parameters for the Production of α-amylase by Aspergillus oryzae, Biochemical Engineering Journal, Vol. 15, 2003, 107-115.
John, Peter W.M., Statistical Design and Analysis of Experiments, Macmillan Publishing Co., New York, NY, 1971.
Montgomery, Douglas C., Introduction to Statistical Quality Control, 5th Edition, John Wiley & Sons, New York, NY, 2005.
Myers, Raymond H., Montgomery, Douglas C., and Anderson-Cook, Christine M., Response Surface Methodology, 3rd edition, John Wiley & Sons, Inc., Hoboken, NJ, 2009.
Plackett, R.L. and Burman, J.P., The Design of Optimum Multifactorial Experiments, Biometrika, Vol. 33, 1946, 305-325.
Schmidt, Stephen R. and Launsby, Robert G., Understanding Industrial Designed Experiments – Blending the Best of the Best Designed Experiment Techniques, 4th edition, Air Academy Press, Colorado Springs, CO, 1998.