
SEMATECH e-Handbook of Statistical Methods


Design of Experiments (DOE) Tutorial

Design of experiments (DOE) is a powerful tool that can be used in a variety of experimental situations. DOE allows multiple input factors to be manipulated to determine their effect on a desired output (response). By manipulating multiple inputs at the same time, DOE can identify important interactions that may be missed when experimenting with one factor at a time.

When to Use DOE. Use DOE when more than one input factor is suspected of influencing an output. DOE can also be used to confirm suspected input/output relationships and to develop a predictive equation suitable for performing what-if analysis.

DOE Procedure. Begin by acquiring a full understanding of the inputs and outputs being investigated. In the tutorial's worked example, the negative effect of the interaction is most easily seen when the pressure is set to 50 psi and the temperature is set to 100 degrees.

Conduct and Analyze Your Own DOE. Conduct and analyze up to three factors and their interactions by downloading the 3-factor DOE template (Excel, 104 KB).
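To make the idea of main effects versus interactions concrete, here is a minimal sketch of effect estimation for a two-level, two-factor full factorial design. The pressure and temperature levels echo the tutorial's example, but the response values are invented purely for illustration.

```python
# Hypothetical 2^2 full factorial: Pressure (50/100 psi) x Temperature (low/high).
# Coded levels: -1 = low, +1 = high. Response values are invented for illustration.
runs = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
response = [20.0, 30.0, 25.0, 15.0]  # hypothetical yields, one per run

def effect(col):
    # Effect = average response at the +1 level minus average at the -1 level.
    hi = [y for (r, y) in zip(runs, response) if col(r) == +1]
    lo = [y for (r, y) in zip(runs, response) if col(r) == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

pressure_effect = effect(lambda r: r[0])
temperature_effect = effect(lambda r: r[1])
# The interaction column is the product of the two factor columns.
interaction_effect = effect(lambda r: r[0] * r[1])

print(pressure_effect, temperature_effect, interaction_effect)
```

With these made-up data, the pressure main effect is zero while the interaction is large, which is exactly the situation one-factor-at-a-time experimentation would miss.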

Basic Steps of Applying Reliability Centered Maintenance (RCM) Part II

Although there is a great deal of variation in the application of Reliability Centered Maintenance (RCM), most procedures include some or all of the seven steps shown below:

1. Prepare for the Analysis
2. Select the Equipment to Be Analyzed
3. Identify Functions
4. Identify Functional Failures
5. Identify and Evaluate (Categorize) the Effects of Failure
6. Identify the Causes of Failure
7. Select Maintenance Tasks

If we were to group the seven steps into three major blocks, these blocks would be: DEFINE (Steps 1, 2 and 3), ANALYZE (Steps 4, 5 and 6), and ACT (Step 7). The previous issue of Reliability HotWire discussed the DEFINE stage.

Identify Functional Failures. A functional failure is defined as the inability of an asset to fulfill one or more intended function(s) to a standard of performance that is acceptable to the user of the asset. Also, remember that functional failures do not have to be simple definitions or a single statement.

Basic Statistics: Descriptive Statistics

"True" Mean and Confidence Interval. Probably the most often used descriptive statistic is the mean.

Shape of the Distribution, Normality. More precise information can be obtained by performing one of the tests of normality to determine the probability that the sample came from a normally distributed population of observations (e.g., the Kolmogorov-Smirnov test or the Shapiro-Wilk W test). The graph allows you to evaluate the normality of the empirical distribution because it also shows the normal curve superimposed over the histogram.

Correlations: Purpose (What Is Correlation?). The most widely used type of correlation coefficient is Pearson r, also called linear or product-moment correlation.

Simple Linear Correlation (Pearson r). This line is called the regression line or least squares line, because it is determined such that the sum of the squared distances of all the data points from the line is the lowest possible.

Significance of Correlations.
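The two most-used quantities above, the confidence interval for the mean and Pearson r, can be computed by hand. The following sketch uses only the Python standard library; the data values are invented for the example, and the t critical value is hardcoded from a t-table rather than computed.

```python
# Illustrative computation of a 95% CI for the mean and Pearson r,
# using only the standard library. Sample data are invented.
from math import sqrt
from statistics import mean, stdev

x = [4.1, 5.0, 5.5, 4.8, 5.2, 4.9, 5.1, 4.7, 5.3, 4.4]
n = len(x)

# 95% confidence interval for the mean: xbar +/- t * s / sqrt(n)
# t_{0.975, df=9} ~= 2.262 (from a t-table; hardcoded here for simplicity)
xbar, s = mean(x), stdev(x)
half_width = 2.262 * s / sqrt(n)
ci = (xbar - half_width, xbar + half_width)

# Pearson product-moment correlation between x and a second variable y
y = [2.0, 2.6, 2.9, 2.5, 2.7, 2.4, 2.8, 2.3, 2.9, 2.1]
r = (sum((a - mean(x)) * (b - mean(y)) for a, b in zip(x, y))
     / ((n - 1) * stdev(x) * stdev(y)))

print(ci, round(r, 3))
```

In practice one would use a statistics package (e.g. `scipy.stats.pearsonr` in Python, or `cor.test` in R) rather than the explicit formula, but the arithmetic is exactly this.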

Exploratory data analysis

In statistics, exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis-testing task. Exploratory data analysis was promoted by John Tukey to encourage statisticians to explore the data, and possibly formulate hypotheses that could lead to new data collection and experiments. EDA is different from initial data analysis (IDA),[1] which focuses more narrowly on checking assumptions required for model fitting and hypothesis testing, handling missing values, and making transformations of variables as needed. EDA encompasses IDA.

Overview. Exploratory data analysis, robust statistics, nonparametric statistics, and the development of statistical programming languages facilitated statisticians' work on scientific and engineering problems.
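A minimal EDA sketch in Tukey's spirit is the five-number summary (minimum, lower hinge, median, upper hinge, maximum), which underlies the box plot. The data and the hinge convention below (excluding the median from each half for odd-length samples, one common choice) are assumptions for illustration.

```python
# Five-number summary with the standard library; sample data are invented.
from statistics import median

data = sorted([7, 15, 36, 39, 40, 41])

def five_number_summary(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    lower = xs[:mid]                              # lower half of the sample
    upper = xs[mid:] if n % 2 == 0 else xs[mid + 1:]  # upper half (median excluded if n odd)
    return (xs[0], median(lower), median(xs), median(upper), xs[-1])

print(five_number_summary(data))  # (min, lower hinge, median, upper hinge, max)
```

From these five numbers one can already judge center, spread, and skewness before any formal modeling, which is the point of EDA.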

Reliability Centered Maintenance (RCM): An Overview of Basic Concepts

Reliability Centered Maintenance (RCM) analysis provides a structured framework for analyzing the functions and potential failures of a physical asset (such as an airplane, a manufacturing production line, etc.) with a focus on preserving system functions, rather than preserving equipment. RCM is used to develop scheduled maintenance plans that will provide an acceptable level of operability, with an acceptable level of risk, in an efficient and cost-effective manner. According to the SAE JA1011 standard, which describes the minimum criteria that a process must comply with to be called "RCM," a Reliability Centered Maintenance process answers seven questions, the first of which is: What are the functions and associated desired standards of performance of the asset in its present operating context (functions)? This document provides a brief general overview of Reliability Centered Maintenance techniques and requirements.

Basic Analysis Procedure. The first step is to prepare for the analysis.

Comparison of statistical packages

The following tables compare general and technical information for a number of statistical analysis packages.

General information. This section contains basic information about each product (developer, license, user interface, etc.). Price note[1] indicates that the price was promotional (so higher prices may apply to current purchases), and note[2] indicates that lower/penetration pricing is offered to academic purchasers. (For example, give-away editions of some products are bundled with some student textbooks on statistics.)

The remaining sections compare operating system support and support for various ANOVA methods, regression methods, time series analysis methods, statistical charts and diagrams, and other abilities.

What Can Classical Chinese Poetry Teach Us About Graphical Analysis? - Statistics and Quality Data Analysis | Minitab

A famous classical Chinese poem from the Song dynasty describes the views of a mist-covered mountain called Lushan. The poem was inscribed on the wall of a Buddhist monastery by Su Shi, a renowned poet, artist, and calligrapher of the 11th century. Deceptively simple, the poem captures the illusory nature of human perception.

Written on the Wall of West Forest Temple --Su Shi

From the side, it's a mountain ridge.

Our perception of reality, the poem suggests, is limited by our vantage point, which constantly changes. In fact, there are probably as many interpretations of this famous poem as there are views of Mt. Lu. Centuries after the end of the Song dynasty, imagine you are traversing a misty mountain of data using the Chinese language version of Minitab 17... From the interval plot, you are 95% confident that the population mean is within the interval bounds. From the individual value plot, the data may contain an outlier (which could bias the estimate of the mean).

Cluster Analysis

R has an amazing variety of functions for cluster analysis. In this section, I will describe three of the many approaches: hierarchical agglomerative, partitioning, and model based. While there is no single best solution to the problem of determining the number of clusters to extract, several approaches are given below.

Data Preparation. Prior to clustering data, you may want to remove or estimate missing data and rescale variables for comparability.

# Prepare data
mydata <- na.omit(mydata)  # listwise deletion of missing values
mydata <- scale(mydata)    # standardize variables

Partitioning. K-means clustering is the most popular partitioning method.

# Determine the number of clusters from the within-groups sum of squares
wss <- (nrow(mydata)-1)*sum(apply(mydata, 2, var))
for (i in 2:15) wss[i] <- sum(kmeans(mydata, centers=i)$withinss)
plot(1:15, wss, type="b", xlab="Number of Clusters",
     ylab="Within groups sum of squares")

A robust version of K-means based on medoids can be invoked by using pam() instead of kmeans().

Hierarchical Agglomerative
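The excerpt breaks off at "Hierarchical Agglomerative". To show what that approach does, here is a minimal single-linkage agglomerative sketch on invented one-dimensional toy data, using only the Python standard library (in R one would typically compute a distance matrix with dist() and cluster it with hclust()).

```python
# Single-linkage agglomerative clustering on 1-D toy data (invented values).
points = [1.0, 1.2, 5.0, 5.1, 9.0]

# Start with each point in its own cluster, then repeatedly merge the
# two clusters whose closest members are nearest (single linkage).
clusters = [[p] for p in points]

def single_linkage(a, b):
    # Distance between clusters = smallest pairwise distance between members.
    return min(abs(x - y) for x in a for y in b)

while len(clusters) > 3:  # stop at 3 clusters for this example
    i, j = min(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda ij: single_linkage(clusters[ij[0]], clusters[ij[1]]),
    )
    clusters[i] += clusters.pop(j)  # merge cluster j into cluster i

print(sorted(sorted(c) for c in clusters))
```

Recording the merge order and merge distances, rather than stopping at a fixed count, is what produces the dendrogram usually drawn for hierarchical clustering.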
