FTC 2019 Short Courses

Short courses will be held on September 25, 2019.

 

FULL DAY COURSES

Practical DOE: “Tricks of the Trade”

Martin Bezener and Shari Kraber, Stat-Ease, Inc.

In this dynamic short course, Stat-Ease consultants reveal tricks of their trade for making the most of statistical design and analysis of experiments. Come learn the secrets of design of experiments (DOE) success garnered over decades of statistical practice.

The presentation begins with simple but effective planning processes for factorial designs and response surface methods (RSM). The key to success is ‘right-sizing’ the experiment via calculation of power or, in the case of RSM (particularly mixture designs), fraction of design space (FDS).
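
As a point of reference, the power side of this sizing exercise can be sketched in a few lines of base R. The numbers below (a 2-unit effect, a standard deviation of 1.5) are assumed values chosen purely for illustration; the workshop itself would presumably do these calculations in Design-Expert.

    # Approximate power-based sizing for one two-level factor, treated as a
    # two-sample comparison of its low and high settings (illustration only).
    power.t.test(delta = 2,          # smallest effect worth detecting
                 sd = 1.5,           # anticipated process standard deviation
                 sig.level = 0.05,   # Type I error rate
                 power = 0.80)       # desired chance of detecting the effect
    # The reported n is observations per level, so a two-level factorial needs
    # roughly 2 * n runs; hidden replication makes this a rough guide rather
    # than an exact factorial power calculation.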

The workshop then continues with coverage of many special topics of interest to advanced DOE practitioners, including:

  • How to build irregularly-shaped DOE spaces that cover your region of interest
  • Deploying specialized models for mixture-process experiments
  • Using logistic regression to get the most from binomial data such as a pass/fail response (see the sketch after this list)
  • Clever tweaks to numerical optimization
  • Cool tools for augmenting a completed experiment to capture a peak beyond its reach
  • Other valuable tips and tricks, as time allows and interest is expressed
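
As a taste of the logistic-regression item above, the sketch below fits a binomial model to simulated pass/fail results from a two-factor experiment in base R. The factor names, coefficients, and run counts are hypothetical and are not taken from the course materials.

    # Hypothetical pass/fail experiment with two coded factors (temp, time).
    runs <- expand.grid(temp = c(-1, 1), time = c(-1, 1))    # 2^2 factorial
    runs <- runs[rep(1:4, each = 25), ]                      # 25 parts per run
    set.seed(3)
    p <- plogis(-0.5 + 1.2 * runs$temp + 0.8 * runs$time)    # assumed true model
    runs$pass <- rbinom(nrow(runs), size = 1, prob = p)      # simulated pass/fail
    fit <- glm(pass ~ temp * time, family = binomial, data = runs)
    summary(fit)   # which factors shift the odds of passing?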

Don’t miss this chance to sharpen your DOE skills with an eclectic array of specialized statistical tools! (Bring along your laptop; a software trial and data files will be provided.)

CONTACT: shari@statease.com

MARTIN BEZENER is a Director of R&D and Principal with Stat-Ease, Inc. He received his PhD in statistics from the University of Minnesota, Twin Cities, in 2015. While a graduate student, he spent a year working at the university’s statistical consulting center on a wide variety of projects. He also taught undergraduate-level statistics for several years. Martin enjoys putting his skills to use in the development of Design-Expert® software, as well as doing consulting projects and teaching workshops on DOE. In addition, Martin is co-author of the book Formulation Simplified: Finding the Sweet Spot through Design and Analysis of Experiments with Mixtures.

SHARI KRABER is a consultant with Stat-Ease, Inc. She received her MS in statistics from Rochester Institute of Technology in 2003 and worked for many years “in the trenches” of manufacturing as a process engineer and engineering supervisor for 3M and Frigidaire. As a process engineer, Shari mastered the statistical tools of design of experiments (DOE). Her industry experience ranges from appliance manufacturing to tape making to warehouse and distribution services. Shari has performed DOEs on ultrasonic welding, foam insulation, cabinet assembly, winders, slitters, and packaging equipment. As a Certified Professional in Training and Performance, she now devotes her considerable energy, experience and leadership skills to workshops on DOE, teaching others how to make breakthroughs.

Functional Data Analysis for Big Data

Jiguo Cao, Simon Fraser University, Canada

Functional data analysis (FDA) is a growing statistical field for analyzing curves, images, or other objects on manifolds, in which each random function is treated as a sample element. Functional data are common in many big data applications, such as spectral analysis, fitness data from wearable devices, air pollution monitoring, longitudinal studies, time-course gene expression, and brain imaging. This workshop will cover major FDA methods, including nonparametric regression, functional principal component analysis, functional linear regression models, and clustering and classification of functional data. All of these methods will be demonstrated with real data applications, and the accompanying R code will be provided and explained.
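
As a rough illustration of one of these methods, functional principal component analysis of densely sampled curves can be approximated by ordinary PCA on the discretized curve matrix. The base-R sketch below uses simulated curves on a common grid; the R code distributed in the workshop may instead rely on dedicated FDA packages, so treat this only as a minimal stand-in.

    # Approximate FPCA: ordinary PCA applied to curves evaluated on a grid.
    set.seed(1)
    t_grid <- seq(0, 1, length.out = 100)            # common evaluation grid
    curves <- t(replicate(50,                        # 50 simulated curves
      sin(2 * pi * t_grid) + rnorm(1) * t_grid + rnorm(100, sd = 0.1)))
    fpca <- prcomp(curves, center = TRUE)            # PCA of the curve matrix
    plot(t_grid, fpca$rotation[, 1], type = "l",
         xlab = "t", ylab = "first functional principal component")
    head(fpca$x[, 1:2])                              # per-curve FPC scores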

DR. JIGUO CAO is a Tier 2 Canada Research Chair in Data Science and an Associate Professor in the Department of Statistics and Actuarial Science at Simon Fraser University (SFU), Burnaby, BC, Canada. He also holds an appointment as an Associate Faculty Member in the School of Computing Science at SFU. Dr. Cao obtained his PhD in statistics at McGill University in 2006 and worked as a postdoctoral associate in statistical genetics at Yale University in 2006-2007 before joining SFU as an assistant professor in 2007. He has published over 60 papers in top statistical journals such as the Journal of the American Statistical Association (JASA) and Biometrics. He has given over 60 invited talks and seminars at international conferences and 4 workshops/summer schools in Australia, Canada, China, and the USA. He serves as an associate editor of four international statistical journals: Biometrics, the Canadian Journal of Statistics, the Journal of Agricultural, Biological, and Environmental Statistics, and Statistics and Probability Letters. His research interests include developing novel statistical methodologies and applications for complex data analysis and estimating parameters in differential equations from real data.

An Approach to Statistical Process Control that is New, Nonparametric, Simple, and Powerful

William Conover, Texas Tech University, and Victor Tercero-Gómez, Tecnológico de Monterrey, Mexico

To maintain the desired quality of a product or service, it is necessary to monitor the process that produces it. This monitoring method is called statistical process monitoring, or statistical process control, and it is in widespread use in industry. Extensive statistical methodology has been developed to detect when a process goes out of control while allowing for the natural variability that occurs when the process is in control. This short course introduces new nonparametric methods for monitoring data, whether univariate or multivariate, and whether the interest is in detecting a change of location, of scale, or both. These methods, based on sequential normal scores, are much simpler than the most popular nonparametric methods currently in use and have good power for detecting out-of-control observations. Sixteen new statistical tests are presented, with examples, 31 tables, and 48 figures to complete the instructions for their application. Detailed instructions will be given on using R to apply these methods.
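
To give a flavor of the idea, the sketch below is a simplified illustration of sequential normal scores, not the exact procedure taught in the course: each incoming observation is ranked against the data seen so far, the rank is converted to a standard normal score, and the scores are then monitored with a conventional chart such as a CUSUM. All data and chart constants are simulated and assumed.

    # Simplified sequential normal scores (illustration only).
    sns <- function(x) {
      z <- numeric(length(x))
      for (i in seq_along(x)) {
        r    <- rank(x[1:i])[i]        # rank of the newest point so far
        z[i] <- qnorm((r - 0.5) / i)   # convert the rank to a normal score
      }
      z
    }
    set.seed(2)
    x <- c(rnorm(30), rnorm(20, mean = 1.5))   # shift in location after obs 30
    z <- sns(x)
    k <- 0.5; h <- 5                           # typical CUSUM design constants
    cusum <- numeric(length(z))
    for (i in 2:length(z)) cusum[i] <- max(0, cusum[i - 1] + z[i] - k)
    which(cusum > h)                           # signals expected soon after obs 30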

DR. W. J. CONOVER is a Paul Whitfield Horn Professor of Statistics in the Department of Mathematics and Statistics at Texas Tech University. He received his Ph.D. in Mathematical Statistics from The Catholic University of America. He is a Fellow of the American Statistical Association, and a Highly Cited Researcher as declared by the ISI, with over 40,000 citations to his books and papers, mostly in the area of nonparametric statistics. His bio appears in Who’s Who in America as well as Who’s Who in the World. He has consulted with researchers at Los Alamos National Lab, Sandia National Lab, Pfizer Pharmaceuticals, Upjohn, and many other organizations.

DR. VÍCTOR G. TERCERO-GÓMEZ is a professor at the School of Engineering and Science at Tecnologico de Monterrey. He received his Ph.D. in Systems and Engineering Management from Texas Tech University and his Ph.D. in Engineering Sciences from Tecnologico de Monterrey, and he is a Certified Black Belt and Master Black Belt in Six Sigma from Tecnologico de Monterrey. He has more than 15 years of experience leading and coaching improvement projects in manufacturing and service organizations using statistical quality engineering approaches. He has published numerous papers in the fields of statistical process monitoring, nonparametric statistics, and quality engineering.

 

 

HALF DAY COURSES

Sample Size Theories and Applications in Quality and Reliability Engineering

Harry Guo

In quality and reliability engineering, determining the right sample size is often critical, since the cost of testing is usually high and obtaining prototypes is often not easy. If the sample size is too small, not much information can be obtained from the test, limiting one’s ability to draw meaningful conclusions; on the other hand, if it is too large, the information obtained may exceed what is needed, incurring unnecessary costs. When a quality or reliability test is designed solely on the basis of resource constraints, the results are not very useful, often yielding an estimate with a very large amount of uncertainty. Test design therefore always involves a trade-off between resource expenditure and confidence in the results.

In general, there are two methods for determining the sample size needed for a test. The first is based on the theory of confidence intervals and is referred to in this tutorial as the estimation approach; the other is based on controlling the Type I and Type II errors and is referred to as the risk control approach. The second is also called power and sample size analysis in the design of experiments (DOE) literature, since power is defined as one minus the Type II error probability.
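
For a concrete sense of the difference, both approaches reduce to familiar formulas in the simplest setting of a normal mean with known standard deviation. The numbers below (sigma = 2, a 0.5 half-width target, a shift of 1, 90% power) are assumed values chosen purely for illustration.

    sigma <- 2                                  # anticipated standard deviation

    # (1) Estimation approach: choose n so the 95% confidence interval for the
    #     mean has half-width no larger than E.
    E <- 0.5
    n_est <- ceiling((qnorm(0.975) * sigma / E)^2)

    # (2) Risk control (power) approach: choose n to detect a one-sided shift
    #     of size delta with a 5% Type I error and 90% power.
    delta <- 1
    n_pow <- ceiling(((qnorm(0.95) + qnorm(0.90)) * sigma / delta)^2)

    c(estimation = n_est, risk_control = n_pow)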

This tutorial will provide an overview of the methods that are available to help quality and reliability engineers determine the required sample size.

DR. HARRY GUO is the Technical Fellow on Reliability, Quality, and Statistics at FCA US LLC. His current work includes reliability growth, warranty data analysis, degradation and accelerated life testing, and applications of statistical methods in quality and reliability engineering. He provides internal training for release, design, quality, and reliability engineers in the company. Before joining FCA, he was the Director of Theoretical Development at ReliaSoft, where he was in charge of theory development for Weibull++, RGA, BlockSim, ALTA, and DOE++. He also conducted training and consulting on statistical methods for many different industries. Harry obtained his PhD in systems and industrial engineering from the University of Arizona in 2004. He is a Certified Reliability Professional (CRP), a Certified Reliability Engineer (CRE), and a Certified Quality Engineer (CQE). He has won multiple best paper awards from IIE (Institute of Industrial Engineers) and SRE (Society of Reliability Engineers), as well as the best tutorial award from RAMS 2018 (Reliability and Maintainability Symposium).