FTC 2017 Short Courses
Short courses will be held on Wednesday, October 4, 2017 8:30am – 5:30pm.
Modern Nonparametrics and Data Discovery Science
Subhadeep Mukhopadhyay, Department of Statistical Science, Temple University
The critical and most exciting aspect of big data science is discovery, not “management” (Parzen, 2001). A bird’s-eye view of the statistical learning pipeline:
Data → Discovery → Modeling → Prediction → Inference.
Over the past two decades, tremendous advances have been made on the prediction and statistical inference fronts, leading to a growing number of textbooks written in this area (e.g., Efron and Hastie (2016); Hastie, Tibshirani, and Friedman (2009)). Somewhat surprisingly, however, data scientists struggle to find any textbook primarily devoted to the “Data Exploration and Discovery Machine” – a critical component of big data science. To address this gap, this course introduces a new statistical semantics and unified algorithm design principles, with a wide range of important data science applications in science, technology, and society. The course learning objectives seek to provide the foundational concepts of modern nonparametric statistics that simultaneously extend and integrate traditional and novel statistical methods for small and big data. The material borrows heavily from recent papers by the author, including Mukhopadhyay and Parzen (2014), Mukhopadhyay (2017, 2016, 2015), and Parzen and Mukhopadhyay (2013, 2012). This course is designed for data science researchers from industry, academia (including but not limited to computational biology, neuroscience, high energy physics, astronomy, economics, and the social sciences), and government who wish to equip themselves with the fundamentals of modern statistics in order to go from data to discovery.
Modern Response Surface Methods & Computer Experiments
Robert B. Gramacy, Virginia Tech
This course details statistical techniques at the interface between mathematical modeling via computer simulation, computer model meta-modeling (i.e., emulation/surrogate modeling), calibration of computer models to data from field experiments, and model-based sequential design and optimization under uncertainty (a.k.a. Bayesian optimization). The treatment will include some of the historical methodology in the literature, and canonical examples, but will primarily concentrate on modern statistical methods, computation, and implementation, as well as modern applications and data types and sizes. The course will return at several junctures to real-world experiments from the physical and engineering sciences, such as studying the aeronautical dynamics of a rocket booster re-entering the atmosphere; modeling the drag on satellites in orbit; designing a hydrological remediation scheme for water sources threatened by underground contaminants; and studying the formation of supernovae via radiative shock hydrodynamics. Course material will emphasize deriving and implementing methods over proving theoretical properties.
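To give a flavor of the emulation/surrogate-modeling idea mentioned above, here is a minimal sketch (not the course's own code): a zero-mean Gaussian process conditioned on a handful of simulator runs predicts the response at untried inputs. The function names, the toy "simulator" (a sine curve), and all settings are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # Squared-exponential covariance between two sets of 1-d inputs.
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_emulate(X_train, y_train, X_new, lengthscale=1.0, nugget=1e-6):
    # Posterior mean of a zero-mean GP conditioned on the training runs;
    # the nugget stabilizes the Cholesky/solve numerically.
    K = rbf_kernel(X_train, X_train, lengthscale) + nugget * np.eye(len(X_train))
    k_star = rbf_kernel(X_new, X_train, lengthscale)
    return k_star @ np.linalg.solve(K, y_train)

# Toy "simulator": an expensive computer code stood in for by sin(x).
X = np.linspace(0.0, 2 * np.pi, 8)   # 8 simulator runs (the design)
y = np.sin(X)
pred = gp_emulate(X, y, np.array([1.0]))  # cheap prediction at a new input
```

In practice the emulator is fit to runs of an expensive simulator, so each prediction replaces a costly evaluation; the course covers much richer versions of this idea.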
Quality by Design: Concepts, Tools, and Applications
Ron S. Kenett, KPA Group, Neaman Institute, Technion and Institute for Drug Research, Hebrew Univ.
David M. Steinberg, Tel Aviv University
The course is designed to present modern Quality by Design methods that combine recent research in the design of experiments with conceptual advances in product and process development, backed up by computer algorithms. In the pharmaceutical industry, encouragement by regulatory guidelines has also been an important source of support. The course is designed for both practitioners and researchers in the general areas of industrial statistics and quality. The examples used in the course will come from several types of industries, and the focus will be on Quality by Design concepts and tools. Background to the course, focused on the pharma industry, can be found in the blog written by the instructors: http://blogs.sas.com/content/jmp/tag/qbd/. Background to the course, in a general setting, can be found in the book by Kenett and Zacks, Modern Industrial Statistics with Applications in R, MINITAB and JMP (Wiley, 2014), www.wiley.com/go/modern_industrial_statistics.
Statistical Design of Clinical Trials
Li Tang, Ph.D., and Hui Zhang, Ph.D., St. Jude Children’s Research Hospital
Clinical study design is the formulation of studies, both interventional and observational, in medical, clinical, and other research areas involving human subjects. Different phases of clinical trials usually focus on different purposes. For example, when investigating a novel therapy or medication, safety (phase I), efficacy (phase II), comprehensive investigation of effectiveness and side effects (phase III), and the long-term effects and side effects of marketed drugs (phase IV) are studied at the corresponding phases. A similar strategy is also used for medical procedures and devices. The short course will provide an overview of statistical considerations for the design of phase I/II/III clinical trials. It will start with an introduction to basic statistical concepts, including random error, bias, sample size, type I error, and power. It will cover, but not be limited to, the context of clinical trials, dose-finding, study cohort and treatment allocation, treatment effect monitoring, estimation of clinical effects, and popular design frameworks such as factorial, crossover, adaptive, and sequential designs. Meta-analysis will also be briefly discussed. Numerous examples will be introduced to illustrate the corresponding topics. The course is designed for anyone (statistician or non-statistician) who wishes to learn more about the statistical considerations in designing modern phase I, II, and III clinical trials. Prerequisites include elementary statistics training and basic knowledge of clinical research.
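As a taste of the sample size and power topic mentioned above, here is a minimal sketch (not the course's own material) using the standard normal-approximation formula for comparing two means with a two-sided test; the effect size, standard deviation, and function name are illustrative assumptions.

```python
import math
from statistics import NormalDist

def n_per_group_two_means(delta, sigma, alpha=0.05, power=0.80):
    # Per-arm sample size for a two-sample comparison of means:
    # n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2,
    # rounded up to the next whole subject.
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detect a half-standard-deviation difference (delta = 0.5, sigma = 1)
# at the 5% two-sided significance level with 80% power.
n_per_group = n_per_group_two_means(delta=0.5, sigma=1.0)  # 63 per arm
```

Tightening the significance level, raising the target power, or shrinking the detectable difference all inflate the required sample size, which is why these choices are fixed at the design stage.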
Short Course Committee
Yongtao Cao (STAT, Chair)
Indiana University of Pennsylvania
Maria Weese (CPID)
Shan Ba (Q&P)
Procter & Gamble
Matt Pratola (SPES)
The Ohio State University