Ockuly, Rebecca A.
Weese, Maria L.
Smucker, Byran J.
Edwards, David J.
Chang, Le
Response Surface Methodology is a set of experimental design techniques for system and process optimization that is commonly employed as a tool in chemometrics. In the last twenty years, thousands of studies involving response surface experiments have been published. The goal of the present work is to study regularities observed among factor effects in these experiments. Using the Web of Science Application Programming Interface, we searched for journal articles associated with response surface studies and extracted over 20,000 records from all Science Citation Index and Social Science Citation Index disciplines between 1990 and the end of 2014. We took a random sample of these papers, stratified by the number of factors, and ended up with a total of 129 experiments and 183 response variables. Extracting the data from each publication, we reanalyzed the experiments and combined the results in a meta-analysis to reveal information about effect sparsity, heredity, and hierarchy. We empirically quantify these principles to provide a better understanding of response surface experiments, to calibrate experimenter expectations, and to guide researchers toward more realistic simulation scenarios and improved design construction.
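The three principles named in the abstract can be made concrete with a small sketch. The code below (our illustration, not the authors' analysis code; the term labels and the hypothetical 3-factor example are assumptions) quantifies effect sparsity as the fraction of candidate effects declared active, and checks strong heredity, i.e., that every active two-factor interaction has both parent main effects active.

```python
# Illustrative sketch: given the terms declared "active" in a fitted
# second-order response surface model, quantify effect sparsity and
# check strong heredity for two-factor interactions.

def sparsity(active, n_candidate_terms):
    """Fraction of candidate effects that are active (smaller = sparser)."""
    return len(active) / n_candidate_terms

def obeys_strong_heredity(active):
    """True if every active interaction (written 'A:B') has both parent
    main effects ('A' and 'B') active as well."""
    mains = {t for t in active if ":" not in t}
    for term in active:
        if ":" in term:
            a, b = term.split(":")
            if a not in mains or b not in mains:
                return False
    return True

# Hypothetical 3-factor experiment: 3 main effects + 3 interactions
# + 3 quadratic terms = 9 candidate terms; 3 were declared active.
active_terms = ["A", "B", "A:B"]
print(round(sparsity(active_terms, 9), 3))   # prints 0.333
print(obeys_strong_heredity(active_terms))   # prints True
```

A meta-analysis like the one described would tabulate such quantities across all 183 reanalyzed responses rather than for a single fitted model.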
Weese, Maria L.
Martinez, Waldyn G.
Jones-Farmer, L. Allison
The k-chart, based on support vector data description, has received recent attention in the literature. We review four different methods for choosing the bandwidth parameter, s, when the k-chart is designed using the Gaussian kernel. We provide results of extensive Phase I and Phase II simulation studies varying the method of choosing the bandwidth parameter along with the size and distribution of sample data. In very limited cases, the k-chart performed as desired. In general, we are unable to recommend the k-chart for use in a Phase I or Phase II process monitoring study in its current form. Copyright (c) 2017 John Wiley & Sons, Ltd.
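One widely used default for a Gaussian-kernel bandwidth, included here purely as an illustration (it is not necessarily one of the four methods the paper compares), is the median heuristic: set s to the median pairwise Euclidean distance of the Phase I sample.

```python
# Minimal sketch of the "median heuristic" for the Gaussian kernel
# K(x, y) = exp(-||x - y||^2 / s^2): take s as the median pairwise
# distance of the reference data.
import math

def median_bandwidth(X):
    """X: list of observations (each a list of floats).
    Returns the median pairwise Euclidean distance."""
    d = []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d.append(math.dist(X[i], X[j]))
    d.sort()
    n = len(d)
    return d[n // 2] if n % 2 else 0.5 * (d[n // 2 - 1] + d[n // 2])

# Four corner points of the unit square: pairwise distances are
# 1, 1, 1, 1, sqrt(2), sqrt(2), so the median is 1.0.
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(median_bandwidth(X))  # prints 1.0
```

As the abstract's simulation results suggest, chart performance can be quite sensitive to this choice, which is why the bandwidth-selection method is varied as a factor in the study.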
Weese, Maria L.
Smucker, Byran J.
Edwards, David J.
An important property of any experimental design is its power, defined roughly as its ability to detect active factors. For supersaturated designs, power is even more critical. We consider several popular supersaturated design construction criteria in the literature, propose several of our own, and perform a simulation study to evaluate them in terms of power. We use two analysis methods, forward selection and the Dantzig selector, and find that, although the Dantzig selector clearly outperforms forward selection, there is no clear winner among the design construction criteria.
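To make the first of the two analysis methods concrete, here is a simplified forward screening pass (our sketch, not the paper's implementation; strictly it is a stagewise approximation, since it regresses the residuals on one column at a time rather than refitting the full model at each step).

```python
# Hedged sketch of forward screening for a supersaturated design:
# at each step, add the column most correlated with the current
# residuals, then regress the residuals on that column.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def forward_screen(X, y, n_steps):
    """X: list of design columns (lists of floats, assumed centered);
    y: response vector. Returns indices of selected columns in order."""
    resid = list(y)
    selected = []
    for _ in range(n_steps):
        # pick the unselected column most correlated with the residuals
        j = max((k for k in range(len(X)) if k not in selected),
                key=lambda k: abs(dot(X[k], resid)) / dot(X[k], X[k]) ** 0.5)
        beta = dot(X[j], resid) / dot(X[j], X[j])
        resid = [r - beta * x for r, x in zip(resid, X[j])]
        selected.append(j)
    return selected

# Toy example: the response is driven by column 1 only.
X = [[1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
y = [2, 2, -2, -2]   # y = 2 * column 1
print(forward_screen(X, y, 1))  # prints [1]
```

The Dantzig selector, by contrast, solves a constrained L1-minimization over all columns simultaneously, which is one reason it copes better with the heavy column correlation inherent in supersaturated designs.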
We present a sequential method for experimenting with mixtures when the number of mixture components is large and the experimental goal is to gain information on a subset of components for the purpose of mixture product improvement. The model-independent method uses parameter estimates from the Cox mixture model, with a current product formulation serving as the starting point for experimentation. The advantages of our method are the ability to perform experiments using a split-plot-like structure and to apply it sequentially on an operating process.
Martinez, Waldyn G.
Weese, Maria L.
Jones-Farmer, L. Allison
In phase I of statistical process control (SPC), control charts are often used as outlier detection methods to assess process stability. Many of these methods require estimation of the covariance matrix, are computationally infeasible, or have not been studied when the dimension of the data, p, is large. We propose the one-class peeling (OCP) method, a flexible framework that combines statistical and machine learning methods to detect multiple outliers in multivariate data. The OCP method can be applied to phase I of SPC, does not require covariance estimation, and is well suited to high-dimensional data sets with a high percentage of outliers. Our empirical evaluation suggests that the OCP method performs well in high dimensions and is computationally more efficient and robust than existing methodologies. We motivate and illustrate the use of the OCP method in a phase I SPC application on an N = 354, p = 1917 dimensional data set containing Wikipedia search results for National Football League (NFL) players, teams, coaches, and managers. The example data set and R functions, OCP.R and OCPLimit.R, to compute the respective OCP distances and thresholds are available in the supplementary materials.
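The peeling idea itself is simple to sketch. In the simplification below (ours, not the paper's OCP.R code: distance from the current center stands in for the paper's one-class classifier, and the round count and peel fraction are illustrative), each round flags the most extreme fraction of points and refits the center on what remains.

```python
# Minimal sketch of iterative "peeling": repeatedly flag the most
# extreme fraction of points, then recompute the center without them.
import math

def peel(X, rounds=2, frac=0.25):
    """Return sorted indices of points flagged as outliers."""
    keep = list(range(len(X)))
    flagged = []
    for _ in range(rounds):
        # center of the points not yet peeled
        center = [sum(X[i][d] for i in keep) / len(keep)
                  for d in range(len(X[0]))]
        # sort remaining points by distance from the center, farthest first
        order = sorted(keep, key=lambda i: -math.dist(X[i], center))
        n_peel = max(1, int(frac * len(keep)))
        flagged += order[:n_peel]
        keep = order[n_peel:]
    return sorted(flagged)

# A tight cluster near the origin plus one gross outlier.
X = [[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1], [10, 10]]
print(peel(X, rounds=1, frac=0.2))  # prints [4]
```

Because each round conditions only on the points that survived the previous round, gross outliers cannot mask one another the way they can in a single-pass covariance-based distance, which is the appeal of peeling in high-contamination settings.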
An emergent stream of research in management employs configurational and holistic approaches to understanding macro and micro phenomena. In this study, we introduce mixture models, a related class of models, to organizational research and show how they can be applied to nonexperimental data. Specifically, we reexamine the long-standing research question concerning the CEO pay-firm performance relationship using a novel empirical approach, treating individual pay elements as components of a mixture, and demonstrate its utility for other research questions involving mixtures or proportions. Through this, we provide a step-by-step guide for other researchers interested in compositional modeling. Our results highlight that a more nuanced approach to understanding the influence of executive compensation on firm performance brings new insights to this research stream, showcasing the potential of compositional models for other literatures.
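A standard first step when modeling proportions such as pay elements is a log-ratio transform, which maps a composition off the simplex into unconstrained real space. The sketch below shows a centered log-ratio (CLR) transform as one common choice; it is our illustration, and the pay-mix numbers are hypothetical, so the paper's exact compositional pipeline may differ.

```python
# Sketch of the centered log-ratio (CLR) transform for a composition
# (positive parts summing to 1): clr(p)_i = log(p_i / g), where g is
# the geometric mean of the parts.
import math

def clr(parts):
    """Map a composition to real coordinates that sum to zero."""
    g = math.exp(sum(math.log(p) for p in parts) / len(parts))  # geometric mean
    return [math.log(p / g) for p in parts]

# Hypothetical pay mix: salary, bonus, and equity shares of total pay.
mix = [0.5, 0.3, 0.2]
z = clr(mix)
print([round(v, 3) for v in z])
print(abs(sum(z)) < 1e-9)  # CLR coordinates always sum to zero
```

Working in CLR coordinates lets ordinary regression machinery be applied to the transformed pay elements while respecting the fact that the raw shares are constrained to sum to one.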
The interdisciplinary science of drug and lead discovery continues to evolve, and it does so by bringing more technologies to the lab. In the last decade we saw the advent of technologies that change the way we approach target biology (the human genome project), build compound collections (combinatorial, automated and contract synthesis), conduct screening (high throughput and high content screens), and ultimately advance compounds from discovery to development. While the rate of NMEs is constant or even down, the “easy targets”, clearly a qualitative term, are done. Now we are faced with solving the challenges of hard targets. In the next decade we will see more technology driven advances as we continue to put safe and efficacious medicines in the market. What are these new technologies and approaches? A few of them are highlighted in this special issue of Current Pharmaceutical Biotechnology. First, David Dunn et al. discuss systems biology in “Taking a systems approach to the discovery of novel therapeutic targets and biomarkers”. Systems biology focuses on the integrated roles of cellular pathways and networks rather than single biomolecules as we have done previously. A systems biology approach requires technology that can generate and analyze large multi-dimensional data sets. Changes in phenotype are evaluated in high content phenotypic screens, and changes in transcriptional and protein networks are evaluated against collections of small molecules, peptides or siRNA to identify agents that modulate the cellular phenotypic signatures. Such agents can be used to analyze pathways and networks. The power of this technology is its ability to generate patterns of complex biological data. These patterns can then identify new pathways and targets that are relevant to drug discovery. Next, Attila Seyhan et al. have contributed a critical discussion on “RNAi screening for the discovery of novel modulators in human disease”. 
RNA interference (RNAi)-mediated gene inhibition allows for systematic loss-of-function screens to be conducted to interrogate the biological functions of specific genes and pathways. The use of RNAi in various screening formats and against various targets is discussed. In addition to finding new targets and cellular pathways and networks, we discuss new twists to screen old targets. Jim Beasley and Robert Swanson have contributed a paper on sophisticated counter-screening in “Pathway-specific, species, and sub-type counter-screening for better GPCR hits in HTS”. GPCRs are well-trodden targets with great success as medicines, but some notable failures as well. Time and cost may be saved if GPCR modulators are assessed in terms of signaling pathway selectivity, species selectivity, and selectivity against closely-related family members at the stage of high-throughput screening. Examples of how these kinds of selectivity have been addressed during screening are given. In the realm of academia-based screening, Rathnum Chagaturu et al. discuss the progress and challenges of academic screening in “Open access high throughput drug discovery in the public domain: A Mount Everest in the making”. In the current drug discovery landscape, the pharmaceutical industry is embracing strategies with academia to maximize their research capabilities and feed their drug discovery pipeline. The goals of academic research have therefore expanded from target identification and validation to probe discovery, chemical genomics, and compound library screening. This trend is reflected in the emergence of HTS centers in the public domain. The various facets of academic HTS centers as well as the implications on technology transfer and drug discovery are discussed, and a roadmap for successful drug discovery in the public domain is presented. Where do all of these new technological approaches to targets and screening take us? To the interrogation of a target through medicinal chemistry. 
To be sure, chemi