2017-18 Colloquium Talks
Date | Speaker | Talk |
---|---|---|
April 19, 2018 | Steven Clontz, University of South Alabama |
Games Topologists Play Abstract: This presentation will begin with an overview of a two-player game using the real numbers that can be used to prove the uncountability of the reals. Using this motivation, a more general class of games called selection games will be introduced. For a selection game played on certain sets related to a topological space (e.g. its collection of open covers, its collection of dense subsets), the existence of an unbeatable strategy by the second player characterizes a topological property of the space (e.g. a covering property, a separability property). Unbeatable strategies that can be defined using "finite memory" (called Markov strategies) characterize stronger properties of the space, so the presenter's theorem demonstrating sufficient conditions for improving an unbeatable strategy to an unbeatable Markov strategy will be given. This talk is appropriate for graduate students and senior undergraduate students in mathematics. |
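For readers unfamiliar with the motivating game, here is one common formulation of a point-picking game that proves uncountability; this is a reconstruction of a standard argument and may differ in its details from the version used in the talk.

```latex
% A reconstruction of a standard point-picking game; the talk's version may differ.
Fix $S \subseteq (0,1)$. Players ONE and TWO alternately choose reals
\[
  a_1 < a_2 < a_3 < \cdots < b_3 < b_2 < b_1 ,
\]
with ONE picking the $a_n$ and TWO picking the $b_n$; let $\alpha = \sup_n a_n$.
ONE wins if $\alpha \in S$. If $S = \{s_1, s_2, \dots\}$ is countable, TWO wins by
playing $b_n = s_n$ whenever that is a legal move (and anything legal otherwise),
since then $\alpha \neq s_n$ for every $n$. Because ONE trivially wins when
$S = (0,1)$, the interval $(0,1)$ cannot be countable.
```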
April 5, 2018 | Ollie Nanyes, Bradley University |
Everywhere Wild Knots in 3-Space Abstract: An old open problem in topology is this: are there simple closed curves in 3-space so pathologically embedded that they cannot be deformed into a standard smooth unknotted circle (called "the unknot")? Note: the deformation in question is non-ambient; in general such deformations can NOT be extended to all of 3-space. Such a deformation is called a "non-ambient isotopy". We will discuss this open question and some work related to it. In the process we will present some aesthetically pleasing geometric constructions. |
April 3, 2018 | Augustine O'Keefe, Connecticut College |
Cellular Resolutions of Monomial Ideals Abstract: Given an ideal of polynomials, one can define a homological object called the minimal free resolution of the ideal. Minimal free resolutions give rise to many invariants that measure the complexity of the ideal. In some special cases, namely for squarefree monomial ideals, one can define a simplicial complex whose chain complex on the oriented faces returns the minimal free resolution of the monomial ideal. In this case we say that the monomial ideal has a cellular resolution. This notion gives a nice interplay between topological combinatorics and commutative algebra: using the combinatorial data of the generators encoded in a simplicial complex, we can describe the algebraic invariants of the minimal free resolution. In this talk we will formally define all of these objects, look at examples, and, if time permits, discuss some joint work with Uwe Nagel (University of Kentucky) in which we show that a certain class of monomial ideals has cellular resolutions. This will be an introductory talk that will be friendly for a general mathematics audience. |
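As a small illustration of the abstract's main notion (my example, not necessarily one from the talk), the edge ideal of a triangle has a cellular resolution supported on a path:

```latex
% A small example (mine, not necessarily from the talk).
Let $R = k[x,y,z]$ and $I = (xy,\, xz,\, yz)$. Label the vertices of a path
$v_1 - v_2 - v_3$ by the generators $xy$, $xz$, $yz$; both edges then carry the
label $\operatorname{lcm} = xyz$. The resulting cellular chain complex is the
minimal free resolution
\[
  0 \longrightarrow R(-3)^2 \longrightarrow R(-2)^3 \longrightarrow I \longrightarrow 0,
\]
whereas the full triangle on the three generators (the Taylor complex) supports a
resolution that is not minimal.
```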
March 20, 2018 | Allison Cynthia Fialkowski, University of Alabama at Birmingham |
A Comparison of Prediction Accuracy in Machine Learning Techniques Using Simulated Pharmacogenetic Data Abstract: Pharmacogenetic prediction requires the analysis of "Big Data" with thousands of heterogeneous variables from multiple sources, including clinical, genomic, metabolomic, gene expression, proteomic, neuroimaging, and other measures. A common challenge is how to create accurate statistical models when the number of predictors far exceeds the number of subjects, i.e., the "big p, small n" problem. This challenge is compounded when the study involves testing multiple drug treatments in a rare disease, i.e., in pharmacogenetic prediction of psychiatric disorders. These studies frequently compare 3-5 drugs, with a sample size per drug of less than 50. Additional problems include a heterogeneous patient population (a byproduct of attempts to maximize the sample size), confounding factors (adjunctive medicine, concomitant diseases), and missing data. Since most of an individual's genetic information is inherited in "blocks", the information at various genetic loci is most likely correlated within families or homogeneous populations. Standard regression techniques will yield biased coefficients in these cases. The traditional alternative is to adopt a univariate approach that tests the association of a single genetic factor (i.e., a single nucleotide polymorphism, SNP) and necessary clinical covariates with the outcome. However, this approach fails to consider the predictive contribution of groups of genetic factors, such as those located together in a gene. Statistical learning (machine learning, ML) based models provide effective methods for overcoming these challenges and may help develop decision support systems for individualized medicine. While classical statistical methods seek to explain variance or test for group effects, ML models focus on prediction at the individual level. ML techniques learn from data by continuously updating themselves to minimize model error with few a priori assumptions. The idea is that the model built from training on one sample of a population (the training set) can be applied to any other sample from the same population (the testing set) with similar predictive accuracy. Advantages of ML techniques include feature reduction through internal feature selection and application of K-fold cross-validation (CV) for model selection. ML methods such as regression trees, multivariate adaptive regression splines, support vector machines, and elastic net regression have been shown to have good predictive accuracy in high-dimensional studies with correlation between predictors. The purpose of this analysis was to compare various ML techniques in predictive performance and ability to detect drug treatment effects in the presence of multiple treatment groups, small sample sizes, and non-normal error terms. Elastic net (EN, α = 0, 0.25, 0.5, and 0.75) penalized regression and Multivariate Adaptive Regression Spline (MARS) models were compared to Bayesian lasso (BL) regression models using correlated SNP data and four clinical factors (drug treatment, sex, race, age). The data was simulated using the SimCorrMix R package. The minor allele frequencies (MAF) for 50 SNPs were randomly generated from a Uniform(0.1, 0.5) distribution. The marginal distribution for each genotype was created based on the assumption of Hardy-Weinberg equilibrium.
The correlation among SNPs was structured in 5 haplotype blocks of 10 so that the correlation between SNPs was based on distance, with those closer together having higher correlations. Ten SNPs were designated as having significant effects with the highest beta value assigned to the causal SNP. Three different treatment group sizes (2, 3, and 5) and three different sample sizes per group (100, 250, and 500) were considered for each of the six models. The outcome was generated by adding the product of the design matrix (formed from the SNP minor allele counts and four covariates) and the beta coefficients to an error term. The error term was given a Logistic distribution so that the probability of observing values in the tails of the distribution ("outliers") was larger than for a Normal distribution. The data was simulated using a sample size of n = 10,000 for 20,000 repetitions. The first 10,000 were used as training sets to build the models. The second 10,000 were used as validation sets to determine predictive root mean squared error (RMSE). The penalty term λ for the EN models and the number of terms in the MARS models were chosen in the training sets using 10 times K-fold CV (where K = 5 or 10). The "best" models were then applied to the validation sets. The six models were compared in each of the nine scenarios based on predictive RMSE, type I error, and power. The BhGLM package was used to implement the EN and Bayesian lasso models. The earth package was used to implement the MARS models. |
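For readers who want to experiment, here is a rough sketch of this simulation-and-tuning pipeline. It is my own Python approximation (the study itself used R packages such as SimCorrMix, BhGLM, and earth), and the block structure, effect sizes, and CV settings below are placeholders rather than the study's actual settings.

```python
# A rough sketch (not the author's code): simulate block-correlated SNP genotypes,
# add a phenotype with heavy-tailed (logistic) errors, tune an elastic net by
# K-fold CV on a training set, and report predictive RMSE on a validation set.
import numpy as np
from scipy.linalg import block_diag
from scipy.stats import norm
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(42)
n_snps, block_size = 50, 10
maf = rng.uniform(0.1, 0.5, n_snps)              # minor allele frequencies

# AR(1)-style correlation within each haplotype block, independence across blocks
ar1 = 0.8 ** np.abs(np.subtract.outer(np.arange(block_size), np.arange(block_size)))
corr = block_diag(*[ar1] * (n_snps // block_size))

def simulate(n):
    z = rng.multivariate_normal(np.zeros(n_snps), corr, size=n)
    u = norm.cdf(z)
    # Hardy-Weinberg genotype frequencies: (1-p)^2, 2p(1-p), p^2
    geno = (u > (1 - maf) ** 2).astype(int) + (u > 1 - maf ** 2).astype(int)
    beta = np.zeros(n_snps)
    beta[:10] = np.linspace(1.0, 0.1, 10)        # 10 causal SNPs
    y = geno @ beta + rng.logistic(scale=1.0, size=n)   # heavy-tailed errors
    return geno, y

X_tr, y_tr = simulate(1000)
X_va, y_va = simulate(1000)

# sklearn's l1_ratio plays the role of the elastic net mixing parameter alpha
model = ElasticNetCV(l1_ratio=[0.25, 0.5, 0.75, 1.0], cv=5).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_va) - y_va) ** 2))
print(model.l1_ratio_, model.alpha_, round(rmse, 3))
```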
March 13, 2018 | Subhash Bagui, University of West Florida |
Convergence of Known Distributions to Normality or Non-Normality and a Few Counterexamples to CLT Abstract: This talk presents an elementary informal technique for deriving the convergence of known distributions to limiting normal or non-normal distributions. Further, we discuss a few counterexamples useful in teaching the Central Limit Theorem (CLT). The presentation should be of interest to teachers and students of first-year graduate-level courses in probability and statistics. |
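As a flavor of the kind of derivation the talk has in mind (a standard textbook example, not necessarily one of the speaker's), the moment generating function gives the normal limit of standardized Poisson variables, while the Cauchy distribution supplies a classic counterexample when the CLT's moment conditions fail.

```latex
% A standard example and counterexample (not necessarily the speaker's).
If $X_\lambda \sim \mathrm{Poisson}(\lambda)$ and $Z_\lambda = (X_\lambda - \lambda)/\sqrt{\lambda}$, then
\[
  \mathbb{E}\!\left[e^{tZ_\lambda}\right]
  = \exp\!\left\{\lambda\left(e^{t/\sqrt{\lambda}} - 1\right) - t\sqrt{\lambda}\right\}
  = \exp\!\left\{\tfrac{t^2}{2} + O\!\left(\lambda^{-1/2}\right)\right\}
  \longrightarrow e^{t^2/2},
\]
so $Z_\lambda \Rightarrow N(0,1)$ as $\lambda \to \infty$. By contrast, if
$X_1, X_2, \dots$ are i.i.d.\ standard Cauchy, then $\bar{X}_n$ is again standard
Cauchy for every $n$: no centering and scaling produces a normal limit, because
the finite-variance hypothesis of the CLT fails.
```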
March 8, 2018 | Scott Carter, University of South Alabama |
A Categorical Description of Embedded Surfaces in 3-Dimensional Space Abstract: The result that I want to demonstrate today is that the free naturally monoidal, strictly 2-pivotal, weakly 3-pivotal, rotationally commutative, and triply tortile 3-category that has one object and is generated by a weakly self-invertible non-identity 1-morphism coincides with the 3-category of embedded surfaces in 3-dimensional space. Obviously, the talk will be devoted to understanding the definitions of the various adjectives. |
March 6, 2018 | Sandip Barui, University of Waterloo, Canada |
Cure Rate and Destructive Cure Rate Models under Proportional Hazards Lifetime Distributions Abstract: Cure rate models or long-term survival models play an important role in survival analysis and some other applied fields and have gained significance over time due to remarkable advancements in the drug development industry resulting in cures for a number of diseases. By assuming a Conway-Maxwell (COM) Poisson distribution under a competing causes scenario, we study a flexible cure rate model in which the lifetimes of non-cured individuals are described by a proportional hazards model. The baseline hazard is assumed to be defined by a Weibull distribution and a piecewise linear function. Further, we consider a destructive cure rate model in which the initial number of competing causes is modeled by a weighted Poisson distribution. We focus mainly on three special cases, viz., the destructive exponentially weighted Poisson, destructive length-biased Poisson, and destructive negative binomial cure rate models. Inference is then developed for right-censored data by the maximum likelihood method with the use of the expectation-maximization algorithm. An extensive simulation study is performed, under different scenarios including various censoring proportions, sample sizes, and lifetime parameters, in order to evaluate the performance of the proposed inferential method. Discrimination among some common cure rate models is then carried out using likelihood-based and information-based criteria. Finally, for illustrative purposes, the proposed model and associated inferential procedure are applied to analyze a cutaneous melanoma dataset. |
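To fix ideas, here is the standard competing-causes construction in its simplest (Poisson) form; this is a sketch of the general framework, with the COM-Poisson and weighted Poisson models of the talk replacing the probability generating function used below.

```latex
% The standard competing-causes skeleton (Poisson special case shown as a sketch).
Let $M$ be the number of competing causes with pgf $G(z) = \mathbb{E}[z^M]$, and,
given $M = m$, let the latent times $W_1, \dots, W_m$ be i.i.d.\ with survival
function $S(t)$; the observed time is $T = \min(W_1, \dots, W_m)$ (with $T = \infty$
when $M = 0$). Then
\[
  S_{\mathrm{pop}}(t) = \Pr(T > t) = \mathbb{E}\!\left[S(t)^M\right] = G\!\big(S(t)\big).
\]
For $M \sim \mathrm{Poisson}(\theta)$ this gives $S_{\mathrm{pop}}(t) = \exp\{-\theta(1 - S(t))\}$
with cure fraction $S_{\mathrm{pop}}(\infty) = e^{-\theta}$; under proportional hazards,
$S(t) = S_0(t)^{\exp(x^\top\beta)}$ for a baseline survival function $S_0$.
```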
March 1, 2018 | Soo-Young Kim, Colorado State University |
Monotone Variance Function Estimation using Regression Splines Abstract: The importance of estimating the variance function in a heteroscedastic regression model has been emphasized over the years because ignoring heteroscedasticity in regression analysis may lead to a substantial loss of efficiency and incorrect inference. When the variance function is monotone, its estimation becomes more interesting and challenging. We consider maximum likelihood estimation of a smooth monotone variance function using constrained quadratic regression splines with iteratively re-weighted cone projections. Parametrically modeled covariates affecting the variance function are readily incorporated. A maximum likelihood estimate using a penalty term is also discussed as an extension. We show that the constrained spline estimator attains the optimal rate of convergence. Simulations show that our proposed method performs well compared to existing methods in a variety of scenarios. In addition, the estimated variance function provides improved inference about the mean function, in terms of coverage probability and average interval length for interval estimates. The utility of the method is illustrated through the analysis of real datasets. |
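One way to make the estimation target concrete is the following sketch of a standard formulation; the talk's constrained spline estimator involves additional details not shown here.

```latex
% A sketch of one standard formulation of the problem (the talk's details differ).
Model $y_i = f(x_i) + \sigma(x_i)\,\varepsilon_i$ with $\varepsilon_i \overset{\text{iid}}{\sim} N(0,1)$.
Given residuals $r_i = y_i - \hat f(x_i)$, the variance function is estimated by
maximizing, over monotone spline functions $\sigma^2(\cdot)$,
\[
  \ell(\sigma^2) \;=\; -\tfrac{1}{2}\sum_{i=1}^n
  \left\{ \log \sigma^2(x_i) + \frac{r_i^2}{\sigma^2(x_i)} \right\},
\]
which leads naturally to the iteratively re-weighted fits mentioned in the abstract.
```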
February 27, 2018 | Christine Lee, The University of Texas at Austin |
Understanding Quantum Link Invariants via Surfaces in 3-Manifolds Abstract: Quantum link invariants lie at the intersection of hyperbolic geometry, 3-dimensional manifolds, quantum physics, and representation theory, where a central goal is to understand their connection to other invariants of links and 3-manifolds. In this talk, we will introduce the colored Jones polynomial, an important example of a quantum link invariant. We will discuss how studying properly embedded surfaces in a 3-manifold provides insight into the topological and geometric content of the polynomial. In particular, we will describe how relating the definition of the polynomial to surfaces in the complement of a link shows that it determines boundary slopes and bounds the hyperbolic volume of many links, and we will explore the implications of this approach for these classical invariants. |
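For orientation, the simplest member of the family, the (uncolored) Jones polynomial, can be computed from the Kauffman bracket; the sketch below is standard background rather than material specific to the talk.

```latex
% Standard background: the Kauffman bracket definition of the Jones polynomial.
The bracket $\langle D \rangle \in \mathbb{Z}[A^{\pm 1}]$ of a link diagram $D$ is
determined by
\[
  \langle \text{crossing} \rangle = A\,\langle D_0 \rangle + A^{-1}\,\langle D_\infty \rangle,
  \qquad
  \langle D \sqcup \bigcirc \rangle = (-A^{2} - A^{-2})\,\langle D \rangle,
  \qquad
  \langle \bigcirc \rangle = 1,
\]
where $D_0$ and $D_\infty$ are the two smoothings of a crossing. With writhe $w(D)$,
\[
  V_L(t) \;=\; \left. (-A)^{-3\,w(D)} \,\langle D \rangle \right|_{A = t^{-1/4}}
\]
is the Jones polynomial of the link $L$.
```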
February 22, 2018 | Jay Pantone, Dartmouth College |
Sorting Permutations with C-Machines Abstract: In his seminal work, The Art of Computer Programming, Donald Knuth asks the reader to prove that the permutations that can be sorted by a stack are exactly those that avoid the pattern 231. This is one of the earliest questions from a field now known as Permutation Patterns that has been steadily growing and developing in the decades since. In this talk, we'll return to and generalize this genesis question by introducing a sorting device called a C-machine which generalizes stacks and queues from computer science. After introducing the field of Permutation Patterns, I'll show how the framework of C-machines unlocks a structural description that helps us answer many intriguing questions. For several cases where we can't find exact answers, I'll describe two computational and experimental methods that are guiding the way toward solutions: automated conjecturing of generating functions, and differential approximation of asymptotic behavior. |
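The following short sketch (my own, not the speaker's code) makes Knuth's exercise concrete: it pushes each permutation through a single stack and verifies, by brute force for small n, that the stack-sortable permutations are exactly the 231-avoiding ones, counted by the Catalan numbers.

```python
# A minimal sketch (not from the talk): Knuth's stack-sorting procedure and a
# brute-force check that the stack-sortable permutations of length n are
# exactly the 231-avoiding ones.
from itertools import permutations, combinations

def stack_sort(perm):
    """Pass a permutation through a single stack, popping whenever the
    stack's top is smaller than the next input element."""
    stack, output = [], []
    for x in perm:
        while stack and stack[-1] < x:
            output.append(stack.pop())
        stack.append(x)
    output.extend(reversed(stack))
    return output

def avoids_231(perm):
    """True if no indices i < j < k satisfy perm[k] < perm[i] < perm[j]."""
    return not any(perm[k] < perm[i] < perm[j]
                   for i, j, k in combinations(range(len(perm)), 3))

for n in range(1, 7):
    sortable = {p for p in permutations(range(1, n + 1))
                if stack_sort(p) == sorted(p)}
    avoiders = {p for p in permutations(range(1, n + 1)) if avoids_231(p)}
    assert sortable == avoiders
    print(n, len(sortable))   # 1, 2, 5, 14, 42, 132: the Catalan numbers
```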
February 20, 2018 | Yisu Jia, Clemson University |
Models for Stationary Count Time Series Abstract: There has been growing interest in modeling stationary series that have discrete marginal distributions. Count series arise when describing storm numbers, accidents, wins by a sports team, disease cases, etc. Superpositioning methods have proven useful in devising stationary count time series having Poisson and binomial marginal distributions. Here, properties of this model class are established and the basic idea is developed. Specifically, we show how to construct stationary series with binomial, Poisson, and negative binomial marginal distributions; other marginal distributions are possible. A second model for stationary count time series is then proposed. The model uses a latent Gaussian sequence and a distributional transformation to build stationary series with the desired marginal distribution. The autocovariance functions of the count series are derived using a Hermite polynomial expansion. This model has proven to be quite flexible. It can have virtually any marginal distribution, including generalized Poisson and Conway-Maxwell. As an application, we also study trends in the presence/absence of snow cover (not depths) in Napoleon, North Dakota from 1966-2015 via satellite data. Statistically, a two-state Markov chain model with periodic dynamics is developed to describe snow cover presence and its changes. The results indicate increasing snow coverage in Napoleon, North Dakota. |
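The second model described above is easy to prototype. Here is a minimal sketch (my illustration, not the authors' code) that pushes a stationary Gaussian AR(1) series through the probability integral transform to obtain a stationary count series with an exact Poisson marginal.

```python
# A minimal sketch of the latent Gaussian construction described above:
# transform a stationary Gaussian AR(1) series into a stationary count series
# with an exact Poisson marginal distribution.
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(0)
n, phi, lam = 10_000, 0.7, 3.0           # length, AR(1) coefficient, Poisson mean

# stationary AR(1) with Var(Z_t) = 1
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):
    z[t] = phi * z[t - 1] + np.sqrt(1 - phi**2) * rng.normal()

x = poisson.ppf(norm.cdf(z), lam).astype(int)   # X_t = F^{-1}(Phi(Z_t))

print(x.mean(), x.var())                 # both close to lam = 3
print(np.corrcoef(x[:-1], x[1:])[0, 1])  # positive lag-1 autocorrelation (below phi)
```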
February 8, 2018 | Mrinal Roychowdhury, University of Texas - Rio Grande Valley |
Optimal Quantization Abstract: The basic goal of quantization for a probability distribution is to reduce the typically uncountable set of values describing the distribution to a finite set, and thus to approximate a continuous probability distribution by a discrete one. Though the term 'quantization' has been known to electrical engineers for several decades, it is still a relatively new area of research for the mathematical community. In my presentation, I will first give the basic definitions that one needs to know to work in this area. Then I will give some examples and talk about quantization of mixed distributions, which are an exciting new area for optimal quantization. I will also mention some open problems relating to mixed distributions. |
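To make the objects concrete, here is a standard textbook example (not from the talk): the n-th quantization error for squared Euclidean distance, together with its value for the uniform distribution on the unit interval.

```latex
% A standard example (not from the talk): n-point quantization error.
For a Borel probability measure $P$ on $\mathbb{R}^d$, the $n$-th (quadratic)
quantization error is
\[
  V_n(P) \;=\; \inf_{\alpha \subset \mathbb{R}^d,\; \#\alpha \le n}
  \int \min_{a \in \alpha} \lVert x - a \rVert^2 \, dP(x).
\]
For $P$ uniform on $[0,1]$, the optimal $n$-point set is
$\alpha_n = \{\tfrac{2i-1}{2n} : i = 1, \dots, n\}$ and $V_n(P) = \tfrac{1}{12 n^2}$.
```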
February 6, 2018 | Jeff Mermin, Oklahoma State University |
Monomial Resolutions and CW Complexes Abstract: Let R be a polynomial ring, and I a homogeneous ideal. Almost all algebraic and geometric information about I is encoded in a related object called a minimal free resolution: a long exact sequence of free modules terminating in I. Finding these free resolutions is thus a central problem in modern commutative algebra. I'll work some examples showing that the problem is prohibitively difficult even for monomial ideals, and discuss modern techniques that can (in good situations) describe the resolution of a monomial ideal in terms of a suitable topological object, such as a simplicial or CW complex. The talk should be accessible to students who are comfortable computing the homology of a simplicial complex. |
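A tiny example of the central object (standard, not specific to the talk): the minimal free resolution of the ideal (x, y), together with the reason naive constructions are usually too big.

```latex
% A standard small example (not specific to the talk).
For $R = k[x,y]$ and $I = (x,y)$, the Koszul complex
\[
  0 \longrightarrow R(-2)
  \xrightarrow{\;\begin{pmatrix} -y \\ x \end{pmatrix}\;}
  R(-1)^2 \xrightarrow{\;(x \;\; y)\;} I \longrightarrow 0
\]
is the minimal free resolution of $I$. For a monomial ideal with $m$ generators, the
Taylor complex (supported on the full $(m-1)$-simplex) always resolves the ideal but
has $2^m - 1$ free summands, typically far more than the minimal resolution; the
CW-complex techniques in the talk aim at much smaller supporting complexes.
```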
November 9, 2017 | William Hardesty, Louisiana State University |
The Representation Theory of p-Restricted Lie Algebras Abstract: A p-restricted Lie algebra is a Lie algebra over an algebraically closed field k of characteristic p > 0 which is equipped with an additional structure called the "p-power map". Unlike the characteristic 0 case, the study of finite-dimensional representations of (restricted) reductive Lie algebras (such as gl_n) is an incredibly deep and complicated subject. The goal of the talk will be to give a brief overview of this area of research. I will begin with a quick review of the representation theory of complex Lie algebras. Then I will define and give some important examples of p-restricted Lie algebras and their representations, as well as a summary of their basic properties. In the remainder of the talk, various problems of interest to representation theorists will be discussed, such as computing extension groups, multiplicity formulas, and radical filtrations for various types of representations. If time permits, I may also mention some results from my ongoing joint work with V. Nandakumar. |
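A compact illustration of the definition (my summary of the standard axioms, not material taken from the talk):

```latex
% A standard example of the definition (my summary, not from the talk).
A $p$-power map $x \mapsto x^{[p]}$ on a Lie algebra $\mathfrak{g}$ over $k$,
$\operatorname{char} k = p$, satisfies
\[
  (\lambda x)^{[p]} = \lambda^p x^{[p]}, \qquad
  \operatorname{ad}\!\big(x^{[p]}\big) = (\operatorname{ad} x)^p ,
\]
together with a Jacobson-type formula expanding $(x+y)^{[p]}$. The basic example is
$\mathfrak{gl}_n(k)$ with $A^{[p]} := A^p$ (the ordinary matrix power); in any
associative algebra of characteristic $p$ one has
$(\operatorname{ad} A)^p = \operatorname{ad}(A^p)$, so the axioms hold.
```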
November 2, 2017 | Jonas Hartwig, Iowa State University |
Galois Orders and Gelfand-Zeitlin Modules Abstract: Galois orders form a class of noncommutative algebras introduced by Futorny and Ovsienko in 2010. They appear in many places in Lie theory and quantum algebra. We present new constructions of Galois orders and their representations, generalizing recent results by several different authors. I will end by stating some current open problems in the area. |
October 19, 2017 | H. N. Nagaraja, College of Public Health, The Ohio State University |
Some Applications of Ordered Data Models Abstract: We introduce three probability models for ordered data, viewing them as (i) order statistics, (ii) record values, and (iii) order statistics and their concomitants. Applications of spacings of order statistics to auction theory and actuarial science will be illustrated with two examples: (a) properties of expected rent in regular and reverse auctions and (b) finding approximations to finite-time ruin probabilities for a company with large initial reserves. The problem of estimating mobility rates in search models using record value theory will be discussed. We will see how the concept of concomitants of order statistics can be used to model data-snooping biases in search engines, two-stage designs, and tests of financial asset pricing models. Some recent work on order statistics and spacings will be introduced. |
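As one concrete instance of the spacings mentioned above (a classical fact, not a result from the talk): for exponential samples, the normalized spacings are again independent exponentials, which is what makes them so tractable in ruin and auction calculations.

```latex
% A classical fact about spacings (not a result from the talk).
If $X_1, \dots, X_n$ are i.i.d.\ $\mathrm{Exp}(\lambda)$ with order statistics
$X_{(1)} < \cdots < X_{(n)}$ and spacings $D_i = X_{(i)} - X_{(i-1)}$ (with $X_{(0)} = 0$),
then the $D_i$ are independent and $D_i \sim \mathrm{Exp}\big((n-i+1)\lambda\big)$, so that
\[
  \mathbb{E}\big[X_{(r)}\big] \;=\; \frac{1}{\lambda}\sum_{j = n-r+1}^{n} \frac{1}{j}.
\]
```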
October 18, 2017 | H. N. Nagaraja, College of Public Health, The Ohio State University |
Fourth Satya Mishra Memorial Lecture (this talk is aimed at a general audience!)
Statistical Methods for Public Health and Medicine Abstract: Probabilistic modeling, statistical design, and inferential methods form the backbone of the remarkable advances in medicine and public health. General goals of inference are hypothesis testing (as in clinical trials), estimation (of risk for a disease), and prediction (of a future condition). We illustrate them by introducing examples, data types, statistical models, and methods. With summary statistics on commonly used statistical concepts in major public health and medical journals, we discuss popular statistical methods that drive current research in public health and medical science. We examine trends in biostatistical research and observe the evolving field of data science and bioinformatics. |
September 21 & 28, 2017 | Selvi Beyarslan, University of South Alabama |
Algebraic Properties of Toric Rings of Graphs I & II Abstract: Let G = (V,E) be a simple graph. We investigate the Cohen-Macaulayness and algebraic invariants, such as the Castelnuovo-Mumford regularity and the projective dimension, of the toric ring k[G] via those toric rings associated to induced subgraphs of G. |
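A quick example of the object in question (standard, not from the talk): the toric (edge) ring of the 4-cycle and its single defining relation.

```latex
% A standard example (not from the talk): the toric ring of the 4-cycle.
For a graph $G$ on vertices $\{1,\dots,n\}$, the toric ring is
$k[G] = k[\, x_i x_j : \{i,j\} \in E(G) \,] \subseteq k[x_1,\dots,x_n]$,
the image of the map $k[e_{ij} : \{i,j\} \in E(G)] \to k[x_1,\dots,x_n]$,
$e_{ij} \mapsto x_i x_j$. For the $4$-cycle with edges $12, 23, 34, 14$, the kernel
(the toric ideal) is generated by the single binomial
\[
  e_{12}\,e_{34} - e_{23}\,e_{14},
\]
coming from the even closed walk around the cycle.
```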
September 19, 2017 | Kodai Wada, Waseda University, Tokyo, Japan |
Linking Invariants of Virtual Links Abstract: In the first half of the talk, we introduce the notion of an even virtual link and define a certain linking invariant of even virtual links, which is similar to the linking number. Here, a virtual link diagram is even if the virtual crossings divide each component into an even number of arcs. The set of even virtual link diagrams is closed under classical and virtual Reidemeister moves, and it contains the set of classical link diagrams. For two even virtual link diagrams, the difference between their linking invariants gives a lower bound for the minimal number of forbidden moves needed to deform one into the other. Moreover, we give an example which shows that the lower bound is best possible. In the second half of the talk, we define a polynomial invariant of any virtual link which generalizes the linking invariant above. The polynomial invariant is a natural extension of index-type invariants of virtual knots, such as the writhe polynomial and the affine index polynomial. |
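For comparison with the invariant defined in the talk, recall the classical linking number of a two-component link diagram (standard background):

```latex
% Standard background: the classical linking number.
For a two-component link diagram $D = J \cup K$,
\[
  \operatorname{lk}(J, K) \;=\; \tfrac{1}{2} \sum_{c} \varepsilon(c),
\]
where the sum runs over the crossings between $J$ and $K$ and $\varepsilon(c) = \pm 1$
is the sign of the crossing; it is invariant under all Reidemeister moves.
```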
September 7 & 14, 2017 | Bin Wang, University of South Alabama |
A Mixture Model for Next-Generation Sequencing Gene Expression Data I & II Abstract: Gene expression data are usually highly skewed, with many weakly expressed or non-expressed genes. As a result, gene expression data profiled using next-generation sequencing (NGS) techniques usually contain a large number of zero measurements. We propose to model NGS data using a mixture model. With data binning, the expectation-maximization algorithm performs well in estimating the distributions of gene profiles. We also propose a novel normalization method by assuming the existence of a common distribution among all gene profiles. |
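As a toy version of the kind of zero-heavy mixture fitting described above (a much-simplified stand-in for the authors' model, using a zero-inflated Poisson rather than their mixture), the EM iteration looks like this:

```python
# A toy illustration (not the authors' model): EM for a zero-inflated Poisson
# mixture, a much-simplified stand-in for the zero-heavy mixtures described above.
import numpy as np

rng = np.random.default_rng(1)
n, true_pi, true_lam = 5000, 0.3, 4.0
x = np.where(rng.random(n) < true_pi, 0, rng.poisson(true_lam, n))

pi, lam = 0.5, 1.0                     # initial guesses
for _ in range(200):
    # E-step: probability that each observed zero is a structural ("not expressed") zero
    r = np.where(x == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
    # M-step: update the mixing weight and the Poisson mean
    pi = r.mean()
    lam = ((1 - r) * x).sum() / (1 - r).sum()

print(round(pi, 3), round(lam, 3))     # close to 0.3 and 4.0
```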
August 24, 2017 | Andrei Pavelescu, University of South Alabama |
Complete Minors of Self-Complementary Graphs Abstract: Some topological properties of graphs are connected to the existence of complete minors. For a simple non-oriented graph G, a minor of G is any graph that can be obtained from G by a sequence of edge deletions and contractions. In this talk, we show that any self-complementary graph with n vertices contains a K[(n+1)/2] minor. We also prove that this bound is the best possible and present some consequences about which self-complementary graphs are planar, intrinsically linked or intrinsically knotted. This is joint work with Dr. Elena Pavelescu. |
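A tiny sanity check of the statement for the smallest interesting case (my illustration using networkx, not code from the talk): the 5-cycle is self-complementary and has a complete minor on (5+1)/2 = 3 vertices.

```python
# A small illustration (mine, not from the talk): the 5-cycle C_5 is
# self-complementary, and contracting two of its edges yields K_3, exhibiting
# a complete minor on (5+1)/2 = 3 vertices.
import networkx as nx

g = nx.cycle_graph(5)                                     # C_5
print(nx.is_isomorphic(g, nx.complement(g)))              # True: self-complementary

h = nx.contracted_edge(g, (0, 1), self_loops=False)
h = nx.contracted_edge(h, (2, 3), self_loops=False)
print(nx.is_isomorphic(h, nx.complete_graph([0, 2, 4])))  # True: K_3 minor
```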