2018 Mathematics and Systems Science Forum


Contents

Title: A Survey on the CAESAR Finalists

Tao Huang,(Nanyang Technological University, Singapore)

2018.06.06 10:00-11:00

【Abstract】In this talk, I will survey the authenticated encryption schemes among the finalists of the CAESAR competition: ACORN, Ascon, AEGIS, MORUS, OCB, COLM and Deoxys-II. The seven authenticated encryption schemes are divided into three categories according to their use cases: lightweight applications, high-performance applications and defense in depth. For each design, I will discuss the latest specification, the design method and the notable features. The software and hardware performance data will also be discussed for each use case.


Title: Boomerang Connectivity Table: A New Cryptanalysis Tool

Tao Huang,(Nanyang Technological University, Singapore)

2018.06.07 11:00-12:00

【Abstract】A boomerang attack is a cryptanalysis framework that regards a block cipher E as the composition of two sub-ciphers E_1 and E_0 and builds a particular characteristic for E with probability p^2q^2 by combining differential characteristics for E_0 and E_1 with probability p and q, respectively. Crucially, the validity of this figure rests on the assumption that the characteristics for E_0 and E_1 can be chosen independently. Indeed, Murphy has shown that independently chosen characteristics may turn out to be incompatible. On the other hand, several researchers have observed that the probability can be improved to p or q around the boundary between E_0 and E_1 by considering a positive dependency between the two characteristics, e.g., the ladder switch and the S-box switch by Biryukov and Khovratovich. This phenomenon was later formalised by Dunkelman et al. as the sandwich attack, which regards E as E_1\circ E_m \circ E_0, where E_m satisfies some differential propagation among four texts with probability r, and the entire probability is p^2q^2r.
We revisit the issue of the dependency of the two characteristics in E_m, and propose a new tool called the Boomerang Connectivity Table (BCT), which evaluates r in a systematic and easy-to-understand way when E_m is composed of a single S-box layer. With the BCT, previous observations on the S-box, including the incompatibility, the ladder switch and the S-box switch, are represented in a unified manner. Moreover, the BCT can detect a new switching effect, which shows that the probability around the boundary may be even higher than p or q. To illustrate the power of BCT-based analysis, we improve boomerang attacks against Deoxys-BC, and disclose the mechanism behind an unsolved probability amplification for generating a quartet in SKINNY. Lastly, we discuss the issue of searching for S-boxes with good BCTs and extending the analysis to modular addition.
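As a concrete illustration of the table introduced in the abstract, the sketch below computes the BCT of a single 4-bit S-box, using the standard entry definition BCT[di][do] = #{x : S^{-1}(S(x) ^ do) ^ S^{-1}(S(x ^ di) ^ do) = di}. The PRESENT S-box and the function names are chosen here purely for illustration; they are not taken from the paper.

```python
# The 4-bit PRESENT S-box, used only as a familiar example.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def bct(sbox):
    """BCT[di][do] = #{x : S^-1(S(x) ^ do) ^ S^-1(S(x ^ di) ^ do) = di}."""
    n = len(sbox)
    inv = [0] * n                      # inverse S-box
    for x, y in enumerate(sbox):
        inv[y] = x
    table = [[0] * n for _ in range(n)]
    for di in range(n):                # input difference
        for do in range(n):            # output difference
            for x in range(n):
                if inv[sbox[x] ^ do] ^ inv[sbox[x ^ di] ^ do] == di:
                    table[di][do] += 1
    return table

table = bct(PRESENT_SBOX)
print(table[0][0])  # 16
```

The first row and first column always equal 2^n: with a zero input or output difference the boomerang returns trivially. Large nonzero entries correspond to the switching effects the abstract describes.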


Title: A Complete Semidefinite Algorithm for Detecting Copositive Matrices and Tensors

Zi Yang,(UCSD)

2018.06.25, 10:30-12:00, N202

【Abstract】A real symmetric matrix (resp., tensor) is said to be copositive if the associated quadratic (resp., homogeneous) form is nonnegative over the nonnegative orthant. Detecting copositivity is NP-hard. This paper proposes a complete semidefinite relaxation algorithm for detecting the copositivity of a matrix or tensor. If it is copositive, the algorithm produces a certificate of copositivity. If it is not, the algorithm produces a point that refutes copositivity. We show that, for all matrices and tensors, the detection can be done by solving a finite number of semidefinite relaxations.
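For intuition about the definition (though not the semidefinite algorithm of the talk), here is a naive brute-force sketch. Since the quadratic form scales by a positive factor along rays, nonnegativity over the nonnegative orthant is equivalent to nonnegativity over the probability simplex, which a grid can scan. All names are illustrative, and a coarse grid may miss a shallow negative minimum; the paper's SDP relaxations are what provide actual certificates.

```python
import itertools
import numpy as np

def naive_copositivity_check(A, grid=40):
    """Scan x^T A x over a regular grid on the probability simplex.
    Returns (verdict, x_min); when verdict is False, x_min is a
    nonnegative point with x_min^T A x_min < 0 refuting copositivity."""
    n = A.shape[0]
    best_val, best_x = np.inf, None
    # stars-and-bars: all nonnegative integer vectors summing to `grid`
    for bars in itertools.combinations(range(grid + n - 1), n - 1):
        parts = np.diff((-1,) + bars + (grid + n - 1,)) - 1
        x = parts / grid               # grid point on the simplex
        val = x @ A @ x
        if val < best_val:
            best_val, best_x = val, x
    return best_val >= 0, best_x

A_pos = np.array([[1.0, -1.0], [-1.0, 1.0]])  # PSD, hence copositive
A_neg = np.array([[1.0, -2.0], [-2.0, 1.0]])  # x = (1, 1) gives value -2
print(naive_copositivity_check(A_pos)[0], naive_copositivity_check(A_neg)[0])  # True False
```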


Title: The combinatorics of the tropical Hodge bundles

Bo Lin,(UT Austin)

2018.06.25, 15:30-17:00, N109

【Abstract】The moduli space $M_g^{trop}$ of tropical curves of genus $g$ is a generalized cone complex that parameterizes metric graphs of genus $g$. For each such graph $\Gamma$, the associated canonical linear system $\vert K_\Gamma\vert$ has the structure of a polyhedral cell complex. We propose a tropical analogue of the Hodge bundle on $M_g^{trop}$ and study its combinatorial properties. Our construction is illustrated with explicit computations and examples. This is joint work with Martin Ulirsch.
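For readers less familiar with tropical curves: the genus of a metric graph is its first Betti number, $g = |E| - |V| + c$ for a graph with $c$ connected components. A small sketch under that standard definition (the function name and examples are illustrative):

```python
def graph_genus(num_vertices, edges):
    """First Betti number g = |E| - |V| + c of a (multi)graph with c
    connected components; for connected graphs this is |E| - |V| + 1."""
    # union-find with path compression, to count connected components
    parent = list(range(num_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    components = len({find(v) for v in range(num_vertices)})
    return len(edges) - num_vertices + components

# The "theta graph": two vertices joined by three parallel edges, genus 2.
print(graph_genus(2, [(0, 1), (0, 1), (0, 1)]))  # 2
```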


Title: High-dimensional Cost-constrained Regression via Non-convex Optimization

Yufeng Liu,(University of North Carolina at Chapel Hill)

2018.06.26, 8:30-9:30, N109

【Abstract】In the modern predictive modeling process, budget constraints become a very important consideration due to the high cost of collecting data using new techniques such as brain imaging and DNA sequencing. This motivates us to develop new and efficient high-dimensional cost-constrained predictive modeling methods. In this talk, to address this challenge, we first study a new non-convex high-dimensional cost-constrained linear regression problem, that is, to find the regression model with the smallest expected prediction error among all models satisfying a budget constraint. The non-convex budget constraint makes this problem NP-hard. To estimate the regression coefficient vector of the cost-constrained regression model, we propose a new discrete extension of recent first-order continuous optimization methods. We further show some extensions of our proposed method for statistical learning problems using loss functions with Lipschitz continuous gradient. It can also be extended to problems with groups of variables or multiple constraints. Theoretically, we prove that the sequence of estimates generated by our iterative algorithm converges to a first-order stationary point, which can be a globally optimal solution to the non-convex high-dimensional cost-constrained regression problem. Computationally, our numerical studies show that the proposed method can solve problems of fairly high dimensions and has promising estimation, prediction, and model selection performance.
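The flavor of a discrete extension of a first-order method can be sketched as follows. This is an illustrative hard-thresholding-style iteration under a budget constraint, not the authors' algorithm: take a gradient step on the least-squares loss, then project by greedily keeping the coordinates with the largest |coefficient| per unit cost until the budget is exhausted. All names and the synthetic data are made up for the demonstration.

```python
import numpy as np

def cost_constrained_ls(X, y, costs, budget, iters=200):
    """Illustrative sketch (not the authors' exact method): alternate a
    least-squares gradient step with a greedy projection onto the set of
    coefficient vectors whose selected features fit within the budget."""
    n, p = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L for the LS gradient
    beta = np.zeros(p)
    for _ in range(iters):
        z = beta - step * X.T @ (X @ beta - y)   # gradient step
        keep = np.zeros(p, dtype=bool)           # greedy budget projection
        remaining = budget
        for j in np.argsort(-np.abs(z) / costs):
            if costs[j] <= remaining:
                keep[j] = True
                remaining -= costs[j]
        beta = np.where(keep, z, 0.0)
    return beta

# synthetic demo: two truly relevant features, unit costs, budget 2
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10)
beta_true[0], beta_true[1] = 3.0, -2.0
y = X @ beta_true + 0.1 * rng.standard_normal(100)
costs = np.ones(10)
beta_hat = cost_constrained_ls(X, y, costs, budget=2.0)
```

The greedy projection is exactly where the non-convexity lives: it is a discrete selection step, which is why convergence is only to a first-order stationary point in general.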


Title: Improving the power for testing against uniform stochastic ordering

Dewei Wang,(University of South Carolina)

2018.06.26, 9:30-10:30, N109

【Abstract】When comparing two continuous distributions F and G with respect to the uniform stochastic order, a convenient graphical tool is the ordinal dominance curve (ODC). If the ODC is star-shaped, then F and G are uniformly stochastically ordered. Motivated by this fact, a goodness-of-fit test of uniform stochastic ordering was proposed in Tang, Wang, and Tebbs (2017, Annals of Statistics), whose test statistic depends on the Lp distance between an empirical estimator of the ODC and that estimator's least star-shaped majorant. To control the probability of type I error, a unique least favorable configuration was found and used to determine fixed critical values. Though using fixed critical values yields a consistent test, it can severely weaken the power against alternatives near null configurations other than the least favorable one. In this talk, a new method will be introduced to compute sample-based critical values, which significantly improve the power of the test. Simulations and real data analysis are conducted to illustrate the advantages of the new method.
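The ODC underlying the test can be estimated directly from two samples. A minimal sketch, assuming the common convention R(t) = F(G^{-1}(t)) with empirical-cdf and empirical-quantile plug-ins (the function name and the evaluation grid are illustrative):

```python
import numpy as np

def empirical_odc(xs, ys, grid):
    """Empirical ordinal dominance curve R(t) = F_m(G_n^{-1}(t)):
    F_m is the empirical cdf of xs (a sample from F) and G_n^{-1} the
    empirical quantile function of ys (a sample from G)."""
    xs = np.sort(xs)
    q = np.quantile(ys, grid)                               # G_n^{-1}(t)
    return np.searchsorted(xs, q, side="right") / len(xs)   # F_m at q

# Two samples from the same distribution: the ODC hugs the diagonal.
rng = np.random.default_rng(1)
xs = rng.standard_normal(2000)
ys = rng.standard_normal(2000)
grid = np.linspace(0.0, 1.0, 101)
R = empirical_odc(xs, ys, grid)
```

The test statistic of the abstract then measures how far a curve like `R` is from its least star-shaped majorant in Lp distance; that majorant computation is omitted here.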


Title: The Distribution of Certain Restricted Numbers

Xianchang Meng,(Centre de Recherches Mathématiques in Montréal, Canada)

2018.06.26 14:00-15:00, 2018.06.27 9:30-10:30, 2018.06.28 14:00-15:00

【Abstract】Chebyshev noticed that there seem to be more primes congruent to 3 mod 4 than primes congruent to 1 mod 4. Questions related to the distribution of prime numbers among different arithmetic progressions are known as ``prime race problems''. I will introduce some generalizations of the prime number races: 1) the distribution of products of k primes in different arithmetic progressions, where the results differ depending on whether the number of prime factors is counted with multiplicity; 2) a generalization of a very recent result of Dummit, Granville, and Kisilevsky, who studied the distribution of products of two primes pq with p, q both from the residue class 3 mod 4; 3) the function field version of prime number races. The probabilistic method is a very useful tool for studying prime number races. If time permits, I may mention how to improve the error term in the counting function of k-free numbers using the probabilistic method under some reasonable conjectures.
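Chebyshev's observation is easy to reproduce numerically. The sketch below (illustrative code, not from the talk) sieves the primes up to a bound and counts the two odd residue classes mod 4:

```python
def prime_race(limit):
    """Count primes p <= limit with p % 4 == 3 versus p % 4 == 1,
    using a simple sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # strike out every multiple of p starting at p*p
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    c3 = sum(1 for p in range(3, limit + 1) if is_prime[p] and p % 4 == 3)
    c1 = sum(1 for p in range(3, limit + 1) if is_prime[p] and p % 4 == 1)
    return c3, c1

print(prime_race(10000))  # the 3 mod 4 class is ahead at this point
```

The lead of the 3 mod 4 class persists for most (but famously not all) bounds, which is exactly the bias phenomenon the prime race problems quantify.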


Title: Computer Algebra Algorithms for Proving Jacobi Theta Function Relations

Liangjie Ye,(RISC, Johannes Kepler University, Austria)

2018.06.26 15:15-16:15, 2018.06.27 10:45-11:45, 2018.06.29 14:00-15:00

【Abstract】This talk focuses on proving Jacobi theta function identities. Over the past centuries, many number theorists, e.g., Ramanujan, Hardy, Rademacher, Berndt, and Borwein, have proved a substantial number of theta function relations by hand. There was no general method for proving such relations, and the computations in their proofs are usually tedious. Thanks to symbolic computation, we have now developed computer algebra algorithms to prove and produce rich classes of such identities automatically. In this talk, I will present our research on this topic in a nutshell. I will also demonstrate a Mathematica package called ``ThetaFunctions'' equipped with our algorithms.
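As a small taste of the kind of relation these algorithms certify, the snippet below numerically checks Jacobi's classical quartic identity θ3(0,q)^4 = θ2(0,q)^4 + θ4(0,q)^4 from truncated q-series. This is only a floating-point sanity check; the talk's algorithms produce actual proofs.

```python
def theta2(q, terms=40):
    """theta_2(0, q) = 2 * sum_{n>=0} q^((n + 1/2)^2), truncated."""
    return 2 * sum(q ** ((n + 0.5) ** 2) for n in range(terms))

def theta3(q, terms=40):
    """theta_3(0, q) = 1 + 2 * sum_{n>=1} q^(n^2), truncated."""
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

def theta4(q, terms=40):
    """theta_4(0, q) = 1 + 2 * sum_{n>=1} (-1)^n q^(n^2), truncated."""
    return 1 + 2 * sum((-1) ** n * q ** (n * n) for n in range(1, terms))

q = 0.3
lhs = theta3(q) ** 4
rhs = theta2(q) ** 4 + theta4(q) ** 4
print(abs(lhs - rhs) < 1e-10)  # True
```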


Title: Post-GWAS Secondary Phenotype Analysis is Cost-Beneficial Only with a Valid Analytical Approach

Guolian Kang,(St. Jude Children’s Research Hospital)

2018.06.27 9:00-10:00,N219

【Abstract】Genome-wide association studies (GWAS) have been successful over the last decades in identifying common variants associated with common or rare diseases. The study designs most commonly used for GWAS are based on a primary outcome, including the case-control (CC) design for studying a common disease and the extreme phenotype sequencing (EPS) design for studying an ordinal or continuous phenotype, as in the well-known National Heart, Lung, and Blood Institute Exome Sequencing Project. Besides the primary outcome, extensive data on secondary phenotypes (SP), which may correlate and share common genetic variants with the primary outcome, are available. Although naïve methods for GWAS could be applied to analyze the secondary phenotypes, they lead to biased risk estimates if there is correlation between the primary outcome and the secondary phenotype. This results from the fact that the selected GWAS samples are not a random, representative sample for the secondary outcome. Thus, the critical question is how to analyze these secondary outcomes in the post-GWAS era. Here, two novel statistical methods for the CC (STcc) and EPS (STEPS) designs are proposed. Extensive simulation studies show that the two methods control the false positive rate well and have greater power than the naïve methods, and that they are robust to the effect pattern of the genetic variant (risk or protective), to rare or common variants, and to trait distributions. To show their cost-benefit, we also mimicked the re-design of two new retrospective studies, as in real practice, based on the primary outcome of interest, which is the same as the SP in the EPS study. Application to a genome-wide association study of benign ethnic neutropenia with 7 SPs under an EPS design also demonstrates the striking superiority of the two proposed methods over their alternatives.


Title: Introduction to Interactive Theorem Proving

Bohua Zhan,(Technical University of Munich, Germany)

2018.06.28 15:15-16:15,  2018.06.29 15:15-16:15

【Abstract】Interactive theorem proving studies the construction of formal proofs on the computer with human guidance. It can be applied to formally verify results in both mathematics and computer science. Formalizations in mathematics can serve several purposes: verifying potentially controversial results in mathematical research, supporting the verification of computer programs and systems, and aiding the teaching of proofs in mathematics.
In this series of two talks, I will give an introduction to the field of interactive theorem proving, with a focus on formalizations in mathematics. The talks will be self-contained, and no background in logic is assumed.
In the first talk, I will begin by reviewing the basic concepts of the field, then describe some major recent results in the formalization of mathematics. In the second talk, I will discuss the important concept of proof automation, and my own work in this area. Finally, I will suggest some ideas for future work.
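To give a flavor of what interacting with a prover looks like, here is a hypothetical snippet in Lean 4 tactic mode (the talks may well use a different prover, such as Isabelle): commutativity of addition on the natural numbers, proved by induction, with each tactic step checked by the system.

```lean
-- Commutativity of addition on Nat, proved by induction on b.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp                                  -- a + 0 = 0 + a
  | succ n ih => rw [Nat.add_succ, ih, Nat.succ_add]
```

The human supplies the proof strategy (induction, which rewrites to apply); the machine checks every step down to the axioms, which is what makes the resulting proof formally verified.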