TBA by Kisun Lee
- Series: Student Algebraic Geometry Seminar
- Time: Monday, November 18, 2019, 13:30 for 1 hour (actually 50 minutes)
- Location: Skiles 254
- Speaker: Kisun Lee – Georgia Tech – kisunlee@gatech.edu
The Bergman fan of a matroid is a tropical linear space with trivial valuations; it encodes the matroid combinatorially, and every matroid corresponds to such a fan. In this talk, guided by plenty of examples, we study the definition of the Bergman fan and its subdivisions. The talk will close with a recent result from the paper of Maclagan and Yu (https://arxiv.org/abs/1908.05988): the fine subdivision of the Bergman fan of any matroid is (r-1)-connected, where r is the rank of the matroid.
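For orientation, here is the smallest standard example (my own warm-up, not material from the talk):

```latex
% The uniform matroid $U_{2,3}$ on ground set $\{1,2,3\}$ has rank $r = 2$.
% Its Bergman fan is the tropical line
\[
  \mathcal{B}(U_{2,3})
    = \mathbb{R}_{\ge 0}\, e_1 \,\cup\, \mathbb{R}_{\ge 0}\, e_2 \,\cup\, \mathbb{R}_{\ge 0}\, e_3
    \;\subset\; \mathbb{R}^3 / \mathbb{R}\,(1,1,1),
\]
% three rays glued at the origin, taken modulo the lineality space spanned by
% $(1,1,1)$; here the rank parameter in the Maclagan--Yu result is $r = 2$.
```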
The topological entropy of a subshift is the exponential growth rate, in n, of the number of length-n words in its language. For subshifts of entropy zero, finer growth invariants constrain their dynamical properties. In this talk we will survey how the complexity of a subshift affects the properties of the ergodic measures it carries. In particular, we will see some recent results (joint with B. Kra) relating the word complexity of a subshift to its set of ergodic measures, as well as some applications.
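As a concrete low-complexity example (standard background, not the new results): the Fibonacci substitution generates a Sturmian subshift with word complexity p(n) = n + 1, the minimum possible for an aperiodic subshift, and such subshifts are uniquely ergodic. A short sketch counting factors of a long prefix (illustrative code written for this summary):

```python
# The Fibonacci substitution 0 -> 01, 1 -> 0 generates a Sturmian subshift,
# whose word complexity is p(n) = n + 1 for every n.

def fib_word(iterations=20):
    """A long prefix of the infinite Fibonacci word."""
    w, sub = "0", {"0": "01", "1": "0"}
    for _ in range(iterations):
        w = "".join(sub[c] for c in w)
    return w

def complexity(w, n):
    """Number of distinct length-n factors occurring in w."""
    return len({w[i:i + n] for i in range(len(w) - n + 1)})

w = fib_word()                     # length F(22) = 17711, ample for small n
for n in range(1, 8):
    print(n, complexity(w, n))     # prints n, n + 1
```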
We report on the discovery of a general principle leading to the unexpected cancellation of oscillating sums. It turns out that sums in the class we consider are much smaller than would be predicted by certain probabilistic heuristics. After stating the motivation and our theorem, we apply it to prove a number of results on integer partitions, the distribution of prime numbers, and the Prouhet-Tarry-Escott problem. For example, we prove a "Pentagonal Number Theorem for the Primes", which counts the number of primes (with von Mangoldt weight) in a set of intervals very precisely. In fact, the result is stronger than what one would obtain using a strong form of the Prime Number Theorem together with the Riemann Hypothesis (naively estimating the $\Psi$ function on each of the intervals; a less naive argument can give an improvement), since the widths of the intervals are smaller than $\sqrt{x}$, making the Riemann Hypothesis estimate "trivial".
Based on joint work with Ernie Croot.
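For background on the classical statement being extended (Euler's theorem is standard; the code below is an illustrative sketch, not the authors'): the Pentagonal Number Theorem gives $\prod_{k\ge 1}(1-x^k) = \sum_{j=-\infty}^{\infty} (-1)^j x^{j(3j-1)/2}$, and the resulting cancellation yields the familiar recurrence for the partition function $p(n)$:

```python
# Euler's Pentagonal Number Theorem turned into the standard recurrence:
# p(n) = sum_{k >= 1} (-1)^{k+1} [ p(n - k(3k-1)/2) + p(n - k(3k+1)/2) ].

from functools import lru_cache

@lru_cache(maxsize=None)
def p(n):
    """Partition function via the pentagonal-number recurrence."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, k = 0, 1
    while k * (3 * k - 1) // 2 <= n:          # generalized pentagonal numbers
        sign = -1 if k % 2 == 0 else 1
        total += sign * (p(n - k * (3 * k - 1) // 2)
                         + p(n - k * (3 * k + 1) // 2))
        k += 1
    return total

print([p(n) for n in range(10)])   # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
```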
In this talk, we provide the details of our faster width-dependent algorithm for mixed packing-covering LPs. Mixed packing-covering LPs are fundamental to combinatorial optimization in computer science and operations research. Our algorithm finds a $(1+\varepsilon)$-approximate solution in time $O(Nw/\varepsilon)$, where $N$ is the number of nonzero entries in the constraint matrix and $w$ is the maximum number of nonzeros in any constraint. This algorithm is faster than Nesterov's smoothing algorithm, which requires $O(N\sqrt{n}w/\varepsilon)$ time, where $n$ is the dimension of the problem. Our work utilizes the framework of area convexity introduced in [Sherman-FOCS'17] to obtain the best dependence on $\varepsilon$ while breaking the infamous $\ell_{\infty}$ barrier to eliminate the factor of $\sqrt{n}$. The current best width-independent algorithm for this problem runs in time $O(N/\varepsilon^2)$ [Young-arXiv-14] and hence has a worse running-time dependence on $\varepsilon$. Many real-life instances of mixed packing-covering problems exhibit small width, and for such cases our algorithm can report higher-precision results than width-independent algorithms. As a special case of our result, we obtain a $(1+\varepsilon)$-approximation algorithm for the densest subgraph problem which runs in time $O(md/\varepsilon)$, where $m$ is the number of edges in the graph and $d$ is the maximum graph degree.
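To make the special case concrete, here is a tiny brute-force illustration of the densest subgraph objective (illustrative only; the talk's algorithm is a fast $(1+\varepsilon)$-approximation, not this exhaustive search):

```python
# Densest subgraph: find a vertex subset S maximizing |E(S)| / |S|.
# Exhaustive search over all subsets of a toy graph, to exhibit the objective.

from itertools import combinations

def densest_subgraph_bruteforce(vertices, edges):
    best, best_S = 0.0, set()
    for r in range(1, len(vertices) + 1):
        for S in combinations(vertices, r):
            S = set(S)
            m = sum(1 for u, v in edges if u in S and v in S)
            if m / len(S) > best:
                best, best_S = m / len(S), S
    return best, best_S

# Triangle plus a pendant vertex: the triangle alone has density 3/3 = 1.0.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(densest_subgraph_bruteforce(range(4), edges))   # (1.0, {0, 1, 2})
```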
Let X be a random variable taking values in {0,...,n} and f(z) be its probability generating function. Pemantle conjectured that if the variance of X is large and f has no roots close to 1 in the complex plane, then X must be approximately normal. We will discuss a complete resolution of this conjecture in a strong quantitative form, thereby giving the best possible version of a result of Lebowitz, Pittel, Ruelle and Speer. Additionally, if f has no roots with small argument, then X must be approximately normal, again in a sharp quantitative form. These results also imply a multivariate central limit theorem that answers a conjecture and completes a program of Ghosh, Liggett and Pemantle. This talk is based on joint work with Julian Sahasrabudhe.
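As a quick numerical illustration of the root-location hypothesis (a standard example, not the speakers' argument): for X ~ Binomial(n, 1/2) the PGF factors as $f(z) = ((1+z)/2)^n$, so every root sits at $z = -1$, far from 1, consistent with X being asymptotically normal.

```python
# Compute the roots of the PGF of Binomial(30, 1/2) and their distance to 1.
import numpy as np
from math import comb

n = 30
pmf = np.array([comb(n, k) for k in range(n + 1)], dtype=float) / 2**n
roots = np.roots(pmf[::-1])        # np.roots wants highest-degree coefficient first
print(np.min(np.abs(roots - 1)))   # roughly 2: the roots cluster around z = -1
                                   # (floating point smears the multiple root slightly)
```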
One of the outstanding open problems of statistical mechanics concerns the hard-core model, a popular topic in mathematical physics with applications in a number of other disciplines: do non-overlapping hard disks of the same diameter in the plane admit a unique Gibbs measure at high density? It seems natural to approach this question by requiring the centers to lie in a fine lattice; equivalently, we may fix the lattice but let the Euclidean diameter D of the hard disks tend to infinity. In two dimensions, the lattice can be a unit triangular lattice A_2 or a unit square lattice Z^2. The randomness is generated by Gibbs/DLR measures with a large value of fugacity, which corresponds to a high density. We analyze the structure of high-density hard-core Gibbs measures via the Pirogov-Sinai theory. The first step is to identify periodic ground states, i.e., maximal-density disk configurations which cannot be locally 'improved'. A key finding is that only certain 'dominant' ground states, which we determine, generate nearby Gibbs measures. Another important ingredient is the Peierls bound separating ground states from other admissible configurations. In particular, number-theoretic properties of the exclusion diameter D turn out to be important. Answers are provided in terms of Eisenstein primes for A_2 and norm equations in the cyclotomic ring Z[ζ] for Z^2, where ζ is a primitive 12th root of unity. Unlike most models in statistical physics, we find non-universality: the number of high-density hard-core Gibbs measures grows indefinitely with D, but non-monotonically. In Z^2 we also analyze the phenomenon of 'sliding' and show it is rare.
This is joint work with A. Mazel and Y. Suhov.
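For concreteness, the hard constraint on Z^2 can be stated in a few lines (a minimal sketch of the admissibility condition only; the Gibbs-measure analysis is of course far beyond this):

```python
# A configuration of disk centers on the lattice Z^2 is admissible for
# exclusion diameter D when all pairwise Euclidean distances are at least D.

from itertools import combinations

def is_admissible(centers, D):
    """centers: iterable of integer lattice points (x, y)."""
    return all((x1 - x2) ** 2 + (y1 - y2) ** 2 >= D ** 2
               for (x1, y1), (x2, y2) in combinations(centers, 2))

print(is_admissible([(0, 0), (3, 4), (6, 0)], 5))   # True: distances 5, 5, 6
print(is_admissible([(0, 0), (2, 2)], 5))           # False: distance sqrt(8) < 5
```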
Starting from mathematical approaches to image processing, we will discuss different models, their analytic aspects, and the numerical challenges they raise. If time permits, we will consider numerical applications to data understanding. A few other applications may be presented.
I will discuss an ongoing project to reconstruct a gene network from time-series data from a mammalian signaling pathway. The data are generated from gene knockouts, and the techniques involve computational algebra. Specifically, one creates a pseudomonomial "ideal of non-disposable sets" and applies an analogue of Stanley-Reisner theory and Alexander duality to it. Of course, things never work as well in practice as in theory, due to issues such as noise, discretization, and scalability, so I will discuss some of these challenges and current progress.
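As a rough sketch of the basic algebraic object (the term pseudomonomial is standard; the helper below is hypothetical and not the project's code): a pseudomonomial is a product of factors $x_i$ and $(1-x_j)$, and on 0/1 data it detects a prescribed on/off pattern.

```python
# A pseudomonomial prod_{i in pos} x_i * prod_{j in neg} (1 - x_j) evaluates
# to 1 on a 0/1 vector exactly when every x_i = 1 and every x_j = 0.

def pseudomonomial(pos, neg):
    """Return the 0/1 evaluation function of the pseudomonomial (pos, neg)."""
    def evaluate(x):           # x: list (or dict) of 0/1 gene states
        return int(all(x[i] == 1 for i in pos) and all(x[j] == 0 for j in neg))
    return evaluate

f = pseudomonomial(pos={0}, neg={2})    # f = x_0 * (1 - x_2)
print(f([1, 1, 0]), f([1, 0, 1]))       # 1 0
```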