Talks displayed on this page are ordered alphabetically by the authors' surnames. The schedule can be consulted on the schedule page.
The talks will take place in the Großer Hörsaal of the Institut für Informatik of the Freie Universität Berlin, at Takustr. 9. More information about the location can be found on the venue page.
All talks will be recorded on video, but the recordings will be released publicly only with the speaker's permission.
The ground-breaking research on the uniformization of complex algebraic curves was conducted in the early decades of the 19th century. Nevertheless, there are few examples in the literature of algebraic curves for which an explicit uniformization is known. Prototypes are the circle, the elliptic curves and the modular curves. In particular, the modular curves \(X_0(N)\) are uniformized by the functions \((j(z), j(Nz))\), where \(j(z)\) stands for a complex function which is invariant under the action of the modular group \(\mathrm{PSL}(2, \mathbb{Z})\). Our purpose will be to obtain explicit uniformizations of the Fermat curves \(X^N+Y^N=Z^N\) by making use of functions which are invariant under the action of discrete groups acting on the complex upper half-plane. The talk is based on joint work with Jordi Guárdia.
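As a classical illustration (not part of the abstract), the Fermat curve with \(N=2\), i.e. the conic \(X^2+Y^2=Z^2\), is uniformized by trigonometric functions, and an elliptic curve in Weierstrass form is uniformized by the Weierstrass \(\wp\)-function: $$ (X:Y:Z) = (\cos z : \sin z : 1), \qquad y^2 = 4x^3 - g_2 x - g_3, \quad (x,y) = (\wp(z), \wp'(z)). $$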
The slides of the talk can be accessed in this link and the video of the talk in this link.
More information about Pilar Bayer can be found in this link.
Query evaluation is one of the most fundamental tasks in databases, and a vast amount of literature is devoted to the complexity of this problem. This talk will focus on query evaluation in the "dynamic setting", where the database may be updated by inserting or deleting tuples. In this setting, an evaluation algorithm receives a query Q and an initial database D and starts with a preprocessing phase that computes a suitable data structure to represent the result of evaluating Q on D. After every database update, the data structure is updated so that it represents the result of evaluating Q on the updated database. The data structure shall be designed in such a way that it quickly provides the query result, preferably in constant time (i.e., independent of the database size). We focus on the following flavours of query evaluation.
Here, as usual, Q(D) denotes the k-ary relation obtained by evaluating a k-ary query Q on a relational database D. The flavours we consider are testing (given a tuple t, decide whether t belongs to Q(D)), counting (compute the number of tuples in Q(D)), and enumeration (list the tuples of Q(D), ideally with constant delay between consecutive answers). For Boolean queries, all three tasks boil down to deciding whether Q(D) is non-empty.
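To make the flavours concrete, here is a small hypothetical sketch (not from the talk) of dynamic query evaluation for a toy query: after a preprocessing phase, a data structure is maintained under single-tuple updates from which the Boolean and counting flavours are answered in constant time.

```python
# Hypothetical toy example: dynamically evaluate the query
# Q = "is there a value x with R(x) and S(x)?" under updates.
# Preprocessing builds hash sets and the size of the intersection;
# each insert/delete and each answer then takes constant (expected) time.

class DynamicIntersectionQuery:
    def __init__(self, R=(), S=()):
        # Preprocessing phase on the initial database D = (R, S).
        self.R, self.S = set(R), set(S)
        self.count = len(self.R & self.S)  # |Q(D)| for the counting flavour

    def _pick(self, relation):
        return (self.R, self.S) if relation == "R" else (self.S, self.R)

    def insert(self, relation, value):
        rel, other = self._pick(relation)
        if value not in rel:
            rel.add(value)
            if value in other:
                self.count += 1

    def delete(self, relation, value):
        rel, other = self._pick(relation)
        if value in rel:
            rel.remove(value)
            if value in other:
                self.count -= 1

    def boolean_answer(self):   # Boolean flavour: is Q(D) non-empty?
        return self.count > 0

    def count_answer(self):     # counting flavour: |Q(D)|
        return self.count


q = DynamicIntersectionQuery(R=[1, 2], S=[3])
q.insert("S", 2)
print(q.boolean_answer(), q.count_answer())  # True 1
q.delete("R", 2)
print(q.boolean_answer(), q.count_answer())  # False 0
```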
Compared to the dynamic descriptive complexity framework introduced by Patnaik and Immerman (1997), which focuses on the expressive power of first-order logic on dynamic databases and has led to a rich body of literature, we are interested in the computational complexity of query evaluation. We say that a query evaluation algorithm is efficient if the update time is either constant or at most polylogarithmic in the size of the database.
In this talk I want to give an overview of recent results in this area.
A circle pattern is simply a set of circles in the plane in which some of the intersecting circles are regarded as neighbours, so that the pattern carries an additional graph structure. As an example, any triangulation of the plane or disc yields a circle pattern via the set of its circumcircles. The special class of integrable circle patterns admits factorizing intersection angles. In this talk we will discover several geometric properties of these patterns, and how they allow us to define dynamics on the triangular, the hexagonal and the square lattice. It turns out that these dynamics not only preserve the given integrable structure, but at the same time conserve the "electric" properties of the patterns. In fact, they represent a geometric way of doing the classical Y-\(\Delta\) move. This connects the patterns and the dynamics to statistical physics and to the modern notion of cluster algebras.
This talk is about a theorem on conics, which is useful for constructing images of touching conics. The construction can be described in terms of a point particle that is moving inside a conic. The particle moves in a straight line, which is reflected when the particle hits the conic. Circles inscribed in quadrilaterals make an unexpected appearance when the particle's trajectory obeys a familiar law of geometric optics.
The talk is based on personal work. However, I benefitted from a discussion with Prof. Bobenko on [1].
[1] A.V. Akopyan, A.I. Bobenko. Incircular nets and confocal conics, Trans. AMS, 2017.
In graph exploration, one or more so-called agents or robots have to deterministically visit all vertices of a given unknown graph. In this talk, we investigate the memory requirement for multiple cooperative agents to explore an undirected graph. We show that \(\Theta(\log \log n)\) agents with only constant memory are necessary and sufficient to explore any graph with at most \(n\) vertices.
The slides of the talk can be accessed in this link and the video of the talk in this link, although it is password-protected. To request the password, write an e-mail to the organisers.
The \(\mathbf{P}\) versus \(\mathbf{NP}\) question is among the most prestigious of modern mathematics, but deemed out of reach by many of the leading researchers in the field. Less widely known is its algebraic analogue, the question of \(\mathbf{VP}\) versus \(\mathbf{VNP}\). We will present the problem and a recent approach to it known as Geometric Complexity Theory, which transitions from computational complexity to algebraic geometry and representation theory. If time permits, the speaker will present some results from his PhD thesis (at TU Berlin) in this context.
The video of the talk can be accessed in this link.
One of the major problems in computer vision is the detection of visual events. We study such events from the perspective of algebraic geometry. For this, we take pictures of a moving curve or surface, which means to consider its image or contour curve that arises by projecting from different viewpoints. Qualitative changes in that curve occur when the viewpoint crosses the visual event surface. We examine the components of this ruled surface, observe that these coincide with the iterated singular loci of the coisotropic hypersurfaces associated with the original curve or surface, and show how to compute exact representations for all visual event surfaces using algebraic methods.
The slides of the talk can be accessed in this link and the video of the talk in this link.
Imagine drawing lines on a surface. Most of us are pretty lazy, so we most likely manage only a small doodle. However, what if the drawing for the rest of the surface could be filled in by invoking symmetries? If the surface we are drawing on is arbitrary, what are all the ways we can scribble such that this actually works? Is there a way to enumerate these different ways? If the goal was to find molecular structures by drawing them on surfaces, what surfaces would we start with and why?
The first and greater part of my talk will motivate and answer these questions, while focusing on a new technique to explicitly enumerate and construct all essentially different ways to decorate prominent examples of triply periodic minimal surfaces.
The second part will focus on what we can say about the kinds of structures that arise from this process and the kind of advantages this new approach offers. This is a rather controversial topic, as most chemists exclusively use crystallographic tables for the study of symmetries in 3D structures.
There will be tie-ins to geometry, braid theory, combinatorial group and tiling theory, physics, and even some chemistry.
The slides of the talk can be accessed in this link and the video of the talk in this link.
One need not distinguish between convexity and positive curvature in the case of compact hypersurfaces in Euclidean space: the two notions are equivalent. This freedom of perspective is useful not least because convexity is a completely extrinsic condition, while the curvature is completely intrinsic. Moving from hypersurfaces to submanifolds of higher codimension, Jordan abandons us, and convexity no longer makes sense. In the talk, we will consider an extrinsic 'pinching' condition for submanifolds of Euclidean space which generalises convexity in at least two ways: firstly, it forces positivity of the curvature, and secondly, pinched solutions to the mean curvature flow behave exactly like convex solutions.
The slides of the talk can be accessed in this link and the video of the talk in this link.
Measure partition problems are challenging problems in topological combinatorics. Given a collection of measures in a Euclidean space, the question is whether there exists a partition of the ambient space that cuts the measures in a prescribed way. One of the best known measure partition results is the Ham Sandwich theorem, proved by Banach in 1938, which states that any collection of \(d\) measures in \(\mathbb{R}^d\) can be cut by a hyperplane so that each measure gets partitioned into two equal parts.
In this talk, a short survey of measure partition results will be presented and some methods from topological combinatorics will be explained using the paper Cutting a part from many measures (arXiv:1710.05118) as an example.
We prove a continuous analogue of the conjecture of Holmsen, Kynčl and Valculescu about partitions of finite colored sets. Indeed, for integers \(m, c\) and \(d\) and a prime power \(n=p^k\) such that \(d \geq 2\) and \(m \geq n(c-d)+\frac{dn}{p}-\frac{n}{p}+1\), and for \(m\) positive finite absolutely continuous measures \(\mu_1, \dots, \mu_m\) on \(\mathbb{R}^d\), we prove that there exists a partition of \(\mathbb{R}^d\) into \(n\) convex sets, such that every set has positive measure with respect to at least \(c\) of the measures \(\mu_1, \dots, \mu_m\). Additionally, we obtain an equipartition of the measure \(\mu_m\).
The proof relies on a configuration space/test map scheme that will be the main part of this talk. It translates the problem into a novel question from equivariant topology: the non-existence of \(\mathfrak{S}_n\)-equivariant maps from the ordered configuration space of \(n\) points in \(\mathbb{R}^d\) into the union of an arrangement of affine subspaces of a Euclidean space.
Joint with Pavle V. M. Blagojević and Günter M. Ziegler.
The slides of the talk can be accessed in this link and the video of the talk in this link.
In 1986, the three physicists Mehran Kardar, Giorgio Parisi, and Yi-Cheng Zhang derived a stochastic partial differential equation whose solution describes the fluctuations of the boundary separating two competing materials. This model has found a wide range of applications, from describing the expansion of a forest fire to the interface separating water from ice. Recently there has been an increasing interest in the KPZ equation, since it partially motivated the development of the theory of regularity structures, which aims at tackling certain classes of ill-posed stochastic PDEs. Surprisingly, this solution theory dives deep into the field of stochastic analysis, providing new ways to understand the stochastic integral, a central object in probability theory. I will present one (or maybe two) ideas behind this theory, show some pictures referring to the KPZ equation, and very briefly explain my work, namely the construction of a solution on the whole real line.
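For reference (not part of the abstract), in one space dimension the KPZ equation is commonly written, up to constants, as $$ \partial_t h = \partial_x^2 h + (\partial_x h)^2 + \xi, $$ where \(h(t,x)\) denotes the height of the interface and \(\xi\) is space-time white noise.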
The video of the talk can be accessed in this link.
The Moving Least Squares (MLS) approach is a powerful means to operate on point clouds. It has been investigated in great detail by David Levin in several papers and was already put to use in geometry processing and shape modeling. The goal of this talk is to briefly give an introduction to the MLS procedure and to report on some recent developments and applications of the method in the field of Point Cloud Processing.
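As a brief orientation, here is a standard formulation of the basic MLS approximation (not necessarily the exact variant discussed in the talk): given scattered data \((x_i, f_i)\), the value at a point \(x\) is obtained from a locally weighted least-squares polynomial fit, $$ p_x = \operatorname*{arg\,min}_{p \in \Pi_m} \sum_i \theta(\lVert x - x_i \rVert)\,\bigl(p(x_i) - f_i\bigr)^2, \qquad f_{\mathrm{MLS}}(x) = p_x(x), $$ where \(\Pi_m\) is the space of polynomials of degree at most \(m\) and \(\theta\) is a rapidly decaying weight function.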
In this talk, we derive the Hamilton-Jacobi equation $$\begin{align*} 0 &= \frac{\partial}{\partial t}S(x,t) + \mathbf{H}\left(\frac{\partial}{\partial x}S(x,t), x \right)\\ S(x,0) &= S_0(x), \end{align*}$$ which describes the evolution of the action \(S\) of a physical system governed by the Hamiltonian \(\mathbf{H}\). This nonlinear first-order partial differential equation is fundamental in classical physics and has many applications in the calculus of variations and geometry. Using methods from convex analysis, we present the fascinating method of E. Hopf for solving the Hamilton-Jacobi equation in the potential-free case.
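As an illustration (not taken from the abstract), one standard form of Hopf's formula, assuming the Hamiltonian depends only on the momentum \(p = \frac{\partial}{\partial x}S\) and the initial datum \(S_0\) is convex, expresses the solution through the Legendre transform \(S_0^*\): $$ S(x,t) = \sup_{p} \Big( \langle x, p \rangle - t\,\mathbf{H}(p) - S_0^*(p) \Big), \qquad S_0^*(p) = \sup_{y} \big( \langle y, p \rangle - S_0(y) \big). $$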
The slides of the talk can be accessed in this link and the video of the talk in this link.
In spatial telecommunication networks, it is a prominent question how to route many messages at the same time. We propose a random mechanism to choose the trajectories of messages in a network, where users are situated randomly in a compact subset of \(\mathbb R^d\), and each user sends one message to the single base station. Messages are transmitted either directly or via other users, with a given upper bound on the number of hops. We define a Gibbsian probability measure on the set of such trajectory families, which favours trajectories with little interference (measured in terms of the signal-to-interference ratio (SIR)) and trajectory families with little congestion (measured in terms of the number of pairs of incoming messages of the users).
We derive the behaviour of this system in the limit of a high spatial density of users using a large-deviation analysis, and provide a law of large numbers for the empirical measure of message trajectories. The limit of these empirical measures is given as the minimizer(s) of a characteristic variational formula. In the special case when congestion is not penalized, we analyze this minimizer and investigate the questions of the typical number of hops, the typical length of a hop and the typical shape of a trajectory in the highly dense telecommunication system.
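For orientation (not part of the abstract), the SIR of a transmission from a location \(x\) to a location \(y\), in the presence of simultaneous transmitters at \(x_1, \dots, x_k\), is typically of the form $$ \mathrm{SIR} = \frac{\ell(|x-y|)}{\sum_{i=1}^{k} \ell(|x_i-y|)}, $$ where \(\ell\) is a decreasing path-loss function; the Gibbsian measure above penalizes trajectory families in which transmissions have small SIR.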
The topic of this talk is joint work with my supervisor Wolfgang König.
The slides of the talk can be accessed in this link and the video of the talk in this link.
At the end of the 19th century, the artists Georges Seurat and Paul Signac developed the technique of pointillism, which is based on the idea that a continuous shape can be represented by a discrete cloud of points. This principle, which is the foundation of all screens, was incorporated almost a century later into what is known as Topological Data Analysis. However, how well can a cloud of points approximate a geometric object?
In this talk, we will review the Niyogi-Smale-Weinberger Approximation Theorem, which relates how difficult it is to topologically approximate a geometric object by a cloud of points to the geometric quantity known as the reach.
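As a quick reminder (not part of the abstract), the reach of a compact submanifold \(M \subset \mathbb{R}^N\) is the largest \(\tau\) such that every point at distance less than \(\tau\) from \(M\) has a unique nearest point on \(M\). Roughly speaking, the theorem guarantees that if a finite sample of \(M\) is sufficiently dense relative to \(\tau\), then the union of balls of a suitable radius around the sample points deformation retracts onto \(M\), and in particular recovers its homotopy type.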
The video of the talk can be accessed in this link.
Numerical discretizations of differential equations are often studied through their modified equation. This is a differential equation, usually obtained as a power series, with solutions that exactly interpolate the discretization. By comparing the modified equation to the original equation, the error propagation of the numerical method can be studied.
When we consider a very simple discrete dynamical system — one which we can solve exactly — we can reverse the direction of the argument and use the discrete dynamics to understand the power series defining the modified equation. We use this method to derive the series expansion $$ \left( \arcsin \frac{h}{2} \right)^2 = \frac{1}{2} \sum_{k=1}^\infty \frac{(k-1)!^2}{(2k)!} h^{2k}, $$ which can be used to prove the well-known identity $$ \sum_{k=1}^\infty \frac{1}{k^2}= \frac{\pi^2}{6} . $$
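As a quick numerical sanity check of the stated expansion (a hypothetical script, not part of the talk), one can compare a partial sum of the series with the closed form \(\left( \arcsin \frac{h}{2} \right)^2\):

```python
# Numerically verify (arcsin(h/2))^2 = (1/2) * sum_{k>=1} ((k-1)!)^2 / (2k)! * h^(2k).
from math import asin, factorial

def series(h, terms=60):
    # Partial sum of the expansion; the series converges for |h| <= 2.
    return 0.5 * sum(
        factorial(k - 1) ** 2 / factorial(2 * k) * h ** (2 * k)
        for k in range(1, terms + 1)
    )

h = 1.3
print(asin(h / 2) ** 2)  # closed form
print(series(h))         # partial sum; the two agree to double precision
```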
The slides of the talk can be accessed in this link and the video of the talk in this link.