Invited Speakers

Eva Rotenberg (Technical University of Denmark):

Amortised Analysis of Dynamic Data Structures 

In dynamic data structures, one is interested in efficiently facilitating queries to a data set, while being able to efficiently perform updates as the data set undergoes changes. Often, relaxing the efficiency measure to the amortised setting allows for simpler algorithms. A well-known example of a data structure with amortised guarantees is the splay tree by Sleator and Tarjan [JACM'85].
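
To make the amortised setting concrete, here is the standard textbook example of a growable array; the example and all names in it are mine, not from the talk. A single append may trigger an O(n) resize, yet any sequence of n appends costs O(n) in total, so each append is O(1) amortised; the potential function Φ = 2·size − capacity makes this precise.

    # Classic amortised-O(1) append via doubling (illustrative sketch).
    class GrowableArray:
        def __init__(self):
            self.buf = [None]           # capacity 1
            self.size = 0

        def append(self, x):
            if self.size == len(self.buf):
                # Doubling costs O(size), but with the potential
                # Phi = 2*size - capacity, the amortised cost of
                # every append works out to at most 3.
                self.buf = self.buf + [None] * len(self.buf)
            self.buf[self.size] = x
            self.size += 1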

Similarly, in data structures for dynamic graphs, one is interested in efficiently maintaining some information about the graph, or facilitating queries, as the graph undergoes changes in the form of insertion and deletion of edges. Examples of such information include connectivity, planarity, and approximate sparsity of the graph: is the graph presently connected? Is it planar? Has its arboricity grossly exceeded some specified number α? The related queries could be: is x connected to y? Are the edges uv and uw consecutive in the ordering around u in its current planar embedding? Or, report the O(α) out-edges of a specified vertex.

In this talk, we will see Brodal and Fagerberg's amortised algorithm for orienting sparse graphs (i.e., of arboricity ≤ α), so that each vertex has O(α) out-edges [WADS'99]. The algorithm itself is extremely simple, and uses an elegant amortised argument in its analysis. Then, we will visit the problem of dynamic planarity testing: is the graph presently planar? Here, we will see an elegant amortised reduction to the seemingly easier problem, where planarity-violating edges may be detected and rejected [JCSS'96]. We will see a sketch of how the current state-of-the-art algorithm for efficient planarity testing [STOC'20] uses ideas similar to those in [WADS'99] to analyse the behaviour of a greedy algorithm via a possibly inefficient algorithm with provably low recourse [SODA'20]. If time permits, we will touch upon a recent simple amortised data structure for maintaining information in dynamic forests [SOSA'23], which builds on ideas from splay trees.
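
As a concrete illustration of the orientation algorithm just described, here is a sketch in Python. It follows the informal description above (orient a newly inserted edge arbitrarily; while some vertex exceeds its out-degree budget, flip all of its out-edges), but the class name, the identifiers, and the choice of constant in the budget are mine, not taken from [WADS'99].

    from collections import defaultdict

    class SparseOrientation:
        """Sketch of an out-degree-bounded edge orientation in the
        style of Brodal and Fagerberg; constants are illustrative."""

        def __init__(self, alpha):
            self.cap = 4 * alpha            # out-degree budget
            self.out = defaultdict(set)     # out[u] = {v : edge oriented u -> v}

        def insert(self, u, v):
            self.out[u].add(v)              # orient the new edge arbitrarily
            self._rebalance(u)

        def delete(self, u, v):
            # The edge may be oriented either way; deletions never
            # violate the budget, so no rebalancing is needed.
            self.out[u].discard(v)
            self.out[v].discard(u)

        def out_edges(self, u):
            return set(self.out[u])         # at most O(alpha) edges

        def _rebalance(self, u):
            # While some vertex exceeds the budget, flip all its
            # out-edges; this terminates (and the total number of
            # flips stays small) when the arboricity stays below alpha.
            stack = [u]
            while stack:
                w = stack.pop()
                if len(self.out[w]) <= self.cap:
                    continue
                for x in list(self.out[w]):
                    self.out[w].remove(x)   # reverse w -> x ...
                    self.out[x].add(w)      # ... to x -> w
                    stack.append(x)         # x may now exceed the budget

A single insertion may trigger a cascade of flips; the point of the amortised analysis is to bound the total number of reorientations over any update sequence.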


Karoliina Lehtinen (CNRS, Aix-Marseille University, LIS, Marseille, France):

A brief history of history-determinism

Most nondeterministic automata models are more expressive, or at least more succinct, than their deterministic counterparts; however, this comes at a cost, as deterministic automata tend to have better algorithmic properties. History-deterministic automata are an intermediate model that allows a restricted form of nondeterminism: all nondeterministic choices must be resolvable on-the-fly, with only the knowledge of the word prefix read so far—as opposed to general nondeterminism, which allows for guessing the future of the word. History-deterministic automata combine some of the algorithmic benefits of determinism with some of the increased power of nondeterminism, thus enjoying (some of) the best of both worlds. 
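
To make the on-the-fly requirement concrete, here is a toy illustration in Python; the automaton, the resolver, and all names are mine, not from the talk. Over finite words, history-determinism collapses to determinisability by pruning, so this only illustrates the definition: a resolver must fix each nondeterministic choice using nothing but the prefix read so far, and the run it builds must accept whenever the word is in the language.

    from itertools import product

    # Toy NFA over {a, b} for the language "contains at least one a".
    delta = {
        ("q0", "a"): {"q0", "qacc"},    # the only nondeterministic choice
        ("q0", "b"): {"q0"},
        ("qacc", "a"): {"qacc"},        # qacc is an accepting sink
        ("qacc", "b"): {"qacc"},
    }
    accepting = {"qacc"}

    def nfa_accepts(word):
        # Ordinary nondeterministic acceptance: track all reachable states.
        current = {"q0"}
        for letter in word:
            current = set().union(*(delta[(q, letter)] for q in current))
        return bool(current & accepting)

    def resolver(state, prefix, letter):
        # Resolves each choice using only the prefix read so far (here
        # the current state already summarises it): once an 'a' is
        # seen, commit to the accepting sink.
        choices = delta[(state, letter)]
        return "qacc" if "qacc" in choices else next(iter(choices))

    def resolver_run_accepts(word):
        # The single run built on-the-fly by the resolver.
        state, prefix = "q0", ""
        for letter in word:
            state = resolver(state, prefix, letter)
            prefix += letter
        return state in accepting

    # The resolver witnesses history-determinism on all short words.
    for n in range(6):
        for word in map("".join, product("ab", repeat=n)):
            assert resolver_run_accepts(word) == nfa_accepts(word)

The genuinely interesting history-deterministic automata live over infinite words, where an automaton can be history-deterministic without being determinisable by pruning.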

History-determinism, as it is understood today, has its roots in several independently invented notions: Kupferman, Safra and Vardi’s automata recognising tree languages derived from word languages [Ann. Pure Appl. Logic’06] (a notion later referred to as automata that are good-for-trees [ICALP’13]), Henzinger and Piterman’s good-for-games automata [CSL’06], and Colcombet’s history-deterministic automata, introduced in his work on regular cost-automata [ICALP’09]. In the ω-regular setting, where they were initially most studied, the notions of good-for-trees, good-for-games and history-determinism are equivalent, despite differences in their definitions. The key algorithmic appeal of these automata is that, like deterministic automata, they have good compositional properties. This makes them particularly useful for applications such as reactive synthesis, where composition of games and automata is at the heart of effective solutions.

Since then, history-determinism has received its fair share of attention, not least because of its relevance to synthesis. Indeed, it turns out to be a natural and useful form of nondeterminism more broadly, and can be generalised to all sorts of different automata models: alternating automata [CONCUR’19], pushdown automata [Log. Methods Comput. Sci.’22, MFCS’21], timed automata [CONCUR’22, RP’22], Parikh automata [CoRR’22], and quantitative automata [FSTTCS’21], to name a few. In each of these models, history-determinism offers some trade-offs between the power of nondeterminism and the algorithmic properties of determinism. In particular, depending on the model, history-deterministic automata can be either more expressive or more succinct than their deterministic counterparts, while retaining better algorithmic properties—in particular with respect to deciding universality, language inclusion and games—than fully nondeterministic automata.

The drive to extend history-determinism to more powerful automata models has also lead to a better understanding of the properties of these automata, of how they compare to related notions (such as good-for-games automata and determinisability by pruning), and of the various games and tools used to study them. 

This talk aims to give a broad introduction to the notion of history-determinism as well as an overview of some of the recent developments on the topic. It will also highlight some of the many problems that remain open. It is loosely based on a recent survey, written jointly with Udi Boker, which gives an informal presentation of what are, in our view, the key aspects of history-determinism [ACM SIGLOG News’22].


Moshe Y. Vardi (Rice University):

Logical Algorithmics: From Theory to Practice

The standard approach to algorithm development is to focus on a specific problem and develop for it a specific algorithm. Codd's introduction of the relational model in 1970 included two fundamental ideas: 

(1) Relations provide a universal data representation formalism, and 

(2) Relational databases can be queried using first-order logic. 

Realizing these ideas required the development of a meta-algorithm, which takes a declarative query and executes it with respect to a database. In this talk, I will describe this approach, which I call Logical Algorithmics, in detail, and explore its profound ramifications.
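
To illustrate what such a meta-algorithm looks like in miniature, here is a sketch in Python; it is my toy formulation, not the one from the talk. A single generic evaluator takes a database of relations plus any conjunctive query (a small fragment of first-order logic) and computes the answers: the evaluator is written once, and each new query is just data.

    def matches(binding, vars_, tup):
        # Try to extend `binding` so the atom's variables map to `tup`;
        # return the extended binding, or None on a clash.
        extended = dict(binding)
        for v, c in zip(vars_, tup):
            if extended.setdefault(v, c) != c:
                return None
        return extended

    def eval_conjunctive(db, head_vars, atoms):
        # One generic evaluator for all conjunctive queries: fold each
        # atom over the current partial assignments (a sequence of
        # natural joins), then project onto the head variables.
        bindings = [dict()]                 # start with the empty assignment
        for rel, vars_ in atoms:
            next_bindings = []
            for b in bindings:
                for tup in db[rel]:
                    e = matches(b, vars_, tup)
                    if e is not None:
                        next_bindings.append(e)
            bindings = next_bindings
        return {tuple(b[v] for v in head_vars) for b in bindings}

    # Example database and query (invented data):
    # grandparent(x, z) :- parent(x, y), parent(y, z)
    db = {"parent": {("alice", "bob"), ("bob", "carol")}}
    print(eval_conjunctive(db, ("x", "z"),
                           [("parent", ("x", "y")), ("parent", ("y", "z"))]))
    # prints {('alice', 'carol')}

The query is an ordinary data structure, so the same evaluator answers the grandparent query, a sibling query, or any other conjunctive query without new algorithm-specific code.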