## Abstracts

Finitary proof systems for Kozen's mu
Bahareh Afshari, Technische Universität Wien, Austria

Almost all modal logics are known to admit finite cut-free sequent calculi, with the modal mu-calculus being a notable exception. In this talk we will present several sound and complete axiomatisations for the mu-calculus and examine the eliminability of the cut rule in Kozen's original axiomatisation. Joint work with Graham Leigh.

The interplay between word equations with rational constraints and graph querying with path comparisons
Pablo Barcelo, Universidad de Chile, Chile

It is natural in some contexts to extend word equations with regular or rational constraints. Such constraints restrict the space of solutions, by imposing some pairs of variables to be witnessed by a pair of words in a given regular or rational language. As of late, word equations with such constraints have played a prominent role in understanding the complexity of evaluation for graph query languages with path comparisons. These languages find applications, e.g., in provenance, route-finding, social networks, and the semantic web.

In this tutorial, we present several results generated at the interplay of word equations with rational constraints and graph querying. While word equations with rational constraints are, in general, undecidable, the focus on "practical" rational relations -- e.g., prefix, subsequence, or suffix -- that graph querying imposes forces us to revisit such results and perform a more fine-grained analysis of the complexity of such equations. On the other hand, we apply known techniques from word equations with constraints to obtain bounds for graph query evaluation that would have been difficult to obtain otherwise.
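As a toy illustration of the setting (not one of the decision procedures from the tutorial), solutions of a small word equation under regular constraints can be found by bounded enumeration. The equation x·b = a·y and the constraints x ∈ a*, y ∈ b* are hypothetical examples chosen for the sketch:

```python
from itertools import product

def words(alphabet, max_len):
    """All words over the alphabet up to the given length."""
    for n in range(max_len + 1):
        for w in product(alphabet, repeat=n):
            yield "".join(w)

def solve(max_len=4):
    # Equation: x . "b" = "a" . y  with regular constraints x in a*, y in b*
    sols = []
    for x in words("ab", max_len):
        for y in words("ab", max_len):
            if x + "b" == "a" + y and set(x) <= {"a"} and set(y) <= {"b"}:
                sols.append((x, y))
    return sols

print(solve())  # [('a', 'b')]
```

Without the regular constraints, every x = a^n yields the solution y = a^(n-1)b; the constraint on y prunes all but one, which is exactly how such constraints shrink the solution space.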

Probabilistic couplings for privacy and security
Gilles Barthe, The IMDEA Software Institute, Spain

Probabilistic couplings are a powerful tool for analyzing the convergence of probabilistic processes. Recently, it has been observed that probabilistic couplings are closely tied to the probabilistic relational Hoare logics that have been used to reason about computational security of cryptographic constructions and differential privacy of data-mining algorithms. The tutorial will introduce couplings, outline their connection with probabilistic relational program logics, and present recent developments that further leverage this connection.

Distributed local strategies in broadcast networks
Nathalie Bertrand, Institut National de Recherche en Informatique et en Automatique (Inria), France

We study the problems of reaching a specific control state, or converging to a set of target states, in networks with a parameterized number of identical processes communicating via broadcast. To reflect the distributed aspect of such networks, we restrict our attention to executions in which all the processes must follow the same local strategy that, given their past performed actions and received messages, provides the next action to be performed. We show that the reachability and target problems under such local strategies are NP-complete, assuming that the set of receivers is chosen non-deterministically at each step. On the other hand, these problems become undecidable when the communication topology is a clique. However, decidability can be regained for reachability under the additional assumption that all processes are bound to receive the broadcast messages.

Programming constraint services with Z3
Nikolaj Bjorner, Microsoft Research, USA

Many program verification, analysis, testing and synthesis queries reduce to solving satisfiability of logical formulas, yet there are many applications where satisfiability, and optionally a model or a proof, is insufficient. Examples of useful additional information include interpolants, models that satisfy optimality criteria, solving quantified formulas using strategies, enumerating and counting solutions. This tutorial describes some of the additional services available in Z3, how to use them, and their underlying algorithms.
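One of the services mentioned above, enumerating solutions, rests on a simple blocking-clause loop: ask for a model, forbid it, and repeat. The sketch below illustrates that loop in pure Python with a brute-force stand-in solver rather than the actual Z3 API:

```python
from itertools import product

def sat(clauses, n):
    """Tiny stand-in solver: return a model of a CNF over n variables, or None.
    Literals are signed integers 1..n / -1..-n."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return list(bits)
    return None

def all_models(clauses, n):
    """Enumerate all models by repeatedly blocking the last one found."""
    models = []
    while True:
        m = sat(clauses, n)
        if m is None:
            return models
        models.append(m)
        # Blocking clause: at least one variable must differ from m.
        clauses = clauses + [[-(i + 1) if v else (i + 1) for i, v in enumerate(m)]]

# x1 OR x2 has three models over two variables
print(len(all_models([[1, 2]], 2)))  # 3
```

With a real solver the same loop is written with incremental `add` calls, so learned clauses are reused across iterations instead of restarting from scratch.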

Mikolaj Bojanczyk, University of Warsaw, Poland

The question is: what happens to Mazurkiewicz traces and their automata when we allow the alphabet to be infinite, but in a well-behaved way? The formal statement is about orbit-finite alphabets, in the sense of sets with atoms. This is not a well-researched question, so an answer (probably in a different syntax) might be known already.

Towards a regular theory of parameterized concurrent systems
Benedikt Bollig, Centre National de la Recherche Scientifique, France

We give an introduction to parameterized communicating automata (PCAs), where finite-state processes exchange messages via rendez-vous or through bounded FIFO channels. Unlike classical communicating automata, a given PCA can be run on any network topology of bounded degree. We present various Büchi-Elgot-Trakhtenbrot theorems for PCAs, which roughly read as follows: Let F be an existential MSO formula and T be any of the following topology classes: pipelines, ranked trees, grids, or rings. There is a PCA that is equivalent to F on all topologies from T. In the case where each process executes a bounded number of contexts (each context restricting communication in a suitable way), we can even show that PCAs are closed under complementation, are expressively equivalent to full MSO logic, and have a decidable emptiness problem.

The talk contains results from joint works with Paul Gastin, Akshay Kumar, and Jana Schubert.

Verification of concurrent data structures
Ahmed Bouajjani, University Paris Diderot, Paris 7, France

We address the issue of verifying the correctness of libraries of shared data structures (such as concurrent stacks, queues, etc.). Such a library provides a number of operations for the abstract manipulation of the data structure (e.g., push, pop, is-empty, etc.). The users of the library assume that these operations are executed atomically, in some interleaving order between operations issued by different threads. While implementations based on coarse-grained locking make it easy to ensure correctness (w.r.t. the abstract, sequential specification of the data structure), they are not satisfactory due to their poor performance. In general, efficient implementations instead employ lock-free synchronization techniques in order to maximise parallelism while minimizing contention between concurrent operations. However, these implementations are quite complex and very hard to get right. The conformance of these implementations to the abstract specification of the data structure is captured by the concept of linearizability. In this tutorial, we overview the decidability and complexity of the problem of verifying linearizability w.r.t. a sequential specification, and we provide reductions of this problem to reachability problems under various assumptions (on the implementation and on the specification). These reductions are presented following a generic schema that consists in defining monitors (state machines) for detecting linearizability violations that implementations may generate.

This tutorial is mainly based on joint work with Michael Emmi, Constantin Enea, and Jad Hamza.
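The notion of linearizability can be made concrete with a brute-force checker: a history is linearizable if some permutation of its operations respects the real-time order and replays correctly against the sequential specification (here, a stack). This naive exponential check is only an illustration of the definition, not one of the monitor-based reductions from the tutorial:

```python
from itertools import permutations

def legal(seq):
    """Replay a sequence of (op, val) pairs against the sequential stack spec."""
    stack = []
    for op, val in seq:
        if op == "push":
            stack.append(val)
        elif stack and stack[-1] == val:  # pop must return the top element
            stack.pop()
        else:
            return False
    return True

def linearizable(history):
    """history: list of (inv, res, op, val) tuples with real-time intervals."""
    n = len(history)
    for order in permutations(range(n)):
        # Respect real time: if a responds before b is invoked, a comes first.
        if any(history[a][1] < history[b][0] and
               order.index(a) > order.index(b)
               for a in range(n) for b in range(n)):
            continue
        if legal([history[i][2:] for i in order]):
            return True
    return False

# Two overlapping pushes followed by two pops: pops may return in LIFO order.
h = [(0, 2, "push", 1), (1, 3, "push", 2), (4, 5, "pop", 2), (6, 7, "pop", 1)]
print(linearizable(h))  # True
```

Swapping the two pop return values (with non-overlapping pushes) makes the history FIFO-shaped and hence not linearizable with respect to the stack specification.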

Static analysis: from finding bugs to building proofs
Franck Cassez, Macquarie University, Australia

In this talk, I'll discuss some current challenges in scalable automatic program verification techniques. I will present the trace abstraction refinement technique and how it can help in designing scalable static analysis tools. I will also address the challenges of building counter-examples and correctness proofs.

The commutativity problem of MapReduce programs

MapReduce is a popular programming model for data-parallel computation. In MapReduce, the reducer produces an output from a list of inputs. Due to the scheduling policy of the platform, the inputs may arrive at the reducer in different orders. The commutativity problem for reducers asks if the output of a reducer is independent of the order of its inputs. Although the problem is undecidable in general, the MapReduce programs used in practice are usually written for data analytics and thus have very simple data and control flow, which sheds light on solving some practical instances of the problem. Today I will talk about our current progress towards a complete solution to the commutativity problem.
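For small input lists, commutativity of a reducer can be checked naively by replaying every arrival order. The two reducers below are hypothetical examples chosen to contrast a commutative reducer with a non-commutative one:

```python
from itertools import permutations

def reducer_sum(values):
    # Commutative: the output is independent of the input order.
    total = 0
    for v in values:
        total += v
    return total

def reducer_first(values):
    # Not commutative: the output depends on which value arrives first.
    return values[0] if values else None

def commutative_on(reducer, values):
    """Check whether the reducer output is the same for every arrival order."""
    outputs = {reducer(list(p)) for p in permutations(values)}
    return len(outputs) == 1

print(commutative_on(reducer_sum, [3, 1, 2]))    # True
print(commutative_on(reducer_first, [3, 1, 2]))  # False
```

This only certifies commutativity on one concrete input; the decision problem in the abstract quantifies over all inputs, which is what makes it hard.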

Rational and irrational interpolants for polytopes
Dmitry Chistikov, University of Oxford, UK

Let S and T be two (bounded) polytopes in \mathbb R^m defined by systems of linear inequalities, S = { x : Ax<=b } and T = { x : Cx<=d }. An interpolant of S and T is a polytope I that is nested between S and T, i.e., S is contained in I, and I is contained in T.

We construct a pair of polytopes S and T with the following property: the interpolant with the smallest number of vertices is unique and has vertices with irrational coordinates, even though the inequalities which define S and T use only rational numbers (and so all vertices of S and T are rational). Our polytopes are 3-dimensional, and we show that in dimension 2 no such polytopes exist.

A strengthening of this result implies that, for a rational matrix, its nonnegative rank over the reals can be strictly smaller than its nonnegative rank over the rationals, which answers a question posed by Cohen and Rothblum in 1993.

Joint work with Stefan Kiefer, Ines Marusic, Mahsa Shirmohammadi, and James Worrell.

The complexity of regular abstractions of one-counter languages
Dmitry Chistikov, University of Oxford, UK

One-counter automata (OCA) are finite-state automata with a counter that supports increments, decrements, and tests for zero. They correspond to an intermediate class between regular and context-free languages and are suitable for modeling "counting" phenomena. However, reasoning about OCA is often intractable: for example, language equivalence is undecidable for nondeterministic OCA, and for deterministic OCA it took 40 years to prove the existence of short distinguishing words.

In this work, we study the computational and descriptional complexity of the following transformation: Given an OCA A, construct a nondeterministic finite automaton (NFA) B that recognizes an abstraction of the language L(A): its (1) downward closure, (2) upward closure, or (3) Parikh image. We find polynomial and quasi-polynomial algorithms for these tasks; it was previously unknown whether appropriate NFA of sub-exponential size exist.

Based on joint work with Mohamed Faouzi Atig, Piotr Hofman, K Narayan Kumar, Prakash Saivasan, and Georg Zetzsche.
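For plain NFAs (without the counter, so not the OCA case studied in the talk), the downward closure is easy to compute with a classical construction: add an epsilon edge parallel to every letter edge, so any letter may be skipped and the automaton accepts exactly the subsequences. A minimal sketch:

```python
def eps_closure(states, eps):
    """Saturate a state set under epsilon transitions."""
    stack, seen = list(states), set(states)
    while stack:
        p = stack.pop()
        for q in eps.get(p, ()):
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return seen

def accepts(word, start, final, delta, eps):
    """NFA membership with epsilon transitions, by subset simulation."""
    cur = eps_closure({start}, eps)
    for a in word:
        nxt = {q for p in cur for (b, q) in delta.get(p, ()) if b == a}
        cur = eps_closure(nxt, eps)
    return bool(cur & final)

# NFA for (ab)* : 0 -a-> 1 -b-> 0, accepting state 0
delta = {0: [("a", 1)], 1: [("b", 0)]}
# Downward closure: an epsilon edge parallel to every letter edge.
eps = {p: [q for (_, q) in ts] for p, ts in delta.items()}

print(accepts("ab", 0, {0}, delta, {}))    # True  (in (ab)* itself)
print(accepts("aab", 0, {0}, delta, {}))   # False
print(accepts("aab", 0, {0}, delta, eps))  # True  (subsequence of abab)
```

The point of the results above is that for OCA the analogous NFA is not obtained so directly, yet sub-exponential constructions exist.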

Nested words for order-2 pushdown systems
Aiswarya Cyriac, Chennai Mathematical Institute, India

We study linear-time model checking of collapsible higher-order pushdown systems (CPDS) of order 2 (manipulating a stack of stacks) against MSO and PDL (propositional dynamic logic with converse and loop) enhanced with push/pop matching relations. To capture these linear-time behaviours with matchings, we propose order-2 nested words. These graphs consist of a word structure augmented with two binary matching relations, one for each order of stack, which relate a push with its matching pops (or collapse) on the respective stack. Due to the matching relations, satisfiability and model checking are undecidable. Hence we propose an under-approximation, bounding the number of times an order-1 push can be popped. With this under-approximation, which still allows unbounded stack height, we get decidability of satisfiability and model checking for both MSO and PDL. The problems are ExpTime-complete for PDL.

This is a joint work with Paul Gastin and Prakash Saivasan.

On hierarchical communication topologies in the pi-calculus
Emanuele D'Osualdo, University of Oxford, UK

In this talk, I will present recent work on the automatic analysis of concurrent message-passing systems, modelled by pi-calculus terms. In this kind of system the number of processes and names is unbounded and the communication topology changes dynamically. We are interested in the shape invariants satisfied by the communication topology, and in the automatic inference of these invariants. We introduce the concept of a "hierarchical" system, a pi-term where the communication topology is shaped according to a finite forest T specifying hierarchical relationships between names. I will illustrate the design of a static analysis that proves a term hierarchical by means of a novel type system that enjoys decidable inference. The soundness proof of the type system employs a non-standard view of pi-calculus reactions. By also showing that the coverability problem for hierarchical terms is decidable, we obtain an expressive static fragment of the pi-calculus with decidable safety verification problems.

Topological self-stabilization with name-passing process calculi
Van Thinh Dao, National University of Singapore

Topological self-stabilization is the ability of a distributed system to have its nodes themselves establish a meaningful overlay network. Independent from the initial network topology, it converges to the desired topology via forwarding, inserting, and deleting links to neighboring nodes. We adapt a linearization algorithm, originally designed for a shared memory model, to asynchronous message-passing. We use an extended localized pi-calculus to model the algorithm and to formally prove its essential self-stabilization properties: closure and weak convergence for every arbitrary initial configuration, and strong convergence for restricted cases.

Formal reasoning for quantum programming languages
Yuxin Deng, East China Normal University, China

In the last decade, various models and programming languages for describing quantum computation have been introduced. The semantic foundation of quantum programming languages is being built. In this talk, we will review some recent results on generalizing traditional proof techniques such as coinduction and Hoare logic to reason about quantum programs.

Quantitative verification of parameterized systems
Rayna Dimitrova, Max Planck Institute, Germany

In distributed protocols, randomization is often used for symmetry breaking when identical processes must independently reach a coordinated state. Some randomized protocols, such as Rabin's choice coordination protocol, are not guaranteed to terminate with probability one, but provide weaker, quantitative termination guarantees. Their verification entails checking parameterized, and hence infinite-state, probabilistic models with respect to properties specified in probabilistic temporal logics. Due to the infinite state space, model-checking techniques cannot be directly applied, while techniques for infinite-state probabilistic systems are limited in terms of the specifications they can handle.

In this talk, I will present a deductive approach to the verification of countable-state systems against properties specified in probabilistic CTL*, on models featuring both nondeterministic and probabilistic choices. I will illustrate the effectiveness of the approach by application to Rabin's choice coordination protocol. This is joint work with Luis Maria Ferrer Fioriti, Holger Hermanns and Rupak Majumdar.

Tutorial on PAT
Jin Song Dong, National University of Singapore

In this tutorial, we will show how to develop a model checker for a language combining real-time and probabilistic features using the PAT (Process Analysis Toolkit) step-by-step, and show that it could take as short as a few weeks to develop your own model checker with reasonable efficiency. The PAT system is designed to facilitate development of customized model checkers. It has an extensible and modularized architecture to support new languages (and their operational semantics), new state reduction or abstraction techniques, new model checking algorithms, etc. Since its introduction 8 years ago, PAT has attracted more than 4000 registered users (from 900+ organisations in 72 countries) and has been applied to develop model checkers for 20 different languages.

Verifying temporal properties via dynamic program execution
Zhenhua Duan, Xidian University, China

This talk presents a unified model checking approach in which the program to be verified is written as a Modeling, Simulation and Verification Language (MSVL) program M and the desired property is specified by a Propositional Projection Temporal Logic (PPTL) formula P. Unlike the traditional model checking approach, the negation of the desired property is first translated into an MSVL program M'. Whether M violates P can then be checked by evaluating whether there exists a feasible execution of the combined MSVL program (M and M'). This check can be conducted efficiently with the MSVL compiler, named MSV. The proposed approach has been implemented in a tool called UMC4MSVL, which is capable of verifying real-world programs.

Limit-deterministic automata for probabilistic model checking
Javier Esparza, Technische Universität München, Germany

Limit-deterministic Büchi automata can replace deterministic Rabin automata in probabilistic model checking algorithms, and can be significantly smaller. We present a direct construction from an LTL formula to a limit-deterministic Büchi automaton. Our translation is compositional and has a clear logical structure. Moreover, due to its special structure, the resulting automaton can be used not only for qualitative, but also for quantitative verification of Markov Decision Processes, using the same model checking algorithm as for deterministic automata. This allows one to reuse existing efficient implementations of this algorithm without any modification. Our construction yields much smaller automata for formulas with deep nesting of modal operators and performs at least as well as the existing approaches on general formulas. Joint work with Stefan Jaax, Jan Kretinsky, and Salomon Sickert.

An empirical understanding of conflict-driven clause-learning SAT solvers
Vijay Ganesh, University of Waterloo, Canada

Conflict-driven clause-learning (CDCL) SAT solvers have deeply influenced software engineering and security research over the last two decades, thanks largely to the fact that these solvers can easily solve real-world constraints with millions of variables and clauses. This phenomenon has puzzled theoreticians and practitioners alike. It is widely believed that the industrial instances solved efficiently by SAT solvers are highly structured. Nevertheless, until recently there was little understanding of the structure of industrial instances or of the way solvers go about exploiting that structure.

In this talk, I will introduce CDCL SAT solvers, and the most important heuristics that power them. In addition, I will give an answer to the question of why SAT solvers are efficient on industrial instances. My answer is based on empirical discoveries my collaborators, students, and I have made via a rigorous and systematic experimental study of CDCL SAT solvers and industrial instances. I will conclude with a mathematical model that incorporates our empirical discoveries.
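For reference, the DPLL core that CDCL solvers extend (with clause learning, restarts, and activity-based branching heuristics such as VSIDS) can be sketched in a few lines. This is a naive illustration, not a model of an industrial solver:

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign the literal of any unit clause; None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause
                          if l not in assignment and -l not in assignment]
            if not unassigned:
                return None  # conflict: clause falsified
            if len(unassigned) == 1:
                assignment = assignment | {unassigned[0]}
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    assignment = unit_propagate(clauses, assignment)
    if assignment is None:
        return None
    free = {abs(l) for c in clauses for l in c} - {abs(l) for l in assignment}
    if not free:
        return assignment
    v = min(free)  # naive branching; CDCL replaces this with VSIDS etc.
    return dpll(clauses, assignment | {v}) or dpll(clauses, assignment | {-v})

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(sorted(dpll([[1, 2], [-1, 2], [-2, 3]])))  # [1, 2, 3]
```

CDCL's decisive additions are the learned clauses derived from conflict analysis, which prune the search in a way this chronological backtracking cannot.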

Population protocols and predicates

In population protocols, identical, anonymous, finite-state processes interact pairwise through rendezvous synchronization. Angluin et al. (PODC 2004) introduced population protocols as a distributed computation model. In that context, an interesting subclass of protocols are the ones computing predicates. Intuitively, a population protocol computes a predicate Φ(N) as follows: instantiate the protocol with N processes and let them interact until they reach a stable unanimity on value true or false. I will present recent results relative to three natural questions:
- does this protocol compute a predicate?
- does this protocol compute this predicate?
- what predicate does this protocol compute?
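A minimal example of a protocol computing a predicate is the one-way epidemic, which computes Φ = "at least one agent starts in state 1" via the single transition (1, 0) -> (1, 1). The simulation below, with a seeded random scheduler, illustrates convergence to stable unanimity:

```python
import random

def run_protocol(config, steps=10_000, seed=0):
    """Simulate the epidemic protocol: transition (1, 0) -> (1, 1),
    all other pairs are no-ops. Returns the final configuration."""
    rng = random.Random(seed)
    agents = list(config)
    for _ in range(steps):
        i, j = rng.sample(range(len(agents)), 2)  # pick an ordered pair
        if agents[i] == 1 and agents[j] == 0:
            agents[j] = 1
    return agents

# Stable unanimity on 1 iff at least one agent was initialised to 1.
print(set(run_protocol([1, 0, 0, 0, 0])))  # {1}
print(set(run_protocol([0, 0, 0, 0, 0])))  # {0}
```

Under the usual uniform random scheduler this protocol converges with probability 1; the questions in the abstract ask such things about arbitrary given protocols, which is far harder.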

First steps towards probabilistic higher-order model-checking
Charles Grellois, University Paris 7, France

Intersection types have been used successfully to model-check MSO properties, expressed by alternating parity tree automata, over the infinite trees finitely represented by higher-order recursion schemes. In our work with Melliès, we have related the original intersection type systems by Kobayashi and by Kobayashi and Ong to models of linear logic in which the denotation of a recursion scheme determines whether the infinite tree it represents is accepted by the automaton of interest.

In this talk, we introduce our ongoing work with Dal Lago, in which we aim at extending this approach to a probabilistic setting in two ways. In the first, we refine the approach of Kobayashi and Ong using non-idempotent intersection types, with the aim of capturing a relaxed version of the parity condition in which we allow a negligible set of losing branches. In the second, we use higher-order recursion schemes to represent programs with probabilistic choices, and define a probabilistic variant of alternating tree automata (without parity condition so far) allowing us to check whether a property is satisfied with a given probability. We explain how the intersection type systems used in "traditional" higher-order model-checking can be accommodated to fit this probabilistic setting, and discuss our preliminary investigations towards a connection with linear logic and its models. These investigations rely on a decomposition of the automaton's action using Melliès' tensorial logic with effects.

Formalisation and static analysis of CSS3 selectors
Matthew Hague, Royal Holloway, University of London, UK

Cascading Style Sheets (CSS) is the de facto language for styling webpages, developed and maintained by the World Wide Web Consortium (W3C). Together with HTML and JavaScript, CSS has played an integral role in modern web technologies. At the heart of CSS are CSS selectors, which are formal specifications of which nodes in a given document object model (DOM) are to be selected for declarations of certain visual properties (e.g. set colour to red). Despite their apparent simplicity, CSS selectors allow many complex features, e.g., tree traversals, string constraints on attributes, constraints for counting the number of children, and restricted negations. This talk describes the first precise formalisation of the CSS Level 3 (CSS3) selector language, currently the stable version of the language. Unlike previous formalisations, ours attempts to capture CSS3 in its entirety, as specified in the latest W3C Recommendation. We then study two fundamental static analysis problems which have found applications in the optimisation of CSS files: (1) satisfiability: given a CSS selector, decide whether it can ever select a node in some DOM tree; (2) intersection: given two CSS selectors, decide whether they can ever select the same node in some DOM tree. The second problem, in particular, has immediate applications in the minimisation of CSS files by refactoring similar styling rules. We show that both problems are decidable and, in fact, NP-complete. Furthermore, we show that both problems can be reduced in polynomial time to satisfiability of quantifier-free Presburger formulas, for which highly optimised SMT solvers are available.

Constrained dynamic tree networks
Matthew Hague, Royal Holloway, University of London, UK

Bouajjani et al. introduced Constrained Dynamic Pushdown Networks, in which pushdown systems may form tree networks by spawning children and monitoring their progress subject to a "stability constraint". They showed that the set of backwards reachable configurations is both regular and computable.

We show that this model can be generalised in several ways. First, the nodes of the tree need not contain pushdown systems, but may contain any system for which the alternating reachability problem is decidable. Second, we show that the notion of stability can be relaxed to allow a finite number of "unstable" transitions. Finally, we permit a node to monitor its parent in a limited fashion.

We show that backwards reachability of such systems remains decidable. Moreover, when the backwards reachability sets of the node systems are regular, we can also compute a regular representation of the backwards reachability set of the system as a whole. In particular, our result can be applied to networks of collapsible pushdown systems.

The benefits of continuous over-approximations for vector addition systems with states
Christoph Haase, Centre National de la Recherche Scientifique, France

Vector addition systems with states (VASS), equivalently known as Petri nets, are a well-suited mathematical model in order to reason about the correctness of concurrent shared-memory programs. They are, however, algorithmically difficult to tackle due to EXPSpace-hardness of many of their decision problems. One way to circumvent those high computational costs is to consider continuous over-approximations of VASS and Petri nets. In continuous VASS and Petri nets, transitions can be fired a rational number of times, and consequently places can contain a rational number of tokens. This brings down the computational complexity of many decision problems, making many of them even polynomial-time decidable. In this talk, I will review the state-of-the-art for such continuous over-approximations, and demonstrate their usefulness in the applied verification of concurrent shared-memory programs.

Based on joint work with M. Blondin (Montreal), A. Finkel (Cachan) and

A state equation for Petri nets with unordered data
Piotr Hofman, Centre National de la Recherche Scientifique and École Normale Supérieure de Cachan, France

The state equation for Petri nets is one of the most basic partial tests for the reachability relation. Solving it is equivalent to checking whether a given vector in Z^n can be expressed as a sum of vectors from a given set of vectors (the effects of transitions). During the talk, we will define Petri nets with unordered data and the state equation for them. Next, we will show that it can be solved in NP.

This is joint work with Jerome Leroux and Patrick Totzke.
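For ordinary Petri nets (without data), the state equation m0 + C·x = m can be tested by bounded brute force over firing-count vectors, as in this small sketch (a real solver would use integer linear programming instead):

```python
from itertools import product

def state_equation_solvable(m0, target, effects, bound=5):
    """Search for x in N^t with m0 + sum_i x_i * effects[i] = target,
    trying all firing-count vectors with entries up to `bound`."""
    places = len(m0)
    for x in product(range(bound + 1), repeat=len(effects)):
        m = [m0[p] + sum(xi * e[p] for xi, e in zip(x, effects))
             for p in range(places)]
        if m == list(target):
            return x
    return None

# Two places; t1 moves a token from place 0 to place 1, t2 creates a token on place 0.
effects = [(-1, 1), (1, 0)]
print(state_equation_solvable((1, 0), (1, 2), effects))  # (2, 2)
```

A solution of the state equation is necessary but not sufficient for reachability, which is exactly why it serves as a partial test.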

Higher-order Büchi types
Martin Hofmann, Ludwig-Maximilians-Universität München, Germany

In previous work [HC] we demonstrated how to obtain an abstract lattice in the sense of abstract interpretation from a given Büchi word automaton, and hence, via the known connection between abstract interpretation and types, a type-and-effect system capable of expressing infinitary trace properties for a first-order language with recursive procedures. It has since been an open problem to generalise this type system or similar ones to higher-order functions without going via tree properties, as is done in higher-order model checking. Here we report on recent progress on this question (joint work with Jeremy Ledent).

[HC] Martin Hofmann, Wei Chen: Abstract interpretation from Büchi automata. Proc. LICS-CSL 2014.

String constraints for verification
Lukas Holik, Brno University of Technology, Czech Republic

Motivated by problems such as the detection of security vulnerabilities in web applications, we present a decision procedure for a logic that combines (i) word equations over string variables denoting words of arbitrary lengths, together with (ii) constraints on the lengths of words, and (iii) the regular languages to which words belong. Decidability of this general logic is still open. Our procedure is sound for the general logic, and a decision procedure for a particularly rich fragment that restricts the form in which word equations are written. In contrast to many existing procedures, our method does not make assumptions about the maximum length of words. We have developed a prototypical implementation of our decision procedure, and integrated it into a CEGAR-based model checker for the analysis of programs encoded as Horn clauses. Our tool is able to automatically establish the correctness of several programs that are beyond the reach of existing methods.

Rare events for statistical model checking
Cyrille Jegourel, National University of Singapore

This talk is devoted to the verification of rare properties in statistical model checking. We present a state-of-the-art importance sampling algorithm for models described by a set of guarded commands. Lastly, we motivate the use of importance splitting for statistical model checking and set up an optimal splitting algorithm. Both methods pursue a common goal to reduce the variance of the estimator and the number of simulations. Nevertheless, they are fundamentally different, the first tackling the problem through the model and the second through the properties.
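The idea behind importance sampling can be illustrated on a textbook rare event: estimating the probability of n heads in a row with a fair coin by sampling a biased coin and reweighting with the likelihood ratio. This is a standard pedagogical example, not one of the guarded-command models from the talk:

```python
import random

def rare_event_is(n=10, q=0.5, b=0.9, runs=10_000, seed=1):
    """Estimate p = P(n heads in a row) for a coin with heads prob q,
    by sampling a biased coin (heads prob b) and reweighting."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        flips = [rng.random() < b for _ in range(n)]
        if all(flips):
            # Likelihood ratio of the all-heads path: (q/b)^n
            total += (q / b) ** n
    return total / runs

est = rare_event_is()
print(est)  # close to 0.5**10 ≈ 0.000977
```

With naive Monte Carlo, most of the 10,000 runs would never hit the event; biasing the dynamics and correcting with the likelihood ratio keeps the estimator unbiased while sharply reducing its variance, which is the common goal of both methods in the talk.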

QBF solving and certification
Roland Jiang, National Taiwan University, Taiwan

In this tutorial we will introduce Quantified Boolean Formulas (QBFs). In particular, QBF representations, interpretations, solving techniques, and certification methods will be covered. We will also give examples of how QBF can be applied in synthesis and verification applications.
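The semantics of QBF can be stated in a few lines: a naive evaluator expands universal quantifiers with `all` and existential ones with `any`. A minimal sketch (exponential, unlike the solving techniques covered in the tutorial):

```python
def eval_qbf(prefix, matrix, env=()):
    """Evaluate a QBF given as a quantifier prefix [('forall'|'exists', var), ...]
    and a matrix: a function from a variable assignment (dict) to bool."""
    if not prefix:
        return matrix(dict(env))
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, matrix, env + ((v, b),)) for b in (False, True))
    return all(branches) if q == "forall" else any(branches)

# forall x exists y. (x XOR y)  -- true: pick y = not x
print(eval_qbf([("forall", "x"), ("exists", "y")],
               lambda a: a["x"] != a["y"]))  # True
# exists y forall x. (x XOR y)  -- false: no single y works for both x
print(eval_qbf([("exists", "y"), ("forall", "x")],
               lambda a: a["x"] != a["y"]))  # False
```

The two calls also illustrate why quantifier order matters, which is what separates QBF (PSPACE-complete) from plain SAT.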

Compiling untyped Lambda Calculus to lower-level code by game semantics and partial evaluation
Neil Jones, University of Copenhagen, Denmark

What: Any expression M in ULC (the untyped Lambda Calculus) can be compiled into a rather low-level language we call LLL, whose programs contain none of the traditional implementation devices for functional languages: environments, thunks, closures, etc. A compiled program is first-order functional and has a fixed set of working variables, whose number is independent of M. The generated LLL code in effect traverses the subexpressions of M.

How: We apply the techniques of game semantics to ULC, but take a more operational viewpoint that uses much less mathematical machinery than traditional presentations via game semantics. Further, ULC is compiled into LLL by partially evaluating a traversal algorithm for ULC.

Joint work with Daniil Berezun, JetBrains, St Petersburg, Russia.

Expected run-times of probabilistic programs
Benjamin Kaminski, RWTH Aachen University, Germany

We present a wp-style calculus for obtaining bounds on the expected run-time of probabilistic programs. Its applications include determining the (possibly infinite) expected termination time of a probabilistic program and proving positive almost-sure termination: does a program terminate with probability one in finite expected time? We provide several proof rules for bounding the run-time of loops, and prove the soundness of the approach with respect to a simple operational model. We analyze the expected run-time of some example programs, including the coupon collector's problem.
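The coupon collector example mentioned above has expected run-time n·H_n draws; a quick seeded simulation (not the wp-style calculus itself, which would prove this bound) matches the closed form empirically:

```python
import random

def coupon_collector_runs(n=10, runs=2000, seed=0):
    """Average number of uniform draws until all n coupons have been seen."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(rng.randrange(n))
            draws += 1
        total += draws
    return total / runs

# Expected value: n * H_n = 10 * (1 + 1/2 + ... + 1/10) ≈ 29.29
expected = 10 * sum(1 / k for k in range(1, 11))
print(round(coupon_collector_runs(), 1))  # close to 29.3
```

The calculus derives such bounds compositionally from the loop structure, without any sampling.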

Tutorial on the probabilistic model checking landscape
Joost-Pieter Katoen, RWTH Aachen University, Germany

Randomization is a key element in sequential and distributed computing. Reasoning about randomized algorithms is highly nontrivial. In the 1980s, this initiated the first proof methods, logics, and model-checking algorithms. The field of probabilistic verification has developed considerably since then. We survey the algorithmic verification of probabilistic models, in particular probabilistic model checking. We provide an informal account of the main models, the underlying algorithms, applications from reliability and dependability analysis and beyond, and describe recent developments towards abstraction and automated parameter synthesis.

Run time analysis of probabilistic programs
Joost-Pieter Katoen, RWTH Aachen University, Germany

In this talk, I'll present weakest-precondition style reasoning for determining (bounds on) the expected run-time of probabilistic programs. Its applications include determining the (possibly infinite) expected termination time of a probabilistic program and proving positive almost-sure termination: does a program terminate with probability one in finite expected time?

I'll present several proof rules for bounding the run-time of loops, and prove the soundness of the approach with respect to a simple operational model. The approach is shown to be a conservative extension of Nielson's approach for reasoning about the run-time of deterministic programs.

I'll discuss its extension towards treating recursive probabilistic programs, and present a Sherwood variant of binary search and the coupon collector benchmark.

« Back...

Scalable parameter synthesis in Markov models
Joost-Pieter Katoen, RWTH Aachen University, Germany

A major practical obstacle is that probabilistic model checking assumes that all probabilities (rates) in Markov models are a priori known. However, at early development stages, certain system quantities such as fault rates, reaction rates, packet loss ratios, etc. are often not, or at best only partially, known. In such cases, parametric probabilistic models can be used, where transition probabilities are specified as arithmetic expressions over real-valued parameters.

This talk considers the parameter synthesis problem: given a finite-state parametric Markov model, what are the parameter values for which a given (conditional) reachability property exceeds a given threshold? Put differently, which probabilities in the system are tolerable such that the overall probability of reaching some critical states stays below a given threshold like 10^-8? We present an exact approach using SMT techniques, and an approximate technique using parameter lifting.
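
To make the question concrete, here is a small illustration of our own (not the SMT or parameter-lifting machinery from the talk): in a tiny parametric chain where the system succeeds with probability p and otherwise retries with probability q, the reachability probability is the rational function p / (1 - (1-p)q), obtained by solving P = p + (1-p)qP; one can then ask for which parameter values it stays below a threshold.

```python
def reach_prob(p, q):
    """Probability of eventually reaching 'goal' in a tiny parametric chain:
    from 'init', go to 'goal' with probability p; otherwise retry (back to
    'init') with probability q, or 'fail' with probability 1-q.
    Solving P = p + (1-p)*q*P yields a rational function of p and q."""
    return p / (1.0 - (1.0 - p) * q)

def safe_region(threshold, steps=10):
    """Grid-scan the parameter space for points where reachability stays
    below the threshold (a crude stand-in for parameter lifting, which
    bounds the function over whole parameter regions at once)."""
    pts = []
    for i in range(1, steps):
        for j in range(steps):
            p, q = i / steps, j / steps
            if reach_prob(p, q) < threshold:
                pts.append((p, q))
    return pts

if __name__ == "__main__":
    print(reach_prob(0.5, 0.0))  # no retry: just p = 0.5
    print(reach_prob(0.5, 1.0))  # always retry on failure: goal reached a.s.
```

Parameter lifting replaces this naive grid scan by sound bounds on whole parameter regions, which is what makes the approach scale.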

« Back...

On two notions of higher-order model checking
Naoki Kobayashi, The University of Tokyo, Japan

There are two different higher-order extensions of finite state model checking in the literature. One is the model checking of higher-order recursion schemes (HORS), where the language for describing models is extended to higher-order, and the other is the higher-order modal fixpoint logic (HFL) model checking, where the logic for describing specifications is extended to higher-order. We reveal a close relationship between these different notions of "higher-order" model checking, by showing mutual, arguably natural translations between them.
This is joint work with Etienne Lozes and Florian Bruse.

« Back...

Using graph decompositions to verify concurrent recursive programs
Narayan Kumar Krishnan, Chennai Mathematical Institute, India

Concurrent recursive programs (even over a finite data domain) are Turing-powerful. One way to circumvent this is to identify decidable under-approximations.

The behaviours of concurrent recursive programs can be described as graphs where the nodes represent events and the edges denote communication or sequencing. An underapproximation identifies a collection of graphs representing the set of permitted behaviours.

The theory of graph decompositions provides measures on the structure of graphs (e.g. tree-width, clique-width, path-width, split-width, and so on). We examine the connection between decidability of under-approximations and measures on the behaviours they admit.

« Back...

Graph decompositions and the verification of concurrent recursive programs
Narayan Kumar Krishnan, Chennai Mathematical Institute, India

Concurrent recursive programs (even over a finite data domain) are Turing-powerful. One way to circumvent this is to identify decidable under-approximations.

The behaviours of concurrent recursive programs can be described as graphs where the nodes represent events and the edges denote communication or sequencing. An underapproximation identifies a collection of graphs representing the set of permitted behaviours.

The theory of graph decompositions provides measures on the structure of graphs (e.g. tree-width, clique-width, path-width, split-width, and so on). We examine the connection between decidability of under-approximations and measures on the behaviours they admit.

« Back...

Solving deductive games
Antonin Kucera, Masaryk University, Czech Republic

We propose a general framework for modelling and solving deductive games, where one player selects a secret code and the other player strives to discover this code using a minimal number of allowed experiments that reveal some partial information about the code. The framework is implemented in a software tool Cobra, and its functionality is demonstrated by producing new results about existing deductive games.
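
As a flavour of what "solving" means here, a standard one-step heuristic for deductive games is to choose the experiment whose worst-case answer leaves the fewest codes consistent. The sketch below (a simplified illustration with made-up feedback, not Cobra's algorithm) shows that minimax step for a toy Mastermind-like game:

```python
from itertools import product

def feedback(guess, code):
    """Number of positions where guess and code agree (a simplified
    Mastermind-style answer; real games give richer feedback)."""
    return sum(g == c for g, c in zip(guess, code))

def best_guess(candidates):
    """Pick the guess whose worst-case feedback class leaves the fewest
    candidate codes: a one-step minimax over experiments."""
    def worst_case(guess):
        classes = {}
        for code in candidates:
            fb = feedback(guess, code)
            classes[fb] = classes.get(fb, 0) + 1
        return max(classes.values())  # size of the largest surviving class
    return min(candidates, key=worst_case)

if __name__ == "__main__":
    codes = list(product(range(3), repeat=2))  # secret codes over {0,1,2}^2
    print(best_guess(codes))
```

Tools like Cobra additionally exploit symmetries of the game to prune the search over experiment sequences, not just single steps.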

« Back...

Soundness in negotiations
Denis Kuperberg, Technical University of Munich, Germany

Negotiations are a formalism for describing multiparty distributed cooperation. Alternatively, they can be seen as a model of concurrency with synchronized choice as communication primitive. Well-designed negotiations must be sound, meaning that, whatever its current state, the negotiation can still be completed. The present work investigates the complexity of deciding soundness for different classes of negotiations, as well as the design of efficient algorithms for deciding properties (called race and omission) of sound negotiations. These algorithms can be applied to the analysis of workflow nets with data. This is joint work with Javier Esparza, Anca Muscholl and Igor Walukiewicz.

« Back...

Tutorial on PRISM
Marta Kwiatkowska, University of Oxford, UK

Probabilistic model checking is an automated method for establishing the correctness of probabilistic system models against temporal logic specifications. The techniques have been implemented in the probabilistic model checker PRISM (www.prismmodelchecker.org) and applied to analyse and detect flaws in several real-world systems, across a wide range of application domains. In this tutorial, we give a brief overview of probabilistic model checking, focusing on the model of Markov decision processes, and briefly summarise recent developments of the tool, including parametric models, strategy generation, multi-objective properties and Pareto front computation. In addition, the tutorial will give an introduction to PRISM-games (http://www.prismmodelchecker.org/games/), an extension of PRISM for verification and strategy synthesis for stochastic multi-player games.

« Back...

Satisfiability modulo heap-based programs
Quang Loc Le, Singapore University of Technology and Design

In this work, we present a semi-decision procedure for a fragment of separation logic with user-defined predicates and Presburger arithmetic. To check the satisfiability of a formula, our procedure iteratively unfolds the formula and examines the derived disjuncts. In each iteration, it searches for a proof of either satisfiability or unsatisfiability. Our procedure is further enhanced with automatically inferred invariants as well as detection of cyclic proofs. We also identify a syntactically restricted fragment of the logic for which our procedure is terminating and thus complete. This decidable fragment is relatively expressive, as it can capture a range of sophisticated data structures with non-trivial pure properties, such as size, sortedness and near-balancedness. We have implemented the proposed solver and a new system for verifying heap-based programs. We have evaluated our system on benchmark programs from a software verification competition.

« Back...

Finitary proof systems for Kozen's mu
Graham Leigh, University of Gothenburg, Sweden

Almost all modal logics are known to admit finite cut-free sequent calculi with modal mu-calculus being a notable exception. In the talk we will present several sound and complete axiomatisations for the mu-calculus and examine the eliminability of the cut-rule in Kozen's original axiomatisation. Joint work with Bahareh Afshari.

« Back...

Nested antichains for WS1S
Ondrej Lengal, Brno University of Technology, Czech Republic

We propose a novel approach for coping with alternating quantification as the main source of nonelementary complexity of deciding WS1S formulae. Our approach is applicable within the state-of-the-art automata-based WS1S decision procedure implemented, e.g., in MONA. The way in which the standard decision procedure processes quantifiers involves determinization, with its worst case exponential complexity, for every quantifier alternation in the prefix of a formula. Our algorithm avoids building the deterministic automata; instead, it constructs only those of their states needed for (dis)proving validity of the formula. It uses a symbolic representation of the states, which have a deeply nested structure stemming from the repeated implicit subset construction, and prunes the search space by a nested subsumption relation, a generalization of the one used by the so-called antichain algorithms for handling nondeterministic automata. We have obtained encouraging experimental results, in some cases outperforming MONA by several orders of magnitude.
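
The subsumption pruning underlying antichain algorithms can be illustrated in isolation: a state set need not be explored if an already-seen set subsumes it, so only the ⊆-minimal sets are kept. A minimal sketch of that data structure (plain subset subsumption here; the relation used in our procedure is nested):

```python
class Antichain:
    """Maintains only the subset-minimal sets among those inserted,
    mirroring the subsumption pruning used in antichain algorithms."""

    def __init__(self):
        self.elements = []  # list of frozensets, pairwise incomparable

    def insert(self, s):
        s = frozenset(s)
        # s is subsumed if some stored set is a subset of it
        if any(t <= s for t in self.elements):
            return False
        # otherwise drop every stored set that s subsumes, then keep s
        self.elements = [t for t in self.elements if not s <= t]
        self.elements.append(s)
        return True

if __name__ == "__main__":
    ac = Antichain()
    ac.insert({1, 2, 3})
    ac.insert({1, 2})      # subsumes {1, 2, 3}, which is discarded
    ac.insert({1, 2, 4})   # subsumed by {1, 2}: rejected
    print(ac.elements)     # [frozenset({1, 2})]
```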

« Back...

Fully automated shape analysis based on forest automata
Ondrej Lengal, Brno University of Technology, Czech Republic

Forest automata (FAs) have recently been proposed as a tool for shape analysis of complex heap structures [1]. FAs encode sets of tree decompositions of heap graphs in the form of tuples of tree automata. In order to represent complex heap graphs, the framework allows one to provide user-defined FAs (called boxes) that encode repetitive graph patterns of shape graphs, to be used as alphabet symbols of other, higher-level FAs. Later, we proposed a novel technique for automatically learning the FAs to be used as boxes, which avoids the need to provide them manually [2]. The result was an efficient, fully automated analysis that could handle data structures as complex as skip lists. In the most recent contribution [3], we extended the framework with constraints between data elements of nodes in the heaps represented by FAs, modifying the abstract interpretation used, so as to allow us to verify programs that depend on ordering relations among data values. We evaluated our approach on programs manipulating different flavours of lists (singly/doubly linked, circular, nested, . . .), trees, skip lists, and their combinations. The experiments show that our approach is not only fully automated and rather general, but also quite efficient.

« Back...

Constrained validity and querying uncertain data: exact and approximate solutions
Leonid Libkin, University of Edinburgh, UK

Answering queries over complete databases amounts to evaluating logical formulae over finite structures. When we have uncertain data, a database may represent many different complete databases, and one needs to check if a formula is valid in all of them, to be sure about the answer. In other words, one needs to solve a validity problem, modulo some constraints. In general, this problem is of high complexity (or undecidable) even for simple queries. In this tutorial, I shall describe two ways around this issue. First, quite often constrained validity can be reduced to model checking. There is a close connection between such reductions and homomorphism preservation theorems in model theory. Even when this reduction is not possible, it is possible to devise efficient approximation schemes that not only exhibit good theoretical complexity but also behave well in practice.

« Back...

Automatic compositional verification: modern approaches
Shang-Wei Lin, Nanyang Technological University

Model checking suffers from the state space explosion problem. Compositional verification techniques such as assume-guarantee reasoning (AGR) can help to alleviate the problem. However, it is challenging to construct the assumption required in AGR automatically, which usually requires human creativity and experience. In this talk, we would like to introduce modern approaches to automate compositional verification based on AGR, where the assumptions are automatically generated based on learning techniques as well as interpolations.

« Back...

Proving liveness in parameterised probabilistic concurrent systems
Anthony Widjaja Lin, Yale-NUS College and National University of Singapore

We consider the problem of proving liveness in parameterised families of probabilistic concurrent systems, i.e., showing that every instance satisfies a liveness specification under a class of schedulers. A parameterised family defines an infinite-state system: for each number n, the family contains an instance with n finite-state processes. Examples of such systems include standard probabilistic distributed protocols, e.g., the Lehmann-Rabin protocol for the dining philosophers problem and the randomised self-stabilising protocol of Israeli and Jalfon. The parameterised verification problem is to prove that *every* member of the family satisfies the specification. In contrast to safety, the parameterised verification of liveness is still considered extremely challenging, especially in the presence of probability in the model.

In this talk I will present a symbolic framework based on "regular model checking" for proving such a property, and an automatic technique for producing "regular proofs" of liveness for such systems within the framework, based on CEGAR/learning and SAT solvers. The technique has been implemented and is able to prove liveness fully automatically for well-known randomised distributed protocols, including the Lehmann-Rabin randomised dining philosophers protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon protocol). Part of the talk is based on the CAV'16 paper "Liveness of Randomised Parameterised Systems under Arbitrary Schedulers" with Philipp Ruemmer.

« Back...

String solving with word equations and transducers: towards a logic for analysing mutation XSS
Anthony Widjaja Lin, Yale-NUS College and National University of Singapore

String solving (a.k.a. SMT over strings) for detecting security vulnerabilities (especially those related to SQL injections and cross-site scripting, a.k.a. XSS) in web applications has been the subject of many papers in the past 6-7 years. In this talk, we will argue that the existing string solving frameworks are not sufficient for analysing real-world XSS vulnerabilities. In particular, they do not accommodate constraints that allow both string concatenations and transducers; the latter can capture sanitisation functions (e.g. htmlescape) and implicit browser transductions (e.g. innerHTML). Unfortunately, naively combining concatenations and transducers easily results in undecidability. In this talk, we will present a fragment of the string logic, called the straight-line fragment, that allows both concatenations and transductions, whose satisfiability problem is still decidable. We show that the fragment is sufficiently expressive in that it can encode many practical examples where XSS arises, which furthermore cannot be encoded in other string solving frameworks. We will mention a few extensions of the straight-line fragment (e.g. adding length constraints, and disequality) that preserve decidability. A number of future challenges will be mentioned at the end of the talk.
The talk is based on my recent POPL'16 paper (joint with Pablo Barcelo).

« Back...

Modelling and analysis of collective adaptive systems
Michele Loreti, Università degli Studi di Firenze, Italy

Modern software systems typically consist of massive numbers of components, featuring complex interactions among components and with humans and other systems. Each component in the system may exhibit autonomic behaviour depending on its properties, objectives and actions. Decision-making in such systems is complicated, and interaction between the components may introduce new and sometimes unexpected behaviours. This class of systems is usually referred to as collective adaptive systems (CAS).
CAS present a significant research challenge in terms of both representation and reasoning about their behaviour. The pervasive yet transparent nature of the applications developed in this paradigm makes it of paramount importance that their behaviour can be thoroughly assessed during their design, prior to deployment, and throughout their lifetime. Indeed, their adaptive nature makes modelling essential, and models play a central role in driving their adaptation. Moreover, the analysis should encompass both functional and non-functional aspects of behaviour. Thus it is vital that we have robust modelling techniques available which are able to describe such systems and to reason about their behaviour in both qualitative and quantitative terms.
To move towards this goal, it is important to develop a theoretical foundation for CAS that will help in understanding their distinctive features. From the point of view of language designers, the challenge is to devise appropriate abstractions and linguistic primitives to deal with the large dimension of such systems, to guarantee adaptation to (possibly unpredicted) changes of the working environment, to take into account evolving requirements, and to control the emergent behaviours resulting from complex interactions.
Attribute-based communication has recently been introduced as a new interaction paradigm to model typical interactions in CAS: involved agents are equipped with a set of attributes, while the participants of an interaction are identified by predicates over attributes. In this talk we will first introduce attribute-based communication; then we will show how it impacts the general process of CAS design. We will also show how this paradigm has been integrated into process specification languages and how these can be used to support specification, modelling, analysis and monitoring of CAS.

« Back...

Algorithmic analysis for parameterized systems
Rupak Majumdar, Max Planck Institute for Software Systems, Germany

Parameterized verification is the problem of checking that every member in an infinite family of systems satisfies a given specification. Such problems arise naturally in the analysis of concurrent protocols, where an algorithm is designed to work no matter how many processes participate; in the analysis of concurrent programs with thread creation; in models arising in chemistry or biology; and so on. A parameterized verification problem is usually described by specifying a model that generates the systems in the family. For example, one can describe parameterized finite-state protocols by giving a template automaton and a communication pattern (broadcast or shared variable or rendezvous). The general parameterized verification problem is undecidable, even if each instance is finite state. However, there are several models of parameterized systems, such as Petri nets, broadcast protocols, population protocols, etc., for which verification is decidable. These models are inspired by practical application domains such as asynchronous programming, biological models, etc. In this tutorial, we will describe decidability and complexity results on parameterized systems, and some useful heuristics in case the original problems are not decidable.
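
For one of the decidable models mentioned, Petri nets (equivalently, vector addition systems), coverability is decided by the classic backward algorithm: saturate a finite basis of the upward-closed set of configurations that can cover the target. A compact sketch of our own for plain vector addition systems:

```python
def pre_basis(m, d):
    """Minimal vector x with x + d >= 0 and x + d >= m, i.e. the basis of
    the predecessors of the upward closure of m under transition d."""
    return tuple(max(mi - di, -di, 0) for mi, di in zip(m, d))

def coverable(init, target, transitions):
    """Backward coverability for a vector addition system: saturate the
    set of minimal vectors whose upward closure can reach up(target)."""
    def minimize(vs):
        # keep only the componentwise-minimal vectors
        return {v for v in vs
                if not any(w != v and all(a <= b for a, b in zip(w, v))
                           for w in vs)}
    basis = {tuple(target)}
    while True:
        new = basis | {pre_basis(m, d) for m in basis for d in transitions}
        new = minimize(new)
        if new == basis:
            break
        basis = new
    # init covers the target iff init dominates some basis vector
    return any(all(b <= i for b, i in zip(v, init)) for v in basis)

if __name__ == "__main__":
    # two counters; from (0,1) we can pump counter 2 and convert it to counter 1
    print(coverable((0, 1), (2, 0), [(1, -1), (0, 1)]))  # True
    print(coverable((0, 0), (1, 0), [(-1, 1)]))          # False
```

Termination is guaranteed by Dickson's lemma, since only componentwise-minimal vectors are kept at each step; this is the well-structured-transition-systems argument behind several of the decidability results surveyed in the tutorial.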

« Back...

Tutorial on program refinement and information flow security
Annabelle McIver, Macquarie University, Australia

Formal treatments of security express secrecy by ascribing a special "high" or "low" security status to particular variables. Once this has been done, a program is judged to be secure or not based on whether during its execution examination of its low-security variables (only) reveals anything about the high-security variables. Variations on this idea focus on revealing the initial, final or intermediate high-security values.

In general, refinement is useful because it allows complex behaviour to be summarised clearly in an abstracted program, so that functional properties are preserved for all refinements; a great number of successful tools and techniques based on this idea have been developed and used. In these talks I will survey the developments in program semantics which support a notion of refinement preserving both security and functional properties. In the first talk I will describe a qualitative model for refinement of security, and in the second talk I will illustrate how similar techniques can be used in quantitative models. Quantitative models are interesting because in computer security, it is frequently necessary in practice to accept some leakage of confidential information. This motivates the development of theories of Quantitative Information Flow aimed at showing that some leaks are "small" and therefore tolerable.

« Back...

Replication and consistency
Roland Meyer, University of Kaiserslautern, Germany

For performance reasons, modern multiprocessors implement relaxed memory consistency models that admit out-of-program-order and non-store-atomic executions. While data-race-free programs are not sensitive to these relaxations, they pose a serious problem to the development of the underlying concurrency libraries. Library routines that work correctly under Sequential Consistency (SC) show undesirable effects when run under a relaxed memory model. These routines are not robust against the relaxations that the processor supports. To enforce robustness, the programmer has to add safety net instructions to the code that control the hardware - a task that has proven to be difficult, even for experts.

We recently developed algorithms that check and, if necessary, enforce robustness against prominent relaxed memory models like TSO implemented in x86 processors and Power implemented in IBM architectures. Given a program, our algorithms decide whether the actual behavior on the processor coincides with the SC semantics. If this is not the case, they synthesize safety net instructions that enforce robustness. When built into a compiler, our algorithms thus hide the relaxed memory model from the programmer and provide the illusion of Sequential Consistency. In this talk, I will give an introduction to relaxed memory models, motivate the robustness problem, and explain how to reduce it to a language emptiness problem.
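
The store-buffer litmus test is the standard example of non-robustness against TSO. The sketch below (an illustration of ours, not the analysis algorithm) enumerates all SC interleavings of the two threads and shows that the outcome r0 = r1 = 0, which TSO's store buffers permit, never arises under SC:

```python
from itertools import permutations

# Store-buffer litmus test:
#   thread 0: x := 1; r0 := y        thread 1: y := 1; r1 := x
THREADS = [[("store", "x"), ("load", "y")],
           [("store", "y"), ("load", "x")]]

def sc_outcomes():
    """All (r0, r1) results under Sequential Consistency: interleave the
    two threads, executing each thread's operations in program order."""
    results = set()
    for order in permutations([(0, 0), (0, 1), (1, 0), (1, 1)]):
        # keep only interleavings respecting per-thread program order
        if order.index((0, 0)) > order.index((0, 1)):
            continue
        if order.index((1, 0)) > order.index((1, 1)):
            continue
        mem, regs = {"x": 0, "y": 0}, {}
        for tid, idx in order:
            op, var = THREADS[tid][idx]
            if op == "store":
                mem[var] = 1
            else:
                regs[tid] = mem[var]
        results.add((regs[0], regs[1]))
    return results

if __name__ == "__main__":
    print(sorted(sc_outcomes()))  # (0, 0) is absent under SC
    # Under TSO each store may sit in a local buffer past the following
    # load, so both loads can read 0: the extra outcome (0, 0) witnesses
    # that the program is not robust without fence instructions.
```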

« Back...

Summaries for context-free games
Roland Meyer, University of Kaiserslautern, Germany

The motivation of our work is to generalize the language-theoretic approach from verification to synthesis. Central to language-theoretic verification are inclusion queries L(G) ⊆ L(A), where G is a context-free grammar representing the flow of control in a recursive program and A is a finite automaton representing (iterative refinements of) the specification. When moving to synthesis, we replace the inclusion query by a strategy synthesis for an inclusion game. This means G comes with an ownership partitioning of the non-terminals. It induces a game arena defined by the sentential forms and the left-derivation relation. The winning condition is inclusion in the regular language given by A.

For the verification of recursive programs, the two major paradigms are summarization and saturation. Procedure summaries compute the effect of a procedure in the form of an input-output relation. Saturation techniques compute the set of predecessors of a set of configurations of a pushdown system (including the stack). Both were extensively studied, optimized, and implemented. What speaks for summaries is that they seem to be used more often, as witnessed by the vast majority of verification tools participating in the software verification competition. The reason, besides simpler implementability, may be that the stack present in the pre-image increases the search space.

Saturation has been lifted to games and synthesis. We fill in the empty spot and propose the first summary-based solver and synthesis method for context-free inclusion games. Our algorithm is based on a novel representation of all plays starting in a non-terminal. The representation uses the domain of Boolean formulas over the transition monoid of the target automaton. The elements of the transition monoid are essentially procedure summaries. We show that our algorithm has optimal (doubly exponential) time complexity, that it is compatible with recent antichain optimizations, and that it admits a lazy evaluation strategy. Our preliminary experiments show encouraging results, indicating a speed up of three orders of magnitude over a competitor.
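
The summary domain can be made concrete with a toy example of our own: over the target automaton A, the summary of a word (or of a procedure body) is its transition relation, a Boolean matrix, and composing summaries is Boolean matrix multiplication:

```python
def letter_summary(a, delta, n):
    """Boolean transition matrix of letter a over an automaton with
    states 0..n-1 and transition relation delta = {(p, a, q)}."""
    return [[(p, a, q) in delta for q in range(n)] for p in range(n)]

def compose(m1, m2):
    """Summary composition = Boolean matrix multiplication: state p reaches
    q via the combined behaviour iff it does via some midpoint r."""
    n = len(m1)
    return [[any(m1[p][r] and m2[r][q] for r in range(n))
             for q in range(n)] for p in range(n)]

def word_summary(word, delta, n):
    """Element of the transition monoid associated with the word."""
    m = [[p == q for q in range(n)] for p in range(n)]  # identity (empty word)
    for a in word:
        m = compose(m, letter_summary(a, delta, n))
    return m

if __name__ == "__main__":
    # DFA over {a, b} with 2 states; state 1 is reached after an odd number of a's
    delta = {(0, "a", 1), (1, "a", 0), (0, "b", 0), (1, "b", 1)}
    print(word_summary("aba", delta, 2)[0])  # [True, False]: "aba" maps 0 to 0
```

In the game setting the domain is richer, Boolean formulas over such monoid elements, to account for the choices of the two players, but the composition of summaries works in the same spirit.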

This talk is based on joint work with Lukas Holik and Sebastian Muskalla.

« Back...

General description of network systems
Ugo Montanari, University of Pisa, Italy

Many systems are made of components, connectors, and wires; they can evolve in parallel/concurrently with limited, local reconfiguration, and they have interfaces through which they can be observed and composed.

Lots of models have been proposed for them. Examples: (i) Petri nets; (ii) linear time-invariant dynamical systems; (iii) (soft) constraint satisfaction problems; (iv) Bayesian networks; (v) electric circuits; (vi) bigraphs; (vii) computational fields.

General approaches have been proposed for describing such systems. We distinguish three of them:
- Milner flowgraphs: networks of constraints; treewidth decomposition for dynamic programming; interpreted flow algebras; synchronized hyperedge replacement systems.
- PROPs and string diagrams: connector algebras for nets with boundaries; signal flow graphs; bigraphs.
- From graphs to categories: Petri nets and monoidal categories; rewriting logic; tiles.

Some examples of the three approaches will be presented and compared.

« Back...

A demonic lattice of information
Carroll Morgan, University of New South Wales, Australia

The Lattice of Information (Landauer and Redmond) was an early and influential formalisation of the structure of secure systems: a partial order was defined that related to each other the mechanisms of deterministic information-flow out of a hidden state. In modern terms we would say that more-secure systems were refinements of less-secure ones under this order. In this deterministic case the refinement order is a lattice.

In more recent work the same approach has been taken to information flow in purely probabilistic systems, that is those in which information is released by a (probabilistic) channel in the sense of Shannon. There too a refinement order can be defined (Alvim, Chatzikokolakis, McIver, Morgan, Palamidessi, Smith); but it is not a lattice.

In between those two structures above there are purely demonic systems, that is those whose information flow can be described by "demonic channels" whose behaviour is not deterministic but neither can it be quantified probabilistically. We show here that these demonic systems can be treated in the same style as the deterministic and probabilistic cases, and there are results concerning compositionality, testing, soundness and completeness that allow interesting comparisons between all three.
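
In the deterministic case the lattice structure is easy to make concrete: a channel out of the hidden state is a function, the information it reveals is the partition it induces on states, and one mechanism sits below another exactly when one partition refines the other. A small sketch of our own:

```python
def partition_of(channel, states):
    """The information a deterministic channel reveals about the hidden
    state is the partition it induces: states producing the same output
    are indistinguishable to the observer."""
    blocks = {}
    for s in states:
        blocks.setdefault(channel(s), set()).add(s)
    return {frozenset(b) for b in blocks.values()}

def refines(p, q):
    """p refines q iff every block of p lies inside some block of q;
    the refining channel leaks at least as much information."""
    return all(any(b <= c for c in q) for b in p)

if __name__ == "__main__":
    states = range(4)
    parity = partition_of(lambda s: s % 2, states)  # reveals only parity
    ident = partition_of(lambda s: s, states)       # reveals everything
    print(refines(ident, parity))  # True: identity leaks more
    print(refines(parity, ident))  # False
```

The demonic and probabilistic generalisations replace partitions by richer structures (demonic channels and information-theoretic channels respectively), which is where the lattice property is lost.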

And all three of these structures can be used as security-oriented semantics for sequential programming languages: the hierarchy is deterministic languages, languages with demonic nondeterminism (à la Dijkstra) and languages with probabilistic choice (à la Kozen). Adapting the security-oriented semantics to languages with both probabilistic and demonic choice (our own work, McIver, Morgan et al) remains a tougher challenge.

« Back...

Markus Mueller-Olm, Universität Münster, Germany

We survey work on automata-based optimal analysis of programs with thread-creation and potentially recursive procedures. Specifically, we introduce dynamic pushdown networks (DPNs) that extend pushdown processes by thread creation as a model for such programs, introduce their semantics, and summarize basic results on reachability analysis and its applications. Moving from word-shaped to tree-shaped views of executions allows us to impose regular constraints on the communicated actions in symbolic backward analysis or even to describe the entire set of executions by regular means. This in turn enables us to do lock-join-sensitive reachability analysis, as the set of action trees that have a lock-join-sensitive schedule turns out to be regular.

The talk is based on joint work with Ahmed Bouajjani, Tayssir Touili, Peter Lammich, Alexander Wenner, Thomas Gawlitza, Helmut Seidl, and Benedikt Nordhoff.

« Back...

Game semantics for middleweight Java
Andrzej Murawski, University of Warwick, UK

Recent years have seen game semantics emerge as a robust semantic paradigm. It has been used to construct the first fully abstract models for a wide spectrum of programming languages, previously out of reach of denotational semantics. Game semantics models computation as an exchange of moves between two players, representing respectively the program and its computational environment. Accordingly, a program is interpreted as a strategy in a game corresponding to its type. I shall give an overview of the latest developments in the area, which have most recently led to a fully abstract model of Middleweight Java. I will also present a classification of decidable cases for contextual equivalence in a finitary version of the language, obtained using game semantics and pushdown automata over infinite alphabets. This is joint work with Steven Ramsay and Nikos Tzevelekos.

« Back...

Automata theory and game semantics of higher-order computation
Andrzej Murawski, University of Warwick, UK

I will give a survey of various classes of automata that have been used to capture the game semantics of higher-order programs and, consequently, obtain decidability results for contextual equivalence.

« Back...

On parametrized verification of asynchronous, shared-memory pushdown systems
Anca Muscholl, Université Bordeaux, France

We consider the model of parametrized asynchronous shared-memory pushdown systems, as introduced by Hague. In a series of recent papers it has been shown that reachability in this model is Pspace-complete [Hague'11] [Esparza, Ganty, Majumdar'13] and that liveness is decidable in Nexptime [Durand-Gasselin, Esparza, Ganty, Majumdar'15]. We show here that the liveness problem is Pspace-complete. We also introduce the universal reachability problem. We show that it is decidable, and coNexptime-complete. Finally, using these results, we prove that the verification of regular properties of traces that are closed under stuttering is also Nexptime-complete in this model.
Joint work with M. Fortin and I. Walukiewicz.

« Back...

Hitting families of schedules
Filip Niksic, Max Planck Institute, Germany

We consider the following basic task in the testing of concurrent systems. The input to the task is a partial order of events, which models actions performed on or by the system and specifies ordering constraints between them. The task is to determine if some scheduling of these events can result in a bug. The number of schedules to be explored can, in general, be exponential.

Empirically, many bugs in concurrent programs have been observed to have small bug depth; that is, these bugs are exposed by every schedule that orders d specific events in a particular way, irrespective of how the other events are ordered, and d is small compared to the total number of events. To find all bugs of depth d, one needs to only test a d-hitting family of schedules: we call a set of schedules a d-hitting family if for each set of d events, and for each allowed ordering of these events, there is some schedule in the family that executes these events in this ordering. The size of a d-hitting family may be much smaller than the number of all possible schedules, and a natural question is whether one can find d-hitting families of schedules that have small size.

In general, finding the size of optimal d-hitting families is hard, even for d = 2. We show, however, that when the partial order is a tree, one can explicitly construct d-hitting families of schedules of small size. When the tree is balanced, our constructions are polylogarithmic in the number of events.
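
The definition of a d-hitting family is easy to check by brute force, and the simplest case already shows why such families can be small: for n mutually unordered events and d = 2, a schedule together with its reverse hits every ordered pair, giving a 2-hitting family of size 2 independent of n. A sketch of our own (not the tree construction from the talk):

```python
from itertools import combinations, permutations

def is_hitting(schedules, events, d):
    """A family of schedules is d-hitting if, for every d events and every
    ordering of them, some schedule executes those events in that order.
    (Assumes all orderings are allowed, i.e. a fully unordered set of events.)"""
    for subset in combinations(events, d):
        for ordering in permutations(subset):
            if not any(all(s.index(ordering[i]) < s.index(ordering[i + 1])
                           for i in range(d - 1)) for s in schedules):
                return False
    return True

if __name__ == "__main__":
    events = list(range(6))
    forward, backward = events, list(reversed(events))
    print(is_hitting([forward, backward], events, 2))  # True: size 2 suffices
    print(is_hitting([forward], events, 2))            # False: misses e.g. (1, 0)
```

For general partial orders and larger d the combinatorics is far subtler; the tree-shaped case discussed in the talk is where explicit small constructions are known.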

« Back...

Interpolants from SAT solving certificates
Adrian Rebola Pardo, Technische Universität Wien, Austria

Interpolants have become a fundamental tool in applications of automated theorem proving to formal systems verification. Several methods for generating propositional interpolants from resolution proofs exist. However, resolution proof generation is a deprecated feature in most state-of-the-art SAT solvers, due to the advent of the more technically advanced DRAT proof system.

The DRAT proof standard allows very powerful inferences, to the extent that it polynomially simulates the strongest known proof systems. This allows the emission of reasonably short proofs for some successful techniques used in SAT solving that would otherwise yield exponential resolution proofs.

Unfortunately, generating interpolants from DRAT proofs presents both theoretical and practical challenges. For one, the efficient interpolation property is lost: generating an interpolant is no longer polynomial in the size of a proof. For another, it was so far unknown how to generate an interpolant from a DRAT proof without effectively re-running a solving procedure, thus rendering the DRAT proof useless.

In this talk, I will discuss these challenges and introduce a novel method to modularly reconstruct resolution proofs from DRAT certificates, thus allowing for interpolant generation. Furthermore, our method is based on direct proof manipulation, avoiding costly proof search.
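
For a sense of what interpolant extraction from a resolution proof looks like, here is a toy Python sketch of one standard system (McMillan's labelling): partial interpolants are attached to input clauses and combined at each resolution step, disjunctively on A-local pivots and conjunctively on shared pivots. The formula and two-step refutation are illustrative assumptions only:

```python
from functools import reduce
from itertools import product

# Literals are DIMACS-style ints; clauses are frozensets of literals.
A = [frozenset({1}), frozenset({-1, 2})]   # A:  a,  (not a or b)
B = [frozenset({-2})]                      # B:  not b
b_vars = {abs(l) for c in B for l in c}    # variables occurring in B

def leaf(clause, in_A):
    """Partial interpolant for an input clause (McMillan's system)."""
    if not in_A:
        return ("true",)
    shared = [l for l in clause if abs(l) in b_vars]
    return reduce(lambda f, l: ("or", f, ("lit", l)), shared, ("false",))

def resolve(pivot, i1, i2):
    """Partial interpolant for a resolution step on variable `pivot`."""
    return ("and", i1, i2) if pivot in b_vars else ("or", i1, i2)

def ev(f, asg):
    """Evaluate an interpolant expression tree under an assignment."""
    tag = f[0]
    if tag == "true":
        return True
    if tag == "false":
        return False
    if tag == "lit":
        return asg[abs(f[1])] == (f[1] > 0)
    x, y = ev(f[1], asg), ev(f[2], asg)
    return (x and y) if tag == "and" else (x or y)

# Refutation:  (a) resolved with (not a or b) on a gives (b);
#              (b) resolved with (not b) on b gives the empty clause.
i_b = resolve(1, leaf(A[0], True), leaf(A[1], True))
i_box = resolve(2, i_b, leaf(B[0], False))

# Brute-force check of the interpolation properties:
# A implies I, and I together with B is unsatisfiable.
def sat(cnf, asg):
    return all(any(asg[abs(l)] == (l > 0) for l in c) for c in cnf)

for vals in product([False, True], repeat=2):
    asg = {1: vals[0], 2: vals[1]}
    if sat(A, asg):
        assert ev(i_box, asg)
    if ev(i_box, asg):
        assert not sat(B, asg)
print("interpolant:", i_box)
```

On this example the extracted interpolant simplifies to the single shared literal b, which A implies and which contradicts B.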

« Back...

Introduction to collapsible pushdown automata and higher-order recursion schemes
Pawel Parys, University of Warsaw, Poland

In this tutorial I will introduce two equivalent models of computation, essential for model checking of higher-order programs. Collapsible pushdown automata are a generalization of pushdown automata, where instead of a simple stack we have a stack of stacks of ... of stacks. Higher-order recursion schemes generalize context-free grammars, and are essentially the simply typed lambda-calculus with recursion. These models are considered as generators of infinite trees. I will present the main known facts and open problems regarding the relations between these models, their expressivity, and algorithmic questions concerning them.
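
As a small, hypothetical illustration of a recursion scheme as a tree generator, consider the order-1 scheme S = F c, F x = a x (F (b x)), which generates the infinite tree a c (a (b c) (a (b (b c)) ...)). A few lines of Python unfold it to a finite depth (the "_" marks truncation of the infinite unfolding):

```python
# Order-1 recursion scheme:  S = F c        F x = a x (F (b x))
# Trees are nested tuples headed by the node label.

def F(x, depth):
    if depth == 0:
        return "_"                  # truncate the infinite unfolding
    return ("a", x, F(("b", x), depth - 1))

tree = F("c", 3)                    # S, unfolded three levels deep
print(tree)
```

The printed prefix shows the argument b x growing at each recursive call, the simplest source of non-regularity in trees generated by recursion schemes.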

« Back...

SAT-based explicit LTL reasoning
Geguang Pu, East China Normal University, China

We present here a new explicit reasoning framework for linear temporal logic (LTL), which is built on top of propositional satisfiability (SAT) solving. As a proof-of-concept of this framework, we describe a new LTL satisfiability algorithm. We implemented the algorithm in a tool, Aalta v2.0, which is built on top of the Minisat SAT solver. We tested the effectiveness of this approach by demonstrating that Aalta v2.0 significantly outperforms all existing LTL satisfiability solvers.

« Back...

Refinement types and higher-order constrained Horn clauses
Steven Ramsay, University of Oxford, UK

In this talk I will describe joint work with Luke Ong in which we introduce and study systems of higher-order constrained Horn clauses. Such systems are a natural generalisation of first-order constrained Horn clause problems, recently the focus of attention in the verification community, in which relations are allowed to be of higher type. We argue for refinement types as a natural notion of higher-order symbolic model in this setting, and thus recast the logical problem of solvability as a typing problem. Building on work from the refinement types community, we develop an algorithm that reduces type checking to first-order constrained Horn clause solving. Overall, the effect is to reduce higher-order constrained Horn clause solvability to first-order constrained Horn clause solvability.

« Back...

Bounded rigid E-unification
Philipp Ruemmer, Uppsala University, Sweden

The efficient integration of theory reasoning in first-order calculi with free variables (such as sequent calculi or tableaux) is a long-standing challenge. For the case of the theory of equality, an approach that has been extensively studied in the 90s is rigid E-unification, a variant of equational unification in which the assumption is made that every variable denotes exactly one term (rigid semantics). The fact that simultaneous rigid E-unification is undecidable, however, has hampered practical adoption of the method, and today there are few theorem provers that use rigid E-unification.

One solution is to consider incomplete algorithms for computing (simultaneous) rigid E-unifiers, which can still be sufficient to create sound and complete theorem provers for first-order logic with equality; such algorithms include rigid basic superposition proposed by Degtyarev and Voronkov, but also the much older subterm instantiation approach introduced by Kanger in 1963 (later also termed minus-normalisation). We introduce bounded rigid E-unification (BREU) as a new variant of E-unification corresponding to subterm instantiation. In contrast to general rigid E-unification, BREU is NP-complete for individual and simultaneous unification problems, and can be solved efficiently with the help of SAT; BREU can be combined with techniques like congruence closure for ground reasoning, and be used to construct theorem provers that are competitive with state-of-the-art tableau systems.
Joint work with Peter Backeman.

« Back...

Illustrating the coalgebraic method: circularity and coinduction?
Jan Rutten, Centrum Wiskunde & Informatica, The Netherlands

Since the early nineties, coalgebra has become an active area of research in which one tries to understand all kinds of infinite data types, automata, transition systems and dynamical systems from a unifying perspective. The focus of coalgebra is on observable behaviour and one uses coinduction as a central methodology.

In this lecture, we shall first summarize the key ingredients of the coalgebraic method, explaining how they largely derive from the theory of categories. As an extensive example, we shall then use coinduction to give new proofs of the theorems by Moessner (1951) and Paasche (1952), which are two non-trivial and entertaining combinatorial observations about infinite sequences (streams) of natural numbers. As we shall see, the heart of the matter in the coinductive approach is the identification of the circularity in the streams at hand, which then leads to surprisingly simple and elementary proofs.
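
For readers unfamiliar with Moessner's observation, it is easy to replay on finite prefixes: starting from 1, 2, 3, ..., delete every k-th number and take partial sums, then delete every (k-1)-th and take partial sums, and so on down to every 2nd; the resulting stream consists of the k-th powers. A short Python sketch (prefixes only, not the coinductive proof):

```python
def moessner(k, n):
    """First n k-th powers via Moessner's (1951) sieve-and-sum process."""
    seq = list(range(1, k * n + 1))           # enough of 1, 2, 3, ...
    for step in range(k, 1, -1):
        # strike out every `step`-th element ...
        kept = [x for i, x in enumerate(seq, 1) if i % step != 0]
        # ... and replace the rest by its partial sums
        sums, acc = [], 0
        for x in kept:
            acc += x
            sums.append(acc)
        seq = sums
    return seq[:n]

print(moessner(2, 5))   # squares
print(moessner(3, 4))   # cubes
```

For k = 2 this is the familiar fact that partial sums of the odd numbers are the squares; the general case is the one proved coinductively in the lecture.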

« Back...

Modular verification of higher-order functional programs
Ryosuke Sato, University of Tokyo, Japan

In our previous paper, we showed how to construct a software model checker for higher-order functional programs by combining higher-order model checking, predicate abstraction, and CEGAR. The naïve application of the proposed approach, however, suffered from scalability problems. To obtain a more scalable software model checker, we propose a \emph{modular} verification method that uses an existing software model checker. The method takes a program that consists of several modules and infers refinement types for each module. We repeat the following two steps: (i) inference of type candidates for each module in a counterexample-guided manner; (ii) type checking by using an existing software model checker.

« Back...

Dealing with uncertainty in the verification of MDPs
Yamilet Rosario Serrano Llerena, National University of Singapore

Markov Decision Processes (MDPs) are a fundamental tool for modeling nondeterministic systems for probabilistic verification. However, in many real-life contexts, certain transition probabilities in an MDP are unknown a priori or subject to variation in the environment. In such cases, slight modifications to those probabilities may lead to invalid verification results. To address this problem, we present a perturbation approach for studying the consequences of uncertainty, expressed as small perturbations attached to the transition probabilities, in the reachability verification of an MDP.

« Back...

How to decide quantitative properties of higher-order programs with Nature?
Olivier Serre, Centre National de la Recherche Scientifique, France

In this talk we are interested in deciding properties of systems with (possibly higher-order) recursion and involving uncontrollable and unpredictable actions. More precisely, the systems we consider involve a controllable program, an uncontrollable and possibly malicious environment, and a third actor that we refer to as Nature, which is uncontrollable and unpredictable.

The classical semantics for Nature is the stochastic one, but it quickly leads to undecidability. Hence, we propose in this talk two different semantics for Nature, a first one based on cardinality and a second one based on topology. For both of them we argue that they lead to decidability results.

This is joint work with Arnaud Carayol, based on our paper "How good is a strategy in a game with Nature?", presented at LICS'15.

« Back...

Automated verification of safety hyperproperties
Marcelo Sousa, University of Oxford, UK

Unlike safety properties, which require the absence of a "bad" program trace, safety hyperproperties stipulate the absence of a "bad" interaction between multiple traces. Several hyperproperties, e.g. non-interference, have been extensively studied in the context of security protocols. In this talk, I will shift the focus to general programming languages and show that natural examples of hyperproperties arise in programs. I will present a new calculus based on relational Hoare logic, together with algorithmic insights for the automated verification of hyperproperties that leverage recent advances in SAT/SMT solvers.

« Back...

Asymptotic analysis in probabilistic model checking
Guoxin Su, National University of Singapore

Probabilistic model checking is a verification technique that has been the focus of intensive research for over a decade. One important issue with probabilistic model checking, which is crucial for its practical significance but largely overlooked by the state of the art, is the potential discrepancy between a stochastic model and the real-world system it represents when the model is built from empirical data. In the worst case, a tiny but nontrivial change to some model quantities might lead to misleading or even invalid verification results. In this talk, I will present an asymptotic approach to characterizing the consequences of model perturbations on verification results. The formal model is a parametric variant of discrete-time Markov chains, equipped with a vector norm to measure the perturbation. The main technical results that I will present include a closed-form formulation of asymptotic perturbation bounds, and computational methods for the two arguably most useful forms of those bounds, namely linear bounds and quadratic bounds.
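
A hedged toy illustration of the idea (my own minimal example, not the talk's formal setup): in a three-state parametric chain where state s0 reaches goal with probability p, reaches fail with probability q, and loops otherwise, the reachability probability of goal is p/(p+q), and a linear perturbation bound for a small change d in p comes from the derivative of that expression:

```python
# Parametric DTMC:  s0 --p--> goal,  s0 --q--> fail,  s0 --(1-p-q)--> s0.
# Solving x = p + (1-p-q)x gives reachability rho(p) = p / (p + q).

def rho(p, q=0.1):
    return p / (p + q)

p, q, d = 0.3, 0.1, 1e-3
exact = abs(rho(p + d) - rho(p))       # true effect of the perturbation
deriv = q / (p + q) ** 2               # d/dp [ p / (p + q) ]
linear_bound = abs(d) * deriv          # first-order (linear) estimate
print(exact, linear_bound)
```

For small d the linear estimate tracks the exact change closely; quadratic bounds refine it with the second-order term.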

« Back...

Register automata as a model for local computation in distributed query evaluation
Tony Tan, National Taiwan University, Taiwan

We present three formal models of distributed systems for query evaluation on massive databases: Distributed Streaming with Register Automata (DSAs), Distributed Streaming with Register Transducers (DSTs), and Distributed Streaming with Register Transducers and Joins (DSTJs). These models are based on the key-value, a.k.a. MapReduce, paradigm, where the input is transformed into a dataset of key-value pairs, and for each key a local computation is performed on the values associated with that key, resulting in another set of key-value pairs. Computation proceeds in a constant number of rounds, where the result of one round is the input to the next, and the transformation to key-value pairs is required to be generic. The difference between the three models lies in the local computation part. In DSAs it is limited to making one pass over the input using a register automaton, while in DSTs it can make two passes: in the first pass it uses a finite-state automaton, and in the second a register transducer. The third model, DSTJs, is an extension of DSTs where local computations are capable of constructing the Cartesian product of two sets.
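
A toy one-round key-value computation in this style, as a Python sketch: a semijoin of R with S, keyed on the join attribute. The per-key local computation makes a single pass over its values; a real DSA would use a register automaton with finitely many registers, whereas here we keep an unbounded list purely for brevity — an illustrative simplification:

```python
from collections import defaultdict

R = [(1, "a"), (2, "b"), (3, "c")]
S = [(2,), (3,)]

# Generic transformation to key-value pairs, keyed on the join attribute.
buckets = defaultdict(list)
for t in R:
    buckets[t[0]].append(("R", t))
for t in S:
    buckets[t[0]].append(("S", t))

def local(values):
    """One pass per key: remember R-tuples, keep them iff an S-tuple
    with the same key is seen."""
    rs, seen_s = [], False
    for tag, t in values:
        if tag == "R":
            rs.append(t)
        else:
            seen_s = True
    return rs if seen_s else []

result = sorted(t for vals in buckets.values() for t in local(vals))
print(result)          # R-tuples whose key also occurs in S
```

The cross-bucket independence of `local` is what makes the computation distributable.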

Some of the results that will be presented: (1) DSAs can evaluate first-order queries over bounded-degree databases; (2) DSTs can evaluate semijoin algebra queries over arbitrary databases; (3) DSTJs can evaluate the whole relational algebra over arbitrary databases; (4) DSTJs are strictly stronger than DSTs, which, in turn, are strictly stronger than DSAs; (5) within DSAs, DSTs and DSTJs there is a strict hierarchy w.r.t. the number of rounds.

If time permits, we will also present some implementations (on top of Hadoop) and their experimental results on multi-semijoin evaluation based on DSTs, and compare their performance with well-known systems such as Pig and Hive.
(The presented work is based on joint work with Jonny Daenen, Frank Neven, Nicole Schweikardt and Stijn Vansummeren.)

« Back...

Higher-order model checking in direct style
Taku Terao, The University of Tokyo, Japan

Higher-order model checking, or model checking of higher-order recursion schemes, has recently been applied to fully automated verification of functional programs. The previous approach has been indirect, in the sense that higher-order functional programs are first abstracted to (call-by-value) higher-order Boolean programs, and then further translated to higher-order recursion schemes (which are essentially call-by-name programs) and model checked. These multi-step transformations caused a number of problems such as code explosion. In this talk, we advocate a more direct approach, where higher-order Boolean programs are directly model checked, without transformation to higher-order recursion schemes. To this end, we develop a model checking algorithm for higher-order call-by-value Boolean programs, and prove its correctness. According to our experiments, our prototype implementation outperforms the indirect method on large instances.

« Back...

Intruder deduction in sequent calculus
Alwen Tiu, Nanyang Technological University

An approach to modelling the intruder in the analysis of security protocols is to formalise the capabilities of the intruder via a natural deduction calculus, or equivalently, via a rewrite system capturing the proof normalisation processes of the natural deduction system. In proof theory, it is well known that natural deduction systems can be equivalently presented in Gentzen's sequent calculus. Sequent calculus enjoys the so-called subformula property, which in many cases entails bounded proof search. In this talk, I will present some results on using sequent calculus as a framework to structure proof search for intruder deduction problems. I will show how one can capture a range of intruder models, involving extensions of the Dolev-Yao model with AC-convergent theories and the theory of blind signatures, in sequent calculus. I will then discuss how the different frameworks are related to each other. In particular, I will show how the sequent calculus and natural deduction formalisms are related via a well-known classic result in proof theory.

« Back...

Negations in refinement intersection type systems
Takeshi Tsukada, University of Tokyo, Japan

Refinement intersection type systems are a basic tool for higher-order model checking, important from both theoretical and practical points of view. In general, a derivation gives a witness of derivability, whereas there is no simple witness of the underivability of a given judgement. In this talk, we study a type system that proves the underivability of a judgement.

Given a type $\tau$ that refines a simple type $A$, its negation $\neg \tau$ is defined as the type of terms that do not have type $\tau$; that is, $\nvdash t : \tau$ if and only if $\vdash t : \neg \tau$ (for every $t$ of simple type $A$). So the derivability of $\vdash t : \neg \tau$ is equivalent to the underivability of $\vdash t : \tau$. We show that negation is a definable connective for certain intersection type systems. Hence, for such a system, the set of types is closed under all Boolean operations, just as the class of regular languages is.

« Back...

Nominal games: a semantics paradigm for effectful languages
Nikos Tzevelekos, Queen Mary University of London, UK

Game semantics has been developed since the 90's as a denotational paradigm capturing observational equivalence of functional languages with imperative features. While initially introduced for PCF variants, the theory can nowadays express real-life languages ranging from ML fragments and Java programs to C-like code. In this talk we present recent advances in devising game models for effectful computation, and describe an emerging correspondence with trace models which departs from the denotational view of games towards a more operational one. Central to this approach is the use of names for representing, in an abstract fashion, different notions and effects, such as references, higher-order values and polymorphism.

« Back...

Less is more: estimating probabilistic rewards over partial system explorations
Sebastián Uchitel, Imperial College London, UK and Universidad de Buenos Aires, Argentina

Despite tremendous advances, model checking remains in many cases an automated process that runs out of memory or time, stops, and provides no useful feedback. We are interested in providing feedback to an engineer on the degree to which an artefact under analysis has been verified, why it was not possible to fully verify it, and what guarantees can be stated about the artefact after a model check that did not finish. In this talk I will discuss two results that complement each other and move us towards our goal of providing guarantees over partial system explorations. The first is how to compositionally endow system models with probabilistic operational profiles while preserving the probabilistic behaviour of both. At the core of the problem of pCTL preservation is an appropriate treatment of controllability. The second result is a novel automated technique for metric estimation over probabilistic models and reward structures that combines simulation, invariant inference and probabilistic model checking. Simulation produces a probabilistically relevant set of traces from which a state invariant is inferred. The invariant characterises a partial model which is then exhaustively explored using probabilistic model checking. I will report on experiments that suggest that metric estimation using this technique (for both fully probabilistic models and those exhibiting non-determinism) can be several orders of magnitude more effective than (full-model) probabilistic and statistical model checking, especially for system models where the events of interest are rare.

« Back...

Applications of higher-order model checking to program verification
Hiroshi Unno, University of Tsukuba, Japan

This tutorial will focus on applications of higher-order model checking to program verification. Higher-order model checking generalizes classical model checking by replacing finite-state models with more expressive models called higher-order recursion schemes (HORS). A HORS may be regarded as a simply-typed functional program for generating a (possibly infinite) tree, and higher-order model checking is the problem of deciding properties of the tree generated by a HORS. The decidability results and the development of practical algorithms for higher-order model checking have enabled the construction of fully automated verification tools for higher-order programs.

This tutorial will explain the theory and the construction of one such tool, MoCHi, a software model checker for the functional language OCaml. In addition to higher-order model checking, this tutorial covers the following verification techniques, which have also been generalized to the higher-order setting and used to construct MoCHi:
1) predicate abstraction and CEGAR for safety verification,
2) binary reachability analysis for termination verification, and
3) an automata-theoretic approach to liveness verification.
Here, MoCHi leverages dependent and intersection refinement types and Horn constraint solvers, respectively for expressing and inferring properties of higher-order programs.

« Back...

Ensuring liveness properties of distributed systems (A research agenda)
Rob van Glabbeek, Commonwealth Scientific and Industrial Research Organisation, Australia

Often fairness assumptions need to be made in order to establish liveness properties of distributed systems, but in many situations these lead to false conclusions.

This talk presents a research agenda aiming at laying the foundations of a theory of concurrency that is equipped to ensure liveness properties of distributed systems without making fairness assumptions. This theory will encompass process algebra, temporal logic and semantic models, as well as treatments of real-time. The agenda also includes developing a methodology that allows successful application of this theory to the specification, analysis and verification of realistic distributed systems, including routing protocols for wireless networks.

Contemporary process algebras and temporal logics fail to make distinctions between systems of which one has a crucial liveness property and the other does not, at least when assuming justness, a strong progress property, but not assuming fairness. Setting up an alternative framework involves giving up on identifying strongly bisimilar systems, inventing new induction principles, developing new axiomatic bases for process algebras and new congruence formats for operational semantics, and creating new treatments of time and probability.

Even simple systems like fair schedulers or mutual exclusion protocols cannot be accurately specified in standard process algebras (or Petri
nets) in the absence of fairness assumptions. Hence the work involves the study of adequate language or model extensions, and their expressive power.

Paper available at
http://theory.stanford.edu/~rvg/abstracts.html#agenda

« Back...

Symbolic automata and monadic second-order logic on finite sequences
Margus Veanes, Microsoft Research, USA

Symbolic Finite Automata, or SFAs, extend classical automata by allowing transitions to carry predicates over an effective Boolean algebra, instead of concrete labels. In this talk we discuss the main algorithmic properties of SFAs and illustrate some of their applications. We focus in more detail on one application: using SFAs as a backend for a symbolic extension of the weak monadic second-order logic of one successor. The extension allows character predicates to range over decidable alphabets instead of finite alphabets. We present a decision procedure for this logic based on a reduction to SFAs. An important part of the reduction is the use of Cartesian products of Boolean algebras. We show how one can use a variant of algebraic decision diagrams to implement Cartesian products of Boolean algebras.
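
The basic idea of predicate-labelled transitions is easy to demonstrate. Here is a minimal Python sketch (an illustrative toy, not any particular SFA library) of an SFA over the infinite alphabet of characters that accepts one or more digits followed by a single letter:

```python
# An SFA: transitions carry predicates over characters, not labels.
sfa = {
    "q0": [(str.isdigit, "q1")],
    "q1": [(str.isdigit, "q1"), (str.isalpha, "q2")],
    "q2": [],
}
accepting = {"q2"}

def run(word, state="q0"):
    """Follow the first transition whose predicate holds for each character."""
    for ch in word:
        for pred, nxt in sfa[state]:
            if pred(ch):
                state = nxt
                break
        else:
            return False        # no transition applies: reject
    return state in accepting

print(run("2017x"), run("x2017"), run("42"))
```

Algorithms on SFAs (product, complement, minimization) then manipulate these predicates through the operations of the underlying Boolean algebra rather than enumerating the alphabet.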

« Back...

Byte-precise verification of low-level list manipulation
Tomas Vojnar, Brno University of Technology, Czech Republic

We present the concept of symbolic memory graphs (SMGs), which underlie the Predator shape analyzer for low-level list-manipulating programs. SMGs are to some degree inspired by works on separation logic with higher-order list predicates, but they are graph-based and use a more fine-grained (byte-precise) memory model in order to support various low-level memory operations. In particular, SMGs allow one to track the sizes of allocated memory chunks, interval-based constraints on pointer offsets, the placement of allocated structures in various parts of memory, and the nesting of memory structures, and they support operations like pointer arithmetic, address alignment, block operations, and reinterpretation of memory contents. Support for such program features and operations is necessary for analysing memory safety of low-level programs with dynamic data structures.

« Back...

Soundness in negotiations
Igor Walukiewicz, Université Bordeaux and Centre National de la Recherche Scientifique, France

Negotiations are a formalism for describing multiparty distributed cooperation. Alternatively, they can be seen as a model of concurrency with synchronized choice as communication primitive. Well-designed negotiations must be sound, meaning that, whatever its current state, the negotiation can still be completed. In their work, Esparza and Desel have shown that deciding soundness of a negotiation is PSPACE-complete, and in PTIME if the negotiation is deterministic. They have also provided an algorithm for an intermediate class of acyclic, non-deterministic negotiations, but left the complexity of the soundness problem open.

We show that soundness of acyclic, weakly non-deterministic negotiations is in PTIME, and that checking soundness is already NP-complete for slightly more general classes.

We also study two further analysis problems for sound acyclic deterministic negotiations, called the race and the omission problem, and give polynomial algorithms. We use these results to provide the first polynomial algorithm for some analysis problems of workflow nets with data previously studied by Trcka, van der Aalst, and Sidorova.

« Back...

Igor Walukiewicz, Université Bordeaux and Centre National de la Recherche Scientifique, France

We consider multi-threaded systems which combine recursion, or even higher-order recursion, with dynamic thread creation. Communication between threads is via global variables as well as via local variables that are shared between a thread and its sub-threads. Reading and writing are atomic operations, while we do not allow locks or operations of the compare-and-set type. The resulting systems are still too expressive to be decidable. As an abstraction, we therefore replace thread creation with parametric thread creation, and show that for the resulting systems the reachability problem is decidable.
Joint work with Anca Muscholl and Helmut Seidl.

« Back...

Verification problems in SPARK programs

Data-parallel computation is widely used in many areas. SPARK is one of the most popular platforms for such tasks. The platform defines Resilient Distributed Datasets (RDDs) as its data abstraction, and a (large) set of functions to perform parallel computation on RDDs. In this talk, I will give a brief introduction to SPARK and discuss some verification problems in SPARK programs.

« Back...

Interpolation algorithms and their applications in model checking
Georg Weissenbacher, Technische Universität Wien, Austria

In the last decade, interpolation has become the cornerstone of numerous modern verification tools. In this talk, I will survey the applications of interpolation in contemporary model checking algorithms, including recent approaches that combine interpolation-based model checking and the IC3 paradigm. The second half of the talk will be dedicated to labelled interpolation systems, a generalized framework for the extraction of interpolants from resolution proofs and resolution chains.

« Back...

Monoids as storage mechanisms
Georg Zetzsche, École Normale Supérieure de Cachan, France

The investigation of models extending finite automata by some storage mechanism is a central theme in theoretical computer science. Choosing an appropriate storage mechanism can yield a model that is expressive enough to capture a given behavioral aspect while admitting desired means of analysis.

It is therefore a central concern to understand which storage mechanisms have which properties regarding expressiveness and (algorithmic) analysis. This talk presents a line of research that aims for general insights in this direction. In other words: How does the structure of the storage mechanism influence expressiveness and analysis of the resulting model?

In order to study this question, one needs a model in which the storage mechanism appears as a parameter. Such a model is available in valence automata, where the storage mechanism is given by a (typically infinite) monoid. Choosing a suitable monoid then yields models such as Turing machines, pushdown automata, vector addition systems, or combinations thereof.
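
A small Python sketch can make the valence-automaton idea concrete (an illustrative toy of my own, not a construction from the talk): each transition carries an element of a monoid, and a word is accepted iff some run ends in an accepting state with the accumulated product equal to the monoid's identity. Taking the monoid (Z, +) yields a blind one-counter automaton, here recognising a^n b^n:

```python
# Valence automaton over the monoid (Z, +): transitions carry integers,
# and acceptance requires the accumulated sum to be the identity 0.
trans = {
    ("p", "a"): [("p", +1)],
    ("p", "b"): [("q", -1)],
    ("q", "b"): [("q", -1)],
}
accepting = {"q"}

def accepts(word):
    configs = {("p", 0)}                     # (state, monoid value) pairs
    for ch in word:
        configs = {(q2, v + d)
                   for (q, v) in configs
                   for (q2, d) in trans.get((q, ch), [])}
    return any(q in accepting and v == 0 for q, v in configs)

print(accepts("aabb"), accepts("aab"), accepts("abab"))
```

Swapping in a different monoid changes the storage mechanism without touching the automaton machinery, which is exactly what makes the model a convenient vehicle for the general questions above.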

This talk surveys a selection of results that characterize storage mechanisms with certain desirable properties, such as decidability of reachability, semilinearity of Parikh images, and decidability of logics.

« Back...

Invariant-based verification and synthesis for hybrid systems
Naijun Zhan, Chinese Academy of Sciences, China

Hybrid systems (now also called cyber-physical systems) are omnipresent in our daily lives, and many of them are safety-critical. "How can we design cyber-physical systems people can bet their lives on?" is a grand challenge for computer science and control theory. Formal methods are considered an effective answer to this challenge, and have been widely and successfully used in practice. Invariant generation plays a key role in the formal design of hybrid systems.

In this talk, I will first report on our recent work on a complete approach to synthesizing semi-algebraic invariants for polynomial hybrid systems, which gives an affirmative answer to the open problem of whether there is a complete method to discover all semi-algebraic invariants of polynomial hybrid systems. Then, I will show how to use these results to synthesize controllers in the design of hybrid systems. I will also discuss how to extend the results to deal with non-polynomial hybrid systems. Finally, I will demonstrate how to apply the results to real-world problems, such as the controller synthesis of an oil pump and the verification of the descent control program of a lunar lander.

« Back...

A multi-estimation approach for computing backbones of hard and dense propositional formulae
Yueling Zhang, East China Normal University, China

Backbone variables are widely used in random SAT solving, All-SAT computation, and SAT-based model checking. In this talk, we propose a novel approach to finding backbones of formulae with dense backbones using modern SAT solvers. For formulae with sparse backbones, our approach computes an estimation of the literals that are highly likely to be backbones. Experiments on satisfiable instances from the SAT Competition 2015 show the efficiency of the approach. Moreover, compared with the state-of-the-art approach proposed in 2015, our approach has advantages on hard formulae.
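
To fix the terminology: a backbone literal is one that holds in every satisfying assignment, so SAT-based tools test a candidate literal l with one solver call checking whether F together with the negation of l is unsatisfiable. The following Python sketch (brute-force enumeration in place of a SAT solver, on a made-up tiny formula) illustrates the definition:

```python
from itertools import product

# Tiny CNF formula over variables x1..x3, in DIMACS-style int literals.
F = [[1, 2], [1, -2], [2, 3]]     # (x1 or x2)(x1 or not x2)(x2 or x3)

def models(cnf, nvars=3):
    """Enumerate all satisfying assignments (a stand-in for a SAT solver)."""
    for vals in product([False, True], repeat=nvars):
        asg = dict(enumerate(vals, 1))
        if all(any(asg[abs(l)] == (l > 0) for l in c) for c in cnf):
            yield asg

# A literal is a backbone literal iff it holds in every model.
backbone = {l for l in range(-3, 4)
            if l != 0 and all(asg[abs(l)] == (l > 0) for asg in models(F))}
print(sorted(backbone))           # x1 is forced in every model
```

Here the first two clauses force x1 to be true, so 1 is the only backbone literal; estimation-based approaches like the one in the talk aim to avoid one solver call per literal.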

« Back...
