Higher-Order Complexity Analysis of Rewrite Systems
We rely increasingly on computer systems, and the sophistication of these systems has steadily increased over the last decades. To produce robust systems, we require tools that support verification in an automated setting. Because computational resources such as space (i.e., memory) and time are limited, resource analysis is a central topic in software verification.
We aim to advance the field of static resource analysis by devising new methods for the automated runtime complexity analysis of functional programs. Given a program, such methods should deduce useful bounds on its runtime (i.e., the number of computation steps) as a function of the size of its inputs. To ensure that results are broadly applicable, one usually considers an abstract model of computation instead of a specific programming language. Term rewrite systems are particularly well suited as such an abstract language, and runtime complexity analysis is a very active research area in rewriting. However, the lack of facilities to faithfully model higher-order functions has prevented the use of rewrite systems for the analysis of functional programs written in languages like Haskell or OCaml, where the use of higher-order functions is paramount.
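To make the notion of runtime complexity concrete, consider a minimal sketch (the encoding and the names `add` and `add_steps` are ours, purely for illustration): the classic addition rules over Peano numerals, rendered as a functional program together with an explicit count of rewrite steps.

```python
# Illustrative sketch of a first-order rewrite system:
#   add(0, y)    -> y
#   add(s(x), y) -> s(add(x, y))
# Numerals s^n(0) are represented by the integer n.

def add(n: int, y: int) -> tuple[int, int]:
    """Return (result, number of rewrite steps) for add(s^n(0), s^y(0))."""
    if n == 0:
        return y, 1                    # one application of the first rule
    result, steps = add(n - 1, y)
    return result + 1, steps + 1       # one application of the second rule
```

Normalising `add(s^n(0), y)` takes exactly n + 1 rewrite steps, so the runtime complexity of this system is linear in the size of the first argument; this is the kind of bound the methods should derive automatically.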
To overcome this regrettable situation, we base our studies on higher-order rewrite systems (HORSs for short), one of the most general formal models of functional programs. HORSs extend term rewrite systems to terms with abstraction and application. To date, almost no results are known on the resource analysis of HORSs. Nevertheless, we can draw inspiration from related areas of research, such as program analysis and the area of implicit computational complexity (ICC for short). Notably, the ICC community has a long history of producing characterisations of the class of polytime computable functions and related classes, based on specific instances of higher-order rewrite systems like the lambda calculus. We will seek to synthesise complexity analysis techniques for HORSs from such implicit characterisations. Although this synthesis is clearly feasible, the methods obtained in this way are rarely precise, and often effective only on a small class of systems. To obtain methods of practical relevance, we will have to calibrate the techniques for precision and effectiveness.
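The step from first-order to higher-order rewriting can be illustrated with the familiar rules for map (again an illustrative sketch; the name `map_` is ours): the rule variable `f` ranges over functions, and the right-hand side applies `f`, which no first-order rewrite rule can express directly.

```python
# Illustrative sketch of a higher-order rewrite system:
#   map(f, nil)         -> nil
#   map(f, cons(x, xs)) -> cons(f x, map(f, xs))

from typing import Callable

def map_(f: Callable[[int], int], xs: list[int]) -> list[int]:
    if not xs:
        return []                        # rule for nil
    return [f(xs[0])] + map_(f, xs[1:])  # rule for cons: apply f, recurse
```

Applied to an abstraction, e.g. `map_(lambda x: x + 1, [1, 2, 3])`, the system performs one rewrite step per list element, so its runtime is linear in the list length plus the cost of each call to `f`. Accounting for the cost of functional arguments like `f` is precisely what makes higher-order complexity analysis harder than the first-order case.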
To shed light on expressiveness and to demonstrate the feasibility of the established techniques, we will develop prototypical implementations alongside the theoretical development. Building upon the lessons learned from those prototypes, we will integrate the established techniques into the Tyrolean Complexity Tool (TCT for short). TCT is a highly modular tool for the automated complexity analysis of rewrite systems. To date, it implements a majority of the state-of-the-art techniques for the analysis of first-order rewrite systems. Our additions will allow TCT to fully automatically analyse the runtime complexity of functional programs, via abstractions to HORSs.
Funding
This three-year project is supported by the FWF Austrian Science Fund, project number J-3563.
List of Publications
- Martin Avanzini and Ugo Dal Lago. “On sharing, memoization, and polynomial time”. Inf. Comput. 261: 3–22 (2018).
- Martin Avanzini and Ugo Dal Lago. “Automated Sized-Type Inference and Complexity Analysis”. Proceedings 8th Workshop on Developments in Implicit Computational Complexity and 5th Workshop on Foundational and Practical Aspects of Resource Analysis, DICE-FOPARA@ETAPS 2017, Uppsala, Sweden, April 22-23, 2017, volume 248 of EPTCS, pages 7–16 (2017).
- Martin Avanzini and Ugo Dal Lago. “Automating sized-type inference for complexity analysis”. Proc. ACM Program. Lang. 1 (ICFP): 43:1–43:29 (2017).
- Martin Avanzini and Michael Schaper. “GUBS Upper Bound Solver (Extended Abstract)”. Proceedings 8th Workshop on Developments in Implicit Computational Complexity and 5th Workshop on Foundational and Practical Aspects of Resource Analysis, DICE-FOPARA@ETAPS 2017, Uppsala, Sweden, April 22-23, 2017, volume 248 of EPTCS, pages 17–23 (2017).
- Martin Avanzini and Georg Moser. “Complexity of Acyclic Term Graph Rewriting”. 1st International Conference on Formal Structures for Computation and Deduction, FSCD 2016, June 22-26, 2016, Porto, Portugal, volume 52 of LIPIcs, pages 10:1–10:18 (2016). Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
- Martin Avanzini, Georg Moser and Michael Schaper. “TcT: Tyrolean Complexity Tool”. Tools and Algorithms for the Construction and Analysis of Systems - 22nd International Conference, TACAS 2016, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2016, Eindhoven, The Netherlands, April 2-8, 2016, Proceedings, volume 9636 of Lecture Notes in Computer Science, pages 407–423 (2016). Springer.
- Martin Avanzini and Ugo Dal Lago. “On Sharing, Memoization, and Polynomial Time”. 32nd International Symposium on Theoretical Aspects of Computer Science, STACS 2015, March 4-7, 2015, Garching, Germany, volume 30 of LIPIcs, pages 62–75 (2015). Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
- Martin Avanzini, Ugo Dal Lago and Georg Moser. “Higher-Order Complexity Analysis: Harnessing First-Order Tools”. 6th International Workshop on Developments in Implicit Complexity (2015).
- Martin Avanzini, Ugo Dal Lago and Georg Moser. “Analysing the complexity of functional programs: higher-order meets first-order”. Proceedings of the 20th ACM SIGPLAN International Conference on Functional Programming, ICFP 2015, Vancouver, BC, Canada, September 1-3, 2015, pages 152–164 (2015). ACM.
- Martin Avanzini, Christian Sternagel and René Thiemann. “Certification of Complexity Proofs using CeTA”. 26th International Conference on Rewriting Techniques and Applications, RTA 2015, June 29 to July 1, 2015, Warsaw, Poland, volume 36 of LIPIcs, pages 23–39 (2015). Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
- Martin Avanzini and Bertram Felgenhauer. “Type Introduction for Runtime Complexity Analysis”. 14th Workshop on Termination (WST) (2014).