Research

My research concerns certain aspects of logic in computer science, specifically the semantics of computation. I describe my topics of interest below.

Certification of programs. The idea of certification is promoted by members of the CompCert project. The idea is relatively recent and has gained considerable practical value thanks to the extensive growth of theorem provers and proof assistants, notably Coq. Certification of programs can be assisted by automated tools: certifying a program amounts to proving the correctness of a function in a proof assistant and then extracting the computational content of that proof into a functional programming language. The extracted content is the program, which is said to be certified by the correctness proof and by the correctness of the extraction procedure.
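
A minimal sketch of this workflow, written in Lean rather than Coq and with illustrative names of my own (sum, sumAcc): a specification, an implementation, and a machine-checked proof that the two agree. In Coq one would finish by extracting the implementation into, say, OCaml.

    -- Specification: the sum of a list of natural numbers.
    def sum : List Nat → Nat
      | []      => 0
      | x :: xs => x + sum xs

    -- Implementation: a tail-recursive variant, the program to be certified.
    def sumAcc (acc : Nat) : List Nat → Nat
      | []      => acc
      | x :: xs => sumAcc (acc + x) xs

    -- Correctness proof relating the implementation to the specification.
    theorem sumAcc_spec (xs : List Nat) (acc : Nat) :
        sumAcc acc xs = acc + sum xs := by
      induction xs generalizing acc with
      | nil => simp [sumAcc, sum]
      | cons x xs ih => simp [sumAcc, sum, ih, Nat.add_assoc]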

Denotational semantics of computation. When one wants to reason about abstract features of a program, one is led to ask what the program means. Denotational semantics assigns such a meaning to programs; that is, it tells what programs "really are". This meaning is independent of the syntax of a programming language and does not refer explicitly to the evaluation process. In this sense, denotational semantics is the most theoretical way of reasoning about the meaning of programs. It is here that one finds remarkable connections with deep mathematical results and can apply them to models of computation. My interest is focused on applications of fundamental notions of category theory such as the Yoneda lemma and Eilenberg-Moore coalgebras.
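
As a small illustration (a toy expression language of my own choosing, in Haskell, not tied to any particular result): the denotation of an expression is a mathematical function from environments to integers, defined compositionally and without reference to any evaluation strategy.

    module Denote where

    import Data.Maybe (fromMaybe)

    data Expr = Lit Integer | Var String | Add Expr Expr | Mul Expr Expr

    type Env = [(String, Integer)]

    -- The semantic function [[-]] : Expr -> (Env -> Integer).
    denote :: Expr -> Env -> Integer
    denote (Lit n)     _   = n
    denote (Var x)     env = fromMaybe 0 (lookup x env)
    denote (Add e1 e2) env = denote e1 env + denote e2 env
    denote (Mul e1 e2) env = denote e1 env * denote e2 env

    -- The meaning of "x * (x + 1)" in the environment [("x", 3)] is 12.
    example :: Integer
    example = denote (Mul (Var "x") (Add (Var "x") (Lit 1))) [("x", 3)]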

Neural networks and logic programming. Neural networks are powerful computational devices, able not only to compute a given function but also to adapt to changing data and to construct better approximations of functions by means of learning strategies. Neural networks provide a simple denotational semantics for logic programming languages, essentially the least fixed point semantics. The true advantage of neural networks reveals itself when one works with the meanings of full-featured first-order logic programs (that is, programs that allow negation in the antecedents of implications). By employing neural networks that learn, one then avoids having to consider, for example, infinite Herbrand bases and hence infinite "neural networks", which is essential for realistic implementations.
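
For a flavour of the fixed point picture, here is a sketch of my own in Haskell, restricted to propositional programs without negation (the atoms and the example program are invented for illustration): the meaning of a program is the least fixed point of its immediate consequence operator T_P, reached by iterating T_P from the empty interpretation; this is the operator that, in the neural approach, a network is set up to compute.

    module TP where

    import Data.List (nub, sort)

    type Atom = String
    type Interp = [Atom]            -- an interpretation: the atoms taken to be true

    -- A clause "hd :- body" with only positive body atoms, for simplicity.
    data Clause = Clause { hd :: Atom, body :: [Atom] }

    -- The immediate consequence operator T_P.
    tp :: [Clause] -> Interp -> Interp
    tp prog i = sort (nub [hd c | c <- prog, all (`elem` i) (body c)])

    -- Least fixed point: iterate T_P from the empty interpretation.
    lfp :: [Clause] -> Interp
    lfp prog = go []
      where
        go i = let i' = tp prog i in if i' == i then i else go i'

    -- Example program:  p.   q :- p.   r :- p, q.
    exampleProgram :: [Clause]
    exampleProgram = [Clause "p" [], Clause "q" ["p"], Clause "r" ["p", "q"]]

    -- lfp exampleProgram == ["p", "q", "r"]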

Many-valued logic. I studied many-valued logic in my university diploma project and in my PhD thesis. As is widely known, many-valued logic represents a departure from Boolean two-valued logic. Algebraic reasoning is an implicit part of any proof in many-valued logic; therefore a purely syntactic approach, such as the one that can be practiced in intuitionistic proof theory, is in most cases impossible in the proof theory of many-valued logic. This poses the problem of finding elegant deduction systems for many-valued logic, a problem addressed in my thesis. Apart from that, many-valued algebras give rise to a remarkable combinatorial problem, functional completeness, due to E. Post. Recently there has been a development on group-theoretic functional completeness, whose relationship to proof theory could have become my main research topic after my PhD, had we lived in an ideal world where funding for theoretical research were easy to find. At present the term "many-valued logic" has a nostalgic value for me.
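
As a concrete reminder of how the algebra enters, here is a small Haskell sketch using Łukasiewicz's three-valued logic as a standard example (not a result of mine): the truth values are 0, 1/2 and 1, the connectives are algebraic operations on these values, and classical laws such as the excluded middle fail.

    module Lukasiewicz where

    import Data.Ratio ((%))

    type TruthValue = Rational      -- intended values: 0, 1/2, 1

    neg :: TruthValue -> TruthValue
    neg x = 1 - x

    impl :: TruthValue -> TruthValue -> TruthValue
    impl x y = min 1 (1 - x + y)

    values :: [TruthValue]
    values = [0, 1 % 2, 1]

    -- "p -> p" takes the value 1 everywhere, so it is a tautology ...
    tautology :: Bool
    tautology = all (\x -> impl x x == 1) values

    -- ... whereas "p or not p" takes the value 1/2 at p = 1/2.
    excludedMiddle :: [TruthValue]
    excludedMiddle = [max x (neg x) | x <- values]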