Citations
To cite the Bigloo software, please use the following BibLaTeX entry.
@software{ bigloo,
  title       = {Bigloo, a Practical Scheme Compiler},
  author      = {Serrano, Manuel},
  year        = {1992},
  institution = {Inria},
  url         = {http://www-sop.inria.fr/indes/fp/Bigloo/}
}
To refer to the current release, please use:
@softwareversion{ bigloo-4.5b,
  version  = {4.5b},
  year     = {2023},
  month    = {December},
  file     = {ftp://ftp-sop.inria.fr/indes/fp/Bigloo/biglo-4.5b},
  crossref = {bigloo}
}
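As an illustration, here is a minimal LaTeX sketch showing how these entries might be cited. It assumes the two entries above are saved in a file named bigloo.bib and that a BibLaTeX extension providing the @software and @softwareversion entry types, such as biblatex-software, is installed; the file name and package name are assumptions, not part of the Bigloo distribution.

% Minimal sketch; bigloo.bib and the software-biblatex package are assumptions.
\documentclass{article}
\usepackage[style=numeric]{biblatex}
\usepackage{software-biblatex}  % extension providing @software/@softwareversion
\addbibresource{bigloo.bib}
\begin{document}
The programs were compiled with Bigloo~\cite{bigloo},
release 4.5b~\cite{bigloo-4.5b}.
\printbibliography
\end{document}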
References
Serrano M.
The Computer Scientist Nightmare
A List of Successes that Can Change the World,
Edinburgh, Scotland,
Apr,
2016
This text relates the story of a particularly shocking bug that occurred during the development of a Web application. The bug is first described. The battle to understand and fix it is then presented. This experimental report concludes with some questions about the way we conceive programming languages and programming environments.
Grande J., Boudol G., Serrano M.
Jthread, a deadlock-free mutex library
Proceedings of the 17th International Symposium on Principles and Practice of Declarative Programming (PPDP'15),
Siena, Italy,
Jul,
2015
Serrano M., Grande J.
Locking Fast
Proceedings of the ACM Symposium on Applied Computing (SAC'14),
Gyeongju, Korea,
Mar,
2014
This article presents several independent optimizations of operations on monitors. They do not involve the low-level mutual exclusion mechanisms but rather their integration with and usage within higher-level constructs of the language. The paper reports acceleration of Hop, the Web programming language for which these optimizations have been created. The paper shows that other languages such as C and Java would also benefit from these optimizations.
Serpette B., Serrano M.
An Interpreter for Server-Side Hop
Proceedings of the Dynamic Languages Symposium (DLS),
Portland, USA,
Oct,
2011
HOP is a Scheme-based multi-tier programming language for the Web. The client-side of a program is compiled to JavaScript, while the server-side is executed by a mix of natively compiled code and interpreted code. At the time when HOP programs were basic scripts, the performance of the server-side interpreter was not a concern; an inefficient interpreter was acceptable. As HOP expanded, HOP programs got larger and more complex, and a more efficient interpreter became necessary. This new interpreter is described in this paper. It is compact, its whole implementation counting no more than 2.5 KLOC. It is more than twice as fast as the old interpreter and consumes less than a third of its memory. Although it cannot compete with static or JIT native compilers, our experimental results show that it is amongst the fastest interpreters for dynamic languages.
Serrano M., Gallesio E.
An Adaptive Package Management System for Scheme
Proceedings of the Second Dynamic Languages Symposium (DLS),
Montréal, Québec, Canada,
Oct,
2007
This paper presents a package management system for the Scheme programming language. It is inspired by the "Comprehensive Perl Archive Network" ("Cpan") and various GNU/Linux distributions. It downloads, installs, and prepares source code for execution, and it manages the dependencies between packages. The main characteristic of this system is its neutrality with respect to the various Scheme implementations: it is neutral with respect to the language extensions that each Scheme implementation proposes and with respect to the execution environment of these implementations. This allows the programmer to blend, within the same program, independent components which have been developed and tested with different Scheme implementations. ScmPkg is available at: "http://hop.inria.fr/hop/scmpkg"
Bres Y., Serpette B., Serrano M.
Bigloo.NET: compiling Scheme to .NET CLR
Journal of Object Technology,
Oct,
2004
This paper presents the compilation of the Scheme programming language to .NET. This platform provides a virtual machine, the Common Language Runtime (CLR), that executes bytecode, the Common Intermediate Language (CIL). Since CIL was designed with language agnosticism in mind, it provides a rich set of language constructs and functionalities. As such, the CLR is the first execution environment that offers type safety, managed memory, tail recursion support and several flavors of pointers to functions. Therefore, the CLR presents an interesting ground for functional language implementations. We discuss how to map Scheme constructs to CIL. We present performance analyses on a large set of real-life and standard Scheme benchmarks. In particular, we compare the speed of these programs when compiled to C, JVM and .NET. We show that, in terms of speed, the Mono implementation of .NET, the best implementation running on both Windows and Linux, still lags behind C and fast JVMs such as Sun's implementations.
Bres Y., Serpette B., Serrano M.
Compiling Scheme programs to .NET Common Intermediate Language
2nd International Workshop on .NET Technologies,
Plzen, Czech Republic,
May,
2004
We present in this paper the compilation of the Scheme programming language to the .NET platform. .NET provides a virtual machine, the Common Language Runtime (CLR), that executes bytecode, the Common Intermediate Language (CIL). Since CIL was designed with language agnosticism in mind, it provides a rich set of language constructs and functionalities. As such, the CLR is the first execution environment that offers type safety, managed memory, tail recursion support and several flavors of pointers to functions. Therefore, the CLR presents an interesting ground for functional language implementations. We discuss how to map Scheme constructs to CIL. We present performance analyses on a large set of real-life and standard Scheme benchmarks. In particular, we compare the performance of Scheme programs when compiled to C, JVM and .NET. We show that .NET still lags behind C and the JVM.
Serrano M., Boussinot F., Serpette B.
Scheme Fair Threads
6th ACM Sigplan Int'l Conference on Principles and Practice of Declarative Programming (PPDP),
Verona, Italy,
Aug,
2004
This paper presents "Fair Threads", a new model for concurrent programming. This multi-threading model combines preemptive and cooperative scheduling. User threads execute according to a cooperative strategy, while service threads execute according to a preemptive strategy. User threads may request services from service threads in order to improve performance by exploiting hardware parallelism and to execute non-blocking operations. Fair Threads have been experimented with in the context of the functional programming language Scheme, and this paper also presents their integration into this language. That is, it presents a semantics for Scheme augmented with Fair Threads and the main characteristics of the implementation.
Gallesio E., Serrano M.
Programming Graphical User Interfaces with Scheme
Journal of Functional Programming,
Sep,
2003
This paper presents Biglook, a widget library for an extended version of the Scheme programming language. It uses classes of a CLOS-like object layer to represent widgets and Scheme closures to handle graphical events. Combining functional and object-oriented programming styles yields an original application programming interface that advocates a strict separation between the implementation of the graphical interfaces and the user-associated commands, enabling compact source code. The Biglook implementation separates the Scheme programming interface from the native back-end, which permits different ports of Biglook. The current version uses the GTK and Swing graphical toolkits, while the previous release used Tk.
Serpette B., Serrano M.
Compiling Scheme to JVM bytecode: a performance study
7th ACM Sigplan Int'l Conference on Functional Programming (ICFP),
Pittsburgh, Pennsylvania, USA,
Oct,
2002
We have added a Java virtual machine (henceforth JVM) bytecode generator to the optimizing Scheme-to-C compiler Bigloo. We named this new compiler BiglooJVM. We have used this new compiler to evaluate how suitable the JVM bytecode is as a target for compiling strict functional languages such as Scheme. In this paper, we focus on the performance issue. We have measured the execution time of many Scheme programs when compiled to C and when compiled to JVM. We found that for each benchmark, at least one of our hardware platforms ran the BiglooJVM version in less than twice the time taken by the Bigloo version. To deliver fast programs, the generated JVM bytecode must be carefully crafted so as to benefit from the speedup of just-in-time compilers.
Gallesio E., Serrano M.
Biglook: a Widget Library for the Scheme Programming Language
2002 USENIX Annual Technical Conference, FREENIX Track,
Monterey, California, USA,
Jun,
2002
Serrano M., Boehm H-J.
Understanding Memory Allocation of Scheme Programs
5th ACM Sigplan Int'l Conference on Functional Programming (ICFP),
Montréal, Québec, Canada,
Sep,
2000
Memory is the performance bottleneck of modern architectures. Keeping memory consumption as low as possible enables fast and unobtrusive applications. But it is not easy to estimate the memory use of programs implemented in functional languages, due to both the complex translations of some high level constructs, and the use of automatic memory managers.

To help understand the memory allocation behavior of Scheme programs, we have designed two complementary tools. The first one reports on frequency of allocation, heap configurations and on memory reclamation. The second tracks down memory leaks. We have applied these tools to our Scheme compiler, the largest Scheme program we have been developing. This has allowed us to drastically reduce the amount of memory consumed during its bootstrap process, without requiring much development time.

Development tools will be neglected unless they are both conveniently accessible and easy to use. In order to avoid this pitfall, we have carefully designed the user interface of these two tools. Their integration into a real programming environment for Scheme is detailed in the paper.
Serrano M.
Bee: an Integrated Development Environment for the Scheme Programming Language
Journal of Functional Programming,
May,
2000
The Bee is an integrated development environment for the Scheme programming language. It provides the user with a connection between Scheme and the C programming language, a symbolic debugger, a profiler, an interpreter, an optimizing compiler that delivers stand-alone executables, a source file browser, a project manager, user libraries and online documentation. This article details the facilities of the Bee and its user interface, and presents an overview of the implementation of its main components.
Serrano M.
Inline expansion: when and how
9th Int'l Symposium on Programming Language Implementation and Logic Programming (PLILP),
Southampton, UK,
Sep,
1997
Inline function expansion is an optimization that may improve program performance by removing calling sequences and enlarging the scope of other optimizations. Unfortunately, it also has the drawback of enlarging programs, which may impair the performance of the resulting executables. To avoid this effect, we present an easy-to-implement inlining optimization that minimizes code size growth by combining a compile-time algorithm deciding when expansion should occur with different expansion frameworks describing how it should be performed. We present the experimental measures that have driven the design of inline function expansion. We conclude with measurements showing that our optimization succeeds in producing faster code while avoiding code size increase.
Serrano M., Feeley M.
Storage Use Analysis and its Applications
1st ACM Sigplan Int'l Conference on Functional Programming (ICFP),
Philadelphia, Pennsylvania, USA,
May,
1996
In this paper we present a new program analysis method which we call "Storage Use Analysis". This analysis deduces how objects are used by the program and allows the optimization of their allocation. This analysis can be applied to both statically typed languages (e.g. ML) and latently typed languages (e.g. Scheme). It handles side-effects, higher order functions, separate compilation and does not require CPS transformation. We show the application of our analysis to two important optimizations: stack allocation and unboxing. The first optimization replaces some heap allocations by stack allocations for user and system data storage (e.g. lists, vectors, procedures). The second optimization avoids boxing some objects. This analysis and associated optimizations have been implemented in the Bigloo Scheme/ML compiler. Experimental results show that for many allocation intensive programs we get a significant speedup. In particular, numerically intensive programs are almost 20 times faster because floating point numbers are unboxed and no longer heap allocated.
P. H. Hartel, M. Feeley, M. Alt, L. Augustsson, P. Baumann, M. Beemster, E. Chailloux, C. H. Flood, W. Grieskamp, J. H. G. Van Groningen, K. Hammond, B. Hausman, M. Y. Ivory, P. Lee, X. Leroy, S. Loosemore, N. Röjemo, M. Serrano, J.-P. Talpin, J. Thackray, P. Weis, P. Wentworth
Pseudoknot: a Float-Intensive Benchmark for Functional Compilers
Journal of Functional Programming,
1996
Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation.

With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time.

There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations.

The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of 'typical' applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.
Serrano M., Weis P.
Bigloo: a portable and optimizing compiler for strict functional languages
2nd Static Analysis Symposium (SAS),
Glasgow, Scotland,
Sep,
1995
We present Bigloo, a highly portable and optimizing compiler. Bigloo is the first compiler for strict functional languages that can efficiently compile "several languages": Bigloo is the first compiler for full Scheme "and" full ML, and for these two languages, Bigloo is one of the most efficient compilers now available (Bigloo is available by anonymous ftp at "http://www.inria.fr/mimosa/fp/Bigloo"). This high level of performance is achieved by numerous high-level optimizations. Some of those are classical optimizations adapted to higher-order functional languages (e.g. inlining), other optimization schemes are specific to Bigloo (e.g. a new refined closure analysis, an original optimization of imperative variables, and intensive use of higher-order control flow analysis). All these optimizations share the same design guideline: the reduction of heap allocation.
Serrano M.
A Fresh Look to Inlining Decision
4th International Computer Symposium (invited paper),
Mexico City, Mexico,
Nov,
1995
Included in a compiler for functional or object oriented languages, inline function expansion has been reported as one of the most valuable optimizations. Unfortunately, it has an important counterpart: since it duplicates function bodies, it enlarges the code of the compiled programs as well as the resulting object code. The main contribution of this paper is to present a simple compile-time inlining decision algorithm where the code length increase factor is a constant that can be tuned by the compiler designer and where execution improvements are comparable with those of previous sophisticated techniques. Our major concern is functional languages. With these languages, recursive functions are widely used: the second contribution of this paper is the presentation of an original ad hoc inlining framework for recursive functions which is more accurate than function unfolding. Experimental results demonstrate that our inlining techniques succeed in producing small "and" efficient compiled object code.
Serrano M.
Rgc: un générateur d'analyseurs lexicaux efficaces en Scheme
Avancées applicatives, Actes des journées JFLA,
Feb,
1992
This article presents Rgc, a fast lexical analyzer generator developed for Scheme. We describe not a prototype but an efficient final product whose performance puts it in direct competition with Flex. Measurements show that Rgc is between 5 and 10% faster than Flex and between 250 and 260% faster than Lex. To reach this level of performance, we implemented a restricted, specialized Scheme-to-C compiler. Moreover, since Scheme has no fast input primitives, it proved necessary to program the system calls and buffer management in C. The code therefore consists of 90% Scheme and 10% C.