Compiler project: Miscellaneous announcements

Compiler project’s information

First of all, I want to thank Shayan of Clean Typecheck for his GSoC project on adding type-checking and name-resolution capabilities to Haskell-Source with Extensions. That will greatly facilitate writing the front end.

Second, I may very well implement versions of Haskell predating Haskell 98. After all, I got my hands on the preceding versions’ standards (see here), so why not make an old-standards-compliant compiler…if that’s feasible, of course.

Lastly, I have chosen the name of the compiler. It shall be named lhc for, at your choice: Loïc’s Haskell Compiler, Lambda Haskell Compiler, a reference to the Large Hadron Collider, Light Haskell Compiler, Le Haskell Compiler, Last Haskell Compiler. An alternative spelling is λhc.
P.S.: The last two names are courtesy of a good friend, homer. Go bug him there!
P.P.S.: You were spared, amongst other more or less sane ideas, the THC.
P.P.P.S.: I’ll try, amongst other insane ideas, to make the alternative spelling a valid way to invoke the compiler.

Other information

OK, I forgot to celebrate my first Hackage package (that was hs-json-rpc)…I’ll redeem myself by celebrating my first contributions to a wider project of which I am not one of the founders (no, my contributions to Genetic Invasion don’t count for that milestone). Let’s get that show on the road!

I said “contributions”, so here is the first: I contributed a patch to Evolving Objects because the library didn’t compile with Visual C++ when OpenMP was used. I had this patch in a private version of the library, used to compile Genetic Invasion for Windows, but given that this private branch was used exclusively under Windows, I didn’t know that a terrible error in my modifications broke compilation under Linux…I cleaned up the patch and submitted it once I became aware of that error.

Second, I made a library proposal for GHC (see here). Nothing big, but it was a good introduction to Haskell’s library-proposal process.

Compiler project: Design, first phase

First target choice

I chose .Net’s virtual machine as the first target because it is (at least theoretically) designed to support various paradigms; it already has support for:

There is also a package on Hackage to write and manipulate code in .Net’s intermediate language. Ah, and it supports unsigned integral types, unlike, for example, the JVM.

Type mappings

Haskell’s values will be mapped to Lazy<T> values, with the exception of function values, which will be mapped to Func<T, TResult> values, and tuples, which will be mapped to Tuple<T> values. It should be noted that, because of .Net’s Func<T, TResult> delegate implementation, it can’t represent functions of more than 16 parameters, so the compiler will have to automatically curry such functions when they are used as values. .Net’s tuples only go up to 8-tuples, so the last element of an 8-tuple will be used to store a tuple containing the rest of the elements: a one-tuple for 8-tuples, a pair for 9-tuples, a triple for 10-tuples, et cætera.
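The tuple-nesting scheme can be sketched in Haskell itself. The names `encode9` and `decode9` below are hypothetical, purely for illustration; they show how a 9-tuple would be split into seven direct slots plus a pair in the eighth slot, mirroring .Net’s Tuple<T1,…,T7,TRest>:

```haskell
-- Hypothetical sketch (not the compiler's actual code) of the tuple
-- overflow scheme: elements 1..7 keep their slots, elements 8..n are
-- packed into a nested tuple stored in the eighth slot.
encode9 :: (a,b,c,d,e,f,g,h,i) -> (a,b,c,d,e,f,g,(h,i))
encode9 (a,b,c,d,e,f,g,h,i) = (a,b,c,d,e,f,g,(h,i))

-- The inverse: unpack the nested tuple back into a flat 9-tuple.
decode9 :: (a,b,c,d,e,f,g,(h,i)) -> (a,b,c,d,e,f,g,h,i)
decode9 (a,b,c,d,e,f,g,(h,i)) = (a,b,c,d,e,f,g,h,i)
```

The same pattern extends mechanically: a 10-tuple would nest a triple, and so on.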

N.B.: The 16-parameter limitation on function values will probably remain a limitation of the implementation, but to comply with Haskell 98 I must support tuples of up to 15 elements.

Haskell’s numeric types will be mapped to .Net’s numeric types; e.g. Integer will be mapped to BigInteger. Arrays will be mapped to .Net’s arrays, or to a class backed by a .Net array. Character handling will most certainly be a tall order: I would like to have a string type backed by .Net’s native string type, but that is a UTF-16-encoded type that only gives access to enumeration over, the length in, and indexing of 16-bit code units. StringInfo is better in this respect, but it exposes the .Net concept of a text element, something akin to a grapheme.
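To make the UTF-16 mismatch concrete, here is a small Haskell sketch (the function name is mine, not a planned API): it counts the 16-bit code units a String would occupy as a .Net string, which differs from the String’s length as soon as a character lies outside the Basic Multilingual Plane.

```haskell
import Data.Char (ord)

-- Characters above U+FFFF need a surrogate pair (two 16-bit code units)
-- in UTF-16, so a .Net string's Length can exceed the Haskell String's
-- length for the same text.
utf16Units :: String -> Int
utf16Units = sum . map (\c -> if ord c > 0xFFFF then 2 else 1)
```

For instance, `utf16Units "abc"` is 3, while the one-character string `"\x1D538"` (MATHEMATICAL DOUBLE-STRUCK CAPITAL A) counts as 2 units despite its length being 1.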

I will have to find a means to export type aliases, but I’ll probably implement newtypes as classes extending the base type, adding nothing to it and modifying nothing I don’t need to.
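As a reminder of what such a mapping has to preserve, a Haskell newtype introduces a distinct type without adding any operations or runtime content to the underlying one; the .Net class described above is a hypothetical translation of exactly this:

```haskell
-- A newtype: a distinct type with the same representation as its base
-- type. Under the mapping sketched above it would hypothetically become
-- a .Net class deriving from the base type's representation and adding
-- nothing.
newtype Celsius = Celsius Double deriving (Eq, Show)

freezing :: Celsius
freezing = Celsius 0.0
```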

Module mappings

Each Haskell module will be mapped to three things:

  • A namespace, for the scoping role of the module
  • An assembly or a netmodule, for the “hiding” role of the module
  • A set of classes (one for each datatype and most probably one for the functions) for the code

In its scoping role, a module named X.Y will correspond to a namespace named Haskell.X.Y. In its hiding role, it will probably correspond to an assembly.
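The naming scheme is simple enough to state as a one-liner (the helper name is invented for illustration):

```haskell
-- Hypothetical helper: the namespace a module would live in under the
-- scheme above (module X.Y maps to namespace Haskell.X.Y).
moduleToNamespace :: String -> String
moduleToNamespace m = "Haskell." ++ m
```

For example, a module Data.List would land in the namespace Haskell.Data.List.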

Notes & musings

N.B.: The following paragraphs contain open questions about the implementation. All suggestions are welcome.

I’ll perhaps implement typeclasses with a map linking types to dictionaries, but I will need to decide one thing: do I enter only the type without the Lazy<T> wrapping, only the type with the Lazy<T> wrapping, or both? And if I enter both, should I just enter a generic function that forces the value and calls the version without the Lazy<T> wrapping? Or should I do that only for user-defined types? And if I do that, shouldn’t I offer an option (with a pragma, perhaps) to do the inverse, i.e. consider the version with the Lazy<T> wrapping to be the default, with the dictionary created for the strict type simply wrapping the value and calling the version for the lazy type?
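Independently of the Lazy<T> question, the general idea is classic dictionary passing, which can be sketched in Haskell itself: a class becomes a record of methods, and each instance becomes a value of that record type — roughly what the per-type map would store (all names here are illustrative):

```haskell
-- A typeclass reified as an explicit dictionary of methods.
data EqDict a = EqDict { eqD :: a -> a -> Bool }

-- An "instance": one dictionary value per instantiated type.
eqInt :: EqDict Int
eqInt = EqDict (==)

-- A function with a class constraint receives the dictionary explicitly.
member :: EqDict a -> a -> [a] -> Bool
member d x = any (eqD d x)
```

The open question above then becomes: is the map keyed by the strict representation, the Lazy<T>-wrapped one, or both?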

There is another delegate family, Action<T>, for functions that do not return a value. Should I use it for functions returning ()? And should I use Func<TResult> for functions that take only () as a parameter? Of course, these questions only matter when a function is used as a function value, but they are interesting nonetheless.

Compiler project: The beginning

This is the first post of a series concerning one of my craziest projects: writing a Haskell compiler from scratch. This post will be about the context of that project and a “mission statement”. Let’s get that show on the road!


Amongst my many projects is a Haskell compiler targeting virtual machines and other such environments. One reason I have this project is that I like Haskell, and writing a compiler for it seems to be a good way to progress in the language. Another reason is that I am curious about compilers, and writing one — even for as peculiar a language as Haskell — seems to be a good way to quench that curiosity.

The project is to write a Haskell compiler targeting virtual machines, beginning with .Net’s, though it may in the future also target the JVM or Parrot. It might even target more exotic platforms such as the WAM, SQL, OpenCL, OpenGL or PostScript. One goal of that choice is portability; another is that this toy compiler not be totally useless.

Objectives, wistful goals & non-objectives

N.B.: In the following, when I talk about a “host”, I mean the target for which we generate code, be it .Net’s VM, the JVM or SQL.

We’ll begin with what the compiler should do when it’s finished:

  • Compile code conforming to both Haskell 98 and Haskell 2010, with the possibility to choose between the two standards.
  • Compile code using common extensions, be they syntax extensions (e.g. monad comprehensions) or library ones (e.g. Concurrent Haskell).
  • Facilitate as much as possible calling to/from host functions.
  • Use as much as feasible the host’s standard library to implement Haskell’s one.
  • Map as far as is reasonable Haskell’s concepts to the host’s ones (e.g. Haskell’s packages).

The first two points are par for the course in a compiler, but note that points 3 to 5 are here because of an objective to generate code as “transparent” as possible from the host’s point of view.

We’ll continue with what the compiler might do if I have the time:

  • Have an option to generate code with a non-strict, non-lazy evaluation strategy (e.g. a classic call-by-name or a more exotic call-by-future).
  • Experiment with code generation (e.g. automatic memoization).
  • Facilitate as much as possible other FFI calls.
  • Use as much as possible the host’s facilities for debugging, profiling…
  • Be able to self-compile a working version of its Haskell parts.
  • Be able to target all other targets from any one compiler.

It may take some effort, but it would be awesome to be able to interact seamlessly between lazy and non-lazy code, with both code bases keeping their non-strict semantics and without needing to recompile the called code. The calling code might then need information on the called code’s evaluation strategy.
P.S.: While the last point is reasonable for hosts such as .Net or Parrot, and at a stretch OpenCL or OpenGL, I think it is not for hosts such as SQL or PostScript. Thus, I wouldn’t hold my breath for a PostScript-hosted compiler targeting SQL, for example.
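For concreteness, the difference between call-by-need (lazy) and call-by-name mentioned above can be sketched with explicit thunks; this is a toy model, not generated code, but it shows the behavioural gap the interop point has to bridge:

```haskell
import Data.IORef

-- call-by-name: the computation reruns at every force.
byName :: IO a -> IO (IO a)
byName m = return m

-- call-by-need: the first force caches the result; later forces reuse it.
byNeed :: IO a -> IO (IO a)
byNeed m = do
  cache <- newIORef Nothing
  return $ do
    cached <- readIORef cache
    case cached of
      Just v  -> return v
      Nothing -> do
        v <- m
        writeIORef cache (Just v)
        return v
```

Forcing a `byName` thunk twice runs the underlying computation twice; forcing a `byNeed` thunk twice runs it once. Both are non-strict: nothing runs until the first force.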

We’ll conclude with what the compiler won’t even pretend to do:

  • Be a highly optimizing and efficient compiler: it is a pet project of mine, after all.
  • Have a stable behaviour from one target to another: I wouldn’t bat an eye if the .Net version were a lazy implementation and the JVM one a non-strict, non-lazy one.
  • Generate interoperable code from one version to another, at least in the beginning.

Yes, that does mean that performance will not be my primary goal: correctness will be difficult enough a goal, I’m afraid.

Projects list


Here is the list of my projects featured on this blog, with their status:

  • JSON-RPC client implementation in Haskell (started: repo is here)
  • RFC 707 implementation in Haskell (on hold)
  • ONC RPC client implementation in Haskell (not started)
  • AWT implementation in curses, using caciocavallo (not started)
  • Haskell compiler in Haskell (not started)
  • file(1) implementation using shared-mime-info as its source of information (started: see here). That project is in Perl.

For the time being, all my projects are on hold or not even started due to my studies…I will however talk about my RFC 707 implementation project, given that it has already started.

N.B.: I may also do some of these projects in other languages (e.g. Perl) if the fancy takes me.