
zippy's Introduction

I like most things, especially programming languages. I'm very good at starting new projects and very bad at finishing them.

  • 🔭 Currently working on a static/dynamic duo of programming languages
  • 💬 Ask me about programming languages
  • 😄 Pronouns: they/them


zippy's Issues

pretty-printing of hir and lir

The only pretty-printing supported at the moment is of the MIR and the generated x64 code (through sad and hacky means). It basically goes without saying that pretty-printing of IRs can be very helpful for debugging.

holes not reporting

I forgot to implement reporting of holes when refactoring the THIR-MIR lowerer. That should be fixed.

explicit coercions

At the moment, the type checker allows going from a small numeric type to a bigger one:

let a: 10 = 5
let b: 100 = a

The C code generator assumes no such conversions happen, and so generates cast-less code. Since C has implicit conversions, this probably (?) isn't an issue, but it would still be nice to have this explicit. Basically, every time we unify two types which aren't exactly equal (e.g. going from a structurally smaller to a bigger one, or initializing a nominal type), the type checker should insert an explicit coercion.
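
For concreteness, a rough sketch of the shape this could take; Type, Expr, and unify_with_coercion here are hypothetical stand-ins for illustration, not the checker's actual types:

// Hypothetical sketch: whenever unification succeeds but the two types are not
// literally equal, wrap the expression in an explicit Coerce node rather than
// leaving the widening implicit.
enum Type {
    Range(u64),
    Named(String),
}

enum Expr {
    Var(String),
    Coerce { from: Type, into: Type, inner: Box<Expr> },
}

fn types_equal(a: &Type, b: &Type) -> bool {
    match (a, b) {
        (Type::Range(x), Type::Range(y)) => x == y,
        (Type::Named(x), Type::Named(y)) => x == y,
        _ => false,
    }
}

fn unify_with_coercion(expr: Expr, found: Type, expected: Type) -> Expr {
    if types_equal(&found, &expected) {
        expr
    } else {
        // e.g. `let b: 100 = a` where `a: 10` gets an explicit 10-to-100 coercion
        Expr::Coerce { from: found, into: expected, inner: Box::new(expr) }
    }
}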

imprecise partial evaluation

When the elaborator evaluates code like

let x = f 5
fun f (x: 10) : 10 = x

it generates a late init for x which immediately returns, when it really should generate a static value. This means that non-trivial bounds in range types will generate error messages in the backend for being unevaluated number expressions.

unjank the partial evaluator

The current partial evaluator is very eager with what it does, expanding and inlining as much as it can, as soon as it can. Additionally, it is implemented as essentially a tree-walking interpreter. Altogether, this means that some simple (though perhaps silly) code like

let f: 1 -> 1 = x => f x

causes a stack overflow. Not great!

A better approach would

  • not stack overflow on infinite loops, instead actually looping forever (and informing the user of where the loop occurs with Driver::report_eval); see the sketch after this list
  • only eagerly evaluate things when all inputs are known (including effect handlers)
  • be able to use heuristics to determine when to inline and when to leave alone a function call with none or only some known inputs
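
As a toy illustration of the "don't use the native stack" point: the sketch below evaluates a made-up expression type with an explicit work list instead of Rust recursion, so a non-terminating reduction just keeps looping (and can periodically report progress) rather than overflowing. report_progress is a hypothetical stand-in for something like Driver::report_eval.

// A minimal sketch, not the real evaluator: iterative evaluation of a toy
// expression type using explicit task/value stacks.
enum Expr { Lit(i64), Add(Box<Expr>, Box<Expr>) }

enum Task { Eval(Expr), Add }

fn report_progress(steps: u64) {
    eprintln!("still evaluating after {steps} steps...");
}

fn eval(root: Expr) -> i64 {
    let mut tasks = vec![Task::Eval(root)];
    let mut values = Vec::new();
    let mut steps: u64 = 0;

    while let Some(task) = tasks.pop() {
        steps += 1;
        if steps % 1_000_000 == 0 {
            // an infinite loop now just keeps running and reporting,
            // instead of blowing the native stack
            report_progress(steps);
        }

        match task {
            Task::Eval(Expr::Lit(n)) => values.push(n),
            Task::Eval(Expr::Add(a, b)) => {
                tasks.push(Task::Add);
                tasks.push(Task::Eval(*b));
                tasks.push(Task::Eval(*a));
            }
            Task::Add => {
                let rhs = values.pop().unwrap();
                let lhs = values.pop().unwrap();
                values.push(lhs + rhs);
            }
        }
    }

    values.pop().unwrap()
}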

crateify the compiler

The entirety of the compiler is stored in a single, monolithic crate. This would be more manageable if the compiler library were separated out from the main driver, and potentially split into several crates (e.g. front-, mid-, and backend).

project name

This project began as an offshoot of another (very similar) language I was working on called zippy. corollary has never really been a favourite name of mine (mostly because I don't think it really fits), so I want to keep a log of name alternatives before 0.0.1.

I like the letter z as an extension (and something like zc for the compiler), so I hope to find one with an initial z.

  • zippy - I like this one, but worry that it maybe gives the wrong impression of the language (e.g. it is vaguely similar to Zig, and maybe it sounds a bit like it's performance-focused), but it is also light-hearted and fun

how to handle unknown numbers?

With the introduction of arbitrary expressions in range types, the backend may no longer know the size of such a type if partial evaluation is disabled. For some expressions, the type is enough (e.g. a range type 0 upto x where x: 0 upto 10 is at most 0 upto 9) but for something like 0 upto 2 * 10 the expression 2 * 10 has the internal type number, which, semantically, is an unbounded, arbitrary precision number.

This is problematic because there's no native support for such a type. Bytes, integers, and even >64 bit types can be stack allocated, but unbounded arbitrary precision types are usually implemented with dynamic allocations. As a start, I'll probably just say "number is whatever large integer works best", so something like long long in C.

The basic problem is that I want any program that compiles with partial evaluation to compile without. The partial evaluator has the advantage of always using arbitrary precision numbers, something the backend might not.
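
To make the "the type is enough for some expressions" case above concrete, here is a hypothetical sketch (NumExpr and upper_bound are made up for illustration) of computing a conservative bound from types alone, with None meaning the backend has to fall back to its widest native integer:

// Hypothetical sketch of bounding a range type's expression without evaluating it.
enum NumExpr {
    Lit(u64),                        // e.g. the 10 in `0 upto 10`
    Var { upto: u64 },               // a variable whose own type is `0 upto n`
    Mul(Box<NumExpr>, Box<NumExpr>), // e.g. `2 * 10`
}

fn upper_bound(e: &NumExpr) -> Option<u64> {
    match e {
        NumExpr::Lit(n) => Some(*n),
        // x: 0 upto 10 means x is at most 9, so `0 upto x` fits in `0 upto 9`
        NumExpr::Var { upto } => Some(upto.saturating_sub(1)),
        // `2 * 10` has the internal type number, so without evaluation there is
        // no bound; the backend falls back to "whatever large integer works best"
        NumExpr::Mul(..) => None,
    }
}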

use register args pervasively in the lir

Everything from code generation to register allocation has turned out to be much, much easier if we only allow lir::Registers in the arguments to calls and returns. The major benefit of doing it this way is that we don't have to worry about which registers we can and cannot clobber in the code generator; instead it is free to just ignore the arguments or parameters altogether since any necessary moves or shuffles are inserted beforehand.

The backend should therefore use this technique for all kinds of arguments, including un/conditional jumps.
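
A rough sketch of what that restriction could look like at the type level (hypothetical types, not the actual lir definitions): calls, returns, and jumps simply cannot carry non-register operands, so the code generator never has to reason about clobbering for them.

// Hypothetical sketch: branch-like instructions only accept registers; any needed
// moves or shuffles are inserted before the branch itself.
struct Register(u32);

enum Value {
    Register(Register),
    Constant(i64),
    // ... frames, globals, etc.
}

enum Branch {
    Call { func: Register, args: Vec<Register>, rets: Vec<Register> },
    Return(Vec<Register>),
    Jump { target: u32, args: Vec<Register> },
    JumpIf { cond: Register, then: u32, otherwise: u32, args: Vec<Register> },
}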

name resolution trials and tribulations

I had a (perhaps obvious) realization that some trie-like structure might be the magic bullet that makes name resolution not be a janky mess. After thinking about it a bit, I think the core thing that makes name resolution complicated is the question of "finding some prefix (reachable from some given place) that makes a given suffix a valid name". Basically, we're given some incomplete name, like b.c, plus a bunch of names, like a.b, a.b.c, a.b.e, a.b.f.g.b.c, and some context (like a.b.f.g), and we want to find the largest slice of this context we can prefix to the incomplete name to make it a valid name in that set.

This sounds an awful lot like a trie, I think! The declaring pass would construct the trie with the resolution pass using it for search. I think this is something that should be explored a bit, since the current name resolver is a fragile thing I'm afraid of touching.
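
A minimal sketch of the idea (not the actual resolver): the declaring pass inserts full paths, and resolution walks from the longest usable slice of the current context down to the empty prefix, returning the first prefix that makes the partial name a declared one.

use std::collections::HashMap;

// Hypothetical trie over name segments.
#[derive(Default)]
struct Trie {
    children: HashMap<String, Trie>,
    declared: bool,
}

impl Trie {
    fn insert(&mut self, path: &[&str]) {
        match path.split_first() {
            None => self.declared = true,
            Some((head, rest)) => self
                .children
                .entry(head.to_string())
                .or_default()
                .insert(rest),
        }
    }

    fn contains(&self, path: &[&str]) -> bool {
        match path.split_first() {
            None => self.declared,
            Some((head, rest)) => self
                .children
                .get(*head)
                .map_or(false, |child| child.contains(rest)),
        }
    }

    /// Find the longest prefix of `context` such that prefix ++ partial is declared.
    fn resolve<'a>(&self, context: &'a [&str], partial: &[&str]) -> Option<&'a [&str]> {
        for i in (0..=context.len()).rev() {
            let candidate: Vec<&str> =
                context[..i].iter().chain(partial.iter()).copied().collect();
            if self.contains(&candidate) {
                return Some(&context[..i]);
            }
        }
        None
    }
}

For the example above, with a.b, a.b.c, a.b.e, and a.b.f.g.b.c inserted, resolving b.c in the context a.b.f.g returns the prefix a.b.f.g, since a.b.f.g.b.c is declared.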

use `let` for functions

For parsing reasons, let always binds a value, while fun is a shorthand for functions. My intention before starting this project was to distinguish in a slightly different way: let is an eager binding, while fun is a lazy binding; both can use the <kw> name arg1 arg2 ... = shorthand syntax for declaring functions. Though this eager/lazy thing might be somewhat of a dead end, I still think it would be nice and simple if the language needed just one keyword for binding.

One question is how this will interact with constructors, when that time comes. The short answer is "I don't know"; the long answer is that I might want to say that "constructors must always appear in qualified form" which would make parsing easier - if the application pattern has a raw name in the function part, it's a function; if it is a path, it's a constructor. But there are many options to explore here and things to consider.

Pros

  • lexical simplicity: the language grammar stays at a simple "let" simple-expr "=" expr
  • semantic simplicity: let always introduces a value, in a nice "let this expression be defined as this expression" way
  • consistency: patterns become much more free-form, and functions can be defined using operator syntax

Cons

  • lexical complexity: the parser now needs to do more work to extract the function name from a function definition
  • consistency: having separate keywords like let and fun might make code ever so slightly more readable

spans everywhere

Most spans and type annotations are kinda just dropped when lowering to the LIR. This shouldn't be the case; keeping spans all the way through to code generation would let us embed debug info in the binaries.

redo instantiation during type checking

Currently, when the type checker instantiates a polymorphic value, it also needs to keep track of that instantiation since that value's full type may not be known yet. This is problematic because it means code like

fun main (?: 1) = id (id |10| 5)
fun id |T| (x: T) = x

will have to settle on one instantiation for id when it has to unify the result of id |10| 5 with the argument of id.

I'm not entirely sure how to avoid this. I think a good first step would be to type check the strongly connected components of the call graph, which would ideally let us instantiate the type right away instead of having to keep track of the instantiation (like in HM). The problem might persist in mutually recursive scenarios, though that gets close to polymorphic recursion so it might not be a problem.

monomorphisation is bugged

There are two main problems with monomorphisation at the moment:

  1. There's no reuse of monomorphic functions instantiated at the same types
  2. Recursive functions mess up the monomorphiser

Basically we need some way to cache which instantiations we have, as well as a stack of which functions we are currently monomorphising.
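
A rough sketch of both pieces (Instantiation and Monomorphiser are hypothetical, not the compiler's actual types): the cache reuses instances with identical type arguments, and the in-progress stack lets recursive calls refer to the reserved name instead of expanding forever.

use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash)]
struct Instantiation {
    function: String,
    type_args: Vec<String>,
}

#[derive(Default)]
struct Monomorphiser {
    cache: HashMap<Instantiation, String>,
    in_progress: Vec<(Instantiation, String)>,
}

impl Monomorphiser {
    fn instance(&mut self, inst: Instantiation) -> String {
        // 1. Reuse an already-generated instance with the same type arguments.
        if let Some(name) = self.cache.get(&inst) {
            return name.clone();
        }
        // 2. A recursive call to something we are already monomorphising just gets
        //    its reserved name instead of triggering another expansion.
        if let Some((_, name)) = self.in_progress.iter().find(|(i, _)| *i == inst) {
            return name.clone();
        }

        let name = format!("{}${}", inst.function, inst.type_args.join("_"));
        self.in_progress.push((inst.clone(), name.clone()));
        // ... specialise the function body here, which may recursively call instance() ...
        self.in_progress.pop();
        self.cache.insert(inst, name.clone());
        name
    }
}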

types-only partial evaluation

Implement EvalAmount::Types, which should evaluate any expressions referenced by types and leave everything else unevaluated.

partial evaluation might be bugged?

The partial evaluator uses frame indices to distinguish values originating in a caller versus values originating in the current function. The idea is that a frame index < the current frame index indicates a dynamic value, so the partial evaluator won't use it when rewriting code.

I haven't tested, but because the frame index is currently just the length of the call stack, we could end up in a situation where a function returns a value (e.g. with frame index 5), and that value is passed to a different call (which makes for a current frame index of 5). That latter callee might then mistakenly think the dynamic value is part of its static data.

I believe a fix would involve making the frame index always monotonically increasing, so it would serve as a measure of whether a value was produced in a "past" call frame. This might not interact well with effect handlers though, when the time comes for that.
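
A toy sketch of that fix (all names made up): frame indices come from a counter that only ever increases, rather than from the current call-stack depth, so two different frames can never share an index even if they happen to sit at the same depth.

#[derive(Default)]
struct FrameIds {
    next: u64,
}

impl FrameIds {
    /// Called on every function entry; ids are never reused.
    fn enter_frame(&mut self) -> u64 {
        let id = self.next;
        self.next += 1;
        id
    }
}

/// With monotonic ids, anything stamped with a smaller index than the current frame's
/// necessarily came from an earlier frame and must be treated as dynamic.
fn is_dynamic(value_frame: u64, current_frame: u64) -> bool {
    value_frame < current_frame
}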

rethink the lir and codegen

The current code generator is a pretty janky mess. This is mostly because I don't really have a solid mental model for what steps are necessary to translate MIR into something like x64. There are a lot of edge cases and stuff to think about, and things which we can neatly ignore in the higher-level MIR become painfully present in the code generator. Things like:

  • using lea for function pointers while using mov for constants (as is done in 0836bf8)
  • moving to the 8-bit registers doesn't clear the high bits, so results look wrong
  • which kinds of operands are legal varies by instruction, so extra instructions may need to be added or operands may be flipped around
  • handling values which are 64-bits or larger is hard! they don't fit neatly into registers and must live on the stack, so whether we refer to some value directly through a register or indirectly through a pointer depends on its size

There is a lot of thorny stuff here, and I'm not really sure how to work around it. I think one very fundamental issue with the current approach is that the code generator treats registers as if they are exactly like variables in the source, just in limited supply. But this isn't the case; registers are not like variables at all!

Variables                        Registers
Unbounded size                   Bounded (usually small) size
Unbounded number                 Very limited number
Limited scope                    Global
Hard to accidentally overwrite   Clobbering happens all the time

Yes, keeping things in registers is a hugely beneficial optimization, but it should be just that - an optimization. Because what is it that does have all of the properties of variables? Stack-allocated locals!


This rant of sorts was spurred on by the commit linked above, which revealed to me just how borked the current setup is. To be concrete, use the compiler as-is at the commit above and compile the following code with --no-eval:

fun main (?: 1) = apply (id, 9)
fun apply (f: 10 -> 10, x) = f x
fun id (x: 10) = x

The expected result of this should be 9, because it just passes 9 to the identity function through some convoluted means. But what you actually get if you link and execute is instead something like this:

corollary master * = $ link /entry:wmain /subsystem:console .\artifacts\test.lib Kernel32.lib /out:artifacts\test.exe
Microsoft (R) Incremental Linker Version 14.33.31630.0
Copyright (C) Microsoft Corporation.  All rights reserved.

corollary master * = $ .\artifacts\test.exe
corollary master * = $ echo $LASTEXITCODE
663425033

That's not 9! Is the code generator wrong? Well, kind of. See, 663425033 in hex is 278b1009. And as it turns out, the lowest byte will always be 09 when you execute. The code generator is actually working precisely as intended. The problem lies in apply, whose assembly code looks something like

_napply:
    push rbp
    mov rbp,rsp
    mov rdx,rdi
    mov dil,sil
    call rdx
    leave
    ret

It expects x in sil (lower 8-bits of rsi) and f in rdi, but f expects its argument (a number) in dil (lower 8-bits of rdi), so apply shuffles things around to make that happen. The problem is that mov dil, sil doesn't actually clear the upper 56 bits of rdi, so when control eventually comes back to the entry point and it passes the value in rdi to ExitProcess (or exit or whatever), the only relevant stuff is in the lowest part of the register and everything else contains garbage.

I could presumably add in a bunch of extra code to make sure registers are zero when I don't use all of them, but I don't think that would actually solve anything. As mentioned, I think the problem is much more fundamental and a thorough restructuring is required to make this more maintainable, more correct, and less ad-hoc.
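
To sketch what "stack slots first, registers as an optimization" might look like (hypothetical types, not the current lir): every value gets a home on the stack with an explicit size, and registers only appear as short-lived scratch locations chosen per instruction, which keeps clobbering and partial-register concerns local to each lowering step.

#[derive(Clone, Copy)]
struct Slot {
    offset: i32, // relative to the frame pointer
    size: u32,   // in bytes; values wider than a register just get wider slots
}

enum Place {
    Local(Slot),           // the default home of every value
    Scratch(&'static str), // a register name, valid only within one lowering step
}

struct Frame {
    next_offset: i32,
}

impl Frame {
    fn alloc(&mut self, size: u32) -> Slot {
        self.next_offset -= size as i32;
        Slot { offset: self.next_offset, size }
    }
}

// Promoting a Local into a register then becomes an optimization pass that has to
// prove the move is safe, rather than the default the code generator fights against.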

expand the type system, part 1

  • Arbitrary expressions in range types
  • Nominal types
  • Type functions
  • Records
  • Constructors & pattern matching
  • Existentials (in particular, functions)

because for context in messages

The type checker has a nice little Because thing that gets passed around as a way of reifying why particular constraints or decisions were made. It isn't really used at the moment, but it'd maybe be useful to have a with_context() method on the message adder thingy for more useful error messages.
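
A hypothetical sketch of what that could look like; none of these names are the actual diagnostics API.

enum Because {
    Unified { left: String, right: String },
    Annotation { at: String },
}

struct Message {
    text: String,
    context: Vec<Because>,
}

impl Message {
    fn error(text: impl Into<String>) -> Self {
        Message { text: text.into(), context: Vec::new() }
    }

    /// Attach the reason a constraint exists, so the final report can explain not
    /// just what failed but why the checker was trying to do it in the first place.
    fn with_context(mut self, because: Because) -> Self {
        self.context.push(because);
        self
    }
}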

three pass type checking?

While working on #24, I've been thinking a bit about how to simplify the type checker. I think I have an idea for how to do this, but I haven't fully thought it through to see if it would work.

Basically, the current type checker needs to

  • Check/infer the kind of types
  • Check/infer the types of values
  • Concretify existentials (not yet implemented)

which makes things complicated because these all work in different ways. However, if we make some assumptions, I think we can separate these into separate passes:

  1. The kind of a type cannot depend on value information (e.g. no pattern matching on a value where branches have different kinds)
  2. The concrete type of an existential cannot affect inference

I believe 1. is a perfectly reasonable restriction. I'm not quite sure if 2. is too restrictive, though - one thing I'm worried about is how this interacts with implicits (ideally, I'd like to be able to use the inferred implicit to direct type checking, but maybe that's okay since these kinds of implicits should only appear explicitly).

Anyways, if we can separate these, then we can do fun things like use one of the bog standard HM inference algorithms with generalization for kind checking and concrete type inference, while having the more bidirectional coercy one in the value world.

c backend

Writing my own x64 backend may have been biting off more than I can chew (#8, #4, #6, ...). The primary reason I went with a custom backend instead of something like a C-based one is that I eventually want to fiddle with a segmented-stack-based approach to compiling effect handlers.

However, I think starting out with a C backend may be a wiser choice than trying to force a backend I don't have a great mental model of into working. Apart from the fact that compiling to C can (I suspect) be done much more easily than compiling to x64, it would also give me easier FFI immediately.

bad naming

The MIR contains ExprSeq which is a sequence of "expressions" (Expr) followed by a branch. This should really be called Block and Statement, or something to that effect.

move to a query based system

Move to e.g. Salsa 2022 instead of the compiler pipeline model. Things might be a bit messy wrt. the cratification of the compiler, but I think this should be doable, and it would open up the possibility of an LSP as well.

  • Querify the frontend
  • Querify the midend
  • Querify the backend
  • Move away from the Messages diagnostic system and into something that uses the native salsa accumulators
  • Figure out how to deal with compiler-generated names
  • Make things queries!
    • Strongly connected component discovery (zippy_fronted::tyck::components)
    • Make things more pull-based than push-based (e.g. instead of typeck taking a Decls, it will take a name or program id or something)

runtime initialization

Currently, only top-level functions are supported. Several things need to be implemented to support top-level values too:

  • Implementing values in the backend
  • Differentiating between functions and function pointers in the code generator
  • Handle relocations properly (no longer relevant: C backend)
  • Mutable variables vs. constants (no longer relevant: C backend)
  • Initialization code (necessary if e.g. --no-eval is used)

call the linker from within the compiler

The compiler currently just generates object files, which must be linked manually; this is pretty annoying (especially on Windows, where you might have to go through vcvars{all,32,64}). Do this automatically from within the compiler driver (though potentially with an option not to), using e.g. cc.

tuple/annotation precedence

Currently, tuples have a higher precedence than annotations. The reason is that the HIR requires annotations on value declarations, and that patterns have no annotation node. So in order to be able to write both

let a, b = 1, 2
let x: 10 = 5

without parentheses, the grammar needs to look something like

let-decl = "let" small-expr [":" small-expr] "=" expr
anno-expr = small-expr [":" small-expr]
small-expr = tuple-expr

However, this means that

fun apply (f: 10 -> 10, x) = f x

parses as

fun apply (f: ((10 -> 10), x)) = f x

Also, it means that annotations apply to the "whole" tuple:

let a, b: 10 = 1, 2
-- same as
let (a, b): 10 = 1, 2

which is not what we want! I suspect the solution is easy, however: make annotations bind more tightly than tuples, add an annotation pattern, and drop the annotation on let-decls (adding a wildcard there instead and making it the type-checker's job to solve it).
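
A sketch of the adjusted grammar (app-expr here is just a stand-in for whatever level sits below annotations):

let-decl   = "let" small-expr "=" expr
small-expr = tuple-expr
tuple-expr = anno-expr {"," anno-expr}
anno-expr  = app-expr [":" app-expr]

With this, let a, b: 10 = 1, 2 groups as let a, (b: 10) = 1, 2, and a plain let x: 10 = 5 still works because the annotation is now an annotation pattern rather than part of the declaration itself.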

strongly connected type checker

The current type checker uses a worklist-based approach, which lets it shuffle the order in which constraints are solved. The benefit of this is that the type checker can be order independent even with order dependent constraints.

The problem, however, is that it's a bit messy. If the type checker has a series of unsolvable constraints, then the only information it has is just the list of constraints with no indication of where or why progress is not being made. On a small scale, this is fine, since it can just e.g. take the span of the first unsolved constraint and say "unable to make progress here", hoping the user will figure it out. But I imagine that as programs get larger, it may be easier to end up in a situation with long constraint cycles where this becomes less and less helpful.

To add on to that, some kinds of constraints simply don't work, such as generalization. I don't plan on having generalization for value types, but I do plan on it for effect types, so I'd need to make that work anyways. The basic issue is that we need to know whether or not a name is polymorphic before we infer its type (if it is polymorphic, we need to instantiate fresh unification variables, otherwise we don't do anything). Since unification variables are monomorphic, this can create some ordering issues. I suspect this could be solved with a clever enough constraint, but I'm not sure that's desirable.

A more robust solution might be to identify the strongly connected components in the call graph, and to do HM-style generalization after checking each component. This would also make it easier to provide better error messages in the case of unsolved unification variables.

This might require some care when it comes to mutually recursive modules.
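
A rough sketch of that order of operations, using petgraph for the SCC computation; check_and_generalize is a made-up stand-in for inferring the group's types and then generalizing.

use petgraph::algo::tarjan_scc;
use petgraph::graph::DiGraph;

// Build a call graph with an edge from caller to callee, then check one strongly
// connected component at a time. tarjan_scc returns components in reverse topological
// order, so every callee outside a component is already checked and generalized by
// the time we reach it.
fn check_program(names: &[&str], calls: &[(usize, usize)]) {
    let mut graph = DiGraph::<&str, ()>::new();
    let nodes: Vec<_> = names.iter().map(|n| graph.add_node(*n)).collect();
    for &(caller, callee) in calls {
        graph.add_edge(nodes[caller], nodes[callee], ());
    }

    for component in tarjan_scc(&graph) {
        // Mutually recursive definitions land in the same component and are checked
        // together; generalization happens once per component, afterwards.
        let group: Vec<&str> = component.iter().map(|ix| graph[*ix]).collect();
        check_and_generalize(&group);
    }
}

fn check_and_generalize(definitions: &[&str]) {
    // stand-in for inferring monomorphic types within the group, then generalizing
    println!("checking {:?}", definitions);
}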

elide zero-sized types

I think the flattening step of the MIR elaborator has the necessary type information to determine (conservatively) whether a type is zero-sized or not, and to elide it if so. I also think this can be a completely local transformation (like flattening itself), so it shouldn't be overly difficult.
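
A hypothetical sketch of the conservative check (the Type shape is made up): a type is elided only when we can see it is zero-sized, and anything we are unsure about is kept.

enum Type {
    Range(u64),         // `0 upto n`
    Product(Vec<Type>), // tuples
    Unknown,            // e.g. a bound we could not evaluate
}

fn is_zero_sized(ty: &Type) -> bool {
    match ty {
        // a range with at most one inhabitant carries no information
        Type::Range(n) => *n <= 1,
        Type::Product(fields) => fields.iter().all(is_zero_sized),
        Type::Unknown => false, // conservative: keep it
    }
}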

polymorphic destructuring

For the moment, the compiler only accepts polymorphic definitions with a simple Name as the pattern. This should be easy to extend to more complicated patterns, like

let x, y |T| = (a: T => a), (b: T => b)

The main challenge is (as per usual) the THIR-to-MIR step.
