
Comments (438)

askeksa-google avatar askeksa-google commented on May 30, 2024 4

Looking into the flow analysis needed to verify that all uses of non-defaultable locals are dominated by definitions, it is my impression that this will not add much complexity to validation. The structured control flow and monotonicity of initialization (that from a runtime timeline point of view, initialized variables can never go back to being uninitialized) help us out here.

AFAICS, computing the set of definitely initialized variables at every instruction can be done in a single, linear traversal:

  1. local.set and local.tee set the variable as initialized.
  2. The set after the end of a block is the intersection of the set before the end (if reachable) and the sets before all branches to the end.
  3. The set after the end of an if without an else is the set before the if.
  4. The set after an else is the set before the corresponding if.
  5. The set after the end of an if with an else is the intersection of the set before the else (if reachable), the set before the end (if reachable) and the sets before all branches to the end.

Interestingly, no special handling of loop is required. Since the only entry into the loop is at the beginning, and the set can only increase from there, the intersection of the sets for all incident flow to the start of the loop is identical to the set at entry.

An alternative formulation to the "(if reachable)" clauses would be for all instructions that break reachability (e.g. br) to set the set to include all variables. This is probably more along the lines of how the spec is generally formulated.

The extra state needed during validation is a bit vector with a bit for each non-defaultable local, plus one such vector for each block and two for each if (which can be reused for non-overlapping constructs).
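As a sanity check, the single-pass traversal described above can be sketched in Python. This is a toy model with an instruction encoding of my own invention (tuples, not real Wasm), just to make the rules concrete:

```python
# Toy model of the definitely-initialized analysis described above.
# "set" stands for local.set/local.tee; "get" for local.get.
ALL = frozenset(range(64))  # stands in for "all locals" on unreachable paths
                            # (64 is an arbitrary cap for this sketch)

def analyze(instrs, init, labels=()):
    """Return the set of definitely-initialized locals after `instrs`,
    given the set `init` before them. Each enclosing block contributes a
    one-element list in `labels` accumulating the intersection of the
    sets at all branches to its end (intersection identity: ALL)."""
    cur = frozenset(init)
    for op in instrs:
        if op[0] == "set":                      # rule 1
            cur = cur | {op[1]}
        elif op[0] == "get":
            assert op[1] in cur, f"local {op[1]} may be uninitialized"
        elif op[0] == "br":                     # code after br is unreachable
            labels[-1 - op[1]][0] &= cur
            cur = ALL
        elif op[0] == "block":                  # rule 2
            label = [ALL]
            cur = analyze(op[1], cur, labels + (label,)) & label[0]
        elif op[0] == "if":                     # rules 3-5 (else may be [])
            label = [ALL]
            cur = (analyze(op[1], cur, labels + (label,))
                   & analyze(op[2], cur, labels + (label,))
                   & label[0])
        # loop needs no case, per the observation above: branches back to
        # a loop header always carry at least the header's entry set
    return cur
```

For instance, `analyze([("if", [("set", 0)], [("set", 0)]), ("get", 0)], frozenset())` validates, while dropping the else arm makes the final `get` fail.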

Thus, option (7): Allow non-defaultable locals, and require all local.get of such locals to be dominated by local.set / local.tee of the same variable.

  • No new instructions added.
  • Covers more use cases than let, e.g. typical join situations.
  • No impact on runtime behavior, i.e. no checks needed.
  • Both unoptimized and optimized code benefit from the null check elimination.
  • Moderate addition to validation which is amenable to one-pass compilation.

From a code generation perspective, I think the patterns allowed by this analysis correspond well with typical use cases for non-defaultable locals. Does this also seem feasible from an implementation perspective?

from function-references.

lars-t-hansen avatar lars-t-hansen commented on May 30, 2024 3

It would almost have to be the indices of the locals, wouldn't it, for typing to work out? (It seems unreasonable to me for the locals not to be typed, which is what (4) as originally stated opens up for; I'm not sure what the benefit of that would be, since locals are not necessarily all of the same size, and the more information the better.) If it's the number of locals, then we'd have some kind of stack discipline in the "overflow" locals area, but different paths through the code would require different types on that "stack". Instead, I think what we want is a set of typed locals, where let picks the ones it wants from the set of currently-unused locals with the correct types and gives them values.

Re not reusing space for locals, this comes up from time to time. As @tlively said, we use the baseline compilers for debugging, and each local gets a separate slot, which makes debugging easy. But sometimes this means the baseline frames are very much larger than optimized frames, and we have occasional reports (hi @kripken) about stack overflows during debugging that don't happen in production. Having strict stack discipline instead of a locals area might alleviate this but depends on the producer; with a pick et al set of operations, the producer would still be at liberty to allocate all its perceived locals on the stack at function entry rather than trying to optimize for their lifetime, effectively simulating what we have now.


RossTate avatar RossTate commented on May 30, 2024 2

Thanks for the clarifications! That's what I suspected @tlively had in mind, but some other comments regarding max depth seemed to be relying on each let having different types, which is why I felt the need for clarification.

So my concern with this approach is that single-pass compilers will need to implement it by allocating a single stack frame that can fit all of the locals. At present, that probably isn't so bad, because wasm has a coarse type system that enables generators to reuse wasm-level locals across multiple high-level locals. Furthermore, there is no type refinement, so each surface-level local needs only one corresponding wasm-level local. But with non-nullable types and the gc proposal, both of those factors are going to change. As an example, the this pointer in OO method implementations will typically take on at least two types: $object at first (since the type system cannot guarantee it actually belongs to the implementing class) and then $impl-class pretty much immediately (after which the original $object local will no longer be used). Similarly, many variables will be potentially-null at first and then at some point will become non-null. So I worry that stack frames will become large due to unused-local bloat. I also worry that there will be a lot of lets and lots of local.gets, for the reasons discussed in WebAssembly/design#1381.

As @lars-t-hansen points out, even with stack-manipulation instructions, generators can produce stack-wasteful code. But at least it's their choice. With (4), they have no choice but to produce stack-wasteful code for single-pass compilers, or at least they have to consider how to trade off between type precision/casts and stack size. Of course, that can be improved upon—I just wanted to point out the lack of control generators currently have in (4).


jakobkummerow avatar jakobkummerow commented on May 30, 2024 2

From my perspective, there is no urgency; we can split out let (or alternatives) into its own proposal.

The ability to have non-null locals isn't worth much if (nearly?) nobody uses them (see first post in this thread). So we may as well go with option (5): do nothing, continue to require all locals to be defaultable, rely on engines to maybe eliminate provably-unnecessary null checks, see if this causes tangible (convenience/performance/...) problems in practice. (Non-nullable types are still useful for struct fields, value stack entries, and potentially sometimes arrays.)

If we do want usable non-nullable locals right away, option (7) provides a very nice way to get them. In particular, it does not at all stand in the way of introducing let (or something else) later on, if we eventually decide that there is a concrete need for that.

In conjunction with GC, this can actually have implications on heap usage, too.

Yeah, but it would have to be a pretty extreme (contrived?) case where it actually matters whether an on-heap object held alive only by a local can be freed before the current function terminates.


askeksa-google avatar askeksa-google commented on May 30, 2024 2

In source languages, variables are scoped in the same way, so in principle, that should always be translatable. IIUC, the reason this can become problematic is that many compiler middle ends lose the original structure, e.g., due to SSA, so that you have to recover some of the structure. Is that correct?

Not necessarily. For example, in Dart, you can declare a non-nullable local variable without an initializer, and then write to it later. It can be used anywhere where it is initialized on all paths, following rules similar to the definite assignment rules of Java.

String answer(bool condition) {
  String s;
  if (condition) {
    s = "yes";
  } else {
    s = "no";
  }
  // s is definitely initialized here
  return s;
}

Nullable variables are promoted to non-nullable by null checks:

void maybe(Fooable? x) {
  if (x != null) {
    // x is non-null here
    x.foo();
  }
}

Here it makes sense for the Wasm code to assign the non-null x (for instance obtained via br_on_non_null) to a non-nullable local for accesses inside the scope of the promotion. In this particular case, this could be done using let, but there are more involved cases, where the scope of the promotion does not follow the lexical structure of the program, such as:

void also(Fooable? x) {
  if (x == null || /* x is non-null here */ x.shouldNotFoo()) {
    return;
  }
  // x is non-null here
  x.foo();
}

In all cases, the defs and uses adhere to the plain dominance relationship on the CFG.

what level of urgency do people feel about this topic at this time?

I would like to see non-nullable locals in the MVP. I agree with @rossberg that non-nullable types seem half-baked if they can't be used for locals. Until support is added, the next best solution is to use nullable locals and sprinkle ref.as_non_null, relying on good nullability propagation (from parameters, fields and other null checks) in the optimizing runtime compiler. My guess is that this could remove most of the extra null checks, so it is mainly a question of elegance, code size and baseline compiler output performance.


conrad-watt avatar conrad-watt commented on May 30, 2024 2

I'd prefer that we add more stack operations to take advantage of the fact that the value stack is already flow-sensitive, rather than making locals flow-sensitive and requiring explicit merge annotations for locals which duplicate the annotations already needed for the stack.

The previous discussion in WebAssembly/gc#187 converged to the conservative idea of disallowing non-nullable locals for now. This would leave open our ability to spend as long as we want bikeshedding future let/stack/flow-sensitive local features.


jakobkummerow avatar jakobkummerow commented on May 30, 2024 2

I'd rather consider removing non-nullable types altogether for the time being. They just seem like a broken feature in such a setting.

Why does it have to be all-or-nothing? Even if non-nullable locals are disallowed for now, non-nullable types can still be useful for globals, parameters, struct/array fields, and the value stack. (Engines could probably deduce the latter, but only the latter, automatically.)

Aesthetic arguments like "seem[s] like a broken feature" hold little water when we're talking about a compilation target. Human writers of Wasm modules might get confused when they can't use certain types in certain places. Compilers don't care.

Of course, if you're arguing that non-nullable types aren't all that important to begin with, then dropping them is a logical consequence. Given that we still have performance goals to prove, I'd be careful with that argument though: every little bit helps.


rossberg avatar rossberg commented on May 30, 2024 2

Having experimented with let some in the Wob compiler, I can confirm two observations:

  • Non-nullable types without locals are mostly useless. A type that you cannot create temporaries for cannot generally be used effectively in a compiler.

  • The index shifts for locals induced by the current design of let can indeed create an extra hurdle in a code generator (at least in a simple one).

So I'm starting to warm up to ditching composability in favour of a variant where let does not introduce new locals, but merely initialises them and marks them as accessible within a given scope. This is similar to (4) from the OP, if I interpret that correctly.

For the sake of concreteness, let me run through a possible design. Roughly, the let construct would look like this:

let <blocktype> <localidx>* <instr>* end

where the local indices are references to normally declared locals. The instruction expects a corresponding number of arguments on the stack and sets these locals to their values (essentially, a bulk local.set). Furthermore, within the scope of the let these locals are then marked accessible. To that end, the typing context has a new component that tracks which locals are "accessible", which is updated at the beginning and reset at the end of a let block (that is the main extra cost for engines).

In terms of typing rules:

  • let bt x* instr* end : [t1* t*] -> [t2*]

    • iff bt = [t1*] -> [t2*]
    • and (local x : t)*
    • and instr* : [t1*] -> [t2*] with local x* accessible
  • local.get x : [] -> [t]

    • iff local x : t and local x accessible
  • At the beginning of a function, locals with defaultable types start out as accessible, whereas locals with non-defaultable type start out as inaccessible.
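A minimal validator sketch of this accessibility tracking (Python, with a toy instruction encoding of my own; not from the proposal text):

```python
def validate(instrs, accessible):
    """Check local accesses under the pre-declared-let scheme.
    `accessible` is the set of local indices readable here; it grows
    inside a let body and is implicitly reset at the let's end."""
    for op in instrs:
        if op[0] == "local.get":
            assert op[1] in accessible, f"local {op[1]} inaccessible"
        elif op[0] == "let":
            xs, body = op[1], op[2]
            # let pops initial values for xs (a bulk local.set) and
            # marks them accessible only within its body
            validate(body, accessible | set(xs))
            # nothing to undo at `end`: the caller's set was never mutated
```

Per the last rule, a function's entry call would pass the defaultable locals as the initial `accessible` set, leaving the non-defaultable ones inaccessible until a let initialises them.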

While this loses composability and requires slightly more work in engines, the upshot is that it remains structured and avoids the complications, cost and other potential issues of flow-sensitive typing. While I understand the desire for that, I still don't think that we want to go there for Wasm. (And to clarify, even ignoring the problem of state duplication, that requires a linear number of subset checks, each of which is on sets whose size can itself be linear in program size. So despite other arguments, I believe this would make validation inherently quadratic in the worst case; using bitsets does not change that, as they only shave off a constant factor.)


conrad-watt avatar conrad-watt commented on May 30, 2024 2

I agree that @rossberg's criticism of flow-sensitive locals doesn't seem accurate (especially considering in the extreme case we could add enough annotations to essentially make them equivalent to the value stack).

Overall though, I see his comment as pretty constructive. Given that we seem to be moving away from concretely proposing flow-sensitive locals anyway, could we let the disagreement drop for now, on the understanding that if the flow-sensitive local idea were ever revived, we'd have to see this debate to the end?

For the record, I'd personally still (somewhat weakly) prefer a suite of pick-like stack operations over a "pre-declared let" approach. Is there a concern that a stack-based approach would cause code size issues (e.g. due to the number of shuffles/annotations required to carry stack-held non-nullable values in and out of blocks)?

EDIT: I originally wrote "non-nullable" in a bunch of places where I meant "flow-sensitive".


RossTate avatar RossTate commented on May 30, 2024 1

Hmm, I feel like an expression-based IR shouldn't be a big problem. I would suspect this would only affect the serialization phase. That is, you're free to use (your own notion of) locals in your IR, but when you serialize your IR to WebAssembly, rather than emitting local.get/set $index_of_local_in_your_IR you emit get/set $index_of_local_on_stack. The complication is that $index_of_local_on_stack depends on $index_of_local_in_your_IR, which locals are currently initialized (as the uninitialized ones are not on the stack), and how many temporaries are currently on the stack, so your emitter would have to track that as it recurses through your expressions.
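The bookkeeping this serialization step would need can be made concrete with a small sketch (Python; the slot layout, with initialized locals below all temporaries in index order, is an assumption of mine just for illustration):

```python
def stack_depth(ir_local, initialized, n_temporaries):
    """Depth from the top of the value stack at which IR local
    `ir_local` lives, assuming initialized locals sit below all
    temporaries in index order. Uninitialized locals occupy no slot."""
    assert ir_local in initialized, "local not yet initialized"
    # slots between this local and the stack top: every initialized
    # local with a larger index, plus all current temporaries
    locals_above = sum(1 for l in initialized if l > ir_local)
    return n_temporaries + locals_above  # immediate for a pick-style get
```

For example, with locals {0, 2, 5} initialized and two temporaries on the stack, local 2 sits at depth 3; the emitter would recompute this as initializations and temporaries come and go.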


kripken avatar kripken commented on May 30, 2024 1

@rossberg

Just for completeness, a much simpler solution along the lines of (1) and (2) would be to simply say that every local.get may trap if not yet initialised, and simply leave it to the cleverness of the VM to eliminate the checks where possible. Sort of what JavaScript does with let. Still not a satisfactory property for an assembly language.

That sounds like option (6)?

I think the key thing we want out of non-nullable types is performance, in at least three ways:

  1. Speed in the optimizing tier.
  2. Speed in the baseline tier.
  3. Download/parsing speed.

The downsides of let include binary size (a block etc. for each let) and toolchain complexity, as @jakobkummerow mentioned. An additional form of toolchain complexity is @RossTate's point about control flow merges: it is not trivial to create a phi with a non-nullable value (as locals cannot be used for it, which is otherwise the typical solution in wasm). All of these harm us on 3 (and possibly 2).

On the other hand option (6) will only harm us on 2, and it seems like the only option that will completely avoid any harm on 3 (as it adds literally no bytes over non-nullable types). I think it is actually a quite nice solution for an assembly language with wasm's constraints.


lars-t-hansen avatar lars-t-hansen commented on May 30, 2024 1

I think wasting stack space is very, very far down on my list of concerns for a baseline compiler and only slightly higher on the list for a debugging compiler - there aren't often recursions that are deep enough to make it an issue - and I'd probably prefer to solve the problem by making the stack larger during debugging or by running a prepass when creating code for debugging to reduce the frame size. Happy to discuss this of course, but the argument, while indisputably true, feels a little beside the point. Clearly there may be single-pass compilers that will feel more pain here, but then I'd like to hear from their authors.


askeksa-google avatar askeksa-google commented on May 30, 2024 1

Let's take a step back and consider the problems that let is supposed to solve from a Wasm producer perspective. Based on what I read in various issues (such as #9, WebAssembly/design#1381, WebAssembly/gc#187 and this one), the motivations fall broadly into three categories:

  1. The ability to have non-defaultable locals, primarily to make non-nullable reference types more useful.
  2. Easily do local transformations on the binary format, such as inlining and adapter code, without having to go back and add more locals.
  3. Declaratively indicate that some locals are only used inside a scope, which can be useful in stack inspection, debugging, and perhaps to save stack space in simple interpreters.

It is my impression (based on experience with dart2wasm) that let is not a good solution to the first issue, due to its scope-based shape. In many cases, using it would require a specialized liveness analysis to figure out which variables can be scoped like this and where their blocks should start and end (which can be in inconvenient places relative to the structure of the rest of the code). It is much easier (and likely smaller, too) to just use nullable variables and insert the necessary ref.as_non_null.

For the second issue, it would seem that any proposal that requires pre-declaring the variables will not provide the desired simplicity.

Thus, it looks like a solution like (4) falls between two stools from this perspective.

Maybe it would be better to consider these issues separately, instead of trying to fit multiple goals into one construct?


askeksa-google avatar askeksa-google commented on May 30, 2024 1

Fair enough, but for Dart this is primarily a type-checking feature, not a performance feature. You can always compile it to a nullable type on the Wasm level. Does this case matter enough that we have to go to length to allow optimising the last bit out of it?

Given that we have other sources of nullability from which the nullability of locals can be inferred, the peak performance is unlikely to be affected much. The main practical benefit is code size. Anecdotally, I have seen code from dart2wasm where 9% of the instructions were ref.as_non_null. Allowing the compiler to use non-nullable locals got rid of the majority of these, for an overall 7% reduction in the number of instructions emitted.

In that case, I'd rather consider removing non-nullable types altogether for the time being. They just seem like a broken feature in such a setting.

Having non-nullable types in other places (parameters and return values in particular) is crucial for performance, since eliminating null checks through nullability propagation gets very difficult (or limited) without them. Again anecdotally, I have seen performance overhead from null checking of more than 15% (measured by modifying TurboFan to not emit null checks).


tlively avatar tlively commented on May 30, 2024 1

I don't have a concrete worry about code size for using pick-like stack operations (although I wouldn't rule out there being a code size issue either). The reason I would prefer an approach based on locals rather than stack instructions is much less principled: Binaryen would have to translate a stack-machine design to a local-based alternative at parse time, then convert back at writing time. So if WebAssembly uses a local-based design, Binaryen will be less complicated and its IR will be less divergent from WebAssembly. Yes, this is putting the cart before the horse a little bit, but that is my feedback as a consumer of the spec :)


 avatar commented on May 30, 2024

(not part of the Wasm team, just a consumer/user of Wasm compilers)
Regarding point three, I've seen Clang++ implement its own null-pointer function as a function that traps upon being called (e.g. via unreachable), instead of allowing a genuine out-of-bounds access. Declaring locals with a function that exists, yet still traps upon being called, seems to preserve the same semantics as using a null function reference while removing the branching ref.is_null check, and I can see some compilers easily following that route.


kripken avatar kripken commented on May 30, 2024

About (3) (local initializers that are similar to global initializers), I think that could be simplified to just referring to an immutable global. That is, in global initializers we need a lot of generality, but here we just need to avoid a null value, and the default value will in most cases not be used and so not matter. So in a module each GC type could have a singleton immutable global containing a "dummy" instance of that type which locals would refer to.

A more compact encoding of that could be to define types with default values. Such types would be defined after the global section, and just pair a type with an immutable global. This would avoid any extra bytes in functions. And such types with custom defaults may have other uses than this.


 avatar commented on May 30, 2024

If we take Kripken's idea, why do null references even need to exist in the binary format at all?
Compilers could all implement their own nulls, defaults, and is_null checks.

local.get $func_ref
ref.is_null

vs

local.get $func_ref
ref.func $null_func
ref.eq

Contrary to what @rossberg said at WebAssembly/reference-types#71 (comment), Wasm could be a null-free language.


tlively avatar tlively commented on May 30, 2024

I like (4) because it has the best ratio of simplification to change, but I would be happy to consider larger changes as well.

Taking @kripken's line of thought a step in the direction opposite to the one @00ff0000red took it, if every non-nullable type would have a dummy global default value that is never used in normal execution, then why waste code size and tool effort on defining those globals? Instead, we could just have the engine generate a meaningless default value for each non-nullable type. The only difference between these dummy values and null values is that dereferencing the dummy values does not trap, but rather does nothing or yields another dummy value.

The benefit of non-nullable values is that you know that there is always meaningful data in them, so I think any solution that involves global default values for non-nullable types somewhat diminishes their benefit to the point where we might as well not have them.


skuzmich avatar skuzmich commented on May 30, 2024

Do we still want let instruction, if we were to go with (4) ?

Instead, local.set could form a scope till the end of the current block, within which you can safely local.get the non-nullable variable.


tlively avatar tlively commented on May 30, 2024

I think engines would have to validate that every local.get from a non-nullable local is dominated by a local.set to the same local. It looks like there are efficient algorithms for this, but it would certainly add validation complexity.


kripken avatar kripken commented on May 30, 2024

Speaking of domination, another option might be

(6) Trap at runtime if a non-nullable local is used before being assigned to. I know that sounds bad, but in an optimizing tier SSA analysis will anyhow prove that uses are dominated by values (always the case unless the wasm emitter has a bug), so it has no overhead to either compile time or runtime. Of course, in a baseline compiler this will mean null checks on local.get. But it's simple and has no code size overhead.


skuzmich avatar skuzmich commented on May 30, 2024

@tlively we don't have to do a general dominance algorithm (at least initially). The validation algorithm could be equivalent to that of the let instruction. We would still need to check that struct.get is inside the let block, if we were to use let.


tlively avatar tlively commented on May 30, 2024

@skuzmich I don't think I understood your suggestion, then. Right now local.set instructions do not introduce new block scopes. Are you suggesting that we have a version of local.set that does introduce a new scope? If so, how would that be different from let?


skuzmich avatar skuzmich commented on May 30, 2024

@tlively Please allow me to clarify. In my suggestion, local.set would not form a proper Wasm block; it would not have a dedicated end instruction. Instead, it would merely create a variable scope from the local.set instruction till the end of the block it is in.

Benefits of this approach:

  • No let instruction in the spec. No need to support it in the decoder and execution. The new rules for local.set apply only to the validation phase and are as complex as the let instruction rules.
  • No code size overhead for blocktype and end.

As far as I remember, let was designed as it is to simplify function inlining. You would want a proper branch target for inlined returns, and a localdef section with relative indexing for inlined locals. But since (4) removes relative indexing from locals defined in let, this is no longer the case, and we might not need this extra instruction.
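A self-contained sketch of the rule being proposed (Python, with a toy instruction encoding of my own): a local.set extends accessibility only to the end of its enclosing block, so validation stays as simple as let's.

```python
def validate(instrs, accessible=frozenset()):
    """Each local.set makes its local readable from that point to the
    end of the enclosing block; like let, the effect does not escape
    the block, so no dominance analysis is needed."""
    acc = set(accessible)
    for op in instrs:
        if op[0] == "local.set":
            acc.add(op[1])               # in scope for the rest of this block
        elif op[0] == "local.get":
            assert op[1] in acc, f"local {op[1]} not in scope"
        elif op[0] == "block":
            validate(op[1], acc)         # inner sets don't carry out
```

So a set before a block is visible inside it, but a set inside a block is not visible after it, mirroring let's scoping without the blocktype/end bytes.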


tlively avatar tlively commented on May 30, 2024

Thanks @skuzmich, I understand your suggestion now 👍


RossTate avatar RossTate commented on May 30, 2024

In WebAssembly/design#1381, I did an analysis of initialization and of let. I found that let was not well-suited for initialization. One example I gave is where a local is initialized on separate paths that join together. Many of the suggestions above would also have problems with that case. At the time I had considered a variety of other fixes to locals/let, but they all made the type system more complicated and without solving more advanced cases like type refinement. The simplest and most expressive solution by far was to just use the stack and add these four simple stack instructions: WebAssembly/design#1381 (comment).

The parts of the discussion in WebAssembly/design#796 regarding register allocation and this article make me think that such an approach could be easier for engines, though others here are better equipped to speak to this. The fact that there's a straightforward way to translate from using locals to using stack instructions makes me suspect it could be easier to generate, but that translation does require tracking more information about the current stack state, so I defer that judgement to others as well.


RossTate avatar RossTate commented on May 30, 2024

Oh, I forgot, regarding dummy values, that approach might not always be possible. For example, in the presence of polymorphism, you might not be able to construct a dummy value of a type ahead of time because you might not know what that type is. That is, of course, unless all value types are defaultable, which is a viable option but one that brings us back to #40.


tlively avatar tlively commented on May 30, 2024

Unfortunately stack manipulation instructions won't help in Binaryen because of its expression-based IR, but if they would be helpful in other contexts, I don't think that should be a big blocker. We're already hacking around things like multivalue results, block parameters, and let in Binaryen IR, so we would just hack around stack manipulation instructions as well. That being said, it would be nice if we had a solution that fit nicely into Binaryen IR.


kripken avatar kripken commented on May 30, 2024

@tlively

The benefit of non-nullable values is that you know that there is always meaningful data in them

The underlying point of them is speed, though, isn't it? (that's my understanding from this discussion) We still get that speed with default values that are never used. They are just lightweight ways to prove a null can't happen there.

Default values also solve @RossTate 's points about inits on joining paths, so I think they're worth considering. (However, other options do too.)


tlively avatar tlively commented on May 30, 2024

@RossTate you're totally right, but another (former?) design goal for Binaryen IR is to have it mirror WebAssembly as closely as possible. Obviously, as we add more stacky things to WebAssembly this becomes impossible, though, and I don't think we should be going out of our way to be convenient for Binaryen as we design WebAssembly.

@kripken Yeah, that's true. With a dummy default value, there's always a value meaningful to the engine (i.e. dereferenceable), so the engine can elide any null checks. I noticed a problem with my previous suggestion, in which I wrote, "The only difference between these dummy values and null values is that dereferencing the dummy values does not trap, but rather does nothing or yields another dummy value." Specifying hypothetical engine-generated defaults to "do nothing" on e.g. struct.set would require the engine to check whether the struct it is modifying is the default struct, which defeats the purpose of eliding such checks. It would be gross to have an implicit engine-generated default value be modified by user code, so I guess I would rather have user-provided default values after all.

For purposes of compositionality and decompositionality of functions and modules, I prefer a let-like construct (that doesn't change any indices) over function- or module-level initializers. I would be fine with stack instructions too.


 avatar commented on May 30, 2024

There seems to be something that is being overlooked when it comes to user-provided defaults here, shown by comments like this,

Yeah, that's true. With a dummy default value, there's always a value meaningful to the engine (i.e. dereferenceable), so the engine can elide any null checks. I noticed a problem with my previous suggestion, in which I wrote, "The only difference between these dummy values and null values is that dereferencing the dummy values does not trap, but rather does nothing or yields another dummy value."

I think this type of thinking was encouraged by my early comment,

...Declaring locals with a function that exists, yet will still trap upon being called seems to preserve the same semantics of using a null function reference, while removing the branching ref.is_null check...

The thing about defaults is that they don't necessarily have to be "dummy" values; in fact, they could very well be genuinely useful values that are just filled in by default.

Or suppose the source language is more of a "dynamic" language.
In some dynamic languages, e.g. JS or Python, calling something that isn't a function doesn't require an explicit type check; it just throws a runtime exception that can be caught. For this example, say we are compiling a subset of such a language, one that is typed yet allows null function references.

(assuming the exceptions proposal) In this case, the compiler could define the default function as a function that doesn't trap, but instead throws a runtime exception, allowing the code to catch it at runtime.

(func $default
    ...
    throw  ;; throws some exception (tag elided)
)

(func
    ;; hypothetical syntax: a local declared with a user-provided default value
    (local $function ..typed func.. (ref.func $default))
    ...
    try
        local.get $function
        call_ref
    catch
        ...
    end
)

Consider an author working on a WebGL web game. If a Wasm trap occurs, they want to inform the client that an error has occurred, abort the process, and clean up some resources.

They could call their function from JS, and wrap it in a large try...catch block, expecting a WebAssembly.RuntimeError due to calling a null function reference.

Alternatively, they could pass in a host-imported (JS) function that acts as the "default" function reference value; then, if their code contains a logical error that leaves a function reference uninitialized, their JS function would end up being called (which may invoke the JS debugger, or use an error stack trace to help them debug their code).

And there are plenty of other use cases for user-provided defaults that haven't even crossed my mind; these are just a few off the top of my head.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

While there are many types with easy-to-generate dummy values, that is not the case for all types. If I've imported a type, I don't necessarily have a dummy value for it. If I'm compiling a polymorphic method that isn't given an rtt for its type parameter (i.e. a type-erased function), I can't create a value for it. But both of those examples involve type variables.

As an example not involving type variables, consider the following Kotlin class for the nodes of a doubly-linked list:

class Node() { var prev : Node; var elem : Int; var next : Node; ... }

This corresponds to the wasm type ref (fix node. (struct (mut (ref node)) (mut i32) (mut (ref node)))). It is non-trivial to create a dummy value for this type, and more generally to procedurally create dummy values for types from surface languages.
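The bootstrapping problem can be made concrete with a toy model (Python standing in for the surface language; the Node class and make_dummy_node helper here are made up for illustration): every constructor argument must itself be a Node, so there is no base case to start from, and the usual workaround is mutation after allocation, which has no analogue when every field must already be non-null at struct.new time.

```python
# Sketch of why a dummy value for a cyclic, non-null Node type is awkward:
# the constructor demands a Node before any Node exists.

class Node:
    def __init__(self, prev, elem, next):
        # Model wasm's non-null field types: null (None) is rejected.
        assert prev is not None and next is not None
        self.prev, self.elem, self.next = prev, elem, next

def make_dummy_node():
    """Escape hatch: allocate first, then mutate the fields to point at
    the object itself. A surface-language runtime can do this internally,
    but wasm's struct.new requires all (non-null) field values up front."""
    node = Node.__new__(Node)   # bypass the constructor entirely
    node.prev = node
    node.elem = 0
    node.next = node
    return node

dummy = make_dummy_node()
assert dummy.prev is dummy and dummy.next is dummy
```

The point of the sketch is that without such a bypass (or a nullable intermediate state), there is no well-typed way to produce the first value of the type.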

from function-references.

kripken avatar kripken commented on May 30, 2024

@00ff0000red

Yes, I agree. One reason I like the default values option is that defaults can be useful, separately from the issue of non-nullability. Another simple example is an i32 that defaults to 1 or another value, that can be nice for code size.

@RossTate

I agree it's not always easy to create such values, but that would be the responsibility of the language-specific compiler. I assume the Kotlin compiler knows how to create valid values for a doubly-linked list, so wouldn't it be practical for it to make a dummy value?

(In your example, btw, the references both forward and back are non-nullable - is this meant to be an infinite list or a cycle? Shouldn't they be nullable in a doubly-linked list?)

from function-references.

RossTate avatar RossTate commented on May 30, 2024

I should have clarified that this is for a circular doubly linked list, for which the standard is that prev and next are not null.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

Also, it's not that it's not always easy to create dummy values—for some languages it's not always possible to create dummy values. For example, a cast-free, null-free, and memory-safe implementation of OCaml cannot create dummy values for its local variables—a type variable might represent the empty type.

So going with the current plan that we will eventually support non-null references and parametric polymorphism, we will need to provide a solution to this problem that does not assume the existence of dummy values. (There is the option of changing that plan, but that requires a group discussion.)

from function-references.

kripken avatar kripken commented on May 30, 2024

@RossTate

I'm not 100% on board with default values, to be clear, but I'd like to understand your objection. How can OCaml have a type that it uses for local variables, but is incapable of creating a value for? As long as there is some way to create a value for that type, that value could be used for the default. And if there is no way to create any values ever, how/why would the type ever be used? Clearly I'm missing something...

from function-references.

RossTate avatar RossTate commented on May 30, 2024

Yeah, it's counterintuitive. Here's an example:

let reverse (elems : 'a array) : unit =
    let len = Array.length elems in
    for i = 0 to len/2 - 1 do
        let temp : 'a = elems.(i) in
        elems.(i) <- elems.(len-i-1);
        elems.(len-i-1) <- temp
    done

In this example, it's possible for 'a to denote a type with no values whatsoever (in which case the array is necessarily empty), yet it has a local variable of type 'a.

from function-references.

kripken avatar kripken commented on May 30, 2024

@RossTate

I see, thanks. So this would be a problem if we add parametric polymorphism, and if such a function is used with a type with no values.

I think this should still be solvable, the wasm producer would need to avoid using such a type in such a way (e.g. to pass a different type for it, or to use a different function without a local), but I agree the extra complexity for producers (and also maybe larger wasm sizes) would be a downside of the default values approach.

from function-references.

rossberg avatar rossberg commented on May 30, 2024

Thanks for initiating the discussion! But to be honest, what I'm missing is a concrete problem statement. The OP links to a couple of issues, but the only tangible one is Ben's concern about the effect on calling conventions in an interpreter, which is primarily concerned with the indexing scheme (I believe Lars has raised a similar question somewhere else). The other links are questions rather than actual problems, and I think they have already been answered on the issues.

The essence of Ben's concern could be addressed, I think, by separating the namespace of let indices from that of the existing locals, and, e.g., introducing a separate let.get instruction. Slightly less elegant, but not complicated. I'll need to write something up more carefully.

Beyond that, I'm not sure what concrete issues there are and how the suggestions are meant to address them. Note that most of these suggestions are fairly big hammers that would make matters significantly worse in terms of complexity.

(1) A "locals_initialized_barrier" instruction with semantics: before the barrier, locals may be uninitialized/null, and reading from them incurs the cost of a check; after the barrier, such checks are dropped as all locals are guaranteed to be initialized. Execution of the barrier checks that all non-defaultable locals have been initialized.

You'd need to define what exactly "after" means, which ultimately remains a flow-dependent notion. And it requires some form of affinity, capability, or typestate to propagate that information, which would be a significant extension to the type system. This sort of direction tends to be a rabbit hole of complexity.

Notably, introducing a separate barrier instruction does not actually simplify anything over treating local.set as such a "barrier" directly.

Let also increases locality and thereby conveys additional live-range information to a VM, which it can take advantage of without more expensive static analysis.

(2) A scope that indicates "within this scope, the given locals are non-null". Entering the scope performs null checks on the specified locals. Accessing these locals within the scope needs no null checks.

This would be somewhat simpler to type than (1) (because it is more structured), but it still requires amending, changing, and restoring typing environments with extra information in the algorithm, and it still loses all live-range information.

Operationally, the runtime checks still strike me as highly undesirable. I think we should strive for a solution that does not require checks. Allowing access to virtual registers to trap is not a very fitting property for an assembly language.

(As an aside, nullability is not the right notion for expressing knowledge about initialisation, since it doesn't apply to non-defaultable types that aren't references -- RTTs would be one example, we might have others in the future.)

(3) Introduce "local initializers" modeled after the existing global initializers (which are solving the same problem). We'd need to figure out how exactly to encode these in the text and binary formats. Execution of a function would begin with evaluating these local initializers; afterwards all locals are guaranteed to be initialized. Similar to globals, the rules would be something like: only constant instructions are allowed as local initializers; they can read other locals but only those with lower indices.

That assumes that one can always construct a dummy value for any given type. That's not generally the case, as RossTate points out correctly. The main purpose of the let-construct is to handle types where you can't construct dummies (or it would be expensive to do so) -- another interesting example would be values of imported types, acquired only through a call to an imported function.

(4) Require all locals to be pre-declared (at least by count, maybe also by type). Their initialization then still happens with let as currently proposed. That would prevent the size of the locals list/array/stack from changing dynamically, and would also keep each local's index constant throughout the entire function.

Similar to (2), this induces extra bookkeeping and complications to environment handling. And questions, such as: would it be legal to reuse the same pre-declared local in multiple lets?

The big conceptual disadvantage is that this destroys composability of let instructions. The ability to locally name a value without affecting the context is one motivation for let. I have repeatedly heard complaints from compiler writers about Wasm's current lack of that ability, e.g., to emit ops that need a scratch variable or achieve macro-definability of certain pseudo instructions.

(5) Drop let entirely, at least for now. We can always add it later if we have enough evidence of a concrete need for it. In the meantime, a workaround is to factor out the body of what would have been a let-block as a function, and call that. A JIT might still decide to inline that function. (This would limit Wasm module's ability to fine-tune for maximum performance; but based on binaryen's feedback it's unclear to what extent they'd do that anyway. This is not my preferred solution, just mentioning it here for completeness.)

You cannot easily outline let bodies that use (let alone mutate) other locals of the function. In practical terms, this would create a glaring hole in the language design that makes non-null references almost unusable in practice. It would be a strong incentive to never define functions to return a non-null reference, for example, which pretty much defeats the purpose.

Hope that sheds some more light on the design space.

Just for completeness, a much simpler solution along the lines of (1) and (2) would be to simply say that every local.get may trap if not yet initialised, leaving it to the cleverness of the VM to eliminate the checks where possible. Sort of like what JavaScript does with let. Still not a satisfactory property for an assembly language.

from function-references.

jakobkummerow avatar jakobkummerow commented on May 30, 2024

Thanks for your thorough reply, Andreas!

I'm aware that the design space is complex here, and it's not obvious what the best solution is. The let instruction as is does have certain nice properties (such as composability, and elegantly fitting in with the type system). The concrete problem statement is pretty much:

language design that makes non-null references almost unusable in practice

When the Binaryen folks started looking into using let in order to optimize away null checks for reference-type locals, their impression was that let was so cumbersome to use (for toolchain logic, and also binary size) that they went as far as seriously questioning why we wanted non-nullable types in Wasm at all!

On the engine side, the let instruction is the one that currently causes "extra bookkeeping and complications to environment handling". Having the number and indices of locals constant throughout a function is much more convenient; and this spreads pretty far (e.g. think of a debugger allowing inspection of locals). As I said, it's all solvable, but in terms of implementation complexity the current let sure seems like one of the heavier solutions.

Regarding (3)/"local initializers", that doesn't necessarily mean constructing dummy values. E.g. if you wanted to use an RTT in a local (maybe a "truly local" rtt.fresh-allocated RTT?), that could be created right there in the initializer, or it could be read from a global. Of course there would be some limitations; that's essentially where the (1)/"barrier" idea came from: one could then use the early part of the function for initializing locals, but without any restrictions on the allowed instructions. Rather than relying on the engine to infer when initialization has been completed for each local, the barrier would be a manual hint to indicate this.

I agree with the skepticism about dummies, especially autogenerated default values. My feeling is that such a concept would only shift the problem around: instead of checking for null (for a nullable reference type), code would have to check for the dummy instead. And there's no good way to apply that idea to non-defaultable types.
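The barrier idea from option (1) can be illustrated with a toy interpreter (a Python simulation of the discussed semantics, not engine code; the instruction names and encoding are made up): reads before the hypothetical barrier are runtime-checked, the barrier itself traps unless every local has been set, and reads after it need no check.

```python
# Toy simulation of option (1): "locals_initialized_barrier" semantics.

UNINIT = object()  # sentinel for "no value yet"

class Trap(Exception):
    pass

def run(instrs, num_locals):
    locals_ = [UNINIT] * num_locals
    passed_barrier = False
    stack = []
    for op, *args in instrs:
        if op == "const":
            stack.append(args[0])
        elif op == "local.set":
            locals_[args[0]] = stack.pop()
        elif op == "local.get":
            v = locals_[args[0]]
            if not passed_barrier and v is UNINIT:
                raise Trap(f"read of uninitialized local {args[0]}")
            stack.append(v)  # after the barrier: no check needed
        elif op == "barrier":
            if any(v is UNINIT for v in locals_):
                raise Trap("barrier: not all locals initialized")
            passed_barrier = True
    return stack

# Initialization before the barrier succeeds; reaching the barrier with
# an uninitialized local traps.
assert run([("const", 1), ("local.set", 0),
            ("barrier",), ("local.get", 0)], 1) == [1]
```

The elided-check payoff is visible in the local.get case: once passed_barrier is true, the UNINIT test is skipped entirely.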

from function-references.

tlively avatar tlively commented on May 30, 2024

About option (4):

Similar to (2), this induces extra bookkeeping and complications to environment handling. And questions, such as: would it be legal to reuse the same pre-declared local in multiple lets?

The big conceptual disadvantage is that this destroys composability of let instructions. The ability to locally name a value without affecting the context is one motivation for let. I have repeatedly heard complaints from compiler writers about Wasm's current lack of that ability, e.g., to emit ops that need a scratch variable or achieve macro-definability of certain pseudo instructions.

I can see that this change would add extra bookkeeping to the spec formalism because it uses a stack of local contexts, but it would remove extra bookkeeping from tools like Binaryen. In Binaryen, all local.get and local.set instructions identify the local they access by a simple index into a vector of locals owned by the function. The function is readily available when visiting an expression, but generally it is much more complicated to carry around any additional local context. The problem with let as currently specified is that it requires carrying extra local context to determine the identity of the local that is being accessed, which is a frequent operation. Essentially the complexity of carrying any local context at all outweighs the benefits of having composable local contexts for us.

I also understand that needing scratch locals would be annoying in many contexts, but for Binaryen specifically, we always just add scratch locals as necessary and optimize them out later.

from function-references.

taralx avatar taralx commented on May 30, 2024

Is it crazy to suggest a combination of (4) and a locals.add instruction? The function would declare a number of "dynamic locals" that are assigned numbers past the end of the static locals. The instruction stores a value from the stack in the next dynamic local (or a specified one? but that complicates validation) such that the set of accessible locals is always contiguous from 0. At control flow joins the minimum is taken, or maybe it's a validation error to have a mismatch. Might need locals.pop or similar.

(This starts to feel like the R stack in Forth.)
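The join rule could be sketched in a toy validator (Python, with made-up instruction names and the unreachability of code after br ignored for brevity): the count of accessible locals only grows within a block, and at a join the minimum of the incoming counts survives.

```python
# Sketch of validating a hypothetical "locals.add": the set of accessible
# locals is always contiguous from 0, so only a count needs tracking.

def validate(instrs, num_static_locals):
    count = num_static_locals   # locals currently accessible
    blocks = []                 # per open block: min count over branches to its end
    for op, *args in instrs:
        if op == "locals.add":
            count += 1
        elif op == "local.get":
            if args[0] >= count:
                raise ValueError(f"local {args[0]} not accessible here")
        elif op == "block":
            blocks.append(None)
        elif op == "br":
            target = len(blocks) - 1 - args[0]
            m = blocks[target]
            blocks[target] = count if m is None else min(m, count)
        elif op == "end":
            m = blocks.pop()
            if m is not None:
                count = min(count, m)  # join: take the minimum
    return count
```

For example, a dynamic local added before a br and another added after it leave only the first accessible past the end of the block.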

from function-references.

tlively avatar tlively commented on May 30, 2024

Heh, that's a neat solution to the join problem, but I don't think it would make the situation with carrying extra local context around any better.

from function-references.

taralx avatar taralx commented on May 30, 2024

Well, it depends. If you check during validation that local accesses don't violate the limit, then you don't have to carry it around afterwards. Interpreters do need to carry around an extra number, for sure, but that seems difficult to avoid.

from function-references.

rossberg avatar rossberg commented on May 30, 2024

@kripken:

That sounds like option (6)?

Ah, yes, indeed!

I think the key thing we want out of non-nullable types is performance, in at least three ways:

  1. Speed in the optimizing tier.
  2. Speed in the baseline tier.
  3. Download/parsing speed.

The downsides of let include binary size (a block etc. for each let) and toolchain complexity as @jakobkummerow mentioned. An additional form of toolchain complexity is @RossTate 's point about control flow merges: it is not trivial to create a phi with a non-nullable value (as locals cannot be used for it, which otherwise is the typical solution in wasm). All of these harm us on 3 (and possibly 2).

Just to be sure, the purpose of non-nullable types is to improve runtime performance. (3) is not a goal, and I think it can't be: providing extra static information increases code size, there is no way around it. Best we can do is to minimise the "static" overhead, including in validation.

But note that the current design is conservative in that regard: you can always fall back to using nullable types and avoid all possible overhead or complications of non-nullable ones. All reference instructions can equally handle both versions. So the design enables producers to pick a trade-off without forcing anything on them. And this choice is per local, so a best-effort strategy is totally plausible.

On the other hand option (6) will only harm us on 2, and it seems like the only option that will completely avoid any harm on 3 (as it adds literally no bytes over non-nullable types). I think it is actually a quite nice solution for an assembly language with wasm's constraints.

I'm afraid it's a bit more serious than that. Using uninitialised locals would become defined behaviour and would have to trap reliably. A program exercising this behaviour would become totally legal! That means that for any local for which the VM cannot prove that it is initialised on all paths (which amounts to the halting problem in the general case), it will have to maintain extra information and perform extra runtime checks (which, in the general case, might require allocating extra space). And compilers or interpreters that do not perform flow analysis (either implicitly via SSA construction or explicitly by other means) will even have to do that for all non-defaultable locals. I'm pretty sure there would be a lot of resistance to that.

And of course, this entirely defeats the purpose of locals with non-nullable types. Their purpose is to avoid null checks. But with this approach, we are merely replacing those with init checks! So why not use nullable types in these situations?

@tlively:

I can see that this change would add extra bookkeeping to the spec formalism because it uses a stack of local contexts, but it would remove extra bookkeeping from tools like Binaryen.

The extra bookkeeping would impact every validator and hence every consumer, not just the spec! That choice would seem to run contrary to Wasm's general goal of optimising for fast and simple consumers, not producers.

from function-references.

skuzmich avatar skuzmich commented on May 30, 2024

I like (6). It is more powerful than let and is the easiest for a compiler to produce.

@rossberg:

That means that for any local for which the VM cannot prove that it is initialised on all paths (which amounts to the halting problem in the general case), it will have to maintain extra information and perform extra runtime checks (which, in the general case, might require allocating extra space).

You can't express complex initialization using let, so you'll have to resort to using nullable types and runtime checks. So (6) is no worse in this case, but it has the potential to be better, depending on VM optimization effort.

And compilers or interpreters that do not perform flow analysis (either implicitly via SSA construction or explicitly by other means) will even have to do that for all non-defaultable locals.

For simple let-like initialization, runtime-check elimination for (6) will be as complex as original let support. In both cases you would need to keep context of initialized variables in current scope.

And of course, this entirely defeats the purpose of locals with non-nullable types. Their purpose is to avoid null checks. But with this approach, we are merely replacing those with init checks! So why not use nullable types in these situations?

Surely we can express (6) using nullable types by adding ref.as_non_null before every local.set and after every local.get. Non-nullable locals can be seen as a shortcut.
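This desugaring can be written as a mechanical rewrite over an instruction list (a toy Python transform mirroring the sentence above; the tuple encoding is made up for illustration):

```python
# Lower (6)-style non-defaultable locals to nullable locals by guarding
# accesses with ref.as_non_null, following skuzmich's recipe.

def lower_to_nullable(instrs, non_null_locals):
    out = []
    for ins in instrs:
        op, *args = ins
        if op == "local.set" and args[0] in non_null_locals:
            out.append(("ref.as_non_null",))  # value being stored is non-null
            out.append(ins)
        elif op == "local.get" and args[0] in non_null_locals:
            out.append(ins)
            out.append(("ref.as_non_null",))  # traps if still null/uninitialized
        else:
            out.append(ins)
    return out
```

Under this view, a (6)-style init check on local.get is exactly the null check that ref.as_non_null would perform on the nullable version.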

from function-references.

rossberg avatar rossberg commented on May 30, 2024

@skuzmich:

You can't express complex initialization using let, so you'll have resort to using nullable types and having runtime checks.

Yes, but that's fine, and no worse than using a runtime-checked let would be.

So (6) is not worse in this case, but has a potential to be better depending on VM optimization effort.

I believe the optimisation potential is largely the same. On the other hand, in a VM not doing extra analysis and optimisation, (6) proliferates the extra cost to all non-null locals, not just the few complex initialisation cases. It generally loses knowledge that first has to be reconstructed in a VM -- which sometimes is impossible or too costly at that point, even in an optimising VM.

For simple let-like initialization, runtime-check elimination for (6) will be as complex as original let support. In both cases you would need to keep context of initialized variables in current scope.

No, that's not true. let as proposed allows to directly convey static knowledge about initialisation from producer to consumers. It requires no further analysis on the consumer end, every variable is known to be initialised by construction.

from function-references.

skuzmich avatar skuzmich commented on May 30, 2024

let as proposed allows to directly convey static knowledge about initialisation from producer to consumers. It requires no further analysis on the consumer end, every variable is known to be initialised by construction.

I agree. But under (6), wouldn't local.set and local.tee statically imply that the variable is initialised from that point until the end of the current block? I still fail to see how this conveys less static information than let.

from function-references.

rossberg avatar rossberg commented on May 30, 2024

With let as proposed, every local.get is safe by construction. With uninitialised locals, it is arbitrarily hard in the general case (in fact, undecidable) to figure out whether a local.get is always preceded by a local.set, since this can involve arbitrary control flow. There are various ways to do approximate analysis, but that is something new that an engine would need to implement. And the better that analysis, the more complicated and expensive it gets.
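For wasm's structured control flow, the kind of approximate analysis in question can be sketched in a few dozen lines (a toy Python checker over a simplified instruction list, not engine code, following the single-pass rules discussed for option (7) above): initialized-sets are intersected at every join, code after a br is treated as having all locals initialized, and loops need no special handling because their only entry is at the top.

```python
# Single-pass definite-initialization check over structured control flow.

def check(instrs, num_locals):
    init = set()                 # definitely-initialized locals
    ALL = frozenset(range(num_locals))
    frames = []                  # per construct: [kind, entry_set, join_set]
    for op, *args in instrs:
        if op in ("local.set", "local.tee"):
            init.add(args[0])
        elif op == "local.get":
            if args[0] not in init:
                raise ValueError(f"local {args[0]} may be uninitialized")
        elif op in ("block", "loop", "if"):
            frames.append([op, frozenset(init), None])
        elif op == "br":
            target = frames[-1 - args[0]]
            if target[0] != "loop":         # forward branch flows to a join
                old = target[2]
                target[2] = frozenset(init) if old is None else old & init
            init = set(ALL)                 # code after br is unreachable
        elif op == "else":
            frames[-1].append(frozenset(init))  # stash end of then-arm
            init = set(frames[-1][1])           # else starts from if-entry
        elif op == "end":
            frame = frames.pop()
            result = frozenset(init)
            if len(frame) == 4:             # if with else: meet the two arms
                result &= frame[3]
            elif frame[0] == "if":          # if without else: may be skipped
                result &= frame[1]
            if frame[2] is not None:        # meet all branches to this end
                result &= frame[2]
            init = set(result)
    return init
```

For example, a local set in both arms of an if/else is definitely initialized afterwards, while one set only in the then-arm is not.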

from function-references.

RossTate avatar RossTate commented on May 30, 2024

Before we continue discussing alternatives, can we establish consensus on a preliminary point? Reading the posts, my impression is that let is unsatisfactory from both the generator side and the engine side, with good reasons laid out above for each. I can't tell if everyone agrees with that. Does anyone object to that assessment?

from function-references.

tlively avatar tlively commented on May 30, 2024

@rossberg,

I can see that this change would add extra bookkeeping to the spec formalism because it uses a stack of local contexts, but it would remove extra bookkeeping from tools like Binaryen.

The extra bookkeeping would impact every validator and hence every consumer, not just the spec! That choice would seem to run contrary to Wasm's general goal of optimising for fast and simple consumers, not producers.

This exchange was about option 4: pre-declaring all locals to avoid indices changing based on local context. I can see that validators built in a specific way would be inconvenienced by this change, but I'm not convinced that this would be bad for all consumers. Binaryen is a particular consumer that would be simplified by this change, although I acknowledge that it is probably not representative. @jakobkummerow, how would you say option 4 would affect the complexity of V8's validation? I guess more generally, what did you find complex about the current let spec when you were thinking about implementing it?

Edit: I guess @jakobkummerow already answered that question above:

Having the number and indices of locals constant throughout a function is much more convenient

from function-references.

rossberg avatar rossberg commented on May 30, 2024

Edit: I guess @jakobkummerow already answered that question above:

Having the number and indices of locals constant throughout a function is much more convenient

Until you realise that you are paying for this with an extra indirection through a second layer of data structure that varies in more complicated ways. ;)

from function-references.

jakobkummerow avatar jakobkummerow commented on May 30, 2024

I haven't only thought about implementing let: I've actually tried to implement it in our non-optimizing compiler. And I've given up for now, hoping instead that either let disappears, or no toolchains emit it so it doesn't matter whether we implement it properly or not. The primary difficulty is in teaching the (machine) stack handling mechanism how to add and drop locals (shifting all other machine stack values) at the right points. I still think we could figure it out with enough time investment, but given the general sentiment here I'm not sure it's worth it.

I'm still entertaining the idea to prototype a few of the alternatives, such as (4) or (6).

from function-references.

rossberg avatar rossberg commented on May 30, 2024

@jakobkummerow, this goes back to the indexing scheme, not the construct as such, IIUC?

Can you elaborate on the difficulties? It sounds as if your problem is with indexing stack operands, not locals? If so, how does predeclaring locals help? Or is the problem with the relative indexing of locals? If so, how is it different from indexing labels?

Context: For baseline compilers/interpreters, there is a rather straightforward way of implementing let as is, simply by reinterpreting the locals vector as a stack, indexed from the top. That shouldn't affect operand handling. The only extra task is that you need to maintain the current depth of that locals stack, and use that for pushing/popping let locals, and for indexing locals relatively. Other than that, the max size of this stack is static and can easily be computed in the prepass, so a fixed-sized array is still sufficient at runtime (and might be embedded in the stack frame).

However, that approach may interfere with other implementation choices (Ben gave an example in earlier discussions). I'm trying to understand what exactly the problem is in your case. So far, I think the discussion has been a bit too vague about whether there are problems with let in general, or just with the specifics of the chosen semantics.

from function-references.

jakobkummerow avatar jakobkummerow commented on May 30, 2024

Our non-optimizing compiler maintains a single conceptual stack of values: at the bottom of the stack are the locals (indexed from the bottom), on top are the values on the Wasm value stack (which, of course, are pushed and popped all the time, by almost every Wasm instruction). This entire conceptual stack is mapped to the machine stack in a semi-dynamic way: to the extent possible, some entries are kept in registers, and they are spilled as needed. This concept works pretty well for pushing and popping value-stack entries, and generates pretty decent code despite being a single-pass compiler. Inserting/dropping locals means having to shift everything else around (both logically and physically), which is both somewhat expensive and (empirically) hard to get right. Dropping needs to happen not only at the end of let-blocks, but also at arbitrary br* (and return) instructions, where additionally the different paths (which could have made different spilling decisions) have to be merged (what an optimizing compiler would call "phi resolution").

That's why I've previously summed it up as "the difficulty is the changing number of locals over the course of a function".

(I can imagine that if you had a naive interpreter where the locals are stored as some sort of vector/array/stack on the heap, then supporting let would be fairly easy. But that's not our situation.)
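The asymmetry described here can be shown with a toy model (a Python sketch of the layout, not V8 code; the class and method names are made up): pushing and popping operands touches only the top of the conceptual stack, while a let-style insertion of a new local shifts every operand slot above it, invalidating whatever location each one had.

```python
# Toy model of a baseline compiler's contiguous conceptual stack:
# locals at the bottom, operands on top.

class ConceptualStack:
    def __init__(self, num_locals):
        self.num_locals = num_locals
        self.slots = [None] * num_locals   # locals first, then operands

    def push(self, v):                      # cheap: append at the top
        self.slots.append(v)

    def pop(self):                          # cheap: remove from the top
        return self.slots.pop()

    def let_insert(self, value):
        # expensive: every operand above the locals moves up one slot,
        # invalidating any register/stack assignment it had
        self.slots.insert(self.num_locals, value)
        self.num_locals += 1
        return self.num_locals - 1          # index of the new local

s = ConceptualStack(2)
s.push("a")
s.push("b")
idx = s.let_insert("new-local")
assert s.slots == [None, None, "new-local", "a", "b"]
```

In the model the shift is one list insertion; in a real compiler it means re-homing spilled values and patching the bookkeeping for every live operand.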

from function-references.

lars-t-hansen avatar lars-t-hansen commented on May 30, 2024

FWIW, SpiderMonkey's baseline compiler has the same problem: the mapping from local index to location is tricky and needs a side data structure, and in practice there will be a fair amount of value shuffling at runtime.

from function-references.

rossberg avatar rossberg commented on May 30, 2024

Thanks for the background. So IIUC, the motivation for having a contiguous conceptual stack is that register usage and spilling can be handled by a single mechanism for both locals and operands? Is that so that locals can be put in registers, too? I can see how that requires shuffling around.

I can imagine that if you had a naive interpreter where the locals are stored as some sort of vector/array/stack on the heap, then supporting let would be fairly easy. But that's not our situation.

Ah, but no heap allocation is required for that. You know the max size of this array statically, so you can put it into the stack frame of your function, e.g., right below the operand stack. You certainly lose the property that its (used) slots are always contiguous with the operand stack. But the same would be true if all let-locals were predeclared and fixed, wouldn't it? It seems unavoidable in general.

Ultimately, any mechanism that solves the intended use case will have these problems to some extent. It requires the ability to store and address values in some place with limited scope (as with operands, but unlike func-level locals), but without a LIFO, use-once restriction (as with locals, but unlike operands). That makes some amount of shuffling of either indices or values unavoidable. Of course, that doesn't mean that there isn't room for improvement over the current proposal.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

@jakobkummerow @lars-t-hansen Does that mean that a local that's not used after initialization (i.e. after the first block of the function) still takes up space (on the stack or in a register) throughout the rest of the function?

from function-references.

jakobkummerow avatar jakobkummerow commented on May 30, 2024

@rossberg : pre-allocating stack space would require an analysis pass to find all nested let-blocks and sum up their locals. We could do that, sure, but we're talking about a single-pass compiler here.

Is that so that locals can be put in registers, too?

Yes, from the point of view of code generation there isn't really a difference between locals and operands on the value stack.

@RossTate : in a non-optimizing compiler, yes. Computing the precise live range and reusing the space is an optimization that non-optimizing compilers skip for compile time reasons. A nice side effect is that it improves debuggability (which is why V8 switches to non-optimized code when you start debugging with DevTools).

from function-references.

RossTate avatar RossTate commented on May 30, 2024

So if each local has an explicit cost associated with it, shouldn't we be designing away from locals? I understand that they were necessary when WebAssembly was expression-based, but now that it's stack-based it seems like we should be designing towards the stack, where liveness is explicit. Adding these few instructions seems to address the use cases presented here and enable programs to avoid using locals and their implicit costs to begin with. let, on the other hand, does not serve use cases well, is hard to generate, and seems primarily to make programs less efficient by increasing register/stack pressure.

from function-references.

rossberg avatar rossberg commented on May 30, 2024

@jakobkummerow, given that all local indexing is relative, isn't it good enough to know the max at the end of compiling the function? It only affects the size of the stack frame. Hm, though I suppose you may need to clear it inline in the function prefix...

Edit: I believe Ben already suggested predeclaring the max depth at the function's start. Would that solve your problem?

from function-references.

RossTate avatar RossTate commented on May 30, 2024

I think max depth would not address the use cases. For the use cases we are discussing, functions would need to have multiple (non-nested) lets binding locals of different types. Max-depth seems to assume there is no variation across non-nested lets.

from function-references.

tlively avatar tlively commented on May 30, 2024

I may be beating a dead horse here, but (4) makes @jakobkummerow's codegen problem go away because after validation there would be no difference in structure between let-initialized non-null locals and any other locals. It wouldn't make let trivial to use in Binaryen, but it would still be a nice simplification over the current let. Beyond that, it has all the nice (no analysis or dynamic checks required) and not-so-nice (doesn't handle phis well) properties of the current let. And unlike the solution of using more general stack manipulation instructions, it sticks with the typical blocks-and-locals design of current WebAssembly, which makes it less risky overall. (We don't really know how feasible or easy it would be to adapt current tools to make good use of stack manipulation instructions.)

from function-references.

RossTate avatar RossTate commented on May 30, 2024

I hear the issue regarding stack instructions and current tooling. Unfortunately, it's hard to plan here without a good sense of what the long-term plan is. For example, (4) won't work with existentials, as it assumes all locals' types can be described up front. If we give up on that, then (4) is a possible option (although let would have to be changed to say which locals it is initializing). If not, then (4) would be introducing complexity that will only gunk things up down the road. Similar planning issues arise with other options.

from function-references.

tlively avatar tlively commented on May 30, 2024

If we ever need to have locals with types that cannot be forward declared, we will need a new mechanism to support that. I agree that some proposed solutions in this discussion would satisfy that use case and that other proposed solutions would not. However, I don't think there's any way we will be able to determine at this moment whether or not we will ever need locals with such types. Having a long-term plan that would answer such a question would be at odds with our iterative design process. I suggest that we ensure this discussion can be resolved sooner rather than later by focusing only on the immediate needs of tools and engines to support non-nullable types. Yes, that means that we might need to introduce another mechanism in the future, but that is the cost of iterative design.

from function-references.

kripken avatar kripken commented on May 30, 2024

To not clutter the discussion here I posted an issue about option (6), WebAssembly/gc#187

from function-references.

conrad-watt avatar conrad-watt commented on May 30, 2024

AFAIU, echoing @rossberg, the engine concerns would be satisfied by requiring any function containing a let to pre-declare the max local depth reached at any point in its body (no pre-declared local types necessary, just a single i32 in the function declaration). This seems to be the least invasive change.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

Can someone clarify what (4) is? There seem to be two interpretations of it being used. One description seems to make let easier for engines but no easier for generators, and the other seems to make let easier for generators but no easier for engines.

from function-references.

tlively avatar tlively commented on May 30, 2024

My interpretation of (4) is that non-nullable locals are declared exactly like any other locals but that local.gets of them are only valid inside let blocks that initialize them to some value. I'm not sure what another interpretation would be.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

Thanks, @tlively! In your interpretation, uninitialized locals are declared up front with their types. In that case, are lets specifying the indices of the locals they initialize, or are they specifying the number of locals they initialize?

from function-references.

tlively avatar tlively commented on May 30, 2024

Yes, lets would have to specify the indices of the locals they initialize in my interpretation. If it were restricted to initializing just one local, this version of let would essentially be a local.set with a block scope attached to it. (Whether we actually make that restriction is an orthogonal question that can be trivially answered by looking at its effect on code size in real-world programs once prototypes are complete.)

from function-references.

RossTate avatar RossTate commented on May 30, 2024

Ah, I had misread your earlier comment as suggesting that stack size was becoming a problem. It's useful to learn it's not a big concern. On a related note, something I've been wondering is how much we care about the performance of single-pass compilers at all. For example, the lack of any indication that a local is no longer live can confuse the greedy register-allocation algorithms these compilers use into pinning the no-longer-needed $object local to a register at the start of the loop (since many methods of collections will simply enter a loop right after casting this). I don't really have a sense of how much that matters in the grand scheme of things (e.g. how long is single-pass generated code even used before being optimized?), so it'd be helpful to get y'all's perspective on that in order to get a sense of how to balance tradeoffs.

from function-references.

lars-t-hansen avatar lars-t-hansen commented on May 30, 2024

Not a problem as such, just occasionally the larger frames bite us since we don't compute lifetimes in baseline and don't common locals; when the emitter produces code (often with optimization off, for debugging) with a lot of locals and deep recursion it can be an issue. I've only seen the one bug report from @kripken, though.

Yeah, how much do we care about the performance of baseline-compiled code? The optimizing compiler is easily an order of magnitude slower than the baseline compiler, and I would think sometimes slower than that too, plus it should run in the background on fewer threads, so baseline code can in principle run for a while. (I wish I had good numbers to give you but I don't, at the moment.) Mostly baseline code will run startup code, bring up a splash screen, perform asset creation, etc, and perceived perf of this matters. (And there can be plenty of startup code.) And in our design we don't OSR to optimized code; the switch happens on callouts from baseline code. In practice that's OK for now, since the web encourages frequent returns to the runloop. Finally, we aim to cache the optimized code for reuse, so baseline code will normally only run on first load.

I would say that we care enough that I have been a little nervous about all the stack shuffling that could be caused by let if let becomes the dominant means of introducing locals. But without a concrete implementation it's been hard to quantify how bad that is going to be. And of course if let becomes dominant it may be desirable to rewrite the compiler to support it better, but that's not a fun thing to do.

Personally I feel option (4) suits wasm better than the current design, as I think of locals as separate from values on the value stack and (4) exposes the management of the locals better. It would be worth fleshing out the semantics in detail, perhaps.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

Cool. Thanks for the insights!

You mentioned a detail about returning to the runloop that I should follow up on in case it's relevant. If we get stack-switching support for things like async/await, I expect "returning to the runloop" will look different than it does now. A common model I expect to see is for applications to (during startup) allocate their own stack. The application will have its exports simply switch control to this internal stack, run its internal code on that stack, and then switch back to the main stack when it needs to await something or simply yield a bit to give the runloop a chance to process more events. So the need for applications to frequently return to the runloop won't necessarily translate to clearing out stack frames from the single-pass compiler like it does now.

from function-references.

lars-t-hansen avatar lars-t-hansen commented on May 30, 2024

If the stack switching design becomes a reality, OSR might become more important than it is now ;-)

For context, the reported bug is https://bugzilla.mozilla.org/show_bug.cgi?id=1409124. The discussion there suggests a workaround, that we wait for the optimizing compiler if a very large frame is encountered in baseline during compilation. The other option is as I wrote above to recognize this problem (it's cheap to do so) and then pay for the cost of a pass that shrinks the frame, if the frame becomes large.

In practice I expect that getting stuck in slow code due to no OSR is going to be a bigger problem than large frames.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

Just to add to @askeksa-google's suggestion, there's a practical extension that would support irreducible CFGs as well. Since an irreducible CFG would list its blocks in some order, we can call an edge a "back" edge if it goes from a block later in the CFG to one earlier in the CFG. If we require that, for a given block, any variable that's initialized via all forward edges must also be initialized via all back edges, then we can still validate (and compile) in a single pass. Although this is technically more restrictive than necessary, I suspect it would be practical. (I'm just mentioning this to illustrate that (7) is likely forwards-compatible with irreducible CFGs, should wasm ever want to add that feature.)
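A sketch of how that restriction could be checked (the CFG representation and all names here are hypothetical, purely to illustrate the rule): for each block, the set of locals guaranteed initialized along all back edges must include the set guaranteed along all forward edges, so a single forward pass remains sound.

```python
def check_back_edges(num_blocks, edges):
    """Illustrative check of the back-edge restriction described above.

    `edges` is a list of (src, dst, init_set) triples, where init_set is a
    frozenset of the locals definitely initialized when that edge is taken.
    An edge is a "back" edge when dst <= src (blocks are listed in order).
    """
    for b in range(num_blocks):
        fwd = [s for (src, dst, s) in edges if dst == b and src < dst]
        back = [s for (src, dst, s) in edges if dst == b and src >= dst]
        if not fwd or not back:
            continue  # nothing to compare for this block
        fwd_init = frozenset.intersection(*fwd)
        back_init = frozenset.intersection(*back)
        # Everything initialized via all forward edges must also be
        # initialized via all back edges.
        if not fwd_init <= back_init:
            return False
    return True
```

Under this check, a loop whose back edge re-enters with fewer initialized locals than the forward entry is rejected, even if a full fixpoint analysis might have accepted it; that is the deliberately conservative part.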

from function-references.

rossberg avatar rossberg commented on May 30, 2024

Before we throw the baby out with the bath water, perhaps let us talk through other possible solutions to eliminate potential complexity with let.

Two more lightweight suggestions came up before:

  • Separate the index space of let-locals from func-locals. There would be an instruction let.get for the former. => Let blocks do no longer affect allocation and indexing of parameters and func locals.
  • Require to predeclare the max depth of the let stack (i.e., the maximum number of let locals live at the same time). => An interpreter or baseline compiler can pre-allocate the let stack in the function frame without needing a pre-pass to compute its size.

The former for example would seem to address some of titzer's problems with calling conventions. The latter would seem to address @jakobkummerow's problems with computing the stack frame layout in a single-pass baseline compiler.

So, this would address some of the difficulties for interpreters and baseline compilers, I believe, though perhaps not all. But a few concerns were rather vague, so I'm trying to understand where it would help or not.

I'm particularly interested in understanding the nature of the problems for which a less localised solution like (4) would help but these won't. And I'm trying to understand how folks imagine (4) working in the first place.

My attempt of an interpretation would perhaps be the following:

  • The construct is let <blocktype> <localidx>* instr* end. Typing is similar to the let in the current proposal, except that the local types are derived from the local indices. Execution pops the values from the stack and assigns them to the respective locals.
  • However, the validation environment additionally keeps track of which locals are "alive" where. Locals can be declared as uninitialised. Those start out dead. Ordinary locals are always alive.
  • A let-block can only bind dead variables. They are alive within that let-block, and become immediately dead again afterwards.
  • Local instructions can only be used on live locals.

This is still structured and fully explicit and doesn't require tracking and computing arbitrary live sets along all control flow edges. So it would be a comparably moderate extension to the validation algorithm. It would, however, no longer be composable in the way the current let is, which still is a significant downside for producers and certain tools.
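As a rough illustration of this "alive locals" bookkeeping (the instruction encoding and all names below are invented; this is a sketch, not a spec): a let may only bind dead locals, which are alive inside its block and dead again after its end.

```python
def validate_lets(instrs, always_alive):
    """Illustrative liveness bookkeeping for the let variant sketched above.

    `instrs` is a simplified pre-parsed form: ("let", [local indices]),
    ("end_let",), and ("local.get"/"local.set"/"local.tee", index).
    `always_alive` is the set of ordinary (initialised) locals.
    """
    alive = set(always_alive)  # ordinary locals are always alive
    scopes = []                # locals bound by each open let
    for op in instrs:
        kind = op[0]
        if kind == "let":
            bound = op[1]
            if any(i in alive for i in bound):
                raise Exception("let may only bind dead locals")
            alive.update(bound)
            scopes.append(bound)
        elif kind == "end_let":
            # Let-bound locals become dead again at the end of the block.
            alive.difference_update(scopes.pop())
        elif kind in ("local.get", "local.set", "local.tee"):
            if op[1] not in alive:
                raise Exception("local %d is dead here" % op[1])
    return True
```

Note that the extra validation state is just one set plus a stack of bound-index lists, with no merging at control-flow joins, which is what keeps this variant "comparably moderate".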

from function-references.

kripken avatar kripken commented on May 30, 2024

Another broad question that I'm curious about, what level of urgency do people feel about this topic at this time?

From my perspective, it seems like we can defer a final decision on let and its alternatives until we get more data (speed, size, implementation feedback), since

  1. We have indications that not having let will not affect optimizing VM performance, and maybe not even baseline if @askeksa-google 's validation proposal is adopted.
  2. Pretty much all the proposals under discussion can be added at a later time, should data justify it (even ones that change validation, like that validation proposal, only make things that previously did not validate start to validate).

from function-references.

rossberg avatar rossberg commented on May 30, 2024

@askeksa-google:

  1. Declaratively indicate that some locals are only used inside a scope, which can be useful in stack inspection, debugging, and perhaps to save stack space in simple interpreters.

In conjunction with GC, this can actually have implications on heap usage, too.

It is my impression (based on experiences with dart2wasm) that let is not a good solution to the first issue, due to its scope-based shape. In many cases, using it would require a specialized liveness analysis to figure out which variables can be scoped like this and where their blocks should start and end (which can be in inconvenient places relative to the structure of the rest of the code).

In source languages, variables are scoped in the same way, so in principle, that should always be translatable. IIUC, the reason this can become problematic is that many compiler middle ends lose the original structure, e.g., due to SSA, so that you have to recover some of the structure. Is that correct?

@kripken:

what level of urgency do people feel about this topic at this time?

From my perspective, non-null references would be a pretty broken feature in the context of Wasm if there was no way to store them in a local. For better or worse, locals are Wasm's canonical mechanism for, e.g., using a value more than once, and that should be possible for values of any type.

from function-references.

rossberg avatar rossberg commented on May 30, 2024

The ability to have non-null locals isn't worth much if (nearly?) nobody uses them (see first post in this thread).

Isn't that just speculation at this point? Or do we have concrete evidence?

option (7) provides a very nice way to get them.

I respectfully disagree with that. As I said before, flow-dependent typing tends to be a rabbit hole the further you evolve a language. And even in its simplest form it already doubles the structural complexity of the current -- intentionally dead simple -- validation algorithm (having to compute set structures etc). And its algorithmic space complexity goes from linear to quadratic worst-case, I believe (number of locals * number of nested blocks).

Yeah, but it would have to be a pretty extreme (contrived?) case where it actually matters whether an on-heap object held alive only by a local can be freed before the current function terminates.

Mostly yes, though bear in mind that for Wasm off the web, functions are not bounded by event turns and can be arbitrarily long-running.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

As I said before, flow-dependent typing tends to be a rabbit hole the further you evolve a language.

@rossberg Flow-dependent typing is quite tractable. It is the predominant (maybe universal?) typing strategy used by the more realistic typed assembly languages. A literature review would have revealed that a plan for supporting flow-dependent typing would be necessary (as I had tried to foreshadow last year in WebAssembly/multi-value#42), and case studies (like even the simple ones repeated multiple times above) would have revealed that let would be insufficient (as well as the variants you suggested).

For better or worse, locals are Wasm's canonical mechanism for, e.g., using a value more than once, and that should be possible for values of any type.

If this remains the case, then we will likely eventually need to introduce explicit flow-sensitive typing for locals (and come up with a plan to deal with existentials). @askeksa-google's suggestion is essentially doing this implicitly, taking advantage of the fact that the lattice for initialization is just a boolean and so joins are trivial to compute. But eventually we'll get to cases where joins are not so trivial, at which point we'll need to explicitly annotate labels with the flow-sensitive type of locals.

from function-references.

rossberg avatar rossberg commented on May 30, 2024

@askeksa-google:

For example, in Dart, you can declare a non-nullable local variable without an initializer, and then write to it later.

Fair enough, but for Dart this is primarily a type-checking feature, not a performance feature. You can always compile it to a nullable type on the Wasm level. Does this case matter enough that we have to go to lengths to allow optimising the last bit out of it?

The other examples you gave I think are not problematic with let, including the || one (which is just a shorthand for if).

@RossTate:

Flow-dependent typing is quite tractable.

It has quadratic complexity, AFAICT. I'm surprised this doesn't bother you this time.

@conrad-watt:

I'd prefer that we add more stack operations to take advantage of the fact that the value stack is already flow-sensitive, rather than making locals flow-sensitive and requiring explicit merge annotations for locals which duplicate the annotations already needed for the stack.

I agree. Of course I'd argue that let is essentially trying to do that.

The previous discussion in WebAssembly/gc#187 converged to the conservative idea of disallowing non-nullable locals for now. This would leave open our ability to spend as long as we want bikeshedding future let/stack/flow-sensitive local features.

In that case, I'd rather consider removing non-nullable types altogether for the time being. They just seem like a broken feature in such a setting.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

@RossTate speaking: Flow-dependent typing is quite tractable.

@rossberg speaking: It has quadratic complexity, AFAICT. I'm surprised this doesn't bother you this time.

Maybe we mean different things by this term. With flow-INsensitive typing, the types of things like variables do not change (ignoring shadowing). With flow-Sensitive typing, the type of a variable can change, typically to reflect information that was learned by branching on some condition, but also to reflect recent assignments. Thus the difference is primarily whether or not the type context is mutable. Flow-sensitivity has no effect on the complexity of type-checking. As @conrad-watt pointed out above, the value stack is already flow-sensitive.

Now maybe you're thinking of features like intersection types or operations like joins that are often associated with flow-sensitive typing. Those can be problematic, but they can also be just fine—it depends on the specifics at hand. In the case of initialization, the join is trivial to compute, and there is no need for intersection types. @askeksa-google already pointed out in #44 (comment) that structured loops are surprisingly easy to handle, and I gave what I think would be a reasonable restriction in #44 (comment) should wasm ever be extended with potentially irreducible CFGs. So, for the specifics at hand, flow-sensitive definedness of locals seems fine (though in the longer term I think relying on the flow-sensitivity already present in the value stack would be better for more advanced features).

from function-references.

titzer avatar titzer commented on May 30, 2024

Flow-sensitivity for the definedness of locals has a significant impact on practical implementations of type checkers. Specifically, it means that the typechecker's view of a local's type must be duplicated at control flow splits and then merged at control merges. This is not the case with today's flow-sensitivity of stack slots, because they are structured and information learned upon branch does not escape to outer blocks.

I would strongly prefer to avoid any design that requires a typechecker to duplicate state for locals at branches; it makes a huge difference in performance.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

Again, specifics matter. The additional state is a bitset that would typically fit within a word, making representation small and duplication cheap. Any instruction that introduces a label already has to remember the type of that label, and so you'd just add an additional bitset (that's initially all TRUE). Merging is a bitwise AND. The only instructions that duplicate the current bitset state are loop (where it's used to initialize the bitset associated with the label) and if. Besides, the experimental evaluations that were done on TALx86 suggest that duplicating state might be cheaper than alternatives, even for state that is more than just a word or two.
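As a concrete illustration of the bitset scheme described here, a minimal single-pass checker might look like the following (the function name, the simplified instruction tuples, and the bitmask encoding are all invented for illustration; real engines work on the binary format and would fold this into their existing validation loop):

```python
def validate_init(instrs, num_locals, non_defaultable):
    """Single-pass definite-initialization check with one bitset.

    `instrs` is a simplified pre-parsed form: ("local.set", i),
    ("local.get", i), ("block",), ("loop",), ("if",), ("else",),
    ("br", depth), ("br_if", depth), ("end",).
    `non_defaultable` is a bitmask of locals that must be set before use.
    """
    ALL = (1 << num_locals) - 1
    init = 0   # bit i set => local i definitely initialized here
    ctrl = []  # control stack: (kind, bits merged at label, bits before entry)
    for op in instrs:
        kind = op[0]
        if kind in ("local.set", "local.tee"):
            init |= 1 << op[1]
        elif kind == "local.get":
            i = op[1]
            if (non_defaultable >> i) & 1 and not (init >> i) & 1:
                raise Exception("local %d possibly uninitialized" % i)
        elif kind in ("block", "loop", "if"):
            ctrl.append((kind, ALL, init))
        elif kind == "else":
            k, merged, saved = ctrl.pop()
            ctrl.append(("else", merged & init, saved))
            init = saved  # else arm starts from the state before the if
        elif kind in ("br", "br_if"):
            depth = op[1]
            k, merged, saved = ctrl[-1 - depth]
            ctrl[-1 - depth] = (k, merged & init, saved)
            if kind == "br":
                init = ALL  # unreachable: everything trivially initialized
        elif kind == "end":
            k, merged, saved = ctrl.pop()
            if k == "if":
                init = saved   # no else: fall back to state before the if
            elif k == "loop":
                pass           # branches target the loop *start*; end state
                               # is just the fall-through state
            else:
                init &= merged  # intersect with all branches to this label
    return True
```

The only duplicated state is the word-sized `init` snapshot pushed per control construct, and merging is a single bitwise AND, matching the cost argument above.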

I am not trying to advocate for flow-sensitive initialization of locals; I am just trying to prevent inaccuracies about flow-sensitive typing from spreading. Even if I prefer moving from locals to the stack, I want people who would prefer to have flow-sensitive initialization of locals to have an accurate understanding of the tradeoffs.

from function-references.

askeksa-google avatar askeksa-google commented on May 30, 2024

Update: with local.tee discarding its input type (as discussed in WebAssembly/gc#201) the numbers are more like 11% ref.as_non_null and 9% reduction in the number of instructions with non-nullable locals.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

I am upset by @rossberg's continued dismissiveness of people's concerns, suggestions, experiences, and expertise in his comment above. We should not have to wait for him to implement a compiler for a more realistic language (such as one of many with flow-sensitive initialization, which his offered solution still poorly addresses for reasons already explained above) for him to take those concerns seriously, and we should not have to deconstruct yet another strawman argument just for him to even consider the viability of the various flow-sensitive suggestions people have made above.

from function-references.

conrad-watt avatar conrad-watt commented on May 30, 2024

The reason I would prefer an approach based on locals rather than stack instructions is much less principled; Binaryen would have to translate a stack-machine design to a local-based alternative at parse time, then convert back at writing time.

I wouldn't mind this argument tipping the scales, but now I'm interested in how Binaryen handles the Wasm stack even today. For example, would roundtripping the code fragment (call $a) (call $a) (call $a) (call $b) (call $b) through the Binaryen IR, where $a :: [] -> [i32] and $b :: [i32, i32] -> [i32], result in additional local.get/set instructions being inserted?

(I promise not to go any further off-topic)

from function-references.

RossTate avatar RossTate commented on May 30, 2024

Thanks for the well-balanced comment, @conrad-watt! (I'm also interested in the answer to the question you asked.)

@tlively You expressed this pragmatic constraint before, and so I spent more time trying to think of good ways to make a local-based approach accommodate flow-sensitive needs well. In short, label types (and label-type annotations) could be extended to indicate which locals are initialized and what additional "existential" constraints those locals satisfy (e.g. local 1's "class" equals local 2's "class"). Non-nullability could be cleanly tied into that (i.e. local 1 is non-null) in a way that works well with other "existential" constraints (for reasons I won't go into here to maintain focus) and without requiring one to use a different local depending on whether the surface-level variable's value is currently possibly null or definitely non-null. This would help address the excessive stack-allocation problem, though not as effectively as using the value stack (which can further encode liveness). Nonetheless, it seems like a viable option.

from function-references.

tlively avatar tlively commented on May 30, 2024

@conrad-watt, yes, Binaryen will insert a local to hold the return value of the first call $a as part of its parsing. In order to avoid emitting that extra local into the final (ostensibly optimized) binary, there is a second-class stack-machine-based IR with a couple of simple optimization passes that is used directly before writing the binary. I'm working on making that part of Binaryen more powerful and more first-class, but I don't expect it will ever replace the main expression-tree-based Binaryen IR.

@RossTate, just to make sure I understand where you're coming from, you're trying to think through how a Binaryen-friendly local-based approach to the current problem could be extended to other future use cases for which flow-sensitive typing would be useful, such as existential types? I'm glad it seems viable for those use cases. I wouldn't be opposed to having different mechanisms for different future use cases, but of course it's nice when one mechanism can be reused instead.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

@tlively Confirming that your understanding of my comment sounds correct 😃 I should add that there's also a theoretical reason to make such an approach work out (in addition to Binaryen's pragmatic reason). In short, it enables a stratification in the type system that in turn enables a more expressive sort of typed function references that are particularly useful for supporting polymorphic languages, but that first relies on having a nominal stratification, so I'll leave details for another time.

from function-references.

askeksa-google avatar askeksa-google commented on May 30, 2024

AFAICS, validation is already worst-case quadratic in the body size.

You can have a block with a blocktype with O(n) outputs containing O(n) branches to its label.

from function-references.

titzer avatar titzer commented on May 30, 2024

@askeksa-google

In order to write those O(n) branches to the label, each of them must have O(n) elements on the stack for the branch. The only way to do that without writing O(n) instructions in each of them would be to call a function that returns O(n) items. So technically yes, but practically no.

from function-references.

titzer avatar titzer commented on May 30, 2024

@RossTate The limit on the number of local variables for wasm functions is 50000. Hundreds of locals is not uncommon for big functions.

You need to copy state at all control flow splits, i.e. ifs and brs too. For example, you can't just fall through from the true arm of an if to the false arm of an if with the state you had before. Also, because ifs can be nested, you of course need to keep the copy associated with the "other" branch associated with the control stack entry. This is what optimizing compiler tiers have to do for SSA construction. I really don't think we want to kick validation into the same complexity class as the optimizing tier; Java bytecode already did that.

from function-references.

RossTate avatar RossTate commented on May 30, 2024

@titzer br only needs to duplicate (or really combine) state if you're doing flow-sensitive inference rather than just flow-sensitive checking. The same goes for if if you have annotated input types (as is currently the case), though the only research paper I know of on the topic found that duplication is substantially faster than annotation for this case. This latter point relates to inference versus checking. The alternative to flow-sensitive inference of local initialization is explicit annotation of local initialization encoded in some way or another (such as let above). So either you're copying/merging bitsets, or you're parsing/comparing bitsets. That is, once it's possible to have non-defaultable types there is going to be a cost, whether that cost is parsing and checking dynamically introduced locals (the current let), pushing locals to the value stack and consequently having larger label types, parsing and checking the initialization state of locals (the revised let just above), or inferring the initialization state of locals. Reflexively eliminating one option on the basis that it has costs ignores the fact that every solution has costs.

Consider the Kotlin example I gave in WebAssembly/design#1381:

val s: String
if (...) {
    ... // do stuff
    ... // initialize s
    ... // do more stuff using s
} else {
    ... // do stuff
    ... // initialize s
    ... // do more stuff using s
}
... // do even more stuff using s

Using the "initialize inside let" approach, this translates to

(local $s (ref $String))
...
(if (result (ref $String))
  ... ;; do stuff
  ... ;; initialize s
  (let (result (ref $String)) ($s)
    ... ;; do more stuff using s
    (local.get $s)
  )
else
  ... ;; do stuff
  ... ;; initialize s
  (let (result (ref $String)) ($s)
    ... ;; do more stuff using s
    (local.get $s)
  )
)
(let (result (ref $String)) ($s)
  ... ;; do even more stuff using s
)

Using local-initialization inference, this translates to

(local $s (ref $String))
...
(if
  ... ;; do stuff
  ... ;; initialize s
  ... ;; do more stuff using s
else
  ... ;; do stuff
  ... ;; initialize s
  ... ;; do more stuff using s
)
... ;; do even more stuff using s

Notice that the former parses (ref $String) 4 additional times, 5 additional opcodes (3 lets and 2 local.gets), 3 additional bitsets (the ($s) for each let), 2 additional locals (the $s for each local.get), and it performs 7 additional subtyping checks. The latter performs 1 additional bitset copy, 1 additional bitset initialization, and 2 additional bitset intersections (and I wouldn't be surprised if these are being done anyways as part of streaming nullness/default-value analysis).

So is the former more efficient and simpler than the latter? Maybe, but I definitely don't think it's so obvious we can preemptively eliminate the latter option. To the contrary, if I had to make a choice without doing any detailed experimental analysis, I would go with the latter. Of course, there are other options that we should consider as well.


New observation: if label-type annotations included a bitset for locals, in addition to encoding initialization information, that bitset could encode liveness information. That is, a local's bit could be set to false, even if it's initialized (and even if it's defaultable), to indicate its value is no longer needed, thereby relieving register pressure for even streaming compilers. There are also good reasons not to do this; I just wanted to contribute the thought.
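To make the liveness idea concrete, here is a purely speculative decoding sketch; the encoding (an integer bitset over local indices) is invented for illustration and is not part of any proposal:

```python
# Invented encoding for illustration: a label annotation as an integer
# bitset over locals, where bit i set means "local i must be kept live
# at this label". A cleared bit can mean either "uninitialized" or
# "initialized but dead", letting a compiler free the register either way.

def live_locals(annotation: int) -> set[int]:
    """Decode the bitset into the set of locals that must stay live."""
    return {i for i in range(annotation.bit_length())
            if (annotation >> i) & 1}

label_bits = 0b011    # locals 0 and 1 live; local 2 dead even if initialized
```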

from function-references.

titzer commented on May 30, 2024

I think any algorithm that requires duplicating bitsets (or really, any data structure) for the entirety of locals at every control flow split is DOA. Annotations don't fix that, because then the annotations just get big. We don't need to design for the common case of a handful of locals; we need to design for scalability: huge functions that have thousands of locals. We took care to keep wasm's validation algorithm linear because of prior experience with other bytecode formats, like Java's, where flow-sensitive algorithms just explode. And yes, those huge functions show up in the wild, and they turn into JIT compiler bombs that will ruthlessly find all nonlinearities.

askeksa-google commented on May 30, 2024

> In order to write those O(n) branches to the label, each of them must have O(n) elements on the stack for the branch. The only way to do that without writing O(n) instructions in each of them would be to call a function that returns O(n) items. So technically yes, but practically no.

Since conditional branches leave the stack unchanged on fall-through (except for the condition / checked value), the same stack elements can flow into an arbitrary number of branches. So you don't need any calls to achieve the O(n) values into O(n) branches situation.

askeksa-google commented on May 30, 2024

Simple example:

(block (result i32 (; n times ;))
  (i32.const 0) (; 2n times ;)
  (br_if 0) (; n times ;)
)

Requires validation of n distinct lists of n types each.
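A toy cost count for this example (my own illustration, not engine code): each br_if pops its condition and checks the top n stack entries against the label's n result types without popping them, so n such branches perform n·n checks in total:

```python
def brif_validation_checks(n):
    """Count type checks when validating a block with n i32 results,
    2n i32.const pushes, and n br_if instructions. Each br_if pops its
    condition, then checks the top n stack entries against the label's
    result types, leaving them on the stack for the next branch."""
    label_types = ["i32"] * n
    stack = ["i32"] * (2 * n)          # the 2n constants
    checks = 0
    for _ in range(n):                 # the n br_if instructions
        stack.pop()                    # the branch condition
        assert stack[-n:] == label_types or n == 0
        checks += n                    # one check per label result type
    return checks
```

For n = 8 this performs 64 checks, i.e. the quadratic blowup described above.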

titzer commented on May 30, 2024

@askeksa-google Ah yes, you are right. I had forgotten about br_if not popping its arguments.
