
Rationale for the memory store instructions #1357

Closed
verbessern opened this issue Jul 22, 2020 · 42 comments

@verbessern

I'm having difficulty understanding the rationale behind the operand order on the operand stack for the memory store instructions. It's common practice, when performing an assignment, to first calculate the source value, then the target, and then store the source value at the target's memory location. However, the store instructions expect the operands on the stack in the reverse order: the source value is on top, and the offset is the second value. That implies one has to either calculate the target memory offset first and the source value second, or, after evaluating them in the natural order (source -> target), swap the top two values before the store instruction. There is no workaround in the general case, because there are no instructions that store directly into local variables without going through the operand stack. So extra work is needed for every single memory store that is the final operation of assigning one value to another. Is there a need for memory store instructions that expect the operands in [value][offset as top] order? Or does the penalty have to be paid on every such naturally ordered evaluation?
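A minimal sketch of the pattern in question (the constants stand in for arbitrary source and target computations, and assume the value is computed first, in the natural order):

```wat
(module
  (memory 1)
  (func (export "assign")
    (local $v i32)
    i32.const 42     ;; source value, computed first
    local.set $v     ;; spilled to a scratch local, because the address
                     ;; must end up below the value on the stack
    i32.const 8      ;; target address, computed second
    local.get $v     ;; re-fetch the value on top
    i32.store))      ;; pops the value (top), then the address
```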

@binji
Member

binji commented Jul 23, 2020

For a simple wasm producer, I'd expect that they would generate it as you've described, first generating the target offset, then the value. For more complex producers, they may not even materialize assignments as memory locations, so the memory store order is less important, I think.

As for whether WebAssembly would have an alternate instruction -- I'd say it is unlikely, unless we could show a large code-size improvement by doing so.

@verbessern
Author

verbessern commented Jul 23, 2020

Ah... one issue here, one there, and hop... loss of speed in the end. That reminds me of the missing 'dup' instruction.

I described that the source value is calculated first, as in every language known to me, and then the target. I have to disagree that assignments do not use memory. I think that nearly every assignment that stores into memory will suffer from this overlooked operand order, which in turn will slow down the whole world, by some degree, for years to come. I also have to disagree that code size is the issue here; this is a performance issue.

@binji
Member

binji commented Jul 23, 2020

Good point, in most languages the order is guaranteed. But if there's enough information to generate the address without side-effects you can put it on the stack first. If not, you need to store it in a local: e.g. https://godbolt.org/z/K79bqo.

But I'd be surprised if this is a performance issue; as with dup, the cost would only be paid by an interpreter or baseline wasm compiler. I wrote a quick test for this:

(import "" "addr" (func $addr (result i32)))
(import "" "value" (func $value (result i32)))
(memory 1)

(func (export "a")
  (local i32)
  call $addr
  local.set 0
  call $value
  local.get 0
  i32.store)

(func (export "b")
  (local i32)
  call $value
  call $addr
  i32.store)

The code v8 generates is nearly identical, aside from the functions being called in the opposite order.

@aardappel

@binji the value and addr names appear inverted in the code above, but the point about the generated code should be equally valid.

Once we have multi-value, a swap instruction might be nice, since using locals puts burden on a code generator to allocate them.

I agree with @verbessern though that the order we have is a bit odd. Is there any circumstance where the current ordering is superior? Why was it chosen? One example I can imagine is where you want to reuse most of an address computation repeatedly and fill with simple scalars, e.g. for struct initialization:

(complex address calculation)
dup
i32.const 1
i32.store offset=0
i32.const 2
i32.store offset=4

@binji
Member

binji commented Jul 23, 2020

@aardappel oops, you're right. I always get the order mixed up. :-)

Why was it chosen?

Not sure, but it appears to have been this order back to the initial ML implementation at least (Aug 2015): https://github.com/rossberg/wasm/blob/master/src/eval.ml#L165

@rossberg
Member

it appears to have been this order back to the initial ML implementation at least (Aug 2015)

And I just followed @titzer's V8 prototype at the time. So maybe he knows the rationale.

@verbessern
Author

verbessern commented Jul 24, 2020

And that is maybe the biggest issue I know about WebAssembly: the assumption that an implementation of something, somewhere, is law. I have expressed in many places (and collected my downvotes) that this creates more problems than it solves. WebAssembly is (to my knowledge) a unique bytecode, and each of its features must be thought through logically, regardless of what was implemented in some way, somewhere. Another example is that there is no instruction like i32.not_equal_z, which forces the generators to emit i32.const 0 i32.ne. It's not a big deal, because some runtimes will optimize it into a jnz, but here we are again: the whole world is not that one implementation. That in turn puts pressure on all runtimes to behave the same way, or just be slower.
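For instance (a sketch; i32.ne is the actual mnemonic, the i32.nez mentioned in the comment below is hypothetical and does not exist, while i32.eqz does):

```wat
(func $is_nonzero (param $x i32) (result i32)
  local.get $x
  i32.const 0
  i32.ne)        ;; two instructions; a hypothetical i32.nez would be one

(func $is_zero (param $x i32) (result i32)
  local.get $x
  i32.eqz)       ;; the equal-to-zero test does have a single instruction
```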

@verbessern
Author

As for the dup example from @binji, that is exactly what I claim: not that it could not be optimized, but that there should be no need to optimize it. The runtime weight of the local.get/local.set instructions will never be zero, so why pay the price when a single instruction, which it is logical for a stack machine to have, could actually exist? That just raises the bar for every runtime, or again... just be slower.

@verbessern
Author

verbessern commented Jul 24, 2020

The example from @aardappel is cool (with the note that there is no dup instruction at the moment), and it would require using a local variable, which in turn would make you do this (if the operand order were natural):

(complex address calculation)
local.set 0
i32.const 1
local.get 0
i32.store offset=0
i32.const 2
local.get 0
i32.store offset=4

If the dup instruction were available, the stack might overflow with many dups, whereas the code above will not (and the instruction count, over many fields, is the same plus 2, but you win on stack depth).

@taralx

taralx commented Jul 24, 2020

I wouldn't be surprised to find that this dates from when things were s-expressions, since the natural order of args there is (store addr val), which translates into (push addr) (push val) (store) in stack-world.

@rossberg
Member

@verbessern:

And that is maybe the biggest issue I know about web assembly. The assumption that an implementation of something somewhere is a law.

There is no such assumption, quite the contrary. The initial design was discussed and iterated on by the group over the course of 1.5 years. Lots of changes and additions were made; it's simply that nobody during that time suggested something else for this particular aspect.

@taralx:

I wouldn't be surprised to find that this dates from when things were s-expressions, since the natural order of args there is (store addr val), which translates into (push addr) (push val) (store) in stack-world.

Ah yes, indeed, that probably is the actual explanation. Wasm started out as an expression-based language (even before we had a text format), somewhat inspired by the structure of asm.js. Only later did we reinterpret it as a stack machine, with very minimal changes, so this simply carried over as is.

@verbessern
Author

@rossberg I will chalk that up as "a feature that existed somewhere propagated, not because changing it would have required refactoring existing projects, but because nobody noticed, despite their best intentions"

@binji
Member

binji commented Jul 24, 2020

Another example is that there is no instruction as i32.not_equal_z, which forces the generators to emit i32.const 0 i32.not_equal

No, that instruction doesn't exist. But we could add it via the proposal process. Similarly, we could add new memory instructions with the opposite operand order too. My comment above was to suggest that it would be a stronger case if we could show that this would have tangible performance or code-size improvements.

@verbessern
Author

verbessern commented Jul 25, 2020

While you are at it, br_if_not is also missing and forces extra work, in the same way as i32.not_equal_z, which is the same case as i32.not. The test for not-equal-to-zero is probably the most common test ever computed. Some ten years ago I read an article about the Intel assembly instructions and their opcodes, arguing that if the opcodes were "properly" assigned, everything would run faster and all executables would shrink. In that sense, yours truly has opened issue #1308.

P.S. Personally, I like rich APIs, but it's questionable whether more instructions should be added, or whether optimizing these common cases should be assumed to be "normal" and "expected".

@carlsmith

If writing WAT by hand is a concern these days, there is a case for adding some instructions purely to improve the language superficially. WAT essentially has isEqual, isMore, isLess and isZero complemented by notEqual, notMore and notLess, without an instruction that means notZero. A br_if_not (or br_unless) instruction would also make sense, regardless of performance, code size etc, just because it makes WAT more pleasant.

@titzer

titzer commented Sep 6, 2020

The order wasn't chosen arbitrarily. The address comes first, then the store value, because that most closely matches the natural left-to-right evaluation order of many languages, e.g. *ptr = val [1]. It is very common to write the destination of a move first in assembly language, e.g. mov [%rax], %rbx. It's also very common in compiler IRs to have the address first, then the value. Going the other way would have gone against all three of these cases, and thus been confusing.

[1] C's evaluation order rules are complex, involving sequence points, but almost all implementations follow left-to-right evaluation order unless they have a really good reason not to. And, of course, most other languages in existence have strong left-to-right order rules.

I think swap is a nice shorthand instruction that we should consider adding to Wasm for situations like this.

@verbessern
Author

verbessern commented Sep 17, 2020

@titzer I thought (please correct me if I'm wrong) that C++ and many other languages (your argument 1) evaluate the source first, then the target. This means that any such language, when compiling to wasm, will have to take extra care to switch the operands. The assembly-language example (argument 2) is hardly a valid one, because that is pure syntax; it does not prescribe any actual evaluation order, as the source/target order in wasm in fact does. For argument 3 I have no information at hand, but maybe you can share the survey behind the "very common" claim?

int k, j;
void message(const char* s);  // prints its argument
int& f1() { message("a"); return k; }
int& f2() { message("b"); return j; }

int main() { f1() = f2(); }

This C++ (with message printing its argument) strangely emits 'ba' on my side. Check this out: https://en.cppreference.com/w/cpp/language/operator_precedence, number 16 in the table, the assignment operator.

@tlively
Member

tlively commented Sep 18, 2020

@verbessern This has only been defined for C++ since C++17. But I disagree that this requires any additional work for most compilers. Compilers already have to deal with arbitrary cases of two values being produced and used in FIFO order with side effects that prevent the producing computations from being moved past each other and there is nothing special about stores in this regard.

At any rate, it's great to ask for context around old design decisions, but it's really not productive to second-guess or argue against the design of things that have already shipped. We couldn't change these things even if we wanted to, because we need to maintain backward compatibility. A more helpful way to address the potential problem you've identified would be to propose a new addition to the WebAssembly spec along with concrete data showing its benefits.

@verbessern
Author

@tlively Are you claiming that C++17 does right-to-left assignment evaluation and the older versions did left-to-right? In other words, are you saying that C++17 is not backward compatible with the older C++ versions regarding assignment evaluation order?

I cannot believe what I'm reading: "not productive to second-guess". When you have no arguments, you just tell everyone to shut up and go away. "Even if we wanted to" seems to mean that there is no intention whatsoever to introduce features that actually make sense, and these issues here are just for finding bugs for you, correct?

When one tool is able to analyze and change the order of the operands, and the standard is made that way, it raises the bar for any other future tool, because if the future tools do not have this analysis, then they will simply be slower (surprise).

@tlively
Member

tlively commented Sep 18, 2020

@tlively Are you claiming that C++17 does right-to-left assignment evaluation and the older versions did left-to-right? In other words, are you saying that C++17 is not backward compatible with the older C++ versions regarding assignment evaluation order?

No, in previous versions of C++ the order of evaluation for the simple assignment operator was unspecified. Compilers could evaluate the subexpressions in whatever order they wanted.

I cannot believe what I'm reading: "not productive to second-guess". When you have no arguments, you just tell everyone to shut up and go away. "Even if we wanted to" seems to mean that there is no intention whatsoever to introduce features that actually make sense, and these issues here are just for finding bugs for you, correct?

As multiple folks have said on this thread, we frequently make changes and improvements to WebAssembly. You can see the current proposals in flight here. We have implemented and shipped proposals such as the nontrapping float-to-int conversions and sign extension operations that added missing instructions, just like you are suggesting. But we cannot go back and change instructions that have already shipped because that would break existing users.

When one tool is able to analyze and change the order of the operands, and the standard is made that way, it raises the bar for any other future tool, because if the future tools do not have this analysis, then they will simply be slower (surprise).

This is something all WebAssembly compilers will have to deal with no matter what. Consider this code:

a = foo();
b = bar();
use(a);
use(b);

Since the function calls may all have side effects, none of them can be reordered with respect to the other ones. The only way to compile this to WebAssembly is to store a somewhere besides the value stack, probably in a local.
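A sketch of that lowering (with hypothetical import names standing in for foo, bar, and use):

```wat
(module
  (import "" "foo" (func $foo (result i32)))
  (import "" "bar" (func $bar (result i32)))
  (import "" "use" (func $use (param i32)))
  (func (export "run")
    (local $a i32)
    call $foo
    local.set $a   ;; a would be buried under b when use(a) needs it
    call $bar      ;; b can stay on the stack
    local.get $a
    call $use      ;; use(a)
    call $use))    ;; use(b)
```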

@verbessern
Author

This example is again irrelevant, because we are talking about whether the source operand of the assignment is calculated first or the target, which matters for the order in which the store instruction expects the operands on the stack. When there is more than one variable, that is a completely different topic. I can't see where it's written "to change the shipped instructions"; it's quite clear that the issue is about complementing the existing instructions with the proper ones.

@tlively
Member

tlively commented Sep 18, 2020

A good next step would be to do an experiment to measure the code size wins possible from using new store instructions with their operands reversed. With that concrete data, we could have a discussion about the proposal at one of the biweekly CG meetings and move it to phase 1.

@verbessern
Author

There is (as discussed in the previous posts) more at stake than code size: the ability of the generators to actually generate code in the same way the high-level languages operate, without extra variables (please read up) and code reordering, and probably the minor property of readability.

@conrad-watt
Contributor

@verbessern it's great that you're passionate about making Wasm as good as it can be, but we should keep in perspective that this discussion concerns a technical issue with a tiny corner of the spec's design. Please give @tlively credit for taking the time to engage with you, even if you disagree with him!

There is (as discussed in the previous posts) more at stake than code size: the ability of the generators to actually generate code in the same way the high-level languages operate, without extra variables (please read up) and code reordering, and probably the minor property of readability.

I'd find code size concerns a much more convincing argument than either of these.

  • Any generator which passes through a dataflow graph/SSA representation (as LLVM does) on its way to Wasm isn't going to care about the details of how the argument order is wired up, because that will be handled in a generic way during the lowering to a stack-based representation.

  • With regards to readability, as I think @titzer was alluding to, with the current approach the text format can look like
    (t.store (addr_calc) (value_calc))
    which visually mirrors
    addr_calc := value_calc
    in a high-level language.


It's clearly arguable that the current design has disadvantages. The question is whether the best thing for the language is to

  1. do nothing and accept the minor deficiency
  2. introduce swap
  3. introduce t.store_ with swapped arguments

My leaning is against (3), because we would have to do the same for every future store variant (e.g. if we spec first-class memories, or a heap GC object). I would be happy with either (1) or (2). In a perfect world, maybe store would have been designed differently, but we can't break webcompat now.

@carlsmith

I've never understood the exclusion of stack operators like dup and swap from Wasm, when it's a virtual ISA. The cost of adding them is very low (I assume), and many people feel these instructions better articulate some algorithms. Why not just make them happy, however subjective the benefits?

@conrad-watt
Contributor

I've never understood the exclusion of stack operators like dup and swap from Wasm, when it's a virtual ISA. The cost of adding them is very low (I assume), and many people feel these instructions better articulate some algorithms. Why not just make them happy, however subjective the benefits?

Since these operations can be mimicked using local variables, I think there was a feeling that the 1.0 spec should be as minimal as possible. Local variables are mostly "costless" (in terms of execution time, not code size) because they just serve as another location + set of pure assignments for the purposes of register allocation/SSA, so the compiled code is arguably no less efficient.

This position becomes somewhat less reasonable as more types are added (since each type needs a different "scratch" local), and especially if a type is added that has no default representation. Still, the stack manipulation of swap and dup would be subsumed by let, which is currently being spec'd. So the main argument for adding one of these instructions (given let is on the way) would be based on code size savings.

This issue might be a good example that motivates swap. As a general design principle though, we're trying to keep the ISA small and minimise overlaps. Since WebAssembly must (more or less) guarantee perpetual backwards compatibility, there's a permanent cost to introducing a new instruction. There's probably a little more wiggle-room when considering basic stack operations like swap and dup, but we'd still want some practical motivation (especially considering let, as above).

@tlively
Member

tlively commented Sep 19, 2020

The cost of adding them is very low (I assume)

Some of the details of how unreachable code is type checked actually make them difficult to add because they introduce new constraints that the type checker has to solve. @conrad-watt has a good explanation of this here, which incidentally is on a very similar issue about adding missing stack machine instructions.

many people feel these instructions better articulate some algorithms. Why not just make them happy, however subjective the benefits?

Because this requires real effort across many teams of people. A quick back-of-the-envelope calculation puts the cost of such a change at multiple tens of thousands of dollars of engineer time at a bare minimum. We're not going to spend that effort without compelling data showing its benefit.

@conrad-watt
Contributor

@tlively is also completely right that I didn't even get into the current issues with dup (although I'm hopeful we can solve those soon!).

@verbessern
Author

verbessern commented Sep 19, 2020

Exactly, @tlively: the issue lies in the interest and the investment, not in the actual WebAssembly specification. Not every company that wants to compile to and run WebAssembly is a billion-dollar one. Every piece of expected complex analysis pushes other players out of the field.

Alright, so be it; the world will just have to invest in this to be able to compile to and run WebAssembly code as well as the big players do.

@conrad-watt
Contributor

conrad-watt commented Sep 19, 2020

@verbessern this issue (which is about compilation to, not execution of, WebAssembly) is not a good example of your point. Converting SSA to a stack representation isn't complex analysis; it's more or less an unavoidable step in implementing a decent compiler targeting Wasm. Adding additional instructions would not reduce the implementation complexity in this case. This is why a code-size argument would be the best way to motivate a need for change.

Also, @tlively's point about the cost of implementing even trivial instructions seems to imply the opposite of what you're saying. Again, swap is a really simple instruction. It's probably true that its implementation in Web engines would cost a non-trivial number of engineering hours across major tech companies, but this fact only advantages more nimble, independent implementations.

I do think that Wasm will eventually reach a level of complexity such that a hobbyist implementer can't expect to handle all of the language's features with Web engine-level performance. However, this doesn't mean that writing a compiler to Wasm will get any harder, because one can choose not to target complex features as appropriate.

EDIT: there have been a few edits to the comment above; I was responding to its original iteration

@verbessern
Author

@conrad-watt It seems I'm overwhelmed with tasks and have started to write irrelevant stuff; I will let you deal with the issue alone for a while.

@verbessern
Author

verbessern commented Sep 19, 2020

Anyway, it's not only a compilation issue; it's also an issue for the runtimes. The costly runtimes will reorder the instructions and eliminate the locals that are generated to work around the lack of explicit instructions. And here comes the word "decent", and the gray area of what should be expected and what not.

@verbessern
Author

verbessern commented Sep 19, 2020

I actually opened this issue because this "just use a local" for every non-trivial memory store instruction makes it practically a requirement for runtimes to optimize locals, which will cost every runtime now and in the future. Byte size, for me personally, is completely irrelevant. If byte size were a real concern in WebAssembly, the instructions would just be encoded with variable-length codes.

@J0eCool

J0eCool commented Sep 19, 2020

There's also the issue where every new instruction that gets standardized only adds to the implementation burden of every future wasm implementation. Producers of wasm can pick and choose the subset that best matches their compilation model, but consumers will need to implement 100% of the spec, or lose any hopes of compatibility. Adding new instructions is a marginal cost for anyone who already has an implementation, but adds to the work needed for new players.

I do believe we've been operating under the (incredibly reasonable) assumption that there will be vastly more producers than consumers, so small/hobbyist wasm engines haven't historically been something we've optimized for, mind.


I have actually opened this issue, because this "just use a local" for every non trivial memory store instruction, make it practically a requirement of the runtimes to optimize locals

I'm curious what your mental model here is. My understanding is that all the existing engines treat both locals and stack values the same way, as virtual registers that are then reified via register allocation. Which doesn't seem prohibitively sophisticated to my eyes. I feel like there's a certain minimum level of complexity that a wasm engine can be assumed to take on, just by virtue of the problem domain.

@conrad-watt
Contributor

I'm curious what your mental model here is. My understanding is that all the existing engines treat both locals and stack values the same way, as virtual registers that are then reified via register allocation. Which doesn't seem prohibitively sophisticated to my eyes. I feel like there's a certain minimum level of complexity that a wasm engine can be assumed to take on, just by virtue of the problem domain.

Pretty much this. Locals really are just register allocation constraints - they don't normally translate to actual computation. Even Chromium's Liftoff compiler, which is one-pass and doesn't do any intermediate optimisation stages, avoids generating any code for local.get/set (link). At the very worst, a completely naive compiler might have to unconditionally mov their initial values and spill an additional register or two, but this is addressed by basic register allocation strategies.

@verbessern
Author

verbessern commented Sep 19, 2020

I can agree with that. My mental process is that at the moment I cannot bear the complexity of the topic. I think I wrote it somewhere: if this is the expected situation to bear, so be it, but not because the operand order as such is fine. The effort that I have spent explaining the issue has already surpassed the effort it would take me to actually implement many of the algorithms involved. I think that is a good lesson to take.

@verbessern
Author

Thanks to everyone involved.

@J0eCool

J0eCool commented Sep 19, 2020

The effort that I have spent explaining the issue has already surpassed the effort it would take me to actually implement many of the algorithms involved.

To be clear: this is vastly less than the effort involved in actually implementing any new proposal, to say nothing of the effort needed to drive it through the standardization process.

This is, in general, the answer to the question "why don't you just do this one simple thing?" because nothing in this space is actually simple, and because people would rather spend the effort on higher-impact work.

I don't say this to be dismissive, or to say that in order to participate one must be willing to spend all this effort alone. I say this so that we can remember the level of effort that everyone involved in wasm has already spent, and is spending at the moment, especially when asking that people shift their priorities to focus on any particular issue.


Thinking about this more, adding instructions doesn't ever solve the problem of naive wasm compilers, in general.

If we make a mistake and our default instructions are difficult for languages to target, that's something we can fix by adding new instructions. (Actually this has already happened with nontrapping float-to-int instructions). But if something is difficult for engines to implement, there's no way to undo that.

Say we add a full set of t.store_ instructions. Then simple compilers will still be on the hook to implement the old store_ instructions if they want to be spec-compliant, and indeed if they want to run any programs compiled by languages that target the non-transposed version. So they're on the hook to implement them regardless. We're right back where we started, where they have to choose between more sophisticated analysis, or doing the naive thing and having suboptimal performance, so all that adding alternate instructions has done is increase the overall implementation effort.

This suggests that we should categorically reject proposals that add new instructions for the sake of wasm engine convenience. It may still be worth doing if it makes it easier for languages to target wasm by being able to select from a wider array of instructions, but engines aren't able to pick and choose a subset, so additional instructions never make their jobs easier.

(There is still something to be said for adding new instructions for performance reasons. However there the standard is much higher; we need both engines and a bulk of languages to be able to take advantage of the new instructions to see any real-world benefit, so the questions are "do the existing instructions prevent optimizations?", and also "is this compelling enough that language implementations will actually target the new instructions?")

@carlsmith

carlsmith commented Sep 20, 2020

The cost of adding them is very low (I assume)

I stand corrected. Good points all round. Need to look into let...

@carlsmith

There's a slight conflict between different goals. On the one hand, the engine aims to be a lean target for compilers, and the text format is essentially a one-to-one representation of the binary format, so it can never provide new features or novel semantics. On the other hand, the text format is trying to be a nice language for developers authoring Wasm modules, and some sugar and convenience features make sense in that context, but can never be added, except via the engine.

@J0eCool

J0eCool commented Sep 20, 2020

On the other hand, the text format is trying to be a nice language for developers authoring Wasm modules

I disagree pretty strongly with this :P. The sense I've always got has been that most developers should not need to write .wat by hand, and it hasn't really been optimized for that use case in a general programming sense. People working on wasm spec proposals and some compiler authors are the two main audiences for the text format as far as I'm aware.

and some sugar and convenience features make sense in that context, but can never be added, except via the Engine.

That's not strictly true; we could spec text-only instructions without much backwards compatibility cost to runtimes, because wasm engines (that I'm aware of) only consume the binary format. However, because maintaining that 1:1 correspondence with the underlying binary is extremely important for reading disassembly and proposing new features, it's unlikely that we will standardize much additional sugar. (The annotations proposal is probably the closest thing to sugar we've thought about adding?)

The approach that seems most fruitful to my eyes would be to create a language that is a minor extension to .wat, and use that. Walt is a pretty good example here, as well as the output of wabt's wasm-decompile (though that's not a language you can write), with respect to how far you can go in that direction.

@carlsmith

The approach that seems most fruitful to my eyes would be to create a language that is a minor extension to .wat, and use that. Walt is a pretty good example here, as well as the output of wabt's wasm-decompile (though that's not a language you can write), with respect to how far you can go in that direction.

I had never seen wabt's wasm-decompile format. At first glance, it's very interesting. Thank you.

I'm personally working on something like that - closer to WAT, but with significant whitespace instead of all the parens and dots, a skinny arrow for mapping params to results, and the ability to infer a few things to reduce redundancy here and there. I like it, but I've no serious ambitions at this stage, as I'm still learning Wasm (and language design generally), so it's just a hobby project.

The sense I've always got has been that most developers should not need to write .wat by hand, and it hasn't really been optimized for that use case in a general programming sense. People working on wasm spec proposals and some compiler authors are the two main audiences for the text format as far as I'm aware.

That seemed to be the consensus at one point, but things drifted. Proposals are now being made to improve WAT for human authors, like allowing typed number literals (not just strings) in data nodes, and allowing users to assign identifiers to the offsets of the values - features compilers won't even use.

I agree that the original role of the text format is imperative, and that creates a place for alternative languages (outside of the standards process) that can experiment, innovate and specialize, but other people clearly feel WAT should also aim to be a language for authoring modules directly. I was just noting the conflict.


10 participants