
expr/expr~ #21

Open
grrrr opened this issue Jul 16, 2021 · 38 comments
Labels
enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@grrrr
Collaborator

grrrr commented Jul 16, 2021

Hi all,
thanks for the great initiative!
One addition that would be a big leap forward is the inclusion of the expr/expr~ objects.
The current code size and performance overhead of messaging for simple calculations is considerable, and an expr port could generate highly optimized static code for that purpose.
Especially on Daisy, with its 128kB flash size, I am constantly running out of code space.
best, Thomas

@grrrr grrrr added the enhancement label Jul 16, 2021
@dromer dromer added the help wanted label Jul 16, 2021
@dromer
Collaborator

dromer commented Jul 16, 2021

Nice suggestion! Better coverage of vanilla Pd objects would certainly be nice.

Knowing only the basics of Pd and Heavy internals, I cannot comment on this myself, but I hope others can jump in with suggestions on how to bring these objects into the project.

@dromer dromer linked a pull request Jul 16, 2021 that will close this issue
@dromer
Collaborator

dromer commented Jul 24, 2021

Some possibly useful reads:
enzienaudio/hvcc#21
https://github.com/Simon-L/pd-static-expr

@dromer dromer reopened this Jul 24, 2021
@grrrr
Collaborator Author

grrrr commented Jul 25, 2021 via email

@dromer
Collaborator

dromer commented Jul 25, 2021

You are right, I mostly put these here as a reference to myself for how these are used (I'm not as experienced with Pd as I'd like).
I briefly looked at how Pd implements these internally and realized that this could get complex rather quickly.

Honestly I have no idea how this should be done, but I do see the potential of having this capability.

Fingers crossed someone has a magical insight and opens a PR ;)

@dromer dromer removed a link to a pull request Nov 11, 2021
@dromer
Collaborator

dromer commented Nov 13, 2021

Hmm, maybe useful? -> https://github.com/codeplea/tinyexpr

@grrrr
Collaborator Author

grrrr commented Nov 13, 2021 via email

@dromer
Collaborator

dromer commented Nov 14, 2021

So @diplojocus suggested (and I was also thinking) to leave the expr until the last stage, ir2c, and then create these functions as needed. The messaging graph before/after has already been established; just the function signatures and definitions need to be created. tinyexpr could help with the definition and simply wrap this up, but I suppose we'll need to dynamically create the signatures for these as well.

Do we want to limit the number of allowed inputs? https://web.archive.org/web/20201111221923/yadegari.org/expr/expr.html describes only 9 inputs, but perhaps this was just for convenience.
"Infinite inputs" could make this rather nasty. Perhaps fixing the signature to a set number of inputs and initializing them to 0 is the easiest?

Not sure, just thinking out loud here.
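As a concrete illustration of the fixed-signature idea, the generator could emit a fixed-arity evaluate function whose unused inlets default to 0. This is a hypothetical Python sketch (hvcc is written in Python); the `emit_expr_signature` helper and its naming scheme are invented for illustration, not actual hvcc code:

```python
# Hypothetical sketch: emit a fixed-arity C++ evaluate signature for one
# [expr] instance, with all inlets defaulting to 0.0f as suggested above.
def emit_expr_signature(obj_id: str, n_inlets: int = 9) -> str:
    params = ", ".join(f"float f{i} = 0.0f" for i in range(1, n_inlets + 1))
    return f"float cExpr_{obj_id}_evaluate({params});"

print(emit_expr_signature("ZRMzpAT8", 2))
# -> float cExpr_ZRMzpAT8_evaluate(float f1 = 0.0f, float f2 = 0.0f);
```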

@dromer
Collaborator

dromer commented Nov 14, 2021

And of course there's the deal with multi-line [expr] with multiple outputs ..

@dromer dromer added this to Review in Missing vanilla objects Jul 16, 2022
@dromer dromer moved this from Review to Todo in Missing vanilla objects Jul 16, 2022
@dromer dromer moved this from Todo to Review in Missing vanilla objects Jul 16, 2022
@dgbillotte
Collaborator

I started diving into the hvcc code last night and have a sense of direction for the Pd -> Heavy part, and then how to slip tinyexpr in on the C generation side, but I just started digging into HeavyIR... I think once I see how that part works, this should be pretty straightforward.

One question: I notice that some Pd objects get implemented as HeavyLangObjects, others as HeavyIrObjects, and some as both (send & receive). What is the line separating what should be handled by Heavy and what by HeavyIr?

Since this seems like a new fundamental operation at the HeavyIr end and as such would warrant a HeavyIrExprObject, does there need to be a HeavyLangExprObject? I'm thinking not...

Has anybody else done any work on expr/expr~ since Nov last year?

@dgbillotte
Collaborator

On the HeavyIrExprObject itself:

An expr in Pd can have multiple expressions, with an outlet for each. It seems to me that since HeavyIr is a primitive language, it would make most sense for it to have an expr that handles a single expression (and has a single outlet), and then build up specific instances of pd-expr using one HeavyIR expr per expression.

Any thoughts about the Heavy philosophy/style and whether that fits or not?

@dgbillotte
Collaborator

@dromer have you found any other useful docs or info about Heavy since the ones you posted last year?

Currently I'm just walking the path of a given Pd object through the transformations to C code and making a lot of guesses from there...

@dromer
Collaborator

dromer commented Sep 2, 2022

@dgbillotte I'm pretty much in the same position. I just go over the entire flow of things every time and see where in the chain I end up. It's still very complex for me and a learning process at every step.

A downside of tinyexpr is that it will likely impact performance, which is bad. Also, I don't know if it could be used on all architectures.
While it would be nice to be able to easily translate the plain [expr] functions directly, ultimately it would be better to generate actual C code.

I've been thinking about this regularly, but have not made any tangible steps towards an implementation. Not being able to anticipate the impact on performance and program size, not to mention the possible impact on eventual DSP results (we really need signal tests on the CI), makes me wary to even start down one avenue of research/implementation .. only to fail and have to start all over again ..

Considering there's still a lot of other things to do I've just been holding it off, but happy that others are looking into the code and what could be possible :)

For now I'm actually considering leaving multiple expressions unsupported, as I think they complicate the graph and code generation a lot. From a usability perspective, even being able to use a single [expr] with a few i/o would already be very beneficial, so best to start there.

Btw I discussed with a friend whether or not translation to the internal heavy functions would be needed to be able to make use of compiler optimizations and such -> https://github.com/Wasted-Audio/hvcc/blob/develop/hvcc/generators/ir2c/static/HvMath.h
Also something to think about and consider.

@diplojocus
Contributor

diplojocus commented Sep 2, 2022 via email

@dgbillotte
Collaborator

@dromer sounds like an adventure, I'll share any insights I find.

re tinyexpr: agreed

re proper performance: I'm thinking in baby steps for now. The SIMD part of it started dawning on me as well. I'd be happy to get a POC working and see where it goes...

re multiple expressions: as far as I can tell it is just syntactic sugar, so limiting it to a single expression seems very reasonable to me. That said, if the Heavy/HeavyIR part is done right, it should be easy enough to turn multi-expression [expr]s into multiple single-expression Heavy exprs, once those are working :-)

@diplojocus , after some thought and considering your and @dromer's responses I'm getting a clearer idea of the abstractions intended by the Heavy folks. As such I think that decomposing the expression into HeavyIR primitives is what the orig authors would have done and I'm gonna head down that path.

At first I was concerned that HeavyIR didn't have the core primitives needed to cover all of the math functions available in expr, but after digging into heavy.ir.json some, I can see that most-ish of the stuff is there. I'll do a further analysis of that and see exactly what is missing.

With above thoughts in mind, this flow seems to make sense to me:

  • Pd-expr -> PdExprObject -> HeavyLangExpr:
    • validate/enforce sanity and any heavy restrictions (single expression, whatever else)
    • figure out the number of inlets
    • squash the args back into a single string
  • HeavyLangExpr -> HeavyIr*:
    • compile the expression into HeavyIr primitives

I guess I was liking the idea of tinyexpr because we wouldn't have to do that last step, but again, this smells like an adventure, so....

@dgbillotte
Collaborator

@dromer: do you have support for pd [value] objects on your radar? I'm not pushing for it, but it has ramifications on expr implementation. If value is expected to be supported soon I would want to build that expectation into the expr stuff.

@dromer
Collaborator

dromer commented Sep 3, 2022

@dgbillotte I'm not sure if using HeavyIR primitives is really needed. What I was thinking is to hold off on creating any code until after the HeavyIR step, and then create actual C functions that become part of the core. Unwrapping the whole C expression into a HeavyIR graph would introduce a lot of messaging overhead, which would likely kill any advantage in terms of code size.

In terms of adding pd-vanilla objects there is no roadmap at all. Whenever I see something that is trivial to add (like some of the Midi objects) I work on it, but there are no specific implementations planned. Check out the Projects tab for some of the things on the to-do list. In the near future I'd like to add more people to this section of the repo so some ideas/planning can be worked out into attainable steps and an actual kind of roadmap :)

@dgbillotte
Collaborator

@dromer if I'm seeing it clearly, that would imply that there is a HeavyIrExpr object that takes in the complete expression, and it would be the job of the C generator to then turn that string into executable C. Is that correct?

I was thinking that going the route of the HeavyIr primitives would offload all of the SIMD related logic to the primitives where it would, presumably, be easier to deal with. I was not thinking, however, of all the extra message-passing overhead and can see how that could surpass any of the gains from SIMD.

I think doing it that way would be easier to implement. I'm just trying to understand the intentions of the layers and respect them, instead of just paving a bypass straight through ;-)

@dromer
Collaborator

dromer commented Sep 3, 2022

Yup understood, but in the case of [expr] the complete bypass might be the best approach in the end ;)

@diplojocus
Contributor

diplojocus commented Sep 6, 2022 via email

@dromer
Collaborator

dromer commented Sep 6, 2022

@diplojocus I thought that inside heavy there is no difference between control and signal rate. All objects are evaluated at the same rate, no?

@diplojocus
Contributor

diplojocus commented Sep 6, 2022 via email

@dgbillotte
Collaborator

dgbillotte commented Sep 10, 2022

I'm working on expr for now, just to get my bearings, but expr~ is my real goal once I get there...

I've been going back and forth conceptually between using the existing primitives or creating a new one. I'm currently thinking that the nature of the operation justifies a new IR primitive. But I guess it comes down to the question of what hvcc is ultimately for. It seems to me that the OWL, Bela, Daisy, etc. products are what is keeping hvcc alive at the moment, though I would love to better understand who the actual users are. I'm coming from the audio/OWL perspective (Befaco LICH), so that's what I know about this world so far and am biased toward ;-)

If Max support is not wanted (I haven't heard/seen much from the Max community in the discussions) and C is the primary IR target (with the other targets each building off of the C code), then really the purpose of hvcc seems to be to "create accurate translations from a Pd patch into C code", and since timing is a critical part of what Pd is designed to do, it should be of a higher priority than honoring the abstractions as they exist. I am all about using software abstractions properly, 💯 %, but the abstraction exists to serve a purpose. When the abstraction becomes a hindrance to the purpose, its value is questionable... That's just my 2 ¢, I would appreciate learning of other perspectives :-)

If that is the case, I think that the existing IR primitives do not adequately serve the core purpose of creating an accurate translation of Pd source patches, and should be extended to do so.

@diplojocus re "an optimisation pass on the heavyIR graph to do some code folding", by "folding" do you mean some form of combining multiple bin/unary ops into a single graph node, thus eliminating the extra message passing? If this is possible, I think it would be more of the "right" way to do it and would be happy to wander down that path some.

At the current time, hvcc seems to fill a void where I think the only real alternative is to use Max instead of Pd, and as such I hope folks' efforts can come together to create something that is stable and will be around for a while.

@dromer
Collaborator

dromer commented Sep 10, 2022

My personal interest is with DPF (vst2/3/lv2/clap plugins), OWL, Daisy, Bela, and webassembly. However there are also still users of Unity and Wwise.

So basically "all the targets" ;)

The max2hv path is there, but I have absolutely no idea if it even works. I'd be very happy to deprecate it. A discussion for it is here -> #25

@dgbillotte
Collaborator

Not extensively tested, but I have expr working for a simple patch.

I wanted to solicit any feedback on the approach I took and some next steps.

For this go at it I created a new HeavyIR object, __expr.

In short, I create per-instance evaluate functions in the Heavy_heavy.hpp/cpp files that get passed into the cExpr_init() function for each "instance" and are stored in the instance's ControlExpr "object", where they can later be called by cExpr_onMessage() any time the expression needs to be evaluated. The passed-in function just binds the variables in the expression to the input array, evaluates the expression, and returns the value. With the expression compiled in, they should run plenty fast for any control-rate needs.

I like how it works in theory, but the implementation could probably be cleaner. I used the get_C_impl() and get_C_def() functions to inject the functions and their prototypes into the Heavy_heavy.hpp/cpp files, but I'm not sure if that is working with the system or against it...

I'll add some tests and open a PR once I've banged on it some.

You can have a look at https://github.com/dgbillotte/hvcc

@dromer
Collaborator

dromer commented Sep 14, 2022

So you are evaluating the expressions at runtime, rather than creating a compiled C function?
Something tells me that for embedded purposes this could end up giving too much of a performance hit, but that will of course need testing.

Will need to play with this myself a bit, will try out your branch this weekend if I find the time. Thnx for giving a go at this!

@dgbillotte
Collaborator

No, they're compiled in; it's just a roundabout way to do it. They live in Heavy_heavy.cpp like:

float Heavy_heavy::cExpr_ZRMzpAT8_evaluate(float* args) {
  return 3 + 5; // simple test, no variables
}

float Heavy_heavy::cExpr_KVwa098b_evaluate(float* args) {
  return ((float)(args[0])) + ((float)(args[1]));
}

and passed into cExpr_init like:

Heavy_heavy::Heavy_heavy(double sampleRate, int poolKb, int inQueueKb, int outQueueKb)
    : HeavyContext(sampleRate, poolKb, inQueueKb, outQueueKb) {
  numBytes += cExpr_init(&cExpr_ZRMzpAT8, &Heavy_heavy::cExpr_ZRMzpAT8_evaluate);
  numBytes += cExpr_init(&cExpr_KVwa098b, &Heavy_heavy::cExpr_KVwa098b_evaluate);
  numBytes += cExpr_init(&cExpr_Qev1EDBU, &Heavy_heavy::cExpr_Qev1EDBU_evaluate);
  
  // schedule a message to trigger all loadbangs via the __hv_init receiver
  scheduleMessageForReceiver(0xCE5CC65B, msg_initWithBang(HV_MESSAGE_ON_STACK(1), 0));
}

@dromer
Collaborator

dromer commented Sep 14, 2022

Aaah I see. However, is one individual function created for every [expr] object?
So if you use the exact same expression, it becomes a whole new function definition?

What I would do is keep a list of used expressions by taking a heavyhash of the entire expression string, then if that expression already exists, simply point to the same one. This way code duplication could be reduced a lot.

One thing we also lose in your approach is any architecture-specific optimizations. Extensive use of these expressions could then really create a performance hit (even at control rate). On a desktop PC that may not be very apparent, but for the more bespoke architectures this could become quite a penalty (depending on the expression complexity, of course).

Just some things to consider if you move forward with this approach.
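The hashing/deduplication idea could be sketched on the generator side roughly like this (Python, hvcc's implementation language; the `ExprRegistry` class and the use of CRC32 in place of heavyhash are assumptions for illustration, not actual hvcc API):

```python
import zlib

# Sketch: map each distinct expression string to one generated function name,
# so identical [expr] objects share a single C function definition.
class ExprRegistry:
    def __init__(self):
        self._funcs = {}  # expression hash -> generated function name

    def function_for(self, expression: str) -> str:
        key = zlib.crc32(expression.strip().encode())
        if key not in self._funcs:
            self._funcs[key] = f"cExpr_{key:08x}_evaluate"
        return self._funcs[key]

reg = ExprRegistry()
# two objects with the same expression reuse one function
assert reg.function_for("$f1 + $f2") == reg.function_for("$f1 + $f2")
assert reg.function_for("$f1 + $f2") != reg.function_for("$f1 * $f2")
```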

@dgbillotte
Collaborator

I def have μ-controllers in mind and am open to all thoughts in that direction. The current approach is just a first stab at it and has been useful for learning all of the parts of this thing. I could happily toss what I have and take a different direction. I'm all for geeking out on making it spit out small and fast code.

If it is a likely case of having many [expr]s with the same expression, hashing and caching would be a good route to go. That case had not occurred to me... Thinking about it that way, I like the idea of there being a single function lookup table instead of a bunch of spurious function definitions.

re optimizations: what approach do you have in mind that keeps the arch-specific optimizations possible?

@dromer
Collaborator

dromer commented Sep 14, 2022

Worst case we'll have to actually parse the expressions and put __hv specific operations in place.
Not an easy task of course, but it would give the most control over the eventual code output.

@dgbillotte
Collaborator

I put together some random-ish "try to break it" kind of patches and was amazed that they just kept working... The only bug I found was with an expression like "$f1 + $f3", which will have 3 inlets in Pd; it seg-faults on args[2]. No surprise, easy fix...

I'll put together a thoughtful test for it this eve and then I'm gonna start investigating the signal-rate side of things. I'm sure it will be educational...
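For reference, the inlet-count bug above comes down to using the number of distinct variables instead of the highest $-variable index. A hypothetical helper (not actual hvcc code) that gets "$f1 + $f3" right:

```python
import re

# Sketch: [expr]'s inlet count is the highest $f/$i/$s index referenced,
# so "$f1 + $f3" needs 3 inlets even though $f2 is never used.
def inlet_count(expression: str) -> int:
    indices = [int(i) for i in re.findall(r"\$[fis](\d+)", expression)]
    return max(indices, default=1)  # [expr] always has at least one inlet

print(inlet_count("$f1 + $f3"))  # -> 3
```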

@dgbillotte
Collaborator

After studying hvcc/generators/ir2c/static/HvMath.h last night, I have a much better understanding of your references to utilizing the architecture-specific optimizations. 🎓

I think the approach I am using could easily be extended to expand the expressions out to take advantage of the stuff that is in HvMath.h. I look forward to getting to that part of it :-)

@dgbillotte
Collaborator

I will say that I was naive about how I thought the expressions would ultimately get turned into C code. Studying HvMath.h and SignalMath.py shows that for the SIMD stuff to work, the incoming expression needs to be rearranged from infix notation to a sequential-prefix notation. I make the distinction of "sequential" because in my first estimations of this I was picturing a "nested"-prefix notation. An example:

Input expression: "sin($f1 + 2) / sqrt($f2)"

A nested prefix representation would be:

hv_div_f(hv_sin_f(hv_add_f($f1, 2)), hv_sqrt_f($f2));

However, for the sake of efficient buffer handling, HvMath.h deals with outputs as output parameters instead of return values, so a different processing pattern is needed, which I'll call "sequential" prefix notation, in which the output buffers from earlier steps are set up to be the input buffers for later steps:

__hv_add_f($f1, 2, BO0);
__hv_sin_f(BO0, BO1);
__hv_sqrt_f($f2, BO0);
__hv_div_f(BO1, BO0, BO2);

To handle that, I wrote an expression parser/rewriter that can output either of the forms above (for use in expr and expr~). I have it at https://github.com/dgbillotte/ExprParser for now. The parsing is correct as far as I've tested it, which includes some unit tests and some ad-hoc "throw crap at it and see what comes out" kind of tests. The generated C-code is pretty rough at this point but proves the point and helps to inspire the next step.
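The core of such a rewriter can be sketched in a few lines of Python. Here Python's `ast` module stands in for a real [expr] parser (so `$` is dropped from variable names), the `to_sequential` name is invented, and a fresh `BO` buffer is allocated per step rather than reused as in the hand-written sequence above:

```python
import ast

# Sketch: rewrite a nested infix expression into the "sequential" prefix
# form, where each step writes into a temp buffer consumed by later steps.
OPS = {ast.Add: "__hv_add_f", ast.Sub: "__hv_sub_f",
       ast.Mult: "__hv_mul_f", ast.Div: "__hv_div_f"}

def to_sequential(expr: str):
    lines, count = [], 0

    def fresh():
        nonlocal count
        name = f"BO{count}"
        count += 1
        return name

    def emit(node):
        if isinstance(node, ast.Constant):
            return str(node.value)
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.BinOp):
            a, b = emit(node.left), emit(node.right)
            out = fresh()
            lines.append(f"{OPS[type(node.op)]}({a}, {b}, {out});")
            return out
        if isinstance(node, ast.Call):  # e.g. sin(x) -> __hv_sin_f
            args = ", ".join(emit(arg) for arg in node.args)
            out = fresh()
            lines.append(f"__hv_{node.func.id}_f({args}, {out});")
            return out
        raise ValueError(f"unsupported node: {node!r}")

    emit(ast.parse(expr, mode="eval").body)
    return lines

for line in to_sequential("sin(f1 + 2) / sqrt(f2)"):
    print(line)
```

For "sin(f1 + 2) / sqrt(f2)" this emits an `__hv_add_f`, `__hv_sin_f`, `__hv_sqrt_f`, `__hv_div_f` sequence mirroring the steps shown above.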

With that taken care of I have the form of what a per-object process function would look like. For context, the following function would get declared in Heavy_heavy.hpp and then called in Heavy_heavy.cpp in the Heavy_heavy::process method under the "// process all signal functions" comment.

This function is roughly what would be generated for the input expression that I started with: "sin($f1 + 2) / sqrt($f2)"

void Heavy_heavy::cExprSig_rUZ70xyj_evaluate(hv_bInf_t* bIns, hv_bOutf_t bOut) {
  // declare tmp buffers
  hv_bufferf_t BO0, BO1;

  // declare buffers for constants
  hv_bufferf_t const_2; // initialize this to all 2's

  __hv_add_f(bIns[0], const_2, BO0);
  __hv_sin_f(BO0, BO1);
  __hv_sqrt_f(bIns[1], BO0);
  __hv_div_f(BO1, BO0, bOut);
}

the calling site in Heavy_heavy::process() would look kind of like this:

hv_bufferf_t* ins[2] = {&Bf2, &Bf0};
cExprSig_rUZ70xyj_evaluate(ins, VOf(Bf1));

The piece that creates a buffer of constants to deal with a single constant seems less than ideal, but it is a laziness that I am ok with at the moment. I've seen that there are a number of SIMD binary-op primitives that will operate on a vector and a constant which would be nice to use here, but that would involve some deeper changes/additions to HvMath.h...

This is where my brain is at on this thing at this point, any thoughts welcome...

- Daniel

@dgbillotte
Collaborator

Right as I hit send above I glanced over at a generated Heavy_heavy.cpp that I have and saw these two lines inside process():

    __hv_var_k_f(VOf(Bf1), 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f);
    __hv_sub_f(VIf(Bf0), VIf(Bf1), VOf(Bf1));

It looks like a buffer of constants is how the heavy team was dealing with constants, so I'm not going to give that any more thought for a while...

@dromer
Collaborator

dromer commented Dec 1, 2022

@dromer: do you have support for pd [value] objects on your radar? I'm not pushing for it, but it has ramifications on expr implementation. If value is expected to be supported soon I would want to build that expectation into the expr stuff.

Having looked a bit more at certain missing objects, I can see that value indeed has some impact on expr, since expr can make use of the same variables throughout a Pd program. As I understand it, [value] basically defines a global variable that can be read/set from any part of the program. Superficially it seems that it shouldn't be too hard to implement; however, I have no idea where to start with this, or whether it would be compatible with your expr work :#

@dromer
Collaborator

dromer commented Mar 13, 2023

Hmm, I'm actually thinking value might not be that difficult, if we consider it simply as a kind of send/receive in a single object. You "just" have to get the hash value and put a receive on it to get it. Maybe I'm thinking too simplistically here, but I think it could be possible to mostly emulate value behavior. Will try to prototype something for this some time.

However, what would be very difficult (currently) is supporting arrays. According to the Pd docs, you can give the name of an array and then have the input act as an index that reads from it. At the moment we only support [table], so this would definitely not be possible in the current state.

@dromer
Collaborator

dromer commented Mar 14, 2023

I think it could be possible to mostly emulate value behavior.

Forget what I said here, I'm an idiot. Even though value can be set from a send, it actually works very differently from a send/receive pair.

I'd say: let's implement expr/expr~ without value/array capability in the MVP. A future addition would need changes across the board, so no need to worry about that here, I think.

@Wasted-Audio Wasted-Audio deleted a comment from 60-hz Jun 24, 2023
@dromer
Collaborator

dromer commented Jul 19, 2023

Much more extensive (and clear) docs about the current expr in Pd: https://pd.iem.sh/objects/expr~/

Clearly there is a lot of functionality that we won't be able to support. It mentions up to 100 (!!) inputs, and things like value and array will not be possible right now. And then there are a number of additional functions that may need extended parsing.

I'm currently writing some tests for control rate, to at least explore to what extent we can support this part of the objects.
9ff861f

@dgbillotte
Collaborator

dgbillotte commented Jul 20, 2023 via email
