
WD-40 Issues thread was accidentally deleted by @keean #49

Open
shelby3 opened this issue Jun 22, 2020 · 172 comments

shelby3 commented Jun 22, 2020

I will continue the discussion here, until or unless @keean can get Github to restore the thread. Luckily I still have a copy of the deleted #35 thread loaded, so I can copy and paste from it. If the original thread is restored, I will copy my posts from here back to that thread and delete this thread. I will give you an opportunity to copy your replies also before deleting the thread. Perhaps @keean is the only person who can delete an issues thread. [Or perhaps we’ll just link the old thread to this new one, because that thread was getting too long to load anyway.]

@keean

I can find a few references to automatic memory management and GC vs ARC, but it seems not widespread. To avoid confusion we should probably say that MS (mark sweep) and RC (reference counting) are forms of AMM (automatic memory management), and avoid use of GC altogether.

I find the term and acronym GC to be more recognizable than AMM, which is why I continue to use it. And nearly no one knows the acronym MS. And you continue to use RC, which is widely known as radio control, and not ARC, which is the correct term (employed by the Rust docs for example) for automatic reference counting, because there is such a thing as manual reference counting.

IOW, memory has no more life if it can’t be referenced anymore and it continues to have a life for as long as any reference can access it. Whereas non-memory resources live on beyond (or need to die before) the life of the reference to the handle to the resource. The reference to the file handle is not isomorphic with the life of the resource referred to by the file handle.

You can always refactor so the life of the handle is the life of the resource.

And you completely failed to respond to my holistic reasons why doing so (for the multiple strong references case) would be problematic. And I grow extremely tired of this discussion with you, because I have to repeat myself over and over while you repeat the same rebuttal which ignores my holistic points.

So I am just going to let the argument with you stop now on this point. I do not concede that you’re correct. And I do not think you are correct. I already explained why, and I will not reply again when you reply again totally ignoring and not addressing my holistic point. It is a total waste of my time to go around and around in a circle with you while making no progress on getting you to respond in substance.

If the resource needs to be shorter lived you can use a Weak reference. If the resource needs to be longer lived you can store it in a global container (global array, hash map, singleton object etc).

Which, as we have already agreed, is being explicit, and is what I wrote yesterday. Yet you still side-step my holistic points. Yes you can do the same things with ARC that we can do without ARC in the weak references case, but that does not address my point that in that case ARC is not better, and is arguably worse because it conflates separate concerns.

ARC is for freeing access to a reference (thus it follows that the memory resource can be freed because no reference can access it anymore); it is not optimally semantically congruent with freeing (non-memory) resources that the reference used to know about. They are not semantically equivalent.

There is no reason for them not to have the same scope. If I have access to the handle, I should be able to read from the file. If I don't need access any more destroy the handle. There is no need to have a handle to a closed file.

I guess you did not assimilate what I wrote about breaking encapsulation. But nevermind. I don’t wish to expend more verbiage trying to get you to respond to something you wish to ignore and skip over without addressing it.

Also I am starting to contemplate that the encapsulation problem is fundamental and we need to paradigm-shift this entire line of discussion to a new design for handling resources (but that will come in a future comment post).

Even GC + implicit destructors if we use static escape analysis with a single explicitly typed strong reference.

No, because a strong reference must never be invalid, so you cannot close a strong reference to a file.

You’ve got your model inverted from my model. I noted that in one of my prior responses.

In my model the “strong finalizer reference” controls when the resource is closed. And the weak references are slaves. When you tried to fix my example, you did not, because you used a weak reference but I wanted the resource to actually be released before the asynchronous event. Thus I wanted to close the strong reference. The static linear type system can ensure no access to the strong reference after calling the finalizer. Thus it can also work with non-ARC. (However as I alluded to above, I do not want to recreate Rust and thus I am thinking about paradigm-shifting this entire line of discussion in a future post).
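A rough Rust sketch of that static guarantee may help (a minimal sketch; Finalizer and close are illustrative names, with Rust’s move semantics standing in for the proposed linear type system):

struct Finalizer {
    fd: i32, // stand-in for a resource handle
}

impl Finalizer {
    // `close` consumes `self`, so the compiler statically rejects any use
    // of the strong reference after the resource has been released.
    fn close(self) {
        println!("closing fd {}", self.fd);
        // the actual release of the non-memory resource would happen here
    }
}

fn main() {
    let strong = Finalizer { fd: 3 };
    strong.close();
    // strong.close(); // error[E0382]: use of moved value: `strong`
}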

A strong reference to a file must be RAII.

Nope as explained above.

Basically with a strong reference you must always be able to call read on the handle without an error.

By definition, if you want to be able to explicitly close the handle, it is by definition a weak reference to a file.

By your definition, but I had a different model in mind.

So with GC the close is explicit, the weakness of the handle is implicit. You have no choice over this (unless you want unsoundness).

I explained it above. Open your mind.

Well we could even have a single explicit strong reference (thus any weak references are implicitly typed by default, although of course access to a weak reference always requires either an explicit conditional test or an implicit runtime exception on use-after-destruction/use-after-finalization) with implicit (or explicit) destructors, and make it work with either ARC or non-ARC, GC. Thus I see no advantage to conflating this with ARC. And I do conceive disadvantages to conflating it with ARC, not only that it won’t work with non-ARC, but also that conflating with ARC encourages the programmer to conflate reference access lifetimes with resource lifetimes, which are not semantically isomorphic.

To repeat above you should not use a strong reference to the resource with GC, because that would rely on finalizers to release the handle, and that can lead to resource starvation. It's not safe.

To repeat, you do not seem to understand what I explain.

Edit: Regarding C++, yes you are right you would swap a null there, but that's C++, which is not an ARC language. This would imply that "Weak" is defined:

type Weak<T> = T | null

And therefore Weak would be a nullable reference to the strong file handle. However you would not be allowed to just write a null. Weak is an abstract datatype, so the value is private, and you would have to call weak.delete(), which calls the destructor on the object contained, and then replaces it with a null.
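A rough Rust rendering of the Weak abstract datatype described above (a minimal sketch; WeakBox and its method names are illustrative, not something proposed in this thread):

// The payload is private, reads must handle "already deleted",
// and delete() runs the destructor exactly once.
struct WeakBox<T> {
    inner: Option<T>, // plays the role of `T | null`
}

impl<T> WeakBox<T> {
    fn new(value: T) -> Self { WeakBox { inner: Some(value) } }

    // Explicitly destroy the contained value, replacing it with "null".
    fn delete(&mut self) {
        self.inner = None; // dropping the old Some(T) runs T's destructor
    }

    // Any use must handle the possibility that the value is gone.
    fn get(&self) -> Option<&T> { self.inner.as_ref() }
}

fn main() {
    let mut w = WeakBox::new(String::from("file-handle stand-in"));
    if let Some(v) = w.get() { println!("still alive: {}", v); }
    w.delete();                 // destructor runs here, exactly once
    assert!(w.get().is_none()); // any later use must handle "gone"
}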

Okay, but it does not rebut any of the context of my point, which I will quote again as follows:

I was referring to the implicit release of the resource upon the destruction of the strong reference in the Map of your upthread example code, which is conflated with your code’s explicit removal from the Map (regardless of how you achieve it by assigning null or explicit move semantics which assigns a null and destructs on stack frame scope). You’re ostensibly not fully assimilating my conceptualization of the holistic issue, including how semantic conflation leads to obfuscation of intent and even encouraging the programmer to ignore any intent at all (leading to unreadable code and worse such as very difficult to refactor code with insidious resource starvation).


shelby3 commented Jun 22, 2020

@Ichoran

Um, your supposed code counterexample isn't a counterexample. In fact, the disproof of it being a counterexample was in my Rust example! Here it is again:

https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=2baade67d8ce2cc9df628de2b753f0e6

Here is the code, in case you want to see it here instead of run the runnable example there:

// Wrapper with an observable destructor so we can see exactly when it runs.
struct Str {
    string: String
}

impl Drop for Str {
    fn drop(&mut self) { println!("Dropping!"); }
}

fn main() {
    let mut s = String::new();
    s.push_str("salmon");
    let ss = Str{ string: s };
    let l = ss.string.len();      // copy the length out before dropping
    std::mem::drop(ss);           // explicit early drop; `ss` is moved and unusable after this
    println!("I was size {}", l); // prints "I was size 6" after "Dropping!"
}

Rust drops at block closure, but it resolves lifetimes to last use when needed ("non-lexical lifetimes"), so it could do it automatically at the point where it says std::mem::drop.

So your first example would magically just work. If not, the explicit drop with move semantics (so you can't use it again) would non-magically work.

Seems you completely ignored what I wrote earlier in the thread in response to @sighoya:

If there is only one reference (or at least only one strong reference) to the resource handle then the compiler should be able to reason (by tracking the implicit linear type) when that reference dies either upon exit from its terminal stack scope or assignment of null, i.e. Rust does this but hopefully it can be accomplished without lifetime annotations? (If not, then @Ichoran may be correct to cite that as a utility advantage for Rust’s borrow checker) With multiple strong references to the same resource handle I presume the static analysis required by the compiler would require total, dependent type knowledge a la Coq and thus wouldn’t be viable for a general purpose PL. For multiple strong references I think only a conflation with ARC would be viable, but I have posited that comes with other meta-level baggage.


In contrast, RAII handles this flawlessly.

No it doesn’t. It obscures intent which can lead to inadvertent resource starvation. Yet I repeat myself again.

And Rust is not doing the ARC implementation of the form of RAII which can return the resource from the stack frame scope (i.e. not the “basic use case where the lifetime of an RAII object ends due to scope exit”), it is entirely a static analysis which is what I was discussing with @sighoya.

AFAIK Rust can’t handle the multiple strong references case, although I’m arguing against allowing (or at least discouraging it) it anyway. But surely C++ allows it via std::shared_ptr.

Having to remember to deal with resources is a pain. If the compiler can figure out that they need to be closed, they should just plain be closed.

If only programming were as simple and one-dimensional as you want to claim here on this issue.

If you need help understanding why your life is so easy now, then an IDE could help you out.

I actually find they often get in my way. It’s usually better to paradigm-shift than pile complexity on top of complexity.


keean commented Jun 22, 2020

@shelby3 my use of Weak/Strong is in line with the common usage, for example see weak references in Java and JavaScript. Suggest we try and stick to the standard terminology.

The RAII case is less likely to lead to resource starvation because the programmer cannot forget to free the resource, and the resource is freed as soon as the program holds no references to the resource.

There is no semantic conflation. If you have the handle you can read from the file. If there is no handle you cannot read from the file. It's simple and clearly correct. The case where you can have access to a handle that is closed is far more confusing to me.

Don't forget that by wrapping that handle in a weak reference you can explicitly destroy it at any time. The purpose of a weak reference is to encapsulate something that has a shorter lifetime than the reference.

So from my point of view the explicit distinction between strong (the default) and weak references is the important thing. If we have this then we can have RAII file handles, and can use them in all the ways you want. I don't really think there is any conflation here, just a simplification that prevents a whole class of mistakes (unless you explicitly make the reference to the handle weak).


shelby3 commented Jun 22, 2020

@keean

my use of Weak/Strong is in line with the common usage, for example see weak references in Java and JavaScript. Suggest we try and stick to the standard terminology.

No I suggest you assimilate what I write and understand that I specifically wrote (much earlier in the discussion) to not conflate it with (the traditional meaning of) strong and weak references.

There is no semantic conflation. If you have the handle you can read from the file. If there is no handle you cannot read from the file. It's simple and clearly correct. The case where you can have access to a handle that is closed is far more confusing to me.

As I predicted more blah blah from you repeating your same myopic argument for the Nth time and totally failing to address my holistic point.

The last use of the reference to a data structure which somewhere in the object graph references a resource handle (and especially opaquely, if you allow cyclic references) doesn’t necessarily correspond to the end of the last access of the resource handle or the resource.

Don't forget that by wrapping that handle in a weak reference you can explicitly destroy it at any time.

Which in your model does not release the resource. Again that is tied into my holistic analysis, but it has become too complex (requires too much verbiage which I am unwilling to continue) to untangle all your vacuous non-rebuttals.

So from my point of view the explicit distinction between strong (the default) and weak references is the important thing. If we have this then we can have RAII file handles, and can use them in all the ways you want.

We can have RAII or even slightly better than RAII with a variant of my desire to ask the programmer to be explicit to make sure intent is not obfuscated or forgotten. And we do not even need ARC but that leads us to something like Rust, which I don’t like. And we already discussed that ARC is incompatible with non-ARC, GC, and I don’t like conflating resource lifetimes with reference lifetimes anyway (for the reasons I had already stated, including obfuscating intent and explicit duration of lifetimes, e.g. where deleting an item from a Map causes a resource lifetime to implicitly end, and that is just the tip of the iceberg of the implicit cascades that can occur, causing insidious bewilderment as to why there is resource starvation), especially not implicitly, for the reasons I have stated which apparently you and @Ichoran have ostensibly not understood (or do not agree with and have not articulated to me why).

I don't really think there is any conflation here, just a simplification that prevents a whole class of mistakes (unless you explicitly make the reference to the handle weak).

Even with a single strong reference (and even if no weak references) and given implicit RAII (whether it be via ARC or linear types a la Rust) there is still a conflation of resource lifetime semantics with (reference or access to) resource handle lifetime semantics.


keean commented Jun 22, 2020

@shelby3 I think you just don't understand some things that are common knowledge in programming and it's frustrating trying to explain them to you.

I can understand you have a preference for a particular style of semantics, say GC with explicit destructors, and that is fine. You should also understand that there are other people who prefer other idioms like RAII.

Your criticisms that there is something wrong with RAII, or use cases it cannot cope with, are wrong. I think because you don't like it, you have not really studied it or looked at how it solves the problems.

I have written programs using RAII, so I know how it helps avoid bugs. I have directly seen the benefits in real projects compared with explicit destructor approaches. So the situation from my point of view is that real world experience contradicts your speculation.


shelby3 commented Jun 22, 2020

@keean

I think you just don't understand some things that are common knowledge in programming and it's frustrating trying to explain them to you.

That is an extremely unfair allegation when in fact I wrote earlier in the discussion that I was making a distinction from the common usage of the terms. Do I need to quote it for you?

There is nothing wrong with the logic of my model. And you put your foot in your mouth (and I use that terminology because you were forcefully stating I was wrong) because you did not read carefully what I wrote. And so now you ‘backsplain’ by attempting to claim that I don’t understand, when in fact I wrote earlier on that I was using a different model.

You should also understand that there are other people who prefer other idioms like RAII.

What part of the following can’t you read?

We can have RAII or even slightly better than RAII with a variant of my desire to ask the programmer to be explicit to make sure intent is not obfuscated or forgotten. And we do not even need ARC but that leads us to something like Rust, which I don’t like. And we already discussed that ARC is incompatible with non-ARC, GC […]

What part of RAII on non-ARC requires (something like) Rust do you not understand? Even if I wanted implicit, I would have to use Rust if I don’t want ARC. Why do you conflate everything into a huge inkblot instead of understanding the orthogonal concerns I explain?

Did you entirely forget that I wrote that my new ALP idea does not use ARC?

I can understand you have a preference for a particular style of semantics, say GC with explicit destructors, and that is fine.

My preference for the programmer to express explicit intent was not about explicitly calling a finalizer in every case. You completely failed to follow the discussion from start to finish. I was advocating an explicit indication when something like RAII was going to be used in each instance where the reference is initially assigned.

Your criticisms that there is something wrong with RAII, or use cases it cannot cope with are wrong. I think because you don't like it, you have not really studied it looked at how it solves the problems.

I have written programs using RAII, so I know how it helps avoid bugs. I have directly seen the benefits in real projects compared with explicit destructor approaches. So the situation from my point of view is that real world experience contradicts your speculation.

I used ARC also for resource destruction in C++. But I also did not have a highly concurrent program. And I wasn’t doing anything significant that would have significantly stressed resource starvation if my usage was suboptimal.

But the points I made against implicit RAII were not that it can’t be made to work, and were not a claim that it isn’t convenient and doesn’t prevent bugs compared to an unchecked manual cleanup (and I never advocated completely unchecked manual cleanup!), which just goes to show you have not even bothered to really understand what I wrote.

My points are about the obfuscation in code it creates and how that can lead to suboptimal outcomes and unreadable code in some cases. Surely you can fix bugs and mangle your code to make it all work, but you may make it even more unreadable. My point was an attempt to think about how to make it even better and how to solve the problem of ARC being incompatible with non-ARC, GC (as we agreed merging the two would bifurcate into a What Color is Your Function) and without having to take on the complexity of Rust’s borrow checker. While hopefully also gaining more transparency in the code.

You completely mischaracterize what I wrote after deleting the entire thread where I wrote all of that.


keean commented Jun 22, 2020

@shelby3 I have not said that your model is wrong, I am sure it's probably correct; it's just that your use of non-standard terminology makes it too much effort to decode.

I think your criticisms of RAII amount to a personal preference, and you don't seem to appreciate the real world benefits that I, and some others, have been trying to explain to you, which are rooted in practical experience not speculation.

What part of RAII on non-ARC requires (something like) Rust do you not understand? Even if I wanted implicit, I would have to use Rust if I don’t want ARC.

Well you can do RAII in C++ so that's not quite right, but I get what you are trying to say, and I agree with you. I don't think you have understood what I am trying to say, because I have not disputed this.


shelby3 commented Jun 22, 2020

@keean

I think your criticisms of RAII amount to a personal preference, and you don't seem to appreciate the real world benefits that I, and some others, have been trying to explain to you, which are rooted in practical experience not speculation.

Have I ever stated I don’t appreciate the benefits of RAII as compared to manual unchecked explicit release of resources? No! If you think I did, you simply failed to read carefully.

What part of RAII on non-ARC requires (something like) Rust do you not understand? Even if I wanted implicit, I would have to use Rust if I don’t want ARC.

Well you can do RAII in C++ so that's not quite right,

You did not even understand what I wrote! Hahaha. I was stating that I would have to punt to something like Rust only if I wanted to achieve something like RAII in a non-ARC scenario. To any extent that C++ is not using ARC, then it is employing move semantics like Rust.

but I get what you are trying to say, and I agree with you. I don't think you have understood what I am trying to say, because I have not disputed this.

I don’t think I have imputed that you disputed that claim — rather that you do not seem to incorporate into your understanding of what I am writing that the claim is one of the factors in my analysis. There is a logical distinction. Discussion with me can include some mind-bending orthogonal concerns, and it is difficult to keep up apparently. I think I know what Bill Gates felt like talking to his interviewer, as I cited before. Maybe the blame could be put on me for not having a fully fleshed out replacement for Rust or ARC and having it all written up in a tidy blog. But I never thought that discussion had to be only for the stage where ideas were fully formed and hatched.


Ichoran commented Jun 22, 2020

@shelby3 - Can you please just write a code example that actually truly genuinely shows your point that resource starvation is a likely outcome? Your existing example was disproved before you even posted it (whether or not you mentioned something to someone else later) because Rust lets you do what you said you couldn't do, and the resource is cleanable (if not clean because of Rust's drop-at-block-boundary semantics).

Like @keean I have spent a lot of time with RAII-based languages (C++ and Rust) and a lot of time with GC-based languages with using (mainly Scala).

When it comes to correctness in handling non-memory resources, empirically my observation is that it is far easier to do it in C++ and Rust. It's a constant problem with my Scala programs. It's basically never a problem in my Rust programs. (In C++ I had so many other problems that I can't recall how bad it was, just that various other things were worse.)

When it comes to explicit vs. implicit for things like this, I don't have a language that catches every case and insists that I be explicit about it, but I do have experience with parallel constructs (e.g. be-explicit-with-all-types, be-explicit-with-all-exceptions-thrown, be-explicit-with-all-returns) and in every case being explicit is a drawback. The reason is that attention is finite, and it wastes your attention on the obvious stuff that is working right every time (or which you can't even tell is working right because you're not a computer and can't trace every logical path in milliseconds like it can). Even saying raii val handle = new Handle is an extra hassle, an extra bit of type information that actually isn't useful in the vast majority of cases.

In addition to the extensive experience that this is the better way to do things (at least for someone with my cognitive capacities), there is also the critical invariant that @keean keeps mentioning over and over again, which you never have adequately and directly responded to, which is that RAII can prevent use-after-free (and use-before-initialization) errors.

So, write a code example that shows what you mean! So far you've failed every time. Everything is trivially fixable or isn't even the problem you claim it is. If it takes design to get it right, you additionally need to argue that this is harder than the alternative to get your proposal right. (So write two examples if you must in order to show the contrast.)

I am certain I am not understanding your objections without code; it seems as though @keean is also not.

Now, if your objection is just, "Rust does this all correctly, but I don't like Rust," then I understand that, but you keep saying things about risking resource starvation when my experience, and the logic of can-use-it-when-you-can-refer-to-it-can't-when-you-don't both argue that this is the way to avoid it.


shelby3 commented Jun 22, 2020

@Ichoran

Your existing example was disproved before you even posted it (whether or not you mentioned something to someone else later) because Rust lets you do what you said you couldn't do, and the resource is cleanable (if not clean because of Rust's drop-at-block-boundary semantics).

What part of I don’t want to use Rust which I have stated many times do you fail to assimilate?

So you did not disprove anything w.r.t. my stated context.

The discussion started with you claiming that Rust’s ability to manage resource lifetimes was a big win for the borrow checker. And I followed by saying I had been thinking about how I don’t want to implicitly conflate resource lifetimes with resource handle access lifetimes (which, @keean, is not the same as saying I want unchecked explicit finalization instead of RAII; come on, keep orthogonal points orthogonal and do not conflate them). And one of the reasons is that I don’t want to be forced to use Rust’s semantics and borrow checker (given that ARC is incompatible with my GC idea for ALPs). And the other reason is that I think perhaps being implicit is obfuscating and can at least lead to unreadable code, which was exemplified in your subsequent example wherein @sighoya did not immediately realize that the function could have thrown an exception leading to a leak without implicit RAII. In other words, implicit is not always conveying all the cases of implicit semantics to the programmer. Now whether that is a good or bad thing is a separate debate. But please come down off your “I disproved something” high horse.


shelby3 commented Jun 23, 2020

@shelby3 - Can you please just write a code example that actually truly genuinely shows your point that resource starvation is a likely outcome?

  1. That was not my only point against being implicit. I also cited unreadable code and incompatibility with my ALP GC, given that I don’t want to add the complexity of Rust’s lifetimes model.

  2. Never did I claim “likely outcome.” The point of such an example is to show how encapsulation can cause the resource release to be conflated with the release of other things (perhaps another resource) that aren’t to be released at the same time. The last use of the reference to a data structure which somewhere in the object graph references a resource handle (and especially opaquely, if you allow cyclic references) doesn’t necessarily correspond to the end of the last access of the resource handle or the resource. Of course there will probably always be ways to refactor code to separate the concerns, but my point is a conceptual one: the implicitness causes the programmer to not put much effort into keeping those concerns separate, and thus he can inadvertently land in a situation where resources get starved, after which he has to go hunting for why and then mangle his code logic to unconflate reference access lifetimes from resource lifetimes, which are not semantically isomorphic. And the reader of the code may have been none the wiser because of all the implicit action, such as when @keean deletes something from a Map and buried deep in the data structure of what was removed (not in his example but hypothetically) may be a reference to a resource. So maybe nobody was even thinking about when that resource actually gets released. It just happens opaquely and automagically.

Like @keean I have spent a lot of time with RAII-based languages (C++ and Rust) and a lot of time with GC-based languages with using (mainly Scala).

Remember you have pointed out that using is insufficient because it can’t check on resource lifetimes that escape the stack scope. So you are comparing RAII to an inferior paradigm. I was hoping to find some way to have checking without going all the way to Rust’s model, but I am now thinking it may not be possible. So I may have to paradigm-shift my entire approach to this thorny issue.

When it comes to correctness in handling non-memory resources, empirically my observation is that it is far easier to do it in C++ and Rust. It's a constant problem with my Scala programs. It's basically never a problem in my Rust programs. (In C++ I had so many other problems that I can't recall how bad it was, just that various other things were worse.)

This makes sense to me. But it is not a rebuttal to my desire to not be forced to use something as tedious (and with an ostensibly fundamentally unsound type system) as Rust. And punting to ARC will also not meet my ALP zero cost GC goals. Also ARC cannot collect cyclic references! I think @keean may have forgotten that, otherwise maybe he would not be pitching ARC as a slam dunk.

EDIT: also apparently Rust can’t do unrestrained cyclic references (although some special casing of data structures with cyclic references apparently can be accommodated):

https://stackoverflow.com/questions/20698384/what-lifetimes-do-i-use-to-create-rust-structs-that-reference-each-other-cyclica/20704252#20704252

https://www.reddit.com/r/rust/comments/6rzim3/can_arenas_be_used_for_cyclic_references_without/

When it comes to explicit vs. implicit for things like this, I don't have a language that catches every case and insists that I be explicit about it, but I do have experience with parallel constructs (e.g. be-explicit-with-all-types, be-explicit-with-all-exceptions-thrown, be-explicit-with-all-returns) and in every case being explicit is a drawback. The reason is that attention is finite, and it wastes your attention on the obvious stuff that is working right every time (or which you can't even tell is working right because you're not a computer and can't trace every logical path in milliseconds like it can).

I was not proposing to be explicit about every catch and finally, so the comparison is incorrect, as you admit below...

Even saying raii val handle = new Handle is an extra hassle, an extra bit of type information that actually isn't useful in the vast majority of cases.

Whether it is useful or not is an orthogonal debate, but at least it invalidates your comparison above.

In addition to the extensive experience that this is the better way to do things (at least for someone with my cognitive capacities), there is also the critical invariant that @keean keeps mentioning over and over again, which you never have adequately and directly responded to, which is that RAII can prevent use-after-free (and use-before-initialization) errors.

I have addressed it. You must have forgotten the entire discussion [my ranting] about the bifurcation of high-level and low-level. Need I elaborate? I presume you understand that a high enough level language does not have use-after-free (and use-before-initialization) errors.

So, write a code example that shows what you mean! So far you've failed every time.

If you continue to make false allegations like that, then the exchanges between us are going to become more combative. You take something completely out-of-context and then make claims which thusly don’t apply.

Everything is trivially fixable or isn't even the problem you claim it is.

Nope. With that claim you implicitly presume or claim Rust is trivial. And by transitive implication you claim that ARC can handle cyclic references because my context has clearly been that Rust is not trivial, so the only alternative to Rust’s lifetimes currently concretely known to work for RAII is ARC.

EDIT: and apparently Rust can’t implement unfettered cyclic references well either. So your hubris has been deflated. Hey I was in an amicable mood of discussion. And then you and @keean started to attack me with an attitude of hubris with confident, false claims about how wrong I am, how you disproved me, how I don’t understand terminology, how I don’t understand the benefits (and tradeoffs) of RAII, ARC, Rust, etc... Tsk, tsk.

There’s always some tradeoffs in every paradigm choice. We should not declare that all the possible quadrants of the design space have been enumerated because it is difficult to prove a negative.


shelby3 commented Jun 23, 2020

Replying to myself:

The last use of the reference to a data structure which somewhere in the object graph references a resource handle (and especially opaquely, if you allow cyclic references) doesn’t necessarily correspond to the end of the last access of the resource handle or the resource.

So it should be easy to make an example that shows how this could lead to resource starvation which would not be detected by RAII (neither Rust nor ARC).

Just store the resource handle in a data structure and stop accessing it (meaning you are finished with the resource handle), but continue to hold on to the reference to the data structure and access other members of that object. So neither Rust nor ARC will detect that the semantic resource life has ended before the lifetime of the data structure which contains it.
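A minimal Rust sketch of that scenario (the Session type, field names, and file name are all illustrative): the file is semantically finished after loading, yet RAII keeps it open for as long as the containing data structure lives.

use std::fs::File;
use std::io::Read;

// `Session` bundles a long-lived cache with a file handle used only at startup.
struct Session {
    config: File,       // resource handle, semantically done after load()
    cache: Vec<String>, // still accessed for the rest of the program
}

fn load(s: &mut Session) {
    let mut text = String::new();
    s.config.read_to_string(&mut text).unwrap();
    s.cache = text.lines().map(String::from).collect();
    // The file is semantically finished here, but RAII keeps it open:
    // its lifetime is tied to `Session`, not to its last use.
}

fn main() {
    let mut s = Session { config: File::open("app.conf").unwrap(), cache: vec![] };
    load(&mut s);
    // ... long-running work that only touches s.cache; the fd stays open throughout.
    println!("{} lines cached", s.cache.len());
}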

The only way around that is to refactor the code to remove the encapsulation (which has potentially other deleterious implications) or to explicitly delete a strong reference (which @keean points out is unsafe, because the resource handle object would still be accessible).

I mentioned this scenario in my prior example (such as quoted below) and @NodixBlockchain also mentioned it.

So @keean may try to argue that the above examples are contrived and not representative of other ways to refactor or think about implementing certain semantics. Yet my generative essence point (and this is why I say it is a meta-level conceptualization whose coherent understanding can not be entirely conveyed in code) is to realize that Record may have more than one functionality, and other references to it may come to exist dynamically due to other functionalities the data structure serves which may be entirely orthogonal to its functionality of also holding a weak resource handle. You may myopically attempt to claim that you can always refactor code to make all concerns orthogonal, but we simply know that’s impossible in programming, or at least where it is possible it can cause the programmer to so obfuscate the essential logic of the code that the code becomes unreadable, unmaintainable, and thus more likely to contain or accumulate (over time) bugs.

Essentially what I am saying is that the optimal resource lifetime is a separate concern from ARC lifetimes and by tying them together you discourage the programmer from modeling the optimal in his reasoning. Or you force the programmer to refactor code to maintain semantic isomorphism such that the code obfuscates the essential intent of the code.

@Ichoran what have you disproved? 🤦

EDIT: even just general discussions about memory leaks apply to why ARC can leak resource lifetimes if we conflate them with ARC (or for that matter RAII as implemented in Rust, because the lifetimes checker can’t resolve the last paragraph below, which was my point all along):

https://www.lucidchart.com/techblog/2017/10/30/the-dangers-of-garbage-collected-languages/

Memory can still leak

The novice programmer may be misled into believing garbage collection prevents all memory leaks, but this is not the case. Although garbage collection prevents many types of memory leaks, it doesn’t prevent all of them.

In automatic reference counting systems, such as Perl or Objective-C, memory is leaked whenever there are cyclical references, since the reference count is never decremented to zero. The solution in these systems is to break the cycle by specifying that at least one of the references is a “weak” reference, which doesn’t prevent the object from getting garbage collected.

But even in languages with mark-and-sweep garbage collection where cyclical references are correctly garbage collected, such as Java, Javascript, and Ruby, there are still several ways to leak memory. These leaks occur when objects are still reachable from live objects but will never be used again. There are a number of ways this could happen.
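To make the cyclical-reference leak concrete, here is a minimal Rust sketch (illustrative, not taken from the quoted article): two reference-counted nodes that strongly own each other never reach a zero count, so neither destructor runs.

use std::cell::RefCell;
use std::rc::Rc;

// Classic RC leak: two nodes that own each other strongly.
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

impl Drop for Node {
    fn drop(&mut self) { println!("freed"); } // never prints below
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(None) });
    *a.other.borrow_mut() = Some(Rc::clone(&b));
    *b.other.borrow_mut() = Some(Rc::clone(&a));
    // Both counts are 2; dropping `a` and `b` only brings them to 1,
    // so neither destructor runs and the memory (or resource) leaks.
}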


shelby3 commented Jun 23, 2020

So now perhaps you can understand my perspective.

Given that I can’t have RAII in Task because:

  1. I don’t want the tsuris of Rust’s lifetimes, which includes even the inability to accommodate unfettered cyclic references, in addition to numerous other points that have been made, including that we have to punt to ARC in Rust when our dynamic needs outstrip what static lifetimes can model, and @keean claiming he can’t implement certain algorithms without unsafe code. @keean would slices not help?

  2. I don’t want the loss of performance of ARC (as compared to my novel GC in my ALP design conceptualization) nor do I want to leak cyclic references as ARC does.

And given that RAII has semantic vulnerabilities to resource starvation and implicit hiding of resource lifetimes as I have explained, then any solution I come up with could also have some vulnerabilities to resource starvation and not be at a disadvantage to RAII in every case.

(to be continued after I sleep with an explanation of my paradigm-shift idea, which isn’t that far from the original idea I was attempting to explain since yesterday)


keean commented Jun 23, 2020

@Ichoran To summarise my comments on how we could do better than Rust, I think lifetime erasure is a problem in Rust. I propose a system that uses reference counting as its semantics, and the type system is then used to provide static guarantees over this. Where static behaviour can be proven, the RC will be optimised out. The reason for ARC rather than Mark-Sweep memory management is that the semantics with destructors will be the same whether the compiler statically manages the memory or uses runtime/dynamic memory management. This allows using RAII consistently. The first pervasive simplification would be when there is only one reference (effectively an owning reference). In effect we have the following kinds of reference (a rough Rust analogy is sketched after the notes below):

Owning references:
  • Unique Reference (like a C++ unique pointer)
  • Shared Reference (like a C++ shared pointer; needs ARC)

Non-owning references (better name?):
  • Strong Reference (where the compiler can statically prove the reference lifetime is shorter than the resource lifetime)
  • Weak Reference (in all other cases, where the required proof for a strong reference cannot be made).

Notes:

  • A unique reference is just a special case of a shared reference, where the RC = 1
  • We don't have to do any optimisation, we could just use weak references everywhere, which would be safe. So we can start with simple escape analysis for strong references.
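For comparison only, these four kinds map loosely onto existing Rust types; a minimal, non-authoritative sketch (FileHandle is a stand-in for a resource):

use std::rc::{Rc, Weak};

struct FileHandle { fd: i32 } // stand-in for a non-memory resource

fn main() {
    // Owning, unique: exactly one owner; freed deterministically at scope exit (the RC = 1 case).
    let unique: Box<FileHandle> = Box::new(FileHandle { fd: 3 });

    // Owning, shared: reference counted; freed when the last owner drops (needs RC/ARC).
    let shared: Rc<FileHandle> = Rc::new(FileHandle { fd: 4 });

    // Non-owning "strong": a borrow the compiler proves cannot outlive the resource.
    let strong: &FileHandle = &shared;

    // Non-owning weak: must be checked at each use, because the resource may be gone.
    let weak: Weak<FileHandle> = Rc::downgrade(&shared);
    if let Some(h) = weak.upgrade() { println!("fd {} still live", h.fd); }
    println!("unique fd {}, strong fd {}", unique.fd, strong.fd);
}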

So the idea is, unlike Rust we start with memory safe runtime semantics, and we optimise performance when we can make compile time proofs. So unlike Rust we don't need heuristic rules to keep the expressive power in the language, and static optimisations can be restricted to only those places we can actually prove the optimisation is safe.

This also avoids two things I don't like about Rust. There is no safe/unsafe code distinction, just performant and less performant code, and there is no source code distinction between ARC and non-ARC code. This really helps with generics because we don't need to provide two different versions of generic functions to cope with ARC and non-ARC data.

@shelby3 If you want to use Mark Sweep style memory management, you would have to avoid static destructors to allow the compiler to optimise between runtime/dynamic memory management and static memory management with no change in the semantics. So the alternative architecture would be Mark Sweep with explicit destructor calls for non-memory resources.

My hypothesis, which could be wrong, is that enough code will optimise with static memory management that the performance difference between ARC and MS will not be noticeable. I think some people will prefer the RAII approach, and if we can negate the performance penalty of ARC with static management optimisations then that will be an interesting language.

I think both of these languages (RAII+ARC and ExplicitDestructor+MS) will be a lot simpler than Rust because we can hide all the lifetimes from the programmer, because we have safe runtime semantics with RC or MS, and then static lifetime management is an optimisation. We can implement increasingly sophisticated lifetime analysers without changing the semantics of the language, something Rust cannot do because it differentiates between dynamically managed resources (ARC) and statically managed resources in the language.


Ichoran commented Jun 23, 2020

@shelby3 - You can use std::sync::Weak to make cyclic references safely and with no lifetimes (use move semantics). You can use only safe Rust, have no memory leaks, and good (but not zero-overhead) runtime. But it's a pain. You have to manually construct things without cycles by imposing an orientation (and also hang on to the root so it doesn't get collected).
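A minimal sketch of the orientation @Ichoran describes, using std::sync::{Arc, Weak} (the Node shape and field names are illustrative): strong edges point one way and back edges are weak, so the cycle cannot keep itself alive.

use std::sync::{Arc, Mutex, Weak};

// A doubly linked pair: forward edges are strong (Arc), back edges are weak,
// so the cycle cannot keep itself alive.
struct Node {
    value: i32,
    next: Mutex<Option<Arc<Node>>>,
    prev: Mutex<Weak<Node>>,
}

fn main() {
    let a = Arc::new(Node { value: 1, next: Mutex::new(None), prev: Mutex::new(Weak::new()) });
    let b = Arc::new(Node { value: 2, next: Mutex::new(None), prev: Mutex::new(Weak::new()) });
    *a.next.lock().unwrap() = Some(Arc::clone(&b)); // a -> b (strong)
    *b.prev.lock().unwrap() = Arc::downgrade(&a);   // b -> a (weak back edge)

    // Traversing the weak edge requires an upgrade check at runtime.
    if let Some(back) = b.prev.lock().unwrap().upgrade() {
        println!("b's prev is {}", back.value);
    }
} // dropping `a` and `b` frees both nodes; no leak despite the cycle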

The downside of RAII is basically the opposite of what you're saying. It's not that it has semantic vulnerabilities. It's amazing at avoiding vulnerabilities compared to everything else that anyone has come up with. However, it comes with some awkwardness in that you have to pay attention to when things are accessible and get rid of them as soon as they're not. This is a heavy burden compared to standard GCed memory management where things get lost whenever, and then eventually it's noticed at runtime and cleaned up.

If one doesn't mind the conceptual overhead of having both systems active at the same time, one could have the best of both worlds. A resource could either be declared in another way, e.g. val x = 7; var x = 12; resource val x = 19, at which point RAII semantics would work for it. If put into a struct, the struct would have to be a resource, etc. Or it could be a smart pointer type, Resource<X>. And then everything that wasn't a resource wouldn't have to care about when it appeared and disappeared; it would be specific to that smart pointer type (or declaration).

Alternatively, one can have general-purpose resource-collecting capability in addition to memory. You'd have to provide some handles to detect when resources are growing short, and adjust the GC's mark-and-sweep algorithm to recognize the other kinds of resources so they could be cleaned up when they grow short without having to do a full collection (though generally the detection is the tough part anyway). Then every resource would act like GC--you'd never really know quite when they'd be cleaned up, but whenever you started to get tight they'd be freed. Sometimes this is good enough. Sometimes it's risky. (E.g. not closing file handles can increase the danger of data loss.)

Regarding how to make RAII fail:

Just store the resource handle in a data structure and stop accessing it (meaning you are finished with the resource handle), but continue to hold on to the reference to the data structure and access other members of that object.

Yes, absolutely. If you hang on to a resource on purpose by sticking it in a struct then, sure, it's not going to get cleaned up because you might use it again.

Any resource can be held onto that way--actually finish using it, but don't say so, and require your computer to solve the equivalent of the halting problem in order to determine whether it'll actually get used again.

If you have users who are using stuff that is a resource without even knowing that it's a limited resource, and are making this kind of mistake a lot, then yeah, okay, I see that they might need some help. Having the type system yell at them when they create it to make sure they know what they're getting into is perhaps a plus. ("I opened a file? And I can't open every file at the same time, multiple times, if I want to? Golly gee, these computers aren't so flexible as everyone makes them out to be!")

If you have very slightly more canny users, they presumably won't do that. They'll learn what things use scarce resources, and use whatever the language provides for them to release them when they need to be.


Ichoran commented Jun 23, 2020

@keean - The cognitive overhead in Rust of avoiding Arc cycles is not entirely negligible--data structures where you used to not have to care end up being a mix of Arc and Weak. Do you have a plan for how to avoid that?


keean commented Jun 23, 2020

@Ichoran

The cognitive overhead in Rust of avoiding Arc cycles is not entirely negligible

Yes, owning pointers must form a Directed Acyclic Graph, enforced by the type checker.


shelby3 commented Jun 23, 2020

@keean any feedback yet from Github?

If we lose that entire issues thread #35, that is somewhat catastrophic to my work on the new PL. There were so many important comment posts that I had linked to from my grammar file, which contained various explanations of various issues in the PL design space.

I know you have that backup you made 2 years ago. At the moment I still have a copy of that #35 issues thread still open in my web browser. So in theory I could start copying (at least my) hundreds of posts out of the thread (or at least the ones which were added or changed since your backup), but that would be a significant manual undertaking, unless perhaps I figured out how to write some scraper in JavaScript or a browser plugin.

Does Github have a paid support level? Should we pay to get the service we need? How much would it cost? I might be willing to chip in to get that issues thread restored.

Also it may be the case that if they restore, they do it from a nightly backup so perhaps we may still lose some of the last comment posts. So the sooner I could know, the more chance that the copy open in my browser window will still be there, so I can recover the last comment posts as may be necessary. So what I am saying is can you push harder on Github support for feedback sooner?


keean commented Jun 23, 2020

@shelby3 nothing heard back yet. It's probably sensible to make sure you don't lose what you have, but hold off posting it up yet.


shelby3 commented Jun 23, 2020

@keean

I haven’t read the context yet, but I actually thought this point might be raised so I was thinking about this already just before I drifted off to sleep...

@Ichoran

The cognitive overhead in Rust of avoiding Arc cycles is not entirely negligible

Yes, owning pointers must form a Directed Acyclic Graph, enforced by the type checker.

That Rust essentially prevents one from creating willy-nilly cyclic references isn’t necessarily a good thing. It’s a limitation which may impact degrees-of-freedom. I have not fully explored what that limitation means in practice, but I presume that not restricting cyclic references is preferable in a general purpose PL, if it doesn’t incur some unacceptable tradeoff given the level of performance priority required for the use case (for example, in a high-level language the performance priority is somewhat lowered relative to an increased need for degrees-of-freedom, flexibility/ease-of-expression, unobtrusive safety guarantees, etc).

And ARC doesn’t prevent cyclic references yet can’t collect them. And I’m aware of partial tracing algorithms that attempt to cleanup cyclic references, but these don’t guarantee to catch them all without perhaps the extreme of essentially trashing the overall performance.

Also for any reentrant or multithreaded code, ARC requires a multithread synchronization primitive on each modification of the reference count (see the sketch after the links below). There are other issues with performance which have some workarounds in some designs, but various tradeoffs:

https://en.wikipedia.org/wiki/Reference_counting#Dealing_with_inefficiency_of_updates
https://news.ycombinator.com/item?id=10151176
https://www.quora.com/Why-doesnt-Apple-Swift-adopt-the-memory-management-method-of-garbage-collection-like-Java-uses
https://softwareengineering.stackexchange.com/questions/285333/how-does-garbage-collection-compare-to-reference-counting
https://www.quora.com/Why-dont-other-modern-programming-languages-support-Automatic-Reference-Counting-like-Objective-C-Swift-but-use-garbage-collection-instead
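To illustrate the synchronization point above in Rust terms (a sketch, using Rust purely for comparison): Rc uses a plain non-atomic counter and the compiler forbids sending it across threads, whereas every Arc clone and drop performs an atomic read-modify-write.

use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Rc: non-atomic refcount; cheap, but !Send, so it cannot cross threads.
    let single = Rc::new(vec![1, 2, 3]);
    let _also = Rc::clone(&single); // plain (non-atomic) integer increment

    // Arc: refcount updates are atomic so counts stay correct across threads.
    let shared = Arc::new(vec![1, 2, 3]);
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let s = Arc::clone(&shared); // atomic increment here
            thread::spawn(move || s.iter().sum::<i32>())
        })
        .collect();
    for h in handles {
        println!("{}", h.join().unwrap());
    }
}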

So this seems to point towards Rust as the only solution which is more performant while still admitting some cases of (but AFAIK not truly unrestrained) cyclic references, yet I would prefer to find a solution which is nearly as performant and zero cost abstraction as Rust (including lowest memory footprint), without any of the complexity of the lifetime tracking (annotations, limitations, unsoundness, etc) and which can fully integrate with cyclic references as tracing GC does.

I think I may have found another quadrant in the design space. There will be tradeoffs of course — there’s always some caveat. AFAICS, our job is to identify the tradeoffs and make engineering choices.


shelby3 commented Jun 23, 2020

@keean

@shelby3 nothing heard back yet. It's probably sensible to make sure you don't lose what you have, but hold off posting it up yet.

In the meantime could you ask the community what Github’s typical capabilities and reaction is to such a support request and what is the best way to go about seeking any remedies which may be available?

https://github.community/

Are you sure the history is not available via an API? Surely the issues threads are Git versioned, including deletes? Perhaps the community may have a solution for us which doesn’t require action from Github Support, whose webpage for opening a support ticket states they’re currently understaffed due to “COVID-19” [China’s psyops revenge for Hong Kong’s intransigence and the U.S.A. turning against their unfettered mercantilism[1], among other Machiavellian goals of our overlords].

[1] Skip to p.265 of Bolton’s book to read the chapter about how China manipulated Trump. Keep in mind that Bolton is a war hawk and his book is written to inflame conservatives. Nevertheless I think it’s possibly also a valid indictment of China’s internal politics.


keean commented Jun 23, 2020

@shelby3 When I searched for issues, it seems they normally restore deleted stuff, the next day for some people, but they say they are responding more slowly due to the current situation.


shelby3 commented Jun 23, 2020

@keean

@shelby3 When I searched for issues, it seems they normally restore deleted stuff, the next day for some people, but they say they are responding more slowly due to the current situation.

That’s a major relief. Thank you. I hope so.

@NodixBlockchain

@keean

@shelby3 When I searched for issues, it seems they normally restore deleted stuff, the next day for some people, but they say they are responding more slowly due to the current situation.

That’s a major relief. Thank you. I hope so.

I have the posts in mail from 03 Jan 2018.

@NodixBlockchain

@Ichoran To summarise my comments on how we could do better than Rust, I think lifetime erasure is a problem in Rust. I propose a system that uses reference counting as its semantics, and the type system is then used to provide static guarantees over this.

It's the idea of what I'm doing with the runtime: it can switch from RC to MS and still keep the RC semantics, which can help track subtle memory bugs and make the required memory patterns more obvious.


shelby3 commented Jun 23, 2020

@Ichoran

@shelby3 - You can use std::sync::Weak to make cyclic references safely and with no lifetimes (use move semantics). You can use only safe Rust, have no memory leaks, and good (but not zero-overhead) runtime. But it's a pain. You have to manually construct things without cycles by imposing an orientation (and also hang on to the root so it doesn't get collected).

The strong and weak references paradigm is a PITA. Although perhaps it should be an option, I’d hope (at least for a high-level, easy-to-use PL) not to choose a paradigm where it’s the only option, because it’s another deviation from the ideal degrees-of-freedom, being sort of another What Color Is Your Function bifurcation — which also describes, at the generative essence, how Rust’s alternative of not using weak references ostensibly means an inability to have any cyclic references.

The downside of RAII is basically the opposite of what you're saying.

Again AFAICS you’re (perhaps unwittingly due to inadvertent choice of wording and not with tendentious nor combative intent[1]) making false allegations by insinuating that I didn’t recognize the downside of the tradeoffs (e.g. “heavy burden”) incurred to implement RAII and/or that I claimed that RAII has the downside of not improving upon unchecked lifetimes semantics, which I did not claim. Let me quote myself again:

But the points I made against implicit RAII were not that it can’t be made to work, and were not a claim that it isn’t convenient and doesn’t prevent bugs compared to an unchecked manual cleanup (and I never advocated completely unchecked manual cleanup!), which just goes to show you have not even bothered to really understand what I wrote.

My points are about the obfuscation in code it creates and how that can lead to suboptimal outcomes and unreadable code in some cases. Surely you can fix bugs and mangle your code to make it all work, but you may make it even more unreadable. My point was an attempt to think about how to make it even better […]

[…]

I think your criticisms of RAII amount to a personal preference, and you don't seem to appreciate the real world benefits that I, and some others, have been trying to explain to you, which are rooted in practical experience not speculation.

Have I ever stated I don’t appreciate the benefits of RAII as compared to manual unchecked explicit release of resources? No! If you think I did, you simply failed to read carefully.


It's not that it has semantic vulnerabilities.

Yet it does have semantic vulnerabilities which you finally admit below.

It's amazing at avoiding vulnerabilities compared to everything else that anyone has come up with.

I never claimed otherwise, although I wouldn’t go quite so far as the hyperbole[1] “amazing”, because it doesn’t prevent naive programmers from “live leaking” (i.e. not just forgetting to deallocate but deallocating too late) and it does lead to opaque implicitness — remember I am trying to design a PL that could possibly be popular, which means, as I have documented, hordes of naive, young, male programmers with less than 5 years of experience (they significantly outnumber the experts). Does wanting to improve upon RAII have to mean (conflate) in your mind that I think the implicit cleanup of RAII is the worst thing ever invented?

I attempt to improve upon RAII because up until now the only ways to achieve it have been the all-or-nothing tsuris of Rust or the limitations/downsides of ARC. Meaning that although I think it was an improvement in some respects over the paradigm the programmer has been offered for example with a MS (aka tracing) GC, the tradeoffs of using it are countervailed to some extent by significant tradeoffs in other facets of the PLs that offer RAII, as @keean, you and I have discussed. And especially so when we consider the aforementioned demographic I might be targeting, and in general the point I have made about a bifurcation between high-level and low-level PLs, taken in the context of my desire to focus first (for now) on the high-level and making it as fun and easy to work with as possible by not conflating it with low-level concerns. Note there are many orthogonal statements above, so (to all readers) please don’t conflate them into one inkblot. Give me some benefit of understanding please. Our discussions should be about trying to enumerate the ranges of the PL design space and raising all of our understandings and elucidations of the various possibilities which have already and have not yet been explored.

However, it comes with some awkwardness in that you have to pay attention to when things are accessible and get rid of them as soon as they're not.

Okay we are making progress towards some consensus of understanding and claims.

This is a heavy burden compared to standard GCed memory management where things get lost whenever, and then eventually it's noticed at runtime and cleaned up.

Agreed as I understand that Rust forces the programmer to prove certain lifetime invariants. I just want to add and note that I had already mentioned that Rust can also allow a “live resource leak”. My point being that the “heavy burden” paid for RAII as implemented in Rust does not resolve all risk of a “live resource leak”.

If one doesn't mind the conceptual overhead of having both systems active at the same time, one could have the best of both worlds. A resource could either be another way to declare something, e.g. val x = 7; var x = 12; resource val x = 19 at which point RAII semantics would work for it.

AFAICS, merging RAII with a MS (aka tracing) style GC’ed language would incur the same lifetime-proving “heavy burden” as Rust (because ARC can’t be merged without creating a What Color Is Your Function bifurcation) unless, as @keean’s post today is leading towards (@keean you took some of my design idea from my mind while I was sleeping but you’re not all the way there), the design instead becomes only a best effort and not a guarantee.

Alternatively, one can have general-purpose resource-collecting capability in addition to memory. You'd have to provide some handles to detect when resources are growing short, and adjust the GC's mark-and-sweep algorithm to recognize the other kinds of resources so they could be cleaned up when they grow short without having to do a full collection (though generally the detection is the tough part anyway). Then every resource would act like GC--you'd never really know quite when they'd be cleaned up, but whenever you started to get tight they'd be freed. Sometimes this is good enough. Sometimes it's risky. (E.g. not closing file handles can increase the danger of data loss.)

I guess we could teach a GC to prioritize cleaning up certain resources when they become starved, but this is not research I am aware of, and it sounds to me like it would have many pitfalls, including throttling throughput. I am not contemplating this design. My design idea essentially parallels what @keean wrote today but combined with my new ALP-style GC which requires no MS (no tracing).

Regarding how to make RAII fail:

Just store the resource handle in a data structure and stop accessing it, meaning you are finished with the resource handle, but continue to hold on to the reference to the data structure and access other members of that object.

Yes, absolutely. If you hang on to a resource on purpose by sticking it in a struct then, sure, it's not going to get cleaned up because you might use it again.

Thank you. Finally we presumably have some level of consensus that I am not completely batshit crazy, don’t “you've failed every time”[sic] and was not writing complete nonsense.

Any resource can be held onto that way--actually finish using it, but don't say so, and require your computer to solve the equivalent of the halting problem in order to determine whether it'll actually get used again.

Which BTW is the generative essence of why Rust’s lifetime checker has no prayer of ever being able to analyze every case where something leaks semantically or something is safe but Rust thinks it is unsafe.

And remember I wrote in my ranting about Rust that Rust can’t catch all semantic errors. (Not implying you disagreed with that claim).

If you have users who are using stuff that is a resource without even knowing that it's a limited resource, and are making this kind of mistake a lot, then yeah, okay, I see that they might need some help. Having the type system yell at them when they create it to make sure they know what they're getting into is perhaps a plus. ("I opened a file? And I can't open every file at the same time, multiple times, if I want to? Golly gee, these computers aren't so flexible as everyone makes them out to be!")

I believe you at least slightly mischaracterize the vulnerability. The convenience of being implicit (just declare a destructor and fuhgeddaboudit) can lead to not paying careful attention.

@keean has for example documented how not-so-naive programmers leak in JavaScript with implicit closures. Implicitness encourages not thinking.

If you have very slightly more canny users, they presumably won't do that. They'll learn what things use scarce resources, and use whatever the language provides for them to release them when they need to be.

They will make fewer errors, but they can still be snagged. Don’t tell me you never have been, because I will not believe you.

[1] It’s understandable that possibly my rants against Rust have presumably caused (perhaps subconsciously) you to push-back with more vigor than you would had I not expressed what you probably perceive to be unnecessarily too aggressive, flippant, one-sided, uncivil, discourteous, incendiary, emotionally charged, unprofessional, community-destroying, self-destructive, etc..

@shelby3
Author

shelby3 commented Jun 23, 2020

@keean

I propose a system that uses reference counting as its semantics, and the type system is then used to provide static guarantees over this. Where static behaviour can be proven, then the RC will be optimised out. The reason for ARC rather than Mark-Sweep memory management is then the semantics with destructors will be the same whether the compiler statically manages the memory or uses runtime/dynamic memory management. This allows using RAII consistently.

[…]

So the idea is, unlike Rust we start with memory [i.e. resource leak and use-after-free] safe runtime semantics, and we optimise performance when we can make compile time proofs. So unlike Rust we don't need heuristic rules to keep the expressive power in the language, and static optimisations can be restricted to only those places we can actually prove the optimisation is safe.

This also avoids two things I don't like about Rust. There is no safe/unsafe code distinction, just performant and less performant code, […]

@keean I like that you were trying to paradigm-shift in a way that is somewhat analogous to the paradigm-shift I’m contemplating, but unfortunately there’s a flaw in your design idea.

[…] and there is no source code distinction between ARC and non-arc code. This really helps with generics because we don't need to provide two different versions of generic functions to cope with ARC and non-ARC data.

The flaw is that, contrary to your claim of avoiding a What Color Is Your Function-like bifurcation, there’s still such a bifurcation in your design.

This will still be a bifurcation w.r.t. generic functions that can’t be monomorphised statically, because non-ARC and ARC data can’t fully interoperate.[c.f. correction] You can assign non-ARC data to a reference in an ARC data structure, but you can’t allow assignment between incompatible non-ARC and ARC data because the write barrier for ARC data operates on the refcount.

You may perhaps consider that flaw to be a worthwhile tradeoff, but frankly, given that ARC leaks cyclic references, I think it’s a non-starter for me to even consider.
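
To make that cyclic-reference leak concrete for readers, here is a minimal sketch in Rust (using Rc, Rust’s non-atomic flavor of reference counting, purely as an illustration; the names are made up): two nodes that refer to each other never see their refcounts reach zero, so neither is ever freed.

// Illustrative sketch of a reference-counting cycle leak.
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(None) });
    // Create the cycle: a -> b and b -> a.
    *a.other.borrow_mut() = Some(Rc::clone(&b));
    *b.other.borrow_mut() = Some(Rc::clone(&a));
    println!("strong count of a = {}", Rc::strong_count(&a)); // prints 2
    // When a and b go out of scope, each node still holds a strong
    // reference to the other, so neither refcount ever reaches zero.
}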

And I don’t think your idea will be as performant as my paradigm-shift which I will attempt to explain today.

@shelby3 If you want to use Mark Sweep style memory management, you would have to avoid static destructors to allow the compiler to optimise between runtime/dynamic memory management and static memory management with no change in the semantics. So the alternative architecture would be Mark Sweep with explicit destructor calls for non-memory resources.

My ALP design (c.f. also) doesn’t employ MS nor tracing. It’s a bump pointer heap (i.e. nearly as efficient to allocate as static stack frames) which is released in entirety with a single assignment to reset the bump pointer in between the processing of each incoming message of the ALP.
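
For readers, a minimal sketch (in Rust, with illustrative names; this is not the actual ALP implementation) of what such a bump pointer heap looks like: allocation is just a pointer increment, and releasing the entire heap between messages is the single assignment mentioned above. The point is that there is no per-object bookkeeping at all.

// Sketch of a bump pointer heap.
struct BumpHeap {
    buf: Vec<u8>,
    next: usize, // the bump pointer: offset of the next free byte
}

impl BumpHeap {
    fn with_capacity(bytes: usize) -> Self {
        BumpHeap { buf: vec![0u8; bytes], next: 0 }
    }

    // Allocate `size` bytes by incrementing the bump pointer. A real
    // implementation would also handle alignment and growth.
    fn alloc(&mut self, size: usize) -> Option<&mut [u8]> {
        if self.next + size > self.buf.len() {
            return None; // arena exhausted
        }
        let start = self.next;
        self.next += size;
        Some(&mut self.buf[start..start + size])
    }

    // Release everything at once: the single-assignment reset that
    // runs between the processing of incoming messages.
    fn reset(&mut self) {
        self.next = 0;
    }
}

fn main() {
    let mut heap = BumpHeap::with_capacity(1024);
    let msg_scratch = heap.alloc(16).unwrap();
    msg_scratch[0] = 42; // ... process one message ...
    heap.reset(); // the entire heap is released in one assignment
}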

Although you’re correct that in lieu of the guarantee of RAII via ARC (even Rust’s lifetime model requires ARC because it can’t model everything statically!), some explicitness is required, it will not preclude some of the static analysis you were mentioning for your paradigm-shift idea — it just won’t be a 100% RAII guarantee. But here’s the kicker which you were missing in your presumptions about what I am formulating: the guaranteed fallback in my ALP idea is still a 100% check, and that typically occurs reasonably soon, when the ALP returns to process the next message in the queue (although this will require overhead in addition to the single assignment for resetting the heap, but this will only occur for those not explicitly tagged as RAII, which of course the compiler statically checks). If the programmer is doing something which can cause a long pause, they ought to make sure that either the statically proven RAII was implicitly carried out (i.e. destructor called, which is the reason for some terse explicit keyword) or they should manually finalize. EDIT: There appears to be another place in the design space.

My hypothesis, which could be wrong, is that enough code will optimise with static memory management that the performance difference between ARC and MS will not be noticeable.

Your ARC will still be less performant than my bump pointer heap with single-assignment reset (no mark, no sweep, no tracing, nothing, aka nearly “zero cost abstraction”). Mine should approach Rust’s performance even more closely than Go does, and without the tsuris of the “heavy burden”.

I think some people will prefer the RAII approach, and if we can negate the performance penalty of ARC with static management optimisations then that will be an interesting language.

Well if not for the aforementioned bifurcation flaw I think you might be onto another interesting possibility in the design space. But that flaw taken together with leaking of cyclic references significantly dampens my interest. Also the performance would not be as good as what I’m contemplating. And also I have explained why I think implicit RAII is not an unequivocal guarantee of timely release of resources nor an inviolable guarantee against resource leaks. Also I think RAII encourages programmers to not optimize their resource lifetimes as I wrote, “The convenience of being implicit (just declare a destructor and fuhgeddaboudit) can lead to not paying careful attention.”

I think both of these languages (RAII+ARC and ExplicitDestructor+MS) will be a lot simpler than Rust because we can hide all the lifetimes from the programmer,

I agree except for the bifurcation flaw in your idea. I think my idea is superior although the guarantee on timely cleanup is different (but arguably better!).

because we have safe runtime semantics with RC or MS, and then static lifetime management is an optimisation.

Agreed.

We can implement increasingly sophisticated lifetime analysers without changing the semantics of the language, something Rust cannot do because it differentiates between dynamically managed resources (ARC) and statically managed resources in the language.

Unfortunately your idea must also differentiate, per the flaw I mentioned. But AFAICS my idea does not!

@shelby3
Author

shelby3 commented Jun 23, 2020

@NodixBlockchain

@keean

@shelby3 When I searched for issues, it seems they normally restore deleted stuff, next day for some people, but they say they are responding more slowly due to the current situation.

That’s a major relief. Thank you. I hope so.

I have the posts in mail from 03 Jan 2018

Does it have the link for each post? Because I refer to the posts by their URLs, so finding which posts I cited would be implausible without the links.

@keean
Owner

keean commented Jun 23, 2020

@shelby3

This will still be a bifurcation w.r.t. to generic functions that can’t be monomorphised statically

The source code of the function will be the same whether it can or cannot be monomorphised statically, so there will be no bifurcation of source. In some cases where we can prove static management of the resource is sufficient we will emit a different function, compared to those where the optimisation cannot be made. This second part is trivially obvious, because if we did not emit different code depending on the use case, we would not be optimising anything, and instead we would just have a language with ARC memory management.

@shelby3
Author

shelby3 commented Jun 23, 2020

This will still be a bifurcation w.r.t. to generic functions that can’t be monomorphised statically

The source code of the function will be the same whether it can or cannot be monomorphised statically, so there will be no bifurcation of source. In some cases where we can prove static management of the resource is sufficient we will emit a different function, compared to those where the optimisation cannot be made.

You will not [bifurcate the source code], but you will [bifurcate the emitted code].

Readers @keean agreed with me.

This second part is trivially obvious, because if we did not emit different code depending on the use case, we would not be optimising anything, and instead we would just have a language with ARC memory management.

So all functions will be bifurcated. Which is what I said.[@keean is correct but with a potentially significant limitation on what can be optimized]

@shelby3
Author

shelby3 commented Sep 17, 2020

I proposed Pony-esque recover blocks for constructing immutables for Vlang, and perhaps also for any programming language I may create.

Also I want to correct my error, if I have ever (especially recently) implied or stated that Scala enforces immutability. Scala’s val only prevents reassignment, and does not make the referent object immutable. This is an egregious flaw, but it’s understandable because a programming language will need at least Pony’s recover block feature in order to initialize immutables. C.f. the linked proposal for more details.

@shelby3
Author

shelby3 commented Sep 22, 2020

The Death of Hype: What’s Next for Scala — A Solid Platform for Production Usage points out that the speed of Scala’s compiler has doubled in the past 3 years.

@keean wrote

My main concern about Scala is that it is a Kitchen-Sink language that has resulted in a really-complex type system. I agree with you that it is not opinionated. But from the above it should be clear that Scala does not really suffer from the 'Monad Problem' like Haskell does, because Scala is imperative.

Agreed, and Scala’s creator Martin Odersky admits in his recent talk about Scala 3 that his goal has been to create a unification of disparate programming paradigms (which you and I assert are ill-fit, although @Ichoran may disagree per the discussion in the Why Typeclasses #30 thread).

Odersky does mention some of the downsides which Scala 3 attempts to fix, but he doesn’t address the essence of too many paradigms in one PL:

Odersky admits when he discusses the new exports capability that inheritance is bad and should be avoided. So why keep it in the PL? Probably because Scala has its genesis and thus DNA as a research language.

Note I will come back to the remainder of what you wrote in that post, but first I will respond to the post you made before that, as follows. I quote the above here because I want to point out that I think Scala sacrifices the opportunity to be sufficiently opinionated to have an optimum balance of limited but sufficiently powerful abstractions for a simple language, per your point quoted below.

@keean wrote:

As you know my interests are more in developing a small/simple language with powerful abstraction capabilities.

Agreed. Yet it seems we differ (in the way we conceptualize) on what the correct abstractions should be in order to achieve that simple and thus popular language.

Starting with Stepanov’s ideas from "Elements" as a core idea. Something that lets you manipulate bits and bytes (write hardware drivers for example), yet allows safe coding at a higher level.

Powerful, mathematical or algebraically generative-essence abstractions can be too low-level and too literal. For example my recent epiphany obviating and sweeping away Rust’s literal and low-level encoding of a total order on exclusive writability, in exchange for a simpler abstraction by cleverly leveraging the fact that types exist.

A tradeoff has to be made because there’s no free lunch. We can choose abstractions which serve the 80/20 needs and arrive at an elegant, popular and simple PL, or we can attempt to be as literal and exhaustive as you ostensibly want to be and end up with another STL mess. Perfection can be the enemy of good.

I’m not claiming that Stepanov’s model of for example iterators is undesirable per se. Perhaps it was the attempt to have the STL perform as well as hand-coded LLL programming that is the culprit of the complexity.

I’m skeptical about whether it’s possible to design a PL that is optimal for the extremes of both HLL and LLL programming. Rust, C++ and other PLs have attempted to combine for example HLL generics, closures, etc. with LLL optimizations, and the result has been painful complexity, unsoundness, and extreme difficulty in staying within the intended paradigm without punting to for example inefficient and leaky ARC. In general, for example, low-level specializations seem to multifurcate generics into a noncomposable mess analogous to What Color Is Your Function?

However I’m also contemplating that Low Level Language Programming Is Considered Harmful and Antithetical To Optimization at least due to radical paradigm shifts on the long-term trend.

Something like V(lang) or Go(lang) is all I should need for non-extreme (i.e. not as performance and space optimized as C++ or Rust) LLL programming in most cases that don’t require the optimization of assembly language or extreme control over avoiding redundant copying, such as optimization of result value temporaries, as can be accomplished with C++ move semantics and perhaps also in Rust? V even allows pointer arithmetic in unsafe{} blocks, although that is not really even desirable for performance, not to mention being unsafe, and Go doesn’t make that mistake. Note I explained in the aforelinked that my ALP bump pointer heap idea combined with escape analysis may, in the non-extreme use case, ameliorate the necessity of further optimization of result value temporaries. Remember the power-law distribution aka Pareto principle aka the 80/20 rule: the performance-critical paths of the code are likely to be a very small percentage of the code base of an application.

I might be more enthusiastic about Rust for LLL coding if they hadn’t tried to compete with C++ and instead just made a better C, e.g. the complexity around closures, c.f. also and also. But Rust wants to be a HLL tool also, which thus afaics makes it extremely complex, as is also the case with C++ with the complexity lurking in the apparently unsound type system.

Eric Raymond wrote an unflattering opinion about Stepanov’s STL and C++’s overall complexity:

[…] After which I swore a mighty oath never to go near C++ again.

My problem with the language, starkly revealed by that adventure, is that it piles complexity on complexity upon chrome upon gingerbread in an attempt to address problems that cannot actually be solved because the foundational abstractions are leaky. It’s all very well to say “well, don’t do that” about things like bare pointers, and for small-scale single-developer projects (like my eqn upgrade) it is realistic to expect the discipline can be enforced.

Not so on projects with larger scale or multiple devs at varying skill levels (the case I normally deal with). With probability asymptotically approaching one over time and increasing LOC, someone is inadvertently going to poke through one of the leaks. At which point you have a bug which, because of over-layers of gnarly complexity such as STL, is much more difficult to characterize and fix than the equivalent defect in C. My Battle For Wesnoth experience rubbed my nose in this problem pretty hard.

What works for a Steve Heller (my old friend and C++ advocate) doesn’t scale up when I’m dealing with multiple non-Steve-Hellers and might end up having to clean up their mess. So I just don’t go there any more. Not worth the aggravation. C is flawed, but it does have one immensely valuable property that C++ didn’t keep – if you can mentally model the hardware it’s running on, you can easily see all the way down. If C++ had actually eliminated C’s flaws (that it, been type-safe and memory-safe) giving away that transparency might be a trade worth making. As it is, nope.

Also Eric blogged C++ Considered Harmful:

C++ is an overcomplexity generator. It was designed to solve what turned out to be the wrong problems; as a result, it lives in an unhappy valley between two utility peaks in language-design space, with neither the austere elegance of C nor the expressiveness and capability of modern interpreted languages. The layers, patches, and added features designed to lift it out of that valley have failed to do so, resulting in a language that is bloated, obfuscated, unwieldy, rigid, and brittle. Programs written in C++ tend to inherit all these qualities.

One of the commentators wrote:

The thing that really convinced me that C++ was a language whose time had passed was reading Scott Meyers’ books Effective C++, More Effective C++ and Effective STL, which could be seen as a long list of gotchas and workarounds that must be borne in mind when writing C++. Any language that required you to deal with this nonsense on a daily basis was clearly wasting its users’ time.

So I suspect Linus Torvalds is correct that C will remain the PL for operating systems if one has to routinely punt to unsafe in Rust and because I argued to @Ichoran upthread that even just one unsafe instance in your program incurs the potential liability of vacating all the guarantees of the type system (in various insidious ways).

My own direction seems currently based around Algebraic Effects, a row-polymorphic type system with HKT based on Prolog, Typeclasses, Actors (single threaded / arena things), Clojure's Epochal timeline, and Python's layout-is-syntax.

I appreciate your insight where you explained that typed row polymorphism is a response to one of Rich Hickey’s major criticisms of typing.

Afaics, Clojure's Epochal concept is essentially just immutability, made more efficient with persistent data structures that only duplicate changed data. Persistent data structures share (between threads) those items in the data structure which haven’t been modified. I suggested that each thread should inform other threads sharing the same data when a change is made to a private copy of the data. I wrote:

One possible model for mutating shared immutable data is for the exclusive owner to create a new mutated, immutable copy (perhaps via an efficient data structure that only copies the changed part which afaik is what Clojure employs), then send a message to threads to tell them to update the copy of the immutable data they are referencing.
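
A minimal sketch (in Rust, using Rc purely to illustrate the sharing; Clojure’s actual persistent structures are trie-based) of that structural sharing: the “mutated” copy allocates only the changed node and shares the unchanged tail with the original version.

// Two "versions" of a list share their unchanged tail.
use std::rc::Rc;

enum List {
    Nil,
    Cons(i32, Rc<List>),
}

fn sum(l: &List) -> i32 {
    match l {
        List::Nil => 0,
        List::Cons(x, rest) => x + sum(rest),
    }
}

fn main() {
    let shared_tail = Rc::new(List::Cons(2, Rc::new(List::Nil)));
    let v1 = List::Cons(1, Rc::clone(&shared_tail));  // original version
    let v2 = List::Cons(99, Rc::clone(&shared_tail)); // "mutated" copy
    println!("{} {}", sum(&v1), sum(&v2)); // 3 101
}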

I have yet to read a coherent argument in favor in the benefits-vs-drawbacks analysis of Algebraic Effects. I’m not claiming there isn’t one. Your prior attempts at explanation apparently haven’t stuck yet in my mind.

Regarding non-composition of Monads: we can replace monads with Algebraic Effects, which are composable.

Reminding readers that Algebraic Effects are the free monad and it’s argued that they’re a duplication of what can instead be achieved with typeclasses in your colleague Oleg’s final tagless encoding.

I quote:

That’s the basics of the free monad: it’s something we can wrap around an arbitrary type constructor (a F[_]) to construct a monad. It allows us to separate the structure of the computation from its interpreter, thereby allowing different interpretation depending on context.

So I suppose understanding that the free monad retains control-flow context might be the best hint to their justifiable utility of coding control-flow and interpretation as separate concerns, c.f. also, also, also, also and also.
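
As a toy illustration of that separation of concerns (in Rust, capturing the spirit rather than a faithful free-monad encoding; all names are made up): the computation is plain data, and two different interpreters give the same structure two different meanings.

// A "program" as data, plus two interpreters for it.
enum Program {
    Done,
    Say(String, Box<Program>),
}

fn run_print(p: &Program) {
    if let Program::Say(msg, next) = p {
        println!("{msg}"); // interpret by printing
        run_print(next);
    }
}

fn run_collect(p: &Program) -> Vec<String> {
    match p {
        Program::Done => Vec::new(),
        Program::Say(msg, next) => {
            let mut v = vec![msg.clone()]; // interpret by collecting
            v.extend(run_collect(next));
            v
        }
    }
}

fn main() {
    let prog = Program::Say("hello".into(), Box::new(Program::Done));
    run_print(&prog);
    assert_eq!(run_collect(&prog), vec!["hello".to_string()]);
}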

I hope to combine these features in a synergistic way (the whole is greater than the sum of the parts) embracing Python's "one way to do things". Something like a combination of Rust (using the Reference Counting memory model) at the low level and TypeScript (with HKT and Typeclasses) at the high level. However much simpler versions of both of these. So no explicit lifetimes at the low level (the RC memory model allows implicit lifetimes to statically optimise without creating red/blue function bifurcation for generics between static and dynamic memory management), and only Typeclasses/Algebraic effects at the high level (so no classes/interfaces).

We discussed that upthread with @Ichoran.

What needs more work is the Actor/Module system, can that be folded into the Typeclasses/Algebraic effects?

I’m still progressing on my ALP idea, but it’s not yet a holistically defensible design.

One of the problems I want to solve is the library problem with typed languages (something the Clojure talk mentions) where you may have a library function like "sort" that effectively calls back to your own code for "compare". The generics should be sufficient that you can change the type of compare without having to change any of the library code, also there should only be one implementation of compare and sort, not different ones for pure/impure compare, or different ones for static/dynamic memory management etc.

Agreed. And we’ve devised some ways to tackle the modularity #39 issue with typeclasses in light of Robert Harper’s criticism, “…the integers can be ordered in precisely one way (the usual ordering), but obviously there are many orderings (say, by divisibility) of interest…” Although I surmise you’re somewhat unsatisfied with violations of the total ordering required to ensure the abstract algebra.
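
To make the one-implementation goal concrete, a minimal Rust sketch using a trait in the role of a typeclass (illustrative names throughout): the library’s single sort never changes, and Harper’s point shows up as needing a wrapper type to get a second ordering for the same underlying type.

// The "typeclass": any type supplying lte() can be sorted.
trait MyOrd {
    fn lte(&self, other: &Self) -> bool;
}

// The single library implementation (insertion sort for brevity).
fn sort<T: MyOrd>(xs: &mut Vec<T>) {
    for i in 1..xs.len() {
        let mut j = i;
        while j > 0 && !xs[j - 1].lte(&xs[j]) {
            xs.swap(j - 1, j);
            j -= 1;
        }
    }
}

impl MyOrd for i32 {
    fn lte(&self, other: &Self) -> bool {
        self <= other
    }
}

// A second ordering requires a new (wrapper) type, since each type
// gets precisely one instance.
struct Desc(i32);
impl MyOrd for Desc {
    fn lte(&self, other: &Self) -> bool {
        self.0 >= other.0
    }
}

fn main() {
    let mut v = vec![3, 1, 2];
    sort(&mut v);
    assert_eq!(v, vec![1, 2, 3]);

    let mut d = vec![Desc(3), Desc(1), Desc(2)];
    sort(&mut d);
    let order: Vec<i32> = d.iter().map(|x| x.0).collect();
    assert_eq!(order, vec![3, 2, 1]);
}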

Clojure’s, Haskell’s and Go’s mistake of structural instead of nominal matching of functions to (typeclass) interfaces amplifies Rich Hickey’s criticism of an explosion of interfaces and boilerplate with types.

If you can have one optimal implementation for a kind of sort (say quicksort) then Stepanov's idea of an online algorithm repository can become a reality. The type signature would not have to change for different implementations, and so the language standard library becomes a collection of algorithms which can work on any data-structures (you just have to provide the needed typeclasses).

The goal is only one implementation of quicksort is needed in the whole world (of this language). This would be managed by a Deno like import manager. The language repo itself would provide the 'approved' algorithms.

Perfection can be the enemy of good.

Somehow we also need to encourage libraries to only contain algorithms and typeclass definitions, so I am wondering if we can have a syntactic limitation that disallows type definitions in libraries. I'm thinking about the Clojure presentation that it's what you choose to leave out of a language that is important. They probably need internal types for book-keeping, so maybe just ban concrete types in the interface/module specification?

Philosophically I agree. I will need to think about implications of this specific design suggestion.

@shelby3
Author

shelby3 commented Sep 24, 2020

I responded on the thread Dotty becomes Scala 3:

Here we go! I’m really hoping this sparks some renewed interest in scala. Scala 3 is exciting, but it really needs adoption. I’ve been using scala for a long time, but it’s really tough to find clients (other than the “we use scala 2.11 with Spark”).

There’s a lot more spread now - it seems Rust, Go, Dart, etc are all getting more hype, not to mention all the JavaScript variants/spin-offs.

I think a real “this is how you use <cool framework/paradigm> in a fast/productive way with scala3/scalajs” would really help - or similar w/ scala native

It’s going to be hard competing with languages like Kotlin. What I personally like about Scala though is the attention to getting it right from a theoretical perspective. Perhaps Scala won’t ever be one of the bigger languages but perhaps that won’t matter as long as it has enough traction to stay in its current place. Maybe in the future when webassembly becomes huge then platforms won’t matter as much and then it will be even easier to mix and match languages. Then perhaps people can move more freely between languages and experiment.

As you allude to “right from a theoretical perspective,” Scala enables abstractions which aren’t possible and/or not nearly as elegantly expressed in Kotlin due to for example the lack of native higher-kinded types (HKT), typeclasses, a structural intersection/union type lattice, operator overloading, path-dependent types and robust pattern matching in Kotlin (and probably many other nuances such as value types, which are ostensibly useful for example for efficiently emulating new type system features). I realize there’s the Arrow_kt and KindedJ emulation of HKT, but I presume this at least forsakes some facets of elegant expression. Personally I come from the stance that class inheritance (aka subclassing) is an antipattern compared to typeclasses (which even Scala’s creator Odersky admits), so in comparison Kotlin’s focus on OOP and better integration with the precipitously declining JVM/Java platform is not the direction I want to go in order to be ‘future proof.’ Afaics the JVM is inferior in terms of performance scaling for backend services compared to native green-thread-optimized platforms such as Go(lang). This is why I am contemplating a Go(lang) target for Scala from a subset of Scala which discards the unwanted subclassing. Focus on strengths. Be an opinionated language, not just a research language. Click the following link to dive into the details of my thoughts on the linked posts and the comment posts which follow the linked one:

#49 (comment)

In short Scala should let Kotlin have the JVM and Android ecosystems and not focus there, although maintain compatibility. And focus more on its strengths for backend services wherein the advanced type theory features are more compelling. And focus on a runtime that will be superior to the JVM and which can scale to the massively multicore era which has arrived. The JVM will be more difficult to scale to massively multicore. Even Go is not ideally designed to address the fact that immutability and no global AMM will afaics be the only reasonable way to scale. Both Kotlin and Scala are inching towards native support, so do we want to end up with the less elegant but more pragmatic Kotlin as our future language as the transition away from the eventually deprecated JVM emerges into view? I also wonder if, by pruning Scala’s features of antipattern programming paradigms, that might unfurl opportunities to increase the compiler’s performance? (Note I typically tend to think about 5 to 10 years ahead and so I often look like a foolish person until nearly every time I am proven correct in the end, as has happened so many times in my life.)

Excuse me in advance if my comments are myopic, which is quite plausible because I haven’t been an active developer for at least the past 5 years (and not really intensely for more than a decade). I have been following Scala on and off at a distance since ~2009. I am excited about Scala 3 but still on the fence because of the legacy baggage of subclassing and the JVM diluting focus. I realize such a sentiment may be unpopular. Please do not downvote me because you disagree. Disagreement should be expressed by not upvoting and by responses. Downvoting should be reserved for flagging trolling. Reddit(ard) needs a separate metric for expressing opinion because downvoting impacts visibility and ostensibly even triggers shadow banning.


I will quote from Kotlin vs Scala: Which Problems Do They Solve? for readers who may not be familiar with the comparison of Kotlin vs. Scala:

Scala has been designed in the Academia, Kotlin in a leading software company. The background in which they are born explain most of their differences. Scala has been born to try some cool ideas on functional programming and mixing different paradigm. Kotlin is about solving practical problems. Kotlin designers cared about compilation times and great tool support, because those things matter to programmers, even if they do not sound exciting for other language designers.

@sighoya

sighoya commented Nov 6, 2020

Very interesting:
Notes on smaller Rust
Revisiting a smaller Rust

The corresponding blog is here.

@shelby3
Author

shelby3 commented Aug 7, 2021

Attempting to paradigm-shift our discussion about resource cleanup and incidentally RAII. In a purposely (re-)designed operating system (OS) I can’t think of any reason for non-memory “resource starvation” in terms of forgetting to timely release the handle to the resource? The entire filesystem could be virtual memory mapped or at least there’s a nearly unbounded number of 64-bit file handles. TCP/IP streams should be limited only by data structures allocated in memory so any resource limitation is synonymous with memory resource starvation.

There can be resource starvation, for example reading too infrequently from Internet stream buffers thus causing the TCP/IP connection to go stale or timeout (this arises, as we had discussed in years past, for example in the tradeoff between throughput and latency in concurrency), or forgetting to release exclusive access to a shared resource such as exclusive write. But the first has nothing to do with forgetting to release access to the resource, and the second exists within the broader problem of deadlocks and livelocks. For example, perhaps the only way to guarantee there will never be file write access deadlocks and livelocks is to allow exclusive file write access only for an owner application for the file’s entire lifetime, and allow other readers unfettered access to the file. Owner batched file writes could be employed so that readers never access partially written inconsistent data (yet readers would never wait; they would just read the prior version of the data while the batch is not yet completed), although in theory this could create an obscure livelock wherein the reader needs the batch update to interact (even indirectly) with a dependency for the writer’s completion of the batch write.

One could imagine resource starvation due to forgetting to release for example the camera of the mobile device, yet presumably the OS can give the user a manual override for such a buggy application given it is a resource the user interacts directly with.

Thus to the extent that AMM (e.g. via a tracing GC such as mark-and-sweep) is acceptable for automated collection of unused memory resources then it should be acceptable for collecting unused access to all types of resources?

Not closing a TCP/IP stream may keep resources on the other side of the connection tied up until the connection times out, although typically one should either be employing long-lived connections or connections set to close automatically after the response. And closing a connection shouldn’t be conflated, as we ostensibly did upthread, with release of the resource handle, where we can’t be sure they can occur simultaneously. I would return to my original upthread point that neither ARC-based RAII nor Rust lifetimes will always ensure timely release, due to the potential for insidious cyclical or dead references, although Rust apparently minimizes the potential for cyclical references at the cost of making some dynamic lifetime management very onerous or implausible. Thus the programmer won’t be able to rely on those in all cases without carefully studying the semantics of the source code, which is thus perhaps not arguably better than explicitly coding the close of the connection, with the exception that if there were an explicit lifetime destructor in Rust then the programmer could assert his precise timing (thus correctly conflating lifetime and for example close of a connection) which Rust could presumably check for use-after-free safety.

The resource cleanup drama is a red herring where there’s no starvation due to untimely release of the resource handle. In the (probably rare?) cases where there needs to be timely close of a connection which can be tied to resource handle lifetime, then Rust may offer (with nested blocks, or improved presumably by adding a lifetime destructor) the ability to prove use-after-free safety assumptions about the explicit timing. But just relying on RAII or lifetimes without careful study isn’t going to be safe from resource starvation in said cases where timely release is paramount.

@sighoya I never heard back from you again in email after a year. I hope you have survived.

@shelby3
Author

shelby3 commented Aug 7, 2021

I wrote:

I don’t want the tsuris of Rust’s lifetimes, which includes even the inability to accommodate unfettered cyclic references, in addition to numerous other points that have been made, including that we have to punt to ARC in Rust when our dynamic needs outstrip what static lifetimes can model, and @keean pointing out he can’t implement certain algorithms without unsafe code. @keean would slices not help?

In addition to the inability of Rust lifetimes to model some semantics correctly and the unsafe (i.e. unchecked) code that must be employed to subvert the limitations, Rust’s exclusive write access (via moves and borrowing) can’t be employed to make static (i.e. compile-time) guarantees against unintended concurrent access in dynamic parallelism — parallelism sorted by contextual values at runtime. Rust’s lifetimes add a significant cost to the tsuris and clarity/clutter/boilerplate of programming, and for what gain in features or performance?

Rust enables one to prove static lifetimes for a subset of semantics. For this subset it improves performance, but when your program requires semantics that Rust is unable to model then Rust interoperates poorly, given the hoops one has to jump through to accomplish said semantics.

The Pareto principle applies to performance optimization. Only a small fraction of the source code needs to be fully optimized for maximum performance. It is overkill to apply Rust’s onerous lifetimes to all the source code.

Imagine a blockchain application that groups conflicting transactions based on the UTXO records they will invalidate, replace or write to. Each group can run in a separate concurrent thread and each can employ a queue so that there’s no race condition among conflicting transactions because conflicting transactions don’t ever run in concurrent threads. This invariant is only enforced dynamically at runtime.

Ditto a smart contract for an exchange that has to sort bids and asks by trading pairs so that conflicting transactions for each trading pair are queued. The only other way to speed this up would be pipelining of the said sequential queue.
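
To make the runtime-only nature of that invariant concrete, here is a minimal Rust sketch (illustrative names, not real blockchain or exchange code): transactions touching the same key are routed to the same worker’s queue, so conflicting transactions never run concurrently, yet nothing about this no-conflict invariant is visible to the compiler.

// Dynamic grouping of conflicting transactions into per-key queues.
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

fn main() {
    // (key the transaction conflicts on, transaction payload)
    let txns = vec![("utxo-a", 1), ("utxo-b", 2), ("utxo-a", 3)];
    let mut queues: HashMap<&str, mpsc::Sender<i32>> = HashMap::new();
    let mut workers = Vec::new();

    for (key, txn) in txns {
        // One worker (and queue) per conflict group, created on demand.
        let sender = queues.entry(key).or_insert_with(|| {
            let (s, r) = mpsc::channel::<i32>();
            workers.push(thread::spawn(move || {
                for t in r {
                    // All transactions for this key run sequentially here.
                    println!("{key}: processing txn {t}");
                }
            }));
            s
        });
        sender.send(txn).unwrap();
    }

    drop(queues); // close the channels so the workers terminate
    for w in workers {
        w.join().unwrap();
    }
}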

I can’t envision a way to structure Rust’s exclusive write access moves and borrowing to make any compile-time check of the said runtime invariant.

One way to structure such code would be to put immutable records in a (e.g. list, tree, etc) data structure with statically checked exclusive write access on the references (i.e. pointers) to immutable records which are the leaves (aka contained elements or items) of said data structure. The owner of the statically checked write access can remove immutable records and queue them as aforementioned. But given that (borrowing) read access (i.e. to immutable records) isn’t exclusive this doesn’t provide any statically checked guarantee that the owner can’t queue them incoherently in a conflict that would result in a race condition.

Alternatively (the records need not be immutable and) the exclusive write could somehow be moved to the thread that will queue and operate on them and their associated transaction. But how does the function accepting the move return immediately so that queuing of other transactions can proceed concurrently while also enabling the said function to move the exclusive write access back to the original caller when the said operation is complete?
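
For what it’s worth, the conventional answer in today’s Rust is to stop borrowing altogether and move ownership itself: the worker takes the record by value and moves it back over a channel when the operation completes. A minimal sketch (illustrative only), which arguably supports the point that borrowing and lifetimes are being sidestepped rather than used:

// Move exclusive write access in, and move it back on completion.
use std::sync::mpsc;
use std::thread;

fn main() {
    let record = vec![1, 2, 3]; // stands in for an exclusively owned record
    let (done_tx, done_rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        let mut r = record;       // exclusive write access moved in
        r.push(4);                // ... operate on it ...
        done_tx.send(r).unwrap(); // move it back when complete
    });

    // The caller can proceed concurrently here, then reclaim ownership:
    let record = done_rx.recv().unwrap();
    assert_eq!(record, vec![1, 2, 3, 4]);
    worker.join().unwrap();
}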

Thus it seems that Rust’s conflation of lifetimes with exclusive write access complicates matters? Whereas, Pony’s reference capabilities don’t track lifetimes and thus can be freely passed around independent of tracking the function (stack) call hierarchies involved with lifetimes, which solves my question in the prior paragraph.

Lifetime tracking as an efficiency improvement for AMM and to prove RAII lifetimes could still be useful, and afaics could be unconflated from access permissions such as exclusive write.

Apparently the reason that Rust requires littering the code with lifetime annotations is because otherwise the lifetime signature of the function could change depending on the lifetimes of the caller.

The Rust docs say:

When annotating lifetimes in functions, the annotations go in the function signature, not in the function body. Rust can analyze the code within the function without any help. However, when a function has references to or from code outside that function, it becomes almost impossible for Rust to figure out the lifetimes of the parameters or return values on its own. The lifetimes might be different each time the function is called. This is why we need to annotate the lifetimes manually.

For example:

// this code sample does *not* compile
fn f(s: &str, t: &str) -> &str {
    if s.len() > 5 { s } else { t }
}

// this code sample does *not* compile
fn f<'a, 'b>(s: &'a str, t: &'b str) -> &'??? str {
    if s.len() > 5 { s } else { t }
}

[…]

The way to achieve this is to give both input parameters the same lifetime annotation. It’s how we tell the compiler that as long as both of these input parameters are valid, so is the returned value.

fn f<'a>(s: &'a str, t: &'a str) -> &'a str {
    if s.len() > 5 { s } else { t }
}
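
A small usage sketch of that unified signature: the returned reference is only usable while both inputs are still alive, which is exactly the constraint 'a expresses.

fn f<'a>(s: &'a str, t: &'a str) -> &'a str {
    if s.len() > 5 { s } else { t }
}

fn main() {
    let s = String::from("longer string");
    {
        let t = String::from("short");
        let result = f(&s, &t);
        println!("{result}"); // fine: both s and t are still alive
        // If `result` were used after this block, where t is dropped,
        // the compiler would reject it: `t` does not live long enough.
    }
}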

Thus it seems it might be possible for the compiler to infer lifetimes if we remove the constraint that a function’s lifetimes signature must not change for different callers?

https://vlang.io/ says:

Most objects (~90-100%) are freed by V's autofree engine: the compiler inserts necessary free calls automatically during compilation. Remaining small percentage of objects is freed via GC.

The developer doesn't need to change anything in their code. "It just works", like in Python, Go, or Java, except there's no heavy GC tracing everything or expensive RC for each object.

Note as of January 15, 2021, Vlang has Go-like coroutines and channels. And it compiles to C.


Mainly what I seem to want to add to Vlang (other than perhaps a different syntax, which is not really a big deal to implement as a transpiler parser) is Pony’s concurrency model via reference capabilities typing.

This is to be sure we don’t have race condition bugs in concurrent code. For the neophyte readers, a race condition is where two or more simultaneous threads of execution will overwrite each other’s scratch pads. I presume you all know that modern microprocessors have multiple cores so they can run multiple program slices simultaneously. Another reason that concurrency is introduced into programs is that, for example, the program is waiting on some resource(s) being fetched over the Internet, so it will store the memory scratch pad for the task that is waiting and go work on something else in the interim. We want to prevent these concurrent activities from corrupting each other. And not only do we want to think we prevented it, we ideally want the compiler to check to make sure we didn’t have any insidious mistakes.

Note there are multiple ways to address concurrency safety. For example the Clojure programming language instead employs persistent data structures, which are immutable except that one can efficiently create a new, mutated copy without needing to copy the entire data set. Those data structures efficiently keep track of changes. Immutability avoids the potential for any tasks to share a mutable reference to the same scratch pads.

@shelby3
Author

shelby3 commented May 16, 2022

@Ichoran since I don’t want to put this argument with you in the Scala Contributors user group, I will continue the discussion here.

I can’t believe the intransigence and disingenuous lies that some people make in an attempt to hamstring Odersky and prevent him from experimenting with ways to possibly elevate Scala from an obscure programming language that has nearly died on the vine.

And you’re one of the prime transgressors there, lying about your use case of Scala (which you had previously shared with us here in this thread) by pretending you will be harmed if others will program in Python’s braceless style (while also continuing your underhanded vendetta against me personally).

You do not even like typeclasses, so you are just holding Scala back from ever being anything at all significant. Scala will never be a better Java than Kotlin is. Scala has to make a different niche based on its strengths.

You ought to just GTFO of Scala and go use Kotlin and stop being a thorn in the side of those who actually have some vision for how to make a popular programming language.

Also you were totally fucking wrong about everything about the Certificate Of Vassal IDentity scam. You’re a smart idiot, obstructionist of truth, wealth and prosperity. Yeah I hate you with a passion, you dickhead loser, useless blob of protoplasm.

@Ichoran

Ichoran commented May 16, 2022

It's going to be a one-sided "discussion". There isn't much I can offer if you can't or won't distinguish your misunderstandings from other people lying. Feel free to rant, though, if it's cathartic.

@keean
Owner

keean commented May 16, 2022

@shelby3 Just want to remind you to keep to programming language discussions here to keep the noise down.

Regarding syntax, one option is to regard the language abstract syntax tree as the fixed-point, which you might store as JSON or XML or some such thing that can store tree formatted data. Then users can choose to render the tree according to house style rules when they view things.

I think this is a good solution to the religious wars that start about the exact coding style that should be checked in repositories. Should there be one space or two, tabs or spaces etc...

Store a machine readable AST directly, and leave it to the IDE to render. In this way each developer can see all the code in the repository in their preferred style.
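
A toy sketch of that idea (in Rust, with illustrative names; a real system would also cover indentation, spacing, comments, etc.): the checked-in artifact is the tree itself, and each house style is just a rendering function over it.

// The AST is ground truth; styles are render-time choices.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
}

struct Style {
    space_around_ops: bool, // one tiny stand-in for a style preference
}

fn render(e: &Expr, st: &Style) -> String {
    match e {
        Expr::Num(n) => n.to_string(),
        Expr::Add(a, b) => {
            let op = if st.space_around_ops { " + " } else { "+" };
            format!("({}{}{})", render(a, st), op, render(b, st))
        }
    }
}

fn main() {
    let ast = Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)));
    println!("{}", render(&ast, &Style { space_around_ops: true }));  // (1 + 2)
    println!("{}", render(&ast, &Style { space_around_ops: false })); // (1+2)
}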

@shelby3
Author

shelby3 commented May 16, 2022

@Ichoran

It's going to be a one-sided "discussion". There isn't much I can offer if you can't or won't distinguish your misunderstandings from other people lying. Feel free to rant, though, if it's cathartic.

If you are referring to COVID, you are on the side of the lies. And I have all the proof accumulated in a massive trove. I am going to nail your sophistry ass to the wall.

Additionally you do not even know about the actual history of the U.S. since the Civil War. You are not a State National. You are some freak and you will be cast out into the technofeudalism corporate gulag where you belong dimwit.

@shelby3
Author

shelby3 commented May 16, 2022

@keean I essentially suggested that but they pooh-poohed it over there, and notice @Ichoran did not support my effort to propose that there, yet he upvotes here. The disingenuous, duplicitous twat that he is.

@Ichoran

Ichoran commented May 16, 2022

@keean - For reference, I think the AST-is-ground-truth idea (not "essentially" proposed elsewhere) is really interesting. I don't know if it would be practical, but I think it's an interesting idea especially for Scala since it already has an internal AST that is stored (TASTY). Could we form an adequate bijection between TASTY and editable forms? Probably not. But that doesn't sink the idea--you can then ask whether you could have a rich text AST that could be reduced to actual TASTY, and the rich text AST would be ground truth.

Would that work? Dunno. And it wouldn't solve the problem of needing to read code pasted into discord and github and stuff. But it's an interesting intellectual contribution towards solving the how-do-you-edit-your-favorite-flavor-and-let-others-do-likewise problem (which, for the record, I never opposed...I just didn't think it solved the whole problem, so the other potential issues with language dialects could not be dismissed wholesale without addressing them). The reason it's interesting, unlike the others, is that writing code is really just our way to express that we want a particular AST. Having the AST itself be the common form (even if not really readable on its own) is therefore a particularly pure way to handle dialects: in principle, one could consider any dialect for which there is a bijection with the AST.

(The main issue I can foresee is that it would be difficult to allow ad-hoc indentation and spacing, like vertical alignment of common elements to reduce the chance of error. A rich text AST might be able to support this if all dialects could permit the same thing. But without this, historical code bases would be grandfathered in at best: you could have a surjection from them to the rich text AST, but you couldn't get back.)

Anyway, it's an interesting idea that I hadn't heard before that's a step above the usual "run the code formatter" / "write a syntax translator" idea.

@shelby3
Author

shelby3 commented May 16, 2022

@Ichoran would you elaborate (example?) on @morgen-peschke’s claims that there are subjective decisions to make about where to place braces in Scala, where apparently according to him (if I interpreted his statement correctly) they are optional and don’t change the semantics at all but do change the reader’s interpretation of the code? That is bizarre and seems like bad programming language design?

(If you do not want to elaborate for me, you could do so in the context of a discussion with Keean)

@shelby3
Author

shelby3 commented May 16, 2022

@Ichoran

The reason it's interesting, unlike the others, is that writing code is really just our way to express that we want a particular AST.

Apparently not according to @morgen-peschke’s claims. Apparently there are cases where the mere presence of braces in Scala even where their presence makes no difference to the AST, has some bearing on the reader’s interpretation, at least according to him.

So don’t go trying to act so hoity-toity as if you had some highly intellectual reasoning in mind that had escaped anyone else here. You egotistical fucktard.

@shelby3
Author

shelby3 commented May 17, 2022

@Ichoran @keean

EDIT: I almost forgot to drop a hint. What does chemistry have to do with chain of custody @Ichoran? That is one of many possible myopias of geeks who can’t think outside of a box.

I don’t mind if @keean deletes all our recent exchange. Because I am obviously unloading on @Ichoran and I want him to know I think he is a despicable freak. I also fault @keean for believing all the technocracy lies about COVID and what not, but @keean gets a pass because he is open to debate. Also because I view @keean as a human being who is trying to understand the truth each person is attempting to convey or find (even if that person might be off course). @keean is more apt to try to help a person than to cut them down, if he can. And eventually @keean and I will have that debate. He may find some sophistry to continue on believing in his enslavement and that is fine; maybe it is his destiny. But at least @keean will engage me respectfully when I am ready to do so. Whereas @Ichoran was cursed with a very high IQ and thus thinks he knows more than the person he is engaging with. @Ichoran would not respectfully engage in a challenge to his belief system. It is his UNDESERVED arrogance that is the huge turnoff. But he will be judged and he will reap what he has sown. That’s not my job to mete out his punishment. That is above my pay grade. He will not escape the truth nor punishment for the horror he has arrogantly helped perpetrate and perpetuate on humanity. @Ichoran was gifted with the intellect and the domain knowledge to help humanity at its time of great need in 2020, but instead he decided to memebot the sophistry instead of using his intellect to actually find the flaws in the cherry picking and what not. And for that he has become the de facto enemy of humanity and complicit in crimes against humanity. Maybe he should read the Nuremberg Code and Crimes Against Humanity to be tried at the Hague when all this eventually comes to light.

When he says lies, I probably know what he is referring to. I have been to that highly technical scammer website that purports to refute all the arguments against the scam. What is so hilarious is you two guys subscribe to a criminal syndicate, yet then @Ichoran is arrogant enough to think he is actually some intellectual.

Come on @Ichoran, stop being so fucking ignorant of the world and actually learn something useful other than chemistry.

@shelby3
Author

shelby3 commented May 17, 2022

@Ichoran @keean

Here we go again Scala doing their usual unnecessary complexity routine.

For example, by not adopting the 3 spaces enforced rule, and instead trying to be too cute, you all are creating problems and complexity:

https://contributors.scala-lang.org/t/feedback-sought-optional-braces/4702/34?u=shelby3

The above is presumably the attempt to automatically detect unintentional single-spaces. Very, very bad. What is Odersky doing? He always does this sort of stupid shit. He ruins a good innovation by trying to be too clever.

https://contributors.scala-lang.org/t/feedback-sought-optional-braces/4702/35?u=shelby3

The hardcoded 3 spaces rule would be much more regular and easier for tool developers.

Stop adding unnecessary complexity.

I probably understand. Odersky is probably trying to be better than Python so he can justify adding this feature to Scala as BDFL to overcome the extreme resistance to the idea. Bad. Just do it and do it sanely. Or don’t do it. But don’t do it badly as a crutch for not wanting to be perceived as the dictator. I have no qualms about being a dictator when I need to be and I will explain it very matter-of-factly, as I did in the thread which the mod shut down. I do not speak out of both sides of my mouth like a weasel. I tell people straight what is.

@Ichoran

Ichoran commented May 17, 2022

@shelby3 - I will discuss political philosophy and historical and current world events in an appropriate venue. This is not such a venue, even if it is less stringently not such a venue than the Scala forums. Pick an appropriate one and I will engage for an amount of time that I can afford (probably ~2000 words max). Time and place permitting, I'm always open to discussing important issues.

@shelby3
Author

shelby3 commented May 17, 2022

@Ichoran, okay fine. Please realize I am not a total dunce. My mother has a rigorously tested 137 IQ and my father has a significantly higher IQ, so I am at least not retarded in any case. I have a somewhat Aspie profile, with a spike to maximum on neurotypical perception but very weak communication on both sides of the NT-Autistic spectrum. Perhaps it is my weak (output) communication skills that deceive you about my intellect.

https://contributors.scala-lang.org/t/new-braceless-syntax-needs-more-thought-before-being-adopted/5722

@shelby3
Author

shelby3 commented May 17, 2022

@Ichoran @keean:

I added this just for you guys:

https://contributors.scala-lang.org/t/new-braceless-syntax-needs-more-thought-before-being-adopted/5722/2

Does anyone remember when they could hold the entire K&R C book in their head?

@shelby3
Author

shelby3 commented May 17, 2022

@keean

Regarding syntax, one option is to regard the language abstract syntax tree as the fixed-point, which you might store as JSON or XML or some such thing that can store tree formatted data. Then users can choose to render the tree according to house style rules when they view things.

I think this is a good solution to the religious wars that start about the exact coding style that should be checked in repositories. Should there be one space or two, tabs or spaces etc...

Store a machine readable AST directly, and leave it to the IDE to render. In this way each developer can see all the code in the repository in their preferred style.

Impossible because Scala screwed itself early on:

https://contributors.scala-lang.org/t/make-fewerbraces-available-outside-snapshot-releases/5024/130
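For what keean’s quoted proposal might look like in miniature, here is a hedged Scala 3 sketch (Ast, Style and render are hypothetical names; a real tool would store the full language AST as JSON/XML and render far more than this):

```scala
// A toy AST that a repository could store in a machine-readable form;
// each developer's IDE then renders it in their preferred house style.
enum Ast:
  case Call(name: String, args: List[String])
  case Block(body: List[Ast])

// Per-developer rendering preferences.
case class Style(indent: Int, braces: Boolean)

def render(ast: Ast, style: Style, depth: Int = 0): String =
  val pad = " " * (style.indent * depth)
  ast match
    case Ast.Call(name, args) => s"$pad$name(${args.mkString(", ")})"
    case Ast.Block(body) =>
      val inner = body.map(render(_, style, depth + 1)).mkString("\n")
      if style.braces then s"$pad{\n$inner\n$pad}" else inner

@main def renderDemo(): Unit =
  val tree = Ast.Block(List(Ast.Call("f", List("x")), Ast.Call("g", Nil)))
  println(render(tree, Style(indent = 2, braces = true)))  // braced, 2-space
  println(render(tree, Style(indent = 3, braces = false))) // braceless, 3-space
```

As the linked thread argues, this round trip only works if the surface syntax and the tree are in bijection.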

@shelby3

shelby3 commented May 18, 2022

So this is what the world has come to. We really do need to enslave all of you. Just continue your support for the technocracy, guys; it’s your destiny. Scala is really in deep shit if radical leftist loons are steering the ship. Glad I realized it sooner rather than later.

https://contributors.scala-lang.org/t/which-features-of-scalas-use-of-braces-making-bijection-translation-from-braceless-block-implausible/5723

Archived:

https://web.archive.org/web/20220518031012/https://archive.ph/https://contributors.scala-lang.org/t/which-features-of-scalas-use-of-braces-making-bijection-translation-from-braceless-block-implausible/5723/1

https://web.archive.org/web/20220518031012/https://contributors.scala-lang.org/t/which-features-of-scalas-use-of-braces-making-bijection-translation-from-braceless-block-implausible/5723/1

From Telegram:

Let’s be honest with ourselves. Men congregate for mutual benefit. If men didn’t need other men they would prefer to not have any around, except maybe for power elevation, ego stimulation, decompression (e.g. humor, getting drunk with the guys, etc.), brotherhood, mental stimulation, camaraderie and competition; we would probably get very bored hanging out only with women all the time.

Obviously when men don’t find value in something they lose any commitment to it. Women aren’t seeking the same values out of activities that men are.

Nearly all men overinflate their egos. In fact we need to (it’s instinctive) in order to signal to females that we are worthy, because females are hypergamous. So attacking or shattering a man’s ego is the surest way to incur wrath.

As a result, in community we have to deal with the disconnect between a man’s actual value and his ego. This is a huge problem, because a man’s actual value in society is rarely as high as his ego. The power elite leverage this discord by offering us debt-based enslavement, so that we can pump our egos (and fulfill the shopping habits of our females) far beyond the Minsky Moment value of our production.

So the takeaway is that everything that is happening now is entirely expected. The egos which enslaved themselves in debt (or which participate in the debt-based society even if not in debt themselves) will now be enslaved in the technocracy. The technocracy is the end game of unproductive ego.

So that is really what my prior message was about. But I hadn’t yet formulated the generative essence. This is why I despise unproductive offspring. It is why I despise the normie existence.

So the task at hand for us, if we want to not end up subjugated by the technocracy, is to get our egos in line with our productivity. That is why I have no patience for bullshit about political correctness and the inability to share a diseased-liver photo. Cripes, we had better fucking get our shit in gear, and pronto, on being productive in the way we need to be to separate ourselves from the end game of the debt-based delusion.

Continued…

Here is the corollary and it is very, very important so pay attention.

As the egos of men are being crushed over this decade, expect men to find solace in any turf they can find. They will become vicious in group forums, employing any underhanded tactics they can to smite any man who attempts to project any ego. They will roam and rampage if they can. They will embrace the technocracy when it enslaves other men, just so they can feel better about their own egos. Don’t underestimate the power of the need for a man to elevate his ego; it may be one of the greatest forces on earth, which the elite understand how to harvest and direct.

All the worst you can imagine will happen when the debt spigot is turned off and the rationing is turned on. The technocracy will leverage this wrath to maximum effect.

I hope I am scaring you. You should be scared as fuck and damn well get your plans organized.

@shelby3

shelby3 commented May 18, 2022

Don’t expect any communication from me for a couple of days, as I have my head in the programming sand attempting to accomplish something Herculean. (Some people pissed me off, and that is a big mistake when I am healthy.)

I now have diarrhea with the Ivermectin but I interpret this as a good sign. My health seems to be improving. My concentration is unreal. I am coding as I did in the past. Didn’t sleep for 24 hours. No problems so far.

👍👍👍👍👍👍👍👍

@shelby3

shelby3 commented May 19, 2022

Yuri Bezmenov (Leftists are useful IDIOTS)

I have a moment to jot down what has transpired, which I will do now, because that ostensibly (possibly radical) leftist @morgen-peschke mistakenly thought that, by violating the Scala Contributors discussion group’s Code of Conduct in linking to off-topic discussion here in this thread, he would malign my character in the eyes of others. I guess he didn’t realize that I am delighted if others know that I am angry at @Ichoran for his complicity in the COVID scam, which has been a massive crime against humanity. Why would I be ashamed of protecting humanity against criminal syndicates and their sycophants? But I wasn’t going to spam the Scala Contributors group with my longstanding issue with @Ichoran. So that juvenile twat did it for me, lol.

The mods (presumably @sjrd, who is also the lead programmer for Scala.js) hid all my responses to that freakazoid’s Code-of-Conduct-violating activity, but not the actual CoC violations of my antagonist (c.f. link above). So @sjrd, who is ostensibly the sole mod for Scala Contributors (ya know, the Scala community is really tiny, with only 0.7% market share, up 0.2% recently due to Scala 3, compared to 1.4% for Go, 1.6% for Kotlin, 2.4% for TypeScript, 9% for JavaScript, 18% for Java and 28% for Python), might be breaking the law (I will need to ask my attorney to look at this) by violating contractual language resulting in intentionally schemed libel (private companies have a lot of leeway, but they’re not entirely above the law, although I’m not an attorney). The legal stuff is not my purview, but I share the need for a programming language with the community-at-large (not including the leftist cretins, who will never have a safe haven for their disruptive activities in any project I manage, so they better not even try) and the technology is my forte, so…

This was after I had been graciously receptive to @morgen-peschke’s militaristic attitude against the new Scala feature offering an optional braceless syntax. I was attempting to have a reasonable discussion with him about his claims that being autistic means he can’t read code without braces. I noted he could continue to code in the braced style, as Scala will offer both. I posited that maybe we could encourage the Scala team to offer libraries in both braced and braceless form. I also raised the possibility of an automatic translation between the two styles, and I suggested multiple colored vertical traces in the IDE (to help distinguish nesting levels rapidly). I even pushed back against @Ichoran’s FUD about small snippets of braceless code shared in Discord being hard to read, as even he agrees small snippets are often easier to read in the braceless style. But no, @morgen-peschke would have none of my attempts to find reasonable compromise (and I am not even the one forcing the new feature down his throat, as the Scala BDFL Odersky is pushing it through). He insisted that a massive exodus of large companies from the Scala ecosystem would ensue, so at that juncture my bullshit-and-FUD detection alarm kicked on. I told him that many autistics have an inability to perceive reality correctly and that he was exaggerating. First of all, large companies don’t use Scala; they use Java, or if another JVM language then much more likely Kotlin, because it doesn’t have all the abstruse crap and corner cases that have been the bane of Scala over the years (the “kitchen sink” language, which Odersky is about to fuck up again, lol). Note @sjrd hid that very important aforelinked post (which aims to head off bugs and unnecessary complexity Odersky is foolishly adding to the new feature), so I have screen-captured it as follows:

[screen captures of the hidden post]

The mod (presumably @sjrd) deleted my post in which I was justifying the claims I had started making about why Scala needs a better native compile target. Note the mod also closed that thread to prevent the points from being made. Clearly they’re trying to hide Scala’s weaknesses from the users, as they must realize Scala is very near to extinction. And they made a big mistake, because they have now motivated me to proceed to remove any reason for Scala to exist. More on that in the next paragraph.

Anyway, in my removed post I pointed out that Scala Native, because it compiles to LLVM, has no reasonable way to implement green user-mode threads (e.g. with cactus stacks), which are usually more performant than coroutines and which preserve stack traces (for debugging, sane exception handling, etc.). That’s because coroutines (e.g. Promise or Future) rewrite the call stack into callback closures, which incurs a memory-allocation penalty. I seem to remember some development that might eventually make green threads available in LLVM. Also, Scala Native is still stuck with a global garbage collector, which stalls all threads during collection cycles (or, even if collection is incremental, still incurs a performance and/or memory penalty), and which thus impacts scaling to massively multithreaded servers and P2P nodes. Project Loom will bring green threads back to the JVM in JDK 19, but this won’t address the global garbage collection, nor will it be a win for Scala, because it will also be available for Java and Kotlin, which compete with Scala on the JVM.

Right now Scala is hanging on by a thread of adoption, significantly due to the Scala.js output target. Scala 3 will garner some interest because unfortunately Go and Rust lack HKT type classes, anonymous disjunctions and default parameters, and Go even lacks unions. I was going to adopt Scala 3, because it is a pita to write a compiler and I prefer to get work done rather than go off on a tangent.
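To illustrate the coroutine point above, here is a minimal Scala sketch (illustrative only; stepAsync and pipelineAsync are hypothetical names): the Future-based pipeline heap-allocates a closure and a Future per step and truncates the logical stack trace, whereas the direct-style version lives entirely on the call stack.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

// Direct style: runs on the call stack; callers appear in stack traces.
def stepDirect(x: Int): Int = x + 1
def pipelineDirect(x: Int): Int = stepDirect(stepDirect(x))

// Coroutine style: each Future/flatMap allocates a closure and a Future on
// the heap, and each step runs on a pool thread with a truncated stack trace.
def stepAsync(x: Int): Future[Int] = Future(x + 1)
def pipelineAsync(x: Int): Future[Int] = stepAsync(x).flatMap(stepAsync)

@main def pipelineDemo(): Unit =
  println(pipelineDirect(1))                            // 3
  println(Await.result(pipelineAsync(1), Duration.Inf)) // 3
```

Green threads avoid both costs by giving each lightweight thread its own (cactus) stack, so the direct style survives suspension.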

But seeing what a clusterfuck Scala is now, with @sjrd being the lead on Scala.js and Odersky running “kitchen sink” wild as usual, I really started to doubt the wisdom of depending on such an ecosystem. I investigated what would eventually be required for me to write a Go output-target Scala plugin, and plugin development is largely undocumented, with much needing to be gleaned through printout dumps and trial and error. Worse yet, I glanced at the Scala.js compiler-plugin code, and it has registered to process multiple phases (compiler stages). Presumably that means it has to collect typing information after the typing phase, then class information after another phase, etc. Massive, undocumented complexity. I realized it was going to take me more time to master Scala plugins than to write my own compiler for my own programming language with an output target of Go and/or TypeScript! Besides, why do I want to invest my time learning some stupid compiler design created by cretins? Not motivating whatsoever!

Although I would not have to write a Go output target now if I wanted to start using Scala 3 for the Scala.js output target, I would be investing a lot of coding inertia in a fucked-up ecosystem which is teetering on extinction (just waiting for Odersky to get COVID, then Scala could be toast, lol). World War 3 (involving nuclear war between NATO and Russia) is right on schedule to begin 2025ish, so that could also put Odersky out of commission. Also massive famines are enveloping the world for 2023. Got to love these leftists who are intentionally creating this. 👏👏👏 Well done.

Then on top of that, the Scala compiler is slow as molasses. And there is still some cruft in the syntax and language features. So maybe it is time to just deprecate Scala: take its most unique features of importance, compile them faster to Go and/or TypeScript, and remove the reason for anyone to use Scala. So I pulled out the grammar I had been working on since circa 2019 and am whittling it down, now that I have a much clearer design understanding, as everything we discussed has settled over the ~2-year hiatus. As well, fixing my decades-long liver disease had always been a prerequisite to being productive again. It turns out 2015 research unequivocally shows that Ivermectin entirely cures chronic fatty liver disease. Just don’t ever ask any doctor, because doctors can’t get their heads out of their arses any more than @sjrd can.

And so it goes…and I have more than ample funds to hire whoever I might need, but for the moment I will make the initial push because I need to flex my coding talent after more than a decade of being too ill to do much of anything but wallow in bed.

Warning to cretins. Don’t fuck with me when I am healthy. You don't know me. I am fiercely competitive.

[screen captures]

@keean

keean commented May 19, 2022

@shelby3 It would be great if we could try and not stray too far from the topic of programming languages.

My general thoughts are that it is always hard to move the status quo. I think it is better to develop a new language rather than change an existing one. This is partly because it is hard to overcome momentum, but also because adding features not designed from the start leads to complex and unwieldy languages. I think opinionated languages that strongly encourage a certain coding style are better than kitchen-sink languages that try to do everything (poorly).

@swoogles

Pack it up folks, Shelby has all but obliterated any reason for Scala to exist.

By the end of the week, this new project, powered by his clarity and irresistibly friendly personality will reign supreme. It will steal Scala's (if not all existing languages') market share once and for all

Flee, flee from the wrath to come!

@shelby3

shelby3 commented May 19, 2022

Pack it up folks, Shelby has all but obliterated any reason for Scala to exist.

By the end of the week, this new project, powered by his clarity and irresistibly friendly personality will reign supreme. It will steal Scala's (if not all existing languages') market share once and for all

Flee, flee from the wrath to come!

Hahaha. Thanks @swoogles for the inspiration. I'm already 200 lines into reviewing, checking, refining, condensing and recalibrating the 500 lines of EBNF grammar code I had written in 2019 and 2020.

And remember my advice about wealth generation: don’t associate with poverty-stricken, aspirational, leftist “power rangers” losers and their Ponzi schemes, as they have a tendency to self-immolate in their leftist holiness spirals. Stalin murdered most of them to stop the holiness spiral from razing Russia with unabated megadeath, which is the way all power-vacuum holiness spirals end, and roughly what awaits these clowns (this time it may be Bill Gates’ WHO pandemic treaty euthanizing them with “COVID” clot shots). Understand paradigmatically and methodically how their virtue-signaling politics sustains them and also destroys them in the end. They’ve roosted themselves in every open source project of significance, e.g. ousting Linus Torvalds and Eric S. Raymond. They may “think” (if a virtue-signaling ganging-up is considered rational thought) that this gives them power, and they are mightily offended when a BDFL such as Odersky overrules their “community power.” Odersky is now wading in feces, and Scala (unlike Linux and Python) is not on firm footing to begin with, so it won’t require much to tip it over the cliff, e.g. the loss or retirement of Odersky. I won’t be surprised if the leftists find a way to kick Odersky out for ramming through the braceless style.

When @morgen-peschke says that Scala will suffer immensely from ramming through the braceless style, he’s probably not referring to some organic failure. I won’t be surprised if the leftists organize some drama. Hand the keys to these privileged, undisciplined juveniles who’ve never been spanked in their lives and they will run amok (Western societalcide is careening now, with the demoralization and destabilization that Yuri Bezmenov described and predicted).

I don’t yet know @Ichoran’s stance on leftist holiness spirals. He keeps most of his opinions close to his chest (apparently being an introverted Myers-Briggs type); all I know is he was hoodwinked by the COVID scam at the outset and I wasn’t (so one or two SD of extra IQ didn’t help him at all). I want to read his 2000-word rebuttal someday, when we can agree on a proper forum to have a debate. Maybe LessWrong? I want to get all my ducks in a row first and published, so he has access to my trove of evidence and we can establish ground facts.

@swoogles, remember this maxim: often the most successful programming tools are built by those intensely motivated programmers who were building them only for themselves. When you’re building it for yourself, you’re building it with passion. Linus was building Linux for himself. I am building a compiler for myself because I really need it, and I don’t care if nobody else in the entire world uses it. If I build something (assuming I succeed in completing it) and others find it useful, then so be it. I could use Scala, but the compiler is slow and will cost me a lot of time, even if it is only a few seconds during each incremental recompile. Worse is that when I code the crypto ledger (not a blockchain), I am loath to be running on Node.js with JavaScript or on the JVM. And I will also loathe attempting to learn scalac (now dotc) internals so that I could write the better server-scaling compile target I would need then. Why should I invest in and help a leftist community when they will one day oust me too, with all my effort vandalized by cretins? Lastly, I am not building a complete compiler, but rather a transpiler to another compiler such as Go and/or TypeScript, which significantly reduces the landscape I need to master. I certainly don’t want to be rewriting Go at this juncture.

Your comment seems to underestimate what significant wealth can be marshaled to accomplish, although I will not try to hire anyone until I have at least a working skeleton proof-of-concept, so that I am knowledgeable enough about what’s needed. I need more experience first before charging headlong into expanding human resources.

I had made the tentative decision to embrace Scala 3 because, as you astutely point out, it is a lot of work to create a compiler. Why reinvent the wheel and needlessly create busy-work for myself? And I would prefer to add to something that already exists; the maxim that division-of-labor is virtuous. But unfortunately Scala is so discombobulated (I will not reiterate the numerous points of my prior post) that it’s really not a valid option for anyone suitably astute. It was 13 years ago when I wrote my response to the postulate that the complexity of the Scala 2.8 Collections Library was “the longest suicide note in history.” Notably, even in 2009 I knew everything that would come to pass by now (except that a chronic illness would delay me for more than a decade). Essentially it’s still the same crap, with Scala shooting itself in the feet with complexity, and not focusing on making a tool without corner cases that has significant adoption by developers and can actually help IT departments do something really well that they could not do as well with another tool.

Note the extension methods in Kotlin were my idea. Yeah, I was there helping Kotlin at the outset. Kotlin has been more successful than Scala.

I just remembered we had encountered another one of those Scala developers, @alexandru, who also must have had an overflowing tampon the day he decided to interact with us, in reaction to my refutation of every point in favor of Scala’s OOP in In Defense of OOFP. I guess he was involved in Typelevel, Monix, Cats. What is it with feminine “male” (probably lacking one or both of the ACTN3 and ACE genes) programmers and cats? Do they realize cats can carry a disease that can cause jaundice, blindness and/or schizophrenia? (I have one of the R athletic genes, not two as some Africans apparently do as an adaptation to malaria: the ACTN3 R/X genotype.)

@keean

It would be great if we could try and not stray too far from the topic of programming languages.

Agreed, it would be ideal. Unfortunately politics is a reality in this world. I don’t foresee why I would need to belabor the point after this response. And I am mixing in discussion about programming languages, for example: what is the threshold that could motivate someone to conclude that no existing programming language will do? And specifically why Scala, which we have discussed often, has perhaps still not pulled itself out of the “kitchen sink” routine, etc.

My general thoughts are that it is always hard to move the status quo. I think it is better to develop a new language rather than change an existing one. This is partly because it is hard to overcome momentum, but also because adding features not designed from the start leads to complex and unwieldy languages. I think opinionated languages that strongly encourage a certain coding style are better than kitchen-sink languages that try to do everything (poorly).

You’ve stated this before and I think it helps to have that statement here again at this juncture.

P.S. I’m pinging @jdegoes, who introduced me to Scala in 2007 (or was it 2008?) when I was expressing frustration with HaXe’s limited capabilities in the HaXe discussion group. I know for the past several years (or at least until I last exchanged messages with him circa 2020) he had been suffering from some sort of ailment in his gut and digestive organs, and I want to make him aware of research that Ivermectin has tumor-shrinking and cancer-inhibiting abilities, as well as having fully cured fatty liver disease in mice.

I started to have leg edema and chronic fatigue circa 2008, which worsened eventually into a near-death perforated-ulcer and ER-hospitalization episode in 2012, followed by declining chronic health. I was diagnosed with dengue, then tuberculosis in 2017. The 5000mg daily antibiotics for 6 months to treat the TB were highly liver-toxic, apparently triggering my endocrine system into some permanent state of metabolic disease which has kept my liver in a worsening fatty state ever since. That is why I had not been able to work effectively since ~2010. I began the Ivermectin treatment several days ago and I am very encouraged. Chronic disease is very, very difficult to handle from a mental-health perspective (readers can look up some research on this if interested). Of course it is no longer possible to buy the human form of Ivermectin without a prescription from the criminal-syndicate medical system, because some people were attempting to use it as a treatment for the imaginary disease COVID, but the equine form is ostensibly the exact same active ingredient, just in an injectable form at 1% concentration (which I dose orally).

John might remember his 2016 blog post Twitter’s Fucked (which it may still be); my comment on it then reflects what I wrote two days ago above.

@shelby3

shelby3 commented May 19, 2022

@keean

I think I finally arrived at a good solution to the modularity problem with total orders, abstract algebra and type classes.

So to repeat the background issue: while, for example, the concepts of sort and semigroup/monoid are total orders, their specifics are not. For example there’s ascending or descending sort order, and monoids can be additive or multiplicative.

I mentioned to you in a private message carving out the natural total-order portion of algorithms into an abstract algebra devoid of the complex modularity issues that plague type classes. You mentioned the need to be able to apply the partial-order remnants modularly, so as not to discard one of the key benefits of type classes: that the injection of the algorithms is orthogonal to the function hierarchies, a benefit for refactoring and such.

We can organize the so-called remnants as separate abstract algebras, e.g. ascending and descending being distinct total orders, which are forked off from the overall abstract algebra of sort order. Then an unopinionated usage requires only the root of the abstract-algebra tree. Whereas usage which needs to be opinionated requires a fork (or multiple forks) of the tree and injects them down the call hierarchy. Or if there’s no “opinionation” in the call hierarchy, then the specific fork is injected orthogonally to the call hierarchy at the module/modularity layer as you had mentioned (e.g. Scala’s using), as sketched below.

Tada! Why didn’t we think of that before?
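A minimal Scala 3 sketch of the idea, assuming hypothetical names (Sortable as the root of the abstract-algebra tree, Ascending/Descending as its forks; none of these are from an actual library):

```scala
// Root of the abstract-algebra tree: an unopinionated total order.
trait Sortable[A]:
  def compare(x: A, y: A): Int

// Forks of the tree: opinionated refinements of the root.
trait Ascending[A] extends Sortable[A]
trait Descending[A] extends Sortable[A]

given Ascending[Int] with
  def compare(x: Int, y: Int): Int = x.compareTo(y)

given Descending[Int] with
  def compare(x: Int, y: Int): Int = y.compareTo(x)

// Unopinionated usage requires only the root...
def isSorted[A](xs: List[A])(using ord: Sortable[A]): Boolean =
  xs.zip(xs.drop(1)).forall((a, b) => ord.compare(a, b) <= 0)

// ...while opinionated usage requires a specific fork, injected down the
// call hierarchy (or at the module layer) via `using`.
def rank[A](xs: List[A])(using ord: Descending[A]): List[A] =
  xs.sortWith((a, b) => ord.compare(a, b) < 0)

@main def orderDemo(): Unit =
  println(isSorted(List(1, 2, 3))(using summon[Ascending[Int]])) // true
  println(rank(List(1, 3, 2)))                                   // List(3, 2, 1)
```

Because both forks subtype the root algebra, an unopinionated caller can be handed either fork, and the choice can be made once at the modularity layer.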

@shelby3

shelby3 commented May 20, 2022

But seeing what a clusterfuck Scala is now, with @sjrd being the lead on Scala.js and Odersky running “kitchen sink” wild as usual, I really started to doubt the wisdom of depending on such an ecosystem. I investigated what would eventually be required for me to write a Go output-target Scala plugin, and plugin development is largely undocumented, with much needing to be gleaned through printout dumps and trial and error. Worse yet, I glanced at the Scala.js compiler-plugin code, and it has registered to process multiple phases (compiler stages). Presumably that means it has to collect typing information after the typing phase, then class information after another phase, etc. Massive, undocumented complexity.

I had made the tentative decision to embrace Scala 3 because, as you astutely point out, it is a lot of work to create a compiler. Why reinvent the wheel and needlessly create busy-work for myself? And I would prefer to add to something that already exists; the maxim that division-of-labor is virtuous. But unfortunately Scala is so discombobulated (I will not reiterate the numerous points of my prior post) that it’s really not a valid option for anyone suitably astute.

It’s instructive to elaborate on these points.

I remember being bewildered when I first joined Fractal Design Corporation in 1993 for the Painter X2 project, with an $80,000 salary and $1 million in stock options[1] (that’s $1m and $13m properly inflation-adjusted, so readers will understand how impoverished and enslaved you are!). Mark Zimmer (eventually personally recruited by Steve Jobs) and Tom Hedges (who could memorize an entire page of a phone book) explained to me that the reason they wrote all their own code and refused to rely on third-party libraries is that they wanted to control the outcome (i.e. didn’t want to be screwed over). Being the young, aspirant “power ranger” idiot that I was at that time, I was recalcitrant, as are these young idiots on the Scala team who don’t yet understand how the world works (or think they can change it, lol).

Eric S. Raymond’s Linus’s Law, “given enough eyeballs, all bugs are shallow,” doesn’t apply as well when a group of insiders has managed to write a giganormous body of highly undocumented, poorly commented, highly complex code.[2] Thus the learning-curve attrition is very high. Who would make the investment to overcome that hurdle if not employed[enslaved] by the Scala community? If no one does, there’s the risk that the upfront investment (i.e. sunk cost) could be worthless if your needs or intentions diverge from those leading the direction of the project, because it’s unlikely you will master it sufficiently to carry the burden of completely forking such a complex project. So open source really only works correctly when the projects are well documented and well organized, not some discombobulated, rushed mess like scalac and ostensibly again dotc (at least judging from the few minutes I expended perusing the code).

So if you want to know the real reason that (astute, sane) companies don’t embrace Scala, the above is easily as important as the single-minded FUD that @morgen-peschke was attempting to parlay. At least with Kotlin, corporations know they’re relying on a sane, profitable corporation beholden to its paying customers, with sane leaders who have a history of sanity, profitability and a paying customer base, unlike Scala’s history of discombobulated insanity with self-important, egotistical, entitled, free-riding leftists. It’s blatantly obvious that free-rider @morgen-peschke was trying to construct a strawman of FUD to elevate his claimed (probably exaggerated) mental handicap to importance by attempting to scare paying customers; these sorts of dysfunctional open source projects (where one lone, self-important, leftist, free-rider loon can run roughshod over community discussion) should scare the heebie-jeebies out of corporations. As much as I wanted those few unique features in Scala, it just isn’t worth it, especially when it only takes one day in their discussion groups to be attacked with the insanity. Scala’s community doesn’t understand what professionalism means, although they may think their virtue-signaling implementation of their CoC is some form of professionalism; that is yet another example of their aspirant “power rangers” delusion and the common psychosis.

Let this post serve as notice of the day Scala finally died. We hardly knew ye.

[1] Eventually became Corel Painter via acquisition.

[2] Even perusing what should be the simplest code of the compiler, the dotc parsing code (e.g. Parsers.scala and Scanners.scala) lacks any holistic documentation. Looking at my ~500 lines of EBNF grammar, which accomplishes a comparable syntax, I have numerous comments and a holistic overview explaining how the algorithm functions, which I will be refining as I translate this tool-validated, context-free grammar into actual code. Is the Scala grammar validated?
