
Add Wiki documentation for TLC options #297

Open
CIPop opened this issue Jul 18, 2023 · 3 comments
Labels
documentation Improvements or additions to documentation enhancement New feature or request

Comments

@CIPop
CIPop commented Jul 18, 2023

(I don't know how to send a PR towards the Wiki portion.)

I found the following two resources useful to better utilize the resources available on the machine running my model:

Also, there is a broken link on the Wiki at https://github.com/tlaplus/vscode-tlaplus/wiki/Settings: "TLC Model Checker: Statistics Sharing allows to opt-in / opt-out sharing of TLC usage statistics. More details." The "More details" link should be: https://github.com/tlaplus/tlaplus/blob/master/tlatools/org.lamport.tlatools/src/util/ExecutionStatisticsCollector.md

@lemmy
Member

lemmy commented Jul 19, 2023

I fixed the broken link and added a new link to https://github.com/tlaplus/tlaplus/blob/master/general/docs/current-tools.md#command-line-options. This current-tools.md document covers various aspects, including the -workers option. However, I'm a bit hesitant to add performance tuning tips since optimizing a VM can be quite challenging. Maybe we should consider referring to official Java documentation on performance tuning instead?
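
For context, a typical invocation exercising the `-workers` option from that document might look like the following. This is a sketch: `MySpec.tla` and the jar path are placeholders for your own setup, while `tla2tools.jar` and the `tlc2.TLC` main class are the standard names from the TLA+ tools distribution.

```shell
# Hypothetical example: run TLC with one worker thread per available core.
# "MySpec.tla" and the jar location are placeholders.
java -cp tla2tools.jar tlc2.TLC -workers auto MySpec.tla

# Or pin the worker count explicitly, e.g. to 4 threads:
java -cp tla2tools.jar tlc2.TLC -workers 4 MySpec.tla
```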

@lemmy lemmy added documentation Improvements or additions to documentation enhancement New feature or request labels Jul 19, 2023
@CIPop
Author

CIPop commented Jul 19, 2023

Thank you @lemmy - I didn't find the command-line options MD file in my searches earlier this week... much better than just the code comment!

Maybe we should consider referring to official Java documentation on performance tuning instead?

I spent considerable time figuring out how to use all the RAM and CPUs available on my machine, which was unexpected. TLC should automatically scale up to all available resources unless constrained through configuration.
Instead of providing the options, I would prefer it if the extension (or TLC) magically used all resources on my dev box, with an opt-in restriction mechanism.

In the absence of that, I would document how that can be achieved.

Also, -fpmem > 1 gives this warning, which was ambiguous to me:

Using -fpmem with an absolute memory value has been deprecated. Please allocate memory for the TLC process via the JVM mechanisms and use -fpmem to set the fraction to be used for fingerprint storage.

I had to spend some time figuring out what "Please allocate memory for the TLC process via the JVM mechanisms" really means, since I'm not familiar with JVM memory limits.
Even more, since I was running in a Hyper-V VM with dynamic RAM allocation (the VM starts with <1 GB actually allocated and reported by htop), I originally thought that was the issue.
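
What the warning is asking for can be sketched as follows: cap the JVM heap with the standard -Xmx flag and pass -fpmem a fraction instead of an absolute value. The jar path and spec name below are placeholders for your own setup.

```shell
# Give the TLC process an 8 GiB heap via the standard JVM -Xmx flag, then
# tell TLC to devote 25% of that heap to fingerprint storage.
# "tla2tools.jar" and "MySpec.tla" are placeholders.
java -Xmx8g -cp tla2tools.jar tlc2.TLC -fpmem 0.25 -workers auto MySpec.tla
```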

To make things more confusing, the official Oracle JVM documentation (https://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/jrdocs/refman/optionX.html) points out that -Xmx does not cap a Java process's total memory use:

Note: -Xmx does not limit the total amount of memory that the JVM can use.
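
One quick way to see what heap ceiling the JVM actually granted (which reflects -Xmx) and how many CPUs a worker-count setting would see is to query the `Runtime` API. This is a generic JVM sketch, not TLC-specific; the class name is illustrative.

```java
// HeapCheck.java -- print the JVM's effective heap limit and CPU count.
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the heap ceiling (influenced by -Xmx),
        // not the total memory the process may consume.
        long maxHeapMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("max heap (MiB): " + maxHeapMiB);
        System.out.println("available processors: " + cpus);
    }
}
```

Running it with and without an explicit -Xmx (e.g. `java -Xmx2g HeapCheck`) makes the effect of the flag directly visible.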

@lemmy
Member

lemmy commented Jul 20, 2023

I spent considerable time figuring out how to use all the RAM and CPUs available on my machine, which was unexpected. TLC should automatically scale up to all available resources unless constrained through configuration.
Instead of providing the options, I would prefer it if the extension (or TLC) magically used all resources on my dev box, with an opt-in restriction mechanism.
In the absence of that, I would document how that can be achieved.
Also, -fpmem > 1 gives this warning, which was ambiguous to me:
Using -fpmem with an absolute memory value has been deprecated. Please allocate memory for the TLC process via the JVM mechanisms and use -fpmem to set the fraction to be used for fingerprint storage.
I had to spend some time figuring out what "Please allocate memory for the TLC process via the JVM mechanisms" really means, since I'm not familiar with JVM memory limits.
Even more, since I was running in a Hyper-V VM with dynamic RAM allocation (the VM starts with <1 GB actually allocated and reported by htop), I originally thought that was the issue.

@CIPop The TLA+ Toolbox implements what you described here, except it uses an opt-out mechanism instead of an opt-in one. One option is to configure a TLC run in the Toolbox and then reuse its generated VM and TLC parameters in VSCode, until someone implements better support for large-scale model checking in VSCode.

To make things more confusing, the official Oracle JVM documentation (https://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/jrdocs/refman/optionX.html) points out that -Xmx does not cap a Java process's total memory use:

Are you sure that the JRockit documentation applies to your local VM? I believe JRockit was discontinued a while ago and partially merged into Hotspot. However, I haven't been keeping up with JVM development in recent years. It's possible that the art of VM tuning has completely changed.
