
Can't build the current v2_develop branch 37f59d2 #3450

Open
BDisp opened this issue May 6, 2024 · 44 comments
Assignees
Labels
build-and-deploy (Issues regarding building and deploying Terminal.Gui) · dependencies (Pull requests that update a dependency file) · docs (Can be resolved via a documentation update) · v2 (For discussions, issues, etc. relevant for v2)
Milestone

Comments

@BDisp
Collaborator

BDisp commented May 6, 2024

I get errors like this:

CS0016 Could not write to output file '..\gui-cs\Terminal.Gui\Analyzers\Terminal.Gui.Analyzers.Internal\obj\Debug\netstandard2.0\generated\Meziantou.Polyfill\Meziantou.Polyfill.PolyfillGenerator\M_System.Collections.Concurrent.ConcurrentDictionary2.GetOrAdd1(`0,System.Func{`0,0,1},``0).g.cs' -- 'Could not find a part of the path '..\gui-cs\Terminal.Gui\Analyzers\Terminal.Gui.Analyzers.Internal\obj\Debug\netstandard2.0\generated\Meziantou.Polyfill\Meziantou.Polyfill.PolyfillGenerator\M_System.Collections.Concurrent.ConcurrentDictionary2.GetOrAdd1(`0,System.Func{`0,0,1},``0).g.cs'.' Terminal.Gui.Analyzers.Internal ..\gui-cs\Terminal.Gui\Analyzers\Terminal.Gui.Analyzers.Internal\CSC

The main README file doesn't mention that this can happen when someone clones the repository.

@dodexahedron
Collaborator

dodexahedron commented May 6, 2024 via email

@dodexahedron dodexahedron self-assigned this May 6, 2024
@dodexahedron dodexahedron added the build-and-deploy, dependencies, docs, and v2 labels May 6, 2024
@dodexahedron dodexahedron added this to the v2 milestone May 6, 2024
@dodexahedron
Collaborator

There are also other means of including them in the project, but every approach has pros and cons. I opted for what seemed to me to be the best available mix of transparency and ease of use vs other alternatives.

Any feedback you have on the PowerShell scripts is more than welcome, too!

@dodexahedron
Collaborator

dodexahedron commented May 6, 2024

Here's what I'm thinking, if you guys could chime in with any thoughts @BDisp @tig :

I think that section of the /README.md file is the right place to mention this: add a step to the existing instructions there to run the necessary commands to build the analyzers after cloning, with a link to /Scripts/README.md. That keeps the section brief while providing easy navigation to a more detailed reference for anyone who needs it.

@dodexahedron
Collaborator

dodexahedron commented May 6, 2024

One option that came to mind, but that I don't think is a particularly good idea for both transparency and compatibility reasons, is defining a git hook to run it automatically on pull.

I mean, I could make the scripts do extra checking and such to avoid errors, but I don't like the idea of something like that automatically running without explicit user consent/interaction.

@BDisp
Collaborator Author

BDisp commented May 7, 2024

Thanks. That's a good thing to point out. I'll make sure instructions are clear. Yuck - Email reply was horrible... Anyway... Be sure to build them first, before loading the solution for the first time. I'll add explicit instructions and see if there's a good way to maybe make it happen automatically without too much trouble. As for right now, there are instructions in the README.md in the Scripts folder. Please do that and you should be all set. Or, you can just build the analyzer project ONLY, and then restart VS. The PowerShell modules make sure it is all done correctly, though, so that's the best way to do it.

I followed all the steps in this README and I can't compile the v2_develop branch in any way. I even ran powershell as administrator. Here is a screenshot of the script not working. What am I doing wrong?

[screenshot: the script failing]

@tig
Collaborator

tig commented May 7, 2024

Me neither. The scripts are obtuse and need better documentation @dodexahedron.

@BDisp I was able to do a dotnet build at the command line before starting VS and it works.

@BDisp
Collaborator Author

BDisp commented May 7, 2024

Me neither. The scripts are obtuse and need better documentation @dodexahedron.

@BDisp I was able to do a dotnet build at the command line before starting VS and it works.

So this is not documented. I understand that the Analyzers project has to be compiled before opening the solution, but it would be great if this were possible with VS2022 open, for example using a pre-build step to compile it if the DLL doesn't exist, or some other automation. Thanks for the info.

@tig
Collaborator

tig commented May 7, 2024

Also, @dodexahedron , the Publish github action for v2_develop is not working. See

https://github.com/gui-cs/Terminal.Gui/actions/runs/8989765575

Can you please advise?

@dodexahedron
Collaborator

dodexahedron commented May 7, 2024

I'll take a look at all of these questions in an hour or so, when I get back from dropping my truck off for a brake job.

I did run them in a clean environment to test, but I'll do it again on a brand new VM if I can't replicate the problems you guys got. That would most likely just mean I'm assuming something that isn't written explicitly in the docs that I'm taking for granted.

My initial thoughts, before dedicated troubleshooting, are:

  • Make sure you're in PowerShell Core (not Windows PowerShell). Launch it via running pwsh from any console to be sure.
  • Make sure your ExecutionPolicy in PowerShell is Unrestricted, because these aren't authenticode signed files. Or you can sign them yourself with a cert you trust. I'll sign them with a real EV cert at some future point.
  • cd to the scripts folder before importing (this should be optional, because it manages its own working directory, but maybe something isn't working right in other environments, so it's just something to isolate)
  • Be sure to pay close attention to periods and slashes in instructions. Some of those commands have to be absolutely verbatim unless I stick the modules up on PSGallery (which I suppose I can do)
  • If the prompt didn't change to having a TGPS prefix after import, you didn't import it.
  • You can't run any of it until you import it, like any other module. I could toss an import script in there, but it'd just be a one-liner.

If you get any un-good output from PowerShell, definitely let me know. Want to iron out any kinks in anything to do with it, be it the modules, the code, the instructions, the docs - whatever. :)
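The first couple of checks are easy to run directly. A minimal sketch, using only standard cmdlets and variables (nothing repo-specific):

```powershell
# Verify the shell and script policy before importing the module.
$PSVersionTable.PSEdition                  # 'Core' = pwsh; 'Desktop' = Windows PowerShell
Get-ExecutionPolicy -List                  # shows the effective policy per scope
# Allow unsigned local scripts for the current user (one of several ways to do it):
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted
```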

@dodexahedron
Collaborator

Oh and... Once you import, Get-Help works on all commands exported by the module, so definitely check that out both for help and just to see if there's anything that needs improvement/clarification/etc, if you would, plz. :)

@tig
Collaborator

tig commented May 7, 2024

I'll take a look at all of these questions in an hour or so, when I get back from dropping my truck off for a brake job.

I did run them in a clean environment to test, but I'll do it again on a brand new VM if I can't replicate the problems you guys got. That would most likely just mean I'm assuming something that isn't written explicitly in the docs that I'm taking for granted.

My initial thoughts, before dedicated troubleshooting, are:

  • Make sure you're in PowerShell Core (not Windows PowerShell). Launch it via running pwsh from any console to be sure.
  • Make sure your ExecutionPolicy in PowerShell is Unrestricted, because these aren't authenticode signed files. Or you can sign them yourself with a cert you trust. I'll sign them with a real EV cert at some future point.
  • cd to the scripts folder before importing
  • Be sure to pay close attention to periods and slashes in instructions. Some of those commands have to be absolutely verbatim unless I stick the modules up on PSGallery (which I suppose I can do)
  • If the prompt didn't change to having a TGPS prefix after import, you didn't import it.
  • You can't run any of it until you import it, like any other module. I could toss an import script in there, but it'd just be a one-liner.

If you get any un-good output from PowerShell, definitely let me know. Want to iron out any kinks in anything to do with it, be it the modules, the code, the instructions, the docs - whatever. :)

I just think you need to provide better docs with instructions. In some tome you wrote in some PR or Issue weeks ago, you gave examples of running the PS commands. I can't find those.

@dodexahedron
Collaborator

It's in the README.md in the Scripts folder, is it not?

Or is that doc unclear?

@dodexahedron
Collaborator

BTW, I'm back and ready to dive into this, so lay it on me 😅

@dodexahedron
Collaborator

dodexahedron commented May 7, 2024

Yep, the doc is there.

It does not name the specific commands exported by the modules, but I guess I either forgot to give specific analyzer build instructions or assumed people would just run Get-Command -Module Terminal.Gui.PowerShell for a command listing, as per common PowerShell usage.

But it does explain how to import and remove the module, as well as how to get help with commands.

I thought I had put a passage in there about how to list the commands (as above), but I guess that must have gotten lost in my editing and is something I think should go in there as a quick note or maybe a linked footnote for those unfamiliar with powershell conventions.

Or another idea:

What if I have the script it runs on import do the Get-Command call to list commands at the beginning? I imagine there's probably a simple way to hook that in there.

@dodexahedron
Collaborator

It even goes as far as adding the Scripts folder to your PSModulePath environment variable for that PowerShell session (and sets it back on Remove-Module or when the session closes; nothing on the system is actually changed). That allows the module to be referenced by name, since PowerShell otherwise only looks in specific directories for modules, and it also prevents certain possible though uncommon combinations of PowerShell settings from making the module impossible to import.

@dodexahedron
Collaborator

dodexahedron commented May 7, 2024

Also

If you actually install the module, either via Install-Module or by copying it all to one of the normal PSModulePath folders, you'd be able to run any of its commands without explicitly importing it, since modern PowerShell automatically loads installed modules whose manifests make their functions discoverable, upon the first attempt to call one of them.

But I wasn't ready for that yet and wanted exactly this kind of feedback and teething issues to get ironed out first, from various different points of view. :)

And I don't recommend doing that with the current version of them, anyway, because there are 2 or 3 assumptions in them that depend on the actual module manifest being in the Scripts folder of the repo root.

@dodexahedron
Collaborator

Another way to get information about a module before even importing it is this:

Test-ModuleManifest .\Terminal.Gui.Powershell.psd1

You just give it the path to a module manifest file.

Lists details about the manifest, including sub-modules included and all exported commands. You can explore properties of the returned object via any standard method you like.
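A sketch of that kind of exploration (Test-ModuleManifest and the properties shown are standard PowerShell; the relative path assumes your current directory is the Scripts folder):

```powershell
# Inspect the manifest without importing the module.
$manifest = Test-ModuleManifest ./Terminal.Gui.PowerShell.psd1
$manifest.ExportedFunctions.Keys    # names of every exported command
$manifest.NestedModules             # sub-modules included by the manifest
$manifest.Description               # any description the manifest carries
```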

@dodexahedron
Collaborator

dodexahedron commented May 7, 2024

Current work items I'm thinking of doing, for this and other questions/comments etc:

  • On main project README.md, in the build instructions, add all commands to do it in the simplest way possible to the instructions.
  • In the README.md in the Scripts folder, add a section or sub-section explaining usage beyond import and removal of the module.
  • In the root Terminal.Gui.PowerShell module, output a list of important commands and any required syntax or how to get the detailed help already included in the modules.

And some less-critical but maybe helpful bits:

  • Maybe also add a note about the PowerShell ExecutionPolicy setting, since it defaults to something that won't let you run unsigned scripts, which a new or otherwise novice PowerShell user might not be aware of (though PowerShell does tell you about it if the policy is set too tight to allow it).
  • Maybe a screenshot or quick screen capture video showing the import and building the analyzers?

And instructions for commands to run of course will be in copy/paste-friendly code fences with powershell language specified for syntax coloring, as existing ones already are.

@dodexahedron
Collaborator

some tome you wrote

That doesn't sound like me at all! 😇 😅

@dodexahedron
Collaborator

dodexahedron commented May 7, 2024

Also, @dodexahedron , the Publish github action for v2_develop is not working. See

https://github.com/gui-cs/Terminal.Gui/actions/runs/8989765575

Can you please advise?

This looks like a dependency issue, likely caused by the build steps happening in the wrong order, plus the analyzer project simply not being built because nothing tells it to. I'll fiddle with it on a Linux system to get you an appropriately tweaked line to make it happen correctly. The solution configuration is currently set NOT to build the analyzers, as a workaround to keep Visual Studio from breaking itself, so just building the solution isn't enough. Most likely all you'll actually need to do is explicitly build the analyzer project first, then build Terminal.Gui.

I made an off-hand remark, mostly as a future note to myself, in the PR that was submitted with the GitHub Action improvements, about us likely benefiting from the cache functionality available in the GitHub Actions runners. This is a good use for it in particular, since that stuff rarely needs to be rebuilt unless analyzer code changes or its dependencies are updated.

@dodexahedron
Collaborator

Yeah, the publish action is going to need changes for sure, and probably should be split regardless, since the build isn't really the same anyway, for v1 vs v2.

For sake of formality, cleanliness, and no unexpected side-effects from anything, v2 and v1 deserve their own actions. Which is of course simple.

I'll go ahead and write up the changes for the publish action(s) in one PR as my first to-do, starting in just a few minutes, after I finish stuffing my face.

@tig
Collaborator

tig commented May 8, 2024

Yeah, the publish action is going to need changes for sure, and probably should be split regardless, since the build isn't really the same anyway, for v1 vs v2.

For sake of formality, cleanliness, and no unexpected side-effects from anything, v2 and v1 deserve their own actions. Which is of course simple.

I'll go ahead and write up the changes for the publish action(s) in one PR as my first to-do, starting in just a few minutes, after I finish stuffing my face.

This is a fix:

    - name: Build Release
      run: |
        dotnet-gitversion /updateprojectfiles
        Import-Module ./Scripts/Terminal.Gui.PowerShell.psd1
        Build-Analyzers
        dotnet build --no-restore -c Release

@dodexahedron
Collaborator

dodexahedron commented May 8, 2024

I've got a much more robust solution in the works which also tests on windows and mac as well, among other things.

But that's certainly plenty as an immediate fix so I say SHIP IT! :)

@dodexahedron
Collaborator

dodexahedron commented May 8, 2024

Oh I did have a quick question (though I'm sure I can find it myself). What, specifically, is that gitversion step/tool doing?

Is it just substituting parts of the version on build?

Because we can do it without that dependency, which also frees us from the restriction on touching those parts of the csproj files (the empty ones could just be removed, in fact), since the version can be provided on the command line or in a compiler response file, and pretty much everything about the context the runner is executing in is already implicitly available in variables accessible to the actions.

@dodexahedron
Collaborator

Oh. I'd suggest, for formality's sake, to add a Remove-Module Terminal.Gui.PowerShell as the final step, to avoid interference with other possible jobs, especially on forks and such.

@dodexahedron
Collaborator

dodexahedron commented May 8, 2024

And is the no-restore a good idea for the build? The Build-Analyzers part won't necessarily cause all of the dependencies from other projects to be restored, if it doesn't specify them.

And remember - that's only valid for the v2_develop branch unless I back-port them, which would require several non-trivial changes for the generated code to be compilable on the earlier TFMs, so I'm not in favor of doing that.

Unless I just made a dummy empty module as a temporary kludge to make the same script work for v1 with no effect on the build, I guess.

But I should have something done tonight anyway, so I'd just say don't bother beyond what you already have, for a temporary fix.

But add -ErrorAction SilentlyContinue to all powershell calls so they don't cause it to error out on v1 builds.
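Putting those suggestions together, the "Build Release" step might look something like this (a hedged sketch combining the fix quoted in the thread with the Remove-Module and -ErrorAction ideas; untested here):

```powershell
dotnet-gitversion /updateprojectfiles
# SilentlyContinue so v1 builds, which don't have the module, don't error out.
Import-Module ./Scripts/Terminal.Gui.PowerShell.psd1 -ErrorAction SilentlyContinue
Build-Analyzers -ErrorAction SilentlyContinue
dotnet build --no-restore -c Release
# Clean up so the module can't interfere with later jobs.
Remove-Module Terminal.Gui.PowerShell -ErrorAction SilentlyContinue
```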

@dodexahedron
Collaborator

dodexahedron commented May 8, 2024

RE: The gitversion tool:

I noticed the Release.ps1 script in root already started to do that sort of thing.

Something similar either in PowerShell or just in the workflows themselves is the same basic idea I was thinking, depending on what, exactly, we want in the full version string (InformationalVersionString is the property on the output assembly that has ALL of it, including any VersionSuffix component).

And building the whole library was already planned for inclusion in the Terminal.Gui.PowerShell.Build module, too, so it naturally fits there and I can expose simple and validated parameters for stuff there.

There's actually already a no-op function in it for that (Build-TerminalGui), which I put in to get initial feedback on what might be wanted before I finish it.

@dodexahedron
Collaborator

Bah. For a few reasons, I can't test these properly without cloning the repo to a new sandbox that isn't a fork.

So, in the interest of time, I'm reducing the scope of what I'm going to set up for the initial reorg so I can finish quicker.

Still probably going to take a little longer than being ready tonight, though, just due to the tedium and frustration of the process with github, and the fact I'll likely get annoyed and want to do some other real work at some point instead. 😅

But it's coming - don't worry. It'll probably just end up being much simpler than the likely overkill I was imagining. 😆

@dodexahedron
Collaborator

dodexahedron commented May 8, 2024

Ha... Ya know what...

So I'm using some of my internal github workflows as templates to make life easier and I'm realizing many of them also have spots for some improvements due mostly to the github api evolving and these mostly just being re-used since they work - but they could be better.

As such, I'll make those tweaks internally so they get a bunch more real use/testing than if I just make bespoke workflows for us. Also means other eyes will be on them, too.

My top priority for tomorrow's work day.

That'll also help me filter out things that are over-engineered for this project (one of my biggest personal urges/challenges to ignore). 😅

I still support the temp fix, though, just so we have working CI for v2.

I gotta tell myself: "Don't let better be the enemy of good enough."

@dodexahedron
Collaborator

dodexahedron commented May 8, 2024

A couple of important questions for you @tig that aren't blockers for me being able to write most of the basics up but which are needed to finish and properly test:

(As questions with informational commentary about ways I do it for work projects or have seen it done elsewhere, for your consideration)

  • Would you like builds/tests to continue to run specifically for every push to the mainline branches, and version tags or tweak that a bit, at least for v2?
    • I often do a lot of what we do, too, but more selective, with most basic being tag-based only and with tags for different purposes. Fairly often there are PR events, but we try to make them fork-friendly so the author can use them too. Checks can still require that in a PR on the base repo. Triggering on pushes to mainline are less common (maybe 1/4 of our repos) because it's already been done by then, and explicit triggering can be used if and when necessary. That helps reduce noise when doing several merges in a row, especially.
    • Also, hybrid approaches where maybe builds run but tests don't for certain actions can also help make things less noisy and speed some interactions up.
    • Tests can also be run more selectively.
      • If tests are annotated with categories and such, the test runner can be told to filter on them.
      • I usually have the first place I care about tests do only the most basic tests, for quick confirmation it at least runs.
      • Debug builds or other triggers then cause all "normal" tests to run.
      • Release builds also include much more exhaustive and/or expensive tests like using wider parameter ranges as well as tests that are just slow and unhelpful to run with every push for whatever reason.
  • Mind if I componentize it?
    • A lot of reasons and benefits, like simplicity, less repetition (DRY), and easier parallelization of runs (such as for testing on more platforms).
    • Mainly, defining and separating pre-build, build, test, and release separately (with release itself being split to one for nuget and one for github), all with workflow_call triggers, which essentially turns them into "functions" that get called as needed in the workflows with automated and manual triggers like usual.
    • That also makes it so that something relevant to one of those platforms doesn't mandate another push to the other one.
    • Things like analyzers can also be built once and then cached/re-used rather than doing it for every single run, if they haven't actually changed.
  • Since I'm already writing all that powershell stuff, how about having the CI system use it all the way around, for consistency?
  • Want any benchmarking runs?
    • I could define a skeleton now to be used by benchmark tooling when we get to that kind of thing.
    • Handy to help catch performance regressions and just generally keep an eye on how we're doing with performance metrics.
  • Any other needs, wishes, nice-to-haves, etc you want to see?
  • Would you like fries with that?
    • Too bad. I already ate them. 😛
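As an illustration of the workflow_call idea mentioned above (file names and inputs are hypothetical, not from the repo):

```yaml
# .github/workflows/build.yml -- a reusable "function" workflow.
name: build
on:
  workflow_call:
    inputs:
      configuration:
        type: string
        default: Release

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: dotnet build -c ${{ inputs.configuration }}

# A caller workflow then invokes it like a function:
#   jobs:
#     release-build:
#       uses: ./.github/workflows/build.yml
#       with:
#         configuration: Release
```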

@BDisp
Collaborator Author

BDisp commented May 8, 2024

I tried to put the three Analyzers-related projects in a separate solution, but it is simply impossible to compile with VS2022. Do you know why? Could VS2022 be holding locks on processes, folders, or files that prevent compilation? Is it some NuGet package that can't be restored from within VS2022?

@BDisp
Collaborator Author

BDisp commented May 8, 2024

I don't think it's something being locked by VS2022, because if I open PowerShell while Terminal.Gui is open in VS2022 and run the "dotnet build" command, it compiles fine. So I suspect it's some NuGet package that VS2022 cannot restore and that only restores from the command line, and I wouldn't be surprised if it's Meziantou.Polyfill itself.

@dodexahedron
Collaborator

dodexahedron commented May 8, 2024

VS is a .NET Framework host, which has a few consequences.

The most important ones relevant to this are:

  • It cannot load .NET (Core) assemblies.
  • It cannot unload them once they are loaded.

That's why you run the scripts.

Once the analyzer project is built, you don't need to build it again and VS will work fine with it.

Open pwsh

Navigate to the Scripts folder.

Import-Module ./Terminal.Gui.PowerShell.psd1
Build-Analyzers -AutoClose -AutoLaunch -NoClean -Force
# If you are done with powershell at that point, also run the next line or just exit
Remove-Module Terminal.Gui.Powershell

This will do all of this for you, automatically:

  1. Close Visual Studio if it is open and has the Terminal.sln solution open
  2. Build the analyzers for both the Debug and Release configurations, without further prompting and without cleaning anything else.
  3. Launch Visual Studio again when it's done

You will then have a working setup.

Don't ever run "clean solution" or "rebuild solution" (which implies clean) in Visual Studio, or you will need to run that again.

Just build and run the projects you want to build and run.

@dodexahedron
Collaborator

dodexahedron commented May 8, 2024

After they're built, the only reasons you'd ever need to build them again on the same machine are:

  • The clean target on the solution has been run, for any reason
  • The code or anything affecting the compilation of the Terminal.Gui.Analyzers.Internal project, specifically has been altered
  • You clone a fresh repo and thus don't have them built yet
  • The compiled analyzer assembly has been removed or otherwise doesn't exist for any reason.

@BDisp
Collaborator Author

BDisp commented May 8, 2024

Very well. Don't forget to add a link in the main README to the Scripts README with all these instructions, please. Thanks.

@dodexahedron
Collaborator

Another fun inconsistency with Visual Studio (yes, even still in 2022 17.10 Preview 6): you can alter the code of the analyzer while VS is still running, and anything in a project referencing it will immediately see the IntelliSense for anything public in the analyzer project. But VS won't load the actual new analyzer assembly, so you'll have IntelliSense that doesn't match the generator until you re-launch.

@dodexahedron
Collaborator

Very well. Don't forget to add a link in the main README to the Scripts README with all these instructions, please. Thanks.

💯

It's on my list, but I'm adding more documentation and some changes/additions to the PowerShell modules, too, so that'll be updated in the same PR when it's ready.

@dodexahedron
Collaborator

dodexahedron commented May 8, 2024

If you need a reference, everything but the Build-Analyzers step is in the current README.md in the Scripts folder.

You know...

Detailed documentation for the scripts might be a good fit for the wiki, too. 🤔

There's already help text similar to how intellisense works in the modules. The -Full version of that could be dumped to wiki pages. That would also enable the use of the -Online parameter with Get-Help, which could be linked to each function's page in the wiki.
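A sketch of how that dump could work (Get-Command, Get-Help, and Out-File are standard cmdlets; the module path and output folder are assumptions):

```powershell
Import-Module ./Terminal.Gui.PowerShell.psd1
Get-Command -Module Terminal.Gui.PowerShell | ForEach-Object {
    # One full help text per exported command, ready to paste into wiki pages.
    Get-Help -Name $_.Name -Full | Out-File -FilePath "docs/$($_.Name).help.txt"
}
```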

@dodexahedron
Collaborator

Or another thought I just had...

It could be part of the build, release, or publish process to do exactly that.

I'll have to remember that for later. Automating that would be helpful and keep things up-to-date.

@tig
Collaborator

tig commented May 9, 2024

It's in the README.md in the Scripts folder, is it not?

Or is that doc unclear?

You need to provide much more robust documentation. I have no idea what commands you defined, so I can't use Get-Help, because I don't know what to call Get-Help on.

Sorry if I'm being dense and missing something obvious, but this needs to be done such that it's completely obvious to someone without any context.

@dodexahedron
Collaborator

This comment belongs over here, so just copying from the other issue:

I'm going to try out the doc processor/formatter thing that Microsoft suggests for PowerShell code that already has get-help metadata. I've never used it myself, since it's never really come up internally, but apparently it's what they use to generate theirs, too, so it should have some really nice results if that's actually the case.

That's for generating API documentation style stuff.

The README.md in both the Scripts folder and the repo root are both getting necessary steps for quick-start build instructions, of course, and the one in the Scripts folder is getting a ton more detail, assuming basically zero PowerShell experience. Aside from launching it, I guess. But if they can't figure out how to launch PowerShell and navigate around a CLI, they're gonna have a tough time with building and consuming a dll project for creating TUIs. 😅

@dodexahedron
Collaborator

dodexahedron commented May 10, 2024

And no, it's not you being dense, really.

I've just been programming and sysadmining for 30+ years and sometimes take stuff that is second nature or practically muscle memory for granted.

I may have gone in with certain expectations, as well, since there already was a powershell script in the repo before I got started on those, but that doesn't matter.

TBH, though, I seriously must have been tired that night, too, because there's no way I intended to not have it give a step-by-step from the first PR. Not sure what happened there. I mean you see how much I write lol.

Anyway...

My goal is for it to be copy-paste ready for someone stumbling upon the repo out of nowhere.

I do like 90% mean that tongue-in-cheek comment at the end of my above comment, though. We do have to draw the line somewhere reasonable for what we want to deal with supporting, in terms of users who are so novice that I have to replicate the entire MS Learn PowerShell fundamentals doc tree. The library itself is significantly more difficult to use than a PowerShell module in plain text files exposing like 6 commands (which is to be expected).

So, I am going to make it simple, but I'm not going to be explaining the inner workings of PowerShell outside of a couple of bits I've added in my working copy that describe the layout/design of the modules and how they're intended to relate and work and all that.

And I'll generate the docs from the help metadata I wrote, of course.

Then y'all can let me know if there's anything that still doesn't take care of everyone.

Same as any other new component 🤷‍♂️

I'll probably do a little more in bits and pieces, tonight, but I'm pretty drained rn and probably gonna at least shut the programmer part of my brain down pretty soon. 😪

@tig
Collaborator

tig commented May 11, 2024

A couple of important questions for you @tig that aren't blockers for me being able to write most of the basics up but which are needed to finish and properly test:

(As questions with informational commentary about ways I do it for work projects or have seen it done elsewhere, for your consideration)

  • Would you like builds/tests to continue to run specifically for every push to the mainline branches, and version tags or tweak that a bit, at least for v2?

Yes.

  • I often do a lot of what we do, too, but more selective, with most basic being tag-based only and with tags for different purposes. Fairly often there are PR events, but we try to make them fork-friendly so the author can use them too. Checks can still require that in a PR on the base repo. Triggering on pushes to mainline are less common (maybe 1/4 of our repos) because it's already been done by then, and explicit triggering can be used if and when necessary. That helps reduce noise when doing several merges in a row, especially.

  • Also, hybrid approaches where maybe builds run but tests don't for certain actions can also help make things less noisy and speed some interactions up.

  • Tests can also be run more selectively.

    • If tests are annotated with categories and such, the test runner can be told to filter on them.
    • I usually have the first place I care about tests do only the most basic tests, for quick confirmation it at least runs.
    • Debug builds or other triggers then cause all "normal" tests to run.
    • Release builds also include much more exhaustive and/or expensive tests like using wider parameter ranges as well as tests that are just slow and unhelpful to run with every push for whatever reason.
  • Mind if I componentize it?

No.

  • A lot of reasons and benefits, like simplicity, less repetition (DRY), and easier parallelization of runs (such as for testing on more platforms).

  • Mainly, defining and separating pre-build, build, test, and release separately (with release itself being split to one for nuget and one for github), all with workflow_call triggers, which essentially turns them into "functions" that get called as needed in the workflows with automated and manual triggers like usual.

  • That also makes it so that something relevant to one of those platforms doesn't mandate another push to the other one.

  • Things like analyzers can also be built once and then cached/re-used rather than doing it for every single run, if they haven't actually changed.

  • Since I'm already writing all that powershell stuff, how about having the CI system use it all the way around, for consistency?

  • Want any benchmarking runs?

    • I could define a skeleton now to be used by benchmark tooling when we get to that kind of thing.
    • Handy to help catch performance regressions and just generally keep an eye on how we're doing with performance metrics.
  • Any other needs, wishes, nice-to-haves, etc you want to see?

  • Would you like fries with that?

    • Too bad. I already ate them. 😛

Please ensure you've seen this: https://github.com/gui-cs/Terminal.Gui/tree/develop/Terminal.Gui

Also

@dodexahedron
Collaborator

Looks like a good place to follow up when I've got more brain cells left operating. Plus, gotta get up for mother's day! 😅

I'll flag this notification for follow-up so I come back to it sometime soonish.

I'm on like a 4th degree of distraction right now from all the stuff going on at once. 😅

Oh well. It's good busy.

@tig tig modified the milestones: v2 Release, V2 Alpha May 25, 2024