Performance degradation between BC23 and BC24 #3487
Have you set usePwshForBC24 to true or false in settings? (or are you using the default value for this) |
No, not in those scripts. Later in the pipeline I do. Basically I'm using the default. |
Do you have detailed logs of before and after with timestamps? |
Please find the attached logfiles. They contain the raw logfile output from Azure DevOps for the prepare and compile steps.
For BC23
Besides this, all should be the same. |
I'll also add the comparison for PostBuild. There is a huge difference there as well, but I expect I used usePwshForBC24 there. PostBuild is for creating bacpacs. |
Just confirming that I have also noticed this over the past two weeks. Our pipelines that used compiler folders were clocking in around 2 minutes; now they are closer to 4 or 5. Full container builds are seeing roughly the same increase in time. It's not just you, but I don't know if it was a BcContainerHelper change or a BC change. |
The whole -usePwsh thing is new to me, so maybe this is unrelated, but when running simple commands with Invoke-ScriptInBcContainer it will generally take 2x as long with PS7 compared to PS5 (-usePwsh:$false):

```powershell
Measure-Command { Invoke-ScriptInBcContainer $containerName -scriptblock { $config = Get-NavServerConfiguration $ServerInstance } -usePwsh:$false }
```

```
Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 833
Ticks             : 8335212
TotalDays         : 9.64723611111111E-06
TotalHours        : 0.000231533666666667
TotalMinutes      : 0.01389202
TotalSeconds      : 0.8335212
TotalMilliseconds : 833.5212
```

```powershell
Measure-Command { Invoke-ScriptInBcContainer $containerName -scriptblock { $config = Get-NavServerConfiguration $ServerInstance } -usePwsh:$true }
```

```
Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 1
Milliseconds      : 639
Ticks             : 16392312
TotalDays         : 1.89725833333333E-05
TotalHours        : 0.000455342
TotalMinutes      : 0.02732052
TotalSeconds      : 1.6392312
TotalMilliseconds : 1639.2312
```

These were measurements on a BC23 container. In BC24 I'm unable to get these results (800ms):
|
This is true for any container, by the way, because of the version check, which adds the 800ms to any command executed by Invoke-ScriptInBcContainer:
|
and if you set $bcContainerHelperConfig.usePwshForBC24 = $false - then everything is back to normal? |
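For readers following along, the toggle referenced here is a module-level config value, set after importing BcContainerHelper; a minimal sketch (the surrounding cmdlet flow is illustrative, not the reporter's actual pipeline):

```powershell
Import-Module BcContainerHelper

# Fall back to Windows PowerShell 5 inside BC24 containers
# (the default in the affected versions is to use PowerShell 7 / pwsh)
$bcContainerHelperConfig.usePwshForBC24 = $false

# Subsequent cmdlets - New-BcContainer, Invoke-ScriptInBcContainer,
# Compile-AppInBcContainer, ... - pick the setting up from here on
```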
Freddy, is your question related to my initial problem? The build uses the default cmdlet to create the container, without setting $bcContainerHelperConfig.usePwshForBC24. Next it compiles and publishes using the BcContainerHelper cmdlets. Basically both steps are significantly slower, but mainly the publish step. |
If that question was related to my comments: setting it to $false has the same effect as using -usePwsh:$false (as you would expect) and lowers execution times. But BC24 is still slower than BC23 (~800ms compared to ~1400ms). I have no idea where these differences come from... |
@RonKoppelaar interesting findings. I checked by entering two containers (BC23/BC24) and running a simple Get-NAVServerInstance:
BC23
BC24
Huge difference! Is this coming from the PS5 -> PS7 wrapper? |
@marknitek Yes, this is the reason why containers run PowerShell 7 by default. |
@freddydk i tested with both now:
|
@freddydk regarding PSRemoting in PS7: I tinkered around a bit and I think it works quite well, but it must be enabled first. I had a look at
But that failed at first because there was no configuration for "PowerShell.7". Then I enabled PSRemoting with:
Then I was able to use the session:
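The exact commands from this comment were lost in extraction; a hedged sketch of what enabling pwsh remoting typically involves (the configuration name matches the error above, but the credential handling is an assumption):

```powershell
# Inside an elevated pwsh (PowerShell 7) prompt in the container:
Enable-PSRemoting -Force   # registers the "PowerShell.7" session configuration

# From the host, target that configuration explicitly:
$session = New-PSSession -ComputerName $containerName `
                         -ConfigurationName 'PowerShell.7' `
                         -Credential $credential
Invoke-Command -Session $session -ScriptBlock { $PSVersionTable.PSVersion }
```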
|
That is great news, thanks for investigating. |
@marknitek what did you use as credential? Did you create a windows user inside the container? |
It looks like you are using Windows Authentication, right? |
@freddydk I did not create anything, I just used the code that was already present. The code you have in place for generating all this remains valid and works across both powershell and pwsh:
|
Got it - that was what I was after - it creates a local administrator in the container which it can use for this. |
@freddydk is there anything I can do to further test/validate performance issues which could help for a resolution? |
I will modify Invoke-ScriptInBcContainer to use sessions in all combinations (ps5 -> ps5, ps5 -> ps7, ps7 -> ps5 and ps7 -> ps7). When that is done, I would like some tests and specific issues on which things might still be slow. |
@marknitek what OS are you running? |
Please try the ContainerHelper changes from this PR: and give me some feedback on what changes you see. If a session is created from PS5 in admin mode, New-PSSession with the ContainerId parameter is used - no credentials are needed.
should show the PS version used in the container. On my machine, the first invoke to create the winrm session takes 1-2 seconds (unless I am running PS5 as admin) and after that - every call takes ~300ms |
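The selection logic described above could be sketched like this (not the actual BcContainerHelper implementation, just an illustration of how the host's PowerShell edition and elevation pick the session type; variable names are placeholders):

```powershell
# Sketch: choose a session type based on host PowerShell edition and elevation
$hostIsPS7 = $PSVersionTable.PSEdition -eq 'Core'
$isAdmin   = ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
             ).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)

if (-not $hostIsPS7 -and $isAdmin) {
    # Elevated PS5 can attach straight to the container - no credentials needed
    $session = New-PSSession -ContainerId $containerId -RunAsAdministrator
}
else {
    # All other combinations fall back to a WinRM session using the
    # generated local admin user inside the container
    $session = New-PSSession -ComputerName $containerIp -Credential $winRmCredential `
                             -Authentication Basic
}
```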
Here are some examples of running the new session mechanism in the different modes, which give different results.
Running in PS5 in admin mode
Running in PS5 in non-admin mode
Running in PS7 in admin mode
Running in PS7 in non-admin mode
An interesting observation is that the second time we run a PS5 session inside the container (where the session is cached), it for some reason takes much longer than the first time - I have no idea why this happens. Subsequent invokes are all faster - around 800ms. Note that when running any combination besides PS5 in admin mode, the user is a newly created local admin inside the container called winrm, with a hardcoded password set to the UUID of the host computer. On my machine, SSL connection to winrm doesn't work at all - I have to set:
If you want to always use WinRm (also on PS5 in admin mode) you can set:
This isn't set by default, as it would force everybody to always use the new behavior - and with the number of people using BcContainerHelper, I know that any change causes disruption. Setting useSession to false will cause Invoke-ScriptInBcContainer to use docker exec when running scripts inside the container.
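Collecting the configuration switches mentioned in this thread in one place (useSession and alwaysUseWinRmSession are named in the thread; the SSL property name is an assumption - inspect $bcContainerHelperConfig for the exact names in your version):

```powershell
# Named above: $false makes Invoke-ScriptInBcContainer use docker exec instead of sessions
$bcContainerHelperConfig.useSession = $true

# Named in the measurements below: force WinRM sessions even for PS5 in admin mode
$bcContainerHelperConfig.alwaysUseWinRmSession = $true

# Assumed property name: work around a broken SSL connection to winrm
$bcContainerHelperConfig.useSslForWinRmSession = $false
```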
Running in PS5 in admin mode
Running in PS5 in non-admin mode
Running in PS7 in admin mode
Running in PS7 in non-admin mode
Using docker exec for script invocation inside the container is (on my machine) fairly consistent, and the first call is faster than creating a winrm session. Things to investigate:
|
I wanted to investigate no. 1 from above, but the behavior disappeared without any changes...
Will not investigate further unless other people report strange things... |
Found out that whoami for some reason sometimes takes a lot of time (5 seconds) - replaced it with $env:username |
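The difference is easy to see with Measure-Command: whoami shells out to an external executable, while $env:USERNAME is an in-process variable read (the ~5s figure is from the comment above, observed only intermittently):

```powershell
Measure-Command { whoami }          # external whoami.exe; intermittently took ~5 seconds in a session
Measure-Command { $env:USERNAME }   # environment variable lookup; effectively instant
```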
Created new containers and ran without using whoami.
Running in PS5 in admin mode (with alwaysUseWinRmSession = false)
Running in PS5 in admin mode (with alwaysUseWinRmSession = true)
Running in PS5 in non-admin mode
Running in PS7 in admin mode
Running in PS7 in non-admin mode
Things still to investigate:
|
@freddydk lots of investigation regarding session management. Is there anything I can already test in my build containers regarding the compile and publish steps, to see any improvements? |
@marknitek in the latest containerHelper I have removed all -usepwsh:$false - and there is a pscoreoverrides.ps1 in c:\run with these lines:
You can override this file by adding one with the same name in the my folder, if you don't want these or if you want to add more. These pscoreoverrides are loaded in prompt.ps1 together with the PS7 BC modules. |
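A sketch of overriding pscoreoverrides.ps1 via the my folder (the host-side Extensions path shown is BcContainerHelper's usual layout; verify it matches your setup):

```powershell
# c:\run\my inside the container maps to this host folder by default:
$myFolder = "C:\ProgramData\BcContainerHelper\Extensions\$containerName\my"

# A file with the same name here takes precedence over c:\run\pscoreoverrides.ps1
Copy-Item -Path ".\pscoreoverrides.ps1" -Destination (Join-Path $myFolder "pscoreoverrides.ps1")
```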
@freddydk Once I get everything together I'll post a new issue. I don't think it's anything for BcContainerHelper so far. The size of the compiler folder seems to have increased by +/-30% to over 1GB, and ALTool is significantly slower when enumerating through the artifacts compared to the code analysis that was being run before. I think Microsoft has dumped every app in there as well. |
Thanks @MattTraxinger - I saw that myself. |
@freddydk Fair enough. Those are really the two big differences I saw. The compiler folder is now 1.34GB instead of 984MB, and the increase in copy time is proportional to the increase in size. So be it. The enumerating-apps part is really bad, though: it's gone from 3s to 38s in my tests. It's all just a bunch of small things: 30s here, 30s there, 15s over there. Not that these were long builds to begin with, but suddenly they take twice as long. I won't put it in a separate issue unless you want me to, since there's not a lot to be done. |
@freddydk Below is the script to create my container.
Which results in the error mentioned earlier. |
My guess is that the ImageName parameter causes the issue - that you have an old image, which it is reusing. |
AlwaysPull is part of the parameter list. But I can remove the old local image to be sure |
Remaining local images...
Recreating the container using BcContainerHelper version 6.0.16-preview1184. After creating the container, these are the images:
|
On my laptop (W11) the above script just works... :-( But not on the buildserver (W2019)... |
Did you try without the image name on the build server? |
Are you using the ContainerHelper preview on the build machine? It looks like it doesn't pull the ltsc2019-dev image??? |
BcContainerHelper is version 6.0.16-preview1184 |
Without a local Docker image it passed. Should I try to remove all images? |
Worth a try - |
FYI we are running multiple agents on our build machine, each agent with its own local (admin) user. We install the PS modules in the user's Documents folder, meaning each agent can have its own PS module versions. For now I've removed ALL local images on the machine. |
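Per-agent module versions like this are typically achieved with a CurrentUser-scoped install (a sketch of the setup described above; -AllowPrerelease requires PowerShellGet 2.x):

```powershell
# Installs under the agent user's Documents\WindowsPowerShell\Modules folder,
# so each agent user can pin its own BcContainerHelper version
Install-Module BcContainerHelper -Scope CurrentUser -AllowPrerelease -Force
Get-InstalledModule BcContainerHelper | Select-Object Name, Version
```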
After cleaning up all images and using -DockerImage "local", it failed with the same error. |
So, on windows server 2019, it cannot build an image on the fly. |
For now I put a workaround in my build to not use an image when using NewDatabase and an artifact URL like /24, just to bypass this error and at least see if the other improvements work. |
@RonKoppelaar I have a repro of the problem and know what is happening - will see if I can figure out why? |
I was able to run my build end-to-end. Performance-wise I still see two differences:
There still seems to be an issue there. If you want I can create a local testset with all the apps to publish and to export. |
The reason why things failed when creating an image was due to session caching - which probably also means that your build was slow due to this. |
Can confirm the "local" image is working again. Performance is a bit harder to confirm, as both builds are running on the same machine. All in all it is a bit slower. Will run a new build when these are ready. |
Finished testing with latest pre-release 6.0.16-preview1186:
I'm setting up a local test case with BC23.5 and BC24. Keep you posted. |
I created the test case locally. Based on the results (using Hyper-V), all is kind of comparable - no big performance diff.
Local 23.5
Local 24.0
Not sure if process isolation makes a big difference, but I'll check. |
I would like to investigate the publish time - that one is the only thing that feels wrong. Thanks for this investigation @RonKoppelaar |
The latest preview of ContainerHelper should fix the last issues... |
Builds are running on my end again. Will also do the local check with the test case I provided yesterday |
Builds are kind of comparable with 23.5 again. Still a bit slower, but as you mentioned, the publish cmdlet itself in BC24 is a bit slower. I also ran the local test case again; compared to yesterday's preview build I can see good improvements:
Local 24.0 - HyperV - with preview build 24/04
Local 24.0 - HyperV - with preview build 25/04
For me it's case closed. Thanks Freddy and all other contributors for solving this topic! |
BcContainerHelper 6.0.16 has shipped together with generic images 1.0.2.20 |
I noticed a performance degradation in the build pipeline moving to BC24.
If I look at the compile and publish steps these are the time used in minutes and seconds.
BC24 - https://bcartifacts.azureedge.net/sandbox/24.0.16410.18040/nl
Compile: 12:13 (mm:ss)
Publish: 8:34 (mm:ss)
BC23.5 - https://bcartifacts.azureedge.net/sandbox/23.5.16502.16887/nl
Compile 8:18 (mm:ss)
Publish: 3:58 (mm:ss)
Both pipelines compile and publish the same number of apps using the same build scripts.
Build servers are running W2019 and use process isolation. Are there any known issues?
According to what I read, the new platform is based on .NET 8, which should actually be faster than .NET 6.
For compile and publish I use the standard cmdlets from BcContainerHelper:
Compile-AppInNavContainer
Publish-NavContainerApp
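For context, a minimal invocation of these two cmdlets might look as follows (parameter values are placeholders, not the actual pipeline's settings):

```powershell
# Compile an AL project inside the container; returns the path of the compiled app
$appFile = Compile-AppInNavContainer -containerName $containerName -credential $credential `
    -appProjectFolder "C:\Sources\App" -appOutputFolder "C:\Sources\Output" -UpdateSymbols

# Publish, sync and install the compiled app in the container
Publish-NavContainerApp -containerName $containerName -appFile $appFile `
    -skipVerification -sync -install
```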
BcContainerHelper is version 6.0.15