
Releases: pester/Pester

5.0.0-rc3

20 Apr 07:03
Pre-release

Adds -PassThru and -FullNameParameter, and improves the speed of discovery, of defining and asserting mocks, and of Should.
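The new -PassThru switch returns the result object from Invoke-Pester instead of only writing to the screen. A minimal sketch; the property names used below (e.g. FailedCount) are assumed from the v4 result object and may differ slightly in v5:

```powershell
# Sketch: capture the run result with -PassThru.
# FailedCount is assumed from the v4 result object and may differ in v5.
$result = Invoke-Pester -Path $path -PassThru

if ($result.FailedCount -gt 0) {
    Write-Warning "$($result.FailedCount) test(s) failed."
}
```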

Release notes in this huge readme: https://github.com/pester/Pester/blob/v5.0/README.md

List of changes 5.0.0-rc1...5.0.0-rc3

5.0.0-rc2

20 Apr 07:02
Pre-release

Broken release. Do not use.

5.0.0-rc1

05 Apr 06:18
Pre-release

Release notes are in this huge readme; the full release is expected in a week, so go try it, please :)

https://github.com/pester/Pester/blob/v5.0/README.md

5.0.0-beta

30 Apr 05:54
Pre-release

Pester v5 - beta

🙋‍ Want to share feedback? Go here

Pester5 beta is finally here. 🥳🥳🥳 Frankly, there is more news than I am able to cover. Here are some of the best new features:

Tags

Tags on everything

The -Tag parameter is now available on Describe, Context and It, and tags can be filtered on any level. You can then use -Tag and -ExcludeTag to run just the tests that you want.

Here you can see an example of a test suite that has acceptance tests and unit tests; some of the tests are slow, some are flaky, and some only work on Linux. Pester5 makes running all reliable acceptance tests that can run on Windows as simple as:

Invoke-Pester $path -Tag "Acceptance" -ExcludeTag "Flaky", "Slow", "LinuxOnly"
Describe "Get-Beer" {

    Context "acceptance tests" -Tag "Acceptance" {

        It "acceptance test 1" -Tag "Slow", "Flaky" {
            1 | Should -Be 1
        }

        It "acceptance test 2" {
            1 | Should -Be 1
        }

        It "acceptance test 3" -Tag "WindowsOnly" {
            1 | Should -Be 1
        }

        It "acceptance test 4" -Tag "Slow" {
            1 | Should -Be 1
        }

        It "acceptance test 5" -Tag "LinuxOnly" {
            1 | Should -Be 1
        }
    }

    Context "unit tests" {

        It "unit test 1" {
            1 | Should -Be 1
        }

        It "unit test 2" -Tag "LinuxOnly" {
            1 | Should -Be 1
        }

    }
}
Starting test discovery in 1 files.
Discovering tests in ...\real-life-tagging-scenarios.tests.ps1.
Found 7 tests. 482ms
Test discovery finished. 800ms

Running tests from '...\real-life-tagging-scenarios.tests.ps1'
Describing Get-Beer
  Context acceptance tests
      [+] acceptance test 2 50ms (29ms|20ms)
      [+] acceptance test 3 42ms (19ms|23ms)
Tests completed in 1.09s
Tests Passed: 2, Failed: 0, Skipped: 0, Total: 7, NotRun: 5

Tags use wildcards

The tags are now also compared as -like wildcards, so you don't have to spell out the whole tag if you can't remember it. This is especially useful when you are running tests locally:

Invoke-Pester $path -ExcludeTag "Accept*", "*nuxonly" | Out-Null
Starting test discovery in 1 files.
Discovering tests in ...\real-life-tagging-scenarios.tests.ps1.
Found 7 tests. 59ms
Test discovery finished. 97ms


Running tests from '...\real-life-tagging-scenarios.tests.ps1'
Describing Get-Beer
 Context Unit tests
   [+] unit test 1 15ms (7ms|8ms)
Tests completed in 269ms
Tests Passed: 1, Failed: 0, Skipped: 0, Total: 7, NotRun: 6

Logging

All the major components log extensively. I use logs as a debugging tool all the time, so I make sure the logs are usable and not overly verbose. See if you can figure out why acceptance test 1 is excluded from the run, and why acceptance test 2 runs.

RuntimeFilter: (Get-Beer) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer) Block did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer) There is 'Acceptance' include tag filter.
RuntimeFilter: (Get-Beer) Block has no tags, moving to next include filter.
RuntimeFilter: (Get-Beer) Block did not match any of the include filters, but it will still be included in the run, it's children will determine if it will run.
RuntimeFilter: (Get-Beer.acceptance tests) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests) Block did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer.acceptance tests) There is 'Acceptance' include tag filter.
RuntimeFilter: (Get-Beer.acceptance tests) Block is included, because it's tag 'Acceptance' matches tag filter 'Acceptance'.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 1) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 1) Test is excluded, because it's tag 'Flaky' matches exclude tag filter 'Flaky'.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 2) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 2) Test did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 2) Test is included, because its parent is included.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 3) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 3) Test did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 3) Test is included, because its parent is included.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 4) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 4) Test is excluded, because it's tag 'Slow' matches exclude tag filter 'Slow'.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 5) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 5) Test is excluded, because it's tag 'LinuxOnly' matches exclude tag filter 'LinuxOnly'.
RuntimeFilter: (Get-Beer.Unit tests) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.Unit tests) Block did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer.Unit tests) There is 'Acceptance' include tag filter.
RuntimeFilter: (Get-Beer.Unit tests) Block has no tags, moving to next include filter.
RuntimeFilter: (Get-Beer.Unit tests) Block did not match any of the include filters, but it will still be included in the run, it's children will determine if it will run.
RuntimeFilter: (Get-Beer.Unit tests.unit test 1) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.Unit tests.unit test 1) Test did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer.Unit tests.unit test 1) There is 'Acceptance' include tag filter.
RuntimeFilter: (Get-Beer.Unit tests.unit test 1) Test has no tags, moving to next include filter.
RuntimeFilter: (Get-Beer.Unit tests.unit test 1) Test did not match any of the include filters, it will not be included in the run.
RuntimeFilter: (Get-Beer.Unit tests.unit test 2) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.Unit tests.unit test 2) Test is excluded, because it's tag 'LinuxOnly' matches exclude tag filter 'LinuxOnly'.
RuntimeFilter: (Get-Beer.Unit tests) Block was marked as Should run based on filters, but none of its tests or tests in children blocks were marked as should run. So the block won't run.

Please be aware that the log is currently only written to the screen, not persisted in the result object, and that logging comes with a performance penalty.

Run only what is needed

Look at the last line of the log above. It says that the block will not run, because none of the tests inside it, or inside any of its child blocks, will run. This is great, because when the block does not run, none of its setups and teardowns run either.

Invoking the code below with -ExcludeTag Acceptance will filter out all the tests in the file and there will be nothing to run. Pester5 understands that if there are no tests in the file to run, there is no point in executing the setups and teardowns in it, and so it returns almost immediately:

BeforeAll {
    Start-Sleep -Seconds 3
}

Describe "describe 1" {
    BeforeAll {
        Start-Sleep -Seconds 3
    }

    It "acceptance test 1" -Tag "Acceptance" {
        1 | Should -Be 1
    }

    AfterAll {
        Start-Sleep -Seconds 3
    }
}
Starting test discovery in 1 files.
Found 1 tests. 64ms
Test discovery finished. 158ms
Tests completed in 139ms
Tests Passed: 0, Failed: 0, Skipped: 0, Total: 1, NotRun: 1

Skip on everything

-Skip is now available on Describe and Context. This allows you to skip all the tests in that block and every child block.

Describe "describe1" {
    Context "with one skipped test" {
        It "test 1" -Skip {
            1 | Should -Be 2
        }

        It "test 2" {
            1 | Should -Be 1
        }
    }

    Describe "that is skipped" -Skip {
        It "test 3" {
            1 | Should -Be 2
        }
    }

    Context "that is skipped and has skipped test" -Skip {
        It "test 3" -Skip {
            1 | Should -Be 2
        }

        It "test 3" {
            1 | Should -Be 2
        }
    }
}
Starting test discovery in 1 files.
Found 5 tests. 117ms
Test discovery finished. 418ms
Describing describe1
 Context with one skipped test
   [!] test 1, is skipped 18ms (0ms|18ms)
   [+] test 2 52ms (29ms|22ms)
 Describing that is skipped
   [!] test 3, is skipped 12ms (0ms|12ms)
 Context that is skipped and has skipped test
   [!] test 3, is skipped 10ms (0ms|10ms)
   [!] test 3, is skipped 10ms (0ms|10ms)
Tests completed in 1.03s
Tests Passed: 1, Failed: 0, Skipped: 4, Total: 5, NotRun: 0

(Pending is translated to skipped, Inconclusive does not exist anymore. Are you relying on them extensively? Share your feedback.)

Collect all Should failures

Should can now be configured to continue on failure. This will report the error to Pester, but won't fail the test immediately. Instead, all the Should failures are collected and reported at the end of the test. This allows you to put multiple assertions into one It and still get complete information on failure.
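With that mode enabled, a single It can carry several assertions and still report every failure, not just the first one. A sketch of the kind of test this enables; the mechanism for switching the mode on is described in the README and is not shown here, and the object under test is hypothetical:

```powershell
Describe "New-User" {
    It "creates a user with all expected properties" {
        # hypothetical object under test
        $user = [pscustomobject]@{ Name = "Jakub"; Age = 30; City = "Prague" }

        # with continue-on-failure enabled, all three assertions are
        # evaluated and every failure is reported at the end of the test
        $user.Name | Should -Be "Jakub"
        $user.Age  | Should -Be 30
        $user.City | Should -Be "Prague"
    }
}
```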
...


4.10.1

07 Feb 20:04
  • Fix NuGet description so it no longer includes a domain that we don't own anymore.

4.10.0

07 Feb 20:03

4.10.0 (February 2, 2020)

  • Fix TestRegistry when executing in parallel
  • Remove logo from header because it is noisy #1428
  • Handle the case when the failure message contains an escape sequence #1426
  • Fix JaCoCo report so it can be processed by Codecov.io #1420
  • Add an Example of Should Be with an Array #1396
  • Handle when exceptions have no error messages. #1382
  • Added contributors to the README #1363

4.9.0

08 Sep 09:00

What is new in 4.9.0?

  • Adds JUnit xml output to allow Gitlab to consume Pester results, thanks @bgelens

Various small fixes and improvements

4.8.1

11 May 13:31

What is new in 4.8.1?

  • Fixes an error, affecting only PowerShell 2 users, that made Mock not use default mocks at all

4.8.0

01 May 10:48

What is new in 4.8.0?

Relaxing mocks

Mock now has two new parameters that allow you to remove types and validation from the function signature. This is especially useful when you have an external cmdlet with strongly typed parameters that you have no way of constructing. In that case you can relax the type signature and provide any object you like. In this silly example I remove the int type as well as the range validation, and provide a string instead.

function f (
    [ValidateRange(1,10)]
    [int] $Count) {
    $Count
}

Describe "Removing type" {
    Context "c" {
        It "does not work" {
            Mock f -MockWith { "this is count: $($Count)" }
            f "my value" | Should -Be "this is count: my value"
        }    
    }

    Context "c" {
        It "works" {
            Mock f `
                -MockWith { "this is count: $($Count)" } `
                -RemoveParameterType Count `
                -RemoveParameterValidation Count 
            
            f "my value" | Should -Be "this is count: my value"
        }    
    }
}

## output
# Describing Removing type
#
#  Context c
#    [-] does not work 159ms
#      FormatException: Input string was not in a correct format.
#      Cannot convert value "my value" to type "System.Int32"
#
#  Context c
#    [+] works 186ms

There is a limitation that you should be aware of: when defining multiple mocks for the same command, you need to specify the Remove* parameters on the mock that is created first. That mock creates the mock bootstrap function, which is where the function signature is defined; any subsequent definition uses the same bootstrap function. Notice the Context blocks in the example code: if you removed them, the same Mock scope would be used for the whole Describe, and the second example would not work, because the non-relaxed mock bootstrap function would be used.

Many thanks to @renehernandez for implementing this!

Other fixes

5.0.0-alpha3

23 Mar 20:40
Pre-release

Pester v5 - alpha3

🙋‍ Have questions or want to discuss a point? Go here

Scoping of Describe & It

The scoping changed a bit from alpha2; it is now again very similar to how Pester v4 behaves. Setups run before the first It or Describe, but they run inside the current Describe, not outside it, to avoid leaking variables out of scope.

Failures also work very similarly to Pester v4: a failure in a Describe block fails the whole block. A nice side-effect of having test discovery is that we now know how many tests were in the failed block, so we can report all tests that were supposed to run but did not as failed. For example, this would fail with 3 failed tests in v5 and 1 failed test in v4:

Describe "d1" {

    BeforeAll {
        throw "OMG!"
    }

    It "i1" {
        $true | Should -Be $true
    }

    It "i2" -TestCases @(
        @{ Value = 1 }
        @{ Value = 2 }
    ) {
        $true | Should -Be $true
    }
}

# v4 output
#  Describing d1
#    [-] Error occurred in Describe block 0ms
#      RuntimeException: OMG!
#
# Tests completed in 524ms
# Tests Passed: 0, Failed: 1, Skipped: 0, Pending: 0, Inconclusive: 0

# v5 output
# Describing d1
# Block 'd1' failed
# RuntimeException: OMG!
#
# Tests completed in 119ms
# Tests Passed: 0, Failed: 3, Skipped: 0, Pending: 0, Inconclusive: 0

Mocking

Mocking still keeps the scoping to It, and you can also provide scopes to Assert-MockCalled as in v4. Assert-VerifiableMocks also works.

The parameters of Assert-MockCalled are now mostly non-positional. -CommandName and -Times are still accepted by position, but -ParameterFilter is not anymore.
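A sketch of what still binds by position and what no longer does; Get-ChildItem here stands in for any mocked command:

```powershell
Describe "positional Assert-MockCalled" {
    It "asserts the mock was called" {
        Mock Get-ChildItem { }
        Get-ChildItem -Path "TestDrive:\"

        # -CommandName and -Times still bind by position:
        Assert-MockCalled Get-ChildItem 1

        # ...but -ParameterFilter must now be passed by name:
        Assert-MockCalled Get-ChildItem 1 -ParameterFilter { $Path -eq "TestDrive:\" }
    }
}
```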

Code coverage

Code coverage passes all the internal tests and works just fine, but the code-coverage parameters to Invoke-Pester are probably not fully passed through yet.

Focus

Tests and blocks can be focused by using the -Focus parameter. Focus runs only the tests that are focused, no matter what other filters are set. This works across the whole test suite and allows you to debug tests very easily.

In v4, when I set a breakpoint in some common function and have 10 passing tests and 1 failing test using that function, I have to hit the breakpoint 10 times. With -Focus, I simply set my breakpoints and run just that single test.

function Get-Hello {
    "Hello"
}
Describe "Get-Hello" {
    It "Gives Hello" {
        Get-Hello | Should -Be "Hello"
    }

    It -Focus "Has no spaces around hello" {
        $hello = Get-Hello
        $hello.Trim() | Should -Be $hello
    }
}

# Describing Get-Hello
#    [+] Has no spaces around hello 23ms
# Tests completed in 168ms
# Tests Passed: 1, Failed: 0, Skipped: 1, Pending: 0, Inconclusive: 0

Debugging

Pester looks for a global PesterDebugPreference variable that can configure it to print complete error messages and define which debugging info should be printed. Debugging can be enabled and disabled using the WriteDebugMessages flag, and the debug message sources can be given as an array of options (right now: CoreRuntime, Runtime, Mock, Discovery, SessionState, or '*' for all).

I for example debug like this:

$PSModuleAutoloadingPreference = "none"
Get-Module Pester | Remove-Module

Import-Module $PSScriptRoot\..\Pester_main\Pester.psd1
$global:PesterDebugPreference = @{
    ShowFullErrors         = $true
    WriteDebugMessages     = $true
    WriteDebugMessagesFrom = "Mock"
}

$excludedTags = 'VersionChecks', 'Help', 'StyleChecks'
$excludedPaths = "*\demo\*"

$path = "$PSScriptRoot\..\Pester_main\"
# $path = "C:\projects\pester_main\Functions\Mock.Tests.ps1"

Invoke-Pester -PassThru -ExcludeTag $excludedTags -ExcludePath $excludedPaths -Path $path

What else works?

  • Tests and test blocks can be generated from loops and are resolved correctly at runtime, as long as the data do not change between discovery and run. For cases where they do, an external Id can be provided (to blocks, tests, and later also TestCases), but let's see if that is needed.

  • Discovery fails with an error on the first failed file. I admit that is not very convenient, but it's better than seeing hundreds of errors at the same time.

  • Paths can be excluded from the run using -like wildcards

  • You can provide tests as scriptblocks to Invoke-Pester

  • Output is written only when the block should run (so no "Describing abc" headers with no tests afterwards)

  • Before* and After* blocks in parent scopes

  • Basic value expansion in test names when TestCases are used

  • Up to date with v4

  • Runs on Windows, macOS, Linux and PowerShell v3+
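The loop-generated tests and the basic value expansion in test names mentioned above can be sketched like this; the `<In>`/`<Out>` placeholder syntax is assumed from later v5 builds and may be more limited in this alpha:

```powershell
Describe "String casing" {
    # tests generated from -TestCases; the <In> / <Out> placeholders in the
    # test name are expanded from each case's hashtable keys (syntax assumed)
    It "converts '<In>' to '<Out>'" -TestCases @(
        @{ In = "abc";    Out = "ABC" }
        @{ In = "pester"; Out = "PESTER" }
    ) {
        $In.ToUpper() | Should -Be $Out
    }
}
```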

What does not work?

  • Interactive mode
  • Timing is incorrect
  • Skip / Pending / Inconclusive: the parameters are there, but the runtime ignores them
  • Some of the parameters to Invoke-Pester, such as Path, are simplified to their core. No fancy hashtables with params.
  • Muting the on-screen output