
Review Selenium capabilities versus existing SeleniumLibrary functionality #1829

Open · 2 tasks
emanlove opened this issue May 18, 2023 · 6 comments

Comments

@emanlove
Member

As part of the v6.2.0 milestone and beyond, which is starting to add Selenium 4 capabilities, I was curious how many Selenium Python methods are used within SeleniumLibrary and what functionality we might be missing. A rough comparison of the available methods against their usage within the SeleniumLibrary code should show the gap between what is available and what is implemented.

I'm thinking this would consist of two tasks:

  • Review Selenium Python methods versus what exists within the SeleniumLibrary code
  • Review Selenium Changelog since v4.0.0
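As a rough, stdlib-only sketch of how the first task could be automated: diff the public methods a class offers against the attribute names referenced in some source text. The `Driver` class and `library_source` string below are stand-ins invented for illustration; a real run would inspect selenium's `WebDriver` class and parse the actual SeleniumLibrary source files.

```python
import ast
import inspect

class Driver:
    """Stand-in for selenium.webdriver.remote.webdriver.WebDriver."""
    def get(self, url): ...
    def pin_script(self, script): ...
    def quit(self): ...

# Stand-in for the SeleniumLibrary source tree.
library_source = """
def open_browser(driver, url):
    driver.get(url)

def close_browser(driver):
    driver.quit()
"""

# Public methods the driver class offers.
available = {name for name, _ in inspect.getmembers(Driver, inspect.isfunction)
             if not name.startswith("_")}

# Attribute names actually referenced anywhere in the library source.
used = {node.attr for node in ast.walk(ast.parse(library_source))
        if isinstance(node, ast.Attribute)}

print(sorted(available - used))  # → ['pin_script']
```

This static approach over-counts "used" (any attribute access with a matching name matches), so it complements rather than replaces the coverage-based approach discussed below.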
@emanlove emanlove added this to the V6.2.0 milestone May 18, 2023
@bollwyvl
Contributor

A repeatable, data-driven win would be running the full test suite under coverage... in addition to getting (and ideally setting a baseline for) coverage of SeleniumLibrary itself, an augmented run could then (or perhaps only) include selenium.

While pytest-cov is handy, given the extra complexity of the (u|a)test.py scripts it likely makes the most sense to use coverage run -m utest.run (adding an __init__.py to both directories so they quack like modules). The rest of the details can be configured in pyproject.toml, and special cases like the selenium excursion can be handled by specifying an alternate config file via an environment variable.
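To make that concrete, a hypothetical pyproject.toml fragment for such a setup might look like the following. The section names and keys are standard coverage.py configuration, but the source list and comments are guesses about this project's layout, not something taken from the repository:

```toml
[tool.coverage.run]
parallel = true               # write per-process .coverage.* data files
branch = true
source = ["SeleniumLibrary"]  # the "selenium excursion" would add "selenium" here

[tool.coverage.report]
show_missing = true
```

The alternate-config mechanism mentioned above is coverage.py's COVERAGE_RCFILE environment variable, which points a run at a different config file.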

coverage combine can definitely merge multiple .coverage data files, even across platforms and architectures, and coverage html can then render the merged report... though usually not across multiple versions of the system-under-test.

This would provide both a top-level percentage, by version-under-test, as well as line-by-line information about which selenium.* functions are not exercised by SeleniumLibrary's test suite... which would become more accurate as coverage is observed and improved.

I can work up a strawman PR, if there's interest.

@aaltat
Contributor

aaltat commented May 20, 2023

I made a coverage implementation on the Browser library side and could get it working with RF as well. But I couldn't get it working when pabot was used to run the RF tests.

@emanlove
Member Author

@bollwyvl If it is not too much effort I would be interested in that implementation.

@bollwyvl
Contributor

> pabot was used to run the RF tests

I think (though I have not tried it yet, and will on my downstream) that the CLI invocation of pabot (and probably the Python one; I haven't looked) would allow using something like:

pabot --command coverage run --parallel-mode -m robot --end-command

This would probably be painful with a distributed pabot setup, though.

The worst case scenario would be doing it The Hard Way, with e.g. some environment variables that start coverage via the Python API... this has other benefits, like the aforementioned possibility of per-robot-test contexts, but that sounds like something that would warrant a dedicated CoverageLibrary.

> not too much effort

I am hoping to mentor one of my &{DAY_JOBS_COLLEAGUES} in actually doing the PR... we run hundreds of RFSL tests a day, and getting their hands a little dirty would help folks not just think of it as 🪄 (or, when things don't work, 👿) that comes from the free-software sky daddy.

@bollwyvl
Contributor

I can happily report that the pabot --command coverage run ... -m pabot --end-command technique works just fine, and coverage already handles much of the complexity of naming things by adding a timestamp and machine identifier. In my case, I just emit the .coverage* files into a top-level folder (which must exist before starting!), which again breaks the model for distributed pabot.

My downstream wrapper, with a relatively modest test suite, lights up a mere 45% of selenium, 41% of SeleniumLibrary, and (shamefully) only 62% of its own code 😊.

I'll do a bit more digging to see if a listener API implementation can provide the suite/test context information, for starters.

Moving beyond coverage of Python source: might we extract information about the current test back to the keyword/test level to provide line-level coverage for .robot and .resource files? This would be a huge boon for large test suites, which inevitably end up with big chunks of dead test/task code, and is at least theoretically possible a la coverage-jinja-plugin.
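One hedged way to sketch that idea is with Robot Framework's listener API (version 2 shown here), recording the source file and line of every keyword that actually runs; lines never recorded could then be flagged as dead test/task code. The class name is invented, and the "source"/"lineno" attribute keys are assumed to be present (they are in recent RF versions), so treat this as a sketch rather than a working implementation:

```python
class KeywordCoverageListener:
    """Records (source file, line) for every keyword that executes."""
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self.executed = set()

    def start_keyword(self, name, attributes):
        # Keyword attributes are assumed to carry the defining file and
        # line number under the "source" and "lineno" keys.
        source = attributes.get("source")
        if source is not None:
            self.executed.add((source, attributes.get("lineno")))

    def close(self):
        # A real implementation would diff self.executed against all
        # keyword definitions parsed from the .robot/.resource sources
        # and report the ones that never ran.
        pass
```

Such a listener would be attached with robot's --listener option; mapping the recorded lines back into an annotated report is the part that would need the coverage-jinja-plugin-style work.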

@emanlove
Member Author

As an example, we should see that we don't use the pin_script method on selenium.webdriver.remote.webdriver.

@emanlove emanlove modified the milestones: v6.2.0, v6.3.0 Nov 19, 2023
@emanlove emanlove modified the milestones: v6.3.0, April 2024 Mar 7, 2024