This repository has been archived by the owner on Dec 1, 2020. It is now read-only.

Exclude tests file and folders from analysis #2

Open
anapaulagomes opened this issue Oct 2, 2017 · 3 comments

Comments

@anapaulagomes
Owner

Skip filenames starting with test and folders named with the test prefix.

@ayharano

ayharano commented Oct 3, 2017

Given the description of the issue, my first thought about how to solve it relies on the DRY principle: why not use the discover method from Python stdlib's unittest [1]?

I am not familiar with exactly how this method works. If this is not what you have in mind as a way to list the test files and folders to exclude, please let me know.

As soon as you confirm or reject this idea, I shall look into this issue in the codebase.

  1. Link to discover method doc
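For reference, a minimal illustration of calling discover; the start directory '.' and the pattern here are assumptions for demonstration, not values from this project:

```python
# Hypothetical demonstration of unittest discovery; '.' and the default
# pattern 'test*.py' are assumed, not taken from this project.
import unittest

loader = unittest.TestLoader()
suite = loader.discover(start_dir='.', pattern='test*.py')
print(suite.countTestCases())  # number of tests found under '.'
```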

@anapaulagomes
Owner Author

Sounds good, but I'm not sure it fits the current approach. This issue is related to this line:

```python
for dirpath, _, filenames in os.walk(full_directory):
```
The idea is to skip files that belong to a test suite.
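A minimal sketch of that skip, assuming the analysis walks full_directory with os.walk as in the line above; the function name and the .py filter are hypothetical, not part of this project's code:

```python
import os

def python_files_to_analyze(full_directory):
    # Hypothetical helper: yields non-test Python files under full_directory.
    for dirpath, dirnames, filenames in os.walk(full_directory):
        # Prune directories whose names start with "test" in place, so
        # os.walk does not descend into them (works because topdown=True).
        dirnames[:] = [d for d in dirnames if not d.startswith('test')]
        for filename in filenames:
            # Skip test files and anything that is not a Python file.
            if filename.startswith('test') or not filename.endswith('.py'):
                continue
            yield os.path.join(dirpath, filename)
```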

@ayharano

ayharano commented Oct 3, 2017

The rationale is to use preexisting code in the stdlib to

  1. generate, with unittest's discover method, a list of the files and directories that match a pattern, which already defaults to test* (glob-style, not a regular expression) and is configurable;
  2. use the resulting list to build a set of files/dirs to be excluded, say to_exclude. The easiest approach I can think of is as follows: generate a list of all raw, unfiltered candidates from os.walk and convert it to a set, let's say unfiltered_candidates. Then the set of files that should be analyzed is something like
    to_analyze = unfiltered_candidates.difference(to_exclude).

So the proposal is to generate unfiltered_candidates from the original dirpath and filenames and, after generating to_exclude, use set's difference method to create a set to_analyze, then iterate over to_analyze to do the proper analysis; see the sketch below.
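To make the two steps concrete, here is a rough, self-contained sketch under stated assumptions: unittest discovery can import the test modules (imports have side effects), each discovered test can be mapped back to its module's file, and all names here (files_to_analyze, flatten) are hypothetical, not part of this codebase:

```python
import os
import sys
import unittest

def flatten(suite):
    # Yield the individual TestCase instances inside a nested TestSuite.
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from flatten(item)
        else:
            yield item

def files_to_analyze(full_directory):
    # Step 1: build to_exclude via unittest discovery; the pattern
    # defaults to 'test*.py' and is configurable.
    suite = unittest.TestLoader().discover(full_directory)
    to_exclude = set()
    for test in flatten(suite):
        module = sys.modules.get(type(test).__module__)
        if module is not None and getattr(module, '__file__', None):
            to_exclude.add(os.path.abspath(module.__file__))

    # Step 2: unfiltered_candidates from os.walk, then the set difference.
    unfiltered_candidates = {
        os.path.abspath(os.path.join(dirpath, filename))
        for dirpath, _, filenames in os.walk(full_directory)
        for filename in filenames
        if filename.endswith('.py')
    }
    return unfiltered_candidates.difference(to_exclude)
```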

Following this idea, or something similar, there is no need to reinvent the wheel: we can use an already existing mechanism from the stdlib. Personally, I don't think the Python stdlib developers would break an established API like unittest, so this project could safely reuse that logic.

Just to conclude the proposal: I don't know yet how to implement step 1, but I still think it should not be very complex, and I'd bet on reusing the stdlib's API instead of writing this issue's conditional from scratch.
