LDAP secret engine support (#1032) #1033
No dynamic role support yet
Hi @JordanStopford welcome! Thanks very much for putting this up. I haven't done an in-depth review yet; it might take some time. Based on the current release schedule, if accepted this will end up in a later release. While the AD engine didn't have tests, I would strongly encourage adding some if possible. If you're able to give that a shot, it would go a long way! Even unit tests could be useful; I think integration tests are possible, but there is some complication there with the LDAP components, so it might be a longer road. Thanks again!
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@            Coverage Diff            @@
##             main    #1033     +/-  ##
=========================================
+ Coverage   87.68%   87.80%   +0.12%
=========================================
  Files          66       67       +1
  Lines        3231     3264      +33
=========================================
+ Hits         2833     2866      +33
  Misses        398      398
```
@briantist do you want me to open an issue for adding AD tests? I can lend a hand here. Are you potentially planning on adding LDAP to 2.1?
@tot19 if you'd like to help review and work on this PR, that would be great! @JordanStopford are you still interested in working on this (I'm sorry I've not been able to get back to reviewing), and are you up for potentially collaborating with @tot19 ? @tot19 if you were only speaking about adding tests to the existing AD engine, that's also appreciated of course, though I think the AD engine was deprecated, so I'm not sure it's the best place for your efforts.
Hi @briantist yes, I'm still up for working on this - I had some other things that required my attention, but I should be able to spend some time on it this month. Happy to collaborate with @tot19 . I haven't had a chance to look into how this could actually be tested yet, so if you have any ideas, that would be a great start!
I did some development for the .NET Vault client (VaultSharp), and I found that testing with the openldap docker image makes it pretty easy. I have a docker-compose.yml file to launch vault and openldap, and then I configure the vault instance with terraform. Ideally, you would configure the vault instance from the docker-compose.yml itself.

terraform.tf:

```hcl
terraform {
  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "3.22.0"
    }
  }
}

provider "vault" {
  address = "http://localhost:8200"
  token   = "root"
}

resource "vault_ldap_secret_backend" "config" {
  path         = "ldap"
  binddn       = "CN=admin,dc=example,dc=org"
  bindpass     = "adminpassword"
  url          = "ldap://openldap:1389"
  insecure_tls = "true"
  userdn       = "dc=example,dc=org"
}

resource "vault_ldap_secret_backend_dynamic_role" "vault-dynamic" {
  mount     = vault_ldap_secret_backend.config.path
  role_name = "vault-dynamic"

  creation_ldif = <<EOT
dn: cn={{.Username}},ou=users,dc=example,dc=org
objectClass: person
objectClass: top
cn: learn
sn: {{.Password | utf16le | base64}}
userPassword: {{.Password}}
EOT

  deletion_ldif = <<EOT
dn: cn={{.Username}},ou=users,dc=example,dc=org
changetype: delete
EOT
}

resource "vault_ldap_secret_backend_static_role" "vault-static" {
  mount           = vault_ldap_secret_backend.config.path
  username        = "vaulttest"
  dn              = "cn=vaulttest,ou=users,dc=example,dc=org"
  role_name       = "vault-static"
  rotation_period = 600
}
```

Commands to run and configure:

```shell
docker-compose up -d vault openldap
terraform init
terraform apply -auto-approve
```

I am not sure how much effort it would be to do something like this in this project's test suite.
- Use LDAP server in docker as well
- Configure Vault/LDAP with terraform
- Added LDAP tests - not yet finished
I've added some tests that are executed against vault and ldap running inside docker containers. Unfortunately it looks like there have been a lot of changes since I started doing this so whilst this was all working for me locally it now doesn't work after the merge. Problems:
Any help on these two points would be much appreciated!
Ignore the above - it looks like ServerManager was changed to loop through its initialization process. Perhaps I should move the code that triggers docker/terraform into there to make it a bit simpler. The tests are now running correctly...
Hi @JordanStopford , the integration tests were totally revamped in Nov/Dec 2023, see #1105. It was a huge job; it took me over a month, and there were a lot of really complicated and unexpected snags. Believe it or not, I started off with trying to use containers for the whole thing (with docker-compose in fact), but I ultimately had to back out of those changes. When trying to introduce major changes to the project or testing suite like that, I highly recommend posting about it first before doing much in the way of implementation, so that we can discuss and you don't spend too much time on something that we might ultimately not be able to use. I triggered the CI, but it looks like there are indeed issues with the changes here. When you tested locally, did you use -n with a number greater than 1 to ensure the tests work concurrently? Working through concurrency issues was the hardest part of implementing the parallel testing, and I want to be sure we don't have any regressions there. I'm also not sure about introducing a container for only a single part of the tests... but I will think on it if it's easy to get it working. If it's not easy to get it working with concurrency, let's talk first before you spend too much time on it.
Hi Brian,

No, I wasn't able to get the tests to run concurrently, which is why I moved the multithreaded params out of the pyproject.toml file and into the Makefile. I assumed the tests were run via make in the CI pipeline, as per the contribution guidelines.

I'm not sure how you would go about testing this without using something like docker to do proper integration tests against real systems. The way I've set this up is that if docker and terraform have been found, it'll use those instead of a local vault binary, so it's at least consistent for the whole run. I'll have a look later on and see if I can get the concurrency working properly. I assume that it only uses a single vault instance to test against, even in the parallel testing mode?
We don't use the Makefile in CI; it's quite outdated unfortunately and has barely been updated, so it could probably use some work. If you want to see what's run in CI, take a look at https://github.com/hvac/hvac/tree/main/.github/workflows
One way is to use local binaries for the things that would otherwise run in docker. I like containers for this kind of thing, but sometimes using binaries directly can be more flexible; for example, it then allows running the whole test suite inside of a container without the need for mounting a docker socket or otherwise having a container be able to control containers on the host. Using binaries can also be more flexible for running on different platforms because of differing containerization support: Windows can run Windows containers, but needs virtualization support to run linux containers, and on Windows Server that's not free (also, on most cloud providers, standard instances are not capable of VinV, so they cannot do it anyway). The tests are currently broken on Windows, but that's something I actively want to change (this came up specifically because of LDAP!).

As for how to get the binaries, there are two main possibilities: the tests themselves figure out the platform and download the required binaries (usually standalone versions, so as not to change the system installation, if possible), or the binaries are prerequisites of running the test suite.

An even better approach that I like is for the tests to be agnostic about how or even where the external services are running: the tests simply require that you tell them all the information needed to contact them. For example, if the tests need LDAP, you give them the address of the LDAP server, the credentials, etc. Usually this is assumed destructive. External service setup/teardown can be handled by an invoker script, which allows different bootstrapping scripts to set up the dependencies differently on different platforms and environments. This ensures that we don't bake those decisions deep into the test suite itself. Unfortunately, the current test suite does exactly that: it makes tons of assumptions about how the required services are run and initiated.
This was what made #1105 so difficult, and it's the path I want to move away from.
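The service-agnostic approach described above can be sketched roughly like this. This is a minimal illustration, not hvac's actual test code; the environment variable names (`TEST_LDAP_URL`, etc.) and the `LdapServiceConfig` class are hypothetical:

```python
import os
from dataclasses import dataclass


@dataclass
class LdapServiceConfig:
    """Connection details for an externally-managed LDAP test server."""

    url: str
    bind_dn: str
    bind_password: str

    @classmethod
    def from_env(cls) -> "LdapServiceConfig":
        # The variable names here are hypothetical. The point is that the
        # tests only consume connection info; they never manage the
        # service's lifecycle themselves.
        return cls(
            url=os.environ.get("TEST_LDAP_URL", "ldap://localhost:1389"),
            bind_dn=os.environ.get("TEST_LDAP_BINDDN", "cn=admin,dc=example,dc=org"),
            bind_password=os.environ.get("TEST_LDAP_BINDPASS", "adminpassword"),
        )
```

An invoker script (docker-compose, a local binary, a CI workflow step) would start the service and export these variables, so the suite stays ignorant of how the server came to exist.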
It does not use a single Vault instance. That's an idea I like and would love to do, and I tried in #1105, but the tests already assume that they have full, exclusive control of the Vault instance, which is spun up at the class level due to deep integration into the test class; as a result, concurrent parallel tests that could be using the same backend names would step all over each other. Fixing that means large rewrites of most of the individual tests and of the entire test framework itself. A big part of that PR was implementing a way to grab all the free ports needed, in parallel, so that launched Vault instances did not fail due to their ports being in use. Containers don't actually solve that problem, since we still need to open up the ports to the host, unless we also launch another container within the container network and then inject the test runner inside there, which brings its own set of complications.
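Grabbing a free port typically looks like the sketch below (bind to port 0 and let the OS pick). This isn't hvac's implementation, just the common pattern; note the inherent race between closing the probe socket and the real server binding the port, which is part of what makes parallel launches tricky:

```python
import socket


def get_free_port(host: str = "127.0.0.1") -> int:
    """Ask the OS for a currently-unused TCP port.

    The port can be taken by another process between this call and the
    moment the real server binds it, so callers should be ready to retry.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.bind((host, 0))  # port 0 means "any free port"
        return sock.getsockname()[1]
```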
And this is something I would actually like to do; I'd like to move away from that model. #1105 was full of compromises because of it (and it still doesn't address the doctests, which have a whole different set of difficulties; I could not get those to run in parallel without untangling that too). In truth, these difficulties are not your issue to solve as someone trying to put up a useful PR, so I am going to go over this and consider accepting it anyway, perhaps with notes in the documentation or something. I want to think a little bit on how to move changes like this forward so that we get useful features out to the library's users, while also trying to somehow indicate that these features are less tested (at no fault of the contributor). Thanks very much for your work on this and your patience!
Hi @briantist thanks for your thorough response - I think I now have a better understanding of why everything is set up like this. I'm happy to have a crack at getting everything to work multi-threaded with Docker and pivot the tests to use that, but only if you agree that is the way forward here. I think it's possible and would probably only take a few days, since Vault configuration is already handled by the test setup process. I've taken a look at the tests today, and with a few minor modifications I've managed to get them mostly working multi-threaded, although there are some skips and failures that would inevitably take up the vast majority of the debugging time! Let me know your thoughts on the above and on this PR as a whole :)
Thank you! I'm still inclined not to put a hard dependency on docker for the integration tests right now, in favor of a more flexible model where docker can be used but isn't required, so I would say hold off on those changes please. I think no matter what, I would want to separate those larger testing framework changes from this PR. So what I'd recommend is to separate any of that out of this PR and instead focus on adding unit tests for the LDAP secret engine here, since those don't need an LDAP server. It might not give us the same reassurances as an integration test would, but it should provide at least some coverage. Take a look at the existing unit tests to see how those are set up now. Does that sound ok to you?
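A unit test along those lines might look like the following sketch. The `Ldap` class, its `configure` method, and the request path are illustrative stand-ins, not hvac's actual implementation; the idea is just to substitute the HTTP adapter with a recording fake so no LDAP or Vault server is needed:

```python
class RecordingAdapter:
    """Fake request adapter that records calls instead of hitting a server."""

    def __init__(self):
        self.calls = []

    def post(self, url, json=None):
        self.calls.append(("POST", url, json))
        return {}  # a real adapter would return the parsed response


class Ldap:
    """Illustrative stand-in for the LDAP secret engine class under test."""

    def __init__(self, adapter, mount_point="ldap"):
        self._adapter = adapter
        self._mount_point = mount_point

    def configure(self, binddn, bindpass, url):
        # Hypothetical path layout for illustration only
        path = f"/v1/{self._mount_point}/config"
        return self._adapter.post(
            path, json={"binddn": binddn, "bindpass": bindpass, "url": url}
        )


def test_configure_posts_to_expected_path():
    adapter = RecordingAdapter()
    engine = Ldap(adapter)
    engine.configure("cn=admin,dc=example,dc=org", "adminpassword", "ldap://openldap:1389")
    method, path, payload = adapter.calls[0]
    assert method == "POST"
    assert path == "/v1/ldap/config"
    assert payload["binddn"] == "cn=admin,dc=example,dc=org"
```

This gives fast, dependency-free assertions that the engine builds the right requests, even if it can't prove the server accepts them.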
No problem - the way I've done the setup in conftest.py is to check whether docker/terraform are present on the system, so as long as they aren't, it'll do the testing in the normal way. I suppose I could parameterise the tests so docker/terraform can be toggled on/off as well. Do you want me to push what I have so far with regards to the docker multithreading stuff as part of this PR (but make it "disabled"), as well as unit tests, or completely back out the integration test changes and just stick some unit tests in?
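Detecting whether docker and terraform are on the system before opting into the container-backed path can be as simple as the sketch below (the function name is mine, not from the PR):

```python
import shutil


def container_tooling_available() -> bool:
    """Return True only when both docker and terraform are on PATH."""
    return all(shutil.which(tool) is not None for tool in ("docker", "terraform"))
```

A conftest.py could call this once at session start and fall back to the local vault binary when it returns False, keeping the whole run consistent either way.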
@briantist I've added some unit tests for the code which all pass. Let me know if we need anything else |
Thanks, I will come back to give it a thorough review.
Looks like you've got some lint errors; please run lint locally to catch those earlier!
That's weird, flake8 doesn't report anything here. I'll see what the build reports now.
Ah, looks like it was black instead of flake8 reporting the issue. I've fixed this now.
@briantist Looks like one of the test runs failed, but I'm not really sure why?
Yeah, there are occasionally individual flakes in the integration tests; nothing related to your PR. You already pushed a new commit, which triggered a new run, but I would have otherwise re-run the test.
Hi @briantist did you have a chance to review this in the end?
Not yet unfortunately; I just added it to the next milestone so I don't miss it.
- Documentation updates
- Remove unused args/kwargs
This is pretty barebones at the moment and more or less mirrors the existing Active Directory secrets engine functionality. I didn't see any tests for that, so I haven't added any, but I can attempt to if necessary! This references #1032