meta: flush_handlers doesn't honor when clause #41313
The behavior you're describing is expected. I recently merged a PR that adds a warning when using a conditional on meta tasks that don't support them - #41126. Also see #27565. Stop by on IRC or the mailing list if you have any questions.
Thanks!
mkrizek wrote:
No, it isn't. It obviously wasn't expected by me; that's why I opened this bug. More interestingly, even you know this is unexpected; otherwise, why invest the time to warn about it? All in all, warning at run time about this behaviour makes for a nice addition, so thank you very much for that. But please consider reopening this so it's treated as what it in fact is: a bug or, at the very least, a feature request.
Oh, sorry for the incorrect wording, let me rephrase: it works as intended and as such it's not a bug. We should at least clarify the behavior in the documentation.
Thanks a lot for reopening this, even if only as a feature request! mkrizek wrote:
Hmm... I'd probably rather say "the current behaviour is known to developers" (but then, known bugs are also known to developers, and that doesn't make them any less of a bug). To say that it works as intended, there should be some kind of rationale supporting this behaviour as the desired one (I mean not a technical explanation of why it happens to behave the way it does, but a functional requirement for the behaviour to be this instead of any other, which I doubt exists; I think you and I could agree that the proposed behaviour is better than the current one). Well, enough chatting: thanks again for your effort.
One of the comments in the issue I linked for you (#27565) states that some of the meta tasks are not host specific and so …
Like I said, we didn't do a good job documenting this and we should fix that.
The more I look at it, the more convinced I am that this is in fact a bug. mkrizek said:
I read #27565 and, while I understand the part about meta being "...a special kind of task which can influence Ansible internal execution or state" (which is the basis for the offered rationale), they don't even work the way you state: flush_handlers is not triggered here because the when clause resolved to true on one host and then ran on every host as a side effect of meta's nature (which could, for instance, be the case for the refresh_inventory or end_play meta tasks); it's simply that meta is ignoring the when clause entirely. See my example: the when clause (when: 1 == 2) simply can't resolve to true under any circumstance and, still, flush_handlers is triggered. Now, some other details:
I know this comment is on the verge of being better suited to the general mailing list than to an issue, as it's almost as much about "what a bug is and is not" as about the very nature of the problem at hand, but I hope you won't see it as criticism (that's certainly not my intention) but as an attempt to clarify the full situation that's going on here (with my user's hat on, not a developer's one).
I was hoping that:

```yaml
- when: condition
  block:
    - meta: flush_handlers
```

would be a suitable workaround, but the handlers are flushed even in that case, even when …
I'm ok with the flush not supporting conditionals. But it would sure be nice to be able to suppress the (new) warning, since there is nothing that can be done about it.
FWIW, let me describe a valid scenario (below). flush_handlers is needed after the tasks in abc.yml complete and before the tasks in def.yml start. But the file with these tasks, xyz.yml, is imported only when the OS is RedHat. Here Ansible complains. It would be nice to be able to suppress the warning.
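A sketch of the layout described above, reusing the file names from the comment (the play structure and condition variable are assumptions):

```yaml
# xyz.yml -- contains both task files and the flush between them
- import_tasks: abc.yml

# Needed: handlers notified by abc.yml must run before def.yml starts.
- meta: flush_handlers

- import_tasks: def.yml
```

```yaml
# playbook.yml
- hosts: all
  tasks:
    - import_tasks: xyz.yml
      # This 'when' is inherited by every task inside xyz.yml,
      # including the meta task above, which triggers the warning.
      when: ansible_os_family == "RedHat"
```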
I completely agree with @vbotka; having conditions on higher-level imports makes the warning quite out of context, although I understand how task tags and conditions are processed at a low level.
I agree that all "helpful warnings" like this one should be programmatically silenceable, so meta: [whatever] should allow for an ignore_warnings: yes option. After all, "Meta tasks are a special kind of task". So, if they are tasks, even if of a special kind, why don't they support ignore_warnings: yes? Why don't they support being evaluated by conditionals? (And, again, here I'm not asking for the underlying code or technicality that makes this happen or not happen, but for the rationale that supports this as the desired behaviour.) But please re-read this issue's subject: "meta: flush_handlers doesn't honor when clause". You see, this one is about a different beast. I'd suggest you open a feature request about it (while properly solving this one would make your feature request moot).
I'd support @jmnavarrol in this request. IMHO this behaviour is unexpected (if you are not an Ansible developer), and the first (minor) bug is that there hasn't been any warning about it until recently. The current implementation explains some problems we had with our roles which we couldn't really explain. If the current behaviour is in fact intended, then it should be (prominently) mentioned in the documentation. Our context of the warning is:
We have several such role constructs. Silencing the warning wouldn't help us. It would be nice if it worked ;-)
Similar to #38074.
+1 on this. A big pain when integrating roles which rely on …, i.e. it will die if I integrate my colleague's role (which is using handlers).
A must have!
Just ended up here because of the exact condition that @vbotka describes. We have an NFS role and two plays that are included based on server or client configuration. In the server play, which runs first, there is a flush_handlers meta task to ensure changes to the NFS server config are applied before the clients are configured and attempt to mount, on the presumption that these changes have been applied. The warning is not helpful here, because the meta task itself has no when clause; the entire set of tasks is conditional.
I'm facing the same situation as @vbotka, @marcinkubica and @Routhinator.
I also have the same issue as @vbotka, @marcinkubica and @Routhinator. For me, this issue makes handlers useless unless you know you will never need to flush them. Over the maintenance life of our Ansible playbooks, there is a high chance that parts may in future be refactored to be included conditionally. So any handler that you may at some point want to flush is just a booby trap waiting to happen, as far as I am concerned.
Just hit this issue with a 3rd party playbook. |
Hi, good afternoon,
The motivation for this issue is not to find a workaround; it's that Ansible should honor a when on the flush, because that is the obvious behaviour. If I don't find a workaround, then I will be stuck...
The fix is to make a when clause on a flush task actually work.
Regards,
Cesar Jorge
On Sun, Aug 16, 2020, 18:46, Eduardo <notifications@github.com> wrote:
… Just hit this issue with a 3rd party playbook.
IMO, if flush_handlers does not work with conditionals, it should either
be fixed (bug) or error out (not intended).
Having a warning makes no sense to me given that whatever depends on this
handler acting will actually fail, unnecessarily making it more difficult
to pinpoint the issue.
flush_handlers receives a when clause because of the way conditional dependencies have been implemented: flush_handlers may appear in a role, roles may be a conditional dependency, and all tasks that appear in such a role get a when clause.
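A sketch of that mechanism (role and file names are illustrative, not from this thread): a role pulled in as a conditional dependency has the dependency's when applied to every task it contains, including any meta task.

```yaml
# roles/app/meta/main.yml -- 'common' is a conditional dependency
dependencies:
  - role: common
    when: ansible_os_family == "Debian"
```

```yaml
# roles/common/tasks/main.yml
- name: Deploy config
  template:
    src: app.conf.j2
    dest: /etc/app.conf
  notify: restart app

# This task inherits the dependency's 'when' above, but flush_handlers
# ignores it and always flushes, which is what this issue is about.
- meta: flush_handlers
```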
Personally, I have the following:

```yaml
- name: 'Installing utilities for Belgian eID.'
  …
  notify: configure eid
  changed_when: True  # NOTE: A notify signal is normally produced only when the task introduced a change!

- meta: flush_handlers  # Execute the notified handlers now, without delay.
```

In other words, a changed_when: True is required to execute the notified handlers even though nothing had changed.
Hi,
This does not work conditionally, as described in this issue.
Regards,
Cesar Jorge
On Tue, Dec 8, 2020, 15:05, Serge Y. Stroobandt <notifications@github.com> wrote:
… Personally, I have the following:

```yaml
- name: 'Installing utilities for Belgian eID.'
  …
  notify: configure eid
  changed_when: True  # NOTE: A notify signal is normally produced only when the task introduced a change!

- meta: flush_handlers  # Execute the notified handlers now, without delay.
```

In other words, a changed_when: True is required to execute the notified handlers even though nothing had changed. I hope this counts towards reinstating this desired behaviour without any warning.
IMHO this would completely defeat the purpose of a handler. You could just as well put the handler content in a regular next task.
Would it be possible to at least change the warning message to state the file and line number where the problematic flush_handlers occurs, as well as the offending when: around it? We have thousands of lines of Ansible code, and every time we run it we see "[WARNING]: flush_handlers task does not support when conditional" at the start, but how on earth do we go about finding the cause? I still maintain that this issue makes flush_handlers and/or conditional inclusion of roles problematic features in Ansible. Either fix the warning, or remove those features.
Couldn't you just run ansible with …?
Thanks @jimbo8098, I never thought to use that. Having explored the reason for this message in our codebase, I re-confirm my position that this issue makes the notion of "handlers" much less useful than it could be:
The lesson here is not to use handlers, but to just code the logic yourself by storing the result of both tasks. Because even if the task is not conditionally included today, who knows if it will be conditionally included in future. That implies handlers are only to be used in cases where you don't care when they are run, as long as they are eventually run. But given that a playbook run can fail halfway, leaving lots of handlers not yet run, that leads me to conclude: "Ansible Handlers Considered Harmful", so to speak.
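A sketch of that register-based alternative (task names, paths, and the service name are made up): instead of notify plus a handler, record whether either task changed and restart explicitly.

```yaml
- name: Deploy main config
  template:
    src: app.conf.j2
    dest: /etc/app.conf
  register: main_conf

- name: Deploy vhost config
  template:
    src: vhost.conf.j2
    dest: /etc/vhost.conf
  register: vhost_conf

# Explicit replacement for a 'restart app' handler: it runs exactly here
# and honors surrounding conditionals like any normal task.
- name: Restart app if any config changed
  service:
    name: app
    state: restarted
  when: main_conf is changed or vhost_conf is changed
```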
To make things worse, …
Well, you can still count on task stages: pre_tasks -> roles -> tasks -> post_tasks, each of them triggering handlers when they finish. But I'd say yes, that's the basic way handlers work ("I need this action to be done, like a service restart... eventually"). It isn't without merit, as it allows "collecting" all those actions and running them once (you don't want your web server being restarted a dozen times when once is good enough, just because you activated a dozen plugins, right?). ...and that "eventually" is what flush_handlers was meant to alleviate ("the moment is... now!"), were it not for this bu^H^Hlack of feature. Conceptually, flush_handlers should work exactly the same way as the handler flushing that already happens at the end of a stage: it knows which handlers to trigger on which hosts under which conditions... just do the same, only do it now!
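For illustration, a sketch of those implicit flush points (the handler and messages are made up; changed_when: true forces the notify to fire):

```yaml
- hosts: all
  gather_facts: false
  handlers:
    - name: restart web
      debug:
        msg: "restarting web server (once)"
  pre_tasks:
    - debug:
        msg: "some pre_task"
      changed_when: true
      notify: restart web
  # implicit flush: 'restart web' runs here, at the end of pre_tasks
  tasks:
    - debug:
        msg: "activate a dozen plugins"
      changed_when: true
      notify: restart web
    - meta: flush_handlers  # explicit flush: run notified handlers right now
  # final implicit flush at the end of the play
```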
Failing halfway through a playbook run may happen for very many different causes, and it's up to the playbook's developer (or role's developer, for that matter) to understand the impact stopping the run halfway may have and how far they want to go to control/correct those situations. Ansible offers some help, though:
My comment on failing was rather this: imagine some role that ran ages ago notified some handlers that have not run yet. Imagine now that an entirely unrelated later role fails. Then the handlers won't get run. The only way the author of the first role can deal with the situation is to make sure their handlers are run by the end of their role, by using meta: flush_handlers. The author of the second role can deal with anticipated errors in their own code, but (unless one tightly couples one's entire Ansible codebase) they have no knowledge of what prior roles were trying to achieve. The best the author of the second role can do is use …
Handlers are a nice concise idiom. But based on the above, in playbooks of largish size, I have grown to be careful about using them. It would be nice if we could keep using handlers, but also limit them to a smaller scope. Which is, I guess, what most of us complaining about this bug are trying to achieve. Something like a notify that is only deferred to the end of the current role, instead of the end of the entire playbook, would be great. But I can live without this. I'll lean towards more explicit tasks that conditionally run code based on the results of prior tasks, instead of notifies and handlers.
This issue is more than 3 years old =/. Maybe this message should just be removed completely in case you use …
Quite off-topic here, but anyway... I see your point... but then, software with bugs is going to render unintended consequences! (And by "software with bugs" I mean here our own Ansible code, not the Ansible product itself.)

Ansible runs are basically "stateless by definition" so, yes, it's true a following run won't know the previous one didn't finish properly, opening the door to situations like the one you mentioned (a pending [whatever] missed by an early failure is pending no more when you run the playbook again). But then, you can have "bugs in code" (that lead to early termination) and transient errors that are not code related (in our case, normally network connectivity problems). The former you (we) need to deal with by not introducing bugs in code :) and the latter can usually be dealt with in the code itself. For me, when an Ansible run fails, it has usually been easy to see why, along with the resulting side effects (i.e. an early failure after a change in a daemon configuration, but the daemon won't be restarted when you run Ansible again), so I'm fine with the current general Ansible approach.

Strategically, these kinds of problems could/should be approached by changing Ansible's general behaviour from "stateless" to "stateful" (at least optionally). I.e., if instead of accounting for pending handlers transiently and in memory, Ansible left that state in a file/database/shared status device/etc., subsequent Ansible runs could take that "pending status" into account and act accordingly. I don't think that approach would be out of the question, as it's somehow similar to what Ansible already does with *.retry files or inventory persistence plugins, but it's certainly a matter of a completely separate feature request/development (and far from a trivial one, I'd say).
@jmnavarrol I take your point on bugs in code. But we also do have mechanisms for dealing with anticipated problem conditions, which could include bugs. For example, we have try/catch/finally exception handlers. I see my proposal for notifies/handlers that are constrained to their own role as a species of finally block. Whatever happens elsewhere in the code, at least I know that this bit of code (this role) will clean up after itself. |
Test at https://github.com/jmnavarrol/ansible-issue-41313 upgraded to current Ansible version (2.11.3). Still the same behaviour. |
Handlers cannot be used to update the apt cache because, when a role is run conditionally (when:) from a playbook, handlers cannot be flushed with 'meta: flush_handlers' due to Ansible issue #41313. Ref.: ansible/ansible#41313
Thank you very much for your submission to Ansible. It means a lot to us that you've taken time to contribute. Unfortunately, this issue has been open for some time while waiting for a contributor to take it up, but there does not seem to have been anyone that did so. So we are going to close this issue to clear up the queues and make it easier for contributors to browse possible implementation targets. However, we're absolutely always up for discussion. Because this project is very active, we're unlikely to see comments made on closed tickets and we lock them after some time. If you or anyone else has any further questions, please let us know by using any of the communication methods listed in the page below:
In the future, sometimes starting a discussion on the development list prior to proposing or implementing a feature can make getting things included a little easier, but it's not always necessary. Thank you once again for this and your interest in Ansible!
this definitely needs to remain open |
We are no longer keeping feature requests open for an extended period of time that the core team has no intentions of implementing ourselves. This is not a statement that we would not accept a change that implements this feature request. This issue has been open for nearly 4 years without anyone stepping up to implement it. |
sad ... |
@sivel, you've made a valid point; however, keeping the feature closed reduces its visibility and the chances of someone actually seeing that there is significant interest in the implementation. This bug will be celebrating its 4-year anniversary in about a month. Perhaps keep it open with a "needs contributor" tag?
SUMMARY
meta: flush_handlers gets triggered even within a when clause that evaluates to false.
ISSUE TYPE
COMPONENT NAME
ansible-playbook
ANSIBLE VERSION
CONFIGURATION
OS / ENVIRONMENT
Ubuntu Bionic, running Ansible in a virtualenv (but I don't think this is relevant in this case)
STEPS TO REPRODUCE
The playbook below calls flush_handlers within a block whose when clause evaluates to false. To demonstrate this, an additional task is placed within the block which, as expected, never gets run.
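The playbook itself is not included in this extract; a minimal reconstruction matching the described behaviour (task and handler names are assumptions):

```yaml
- hosts: localhost
  gather_facts: false
  handlers:
    - name: my handler
      debug:
        msg: "handler triggered"
  tasks:
    - debug:
        msg: "notifying the handler"
      changed_when: true
      notify: my handler

    - when: 1 == 2  # can never evaluate to true
      block:
        - debug:
            msg: "this task is correctly skipped"
        - meta: flush_handlers  # ...yet the handler still fires here
```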
EXPECTED RESULTS
The expected result is for the handler to be triggered after all tasks. Something like...
ACTUAL RESULTS
What happened is that the handler was triggered immediately when the Ansible interpreter reached the flush_handlers spot, despite it being within a block that shouldn't run because of the wrapping when clause:
NOTE: I also tried applying the when clause directly to the meta task, with the same result. This problem can also be reproduced up to 2.11.