
meta: flush_handlers doesn't honor when clause #41313

Closed
jmnavarrol opened this issue Jun 8, 2018 · 68 comments
Labels
affects_2.4 This issue/PR affects Ansible v2.4 bot_closed docs This issue/PR relates to or includes documentation. feature This issue/PR relates to a feature request. has_pr This issue has an associated PR. support:core This issue/PR relates to code supported by the Ansible Engineering Team.

Comments

@jmnavarrol

jmnavarrol commented Jun 8, 2018

SUMMARY

meta: flush_handlers gets triggered even inside a block whose when clause evaluates to false.

ISSUE TYPE
  • Feature Idea
  • Documentation Report
COMPONENT NAME

ansible-playbook

ANSIBLE VERSION
ansible 2.4.1.0
  config file = /home/XXX/REPOS/physics-development/ansible/ansible.cfg
  configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /home/XXX/.virtualenvs/ansible-2.4/local/lib/python2.7/site-packages/ansible
  executable location = /home/XXX/.virtualenvs/ansible-2.4/bin/ansible
  python version = 2.7.15rc1 (default, Apr 15 2018, 21:51:34) [GCC 7.3.0]
CONFIGURATION
ANSIBLE_PIPELINING(/home/XXX/REPOS/physics-development/ansible/ansible.cfg) = True
ANSIBLE_SSH_RETRIES(/home/XXX/REPOS/physics-development/ansible/ansible.cfg) = 2
DEFAULT_LOOKUP_PLUGIN_PATH(/home/XXX/REPOS/physics-development/ansible/ansible.cfg) = [u'/home/XXX/REPOS/physics-development/ansible/lookup_plugins']
DEFAULT_ROLES_PATH(/home/XXX/REPOS/physics-development/ansible/ansible.cfg) = [u'/home/XXX/REPOS/physics-development/ansible/roles']
OS / ENVIRONMENT

Ubuntu Bionic, running Ansible in a virtualenv (but I don't think this is relevant in this case)

STEPS TO REPRODUCE

The playbook below calls flush_handlers within a block whose when clause evaluates to false. To prove this, the block also contains an additional task which, as expected, never gets run.

---
- name: Triggers a flush_handlers bug
  hosts: localhost
  
  tasks:
  - debug:
      msg: "Playbook running Ansible version {{ ansible_version.string }}"
    
  - name: First task
    set_fact:
      my_task: 'first task'
    changed_when: true
    notify: my_handler
    
  - block:
    - name: This task will never be run
      set_fact:
        my_task: 'task within block that evaluates to false'
      changed_when: true
      notify: my_handler
      
    - meta: flush_handlers
    when: 1 == 2
  
  - name: Last task
    debug:
      msg: 'This is the last task'
      
  handlers:
  - name: my_handler
    debug:
      msg: "Handler triggered from {{ my_task }}"
EXPECTED RESULTS

The expected result is for the handler to be triggered after all tasks. Something like...

PLAY [Triggers a flush_handlers bug] **************************************************

TASK [Gathering Facts] *******************************************************************
ok: [localhost]

TASK [debug] *******************************************************************************
ok: [localhost] => {
    "msg": "Playbook running Ansible version 2.4.1.0"
}

TASK [First task] ****************************************************************************
changed: [localhost]

TASK [This task will never be run] *******************************************************
skipping: [localhost]

TASK [Last task] *****************************************************************************
ok: [localhost] => {
    "msg": "This is the last task"
}

RUNNING HANDLER [my_handler] *****************************************************
ok: [localhost] => {
    "msg": "Handler triggered from first task"
}

PLAY RECAP *********************************************************************************
localhost                  : ok=5    changed=1    unreachable=0    failed=0   
ACTUAL RESULTS

What happened instead is that the handler was triggered immediately when the Ansible interpreter reached the flush_handlers spot, despite it being within a block that should not run because of the wrapping when clause:

PLAY [Triggers a flush_handlers bug] ****************************************************

TASK [Gathering Facts] *********************************************************************
ok: [localhost]

TASK [debug] **********************************************************************************
ok: [localhost] => {
    "msg": "Playbook running Ansible version 2.4.1.0"
}

TASK [First task] *******************************************************************************
changed: [localhost]

TASK [This task will never be run] **********************************************************
skipping: [localhost]

RUNNING HANDLER [my_handler] ********************************************************
ok: [localhost] => {
    "msg": "Handler triggered from first task"
}

TASK [Last task] *******************************************************************************
ok: [localhost] => {
    "msg": "This is the last task"
}

PLAY RECAP ***********************************************************************************
localhost                  : ok=5    changed=1    unreachable=0    failed=0   

NOTE: I also tried applying the when clause just to the meta task, with the same result. This problem can also be reproduced up to Ansible 2.11.

@ansibot
Contributor

ansibot commented Jun 8, 2018

Files identified in the description:

If these files are inaccurate, please update the component name section of the description or use the !component bot command.


@ansibot ansibot added affects_2.4 This issue/PR affects Ansible v2.4 bug This issue/PR relates to a bug. needs_triage Needs a first human triage before being processed. support:core This issue/PR relates to code supported by the Ansible Engineering Team. labels Jun 8, 2018
@mkrizek
Contributor

mkrizek commented Jun 8, 2018

The behavior you're describing is expected. I recently merged a PR that adds a warning when using a conditional on meta tasks that don't support it - #41126. Also see #27565.

Stop by on IRC or mailing list if you have any questions:

Thanks!

@mkrizek mkrizek closed this as completed Jun 8, 2018
@mkrizek mkrizek removed the needs_triage Needs a first human triage before being processed. label Jun 8, 2018
@jmnavarrol
Author

mkrizek wrote:

The behavior you're describing is expected.

No, it isn't.

It obviously wasn't expected by me, which is why I opened this bug. More interestingly, even you know this is unexpected; otherwise, why invest the time to warn about it?

All in all, warning at run time about this behaviour makes for a nice addition, so thank you very much for that. But please consider reopening this so it's treated as what it in fact is: a bug or, at the very least, a feature request.

@mkrizek
Contributor

mkrizek commented Jun 8, 2018

Oh, sorry for the incorrect wording, let me rephrase: it works as intended and as such it's not a bug. We should at least clarify the behavior in the documentation.

@mkrizek mkrizek reopened this Jun 8, 2018
@mkrizek mkrizek added docs This issue/PR relates to or includes documentation. feature This issue/PR relates to a feature request. and removed bug This issue/PR relates to a bug. labels Jun 8, 2018
@jmnavarrol
Author

Thanks a lot for reopening this, even if only as a feature request!

mkrizek wrote:

let me rephrase, it works as intended and as such it's not a bug.

Hmm... I'd rather say "the current behaviour is known to developers" (but then, known bugs are also known to developers and that doesn't make them any less of a bug).

In order to claim that it works as intended, there should be some kind of rationale supporting this behaviour as the desired one (I mean not a technical explanation of why it happens to behave the way it does, but a functional requirement for the behaviour to be this instead of any other, which I doubt exists; I think you and I could agree that the proposed behaviour would be better than the current one).

Well, enough chatting: thanks again for your effort.

@mkrizek
Contributor

mkrizek commented Jun 8, 2018

One of the comments in the issue I linked for you (#27565) states that some meta tasks are not host-specific and so when is ignored there. So in your case, flush_handlers runs handlers on those hosts which have been notified so far at the point where you call flush_handlers. This means that flush_handlers is not a host-specific meta task; it can affect multiple hosts. So imagine this:

- meta: flush_handlers
  when: condition

The condition might be true only on one of your hosts and false on the rest, which means that handlers would be flushed even on hosts where the condition is false.

Like I said, we didn't do a good job documenting this and we should fix that.
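To illustrate the multi-host point, here is a minimal sketch (the hosts and the do_flush variable are made up for this example); the when below is simply ignored because the flush cannot be limited to a subset of hosts:

- hosts: web1:web2
  tasks:
    - name: Change something and notify a handler on both hosts
      command: /bin/true
      changed_when: true
      notify: restart service

    # Even if do_flush were true only on web1, flush_handlers acts play-wide,
    # so the notified handler would run on web2 as well; Ansible therefore
    # ignores the conditional and warns about it.
    - meta: flush_handlers
      when: do_flush | bool

  handlers:
    - name: restart service
      debug:
        msg: 'restarting'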

@jmnavarrol
Author

The more I look at it, the more convinced I am that this is in fact a bug.

mkrizek said:

One of the comments in the issue I linked for you (#27565) states that some of the meta tasks are not host specific and so when is ignored there. So in your case, flush_handlers runs handlers on those hosts which have been notified (so far at the point you call flush_handlers).

I read #27565 and, while I understand the part about meta being "...a special kind of task which can influence Ansible internal execution or state" (which is the basis for the offered rationale), meta tasks don't even work the way you state: flush_handlers is not triggered here because the when clause resolved to true on one host and then ran on every host as a side effect of meta's nature (which could, for instance, be the case for the refresh_inventory or end_play meta tasks); it's simply that meta is ignoring the when clause entirely. See my example: the when clause (when: 1 == 2) simply can't resolve to true under any circumstance and, still, flush_handlers is triggered.

Now, some other details:

  • Warn when cond is used on meta tasks that don't support them #41126: its subject is "Warn when cond is used on meta tasks that don't support them". This is not a bug-resolving statement (by itself), but a remediating one. I.e. it's good to get a warning if I put, say, meta: end_play within a when block, because it's easy to overlook the real implications of what I'm doing. But then, flush_handlers is a host-oriented meta task, as "...Handlers are lists of tasks, not really any different from regular tasks", so it should fully support being called within conditional blocks and work as expected (honoring them, that is).
  • meta tasks do not appear to respect conditionals #27565: "meta tasks do not appear to respect conditionals". It is (somewhat) the general case of this bug. While I considered only the meta: flush_handlers case, #27565 is about all meta tasks. As I said, I understand the rationale given there (which basically ends up being "because of its very nature, meta may not work the way you think it works" and thus calls for #41126), but I don't think it offers the proper solution because, as the case here illustrates, it is not that the when clause is processed properly but with hard-to-anticipate side effects; it is that meta tasks are simply not evaluated in their proper context. I'm almost 100% sure, even without looking at the underlying code, that meta tasks are not being processed the way they should be (and I'm also almost sure it will be a bug that is hard to track down and solve).
  • meta: flush_handlers doesn't honor when clause #41313: this issue. Even though #27565 is the general case of this one, there is still merit in splitting it on a case-by-case basis, if only to produce valid test cases that can be hunted down one by one to ensure they are solved, so, once again, I suggest you reconsider this issue's status (not a feature request but a bug).

I know this comment is on the verge of being better suited to the general mailing list than to an issue, as it's almost as much about "what a bug is and is not" as about the very nature of the problem at hand, but I hope you won't see it as criticism (that's certainly not my intention) but as an attempt to clarify the full situation that's going on here (with my user's hat on, not a developer's one).

@mintern

mintern commented Jul 24, 2018

I was hoping that:

- when: condition
  block:
    - meta: flush_handlers

would be a suitable workaround, but the handlers are flushed even in that case, even when condition doesn't hold.

@memelet

memelet commented Nov 2, 2018

I'm ok with the flush not supporting conditionals. But it would sure be nice to be able to suppress the (new) warning since there is nothing that can be done about it.

@vbotka
Contributor

vbotka commented Nov 21, 2018

FWIW, let me describe a valid scenario (below). flush_handlers is needed after the tasks in abc.yml complete and before the tasks in def.yml start. But the file with those tasks, xyz.yml, is imported only when the OS is RedHat. Here Ansible complains:
[WARNING]: flush_handlers task does not support when conditional

It would be nice to be able to suppress the warning.

# cat xyz.yml
- include_tasks: abc.yml
- meta: flush_handlers
- include_tasks: def.yml

# cat playbook.yml
  ...
  tasks:
    - import_tasks: xyz.yml
      when: (ansible_os_family == "RedHat" )

@bmalynovytch
Contributor

I completely agree with @vbotka: having conditions on higher-level imports makes the warning quite out of context, although I understand how task tags and conditions are processed at a low level.
There should be a way to get rid of the warning or, at least, mute it in some circumstances (such as conditionals on outer inclusions).

@jmnavarrol
Author

jmnavarrol commented Nov 29, 2018

I completely agree with @vbotka: having conditions on higher-level imports makes the warning quite out of context, although I understand how task tags and conditions are processed at a low level.

I agree that all "helpful warnings", like this one, should be programmatically silenceable, so meta:[whatever] should allow for the ignore_warnings: yes option.

After all, "Meta tasks are a special kind of task". So, if they are tasks, even if of a special kind, why don't they support ignore_warnings: yes? Why don't they support being evaluated by conditionals? (And, again, here I'm not asking about the underlying code or technicality that makes this happen, or not happen, but about the rationale that supports this as the desired behaviour.)

But, please, re-read this issue's subject: "meta: flush_handlers doesn't honor when clause". You see, yours is about a different beast. I'd suggest you open a feature request about it (although properly solving this one would make that further feature request moot).

@fthommen

I'd support @jmnavarrol in this request. IMHO this behaviour is unexpected (if you are not an Ansible developer) and the first (minor) bug is that there wasn't any warning about it until recently. The current implementation explains some occasional problems we had with our roles which we couldn't really explain.

If the current behaviour is in fact intended, then it should be (prominently) mentioned in the documentation.

Our context of the warning is:

- hosts: all
  name:  "====== BLAH ======"
  roles:
    - { role: blah, when: has_blah|default(False) }
  tags:
    - blah-only

We have several such role constructs. Silencing the warning wouldn't help us. It would be nice if it worked ;-)

@Akasurde
Member

Similar to #38074.

@marcinkubica

marcinkubica commented Feb 4, 2019

+1 on this

A big pain when integrating roles which rely on when: conditions.

i.e. it will die if I integrate my colleague's role (which uses handlers):

- { import_role: { name: docker }, tags: debian-docker, when: debian_docker is defined and debian_docker }

A must have!

@Routhinator

Just ended up here because of the exact condition that @vbotka describes.

We have an NFS role and two plays that are included based on server or client configuration. In the server play, which runs first, there is a flush_handlers meta task to ensure changes to the NFS server config are applied before the clients are configured and attempt to mount on the presumption that those changes have been applied. The warning is not helpful here, because the meta task itself has no when clause; the entire set of tasks is conditional.
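For reference, the layout described here looks roughly like the sketch below (role, file and variable names are hypothetical):

# roles/nfs/tasks/main.yml
- import_tasks: server.yml
  when: nfs_mode == 'server'

- import_tasks: client.yml
  when: nfs_mode == 'client'

# roles/nfs/tasks/server.yml
- name: Deploy NFS exports
  template:
    src: exports.j2
    dest: /etc/exports
  notify: reload nfs server

# No when is written here, but the condition on the static import above is
# inherited by every imported task, including this one, which is what
# produces the warning.
- meta: flush_handlers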

@debalance

I'm facing the same situation as @vbotka, @marcinkubica and @Routhinator.
Looks like I'll have to deploy a patched ansible that at least partially reverts #41126.

@pjnagel

pjnagel commented Jun 13, 2019

I also have the same issue as @vbotka, @marcinkubica and @Routhinator.

For me, this issue makes handlers useless unless you know you will never need to flush them.

Over the maintenance life of our Ansible playbooks, there is a high chance that parts may in the future be refactored to be included conditionally. So any handler that you may at some point want to flush is just a booby trap waiting to happen, as far as I am concerned.

@edward2a

Just hit this issue with a 3rd party playbook.
IMO, if flush_handlers does not work with conditionals, it should either be fixed (bug) or error out (not intended).
Having a warning makes no sense to me given that whatever depends on this handler acting will actually fail, unnecessarily making it more difficult to pinpoint the issue.

@cesarjorgemartinez

cesarjorgemartinez commented Aug 26, 2020 via email

@rvalle
Contributor

rvalle commented Oct 2, 2020

flush_handlers needs to support a when clause because of the way conditional dependencies have been implemented.

flush_handlers may appear in a role, roles may be a conditional dependency... and all tasks that appear in such a role end up carrying that when clause.
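A sketch of that chain with made-up role names: the when on the conditional dependency is inherited by every task of the dependent role, including its flush_handlers, which is what produces the warning:

# roles/webapp/meta/main.yml
dependencies:
  - role: tls_certs
    when: enable_tls | bool

# roles/tls_certs/tasks/main.yml
- name: Install the certificate
  copy:
    src: server.crt
    dest: /etc/ssl/certs/server.crt
  notify: reload web server

# Inherits "when: enable_tls | bool" from the dependency declaration above,
# even though no when is written here.
- meta: flush_handlers

# roles/tls_certs/handlers/main.yml
- name: reload web server
  service:
    name: nginx
    state: reloaded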

@stroobandt

Personally, I have the following:

- name: 'Installing utilities for Belgian eID.'
  …
  notify: configure eid
  changed_when: True    # NOTE: A notify signal is normally produced only when the task introduced a change!

- meta: flush_handlers    # Execute the notified handlers now, without delay.

In other words, a changed_when: True is required to execute the notified handlers even though nothing had changed. I hope this counts towards reinstating this desired behaviour without any warning.

@cesarjorgemartinez

cesarjorgemartinez commented Dec 8, 2020 via email

@fthommen

fthommen commented Dec 8, 2020

Personally, I have the following:

- name: 'Installing utilities for Belgian eID.'
  …
  notify: configure eid
  changed_when: True    # NOTE: A notify signal is normally produced only when the task introduced a change!

- meta: flush_handlers    # Execute the notified handlers now, without delay.

In other words, a changed_when: True is required to execute the notified handlers even though nothing had changed. I hope this counts towards reinstating this desired behaviour without any warning.

IMHO this would completely defeat the purpose of a handler. You could just as well put the handler content in a regular next task.

@pjnagel

pjnagel commented Dec 9, 2020

Would it be possible to at least change the error message to state the file and line number where the problematic "flush_handlers:", as well as the offending "when:" around it, occurs?

We have thousands of lines of ansible code, and every time we run it we see "[WARNING]: flush_handlers task does not support when conditional" at the start, but how on earth do we go about finding the cause?

I still maintain that this issue makes "flush_handlers" and/or conditional inclusion of roles problematic features in ansible. Either fix the warning, or remove those features.

@jimbo8098
Contributor

Would it be possible to at least change the error message to state the file and line number where the problematic "flush_handlers:", as well as the offending "when:" around it, occurs?

We have thousands of lines of ansible code, and every time we run it we see "[WARNING]: flush_handlers task does not support when conditional" at the start, but how on earth do we go about finding the cause?

I still maintain that this issue makes "flush_handlers" and/or conditional inclusion of roles problematic features in ansible. Either fix the warning, or remove those features.

Couldn't you just run ansible with -vvv to get what you need?

@pjnagel

pjnagel commented May 24, 2021

Couldn't you just run ansible with -vvv to get what you need?

Thanks @jimbo8098, I never thought to use -vvv that way.

Having explored the reason for this message in our codebase, I re-confirm my position that this issue makes the notion of "handlers" much less useful than it could be:

  • In our code we have an inventory variable that determines whether servers use LetsEncrypt for their certificates or not.
  • We conditionally include a role that installs certbot on those servers.
  • That role in turn conditionally includes a role that configures Apache to not use port 80 (to leave the port free for Certbot's HTTP challenge).
  • The apache role has two tasks, one that edits /etc/apache2/ports.conf, and one that disables the default site. Both tasks use notify: 'Reload apache2'.
  • The tasks are followed by meta: flush_handlers to ensure Apache is reloaded by the time control returns to the task that included the role, so the including tasks are free to immediately issue a certbot challenge/response if they need to.

The lesson here is to not use handlers, but to just code the logic yourself by storing the result of both tasks, because even if the task is not conditionally included today, who knows if it will be conditionally included in the future.
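A minimal sketch of that approach, roughly following the Apache example above (module choices, paths and names are illustrative, not our actual code):

- name: Keep port 80 free for the HTTP challenge
  lineinfile:
    path: /etc/apache2/ports.conf
    regexp: '^Listen 80$'
    state: absent
  register: ports_conf

- name: Disable the default site
  command: a2dissite 000-default
  args:
    removes: /etc/apache2/sites-enabled/000-default.conf
  register: default_site

# A regular task instead of a handler: it runs right here, and only if one of
# the two tasks above reported a change, no matter how this file ends up
# being imported or included.
- name: Reload apache2
  service:
    name: apache2
    state: reloaded
  when: ports_conf is changed or default_site is changed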

That implies handlers are only to be used in cases when you don't care when they are run, as long as they are eventually run.

But given that a playbook run can fail halfway, leaving lots of handlers not yet run, that leads me to conclude: "Ansible Handlers Considered Harmful", so to speak.

@pjnagel

pjnagel commented May 24, 2021

To make things worse, warn: false does not work on meta tasks.

@jmnavarrol
Author

jmnavarrol commented May 26, 2021

That implies handlers are only to be used in cases when you don't care when they are run, as long as they are eventually run.

Well, you can still count on task stages: pre_tasks -> roles -> tasks -> post_tasks, each of them triggering handlers when it finishes, but I'd say yes, that's the basic way handlers work ("I need this action to be done, like a service restart... eventually"). It isn't without merit, as it allows "collecting" all those actions and running them once (you don't want your web server restarted a dozen times when once is good enough just because you activated a dozen plugins, right?).

...and that "eventually" is what flush_handlers was meant to alleviate ("the moment is... now!")... were it not for this bu^H^Hlack of feature.

Conceptually, flush_handlers should work exactly the same way as the handler flushing that already happens at the end of a stage: it knows which handlers to trigger on which hosts under which conditions... just do the same, only do it now!
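A small sketch of those flush points; the comments mark where handlers run automatically versus where flush_handlers forces them to run:

- hosts: localhost
  pre_tasks:
    - name: Notify from pre_tasks
      command: /bin/true
      changed_when: true
      notify: my_handler
  # <-- handlers notified during pre_tasks are flushed automatically here

  tasks:
    - name: Notify from tasks
      command: /bin/true
      changed_when: true
      notify: my_handler

    - meta: flush_handlers   # force the flush now instead of waiting for the end of the stage

    - name: Runs after the handler has already executed
      debug:
        msg: 'the handler already ran'
  # <-- any remaining notified handlers are flushed here as well

  handlers:
    - name: my_handler
      debug:
        msg: 'handler triggered'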

But given that a playbook run can fail halfway, leaving lots of handlers not yet run, that leads me to conclude: "Ansible Handlers Considered Harmful", so to speak.

Failing halfway through a playbook run may happen for many different reasons, and it's up to the playbook's developer (or the role's developer, for that matter) to understand the impact that stopping the run halfway may have and how far they want to go to control/correct those situations.

Ansible offers some help, though:

  1. A general approach: https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html
  2. An exceptions-like mechanism for blocks: https://docs.ansible.com/ansible/2.8/user_guide/playbooks_blocks.html#block-error-handling

@pieter-lautus

My comment on failing was rather this: imagine some role that ran ages ago notified some handlers that have not run yet. Imagine now an entirely unrelated later role fails. Then the handlers won't get run.

The only way the author of the first role can deal with the situation is to make sure their handlers are run by the end of their role, by using meta: flush_handlers at the end, for example. This brings us back to this very issue.

The author of the second role can deal with anticipated errors in their own code, but (unless one tightly couples one's entire ansible codebase), they have no knowledge of what prior roles were trying to achieve. The best the author of the second role can do is use meta: flush_handlers at the start of their role to try and localise the effect of their own errors to their own role. And now we are back to this issue again.

Handlers are a nice concise idiom. But based on the above, in playbooks of largish size, I have grown to be careful about using them. It would be nice if we could keep using handlers, but also limit them to a smaller scope. Which is, I guess, what most of us complaining about this bug are trying to achieve.

Something like a notify that is only deferred to the end of the current role, instead of the end of the entire playbook, would be great. But I can live without this. I'll lean towards more explicit tasks that conditionally run code based on the results of prior tasks, instead of notifies and handlers.

@staticdev

staticdev commented Jul 7, 2021

This issue is more than 3 years old =/. Maybe this warning should just be removed completely when you use meta: flush_handlers.

@jmnavarrol
Author

[...]
Something like a notify that is only deferred to the end of the current role, instead of the end of the entire playbook, would be great. But I can live without this. I'll lean towards more explicit tasks that conditionally run code based on the results of prior tasks, instead of notifies and handlers.

Quite off-topic here, but anyway...

I see your point... but, then, software with bugs is going to produce unintended consequences! (And by "software with bugs" I mean here our own Ansible code, not the Ansible product itself.)

Ansible runs are basically "stateless by definition" so, yes: it's true a following run won't know the previous one didn't finish properly, opening the door to situations like the one you mentioned (a pending [whatever] missed by an early failure is pending no more when you run the playbook again).

But then, you can have "bugs in code" (that lead to early termination) and transient errors that are not code related (in our case, normally network connectivity problems). I'd say the former you (we) need to deal with by not introducing bugs in code :) and, as for the latter, they can usually be dealt with in the code itself.

In my case, when an Ansible run fails, it's usually been easy to see why and what the resulting side effects are (e.g. an early failure after a change in a daemon configuration, where the daemon won't be restarted when you run Ansible again), so I'm fine with the current general Ansible approach.

Strategy-wise, this kind of problem could/should be approached by changing Ansible's general behaviour from "stateless" to "stateful" (at least optionally). I.e. if, instead of accounting for pending handlers transiently and in memory, Ansible left that state in a file/database/shared status device/etc., subsequent Ansible runs could take that "pending status" into account and act accordingly. I don't think that approach would be out of the question, as it's somewhat similar to what it already does with *.retry files or inventory persistence plugins, but it's certainly a matter for a completely separate feature request/development (and far from a trivial one, I'd say).

@pieter-lautus

pieter-lautus commented Jul 8, 2021

@jmnavarrol I take your point on bugs in code.

But we also do have mechanisms for dealing with anticipated problem conditions, which could include bugs. For example, we have try/catch/finally exception handlers.

I see my proposal for notifies/handlers that are constrained to their own role as a species of finally block. Whatever happens elsewhere in the code, at least I know that this bit of code (this role) will clean up after itself.
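For what it's worth, block/always gives a rough approximation of that "finally" idea today; this is only a sketch (names are made up), and of course the flush inside always still ignores any surrounding when, which is this very issue:

- hosts: appservers
  tasks:
    - block:
        - name: Change the daemon configuration
          template:
            src: daemon.conf.j2
            dest: /etc/daemon.conf
          notify: restart daemon

        - name: A later task in the same role that might fail
          command: /usr/bin/might-fail
      always:
        # Runs whether the block succeeded or failed, so the notified handler
        # is not left pending when something later aborts the play.
        - meta: flush_handlers

  handlers:
    - name: restart daemon
      service:
        name: daemon
        state: restarted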

@jmnavarrol
Author

Test at https://github.com/jmnavarrol/ansible-issue-41313 upgraded to current Ansible version (2.11.3).

Still the same behaviour.

JM1 added a commit to JM1/ansible-role-jm1-dev-cpp that referenced this issue Oct 9, 2021
Handlers cannot be used to update apt cache because in case that a role is
run conditionally (when:) from a playbook then handlers cannot be flushed
with 'meta: flush_handlers' because of Ansible issue #41313.

Ref.: ansible/ansible#41313
@ansibot
Contributor

ansibot commented Apr 15, 2022

Thank you very much for your submission to Ansible. It means a lot to us that you've taken time to contribute.

Unfortunately, this issue has been open for some time while waiting for a contributor to take it up but there does not seem to have been anyone that did so. So we are going to close this issue to clear up the queues and make it easier for contributors to browse possible implementation targets.

However, we're absolutely always up for discussion. Because this project is very active, we're unlikely to see comments made on closed tickets and we lock them after some time. If you or anyone else has any further questions, please let us know by using any of the communication methods listed in the page below:

In the future, sometimes starting a discussion on the development list prior to proposing or implementing a feature can make getting things included a little easier, but it's not always necessary.

Thank you once again for this and your interest in Ansible!


@ansibot ansibot added bot_closed and removed waiting_on_contributor This would be accepted but there are no plans to actively work on it. labels Apr 15, 2022
@ansibot ansibot closed this as completed Apr 15, 2022
@bmalynovytch
Contributor

This definitely needs to remain open.
Ping @gundalow @felixfontein

@sivel
Member

sivel commented Apr 19, 2022

We are no longer keeping open, for extended periods of time, feature requests that the core team has no intention of implementing ourselves. This is not a statement that we would not accept a change that implements this feature request. This issue has been open for nearly 4 years without anyone stepping up to implement it.

@bmalynovytch
Contributor

sad ...

@savchenko

@sivel, you've made a valid point; however, keeping the feature request closed reduces its visibility and the chances of someone actually seeing there is significant interest in the implementation.

This bug will be celebrating its 4-year anniversary in about a month. Perhaps keep it open with a "needs contributor" tag?

@ansible ansible locked and limited conversation to collaborators Apr 22, 2022