apt: update_cache=yes failing in ansible 1.3 (?) #4140
How are you invoking ansible-playbook, with -K? There was a change in 1.3: sudo must be explicitly specified, as the -K flag will not make tasks use sudo implicitly.
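For reference, this is roughly what explicitly enabling sudo looks like at the play level in 1.3-era syntax (the host group and task name mirror the output quoted later in this thread; the playbook itself is only a sketch):

- hosts: webserver
  sudo: yes        # -K only supplies the password; sudo must still be enabled explicitly
  tasks:
    - name: Updates apt cache
      apt: update_cache=yes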
Yeah, I was using -K and using sudo explicitly in most places, but not in
Very Respectfully, Dan CaJacob
Ok, I'll let you confirm that, and if so we'll close this. Thanks!
Will do. Thanks!
Very Respectfully, Dan CaJacob
Any follow-up on this?
Going to go ahead and close this; if you're still having a problem, let us know. Thanks!
I am running into this problem as well. I'm running 1.3.2, but I don't have old version references to go by. When using -K I get another failure: ssh connection closed waiting for sudo password prompt. In the end I need to run this non-interactively, so -K cannot be the final answer.
More detail, running manually (host1.*** is the redacted FQDN):

vagrant@ansible-head:~$ sudo -u vagrant ansible-playbook -i inventory ubuntu-apache2.yaml

PLAY [webserver] **************************************************************

GATHERING FACTS ***************************************************************

TASK: [Updates apt cache] *****************************************************
FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
host1.***                  : ok=1    changed=0    unreachable=0    failed=1

Manually running ansible gets the same error:

vagrant@ansible-head:~$ sudo -u vagrant ansible webserver -i inventory -m apt -a "update_cache=yes"
@Ravenwater That appears to be a different problem; could you open a new GitHub issue for it? You might also want to ask on the mailing list to see if others have run into that. Thanks!
I'm having a similar problem with ansible 1.3.4. If I use:
I get the same error message:
Also when installing a package with the update_cache option:
A similar error is shown:
The error appears intermittently. Using a set of 50 EC2 virtual machines with Ubuntu 12.04 (all using the same base image, ami-e50e888c), the error appears on 5 to 20 of them depending on the test. "sudo: true" is specified in the playbook.
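For reference (the exact task isn't quoted above, so the package name here is only illustrative), a task of the kind described, installing a package and refreshing the cache first, looks roughly like:

- name: install apache2, updating the apt cache first
  apt: name=apache2 update_cache=yes
  sudo: yes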
This is normally exactly as the message says, either you don't have
Yes. But using it with 50 VMs with the same OS and the same configuration, it works on some of them and fails on others. Moreover, if I retry the playbook 3 or 4 times, it eventually works on all the nodes.
@micafer Is it possible that other users/processes are calling apt at the same time on those machines?
I'm testing it with a set of 50 VMs specially created for these tests, and I'm the only user on all of them.
Ubuntu has cronned cache updates daily, normally stepped so as not to
I would vote for reopening this issue: in real life there is a serious chance of this happening, and Ansible should provide a solution for it, one that does not involve a human retrying until it works.
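Not an official fix, but one workaround along those lines is to let the task itself retry while the apt lock is held elsewhere. A rough sketch, assuming the until/retries/delay loop support added after 1.3 (the retry counts are arbitrary):

- name: update apt cache, retrying if apt is locked by another process
  apt: update_cache=yes
  sudo: yes
  register: apt_result
  until: apt_result | success    # exact result-test syntax varies across Ansible versions
  retries: 5
  delay: 10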
Ubuntu 16.04
For Ubuntu 16.04 users (although I think it might happen on 15.04 as well): Ubuntu ships with unattended upgrades enabled, so the unattended-upgrade script may be holding the apt lock. This task kills the automatic updating script, if any:

- name: kill automatic updating script, if any
  command: pkill --full /usr/bin/unattended-upgrade
  become: true
  register: kill_result
  failed_when: kill_result.rc > 1  # rc == 1 if the script is inactive
  changed_when: kill_result.rc == 0

It should be safe, since the script will be launched again later by the system.
Just a comment that it didn't work for me; the lock remained in place even with the above command. But as I'm deploying on EC2, I simply updated my base image by manually removing
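If killing the script leaves the lock in place, an alternative sketch is to wait for any running unattended-upgrade to finish before touching apt (the poll interval is arbitrary, and there is no timeout here, so add one if upgrades can run long):

- name: wait for any running unattended-upgrade to finish
  shell: while pgrep --full /usr/bin/unattended-upgrade > /dev/null; do sleep 5; done
  become: true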
I am getting a new error in 1.3 when running
yields
But I can run sudo apt-get update on the node just fine, and this worked before the upgrade to 1.3. The failure occurred on more than one node.
I reverted to ansible 1.2.3 and the problem went away.
On the mailing list, some suggested that this was because sudo was not being invoked.
I am running Ubuntu 12.04 on the node being controlled.
I am using roles (not updated for any changes in 1.3).
A top level node.yml file defines the roles:
It looks like nothing in that particular chain invokes sudo, which is definitely wrong. I don't understand why it worked before version 1.3. In other places, I specifically invoke sudo when required.
I think 1.3 makes it easier to use sudo on a per-role basis; I'll have to investigate that.
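As a sketch of the simplest fix inside the role itself, sudo can be enabled on just the task that needs it (the actual role layout isn't shown above, so the path here only assumes the standard roles/<name>/tasks/main.yml layout and a placeholder role name):

# roles/common/tasks/main.yml (role name is a placeholder)
- name: Updates apt cache
  apt: update_cache=yes
  sudo: yes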