[Bug / Regression] mysql_db import fail to decompress dumps #20196
From @ansibot on July 30, 2016 16:32 @Jmainguy ping, this issue is waiting for your response.
From @Jmainguy on July 30, 2016 20:10 I tried to recreate this on ansible-2.2.0-0.git201605131739.e083fa3.devel.el7.centos.noarch and was unable to reproduce it. I imported a 240 MB .tar.gz (it took a few hours, but it worked). This was on CentOS; can you try again with devel on Ubuntu and let me know if this is still happening?
From @ansibot on September 8, 2016 20:59 @Jmainguy, ping. This issue is still waiting on your response.
From @Jmainguy on September 9, 2016 18:19 ansibot "needs_info"
From @bmalynovytch on September 9, 2016 19:3 The bug concerns compressed dumps of any size: the module doesn't uncompress them anymore. !needs_info
From @ilyapoz on September 27, 2016 18:24 Any workarounds so far? Still reproducible on Ubuntu Trusty in a VirtualBox VM.
From @ansibot on September 27, 2016 18:43 @Jmainguy, ping. This issue is still waiting on your response.
From @ilyapoz on September 27, 2016 18:44 Sorry, a possible workaround for small DBs is not compressing the dump.
From @ansibot on December 9, 2016 19:50 This repository has been locked. All new issues and pull requests should be filed in https://github.com/ansible/ansible. Please read through the repomerge page in the dev guide. The guide contains links to tools which automatically move your issue or pull request to the ansible/ansible repo.
From @ulrith on December 20, 2016 11:35 I see the same behavior with Ansible 2.2.0.0 on Ubuntu 16.04.
From @sachavuk on January 4, 2017 17:38 Hi, I have the same issue on Debian 7.8 with Ansible 2.2.0.0 and the same error message 😢
Good evening. Like @ulrith and @sachavuk, I have this issue too, in my case on RHEL 7.3 with Ansible 2.2.0.0. The task in my play looks like:
During the playbook run I receive the following error message:
I don't know how to debug an Ansible module. If there is any useful information I could provide, please tell me how to gather it. Regards,
Hi there,
@Tronde I am still unable to reproduce this bug using the latest devel. Can you try to reproduce it with the latest devel? I followed the instructions above, using a compressed database larger than 3.5 MB when compressed.
@ansibot 'needs_info'
@Jmainguy I was able to reproduce the problem with the latest devel:
Running
results in:
Please tell me if you need further information.
I am testing against CentOS 7 with mariadb-server; are you testing against another DB? There is clearly some difference between our two environment setups, because I am unable to reproduce this. I just compiled and installed the latest devel again to be sure.
Is it possible bz2 is failing to uncompress because the disk is running out of space or something? Why is bzip2 failing in your environment (and those of the other testers reproducing this)? bzip2: I/O or other error, bailing out.
Hi, here is some information about my env. Controller:
Target Node:
The node should have enough disk space and RAM to extract the bz2 file:
I increased the memory to 2048 MB to be sure not to run into an out-of-memory issue, but I'm still getting the same error message.
I have no idea why I cannot reproduce this. Can you try manually running bunzip2 on that file and see if it throws the I/O error?
Using bunzip2 to extract the file locally on the target node works just fine, without any error. Unfortunately I have no idea how to help you reproduce this error. :-(
I am unable to reproduce this bug. That being said, supposing it does exist, it has to do with stdout ending before the compression tool thinks it should; my guess is running out of RAM / swap. I added this code in 1608163, which was reverted in aa79810. So decompressing to a file, importing, then compressing back up was nixed in favor of decompressing to stdout and importing from that stream. I imagine going back to disk will fix this, at the cost of speed and of disk space while the playbook runs (it will compress again after it imports). Thoughts?
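To make the streaming approach concrete, here is a minimal, runnable sketch of the decompress-to-stdout pipeline described above. This is an assumption-laden illustration, not the module's actual code: `wc -c` stands in for the mysql client so the sketch runs anywhere, and the dump content is synthetic.

```python
import bz2
import subprocess
import tempfile

# Build a synthetic bzip2 "dump" (illustrative content, not a real schema).
with tempfile.NamedTemporaryFile(suffix=".sql.bz2", delete=False) as f:
    f.write(bz2.compress(b"CREATE TABLE t (id INT);\n" * 1000))
    dump = f.name

# Decompress straight to stdout and feed the consumer from that pipe.
# If the consumer exits early, bzip2's writes start failing, which is
# one way to end up with "bzip2: I/O or other error, bailing out."
decomp = subprocess.Popen(["bzip2", "-dc", dump], stdout=subprocess.PIPE)
consumer = subprocess.Popen(["wc", "-c"], stdin=decomp.stdout,
                            stdout=subprocess.PIPE, text=True)
decomp.stdout.close()  # parent drops its copy of the read end
out, _ = consumer.communicate()
decomp.wait()
print(out.strip())  # 25000 (bytes of decompressed SQL)
```

The alternative discussed above (decompress to a temp file on disk, then import the plain file) avoids the pipe entirely, at the cost of extra disk usage while the play runs.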
Well, I gave it a try again today. Using two different MySQL dumps I encountered the same error. In both cases I had at least 120 MB of RAM left, which I thought should be enough. My understanding of this matter is not informed enough to give you any helpful thoughts on it, but if you go back to disk, I would be happy to give it another run.
Time-appropriate greetings, everyone.

Information about the test case: I've used a bzip2-compressed MySQL dump file and tried to import it on two different target systems.

Ansible Control Node: Red Hat Enterprise Linux Server release 7.7 (Maipo)

Target nodes:
Scenario 1: Successful deployment on Debian Buster. Playbook:
Test file
Playbook run
Everything is fine.

Scenario 2: Unsuccessful deployment on CentOS 7.7. Playbook and test file are the same as in scenario 1. Playbook run:
Running
In case you need more information, please tell me what to gather and how. Regards,
Hi @Tronde and others who can reproduce the problem: could you please create a symlink python -> python3 to force Ansible to use it, and then check again?
(symlink on the target machine)
No need to use a symlink; just define ansible_python_interpreter.
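For anyone following along, one way to pin the interpreter without touching the target machine is an inventory variable; `ansible_python_interpreter` is the real variable name, while the host name and path below are illustrative:

```yaml
# inventory.yml -- force Ansible to run its modules under Python 3
all:
  hosts:
    dbserver:                 # illustrative host name
      ansible_python_interpreter: /usr/bin/python3
```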
@bmalynovytch cool, thanks for the tip!
Hi, due to #67083 I'm not able to check with python3.
@Tronde maybe clone the VM and install python3 manually? If that's impossible, can anybody else try to reproduce the bug using python3, forced as @bmalynovytch described above?
@Andersson007 I won't install python3 manually, but I tried something else. As we know from my test scenario, the deployment on Debian Buster was successful. Debian 10 ships with both python2 and python3. If @Andersson007 is right and the issue occurs only with python2, I should be able to reproduce it after removing python3 from my Buster machine. I did so, but the deployment was successful again. So I would argue that extracting a compressed SQL dump file with python2 is possible in general. But I noticed that Buster comes with Python 2.7.16 while CentOS ships 2.7.5.
I can't reproduce this with the following dump file: a 14 MB bz2 (several GB uncompressed). Works:
Works:
Works:
But I know the solution; coming soon.
ansible-collections/community.general#151
Hi, just to note it here, too: I can confirm that the bug disappeared using the code from ansible-collections/community.general#151.
@Tronde, thanks! We're waiting for feedback from others until next week. If nobody's reasonably against the changes, we'll merge them.
close_me
Closed via ansible-collections/community.general#151.
Closing per above.
From @bmalynovytch on June 2, 2016 15:43
ISSUE TYPE
Bug Report (regression)
COMPONENT NAME
mysql_db (import)
ANSIBLE VERSION
CONFIGURATION
OS / ENVIRONMENT
Deployment OS: Mac OS X 10.11.5, python v2.7.11 with pyenv/virtualenv
Destination OS: ubuntu jessie/sid
SUMMARY
Using `mysql_db` to import gzipped or bzipped SQL dumps used to work like a charm with ansible 2.0.2.0. Now, compressed imports fail with a broken pipe error, whether the dump is `.gz` or `.bz2`. Strangely, this does not happen with a small (compressed) file (1.8k gzip compressed, 6k uncompressed).
Maybe related to https://blog.nelhage.com/2010/02/a-very-subtle-bug/
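The linked post describes a failure mode that would fit the symptoms here, though the thread never fully confirms it is the cause: CPython ignores SIGPIPE at interpreter startup, and an exec'd child inherits that ignored disposition, so a tool writing to a reader that has gone away gets EPIPE write errors (a "broken pipe" message) instead of dying silently. A small sketch of the mechanism, using `yes` as the writer (Linux/POSIX only):

```python
import signal
import subprocess

def run_writer(restore_sigpipe):
    """Start `yes` writing into a pipe, then close the read end."""
    preexec = None
    if restore_sigpipe:
        # The classic fix: give the child back the default disposition
        # so a closed pipe kills it quietly.
        preexec = lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL)
    p = subprocess.Popen(["yes"], stdout=subprocess.PIPE,
                         stderr=subprocess.DEVNULL, preexec_fn=preexec)
    p.stdout.close()  # simulate the reader disappearing
    return p.wait()

print(run_writer(False))  # nonzero exit: `yes` reports a write error
print(run_writer(True))   # -13: killed silently by SIGPIPE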
STEPS TO REPRODUCE
Try to import a compressed (large enough) SQL dump with mysql_db.
Failure happens with a 3.5 MB gzip-compressed / 20 MB uncompressed dump.
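A minimal reproduction task would look something like the following; `name`, `state`, and `target` are the module's actual parameters, while the database name and path are illustrative:

```yaml
- name: Import a compressed dump (fails on affected versions)
  mysql_db:
    name: testdb                 # illustrative database name
    state: import
    target: /tmp/dump.sql.gz     # any compressed dump over ~3.5 MB
```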
EXPECTED RESULTS
Import should just work.
ACTUAL RESULTS
Copied from original issue: ansible/ansible-modules-core#3835