[CI] Stage 3 transiently fails at ceph.osd.restart #1561

Open
smithfarm opened this issue Mar 6, 2019 · 0 comments

Description of Issue/Question

This failure just started happening on the tip of the DeepSea master branch. It does not appear to happen on 0.9.14, which would appear to implicate one of the following PRs:

So far the failure has happened twice in two runs. Each run deploys a Ceph cluster seven times, so the observed failure rate is 2 in 14, i.e. 1 in 7 (~14%).
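The observed rate works out as follows (simple arithmetic on the counts above):

```python
# 2 failures observed across 2 runs, each run deploying the cluster 7 times
failures = 2
deployments = 2 * 7
rate = failures / deployments
print(f"{rate:.0%}")  # prints "14%"
```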

2019-03-06T06:18:52.856 INFO:tasks.deepsea.orch:WWWW: Running DeepSea Stage 3 (deploy)
2019-03-06T06:18:52.857 INFO:teuthology.orchestra.run.target192168000065:Running: "sudo bash -c 'DEV_ENV=true timeout 60m deepsea --log-file=/var/log/salt/deepsea.log --log-level=debug salt-run state.orch ceph.stage.3 --simple-output'"
2019-03-06T06:18:53.491 INFO:teuthology.orchestra.run.target192168000065.stdout:Starting orchestration: ceph.stage.3
2019-03-06T06:19:27.982 INFO:teuthology.orchestra.run.target192168000065.stdout:Parsing orchestration ceph.stage.3 steps... done
2019-03-06T06:19:27.983 INFO:teuthology.orchestra.run.target192168000065.stdout:
2019-03-06T06:19:27.983 INFO:teuthology.orchestra.run.target192168000065.stdout:Stage initialization output:
2019-03-06T06:19:27.983 INFO:teuthology.orchestra.run.target192168000065.stdout:firewall                 : not installed
2019-03-06T06:19:27.984 INFO:teuthology.orchestra.run.target192168000065.stdout:apparmor                 : disabled
2019-03-06T06:19:27.984 INFO:teuthology.orchestra.run.target192168000065.stdout:DEV_ENV                  : True
2019-03-06T06:19:27.984 INFO:teuthology.orchestra.run.target192168000065.stdout:fsid                     : valid
2019-03-06T06:19:27.984 INFO:teuthology.orchestra.run.target192168000065.stdout:public_network           : valid
2019-03-06T06:19:27.984 INFO:teuthology.orchestra.run.target192168000065.stdout:cluster_network          : valid
2019-03-06T06:19:27.985 INFO:teuthology.orchestra.run.target192168000065.stdout:cluster_interface        : valid
2019-03-06T06:19:27.985 INFO:teuthology.orchestra.run.target192168000065.stdout:monitors                 : valid
2019-03-06T06:19:27.985 INFO:teuthology.orchestra.run.target192168000065.stdout:mgrs                     : valid
2019-03-06T06:19:27.985 INFO:teuthology.orchestra.run.target192168000065.stdout:storage                  : valid
2019-03-06T06:19:27.985 INFO:teuthology.orchestra.run.target192168000065.stdout:rgw                      : valid
2019-03-06T06:19:27.985 INFO:teuthology.orchestra.run.target192168000065.stdout:ganesha                  : valid
2019-03-06T06:19:27.986 INFO:teuthology.orchestra.run.target192168000065.stdout:master_role              : valid
2019-03-06T06:19:27.986 INFO:teuthology.orchestra.run.target192168000065.stdout:time_server              : valid
2019-03-06T06:19:27.986 INFO:teuthology.orchestra.run.target192168000065.stdout:fqdn                     : valid
2019-03-06T06:19:27.987 INFO:teuthology.orchestra.run.target192168000065.stdout:
2019-03-06T06:19:29.053 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner select.minions... ok
2019-03-06T06:19:29.937 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner select.minions... ok
2019-03-06T06:19:30.486 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner ready.check...
2019-03-06T06:19:30.486 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  cmd.run_all(/usr/sbin/iptables -S)...
2019-03-06T06:19:30.520 INFO:teuthology.orchestra.run.target192168000065.stdout:               in target192168000065.teuthology... fail
2019-03-06T06:19:30.542 INFO:teuthology.orchestra.run.target192168000065.stdout:               in target192168000061.teuthology... fail
2019-03-06T06:19:30.665 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  cmd.shell(/usr/sbin/aa-status --enabled 2>/dev/null; echo $?)...
2019-03-06T06:19:30.711 INFO:teuthology.orchestra.run.target192168000065.stdout:               in target192168000065.teuthology... ok
2019-03-06T06:19:30.732 INFO:teuthology.orchestra.run.target192168000065.stdout:               in target192168000061.teuthology... ok
2019-03-06T06:19:30.844 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  state.apply(ceph.apparmor.profiles, __kwarg__=True, test=true)...
2019-03-06T06:19:31.275 INFO:teuthology.orchestra.run.target192168000065.stdout:               in target192168000065.teuthology... ok
2019-03-06T06:19:31.352 INFO:teuthology.orchestra.run.target192168000065.stdout:               in target192168000061.teuthology... ok
2019-03-06T06:19:31.457 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner ready.check... ok
2019-03-06T06:19:34.755 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner changed.mon... ok
2019-03-06T06:19:35.367 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner changed.osd... ok
2019-03-06T06:19:35.968 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner changed.mgr... ok
2019-03-06T06:19:36.876 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner changed.config... ok
2019-03-06T06:19:37.556 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner changed.client... ok
2019-03-06T06:19:38.171 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner changed.global... ok
2019-03-06T06:19:38.839 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner changed.mds... ok
2019-03-06T06:19:39.288 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner changed.igw... ok
2019-03-06T06:19:39.934 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner select.minions... ok
2019-03-06T06:19:40.589 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner select.minions... ok
2019-03-06T06:19:41.276 INFO:teuthology.orchestra.run.target192168000065.stdout:[init] Executing runner select.minions... ok
2019-03-06T06:19:41.351 INFO:teuthology.orchestra.run.target192168000065.stdout:
2019-03-06T06:19:41.352 INFO:teuthology.orchestra.run.target192168000065.stdout:[1/29] Executing state ceph.time...
2019-03-06T06:19:42.640 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:19:49.695 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000061.teuthology: stop ntpd(ntpd) ok
2019-03-06T06:19:55.436 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000061.teuthology: install chrony ok
2019-03-06T06:19:55.811 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000061.teuthology: /etc/chrony.conf ok
2019-03-06T06:19:55.894 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000061.teuthology: start chronyd(chronyd) ok
2019-03-06T06:19:55.918 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000061.teuthology... ok
2019-03-06T06:19:56.102 INFO:teuthology.orchestra.run.target192168000065.stdout:[2/29] Executing state ceph.packages...
2019-03-06T06:20:34.166 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: ceph ok
2019-03-06T06:20:34.177 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:20:54.638 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000061.teuthology: ceph ok
2019-03-06T06:20:54.659 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000061.teuthology... ok
2019-03-06T06:20:54.813 INFO:teuthology.orchestra.run.target192168000065.stdout:[3/29] Executing state ceph.configuration.check...
2019-03-06T06:20:55.487 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:20:55.643 INFO:teuthology.orchestra.run.target192168000065.stdout:[4/29] Executing state ceph.configuration.create...
2019-03-06T06:20:56.979 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  select.minions(cluster=ceph, roles=mon, host=True)... ok
2019-03-06T06:20:57.405 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  select.public_addresses(cluster=ceph, roles=mon)...
2019-03-06T06:20:57.405 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  public.address()...
2019-03-06T06:20:57.469 INFO:teuthology.orchestra.run.target192168000065.stdout:               in target192168000065.teuthology... ok
2019-03-06T06:20:57.575 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  select.public_addresses(cluster=ceph, roles=mon)... ok
2019-03-06T06:20:58.770 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  select.from(pillar=rgw_configurations, role=rgw, attr=host, fqdn)... ok
2019-03-06T06:20:58.854 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: /srv/salt/ceph/configuration/cache/ceph.conf ok
2019-03-06T06:20:58.923 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:20:59.089 INFO:teuthology.orchestra.run.target192168000065.stdout:[5/29] Executing state ceph.configuration...
2019-03-06T06:20:59.581 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: /etc/ceph/ceph.conf ok
2019-03-06T06:20:59.599 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:20:59.618 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000061.teuthology: /etc/ceph/ceph.conf ok
2019-03-06T06:20:59.640 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000061.teuthology... ok
2019-03-06T06:20:59.820 INFO:teuthology.orchestra.run.target192168000065.stdout:[6/29] Executing state ceph.admin...
2019-03-06T06:21:00.284 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000061.teuthology... ok
2019-03-06T06:21:00.292 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:21:00.464 INFO:teuthology.orchestra.run.target192168000065.stdout:[7/29] Executing state ceph.mgr.keyring...
2019-03-06T06:21:00.896 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: /var/lib/ceph/mgr/ceph-target192168000065/keyring ok
2019-03-06T06:21:00.912 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:21:01.088 INFO:teuthology.orchestra.run.target192168000065.stdout:[8/29] Executing state ceph.mon...
2019-03-06T06:21:01.564 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: /var/lib/ceph/tmp/keyring.mon ok
2019-03-06T06:21:08.502 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait for mon(cephprocesses.wait) ok
2019-03-06T06:21:08.518 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:21:08.690 INFO:teuthology.orchestra.run.target192168000065.stdout:[9/29] Executing state ceph.mgr.auth...
2019-03-06T06:21:09.937 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  select.minions(cluster=ceph, roles=mgr, host=True)... ok
2019-03-06T06:21:10.266 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:21:10.443 INFO:teuthology.orchestra.run.target192168000065.stdout:[10/29] Executing state ceph.mgr...
2019-03-06T06:21:10.918 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: /var/lib/ceph/mgr/ceph-target192168000065/keyring ok
2019-03-06T06:21:17.418 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait for mgr(cephprocesses.wait) ok
2019-03-06T06:21:17.433 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:21:17.612 INFO:teuthology.orchestra.run.target192168000065.stdout:[11/29] Executing state ceph.osd.auth...
2019-03-06T06:21:18.910 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:21:19.079 INFO:teuthology.orchestra.run.target192168000065.stdout:[12/29] Executing state ceph.sysctl...
2019-03-06T06:21:19.507 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:21:19.669 INFO:teuthology.orchestra.run.target192168000065.stdout:[13/29] Executing state ceph.osd.keyring...
2019-03-06T06:21:20.113 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:21:20.594 INFO:teuthology.orchestra.run.target192168000065.stdout:[14/29] Executing runner disks.deploy...
2019-03-06T06:21:20.595 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  dg.deploy(__kwarg__=True, filter_args={'target': '*', 'data_devices': {'all': True}}, dry_run=False)...
2019-03-06T06:21:26.936 INFO:teuthology.orchestra.run.target192168000065.stdout:               in target192168000061.teuthology... ok
2019-03-06T06:21:55.751 INFO:teuthology.orchestra.run.target192168000065.stdout:               in target192168000065.teuthology... ok
2019-03-06T06:21:55.870 INFO:teuthology.orchestra.run.target192168000065.stdout:[14/29] Executing runner disks.deploy... ok
2019-03-06T06:21:55.934 INFO:teuthology.orchestra.run.target192168000065.stdout:[15/29] Executing state ceph.tuned.mgr...
2019-03-06T06:21:56.861 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:21:57.026 INFO:teuthology.orchestra.run.target192168000065.stdout:[16/29] Executing state ceph.tuned.mon...
2019-03-06T06:21:57.580 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:21:57.742 INFO:teuthology.orchestra.run.target192168000065.stdout:[17/29] Executing state ceph.tuned.osd...
2019-03-06T06:21:58.290 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:21:58.450 INFO:teuthology.orchestra.run.target192168000065.stdout:[18/29] Executing state ceph.pool...
2019-03-06T06:22:04.870 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait(wait.out) ok
2019-03-06T06:22:04.883 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:22:05.044 INFO:teuthology.orchestra.run.target192168000065.stdout:[19/29] Executing state ceph.wait...
2019-03-06T06:22:11.454 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait(wait.out) ok
2019-03-06T06:22:11.469 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:22:11.636 INFO:teuthology.orchestra.run.target192168000065.stdout:[20/29] Executing state ceph.processes.mon...
2019-03-06T06:22:12.051 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait for mon processes(cephprocesses.wait) ok
2019-03-06T06:22:12.074 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:22:12.241 INFO:teuthology.orchestra.run.target192168000065.stdout:[21/29] Executing state ceph.mon.restart...
2019-03-06T06:22:12.834 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: restart(systemctl restart ceph-mon@target192168000065.service) ok
2019-03-06T06:22:12.887 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:22:13.048 INFO:teuthology.orchestra.run.target192168000065.stdout:[22/29] Executing state ceph.wait...
2019-03-06T06:22:19.535 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait(wait.out) ok
2019-03-06T06:22:19.562 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:22:19.708 INFO:teuthology.orchestra.run.target192168000065.stdout:[23/29] Executing state ceph.processes.mgr...
2019-03-06T06:22:20.444 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait for mgr processes(cephprocesses.wait) ok
2019-03-06T06:22:20.460 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:22:20.622 INFO:teuthology.orchestra.run.target192168000065.stdout:[24/29] Executing state ceph.mgr.restart...
2019-03-06T06:22:22.671 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: restart(systemctl restart ceph-mgr@target192168000065.service) ok
2019-03-06T06:22:22.707 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:22:22.866 INFO:teuthology.orchestra.run.target192168000065.stdout:[25/29] Executing state ceph.wait...
2019-03-06T06:22:29.419 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait(wait.out) ok
2019-03-06T06:22:29.431 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:22:29.604 INFO:teuthology.orchestra.run.target192168000065.stdout:[26/29] Executing state ceph.processes.osd...
2019-03-06T06:22:29.996 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait for osd processes(cephprocesses.wait) ok
2019-03-06T06:22:30.009 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... ok
2019-03-06T06:22:30.198 INFO:teuthology.orchestra.run.target192168000065.stdout:[27/29] Executing state ceph.osd.restart...
2019-03-06T06:22:32.231 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: restart osd 0(systemctl restart ceph-osd@0.service) ok
2019-03-06T06:22:32.293 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait on processes after processing osd.0(cephprocesses.wait) ok
2019-03-06T06:22:33.400 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: restart osd 2(systemctl restart ceph-osd@2.service) ok
2019-03-06T06:22:33.441 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait on processes after processing osd.2(cephprocesses.wait) ok
2019-03-06T06:22:34.226 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: restart osd 3(systemctl restart ceph-osd@3.service) ok
2019-03-06T06:24:49.679 INFO:teuthology.orchestra.run.target192168000065.stdout:         |_  target192168000065.teuthology: wait on processes after processing osd.3(cephprocesses.wait) fail
2019-03-06T06:24:49.699 INFO:teuthology.orchestra.run.target192168000065.stdout:         target192168000065.teuthology... fail
2019-03-06T06:24:49.715 INFO:teuthology.orchestra.run.target192168000065.stdout:Finished stage ceph.stage.3: succeeded=26/29 failed=1/29
2019-03-06T06:24:49.716 INFO:teuthology.orchestra.run.target192168000065.stdout:
2019-03-06T06:24:49.716 INFO:teuthology.orchestra.run.target192168000065.stdout:Failures summary:
2019-03-06T06:24:49.716 INFO:teuthology.orchestra.run.target192168000065.stdout:ceph.osd.restart (/srv/salt/ceph/osd/restart):
2019-03-06T06:24:49.716 INFO:teuthology.orchestra.run.target192168000065.stdout:  target192168000065.teuthology:
2019-03-06T06:24:49.717 INFO:teuthology.orchestra.run.target192168000065.stdout:    wait on processes after processing osd.3: Module function cephprocesses.wait executed

This is how the failure looks in non-CLI mode:

2019-03-06T10:58:17.006 INFO:tasks.deepsea.orch:WWWW: Running DeepSea Stage 3 (deploy)
2019-03-06T10:58:17.009 INFO:teuthology.orchestra.run.target192168000051:Running: "sudo bash -c 'DEV_ENV=true timeout 60m salt-run --no-color state.orch ceph.stage.3 2>/dev/null'"
2019-03-06T11:03:26.303 INFO:teuthology.orchestra.run.target192168000051.stdout:firewall                 : not installed
2019-03-06T11:03:26.304 INFO:teuthology.orchestra.run.target192168000051.stdout:apparmor                 : disabled
2019-03-06T11:03:26.304 INFO:teuthology.orchestra.run.target192168000051.stdout:DEV_ENV                  : True
2019-03-06T11:03:26.304 INFO:teuthology.orchestra.run.target192168000051.stdout:fsid                     : valid
2019-03-06T11:03:26.305 INFO:teuthology.orchestra.run.target192168000051.stdout:public_network           : valid
2019-03-06T11:03:26.305 INFO:teuthology.orchestra.run.target192168000051.stdout:cluster_network          : valid
2019-03-06T11:03:26.305 INFO:teuthology.orchestra.run.target192168000051.stdout:cluster_interface        : valid
2019-03-06T11:03:26.305 INFO:teuthology.orchestra.run.target192168000051.stdout:monitors                 : valid
2019-03-06T11:03:26.305 INFO:teuthology.orchestra.run.target192168000051.stdout:mgrs                     : valid
2019-03-06T11:03:26.306 INFO:teuthology.orchestra.run.target192168000051.stdout:storage                  : valid
2019-03-06T11:03:26.306 INFO:teuthology.orchestra.run.target192168000051.stdout:rgw                      : valid
2019-03-06T11:03:26.306 INFO:teuthology.orchestra.run.target192168000051.stdout:ganesha                  : valid
2019-03-06T11:03:26.306 INFO:teuthology.orchestra.run.target192168000051.stdout:master_role              : valid
2019-03-06T11:03:26.306 INFO:teuthology.orchestra.run.target192168000051.stdout:time_server              : valid
2019-03-06T11:03:26.307 INFO:teuthology.orchestra.run.target192168000051.stdout:fqdn                     : valid
2019-03-06T11:03:26.307 INFO:teuthology.orchestra.run.target192168000051.stdout:No minions matched the target. No command was sent, no jid was assigned.
2019-03-06T11:03:26.307 INFO:teuthology.orchestra.run.target192168000051.stdout:Found DriveGroup <default>
2019-03-06T11:03:26.307 INFO:teuthology.orchestra.run.target192168000051.stdout:Calling dg.deploy on compound target *
2019-03-06T11:03:26.307 INFO:teuthology.orchestra.run.target192168000051.stdout:target192168000051.teuthology_master:
2019-03-06T11:03:26.308 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: time - Function: salt.state - Result: Clean Started: - 10:58:33.505188 Duration: 541.668 ms
2019-03-06T11:03:26.308 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: packages - Function: salt.state - Result: Changed Started: - 10:58:34.047109 Duration: 41293.567 ms
2019-03-06T11:03:26.308 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: configuration check - Function: salt.state - Result: Clean Started: - 10:59:15.340935 Duration: 871.013 ms
2019-03-06T11:03:26.308 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: create ceph.conf - Function: salt.state - Result: Changed Started: - 10:59:16.212231 Duration: 2597.242 ms
2019-03-06T11:03:26.308 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: configuration - Function: salt.state - Result: Changed Started: - 10:59:18.809738 Duration: 617.099 ms
2019-03-06T11:03:26.309 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: admin - Function: salt.state - Result: Changed Started: - 10:59:19.427132 Duration: 551.593 ms
2019-03-06T11:03:26.309 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: mgr keyrings - Function: salt.state - Result: Changed Started: - 10:59:19.979009 Duration: 641.212 ms
2019-03-06T11:03:26.309 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: monitors - Function: salt.state - Result: Changed Started: - 10:59:20.620530 Duration: 7308.466 ms
2019-03-06T11:03:26.309 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: mgr auth - Function: salt.state - Result: Changed Started: - 10:59:27.929278 Duration: 1652.679 ms
2019-03-06T11:03:26.309 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: mgrs - Function: salt.state - Result: Changed Started: - 10:59:29.582245 Duration: 22183.833 ms
2019-03-06T11:03:26.310 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: osd auth - Function: salt.state - Result: Changed Started: - 10:59:51.766439 Duration: 1512.711 ms
2019-03-06T11:03:26.310 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: sysctl - Function: salt.state - Result: Changed Started: - 10:59:53.279589 Duration: 626.576 ms
2019-03-06T11:03:26.310 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: set osd keyrings - Function: salt.state - Result: Changed Started: - 10:59:53.906418 Duration: 551.148 ms
2019-03-06T11:03:26.310 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: disks.deploy - Function: salt.runner - Result: Changed Started: - 10:59:54.457855 Duration: 35944.091 ms
2019-03-06T11:03:26.310 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: mgr tuned - Function: salt.state - Result: Clean Started: - 11:00:30.402231 Duration: 1035.677 ms
2019-03-06T11:03:26.311 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: mon tuned - Function: salt.state - Result: Clean Started: - 11:00:31.438411 Duration: 672.354 ms
2019-03-06T11:03:26.311 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: osd tuned - Function: salt.state - Result: Clean Started: - 11:00:32.111038 Duration: 688.88 ms
2019-03-06T11:03:26.311 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: pools - Function: salt.state - Result: Changed Started: - 11:00:32.800242 Duration: 6618.263 ms
2019-03-06T11:03:26.311 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: wait until target192168000051.teuthology with role mon can be restarted - Function: salt.state - Result: Changed Started: - 11:00:39.418933 Duration: 7963.187 ms
2019-03-06T11:03:26.311 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: check if mon processes are still running on target192168000051.teuthology after restarting mons - Function: salt.state - Result: Changed Started: - 11:00:47.382473 Duration: 574.599 ms
2019-03-06T11:03:26.312 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: restarting mons on target192168000051.teuthology - Function: salt.state - Result: Changed Started: - 11:00:47.957344 Duration: 860.99 ms
2019-03-06T11:03:26.312 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: wait until target192168000051.teuthology with role mgr can be restarted - Function: salt.state - Result: Changed Started: - 11:00:48.818702 Duration: 6659.467 ms
2019-03-06T11:03:26.312 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: check if mgr processes are still running on target192168000051.teuthology after restarting mgrs - Function: salt.state - Result: Changed Started: - 11:00:55.478464 Duration: 870.862 ms
2019-03-06T11:03:26.312 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: restarting mgr on target192168000051.teuthology - Function: salt.state - Result: Changed Started: - 11:00:56.349593 Duration: 2456.745 ms
2019-03-06T11:03:26.312 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: wait until target192168000051.teuthology with role osd can be restarted - Function: salt.state - Result: Changed Started: - 11:00:58.806668 Duration: 6671.687 ms
2019-03-06T11:03:26.313 INFO:teuthology.orchestra.run.target192168000051.stdout:  Name: check if osd processes are still running on target192168000051.teuthology after restarting osds - Function: salt.state - Result: Changed Started: - 11:01:05.478716 Duration: 590.387 ms
2019-03-06T11:03:26.313 INFO:teuthology.orchestra.run.target192168000051.stdout:----------
2019-03-06T11:03:26.313 INFO:teuthology.orchestra.run.target192168000051.stdout:          ID: restarting osds on target192168000051.teuthology
2019-03-06T11:03:26.313 INFO:teuthology.orchestra.run.target192168000051.stdout:    Function: salt.state
2019-03-06T11:03:26.313 INFO:teuthology.orchestra.run.target192168000051.stdout:      Result: False
2019-03-06T11:03:26.313 INFO:teuthology.orchestra.run.target192168000051.stdout:     Comment: Run failed on minions: target192168000051.teuthology
2019-03-06T11:03:26.314 INFO:teuthology.orchestra.run.target192168000051.stdout:     Started: 11:01:06.069377
2019-03-06T11:03:26.314 INFO:teuthology.orchestra.run.target192168000051.stdout:    Duration: 140108.226 ms
2019-03-06T11:03:26.314 INFO:teuthology.orchestra.run.target192168000051.stdout:     Changes:
2019-03-06T11:03:26.314 INFO:teuthology.orchestra.run.target192168000051.stdout:              target192168000051.teuthology:
2019-03-06T11:03:26.314 INFO:teuthology.orchestra.run.target192168000051.stdout:                Name: systemctl restart ceph-osd@1.service - Function: cmd.run - Result: Changed Started: - 11:01:06.555931 Duration: 714.474 ms
2019-03-06T11:03:26.315 INFO:teuthology.orchestra.run.target192168000051.stdout:                Name: cephprocesses.wait - Function: module.run - Result: Changed Started: - 11:01:07.290399 Duration: 61.814 ms
2019-03-06T11:03:26.315 INFO:teuthology.orchestra.run.target192168000051.stdout:                Name: systemctl restart ceph-osd@3.service - Function: cmd.run - Result: Changed Started: - 11:01:07.365337 Duration: 1109.883 ms
2019-03-06T11:03:26.316 INFO:teuthology.orchestra.run.target192168000051.stdout:                Name: cephprocesses.wait - Function: module.run - Result: Changed Started: - 11:01:08.499616 Duration: 39.652 ms
2019-03-06T11:03:26.316 INFO:teuthology.orchestra.run.target192168000051.stdout:                Name: systemctl restart ceph-osd@2.service - Function: cmd.run - Result: Changed Started: - 11:01:08.554496 Duration: 900.646 ms
2019-03-06T11:03:26.316 INFO:teuthology.orchestra.run.target192168000051.stdout:                Name: cephprocesses.wait - Function: module.run - Result: Changed Started: - 11:01:09.500211 Duration: 35.091 ms
2019-03-06T11:03:26.316 INFO:teuthology.orchestra.run.target192168000051.stdout:                Name: systemctl restart ceph-osd@0.service - Function: cmd.run - Result: Changed Started: - 11:01:09.549736 Duration: 1079.509 ms
2019-03-06T11:03:26.316 INFO:teuthology.orchestra.run.target192168000051.stdout:              ----------
2019-03-06T11:03:26.316 INFO:teuthology.orchestra.run.target192168000051.stdout:                        ID: wait on processes after processing osd.0
2019-03-06T11:03:26.317 INFO:teuthology.orchestra.run.target192168000051.stdout:                  Function: module.run
2019-03-06T11:03:26.317 INFO:teuthology.orchestra.run.target192168000051.stdout:                      Name: cephprocesses.wait
2019-03-06T11:03:26.317 INFO:teuthology.orchestra.run.target192168000051.stdout:                    Result: False
2019-03-06T11:03:26.317 INFO:teuthology.orchestra.run.target192168000051.stdout:                   Comment: Module function cephprocesses.wait executed
2019-03-06T11:03:26.317 INFO:teuthology.orchestra.run.target192168000051.stdout:                   Started: 11:01:10.688741
2019-03-06T11:03:26.317 INFO:teuthology.orchestra.run.target192168000051.stdout:                  Duration: 135455.803 ms
2019-03-06T11:03:26.317 INFO:teuthology.orchestra.run.target192168000051.stdout:                   Changes:
2019-03-06T11:03:26.318 INFO:teuthology.orchestra.run.target192168000051.stdout:                            ----------
2019-03-06T11:03:26.318 INFO:teuthology.orchestra.run.target192168000051.stdout:                            ret:
2019-03-06T11:03:26.318 INFO:teuthology.orchestra.run.target192168000051.stdout:                                False
2019-03-06T11:03:26.318 INFO:teuthology.orchestra.run.target192168000051.stdout:
2019-03-06T11:03:26.318 INFO:teuthology.orchestra.run.target192168000051.stdout:              Summary for target192168000051.teuthology
2019-03-06T11:03:26.318 INFO:teuthology.orchestra.run.target192168000051.stdout:              ------------
2019-03-06T11:03:26.318 INFO:teuthology.orchestra.run.target192168000051.stdout:              Succeeded: 7 (changed=8)
2019-03-06T11:03:26.318 INFO:teuthology.orchestra.run.target192168000051.stdout:              Failed:    1
2019-03-06T11:03:26.319 INFO:teuthology.orchestra.run.target192168000051.stdout:              ------------
2019-03-06T11:03:26.319 INFO:teuthology.orchestra.run.target192168000051.stdout:              Total states run:     8
2019-03-06T11:03:26.319 INFO:teuthology.orchestra.run.target192168000051.stdout:              Total run time: 139.397 s
2019-03-06T11:03:26.319 INFO:teuthology.orchestra.run.target192168000051.stdout:
2019-03-06T11:03:26.319 INFO:teuthology.orchestra.run.target192168000051.stdout:Summary for target192168000051.teuthology_master
2019-03-06T11:03:26.319 INFO:teuthology.orchestra.run.target192168000051.stdout:-------------
2019-03-06T11:03:26.319 INFO:teuthology.orchestra.run.target192168000051.stdout:Succeeded: 26 (changed=22)
2019-03-06T11:03:26.320 INFO:teuthology.orchestra.run.target192168000051.stdout:Failed:     1
2019-03-06T11:03:26.320 INFO:teuthology.orchestra.run.target192168000051.stdout:-------------
2019-03-06T11:03:26.320 INFO:teuthology.orchestra.run.target192168000051.stdout:Total states run:     27
2019-03-06T11:03:26.320 INFO:teuthology.orchestra.run.target192168000051.stdout:Total run time:  292.664 s
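In both logs the failing step is the cephprocesses.wait check that runs after each OSD restart: the restart command itself succeeds, but the check polls for roughly 135 seconds without ever seeing the daemon back up, and then returns False. Conceptually it is a poll-until-timeout loop; a minimal illustrative sketch (hypothetical helper, not DeepSea's actual module):

```python
import time
from typing import Callable

def wait_for(check: Callable[[], bool], timeout: float = 135.0,
             interval: float = 0.5) -> bool:
    """Poll check() until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            # timed out: the expected process never came back up
            return False
        time.sleep(interval)
```

In this failure the equivalent check (something like "is ceph-osd@3 running again?") presumably returned false for the whole window, so the state reported Result: False even though the systemctl restart returned success.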