
Deployment is failing on step1 #570

Open
rohitnayal opened this issue Aug 2, 2023 · 1 comment

@rohitnayal

Expected Behavior

step1 completes without errors.

Actual Behavior

Step 1 deployment fails during the preflight check with port conflicts on port 111 (full deploy.yml and output below).

[root@localhost ECS-CommunityEdition]# cat deploy.yml
# deploy.yml reference implementation v2.8.0

# [Optional]
# By changing the license_accepted boolean value to "true" you are
# declaring your agreement to the terms of the license agreement
# contained in the license.txt file included with this software
# distribution.
licensing:
  license_accepted: false

#autonames:
#  custom:
#    - ecs01
#    - ecs02
#    - ecs03
#    - ecs04
#    - ecs05
#    - ecs06

# [Required]
# Deployment facts reference
facts:
  # [Required]
  # Node IP or resolvable hostname from which installations will be launched
  # The only supported configuration is to install from the same node as the
  # bootstrap.sh script is run.
  # NOTE: if the install node is to be migrated into an island environment,
  #       the hostname or IP address listed here should be the one in the
  #       island environment.
  install_node: 192.168.33.40

  # [Required]
  # IPs of machines that will be whitelisted in the firewall and allowed
  # to access management ports of all nodes. If this is set to the
  # wildcard (0.0.0.0/0) then anyone can access management ports.
  management_clients:
    - 0.0.0.0/0

  # [Required]
  # These credentials must be the same across all nodes. Ansible uses these credentials to
  # gain initial access to each node in the deployment and set up ssh public key authentication.
  # If these are not correct, the deployment will fail.
  ssh_defaults:
    # [Required]
    # Username to use when logging in to nodes
    ssh_username: admin
    # [Required]
    # Password to use with SSH login
    # *** Set to same value as ssh_username to enable SSH public key authentication ***
    ssh_password: ChangeMe
    # [Required when enabling SSH public key authentication]
    # Password to give to sudo when gaining root access.
    ansible_become_pass: ChangeMe
    # [Required]
    # Select the type of crypto to use when dealing with ssh public key
    # authentication. Valid values here are:
    #   - "rsa" (Default)
    #   - "ed25519"
    ssh_crypto: rsa

  # [Required]
  # Environment configuration for this deployment.
  node_defaults:
    dns_domain: local
    dns_servers:
      - 192.168.33.1
    ntp_servers:
      - 192.168.33.1
    #
    # [Optional]
    # VFS path to source of randomness
    # Defaults to /dev/urandom for speed considerations. If you prefer /dev/random, put that here.
    # If you have a /dev/srandom implementation or special entropy hardware, you may use that too
    # so long as it implements a /dev/random type device.
    entropy_source: /dev/urandom
    #
    # [Optional]
    # Picklist for node names.
    # Available options:
    #   - "moons" (ECS CE default)
    #   - "cities" (ECS SKU-flavored)
    #   - "custom" (uncomment and use the top-level autonames block to define these)
    # autonaming: custom
    #
    # [Optional]
    # If your ECS comes with differing default credentials, you can specify those here
    # ecs_root_user: root
    # ecs_root_pass: ChangeMe

  # [Optional]
  # Storage pool defaults. Configure to your liking.
  # All block devices that will be consumed by ECS on ALL nodes must be listed under the
  # ecs_block_devices option. This can be overridden by the storage pool configuration.
  # At least ONE (1) block device is REQUIRED for a successful install. More is better.
  storage_pool_defaults:
    is_cold_storage_enabled: false
    is_protected: false
    description: Default storage pool description
    ecs_block_devices:
      - /dev/sdb

  # [Required]
  # Storage pool layout. You MUST have at least ONE (1) storage pool for a successful install.
  storage_pools:
    - name: sp1
      members:
        - 192.168.33.40
      options:
        is_protected: false
        is_cold_storage_enabled: false
        description: My First SP
        ecs_block_devices:
          - /dev/sdb

  # [Optional]
  # VDC defaults. Configure to your liking.
  virtual_data_center_defaults:
    description: Default virtual data center description

  # [Required]
  # Virtual data center layout. You MUST have at least ONE (1) VDC for a successful install.
  # Multi-VDC deployments are not yet implemented
  virtual_data_centers:
    - name: vdc1
      members:
        - sp1
      options:
        description: My First VDC

  # [Optional]
  # Replication group defaults. Configure to your liking.
  replication_group_defaults:
    description: Default replication group description
    enable_rebalancing: true
    allow_all_namespaces: true
    is_full_rep: false

  # [Optional, required for namespaces]
  # Replication group layout. You MUST have at least ONE (1) RG to provision namespaces.
  replication_groups:
    - name: rg1
      members:
        - vdc1
      options:
        description: My First RG
        enable_rebalancing: true
        allow_all_namespaces: true
        is_full_rep: false

  # [Optional]
  # Management User defaults
  management_user_defaults:
    is_system_admin: false
    is_system_monitor: false

  # [Optional]
  # Management Users
  management_users:
    - username: admin1
      password: ChangeMe
      options:
        is_system_admin: true
    - username: monitor1
      password: ChangeMe
      options:
        is_system_monitor: true

  # [Optional]
  # Namespace defaults
  namespace_defaults:
    is_stale_allowed: false
    is_compliance_enabled: false

  # [Optional]
  # Namespace layout
  namespaces:
    - name: ns1
      replication_group: rg1
      administrators:
        - root
      options:
        is_stale_allowed: false
        is_compliance_enabled: false

  # [Optional]
  # Object User defaults
  object_user_defaults:
    # Comma-separated list of Swift authorization groups
    swift_groups_list:
      - users
    # Lifetime of S3 secret key in minutes
    s3_expiry_time: 2592000

  # [Optional]
  # Object Users
  object_users:
    - username: object_admin1
      namespace: ns1
      options:
        swift_password: ChangeMe
        swift_groups_list:
          - admin
          - users
        s3_secret_key: ChangeMeChangeMeChangeMeChangeMeChangeMe
        s3_expiry_time: 2592000
    - username: object_user1
      namespace: ns1
      options:
        swift_password: ChangeMe
        s3_secret_key: ChangeMeChangeMeChangeMeChangeMeChangeMe

  # [Optional]
  # Bucket defaults
  bucket_defaults:
    namespace: ns1
    replication_group: rg1
    head_type: s3
    filesystem_enabled: False
    stale_allowed: False
    encryption_enabled: False
    owner: object_admin1

  # [Optional]
  # Bucket layout (optional)
  buckets:
    - name: bucket1
      options:
        namespace: ns1
        replication_group: rg1
        owner: object_admin1
        head_type: s3
        filesystem_enabled: False
        stale_allowed: False
        encryption_enabled: False
[root@localhost ECS-CommunityEdition]#

==================================

[root@localhost ECS-CommunityEdition]# step1

PLAY [Common | Ping data nodes before doing anything else] **********************************************************************************************************************************

TASK [ping] *********************************************************************************************************************************************************************************
ok: [192.168.33.40]

PLAY [Installer | Gather facts and slice into OS groups] ************************************************************************************************************************************

TASK [group_by] *****************************************************************************************************************************************************************************
ok: [192.168.33.40]

PLAY [CentOS 7 | Configure access] **********************************************************************************************************************************************************

TASK [CentOS_7_configure_ssh : CentOS 7 | Distribute ed25519 ssh key] ***********************************************************************************************************************

TASK [CentOS_7_configure_ssh : CentOS 7 | Distribute rsa ssh key] ***************************************************************************************************************************

TASK [CentOS_7_configure_ssh : CentOS 7 | Disable SSH UseDNS] *******************************************************************************************************************************
ok: [192.168.33.40]

TASK [CentOS_7_configure_ssh : CentOS 7 | Disable requiretty] *******************************************************************************************************************************
ok: [192.168.33.40]

TASK [CentOS_7_configure_ssh : CentOS 7 | Disable sudo password reverification for admin group] *********************************************************************************************
ok: [192.168.33.40]

TASK [CentOS_7_configure_ssh : CentOS 7 | Disable sudo password reverification for wheel group] *********************************************************************************************
ok: [192.168.33.40]

TASK [firewalld_configure_access : Firewalld | Ensure service is started] *******************************************************************************************************************
changed: [192.168.33.40]

TASK [firewalld_configure_access : Firewalld | Add install node to firewalld trusted zone] **************************************************************************************************
ok: [192.168.33.40]

TASK [firewalld_configure_access : Firewalld | Add all data nodes to firewalld trusted zone] ************************************************************************************************
ok: [192.168.33.40] => (item=10.0.2.15)
ok: [192.168.33.40] => (item=192.168.33.40)
ok: [192.168.33.40] => (item=172.17.0.1)

TASK [firewalld_configure_access : Firewalld | Whitelist management prefixes] ***************************************************************************************************************
ok: [192.168.33.40] => (item=0.0.0.0/0)

TASK [firewalld_configure_access : Firewalld | Add all public service ports to firewalld public zone] ***************************************************************************************
ok: [192.168.33.40] => (item=3218/tcp)
ok: [192.168.33.40] => (item=9020-9025/tcp)
ok: [192.168.33.40] => (item=9040/tcp)

TASK [firewalld_configure_access : Firewalld | Ensure service is started] *******************************************************************************************************************
changed: [192.168.33.40]

PLAY [Common | Configure hostnames] *********************************************************************************************************************************************************

TASK [common_set_hostname : include_vars] ***************************************************************************************************************************************************
ok: [192.168.33.40]

TASK [common_set_hostname : Common | Find node hostname] ************************************************************************************************************************************
ok: [192.168.33.40] => (item=(0, u'192.168.33.40'))

TASK [common_set_hostname : Common | Set node hostname] *************************************************************************************************************************************
ok: [192.168.33.40]

PLAY [Common | Configure /etc/hosts] ********************************************************************************************************************************************************

TASK [common_etc_hosts : Common | Add install node to /etc/hosts] ***************************************************************************************************************************
ok: [192.168.33.40] => (item=192.168.33.40)

TASK [common_etc_hosts : Common | Add data nodes to /etc/hosts] *****************************************************************************************************************************
ok: [192.168.33.40] => (item=192.168.33.40)

PLAY [Common | Test inter-node access] ******************************************************************************************************************************************************

TASK [common_access_test : Common | Check node connectivity by IP] **************************************************************************************************************************
ok: [192.168.33.40] => (item=10.0.2.15)
ok: [192.168.33.40] => (item=192.168.33.40)
ok: [192.168.33.40] => (item=172.17.0.1)

TASK [common_access_test : Common | Check node connectivity by short name] ******************************************************************************************************************
ok: [192.168.33.40] => (item=luna)

TASK [common_access_test : Common | Check node connectivity by fqdn] ************************************************************************************************************************
ok: [192.168.33.40] => (item=luna)

PLAY RECAP **********************************************************************************************************************************************************************************
192.168.33.40 : ok=20 changed=2 unreachable=0 failed=0

Playbook run took 0 days, 0 hours, 0 minutes, 13 seconds

PLAY [Common | Ping data nodes before doing anything else] **********************************************************************************************************************************

TASK [ping] *********************************************************************************************************************************************************************************
ok: [192.168.33.40]

PLAY [Installer | Slice nodes into OS groups] ***********************************************************************************************************************************************

TASK [group_by] *****************************************************************************************************************************************************************************
ok: [192.168.33.40]

PLAY [Installer | Perform preflight check] **************************************************************************************************************************************************

TASK [common_collect_facts : Common | Create custom facts directory] ************************************************************************************************************************
ok: [192.168.33.40]

TASK [common_collect_facts : Common | Insert data_node.fact file] ***************************************************************************************************************************
ok: [192.168.33.40]

TASK [common_collect_facts : Common | Reload facts to pick up new items] ********************************************************************************************************************
ok: [192.168.33.40]

TASK [common_baseline_check : include_vars] *************************************************************************************************************************************************
ok: [192.168.33.40]

TASK [common_baseline_check : Common | Check RAM size] **************************************************************************************************************************************

TASK [common_baseline_check : Common | Check CPU architecture] ******************************************************************************************************************************

TASK [common_baseline_check : Common | Validate OS distribution] ****************************************************************************************************************************

TASK [common_baseline_check : Common | (Optional) Check UTC Timezone] ***********************************************************************************************************************

TASK [common_baseline_check : Common | Make sure /data directory does not exist] ************************************************************************************************************
ok: [192.168.33.40]

TASK [common_baseline_check : fail] *********************************************************************************************************************************************************

TASK [common_baseline_check : Common | Make sure /host directory does not exist] ************************************************************************************************************
ok: [192.168.33.40]

TASK [common_baseline_check : fail] *********************************************************************************************************************************************************

TASK [common_baseline_check : Common | Make sure block device(s) exist on node] *************************************************************************************************************
ok: [192.168.33.40] => (item=/dev/sdb)

TASK [common_baseline_check : fail] *********************************************************************************************************************************************************

TASK [common_baseline_check : Common | Make sure block device(s) are at least 100GB] ********************************************************************************************************

TASK [common_baseline_check : Common | Make sure block device(s) are unpartitioned] *********************************************************************************************************
ok: [192.168.33.40] => (item=/dev/sdb)

TASK [common_baseline_check : fail] *********************************************************************************************************************************************************

TASK [common_baseline_check : Common | Check for listening layer 4 ports] *******************************************************************************************************************
changed: [192.168.33.40]

TASK [common_baseline_check : Common | Report any conflicts with published ECS ports] *******************************************************************************************************
failed: [192.168.33.40] (item=[111, u'port 111/udp is listening on 0.0.0.0']) => {"changed": false, "failed": true, "item": [111, "port 111/udp is listening on 0.0.0.0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}
failed: [192.168.33.40] (item=[111, u'port 111/tcp is listening on 0.0.0.0']) => {"changed": false, "failed": true, "item": [111, "port 111/tcp is listening on 0.0.0.0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}
failed: [192.168.33.40] (item=[111, u'port 111/udp6 is listening on ::0']) => {"changed": false, "failed": true, "item": [111, "port 111/udp6 is listening on ::0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}
failed: [192.168.33.40] (item=[111, u'port 111/tcp6 is listening on ::0']) => {"changed": false, "failed": true, "item": [111, "port 111/tcp6 is listening on ::0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}

PLAY RECAP **********************************************************************************************************************************************************************************
192.168.33.40 : ok=11 changed=1 unreachable=0 failed=1

Playbook run took 0 days, 0 hours, 0 minutes, 7 seconds
Operation failed.

[root@localhost ECS-CommunityEdition]#



@nikhil-vr
Collaborator

Disable the NFS server if you are running one, and check what is listening on UDP port 111.
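On CentOS 7 the listener on port 111 is almost always rpcbind, the ONC-RPC portmapper that NFS depends on; the preflight check fails because ECS must be the sole NFS provider on the node. A quick way to confirm and clear the conflict (a sketch assuming a systemd-based CentOS 7 node; service names may differ on other distributions):

```shell
# Show which process is bound to port 111 (TCP and UDP)
ss -tulpn | grep ':111' || true

# List registered RPC services, if rpcbind is running
rpcinfo -p 2>/dev/null || true

# If rpcbind/NFS is not needed on this node, stop and disable the
# conflicting units, then re-run step1:
#   systemctl disable --now rpcbind.socket rpcbind.service nfs-server
```

The destructive commands are left commented out on purpose; only run them once you have confirmed nothing else on the node depends on rpcbind or NFS.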
