From 356ee3b25708d37da054e0749e10cfa2faea7ce9 Mon Sep 17 00:00:00 2001 From: Travis Wichert Date: Fri, 17 Nov 2017 13:53:47 -0500 Subject: [PATCH] release-2.5.2 (#394) * Update ECS CE Docker image with ECS 3.0.0 Hotfix 2 (#312) * Reorg base files (fix my glitch) * Fixed Dockerfile so it patches everything in-place. (cherry picked from commit 867189eb92e36c8388d389273f2942b9f277380e) * update release files to use new image. * Duct tape for #301 Update to ECS 3.0.0.2 (3.0.0 HF 2) (#314) * fix a small typo * fix package install issue because EPEL is between versions again. * quick fixes for crashing HF2 to unblock clients * Docs update (#313) * Adds FAQ page * installation troubleshooting * formatting * Addition of network troubleshooting * Addition of network troubleshooting * More Troubleshooting * sidebar implementation attempt * implements important links dropdown, disables page dropdown. * whoops * Adds migration page and some small updates. * Bunch of docs updates * bugfix-hf2 (#315) * multitail and dstat are coming in handy right now * change fact cache location to /var/cache/emc/ecs-install * log the state of Docker at the end of bootstrapping for troubleshooting help (those hashes are good to see!) * [Ansible] Stop templating, start regex replacing props/confs * release prep 2.3.0 (#316) * OVA prep (#318) * Configure Jenkins pipeline to test installation process * Get repository information from Jenkins SCM config * Fix env var * Obtain TF options from Jenkins params. Moved deprovision step to post action * Allow multi-node configuration * Update checkout step in jenkinsfile * add zerofill.sh to /tools (#324) (cherry picked from commit 58f7e8eea587c5a005eff3df2c2733ac2f5e1a9c) * Add slack notifications to jenkins pipeline * [WiP] Configure Jenkins server to build PR and provide feedback (#328) Configure Jenkins server to build PR and provide feedback * Docs pass 2 (#326) * Removal of deprecated procedures * Templates * OVA install guide added * Fixed broken links * preflight remove all bootstrap packages if installed (#330) * Non-PR Jenkins jobs do not provide URL and commit author (#331) Fix variables in Slack notifications * put yum actions in retry loop with timeout (#332) Put yum actions in retry loop with timeout * Implements #205 Installer must have public key initial auth capability (#270) * ECS-CommunityEdition-205 Installer must have public key initial auth capability (cherry picked from commit 6eea10b5db3985f960d7b313d2e705a0f913ba55) * More sausage for the initial ssh key auth (cherry picked from commit 8535ccb5430e89b79d253ea1e74390a39b8b20f3) * more sausage (cherry picked from commit edf961e0765cd9a06ccea1a6d1e2406816533f46) * deploy.yml change ideas (cherry picked from commit ef48e2cc57fa6d0a57aa30bc62ba816fc167aed9) * bootstrap.sh modifications (cherry picked from commit d0b3c630f0a2004fe23534aca7e4a95986dce383) * bootstrap.sh modifications (cherry picked from commit 86f897af9395a57af5b33c2162cd85422b4e6ded) * move generic help to generic_help.j2.yml file from config.yml * include shipit.lib.sh * build install paths early add copy action for ssh PKI material * fix a couple gitopt bugs * add create_install_tree() to plugin-defaults.sh * copy ssh keys in bootstrap.sh * more longopts adjustments * add loop delay in retry_with_timeout() * stop trying to autoremove curl, it'll always error. 
* key_vals need basename not full path set 0700 bits on ssh/ssl stores * remove optarg debugging * more ssh pubkey sausage * update reference.deploy.yml to include feature * jenkins changes * jenkins changes * jenkins changes * jenkins changes * jenkins changes * bump versions and move OVA download links. (#335) * open-vm-tools now has a cross dependency (#337) with open-vm-tools-desktop and yum fails to install open-vm-tools on remote nodes when open-vm-tools-desktop is not installed. * ECS-CommunityEdition-317 Make `ecsconfig ping -cx` loop when dtquery fails (#344) (cherry picked from commit f5c7810a3385352a7ceb3bd9af66f9f824927dca) * Change the way ecs-install is pushed to repo (#346) * invoke zerofill via bash rather than expecting exec bit (#343) * Remove Ansible verbosity flag from Jenkinsfile (#350) * OVA QoL Improvements (#351) * add `ova-step1` and `ova-step2` macros * add `ecsdeploy noop` for some ova macros to look better * make videploy more intelligent and play nice with update_deploy * Implement Ansible global OVA flag fact (#349) * implements Ansible global ova flag fact - custom fact in /etc/ansible/facts.d/ova.fact - ova conditional flags in playbooks * misaligned `when` * skip rebooting when using the OVA. * Upgrade Ansible to 2.3 (#347) * install ansible package from @edge_main for 2.3 * Ansible changes for Ansible 2.3 * ECS-CommunityEdition-235 Bump Ansible version to 2.3 * refactor Ansible task `when:` clauses to Ansible 2.3 spec * refactor node reboot actions for Ansible 2.3 Also resolve #342 * remove unused json_file plugin * must ignore_errors: True `needs-restarting -r` * refactor port-check `when`s to Ansible 2.3 spec * cleanup * add loop_control to path permissions entries * add loop_control labels to many iterators * add loop_control labels to many iterators * break out one directive per line * add loop_control labels to many iterators * incorrect `when` * speling * Switch to Alpine 3.6 release (#359) * Switch to Alpine 3.6 release Install Python 2 from APK * Changes to Rockerfile for python:2-alpine parity * Split steps out from Ansible to get realtime console logging (#358) * Split steps out from Ansible to get realtime console logging * use /tmp? * template out a script to run command on install node via IP * Jenkinsfile sausage * Jenkinsfile sausage * Jenkinsfile sausage * Jenkinsfile sausage * Jenkinsfile sausage * Jenkinsfile sausage * Jenkinsfile sausage * Jenkinsfile sausage * Jenkinsfile sausage * log environment info to file log only, never to console. * Add CentOS 7.4 support (#360) (cherry picked from commit 1a960873d5d6f0b1011caabcbb3987114723dd8a) * [WiP] Misc. 2.5.0 bugfixes (#352) * update reference deploy version * bugfix typo in ed25519 private key filename * fix ova flag implementation * fix ova flag implementation round 2 * fix ova flag implementation round 3 * update entrypoint.sh * bump version to 2.5.0b1 (#364) * [WiP] Update ecs-install Python requirements (#356) * update python requirements * pin python requirements to major versions rather than patches. * add python2-dev to temporary build environment * need cryptography>=1.9 * [WiP] ECS 3.1.0.0 Reduced GA and CE Support (#353) * make 3.0.0.2 to use 100% regex (cherry picked from commit 0689ce59ec9e61beea4082b1f23222031b17f62d) * prep 3.1.0.0 RC3 (cherry picked from commit 2c3d5b9e92bcec83e01362c674a00158e6a61289) * prep 3.1.0.0 RC3 (cherry picked from commit ea12c296204442adf1e5079d778171140dede101) * ECS 3.1 templates * local facts must be fully qualified? 
* interface roles should be defined in deploy.yml * actually use a comma in the jinja joiner() func * use ansible_fqdn for agent strings not ansible_hostname * ECS 3.1.0.0 RC4 * joiner() needs to be the prefix not the suffix * the infamous missing comma * no trailing comma * remove redundant spaces * set host: field in testing * make object-main_network.json.j2 VDC-aware + formatting * Set georeceiver initialBufferNumOnHeap to 10 * Mount /usr instead of /usr/local to capture new install path * [WiP] ECS 3.1 Full GA and CE support (#367) * Use nodeId instead of the node IP to create data store * Fix errors getting node ID * fix 3.1 patch again * migrate cm.object.properties/'MustHaveEnoughResources=false' into Dockerfile * Run cf_client in container for new low partition count vars * Run cf_client in container for new low partition count vars * migrate cf_client variable settings into Dockerfile * update comments in Dockerfile for 3.1.0.0 * release-2.5.0-prep (#370) * Update ECS-Installation.md (cherry picked from commit f8be70f53b55bf718e2f1bb32df206484a12e7e8) * Update ECS-Installation.md (cherry picked from commit b479b0722308aaa40345af524f5e4430d29b11ed) * bump versions * Check if object user is editable before continuing to S3 credential provisioning (#376) * Fix #371 * Cosmetic change to fix output * Add retry loops around funcs to add credentials * add uniqueness to hostnames of CI VMs (#378) * Enable build-help in bootstrap.sh (#377) * enable build help in bootstrap.sh * enable build help in bootstrap.sh * Switch to upstream ECS version 3.1.0.1 (#375) * Add Dockerfile ecs-object-reduced to CE image patch * bump ecs version * bump version files for install node 2.5.1 (#379) * Develop (#372) (#380) backmerge * Documentation Pass 2 (#384) * ECS-CommunityEdition-299 Add Community Documentation * ECS-CommunityEdition-299 Add Community Documentation * CoC * update issue template * update pull request template * remove contributing document * update .dockerignore * move old changelogs to docs/legacy * Reformat README * remove changelog.rst * Create Standard, Island, and OVA Installation guides * Update ECS-Installation.md to be ToC style page * wording * install node capitalization * create building docs * add build_image macro to run.sh * create building docs * create utilities.md doc * undo a whoops * add utilities references and address #362 * Switch to mainline versions of libressl, mktorrent, ansible. (#391) * Switch to mainline versions of libressl, mktorrent, ansible. Resolves #390 * add tool to remove all ecs-install instances * update testbook for this issue * remove clicmd_access_host run in ecsdeploy cache command * change playbook logic to prevent single node deployments with an install node from failing. 
* update mktorrent-borg exec path * update tests * Change test network (#389) * Change test network to Local VLAN 999 * Switch network name to "CI Network" * Bump version 2.5.1 -> 2.5.2 (#393) --- .dockerignore | 2 +- .github/CONTRIBUTING.md | 0 .github/ISSUE_TEMPLATE.md | 13 +- .github/PULL_REQUEST_TEMPLATE.md | 17 +- CODE_OF_CONDUCT.md | 74 +++++ Jenkinsfile | 2 +- README.md | 28 +- README.rst | 10 +- changelog.rst | 47 --- changelog.md => docs/legacy/changelog.md | 0 docs/source/building/building.md | 191 ++++++++++++ docs/source/installation/ECS-Installation.md | 286 +----------------- docs/source/installation/ECS-Installation.rst | 8 +- .../installation/Island_Installation.md | 233 ++++++++++++++ docs/source/installation/OVA_Installation.md | 129 ++++++++ .../installation/Standard_Installation.md | 223 ++++++++++++++ docs/source/utilities/utilities.md | 248 +++++++++++++++ tools/clear-installer-image.sh | 2 + .../CentOS_7_baseline_install/tasks/main.yml | 4 +- .../roles/CentOS_7_reboot/tasks/main.yml | 8 +- .../common_baseline_install/tasks/main.yml | 2 +- .../firewalld_configure_access/tasks/main.yml | 2 +- .../installer_build_cache/tasks/main.yml | 14 +- ui/ansible/roles/testing/tasks/main.yml | 73 +++-- .../roles/testing/templates/agent.json.j2 | 3 - .../roles/testing/templates/id-old.json.j2 | 7 - ui/ansible/roles/testing/templates/id.json.j2 | 3 - .../templates/object-main_network.json.j2 | 29 -- .../testing/templates/rev0-network.json.j2 | 8 - .../testing/templates/rev1-network.json.j2 | 15 - ui/ansible/roles/testing/templates/seeds.j2 | 8 - ui/ansible/roles/testing/vars/main.yml | 44 --- ui/ansible/testing.yml | 19 +- ui/ecsdeploy.py | 9 +- ui/etc/config.yml | 2 +- ui/etc/release.conf | 2 +- ui/resources/docker/Rockerfile | 7 +- ui/run.sh | 7 +- ui/setup.py | 2 +- 39 files changed, 1250 insertions(+), 531 deletions(-) delete mode 100644 .github/CONTRIBUTING.md create mode 100644 CODE_OF_CONDUCT.md delete mode 100644 changelog.rst rename changelog.md => docs/legacy/changelog.md (100%) create mode 100644 docs/source/building/building.md create mode 100644 docs/source/installation/Island_Installation.md create mode 100644 docs/source/installation/OVA_Installation.md create mode 100644 docs/source/installation/Standard_Installation.md create mode 100644 docs/source/utilities/utilities.md create mode 100644 tools/clear-installer-image.sh delete mode 100644 ui/ansible/roles/testing/templates/agent.json.j2 delete mode 100644 ui/ansible/roles/testing/templates/id-old.json.j2 delete mode 100644 ui/ansible/roles/testing/templates/id.json.j2 delete mode 100644 ui/ansible/roles/testing/templates/object-main_network.json.j2 delete mode 100644 ui/ansible/roles/testing/templates/rev0-network.json.j2 delete mode 100644 ui/ansible/roles/testing/templates/rev1-network.json.j2 delete mode 100644 ui/ansible/roles/testing/templates/seeds.j2 delete mode 100644 ui/ansible/roles/testing/vars/main.yml diff --git a/.dockerignore b/.dockerignore index ac88f90e..9cb9df4a 100644 --- a/.dockerignore +++ b/.dockerignore @@ -12,7 +12,7 @@ /README.md /bootstrap.sh /.gitignore -/changelog.md +/docs /ECS-CommunityEdition.iml /.gitattributes /.dockerignore diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md deleted file mode 100644 index e69de29b..00000000 diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md index f51ed94b..bfaf5dd0 100644 --- a/.github/ISSUE_TEMPLATE.md +++ b/.github/ISSUE_TEMPLATE.md @@ -2,9 +2,18 @@ ### Actual Behavior - +(Please put additional output and logs in the 
section for that below) ### Steps to Reproduce Behavior +1. +2. +3. + +### Relevant Output and Logs +``` +# Output and Logs go here +``` -### Output \ No newline at end of file +--- +Notifies: @padthaitofuhot @captntuttle @adrianmo diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 3aae75c5..06c2084e 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -1,11 +1,14 @@ -This pull request addresses the following issue(s): # . +### This pull request addresses the following referenced issue(s): +1. +2. +3. -Overview of changes: +### Overview of changes: -- -- -- +1. +2. +3. - -@padthaitofuhot @captntuttle @adrianmo \ No newline at end of file +--- +@padthaitofuhot @captntuttle @adrianmo diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 00000000..c2f03947 --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,74 @@ +# Contributor Covenant Code of Conduct + +## Our Pledge + +In the interest of fostering an open and welcoming environment, we as +contributors and maintainers pledge to making participation in our project and +our community a harassment-free experience for everyone, regardless of age, body +size, disability, ethnicity, gender identity and expression, level of experience, +nationality, personal appearance, race, religion, or sexual identity and +orientation. + +## Our Standards + +Examples of behavior that contributes to creating a positive environment +include: + +* Using welcoming and inclusive language +* Being respectful of differing viewpoints and experiences +* Gracefully accepting constructive criticism +* Focusing on what is best for the community +* Showing empathy towards other community members + +Examples of unacceptable behavior by participants include: + +* The use of sexualized language or imagery and unwelcome sexual attention or +advances +* Trolling, insulting/derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or electronic + address, without explicit permission +* Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Our Responsibilities + +Project maintainers are responsible for clarifying the standards of acceptable +behavior and are expected to take appropriate and fair corrective action in +response to any instances of unacceptable behavior. + +Project maintainers have the right and responsibility to remove, edit, or +reject comments, commits, code, wiki edits, issues, and other contributions +that are not aligned to this Code of Conduct, or to ban temporarily or +permanently any contributor for other behaviors that they deem inappropriate, +threatening, offensive, or harmful. + +## Scope + +This Code of Conduct applies both within project spaces and in public spaces +when an individual is representing the project or its community. Examples of +representing a project or community include using an official project e-mail +address, posting via an official social media account, or acting as an appointed +representative at an online or offline event. Representation of a project may be +further defined and clarified by project maintainers. + +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported by contacting the project team at padthaitofuhot@gmail.com. 
All +complaints will be reviewed and investigated and will result in a response that +is deemed necessary and appropriate to the circumstances. The project team is +obligated to maintain confidentiality with regard to the reporter of an incident. +Further details of specific enforcement policies may be posted separately. + +Project maintainers who do not follow or enforce the Code of Conduct in good +faith may face temporary or permanent repercussions as determined by other +members of the project's leadership. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, +available at [http://contributor-covenant.org/version/1/4][version] + +[homepage]: http://contributor-covenant.org +[version]: http://contributor-covenant.org/version/1/4/ diff --git a/Jenkinsfile b/Jenkinsfile index 1d74b631..8b545445 100644 --- a/Jenkinsfile +++ b/Jenkinsfile @@ -13,7 +13,7 @@ pipeline { string(name: 'template', defaultValue: 'jenkins/ecsce-template', description: 'VM template') string(name: 'resource_pool', defaultValue: 'Cisco UCS Cluster/Resources/Tests', description: 'vSphere resource pool') string(name: 'datacenter', defaultValue: 'Datacenter', description: 'vSphere datacenter') - string(name: 'network_interface', defaultValue: 'VM Network', description: 'VM network interface') + string(name: 'network_interface', defaultValue: 'CI Network', description: 'CI network interface') string(name: 'ecs_nodes', defaultValue: '1', description: 'Number of ECS nodes to be deployed') } diff --git a/README.md b/README.md index 889ebc59..335a3056 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,14 @@ -[![Documentation Status](http://readthedocs.org/projects/ecsce/badge/?version=latest)](http://ecsce.readthedocs.io/en/latest/?badge=latest) -

ECS Community Edition

+# ECS Community Edition +**ECS Software Image** +[![](https://images.microbadger.com/badges/version/emccorp/ecs-software-3.1.0.svg)](https://microbadger.com/images/emccorp/ecs-software-3.1.0 "Get your own version badge on microbadger.com") +**ECS CE Install Node Image** +[![](https://images.microbadger.com/badges/version/emccorp/ecs-install.svg)](https://microbadger.com/images/emccorp/ecs-install "Get your own version badge on microbadger.com") [![Documentation Status](http://readthedocs.org/projects/ecsce/badge/?version=latest)](http://ecsce.readthedocs.io/en/latest/?badge=latest) [![Jenkins](https://img.shields.io/jenkins/s/https/jenkins.qa.ubuntu.com/view/Precise/view/All%20Precise/job/precise-desktop-amd64_default.svg?style=flat-square)](http://10.1.83.120/job/ecs-ce/) + +Current releases and history are available [here][releases]. + +## Community Guides +Please read our [Community Code of Conduct][ccoc] before contributing. Thank you. -See [release history](https://github.com/EMCECS/ECS-CommunityEdition/releases) for current release notes and change log; [changelog.md](changelog.md) file for legacy history. ## Description @@ -85,12 +92,12 @@ Deploy a multi-node ECS instance to two or more hardware or virtual machines. T ### Deployments into Soft-Isolated and Air-Gapped Island Environments ##### Important information regarding Island deployments -Please be aware that Install Node bootstrapping requires Internet access to the hardware or virtual machine that will become the Install Node, but once this step is complete, the machine can be removed from the Internet and migrated into the Island environment. +Please be aware that install node bootstrapping requires Internet access to the hardware or virtual machine that will become the install node, but once this step is complete, the machine can be removed from the Internet and migrated into the Island environment. #### Deploying from OVA -In situations where Internet access is completely disallowed, or for the sake of convenience, an OVA of a prefabricated, bootstrapped, Install Node is provided. Please download the OVA from one of the links below. +In situations where Internet access is completely disallowed, or for the sake of convenience, an OVA of a prefabricated, bootstrapped, install node is provided. Please download the OVA from one of the links below. -The OVA is shipped as a bootstrapped Install Node. It must be cloned multiple times to create as many Data Store Nodes as desired. +The OVA is shipped as a bootstrapped install node. It must be cloned multiple times to create as many Data Store Nodes as desired. ###### OVA Special Requirements * All nodes **MUST** be clones of the OVA. @@ -104,10 +111,10 @@ The OVA is shipped as a bootstrapped Install Node. It must be cloned multiple t Please see the [release page](https://github.com/EMCECS/ECS-CommunityEdition/releases) for OVA download links. #### [ECS Single-Node Deployment with Install Node (recommended)](docs/source/installation/ECS-Installation.md) -Using an Install Node for isolated environments, deploy a stand-alone instance of ECS to a single hardware or virtual machine. +Using an install node for isolated environments, deploy a stand-alone instance of ECS to a single hardware or virtual machine. #### [ECS Multi-Node Deployment with Install Node](docs/source/installation/ECS-Installation.md) -Using an Install Node for isolated environments, deploy a multi-node ECS instance to two or more hardware or virtual machines. 
Three nodes are required to enable erasure-coding replication. +Using an install node for isolated environments, deploy a multi-node ECS instance to two or more hardware or virtual machines. Three nodes are required to enable erasure-coding replication. # Directory Structure @@ -207,3 +214,8 @@ EMC and Customer enter into this Agreement and this Agreement shall become effec **9.7 - Independent Contractors** - The parties shall act as independent contractors for all purposes under this Agreement. Nothing contained herein shall be deemed to constitute either party as an agent or representative of the other party, or both parties as joint venturers or partners for any purpose. Neither party shall be responsible for the acts or omissions of the other party, and neither party will have authority to speak for, represent or obligate the other party in any way without an authenticated record indicating the prior approval of the other party. **9.8 - Separability** - If any provision of this Agreement shall be held illegal or unenforceable, such provision shall be deemed separable from, and shall in no way affect or impair the validity or enforceability of, the remaining provisions. + +[ccoc]: https://github.com/EMCECS/ECS-CommunityEdition/blob/master/CODE_OF_CONDUCT.md +[contributing]: https://github.com/EMCECS/ECS-CommunityEdition/blob/master/.github/CONTRIBUTING.md +[releases]: https://github.com/EMCECS/ECS-CommunityEdition/releases +[legacy_changelog]: https://github.com/EMCECS/ECS-CommunityEdition/blob/master/docs/legacy/changelog.md diff --git a/README.rst b/README.rst index e75293b3..ed59e6f3 100644 --- a/README.rst +++ b/README.rst @@ -135,12 +135,12 @@ Deployments into Soft-Isolated and Air-Gapped Island Environments Important information regarding Island deployments '''''''''''''''''''''''''''''''''''''''''''''''''' -Please be aware that Install Node bootstrapping requires Internet access -to the hardware or virtual machine that will become the Install Node, +Please be aware that install node bootstrapping requires Internet access +to the hardware or virtual machine that will become the install node, but once this step is complete, the machine can be removed from the Internet and migrated into the Island environment. -If you prefer to download a prefab Install Node as an OVF/OVA, follow +If you prefer to download a prefab install node as an OVF/OVA, follow one of the links below. Please note that OVAs are produced upon each release and do not necessarily have the most current software. @@ -149,7 +149,7 @@ Please see the `release page `__ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Using an Install Node for isolated environments, deploy a multi-node ECS +Using an install node for isolated environments, deploy a multi-node ECS instance to two or more hardware or virtual machines and enable all ECS features. Three nodes are required for all ECS 3.0 and above features to be activated. @@ -157,7 +157,7 @@ be activated. `ECS Single-Node Deployment with Install Node `__ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Using an Install Node for isolated environments, deploy a stand-alone +Using an install node for isolated environments, deploy a stand-alone instance of a limited set of ECS kit to a single hardware or virtual machine. 
diff --git a/changelog.rst b/changelog.rst
deleted file mode 100644
index f0351ca4..00000000
--- a/changelog.rst
+++ /dev/null
@@ -1,47 +0,0 @@
-.. _changelog:
-
-Update 2017-04-30: v 3.0.0.1 (HF1)
----------------------------------
-
-- Updated Docker image to `ECS Software v3.0 Hotfix 1 <>`__
-- Go-live with Installer 2.0 and yaml-configured, Ansible-based
-  deployment system
-- Legacy installer code moved to legacy/ directory.
-
-Update 2015-12-29: v.2.1.0.2
----------------------------
-
-- Updated Docker Image to `ECS Software v2.1 Hotfix
-  2 `__
-- Various improvements to retry code
-- Changes to and fixes for VDC creation, esp. in multi-VDC builds
-- VDC Provisioning may now be deliberately omitted with
-  -SkipVDCProvision
-- Modification of storage server (SSM) parameters to better support
-  smaller disk configurations
-- Pre-existing docker images no longer removed during installation
-- Addition of systemd script for container start/stop
-- Fixes for import, formatting, and other assorted minor bugs
-
-Update 2015-11-30
-----------------
-
-- Updated Docker Image to a `ECS Software v2.1 Hotfix
-  1 `__
-- Users can now optionally specify docker image via command-line
-  arguments in step1.
-- Installation script provides more inforamation to user, proceeds
-  depending on services' availability.
-- Fix for authentication issues resulting from default provisioning.
-
-Update 2015-01-19: v2.2.0.0
---------------------------
-
-- Updated Docker Image to `ECS Software
-  v2.2 `__
-- **Note:** Due to export restrictions, ECS Community Edition does not
-  include encryption functionality.
-- Updated install scripts to work with ECS 2.2. **Note:** if you want
-  to install ECS 2.1, please download the install scripts for 2.1 from
-  github. The changes to install 2.2 are not backward-compatible.
-
diff --git a/changelog.md b/docs/legacy/changelog.md
similarity index 100%
rename from changelog.md
rename to docs/legacy/changelog.md
diff --git a/docs/source/building/building.md b/docs/source/building/building.md
new file mode 100644
index 00000000..7982f9c3
--- /dev/null
+++ b/docs/source/building/building.md
@@ -0,0 +1,191 @@
+# Building `ecs-install` Image From Sources
+The ECS-CommunityEdition git repository is also a build environment for the `ecs-install` image.
+
+## Building `ecs-install` Image During Bootstrap with `bootstrap.sh`
+If you're hacking around in the install node code, then you'll probably want to build your own install node image at some point. The `bootstrap.sh` script has options for accomplishing just this.
+
+```
+[Usage]
+    -h, --help
+        Display this help text and exit
+    --help-build
+        Display build environment help and exit
+    --version
+        Display version information and exit
+
+[Build Options]
+    --zero-fill-ova
+        Reduce ephemera, defrag, and zerofill the instance after bootstrapping
+    --build-from <url>
+        Use the Alpine Linux mirror at <url> to build the ecs-install image locally.
+        Mirror list: https://wiki.alpinelinux.org/wiki/Alpine_Linux:Mirrors
+```
+
+All you'll need is a URL that points to an Alpine Linux mirror.
For a good default, you can use the GeoDB enabled CDN mirror, which should auto-select a nearby mirror for you based on your public edge IP: http://dl-cdn.alpinelinux.org/alpine/ + +To tell bootstrap to build the image for you, just include the `--build-from` argument on your command line, like so: + +`[admin@localhost ECS-CommunityEdition]$ ./bootstrap.sh --build-from http://dl-cdn.alpinelinux.org/alpine/` + +## Building `ecs-install` Image After Bootstrapping with `build_image` + +If you need to build the `ecs-install` image after bootstrapping, then you'll need to give a valid Alpine Linux mirror to your install node: + +``` +[admin@installer-230 ECS-CommunityEdition]$ build_image --update-mirror http://cache.local/alpine/ +> Updating bootstrap.conf to use Alpine Linux mirror http://cache.local/alpine/ +``` + +Once the mirror is configured, you can then build the image: + +``` +[admin@installer-230 ECS-CommunityEdition]$ build_image +> Building image ecs-install +> Build context is: local +> Using custom registry: cache.local:5000 +> Tunneling through proxy: cache.local:3128 +> Checking Alpine Linux mirror +> Generating Alpine Linux repositories file +> Collecting artifacts +> UI artifact is: ui/resources/docker/ecs-install.2.5.1-local.installer-230.4.tgz +INFO[0000] FROM cache.local:5000/alpine:3.6 +INFO[0000] | Image sha256:37eec size=3.962 MB +INFO[0000] LABEL MAINTAINER='Travis Wichert ' +INFO[0000] ENV ANSIBLE_CONFIG="/etc/ansible/ansible.cfg" +INFO[0000] ENV ANSIBLE_HOSTS="/usr/local/src/ui/inventory.py" +INFO[0000] Commit changes +INFO[0000] | Cached! Take image sha256:302bc size=3.962 MB (+0 B) +INFO[0000] COPY ui/resources/docker/ecs-install-requirements.txt /etc/ecs-install-requirements.txt +INFO[0000] | Calculating tarsum for 1 files (465 B total) +INFO[0000] | Cached! 
Take image sha256:44a83 size=3.962 MB (+465 B) +INFO[0000] COPY ui/resources/docker/apk-repositories /etc/apk/repositories +INFO[0000] | Calculating tarsum for 1 files (239 B total) +INFO[0000] | Not cached +INFO[0000] | Created container 89e5a010f1b5 (image sha256:44a83) +INFO[0000] | Uploading files to container 89e5a010f1b5 +INFO[0000] Commit changes +INFO[0001] | Result image is sha256:26c0f size=3.962 MB (+239 B) +INFO[0001] | Removing container 89e5a010f1b5 +INFO[0001] ENV http_proxy=http://cache.local:3128 +INFO[0001] ENV pip_proxy=cache.local:3128 +INFO[0001] Commit changes +INFO[0002] | Created container 49b210eacd7c (image sha256:26c0f) +INFO[0002] | Result image is sha256:d9d58 size=3.962 MB (+0 B) +INFO[0002] | Removing container 49b210eacd7c +INFO[0003] RUN apk -q update && apk -q --no-cache upgrade +INFO[0003] | Created container 856a966289a6 (image sha256:d9d58) +INFO[0005] Commit changes +INFO[0006] | Result image is sha256:a2978 size=6.855 MB (+2.893 MB) +INFO[0006] | Removing container 856a966289a6 +INFO[0006] RUN apk -q --no-cache add python2 py-pip openssh-client sshpass openssl ca-certificates libffi libressl@edge_main pigz jq less opentracker aria2 mktorrent@edge_community ansible@edge_main +INFO[0006] | Created container 2c940cb6c2e6 (image sha256:a2978) +INFO[0016] Commit changes +INFO[0026] | Result image is sha256:b806e size=124.4 MB (+117.6 MB) +INFO[0026] | Removing container 2c940cb6c2e6 +INFO[0026] RUN mv /etc/profile.d/color_prompt /etc/profile.d/color_prompt.sh && ln -s /usr/local/src/ui/ansible /ansible && ln -s /usr/local/src/ui /ui && ln -s /usr/local/src /src && ln -s /usr/bin/python /usr/local/bin/python && mkdir -p /var/run/opentracker && chown nobody:nobody /var/run/opentracker +INFO[0027] | Created container a5a35a59e61a (image sha256:b806e) +INFO[0027] Commit changes +INFO[0029] | Result image is sha256:55ae2 size=124.4 MB (+295 B) +INFO[0029] | Removing container a5a35a59e61a +INFO[0029] RUN apk -q --no-cache add --update --virtual .build-deps musl-dev python2-dev libffi-dev build-base make openssl-dev linux-headers git gcc git-perl && if ! 
[ -z "$pip_proxy" ]; then export pip_proxy="--proxy $pip_proxy" && git config --global http.proxy "$http_proxy" ;fi && pip install -q $pip_proxy --no-cache-dir -r /etc/ecs-install-requirements.txt && apk -q --no-cache --purge del .build-deps +INFO[0030] | Created container 4d07a461385a (image sha256:55ae2) +INFO[0184] Commit changes +INFO[0187] | Result image is sha256:79f09 size=151.1 MB (+26.68 MB) +INFO[0187] | Removing container 4d07a461385a +INFO[0187] RUN mkdir -p /etc/ansible +INFO[0188] | Created container 021968b10369 (image sha256:79f09) +INFO[0188] Commit changes +INFO[0190] | Result image is sha256:376dc size=151.1 MB (+0 B) +INFO[0190] | Removing container 021968b10369 +INFO[0191] COPY ui/resources/docker/ansible.cfg /etc/ansible/ansible.cfg +INFO[0191] | Calculating tarsum for 1 files (5.437 kB total) +INFO[0191] | Created container acf602cb1215 (image sha256:376dc) +INFO[0191] | Uploading files to container acf602cb1215 +INFO[0191] Commit changes +INFO[0193] | Result image is sha256:a3b7d size=151.1 MB (+5.437 kB) +INFO[0193] | Removing container acf602cb1215 +INFO[0193] COPY ui/resources/docker/entrypoint.sh /usr/local/bin/entrypoint.sh +INFO[0193] | Calculating tarsum for 1 files (5.844 kB total) +INFO[0194] | Created container d2e1e94bba06 (image sha256:a3b7d) +INFO[0194] | Uploading files to container d2e1e94bba06 +INFO[0194] Commit changes +INFO[0196] | Result image is sha256:c0530 size=151.1 MB (+5.844 kB) +INFO[0196] | Removing container d2e1e94bba06 +INFO[0196] RUN chmod +x /usr/local/bin/entrypoint.sh +INFO[0197] | Created container 58814799d1c4 (image sha256:c0530) +INFO[0197] Commit changes +INFO[0199] | Result image is sha256:6fa79 size=151.1 MB (+0 B) +INFO[0199] | Removing container 58814799d1c4 +INFO[0200] ENTRYPOINT [ "/usr/local/bin/entrypoint.sh" ] +INFO[0200] Commit changes +INFO[0200] | Created container dc4494fd062f (image sha256:6fa79) +INFO[0202] | Result image is sha256:481e1 size=151.1 MB (+0 B) +INFO[0202] | Removing container dc4494fd062f +INFO[0202] COPY ui/resources/docker/torrent.sh /usr/local/bin/torrent.sh +INFO[0202] | Calculating tarsum for 1 files (890 B total) +INFO[0203] | Created container 9f15d6413cd2 (image sha256:481e1) +INFO[0203] | Uploading files to container 9f15d6413cd2 +INFO[0203] Commit changes +INFO[0205] | Result image is sha256:35f06 size=151.1 MB (+890 B) +INFO[0205] | Removing container 9f15d6413cd2 +INFO[0205] COPY ui/resources/docker/ecs-install.2.5.1-local.installer-230.4.tgz /usr/local/src/ui.tgz +INFO[0205] | Calculating tarsum for 1 files (3.958 MB total) +INFO[0206] | Created container e6542b37ddc7 (image sha256:35f06) +INFO[0206] | Uploading files to container e6542b37ddc7 +INFO[0206] Commit changes +INFO[0208] | Result image is sha256:161f5 size=155.1 MB (+3.958 MB) +INFO[0208] | Removing container e6542b37ddc7 +INFO[0208] ENV http_proxy= +INFO[0208] ENV pip_proxy= +INFO[0208] VOLUME [ "/opt", "/usr", "/var/log", "/root", "/etc" ] +INFO[0208] LABEL VERSION=cache.local:5000/emccorp/ecs-install:2.5.1-local.installer-230.4 +INFO[0208] ENV VERSION=cache.local:5000/emccorp/ecs-install:2.5.1-local.installer-230.4 +INFO[0208] Commit changes +INFO[0213] | Created container 7beb4650354e (image sha256:161f5) +INFO[0216] | Result image is sha256:7bd3d size=155.1 MB (+0 B) +INFO[0216] | Removing container 7beb4650354e +INFO[0217] TAG cache.local:5000/emccorp/ecs-install:2.5.1-local.installer-230.4 +INFO[0217] | Tag sha256:7bd3d -> cache.local:5000/emccorp/ecs-install:2.5.1-local.installer-230.4 +INFO[0217] Cleaning up 
+INFO[0217] Successfully built sha256:7bd3d | final size 155.1 MB (+151.1 MB from the base image) +> Tagging cache.local:5000/emccorp/ecs-install:2.5.1-local.installer-230.4 -> emccorp/ecs-install:latest +``` + +The new image is automatically tagged :latest in the local repository and replaces any previous :latest images. + +You'll then want to clean up the local Docker repository with this command: + +``` +[admin@installer-230 ECS-CommunityEdition]$ build_image --clean +> Cleaning up... +> [build tmp containers] +> [ecs-install data containers] +> [exited containers] +> [dangling layers] +``` + +## Making Quick Iterative Changes to an Existing `ecs-install` Image with `update_image` + +Building an image can take a long time. If you have not made any changes to files that are used in the `docker build` process, then you can update an existing `ecs-install` data container with code changes using the `update_image` macro: + +``` +[admin@installer-230 ECS-CommunityEdition]$ update_image +> Updating image: ecs-install +> Build context is: local +> Tunneling through proxy: cache.local:3128 +> Cleaning up... +> [build tmp containers] +> [ecs-install data containers] +> [exited containers] +> [dangling layers] +> Collecting artifacts +> UI is: ui/resources/docker/ecs-install.2.5.1-local.installer-230.5.tgz +> Creating new data container +> Image updated. +``` + +## Quickly Testing Ansible Changes with `testbook` + +If you're working with Ansible within ECS Community Edition, you might find yourself needing to test to see how your Ansible role is being played from within the `ecs-install` image. You can do this by modifying the files under the `testing` subdirectory of the Ansible `roles` directory: `ui/ansible/roles/testing` + +After making your changes, run `update_image` as discussed above, and then run `testbook` to execute your role. The `testbook` command will automatically initialize a new data container, configure access with the install node, and test your role directives. diff --git a/docs/source/installation/ECS-Installation.md b/docs/source/installation/ECS-Installation.md index 7588d876..d5b6dc60 100644 --- a/docs/source/installation/ECS-Installation.md +++ b/docs/source/installation/ECS-Installation.md @@ -1,282 +1,14 @@ -# ECS Community Edition Installation +# ECS Community Edition Installation Guides -## Standard Installation +For **Standard** installations (Internet connected, from source) use [this guide][standard-in]. -ECS Community Edition now features a brand new installer. This installer aims to greatly improve user experience through automation. This document will guide the user through the new installation process. +For **Island** installations (Isolated environment, from source) use [this guide][island-in]. -### Prerequisites +For **OVA** installations (connectivity agnostic, from OVA) use [this guide][ova-in]. -Listed below are all necessary components for a successful ECS Community Edition installation. If they are not met the installation will likely fail. +For information on [deploy.yml][deploy_yml] file available options use [this guide][deploy_yml]. -#### Hardware Requirements - -The installation process is designed to be performed from either a dedicated installation node. However, it is possible, if you so choose, for one of the ECS data nodes to double as the install node. The install node will bootstrap the ECS data nodes and configure the ECS instance. When the process is complete, the install node may be safely destroyed. 
Both single node and multi-node deployments require only a single install node. - -The technical requirements for the installation node are minimal, but reducing available CPU, memory, and IO throughput will adversely affect the speed of the installation process: - -* 1 CPU Core -* 2 GB Memory -* 10 GB HDD -* CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported) - -The minimum technical requirements for each ECS data node are: - -* 4 CPU Cores -* 16 GB Memory -* 16 GB Minimum system block storage device -* 104 GB Minimum additional block storage device in a raw, unpartitioned state. -* CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported) - -The recommended technical requirements for each ECS data node are: - -* 8 CPU Cores -* 64GB RAM -* 16GB root block storage -* 1TB additional block storage -* CentOS 7.3 Minimal installation - -For multi-node installations each data node must fulfill these minimum qualifications. The installer will do a pre-flight check to ensure that the minimum qualifications are met. If they are not, the installation will not continue. - -#### Environmental Requirements - -The following environmental requirements must also be met to ensure a successful installation: - -* **Network:** All nodes, including install node and ECS data node(s), must exist on the same IPv4 subnet. IPv6 networking *may* work, but is neither tested nor supported for ECS Community Edition at this time. -* **Remote Access:** Installation is coordinated via Ansible and SSH. However, public key authentication during the initial authentication and access configuration is not yet supported. Therefore, password authentication must be enabled on all nodes, including install node and ECS data node(s). *This is a known issue and will be addressed in a future release* -* **OS:** CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported) - -#### All-in-One Single-Node Deployments - -A single node *can* successfully run the installation procedure on itself. To do this simply input the node's own IP address as the installation node as well as the data node in the deploy.yml file. - -### 1. Getting Started - -Please use a non-root administrative user account with sudo privileges on the Install Node when performing the deployment. If deploying from the provided OVA, this account is username `admin` with password `ChangeMe`. - -Before data store nodes can be created, the install node must be prepared. If downloading the repository from github run the following commands to get started: - -0. `sudo yum install -y git` -0. `git clone https://github.com/EMCECS/ECS-CommunityEdition`. - -If the repository is being added to the machine via usb drive, scp, or some other file-based means, please copy the archive into `$HOME/` and run: - -* for .zip archive `unzip ECS-CommunityEdition.zip` -* for .tar.gz archive `tar -xzvf ECS-CommunityEdition.tar.gz` - -###### Important Note -> This documentation refers only to the `ECS-CommunityEdition` directory, but the directory created when unarchiving the release archive may have a different name than `ECS-CommunityEdition`. If this is so, please rename the directory created to `ECS-CommunityEdition` with the `mv` command. This will help the documentation make sense as you proceed with the deployment. - -### 2. 
Creating The Deployment Map (`deploy.yml`) -###### Important Note -> When installing using the OVA method, please run `videploy` at this time and skip to Step 2.2. - -Installation requires the creation of a deployment map. This map is represented in a YAML configuration file called deploy.yml. This file *should* be written before the next step for the smoothest experience. - -##### 2.1 -Create this file in the `ECS-CommunityEdition` directory that was created when the repository was cloned. A template guide for writing this file can be found [here](deploy.yml.md). - -##### 2.2 -Below are steps for creating a basic deploy.yml. **Please note that all fields mentioned below are required for a successful installation.** - -0. From the ECS-CommunityEdition directory, run the commmand: `cp docs/design/reference.deploy.yml deploy.yml` -0. Edit the file with your favorite editor on another machine, or use `vi deploy.yml` on the Install Node. Read the comments in the file and review the examples in the `examples/` directory. -0. Top-level deployment facts (`facts:`) - 0. Enter the IP address of the Install Node into the `install_node:` field. - 0. Enter into the `management_clients:` field the CIDR address/mask of each machine or subnet that will be whitelisted in node's firewalls and allowed to communicate with ECS management API. - * `10.1.100.50/32` is *exactly* the IP address. - * `192.168.2.0/24` is the entire /24 subnet. - * `0.0.0.0/0` represents the entire Internet. -0. SSH login details (`ssh_defaults:`) - 0. If the SSH server is bound to a non-standard port, enter that port number in the `ssh_port:` field, or leave it set at the default (22). - 0. Enter the username of a user permitted to run commands as UID 0/GID 0 ("root") via the `sudo` command into the `ssh_username:` field. This must be the same across all nodes. - 0. Enter the password for the above user in the `ssh_password:` field. This will only be used during the initial public key authentication setup and can be changed after. This must be the same across all nodes. -0. Node configuration (`node_defaults:`) - 0. Enter the DNS domain for the ECS installation. This can simply be set to `localdomain` if you will not be using DNS with this ECS deployment. - 0. Enter each DNS server address, one per line, into `dns_servers:`. This can be what's present in `/etc/resolv.conf`, or it can be a different DNS server entirely. This DNS server will be set to the primary DNS server for each ECS node. - 0. Enter each NTP server address, one per line, into `ntp_servers:`. -0. Storage Pool configuration (`storage_pools:`) - 0. Enter the storage pool `name:`. - 0. Enter each member data node address, one per line, in `members:`. - 0. Under `options:`, enter each block device reserved for ECS, one per line, in `ecs_block_devices:`. -0. Virtual Data Center configuration (`virtual_data_centers:`) - 0. Enter each VDC `name:`. - 0. Enter each member Storage Pool name, one per line, in `members:` -0. Optional directives, such as those for Replication Groups and users, may also be configured at this time. -0. When you have completed the `deploy.yml` to your liking, save the file and exit the `vi` editor. -0. Move on to Bootstrapping - -These steps quickly set up a basic deploy.yml file - -##### More on deploy.yml -Please read the reference deploy.yml found [here](deploy.yml.md). It is designed to be self documenting and required fields are filled with either example or default values. 
The above values are only bare minimum values and may not yield optimal results for your environment. - -### 3. Bootstrapping the Install Node (`bootstrap.sh`) -###### Important Note ->When installing using the OVA method, please skip to Step 4. - -The bootstrap script configures the installation node for ECS deployment and downloads the required Docker images and software packages that all other nodes in the deployment will need for successful installation. - -Once the deploy.yml file has been created, the installation node must be bootstrapped. To do this `cd` into the ECS-CommunityEdition directory and run `./bootstrap.sh -c deploy.yml`. Be sure to add the `-g` flag if building the ECS deployment in a virtual environment and the `-y` flag if you're okay accepting all defaults. - -The bootstrap script accepts many flags. If your environment uses proxies, including MitM SSL proxies, custom nameservers, or a local Docker registry or CentOS mirror, you may want to indicate that on the `bootstrap.sh` command line. - -``` -[Usage] - -h This help text - -[General Options] - -y / -n Assume YES or NO to any questions (may be dangerous). - - -v / -q Be verbose (also show all logs) / Be quiet (only show necessary output) - - -c If you have a deploy.yml ready to go, use this. - - -o Override DHCP-configured nameserver(s); use these instead. No spaces! - - -g Install virtual machine guest agents and utilities for QEMU and VMWare. - VirtualBox is not supported at this time. - - -m Use the provided package when fetching packages for the - base OS (but not 3rd-party sources, such as EPEL or Debian-style PPAs). - The mirror is specified as ':'. This option overrides any - mirror lists the base OS would normally use AND supersedes any proxies - (assuming the mirror is local), so be warned that when using this - option it's possible for bootstrapping to hang indefinitely if the - mirror cannot be contacted. - - -b Build the installer image (ecs-install) locally instead of fetching - the current release build from DockerHub (not recommended). Use the - Alpine Linux mirror when building the image. - -[Docker Options] - -r Use the Docker registry at instead of DockerHub. - The connect string is specified as ':[/]' - You may be prompted for your credentials if authentication is required. - You may need to use -d (below) to add the registry's cert to Docker. - - -l After Docker is installed, login to the Docker registry to access images - which require access authentication. Login to Dockerhub by default unless - -r is used. - - -d NOTE: This does nothing unless -r is also given. - If an alternate Docker registry was specified with -r and uses a cert - that cannot be resolved from the anchors in the local system's trust - store, then use -d to specify the x509 cert file for your registry. - -[Proxies & Middlemen] - -k Install the certificate in into the local trust store. This is - useful for environments that live behind a corporate HTTPS proxy. - - -p Use the specified as '[user:pass@]:' - items in [] are optional. It is assumed this proxy handles all protocols. - - -t Attempt to CONNECT through the proxy using the string specified - as ':'. By default 'google.com:80' is used. Unless you block - access to Google (or vice versa), there's no need to change the default. - -[Examples] - Install VM guest agents and install the corporate firewall cert in certs/mitm.pem. - $ ./bootstrap.sh -g -k certs/mitm.pem - - Quietly use nlanr.peer.local on port 80 and test the connection using EMC's webserver. 
- $ ./bootstrap.sh -q -p nlanr.peer.local:80 -t emc.com:80 - - Assume YES to all questions and use the proxy cache at cache.local port 3128 for HTTP- - related traffic. Use the Docker registry at registry.local:5000 instead of DockerHub, - and install the x509 certificate in certs/reg.pem into Docker's trust store so it can - access the Docker registry. - $ ./bootstrap.sh -y -p cache.local:3128 -r registry.local:5000 -d certs/reg.pem -``` - -The bootstrapping process has completed when the following message appears: - -``` -> All done bootstrapping your install node. -> -> To continue (after reboot if needed): -> $ cd /home/admin/ECS-CommunityEdition -> If you have a deploy.yml ready to go (and did not use -c flag): -> $ sudo cp deploy.yml /opt/emc/ecs-install/ -> If not, check out the docs/design and examples directory for references. -> Once you have a deploy.yml, you can start the deployment -> by running: -> -> [WITH Internet access] -> $ step1 -> [Wait for deployment to complete, then run:] -> $ step2 -> -> [WITHOUT Internet access] -> $ island-step1 -> [Migrate your install node into the isolated environment and run:] -> $ island-step2 -> [Wait for deployment to complete, then run:] -> $ island-step3 -> -``` - -After the installation node has successfully bootstrapped you may be prompted to reboot the machine. If so, then the machine must be rebooted before continuing to Step 4. - -### 4. Deploying ECS Nodes (`step1` or `island-step1`) - -Once the deploy.yml file has been correctly written and the Install Node rebooted if needed, then the next step is to simply run one of the following commands: - -* Internet-connected environments: `step1` -* Island environments: `island-step1` - -After the installer initializes, the EMC ECS license agreement will appear on the screen. Press `q` to close the screen and type `yes` to accept the license and continue or `no` to abort the process. The install cannot continue until the license agreement has been accepted. - -The first thing the installer will do is create an artifact cache of base operating system packages and the ECS software Docker image. If you are running `step1`, please skip to **Step 5**. If you are running `island-step1`, then the installer will stop after this step. The install node can then be migrated into your island environment where deployment can continue. - -##### 4.5. Deploying the Island Environment ECS Nodes (`island-step2`) -###### Important Note -> If you are deploying to Internet-connected nodes and used `step1` to begin your deployment, please skip to **Step 5**. - -* Internet-connected environments: *automatic* -* Island environments: `island-step2` - -If you are deploying into an island environment and have migrated the install node into your island, you can begin this process by running `island-step2`. The next tasks the installer will perform are: configuring the ECS nodes, performing a pre-flight check to ensure ECS nodes are viable deployment targets, distributing the artifact cache to ECS nodes, installing necessary packages, and finally deploying the ECS software and init scripts onto ECS nodes. - -### 5. Deploying ECS Topology (`step2` or `island-step3`) - -* Internet-connected environments: `step2` -* Island environments: `island-step3` - -Once either `step1` or `island-step2` have completed, you may then direct the installer to configure the ECS topology by running either `step2` or `island-step3`. These commands are identical. Once `step2` or `island-step3` have completed, your ECS will be ready for use. 
-If you would prefer to manually configure your ECS topology, you may skip this step entirely. - -## OVA Installation - -ECS Community Edition can optionally be installed with the OVA available [on the release notes page](https://github.com/EMCECS/ECS-CommunityEdition/releases). To install with this method: - -### 1. Download and deploy the OVA to a VM - -### 2. Adjust the resources to have a minimum of: - - * 16GB RAM - * 4 CPU cores - * (Optional) Increase vmdk from the minimum 104GB - -### 3. Clone VM to number of nodes desired - -### 4. Collect network information - -Power on VM's and collect their DHCP assigned IP addresses from the vCenter client or from the VMs themselves - -You may also assign static IP addresses by logging into each VM and running `nmtui` to set network the network variables (IP, mask, gateway, DNS, etc). - -### 5. Log into the first VM and run `videploy` - -Follow the directions laid out in the standard installation concerning the creation of the deploy.yml file (section 2). - -After completing the deploy.yml file, exit out of `videploy`, this will update the deploy.yml file. - -### 6. Run `step1` - -### 7. Run `step2` - -###### Important Note: `step1` and `step2` are not scripts and should not be run as such. `./step1` is not a valid command. - - -## That's it! -Assuming all went well, you now have a functioning ECS Community Edition instance and you may now proceed with your test efforts. +[standard-in]: Standard_Installation.md +[island-in]: Island_Installation.md +[ova-in]: OVA_Installation.md +[deploy_yml]: deploy.yml.md diff --git a/docs/source/installation/ECS-Installation.rst b/docs/source/installation/ECS-Installation.rst index 2098eb1c..c095202a 100644 --- a/docs/source/installation/ECS-Installation.rst +++ b/docs/source/installation/ECS-Installation.rst @@ -89,7 +89,7 @@ installation node as well as the data node in the deploy.yml file. ~~~~~~~~~~~~~~~~~~ Please use a non-root administrative user account with sudo privileges -on the Install Node when performing the deployment. If deploying from +on the install node when performing the deployment. If deploying from the provided OVA, this account is username ``admin`` with password ``ChangeMe``. @@ -146,11 +146,11 @@ fields mentioned below are required for a successful installation.** 0. From the ECS-CommunityEdition directory, run the commmand: ``cp docs/design/reference.deploy.yml deploy.yml`` 1. Edit the file with your favorite editor on another machine, or use - ``vi deploy.yml`` on the Install Node. Read the comments in the file + ``vi deploy.yml`` on the install node. Read the comments in the file and review the examples in the ``examples/`` directory. 2. Top-level deployment facts (``facts:``) - 0. Enter the IP address of the Install Node into the + 0. Enter the IP address of the install node into the ``install_node:`` field. 1. Enter into the ``management_clients:`` field the CIDR address/mask of each machine or subnet that will be whitelisted in node's @@ -342,7 +342,7 @@ before continuing to Step 4. 4. 
Deploying ECS Nodes (``step1`` or ``island-step1``)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Once the deploy.yml file has been correctly written and the Install Node
+Once the deploy.yml file has been correctly written and the install node
 rebooted if needed, then the next step is to simply run one of the
 following commands:
diff --git a/docs/source/installation/Island_Installation.md b/docs/source/installation/Island_Installation.md
new file mode 100644
index 00000000..fbaa9378
--- /dev/null
+++ b/docs/source/installation/Island_Installation.md
@@ -0,0 +1,233 @@
+# ECS Community Edition Installation
+
+## Island Installation
+
+The island installation assumes an Internet-connected VM which will be bootstrapped and become an install node. The install node will then be migrated into a network-isolated environment, and the ECS deployment will proceed from the install node.
+
+### Prerequisites
+
+Listed below are all necessary components for a successful ECS Community Edition installation. If they are not met, the installation will likely fail.
+
+#### Hardware Requirements
+
+The installation process is designed to be performed from a dedicated installation node. However, it is possible, if you so choose, for one of the ECS data nodes to double as the install node. The install node will bootstrap the ECS data nodes and configure the ECS instance. When the process is complete, the install node may be safely destroyed. Both single-node and multi-node deployments require only a single install node.
+
+The technical requirements for the installation node are minimal, but reducing available CPU, memory, and IO throughput will adversely affect the speed of the installation process:
+
+* 1 CPU Core
+* 2 GB Memory
+* 10 GB HDD
+* CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported)
+
+The minimum technical requirements for each ECS data node are:
+
+* 4 CPU Cores
+* 16 GB Memory
+* 16 GB Minimum system block storage device
+* 104 GB Minimum additional block storage device in a raw, unpartitioned state.
+* CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported)
+
+The recommended technical requirements for each ECS data node are:
+
+* 8 CPU Cores
+* 64 GB RAM
+* 16 GB root block storage
+* 1 TB additional block storage
+* CentOS 7.3 Minimal installation
+
+For multi-node installations, each data node must fulfill these minimum qualifications. The installer will do a pre-flight check to ensure that the minimum qualifications are met. If they are not, the installation will not continue.
+
+#### Environmental Requirements
+
+The following environmental requirements must also be met to ensure a successful installation:
+
+* **Network:** All nodes, including the install node and ECS data node(s), must exist on the same IPv4 subnet. IPv6 networking *may* work, but is neither tested nor supported for ECS Community Edition at this time.
+* **Remote Access:** Installation is coordinated via Ansible and SSH. However, public key authentication during the initial authentication and access configuration is not yet supported. Therefore, password authentication must be enabled on all nodes, including the install node and ECS data node(s). *This is a known issue and will be addressed in a future release.*
+* **OS:** CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported)
+
+#### All-in-One Single-Node Deployments
+
+A single node *can* successfully run the installation procedure on itself. To do this, simply input the node's own IP address as both the installation node and the data node in the deploy.yml file, as sketched below.
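+
+As a quick, non-authoritative sketch (the address below is a placeholder, and the exact field layout is covered in the walkthrough in the next section and in `docs/design/reference.deploy.yml`), an all-in-one map simply repeats the node's own IP address in both roles:
+
+```
+facts:
+  install_node: 192.168.2.200
+  # ... ssh_defaults, node_defaults, and other fields per the walkthrough ...
+  storage_pools:
+    - name: sp1
+      members:
+        - 192.168.2.200   # the same machine doubles as the only data node
+```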
+To do this, simply enter the node's own IP address as both the install node and the data node in the deploy.yml file.
+
+### 1. Getting Started
+
+It is recommended to use a non-root administrative user account with sudo privileges on the install node when performing the deployment. Deploying from the root account is supported, but not recommended.
+
+Before data store nodes can be created, the install node must be prepared. If acquiring the software via the GitHub repository, run:
+
+0. `cd $HOME`
+0. `sudo yum install -y git`
+0. `git clone https://github.com/EMCECS/ECS-CommunityEdition`
+
+If the repository is being added to the machine via USB drive, scp, or some other file-based means, please copy the archive into `$HOME/` and run:
+
+* for a .zip archive: `unzip ECS-CommunityEdition.zip`
+* for a .tar.gz archive: `tar -xzvf ECS-CommunityEdition.tar.gz`
+
+If the directory created when unarchiving the release .zip or tarball has a different name than `ECS-CommunityEdition`, then rename it with the following command:
+
+0. `mv <directory name> ECS-CommunityEdition`
+
+This will help the documentation make sense as you proceed with the deployment.
+
+### 2. Creating The Deployment Map (`deploy.yml`)
+Installation requires the creation of a deployment map. This map is represented in a YAML configuration file called deploy.yml.
+
+Below are steps for creating a basic deploy.yml. **All fields indicated below are required for a successful installation.** A minimal sketch follows this list.
+
+0. From the `$HOME/ECS-CommunityEdition` directory, run the command:
+`cp docs/design/reference.deploy.yml deploy.yml`
+0. Edit the file with your favorite editor on another machine, or use `vi deploy.yml` on the install node. Read the comments in the file and review the examples in the `examples/` directory.
+0. Top-level deployment facts (`facts:`)
+    0. Enter the IP address of the install node into the `install_node:` field.
+    0. Enter into the `management_clients:` field the CIDR address/mask of each machine or subnet that will be whitelisted in the nodes' firewalls and allowed to communicate with the ECS management API.
+        * `10.1.100.50/32` is *exactly* the IP address.
+        * `192.168.2.0/24` is the entire /24 subnet.
+        * `0.0.0.0/0` represents the entire Internet.
+0. SSH login details (`ssh_defaults:`)
+    0. If the SSH server is bound to a non-standard port, enter that port number in the `ssh_port:` field, or leave it set at the default (22).
+    0. Enter the username of a user permitted to run commands as UID 0/GID 0 ("root") via the `sudo` command into the `ssh_username:` field. This must be the same across all nodes.
+    0. Enter the password for the above user in the `ssh_password:` field. This will only be used during the initial public key authentication setup and can be changed after. This must be the same across all nodes.
+0. Node configuration (`node_defaults:`)
+    0. Enter the DNS domain for the ECS installation. This can simply be set to `localdomain` if you will not be using DNS with this ECS deployment.
+    0. Enter each DNS server address, one per line, into `dns_servers:`. This can be what's present in `/etc/resolv.conf`, or it can be a different DNS server entirely. This DNS server will be set to the primary DNS server for each ECS node.
+    0. Enter each NTP server address, one per line, into `ntp_servers:`.
+0. Storage Pool configuration (`storage_pools:`)
+    0. Enter the storage pool `name:`.
+    0. Enter each member data node's IP address, one per line, in `members:`.
+    0. Under `options:`, enter each block device reserved for ECS, one per line, in `ecs_block_devices:`. All member data nodes of a storage pool must be identical.
+0. Virtual Data Center configuration (`virtual_data_centers:`)
+    0. Enter each VDC `name:`.
+    0. Enter each member Storage Pool name, one per line, in `members:`.
+0. Optional directives, such as those for Replication Groups and users, may also be configured at this time.
+0. When you have completed the `deploy.yml` to your liking, save the file and exit the `vi` editor.
+0. Move on to Bootstrapping
+
+These steps quickly set up a basic deploy.yml file.
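+
+As a point of reference, a minimal single-storage-pool map might look like the sketch below. Every address, name, and device is a placeholder, and `dns_domain` is an assumed key name; the authoritative schema lives in `docs/design/reference.deploy.yml`.
+
+```
+# Illustrative only -- not a drop-in file. Values are fake; verify each
+# field against docs/design/reference.deploy.yml before using it.
+cat > deploy.yml <<'EOF'
+facts:
+  install_node: 192.168.2.200
+  management_clients:
+    - 192.168.2.0/24
+  ssh_defaults:
+    ssh_port: 22
+    ssh_username: admin
+    ssh_password: ChangeMe
+  node_defaults:
+    dns_domain: localdomain      # assumed key name
+    dns_servers:
+      - 192.168.2.2
+    ntp_servers:
+      - 192.168.2.5
+  storage_pools:
+    - name: sp1
+      members:
+        - 192.168.2.220
+      options:
+        ecs_block_devices:
+          - /dev/vdb
+  virtual_data_centers:
+    - name: vdc1
+      members:
+        - sp1
+EOF
+```
+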
+#### More on deploy.yml
+If you need to make changes to your deploy.yml after bootstrapping, there are two utilities for this:
+
+0. The `videploy` utility will update the installed `deploy.yml` file in place and is the preferred method.
+0. The `update_deploy` utility will update the installed `deploy.yml` file with the contents of a different `deploy.yml` file.
+
+See the [utilities][utilities] document for more information on these and other ECS CE utilities.
+
+For more information on deploy.yml, please read the reference guide found [here](deploy.yml.md).
+
+### 3. Bootstrapping the Install Node (`bootstrap.sh`)
+The bootstrap script configures the install node for ECS deployment and downloads the required Docker images and software packages that all other nodes in the deployment will need for a successful installation.
+
+Once the deploy.yml file has been created, the install node must be bootstrapped. To do this, `cd` into the ECS-CommunityEdition directory and run `./bootstrap.sh -c deploy.yml`. Be sure to add the `-g` flag if building the ECS deployment in a virtual environment and the `-y` flag if you're okay accepting all defaults.
+
+The bootstrap script accepts many flags. If your environment uses proxies, including MitM SSL proxies, custom nameservers, or a local Docker registry or CentOS mirror, you may want to indicate that on the `bootstrap.sh` command line. Arguments shown in angle brackets below are placeholders for your own values.
+
+```
+[Usage]
+  -h, --help
+      Display this help text and exit
+  --help-build
+      Display build environment help and exit
+  --version
+      Display version information and exit
+
+[General Options]
+  -y / -n
+      Assume YES or NO to any questions (may be dangerous).
+  -v / -q
+      Be verbose (also show all logs) / Be quiet (only show necessary output)
+  -c <deploy.yml>
+      If you have a deploy.yml ready to go, give its path to this arg.
+
+[Platform Options]
+  --ssh-private-key <path>
+  --ssh-public-key <path>
+      Import SSH public key auth material and use it when authenticating to remote nodes.
+  -o, --override-dns <ns1[,ns2,...]>
+      Override DHCP-configured nameserver(s); use these instead. No spaces! Use of -o is deprecated, please use --override-dns.
+  -g, --vm-tools
+      Install virtual machine guest agents and utilities for QEMU and VMWare. VirtualBox is not supported at this time. Use of -g is deprecated, please use --vm-tools.
+  -m, --centos-mirror <URL>
+      Use the provided package mirror <URL> when fetching packages for the base OS (but not 3rd-party sources, such as EPEL or Debian-style PPAs). The mirror is specified as '<protocol>://<host>/<path>'. This option overrides any mirror lists the base OS would normally use AND supersedes any proxies (assuming the mirror is local), so be warned that when using this option it's possible for bootstrapping to hang indefinitely if the mirror cannot be contacted. Use of -m is deprecated, please use --centos-mirror.
+
+[Docker Options]
+  -r, --registry-endpoint REGISTRY
+      Use the Docker registry at REGISTRY instead of DockerHub. The connect string is specified as '<host>:<port>[/<namespace>]'. You may be prompted for your credentials if authentication is required. You may need to use -d (below) to add the registry's cert to Docker. Use of -r is deprecated, please use --registry-endpoint.
+
+  -l, --registry-login
+      After Docker is installed, log in to the Docker registry to access images which require access authentication. This will authenticate with DockerHub unless --registry-endpoint is also used. Use of -l is deprecated, please use --registry-login.
+
+  -d, --registry-cert <certfile>
+      [Requires --registry-endpoint] If an alternate Docker registry was specified with -r and uses a cert that cannot be resolved from the anchors in the local system's trust store, then use -d to specify the x509 cert file for your registry.
+
+[Proxies & Middlemen]
+  -p, --proxy-endpoint PROXY
+      Connect to the Internet via the PROXY specified as '[user:pass@]<host>:<port>'. Items in [] are optional. It is assumed this proxy handles all protocols. Use of -p is deprecated, please use --proxy-endpoint.
+  -k, --proxy-cert <certfile>
+      Install the certificate in <certfile> into the local trust store. This is useful for environments that live behind a corporate HTTPS proxy. Use of -k is deprecated, please use --proxy-cert.
+  -t, --proxy-test-via HOSTSPEC
+      [Requires --proxy-endpoint] Test Internet connectivity through the PROXY by connecting to HOSTSPEC. HOSTSPEC is specified as '<host>:<port>'. By default 'google.com:80' is used. Unless access to Google is blocked (or vice versa), there is no need to change the default.
+
+[Examples]
+  Install VM guest agents and use SSH public key auth keys in the .ssh/ directory.
+      $ bash bootstrap.sh --vm-tools --ssh-private-key .ssh/id_rsa --ssh-public-key .ssh/id_rsa.pub
+
+  Quietly use nlanr.peer.local on port 80 and test the connection using EMC's webserver.
+      $ bash bootstrap.sh -q --proxy-endpoint nlanr.peer.local:80 --proxy-test-via emc.com:80
+
+  Assume YES to all questions. Use the CentOS mirror at http://cache.local/centos when fetching packages. Use the Docker registry at registry.local:5000 instead of DockerHub, and install the x509 certificate in certs/reg.pem into Docker's trust store so it can access the Docker registry.
+      $ bash bootstrap.sh -y --centos-mirror http://cache.local/centos --registry-endpoint registry.local:5000 --registry-cert certs/reg.pem
+```
+
+The bootstrapping process has completed when the following message appears:
+
+```
+> All done bootstrapping your install node.
+>
+> To continue (after reboot if needed):
+>      $ cd /home/admin/ECS-CommunityEdition
+> If you have a deploy.yml ready to go (and did not use -c flag):
+>      $ sudo cp deploy.yml /opt/emc/ecs-install/
+> If not, check out the docs/design and examples directory for references.
+> Once you have a deploy.yml, you can start the deployment
+> by running:
+>
+> [WITH Internet access]
+>      $ step1
+>        [Wait for deployment to complete, then run:]
+>      $ step2
+>
+> [WITHOUT Internet access]
+>      $ island-step1
+>        [Migrate your install node into the isolated environment and run:]
+>      $ island-step2
+>        [Wait for deployment to complete, then run:]
+>      $ island-step3
+>
+```
+
+After the install node has successfully bootstrapped, you will likely be prompted to reboot the machine. If so, then the machine MUST be rebooted before continuing to Step 4.
+
+### 4. Deploying ECS Nodes (`island-step1`)
+
+Once the deploy.yml file has been correctly written and the install node rebooted if needed, the next step is to simply run `island-step1`.
+
+After the installer initializes, the EMC ECS license agreement will appear on the screen. Press `q` to close the screen and type `yes` to accept the license and continue or `no` to abort the process. The install cannot continue until the license agreement has been accepted.
+
+The first thing the installer will do is create an artifact cache of base operating system packages and the ECS software Docker image. The installer will stop after this step.
+
+### 5. Migrate the Install Node
+
+At this time, please shut down the install node VM and migrate it into your isolated environment.
+
+### 6. Deploying the Island Environment ECS Nodes (`island-step2`)
+
+Once the install node has been migrated into your island, you can begin deploying ECS by running `island-step2`. The next tasks the installer will perform are: configuring the ECS nodes, performing a pre-flight check to ensure ECS nodes are viable deployment targets, distributing the artifact cache to ECS nodes, installing necessary packages, and finally deploying the ECS software and init scripts onto ECS nodes.
+
+### 7. Deploying ECS Topology (`island-step3`)
+*If you would prefer to manually configure your ECS topology, you may skip this step entirely.*
+
+Once `island-step2` has completed, you may then direct the installer to configure the ECS topology by running `island-step3`. Once `island-step3` has completed, your ECS will be ready for use.
+
+## That's it!
+Assuming all went well, you now have a functioning ECS Community Edition instance and you may now proceed with your test efforts.
+
+[utilities]: /utilities/utilities.md
diff --git a/docs/source/installation/OVA_Installation.md b/docs/source/installation/OVA_Installation.md
new file mode 100644
index 00000000..ed2b4e61
--- /dev/null
+++ b/docs/source/installation/OVA_Installation.md
@@ -0,0 +1,129 @@
+# ECS Community Edition Installation
+
+## OVA Installation
+
+The OVA installation assumes deployment in a network-isolated environment. One clone of the OVA will become an install node. The ECS deployment will then proceed from the install node.
+
+### Prerequisites
+
+Listed below are all necessary components for a successful ECS Community Edition installation. If they are not met, the installation will likely fail.
+
+#### Hardware Requirements
+
+The installation process is designed to be performed from a dedicated installation node. However, it is possible, if you so choose, for one of the ECS data nodes to double as the install node. The install node will bootstrap the ECS data nodes and configure the ECS instance. When the process is complete, the install node may be safely destroyed. Both single-node and multi-node deployments require only a single install node.
+
+The technical requirements for the installation node are minimal, but reducing available CPU, memory, and IO throughput will adversely affect the speed of the installation process:
+
+* 1 CPU Core
+* 2 GB Memory
+* 10 GB HDD
+* CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported)
+
+The minimum technical requirements for each ECS data node are:
+
+* 4 CPU Cores
+* 16 GB Memory
+* 16 GB Minimum system block storage device
+* 104 GB Minimum additional block storage device in a raw, unpartitioned state
+* CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported)
+
+The recommended technical requirements for each ECS data node are:
+
+* 8 CPU Cores
+* 64 GB Memory
+* 16 GB root block storage
+* 1 TB additional block storage
+* CentOS 7.3 Minimal installation
+
+For multi-node installations, each data node must fulfill these minimum qualifications. The installer will do a pre-flight check to ensure that the minimum qualifications are met. If they are not, the installation will not continue.
+
+#### Environmental Requirements
+
+The following environmental requirements must also be met to ensure a successful installation:
+
+* **Network:** All nodes, including install node and ECS data node(s), must exist on the same IPv4 subnet. IPv6 networking *may* work, but is neither tested nor supported for ECS Community Edition at this time.
+* **Remote Access:** Installation is coordinated via Ansible and SSH. However, public key authentication during the initial authentication and access configuration is not yet supported. Therefore, password authentication must be enabled on all nodes, including install node and ECS data node(s). *This is a known issue and will be addressed in a future release.*
+* **OS:** CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported)
+
+#### All-in-One Single-Node Deployments
+
+A single node *can* successfully run the installation procedure on itself. To do this, simply enter the node's own IP address as both the install node and the data node in the deploy.yml file.
+
+### 1. Getting Started
+
+#### 1.1. Download and deploy the OVA to a VM
+The OVA is available for download from [the release notes page](https://github.com/EMCECS/ECS-CommunityEdition/releases). Select the most recent version of the OVA for the best experience.
+
+#### 1.2. Adjust the VM's resources to have a minimum of:
+
+* 16 GB RAM
+* 4 CPU cores
+* (Optional) Increase the vmdk from the minimum 104 GB
+
+#### 1.3. Clone the VM
+Clone the VM you created enough times to reach the number of nodes desired for your deployment. The minimum number of nodes for basic functionality is one (1). The minimum number of nodes for erasure coding replication to be enabled is three (3).
+
+#### 1.4. Collect and configure networking information
+
+Power on the VMs and collect their DHCP-assigned IP addresses from the vCenter client or from the VMs themselves.
+
+You may also assign static IP addresses by logging into each VM and running `nmtui` to set the network variables (IP, mask, gateway, DNS, etc.).
+
+The information you collect in this step is crucial for step 2.
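+
+If you prefer to script the static addressing instead of using `nmtui`, a minimal sketch with `nmcli` follows. The connection name, addresses, and DNS server below are placeholders; adjust them per node.
+
+```
+# Hypothetical static IPv4 assignment for one node.
+# "ens160" is a placeholder connection/interface name;
+# list yours with: nmcli connection show
+nmcli connection modify ens160 ipv4.method manual \
+  ipv4.addresses 192.168.2.220/24 \
+  ipv4.gateway 192.168.2.1 \
+  ipv4.dns 192.168.2.2
+nmcli connection up ens160
+```
+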
+### 2. Creating The Deployment Map (`deploy.yml`)
+Installation requires the creation of a deployment map. This map is represented in a YAML configuration file called deploy.yml.
+
+Below are steps for creating a basic deploy.yml. **All fields indicated below are required for a successful installation.**
+
+0. Log into the first VM and run `videploy`.
+0. Edit this deploy.yml file with your favorite editor on another machine, or use `vi deploy.yml` on the install node. Read the comments in the file and review the examples in the `examples/` directory.
+0. Top-level deployment facts (`facts:`)
+    0. Enter the IP address of the install node into the `install_node:` field.
+    0. Enter into the `management_clients:` field the CIDR address/mask of each machine or subnet that will be whitelisted in the nodes' firewalls and allowed to communicate with the ECS management API.
+        * `10.1.100.50/32` is *exactly* the IP address.
+        * `192.168.2.0/24` is the entire /24 subnet.
+        * `0.0.0.0/0` represents the entire Internet.
+0. SSH login details (`ssh_defaults:`)
+    0. If the SSH server is bound to a non-standard port, enter that port number in the `ssh_port:` field, or leave it set at the default (22).
+    0. Enter the username of a user permitted to run commands as UID 0/GID 0 ("root") via the `sudo` command into the `ssh_username:` field. This must be the same across all nodes.
+    0. Enter the password for the above user in the `ssh_password:` field. This will only be used during the initial public key authentication setup and can be changed after. This must be the same across all nodes.
+0. Node configuration (`node_defaults:`)
+    0. Enter the DNS domain for the ECS installation. This can simply be set to `localdomain` if you will not be using DNS with this ECS deployment.
+    0. Enter each DNS server address, one per line, into `dns_servers:`. This can be what's present in `/etc/resolv.conf`, or it can be a different DNS server entirely. This DNS server will be set to the primary DNS server for each ECS node.
+    0. Enter each NTP server address, one per line, into `ntp_servers:`.
+0. Storage Pool configuration (`storage_pools:`)
+    0. Enter the storage pool `name:`.
+    0. Enter each member data node's IP address, one per line, in `members:`.
+    0. Under `options:`, enter each block device reserved for ECS, one per line, in `ecs_block_devices:`. All member data nodes of a storage pool must be identical.
+0. Virtual Data Center configuration (`virtual_data_centers:`)
+    0. Enter each VDC `name:`.
+    0. Enter each member Storage Pool name, one per line, in `members:`.
+0. Optional directives, such as those for Replication Groups and users, may also be configured at this time.
+0. After completing the deploy.yml file to your liking, exit out of `videploy` as you would the `vim` editor (`ESC`, then `:wq`, then `ENTER`). This will update the deploy.yml file.
+
+#### More on deploy.yml
+If you need to make changes to your deploy.yml after bootstrapping, there are two utilities for this:
+
+0. The `videploy` utility will update the installed `deploy.yml` file in place and is the preferred method.
+0. The `update_deploy` utility will update the installed `deploy.yml` file with the contents of a different `deploy.yml` file.
+
+See the [utilities][utilities] document for more information on these and other ECS CE utilities.
+
+For more information on deploy.yml, please read the reference guide found [here](deploy.yml.md).
+
+### 3. Deploying ECS Nodes (`ova-step1`)
+
+Once the deploy.yml file has been correctly written, the next step is to simply run `ova-step1`.
+
+After the installer initializes, the EMC ECS license agreement will appear on the screen. Press `q` to close the screen and type `yes` to accept the license and continue or `no` to abort the process. The install cannot continue until the license agreement has been accepted.
+
+### 4. Deploying ECS Topology (`ova-step2`)
+*If you would prefer to manually configure your ECS topology, you may skip this step entirely.*
+
+Once `ova-step1` has completed, you may then direct the installer to configure the ECS topology by running `ova-step2`. Once `ova-step2` has completed, your ECS will be ready for use.
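+
+In short, the whole OVA flow from the first VM looks like the following sketch, assuming defaults and a completed deploy.yml:
+
+```
+# Run on the install node (the first cloned VM).
+videploy     # edit and install deploy.yml, exit with :wq
+ova-step1    # deploy ECS to the nodes
+ova-step2    # (optional) configure the ECS topology
+```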
+
+## That's it!
+Assuming all went well, you now have a functioning ECS Community Edition instance and you may now proceed with your test efforts.
+
+[utilities]: /utilities/utilities.md
diff --git a/docs/source/installation/Standard_Installation.md b/docs/source/installation/Standard_Installation.md
new file mode 100644
index 00000000..cea4b9cb
--- /dev/null
+++ b/docs/source/installation/Standard_Installation.md
@@ -0,0 +1,223 @@
+# ECS Community Edition Installation
+
+## Standard Installation
+
+The standard installation assumes an Internet-connected VM which will be bootstrapped and become an install node. The ECS deployment will then proceed from the install node.
+
+### Prerequisites
+
+Listed below are all necessary components for a successful ECS Community Edition installation. If they are not met, the installation will likely fail.
+
+#### Hardware Requirements
+
+The installation process is designed to be performed from a dedicated installation node. However, it is possible, if you so choose, for one of the ECS data nodes to double as the install node. The install node will bootstrap the ECS data nodes and configure the ECS instance. When the process is complete, the install node may be safely destroyed. Both single-node and multi-node deployments require only a single install node.
+
+The technical requirements for the installation node are minimal, but reducing available CPU, memory, and IO throughput will adversely affect the speed of the installation process:
+
+* 1 CPU Core
+* 2 GB Memory
+* 10 GB HDD
+* CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported)
+
+The minimum technical requirements for each ECS data node are:
+
+* 4 CPU Cores
+* 16 GB Memory
+* 16 GB Minimum system block storage device
+* 104 GB Minimum additional block storage device in a raw, unpartitioned state
+* CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported)
+
+The recommended technical requirements for each ECS data node are:
+
+* 8 CPU Cores
+* 64 GB Memory
+* 16 GB root block storage
+* 1 TB additional block storage
+* CentOS 7.3 Minimal installation
+
+For multi-node installations, each data node must fulfill these minimum qualifications. The installer will do a pre-flight check to ensure that the minimum qualifications are met. If they are not, the installation will not continue.
+
+#### Environmental Requirements
+
+The following environmental requirements must also be met to ensure a successful installation:
+
+* **Network:** All nodes, including install node and ECS data node(s), must exist on the same IPv4 subnet. IPv6 networking *may* work, but is neither tested nor supported for ECS Community Edition at this time.
+* **Remote Access:** Installation is coordinated via Ansible and SSH. However, public key authentication during the initial authentication and access configuration is not yet supported. Therefore, password authentication must be enabled on all nodes, including install node and ECS data node(s). *This is a known issue and will be addressed in a future release.*
+* **OS:** CentOS 7 Minimal installation (ISO- and network-based minimal installs are equally supported)
+
+#### All-in-One Single-Node Deployments
+
+A single node *can* successfully run the installation procedure on itself. To do this, simply enter the node's own IP address as both the install node and the data node in the deploy.yml file.
+
+### 1. Getting Started
+
+It is recommended to use a non-root administrative user account with sudo privileges on the install node when performing the deployment. Deploying from the root account is supported, but not recommended.
+
+Before data store nodes can be created, the install node must be prepared. If acquiring the software via the GitHub repository, run:
+
+0. `cd $HOME`
+0. `sudo yum install -y git`
+0. `git clone https://github.com/EMCECS/ECS-CommunityEdition`
+
+If the repository is being added to the machine via USB drive, scp, or some other file-based means, please copy the archive into `$HOME/` and run:
+
+* for a .zip archive: `unzip ECS-CommunityEdition.zip`
+* for a .tar.gz archive: `tar -xzvf ECS-CommunityEdition.tar.gz`
+
+If the directory created when unarchiving the release .zip or tarball has a different name than `ECS-CommunityEdition`, then rename it with the following command:
+
+0. `mv <directory name> ECS-CommunityEdition`
+
+This will help the documentation make sense as you proceed with the deployment.
+
+### 2. Creating The Deployment Map (`deploy.yml`)
+Installation requires the creation of a deployment map. This map is represented in a YAML configuration file called deploy.yml.
+
+Below are steps for creating a basic deploy.yml. **All fields indicated below are required for a successful installation.**
+
+0. From the `$HOME/ECS-CommunityEdition` directory, run the command:
+`cp docs/design/reference.deploy.yml deploy.yml`
+0. Edit the file with your favorite editor on another machine, or use `vi deploy.yml` on the install node. Read the comments in the file and review the examples in the `examples/` directory.
+0. Top-level deployment facts (`facts:`)
+    0. Enter the IP address of the install node into the `install_node:` field.
+    0. Enter into the `management_clients:` field the CIDR address/mask of each machine or subnet that will be whitelisted in the nodes' firewalls and allowed to communicate with the ECS management API.
+        * `10.1.100.50/32` is *exactly* the IP address.
+        * `192.168.2.0/24` is the entire /24 subnet.
+        * `0.0.0.0/0` represents the entire Internet.
+0. SSH login details (`ssh_defaults:`)
+    0. If the SSH server is bound to a non-standard port, enter that port number in the `ssh_port:` field, or leave it set at the default (22).
+    0. Enter the username of a user permitted to run commands as UID 0/GID 0 ("root") via the `sudo` command into the `ssh_username:` field. This must be the same across all nodes.
+    0. Enter the password for the above user in the `ssh_password:` field. This will only be used during the initial public key authentication setup and can be changed after. This must be the same across all nodes.
+0. Node configuration (`node_defaults:`)
+    0. Enter the DNS domain for the ECS installation. This can simply be set to `localdomain` if you will not be using DNS with this ECS deployment.
+    0. Enter each DNS server address, one per line, into `dns_servers:`. This can be what's present in `/etc/resolv.conf`, or it can be a different DNS server entirely. This DNS server will be set to the primary DNS server for each ECS node.
+    0. Enter each NTP server address, one per line, into `ntp_servers:`.
+0. Storage Pool configuration (`storage_pools:`)
+    0. Enter the storage pool `name:`.
+    0. Enter each member data node's IP address, one per line, in `members:`.
+    0. Under `options:`, enter each block device reserved for ECS, one per line, in `ecs_block_devices:`. All member data nodes of a storage pool must be identical (see the block device check sketched after this list).
+0. Virtual Data Center configuration (`virtual_data_centers:`)
+    0. Enter each VDC `name:`.
+    0. Enter each member Storage Pool name, one per line, in `members:`.
+0. Optional directives, such as those for Replication Groups and users, may also be configured at this time.
+0. When you have completed the `deploy.yml` to your liking, save the file and exit the `vi` editor.
+0. Move on to Bootstrapping
+
+These steps quickly set up a basic deploy.yml file.
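+
+Before filling in `ecs_block_devices:`, it can help to confirm on each data node that the device really is raw and unpartitioned. A minimal check, assuming a candidate device named `/dev/vdb` (a placeholder):
+
+```
+# A suitable ECS device has no children (partitions) and no
+# filesystem signature in the output below.
+lsblk -f /dev/vdb
+# If the device was previously used, wiping its signatures returns it
+# to a raw state. DESTRUCTIVE -- only on the device reserved for ECS:
+sudo wipefs --all /dev/vdb
+```
+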
+#### More on deploy.yml
+If you need to make changes to your deploy.yml after bootstrapping, there are two utilities for this:
+
+0. The `videploy` utility will update the installed `deploy.yml` file in place and is the preferred method.
+0. The `update_deploy` utility will update the installed `deploy.yml` file with the contents of a different `deploy.yml` file.
+
+See the [utilities][utilities] document for more information on these and other ECS CE utilities.
+
+For more information on deploy.yml, please read the reference guide found [here](deploy.yml.md).
+
+### 3. Bootstrapping the Install Node (`bootstrap.sh`)
+The bootstrap script configures the install node for ECS deployment and downloads the required Docker images and software packages that all other nodes in the deployment will need for a successful installation.
+
+Once the deploy.yml file has been created, the install node must be bootstrapped. To do this, `cd` into the ECS-CommunityEdition directory and run `./bootstrap.sh -c deploy.yml`. Be sure to add the `-g` flag if building the ECS deployment in a virtual environment and the `-y` flag if you're okay accepting all defaults.
+
+The bootstrap script accepts many flags. If your environment uses proxies, including MitM SSL proxies, custom nameservers, or a local Docker registry or CentOS mirror, you may want to indicate that on the `bootstrap.sh` command line. Arguments shown in angle brackets below are placeholders for your own values.
+
+```
+[Usage]
+  -h, --help
+      Display this help text and exit
+  --help-build
+      Display build environment help and exit
+  --version
+      Display version information and exit
+
+[General Options]
+  -y / -n
+      Assume YES or NO to any questions (may be dangerous).
+  -v / -q
+      Be verbose (also show all logs) / Be quiet (only show necessary output)
+  -c <deploy.yml>
+      If you have a deploy.yml ready to go, give its path to this arg.
+
+[Platform Options]
+  --ssh-private-key <path>
+  --ssh-public-key <path>
+      Import SSH public key auth material and use it when authenticating to remote nodes.
+  -o, --override-dns <ns1[,ns2,...]>
+      Override DHCP-configured nameserver(s); use these instead. No spaces! Use of -o is deprecated, please use --override-dns.
+  -g, --vm-tools
+      Install virtual machine guest agents and utilities for QEMU and VMWare. VirtualBox is not supported at this time. Use of -g is deprecated, please use --vm-tools.
+  -m, --centos-mirror <URL>
+      Use the provided package mirror <URL> when fetching packages for the base OS (but not 3rd-party sources, such as EPEL or Debian-style PPAs). The mirror is specified as '<protocol>://<host>/<path>'. This option overrides any mirror lists the base OS would normally use AND supersedes any proxies (assuming the mirror is local), so be warned that when using this option it's possible for bootstrapping to hang indefinitely if the mirror cannot be contacted. Use of -m is deprecated, please use --centos-mirror.
+
+[Docker Options]
+  -r, --registry-endpoint REGISTRY
+      Use the Docker registry at REGISTRY instead of DockerHub. The connect string is specified as '<host>:<port>[/<namespace>]'. You may be prompted for your credentials if authentication is required. You may need to use -d (below) to add the registry's cert to Docker. Use of -r is deprecated, please use --registry-endpoint.
+
+  -l, --registry-login
+      After Docker is installed, log in to the Docker registry to access images which require access authentication. This will authenticate with DockerHub unless --registry-endpoint is also used. Use of -l is deprecated, please use --registry-login.
+
+  -d, --registry-cert <certfile>
+      [Requires --registry-endpoint] If an alternate Docker registry was specified with -r and uses a cert that cannot be resolved from the anchors in the local system's trust store, then use -d to specify the x509 cert file for your registry.
+
+[Proxies & Middlemen]
+  -p, --proxy-endpoint PROXY
+      Connect to the Internet via the PROXY specified as '[user:pass@]<host>:<port>'. Items in [] are optional. It is assumed this proxy handles all protocols. Use of -p is deprecated, please use --proxy-endpoint.
+  -k, --proxy-cert <certfile>
+      Install the certificate in <certfile> into the local trust store. This is useful for environments that live behind a corporate HTTPS proxy. Use of -k is deprecated, please use --proxy-cert.
+  -t, --proxy-test-via HOSTSPEC
+      [Requires --proxy-endpoint] Test Internet connectivity through the PROXY by connecting to HOSTSPEC. HOSTSPEC is specified as '<host>:<port>'. By default 'google.com:80' is used. Unless access to Google is blocked (or vice versa), there is no need to change the default.
+
+[Examples]
+  Install VM guest agents and use SSH public key auth keys in the .ssh/ directory.
+      $ bash bootstrap.sh --vm-tools --ssh-private-key .ssh/id_rsa --ssh-public-key .ssh/id_rsa.pub
+
+  Quietly use nlanr.peer.local on port 80 and test the connection using EMC's webserver.
+      $ bash bootstrap.sh -q --proxy-endpoint nlanr.peer.local:80 --proxy-test-via emc.com:80
+
+  Assume YES to all questions. Use the CentOS mirror at http://cache.local/centos when fetching packages. Use the Docker registry at registry.local:5000 instead of DockerHub, and install the x509 certificate in certs/reg.pem into Docker's trust store so it can access the Docker registry.
+      $ bash bootstrap.sh -y --centos-mirror http://cache.local/centos --registry-endpoint registry.local:5000 --registry-cert certs/reg.pem
+```
+
+The bootstrapping process has completed when the following message appears:
+
+```
+> All done bootstrapping your install node.
+>
+> To continue (after reboot if needed):
+>      $ cd /home/admin/ECS-CommunityEdition
+> If you have a deploy.yml ready to go (and did not use -c flag):
+>      $ sudo cp deploy.yml /opt/emc/ecs-install/
+> If not, check out the docs/design and examples directory for references.
+> Once you have a deploy.yml, you can start the deployment
+> by running:
+>
+> [WITH Internet access]
+>      $ step1
+>        [Wait for deployment to complete, then run:]
+>      $ step2
+>
+> [WITHOUT Internet access]
+>      $ island-step1
+>        [Migrate your install node into the isolated environment and run:]
+>      $ island-step2
+>        [Wait for deployment to complete, then run:]
+>      $ island-step3
+>
+```
+
+After the install node has successfully bootstrapped, you will likely be prompted to reboot the machine. If so, then the machine MUST be rebooted before continuing to Step 4.
+
+### 4. Deploying ECS Nodes (`step1`)
+
+Once the deploy.yml file has been correctly written and the install node rebooted if needed, the next step is to simply run `step1`.
+
+After the installer initializes, the EMC ECS license agreement will appear on the screen. Press `q` to close the screen and type `yes` to accept the license and continue or `no` to abort the process. The install cannot continue until the license agreement has been accepted.
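+
+Under the hood, `step1` is a macro around the `ecsdeploy` utility (see the utilities reference). A rough, illustrative decomposition is sketched below; the exact chain is defined by the installer image, so treat the ordering as an assumption rather than the authoritative mapping.
+
+```
+# Hypothetical manual equivalent of the step1 macro.
+# ecsdeploy accepts chained subcommands in a single invocation:
+ecsdeploy load access check bootstrap reboot deploy start
+```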
+
+### 5. Deploying ECS Topology (`step2`)
+*If you would prefer to manually configure your ECS topology, you may skip this step entirely.*
+
+Once `step1` has completed, you may then direct the installer to configure the ECS topology by running `step2`. Once `step2` has completed, your ECS will be ready for use.
+
+## That's it!
+Assuming all went well, you now have a functioning ECS Community Edition instance and you may now proceed with your test efforts.
+
+[utilities]: /utilities/utilities.md
diff --git a/docs/source/utilities/utilities.md b/docs/source/utilities/utilities.md
new file mode 100644
index 00000000..c9057532
--- /dev/null
+++ b/docs/source/utilities/utilities.md
@@ -0,0 +1,248 @@
+# ECS Community Edition Utilities
+
+## `ecsdeploy`
+The `ecsdeploy` utility is responsible for executing the Ansible playbooks and helper scripts that deploy ECS Community Edition to member data nodes.
+
+```
+Usage: ecsdeploy [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...
+
+  Command line interface to ecs-install installer
+
+Options:
+  -v, --verbose  Use multiple times for more verbosity
+  --help         Show this message and exit.
+
+Commands:
+  access         Configure ssh access to nodes
+  bootstrap      Install required packages on nodes
+  cache          Build package cache
+  check          Check data nodes to ensure they are in compliance
+  deploy         Deploy ECS to nodes
+  disable-cache  Disable datanode package cache handling
+  enable-cache   Enable datanode package cache handling
+  load           Apply deploy.yml
+  reboot         Reboot data nodes that need it
+  start          Start the ECS service
+  stop           Stop the ECS service
+```
+
+## `ecsconfig`
+The `ecsconfig` utility is responsible for communicating with the ECS management API and configuring an ECS deployment with administrative and organizational objects.
+
+```
+Usage: ecsconfig [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...
+
+  Command line interface to configure ECS from declarations in deploy.yml
+
+Options:
+  -v, --verbose  Use multiple times for more verbosity
+  --help         Show this message and exit.
+
+Commands:
+  licensing        Work with ECS Licenses
+  management-user  Work with ECS Management Users
+  namespace        Work with ECS Namespaces
+  object-user      Work with ECS Object Users
+  ping             Check ECS Management API Endpoint(s)
+  rg               Work with ECS Replication Groups
+  sp               Work with ECS Storage Pools
+  trust            Work with ECS Certificates
+  vdc              Work with ECS Virtual Data Centers
+```
+
+## `ecsremove`
+The `ecsremove` utility is responsible for removing ECS instances and artifacts from member data nodes and the install node.
+
+```
+Usage: ecsremove [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...
+
+  Command line interface to remove ECS bits
+
+Options:
+  -v, --verbose  Use multiple times for more verbosity
+  --help         Show this message and exit.
+
+Commands:
+  purge-all        Uninstall ECS and purge artifacts from all nodes
+  purge-installer  Purge caches from install node
+  purge-nodes      Uninstall ECS and purge artifacts from data nodes
+```
+
+## `enter`
+This utility has two functions:
+1. To access member data nodes by name, e.g. `enter luna`
+2. To access the `ecs-install` image directly and the contents of the data container.
+
+Accessing the `ecs-install` image directly:
+```
+[admin@installer-230 ~]$ enter
+installer-230 [/]$
+```
+
+Accessing a member node:
+```
+[admin@installer-230 ~]$ enter luna
+Warning: Identity file /opt/ssh/id_ed25519 not accessible: No such file or directory.
+Warning: Permanently added 'luna,192.168.2.220' (ECDSA) to the list of known hosts.
+Last login: Thu Nov 9 16:44:31 2017 from 192.168.2.200 +[admin@luna ~]$ +``` + +## `catfacts` +This utility displays all the facts Ansible has registered about a node in pretty-printed, colorized output from `jq` paged through `less`. + +Running `catfacts` without an argument lists queryable nodes. +``` +[admin@installer-230 ~]$ catfacts +Usage: $ catfacts +Here is a list of hosts you can query: +Data Node(s): + hosts (1): + 192.168.2.220 +Install Node: + hosts (1): + 192.168.2.200 +``` +Querying a node +``` +[admin@installer-230 ~]$ catfacts 192.168.2.200 +{ + "ansible_all_ipv4_addresses": [ + "172.17.0.1", + "192.168.2.200" + ], + "ansible_all_ipv6_addresses": [ + "fe80::42:98ff:fe85:2502", + "fe80::f0c5:a7d1:6fff:205e" + ], + "ansible_apparmor": { + "status": "disabled" + }, + "ansible_architecture": "x86_64", + "ansible_bios_date": "04/01/2014", + "ansible_bios_version": "rel-1.8.2-0-g33fbe13 by qemu-project.org", + "ansible_cmdline": { + "BOOT_IMAGE": "/vmlinuz-3.10.0-693.5.2.el7.x86_64", + "LANG": "en_US.UTF-8", + +[... snip ...] +``` + + +## `update_deploy` +This utility updates the `/opt/emc/ecs-install/deploy.yml` file with the updated contents of the file `deploy.yml` provided during bootstrapping. It can also set the path to the `deploy.yml` file from which to fetch updates. + +Running with no arguments +``` +[admin@installer-230 ~]$ update_deploy +> Updating /opt/emc/ecs-install/deploy.yml from /home/admin/ecsce-lab-configs/local/local-lab-1-node-1/deploy.yml +37c37 +< ssh_password: ChangeMe +--- +> ssh_password: admin +> Recreating ecs-install data container +ecs-install> Initializing data container, one moment ... OK +ecs-install> Applying deploy.yml +``` + +Updating the deploy.yml file to a different source. +``` +[admin@installer-230 ~]$ update_deploy ~/ecsce-lab-configs/local/local-lab-1-node-2/deploy.yml +> Updating bootstrap.conf to use deploy config from /home/admin/ecsce-lab-configs/local/local-lab-1-node-2/deploy.yml +> Updating /opt/emc/ecs-install/deploy.yml from /home/admin/ecsce-lab-configs/local/local-lab-1-node-2/deploy.yml +37c37 +< ssh_password: admin +--- +> ssh_password: ChangeMe +82c82 +< - 192.168.2.221 +--- +> - 192.168.2.220 +173a174 +> +> Recreating ecs-install data container +ecs-install> Initializing data container, one moment ... OK +ecs-install> Applying deploy.yml +``` + +## `videploy` +This utility modifies the `deploy.yml` file currently installed at `/opt/emc/ecs-install/deploy.yml`. + +``` +[admin@installer-230 ~]$ videploy +``` +First, vim runs with the contents of `deploy.yml`, and then `videploy` calls `update_deploy`. + +## `pingnodes` +This utility pings nodes involved in the deployment using Ansible's `ping` module to verify connectivity. It can be used to ping groups or individual nodes. 
+ +Ping all data nodes (default) +``` +[admin@installer-230 ~]$ pingnodes +192.168.2.220 | SUCCESS => { + "changed": false, + "failed": false, + "ping": "pong" +} +``` +Ping all known nodes +``` +[admin@installer-230 ~]$ pingnodes all +localhost | SUCCESS => { + "changed": false, + "failed": false, + "ping": "pong" +} +192.168.2.200 | SUCCESS => { + "changed": false, + "failed": false, + "ping": "pong" +} +192.168.2.220 | SUCCESS => { + "changed": false, + "failed": false, + "ping": "pong" +} +``` +Ping the node identified as 192.168.2.220 +``` +[admin@installer-230 ~]$ pingnodes 192.168.2.220 +192.168.2.220 | SUCCESS => { + "changed": false, + "failed": false, + "ping": "pong" +} +``` +Ping members of the install_node group +``` +[admin@installer-230 ~]$ pingnodes install_node +192.168.2.200 | SUCCESS => { + "changed": false, + "failed": false, + "ping": "pong" +} +``` + +## `inventory` +This utility displays the known Ansible inventory and all registered group and host variables. + +``` +[admin@installer-230 ~]$ inventory +{ + "ecs_install": { + "hosts": [ + "localhost" + ], + "vars": { + "ansible_become": false, + "ansible_python_interpreter": "/usr/local/bin/python", + "ansible_connection": "local" + } + }, + "install_node": { + "hosts": [ + "192.168.2.200" + ], + +[... snip ...] +``` diff --git a/tools/clear-installer-image.sh b/tools/clear-installer-image.sh new file mode 100644 index 00000000..85cb683d --- /dev/null +++ b/tools/clear-installer-image.sh @@ -0,0 +1,2 @@ +#!/usr/bin/env bash +sudo docker rmi --force $(sudo docker images | grep ecs-install | awk '{print $3}' | uniq) diff --git a/ui/ansible/roles/CentOS_7_baseline_install/tasks/main.yml b/ui/ansible/roles/CentOS_7_baseline_install/tasks/main.yml index 3865456c..9a58116b 100644 --- a/ui/ansible/roles/CentOS_7_baseline_install/tasks/main.yml +++ b/ui/ansible/roles/CentOS_7_baseline_install/tasks/main.yml @@ -30,7 +30,7 @@ creates: "{{host_cache_dir}}/disable_package_cache.sem" when: - not ( num_data_nodes|int == 1 and top_data_node == install_node ) - - ( ansible_local is defined and ansible_local.ova is not defined ) + - not ( ansible_local is defined and ansible_local.ova is defined ) # TODO: Improve this # This is suuuuuper hacky - There should be a better way using the yum module, @@ -46,7 +46,7 @@ creates: "{{host_cache_dir}}/disable_package_cache.sem" when: - not ( num_data_nodes|int == 1 and top_data_node == install_node and ( ansible_local is defined and ansible_local.ova is defined ) ) - - ( ansible_local is defined and ansible_local.ova is not defined ) + - not ( ansible_local is defined and ansible_local.ova is defined ) - name: CentOS 7 | Configure ntp template: src=ntp.conf.j2 dest=/etc/ntp.conf diff --git a/ui/ansible/roles/CentOS_7_reboot/tasks/main.yml b/ui/ansible/roles/CentOS_7_reboot/tasks/main.yml index 4d491518..756e48b2 100644 --- a/ui/ansible/roles/CentOS_7_reboot/tasks/main.yml +++ b/ui/ansible/roles/CentOS_7_reboot/tasks/main.yml @@ -17,7 +17,7 @@ msg: "Node flagged for reboot by package manager" when: - ( needs_restarting|failed ) and flag_install_node is not defined - - ( ansible_local is defined and ansible_local.ova is not defined ) + - not ( ansible_local is defined and ansible_local.ova is defined ) - name: CentOS 7 | Check if install node also needs restarting debug: @@ -26,7 +26,7 @@ - CentOS 7 | Reboot required when: - ( needs_restarting|failed ) and flag_install_node is defined - - ( ansible_local is defined and ansible_local.ova is not defined ) + - not ( ansible_local is defined and 
ansible_local.ova is defined ) - name: CentOS 7 | Reboot node(s) become: True @@ -36,14 +36,14 @@ ignore_errors: True when: - ( needs_restarting|failed ) and flag_install_node is not defined - - ( ansible_local is defined and ansible_local.ova is not defined ) + - not ( ansible_local is defined and ansible_local.ova is defined ) - name: CentOS 7 | Wait for node(s) to reboot become: False local_action: wait_for host="{{ ansible_host | default(inventory_hostname) }}" port="{{ ansible_port }}" state=started delay=15 timeout=300 when: - ( needs_restarting|failed ) and flag_install_node is not defined - - ( ansible_local is defined and ansible_local.ova is not defined ) + - not ( ansible_local is defined and ansible_local.ova is defined ) # host={{ ansible_default_ipv4.address }} port=22 state=started delay=60 timeout=120 diff --git a/ui/ansible/roles/common_baseline_install/tasks/main.yml b/ui/ansible/roles/common_baseline_install/tasks/main.yml index 0ac3368a..fbe95728 100644 --- a/ui/ansible/roles/common_baseline_install/tasks/main.yml +++ b/ui/ansible/roles/common_baseline_install/tasks/main.yml @@ -95,7 +95,7 @@ when: - flag_install_node is not defined - not ( num_data_nodes|int == 1 and top_data_node == install_node ) - - ( ansible_local is defined and ansible_local.ova is not defined ) + - not ( ansible_local is defined and ansible_local.ova is defined ) - name: Common | Update block storage path permissions file: diff --git a/ui/ansible/roles/firewalld_configure_access/tasks/main.yml b/ui/ansible/roles/firewalld_configure_access/tasks/main.yml index 4fe93306..382f9c20 100644 --- a/ui/ansible/roles/firewalld_configure_access/tasks/main.yml +++ b/ui/ansible/roles/firewalld_configure_access/tasks/main.yml @@ -5,7 +5,7 @@ - name: Firewalld | Add install node to firewalld trusted zone firewalld: - source: "{{install_node}}/32" + source: "{{groups['install_node'][0]}}/32" state: enabled zone: trusted immediate: true diff --git a/ui/ansible/roles/installer_build_cache/tasks/main.yml b/ui/ansible/roles/installer_build_cache/tasks/main.yml index 1a44d974..6d7dda1a 100644 --- a/ui/ansible/roles/installer_build_cache/tasks/main.yml +++ b/ui/ansible/roles/installer_build_cache/tasks/main.yml @@ -4,8 +4,8 @@ file: state=directory path={{ cache_dir }}/{{ item }} with_items: "{{ caches.keys() }}" when: - - not ( num_data_nodes|int == 1 and top_data_node == install_node ) - - ( ansible_local is defined and ansible_local.ova is not defined ) + - not ( num_data_nodes|int == 1 and top_data_node == groups['install_node'][0] ) + - not ( ansible_local is defined and ansible_local.ova is defined ) loop_control: label: "{{ cache_dir }}/{{ item }}" @@ -18,15 +18,15 @@ delegate_to: "{{ groups['install_node'][0] }}" register: cacheresults when: - - not ( num_data_nodes|int == 1 and top_data_node == install_node ) - - ( ansible_local is defined and ansible_local.ova is not defined ) + - not ( num_data_nodes|int == 1 and top_data_node == groups['install_node'][0] ) + - not ( ansible_local is defined and ansible_local.ova is defined ) loop_control: label: "{{ host_cache_dir }}/{{ item.key }}/{{ item.value.dest }}" - name: Installer | Create cache distribution torrent file - shell: mktorrent-borg -ig 'facts*' -a udp://{{ groups['install_node'][0] }}:6881 -a http://{{ groups['install_node'][0] }}:6881/announce -o {{ cache_dir }}/cache.torrent -pub {{ cache_dir }} + shell: /usr/bin/mktorrent-borg -ig 'facts*' -a udp://{{ groups['install_node'][0] }}:6881 -a http://{{ groups['install_node'][0] }}:6881/announce -o {{ 
cache_dir }}/cache.torrent -pub {{ cache_dir }} args: creates: "{{ cache_dir }}/cache.torrent" when: - - not ( num_data_nodes|int == 1 and top_data_node == install_node ) - - ( ansible_local is defined and ansible_local.ova is not defined ) + - not ( num_data_nodes|int == 1 and top_data_node == groups['install_node'][0] ) + - not ( ansible_local is defined and ansible_local.ova is defined ) diff --git a/ui/ansible/roles/testing/tasks/main.yml b/ui/ansible/roles/testing/tasks/main.yml index 06fb4840..1d29cb01 100644 --- a/ui/ansible/roles/testing/tasks/main.yml +++ b/ui/ansible/roles/testing/tasks/main.yml @@ -1,39 +1,48 @@ -- include_vars: main.yml - include_vars: caches.yml -- name: Common | Create and modify paths and semaphores for docker containers - file: - path: "{{item.path}}" - state: "{{item.state}}" - mode: "{{item.mode}}" - owner: "{{item.owner}}" - group: "{{item.group}}" - with_items: "{{ecs_docker_dirs}}" - tags: files - loop_control: - label: "{{item.path}}" +- name: Debug + debug: + msg: "data nodes: {{num_data_nodes|int}} top data node: {{top_data_node}} install node: {{groups['install_node'][0]}} is ova?: {{ansible_local is defined and ansible_local.ova is defined}}" + +- name: Debug 2 + debug: + msg: "{{not ( num_data_nodes|int == 1 and top_data_node == groups['install_node'][0] )}}" -### Generate network.json -- name: Common | Generate network.json - template: src=rev1-network.json.j2 dest=/host/data/network.json owner=444 group=444 force=no - tags: files +- name: Debug 3 + debug: + msg: "{{not ( ansible_local is defined and ansible_local.ova is defined )}}" -### Generate object-main_network.json -- name: Common | Generate object-main_network.json - template: src=object-main_network.json.j2 dest=/host/data/object-main_network.json owner=444 group=444 force=no - tags: files +- name: Debug 4 + debug: + msg: "ansible_local: {{ansible_local is defined}} ansible_local.ova: {{ansible_local is defined and ansible_local.ova is defined}}" -### Generate id.json -- name: Common | Generate id.json - template: src=id.json.j2 dest=/host/data/id.json owner=444 group=444 force=no - tags: files +- name: Installer | Create cache directories + file: state=directory path={{ cache_dir }}/{{ item }} + with_items: "{{ caches.keys() }}" + when: + - not ( num_data_nodes|int == 1 and top_data_node == groups['install_node'][0] ) + - not ( ansible_local is defined and ansible_local.ova is defined ) + loop_control: + label: "{{ cache_dir }}/{{ item }}" -### Generate agent.json -- name: Common | Generate agent.json - template: src=agent.json.j2 dest=/host/data/agent.json owner=444 group=444 force=no - tags: files +- name: Installer | Create compressed cache files + shell: "{{ item.value.pack_cmd }}" + args: + chdir: "{{ host_cache_dir }}/{{ item.key }}" + creates: "{{ host_cache_dir }}/{{ item.key }}/{{ item.value.dest }}" + with_dict: "{{ caches }}" + delegate_to: "{{ groups['install_node'][0] }}" + register: cacheresults + when: + - not ( num_data_nodes|int == 1 and top_data_node == groups['install_node'][0] ) + - not ( ansible_local is defined and ansible_local.ova is defined ) + loop_control: + label: "{{ host_cache_dir }}/{{ item.key }}/{{ item.value.dest }}" -### Generate seeds file -- name: Common | Generate seeds file - template: src=seeds.j2 dest=/host/files/seeds owner=444 group=444 force=no - tags: files +- name: Installer | Create cache distribution torrent file + shell: /usr/bin/mktorrent-borg -ig 'facts*' -a udp://{{ groups['install_node'][0] }}:6881 -a http://{{ 
groups['install_node'][0] }}:6881/announce -o {{ cache_dir }}/cache.torrent -pub {{ cache_dir }} + args: + creates: "{{ cache_dir }}/cache.torrent" + when: + - not ( num_data_nodes|int == 1 and top_data_node == groups['install_node'][0] ) + - not ( ansible_local is defined and ansible_local.ova is defined ) diff --git a/ui/ansible/roles/testing/templates/agent.json.j2 b/ui/ansible/roles/testing/templates/agent.json.j2 deleted file mode 100644 index 2814858f..00000000 --- a/ui/ansible/roles/testing/templates/agent.json.j2 +++ /dev/null @@ -1,3 +0,0 @@ -{ - "endpoint": "https://{{ansible_fqdn}}:9240" -} diff --git a/ui/ansible/roles/testing/templates/id-old.json.j2 b/ui/ansible/roles/testing/templates/id-old.json.j2 deleted file mode 100644 index 850d16fe..00000000 --- a/ui/ansible/roles/testing/templates/id-old.json.j2 +++ /dev/null @@ -1,7 +0,0 @@ -{%- set spanner = joiner("-") -%} -{%- set this_host = ansible_hostname -%} -{%- set this_sp = hostvars[inventory_hostname]['sp'] -%} -{%- set this_vdc = hostvars[inventory_hostname]['vdc'] -%} -{ - "agent_id": "{{ this_host }}-{{ spanner() }}{{ hostvars[inventory_hostname]['group_names'] }}-{{ this_vdc }}" -} diff --git a/ui/ansible/roles/testing/templates/id.json.j2 b/ui/ansible/roles/testing/templates/id.json.j2 deleted file mode 100644 index f4dd6ef3..00000000 --- a/ui/ansible/roles/testing/templates/id.json.j2 +++ /dev/null @@ -1,3 +0,0 @@ -{ - "agent_id": "{{ ansible_local.data_node.node_uuid }}" -} diff --git a/ui/ansible/roles/testing/templates/object-main_network.json.j2 b/ui/ansible/roles/testing/templates/object-main_network.json.j2 deleted file mode 100644 index 827a7dc4..00000000 --- a/ui/ansible/roles/testing/templates/object-main_network.json.j2 +++ /dev/null @@ -1,29 +0,0 @@ -{%- set comma = joiner(",") -%} -{%- set vdc = hostvars[inventory_hostname]['vdc'] -%} -{ - "cluster_info": [ -{%- for node in groups['data_node'] -%} -{%- if ( (hostvars[node]['vdc'] is defined) and - (hostvars[node]['vdc'] == vdc) ) -%}{{ comma() }} - { - "network": { - "mgmt_ip": "{{hostvars[node].ansible_default_ipv4.address}}", - "hostname": "{{hostvars[node].ansible_fqdn}}", - "data_interface_name": "{{hostvars[node].ansible_default_ipv4.alias}}", - "replication_ip": "{{hostvars[node].ansible_default_ipv4.address}}", - "data2_interface_name": "{{hostvars[node].ansible_default_ipv4.alias}}", - "private_interface_name": "{{hostvars[node].ansible_default_ipv4.alias}}", - "data2_ip": "{{hostvars[node].ansible_default_ipv4.address}}", - "private_ip": "{{hostvars[node].ansible_default_ipv4.address}}", - "replication_interface_name": "{{hostvars[node].ansible_default_ipv4.alias}}", - "public_ip": "{{hostvars[node].ansible_default_ipv4.address}}", - "mgmt_interface_name": "{{hostvars[node].ansible_default_ipv4.alias}}", - "data_ip": "{{hostvars[node].ansible_default_ipv4.address}}", - "public_interface_name": "{{hostvars[node].ansible_default_ipv4.alias}}" - }, - "agent_endpoint": "https://{{hostvars[node].ansible_fqdn}}:9240", - "agent_id": "{{hostvars[node].ansible_local.data_node.node_uuid}}" - } -{% endif %}{% endfor %} - ] -} diff --git a/ui/ansible/roles/testing/templates/rev0-network.json.j2 b/ui/ansible/roles/testing/templates/rev0-network.json.j2 deleted file mode 100644 index c8afd891..00000000 --- a/ui/ansible/roles/testing/templates/rev0-network.json.j2 +++ /dev/null @@ -1,8 +0,0 @@ -{ - "private_interface_name": "{{ ansible_default_ipv4.alias }}", - "public_interface_name": "{{ ansible_default_ipv4.alias }}", - "hostname": "{{ ansible_hostname 
}}", - "data_ip": "{{ ansible_default_ipv4.address }}", - "mgmt_ip": "{{ ansible_default_ipv4.address }}", - "replication_ip": "{{ ansible_default_ipv4.address }}" -} diff --git a/ui/ansible/roles/testing/templates/rev1-network.json.j2 b/ui/ansible/roles/testing/templates/rev1-network.json.j2 deleted file mode 100644 index 9c9800df..00000000 --- a/ui/ansible/roles/testing/templates/rev1-network.json.j2 +++ /dev/null @@ -1,15 +0,0 @@ -{ - "data_interface_name": "{{ansible_default_ipv4.alias}}", - "mgmt_interface_name": "{{ansible_default_ipv4.alias}}", - "hostname": "{{ansible_hostname}}", - "replication_ip": "{{ansible_default_ipv4.address}}", - "data2_interface_name": "{{ansible_default_ipv4.alias}}", - "private_ip": "{{ansible_default_ipv4.address}}", - "data_ip": "{{ansible_default_ipv4.address}}", - "data2_ip": "{{ansible_default_ipv4.address}}", - "public_ip": "{{ansible_default_ipv4.address}}", - "replication_interface_name": "{{ansible_default_ipv4.alias}}", - "public_interface_name": "{{ansible_default_ipv4.alias}}", - "mgmt_ip": "{{ansible_default_ipv4.address}}", - "private_interface_name": "{{ansible_default_ipv4.alias}}" -} diff --git a/ui/ansible/roles/testing/templates/seeds.j2 b/ui/ansible/roles/testing/templates/seeds.j2 deleted file mode 100644 index b9c71617..00000000 --- a/ui/ansible/roles/testing/templates/seeds.j2 +++ /dev/null @@ -1,8 +0,0 @@ -{%- set comma = joiner(",") -%} -{%- set vdc = hostvars[inventory_hostname]['vdc'] -%} -{%- for host in groups.data_node -%} -{%- if ( (hostvars[host]['vdc'] is defined) and - (hostvars[host]['vdc'] == vdc) ) -%} -{{ comma() }}{{ hostvars[host]['ansible_default_ipv4']['address'] }} -{%- endif -%} -{%- endfor %} diff --git a/ui/ansible/roles/testing/vars/main.yml b/ui/ansible/roles/testing/vars/main.yml deleted file mode 100644 index 09291e67..00000000 --- a/ui/ansible/roles/testing/vars/main.yml +++ /dev/null @@ -1,44 +0,0 @@ ---- -ecs_docker_dirs: - - path: /ecs - mode: 755 - owner: 444 - group: 444 - state: directory - - path: /host - mode: 755 - owner: 444 - group: 444 - state: directory - - path: /host/data - mode: 755 - owner: 444 - group: 444 - state: directory - - path: /host/files - mode: 755 - owner: 444 - group: 444 - state: directory - - path: /data - mode: 755 - owner: 444 - group: 444 - state: directory - - path: /var/log/vipr/emcvipr-object - mode: 755 - owner: 444 - group: 444 - state: directory - - path: /data/is_community_edition - mode: 755 - owner: 444 - group: 444 - state: touch -ecs_docker_dirs_post: - - path: /ecs - mode: 755 - owner: 444 - group: 444 - state: directory - recurse: yes diff --git a/ui/ansible/testing.yml b/ui/ansible/testing.yml index 4deecf2a..466d46f9 100644 --- a/ui/ansible/testing.yml +++ b/ui/ansible/testing.yml @@ -1,8 +1,21 @@ -- name: Installer | Testing - hosts: data_node +- name: Installer | Build the package cache vars: num_data_nodes: "{{ groups['data_node'] | length }}" top_data_node: "{{ groups['data_node'][0] }}" install_node: "{{ groups['install_node'][0] }}" + hosts: ecs_install roles: - - testing + - installer_build_cache + +- name: Installer | Enable torrent ffx + vars: + num_data_nodes: "{{ groups['data_node'] | length }}" + top_data_node: "{{ groups['data_node'][0] }}" + install_node: "{{ groups['install_node'][0] }}" + hosts: ecs_install + tasks: + - file: + path: "{{ffx_sem}}" + state: touch + when: not ( num_data_nodes|int == 1 and top_data_node == install_node ) + diff --git a/ui/ecsdeploy.py b/ui/ecsdeploy.py index f0322a54..9cc83fa4 100755 --- a/ui/ecsdeploy.py +++ 
b/ui/ecsdeploy.py @@ -236,10 +236,10 @@ def cache(conf): behind slow Internet links or into island environments. """ - playbook = 'clicmd_access_host' - if not play(playbook, conf.config.verbosity): - click.echo('Operation failed.') - sys.exit(1) + # playbook = 'clicmd_access_host' + # if not play(playbook, conf.config.verbosity): + # click.echo('Operation failed.') + # sys.exit(1) playbook = 'clicmd_cache' if not play(playbook, conf.config.verbosity): @@ -257,6 +257,7 @@ def check(conf): if not play(playbook, conf.config.verbosity): sys.exit(1) + @ecsdeploy.command('bootstrap', short_help='Install required packages on nodes') @pass_conf def bootstrap(conf): diff --git a/ui/etc/config.yml b/ui/etc/config.yml index 96aa33d2..92343c25 100644 --- a/ui/etc/config.yml +++ b/ui/etc/config.yml @@ -13,7 +13,7 @@ --- ui: name: ECS Community Edition Install Node - version: 2.5.1 + version: 2.5.2 host_root_dir: /opt/emc/ecs-install state_file: /opt/state.yml deploy_file: /opt/deploy.yml diff --git a/ui/etc/release.conf b/ui/etc/release.conf index 698f33ec..445f42ca 100644 --- a/ui/etc/release.conf +++ b/ui/etc/release.conf @@ -33,7 +33,7 @@ image_name='ecs-install' tag='latest' ver_maj='2' ver_min='5' -ver_rev='1' +ver_rev='2' ver_tag='r' serial=0 diff --git a/ui/resources/docker/Rockerfile b/ui/resources/docker/Rockerfile index 0b9dc2e8..927912d6 100644 --- a/ui/resources/docker/Rockerfile +++ b/ui/resources/docker/Rockerfile @@ -30,12 +30,11 @@ RUN apk -q update && \ apk -q --no-cache upgrade # Add required system packages -#RUN apk -q --no-cache add openssh-client sshpass openssl ca-certificates libffi libressl@edge_main \ RUN apk -q --no-cache add python2 py-pip\ - openssh-client sshpass openssl ca-certificates libffi libressl@edge_main \ + openssh-client sshpass openssl ca-certificates libffi libressl \ pigz jq less \ - opentracker aria2 mktorrent@edge_community \ - ansible@edge_main + opentracker aria2 mktorrent \ + ansible # Setup the environment RUN mv /etc/profile.d/color_prompt /etc/profile.d/color_prompt.sh \ diff --git a/ui/run.sh b/ui/run.sh index 9119f552..631aa667 100755 --- a/ui/run.sh +++ b/ui/run.sh @@ -60,7 +60,12 @@ case "$(basename ${0})" in ;; update_image) cd "${root}" - "${root}/ui/update_image.sh" + "${root}/ui/update_image.sh" ${*} + cd - 2>&1 >/dev/null + ;; + build_image) + cd "${root}" + "${root}/ui/build_image.sh" ${*} cd - 2>&1 >/dev/null ;; rebuild_image) diff --git a/ui/setup.py b/ui/setup.py index 22ae65ba..73ada285 100755 --- a/ui/setup.py +++ b/ui/setup.py @@ -3,7 +3,7 @@ setup( name='ecsdeploy', - version='2.5.1', + version='2.5.2', packages=find_packages(), scripts=['ui.py', 'ecsdeploy.py',