
# Vagrant VirtualBox CoreOS with ELK Stack and Discovery using Consul and Consul-Template

This repo contains two Vagrant-based CoreOS machines:

  1. Etcd & Consul Server
  2. Consul agent (Client machine)

## Etcd & Consul Server

This machine initiates etcd and also runs the Consul server.

#### Etcd

Etcd discovery is handled locally through an environment file, which the server creates at /etc/sysconfig/etcd-peers. This file is then fed to the etcd systemd unit as a drop-in in the server's cloud-config file (etcd-consul-server/user-data.yml).
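A minimal sketch of such a drop-in, as it might appear in the cloud-config (the drop-in name and unit wiring below are illustrative, not copied from this repo):

```yaml
#cloud-config
coreos:
  units:
    - name: etcd.service
      drop-ins:
        - name: 20-peers.conf
          content: |
            [Service]
            # Load the peer list the server wrote to /etc/sysconfig/etcd-peers
            EnvironmentFile=/etc/sysconfig/etcd-peers
```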

#### Consul Server

The Consul server runs in a Docker container from the time the machine is started. It is submitted as a systemd unit via the server's cloud-config file. The unit is responsible for starting the Consul server container and saving the server's IP in etcd under the key /consul/server/ip, so that client machines can query etcd and fetch the IP of the machine on which the Consul server is running.
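A hedged sketch of what such a unit can look like; the image name, agent flags, and exact etcdctl invocation are assumptions for illustration, not this repo's actual unit:

```yaml
#cloud-config
coreos:
  units:
    - name: consul-server.service
      command: start
      content: |
        [Unit]
        Description=Consul server container
        After=docker.service etcd.service

        [Service]
        EnvironmentFile=/etc/environment
        ExecStartPre=-/usr/bin/docker rm -f consul-server
        ExecStart=/usr/bin/docker run --name consul-server --net=host \
          consul agent -server -bootstrap -client=0.0.0.0
        # Publish this machine's IP so agents can discover the server
        ExecStartPost=/usr/bin/etcdctl set /consul/server/ip ${COREOS_PRIVATE_IPV4}
```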

## Consul Agent

The Consul Agent is the client machine, which runs in a cluster. You can change the number of instances in the cluster by changing the value of the num_instances variable in the Vagrantfile. The client machine starts the Consul agent and Docker Registrator via its cloud-config file (consul-agent/user-data.yml).
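For example, to run three agent machines instead of the default two (the exact spelling may be `num_instances` or `$num_instances` depending on the Vagrantfile; check the file itself):

```ruby
# In consul-agent/Vagrantfile
num_instances = 3
```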

###### NOTE: The shared directory is mapped inside both the server and client machines to easily share files between the host and the Vagrant machines.

## How to Run

1. Scroll down to [Streamlined setup](#streamlined-setup) and install all the dependencies.

2. Navigate to the `etcd-consul-server` directory and run `vagrant up`. Once the Vagrant machine is up, you can access the Consul UI at the server's IP address and Consul port, by default http://172.17.8.101:8500.

3. Navigate to the `consul-agent` directory and run `vagrant up`. By default, two client machines will start up, and you will be able to see these nodes in the Consul UI.

4. SSH into one of the client machines by running the `vagrant ssh consul-agent-01` command. By default, client machines are named `consul-agent-<machine-number>`.

5. Run the docker-compose file for ELK: `docker-compose -f ~/shared/elk.yaml up -d`. Once docker-compose has started all the containers, you have successfully set up the ELK (Elasticsearch, Logstash, Kibana) stack with Consul discovery. Access Kibana at http://172.17.9.101:5601/ and Elasticsearch at http://172.17.9.101:9200/.

Add a key in Consul:

- key: `kibana/elasticsearchURL`
- value: `http://elasticsearch-9200:9200`
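This key can be set from the Consul UI, or from any machine that can reach the server; the commands below assume the default server address from step 2 (the `consul kv` subcommand exists in Consul 0.7.4+, while the HTTP KV API works on older versions too):

```sh
# Using the consul CLI
consul kv put kibana/elasticsearchURL http://elasticsearch-9200:9200

# Or via the HTTP KV API on the server
curl -X PUT -d 'http://elasticsearch-9200:9200' \
  http://172.17.8.101:8500/v1/kv/kibana/elasticsearchURL
```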

6. Now run a Docker container for a "log-generating" application, and map the volume which stores its logs to `~/shared/logs`.

7. Run the Filebeat docker-compose file on the client machine where your application is running: `docker-compose -f ~/shared/filebeat.yaml up -d`. Filebeat will start shipping logs to your ELK stack.

## Consul-Template

Note that Logstash, Filebeat, and Kibana use consul-template to render their templatized config files. You must set the corresponding Consul key-value pairs for these services to run.
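As an illustration of how these keys are consumed, a consul-template template for Kibana's config might look like this (the file name is hypothetical; the `{{ key }}` function is consul-template's standard syntax):

```
# kibana.yml.ctmpl -- rendered by consul-template
elasticsearch.url: {{ key "kibana/elasticsearchURL" }}
```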

Refer to the respective README files in their Dockerfile repos.

###### NOTE: The virtual memory for the consul-agent machines has been updated to 4096, because Elasticsearch was not able to run with less memory than that.

If the Elasticsearch container fails to start due to a virtual memory error, run the following command on the host machine:

sysctl -w vm.max_map_count=262144

For OS-specific commands and further details, refer to https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html

## Why are the docker-compose files and services not in user-data?

The ELK services are not placed in the user-data of the consul-agent machine(s) because the machines are designed so that you can launch multiple machines with different services running on each. By default, we launch two consul-agent machines, and as a proof of concept, we expect you to run ELK (Elasticsearch, Logstash, and Kibana) on one machine and Filebeat plus a "log-generating" application on the other consul-agent machine.

The Vagrant machines used in this setup are inspired by: https://github.com/coreos/coreos-vagrant

## Streamlined setup

1. Install dependencies

2. Clone this project and get it running!

   git clone https://github.com/stakater/coreos-vagrant-with-consul-elk.git
   cd coreos-vagrant-with-consul-elk

3. Startup and SSH

There are two "providers" for Vagrant with slightly different instructions. Follow one of the following two options:

### VirtualBox Provider

The VirtualBox provider is the default Vagrant provider. Use this if you are unsure.

vagrant up
vagrant ssh

### VMware Provider

The VMware provider is a commercial addon from Hashicorp that offers better stability and speed. If you use this provider follow these instructions.

VMware Fusion:

vagrant up --provider vmware_fusion
vagrant ssh

VMware Workstation:

vagrant up --provider vmware_workstation
vagrant ssh

`vagrant up` triggers Vagrant to download the CoreOS image (if necessary) and (re)launch the instance.

`vagrant ssh` connects you to the virtual machine. Configuration is stored in the directory, so you can always return to this machine by executing `vagrant ssh` from the directory where the Vagrantfile is located.

4. Get started using CoreOS

## Shared Folder Setup

There is an optional shared folder setup. You can try it out by adding a section to your Vagrantfile like this:

config.vm.network "private_network", ip: "172.17.8.150"
config.vm.synced_folder ".", "/home/core/share", id: "core", :nfs => true,  :mount_options   => ['nolock,vers=3,udp']

After a `vagrant reload` you will be prompted for your local machine's password.

## Provisioning with user-data

The Vagrantfile will provision your CoreOS VM(s) with coreos-cloudinit if a user-data.yaml file is found in the project directory. coreos-cloudinit simplifies the provisioning process through the use of a script or cloud-config document.

## Configuration

The Vagrantfile will parse a `config.rb` file containing a set of options used to configure your CoreOS cluster. See `config.rb.sample` for more information.

## Cluster Setup

Launching a CoreOS cluster on Vagrant is as simple as setting `$num_instances` in the Vagrantfile to 3 (or more!) and running `vagrant up`. Make sure you provide a fresh discovery URL in your user-data if you wish to bootstrap etcd in your cluster.
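A minimal `config.rb` for a three-node cluster might look like this (variable names follow `config.rb.sample` in the upstream coreos-vagrant project; treat them as an assumption for this repo):

```ruby
# config.rb
$num_instances = 3
# Use a fresh discovery URL (e.g. from https://discovery.etcd.io/new?size=3)
# in your user-data when bootstrapping etcd.
```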

## New Box Versions

CoreOS is a rolling-release distribution, and versions that are out of date will automatically update. If you want to start from the most up-to-date version, make sure you have the latest CoreOS box file. You can do this by running:

vagrant box update

## Docker Forwarding

By setting the `$expose_docker_tcp` configuration value, you can forward a local TCP port to Docker on each CoreOS machine that you launch. The first machine will be available on the port that you specify, and each additional machine will increment the port by 1.
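For example, with `$expose_docker_tcp = 2375` and two machines, the ports would land as follows (values illustrative, following the increment rule just described):

```sh
export DOCKER_HOST=tcp://localhost:2375   # first machine
docker ps
export DOCKER_HOST=tcp://localhost:2376   # second machine
docker ps
```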

Follow the Enable Remote API instructions to set up the CoreOS VM to work with port forwarding.

Then you can use the docker command from your local shell by setting DOCKER_HOST:

export DOCKER_HOST=tcp://localhost:2375
