
Playing with an ETCD cluster in Docker locally

I started writing a management component last week. We would like to use CoreOS for the whole stack as much as possible, at least in this early phase of our project.

The core component of our solution is ETCD - a distributed key-value store, something like my favorite piece of software, Redis. The word 'distributed' means that the state at the core of your solution needs to be synchronized, or rather agreed upon by consensus; ETCD uses Raft for that. I'd love to know how the component I'm building behaves in a real environment where everything can die.

In the age of Docker, where every piece of software is docker-ized, it's pretty simple to start an ETCD cluster locally in a second. The following piece of code starts three etcd instances linked together in one cluster.

docker run -d --net=host --name etcd0 quay.io/coreos/etcd:v2.0.3 \
 -name etcd0 \
 -advertise-client-urls http://localhost:2379,http://localhost:4001 \
 -listen-client-urls http://localhost:2379,http://localhost:4001 \
 -initial-advertise-peer-urls http://localhost:2380 \
 -listen-peer-urls http://localhost:2380 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://localhost:2380,etcd1=http://localhost:2480,etcd2=http://localhost:2580

docker run -d --net=host --name etcd1 quay.io/coreos/etcd:v2.0.3 \
 -name etcd1 \
 -advertise-client-urls http://localhost:2479,http://localhost:4101 \
 -listen-client-urls http://localhost:2479,http://localhost:4101 \
 -initial-advertise-peer-urls http://localhost:2480 \
 -listen-peer-urls http://localhost:2480 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://localhost:2380,etcd1=http://localhost:2480,etcd2=http://localhost:2580

docker run -d --net=host --name etcd2 quay.io/coreos/etcd:v2.0.3 \
 -name etcd2 \
 -advertise-client-urls http://localhost:2579,http://localhost:4201 \
 -listen-client-urls http://localhost:2579,http://localhost:4201 \
 -initial-advertise-peer-urls http://localhost:2580 \
 -listen-peer-urls http://localhost:2580 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://localhost:2380,etcd1=http://localhost:2480,etcd2=http://localhost:2580

The inspiration is obvious, but this setup simply runs everything on your computer. The --net=host parameter makes each container share the host's network stack, so every port the instances listen on is reachable directly on localhost - which is also why no -p port mappings are needed.
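
Before touching any data, it's worth a quick sanity check that the three nodes really formed one cluster and elected a leader. A minimal sketch using the v2 stats API (note that the leader endpoint only answers meaningfully on the node that currently is the leader):

# state of the first node - it should report itself as leader or follower
curl -s http://localhost:2379/v2/stats/self

# the leader's view - Raft traffic counters per follower
curl -s http://localhost:2379/v2/stats/leader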

You can now use the following URL in a browser to list all keys:

http://localhost:4101/v2/keys/?recursive=true
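
To see the replication in action, you can write a key through one node and read it back through another. The XYZ/hello key below is just an illustrative example:

# write a key via etcd0's client port
curl -s http://localhost:2379/v2/keys/XYZ/hello -XPUT -d value="world"

# read it back via etcd1's client port - Raft has replicated it
curl -s http://localhost:2479/v2/keys/XYZ/hello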

It's also good to check all the members of your cluster - you will kill them later.

http://localhost:2379/v2/members
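
Killing a member is then a one-liner. With two of three nodes alive, the cluster keeps its quorum and continues to serve requests; a sketch of the experiment:

# stop one member and check that the cluster survives
docker stop etcd1
curl http://localhost:2379/v2/members

# bring it back - the member rejoins and catches up on the Raft log
docker start etcd1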

Once you have finished your tests, you can easily delete all keys in the XYZ namespace using curl. Note that you can only delete keys below the root, so you can't run the following command against the root namespace itself.

curl http://127.0.0.1:2379/v2/keys/XYZ?recursive=true -XDELETE
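
If you run the same delete against the root namespace, etcd refuses it and responds with its 'root is read only' error instead:

# this is rejected - the root of the key space cannot be deleted
curl http://127.0.0.1:2379/v2/keys/?recursive=true -XDELETE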

I also prefer to see the HTTP status code, since ETCD makes good use of HTTP status codes.

curl -v http://127.0.0.1:2379/v2/keys/XYZ

In addition to the status codes, ETCD always returns a JSON body with its own error codes - see the snippet at the end of the following listing. You can get something similar to this:

* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 2379 (#0)
> GET /v2/keys/XYZ HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:2379
> Accept: */*

< HTTP/1.1 404 Not Found
< Content-Type: application/json
< X-Etcd-Cluster-Id: 65a1e86cb62588c5
< X-Etcd-Index: 6
< Date: Sun, 01 Mar 2015 22:55:14 GMT
< Content-Length: 69

{"errorCode":100,"message":"Key not found","cause":"/XYZ","index":6}
* Connection #0 to host localhost left intact

Once you have finished playing with the ETCD cluster, you will probably want to remove all the etcd containers. I use a simple script which removes every Docker container, but you can improve it using grep to remove only those hosting ETCD.

sudo docker rm -f `docker ps --no-trunc -aq`
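
A grep-based variant which removes only the three containers started above, simply by matching their names in the docker ps output:

# remove only the etcd0..etcd2 containers
sudo docker ps -a | grep etcd | awk '{print $1}' | xargs sudo docker rm -f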

The last interesting thing is performance. It reminded me of Redis, which can handle a million transactions per second on a single thread. I was surprised that ETCD usually responded in 20-30 ms. Even worse, I also encountered client timeouts caused by response times of 400-500 ms per request. Raft is obviously not for free. But then, the purpose of ETCD is massive read scalability. Well, good to know.
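
A crude way to get a feel for those latencies yourself is to time a batch of sequential writes; the bench namespace here is just an arbitrary example:

# time 100 sequential writes through one node
time for i in $(seq 1 100); do
  curl -s http://localhost:2379/v2/keys/bench/$i -XPUT -d value=$i > /dev/null
done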

Comments

Unknown said…
Hello Martin,

I have to implement an ETCD (v3) cluster in my OpenStack project; all VMs are reachable from each other (ping and ssh).

A few weeks ago, I followed the CoreOS docs in order to start a cluster in containers the static way, with a bash file for the environment vars.

So here is my conf file:

#!/bin/bash

export ETCD_VERSION=v3.0.0
export TOKEN=http://192.168.0.5/753a8bf0-0ba5-43ac-b5ac-a2a47e430c11
export CLUSTER_STATE=new
export NAME_1=etcd-server1
export NAME_2=etcd-server2
export NAME_3=etcd-server3
export HOST_1=192.168.0.5
export HOST_2=192.168.0.6
export HOST_3=192.168.0.7
export CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380

As you can see, I wanted one of my etcd nodes to be the discovery service with a token; in my case I used the command UUID=$(uuidgen).

So as you can see, my etcd-server1 IS the one with the token to contact in order to learn about every member of the cluster.

But when I run this command on each VM:

docker run -dt --net=host --name etcd quay.io/coreos/etcd:v3.0.0 /usr/local/bin/etcd --data-dir=data.etcd --name ${THIS_NAME} --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 --initial-cluster ${CLUSTER} --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

My etcd-server1 has stopped (see the docker logs):

ubuntu@etcd-server1:~$ docker logs etcd
2016-08-10 14:05:22.910217 I | etcdmain: etcd Version: 3.0.0
2016-08-10 14:05:22.910468 I | etcdmain: Git SHA: 6f48bda
2016-08-10 14:05:22.910571 I | etcdmain: Go Version: go1.6.2
2016-08-10 14:05:22.910669 I | etcdmain: Go OS/Arch: linux/amd64
2016-08-10 14:05:22.910790 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
2016-08-10 14:05:22.911139 I | etcdmain: listening for peers on http://192.168.0.5:2380
2016-08-10 14:05:22.911315 I | etcdmain: listening for client requests on 192.168.0.5:2379
2016-08-10 14:05:22.918816 I | etcdmain: stopping listening for client requests on 192.168.0.5:2379
2016-08-10 14:05:22.918983 I | etcdmain: stopping listening for peers on http://192.168.0.5:2380
2016-08-10 14:05:22.919092 I | etcdmain: --initial-cluster must include etcd-server1=http://192.168.0.5:2380 given --initial-advertise-peer-urls=http://192.168.0.5:2380


My question is this: do you think I should insert the token in a different manner? For example, run the discovery container with an "sh" command in order to insert the key? But I do not know how :/

Thank you for your time.

Benjamin
