HTTP and TCP Load Balancing with Kubernetes

Kubernetes allows you to manage large clusters composed of Docker containers. And where there is a lot of computing power, there is a lot of data throughput. That traffic still needs to be spread so it reaches and utilizes the right Docker container(s). This is called load balancing. Kubernetes supports two basic forms of it: internal load balancing among the Docker containers themselves, and external load balancing for clients which call your application from the outside world – plain users.

Internal Load Balancing with Kubernetes

The usual approach when modeling an application in Kubernetes is to provide domain models for pods, replication controllers and services. Have a look at the great documentation if you are not familiar with these concepts.

To simplify: a pod is a set of Docker containers which are always located on the same node. A replication controller provides and guarantees scalability. Last but not least, a service hides all the pod instances – created by the replication controller – behind a facade.

The concept of services brings a very powerful system of internal load balancing.

Look at the picture above. Service A uses service B. A common pattern is to provide some service discovery so that service A can look up an endpoint of some instance of service B and use it for later communication. However, service B has its own lifecycle: it can fail permanently, or it can have multiple running instances. The collection of service B instances therefore changes over time.

You would need some boilerplate code to handle the algorithm described above. Oh, wait – Kubernetes already contains such a system.

Let's say service B has three pods – running Docker containers. Kubernetes allows you to introduce one service B which hides these three pods. In addition, it gives you one virtual IP address. Service A can then use this virtual IP address, and the Kubernetes proxying system routes every request to a specific pod. Cool, isn't it?
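For illustration, here is a minimal sketch with kubectl – the names and ports are made up for this example. Assuming a replication controller called service-b already manages the three pods, you can expose it and read back the virtual IP:

kubectl expose rc service-b --port=80 --target-port=8080
kubectl get service service-b   # the cluster IP column shows the virtual IP that service A can use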

The routing supports either sticky sessions or a round-robin algorithm, depending on the service configuration. The last thing, which is up to you, is to have a clever client for your service, e.g. one implemented with the help of Hystrix, so that it retries and handles timeouts correctly.

External Load Balancing using Kubernetes

This type of load balancing is about routing traffic from external clients to the proper target pods. You have multiple options here.

Buy an AWS/GCE Load Balancer

Load balancing and networking in general are a difficult business: sharing public IPs, protocols for resolving hostnames to IPs and IPs to MAC addresses, high availability of the load balancer itself, and so on. It makes sense to buy a load balancer as a service if you deploy your app at a well-known cloud provider.

That is why Kubernetes services support the LoadBalancer type. Kubernetes then uses the cloud-provider-specific API to configure the load balancer.
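As a sketch – my-app is a made-up replication controller name – exposing a service backed by a cloud load balancer can be as simple as:

kubectl expose rc my-app --port=80 --type=LoadBalancer
kubectl describe service my-app   # shows the external IP once the cloud provider has provisioned the balancer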

This is the simplest way. But what are your options if you are not so lucky and can't do that? You have to deploy your own load balancer. Let's review the existing load balancers which can be deployed on premise.

Kube-proxy as an External Load Balancer

I've already described the mechanism Kubernetes uses for internal load balancing. Is it possible to use it for external load balancing as well? Yes, but there are some limitations.

Kube-proxy does exactly what we want, but it's a very simple TCP load balancer – L4. I detected some failures during performance testing from external clients: the proxy sometimes refused the connection, or did not know the target address of a particular host, and so on. There is no way to configure kube-proxy, e.g. to set up timeouts, and – as it is a TCP load balancer – there is also no way to do retries on the HTTP layer.

Kube-proxy also suffers from performance issues, but Kubernetes 1.1 supports a native iptables mode, so huge progress has been made recently.

All of this is why I recommended having a clever service client in the internal load balancing section at the beginning. I would still recommend kube-proxy as an internal load balancer, but I would use some solid load balancer implementation for external load balancing.

Kubernetes Ingress

The new version 1.1 of Kubernetes introduced the Ingress feature. However, it's still beta.

Ingress is a new kind of Kubernetes resource. It's an API which allows you to define rules describing where to send traffic. For example, you can route traffic from the path http://host/example to a service called example. It's pretty simple.

Ingress itself is not a load balancer. It's just a domain model and a tool to describe the problem. There must be a component which consumes this model and does the actual load balancing; Ingress only helps you as an abstraction. The Kubernetes team has already developed an HAProxy implementation of this load balancing service.
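For illustration, a minimal sketch of the host/example rule mentioned above – the extensions/v1beta1 API version matches the 1.1-era beta, and the names are made up:

kubectl create -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: host          # placeholder hostname from the example above
    http:
      paths:
      - path: /example
        backend:
          serviceName: example   # the target service
          servicePort: 80
EOF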

Service Load Balancer

There is a project from earlier Kubernetes times called service-loadbalancer. It has been under development for about a year and is probably the predecessor of Ingress. This load balancer also uses two levels of abstraction, and any existing load balancer can serve as the implementation; HAProxy is the only current one.

Vulcand

Vulcand is a new load balancer. Its technology fits well because the base configuration is stored in etcd – this allows you to utilize the watch command. Note that Vulcand is an HTTP-only load balancer; TCP support is among the requested features. It's also not integrated with Kubernetes, but here is a simple tutorial.
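For instance, watching for configuration changes under a vulcand namespace in etcd is a single blocking HTTP call – the key prefix is just an example:

curl 'http://127.0.0.1:2379/v2/keys/vulcand/?wait=true&recursive=true'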

Standalone HAProxy

In fact, I finally ended up developing my own load balancer component based on an existing system. It's pretty easy.

There is a Kubernetes Java client from fabric8; you can use it to watch the Kubernetes API, and it's then up to you which existing load balancer you want to leverage and what kind of config file you generate. However, as it's simple, there are already some projects which do the same:

Standalone Nginx

Nginx is a similar technology to HAProxy, so it's easy to develop a component which configures an Nginx load balancer as well. Nginx Plus seems to support Kubernetes directly, but it's a commercial product.
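In both the HAProxy and Nginx cases the component boils down to the same loop: ask the Kubernetes API for the current endpoints and regenerate the proxy configuration. A naive polling sketch – generate-nginx-conf.sh stands for whatever templating you use, and a real component would watch the API instead of polling:

while true; do
  # ask Kubernetes for the current pod addresses behind the 'example' service
  kubectl get endpoints example -o json > /tmp/endpoints.json
  # regenerate the proxy config from the endpoints (hypothetical script)
  ./generate-nginx-conf.sh /tmp/endpoints.json > /etc/nginx/conf.d/example.conf
  # reload nginx without dropping existing connections
  nginx -s reload
  sleep 10
done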

Using CoreOS stack and Kubernetes #2: Why use CoreOS as a Cloud Operating System

In this part, I'd like to deal with the potential benefits of using CoreOS as the operating system in your cloud deployment. You can install Kubernetes on various operating systems, so the choice is yours. So why CoreOS? And what is my experience?

Etcd, Fleet and Flannel Preinstalled

The first reason is obvious: CoreOS always ships recent versions of all the components needed in a Kubernetes cluster.
My experience: we profited from the pre-installed components from the beginning. E.g. in the early stages, when etcd came with its new, beautiful and powerful API (v2), CoreOS shipped both versions – old and new – together, so we just enabled one of them. Setting up all the components together is not trivial, so you can save a couple of hours by choosing CoreOS with everything pre-installed and pre-configured.

No Package Manager, Read-Only Partitions

It sounds more like a disadvantage than a benefit, but…

Look at the CoreOS releases to see what it consists of.

For example, CoreOS includes basic Linux utilities, so you can employ many popular command-line tools, but it's not recommended to install anything else. Take what is installed, and all machines within the cluster can be easily added, removed and/or replaced. All parts of your application are supposed to be distributed as Docker containers.
A CoreOS installation also uses a layout of nine disk partitions. Some of them are read-only, some of them contain the operating system. This forces an administrator to keep mutable data on a dedicated partition, which, again, improves node replaceability.
My experience: this is great for operations. It's a matter of a few seconds to add a new node. However, it's sometimes tough to work with CoreOS when you are used to relying on certain tools, like htop. Speaking of which, nothing prevents you from downloading such tools manually, e.g. via the cloud config.

Online Updates

There is a great update methodology: you can set up a CoreOS node to update itself automatically. What does that mean in practice?
You choose an update channel (alpha, beta, stable) and CoreOS automatically checks for new versions. You can also use the update_engine_client tool to manage updates manually from the command line. This is useful for debugging in the early stages, when you haven't set up updates properly yet and they might fail.
Once the update engine detects a new version, it immediately starts to download the new bytes. There is a notion of active and passive partitions: the current boot runs from the active partition, while the download goes to the passive one.
CoreOS needs a reboot to apply the new version of the operating system. However, consider running a cluster of many, many nodes. What would happen when they all downloaded a new operating system version? They would reboot all together!
This is where the locksmith tool comes in. It uses etcd as persistent storage to implement a simple semaphore for all running – and potentially rebooting – CoreOS nodes. In short, this distributed lock guarantees that only one machine is rebooted at a time.
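A few commands I found handy for checking what the update machinery is doing – a sketch; both tools ship with CoreOS:

update_engine_client -status             # current update state and version
update_engine_client -check_for_update   # trigger a manual check for a new release
locksmithctl status                      # show which node currently holds the reboot lock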
My experience: this is one of the best things about CoreOS. You just subscribe to a channel with a proper reboot strategy and your cluster stays continually up to date – whether it's the Linux kernel, fleet, etcd, a Linux tool or the newly added kubelet.
We have also encountered problems with some new versions of CoreOS. For example, a new version of Go made Docker hang once it finished pulling an image. You can manually roll back or downgrade the CoreOS version; the procedure simply switches the current node to the passive read-only disk partition holding the previous version of CoreOS.

Cloud Configuration File

It's always a pretty long procedure to set up and configure a machine which has just been installed with a fresh operating system. Therefore, CoreOS brings the concept of cloud config files.

The point is to have a single file which contains the whole configuration of a node.

I'll dedicate a separate chapter to this concept. However, it's usual to store the following information in cloud configs:
  • set up CoreOS specifics, e.g. the update channel, the reboot strategy, etc.
  • adjust any systemd service
  • write files, like proxy settings, certificates, etc.
  • set the node hostname
  • configure etcd, fleet, Kubernetes or Docker tools
My experience: it's pretty useful to have one cloud config for the whole cluster. You can put it in some storage, your Git repository or Artifactory. Every node can take this single instance and apply its content during boot, which guarantees that all nodes have the same configuration. A minimal sketch follows below.
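This is a hypothetical, stripped-down cloud-config shared by all nodes – the discovery token, proxy and units are placeholders, not our real setup:

cat > cloud-config.yaml <<'EOF'
#cloud-config
coreos:
  update:
    group: stable
    reboot-strategy: etcd-lock
  etcd2:
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
write_files:
  - path: /etc/profile.d/proxy.sh
    content: |
      export http_proxy=http://proxy.example.com:3128
EOF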
There are a lot of other useful things in CoreOS, but the ones above were the major ones for us. I'd like to dedicate the next article to the installation.

Here is a link to the whole series.

Using CoreOS stack and Kubernetes #1: Introduction

In December 2014, we were lucky enough to join the group of teams using the CoreOS stack and Kubernetes on their way to the next generation of cloud infrastructure. It has been almost one year, so I'd like to provide an article series about our experience with the whole stack.

The Motivation

You usually want to model your business domain, provide useful APIs, break your application into pieces – services – and so on. Well, that's your job.

Distributed computing is one of the most challenging disciplines in computer science. Why is that? Because of the asynchronicity of remote calls among distributed components. There are no locks like in your favorite language; there are only remote calls with no guarantee of any response, or of a response within any given time.

It's pretty challenging to provide a highly available application with no downtime during updates or crashes, an application which scales according to demand, an application which guarantees any kind of data consistency.

What are the typical questions and considerations when you start to build such an app?

  • how can I run exactly 3 instances of a service in my app?
  • how can I detect that some instance has failed?
  • how can I run a new replica instead of the dead one?
  • what if there are more than 3 instances because the dead replica was not so dead after all and is now back in the cluster?
  • what if there are two replicas – the dead one and the new one – processing the same part of the data?
  • how can I guarantee that all replicas see the same configuration?
  • where can service B discover a link to a running service A?
  • where can service B discover a new instance of service A when the first one has failed?
  • how can I install all that mess on one operating system?

I could write many more questions like these. CoreOS and Kubernetes allow you to address a lot of them.

The CoreOS stack and Kubernetes provide a well-tested yet tiny platform for your cluster/cloud infrastructure. You can focus on your business, not on the infrastructure.

Components

Here is a diagram of how all the tools fit together:

  • CoreOS is just a very simple Linux distribution prepared for cluster/cloud deployment.
  • Fleet is responsible for running units (services) on remote nodes.
  • Etcd is a distributed key-value store using the Raft consensus algorithm. Its purpose is to store configuration in namespaces. I've already written some articles about etcd.
  • Flannel provides private networking among the nodes – or Docker containers in this case.
  • Kubernetes uses all of these tools together to provide cluster management. You can describe your application via Kubernetes descriptors and use kubectl or the REST API to run, scale or fail over your app – obviously in cooperation with the application itself. One could say that it's a PaaS for dockerized applications, and one would be right. A quick taste follows below.
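As a first taste of the workflow – the image and names are placeholders, and in the 1.1 era kubectl run creates a replication controller:

kubectl run example --image=nginx --replicas=3   # run 3 replicas of a container
kubectl scale rc example --replicas=5            # scale the replication controller up
kubectl get pods                                 # watch the cluster converge to the desired state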

What should you read to become more familiar with all of this?

If you have only 30 minutes, check out this video:

What’s next?

I'd like to write an article series about our experience with CoreOS and Kubernetes. The next article will deal with the installation.

Here is a link to the whole series.

ETCD: Understanding POST vs. PUT

ETCD is a distributed key-value store used as a core component of CoreOS. I already wrote a post about it earlier this week. There is a page describing how to use the basic ETCD commands – the ETCD API. The code snippets on that page mostly use PUT, but ETCD allows you to use POST as well. Most of us understand the difference between those two commands in the context of a REST(ful) service, but how do they work in a key-value store?

POST

An example is worth many words.

curl -v http://127.0.0.1:2379/v2/keys/test -XPOST -d value="some value"

curl -v http://127.0.0.1:2379/v2/keys/test -XPOST -d value="some value"
Running the same command twice and then reading the /test key results in the following content:

{
  "action": "get",
  "node": {
    "key": "/test",
    "dir": true,
    "nodes": [
      {
        "key": "/test/194",
        "value": "",
        "modifiedIndex": 194,
        "createdIndex": 194
      },
      {
        "key": "/test/195",
        "value": "",
        "modifiedIndex": 195,
        "createdIndex": 195
      }
    ],
    "modifiedIndex": 194,
    "createdIndex": 194
  }
}
So ETCD generates an index value and puts it into the resulting key – which is also the path to the value. For instance:

curl -v http://127.0.0.1:2379/v2/keys/test/194 -XGET

This allows you to get that specific key; the index is explicitly expressed in the URL.

PUT

The PUT command just adds or updates the given key. Let's say I use the following example:
curl -v http://127.0.0.1:2379/v2/keys/test -XPUT -d value="some value"
The resulting content of the test key is as expected:

{
  "action": "get",
  "node": {
    "key": "/test",
    "value": "",
    "modifiedIndex": 198,
    "createdIndex": 198
  }
}

How to Model Add and Update Methods?

My current task is to model and implement a repository using ETCD under the hood. A usual repository contains CRUD methods for a particular set of entities. A reasonable approach is to separate add from update so as not to replace an existing object, e.g. when using optimistic locking.

I don't want to see revision – index – numbers within the keys, so the POST command is not useful here. ETCD provides the prevExist parameter for these use cases.

I want the add method to expect that there is no content under the given key yet. I'll use the following statement:

curl -v http://127.0.0.1:2379/v2/keys/test?prevExist=false -XPUT -d value="some value"

If you haven't deleted the key – as I hadn't – you get the following error:

{
  "errorCode": 105,
  "message": "Key already exists",
  "cause": "/test",
  "index": 198
}

On the other hand, use prevExist=true to express updating an existing entity.

curl -v http://127.0.0.1:2379/v2/keys/test?prevExist=true -XPUT -d value="some value"

This command results in a positive response:
< HTTP/1.1 200 OK

The repository uses PUT for both the add and update methods; the value of prevExist is the only difference.
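Put together, a minimal sketch of the two repository operations in plain curl – etcd on localhost:2379 is assumed and error handling is omitted:

ETCD=http://127.0.0.1:2379/v2/keys

etcd_add() {      # fails with errorCode 105 if the key already exists
  curl -s "$ETCD/$1?prevExist=false" -XPUT -d value="$2"
}

etcd_update() {   # fails with errorCode 100 if the key does not exist yet
  curl -s "$ETCD/$1?prevExist=true" -XPUT -d value="$2"
}

etcd_add test "some value"
etcd_update test "a new value"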

Playing with an ETCD Cluster in Docker Locally

I started to write a management component last week. We would like to utilize CoreOS with its whole stack as much as possible, at least in this early phase of our project.

The core component of our solution is ETCD – a distributed key-value store, something like my favorite piece of software, Redis. The word 'distributed' means that the core of everything within your solution needs to be synchronized, or 'consensused'. ETCD uses Raft. I'd love to know how my component behaves in a real environment where everything can die.

In the age of Docker – where every piece of software is dockerized – it's pretty simple to start an ETCD cluster locally in a second. The following piece of code starts three etcd instances linked together in one cluster.

docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --net=host --name etcd0 quay.io/coreos/etcd:v2.0.3 \
 -name etcd0 \
 -advertise-client-urls http://localhost:2379,http://localhost:4001 \
 -listen-client-urls http://localhost:2379,http://localhost:4001 \
 -initial-advertise-peer-urls http://localhost:2380 \
 -listen-peer-urls http://localhost:2380 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://localhost:2380,etcd1=http://localhost:2480,etcd2=http://localhost:2580

docker run -d -p 4101:4101 -p 2480:2480 -p 2479:2479 --net=host --name etcd1 quay.io/coreos/etcd:v2.0.3 \
 -name etcd1 \
 -advertise-client-urls http://localhost:2479,http://localhost:4101 \
 -listen-client-urls http://localhost:2479,http://localhost:4101 \
 -initial-advertise-peer-urls http://localhost:2480 \
 -listen-peer-urls http://localhost:2480 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://localhost:2380,etcd1=http://localhost:2480,etcd2=http://localhost:2580

docker run -d -p 4201:4201 -p 2580:2580 -p 2579:2579 --net=host --name etcd2 quay.io/coreos/etcd:v2.0.3 \
 -name etcd2 \
 -advertise-client-urls http://localhost:2579,http://localhost:4201 \
 -listen-client-urls http://localhost:2579,http://localhost:4201 \
 -initial-advertise-peer-urls http://localhost:2580 \
 -listen-peer-urls http://localhost:2580 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://localhost:2380,etcd1=http://localhost:2480,etcd2=http://localhost:2580

The inspiration is obvious, but this stuff simply runs everything on your computer. The parameter --net=host provides full transparency from the port & network point of view.

You can now use the following URL in a browser:

http://localhost:4101/v2/keys/?recursive=true

It's also good to check all the members of your cluster – you will kill them later.

http://localhost:2379/v2/members
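To simulate a failure, you can, for instance, stop one of the containers and check that the remaining members still answer – a sketch, assuming the cluster was started as above:

docker stop etcd1                       # kill one member
curl http://localhost:2379/v2/members   # the remaining members still respond
docker start etcd1                      # bring it back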

You can easily delete all keys in the XYZ namespace using curl once you have finished your tests. Note that you can only delete your own keys, so you can't perform the following command on the root namespace.

curl http://127.0.0.1:2379/v2/keys/XYZ?recursive=true -XDELETE

I also prefer to see the HTTP status code, as ETCD uses HTTP status codes.

curl -v http://127.0.0.1:2379/v2/keys/XYZ

In addition to status codes, ETCD always returns a JSON body with its own error codes – see the snippet at the end of the following listing. You can get something similar to:

* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1…
* Connected to localhost (127.0.0.1) port 2379 (#0)
> GET /v2/keys/XYZ HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:2379
> Accept: */*

< HTTP/1.1 404 Not Found
< Content-Type: application/json
< X-Etcd-Cluster-Id: 65a1e86cb62588c5
< X-Etcd-Index: 6
< Date: Sun, 01 Mar 2015 22:55:14 GMT
< Content-Length: 69

{"errorCode":100,"message":"Key not found","cause":"/XYZ","index":6}
* Connection #0 to host localhost left intact

When you are done playing with the ETCD cluster, you will probably want to remove all the etcd containers. I use a simple script which removes every Docker container, but you can improve it with grep to remove only those hosting ETCD.

sudo docker rm -f `docker ps --no-trunc -aq`
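For instance, a grep-based variant – assuming the containers were started from the quay.io/coreos/etcd image as above – might look like this:

sudo docker rm -f $(docker ps -a --no-trunc | grep 'quay.io/coreos/etcd' | awk '{print $1}')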

The last interesting thing is performance. I was reminded of Redis, which can handle a million transactions per second using a single thread. I was surprised that ETCD usually responded in 20-30 ms. Worse, I also encountered client timeouts caused by 400-500 ms response times per request. Raft is obviously not for free. But the purpose of ETCD is massive read scalability. Well, good to know.

Join Our R&D Team!

Do you like new and interesting technologies? The people on our team like using them, but they like building them even more.

We are currently working on a product that simulates the behavior of complex software components in enterprise software solutions. You can imagine our system as the T-1000, the liquid robot from Terminator 2, which can replicate and simulate the behavior of software components.

The product consists of a graphical interface used to control and modify the behavior of the simulation, a scalable simulator able to process thousands of transactions per second, and a set of adapters to enterprise technologies such as SOAP, REST, JMS, MQ, EJB, TIBCO, SAP, Mainframe and others. We are currently working on a new version of the product, which will be delivered as a SaaS cloud solution.

We would like to discuss the possibility of working with you on our product. We are a compact, friendly team of 25 people, from juniors fresh out of school to experienced developers with fifteen years of experience in enterprise software. The system is invented in Prague, while product management is based in Palo Alto, California. Our solution is sold all over the world.

If you want to enjoy your work with us and bring your own vision and ideas into the whole product, we will be happy to talk about expanding our team. If you have friends like that, don't hesitate to recommend them – we will gladly talk to the people you refer as well.

Please contact me with questions or your CV at martin@podval.eu.

Martin Podval