
Using CoreOS stack and Kubernetes #2: Why use CoreOS as Cloud Operating System

In this part I'd like to look at the potential benefits of using CoreOS as the operating system in your cloud deployment. You can install Kubernetes on various operating systems, so there is a decision to make. So why CoreOS? What is my experience?

Etcd, Fleet and Flannel Preinstalled

The first reason is obvious. CoreOS always provides the latest versions of all the components of a Kubernetes cluster.

My experience: we have profited from the pre-installed components from the beginning. E.g. in the early stages, when etcd came out with its beautiful and powerful new API (v2), CoreOS shipped both versions - old and new - together, so we just enabled one of them. The setup of all these components together is not very simple, so you can save a couple of hours by choosing CoreOS with everything pre-installed and pre-configured.
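A minimal sketch of that switch, assuming the etcd.service and etcd2.service systemd units that CoreOS shipped at the time (in practice you would rather enable the unit through the cloud config, shown later):

    # stop the old v1 daemon and start the unit exposing the v2 API
    sudo systemctl stop etcd.service
    sudo systemctl start etcd2.service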

No Package Manager, Read Only Partitions

It sounds more like a disadvantage than a benefit, but ...

Look at the CoreOS releases to see what the system consists of.

For example, CoreOS includes the basic Linux utilities, so you can employ many popular command line tools. But it's not recommended to install anything else. Take what is installed, and all machines within the cluster can be easily added, removed and/or replaced. All parts of your application are supposed to be distributed as docker containers.

A CoreOS installation also uses a concept of nine disk partitions. Some of them are read-only, some of them contain the operating system. This forces an administrator to keep mutable data on one of them. This, again, improves node replaceability.
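You can inspect the layout yourself with standard tools; a small sketch (output omitted, labels may differ between CoreOS versions):

    # list partitions, their labels and whether they are mounted read-only
    lsblk -o NAME,LABEL,RO,MOUNTPOINT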

My experience: this is great for operations. It's a matter of a few seconds to add a new node. However, it's sometimes tough to work with CoreOS when you are used to relying on certain tools, like htop. Speaking of which, there is nothing against downloading such a tool manually anyway, e.g. via the cloud config.
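For instance, a hedged sketch of pulling in a static binary; the URL is a placeholder, and /opt/bin is just the writable path conventionally used for this on CoreOS:

    # drop a statically linked htop into a writable location on the PATH
    sudo mkdir -p /opt/bin
    sudo curl -L -o /opt/bin/htop https://example.com/htop-static
    sudo chmod +x /opt/bin/htop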

Online Updates

There is a great update methodology. You can set up a CoreOS node to update itself automatically. What does that mean in practice?

You choose an update channel (alpha, beta, stable) and CoreOS automatically checks for new versions. You can also use the update_engine_client tool to manage updates manually from the command line. This is useful for debugging in the early stages, when you haven't yet set updates up properly and they might fail.
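A short sketch of the manual workflow, using the flags documented for CoreOS's update engine (output omitted):

    # show the current state of the update engine
    update_engine_client -status
    # force an immediate check for a new version
    update_engine_client -check_for_update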

Once the update engine detects a new version, it immediately starts to download the new bytes. There is a notion of active and passive partitions: the current boot runs from the active partition, while the download goes to the passive one.

CoreOS needs a reboot to apply the new version of the operating system. However, consider running a cluster of many, many nodes. What would happen when they all downloaded a new operating system version? They would all reboot together!

This is where the locksmith tool comes in. It utilizes etcd's persistent storage to implement a simple semaphore for all running, and potentially rebooting, CoreOS nodes. In short, this distributed lock guarantees that only one machine is being rebooted at a time.
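A sketch of poking at the lock with locksmithctl; the machine ID is a hypothetical placeholder:

    # show who currently holds the reboot semaphore
    locksmithctl status
    # allow at most one node to reboot at a time
    locksmithctl set-max 1
    # release a lock left behind by a dead node (hypothetical ID)
    locksmithctl unlock <machine-id>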

My experience: this is one of the best things about CoreOS. You just subscribe to some channel with a proper reboot strategy and your cluster is continually up-to-date - be it the linux kernel, fleet or etcd, a linux tool or the newly added kubelet.
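The subscription itself is a few lines of cloud config; a sketch using the etcd-backed lock as the reboot strategy:

    #cloud-config
    coreos:
      update:
        group: stable            # or alpha / beta
        reboot-strategy: etcd-lock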

We have also encountered problems with one of the new versions of CoreOS. For example, one release brought a new version of golang, and docker started to hang once it finished pulling an image. You can manually roll back or downgrade the CoreOS version: the procedure just switches the current node to the passive, read-only disk partition holding the previous version of CoreOS.
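A hedged sketch of that rollback; the device and partition number are typical but must be verified on the node first:

    # inspect the priorities of the USR-A / USR-B partitions
    cgpt show /dev/sda
    # give the passive partition (here assumed to be /dev/sda4) boot priority
    sudo cgpt prioritize /dev/sda4
    sudo reboot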

Cloud Configuration File

It's always a pretty long procedure to set up and configure a machine that has just been installed with a fresh operating system. Therefore, CoreOS brings in the concept of cloud config files.

The point is to have a single file which contains the whole configuration of a node.

I'll dedicate one chapter to this concept. However, it's usual to store the following information in cloud configs (a small sketch follows the list):

  • CoreOS specifics, e.g. the update channel, reboot strategy etc.
  • adjustments to any systemd service
  • files to write, like proxy settings, certificates etc.
  • the node hostname
  • configuration of the etcd, fleet, kubernetes or docker tools
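A minimal sketch of such a file, assuming the etcd2-era cloud-config syntax; the hostname, discovery token and proxy address are placeholders:

    #cloud-config
    hostname: core-01

    coreos:
      etcd2:
        discovery: https://discovery.etcd.io/<token>
        advertise-client-urls: http://$private_ipv4:2379
        listen-client-urls: http://0.0.0.0:2379
      units:
        - name: etcd2.service
          command: start
        - name: fleet.service
          command: start

    write_files:
      - path: /etc/profile.d/proxy.sh
        content: |
          export HTTP_PROXY=http://proxy.example.com:3128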
My experience: it's pretty useful to have one cloud config for the whole cluster. You can put it in some storage, your git repository or an artifactory. All nodes can take this instance and apply its content during their boot. This guarantees that all nodes have the same configuration.

There are a lot of other useful things in CoreOS, but those above were the major ones. I'd like to dedicate the next article to the installation.

Here is a link to the whole series.