Using CoreOS stack and Kubernetes #2: Why use CoreOS as Cloud Operating System

In this part, I'd like to look at the potential benefits of using CoreOS as the operating system in your cloud deployment. You can install Kubernetes on various operating systems, so the choice is yours. So why CoreOS? What is my experience?

Etcd, Fleet and Flannel Preinstalled

The first reason is obvious: CoreOS always ships recent versions of all the components a Kubernetes cluster needs — etcd, fleet and flannel come preinstalled.

My experience: we have profited from the pre-installed components from the beginning. For example, in the early stages when etcd came out with its new, more powerful v2 API, CoreOS shipped both the old and the new version side by side, so we just enabled the one we wanted. Setting up all these components together is not trivial, so you can save a couple of hours by choosing CoreOS with everything preinstalled and preconfigured.
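
To illustrate, here is a minimal sketch of what that switch looked like on a node (unit names as CoreOS shipped them at the time; in practice you would usually drive this from the cloud config rather than by hand):

# both generations of etcd ship with the image; start only the one your cluster uses
sudo systemctl stop etcd.service
sudo systemctl start etcd2.service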

No Package Manager, Read Only Partitions

It sounds more like a disadvantage than a benefit, but...

Look at the CoreOS releases to see what the system consists of.

For example, CoreOS includes the basic Linux utilities, so you can use many popular command line tools. However, installing anything else is not recommended. Take what is installed, and all machines within the cluster can easily be added, removed and replaced. All parts of your application are supposed to be distributed as Docker containers.

A CoreOS installation also uses a layout of nine disk partitions. Some of them are read-only, some of them contain the operating system, and mutable data is confined to a dedicated one. This, again, improves node replaceability.

My experience: this is great for operations. Adding a new node is a matter of a few seconds. However, it's sometimes tough to work on CoreOS when you are used to relying on certain tools, like htop. That said, nothing stops you from downloading such a tool manually, e.g. via the cloud config, or simply running it from a container, as sketched below.
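
For instance, since nothing extra gets installed on the host, an ad-hoc debugging tool can be run from a throwaway container instead (a sketch only; the alpine image and its busybox top are used here purely as an illustration):

# run a process viewer from a container instead of installing it on the host
docker run --rm -it --pid=host alpine top

CoreOS also ships a small toolbox helper that starts a stock container with common debugging utilities for exactly this purpose.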

Online Updates

CoreOS has a great update methodology. You can set up a node to update itself automatically. What does that mean in practice?

You choose an update channel (alpha, beta, stable) and CoreOS automatically checks for new versions. You can also use the update_engine_client tool to manage updates manually from the command line. This is useful for debugging in the early stages, when updates are not yet set up properly and may fail.
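
A few commands I found handy when checking what the engine is doing (a sketch; flags as provided by update_engine_client):

update_engine_client -status             # current state of the update engine
update_engine_client -check_for_update   # trigger a version check right now
journalctl -u update-engine -f           # follow the engine's log while debugging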

Once the update engine detects a new version, it immediately starts downloading it. There is a notion of active and passive partitions: the current system runs from the active partition, while the download goes to the passive one.

CoreOS needs a reboot to apply the new version of the operating system. However, consider a cluster of many nodes. What would happen once they all downloaded a new operating system version? They would all reboot at once!

This is where the locksmith tool comes in. It uses etcd as persistent storage for a simple semaphore shared by all running, potentially rebooting CoreOS nodes. In short, this distributed lock guarantees that only one machine is rebooting at a time.
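
You can inspect and tune that semaphore with locksmithctl (a sketch; the machine id below is just a placeholder):

locksmithctl status                # show how many reboot slots exist and who holds them
locksmithctl set-max 1             # allow at most one rebooting machine at a time
locksmithctl unlock <machine-id>   # release a lock left behind by a dead node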

My experience: this is one of the best things about CoreOS. You just subscribe to a channel with a proper reboot strategy and your cluster stays continually up to date — whether it's the Linux kernel, fleet, etcd, a Linux tool or the newly added kubelet.

We have also encountered problems with new versions of CoreOS. For example, one release came with a new version of Go, and Docker started to hang after finishing an image pull. In such cases you can manually roll back or downgrade the CoreOS version: the procedure simply switches the node back to the passive read-only disk partition that still holds the previous version of CoreOS, as sketched below.
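
A manual rollback essentially tells the bootloader to prefer the other /usr partition again (a sketch; it assumes the previous image sits on /dev/sda4 — check with cgpt show first):

cgpt show /dev/sda               # lists USR-A / USR-B and their boot priorities
sudo cgpt prioritize /dev/sda4   # assumption: the passive partition holding the old version
sudo reboot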

Cloud Configuration File

Setting up and configuring a machine that has just been installed with a fresh operating system is always a lengthy procedure. Therefore, CoreOS brings the concept of cloud config files.

The point is to have a single file that contains the whole configuration of a node.

I'll dedicate a separate chapter to this concept. However, cloud configs usually contain the following information (a minimal example follows the list):

  • set up CoreOS specifics, e.g. the update channel and reboot strategy
  • adjust any systemd service
  • write files, like proxy settings, certificates, etc.
  • set the node hostname
  • configure etcd, fleet, Kubernetes or Docker tools
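
Here is a minimal sketch of such a cloud-config; the hostname, proxy address and discovery token are placeholders you would replace with your own values:

#cloud-config

hostname: node-01

coreos:
  update:
    group: stable
    reboot-strategy: etcd-lock
  etcd2:
    # placeholder token; generate your own at https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start

write_files:
  - path: /etc/profile.d/proxy.sh
    permissions: "0644"
    content: |
      export http_proxy=http://proxy.example.com:3128
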
My experience: it's pretty useful to have one cloud config for the whole cluster. You can put it in some storage, your git repository or an artifactory. Every node can fetch this single instance and apply its content during boot, which guarantees that all nodes have the same configuration.

There are a lot of other useful things in CoreOS, but the ones above were the major ones for us. I'd like to dedicate the next article to the installation.

Here is a link to the whole series.
