
Posts

Showing posts from 2015

Building a Go project (Kubernetes Ingress) from scratch with no Go experience

I have been working with Kubernetes and I wanted to build its contrib repository yesterday. However, the nginx implementation of Kubernetes' Ingress is written in Go. Even though I only needed to change a const string, that meant recompilation. Go is not Java, and the Go build system is not Maven. Setting up the environment was not straightforward. I ran into a couple of troubles, but let me take it from the beginning. My laptop runs Ubuntu 15.04 - well, 15.10 since 9pm :-) - and I had never installed Go before. Go installation on Ubuntu First of all, you need to install Go. You can use the official repo, but it contains the older version 1.3. Do not install it using apt-get, as Kubernetes or its dependencies require a higher version of Go. Of course, I originally installed version 1.3, but a fatal error occurred later and forced me to do the manual installation anyway. Here is a simple tutorial. The last two lines affect only the current termin
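For completeness, the manual installation boils down to a few shell commands. This is only a sketch - the Go version (1.5.1 here) and the tarball URL are my assumption, so use whatever version Kubernetes currently requires:

# download the official tarball and unpack it into /usr/local
wget https://storage.googleapis.com/golang/go1.5.1.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.5.1.linux-amd64.tar.gz
# the two exports (presumably the "last two lines" the tutorial mentions) affect only
# the current terminal, so append them to ~/.profile to make them permanent
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/go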

Using CoreOS stack and Kubernetes #2: Why use CoreOS as Cloud Operating System

In this part I'd like to cover the potential benefits of using CoreOS as the operating system in your cloud deployment. You can install Kubernetes on various operating systems, so you have a real choice to make. So why CoreOS? What is my experience? Etcd, Fleet and Flannel Preinstalled The first reason is obvious: CoreOS always provides the latest versions of all the components in a Kubernetes cluster. My experience: we have profited from the pre-installed components from the beginning. For example, in the early stages, when etcd came with its new and powerful API (v2), CoreOS shipped both the old and the new version together, so we just enabled one of them. Setting up all the components together is not simple, so you can save a couple of hours by choosing a pre-installed and pre-configured CoreOS. No Package Manager, Read-Only Partitions It sounds more like a disadvantage than a benefit, but ... look at what a CoreOS release consists of. For example, CoreOS includes basic

Using CoreOS stack and Kubernetes #1: Introduction

In December 2014 we were lucky enough to join the group of teams who use the CoreOS stack and Kubernetes on their way to the next generation of cloud infrastructure. It has been almost one year, so I'd like to provide an article series about our experience with the whole stack. The Motivation You usually want to model your business domain, provide useful APIs, break your application into pieces, services, and so on. Well, that's your job. Distributed computing is one of the most challenging disciplines in computer science. Why is that? Because of the asynchronicity introduced by remote calls among distributed components. There are no locks like in your favorite languages; there are only remote calls with no guarantee of any response, at any time. It's pretty challenging to provide a highly available application with no downtime during updates or crashes, an application that scales according to the needs, an application that guarantees data consistency.

Apache Kafka Presentation for CZJUG

Apache Kafka is a famous technology these days. While it looks like an almost traditional messaging system from the user's point of view, it also supports scalability, high throughput and failover. I've already written an article about it. The guys from the Czech Java User Group gave me a chance to give a talk about Kafka. Here is a video from the talk, in Czech. The slides are also published on SlideShare.

Designing Key/Value Repository API with Java Optional

I spent some time last month defining our repository API. A repository is a component commonly used by the service layer in your application to persist data. In the era of polyglot persistence, we use the repository design discussed in this article to persist the business domain model, designed according to (our experience with) domain driven design. Lessons Learned We have plenty of experience here since we used NHibernate as a persistence framework in an earlier product version. The first, naive idea consisted of allowing programmers to write queries to the database on their own. Unfortunately, the idea failed soon. This scenario relied heavily on the belief that every programmer knows how persistence and databases work and wants to write those queries effectively. It inevitably produced error-prone and inefficient queries. Essentially, nobody was responsible for the repositories because everyone contributed to them; the persistence components were just a framework. The whole experience implies

Java, Docker, Spring boot ... and signals

I spent the last couple of weeks working on Java apps running within Docker containers deployed on clustered CoreOS machines. It's pretty simple to run a Java app within a Docker container. You just have to choose a base image for your app and write a Dockerfile. Note that the Docker registry contains many Java distributions, usually based on OpenJDK. We use our internal image for Oracle's Java 8, built on top of something like this docker file. Once you decide between Oracle and OpenJDK, you can start to write your own Dockerfile:

FROM dockerfile/java:oracle-java8
ADD your.jar /opt/your-app
ADD /dependencies /opt/your-app/dependency
WORKDIR /opt/your-app
CMD ["java", "-jar", "/opt/your-app/your.jar"]

However, your app will probably require some parameters. Therefore, the last line usually calls your shell script. Such a script then validates the number and format of those parameters, among other things. This is also useful during the development phase because none of us
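For illustration, here is a minimal sketch of such a wrapper script; the name run.sh and the parameter check are hypothetical, not taken from our real setup. One detail worth noting, since the post is about signals: exec replaces the shell with the JVM, so the Java process becomes PID 1 in the container and receives SIGTERM from docker stop directly instead of the shell swallowing it.

#!/bin/bash
set -e
# validate parameters before starting the app (hypothetical check)
if [ "$#" -lt 1 ]; then
  echo "usage: run.sh <config-file> [options...]" >&2
  exit 1
fi
# exec replaces this shell with the java process, so the JVM itself gets SIGTERM/SIGINT
exec java -jar /opt/your-app/your.jar "$@"

The Dockerfile above would then end with something like CMD ["/opt/your-app/run.sh", "config.yaml"] instead of calling java directly.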

ETCD: POST vs. PUT understanding

ETCD is a distributed key-value store used as a core component in CoreOS. I already sent a post about it earlier this week. Here is a page describing how to use the basic ETCD commands, i.e. the ETCD API. The code snippets on that page mostly use PUT, but ETCD allows POST as well. Most of us understand the difference between those two verbs in the context of a REST(ful) service, but how does it work in a key-value store? POST An example is worth many words.

curl -v http://127.0.0.1:2379/v2/keys/test -XPOST -d value="some value"
curl -v http://127.0.0.1:2379/v2/keys/test -XPOST -d value="some value"

Running the same command twice results in the following content:

{ "action" : "get" , "node" : { "key" : "/test" , "dir" : true , "nodes" : [ { "key" : "/test/194" , "value" : &
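To make the contrast explicit, here is a small sketch assuming the etcd v2 HTTP API; the key names /config/name and /queue are made up for illustration:

# PUT writes to exactly the key you name; running it twice simply overwrites the value
curl http://127.0.0.1:2379/v2/keys/config/name -XPUT -d value="first"
curl http://127.0.0.1:2379/v2/keys/config/name -XPUT -d value="second"
# POST treats the key as a directory and creates a new in-order child on every call,
# which is why the two identical POSTs above ended up as separate nodes under /test
# (e.g. /test/194 and another one with a higher index)
curl http://127.0.0.1:2379/v2/keys/queue -XPOST -d value="job-1"
curl http://127.0.0.1:2379/v2/keys/queue -XPOST -d value="job-2"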

Playing with ETCD cluster in Docker on Local

I started to write a management component last week. We would like to utilize CoreOS with the whole stack as much as possible, at least in this early phase of our project. The core component of our solution is ETCD, a distributed key-value store - something like my favorite piece of software, Redis. The word 'distributed' means that the core of everything within your solution needs to be synchronized, or rather to reach consensus; ETCD uses Raft. I'd love to know how my component will behave in a real environment where everything can die. In the age of Docker, where every piece of software is dockerized, it's pretty simple to start an ETCD cluster locally in a second. The following piece of code starts three etcd instances linked together in one cluster.

docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --net=host --name etcd0 quay.io/coreos/etcd:v2.0.3 \
 -name etcd0 \
 -advertise-client-urls http://localhost:2379,http://localhost:4001 \
 -listen-cli
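Once all three instances are running, a quick sanity check might look like this (a sketch assuming the etcd v2 API on the default client port 2379; not part of the original setup):

# list the cluster members
curl http://127.0.0.1:2379/v2/members
# or ask etcdctl about the overall health
etcdctl cluster-health
# write a key through one member and read it back
curl http://127.0.0.1:2379/v2/keys/hello -XPUT -d value="world"
curl http://127.0.0.1:2379/v2/keys/hello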

Performance Battle of NoSQL blob storages #3: Redis

We have already measured performance stats for Apache Cassandra and Apache Kafka. Including Redis in a comparison of persistent storages could seem like a misunderstanding at first sight. On the other hand, there are certain use cases that allow us to think about storing data in main memory, especially in private data centers - primarily once your cluster includes a machine whose hard drive and RAM are almost the same size :-) Redis is an enterprise, or advanced, key-value store with optional persistence. There are a couple of reasons why everyone loves Redis. Why do I? 1. It's pretty simple. The following command installs the Redis server on Ubuntu. That's all. apt-get install redis-server 2. It's incredibly fast. Look at the following tables: one million remote operations per second. 3. It supports a large set of commands. More than some kind of database, it's rather an enterprise remote-aware hash-map, hash-set, sorted list or pub/sub channel, so
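If you want to reproduce numbers of this kind on your own machine, Redis ships with a benchmark tool; the parameters below are just an example, not the setup used for the tables above:

# check the server is up
redis-cli ping
# 100k requests, 50 parallel clients, SET and GET commands only
redis-benchmark -n 100000 -c 50 -t set,get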