conferences, kubernetes

KubeCon and CloudNativeCon 2018 EU – short and delayed summary

TL;DR:

  • really good conference
  • hot topics:
    • security
    • service mesh
    • custom controllers and operators
  • cool afterparty!
  • watch all the videos on this YouTube channel

KubeCon 2018

Yes, I should have written this post 3 weeks ago, but because of the conference itself, I was busy doing some learn&code in my “free” time. So, because there are already multiple good summaries of KubeCon EU 2018 on the Internet, let me just give you a summary of my personal opinions, notes, and learnings from the conference.

KubeCon 2018 – crowd in the hallway

General impressions

In short: it was totally worth going there. This conference was really all about tech. In general, all the sessions that I attended, including keynotes, were strictly technical and well prepared. The “hallway track” is also really great: you can learn a lot by talking with other people and hearing about their experience. You can also talk to – and learn from – some of the community “stars”. I myself had an opportunity to talk to Kelsey Hightower and Tim Hockin.

KubeCon – keynotes session

The conference was also kind of crazy. There were over 4000 people, and sometimes there were as many as 8 parallel tracks during breakout sessions. I highly recommend just checking the YouTube channel, where all the videos – over 350 of them! – are available. One last thing: the afterparty was really cool! It was held in Tivoli Gardens, an amusement park in the center of Copenhagen. There was plenty of space, time and food to have a good time and talk with different people.

KubeCon – Tivoli park after dark

Hot Kubernetes topics

Some of the main topics kept recurring during the conference, and it was easy to spot that there is a lot of work going on in these areas. Here are the “hottest” ones:

Service mesh

Yes, service mesh is the “next big thing” and the Istio project is really hot now. In general, you could get the impression that Istio is the new Kubernetes: everyone has it, everyone runs it, and it’s time to move past that, integrate it with other tools and see how the concept can be exploited further. But it’s not moving that fast. A few companies already admitted to running Istio in production, but the majority are only testing and evaluating it. I think this matches common sense. The idea of a service mesh is to have a separate layer built on top of the “normal” network stack, dedicated to managing traffic between the different services running in your cluster and/or belonging to your application. This management can include things like:

  • request routing (canary deployments: please route 5% of my traffic to pods with label v2 and the rest to the stable deployment that has label v1 – see the sketch after this list)
  • monitoring and visibility (gather traffic and request details with per-service granularity, like Istio with Grafana)
  • improved security (only services A and B are allowed to direct traffic to service C)
  • automated dependency discovery (like Istio with Servicegraph)
  • request tracing (like Istio with Jaeger)
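
To make the canary-routing idea from the first bullet more concrete, here is a tiny Go sketch of what a mesh proxy conceptually does: split traffic between two backends by weight. This is only an illustration of the concept – with Istio you declare the same thing in a routing rule instead of writing code – and the service URLs below are made up for the example.

```go
package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical in-cluster addresses for the stable and canary versions.
	v1, _ := url.Parse("http://my-service-v1.default.svc.cluster.local")
	v2, _ := url.Parse("http://my-service-v2.default.svc.cluster.local")

	stable := httputil.NewSingleHostReverseProxy(v1)
	canary := httputil.NewSingleHostReverseProxy(v2)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Send roughly 5% of requests to the canary (v2), the rest to stable (v1).
		if rand.Float64() < 0.05 {
			canary.ServeHTTP(w, r)
			return
		}
		stable.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A service mesh does exactly this kind of weighted routing for you, per service, without touching the application code.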

All these possibilities are great – for me, especially monitoring and tracing, which can almost instantly improve your understanding of how services behave on their own and in relation to others. Still, let’s remember the idea is pretty fresh. It is wise to check and evaluate it on your own, but then you have to decide whether the increased complexity of your tech stack is worth the benefits you’re expecting. I highly recommend going through Istio’s getting started tutorial. Also, keep in mind that while Istio is currently the leading implementation of the service mesh idea, it’s not the only one – there are already new projects like Conduit.

Security

Another hot topic: a lot of effort is going into security, especially better container isolation and providing cloud-native identities for services. Projects worth mentioning that I heard about during the conference are:

  • SPIFFE/SPIRE: a specification and its implementation for providing strong identities to microservices
  • gVisor: this sounded almost incredible to me: a team from Google implemented a lightweight microkernel in userspace using Go. This microkernel is compatible (to some extent) with the normal Linux kernel, but provides an isolation layer for system calls that sits between a container and the operating system’s kernel. This has its own drawbacks (compatibility, performance cost), but sounds like a great way to improve container security.
  • Kata Containers, which just released version 1.0. It’s another approach to strong container isolation, this time built around very lightweight virtual machines and a runtime to run them. Every container runs in its own lightweight virtual machine, and the whole runtime is OCI-compatible, so the isolation is transparent to the user.

Custom controllers and operators

This topic caught my personal attention. I thought that Kubernetes’ controller mechanisms were somewhat monolithic and highly integrated parts of the orchestrator. I was wrong. Basically, every object you can create through the Kubernetes API has its own controller that is responsible for handling that particular type of resource and making sure that the state of the resources in the cluster matches what is configured. As an example, think about the ReplicaSet controller. A ReplicaSet defines the number of pods that are required to run in the cluster and a label selector that allows checking how many of them are running right now. The whole point of the ReplicaSet controller is to check how many pods matching the selector are currently available in the system and compare that to the value defined in the ReplicaSet object. If there are too many pods, some of them are stopped; if there are not enough, new ones are created using the Pod Template from the ReplicaSet definition.
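
To make this control loop more tangible, here is a tiny, self-contained Go sketch of the compare-and-converge logic described above. It is of course not the real ReplicaSet controller – the in-memory “cluster” below stands in for what would be Kubernetes API calls – but the reconciliation pattern is the same:

```go
package main

import "fmt"

// cluster is a toy in-memory stand-in for cluster state; a real controller
// would query the Kubernetes API instead.
type cluster struct {
	pods []string
}

// reconcile compares the desired replica count with the actual number of pods
// and converges toward the desired state, like a (very simplified) ReplicaSet
// controller.
func reconcile(c *cluster, desired int) {
	actual := len(c.pods)
	switch {
	case actual < desired:
		// Not enough pods: create the missing ones (from the pod template).
		for i := actual; i < desired; i++ {
			c.pods = append(c.pods, fmt.Sprintf("pod-%d", i))
		}
	case actual > desired:
		// Too many pods: delete the surplus.
		c.pods = c.pods[:desired]
	}
	// If actual == desired there is nothing to do.
}

func main() {
	c := &cluster{pods: []string{"pod-0"}}
	reconcile(c, 3)
	fmt.Println("after scale up:", c.pods) // [pod-0 pod-1 pod-2]
	reconcile(c, 1)
	fmt.Println("after scale down:", c.pods) // [pod-0]
}
```

The real controller does the same comparison, only against the Kubernetes API, and it re-runs the loop whenever the relevant objects change.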

The idea is that you can easily provide your own controllers. A controller is just a process that needs access to the Kubernetes API and monitors the state of some API objects, as sketched below. It can either control already defined API objects (like Pods or Deployments), or you can define your own resource types, called Custom Resource Definitions. This second approach leads to a new pattern for creating controllers called operators, an idea mainly started by the CoreOS team. An operator is an application-specific controller which can create and manage an application in a way that is tuned for that particular application. This can include functions like being able to define a backup policy in the same way as any other object in the cluster, like a Pod. Be sure to check the etcd, Prometheus or Vault operators as examples of this approach. The cool thing is that CoreOS created a library called the Operator SDK, which makes implementing operators easier.
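
As an illustration of what such a “process with API access” can look like, here is a minimal client-go sketch that simply watches Pods and logs the events it sees – the skeleton on top of which real reconciliation logic would be built. I’m assuming the client-go API that was current around the time of the conference (newer releases also take a context.Context in List/Watch calls), and reading the kubeconfig path from the KUBECONFIG environment variable is just a convention chosen for this example:

```go
package main

import (
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig; a controller running inside
	// the cluster would typically use rest.InClusterConfig() instead.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Watch Pods in the default namespace and react to every event.
	watcher, err := clientset.CoreV1().Pods("default").Watch(metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for event := range watcher.ResultChan() {
		pod, ok := event.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		// A real controller would reconcile state here instead of just logging.
		log.Printf("event %s for pod %s/%s", event.Type, pod.Namespace, pod.Name)
	}
}
```

An operator is the same pattern, just watching a Custom Resource Definition instead of Pods and encoding application-specific knowledge in the reaction to each event.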

After getting back from KubeCon, I really wanted to check how this works in practice. I started by trying to understand what the basic elements and control logic of a controller are. You can read about my findings in this previous article. But recently I also spent some time trying to implement a Netperf Operator using the Operator SDK. My aim is to create an operator that runs pod-to-pod network performance tests for you. I’m really close to completing it, so stay tuned – a new blog entry about what I learned in the process is on the way.

Other interesting stuff

  • I had no idea how many useful and interesting open source projects related to Kubernetes the team from Zalando has! Be sure to check them out! At the conference, they showed how they manage over 160 Kubernetes clusters with their tools.
  • If you want to learn about all the projects currently under the CNCF and what maturity level they are at, check the CNCF Landscape page.