<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Kubernetes :: Cloud Platform Journey</title>
    <link>/k8s_basics/index.html</link>
    <description>Discover Kubernetes Topics:&#xA;Basics Validation of your Kubernetes installation Deployment Artifacts (Distributed) Applications Services Specifics Labels and Selectors Runtime behavior Storage Stateful Sets Helm Charts Templating Dependencies Operators Ingress Controller Nginx Traefik Extensions Service Meshes Linkerd Istio Observability Logs Metrics Traces OpenTelemetry Clean-up Flow The thick lines show the main flow through the chapters. Thinner lines show optional chapters, and dotted lines indicate that finishing the preceding chapters is helpful for understanding the ones that follow.</description>
    <generator>Hugo</generator>
    <language>en-US</language>
    <lastBuildDate>Mon, 01 Jan 0001 00:00:00 +0000</lastBuildDate>
    <atom:link href="/k8s_basics/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Validation</title>
      <link>/k8s_basics/validation/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/validation/index.html</guid>
      <description>Similar to the container exercises, check the health state of the Kubernetes cluster.&#xA;Exercise - kubectl Open your terminal application and run the kubectl command without any parameters to display generic help and the most common options&#xA;kubectl&#xA;Exercise - version information Execute the following command:&#xA;kubectl version&#xA;You will get output similar to the following:&#xA;Client Version: v1.28.4 Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3 Server Version: v1.27.7 This displays the version of the Kubernetes cluster (Server Version) and the version of the client.</description>
    </item>
    <item>
      <title>Deployment Artifacts</title>
      <link>/k8s_basics/deployment/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/deployment/index.html</guid>
      <description>Now that we have validated an operational Kubernetes environment, let’s focus on deploying workloads to it and see how this is reflected in the individual artifacts.&#xA;Exercise - check status of artifacts In the previous exercise you have seen how “kubectl get” generally lists instances of certain object types. You can also combine object types by separating them with commas. E.g. to list the currently running Deployments, ReplicaSets and Pods, type:&#xA;kubectl get deployment,replicaset,pod</description>
    </item>
    <item>
      <title>Applications</title>
      <link>/k8s_basics/applications/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/applications/index.html</guid>
      <description>In the Docker exercises part, you built images for the Spring Boot app and pushed them to the local image store. We will now use those images - but this time from a public image registry - to run the app in K8s.&#xA;Before we start, let’s briefly explain some important objects in K8s that we will use in our next example: Secrets and ConfigMaps.</description>
    </item>
    <item>
      <title>Services</title>
      <link>/k8s_basics/services/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/services/index.html</guid>
      <description>To access the app from outside the Pod, or from between Pods, you need to expose it to the network using a Service.&#xA;Exercise - Inspect YAML files for services As in the previous exercises, there will be complete files for your reference and one with gaps for you to fill out yourself.&#xA;In the exercise directory you will find 3 files to deploy services.&#xA;ls -ltr *-service.yaml</description>
    </item>
    <item>
      <title>Labels and Selectors</title>
      <link>/k8s_basics/labels/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/labels/index.html</guid>
      <description>As you may have noticed, we used a new attribute in our service declaration called selector. By specifying a selector pointing to the label app: todoui, we told the service which Deployments it should route traffic to. In general, labels can be used to attach custom meta information to Kubernetes resources and to group and filter your objects based on your own organizational structures.&#xA;Exercise - filter resources using labels Let’s see how we can use labels to filter the output of kubectl commands. The following statement will list every Kubernetes object labeled with app=todoui:</description>
    </item>
    <item>
      <title>Runtime</title>
      <link>/k8s_basics/runtime/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/runtime/index.html</guid>
      <description>So far the steps done with Kubernetes are very similar to what was set up with plain Docker previously. However, running workloads in Kubernetes offers much more potential in terms of&#xA;automatic recovery from failure&#xA;scaling of instances&#xA;load-balancing between instances&#xA;isolation of failed components&#xA;zero-downtime deployments when patching to a new version&#xA;This exercise will walk through all of those steps on the basis of the deployed application. Please bear with us; all in all this will take a bit longer than the previous exercises.</description>
    </item>
    <item>
      <title>Storage</title>
      <link>/k8s_basics/storage/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/storage/index.html</guid>
      <description>Statefulness The last scenario in the section Runtime with the failing database also uncovered another problem - the handling of state.&#xA;With the help of liveness and readiness probes it is possible to make the application more resilient when it comes to dealing with backend outages. However, this does not protect the database from losing its state when the container crashes.&#xA;Right now the database is being deployed using a construct of Deployment and ReplicaSet objects. However, this construct expects a stateless application workload. When the number of replicas is increased, no state is shared between them, so this will lead to unwanted scenarios whenever the replica count is greater than 1. Which ones in particular?</description>
    </item>
    <item>
      <title>Stateful Sets</title>
      <link>/k8s_basics/statefulsets/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/statefulsets/index.html</guid>
      <description>Statefulness Kubernetes has a concept of StatefulSets (formerly called PetSets, from the Pets vs. Cattle analogy). They also run multiple Pods, but separate the stateful and stateless parts. They can be used to build full-blown clusters running on Kubernetes, where each cluster member has a distinct identity that cannot easily be replaced, most often combined with the use of persistent storage.&#xA;Unfortunately, in the case of PostgreSQL, running a scalable and highly available cluster on Kubernetes is rather complex. As such, let’s start with a simpler example: an Apache ZooKeeper cluster. While doing so, we will utilize all the concepts that we have learned so far, and add some new ones.</description>
    </item>
    <item>
      <title>Helm</title>
      <link>/k8s_basics/helm/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/helm/index.html</guid>
      <description>Intro Helm describes itself as “The package manager for Kubernetes”. An excerpt taken from the official Helm website describing what that means:&#xA;Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.&#xA;Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.&#xA;Helm is a graduated project in the CNCF and is maintained by the Helm community.</description>
    </item>
    <item>
      <title>Operators</title>
      <link>/k8s_basics/operators/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/operators/index.html</guid>
      <description>The PostgreSQL case So, coming back to Postgres now. What would we need to get a persistent, horizontally scalable and possibly also highly available PostgreSQL cluster?&#xA;Storage allows us to persist data, and Stateful Sets allow us to maintain identities and thus roles within a cluster, but unfortunately that is not sufficient yet.&#xA;Horizontal scalability is challenging to implement. Replication is the crucial pillar of horizontal scalability, and PostgreSQL supports unidirectional replication from Primary to (Hot) Standby (formerly Master-Slave Replication), which is enough for many use cases where data is read more often than written. Bidirectional replication (aka Multi-Master Replication), however, is considerably more complex because the system is also responsible for resolving any conflicts that occur between concurrent changes.</description>
    </item>
    <item>
      <title>Ingress Controller</title>
      <link>/k8s_basics/ingress/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/ingress/index.html</guid>
      <description>Generally, Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.&#xA;A Kubernetes Ingress may be configured to give Services externally reachable URLs, load-balance traffic, terminate SSL/TLS, and offer name-based virtual hosting, among other things. An Ingress Controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. Cf. this blog post for a general introduction.</description>
    </item>
    <item>
      <title>Service Meshes</title>
      <link>/k8s_basics/servicemeshes/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/servicemeshes/index.html</guid>
      <description>Background So, we have started to encapsulate our software components into containers (cf. Applications), and Kubernetes takes care of distributing and/or scaling these containers over our cluster nodes, ensuring they are running as defined (cf. Runtime). Inter-process communication is handled via Services, and we have found means of persisting data (cf. Storage and Stateful Sets), helpers for getting our applications deployed following best practices and in an organized manner (cf. Helm and Operators), and tools for routing external access into our cluster (cf. Ingress).</description>
    </item>
    <item>
      <title>Observability</title>
      <link>/k8s_basics/observability/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/observability/index.html</guid>
      <description>In any complex application, at some point something will go wrong. In a microservices application, you need to track what’s happening across dozens or even hundreds of services. On the network level we have already gained some insight making use of what the Service Meshes have to offer. But to make sense of what’s happening, you must collect telemetry from the application(s). Telemetry can be divided into logs, metrics and traces.</description>
    </item>
    <item>
      <title>Clean-up</title>
      <link>/k8s_basics/cleanup/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/cleanup/index.html</guid>
      <description>Delete the app deployments (this will delete the Deployments, including their Pods and ReplicaSets):&#xA;kubectl delete deployment sampleapp sampleapp-subpath&#xA;kubectl delete deployment postgresdb&#xA;kubectl delete deployment todobackend todobackend-v1&#xA;kubectl delete deployment todoui&#xA;kubectl get deployments&#xA;Delete the services associated with your app:&#xA;kubectl delete service sampleapp sampleapp-subpath&#xA;kubectl delete service postgresdb&#xA;kubectl delete service todobackend todobackend-v1&#xA;kubectl delete service todoui&#xA;kubectl delete services zk-cs zk-hs&#xA;kubectl get services&#xA;Delete the horizontal pod autoscaler associated with your app:</description>
    </item>
  </channel>
</rss>