<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Observability :: Cloud Platform Journey</title>
    <link>/k8s_basics/observability/index.html</link>
    <description>In any complex application, at some point something will go wrong. In a microservices application, you need to track what’s happening across dozens or even hundreds of services. At the network level we have already gained some insight by making use of what Service Meshes have to offer. But to make sense of what’s happening, you must collect telemetry from the application(s). Telemetry can be divided into logs, metrics, and traces.</description>
    <generator>Hugo</generator>
    <language>en-US</language>
    <atom:link href="/k8s_basics/observability/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Logs</title>
      <link>/k8s_basics/observability/logs/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/observability/logs/index.html</guid>
      <description>Tip We don’t create anything in this subchapter that will be needed in any subsequent (sub)chapters, so if logs don’t interest you, feel free to skip this subchapter.&#xA;Here are some of the general challenges of logging in a microservices application:&#xA;Understanding the end-to-end processing of a client request, where multiple services might be invoked to handle a single request. Consolidating logs from multiple services into a single aggregated view. Parsing logs that come from multiple sources, which use their own logging schemas or have no particular schema at all. Logs may be generated by third-party components that you don’t control. Microservices architectures often generate a larger volume of logs than traditional monoliths, because there are more services, network calls, and steps in a transaction. That means logging itself can become a performance or resource bottleneck for the application. There are some additional challenges for a Kubernetes-based architecture:</description>
    </item>
    <item>
      <title>Metrics</title>
      <link>/k8s_basics/observability/metrics/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/observability/metrics/index.html</guid>
      <description>We will first focus on node-level metrics and container metrics here, to show what can easily be achieved in our infrastructure. Of course, it is possible to apply instrumentation to our applications in order to also collect application metrics or dependent service metrics (e.g. using Novatec’s inspectIT Ocelot ), but this can quickly become arbitrarily complicated and will be left for another time.&#xA;It is convenient to collect the various metrics in a time-series database such as Prometheus or InfluxDB running in the cluster, or to export them to one of the commercial solution providers such as New Relic or DataDog . In our case, we are using Prometheus , a graduated project of the Cloud Native Computing Foundation , which has already been set up in our cluster (yes, both Linkerd and Istio have their own integrated Prometheus instance, but we are using our own here to show how to set up metrics handling from scratch).</description>
    </item>
    <item>
      <title>Traces</title>
      <link>/k8s_basics/observability/traces/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/observability/traces/index.html</guid>
      <description>If you are using Istio or Linkerd as a Service Mesh, these technologies automatically generate certain correlation headers when HTTP calls are routed through the Service Mesh data plane proxies. We have already investigated Istio’s standard tracing , whereas Linkerd’s tracing first requires specific configuration and thus has not been investigated so far.&#xA;Here we will cover traces independently of any Service Mesh, to showcase the possibilities. We will employ Jaeger for this, a CNCF graduated project that has already been set up in our cluster (yes, both Linkerd and Istio brought their own integrated Jaeger instance already, but let’s roll our own here to showcase handling from scratch). The CNCF also hosts OpenTracing , an incubating project working towards more standardized APIs and instrumentation for distributed tracing; Jaeger is already compatible with OpenTracing, and of course several other contenders - Open Source or commercial - are available.</description>
    </item>
    <item>
      <title>OpenTelemetry</title>
      <link>/k8s_basics/observability/opentelemetry/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/k8s_basics/observability/opentelemetry/index.html</guid>
      <description>Intro OpenTelemetry is:&#xA;an observability framework and toolkit designed for creating and managing telemetry data, including traces , metrics , and logs . vendor- and tool-agnostic, meaning it can be used with a variety of observability backends, including open-source tools such as Jaeger and Prometheus , as well as commercial offerings. not an observability backend like Jaeger, Prometheus, or other commercial vendors. focused on the generation, collection, management, and export of telemetry. One of OpenTelemetry’s main objectives is to make it easy for you to instrument your applications or systems, regardless of the language, infrastructure, or runtime environment used. Storage and visualization of telemetry are intentionally left to other tools. So, unlike in the preceding chapters, we are going to use one generic tool to fulfill the tasks that have been handled by various individual tools before, as OpenTelemetry consists of the following major components:</description>
    </item>
  </channel>
</rss>