Python Application
This document explains how to instrument a Python application with KloudMate using OpenTelemetry. The instrumentation is done with the OpenTelemetry Operator, which supports injecting and configuring auto-instrumentation libraries for .NET, Java, Node.js, Python, and Go services.

Prerequisites

- A running Kubernetes cluster.
- cert-manager must be installed. If you use the Helm chart, there is an option to generate a self-signed cert instead.

Step 1

First, install the OpenTelemetry Operator into your cluster. You can do this with the Operator release manifest, the Operator Helm chart, or via OperatorHub.

```bash
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
```

Step 2

Create an OpenTelemetry Collector so that telemetry from containers is sent to a collector instead of directly to a backend. For example, here is the demo collector used:

```bash
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: demo
  namespace: default # change namespace
spec:
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    processors:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15
      batch:
        send_batch_size: 10000
        timeout: 10s
    exporters:
      debug: {}
      otlphttp:
        endpoint: 'https://otel.kloudmate.com:4318'
        headers:
          Authorization: <KLOUDMATE_API_KEY>
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug, otlphttp]
        metrics:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug, otlphttp]
        logs:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug, otlphttp]
EOF
```

Replace `<KLOUDMATE_API_KEY>` with your KloudMate workspace API key so the collector exports data to KloudMate.
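Before moving on, it can help to confirm that the Operator and collector are running. A quick check, assuming the defaults used above: the release manifest installs the Operator into the `opentelemetry-operator-system` namespace, and the Operator names the Service it generates `<name>-collector` (so `demo-collector` here, matching the endpoint used in Step 3):

```bash
# Operator pods (the release manifest installs into opentelemetry-operator-system)
kubectl get pods -n opentelemetry-operator-system

# Service generated by the Operator for the demo collector
kubectl get svc demo-collector -n default
```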
Step 3

Configure automatic instrumentation. To manage automatic instrumentation, the Operator needs to know which pods to instrument and which auto-instrumentation to use for them. This is configured via the Instrumentation CRD. For our example, we use the Python SDK auto-instrumentation. The following command creates a basic Instrumentation resource configured for instrumenting Python services:

```bash
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: demo-instrumentation
spec:
  exporter:
    endpoint: http://demo-collector:4318
  propagators:
    - tracecontext
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "1"
EOF
```

By default, Python logs auto-instrumentation is disabled; applying the Instrumentation above yields only metrics and traces. To enable logs, you must set the `OTEL_LOGS_EXPORTER` and `OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED` environment variables as needed. The following command creates an Instrumentation resource for metrics, logs, and traces:

```bash
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: demo-instrumentation
spec:
  exporter:
    endpoint: http://demo-collector.default.svc.cluster.local:4318
  propagators:
    - tracecontext
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "1"
  python:
    env:
      - name: OTEL_LOGS_EXPORTER
        value: otlp_proto_http
      - name: OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED
        value: 'true'
EOF
```

The endpoint under `exporter` must match the Service name of the collector created in Step 2. Apply only one of the two Instrumentation resources, depending on your requirements.

Step 4

Now opt your services in to automatic instrumentation. This is done by updating your workload's `spec.template.metadata.annotations` to include the language-specific annotation:

Python: `instrumentation.opentelemetry.io/inject-python: "true"`

To patch an existing Python application with the necessary annotation, use the command below:

```bash
kubectl patch deployment <deployment-name> -n <namespace> -p '{"spec": {"template": {"metadata": {"annotations": {"instrumentation.opentelemetry.io/inject-python": "true"}}}}}'
```
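If you prefer to set the annotation declaratively rather than patching, it belongs under the pod template. A minimal sketch of a Deployment with the annotation in place; the name `my-python-app` and the image are placeholders for your own application:

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-python-app   # placeholder name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-python-app
  template:
    metadata:
      labels:
        app: my-python-app
      annotations:
        # Tells the Operator to inject the Python auto-instrumentation
        instrumentation.opentelemetry.io/inject-python: "true"
    spec:
      containers:
        - name: app
          image: <your-python-image>
EOF
```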
Python Application Logging with File Log Receiver

Python auto-instrumentation captures logs only from certain libraries, so some logs may be missing after instrumentation. To fix this, the filelog receiver is a good choice. Setting it up only requires a few changes to the collector YAML file. An example collector YAML file:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: demo
spec:
  mode: daemonset
  image: otel/opentelemetry-collector-contrib:latest
  # securityContext:
  #   runAsUser: 0
  volumeMounts:
    - name: varlogpods
      mountPath: /var/log/pods
      readOnly: true
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers
      readOnly: true
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
      filelog:
        include:
          - /var/log/pods/*/*/*.log
        exclude:
          # Exclude logs from all containers named otel-collector
          - /var/log/pods/*/otel-collector/*.log
        start_at: end
        include_file_path: true
        include_file_name: false
        operators:
          # Determine the log format used by Kubernetes
          - type: router
            id: get-format
            routes:
              - output: parser-docker
                expr: 'body matches "^\\{"'
              - output: parser-crio
                expr: 'body matches "^[^ Z]+ "'
              - output: parser-containerd
                expr: 'body matches "^[^ Z]+Z"'
          # Parse CRI-O format
          - type: regex_parser
            id: parser-crio
            regex: '^(?P<time>[^ Z]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$'
            output: extract_metadata_from_filepath
            timestamp:
              parse_from: attributes.time
              layout_type: gotime
              layout: '2006-01-02T15:04:05.999999999Z07:00'
          # Parse CRI-Containerd format
          - type: regex_parser
            id: parser-containerd
            regex: '^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$'
            output: extract_metadata_from_filepath
            timestamp:
              parse_from: attributes.time
              layout: '%Y-%m-%dT%H:%M:%S.%LZ'
          # Parse Docker format
          - type: json_parser
            id: parser-docker
            output: extract_metadata_from_filepath
            timestamp:
              parse_from: attributes.time
              layout: '%Y-%m-%dT%H:%M:%S.%LZ'
          # Extract metadata from the file path
          - type: regex_parser
            id: extract_metadata_from_filepath
            regex: '^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]{36})\/(?P<container_name>[^\._]+)\/(?P<restart_count>\d+)\.log$'
            parse_from: attributes["log.file.path"]
            cache:
              size: 128 # default maximum amount of Pods per Node is 110
          # Update body field after all parsing
          - type: move
            from: attributes.log
            to: body
          # Rename attributes
          - type: move
            from: attributes.stream
            to: attributes["log.iostream"]
          - type: move
            from: attributes.container_name
            to: resource["k8s.container.name"]
          - type: move
            from: attributes.namespace
            to: resource["k8s.namespace.name"]
          - type: move
            from: attributes.pod_name
            to: resource["k8s.pod.name"]
          - type: move
            from: attributes.restart_count
            to: resource["k8s.container.restart_count"]
          - type: move
            from: attributes.uid
            to: resource["k8s.pod.uid"]
    processors:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15
      batch:
        send_batch_size: 10000
        timeout: 10s
    exporters:
      debug: {}
      otlphttp:
        endpoint: 'https://otel.kloudmate.com:4318'
        headers:
          Authorization: <KLOUDMATE_API_KEY>
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug, otlphttp]
        metrics:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug, otlphttp]
        logs:
          receivers: [otlp, filelog]
          processors: [memory_limiter, batch]
          exporters: [debug, otlphttp]
  volumes:
    - name: varlogpods
      hostPath:
        path: /var/log/pods
    - name: varlibdockercontainers
      hostPath:
        path: /var/lib/docker/containers
```

The YAML configuration above sets up a filelog receiver that collects pod logs and exports them to KloudMate; it also exports metrics and traces to KloudMate. If you use the filelog receiver to export logs instead of auto-instrumentation, ensure that auto-instrumentation logging is disabled; otherwise, duplicate logs may appear. (You can still use the basic Instrumentation resource for metrics and traces.)
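To confirm that logs are flowing, you can tail the collector itself; with the `debug` exporter enabled as above, received telemetry is echoed to the collector's own logs. This assumes the DaemonSet generated for the `demo` collector follows the Operator's `<name>-collector` naming and runs in the `default` namespace:

```bash
# Tail the collector's output; debug-exported log records should appear here
kubectl logs daemonset/demo-collector -n default --tail=50
```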