No /tmp/secret/api.login present. This is not an HCP or ROSA cluster. Continue using $KUBECONFIG env path.
Cloning into '/tmp/otel-tests'...
Switched to a new branch 'rhosdt-3-3-interop'
branch 'rhosdt-3-3-interop' set up to track 'origin/rhosdt-3-3-interop'.
Warning: resource configmaps/cluster-monitoring-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
configmap/cluster-monitoring-config configured
Version: v0.2.6
No configuration provided but found default file: .chainsaw.yaml
Loading config (.chainsaw.yaml)...
- Using test file: chainsaw-test
- TestDirs [tests/e2e tests/e2e-autoscale tests/e2e-openshift tests/e2e-prometheuscr tests/e2e-instrumentation tests/e2e-pdb tests/e2e-opampbridge tests/e2e-targetallocator]
- SkipDelete false
- FailFast false
- ReportFormat 'XML'
- ReportName 'junit_otel_e2e'
- ReportPath '/logs/artifacts'
- Namespace ''
- FullName false
- IncludeTestRegex ''
- ExcludeTestRegex ''
- ApplyTimeout 15s
- AssertTimeout 6m0s
- CleanupTimeout 5m0s
- DeleteTimeout 5m0s
- ErrorTimeout 5m0s
- ExecTimeout 15s
- DeletionPropagationPolicy Background
- Parallel 4
- NoCluster false
- PauseOnFailure false
Loading tests...
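The `cluster-monitoring-config` patch applied above is what turns on user-workload monitoring, which is why the `openshift-user-workload-monitoring` workloads asserted later in the log come up. A minimal sketch of an equivalent ConfigMap (this follows the documented OpenShift pattern; it is not a copy of the test repo's manifest, which may set additional fields):

```
# Sketch: enable user-workload monitoring on an OpenShift cluster.
oc apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF
```

Once this is applied, the cluster creates the `prometheus-operator` Deployment and the `prometheus-user-workload` and `thanos-ruler-user-workload` StatefulSets that step-00 waits for.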
- autoscale (tests/e2e-autoscale/autoscale)
- instrumentation-apache-httpd (tests/e2e-instrumentation/instrumentation-apache-httpd)
- instrumentation-apache-multicontainer (tests/e2e-instrumentation/instrumentation-apache-multicontainer)
- instrumentation-dotnet (tests/e2e-instrumentation/instrumentation-dotnet)
- instrumentation-dotnet-multicontainer (tests/e2e-instrumentation/instrumentation-dotnet-multicontainer)
- instrumentation-dotnet-musl (tests/e2e-instrumentation/instrumentation-dotnet-musl)
- instrumentation-go (tests/e2e-instrumentation/instrumentation-go)
- instrumentation-java (tests/e2e-instrumentation/instrumentation-java)
- instrumentation-java-multicontainer (tests/e2e-instrumentation/instrumentation-java-multicontainer)
- instrumentation-java-other-ns (tests/e2e-instrumentation/instrumentation-java-other-ns)
- instrumentation-nginx (tests/e2e-instrumentation/instrumentation-nginx)
- instrumentation-nginx-contnr-secctx (tests/e2e-instrumentation/instrumentation-nginx-contnr-secctx)
- instrumentation-nginx-multicontainer (tests/e2e-instrumentation/instrumentation-nginx-multicontainer)
- instrumentation-nodejs (tests/e2e-instrumentation/instrumentation-nodejs)
- instrumentation-nodejs-multicontainer (tests/e2e-instrumentation/instrumentation-nodejs-multicontainer)
- instrumentation-python (tests/e2e-instrumentation/instrumentation-python)
- instrumentation-python-multicontainer (tests/e2e-instrumentation/instrumentation-python-multicontainer)
- instrumentation-sdk (tests/e2e-instrumentation/instrumentation-sdk)
- opampbridge (tests/e2e-opampbridge/opampbridge)
- kafka (tests/e2e-openshift/kafka)
- monitoring (tests/e2e-openshift/monitoring)
- multi-cluster (tests/e2e-openshift/multi-cluster)
- otlp-metrics-traces (tests/e2e-openshift/otlp-metrics-traces)
- route (tests/e2e-openshift/route)
- scrape-in-cluster-monitoring (tests/e2e-openshift/scrape-in-cluster-monitoring)
- pdb (tests/e2e-pdb/pdb)
- target-allocator (tests/e2e-pdb/target-allocator)
- create-pm-prometheus-exporters (tests/e2e-prometheuscr/create-pm-prometheus-exporters)
- create-sm-prometheus-exporters (tests/e2e-prometheuscr/create-sm-prometheus-exporters)
- targetallocator-kubernetessd (tests/e2e-targetallocator/targetallocator-kubernetessd)
- targetallocator-prometheuscr (tests/e2e-targetallocator/targetallocator-prometheuscr)
- daemonset-features (tests/e2e/daemonset-features)
- env-vars (tests/e2e/env-vars)
- ingress (tests/e2e/ingress)
- ingress-subdomains (tests/e2e/ingress-subdomains)
- managed-reconcile (tests/e2e/managed-reconcile)
- multiple-configmaps (tests/e2e/multiple-configmaps)
- node-selector-collector (tests/e2e/node-selector-collector)
- prometheus-config-validation (tests/e2e/prometheus-config-validation)
- smoke-daemonset (tests/e2e/smoke-daemonset)
- smoke-pod-dns-config (tests/e2e/smoke-dns-config)
- smoke-init-containers (tests/e2e/smoke-init-containers)
- smoke-pod-annotations (tests/e2e/smoke-pod-annotations)
- smoke-pod-labels (tests/e2e/smoke-pod-labels)
- smoke-ports (tests/e2e/smoke-ports)
- smoke-restarting-deployment (tests/e2e/smoke-restarting-deployment)
- smoke-shareprocessnamespace (tests/e2e/smoke-shareprocessnamespace)
- smoke-sidecar (tests/e2e/smoke-sidecar)
- smoke-sidecar-other-namespace (tests/e2e/smoke-sidecar-other-namespace)
- smoke-simplest (tests/e2e/smoke-simplest)
- smoke-simplest-v1beta1 (tests/e2e/smoke-simplest-v1beta1)
- smoke-statefulset (tests/e2e/smoke-statefulset)
- smoke-targetallocator (tests/e2e/smoke-targetallocator)
- statefulset-features (tests/e2e/statefulset-features)
- versioned-configmaps (tests/e2e/versioned-configmaps)
Loading values...
Running tests...
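The configuration echoed above corresponds to a chainsaw invocation along these lines (a sketch: the flag names follow the kyverno-chainsaw CLI, and the values are read off the loaded config; most of them may equally come from `.chainsaw.yaml` itself rather than flags):

```
chainsaw test \
  --config .chainsaw.yaml \
  --report-format XML \
  --report-name junit_otel_e2e \
  --report-path /logs/artifacts \
  --parallel 4
```

With `--parallel 4`, up to four tests run concurrently, which is why the `=== RUN`/`=== PAUSE` pairs below interleave.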
=== RUN   chainsaw
=== PAUSE chainsaw
=== CONT  chainsaw
=== RUN   chainsaw/autoscale
=== PAUSE chainsaw/autoscale
=== RUN   chainsaw/instrumentation-apache-httpd
=== PAUSE chainsaw/instrumentation-apache-httpd
=== RUN   chainsaw/instrumentation-apache-multicontainer
=== PAUSE chainsaw/instrumentation-apache-multicontainer
=== RUN   chainsaw/instrumentation-dotnet
=== PAUSE chainsaw/instrumentation-dotnet
=== RUN   chainsaw/instrumentation-dotnet-multicontainer
=== PAUSE chainsaw/instrumentation-dotnet-multicontainer
=== RUN   chainsaw/instrumentation-dotnet-musl
=== PAUSE chainsaw/instrumentation-dotnet-musl
=== RUN   chainsaw/instrumentation-go
=== PAUSE chainsaw/instrumentation-go
=== RUN   chainsaw/instrumentation-java
=== PAUSE chainsaw/instrumentation-java
=== RUN   chainsaw/instrumentation-java-multicontainer
=== PAUSE chainsaw/instrumentation-java-multicontainer
=== RUN   chainsaw/instrumentation-java-other-ns
=== PAUSE chainsaw/instrumentation-java-other-ns
=== RUN   chainsaw/instrumentation-nginx
=== PAUSE chainsaw/instrumentation-nginx
=== RUN   chainsaw/instrumentation-nginx-contnr-secctx
=== PAUSE chainsaw/instrumentation-nginx-contnr-secctx
=== RUN   chainsaw/instrumentation-nginx-multicontainer
=== PAUSE chainsaw/instrumentation-nginx-multicontainer
=== RUN   chainsaw/instrumentation-nodejs
=== PAUSE chainsaw/instrumentation-nodejs
=== RUN   chainsaw/instrumentation-nodejs-multicontainer
=== PAUSE chainsaw/instrumentation-nodejs-multicontainer
=== RUN   chainsaw/instrumentation-python
=== PAUSE chainsaw/instrumentation-python
=== RUN   chainsaw/instrumentation-python-multicontainer
=== PAUSE chainsaw/instrumentation-python-multicontainer
=== RUN   chainsaw/instrumentation-sdk
=== PAUSE chainsaw/instrumentation-sdk
=== RUN   chainsaw/opampbridge
=== PAUSE chainsaw/opampbridge
=== RUN   chainsaw/kafka
=== PAUSE chainsaw/kafka
=== RUN   chainsaw/monitoring
l.go:53: | 07:22:05 | monitoring | @setup   | CREATE | OK | v1/Namespace @ chainsaw-live-doberman
l.go:53: | 07:22:05 | monitoring | step-00  | TRY | RUN |
l.go:53: | 07:22:05 | monitoring | step-00  | APPLY | RUN | v1/ConfigMap @ openshift-monitoring/cluster-monitoring-config
l.go:53: | 07:22:05 | monitoring | step-00  | PATCH | OK | v1/ConfigMap @ openshift-monitoring/cluster-monitoring-config
l.go:53: | 07:22:05 | monitoring | step-00  | APPLY | DONE | v1/ConfigMap @ openshift-monitoring/cluster-monitoring-config
l.go:53: | 07:22:05 | monitoring | step-00  | ASSERT | RUN | apps/v1/Deployment @ openshift-user-workload-monitoring/prometheus-operator
l.go:53: | 07:22:09 | monitoring | step-00  | ASSERT | DONE | apps/v1/Deployment @ openshift-user-workload-monitoring/prometheus-operator
l.go:53: | 07:22:09 | monitoring | step-00  | ASSERT | RUN | apps/v1/StatefulSet @ openshift-user-workload-monitoring/prometheus-user-workload
l.go:53: | 07:22:12 | monitoring | step-00  | ASSERT | DONE | apps/v1/StatefulSet @ openshift-user-workload-monitoring/prometheus-user-workload
l.go:53: | 07:22:12 | monitoring | step-00  | ASSERT | RUN | apps/v1/StatefulSet @ openshift-user-workload-monitoring/thanos-ruler-user-workload
l.go:53: | 07:22:12 | monitoring | step-00  | ASSERT | DONE | apps/v1/StatefulSet @ openshift-user-workload-monitoring/thanos-ruler-user-workload
l.go:53: | 07:22:12 | monitoring | step-00  | TRY | DONE |
l.go:53: | 07:22:12 | monitoring | step-01  | TRY | RUN |
l.go:53: | 07:22:12 | monitoring | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector
l.go:53: | 07:22:12 | monitoring | step-01  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector
l.go:53: | 07:22:12 | monitoring | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector
l.go:53: | 07:22:12 | monitoring | step-01  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-live-doberman/cluster-collector-collector
l.go:53: | 07:22:18 | monitoring | step-01  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-live-doberman/cluster-collector-collector
l.go:53: | 07:22:18 | monitoring | step-01  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ chainsaw-live-doberman/cluster-collector-monitoring-collector
l.go:53: | 07:22:18 | monitoring | step-01  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ chainsaw-live-doberman/cluster-collector-monitoring-collector
l.go:53: | 07:22:18 | monitoring | step-01  | ASSERT | RUN | v1/Service @ chainsaw-live-doberman/cluster-collector-collector
l.go:53: | 07:22:18 | monitoring | step-01  | ASSERT | DONE | v1/Service @ chainsaw-live-doberman/cluster-collector-collector
l.go:53: | 07:22:18 | monitoring | step-01  | ASSERT | RUN | v1/Service @ chainsaw-live-doberman/cluster-collector-collector-headless
l.go:53: | 07:22:18 | monitoring | step-01  | ASSERT | DONE | v1/Service @ chainsaw-live-doberman/cluster-collector-collector-headless
l.go:53: | 07:22:18 | monitoring | step-01  | ASSERT | RUN | v1/Service @ chainsaw-live-doberman/cluster-collector-collector-monitoring
l.go:53: | 07:22:18 | monitoring | step-01  | ASSERT | DONE | v1/Service @ chainsaw-live-doberman/cluster-collector-collector-monitoring
l.go:53: | 07:22:18 | monitoring | step-01  | TRY | DONE |
l.go:53: | 07:22:18 | monitoring | step-02  | TRY | RUN |
l.go:53: | 07:22:18 | monitoring | step-02  | APPLY | RUN | batch/v1/Job @ chainsaw-live-doberman/telemetrygen-traces
l.go:53: | 07:22:18 | monitoring | step-02  | CREATE | OK | batch/v1/Job @ chainsaw-live-doberman/telemetrygen-traces
l.go:53: | 07:22:18 | monitoring | step-02  | APPLY | DONE | batch/v1/Job @ chainsaw-live-doberman/telemetrygen-traces
l.go:53: | 07:22:18 | monitoring | step-02  | ASSERT | RUN | batch/v1/Job @ chainsaw-live-doberman/telemetrygen-traces
l.go:53: | 07:22:18 | monitoring | step-02  | ASSERT | DONE | batch/v1/Job @ chainsaw-live-doberman/telemetrygen-traces
l.go:53: | 07:22:18 | monitoring | step-02  | TRY | DONE |
l.go:53: | 07:22:18 | monitoring | step-03  | TRY | RUN |
l.go:53: | 07:22:18 | monitoring | step-03  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:19 | monitoring | step-03  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:19 | monitoring | step-03  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:19 | monitoring | step-03  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:19 | monitoring | step-03  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:19 | monitoring | step-03  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:19 | monitoring | step-03  | ASSERT | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:19 | monitoring | step-03  | ASSERT | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:19 | monitoring | step-03  | ASSERT | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:19 | monitoring | step-03  | ASSERT | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:19 | monitoring | step-03  | SCRIPT | RUN |
=== COMMAND
/usr/bin/sh -c ./check_metrics.sh
l.go:53: | 07:22:54 | monitoring | step-03  | SCRIPT | LOG |
=== STDOUT
No metric 'otelcol_process_uptime' with value present. Retrying...
No metric 'otelcol_process_uptime' with value present. Retrying...
No metric 'otelcol_process_uptime' with value present. Retrying...
Metric 'otelcol_process_uptime' with value is present.
Metric 'otelcol_process_runtime_total_sys_memory_bytes' with value is present.
Metric 'otelcol_process_memory_rss' with value is present.
Metric 'otelcol_exporter_sent_spans' with value is present.
Metric 'otelcol_process_cpu_seconds' with value is present.
Metric 'otelcol_process_memory_rss' with value is present.
Metric 'otelcol_process_runtime_heap_alloc_bytes' with value is present.
Metric 'otelcol_process_runtime_total_alloc_bytes' with value is present.
Metric 'otelcol_process_runtime_total_sys_memory_bytes' with value is present.
Metric 'otelcol_process_uptime' with value is present.
Metric 'otelcol_receiver_accepted_spans' with value is present.
Metric 'otelcol_receiver_refused_spans' with value is present.
=== STDERR
(curl transfer-progress meter output omitted)
l.go:53: | 07:22:54 | monitoring | step-03  | SCRIPT | DONE |
l.go:53: | 07:22:54 | monitoring | step-03  | TRY | DONE |
l.go:53: | 07:22:54 | monitoring | step-04  | TRY | RUN |
l.go:53: | 07:22:54 | monitoring | step-04  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector2
l.go:53: | 07:22:54 | monitoring | step-04  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector2
l.go:53: | 07:22:54 | monitoring | step-04  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector2
l.go:53: | 07:22:54 | monitoring | step-04  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ chainsaw-live-doberman/cluster-collector2-collector
l.go:53: | 07:22:54 | monitoring | step-04  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ chainsaw-live-doberman/cluster-collector2-collector
l.go:53: | 07:22:54 | monitoring | step-04  | TRY | DONE |
l.go:53: | 07:22:54 | monitoring | step-04  | CLEANUP | RUN |
l.go:53: | 07:22:54 | monitoring | step-04  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector2
l.go:53: | 07:22:54 | monitoring | step-04  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector2
l.go:53: | 07:22:54 | monitoring | step-04  | DELETE | DONE |
opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector2
l.go:53: | 07:22:54 | monitoring | step-04  | CLEANUP | DONE |
l.go:53: | 07:22:54 | monitoring | step-03  | CLEANUP | RUN |
l.go:53: | 07:22:54 | monitoring | step-03  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:54 | monitoring | step-03  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:54 | monitoring | step-03  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:54 | monitoring | step-03  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:54 | monitoring | step-03  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:54 | monitoring | step-03  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-monitoring-metrics-api
l.go:53: | 07:22:54 | monitoring | step-03  | CLEANUP | DONE |
l.go:53: | 07:22:54 | monitoring | step-02  | CLEANUP | RUN |
l.go:53: | 07:22:54 | monitoring | step-02  | DELETE | RUN | batch/v1/Job @ chainsaw-live-doberman/telemetrygen-traces
l.go:53: | 07:22:54 | monitoring | step-02  | DELETE | OK | batch/v1/Job @ chainsaw-live-doberman/telemetrygen-traces
l.go:53: | 07:22:54 | monitoring | step-02  | DELETE | DONE | batch/v1/Job @ chainsaw-live-doberman/telemetrygen-traces
l.go:53: | 07:22:54 | monitoring | step-02  | CLEANUP | DONE |
l.go:53: | 07:22:54 | monitoring | step-01  | CLEANUP | RUN |
l.go:53: | 07:22:54 | monitoring | step-01  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector
l.go:53: | 07:22:54 | monitoring | step-01  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector
l.go:53: | 07:22:54 | monitoring | step-01  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-doberman/cluster-collector
l.go:53: | 07:22:54 | monitoring | step-01  | CLEANUP | DONE |
l.go:53: | 07:22:54 | monitoring | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-live-doberman
l.go:53: | 07:22:54 | monitoring | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-live-doberman
l.go:53: | 07:23:01 | monitoring | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-live-doberman
=== RUN   chainsaw/multi-cluster
=== PAUSE chainsaw/multi-cluster
=== RUN   chainsaw/otlp-metrics-traces
l.go:53: | 07:23:01 | otlp-metrics-traces | @setup   | CREATE | OK | v1/Namespace @ chainsaw-select-possum
l.go:53: | 07:23:01 | otlp-metrics-traces | step-00  | TRY | RUN |
l.go:53: | 07:23:01 | otlp-metrics-traces | step-00  | APPLY | RUN | v1/Namespace @ chainsaw-otlp-metrics
l.go:53: | 07:23:01 | otlp-metrics-traces | step-00  | CREATE | OK | v1/Namespace @ chainsaw-otlp-metrics
l.go:53: | 07:23:01 | otlp-metrics-traces | step-00  | APPLY | DONE | v1/Namespace @ chainsaw-otlp-metrics
l.go:53: | 07:23:01 | otlp-metrics-traces | step-00  | APPLY | RUN | jaegertracing.io/v1/Jaeger @ chainsaw-otlp-metrics/jaeger-allinone
l.go:53: | 07:23:01 | otlp-metrics-traces | step-00  | CREATE | OK | jaegertracing.io/v1/Jaeger @ chainsaw-otlp-metrics/jaeger-allinone
l.go:53: | 07:23:01 | otlp-metrics-traces | step-00  | APPLY | DONE | jaegertracing.io/v1/Jaeger @ chainsaw-otlp-metrics/jaeger-allinone
l.go:53: | 07:23:01 | otlp-metrics-traces | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-otlp-metrics/jaeger-allinone
l.go:53: | 07:23:09 | otlp-metrics-traces | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-otlp-metrics/jaeger-allinone
l.go:53: | 07:23:09 | otlp-metrics-traces | step-00  | TRY | DONE |
l.go:53: | 07:23:09 | otlp-metrics-traces | step-01  | TRY | RUN |
l.go:53: | 07:23:09 | otlp-metrics-traces | step-01  | APPLY | RUN | v1/ConfigMap @ openshift-monitoring/cluster-monitoring-config
l.go:53: | 07:23:09 | otlp-metrics-traces | step-01  | PATCH | OK | v1/ConfigMap @ openshift-monitoring/cluster-monitoring-config
l.go:53: | 07:23:09 | otlp-metrics-traces | step-01  | APPLY | DONE | v1/ConfigMap @ openshift-monitoring/cluster-monitoring-config
l.go:53: | 07:23:09 | otlp-metrics-traces | step-01  | ASSERT | RUN | apps/v1/Deployment @ openshift-user-workload-monitoring/prometheus-operator
l.go:53: | 07:23:09 | otlp-metrics-traces | step-01  | ASSERT | DONE | apps/v1/Deployment @ openshift-user-workload-monitoring/prometheus-operator
l.go:53: | 07:23:09 | otlp-metrics-traces | step-01  | ASSERT | RUN | apps/v1/StatefulSet @ openshift-user-workload-monitoring/prometheus-user-workload
l.go:53: | 07:23:09 | otlp-metrics-traces | step-01  | ASSERT | DONE | apps/v1/StatefulSet @ openshift-user-workload-monitoring/prometheus-user-workload
l.go:53: | 07:23:09 | otlp-metrics-traces | step-01  | ASSERT | RUN | apps/v1/StatefulSet @ openshift-user-workload-monitoring/thanos-ruler-user-workload
l.go:53: | 07:23:09 | otlp-metrics-traces | step-01  | ASSERT | DONE | apps/v1/StatefulSet @ openshift-user-workload-monitoring/thanos-ruler-user-workload
l.go:53: | 07:23:09 | otlp-metrics-traces | step-01  | TRY | DONE |
l.go:53: | 07:23:09 | otlp-metrics-traces | step-02  | TRY | RUN |
l.go:53: | 07:23:09 | otlp-metrics-traces | step-02  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-otlp-metrics/cluster-collector
l.go:53: | 07:23:09 | otlp-metrics-traces | step-02  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-otlp-metrics/cluster-collector
l.go:53: | 07:23:09 | otlp-metrics-traces | step-02  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-otlp-metrics/cluster-collector
l.go:53: | 07:23:09 | otlp-metrics-traces | step-02  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-otlp-metrics/*
l.go:53: | 07:23:09 | otlp-metrics-traces | step-02  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-otlp-metrics/*
l.go:53: | 07:23:09 | otlp-metrics-traces | step-02  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ chainsaw-otlp-metrics/cluster-collector-monitoring-collector
l.go:53: | 07:23:10 | otlp-metrics-traces | step-02  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ chainsaw-otlp-metrics/cluster-collector-monitoring-collector
l.go:53: | 07:23:10 | otlp-metrics-traces | step-02  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ chainsaw-otlp-metrics/cluster-collector-collector
l.go:53: | 07:23:10 | otlp-metrics-traces | step-02  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ chainsaw-otlp-metrics/cluster-collector-collector
l.go:53: | 07:23:10 | otlp-metrics-traces | step-02  | TRY | DONE |
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | TRY | RUN |
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-otlp-metrics-traces-api
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-otlp-metrics-traces-api
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-otlp-metrics-traces-api
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-otlp-metrics-traces-api
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-otlp-metrics-traces-api
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-otlp-metrics-traces-api
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | APPLY | RUN | batch/v1/Job @ chainsaw-select-possum/telemetrygen-traces
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | CREATE | OK | batch/v1/Job @ chainsaw-select-possum/telemetrygen-traces
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | APPLY | DONE | batch/v1/Job @ chainsaw-select-possum/telemetrygen-traces
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | APPLY | RUN | batch/v1/Job @ chainsaw-select-possum/telemetrygen-metrics
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | CREATE | OK | batch/v1/Job @ chainsaw-select-possum/telemetrygen-metrics
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | APPLY | DONE | batch/v1/Job @ chainsaw-select-possum/telemetrygen-metrics
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | ASSERT | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-otlp-metrics-traces-api
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | ASSERT | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-otlp-metrics-traces-api
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | ASSERT | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-otlp-metrics-traces-api
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | ASSERT | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-otlp-metrics-traces-api
l.go:53: | 07:23:10 | otlp-metrics-traces | step-03  | ASSERT | RUN | batch/v1/Job @ chainsaw-select-possum/telemetrygen-traces
l.go:53: | 07:23:13 | otlp-metrics-traces | step-03  | ASSERT | DONE | batch/v1/Job @ chainsaw-select-possum/telemetrygen-traces
l.go:53: | 07:23:13 | otlp-metrics-traces | step-03  | ASSERT | RUN | batch/v1/Job @ chainsaw-select-possum/telemetrygen-metrics
l.go:53: | 07:23:13 | otlp-metrics-traces | step-03  | ASSERT | DONE | batch/v1/Job @ chainsaw-select-possum/telemetrygen-metrics
l.go:53: | 07:23:13 | otlp-metrics-traces | step-03  | ASSERT | RUN | v1/Pod @ chainsaw-select-possum/*
l.go:53: | 07:23:13 | otlp-metrics-traces | step-03  | ASSERT | DONE | v1/Pod @ chainsaw-select-possum/*
l.go:53: | 07:23:13 | otlp-metrics-traces | step-03  | TRY | DONE |
l.go:53: | 07:23:13 | otlp-metrics-traces |
step-04  | TRY | RUN |
l.go:53: | 07:23:13 | otlp-metrics-traces | step-04  | SCRIPT | RUN |
=== COMMAND
/usr/bin/sh -c ./check_traces.sh
l.go:53: | 07:23:17 | otlp-metrics-traces | step-04  | SCRIPT | LOG |
=== STDOUT
Trace for telemetrygen does not exist in Jaeger. Fetching again...
(previous line repeated while polling)
Traces for telemetrygen exist in Jaeger.
l.go:53: | 07:23:17 | otlp-metrics-traces | step-04  | SCRIPT | DONE |
l.go:53: | 07:23:17 | otlp-metrics-traces | step-04  | TRY | DONE |
l.go:53: | 07:23:17 | otlp-metrics-traces | step-05  | TRY | RUN |
l.go:53: | 07:23:17 | otlp-metrics-traces | step-05  | SCRIPT | RUN |
=== COMMAND
/usr/bin/sh -c ./check_metrics.sh
l.go:53: | 07:24:04 | otlp-metrics-traces | step-05  | SCRIPT | LOG |
=== STDOUT
No telemetrygen metrics count with value present. Fetching again...
(previous line repeated while polling)
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... No telemetrygen metrics count with value present. Fetching again... 
No telemetrygen metrics count with value present. Fetching again... (message repeated while polling until the metric appeared) telemetrygen metrics with value is present. 
=== STDERR % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1147 0 --:--:-- --:--:-- --:--:-- 1164 (curl progress meter repeated once per fetch during the polling loop; every transfer completed with the full 78 bytes received) 
Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1328 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1387 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1532 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1162 0 --:--:-- --:--:-- --:--:-- 1147 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1141 0 --:--:-- --:--:-- --:--:-- 1147 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1441 0 --:--:-- --:--:-- --:--:-- 1444 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1161 0 --:--:-- --:--:-- --:--:-- 1164 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1379 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1513 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1508 0 --:--:-- --:--:-- --:--:-- 1529 % Total % 
Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1474 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1497 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1279 0 --:--:-- --:--:-- --:--:-- 1300 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1356 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1490 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1496 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1387 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1385 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1501 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 
100 78 100 78 0 0 1556 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1516 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1463 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1170 0 --:--:-- --:--:-- --:--:-- 1181 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1546 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1398 0 --:--:-- --:--:-- --:--:-- 1418 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1270 0 --:--:-- --:--:-- --:--:-- 1278 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1325 0 --:--:-- --:--:-- --:--:-- 1322 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1297 0 --:--:-- --:--:-- --:--:-- 1300 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1531 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload 
Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1289 0 --:--:-- --:--:-- --:--:-- 1300 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1300 0 --:--:-- --:--:-- --:--:-- 1322 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1388 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1551 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1502 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1272 0 --:--:-- --:--:-- --:--:-- 1278 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1458 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1322 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1290 0 --:--:-- --:--:-- --:--:-- 1300 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1342 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time 
Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1283 0 --:--:-- --:--:-- --:--:-- 1300 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1091 0 --:--:-- --:--:-- --:--:-- 1098 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1064 0 --:--:-- --:--:-- --:--:-- 1068 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1463 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1334 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1340 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1490 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1129 0 --:--:-- --:--:-- --:--:-- 1147 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1495 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1446 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average 
Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1456 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1370 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1348 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1535 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1484 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1209 0 --:--:-- --:--:-- --:--:-- 1218 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1350 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1301 0 --:--:-- --:--:-- --:--:-- 1322 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1503 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1479 0 --:--:-- --:--:-- --:--:-- 1500 % Total % 
Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1185 0 --:--:-- --:--:-- --:--:-- 1200 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1213 0 --:--:-- --:--:-- --:--:-- 1200 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1156 0 --:--:-- --:--:-- --:--:-- 1164 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1493 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1490 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1516 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1393 0 --:--:-- --:--:-- --:--:-- 1418 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1367 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1281 0 --:--:-- --:--:-- --:--:-- 1300 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 
100 78 100 78 0 0 1522 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1497 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1160 0 --:--:-- --:--:-- --:--:-- 1164 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1293 0 --:--:-- --:--:-- --:--:-- 1278 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1355 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1354 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1164 0 --:--:-- --:--:-- --:--:-- 1181 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1530 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1527 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1473 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload 
Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1331 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1522 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1504 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1298 0 --:--:-- --:--:-- --:--:-- 1322 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1546 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1538 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1370 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1489 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1522 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1161 0 --:--:-- --:--:-- --:--:-- 1164 % Total % Received % Xferd Average Speed Time Time Time 
Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1395 0 --:--:-- --:--:-- --:--:-- 1418 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1528 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1424 0 --:--:-- --:--:-- --:--:-- 1444 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1512 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1292 0 --:--:-- --:--:-- --:--:-- 1300 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1488 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1361 0 --:--:-- --:--:-- --:--:-- 1344 100 78 100 78 0 0 1359 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1174 0 --:--:-- --:--:-- --:--:-- 1181 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1541 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1507 0 --:--:-- 
--:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1268 0 --:--:-- --:--:-- --:--:-- 1278 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1274 0 --:--:-- --:--:-- --:--:-- 1278 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1388 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1361 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1532 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1356 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1256 0 --:--:-- --:--:-- --:--:-- 1258 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1341 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1302 0 --:--:-- --:--:-- --:--:-- 1322 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 
0 0 1279 0 --:--:-- --:--:-- --:--:-- 1300 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1467 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1516 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1223 0 --:--:-- --:--:-- --:--:-- 1238 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1282 0 --:--:-- --:--:-- --:--:-- 1300 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1492 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1509 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1391 0 --:--:-- --:--:-- --:--:-- 1418 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1292 0 --:--:-- --:--:-- --:--:-- 1300 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1490 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- 
--:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1497 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1368 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1450 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1330 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1321 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1543 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1158 0 --:--:-- --:--:-- --:--:-- 1164 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1174 0 --:--:-- --:--:-- --:--:-- 1181 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1345 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1498 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current 
Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1173 0 --:--:-- --:--:-- --:--:-- 1181 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1244 0 --:--:-- --:--:-- --:--:-- 1238 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1339 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1299 0 --:--:-- --:--:-- --:--:-- 1322 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1506 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1497 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1385 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1476 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1325 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1372 0 --:--:-- --:--:-- --:--:-- 
1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1191 0 --:--:-- --:--:-- --:--:-- 1200 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1469 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1158 0 --:--:-- --:--:-- --:--:-- 1164 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1567 0 --:--:-- --:--:-- --:--:-- 1591 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1254 0 --:--:-- --:--:-- --:--:-- 1258 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1387 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1496 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1272 0 --:--:-- --:--:-- --:--:-- 1278 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1400 0 --:--:-- --:--:-- --:--:-- 1418 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1533 0 
[curl progress-meter output elided: the transfer table repeats for every request; each request completed successfully, downloading 78 bytes]
Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1486 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1495 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1495 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1488 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1447 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1332 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1383 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1515 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1359 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1255 0 --:--:-- --:--:-- --:--:-- 1258 % Total % Received % Xferd 
Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1333 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1388 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1349 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1473 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1375 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1450 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1382 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1389 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1337 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1243 0 --:--:-- --:--:-- --:--:-- 1258 % 
Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1336 0 --:--:-- --:--:-- --:--:-- 1344 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1251 0 --:--:-- --:--:-- --:--:-- 1258 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1529 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1181 0 --:--:-- --:--:-- --:--:-- 1200 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1409 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1304 0 --:--:-- --:--:-- --:--:-- 1322 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1360 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1409 0 --:--:-- --:--:-- --:--:-- 1418 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1493 0 --:--:-- --:--:-- --:--:-- 1500 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- 
--:--:-- 0 100 78 100 78 0 0 1368 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1518 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1349 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1516 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1283 0 --:--:-- --:--:-- --:--:-- 1300 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1352 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1373 0 --:--:-- --:--:-- --:--:-- 1392 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1393 0 --:--:-- --:--:-- --:--:-- 1418 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1523 0 --:--:-- --:--:-- --:--:-- 1529 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1357 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 
0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1364 0 --:--:-- --:--:-- --:--:-- 1368 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1452 0 --:--:-- --:--:-- --:--:-- 1471 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 78 100 78 0 0 1530 0 --:--:-- --:--:-- --:--:-- 1560 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 473 100 473 0 0 9122 0 --:--:-- --:--:-- --:--:-- 9274 l.go:53: | 07:24:04 | otlp-metrics-traces | step-05  | SCRIPT | DONE | l.go:53: | 07:24:04 | otlp-metrics-traces | step-05  | TRY | DONE | l.go:53: | 07:24:04 | otlp-metrics-traces | step-03  | CLEANUP | RUN | l.go:53: | 07:24:04 | otlp-metrics-traces | step-03  | DELETE | RUN | batch/v1/Job @ chainsaw-select-possum/telemetrygen-metrics l.go:53: | 07:24:05 | otlp-metrics-traces | step-03  | DELETE | OK | batch/v1/Job @ chainsaw-select-possum/telemetrygen-metrics l.go:53: | 07:24:05 | otlp-metrics-traces | step-03  | DELETE | DONE | batch/v1/Job @ chainsaw-select-possum/telemetrygen-metrics l.go:53: | 07:24:05 | otlp-metrics-traces | step-03  | DELETE | RUN | batch/v1/Job @ chainsaw-select-possum/telemetrygen-traces l.go:53: | 07:24:05 | otlp-metrics-traces | step-03  | DELETE | OK | batch/v1/Job @ chainsaw-select-possum/telemetrygen-traces l.go:53: | 07:24:05 | otlp-metrics-traces | step-03  | DELETE | DONE | batch/v1/Job @ chainsaw-select-possum/telemetrygen-traces l.go:53: | 07:24:05 | otlp-metrics-traces | step-03  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-otlp-metrics-traces-api l.go:53: | 07:24:05 | otlp-metrics-traces | step-03  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-otlp-metrics-traces-api l.go:53: | 
07:24:05 | otlp-metrics-traces | step-03  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-otlp-metrics-traces-api l.go:53: | 07:24:05 | otlp-metrics-traces | step-03  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-otlp-metrics-traces-api l.go:53: | 07:24:05 | otlp-metrics-traces | step-03  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-otlp-metrics-traces-api l.go:53: | 07:24:05 | otlp-metrics-traces | step-03  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-otlp-metrics-traces-api l.go:53: | 07:24:05 | otlp-metrics-traces | step-03  | CLEANUP | DONE | l.go:53: | 07:24:05 | otlp-metrics-traces | step-02  | CLEANUP | RUN | l.go:53: | 07:24:05 | otlp-metrics-traces | step-02  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-otlp-metrics/cluster-collector l.go:53: | 07:24:05 | otlp-metrics-traces | step-02  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-otlp-metrics/cluster-collector l.go:53: | 07:24:05 | otlp-metrics-traces | step-02  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-otlp-metrics/cluster-collector l.go:53: | 07:24:05 | otlp-metrics-traces | step-02  | CLEANUP | DONE | l.go:53: | 07:24:05 | otlp-metrics-traces | step-00  | CLEANUP | RUN | l.go:53: | 07:24:05 | otlp-metrics-traces | step-00  | DELETE | RUN | jaegertracing.io/v1/Jaeger @ chainsaw-otlp-metrics/jaeger-allinone l.go:53: | 07:24:05 | otlp-metrics-traces | step-00  | DELETE | OK | jaegertracing.io/v1/Jaeger @ chainsaw-otlp-metrics/jaeger-allinone l.go:53: | 07:24:05 | otlp-metrics-traces | step-00  | DELETE | DONE | jaegertracing.io/v1/Jaeger @ chainsaw-otlp-metrics/jaeger-allinone l.go:53: | 07:24:05 | otlp-metrics-traces | step-00  | DELETE | RUN | v1/Namespace @ chainsaw-otlp-metrics l.go:53: | 07:24:05 | otlp-metrics-traces | step-00  | DELETE | OK | v1/Namespace @ chainsaw-otlp-metrics l.go:53: | 07:24:12 | 
otlp-metrics-traces | step-00  | DELETE | DONE | v1/Namespace @ chainsaw-otlp-metrics l.go:53: | 07:24:12 | otlp-metrics-traces | step-00  | CLEANUP | DONE | l.go:53: | 07:24:12 | otlp-metrics-traces | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-select-possum l.go:53: | 07:24:12 | otlp-metrics-traces | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-select-possum l.go:53: | 07:24:18 | otlp-metrics-traces | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-select-possum === RUN chainsaw/route === PAUSE chainsaw/route === RUN chainsaw/scrape-in-cluster-monitoring === PAUSE chainsaw/scrape-in-cluster-monitoring === RUN chainsaw/pdb === PAUSE chainsaw/pdb === RUN chainsaw/target-allocator === PAUSE chainsaw/target-allocator === RUN chainsaw/create-pm-prometheus-exporters === PAUSE chainsaw/create-pm-prometheus-exporters === RUN chainsaw/create-sm-prometheus-exporters === PAUSE chainsaw/create-sm-prometheus-exporters === RUN chainsaw/targetallocator-kubernetessd === PAUSE chainsaw/targetallocator-kubernetessd === RUN chainsaw/targetallocator-prometheuscr === PAUSE chainsaw/targetallocator-prometheuscr === RUN chainsaw/daemonset-features === PAUSE chainsaw/daemonset-features === RUN chainsaw/env-vars === PAUSE chainsaw/env-vars === RUN chainsaw/ingress === PAUSE chainsaw/ingress === RUN chainsaw/ingress-subdomains === PAUSE chainsaw/ingress-subdomains === RUN chainsaw/managed-reconcile === PAUSE chainsaw/managed-reconcile === RUN chainsaw/multiple-configmaps === PAUSE chainsaw/multiple-configmaps === RUN chainsaw/node-selector-collector === PAUSE chainsaw/node-selector-collector === RUN chainsaw/prometheus-config-validation === PAUSE chainsaw/prometheus-config-validation === RUN chainsaw/smoke-daemonset === PAUSE chainsaw/smoke-daemonset === RUN chainsaw/smoke-pod-dns-config === PAUSE chainsaw/smoke-pod-dns-config === RUN chainsaw/smoke-init-containers === PAUSE chainsaw/smoke-init-containers === RUN chainsaw/smoke-pod-annotations === PAUSE 
chainsaw/smoke-pod-annotations === RUN chainsaw/smoke-pod-labels === PAUSE chainsaw/smoke-pod-labels === RUN chainsaw/smoke-ports === PAUSE chainsaw/smoke-ports === RUN chainsaw/smoke-restarting-deployment === PAUSE chainsaw/smoke-restarting-deployment === RUN chainsaw/smoke-shareprocessnamespace === PAUSE chainsaw/smoke-shareprocessnamespace === RUN chainsaw/smoke-sidecar === PAUSE chainsaw/smoke-sidecar === RUN chainsaw/smoke-sidecar-other-namespace === PAUSE chainsaw/smoke-sidecar-other-namespace === RUN chainsaw/smoke-simplest === PAUSE chainsaw/smoke-simplest === RUN chainsaw/smoke-simplest-v1beta1 === PAUSE chainsaw/smoke-simplest-v1beta1 === RUN chainsaw/smoke-statefulset === PAUSE chainsaw/smoke-statefulset === RUN chainsaw/smoke-targetallocator === PAUSE chainsaw/smoke-targetallocator === RUN chainsaw/statefulset-features === PAUSE chainsaw/statefulset-features === RUN chainsaw/versioned-configmaps === PAUSE chainsaw/versioned-configmaps === CONT chainsaw/autoscale === CONT chainsaw/env-vars === CONT chainsaw/smoke-ports === CONT chainsaw/prometheus-config-validation === NAME chainsaw/smoke-ports l.go:53: | 07:24:18 | smoke-ports | @setup  | CREATE | OK | v1/Namespace @ chainsaw-viable-kingfish l.go:53: | 07:24:18 | smoke-ports | step-00  | TRY | RUN | === NAME chainsaw/autoscale l.go:53: | 07:24:18 | autoscale | @setup  | CREATE | OK | v1/Namespace @ chainsaw-flexible-gazelle === NAME chainsaw/env-vars l.go:53: | 07:24:18 | env-vars | @setup  | CREATE | OK | v1/Namespace @ chainsaw-ready-ray l.go:53: | 07:24:18 | env-vars | step-00  | TRY | RUN | === NAME chainsaw/autoscale l.go:53: | 07:24:18 | autoscale | step-00  | TRY | RUN | === NAME chainsaw/prometheus-config-validation l.go:53: | 07:24:18 | prometheus-config-validation | @setup  | CREATE | OK | v1/Namespace @ chainsaw-moved-pelican l.go:53: | 07:24:18 | prometheus-config-validation | step-00  | TRY | RUN | l.go:53: | 07:24:18 | prometheus-config-validation | step-00  | APPLY | RUN | 
v1/ServiceAccount @ chainsaw-moved-pelican/ta === NAME chainsaw/smoke-ports l.go:53: | 07:24:18 | smoke-ports | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-viable-kingfish/smoke-ports === NAME chainsaw/env-vars l.go:53: | 07:24:18 | env-vars | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-ready-ray/sidecar === NAME chainsaw/autoscale l.go:53: | 07:24:18 | autoscale | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest === NAME chainsaw/prometheus-config-validation l.go:53: | 07:24:18 | prometheus-config-validation | step-00  | CREATE | OK | v1/ServiceAccount @ chainsaw-moved-pelican/ta l.go:53: | 07:24:18 | prometheus-config-validation | step-00  | APPLY | DONE | v1/ServiceAccount @ chainsaw-moved-pelican/ta l.go:53: | 07:24:18 | prometheus-config-validation | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ promreceiver-allocatorconfig === NAME chainsaw/smoke-ports l.go:53: | 07:24:18 | smoke-ports | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-viable-kingfish/smoke-ports l.go:53: | 07:24:18 | smoke-ports | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-viable-kingfish/smoke-ports l.go:53: | 07:24:18 | smoke-ports | step-00  | ASSERT | RUN | apps/v1/DaemonSet @ chainsaw-viable-kingfish/smoke-ports-collector === NAME chainsaw/autoscale l.go:53: | 07:24:18 | autoscale | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest l.go:53: | 07:24:18 | autoscale | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest l.go:53: | 07:24:18 | autoscale | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization === NAME chainsaw/env-vars l.go:53: | 07:24:18 | 
env-vars | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-ready-ray/sidecar l.go:53: | 07:24:18 | env-vars | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-ready-ray/sidecar l.go:53: | 07:24:18 | env-vars | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-ready-ray/sdk-only === NAME chainsaw/prometheus-config-validation l.go:53: | 07:24:19 | prometheus-config-validation | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ promreceiver-allocatorconfig l.go:53: | 07:24:19 | prometheus-config-validation | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ promreceiver-allocatorconfig l.go:53: | 07:24:19 | prometheus-config-validation | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-moved-pelican === NAME chainsaw/smoke-ports l.go:53: | 07:24:19 | smoke-ports | step-00  | ASSERT | DONE | apps/v1/DaemonSet @ chainsaw-viable-kingfish/smoke-ports-collector l.go:53: | 07:24:19 | smoke-ports | step-00  | ASSERT | RUN | v1/Service @ chainsaw-viable-kingfish/smoke-ports-collector === NAME chainsaw/autoscale l.go:53: | 07:24:19 | autoscale | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization l.go:53: | 07:24:19 | autoscale | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization l.go:53: | 07:24:19 | autoscale | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-flexible-gazelle/simplest-collector === NAME chainsaw/env-vars l.go:53: | 07:24:19 | env-vars | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-ready-ray/sdk-only l.go:53: | 07:24:19 | env-vars | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-ready-ray/sdk-only l.go:53: | 07:24:19 | env-vars | step-00  | TRY | DONE | 
l.go:53: | 07:24:19 | env-vars | step-01  | TRY | RUN | l.go:53: | 07:24:19 | env-vars | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-ready-ray/my-deploy === NAME chainsaw/prometheus-config-validation l.go:53: | 07:24:19 | prometheus-config-validation | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-moved-pelican l.go:53: | 07:24:19 | prometheus-config-validation | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-moved-pelican === NAME chainsaw/smoke-ports l.go:53: | 07:24:19 | smoke-ports | step-00  | ASSERT | DONE | v1/Service @ chainsaw-viable-kingfish/smoke-ports-collector l.go:53: | 07:24:19 | smoke-ports | step-00  | TRY | DONE | l.go:53: | 07:24:19 | smoke-ports | step-00  | CLEANUP | RUN | l.go:53: | 07:24:19 | smoke-ports | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-viable-kingfish/smoke-ports === NAME chainsaw/prometheus-config-validation l.go:53: | 07:24:19 | prometheus-config-validation | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig === NAME chainsaw/smoke-ports l.go:53: | 07:24:19 | smoke-ports | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-viable-kingfish/smoke-ports === NAME chainsaw/env-vars l.go:53: | 07:24:19 | env-vars | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-ready-ray/my-deploy l.go:53: | 07:24:19 | env-vars | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-ready-ray/my-deploy l.go:53: | 07:24:19 | env-vars | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-ready-ray/* === NAME chainsaw/prometheus-config-validation l.go:53: | 07:24:19 | prometheus-config-validation | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig l.go:53: | 07:24:19 | prometheus-config-validation | step-00  | 
APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig l.go:53: | 07:24:19 | prometheus-config-validation | step-00  | ASSERT | RUN | apps/v1/StatefulSet @ chainsaw-moved-pelican/promreceiver-allocatorconfig-collector === NAME chainsaw/env-vars l.go:53: | 07:24:19 | env-vars | step-01  | ASSERT | DONE | v1/Pod @ chainsaw-ready-ray/* l.go:53: | 07:24:19 | env-vars | step-01  | TRY | DONE | l.go:53: | 07:24:19 | env-vars | step-02  | TRY | RUN | l.go:53: | 07:24:19 | env-vars | step-02  | APPLY | RUN | batch/v1/CronJob @ chainsaw-ready-ray/my-cron-job l.go:53: | 07:24:19 | env-vars | step-02  | CREATE | OK | batch/v1/CronJob @ chainsaw-ready-ray/my-cron-job l.go:53: | 07:24:19 | env-vars | step-02  | APPLY | DONE | batch/v1/CronJob @ chainsaw-ready-ray/my-cron-job l.go:53: | 07:24:19 | env-vars | step-02  | CMD | RUN | === COMMAND /usr/local/bin/kubectl -n chainsaw-ready-ray create job --from cronjob/my-cron-job my-cron-job-exec l.go:53: | 07:24:19 | env-vars | step-02  | CMD | LOG | === STDOUT job.batch/my-cron-job-exec created l.go:53: | 07:24:19 | env-vars | step-02  | CMD | DONE | l.go:53: | 07:24:19 | env-vars | step-02  | ASSERT | RUN | v1/Pod @ chainsaw-ready-ray/* === NAME chainsaw/smoke-ports l.go:53: | 07:24:20 | smoke-ports | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-viable-kingfish/smoke-ports l.go:53: | 07:24:20 | smoke-ports | step-00  | CLEANUP | DONE | l.go:53: | 07:24:20 | smoke-ports | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-viable-kingfish l.go:53: | 07:24:20 | smoke-ports | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-viable-kingfish === NAME chainsaw/env-vars l.go:53: | 07:24:20 | env-vars | step-02  | ASSERT | DONE | v1/Pod @ chainsaw-ready-ray/* l.go:53: | 07:24:20 | env-vars | step-02  | TRY | DONE | l.go:53: | 07:24:20 | env-vars | step-03  | TRY | RUN | l.go:53: | 07:24:20 | env-vars | step-03  | APPLY | RUN | 
batch/v1/Job @ chainsaw-ready-ray/my-job l.go:53: | 07:24:20 | env-vars | step-03  | CREATE | OK | batch/v1/Job @ chainsaw-ready-ray/my-job l.go:53: | 07:24:20 | env-vars | step-03  | APPLY | DONE | batch/v1/Job @ chainsaw-ready-ray/my-job l.go:53: | 07:24:20 | env-vars | step-03  | ASSERT | RUN | v1/Pod @ chainsaw-ready-ray/* l.go:53: | 07:24:20 | env-vars | step-03  | ASSERT | DONE | v1/Pod @ chainsaw-ready-ray/* l.go:53: | 07:24:20 | env-vars | step-03  | TRY | DONE | l.go:53: | 07:24:20 | env-vars | step-03  | CLEANUP | RUN | l.go:53: | 07:24:20 | env-vars | step-03  | DELETE | RUN | batch/v1/Job @ chainsaw-ready-ray/my-job l.go:53: | 07:24:20 | env-vars | step-03  | DELETE | OK | batch/v1/Job @ chainsaw-ready-ray/my-job l.go:53: | 07:24:20 | env-vars | step-03  | DELETE | DONE | batch/v1/Job @ chainsaw-ready-ray/my-job l.go:53: | 07:24:20 | env-vars | step-03  | CLEANUP | DONE | l.go:53: | 07:24:20 | env-vars | step-02  | CLEANUP | RUN | l.go:53: | 07:24:20 | env-vars | step-02  | DELETE | RUN | batch/v1/CronJob @ chainsaw-ready-ray/my-cron-job l.go:53: | 07:24:20 | env-vars | step-02  | DELETE | OK | batch/v1/CronJob @ chainsaw-ready-ray/my-cron-job l.go:53: | 07:24:20 | env-vars | step-02  | DELETE | DONE | batch/v1/CronJob @ chainsaw-ready-ray/my-cron-job l.go:53: | 07:24:20 | env-vars | step-02  | CLEANUP | DONE | l.go:53: | 07:24:20 | env-vars | step-01  | CLEANUP | RUN | l.go:53: | 07:24:20 | env-vars | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-ready-ray/my-deploy l.go:53: | 07:24:20 | env-vars | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-ready-ray/my-deploy l.go:53: | 07:24:20 | env-vars | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-ready-ray/my-deploy l.go:53: | 07:24:20 | env-vars | step-01  | CLEANUP | DONE | l.go:53: | 07:24:20 | env-vars | step-00  | CLEANUP | RUN | l.go:53: | 07:24:20 | env-vars | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-ready-ray/sdk-only l.go:53: | 
07:24:21 | env-vars | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-ready-ray/sdk-only l.go:53: | 07:24:21 | env-vars | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-ready-ray/sdk-only l.go:53: | 07:24:21 | env-vars | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-ready-ray/sidecar l.go:53: | 07:24:21 | env-vars | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-ready-ray/sidecar l.go:53: | 07:24:21 | env-vars | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-ready-ray/sidecar l.go:53: | 07:24:21 | env-vars | step-00  | CLEANUP | DONE | l.go:53: | 07:24:21 | env-vars | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-ready-ray l.go:53: | 07:24:21 | env-vars | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-ready-ray === NAME chainsaw/prometheus-config-validation l.go:53: | 07:24:22 | prometheus-config-validation | step-00  | ASSERT | DONE | apps/v1/StatefulSet @ chainsaw-moved-pelican/promreceiver-allocatorconfig-collector l.go:53: | 07:24:22 | prometheus-config-validation | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-moved-pelican/promreceiver-allocatorconfig-targetallocator === NAME chainsaw/autoscale l.go:53: | 07:24:23 | autoscale | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-flexible-gazelle/simplest-collector l.go:53: | 07:24:23 | autoscale | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-flexible-gazelle/simplest-set-utilization-collector === NAME chainsaw/smoke-ports l.go:53: | 07:24:26 | smoke-ports | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-viable-kingfish === CONT chainsaw/instrumentation-python l.go:53: | 07:24:26 | instrumentation-python | @setup  | CREATE | OK | v1/Namespace @ chainsaw-settled-foal l.go:53: | 07:24:26 | instrumentation-python | step-00  | TRY | RUN | l.go:53: | 07:24:26 | instrumentation-python | step-00  | CMD | 
RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-settled-foal openshift.io/sa.scc.uid-range=1000/1000 --overwrite
l.go:53: | 07:24:26 | instrumentation-python | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-settled-foal annotated
l.go:53: | 07:24:26 | instrumentation-python | step-00  | CMD | DONE |
l.go:53: | 07:24:26 | instrumentation-python | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-settled-foal openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite
=== NAME chainsaw/autoscale
l.go:53: | 07:24:26 | autoscale | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-flexible-gazelle/simplest-set-utilization-collector
l.go:53: | 07:24:26 | autoscale | step-00  | ASSERT | RUN | autoscaling/v2/HorizontalPodAutoscaler @ chainsaw-flexible-gazelle/simplest-collector
=== NAME chainsaw/instrumentation-python
l.go:53: | 07:24:26 | instrumentation-python | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-settled-foal annotated
l.go:53: | 07:24:26 | instrumentation-python | step-00  | CMD | DONE |
l.go:53: | 07:24:26 | instrumentation-python | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settled-foal/sidecar
=== NAME chainsaw/autoscale
l.go:53: | 07:24:26 | autoscale | step-00  | ASSERT | DONE | autoscaling/v2/HorizontalPodAutoscaler @ chainsaw-flexible-gazelle/simplest-collector
l.go:53: | 07:24:26 | autoscale | step-00  | ASSERT | RUN | autoscaling/v2/HorizontalPodAutoscaler @ chainsaw-flexible-gazelle/simplest-set-utilization-collector
l.go:53: | 07:24:26 | autoscale | step-00  | ASSERT | DONE | autoscaling/v2/HorizontalPodAutoscaler @ chainsaw-flexible-gazelle/simplest-set-utilization-collector
l.go:53: | 07:24:26 | autoscale | step-00  | TRY | DONE |
l.go:53: | 07:24:26 | autoscale | step-01  | TRY | RUN |
l.go:53: | 07:24:26 | autoscale | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
=== NAME chainsaw/instrumentation-python
l.go:53: | 07:24:26 | instrumentation-python | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settled-foal/sidecar
l.go:53: | 07:24:26 | instrumentation-python | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settled-foal/sidecar
l.go:53: | 07:24:26 | instrumentation-python | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settled-foal/python
=== NAME chainsaw/autoscale
l.go:53: | 07:24:27 | autoscale | step-01  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
l.go:53: | 07:24:27 | autoscale | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
l.go:53: | 07:24:27 | autoscale | step-01  | ASSERT | RUN | autoscaling/v2/HorizontalPodAutoscaler @ chainsaw-flexible-gazelle/simplest-set-utilization-collector
=== NAME chainsaw/instrumentation-python
l.go:53: | 07:24:27 | instrumentation-python | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settled-foal/python
l.go:53: | 07:24:27 | instrumentation-python | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settled-foal/python
l.go:53: | 07:24:27 | instrumentation-python | step-00  | TRY | DONE |
l.go:53: | 07:24:27 | instrumentation-python | step-01  | TRY | RUN |
l.go:53: | 07:24:27 | instrumentation-python | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-settled-foal/my-python
l.go:53: | 07:24:27 | instrumentation-python | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-settled-foal/my-python
l.go:53: | 07:24:27 | instrumentation-python | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-settled-foal/my-python
l.go:53: | 07:24:27 | instrumentation-python | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-settled-foal/*
=== NAME chainsaw/autoscale
l.go:53: | 07:24:27 | autoscale | step-01  | ASSERT | DONE | autoscaling/v2/HorizontalPodAutoscaler @ chainsaw-flexible-gazelle/simplest-set-utilization-collector
l.go:53: | 07:24:27 | autoscale | step-01  | TRY | DONE |
l.go:53: | 07:24:27 | autoscale | step-02  | TRY | RUN |
l.go:53: | 07:24:27 | autoscale | step-02  | APPLY | RUN | batch/v1/Job @ chainsaw-flexible-gazelle/telemetrygen-set-utilization
l.go:53: | 07:24:27 | autoscale | step-02  | CREATE | OK | batch/v1/Job @ chainsaw-flexible-gazelle/telemetrygen-set-utilization
l.go:53: | 07:24:27 | autoscale | step-02  | APPLY | DONE | batch/v1/Job @ chainsaw-flexible-gazelle/telemetrygen-set-utilization
l.go:53: | 07:24:27 | autoscale | step-02  | ASSERT | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
=== NAME chainsaw/prometheus-config-validation
l.go:53: | 07:24:27 | prometheus-config-validation | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-moved-pelican/promreceiver-allocatorconfig-targetallocator
l.go:53: | 07:24:27 | prometheus-config-validation | step-00  | TRY | DONE |
l.go:53: | 07:24:27 | prometheus-config-validation | step-01  | TRY | RUN |
l.go:53: | 07:24:27 | prometheus-config-validation | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/labeldrop
=== NAME chainsaw/autoscale
l.go:53: | 07:24:27 | autoscale | step-02  | ASSERT | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
l.go:53: | 07:24:27 | autoscale | step-02  | TRY | DONE |
l.go:53: | 07:24:27 | autoscale | step-03  | TRY | RUN |
l.go:53: | 07:24:27 | autoscale | step-03  | DELETE | RUN | batch/v1/Job @ chainsaw-flexible-gazelle/telemetrygen-set-utilization
l.go:53: | 07:24:27 | autoscale | step-03  | DELETE | OK | batch/v1/Job @ chainsaw-flexible-gazelle/telemetrygen-set-utilization
l.go:53: | 07:24:27 | autoscale | step-03  | DELETE | DONE | batch/v1/Job @ chainsaw-flexible-gazelle/telemetrygen-set-utilization
l.go:53: | 07:24:27 | autoscale | step-03  | ASSERT | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
=== NAME chainsaw/prometheus-config-validation
l.go:53: | 07:24:27 | prometheus-config-validation | step-01  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/labeldrop
l.go:53: | 07:24:27 | prometheus-config-validation | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/labeldrop
l.go:53: | 07:24:27 | prometheus-config-validation | step-01  | TRY | DONE |
l.go:53: | 07:24:27 | prometheus-config-validation | step-02  | TRY | RUN |
l.go:53: | 07:24:27 | prometheus-config-validation | step-02  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig-extra
=== NAME chainsaw/env-vars
l.go:53: | 07:24:28 | env-vars | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-ready-ray
=== CONT chainsaw/daemonset-features
l.go:53: | 07:24:28 | daemonset-features | @setup  | CREATE | OK | v1/Namespace @ chainsaw-expert-tomcat
l.go:53: | 07:24:28 | daemonset-features | step-00  | TRY | RUN |
l.go:53: | 07:24:28 | daemonset-features | step-00  | SCRIPT | RUN |
=== COMMAND
/usr/bin/sh -c ./add-scc-openshift.sh
l.go:53: | 07:24:29 | daemonset-features | step-00  | SCRIPT | LOG |
=== STDOUT
Running the test against an OpenShift Cluster
Creating an Service Account
Creating a Security Context Constrain
Setting the Service Account for the Daemonset
Adding the new policy to the Service Account
securitycontextconstraints.security.openshift.io/daemonset-with-hostport created
serviceaccount/otel-collector-daemonset created
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:daemonset-with-hostport added: "otel-collector-daemonset"
l.go:53: | 07:24:29 | daemonset-features | step-00  | SCRIPT | DONE |
l.go:53: | 07:24:29 | daemonset-features | step-00  | TRY | DONE |
l.go:53: | 07:24:29 | daemonset-features | step-01  | TRY | RUN |
l.go:53: | 07:24:29 | daemonset-features | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-expert-tomcat/daemonset
l.go:53: | 07:24:29 | daemonset-features | step-01  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-expert-tomcat/daemonset
l.go:53: | 07:24:29 | daemonset-features | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-expert-tomcat/daemonset
l.go:53: | 07:24:29 | daemonset-features | step-01  | TRY | DONE |
l.go:53: | 07:24:29 | daemonset-features | step-02  | TRY | RUN |
l.go:53: | 07:24:29 | daemonset-features | step-02  | ASSERT | RUN | apps/v1/DaemonSet @ chainsaw-expert-tomcat/daemonset-collector
=== NAME chainsaw/prometheus-config-validation
l.go:53: | 07:24:30 | prometheus-config-validation | step-02  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig-extra
l.go:53: | 07:24:30 | prometheus-config-validation | step-02  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig-extra
l.go:53: | 07:24:30 | prometheus-config-validation | step-02  | ASSERT | RUN | apps/v1/StatefulSet @ chainsaw-moved-pelican/promreceiver-allocatorconfig-extra-collector
=== NAME chainsaw/autoscale
l.go:53: | 07:24:30 | autoscale | step-03  | ASSERT | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
l.go:53: | 07:24:30 | autoscale | step-03  | TRY | DONE |
l.go:53: | 07:24:30 | autoscale | step-04  | TRY | RUN |
l.go:53: | 07:24:30 | autoscale | step-04  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest
l.go:53: | 07:24:30 | autoscale | step-04  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest
l.go:53: | 07:24:30 | autoscale | step-04  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest
l.go:53: | 07:24:30 | autoscale | step-04  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
l.go:53: | 07:24:30 | autoscale | step-04  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
l.go:53: | 07:24:30 | autoscale | step-04  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
l.go:53: | 07:24:30 | autoscale | step-04  | ERROR | RUN | autoscaling/v2/HorizontalPodAutoscaler @ chainsaw-flexible-gazelle/simplest-collector
l.go:53: | 07:24:30 | autoscale | step-04  | ERROR | DONE | autoscaling/v2/HorizontalPodAutoscaler @ chainsaw-flexible-gazelle/simplest-collector
l.go:53: | 07:24:30 | autoscale | step-04  | ERROR | RUN | autoscaling/v2/HorizontalPodAutoscaler @ chainsaw-flexible-gazelle/simplest-set-utilization-collector
l.go:53: | 07:24:30 | autoscale | step-04  | ERROR | DONE | autoscaling/v2/HorizontalPodAutoscaler @ chainsaw-flexible-gazelle/simplest-set-utilization-collector
l.go:53: | 07:24:30 | autoscale | step-04  | TRY | DONE |
l.go:53: | 07:24:30 | autoscale | step-02  | CLEANUP | RUN |
l.go:53: | 07:24:30 | autoscale | step-02  | DELETE | RUN | batch/v1/Job @ chainsaw-flexible-gazelle/telemetrygen-set-utilization
l.go:53: | 07:24:30 | autoscale | step-02  | DELETE | DONE | batch/v1/Job @ chainsaw-flexible-gazelle/telemetrygen-set-utilization
l.go:53: | 07:24:30 | autoscale | step-02  | CLEANUP | DONE |
l.go:53: | 07:24:30 | autoscale | step-00  | CLEANUP | RUN |
l.go:53: | 07:24:30 | autoscale | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
l.go:53: | 07:24:30 | autoscale | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
l.go:53: | 07:24:32 | autoscale | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest-set-utilization
l.go:53: | 07:24:32 | autoscale | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest
l.go:53: | 07:24:32 | autoscale | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest
l.go:53: | 07:24:32 | autoscale | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-flexible-gazelle/simplest
l.go:53: | 07:24:32 | autoscale | step-00  | CLEANUP | DONE |
l.go:53: | 07:24:32 | autoscale | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-flexible-gazelle
l.go:53: | 07:24:32 | autoscale | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-flexible-gazelle
=== NAME chainsaw/prometheus-config-validation
l.go:53: | 07:24:33 | prometheus-config-validation | step-02  | ASSERT | DONE | apps/v1/StatefulSet @ chainsaw-moved-pelican/promreceiver-allocatorconfig-extra-collector
l.go:53: | 07:24:33 | prometheus-config-validation | step-02  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-moved-pelican/promreceiver-allocatorconfig-extra-targetallocator
=== NAME chainsaw/autoscale
l.go:53: | 07:24:38 | autoscale | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-flexible-gazelle
=== CONT chainsaw/targetallocator-prometheuscr
l.go:53: | 07:24:38 | targetallocator-prometheuscr | @setup  | CREATE | OK | v1/Namespace @ chainsaw-targetallocator-prometheuscr
l.go:53: | 07:24:38 | targetallocator-prometheuscr | step-00  | TRY | RUN |
l.go:53: | 07:24:38 | targetallocator-prometheuscr | step-00  | APPLY | RUN | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/ta
l.go:53: | 07:24:38 | targetallocator-prometheuscr | step-00  | CREATE | OK | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/ta
l.go:53: | 07:24:38 | targetallocator-prometheuscr | step-00  | APPLY | DONE | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/ta
l.go:53: | 07:24:38 | targetallocator-prometheuscr | step-00  | APPLY | RUN | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/collector
l.go:53: | 07:24:38 | targetallocator-prometheuscr | step-00  | CREATE | OK | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/collector
l.go:53: | 07:24:38 | targetallocator-prometheuscr | step-00  | APPLY | DONE | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/collector
l.go:53: | 07:24:38 | targetallocator-prometheuscr | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-prometheuscr
l.go:53: | 07:24:38 | targetallocator-prometheuscr | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-prometheuscr
l.go:53: | 07:24:38 | targetallocator-prometheuscr | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-prometheuscr
l.go:53: | 07:24:38 | targetallocator-prometheuscr | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ collector-prometheuscr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ collector-prometheuscr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ collector-prometheuscr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-targetallocator-prometheuscr/prometheus-cr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-targetallocator-prometheuscr/prometheus-cr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-targetallocator-prometheuscr/prometheus-cr
l.go:53: | 07:24:39 | targetallocator-prometheuscr | step-00  | ASSERT | RUN | apps/v1/StatefulSet @ chainsaw-targetallocator-prometheuscr/prometheus-cr-collector
=== NAME chainsaw/prometheus-config-validation
l.go:53: | 07:24:42 | prometheus-config-validation | step-02  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-moved-pelican/promreceiver-allocatorconfig-extra-targetallocator
l.go:53: | 07:24:42 | prometheus-config-validation | step-02  | TRY | DONE |
l.go:53: | 07:24:42 | prometheus-config-validation | step-03  | TRY | RUN |
l.go:53: | 07:24:42 | prometheus-config-validation | step-03  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-nopromconfig
l.go:53: | 07:24:42 | prometheus-config-validation | step-03  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-nopromconfig
l.go:53: | 07:24:42 | prometheus-config-validation | step-03  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-nopromconfig
l.go:53: | 07:24:42 | prometheus-config-validation | step-03  | ASSERT | RUN | apps/v1/StatefulSet @ chainsaw-moved-pelican/promreceiver-nopromconfig-collector
=== NAME chainsaw/targetallocator-prometheuscr
l.go:53: | 07:24:43 | targetallocator-prometheuscr | step-00  | ASSERT | DONE | apps/v1/StatefulSet @ chainsaw-targetallocator-prometheuscr/prometheus-cr-collector
l.go:53: | 07:24:43 | targetallocator-prometheuscr | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-targetallocator-prometheuscr/prometheus-cr-targetallocator
l.go:53: | 07:24:43 | targetallocator-prometheuscr | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-targetallocator-prometheuscr/prometheus-cr-targetallocator
l.go:53: | 07:24:43 | targetallocator-prometheuscr | step-00  | ASSERT | RUN | v1/ConfigMap @ chainsaw-targetallocator-prometheuscr/prometheus-cr-targetallocator
l.go:53: | 07:24:43 | targetallocator-prometheuscr | step-00  | ASSERT | DONE | v1/ConfigMap @ chainsaw-targetallocator-prometheuscr/prometheus-cr-targetallocator
l.go:53: | 07:24:43 | targetallocator-prometheuscr | step-00  | ASSERT | RUN | v1/ConfigMap @ chainsaw-targetallocator-prometheuscr/prometheus-cr-collector-52e1d2ae
=== NAME chainsaw/prometheus-config-validation
l.go:53: | 07:24:45 | prometheus-config-validation | step-03  | ASSERT | DONE | apps/v1/StatefulSet @ chainsaw-moved-pelican/promreceiver-nopromconfig-collector
l.go:53: | 07:24:45 | prometheus-config-validation | step-03  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-moved-pelican/promreceiver-nopromconfig-targetallocator
l.go:53: | 07:24:48 | prometheus-config-validation | step-03  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-moved-pelican/promreceiver-nopromconfig-targetallocator
l.go:53: | 07:24:48 | prometheus-config-validation | step-03  | TRY | DONE |
l.go:53: | 07:24:48 | prometheus-config-validation | step-03  | CLEANUP | RUN |
l.go:53: | 07:24:48 | prometheus-config-validation | step-03  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-nopromconfig
l.go:53: | 07:24:48 | prometheus-config-validation | step-03  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-nopromconfig
l.go:53: | 07:24:50 | prometheus-config-validation | step-03  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-nopromconfig
l.go:53: | 07:24:50 | prometheus-config-validation | step-03  | CLEANUP | DONE |
l.go:53: | 07:24:50 | prometheus-config-validation | step-02  | CLEANUP | RUN |
l.go:53: | 07:24:50 | prometheus-config-validation | step-02  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig-extra
l.go:53: | 07:24:51 | prometheus-config-validation | step-02  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig-extra
l.go:53: | 07:24:52 | prometheus-config-validation | step-02  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig-extra
l.go:53: | 07:24:52 | prometheus-config-validation | step-02  | CLEANUP | DONE |
l.go:53: | 07:24:52 | prometheus-config-validation | step-01  | CLEANUP | RUN |
l.go:53: | 07:24:52 | prometheus-config-validation | step-01  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/labeldrop
l.go:53: | 07:24:54 | prometheus-config-validation | step-01  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/labeldrop
l.go:53: | 07:24:55 | prometheus-config-validation | step-01  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/labeldrop
l.go:53: | 07:24:55 | prometheus-config-validation | step-01  | CLEANUP | DONE |
l.go:53: | 07:24:55 | prometheus-config-validation | step-00  | CLEANUP | RUN |
l.go:53: | 07:24:55 | prometheus-config-validation | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig
l.go:53: | 07:24:55 | prometheus-config-validation | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig
l.go:53: | 07:24:55 | prometheus-config-validation | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-moved-pelican/promreceiver-allocatorconfig
l.go:53: | 07:24:55 | prometheus-config-validation | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-moved-pelican
l.go:53: | 07:24:56 | prometheus-config-validation | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-moved-pelican
l.go:53: | 07:24:56 | prometheus-config-validation | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-moved-pelican
l.go:53: | 07:24:56 | prometheus-config-validation | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ promreceiver-allocatorconfig
l.go:53: | 07:24:56 | prometheus-config-validation | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ promreceiver-allocatorconfig
l.go:53: | 07:24:56 | prometheus-config-validation | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ promreceiver-allocatorconfig
l.go:53: | 07:24:56 | prometheus-config-validation | step-00  | DELETE | RUN | v1/ServiceAccount @ chainsaw-moved-pelican/ta
l.go:53: | 07:24:56 | prometheus-config-validation | step-00  | DELETE | OK | v1/ServiceAccount @ chainsaw-moved-pelican/ta
l.go:53: | 07:24:56 | prometheus-config-validation | step-00  | DELETE | DONE | v1/ServiceAccount @ chainsaw-moved-pelican/ta
l.go:53: | 07:24:56 | prometheus-config-validation | step-00  | CLEANUP | DONE |
l.go:53: | 07:24:56 | prometheus-config-validation | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-moved-pelican
l.go:53: | 07:24:56 | prometheus-config-validation | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-moved-pelican
l.go:53: | 07:25:02 | prometheus-config-validation | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-moved-pelican
=== CONT chainsaw/targetallocator-kubernetessd
l.go:53: | 07:25:02 | targetallocator-kubernetessd | @setup  | CREATE | OK | v1/Namespace @ chainsaw-targetallocator-kubernetessd
l.go:53: | 07:25:02 | targetallocator-kubernetessd | step-00  | TRY | RUN |
l.go:53: | 07:25:02 | targetallocator-kubernetessd | step-00  | APPLY | RUN | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/ta
l.go:53: | 07:25:02 | targetallocator-kubernetessd | step-00  | CREATE | OK | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/ta
l.go:53: | 07:25:02 | targetallocator-kubernetessd | step-00  | APPLY | DONE | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/ta
l.go:53: | 07:25:02 | targetallocator-kubernetessd | step-00  | APPLY | RUN | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/collector
l.go:53: | 07:25:02 | targetallocator-kubernetessd | step-00  | CREATE | OK | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/collector
l.go:53: | 07:25:02 | targetallocator-kubernetessd | step-00  | APPLY | DONE | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/collector
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ collector-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ collector-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ collector-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @
chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd
l.go:53: | 07:25:03 | targetallocator-kubernetessd | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd-targetallocator
l.go:53: | 07:25:04 | targetallocator-kubernetessd | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd-targetallocator
l.go:53: | 07:25:04 | targetallocator-kubernetessd | step-00  | ASSERT | RUN | v1/ConfigMap @ chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd-targetallocator
l.go:53: | 07:25:05 | targetallocator-kubernetessd | step-00  | ASSERT | DONE | v1/ConfigMap @ chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd-targetallocator
l.go:53: | 07:25:05 | targetallocator-kubernetessd | step-00  | ASSERT | RUN | v1/ConfigMap @ chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd-collector-699cdaa1
=== NAME chainsaw/instrumentation-python
l.go:53: | 07:30:27 | instrumentation-python | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-settled-foal/*
=== ERROR
-------------------------------------------------------
v1/Pod/chainsaw-settled-foal/my-python-5c4b47f548-l5sx6
-------------------------------------------------------
* spec.containers[0].env[6].name: Invalid value: "OTEL_EXPORTER_OTLP_PROTOCOL": Expected value: "OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"
* spec.containers[0].env[8].name: Invalid value: "OTEL_LOGS_EXPORTER": Expected value: "OTEL_EXPORTER_OTLP_METRICS_PROTOCOL"
* spec.containers[0].env[8].value: Invalid value: "otlp": Expected value: "http/protobuf"
* spec.containers[1].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match
--- expected
+++ actual
@@ -6,17 +6,27 @@
     sidecar.opentelemetry.io/inject: "true"
   labels:
     app: my-python
+  name: my-python-5c4b47f548-l5sx6
   namespace: chainsaw-settled-foal
+  ownerReferences:
+  - apiVersion: apps/v1
+    blockOwnerDeletion: true
+    controller: true
+    kind: ReplicaSet
+    name: my-python-5c4b47f548
+    uid: 9ba1a9e0-f88c-4c76-9254-43c5c7ea5d08
 spec:
   containers:
   - env:
     - name: OTEL_NODE_IP
       valueFrom:
         fieldRef:
+          apiVersion: v1
           fieldPath: status.hostIP
     - name: OTEL_POD_IP
       valueFrom:
         fieldRef:
+          apiVersion: v1
           fieldPath: status.podIP
     - name: OTEL_LOG_LEVEL
       value: debug
@@ -26,12 +36,12 @@
       value: http://localhost:4318
     - name: PYTHONPATH
       value: /otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:/otel-auto-instrumentation-python
-    - name: OTEL_EXPORTER_OTLP_TRACES_PROTOCOL
+    - name: OTEL_EXPORTER_OTLP_PROTOCOL
       value: http/protobuf
     - name: OTEL_METRICS_EXPORTER
       value: otlp
-    - name: OTEL_EXPORTER_OTLP_METRICS_PROTOCOL
-      value: http/protobuf
+    - name: OTEL_LOGS_EXPORTER
+      value: otlp
     - name: OTEL_EXPORTER_OTLP_TIMEOUT
       value: "20"
     - name: OTEL_TRACES_SAMPLER
@@ -55,28 +65,185 @@
     - name: OTEL_PROPAGATORS
       value: jaeger,b3
     - name: OTEL_RESOURCE_ATTRIBUTES
+      value: k8s.container.name=myapp,k8s.deployment.name=my-python,k8s.namespace.name=chainsaw-settled-foal,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-python-5c4b47f548,service.instance.id=chainsaw-settled-foal.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main
+    image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main
+    imagePullPolicy: IfNotPresent
     name: myapp
-    volumeMounts:
-    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+    resources: {}
+    securityContext:
+      allowPrivilegeEscalation: false
+      capabilities:
+        drop:
+        - ALL
+      runAsNonRoot: true
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-7hrx8
       readOnly: true
     - mountPath: /otel-auto-instrumentation-python
      name: opentelemetry-auto-instrumentation-python
   - args:
-    - --feature-gates=-component.UseLocalHostAsDefaultHost
     - --config=env:OTEL_CONFIG
+    env:
+    - name: POD_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.name
+    - name: OTEL_CONFIG
+      value: |
+        receivers:
+          otlp:
+            protocols:
+              grpc:
+                endpoint: 0.0.0.0:4317
+              http:
+                endpoint: 0.0.0.0:4318
+        exporters:
+          debug: null
+        service:
+          telemetry:
+            metrics:
+              address: 0.0.0.0:8888
+          pipelines:
+            traces:
+              exporters:
+              - debug
+              receivers:
+              - otlp
+    - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.name
+    - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.uid
+    - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: spec.nodeName
+    - name: OTEL_RESOURCE_ATTRIBUTES
+      value: k8s.deployment.name=my-python,k8s.deployment.uid=9f004cff-334e-4bfc-8301-9f0e39db2eab,k8s.namespace.name=chainsaw-settled-foal,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-python-5c4b47f548,k8s.replicaset.uid=9ba1a9e0-f88c-4c76-9254-43c5c7ea5d08
+    image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+    imagePullPolicy: IfNotPresent
     name: otc-container
+    ports:
+    - containerPort: 8888
+      name: metrics
+      protocol: TCP
+    - containerPort: 4317
+      name: otlp-grpc
+      protocol: TCP
+    - containerPort: 4318
+      name: otlp-http
+      protocol: TCP
+    resources: {}
+    securityContext:
+      allowPrivilegeEscalation: false
+      capabilities:
+        drop:
+        - ALL
+      runAsNonRoot: true
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-7hrx8
+      readOnly: true
   initContainers:
-  - name: opentelemetry-auto-instrumentation-python
+  - command:
+    - cp
+    - -r
+    - /autoinstrumentation/.
+    - /otel-auto-instrumentation-python
+    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:0.48b0
+    imagePullPolicy: IfNotPresent
+    name: opentelemetry-auto-instrumentation-python
+    resources:
+      limits:
+        cpu: 500m
+        memory: 32Mi
+      requests:
+        cpu: 50m
+        memory: 32Mi
+    securityContext:
+      allowPrivilegeEscalation: false
+      capabilities:
+        drop:
+        - ALL
+      runAsNonRoot: true
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+    volumeMounts:
+    - mountPath: /otel-auto-instrumentation-python
+      name: opentelemetry-auto-instrumentation-python
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-7hrx8
+      readOnly: true
 status:
   containerStatuses:
-  - name: myapp
+  - containerID: cri-o://1c9240c6fc4502869956ccfcfbb49fb370ff3012d48b65b7300e4fd465bfe952
+    image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main
+    imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python@sha256:66dce9234c5068b519226fee0c8584bd9c104fed87643ed89e02428e909b18db
+    lastState: {}
+    name: myapp
     ready: true
+    restartCount: 0
     started: true
-  - name: otc-container
+    state:
+      running:
+        startedAt: "2025-02-03T07:24:33Z"
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-7hrx8
+      readOnly: true
+      recursiveReadOnly: Disabled
+    - mountPath: /otel-auto-instrumentation-python
+      name: opentelemetry-auto-instrumentation-python
+  - containerID: cri-o://c009d776c6ac001c33082e13c75954ea82146223535ff214e4e2404c16f38708
+    image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+    imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+    lastState: {}
+    name: otc-container
     ready: true
+    restartCount: 0
     started: true
+    state:
+      running:
+        startedAt: "2025-02-03T07:24:33Z"
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-7hrx8
+      readOnly: true
+      recursiveReadOnly: Disabled
  initContainerStatuses:
-  - name: opentelemetry-auto-instrumentation-python
+  - containerID: cri-o://8a0d13b08df8b49391410d09f3e38bc1c93849ed85e7d1bf08261fb7c5497ec0
+    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:0.48b0
+    imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python@sha256:068a8968cc4be5e65169c5587cc582022d41841b5252f9d99972d487e749b584
+    lastState: {}
+    name: opentelemetry-auto-instrumentation-python
     ready: true
+    restartCount: 0
+    started: false
+    state:
+      terminated:
+        containerID: cri-o://8a0d13b08df8b49391410d09f3e38bc1c93849ed85e7d1bf08261fb7c5497ec0
+        exitCode: 0
+        finishedAt: "2025-02-03T07:24:30Z"
+        reason: Completed
+        startedAt: "2025-02-03T07:24:29Z"
+    volumeMounts:
+    - mountPath: /otel-auto-instrumentation-python
+      name: opentelemetry-auto-instrumentation-python
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-7hrx8
+      readOnly: true
+      recursiveReadOnly: Disabled
   phase: Running
l.go:53: | 07:30:27 | instrumentation-python | step-01  | TRY | DONE |
l.go:53: | 07:30:27 | instrumentation-python | step-01  | CATCH | RUN |
l.go:53: | 07:30:27 | instrumentation-python | step-01  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl logs --prefix -l app=my-python -n chainsaw-settled-foal --all-containers
l.go:53: | 07:30:27 | instrumentation-python | step-01  | CMD | LOG |
=== STDOUT
[pod/my-python-5c4b47f548-l5sx6/myapp] import psutil
[pod/my-python-5c4b47f548-l5sx6/myapp]   File "/otel-auto-instrumentation-python/psutil/__init__.py", line 103, in
[pod/my-python-5c4b47f548-l5sx6/myapp]     from . import _pslinux as _psplatform
[pod/my-python-5c4b47f548-l5sx6/myapp]   File "/otel-auto-instrumentation-python/psutil/_pslinux.py", line 25, in <module>
[pod/my-python-5c4b47f548-l5sx6/myapp]     from . import _psutil_linux as cext
[pod/my-python-5c4b47f548-l5sx6/myapp] ImportError: Error relocating /otel-auto-instrumentation-python/psutil/_psutil_linux.abi3.so: __sched_cpufree: symbol not found
[pod/my-python-5c4b47f548-l5sx6/myapp]  * Debug mode: off
[pod/my-python-5c4b47f548-l5sx6/myapp] WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
[pod/my-python-5c4b47f548-l5sx6/myapp]  * Running on http://127.0.0.1:8080
[pod/my-python-5c4b47f548-l5sx6/myapp] Press CTRL+C to quit
[pod/my-python-5c4b47f548-l5sx6/otc-container] 2025-02-03T07:24:33.436Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/my-python-5c4b47f548-l5sx6/otc-container] 2025-02-03T07:24:33.436Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/my-python-5c4b47f548-l5sx6/otc-container] 2025-02-03T07:24:33.436Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"}
[pod/my-python-5c4b47f548-l5sx6/otc-container] 2025-02-03T07:24:33.450Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/my-python-5c4b47f548-l5sx6/otc-container] 2025-02-03T07:24:33.450Z info extensions/extensions.go:39 Starting extensions...
[pod/my-python-5c4b47f548-l5sx6/otc-container] 2025-02-03T07:24:33.450Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-python-5c4b47f548-l5sx6/otc-container] 2025-02-03T07:24:33.450Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
[pod/my-python-5c4b47f548-l5sx6/otc-container] 2025-02-03T07:24:33.450Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-python-5c4b47f548-l5sx6/otc-container] 2025-02-03T07:24:33.450Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
[pod/my-python-5c4b47f548-l5sx6/otc-container] 2025-02-03T07:24:33.450Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
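The `ImportError: Error relocating … __sched_cpufree: symbol not found` above is the classic signature of a glibc-built native extension (here psutil's `_psutil_linux.abi3.so`, shipped by the autoinstrumentation image) being loaded inside a musl-based (Alpine-style) application container. A minimal sketch for probing which libc a given image links against — the `libc_flavor` helper and the `ldd --version` probe are my own illustration, not part of the test suite:

```python
import subprocess

def libc_flavor() -> str:
    """Best-effort check of the C library the current image is built on.

    `ldd --version` mentions "musl" on Alpine-style images and
    "GNU libc"/"GLIBC" on glibc-based ones; musl's ldd prints its
    banner on stderr, so both streams are inspected.
    """
    try:
        out = subprocess.run(["ldd", "--version"], capture_output=True, text=True)
    except OSError:
        return "unknown"  # statically linked image with no ldd at all
    text = (out.stdout + out.stderr).lower()
    return "musl" if "musl" in text else "glibc"

print(libc_flavor())
```

On a musl image the default (glibc) Python autoinstrumentation binaries cannot relocate, so a musl-compatible instrumentation build has to be selected for such workloads — the suite runs a dedicated `instrumentation-dotnet-musl` test for the analogous .NET case.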
l.go:53: | 07:30:27 | instrumentation-python | step-01  | CMD | DONE |
l.go:53: | 07:30:27 | instrumentation-python | step-01  | CATCH | DONE |
l.go:53: | 07:30:27 | instrumentation-python | step-01  | CLEANUP | RUN |
l.go:53: | 07:30:27 | instrumentation-python | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-settled-foal/my-python
l.go:53: | 07:30:27 | instrumentation-python | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-settled-foal/my-python
l.go:53: | 07:30:27 | instrumentation-python | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-settled-foal/my-python
l.go:53: | 07:30:27 | instrumentation-python | step-01  | CLEANUP | DONE |
l.go:53: | 07:30:27 | instrumentation-python | step-00  | CLEANUP | RUN |
l.go:53: | 07:30:27 | instrumentation-python | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settled-foal/python
l.go:53: | 07:30:27 | instrumentation-python | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settled-foal/python
l.go:53: | 07:30:27 | instrumentation-python | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settled-foal/python
l.go:53: | 07:30:27 | instrumentation-python | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settled-foal/sidecar
l.go:53: | 07:30:27 | instrumentation-python | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settled-foal/sidecar
l.go:53: | 07:30:27 | instrumentation-python | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settled-foal/sidecar
l.go:53: | 07:30:27 | instrumentation-python | step-00  | CLEANUP | DONE |
l.go:53: | 07:30:27 | instrumentation-python | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-settled-foal
l.go:53: | 07:30:27 | instrumentation-python | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-settled-foal
=== NAME chainsaw/daemonset-features
l.go:53: | 07:30:29 | daemonset-features | step-02  | ASSERT | ERROR | apps/v1/DaemonSet @ chainsaw-expert-tomcat/daemonset-collector
=== ERROR
------------------------------------------------------------
apps/v1/DaemonSet/chainsaw-expert-tomcat/daemonset-collector
------------------------------------------------------------
* spec.template.spec.containers[0].args: Invalid value: []interface {}{"--config=/conf/collector.yaml"}: lengths of slices don't match

--- expected
+++ actual
@@ -3,13 +3,40 @@
  metadata:
    name: daemonset-collector
    namespace: chainsaw-expert-tomcat
+   ownerReferences:
+   - apiVersion: opentelemetry.io/v1beta1
+     blockOwnerDeletion: true
+     controller: true
+     kind: OpenTelemetryCollector
+     name: daemonset
+     uid: e3e8d583-1929-4452-a859-e3114912918b
  spec:
    template:
      spec:
        containers:
        - args:
          - --config=/conf/collector.yaml
-         - --feature-gates=-component.UseLocalHostAsDefaultHost
+         env:
+         - name: POD_NAME
+           valueFrom:
+             fieldRef:
+               apiVersion: v1
+               fieldPath: metadata.name
+         image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+         imagePullPolicy: IfNotPresent
          name: otc-container
+         ports:
+         - containerPort: 14250
+           name: jaeger-grpc
+           protocol: TCP
+         - containerPort: 8888
+           name: metrics
+           protocol: TCP
+         resources: {}
+         terminationMessagePath: /dev/termination-log
+         terminationMessagePolicy: File
+         volumeMounts:
+         - mountPath: /conf
+           name: otc-internal
        hostNetwork: true
l.go:53: | 07:30:29 | daemonset-features | step-02  | TRY | DONE |
l.go:53: | 07:30:29 | daemonset-features | step-01  | CLEANUP | RUN |
l.go:53: | 07:30:29 | daemonset-features | step-01  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-expert-tomcat/daemonset
l.go:53: | 07:30:29 | daemonset-features | step-01  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-expert-tomcat/daemonset
l.go:53: | 07:30:29 | daemonset-features | step-01  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-expert-tomcat/daemonset
l.go:53: | 07:30:29 | daemonset-features | step-01  | CLEANUP | DONE |
l.go:53: | 07:30:29 | daemonset-features | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-expert-tomcat
l.go:53: | 07:30:29 | daemonset-features | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-expert-tomcat
l.go:53: | 07:30:35 | daemonset-features | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-expert-tomcat
=== CONT chainsaw/create-sm-prometheus-exporters
l.go:53: | 07:30:35 | create-sm-prometheus-exporters | @setup  | CREATE | OK | v1/Namespace @ chainsaw-sincere-aphid
l.go:53: | 07:30:35 | create-sm-prometheus-exporters | step-00  | TRY | RUN |
l.go:53: | 07:30:35 | create-sm-prometheus-exporters | step-00  | APPLY | RUN | v1/Namespace @ create-sm-prometheus
l.go:53: | 07:30:35 | create-sm-prometheus-exporters | step-00  | CREATE | OK | v1/Namespace @ create-sm-prometheus
l.go:53: | 07:30:35 | create-sm-prometheus-exporters | step-00  | APPLY | DONE | v1/Namespace @ create-sm-prometheus
l.go:53: | 07:30:35 | create-sm-prometheus-exporters | step-00  | TRY | DONE |
l.go:53: | 07:30:35 | create-sm-prometheus-exporters | step-01  | TRY | RUN |
l.go:53: | 07:30:35 | create-sm-prometheus-exporters | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:35 | create-sm-prometheus-exporters | step-01  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:35 | create-sm-prometheus-exporters | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:35 | create-sm-prometheus-exporters | step-01  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-01  | ASSERT | DONE |
monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-01  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-01  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-01  | ASSERT | RUN | v1/Service @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-01  | ASSERT | DONE | v1/Service @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-01  | ASSERT | RUN | v1/Service @ create-sm-prometheus/simplest-collector-monitoring
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-01  | ASSERT | DONE | v1/Service @ create-sm-prometheus/simplest-collector-monitoring
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-01  | TRY | DONE |
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | TRY | RUN |
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | ASSERT | RUN | v1/Service @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | ASSERT | DONE | v1/Service @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | ASSERT | RUN | v1/Service @ create-sm-prometheus/simplest-collector-monitoring
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | ASSERT | DONE | v1/Service @ create-sm-prometheus/simplest-collector-monitoring
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-02  | TRY | DONE |
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-03  | TRY | RUN |
l.go:53: | 07:30:36 | create-sm-prometheus-exporters | step-03  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-03  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-03  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-03  | ASSERT | RUN | v1/Service @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-03  | ASSERT | DONE | v1/Service @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-03  | TRY | DONE |
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-04  | TRY | RUN |
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-04  | APPLY | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-04  | PATCH | OK | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-04  | APPLY | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-04  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-04  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-04  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-04  | TRY | DONE |
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | TRY | RUN |
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | APPLY | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | CREATE | OK | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | APPLY | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | APPLY | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | CREATE | OK | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | APPLY | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | ASSERT | RUN | v1/Service @ create-sm-prometheus/simplest-collector-monitoring
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | ASSERT | DONE | v1/Service @ create-sm-prometheus/simplest-collector-monitoring
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-05  | TRY | DONE |
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-06  | TRY | RUN |
l.go:53: | 07:30:37 | create-sm-prometheus-exporters | step-06  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-06  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-06  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-06  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-06  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-06  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-06  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-06  | ASSERT | RUN | v1/Service @ create-sm-prometheus/simplest-collector-monitoring
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-06  | ASSERT | DONE | v1/Service @ create-sm-prometheus/simplest-collector-monitoring
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-06  | TRY | DONE |
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-07  | TRY | RUN |
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-07  | DELETE | RUN | opentelemetry.io/v1beta1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-07  | DELETE | OK | opentelemetry.io/v1beta1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-07  | DELETE | DONE | opentelemetry.io/v1beta1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-07  | ERROR | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-07  | ERROR | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-07  | TRY | DONE |
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-08  | TRY | RUN |
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-08  | APPLY | RUN | v1/ServiceAccount @ create-sm-prometheus/ta
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-08  | CREATE | OK | v1/ServiceAccount @ create-sm-prometheus/ta
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-08  | APPLY | DONE | v1/ServiceAccount @ create-sm-prometheus/ta
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-08  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ create-sm-prometheus
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-08  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ create-sm-prometheus
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-08  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ create-sm-prometheus
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-08  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ simplest-targetallocator-create-sm-prometheus
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-08  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ simplest-targetallocator-create-sm-prometheus
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-08  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ simplest-targetallocator-create-sm-prometheus
l.go:53: | 07:30:38 | create-sm-prometheus-exporters | step-08  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:39 | create-sm-prometheus-exporters | step-08  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:39 | create-sm-prometheus-exporters | step-08  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest
l.go:53: | 07:30:39 | create-sm-prometheus-exporters | step-08  | APPLY | RUN | batch/v1/Job @ create-sm-prometheus/check-ta-metrics
l.go:53: | 07:30:39 | create-sm-prometheus-exporters | step-08  | CREATE | OK | batch/v1/Job @ create-sm-prometheus/check-ta-metrics
l.go:53: | 07:30:39 | create-sm-prometheus-exporters | step-08  | APPLY | DONE | batch/v1/Job @ create-sm-prometheus/check-ta-metrics
l.go:53: | 07:30:39 | create-sm-prometheus-exporters | step-08  | ASSERT | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-targetallocator
l.go:53: | 07:30:40 | create-sm-prometheus-exporters | step-08  | ASSERT | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-targetallocator
l.go:53: | 07:30:40 | create-sm-prometheus-exporters | step-08  | ASSERT | RUN | batch/v1/Job @ create-sm-prometheus/check-ta-metrics
=== NAME chainsaw/targetallocator-prometheuscr
l.go:53: | 07:30:43 | targetallocator-prometheuscr | step-00  | ASSERT | ERROR | v1/ConfigMap @ chainsaw-targetallocator-prometheuscr/prometheus-cr-collector-52e1d2ae
=== ERROR
actual resource not found
l.go:53: | 07:30:43 | targetallocator-prometheuscr | step-00  | TRY | DONE |
l.go:53: | 07:30:43 | targetallocator-prometheuscr | step-00  | CATCH | RUN |
l.go:53: | 07:30:43 | targetallocator-prometheuscr | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl logs --prefix -l app.kubernetes.io/managed-by=opentelemetry-operator -n chainsaw-targetallocator-prometheuscr --all-containers
l.go:53: | 07:30:44 | targetallocator-prometheuscr | step-00  | CMD | LOG |
=== STDOUT
[pod/prometheus-cr-collector-0/otc-container] 2025-02-03T07:30:42.912Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName":
"serviceMonitor/openshift-cluster-csi-drivers/aws-ebs-csi-driver-controller-monitor/3"} [pod/prometheus-cr-collector-0/otc-container] 2025-02-03T07:30:42.912Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/openshift-monitoring/kubelet/3"} [pod/prometheus-cr-collector-0/otc-container] 2025-02-03T07:30:42.912Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/openshift-monitoring/node-exporter/0"} [pod/prometheus-cr-collector-0/otc-container] 2025-02-03T07:30:42.912Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/openshift-apiserver/openshift-apiserver/0"} [pod/prometheus-cr-collector-0/otc-container] 2025-02-03T07:30:42.912Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0"} [pod/prometheus-cr-collector-0/otc-container] 2025-02-03T07:30:42.912Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0"} [pod/prometheus-cr-collector-0/otc-container] 2025-02-03T07:30:42.912Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/openshift-marketplace/marketplace-operator/0"} [pod/prometheus-cr-collector-0/otc-container] 2025-02-03T07:30:42.912Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/openshift-monitoring/kubelet/0"} [pod/prometheus-cr-collector-0/otc-container] 
2025-02-03T07:30:42.912Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/openshift-console-operator/console-operator/0"} [pod/prometheus-cr-collector-0/otc-container] 2025-02-03T07:30:42.912Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "serviceMonitor/openshift-monitoring/thanos-querier/0"} [pod/prometheus-cr-targetallocator-75494c9d6c-l5mng/ta-container] {"level":"info","ts":"2025-02-03T07:29:02Z","msg":"k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1alpha1.ScrapeConfig: scrapeconfigs.monitoring.coreos.com is forbidden: User \"system:serviceaccount:chainsaw-targetallocator-prometheuscr:ta\" cannot list resource \"scrapeconfigs\" in API group \"monitoring.coreos.com\" at the cluster scope"} [pod/prometheus-cr-targetallocator-75494c9d6c-l5mng/ta-container] {"level":"error","ts":"2025-02-03T07:29:02Z","msg":"Unhandled Error","logger":"UnhandledError","error":"k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: Failed to watch *v1alpha1.ScrapeConfig: failed to list *v1alpha1.ScrapeConfig: scrapeconfigs.monitoring.coreos.com is forbidden: User \"system:serviceaccount:chainsaw-targetallocator-prometheuscr:ta\" cannot list resource \"scrapeconfigs\" in API group \"monitoring.coreos.com\" at the cluster 
scope","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:158\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:308\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:306\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:72"} [pod/prometheus-cr-targetallocator-75494c9d6c-l5mng/ta-container] {"level":"info","ts":"2025-02-03T07:29:06Z","msg":"k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Probe: probes.monitoring.coreos.com is forbidden: User \"system:serviceaccount:chainsaw-targetallocator-prometheuscr:ta\" cannot list resource \"probes\" in API group \"monitoring.coreos.com\" at the cluster scope"} [pod/prometheus-cr-targetallocator-75494c9d6c-l5mng/ta-container] {"level":"error","ts":"2025-02-03T07:29:06Z","msg":"Unhandled Error","logger":"UnhandledError","error":"k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: Failed to watch *v1.Probe: failed to list *v1.Probe: probes.monitoring.coreos.com is forbidden: User \"system:serviceaccount:chainsaw-targetallocator-prometheuscr:ta\" cannot list resource \"probes\" in API group \"monitoring.coreos.com\" at the cluster 
scope","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:158\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:308\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:306\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:72"} [pod/prometheus-cr-targetallocator-75494c9d6c-l5mng/ta-container] {"level":"info","ts":"2025-02-03T07:29:48Z","msg":"k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1alpha1.ScrapeConfig: scrapeconfigs.monitoring.coreos.com is forbidden: User \"system:serviceaccount:chainsaw-targetallocator-prometheuscr:ta\" cannot list resource \"scrapeconfigs\" in API group \"monitoring.coreos.com\" at the cluster scope"} [pod/prometheus-cr-targetallocator-75494c9d6c-l5mng/ta-container] {"level":"error","ts":"2025-02-03T07:29:48Z","msg":"Unhandled Error","logger":"UnhandledError","error":"k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: Failed to watch *v1alpha1.ScrapeConfig: failed to list *v1alpha1.ScrapeConfig: scrapeconfigs.monitoring.coreos.com is forbidden: User \"system:serviceaccount:chainsaw-targetallocator-prometheuscr:ta\" cannot list resource \"scrapeconfigs\" in API group \"monitoring.coreos.com\" at the cluster 
scope","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:158\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:308\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:306\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:72"}
[pod/prometheus-cr-targetallocator-75494c9d6c-l5mng/ta-container] {"level":"info","ts":"2025-02-03T07:30:01Z","msg":"k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Probe: probes.monitoring.coreos.com is forbidden: User \"system:serviceaccount:chainsaw-targetallocator-prometheuscr:ta\" cannot list resource \"probes\" in API group \"monitoring.coreos.com\" at the cluster scope"}
[pod/prometheus-cr-targetallocator-75494c9d6c-l5mng/ta-container] {"level":"error","ts":"2025-02-03T07:30:01Z","msg":"Unhandled Error","logger":"UnhandledError","error":"k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: Failed to watch *v1.Probe: failed to list *v1.Probe: probes.monitoring.coreos.com is forbidden: User \"system:serviceaccount:chainsaw-targetallocator-prometheuscr:ta\" cannot list resource \"probes\" in API group \"monitoring.coreos.com\" at the cluster scope","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:158\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:308\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:306\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:72"}
[pod/prometheus-cr-targetallocator-75494c9d6c-l5mng/ta-container] {"level":"info","ts":"2025-02-03T07:30:42Z","msg":"k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1alpha1.ScrapeConfig: scrapeconfigs.monitoring.coreos.com is forbidden: User \"system:serviceaccount:chainsaw-targetallocator-prometheuscr:ta\" cannot list resource \"scrapeconfigs\" in API group \"monitoring.coreos.com\" at the cluster scope"}
[pod/prometheus-cr-targetallocator-75494c9d6c-l5mng/ta-container] {"level":"error","ts":"2025-02-03T07:30:42Z","msg":"Unhandled Error","logger":"UnhandledError","error":"k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: Failed to watch *v1alpha1.ScrapeConfig: failed to list *v1alpha1.ScrapeConfig: scrapeconfigs.monitoring.coreos.com is forbidden: User \"system:serviceaccount:chainsaw-targetallocator-prometheuscr:ta\" cannot list resource \"scrapeconfigs\" in API group \"monitoring.coreos.com\" at the cluster scope","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:158\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:308\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\tk8s.io/client-go@v0.31.2/tools/cache/reflector.go:306\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\tk8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:72"}
l.go:53: | 07:30:44 | targetallocator-prometheuscr | step-00  | CMD | DONE |
l.go:53: | 07:30:44 | targetallocator-prometheuscr | step-00  | CATCH | DONE |
l.go:53: | 07:30:44 | targetallocator-prometheuscr | step-00  | CLEANUP | RUN |
l.go:53: | 07:30:44 | targetallocator-prometheuscr | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-targetallocator-prometheuscr/prometheus-cr
l.go:53: | 07:30:44 | targetallocator-prometheuscr | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-targetallocator-prometheuscr/prometheus-cr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-targetallocator-prometheuscr/prometheus-cr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ collector-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ collector-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ collector-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | RUN | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/collector
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | OK | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/collector
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | DONE | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/collector
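[Editor's note] The `probes`/`scrapeconfigs` "forbidden" errors in the target allocator log above indicate the `ta` ServiceAccount's ClusterRole does not grant access to those two Prometheus Operator CRDs. A minimal sketch of the missing rules, assuming the standard `monitoring.coreos.com` resource names (the ClusterRole name here is illustrative, not from the test suite):

```yaml
# Hypothetical sketch: extra RBAC the 'ta' ServiceAccount would need so the
# target allocator can list/watch Probe and ScrapeConfig custom resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: targetallocator-prometheuscr-extra  # illustrative name
rules:
  - apiGroups: ["monitoring.coreos.com"]
    resources: ["probes", "scrapeconfigs"]
    verbs: ["get", "list", "watch"]
```

Bound to the `ta` ServiceAccount via a ClusterRoleBinding, this would silence the reflector errors; the test itself still passed, since these watches are optional.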
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | RUN | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/ta
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | OK | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/ta
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | DELETE | DONE | v1/ServiceAccount @ chainsaw-targetallocator-prometheuscr/ta
l.go:53: | 07:30:45 | targetallocator-prometheuscr | step-00  | CLEANUP | DONE |
l.go:53: | 07:30:45 | targetallocator-prometheuscr | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-targetallocator-prometheuscr
l.go:53: | 07:30:45 | targetallocator-prometheuscr | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-targetallocator-prometheuscr
l.go:53: | 07:30:52 | targetallocator-prometheuscr | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-targetallocator-prometheuscr
=== CONT  chainsaw/create-pm-prometheus-exporters
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | @setup  | CREATE | OK | v1/Namespace @ chainsaw-vast-weevil
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-00  | TRY | RUN |
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-00  | APPLY | RUN | v1/Namespace @ create-pm-prometheus
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-00  | CREATE | OK | v1/Namespace @ create-pm-prometheus
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-00  | APPLY | DONE | v1/Namespace @ create-pm-prometheus
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-pm-prometheus/simplest
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-pm-prometheus/simplest
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-pm-prometheus/simplest
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-00  | TRY | DONE |
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-01  | TRY | RUN |
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-01  | APPLY | RUN | apps/v1/Deployment @ create-pm-prometheus/app-with-sidecar
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-01  | CREATE | OK | apps/v1/Deployment @ create-pm-prometheus/app-with-sidecar
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-01  | APPLY | DONE | apps/v1/Deployment @ create-pm-prometheus/app-with-sidecar
l.go:53: | 07:30:52 | create-pm-prometheus-exporters | step-01  | ASSERT | RUN | v1/Pod @ create-pm-prometheus/*
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-01  | ASSERT | DONE | v1/Pod @ create-pm-prometheus/*
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-01  | ASSERT | RUN | monitoring.coreos.com/v1/PodMonitor @ create-pm-prometheus/simplest-collector
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-01  | ASSERT | DONE | monitoring.coreos.com/v1/PodMonitor @ create-pm-prometheus/simplest-collector
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-01  | TRY | DONE |
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-01  | CLEANUP | RUN |
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-01  | DELETE | RUN | apps/v1/Deployment @ create-pm-prometheus/app-with-sidecar
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-01  | DELETE | OK | apps/v1/Deployment @ create-pm-prometheus/app-with-sidecar
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-01  | DELETE | DONE | apps/v1/Deployment @ create-pm-prometheus/app-with-sidecar
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-01  | CLEANUP | DONE |
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-00  | CLEANUP | RUN |
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-pm-prometheus/simplest
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-pm-prometheus/simplest
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-pm-prometheus/simplest
l.go:53: | 07:30:54 | create-pm-prometheus-exporters | step-00  | DELETE | RUN | v1/Namespace @ create-pm-prometheus
l.go:53: | 07:30:55 | create-pm-prometheus-exporters | step-00  | DELETE | OK | v1/Namespace @ create-pm-prometheus
=== NAME  chainsaw/targetallocator-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | ASSERT | ERROR | v1/ConfigMap @ chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd-collector-699cdaa1
=== ERROR
actual resource not found
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | TRY | DONE |
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | CATCH | RUN |
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl logs --prefix -l app.kubernetes.io/managed-by=opentelemetry-operator -n chainsaw-targetallocator-kubernetessd --all-containers
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | CMD | LOG |
=== STDOUT
[pod/prometheus-kubernetessd-collector-4rs5p/otc-container] 2025-02-03T07:25:04.930Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/prometheus-kubernetessd-collector-4rs5p/otc-container] 2025-02-03T07:25:04.930Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/prometheus-kubernetessd-collector-4rs5p/otc-container] 2025-02-03T07:25:04.931Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/prometheus-kubernetessd-collector-4rs5p/otc-container] 2025-02-03T07:25:04.931Z info extensions/extensions.go:39 Starting extensions...
[pod/prometheus-kubernetessd-collector-4rs5p/otc-container] 2025-02-03T07:25:04.931Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "exporter", "data_type": "metrics", "name": "prometheus", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/prometheus-kubernetessd-collector-4rs5p/otc-container] 2025-02-03T07:25:04.931Z info prometheusreceiver@v0.113.0/metrics_receiver.go:118 Starting discovery manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
[pod/prometheus-kubernetessd-collector-4rs5p/otc-container] 2025-02-03T07:25:04.932Z info targetallocator/manager.go:67 Starting target allocator discovery {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
[pod/prometheus-kubernetessd-collector-4rs5p/otc-container] 2025-02-03T07:25:04.937Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "kubelet"}
[pod/prometheus-kubernetessd-collector-4rs5p/otc-container] 2025-02-03T07:25:04.937Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
[pod/prometheus-kubernetessd-collector-4rs5p/otc-container] 2025-02-03T07:25:04.937Z info prometheusreceiver@v0.113.0/metrics_receiver.go:187 Starting scrape manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
[pod/prometheus-kubernetessd-collector-58rcn/otc-container] 2025-02-03T07:25:05.001Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/prometheus-kubernetessd-collector-58rcn/otc-container] 2025-02-03T07:25:05.001Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/prometheus-kubernetessd-collector-58rcn/otc-container] 2025-02-03T07:25:05.002Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/prometheus-kubernetessd-collector-58rcn/otc-container] 2025-02-03T07:25:05.002Z info extensions/extensions.go:39 Starting extensions...
[pod/prometheus-kubernetessd-collector-58rcn/otc-container] 2025-02-03T07:25:05.002Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "exporter", "data_type": "metrics", "name": "prometheus", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/prometheus-kubernetessd-collector-58rcn/otc-container] 2025-02-03T07:25:05.003Z info prometheusreceiver@v0.113.0/metrics_receiver.go:118 Starting discovery manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
[pod/prometheus-kubernetessd-collector-58rcn/otc-container] 2025-02-03T07:25:05.003Z info targetallocator/manager.go:67 Starting target allocator discovery {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
[pod/prometheus-kubernetessd-collector-58rcn/otc-container] 2025-02-03T07:25:05.007Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "kubelet"}
[pod/prometheus-kubernetessd-collector-58rcn/otc-container] 2025-02-03T07:25:05.007Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
[pod/prometheus-kubernetessd-collector-58rcn/otc-container] 2025-02-03T07:25:05.007Z info prometheusreceiver@v0.113.0/metrics_receiver.go:187 Starting scrape manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
[pod/prometheus-kubernetessd-collector-r96qn/otc-container] 2025-02-03T07:25:07.702Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/prometheus-kubernetessd-collector-r96qn/otc-container] 2025-02-03T07:25:07.702Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/prometheus-kubernetessd-collector-r96qn/otc-container] 2025-02-03T07:25:07.715Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/prometheus-kubernetessd-collector-r96qn/otc-container] 2025-02-03T07:25:07.715Z info extensions/extensions.go:39 Starting extensions...
[pod/prometheus-kubernetessd-collector-r96qn/otc-container] 2025-02-03T07:25:07.716Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "exporter", "data_type": "metrics", "name": "prometheus", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/prometheus-kubernetessd-collector-r96qn/otc-container] 2025-02-03T07:25:07.716Z info prometheusreceiver@v0.113.0/metrics_receiver.go:118 Starting discovery manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
[pod/prometheus-kubernetessd-collector-r96qn/otc-container] 2025-02-03T07:25:07.717Z info targetallocator/manager.go:67 Starting target allocator discovery {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
[pod/prometheus-kubernetessd-collector-r96qn/otc-container] 2025-02-03T07:25:07.725Z info targetallocator/manager.go:179 Scrape job added {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "kubelet"}
[pod/prometheus-kubernetessd-collector-r96qn/otc-container] 2025-02-03T07:25:07.725Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
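[Editor's note] Every collector pod above logs the same deprecation warning for `service::telemetry::metrics::address`. A hedged sketch of the replacement `readers` form, assuming the declarative-config schema used around collector v0.113.0 (field names may differ in other versions):

```yaml
# Sketch (assumption: v0.113.0 telemetry schema): replace the deprecated
# service::telemetry::metrics::address with a pull-based Prometheus reader
# serving the same endpoint the logs show (0.0.0.0:8888).
service:
  telemetry:
    metrics:
      readers:
        - pull:
            exporter:
              prometheus:
                host: 0.0.0.0
                port: 8888
```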
[pod/prometheus-kubernetessd-collector-r96qn/otc-container] 2025-02-03T07:25:07.725Z info prometheusreceiver@v0.113.0/metrics_receiver.go:187 Starting scrape manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
[pod/prometheus-kubernetessd-targetallocator-bc4c6b4cb-zllxl/ta-container] {"level":"info","ts":"2025-02-03T07:25:04Z","msg":"Starting the Target Allocator"}
[pod/prometheus-kubernetessd-targetallocator-bc4c6b4cb-zllxl/ta-container] {"level":"info","ts":"2025-02-03T07:25:04Z","logger":"allocator","msg":"Starting server..."}
[pod/prometheus-kubernetessd-targetallocator-bc4c6b4cb-zllxl/ta-container] {"level":"info","ts":"2025-02-03T07:25:09Z","logger":"allocator","msg":"Could not assign targets for some jobs","allocator":"per-node","targets":3,"error":"could not find collector for node ip-10-0-30-227.us-east-2.compute.internal\ncould not find collector for node ip-10-0-74-167.us-east-2.compute.internal\ncould not find collector for node ip-10-0-100-4.us-east-2.compute.internal"}
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | CMD | DONE |
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | CATCH | DONE |
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | CLEANUP | RUN |
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-targetallocator-kubernetessd/prometheus-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ collector-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ ta-chainsaw-targetallocator-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ collector-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ collector-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ collector-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ targetallocator-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | RUN | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/collector
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | OK | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/collector
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | DONE | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/collector
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | RUN | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/ta
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | OK | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/ta
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | DELETE | DONE | v1/ServiceAccount @ chainsaw-targetallocator-kubernetessd/ta
l.go:53: | 07:31:05 | targetallocator-kubernetessd | step-00  | CLEANUP | DONE |
l.go:53: | 07:31:05 | targetallocator-kubernetessd | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-targetallocator-kubernetessd
l.go:53: | 07:31:05 | targetallocator-kubernetessd | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-targetallocator-kubernetessd
l.go:53: | 07:31:12 | targetallocator-kubernetessd | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-targetallocator-kubernetessd
=== CONT  chainsaw/target-allocator
l.go:53: | 07:31:12 | target-allocator | @setup  | CREATE | OK | v1/Namespace @ chainsaw-quick-quail
l.go:53: | 07:31:12 | target-allocator | step-00  | TRY | RUN |
l.go:53: | 07:31:12 | target-allocator | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-quick-quail/pdb
l.go:53: | 07:31:12 | target-allocator | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-quick-quail/pdb
l.go:53: | 07:31:12 | target-allocator | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-quick-quail/pdb
l.go:53: | 07:31:12 | target-allocator | step-00  | ASSERT | RUN | policy/v1/PodDisruptionBudget @ chainsaw-quick-quail/pdb-targetallocator
l.go:53: | 07:31:13 | target-allocator | step-00  | ASSERT | DONE | policy/v1/PodDisruptionBudget @ chainsaw-quick-quail/pdb-targetallocator
l.go:53: | 07:31:13 | target-allocator | step-00  | TRY | DONE |
l.go:53: | 07:31:13 | target-allocator | step-00  | CLEANUP | RUN |
l.go:53: | 07:31:13 | target-allocator | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-quick-quail/pdb
=== NAME  chainsaw/instrumentation-python
l.go:53: | 07:31:14 | instrumentation-python | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-settled-foal
=== CONT  chainsaw/pdb
l.go:53: | 07:31:14 | pdb | @setup  | CREATE | OK | v1/Namespace @ chainsaw-diverse-walleye
l.go:53: | 07:31:14 | pdb | step-00  | TRY | RUN |
l.go:53: | 07:31:14 | pdb | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-diverse-walleye/pdb
l.go:53: | 07:31:14 | pdb | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-diverse-walleye/pdb
l.go:53: | 07:31:14 | pdb | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-diverse-walleye/pdb
l.go:53: | 07:31:14 | pdb | step-00  | ASSERT | RUN | policy/v1/PodDisruptionBudget @ chainsaw-diverse-walleye/pdb-collector
l.go:53: | 07:31:14 | pdb | step-00  | ASSERT | DONE | policy/v1/PodDisruptionBudget @ chainsaw-diverse-walleye/pdb-collector
l.go:53: | 07:31:14 | pdb | step-00  | TRY | DONE |
l.go:53: | 07:31:14 | pdb | step-01  | TRY | RUN |
l.go:53: | 07:31:14 | pdb | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-diverse-walleye/pdb
l.go:53: | 07:31:14 | pdb | step-01  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-diverse-walleye/pdb
l.go:53: | 07:31:14 | pdb | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-diverse-walleye/pdb
l.go:53: | 07:31:14 | pdb | step-01  | ASSERT | RUN | policy/v1/PodDisruptionBudget @ chainsaw-diverse-walleye/pdb-collector
=== NAME  chainsaw/target-allocator
l.go:53: | 07:31:14 | target-allocator | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-quick-quail/pdb
=== NAME  chainsaw/pdb
l.go:53: | 07:31:15 | pdb | step-01  | ASSERT | DONE | policy/v1/PodDisruptionBudget @ chainsaw-diverse-walleye/pdb-collector
l.go:53: | 07:31:15 | pdb | step-01  | TRY | DONE |
l.go:53: | 07:31:15 | pdb | step-00  | CLEANUP | RUN |
l.go:53: | 07:31:15 | pdb | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-diverse-walleye/pdb
l.go:53: | 07:31:15 | pdb | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-diverse-walleye/pdb
=== NAME  chainsaw/target-allocator
l.go:53: | 07:31:16 | target-allocator | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-quick-quail/pdb
l.go:53: | 07:31:16 | target-allocator | step-00  | CLEANUP | DONE |
l.go:53: | 07:31:16 | target-allocator | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-quick-quail
=== NAME  chainsaw/pdb
l.go:53: | 07:31:16 | pdb | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-diverse-walleye/pdb
l.go:53: | 07:31:16 | pdb | step-00  | CLEANUP | DONE |
l.go:53: | 07:31:16 | pdb | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-diverse-walleye
l.go:53: | 07:31:16 | pdb | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-diverse-walleye
=== NAME  chainsaw/target-allocator
l.go:53: | 07:31:16 | target-allocator | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-quick-quail
l.go:53: | 07:31:22 | target-allocator | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-quick-quail
=== CONT  chainsaw/scrape-in-cluster-monitoring
l.go:53: | 07:31:22 | scrape-in-cluster-monitoring | @setup  | CREATE | OK | v1/Namespace @ chainsaw-scrape-in-cluster-monitoring
l.go:53: | 07:31:22 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | TRY | RUN |
l.go:53: | 07:31:22 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-scrape-in-cluster-monitoring-binding
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-scrape-in-cluster-monitoring-binding
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-scrape-in-cluster-monitoring-binding
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | APPLY | RUN | v1/ConfigMap @ chainsaw-scrape-in-cluster-monitoring/cabundle
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | CREATE | OK | v1/ConfigMap @ chainsaw-scrape-in-cluster-monitoring/cabundle
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | APPLY | DONE | v1/ConfigMap @ chainsaw-scrape-in-cluster-monitoring/cabundle
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | ASSERT | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-scrape-in-cluster-monitoring-binding
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | ASSERT | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-scrape-in-cluster-monitoring-binding
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | ASSERT | RUN | v1/ConfigMap @ chainsaw-scrape-in-cluster-monitoring/cabundle
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | ASSERT | DONE | v1/ConfigMap @ chainsaw-scrape-in-cluster-monitoring/cabundle
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-scrape-in-cluster-monitoring/otel
=== NAME  chainsaw/pdb
l.go:53: | 07:31:23 | pdb | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-diverse-walleye
=== CONT  chainsaw/route
l.go:53: | 07:31:23 | route | @setup  | CREATE | OK | v1/Namespace @ chainsaw-pure-stinkbug
l.go:53: | 07:31:23 | route | step-00  | TRY | RUN |
=== NAME  chainsaw/scrape-in-cluster-monitoring
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-scrape-in-cluster-monitoring/otel
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-scrape-in-cluster-monitoring/otel
l.go:53: | 07:31:23 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | ASSERT | RUN | apps/v1/Deployment @ chainsaw-scrape-in-cluster-monitoring/otel-collector
=== NAME  chainsaw/route
l.go:53: | 07:31:23 | route | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pure-stinkbug/simplest
l.go:53: | 07:31:23 | route | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pure-stinkbug/simplest
l.go:53: | 07:31:23 | route | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pure-stinkbug/simplest
l.go:53: | 07:31:23 | route | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-pure-stinkbug/simplest-collector
=== NAME  chainsaw/scrape-in-cluster-monitoring
l.go:53: | 07:31:24 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | ASSERT | DONE | apps/v1/Deployment @ chainsaw-scrape-in-cluster-monitoring/otel-collector
l.go:53: | 07:31:24 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | ASSERT | RUN | v1/Service @ chainsaw-scrape-in-cluster-monitoring/otel-collector-monitoring
l.go:53: | 07:31:24 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | ASSERT | DONE | v1/Service @ chainsaw-scrape-in-cluster-monitoring/otel-collector-monitoring
l.go:53: | 07:31:24 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | TRY | DONE |
l.go:53: | 07:31:24 | scrape-in-cluster-monitoring | Wait for the metrics to be collected  | TRY | RUN |
l.go:53: | 07:31:24 | scrape-in-cluster-monitoring | Wait for the metrics to be collected  | SLEEP | RUN |
=== NAME  chainsaw/route
l.go:53: | 07:31:25 | route | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-pure-stinkbug/simplest-collector
l.go:53: | 07:31:25 | route | step-00  | ASSERT | RUN | route.openshift.io/v1/Route @ chainsaw-pure-stinkbug/otlp-grpc-simplest-route
l.go:53: | 07:31:25 | route | step-00  | ASSERT | DONE | route.openshift.io/v1/Route @ chainsaw-pure-stinkbug/otlp-grpc-simplest-route
l.go:53: | 07:31:25 | route | step-00  | ASSERT | RUN | route.openshift.io/v1/Route @ chainsaw-pure-stinkbug/otlp-http-simplest-route
l.go:53: | 07:31:25 | route | step-00  | ASSERT | DONE | route.openshift.io/v1/Route @ chainsaw-pure-stinkbug/otlp-http-simplest-route
l.go:53: | 07:31:25 | route | step-00  | TRY | DONE |
l.go:53: | 07:31:25 | route | step-01  | TRY | RUN |
l.go:53: | 07:31:25 | route | step-01  | SCRIPT | RUN |
=== COMMAND
/usr/bin/sh -c
#!/bin/bash
set -ex
# Export empty payload and check of collector accepted it with 2xx status code
otlp_http_host=$(kubectl get route otlp-http-simplest-route -n $NAMESPACE -o jsonpath='{.spec.host}')
for i in {1..40}; do curl --fail -ivX POST http://${otlp_http_host}:80/v1/traces -H "Content-Type: application/json" -d '{}' && break || sleep 1; done
=== NAME  chainsaw/create-sm-prometheus-exporters
l.go:53: | 07:31:27 | create-sm-prometheus-exporters | step-08  | ASSERT | DONE | batch/v1/Job @ create-sm-prometheus/check-ta-metrics
l.go:53: | 07:31:27 | create-sm-prometheus-exporters | step-08  | TRY | DONE |
l.go:53: | 07:31:27 | create-sm-prometheus-exporters | step-08  | CLEANUP | RUN |
l.go:53: | 07:31:27 | create-sm-prometheus-exporters | step-08  | DELETE | RUN | batch/v1/Job @ create-sm-prometheus/check-ta-metrics
l.go:53: | 07:31:27 | create-sm-prometheus-exporters | step-08  | DELETE | OK | batch/v1/Job @ create-sm-prometheus/check-ta-metrics
=== NAME  chainsaw/route
l.go:53: | 07:31:27 | route | step-01  | SCRIPT | LOG |
=== STDOUT
HTTP/1.0 503 Service Unavailable
pragma: no-cache
cache-control: private, max-age=0, no-cache, no-store
content-type: text/html
=== STDERR
+ kubectl get route otlp-http-simplest-route -n chainsaw-pure-stinkbug -o jsonpath={.spec.host}
+ otlp_http_host=otlp-http-simplest-route-chainsaw-pure-stinkbug.apps.ci-op-c6wcx4mj-037b2.cspilp.interop.ccitredhat.com
+ curl --fail -ivX POST http://otlp-http-simplest-route-chainsaw-pure-stinkbug.apps.ci-op-c6wcx4mj-037b2.cspilp.interop.ccitredhat.com:80/v1/traces -H Content-Type: application/json -d {}
Note: Unnecessary use of -X or --request, POST is already inferred.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Trying 3.131.203.218:80...
* Connected to otlp-http-simplest-route-chainsaw-pure-stinkbug.apps.ci-op-c6wcx4mj-037b2.cspilp.interop.ccitredhat.com (3.131.203.218) port 80 (#0) > POST /v1/traces HTTP/1.1 > Host: otlp-http-simplest-route-chainsaw-pure-stinkbug.apps.ci-op-c6wcx4mj-037b2.cspilp.interop.ccitredhat.com > User-Agent: curl/7.88.1 > Accept: */* > Content-Type: application/json > Content-Length: 2 > } [2 bytes data] * HTTP 1.0, assume close after body < HTTP/1.0 503 Service Unavailable < pragma: no-cache < cache-control: private, max-age=0, no-cache, no-store < content-type: text/html * The requested URL returned error: 503 100 2 0 0 100 2 0 34 --:--:-- --:--:-- --:--:-- 35 * Closing connection 0 curl: (22) The requested URL returned error: 503 + sleep 1 l.go:53: | 07:31:27 | route | step-01  | SCRIPT | DONE | l.go:53: | 07:31:27 | route | step-01  | TRY | DONE | l.go:53: | 07:31:27 | route | step-00  | CLEANUP | RUN | l.go:53: | 07:31:27 | route | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pure-stinkbug/simplest === NAME chainsaw/create-sm-prometheus-exporters l.go:53: | 07:31:27 | create-sm-prometheus-exporters | step-08  | DELETE | DONE | batch/v1/Job @ create-sm-prometheus/check-ta-metrics l.go:53: | 07:31:27 | create-sm-prometheus-exporters | step-08  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest === NAME chainsaw/route l.go:53: | 07:31:27 | route | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pure-stinkbug/simplest l.go:53: | 07:31:27 | route | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pure-stinkbug/simplest l.go:53: | 07:31:27 | route | step-00  | CLEANUP | DONE | l.go:53: | 07:31:27 | route | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-pure-stinkbug l.go:53: | 07:31:27 | route | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-pure-stinkbug === NAME chainsaw/create-sm-prometheus-exporters 
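The route step above resolves the Route host with `kubectl get route -o jsonpath` and retries an empty OTLP/HTTP POST until the router answers with a 2xx; the initial `503 Service Unavailable` only means the router is not yet forwarding to collector endpoints. A minimal sketch of the same probe — `otlp_url` and `probe_route` are hypothetical helper names, not code from the test repository:

```shell
# Hypothetical helpers sketching the probe in the route test's step-01 script.
# otlp_url builds the OTLP/HTTP traces endpoint for a given Route host.
otlp_url() {
  printf 'http://%s:80/v1/traces' "$1"
}

# probe_route retries an empty POST until the collector accepts it. It needs
# cluster access, so it is only defined here, not invoked.
probe_route() {
  route=$1 ns=$2
  host=$(kubectl get route "$route" -n "$ns" -o jsonpath='{.spec.host}')
  for i in $(seq 1 40); do
    # --fail turns HTTP >= 400 (such as the 503 above) into a non-zero exit,
    # which triggers another retry after one second
    curl --fail -s -X POST "$(otlp_url "$host")" \
      -H 'Content-Type: application/json' -d '{}' && return 0
    sleep 1
  done
  return 1
}
```

With `--fail`, curl exits 22 on the 503 (exactly as in the STDERR trace above), so the loop keeps retrying until the route starts serving the collector.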
l.go:53: | 07:31:27 | create-sm-prometheus-exporters | step-08  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-08  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-08  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ simplest-targetallocator-create-sm-prometheus === NAME chainsaw/create-pm-prometheus-exporters l.go:53: | 07:31:28 | create-pm-prometheus-exporters | step-00  | DELETE | DONE | v1/Namespace @ create-pm-prometheus l.go:53: | 07:31:28 | create-pm-prometheus-exporters | step-00  | CLEANUP | DONE | l.go:53: | 07:31:28 | create-pm-prometheus-exporters | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-vast-weevil === NAME chainsaw/create-sm-prometheus-exporters l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-08  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ simplest-targetallocator-create-sm-prometheus === NAME chainsaw/create-pm-prometheus-exporters l.go:53: | 07:31:28 | create-pm-prometheus-exporters | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-vast-weevil === NAME chainsaw/create-sm-prometheus-exporters l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-08  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ simplest-targetallocator-create-sm-prometheus l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-08  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ create-sm-prometheus l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-08  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ create-sm-prometheus l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-08  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ create-sm-prometheus l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-08  | DELETE | RUN 
| v1/ServiceAccount @ create-sm-prometheus/ta l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-08  | DELETE | OK | v1/ServiceAccount @ create-sm-prometheus/ta l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-08  | DELETE | DONE | v1/ServiceAccount @ create-sm-prometheus/ta l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-08  | CLEANUP | DONE | l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-05  | CLEANUP | RUN | l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-05  | DELETE | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-05  | DELETE | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-collector l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-05  | DELETE | RUN | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-05  | DELETE | DONE | monitoring.coreos.com/v1/ServiceMonitor @ create-sm-prometheus/simplest-monitoring-collector l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-05  | CLEANUP | DONE | l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-01  | CLEANUP | RUN | l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-01  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-01  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ create-sm-prometheus/simplest l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-01  | CLEANUP | DONE | l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-00  | CLEANUP | RUN | l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-00  | DELETE | RUN | v1/Namespace @ create-sm-prometheus l.go:53: | 07:31:28 | create-sm-prometheus-exporters | step-00  | DELETE | OK | 
v1/Namespace @ create-sm-prometheus === NAME chainsaw/route l.go:53: | 07:31:33 | route | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-pure-stinkbug === CONT chainsaw/multi-cluster l.go:53: | 07:31:33 | multi-cluster | @setup  | CREATE | OK | v1/Namespace @ chainsaw-amusing-mako l.go:53: | 07:31:33 | multi-cluster | step-00  | TRY | RUN | l.go:53: | 07:31:33 | multi-cluster | step-00  | APPLY | RUN | v1/Namespace @ chainsaw-multi-cluster-send l.go:53: | 07:31:33 | multi-cluster | step-00  | CREATE | OK | v1/Namespace @ chainsaw-multi-cluster-send l.go:53: | 07:31:33 | multi-cluster | step-00  | APPLY | DONE | v1/Namespace @ chainsaw-multi-cluster-send l.go:53: | 07:31:33 | multi-cluster | step-00  | APPLY | RUN | v1/Namespace @ chainsaw-multi-cluster-receive l.go:53: | 07:31:33 | multi-cluster | step-00  | CREATE | OK | v1/Namespace @ chainsaw-multi-cluster-receive l.go:53: | 07:31:33 | multi-cluster | step-00  | APPLY | DONE | v1/Namespace @ chainsaw-multi-cluster-receive l.go:53: | 07:31:34 | multi-cluster | step-00  | ASSERT | RUN | project.openshift.io/v1/Project @ chainsaw-multi-cluster-receive l.go:53: | 07:31:34 | multi-cluster | step-00  | ASSERT | DONE | project.openshift.io/v1/Project @ chainsaw-multi-cluster-receive l.go:53: | 07:31:34 | multi-cluster | step-00  | TRY | DONE | l.go:53: | 07:31:34 | multi-cluster | step-01  | TRY | RUN | l.go:53: | 07:31:34 | multi-cluster | step-01  | APPLY | RUN | jaegertracing.io/v1/Jaeger @ chainsaw-multi-cluster-receive/jaeger-allinone l.go:53: | 07:31:34 | multi-cluster | step-01  | CREATE | OK | jaegertracing.io/v1/Jaeger @ chainsaw-multi-cluster-receive/jaeger-allinone l.go:53: | 07:31:34 | multi-cluster | step-01  | APPLY | DONE | jaegertracing.io/v1/Jaeger @ chainsaw-multi-cluster-receive/jaeger-allinone l.go:53: | 07:31:34 | multi-cluster | step-01  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-multi-cluster-receive/jaeger-allinone === NAME chainsaw/scrape-in-cluster-monitoring l.go:53: | 07:31:34 | 
scrape-in-cluster-monitoring | Wait for the metrics to be collected  | SLEEP | DONE | l.go:53: | 07:31:34 | scrape-in-cluster-monitoring | Wait for the metrics to be collected  | TRY | DONE | l.go:53: | 07:31:34 | scrape-in-cluster-monitoring | Check the presence of metrics in the OTEL collector  | TRY | RUN | l.go:53: | 07:31:34 | scrape-in-cluster-monitoring | Check the presence of metrics in the OTEL collector  | SCRIPT | RUN | === COMMAND /usr/bin/sh -c ./check_logs.sh === NAME chainsaw/create-pm-prometheus-exporters l.go:53: | 07:31:36 | create-pm-prometheus-exporters | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-vast-weevil === CONT chainsaw/kafka l.go:53: | 07:31:36 | kafka | @setup  | CREATE | OK | v1/Namespace @ chainsaw-kafka l.go:53: | 07:31:36 | kafka | step-00  | TRY | RUN | l.go:53: | 07:31:36 | kafka | step-00  | APPLY | RUN | v1/Namespace @ chainsaw-kafka l.go:53: | 07:31:36 | kafka | step-00  | PATCH | OK | v1/Namespace @ chainsaw-kafka l.go:53: | 07:31:36 | kafka | step-00  | APPLY | DONE | v1/Namespace @ chainsaw-kafka l.go:53: | 07:31:36 | kafka | step-00  | APPLY | RUN | kafka.strimzi.io/v1beta2/Kafka @ chainsaw-kafka/my-cluster l.go:53: | 07:31:36 | kafka | step-00  | CREATE | OK | kafka.strimzi.io/v1beta2/Kafka @ chainsaw-kafka/my-cluster l.go:53: | 07:31:36 | kafka | step-00  | APPLY | DONE | kafka.strimzi.io/v1beta2/Kafka @ chainsaw-kafka/my-cluster l.go:53: | 07:31:36 | kafka | step-00  | ASSERT | RUN | v1/Namespace @ chainsaw-kafka l.go:53: | 07:31:36 | kafka | step-00  | ASSERT | DONE | v1/Namespace @ chainsaw-kafka l.go:53: | 07:31:36 | kafka | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-kafka/my-cluster-entity-operator === NAME chainsaw/create-sm-prometheus-exporters l.go:53: | 07:31:37 | create-sm-prometheus-exporters | step-00  | DELETE | DONE | v1/Namespace @ create-sm-prometheus l.go:53: | 07:31:37 | create-sm-prometheus-exporters | step-00  | CLEANUP | DONE | l.go:53: | 07:31:37 | create-sm-prometheus-exporters 
| @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-sincere-aphid l.go:53: | 07:31:37 | create-sm-prometheus-exporters | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-sincere-aphid === NAME chainsaw/multi-cluster l.go:53: | 07:31:41 | multi-cluster | step-01  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-multi-cluster-receive/jaeger-allinone l.go:53: | 07:31:41 | multi-cluster | step-01  | ASSERT | RUN | v1/Service @ chainsaw-multi-cluster-receive/jaeger-allinone-collector l.go:53: | 07:31:41 | multi-cluster | step-01  | ASSERT | DONE | v1/Service @ chainsaw-multi-cluster-receive/jaeger-allinone-collector l.go:53: | 07:31:41 | multi-cluster | step-01  | ASSERT | RUN | v1/Service @ chainsaw-multi-cluster-receive/jaeger-allinone-query l.go:53: | 07:31:42 | multi-cluster | step-01  | ASSERT | DONE | v1/Service @ chainsaw-multi-cluster-receive/jaeger-allinone-query l.go:53: | 07:31:42 | multi-cluster | step-01  | ASSERT | RUN | route.openshift.io/v1/Route @ chainsaw-multi-cluster-receive/jaeger-allinone l.go:53: | 07:31:42 | multi-cluster | step-01  | ASSERT | DONE | route.openshift.io/v1/Route @ chainsaw-multi-cluster-receive/jaeger-allinone l.go:53: | 07:31:42 | multi-cluster | step-01  | TRY | DONE | l.go:53: | 07:31:42 | multi-cluster | step-02  | TRY | RUN | l.go:53: | 07:31:42 | multi-cluster | step-02  | SCRIPT | RUN | === COMMAND /usr/bin/sh -c ./generate_certs.sh === NAME chainsaw/create-sm-prometheus-exporters l.go:53: | 07:31:43 | create-sm-prometheus-exporters | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-sincere-aphid === CONT chainsaw/opampbridge l.go:53: | 07:31:43 | opampbridge | @setup  | CREATE | OK | v1/Namespace @ chainsaw-just-grouper l.go:53: | 07:31:43 | opampbridge | step-00  | TRY | RUN | l.go:53: | 07:31:43 | opampbridge | step-00  | APPLY | RUN | apps/v1/Deployment @ chainsaw-just-grouper/e2e-test-app-bridge-server l.go:53: | 07:31:43 | opampbridge | step-00  | CREATE | OK | apps/v1/Deployment @ 
chainsaw-just-grouper/e2e-test-app-bridge-server
l.go:53: | 07:31:43 | opampbridge | step-00  | APPLY | DONE | apps/v1/Deployment @ chainsaw-just-grouper/e2e-test-app-bridge-server
l.go:53: | 07:31:43 | opampbridge | step-00  | APPLY | RUN | v1/Service @ chainsaw-just-grouper/e2e-test-app-bridge-server
=== NAME chainsaw/multi-cluster
l.go:53: | 07:31:43 | multi-cluster | step-02  | SCRIPT | LOG |
=== STDOUT
Certificates generated successfully in /tmp/chainsaw-certs directory.
configmap/chainsaw-certs created
configmap/chainsaw-certs created
ConfigMaps created successfully.
=== STDERR
......+..+...+.+.....+.+...+.....+.........+......+....+...+..+.......+...........+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*..+..+.......+......+......+...+..+......+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*.......+.......+..+............+...+......+.+...+........+............+...+...................+......+.....+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
......+.....+.........+............+.........+....+...+...+.....+............+....+...........+...+.+...+......+.....+.........+......+......+.+..+.+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*...+........+.+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*.+............+........+.......+..+....+.................+...+.+.......................+.......+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Certificate request self-signature ok
subject=C = US, ST = California, L = San Francisco, O = My Organization, CN = opentelemetry
Error from server (NotFound): configmaps "chainsaw-certs" not found
Error from server (NotFound): configmaps "chainsaw-certs" not found
l.go:53: | 07:31:43 | multi-cluster | step-02  | SCRIPT | DONE |
l.go:53: | 07:31:43 | multi-cluster | step-02  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-multi-cluster-receive/otlp-receiver
=== NAME chainsaw/opampbridge
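The `generate_certs.sh` output above shows a self-signed certificate being created and published to both namespaces as a `chainsaw-certs` ConfigMap; the two `NotFound` errors on STDERR suggest a delete-before-create pattern hitting namespaces where nothing existed yet. A rough sketch of such a script — file names, key size, and the `--ignore-not-found` flag are assumptions, not the repository's actual script:

```shell
# Hypothetical sketch of a generate_certs.sh-style helper: self-sign a cert,
# then publish it to several namespaces as a ConfigMap named chainsaw-certs.
make_certs() {
  dir=$1
  mkdir -p "$dir"
  # One-shot self-signed certificate; the CN mirrors the subject in the log.
  openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj '/CN=opentelemetry' \
    -keyout "$dir/tls.key" -out "$dir/tls.crt" 2>/dev/null
}

publish_certs() {
  dir=$1; shift
  for ns in "$@"; do
    # --ignore-not-found would silence the harmless NotFound errors seen above
    kubectl delete configmap chainsaw-certs -n "$ns" --ignore-not-found
    kubectl create configmap chainsaw-certs -n "$ns" \
      --from-file="$dir/tls.crt" --from-file="$dir/tls.key"
  done
}
```

`publish_certs` is only defined here since it needs cluster access; `make_certs` runs anywhere openssl is installed.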
l.go:53: | 07:31:43 | opampbridge | step-00  | CREATE | OK | v1/Service @ chainsaw-just-grouper/e2e-test-app-bridge-server l.go:53: | 07:31:43 | opampbridge | step-00  | APPLY | DONE | v1/Service @ chainsaw-just-grouper/e2e-test-app-bridge-server l.go:53: | 07:31:43 | opampbridge | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-just-grouper/e2e-test-app-bridge-server === NAME chainsaw/multi-cluster l.go:53: | 07:31:43 | multi-cluster | step-02  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-multi-cluster-receive/otlp-receiver l.go:53: | 07:31:43 | multi-cluster | step-02  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-multi-cluster-receive/otlp-receiver l.go:53: | 07:31:43 | multi-cluster | step-02  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-multi-cluster-receive/otlp-receiver-collector === NAME chainsaw/scrape-in-cluster-monitoring l.go:53: | 07:31:43 | scrape-in-cluster-monitoring | Check the presence of metrics in the OTEL collector  | SCRIPT | LOG | === STDOUT "-> label_pod_security_kubernetes_io_enforce: Str(privileged)" found in otel-collector-7746f96cd7-p9ts5 "-> label_kubernetes_io_metadata_name:" found in otel-collector-7746f96cd7-p9ts5 "-> namespace:" found in otel-collector-7746f96cd7-p9ts5 "-> container" found in otel-collector-7746f96cd7-p9ts5 "-> label_pod_security_kubernetes_io_audit: Str(restricted)" found in otel-collector-7746f96cd7-p9ts5 Found the matched metrics in collector l.go:53: | 07:31:43 | scrape-in-cluster-monitoring | Check the presence of metrics in the OTEL collector  | SCRIPT | DONE | l.go:53: | 07:31:43 | scrape-in-cluster-monitoring | Check the presence of metrics in the OTEL collector  | TRY | DONE | l.go:53: | 07:31:43 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | CLEANUP | RUN | l.go:53: | 07:31:43 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape 
in-cluster metrics | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-scrape-in-cluster-monitoring/otel l.go:53: | 07:31:43 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-scrape-in-cluster-monitoring/otel l.go:53: | 07:31:44 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-scrape-in-cluster-monitoring/otel l.go:53: | 07:31:44 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | DELETE | RUN | v1/ConfigMap @ chainsaw-scrape-in-cluster-monitoring/cabundle l.go:53: | 07:31:44 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | DELETE | OK | v1/ConfigMap @ chainsaw-scrape-in-cluster-monitoring/cabundle l.go:53: | 07:31:44 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | DELETE | DONE | v1/ConfigMap @ chainsaw-scrape-in-cluster-monitoring/cabundle l.go:53: | 07:31:44 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-scrape-in-cluster-monitoring-binding l.go:53: | 07:31:44 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-scrape-in-cluster-monitoring-binding l.go:53: | 07:31:44 | scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-scrape-in-cluster-monitoring-binding l.go:53: | 07:31:44 | 
scrape-in-cluster-monitoring | Create OTEL collector with Prometheus receiver to scrape in-cluster metrics | CLEANUP | DONE | l.go:53: | 07:31:44 | scrape-in-cluster-monitoring | @cleanup  | DELETE | RUN | v1/Namespace @ chainsaw-scrape-in-cluster-monitoring l.go:53: | 07:31:44 | scrape-in-cluster-monitoring | @cleanup  | DELETE | OK | v1/Namespace @ chainsaw-scrape-in-cluster-monitoring === NAME chainsaw/multi-cluster l.go:53: | 07:31:44 | multi-cluster | step-02  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-multi-cluster-receive/otlp-receiver-collector l.go:53: | 07:31:44 | multi-cluster | step-02  | ASSERT | RUN | v1/Service @ chainsaw-multi-cluster-receive/otlp-receiver-collector l.go:53: | 07:31:44 | multi-cluster | step-02  | ASSERT | DONE | v1/Service @ chainsaw-multi-cluster-receive/otlp-receiver-collector l.go:53: | 07:31:44 | multi-cluster | step-02  | ASSERT | RUN | v1/Service @ chainsaw-multi-cluster-receive/otlp-receiver-collector-headless l.go:53: | 07:31:45 | multi-cluster | step-02  | ASSERT | DONE | v1/Service @ chainsaw-multi-cluster-receive/otlp-receiver-collector-headless l.go:53: | 07:31:45 | multi-cluster | step-02  | ASSERT | RUN | v1/Service @ chainsaw-multi-cluster-receive/otlp-receiver-collector-monitoring l.go:53: | 07:31:45 | multi-cluster | step-02  | ASSERT | DONE | v1/Service @ chainsaw-multi-cluster-receive/otlp-receiver-collector-monitoring l.go:53: | 07:31:45 | multi-cluster | step-02  | ASSERT | RUN | route.openshift.io/v1/Route @ chainsaw-multi-cluster-receive/otlp-grpc-otlp-receiver-route l.go:53: | 07:31:45 | multi-cluster | step-02  | ASSERT | DONE | route.openshift.io/v1/Route @ chainsaw-multi-cluster-receive/otlp-grpc-otlp-receiver-route l.go:53: | 07:31:45 | multi-cluster | step-02  | ASSERT | RUN | route.openshift.io/v1/Route @ chainsaw-multi-cluster-receive/otlp-http-otlp-receiver-route l.go:53: | 07:31:45 | multi-cluster | step-02  | ASSERT | DONE | route.openshift.io/v1/Route @ 
chainsaw-multi-cluster-receive/otlp-http-otlp-receiver-route l.go:53: | 07:31:45 | multi-cluster | step-02  | TRY | DONE | l.go:53: | 07:31:45 | multi-cluster | step-03  | TRY | RUN | l.go:53: | 07:31:45 | multi-cluster | step-03  | SCRIPT | RUN | === COMMAND /usr/bin/sh -c ./create_otlp_sender.sh l.go:53: | 07:31:45 | multi-cluster | step-03  | SCRIPT | LOG | === STDOUT opentelemetrycollector.opentelemetry.io/otel-sender created === STDERR Warning: OpenTelemetryCollector v1alpha1 is deprecated. Migrate to v1beta1. Warning: Collector config spec.config has null objects: processors.batch:. For compatibility with other tooling, such as kustomize and kubectl edit, it is recommended to use empty objects e.g. batch: {}. l.go:53: | 07:31:45 | multi-cluster | step-03  | SCRIPT | DONE | l.go:53: | 07:31:45 | multi-cluster | step-03  | APPLY | RUN | v1/ServiceAccount @ chainsaw-multi-cluster-send/chainsaw-multi-cluster l.go:53: | 07:31:45 | multi-cluster | step-03  | CREATE | OK | v1/ServiceAccount @ chainsaw-multi-cluster-send/chainsaw-multi-cluster l.go:53: | 07:31:45 | multi-cluster | step-03  | APPLY | DONE | v1/ServiceAccount @ chainsaw-multi-cluster-send/chainsaw-multi-cluster l.go:53: | 07:31:45 | multi-cluster | step-03  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-multi-cluster l.go:53: | 07:31:45 | multi-cluster | step-03  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-multi-cluster l.go:53: | 07:31:45 | multi-cluster | step-03  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-multi-cluster l.go:53: | 07:31:45 | multi-cluster | step-03  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-multi-cluster l.go:53: | 07:31:46 | multi-cluster | step-03  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-multi-cluster l.go:53: | 07:31:46 | multi-cluster | step-03  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-multi-cluster l.go:53: 
| 07:31:46 | multi-cluster | step-03  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-multi-cluster-send/otel-sender-collector === NAME chainsaw/opampbridge l.go:53: | 07:31:47 | opampbridge | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-just-grouper/e2e-test-app-bridge-server l.go:53: | 07:31:47 | opampbridge | step-00  | TRY | DONE | l.go:53: | 07:31:47 | opampbridge | step-01  | TRY | RUN | l.go:53: | 07:31:47 | opampbridge | step-01  | APPLY | RUN | v1/ServiceAccount @ chainsaw-just-grouper/opamp-bridge l.go:53: | 07:31:47 | opampbridge | step-01  | CREATE | OK | v1/ServiceAccount @ chainsaw-just-grouper/opamp-bridge l.go:53: | 07:31:47 | opampbridge | step-01  | APPLY | DONE | v1/ServiceAccount @ chainsaw-just-grouper/opamp-bridge l.go:53: | 07:31:47 | opampbridge | step-01  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ opamp-bridge l.go:53: | 07:31:47 | opampbridge | step-01  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ opamp-bridge l.go:53: | 07:31:47 | opampbridge | step-01  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ opamp-bridge l.go:53: | 07:31:47 | opampbridge | step-01  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ bridge-cluster-rolebinding l.go:53: | 07:31:47 | opampbridge | step-01  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ bridge-cluster-rolebinding l.go:53: | 07:31:47 | opampbridge | step-01  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ bridge-cluster-rolebinding l.go:53: | 07:31:47 | opampbridge | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpAMPBridge @ chainsaw-just-grouper/test l.go:53: | 07:31:47 | opampbridge | step-01  | CREATE | OK | opentelemetry.io/v1alpha1/OpAMPBridge @ chainsaw-just-grouper/test l.go:53: | 07:31:47 | opampbridge | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpAMPBridge @ chainsaw-just-grouper/test l.go:53: | 07:31:47 | opampbridge | step-01  | ASSERT | RUN | apps/v1/Deployment @ 
chainsaw-just-grouper/test-opamp-bridge === NAME chainsaw/multi-cluster l.go:53: | 07:31:49 | multi-cluster | step-03  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-multi-cluster-send/otel-sender-collector l.go:53: | 07:31:49 | multi-cluster | step-03  | ASSERT | RUN | v1/Service @ chainsaw-multi-cluster-send/otel-sender-collector l.go:53: | 07:31:50 | multi-cluster | step-03  | ASSERT | DONE | v1/Service @ chainsaw-multi-cluster-send/otel-sender-collector l.go:53: | 07:31:50 | multi-cluster | step-03  | ASSERT | RUN | v1/Service @ chainsaw-multi-cluster-send/otel-sender-collector-headless l.go:53: | 07:31:50 | multi-cluster | step-03  | ASSERT | DONE | v1/Service @ chainsaw-multi-cluster-send/otel-sender-collector-headless l.go:53: | 07:31:50 | multi-cluster | step-03  | ASSERT | RUN | v1/Service @ chainsaw-multi-cluster-send/otel-sender-collector-monitoring l.go:53: | 07:31:50 | multi-cluster | step-03  | ASSERT | DONE | v1/Service @ chainsaw-multi-cluster-send/otel-sender-collector-monitoring l.go:53: | 07:31:50 | multi-cluster | step-03  | TRY | DONE | l.go:53: | 07:31:50 | multi-cluster | step-04  | TRY | RUN | l.go:53: | 07:31:50 | multi-cluster | step-04  | APPLY | RUN | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-http l.go:53: | 07:31:50 | multi-cluster | step-04  | CREATE | OK | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-http l.go:53: | 07:31:50 | multi-cluster | step-04  | APPLY | DONE | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-http l.go:53: | 07:31:50 | multi-cluster | step-04  | APPLY | RUN | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-grpc l.go:53: | 07:31:50 | multi-cluster | step-04  | CREATE | OK | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-grpc l.go:53: | 07:31:50 | multi-cluster | step-04  | APPLY | DONE | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-grpc l.go:53: | 07:31:50 | multi-cluster | step-04  | ASSERT | RUN | batch/v1/Job @ 
chainsaw-multi-cluster-send/generate-traces-http === NAME chainsaw/scrape-in-cluster-monitoring l.go:53: | 07:31:50 | scrape-in-cluster-monitoring | @cleanup  | DELETE | DONE | v1/Namespace @ chainsaw-scrape-in-cluster-monitoring === CONT chainsaw/instrumentation-sdk l.go:53: | 07:31:50 | instrumentation-sdk | @setup  | CREATE | OK | v1/Namespace @ chainsaw-light-quail l.go:53: | 07:31:50 | instrumentation-sdk | step-00  | TRY | RUN | l.go:53: | 07:31:50 | instrumentation-sdk | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-light-quail openshift.io/sa.scc.uid-range=1000/1000 --overwrite l.go:53: | 07:31:50 | instrumentation-sdk | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-light-quail annotated l.go:53: | 07:31:50 | instrumentation-sdk | step-00  | CMD | DONE | l.go:53: | 07:31:50 | instrumentation-sdk | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-light-quail openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite === NAME chainsaw/opampbridge l.go:53: | 07:31:50 | opampbridge | step-01  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-just-grouper/test-opamp-bridge l.go:53: | 07:31:50 | opampbridge | step-01  | ASSERT | RUN | v1/ConfigMap @ chainsaw-just-grouper/test-opamp-bridge l.go:53: | 07:31:51 | opampbridge | step-01  | ASSERT | DONE | v1/ConfigMap @ chainsaw-just-grouper/test-opamp-bridge l.go:53: | 07:31:51 | opampbridge | step-01  | ASSERT | RUN | v1/Service @ chainsaw-just-grouper/test-opamp-bridge === NAME chainsaw/instrumentation-sdk l.go:53: | 07:31:51 | instrumentation-sdk | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-light-quail annotated l.go:53: | 07:31:51 | instrumentation-sdk | step-00  | CMD | DONE | === NAME chainsaw/opampbridge l.go:53: | 07:31:51 | opampbridge | step-01  | ASSERT | DONE | v1/Service @ chainsaw-just-grouper/test-opamp-bridge l.go:53: | 07:31:51 | opampbridge | step-01  | TRY | DONE | l.go:53: | 07:31:51 | opampbridge | 
Check effective config is empty for a valid agent id | TRY | RUN |
l.go:53: | 07:31:51 | opampbridge | Check effective config is empty for a valid agent id | SCRIPT | RUN |
=== COMMAND /usr/bin/sh -c
#!/bin/bash
# set -ex
# bridge_server_host=$(kubectl get service e2e-test-app-bridge-server -n $NAMESPACE -o jsonpath='{.spec.clusterIP}')
# curl -H "Content-Type: application/json" http://${bridge_server_host}:4321/agents
# TODO: Uncomment the above when proxying is available in chainsaw
kubectl get --raw /api/v1/namespaces/$NAMESPACE/services/e2e-test-app-bridge-server:4321/proxy/agents
=== NAME chainsaw/instrumentation-sdk
l.go:53: | 07:31:51 | instrumentation-sdk | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-light-quail/sidecar
=== NAME chainsaw/opampbridge
l.go:53: | 07:31:51 | opampbridge | Check effective config is empty for a valid agent id | SCRIPT | LOG |
=== STDOUT
{"0194cab9-02c2-72aa-82b0-fd50cb46eabd":{"status":{"instance_uid":"AZTKuQLCcqqCsP1Qy0bqvQ==","sequence_num":2,"agent_description":{"identifying_attributes":[{"key":"service.name","value":{"Value":{"StringValue":"io.opentelemetry.operator-opamp-bridge"}}},{"key":"service.version","value":{"Value":{"StringValue":""}}}],"non_identifying_attributes":[{"key":"os.family","value":{"Value":{"StringValue":"linux"}}},{"key":"host.name","value":{"Value":{"StringValue":"test-opamp-bridge-848fc57d69-hj728"}}}]},"capabilities":8167,"health":{"healthy":true,"start_time_unix_nano":1738567910082186679,"status_time_unix_nano":1738567910102393745},"effective_config":{"config_map":{}},"remote_config_status":{"last_remote_config_hash":"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=","status":1},"package_statuses":{}},"started_at":"2025-02-03T07:31:50.082186679Z","effective_config":{}}}
l.go:53: | 07:31:51 | opampbridge | Check effective config is empty for a valid agent id | SCRIPT | DONE |
l.go:53: | 07:31:51 | opampbridge | Check effective config is empty for a valid agent id |
ASSERT | RUN | === NAME chainsaw/instrumentation-sdk l.go:53: | 07:31:51 | instrumentation-sdk | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-light-quail/sidecar l.go:53: | 07:31:51 | instrumentation-sdk | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-light-quail/sidecar l.go:53: | 07:31:51 | instrumentation-sdk | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-light-quail/sdk-only === NAME chainsaw/opampbridge l.go:53: | 07:31:51 | opampbridge | Check effective config is empty for a valid agent id | ASSERT | DONE | l.go:53: | 07:31:51 | opampbridge | Check effective config is empty for a valid agent id | TRY | DONE | l.go:53: | 07:31:51 | opampbridge | step-02  | TRY | RUN | l.go:53: | 07:31:51 | opampbridge | step-02  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-just-grouper/simplest === NAME chainsaw/instrumentation-sdk l.go:53: | 07:31:51 | instrumentation-sdk | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-light-quail/sdk-only l.go:53: | 07:31:51 | instrumentation-sdk | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-light-quail/sdk-only l.go:53: | 07:31:51 | instrumentation-sdk | step-00  | TRY | DONE | l.go:53: | 07:31:51 | instrumentation-sdk | step-01  | TRY | RUN | l.go:53: | 07:31:51 | instrumentation-sdk | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-light-quail/my-sdk === NAME chainsaw/opampbridge l.go:53: | 07:31:51 | opampbridge | step-02  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-just-grouper/simplest l.go:53: | 07:31:51 | opampbridge | step-02  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-just-grouper/simplest l.go:53: | 07:31:51 | opampbridge | step-02  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-just-grouper/simplest-collector === NAME chainsaw/instrumentation-sdk 
l.go:53: | 07:31:51 | instrumentation-sdk | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-light-quail/my-sdk
l.go:53: | 07:31:51 | instrumentation-sdk | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-light-quail/my-sdk
l.go:53: | 07:31:51 | instrumentation-sdk | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-light-quail/*
=== NAME chainsaw/opampbridge
l.go:53: | 07:31:52 | opampbridge | step-02  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-just-grouper/simplest-collector
l.go:53: | 07:31:52 | opampbridge | step-02  | ASSERT | RUN | v1/Service @ chainsaw-just-grouper/simplest-collector-headless
l.go:53: | 07:31:52 | opampbridge | step-02  | ASSERT | DONE | v1/Service @ chainsaw-just-grouper/simplest-collector-headless
l.go:53: | 07:31:52 | opampbridge | step-02  | ASSERT | RUN | v1/Service @ chainsaw-just-grouper/simplest-collector
l.go:53: | 07:31:53 | opampbridge | step-02  | ASSERT | DONE | v1/Service @ chainsaw-just-grouper/simplest-collector
l.go:53: | 07:31:53 | opampbridge | step-02  | TRY | DONE |
l.go:53: | 07:31:53 | opampbridge | step-5  | TRY | RUN |
l.go:53: | 07:31:53 | opampbridge | step-5  | SLEEP | RUN |
l.go:53: | 07:31:54 | opampbridge | step-5  | SLEEP | DONE |
l.go:53: | 07:31:54 | opampbridge | step-5  | TRY | DONE |
l.go:53: | 07:31:54 | opampbridge | step-6  | TRY | RUN |
l.go:53: | 07:31:54 | opampbridge | step-6  | SCRIPT | RUN |
=== COMMAND /usr/bin/sh -c #!/bin/bash kubectl delete pod -l app.kubernetes.io/name=test-opamp-bridge -n $NAMESPACE
=== NAME chainsaw/multi-cluster
l.go:53: | 07:31:54 | multi-cluster | step-04  | ASSERT | DONE | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-http
l.go:53: | 07:31:54 | multi-cluster | step-04  | ASSERT | RUN | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-grpc
l.go:53: | 07:31:54 | multi-cluster | step-04  | ASSERT | DONE | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-grpc
l.go:53: | 07:31:54 | multi-cluster | step-04  | TRY | DONE |
l.go:53: | 07:31:54 | multi-cluster | step-05  | TRY | RUN |
l.go:53: | 07:31:54 | multi-cluster | step-05  | SCRIPT | RUN |
=== COMMAND /usr/bin/sh -c ./check_traces.sh
l.go:53: | 07:31:54 | multi-cluster | step-05  | SCRIPT | LOG |
=== STDOUT Traces for telemetrygen-http exist in Jaeger. Traces for telemetrygen-grpc exist in Jaeger. Traces exist for all service names.
l.go:53: | 07:31:54 | multi-cluster | step-05  | SCRIPT | DONE |
l.go:53: | 07:31:54 | multi-cluster | step-05  | TRY | DONE |
l.go:53: | 07:31:54 | multi-cluster | step-04  | CLEANUP | RUN |
l.go:53: | 07:31:54 | multi-cluster | step-04  | DELETE | RUN | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-grpc
l.go:53: | 07:31:54 | multi-cluster | step-04  | DELETE | OK | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-grpc
l.go:53: | 07:31:54 | multi-cluster | step-04  | DELETE | DONE | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-grpc
l.go:53: | 07:31:54 | multi-cluster | step-04  | DELETE | RUN | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-http
l.go:53: | 07:31:54 | multi-cluster | step-04  | DELETE | OK | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-http
l.go:53: | 07:31:54 | multi-cluster | step-04  | DELETE | DONE | batch/v1/Job @ chainsaw-multi-cluster-send/generate-traces-http
l.go:53: | 07:31:54 | multi-cluster | step-04  | CLEANUP | DONE |
l.go:53: | 07:31:54 | multi-cluster | step-03  | CLEANUP | RUN |
l.go:53: | 07:31:54 | multi-cluster | step-03  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-multi-cluster
l.go:53: | 07:31:54 | multi-cluster | step-03  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-multi-cluster
l.go:53: | 07:31:54 | multi-cluster | step-03  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ chainsaw-multi-cluster
l.go:53: | 07:31:54 | multi-cluster | step-03  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-multi-cluster
l.go:53: | 07:31:54 | multi-cluster | step-03  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-multi-cluster
l.go:53: | 07:31:54 | multi-cluster | step-03  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ chainsaw-multi-cluster
l.go:53: | 07:31:54 | multi-cluster | step-03  | DELETE | RUN | v1/ServiceAccount @ chainsaw-multi-cluster-send/chainsaw-multi-cluster
=== NAME chainsaw/opampbridge
l.go:53: | 07:31:54 | opampbridge | step-6  | SCRIPT | LOG |
=== STDOUT pod "test-opamp-bridge-848fc57d69-hj728" deleted
l.go:53: | 07:31:54 | opampbridge | step-6  | SCRIPT | DONE |
l.go:53: | 07:31:54 | opampbridge | step-6  | TRY | DONE |
l.go:53: | 07:31:54 | opampbridge | step-7  | TRY | RUN |
l.go:53: | 07:31:54 | opampbridge | step-7  | SLEEP | RUN |
=== NAME chainsaw/multi-cluster
l.go:53: | 07:31:54 | multi-cluster | step-03  | DELETE | OK | v1/ServiceAccount @ chainsaw-multi-cluster-send/chainsaw-multi-cluster
l.go:53: | 07:31:54 | multi-cluster | step-03  | DELETE | DONE | v1/ServiceAccount @ chainsaw-multi-cluster-send/chainsaw-multi-cluster
l.go:53: | 07:31:54 | multi-cluster | step-03  | CLEANUP | DONE |
l.go:53: | 07:31:54 | multi-cluster | step-02  | CLEANUP | RUN |
l.go:53: | 07:31:54 | multi-cluster | step-02  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-multi-cluster-receive/otlp-receiver
l.go:53: | 07:31:55 | multi-cluster | step-02  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-multi-cluster-receive/otlp-receiver
l.go:53: | 07:31:55 | multi-cluster | step-02  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-multi-cluster-receive/otlp-receiver
l.go:53: | 07:31:55 | multi-cluster | step-02  | CLEANUP | DONE |
l.go:53: | 07:31:55 | multi-cluster | step-01  | CLEANUP | RUN |
l.go:53: | 07:31:55 | multi-cluster | step-01  | DELETE | RUN | jaegertracing.io/v1/Jaeger @ chainsaw-multi-cluster-receive/jaeger-allinone
l.go:53: | 07:31:55 | multi-cluster | step-01  | DELETE | OK | jaegertracing.io/v1/Jaeger @ chainsaw-multi-cluster-receive/jaeger-allinone
l.go:53: | 07:31:55 | multi-cluster | step-01  | DELETE | DONE | jaegertracing.io/v1/Jaeger @ chainsaw-multi-cluster-receive/jaeger-allinone
l.go:53: | 07:31:55 | multi-cluster | step-01  | CLEANUP | DONE |
l.go:53: | 07:31:55 | multi-cluster | step-00  | CLEANUP | RUN |
l.go:53: | 07:31:55 | multi-cluster | step-00  | DELETE | RUN | v1/Namespace @ chainsaw-multi-cluster-receive
l.go:53: | 07:31:55 | multi-cluster | step-00  | DELETE | OK | v1/Namespace @ chainsaw-multi-cluster-receive
=== NAME chainsaw/opampbridge
l.go:53: | 07:31:57 | opampbridge | step-7  | SLEEP | DONE |
l.go:53: | 07:31:57 | opampbridge | step-7  | TRY | DONE |
l.go:53: | 07:31:57 | opampbridge | step-8  | TRY | RUN |
l.go:53: | 07:31:57 | opampbridge | step-8  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-just-grouper/test-opamp-bridge
l.go:53: | 07:31:58 | opampbridge | step-8  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-just-grouper/test-opamp-bridge
l.go:53: | 07:31:58 | opampbridge | step-8  | TRY | DONE |
l.go:53: | 07:31:58 | opampbridge | Check effective config is not empty  | TRY | RUN |
l.go:53: | 07:31:58 | opampbridge | Check effective config is not empty  | SCRIPT | RUN |
=== COMMAND /usr/bin/sh -c #!/bin/bash # set -ex # bridge_server_host=$(kubectl get service e2e-test-app-bridge-server -n $NAMESPACE -o jsonpath='{.spec.clusterIP}') # curl -H "Content-Type: application/json" http://${bridge_server_host}:4321/agents # TODO: Uncomment the above when proxying is available in chainsaw kubectl get --raw /api/v1/namespaces/$NAMESPACE/services/e2e-test-app-bridge-server:4321/proxy/agents
l.go:53: | 07:31:58 | opampbridge | Check effective config is not empty  | SCRIPT | LOG |
=== STDOUT
{"0194cab9-1661-74a9-9281-f41bf23ab746":{"status":{"instance_uid":"AZTKuRZhdKmSgfQb8jq3Rg==","sequence_num":2,"agent_description":{"identifying_attributes":[{"key":"service.name","value":{"Value":{"StringValue":"io.opentelemetry.operator-opamp-bridge"}}},{"key":"service.version","value":{"Value":{"StringValue":""}}}],"non_identifying_attributes":[{"key":"os.family","value":{"Value":{"StringValue":"linux"}}},{"key":"host.name","value":{"Value":{"StringValue":"test-opamp-bridge-848fc57d69-lz5p5"}}}]},"capabilities":8167,"health":{"healthy":true,"start_time_unix_nano":1738567915105317475,"status_time_unix_nano":1738567915306615694,"component_health_map":{"chainsaw-just-grouper/simplest":{"healthy":true,"start_time_unix_nano":1738567911000000000,"status":"1/1","status_time_unix_nano":1738567915306615114,"component_health_map":{"chainsaw-just-grouper/simplest-collector-5d5b5bdd9b-hpjrx":{"healthy":true,"start_time_unix_nano":1738567911000000000,"status":"Running","status_time_unix_nano":1738567915306612834}}}}},"effective_config":{"config_map":{"config_map":{"chainsaw-just-grouper/simplest":{"body":"YXBpVmVyc2lvbjogb3BlbnRlbGVtZXRyeS5pby92MWJldGExCmtpbmQ6IE9wZW5UZWxlbWV0cnlDb2xsZWN0b3IKbWV0YWRhdGE6CiAgY3JlYXRpb25UaW1lc3RhbXA6ICIyMDI1LTAyLTAzVDA3OjMxOjUxWiIKICBmaW5hbGl6ZXJzOgogIC0gb3BlbnRlbGVtZXRyeWNvbGxlY3Rvci5vcGVudGVsZW1ldHJ5LmlvL2ZpbmFsaXplcgogIGdlbmVyYXRpb246IDEKICBsYWJlbHM6CiAgICBvcGVudGVsZW1ldHJ5LmlvL29wYW1wLXJlcG9ydGluZzogInRydWUiCiAgbmFtZTogc2ltcGxlc3QKICBuYW1lc3BhY2U6IGNoYWluc2F3LWp1c3QtZ3JvdXBlcgogIHJlc291cmNlVmVyc2lvbjogIjUwNDA1IgogIHVpZDogOGZiMzlkZGEtMjZiYy00ZjhlLTkxNDYtYjI4MTA0ZDYxN2ZjCnNwZWM6CiAgY29uZmlnOgogICAgZXhwb3J0ZXJzOgogICAgICBkZWJ1ZzogbnVsbAogICAgcmVjZWl2ZXJzOgogICAgICBqYWVnZXI6CiAgICAgICAgcHJvdG9jb2xzOgogICAgICAgICAgZ3JwYzoKICAgICAgICAgICAgZW5kcG9pbnQ6IDAuMC4wLjA6MTQyNTAKICAgICAgb3RscDoKICAgICAgICBwcm90b2NvbHM6CiAgICAgICAgICBncnBjOgogICAgICAgICAgICBlbmRwb2ludDogMC4wLjAuMDo0MzE3CiAgICAgICAgICBodHRwOgogICAgICAgICAgICBlbmRwb2ludDogMC4wLjAuMDo0MzE4CiAg
ICBzZXJ2aWNlOgogICAgICBwaXBlbGluZXM6CiAgICAgICAgdHJhY2VzOgogICAgICAgICAgZXhwb3J0ZXJzOgogICAgICAgICAgLSBkZWJ1ZwogICAgICAgICAgcmVjZWl2ZXJzOgogICAgICAgICAgLSBqYWVnZXIKICAgICAgICAgIC0gb3RscAogICAgICB0ZWxlbWV0cnk6CiAgICAgICAgbWV0cmljczoKICAgICAgICAgIGFkZHJlc3M6IDAuMC4wLjA6ODg4OAogIGNvbmZpZ1ZlcnNpb25zOiAzCiAgZGFlbW9uU2V0VXBkYXRlU3RyYXRlZ3k6IHt9CiAgZGVwbG95bWVudFVwZGF0ZVN0cmF0ZWd5OiB7fQogIGluZ3Jlc3M6CiAgICByb3V0ZToge30KICBpcEZhbWlseVBvbGljeTogU2luZ2xlU3RhY2sKICBtYW5hZ2VtZW50U3RhdGU6IG1hbmFnZWQKICBtb2RlOiBkZXBsb3ltZW50CiAgb2JzZXJ2YWJpbGl0eToKICAgIG1ldHJpY3M6IHt9CiAgcG9kRG5zQ29uZmlnOiB7fQogIHJlcGxpY2FzOiAxCiAgcmVzb3VyY2VzOiB7fQogIHRhcmdldEFsbG9jYXRvcjoKICAgIGFsbG9jYXRpb25TdHJhdGVneTogY29uc2lzdGVudC1oYXNoaW5nCiAgICBmaWx0ZXJTdHJhdGVneTogcmVsYWJlbC1jb25maWcKICAgIG9ic2VydmFiaWxpdHk6CiAgICAgIG1ldHJpY3M6IHt9CiAgICBwcm9tZXRoZXVzQ1I6CiAgICAgIHBvZE1vbml0b3JTZWxlY3Rvcjoge30KICAgICAgc2NyYXBlSW50ZXJ2YWw6IDMwcwogICAgICBzZXJ2aWNlTW9uaXRvclNlbGVjdG9yOiB7fQogICAgcmVzb3VyY2VzOiB7fQogIHVwZ3JhZGVTdHJhdGVneTogYXV0b21hdGljCnN0YXR1czoKICBpbWFnZTogcmVnaXN0cnkucmVkaGF0LmlvL3Job3NkdC9vcGVudGVsZW1ldHJ5LWNvbGxlY3Rvci1yaGVsOEBzaGEyNTY6YjA0ODMwYTBiZWM0NTQ4NThkMWE5MWI2MjllMjI2ZTA0ODBmNmMyOTkxZDFiYTU2NGJjMDRhNzBmMmU1ZWQ4NwogIHNjYWxlOgogICAgcmVwbGljYXM6IDEKICAgIHNlbGVjdG9yOiBhcHAua3ViZXJuZXRlcy5pby9jb21wb25lbnQ9b3BlbnRlbGVtZXRyeS1jb2xsZWN0b3IsYXBwLmt1YmVybmV0ZXMuaW8vaW5zdGFuY2U9Y2hhaW5zYXctanVzdC1ncm91cGVyLnNpbXBsZXN0LGFwcC5rdWJlcm5ldGVzLmlvL21hbmFnZWQtYnk9b3BlbnRlbGVtZXRyeS1vcGVyYXRvcixhcHAua3ViZXJuZXRlcy5pby9uYW1lPXNpbXBsZXN0LWNvbGxlY3RvcixhcHAua3ViZXJuZXRlcy5pby9wYXJ0LW9mPW9wZW50ZWxlbWV0cnksYXBwLmt1YmVybmV0ZXMuaW8vdmVyc2lvbj1sYXRlc3Qsb3BlbnRlbGVtZXRyeS5pby9vcGFtcC1yZXBvcnRpbmc9dHJ1ZQogICAgc3RhdHVzUmVwbGljYXM6IDEvMQogIHZlcnNpb246IDAuMTEzLjAK","content_type":"yaml"}}}},"remote_config_status":{"last_remote_config_hash":"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=","status":1},"package_statuses":{}},"started_at":"2025-02-03T07:31:55.105317475Z","effective_config":{"chainsaw-just-grouper/simplest":"apiVersi
on: opentelemetry.io/v1beta1\nkind: OpenTelemetryCollector\nmetadata:\n creationTimestamp: \"2025-02-03T07:31:51Z\"\n finalizers:\n - opentelemetrycollector.opentelemetry.io/finalizer\n generation: 1\n labels:\n opentelemetry.io/opamp-reporting: \"true\"\n name: simplest\n namespace: chainsaw-just-grouper\n resourceVersion: \"50405\"\n uid: 8fb39dda-26bc-4f8e-9146-b28104d617fc\nspec:\n config:\n exporters:\n debug: null\n receivers:\n jaeger:\n protocols:\n grpc:\n endpoint: 0.0.0.0:14250\n otlp:\n protocols:\n grpc:\n endpoint: 0.0.0.0:4317\n http:\n endpoint: 0.0.0.0:4318\n service:\n pipelines:\n traces:\n exporters:\n - debug\n receivers:\n - jaeger\n - otlp\n telemetry:\n metrics:\n address: 0.0.0.0:8888\n configVersions: 3\n daemonSetUpdateStrategy: {}\n deploymentUpdateStrategy: {}\n ingress:\n route: {}\n ipFamilyPolicy: SingleStack\n managementState: managed\n mode: deployment\n observability:\n metrics: {}\n podDnsConfig: {}\n replicas: 1\n resources: {}\n targetAllocator:\n allocationStrategy: consistent-hashing\n filterStrategy: relabel-config\n observability:\n metrics: {}\n prometheusCR:\n podMonitorSelector: {}\n scrapeInterval: 30s\n serviceMonitorSelector: {}\n resources: {}\n upgradeStrategy: automatic\nstatus:\n image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87\n scale:\n replicas: 1\n selector: app.kubernetes.io/component=opentelemetry-collector,app.kubernetes.io/instance=chainsaw-just-grouper.simplest,app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/name=simplest-collector,app.kubernetes.io/part-of=opentelemetry,app.kubernetes.io/version=latest,opentelemetry.io/opamp-reporting=true\n statusReplicas: 1/1\n version: 0.113.0\n"}}} l.go:53: | 07:31:58 | opampbridge | Check effective config is not empty  | SCRIPT | DONE | l.go:53: | 07:31:58 | opampbridge | Check effective config is not empty  | ASSERT | RUN | l.go:53: | 07:31:58 | opampbridge | 
Check effective config is not empty  | ASSERT | DONE | l.go:53: | 07:31:58 | opampbridge | Check effective config is not empty  | TRY | DONE | l.go:53: | 07:31:58 | opampbridge | Verify content is accurate  | TRY | RUN | l.go:53: | 07:31:58 | opampbridge | Verify content is accurate  | SCRIPT | RUN | === COMMAND /usr/bin/sh -c #!/bin/bash # set -ex # bridge_server_host=$(kubectl get service e2e-test-app-bridge-server -n $NAMESPACE -o jsonpath='{.spec.clusterIP}') # curl -H "Content-Type: application/json" http://${bridge_server_host}:4321/agents # TODO: Uncomment the above when proxying is available in chainsaw kubectl get --raw /api/v1/namespaces/$NAMESPACE/services/e2e-test-app-bridge-server:4321/proxy/agents l.go:53: | 07:31:58 | opampbridge | Verify content is accurate  | SCRIPT | LOG | === STDOUT {"0194cab9-1661-74a9-9281-f41bf23ab746":{"status":{"instance_uid":"AZTKuRZhdKmSgfQb8jq3Rg==","sequence_num":2,"agent_description":{"identifying_attributes":[{"key":"service.name","value":{"Value":{"StringValue":"io.opentelemetry.operator-opamp-bridge"}}},{"key":"service.version","value":{"Value":{"StringValue":""}}}],"non_identifying_attributes":[{"key":"os.family","value":{"Value":{"StringValue":"linux"}}},{"key":"host.name","value":{"Value":{"StringValue":"test-opamp-bridge-848fc57d69-lz5p5"}}}]},"capabilities":8167,"health":{"healthy":true,"start_time_unix_nano":1738567915105317475,"status_time_unix_nano":1738567915306615694,"component_health_map":{"chainsaw-just-grouper/simplest":{"healthy":true,"start_time_unix_nano":1738567911000000000,"status":"1/1","status_time_unix_nano":1738567915306615114,"component_health_map":{"chainsaw-just-grouper/simplest-collector-5d5b5bdd9b-hpjrx":{"healthy":true,"start_time_unix_nano":1738567911000000000,"status":"Running","status_time_unix_nano":1738567915306612834}}}}},"effective_config":{"config_map":{"config_map":{"chainsaw-just-grouper/simplest":{"body":"YXBpVmVyc2lvbjogb3BlbnRlbGVtZXRyeS5pby92MWJldGExCmtpbmQ6IE9wZW5UZWxlbWV0cnl
Db2xsZWN0b3IKbWV0YWRhdGE6CiAgY3JlYXRpb25UaW1lc3RhbXA6ICIyMDI1LTAyLTAzVDA3OjMxOjUxWiIKICBmaW5hbGl6ZXJzOgogIC0gb3BlbnRlbGVtZXRyeWNvbGxlY3Rvci5vcGVudGVsZW1ldHJ5LmlvL2ZpbmFsaXplcgogIGdlbmVyYXRpb246IDEKICBsYWJlbHM6CiAgICBvcGVudGVsZW1ldHJ5LmlvL29wYW1wLXJlcG9ydGluZzogInRydWUiCiAgbmFtZTogc2ltcGxlc3QKICBuYW1lc3BhY2U6IGNoYWluc2F3LWp1c3QtZ3JvdXBlcgogIHJlc291cmNlVmVyc2lvbjogIjUwNDA1IgogIHVpZDogOGZiMzlkZGEtMjZiYy00ZjhlLTkxNDYtYjI4MTA0ZDYxN2ZjCnNwZWM6CiAgY29uZmlnOgogICAgZXhwb3J0ZXJzOgogICAgICBkZWJ1ZzogbnVsbAogICAgcmVjZWl2ZXJzOgogICAgICBqYWVnZXI6CiAgICAgICAgcHJvdG9jb2xzOgogICAgICAgICAgZ3JwYzoKICAgICAgICAgICAgZW5kcG9pbnQ6IDAuMC4wLjA6MTQyNTAKICAgICAgb3RscDoKICAgICAgICBwcm90b2NvbHM6CiAgICAgICAgICBncnBjOgogICAgICAgICAgICBlbmRwb2ludDogMC4wLjAuMDo0MzE3CiAgICAgICAgICBodHRwOgogICAgICAgICAgICBlbmRwb2ludDogMC4wLjAuMDo0MzE4CiAgICBzZXJ2aWNlOgogICAgICBwaXBlbGluZXM6CiAgICAgICAgdHJhY2VzOgogICAgICAgICAgZXhwb3J0ZXJzOgogICAgICAgICAgLSBkZWJ1ZwogICAgICAgICAgcmVjZWl2ZXJzOgogICAgICAgICAgLSBqYWVnZXIKICAgICAgICAgIC0gb3RscAogICAgICB0ZWxlbWV0cnk6CiAgICAgICAgbWV0cmljczoKICAgICAgICAgIGFkZHJlc3M6IDAuMC4wLjA6ODg4OAogIGNvbmZpZ1ZlcnNpb25zOiAzCiAgZGFlbW9uU2V0VXBkYXRlU3RyYXRlZ3k6IHt9CiAgZGVwbG95bWVudFVwZGF0ZVN0cmF0ZWd5OiB7fQogIGluZ3Jlc3M6CiAgICByb3V0ZToge30KICBpcEZhbWlseVBvbGljeTogU2luZ2xlU3RhY2sKICBtYW5hZ2VtZW50U3RhdGU6IG1hbmFnZWQKICBtb2RlOiBkZXBsb3ltZW50CiAgb2JzZXJ2YWJpbGl0eToKICAgIG1ldHJpY3M6IHt9CiAgcG9kRG5zQ29uZmlnOiB7fQogIHJlcGxpY2FzOiAxCiAgcmVzb3VyY2VzOiB7fQogIHRhcmdldEFsbG9jYXRvcjoKICAgIGFsbG9jYXRpb25TdHJhdGVneTogY29uc2lzdGVudC1oYXNoaW5nCiAgICBmaWx0ZXJTdHJhdGVneTogcmVsYWJlbC1jb25maWcKICAgIG9ic2VydmFiaWxpdHk6CiAgICAgIG1ldHJpY3M6IHt9CiAgICBwcm9tZXRoZXVzQ1I6CiAgICAgIHBvZE1vbml0b3JTZWxlY3Rvcjoge30KICAgICAgc2NyYXBlSW50ZXJ2YWw6IDMwcwogICAgICBzZXJ2aWNlTW9uaXRvclNlbGVjdG9yOiB7fQogICAgcmVzb3VyY2VzOiB7fQogIHVwZ3JhZGVTdHJhdGVneTogYXV0b21hdGljCnN0YXR1czoKICBpbWFnZTogcmVnaXN0cnkucmVkaGF0LmlvL3Job3NkdC9vcGVudGVsZW1ldHJ5LWNvbGxlY3Rvci1yaGVsOEBzaGEyNTY6YjA0ODMwYTBiZWM0NTQ4NThkMWE5MWI2MjllMjI2ZTA0ODBmNmMyOTkxZDFiYTU2NGJjMDR
hNzBmMmU1ZWQ4NwogIHNjYWxlOgogICAgcmVwbGljYXM6IDEKICAgIHNlbGVjdG9yOiBhcHAua3ViZXJuZXRlcy5pby9jb21wb25lbnQ9b3BlbnRlbGVtZXRyeS1jb2xsZWN0b3IsYXBwLmt1YmVybmV0ZXMuaW8vaW5zdGFuY2U9Y2hhaW5zYXctanVzdC1ncm91cGVyLnNpbXBsZXN0LGFwcC5rdWJlcm5ldGVzLmlvL21hbmFnZWQtYnk9b3BlbnRlbGVtZXRyeS1vcGVyYXRvcixhcHAua3ViZXJuZXRlcy5pby9uYW1lPXNpbXBsZXN0LWNvbGxlY3RvcixhcHAua3ViZXJuZXRlcy5pby9wYXJ0LW9mPW9wZW50ZWxlbWV0cnksYXBwLmt1YmVybmV0ZXMuaW8vdmVyc2lvbj1sYXRlc3Qsb3BlbnRlbGVtZXRyeS5pby9vcGFtcC1yZXBvcnRpbmc9dHJ1ZQogICAgc3RhdHVzUmVwbGljYXM6IDEvMQogIHZlcnNpb246IDAuMTEzLjAK","content_type":"yaml"}}}},"remote_config_status":{"last_remote_config_hash":"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=","status":1},"package_statuses":{}},"started_at":"2025-02-03T07:31:55.105317475Z","effective_config":{"chainsaw-just-grouper/simplest":"apiVersion: opentelemetry.io/v1beta1\nkind: OpenTelemetryCollector\nmetadata:\n creationTimestamp: \"2025-02-03T07:31:51Z\"\n finalizers:\n - opentelemetrycollector.opentelemetry.io/finalizer\n generation: 1\n labels:\n opentelemetry.io/opamp-reporting: \"true\"\n name: simplest\n namespace: chainsaw-just-grouper\n resourceVersion: \"50405\"\n uid: 8fb39dda-26bc-4f8e-9146-b28104d617fc\nspec:\n config:\n exporters:\n debug: null\n receivers:\n jaeger:\n protocols:\n grpc:\n endpoint: 0.0.0.0:14250\n otlp:\n protocols:\n grpc:\n endpoint: 0.0.0.0:4317\n http:\n endpoint: 0.0.0.0:4318\n service:\n pipelines:\n traces:\n exporters:\n - debug\n receivers:\n - jaeger\n - otlp\n telemetry:\n metrics:\n address: 0.0.0.0:8888\n configVersions: 3\n daemonSetUpdateStrategy: {}\n deploymentUpdateStrategy: {}\n ingress:\n route: {}\n ipFamilyPolicy: SingleStack\n managementState: managed\n mode: deployment\n observability:\n metrics: {}\n podDnsConfig: {}\n replicas: 1\n resources: {}\n targetAllocator:\n allocationStrategy: consistent-hashing\n filterStrategy: relabel-config\n observability:\n metrics: {}\n prometheusCR:\n podMonitorSelector: {}\n scrapeInterval: 30s\n 
serviceMonitorSelector: {}\n resources: {}\n upgradeStrategy: automatic\nstatus:\n image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87\n scale:\n replicas: 1\n selector: app.kubernetes.io/component=opentelemetry-collector,app.kubernetes.io/instance=chainsaw-just-grouper.simplest,app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/name=simplest-collector,app.kubernetes.io/part-of=opentelemetry,app.kubernetes.io/version=latest,opentelemetry.io/opamp-reporting=true\n statusReplicas: 1/1\n version: 0.113.0\n"}}}
l.go:53: | 07:31:58 | opampbridge | Verify content is accurate  | SCRIPT | DONE |
l.go:53: | 07:31:58 | opampbridge | Verify content is accurate  | ASSERT | RUN |
l.go:53: | 07:31:58 | opampbridge | Verify content is accurate  | ASSERT | DONE |
l.go:53: | 07:31:58 | opampbridge | Verify content is accurate  | TRY | DONE |
l.go:53: | 07:31:58 | opampbridge | step-02  | CLEANUP | RUN |
l.go:53: | 07:31:58 | opampbridge | step-02  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-just-grouper/simplest
l.go:53: | 07:31:58 | opampbridge | step-02  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-just-grouper/simplest
l.go:53: | 07:31:58 | opampbridge | step-02  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-just-grouper/simplest
l.go:53: | 07:31:58 | opampbridge | step-02  | CLEANUP | DONE |
l.go:53: | 07:31:58 | opampbridge | step-01  | CLEANUP | RUN |
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | RUN | opentelemetry.io/v1alpha1/OpAMPBridge @ chainsaw-just-grouper/test
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | OK | opentelemetry.io/v1alpha1/OpAMPBridge @ chainsaw-just-grouper/test
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | DONE | opentelemetry.io/v1alpha1/OpAMPBridge @ chainsaw-just-grouper/test
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ bridge-cluster-rolebinding
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ bridge-cluster-rolebinding
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ bridge-cluster-rolebinding
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ opamp-bridge
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ opamp-bridge
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ opamp-bridge
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | RUN | v1/ServiceAccount @ chainsaw-just-grouper/opamp-bridge
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | OK | v1/ServiceAccount @ chainsaw-just-grouper/opamp-bridge
l.go:53: | 07:31:58 | opampbridge | step-01  | DELETE | DONE | v1/ServiceAccount @ chainsaw-just-grouper/opamp-bridge
l.go:53: | 07:31:58 | opampbridge | step-01  | CLEANUP | DONE |
l.go:53: | 07:31:58 | opampbridge | step-00  | CLEANUP | RUN |
l.go:53: | 07:31:58 | opampbridge | step-00  | DELETE | RUN | v1/Service @ chainsaw-just-grouper/e2e-test-app-bridge-server
l.go:53: | 07:31:58 | opampbridge | step-00  | DELETE | OK | v1/Service @ chainsaw-just-grouper/e2e-test-app-bridge-server
l.go:53: | 07:31:58 | opampbridge | step-00  | DELETE | DONE | v1/Service @ chainsaw-just-grouper/e2e-test-app-bridge-server
l.go:53: | 07:31:58 | opampbridge | step-00  | DELETE | RUN | apps/v1/Deployment @ chainsaw-just-grouper/e2e-test-app-bridge-server
l.go:53: | 07:31:58 | opampbridge | step-00  | DELETE | OK | apps/v1/Deployment @ chainsaw-just-grouper/e2e-test-app-bridge-server
l.go:53: | 07:31:58 | opampbridge | step-00  | DELETE | DONE | apps/v1/Deployment @ chainsaw-just-grouper/e2e-test-app-bridge-server
l.go:53: | 07:31:58 |
opampbridge | step-00  | CLEANUP | DONE |
l.go:53: | 07:31:58 | opampbridge | @cleanup  | DELETE | RUN | v1/Namespace @ chainsaw-just-grouper
l.go:53: | 07:31:58 | opampbridge | @cleanup  | DELETE | OK | v1/Namespace @ chainsaw-just-grouper
l.go:53: | 07:32:05 | opampbridge | @cleanup  | DELETE | DONE | v1/Namespace @ chainsaw-just-grouper
=== CONT chainsaw/instrumentation-python-multicontainer
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | @setup  | CREATE | OK | v1/Namespace @ chainsaw-improved-pipefish
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | TRY | RUN |
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | CMD | RUN |
=== COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-improved-pipefish openshift.io/sa.scc.uid-range=1000/1000 --overwrite
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | CMD | LOG |
=== STDOUT namespace/chainsaw-improved-pipefish annotated
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | CMD | DONE |
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | CMD | RUN |
=== COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-improved-pipefish openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | CMD | LOG |
=== STDOUT namespace/chainsaw-improved-pipefish annotated
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | CMD | DONE |
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | TRY | DONE |
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | TRY | RUN |
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-improved-pipefish/sidecar
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-improved-pipefish/sidecar
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-improved-pipefish/sidecar
l.go:53: | 07:32:05 | instrumentation-python-multicontainer | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-improved-pipefish/java
l.go:53: | 07:32:06 | instrumentation-python-multicontainer | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-improved-pipefish/java
l.go:53: | 07:32:06 | instrumentation-python-multicontainer | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-improved-pipefish/java
l.go:53: | 07:32:06 | instrumentation-python-multicontainer | step-00  | TRY | DONE |
l.go:53: | 07:32:06 | instrumentation-python-multicontainer | step-01  | TRY | RUN |
l.go:53: | 07:32:06 | instrumentation-python-multicontainer | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-improved-pipefish/my-python-multi
l.go:53: | 07:32:06 | instrumentation-python-multicontainer | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-improved-pipefish/my-python-multi
l.go:53: | 07:32:06 | instrumentation-python-multicontainer | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-improved-pipefish/my-python-multi
l.go:53: | 07:32:06 | instrumentation-python-multicontainer | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-improved-pipefish/*
=== NAME chainsaw/multi-cluster
l.go:53: | 07:32:07 | multi-cluster | step-00  | DELETE | DONE | v1/Namespace @ chainsaw-multi-cluster-receive
l.go:53: | 07:32:07 | multi-cluster | step-00  | DELETE | RUN | v1/Namespace @ chainsaw-multi-cluster-send
l.go:53: | 07:32:07 | multi-cluster | step-00  | DELETE | OK | v1/Namespace @ chainsaw-multi-cluster-send
l.go:53: | 07:32:20 | multi-cluster | step-00  | DELETE | DONE | v1/Namespace @ chainsaw-multi-cluster-send
l.go:53: | 07:32:20 | multi-cluster | step-00  | CLEANUP | DONE |
l.go:53: | 07:32:20 | multi-cluster | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-amusing-mako
l.go:53: | 07:32:20 | multi-cluster | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-amusing-mako
l.go:53: | 07:32:26 | multi-cluster | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-amusing-mako
=== CONT chainsaw/instrumentation-java-multicontainer
l.go:53: | 07:32:26 | instrumentation-java-multicontainer | @setup  | CREATE | OK | v1/Namespace @ chainsaw-complete-glider
l.go:53: | 07:32:26 | instrumentation-java-multicontainer | step-00  | TRY | RUN |
l.go:53: | 07:32:26 | instrumentation-java-multicontainer | step-00  | CMD | RUN |
=== COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-complete-glider openshift.io/sa.scc.uid-range=1000/1000 --overwrite
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-00  | CMD | LOG |
=== STDOUT namespace/chainsaw-complete-glider annotated
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-00  | CMD | DONE |
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-00  | CMD | RUN |
=== COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-complete-glider openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-00  | CMD | LOG |
=== STDOUT namespace/chainsaw-complete-glider annotated
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-00  | CMD | DONE |
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-00  | TRY | DONE |
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-01  | TRY | RUN |
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-complete-glider/sidecar
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-01  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-complete-glider/sidecar
l.go:53: | 07:32:27 | instrumentation-java-multicontainer |
step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-complete-glider/sidecar
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-complete-glider/java
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-01  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-complete-glider/java
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-complete-glider/java
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-01  | TRY | DONE |
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-02  | TRY | RUN |
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-02  | APPLY | RUN | apps/v1/Deployment @ chainsaw-complete-glider/my-java-multi
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-02  | CREATE | OK | apps/v1/Deployment @ chainsaw-complete-glider/my-java-multi
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-02  | APPLY | DONE | apps/v1/Deployment @ chainsaw-complete-glider/my-java-multi
l.go:53: | 07:32:27 | instrumentation-java-multicontainer | step-02  | ASSERT | RUN | v1/Pod @ chainsaw-complete-glider/*
=== NAME chainsaw/kafka
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-kafka/my-cluster-entity-operator
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | RUN | v1/Pod @ chainsaw-kafka/my-cluster-kafka-0
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | DONE | v1/Pod @ chainsaw-kafka/my-cluster-kafka-0
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | RUN | v1/Pod @ chainsaw-kafka/my-cluster-zookeeper-0
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | DONE | v1/Pod @ chainsaw-kafka/my-cluster-zookeeper-0
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | RUN | v1/Service @ chainsaw-kafka/my-cluster-kafka-bootstrap
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | DONE | v1/Service @ chainsaw-kafka/my-cluster-kafka-bootstrap
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | RUN | v1/Service @ chainsaw-kafka/my-cluster-kafka-brokers
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | DONE | v1/Service @ chainsaw-kafka/my-cluster-kafka-brokers
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | RUN | v1/Service @ chainsaw-kafka/my-cluster-zookeeper-client
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | DONE | v1/Service @ chainsaw-kafka/my-cluster-zookeeper-client
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | RUN | v1/Service @ chainsaw-kafka/my-cluster-zookeeper-nodes
l.go:53: | 07:32:57 | kafka | step-00  | ASSERT | DONE | v1/Service @ chainsaw-kafka/my-cluster-zookeeper-nodes
l.go:53: | 07:32:57 | kafka | step-00  | TRY | DONE |
l.go:53: | 07:32:57 | kafka | step-01  | TRY | RUN |
l.go:53: | 07:32:57 | kafka | step-01  | APPLY | RUN | kafka.strimzi.io/v1beta1/KafkaTopic @ chainsaw-kafka/otlp-spans
l.go:53: | 07:32:57 | kafka | step-01  | CREATE | OK | kafka.strimzi.io/v1beta1/KafkaTopic @ chainsaw-kafka/otlp-spans
l.go:53: | 07:32:57 | kafka | step-01  | APPLY | DONE | kafka.strimzi.io/v1beta1/KafkaTopic @ chainsaw-kafka/otlp-spans
l.go:53: | 07:32:57 | kafka | step-01  | ASSERT | RUN | kafka.strimzi.io/v1beta2/KafkaTopic @ chainsaw-kafka/otlp-spans
l.go:53: | 07:32:58 | kafka | step-01  | ASSERT | DONE | kafka.strimzi.io/v1beta2/KafkaTopic @ chainsaw-kafka/otlp-spans
l.go:53: | 07:32:58 | kafka | step-01  | TRY | DONE |
l.go:53: | 07:32:58 | kafka | step-02  | TRY | RUN |
l.go:53: | 07:32:58 | kafka | step-02  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-receiver
l.go:53: | 07:32:58 | kafka | step-02  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-receiver
l.go:53: | 07:32:58 | kafka | step-02  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-receiver
l.go:53: | 07:32:58 | kafka | step-02  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-kafka/kafka-receiver-collector
l.go:53: | 07:33:00 | kafka | step-02  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-kafka/kafka-receiver-collector
l.go:53: | 07:33:00 | kafka | step-02  | ASSERT | RUN | v1/Service @ chainsaw-kafka/kafka-receiver-collector-monitoring
l.go:53: | 07:33:00 | kafka | step-02  | ASSERT | DONE | v1/Service @ chainsaw-kafka/kafka-receiver-collector-monitoring
l.go:53: | 07:33:00 | kafka | step-02  | TRY | DONE |
l.go:53: | 07:33:00 | kafka | step-03  | TRY | RUN |
l.go:53: | 07:33:00 | kafka | step-03  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-exporter
l.go:53: | 07:33:00 | kafka | step-03  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-exporter
l.go:53: | 07:33:00 | kafka | step-03  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-exporter
l.go:53: | 07:33:00 | kafka | step-03  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-kafka/kafka-exporter-collector
l.go:53: | 07:33:02 | kafka | step-03  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-kafka/kafka-exporter-collector
l.go:53: | 07:33:02 | kafka | step-03  | ASSERT | RUN | v1/Service @ chainsaw-kafka/kafka-exporter-collector
l.go:53: | 07:33:02 | kafka | step-03  | ASSERT | DONE | v1/Service @ chainsaw-kafka/kafka-exporter-collector
l.go:53: | 07:33:02 | kafka | step-03  | ASSERT | RUN | v1/Service @ chainsaw-kafka/kafka-exporter-collector-headless
l.go:53: | 07:33:02 | kafka | step-03  | ASSERT | DONE | v1/Service @ chainsaw-kafka/kafka-exporter-collector-headless
l.go:53: | 07:33:02 | kafka | step-03  | TRY | DONE |
l.go:53: | 07:33:02 | kafka | step-04  | TRY | RUN |
l.go:53: | 07:33:02 | kafka | step-04  | APPLY | RUN | batch/v1/Job @ chainsaw-kafka/telemetrygen-traces
l.go:53: | 07:33:02 | kafka | step-04  | CREATE | OK | batch/v1/Job @ chainsaw-kafka/telemetrygen-traces
l.go:53: | 07:33:02
| kafka | step-04  | APPLY | DONE | batch/v1/Job @ chainsaw-kafka/telemetrygen-traces l.go:53: | 07:33:02 | kafka | step-04  | ASSERT | RUN | batch/v1/Job @ chainsaw-kafka/telemetrygen-traces l.go:53: | 07:33:36 | kafka | step-04  | ASSERT | DONE | batch/v1/Job @ chainsaw-kafka/telemetrygen-traces l.go:53: | 07:33:36 | kafka | step-04  | TRY | DONE | l.go:53: | 07:33:36 | kafka | step-05  | TRY | RUN | l.go:53: | 07:33:36 | kafka | step-05  | SCRIPT | RUN | === COMMAND /usr/bin/sh -c ./check_traces.sh l.go:53: | 07:33:36 | kafka | step-05  | SCRIPT | LOG | === STDOUT "-> service.name: Str("kafka")" found in kafka-receiver-collector-979dbfcf8-q4876 "-> test: Str(chainsaw-kafka)" found in kafka-receiver-collector-979dbfcf8-q4876 Traces with service name Kafka and attribute test=chainsaw-kafka found. l.go:53: | 07:33:36 | kafka | step-05  | SCRIPT | DONE | l.go:53: | 07:33:36 | kafka | step-05  | TRY | DONE | l.go:53: | 07:33:36 | kafka | step-04  | CLEANUP | RUN | l.go:53: | 07:33:36 | kafka | step-04  | DELETE | RUN | batch/v1/Job @ chainsaw-kafka/telemetrygen-traces l.go:53: | 07:33:37 | kafka | step-04  | DELETE | OK | batch/v1/Job @ chainsaw-kafka/telemetrygen-traces l.go:53: | 07:33:37 | kafka | step-04  | DELETE | DONE | batch/v1/Job @ chainsaw-kafka/telemetrygen-traces l.go:53: | 07:33:37 | kafka | step-04  | CLEANUP | DONE | l.go:53: | 07:33:37 | kafka | step-03  | CLEANUP | RUN | l.go:53: | 07:33:37 | kafka | step-03  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-exporter l.go:53: | 07:33:37 | kafka | step-03  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-exporter l.go:53: | 07:33:37 | kafka | step-03  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-exporter l.go:53: | 07:33:37 | kafka | step-03  | CLEANUP | DONE | l.go:53: | 07:33:37 | kafka | step-02  | CLEANUP | RUN | l.go:53: | 07:33:37 | kafka | step-02  | DELETE | RUN | 
opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-receiver l.go:53: | 07:33:37 | kafka | step-02  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-receiver l.go:53: | 07:33:37 | kafka | step-02  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-kafka/kafka-receiver l.go:53: | 07:33:37 | kafka | step-02  | CLEANUP | DONE | l.go:53: | 07:33:37 | kafka | step-01  | CLEANUP | RUN | l.go:53: | 07:33:37 | kafka | step-01  | DELETE | RUN | kafka.strimzi.io/v1beta1/KafkaTopic @ chainsaw-kafka/otlp-spans l.go:53: | 07:33:37 | kafka | step-01  | DELETE | OK | kafka.strimzi.io/v1beta1/KafkaTopic @ chainsaw-kafka/otlp-spans l.go:53: | 07:33:37 | kafka | step-01  | DELETE | DONE | kafka.strimzi.io/v1beta1/KafkaTopic @ chainsaw-kafka/otlp-spans l.go:53: | 07:33:37 | kafka | step-01  | CLEANUP | DONE | l.go:53: | 07:33:37 | kafka | step-00  | CLEANUP | RUN | l.go:53: | 07:33:37 | kafka | step-00  | DELETE | RUN | kafka.strimzi.io/v1beta2/Kafka @ chainsaw-kafka/my-cluster l.go:53: | 07:33:37 | kafka | step-00  | DELETE | OK | kafka.strimzi.io/v1beta2/Kafka @ chainsaw-kafka/my-cluster l.go:53: | 07:33:37 | kafka | step-00  | DELETE | DONE | kafka.strimzi.io/v1beta2/Kafka @ chainsaw-kafka/my-cluster l.go:53: | 07:33:37 | kafka | step-00  | CLEANUP | DONE | l.go:53: | 07:33:37 | kafka | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-kafka l.go:53: | 07:33:37 | kafka | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-kafka l.go:53: | 07:34:06 | kafka | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-kafka === CONT chainsaw/instrumentation-nodejs-multicontainer l.go:53: | 07:34:06 | instrumentation-nodejs-multicontainer | @setup  | CREATE | OK | v1/Namespace @ chainsaw-huge-bird l.go:53: | 07:34:06 | instrumentation-nodejs-multicontainer | step-00  | TRY | RUN | l.go:53: | 07:34:06 | instrumentation-nodejs-multicontainer | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate 
namespace chainsaw-huge-bird openshift.io/sa.scc.uid-range=1000/1000 --overwrite
l.go:53: | 07:34:06 | instrumentation-nodejs-multicontainer | step-00  | CMD | LOG | === STDOUT
namespace/chainsaw-huge-bird annotated
l.go:53: | 07:34:06 | instrumentation-nodejs-multicontainer | step-00  | CMD | DONE |
l.go:53: | 07:34:06 | instrumentation-nodejs-multicontainer | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-huge-bird openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite
l.go:53: | 07:34:06 | instrumentation-nodejs-multicontainer | step-00  | CMD | LOG | === STDOUT
namespace/chainsaw-huge-bird annotated
l.go:53: | 07:34:06 | instrumentation-nodejs-multicontainer | step-00  | CMD | DONE |
l.go:53: | 07:34:06 | instrumentation-nodejs-multicontainer | step-00  | TRY | DONE |
l.go:53: | 07:34:06 | instrumentation-nodejs-multicontainer | step-00  | TRY | RUN |
l.go:53: | 07:34:06 | instrumentation-nodejs-multicontainer | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-huge-bird/sidecar
l.go:53: | 07:34:07 | instrumentation-nodejs-multicontainer | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-huge-bird/sidecar
l.go:53: | 07:34:07 | instrumentation-nodejs-multicontainer | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-huge-bird/sidecar
l.go:53: | 07:34:07 | instrumentation-nodejs-multicontainer | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-huge-bird/java
l.go:53: | 07:34:07 | instrumentation-nodejs-multicontainer | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-huge-bird/java
l.go:53: | 07:34:07 | instrumentation-nodejs-multicontainer | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-huge-bird/java
l.go:53: | 07:34:07 | instrumentation-nodejs-multicontainer | step-00  | TRY | DONE |
l.go:53: | 07:34:07 | instrumentation-nodejs-multicontainer | step-01  | TRY | RUN |
l.go:53: | 07:34:07 | instrumentation-nodejs-multicontainer | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-huge-bird/my-nodejs-multi
l.go:53: | 07:34:07 | instrumentation-nodejs-multicontainer | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-huge-bird/my-nodejs-multi
l.go:53: | 07:34:07 | instrumentation-nodejs-multicontainer | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-huge-bird/my-nodejs-multi
l.go:53: | 07:34:07 | instrumentation-nodejs-multicontainer | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-huge-bird/*
=== NAME chainsaw/instrumentation-sdk
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-light-quail/*
=== ERROR
---------------------------------------------------
v1/Pod/chainsaw-light-quail/my-sdk-75dccfc89d-687z4
---------------------------------------------------
* spec.containers[1].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match
--- expected
+++ actual
@@ -6,17 +6,27 @@
      sidecar.opentelemetry.io/inject: "true"
    labels:
      app: my-sdk
+   name: my-sdk-75dccfc89d-687z4
    namespace: chainsaw-light-quail
+   ownerReferences:
+   - apiVersion: apps/v1
+     blockOwnerDeletion: true
+     controller: true
+     kind: ReplicaSet
+     name: my-sdk-75dccfc89d
+     uid: cf80c1c4-3da3-41d9-9144-39386f17251d
  spec:
    containers:
    - env:
      - name: OTEL_NODE_IP
        valueFrom:
          fieldRef:
+           apiVersion: v1
            fieldPath: status.hostIP
      - name: OTEL_POD_IP
        valueFrom:
          fieldRef:
+           apiVersion: v1
            fieldPath: status.podIP
      - name: SPLUNK_TRACE_RESPONSE_HEADER_ENABLED
        value: "true"
@@ -41,21 +51,128 @@
      - name: OTEL_TRACES_SAMPLER_ARG
        value: "0.25"
      - name: OTEL_RESOURCE_ATTRIBUTES
+       value:
k8s.container.name=myapp,k8s.deployment.name=my-sdk,k8s.namespace.name=chainsaw-light-quail,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-sdk-75dccfc89d,service.instance.id=chainsaw-light-quail.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main
+     image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main
+     imagePullPolicy: IfNotPresent
      name: myapp
+     resources: {}
+     securityContext:
+       allowPrivilegeEscalation: false
+       capabilities:
+         drop:
+         - ALL
+       runAsNonRoot: true
+     terminationMessagePath: /dev/termination-log
+     terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-rz5pk
        readOnly: true
    - args:
-     - --feature-gates=-component.UseLocalHostAsDefaultHost
      - --config=env:OTEL_CONFIG
+     env:
+     - name: POD_NAME
+       valueFrom:
+         fieldRef:
+           apiVersion: v1
+           fieldPath: metadata.name
+     - name: OTEL_CONFIG
+       value: |
+         receivers:
+           otlp:
+             protocols:
+               grpc:
+                 endpoint: 0.0.0.0:4317
+               http:
+                 endpoint: 0.0.0.0:4318
+         exporters:
+           debug: null
+         service:
+           telemetry:
+             metrics:
+               address: 0.0.0.0:8888
+           pipelines:
+             traces:
+               exporters:
+               - debug
+               receivers:
+               - otlp
+     - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
+       valueFrom:
+         fieldRef:
+           apiVersion: v1
+           fieldPath: metadata.name
+     - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID
+       valueFrom:
+         fieldRef:
+           apiVersion: v1
+           fieldPath: metadata.uid
+     - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
+       valueFrom:
+         fieldRef:
+           apiVersion: v1
+           fieldPath: spec.nodeName
+     - name: OTEL_RESOURCE_ATTRIBUTES
+       value: k8s.deployment.name=my-sdk,k8s.deployment.uid=fd7c9d48-0978-4209-b12e-1b49f661437d,k8s.namespace.name=chainsaw-light-quail,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-sdk-75dccfc89d,k8s.replicaset.uid=cf80c1c4-3da3-41d9-9144-39386f17251d
+     image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+     imagePullPolicy: IfNotPresent
      name: otc-container
+     ports:
+     - containerPort: 8888
+       name: metrics
+       protocol: TCP
+     - containerPort: 4317
+       name: otlp-grpc
+       protocol: TCP
+     - containerPort: 4318
+       name: otlp-http
+       protocol: TCP
+     resources: {}
+     securityContext:
+       allowPrivilegeEscalation: false
+       capabilities:
+         drop:
+         - ALL
+       runAsNonRoot: true
+     terminationMessagePath: /dev/termination-log
+     terminationMessagePolicy: File
+     volumeMounts:
+     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-rz5pk
+       readOnly: true
  status:
    containerStatuses:
-   - name: myapp
+   - containerID: cri-o://341996a4cf2f68651e0e659b7cc45e1191d1c5b3743bd3a169f7b294fc344bcc
+     image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main
+     imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python@sha256:66dce9234c5068b519226fee0c8584bd9c104fed87643ed89e02428e909b18db
+     lastState: {}
+     name: myapp
      ready: true
+     restartCount: 0
      started: true
-   - name: otc-container
+     state:
+       running:
+         startedAt: "2025-02-03T07:31:52Z"
+     volumeMounts:
+     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-rz5pk
+       readOnly: true
+       recursiveReadOnly: Disabled
+   - containerID: cri-o://f724a29b4749448ebc29db337f95306fb7e5f1a7df87c455972d53eb9955335b
+     image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+     imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+     lastState: {}
+     name: otc-container
      ready: true
+     restartCount: 0
      started: true
+     state:
+       running:
+         startedAt: "2025-02-03T07:31:52Z"
+     volumeMounts:
+     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-rz5pk
+       readOnly: true
+       recursiveReadOnly: Disabled
    phase: Running
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | TRY | DONE |
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | CATCH | RUN |
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | CMD | RUN | === COMMAND /usr/local/bin/kubectl logs --prefix -l app=my-sdk -n chainsaw-light-quail --all-containers
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | CMD | LOG | === STDOUT
[pod/my-sdk-75dccfc89d-687z4/myapp] * Debug mode: off
[pod/my-sdk-75dccfc89d-687z4/myapp] WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
[pod/my-sdk-75dccfc89d-687z4/myapp] * Running on http://127.0.0.1:8080
[pod/my-sdk-75dccfc89d-687z4/myapp] Press CTRL+C to quit
[pod/my-sdk-75dccfc89d-687z4/otc-container] 2025-02-03T07:31:52.684Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/my-sdk-75dccfc89d-687z4/otc-container] 2025-02-03T07:31:52.684Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/my-sdk-75dccfc89d-687z4/otc-container] 2025-02-03T07:31:52.684Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"}
[pod/my-sdk-75dccfc89d-687z4/otc-container] 2025-02-03T07:31:52.696Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/my-sdk-75dccfc89d-687z4/otc-container] 2025-02-03T07:31:52.696Z info extensions/extensions.go:39 Starting extensions...
[pod/my-sdk-75dccfc89d-687z4/otc-container] 2025-02-03T07:31:52.696Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-sdk-75dccfc89d-687z4/otc-container] 2025-02-03T07:31:52.696Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
[pod/my-sdk-75dccfc89d-687z4/otc-container] 2025-02-03T07:31:52.697Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-sdk-75dccfc89d-687z4/otc-container] 2025-02-03T07:31:52.697Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
[pod/my-sdk-75dccfc89d-687z4/otc-container] 2025-02-03T07:31:52.697Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
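Read together, the error and pod logs above point at a stale expected file rather than a broken sidecar: the pod is Running and both containers are ready, but the expected spec still lists a `--feature-gates=-component.UseLocalHostAsDefaultHost` argument that this operator build no longer injects, so the element-wise comparison of `spec.containers[1].args` fails ("lengths of slices don't match"). A minimal corrected assert fragment for the sidecar container might look like this (hypothetical sketch; the suite's actual assert file is not shown in this log):

```yaml
# Hypothetical assert fragment. Chainsaw compares list fields element-wise,
# so the expected args list must match the injected container exactly.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-sdk
spec:
  containers:
  - name: myapp
  - name: otc-container
    args:
    - --config=env:OTEL_CONFIG
```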
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | CMD | DONE |
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | CATCH | DONE |
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | CLEANUP | RUN |
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-light-quail/my-sdk
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-light-quail/my-sdk
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-light-quail/my-sdk
l.go:53: | 07:37:51 | instrumentation-sdk | step-01  | CLEANUP | DONE |
l.go:53: | 07:37:51 | instrumentation-sdk | step-00  | CLEANUP | RUN |
l.go:53: | 07:37:51 | instrumentation-sdk | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-light-quail/sdk-only
l.go:53: | 07:37:51 | instrumentation-sdk | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-light-quail/sdk-only
l.go:53: | 07:37:51 | instrumentation-sdk | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-light-quail/sdk-only
l.go:53: | 07:37:51 | instrumentation-sdk | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-light-quail/sidecar
l.go:53: | 07:37:51 | instrumentation-sdk | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-light-quail/sidecar
l.go:53: | 07:37:52 | instrumentation-sdk | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-light-quail/sidecar
l.go:53: | 07:37:52 | instrumentation-sdk | step-00  | CLEANUP | DONE |
l.go:53: | 07:37:52 | instrumentation-sdk | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-light-quail
l.go:53: | 07:37:52 | instrumentation-sdk | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-light-quail
=== NAME chainsaw/instrumentation-python-multicontainer
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-improved-pipefish/*
=== ERROR
-----------------------------------------------------------------
v1/Pod/chainsaw-improved-pipefish/my-python-multi-cb648664f-h6p5z
-----------------------------------------------------------------
* spec.containers[0].env[3].name: Invalid value: "OTEL_EXPORTER_OTLP_PROTOCOL": Expected value: "OTEL_TRACES_EXPORTER"
* spec.containers[0].env[3].value: Invalid value: "http/protobuf": Expected value: "otlp"
* spec.containers[0].env[4].name: Invalid value: "OTEL_TRACES_EXPORTER": Expected value: "OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"
* spec.containers[0].env[4].value: Invalid value: "otlp": Expected value: "http/protobuf"
* spec.containers[0].env[6].name: Invalid value: "OTEL_LOGS_EXPORTER": Expected value: "OTEL_EXPORTER_OTLP_METRICS_PROTOCOL"
* spec.containers[0].env[6].value: Invalid value: "otlp": Expected value: "http/protobuf"
* spec.containers[1].env[3].name: Invalid value: "OTEL_EXPORTER_OTLP_PROTOCOL": Expected value: "OTEL_TRACES_EXPORTER"
* spec.containers[1].env[3].value: Invalid value: "http/protobuf": Expected value: "otlp"
* spec.containers[1].env[4].name: Invalid value: "OTEL_TRACES_EXPORTER": Expected value: "OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"
* spec.containers[1].env[4].value: Invalid value: "otlp": Expected value: "http/protobuf"
* spec.containers[1].env[6].name: Invalid value: "OTEL_LOGS_EXPORTER": Expected value: "OTEL_EXPORTER_OTLP_METRICS_PROTOCOL"
* spec.containers[1].env[6].value: Invalid value: "otlp": Expected value: "http/protobuf"
* spec.containers[2].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match
--- expected
+++ actual
@@ -7,28 +7,38 @@
      sidecar.opentelemetry.io/inject: "true"
    labels:
      app: my-python-multi
+   name: my-python-multi-cb648664f-h6p5z
    namespace: chainsaw-improved-pipefish
+   ownerReferences:
+   - apiVersion: apps/v1
+     blockOwnerDeletion: true
+     controller: true
+     kind: ReplicaSet
+     name: my-python-multi-cb648664f
+     uid: 8c7686fd-c653-4c6f-91ac-acd22b103741
  spec:
    containers:
    - env:
      - name: OTEL_NODE_IP
        valueFrom:
          fieldRef:
+           apiVersion: v1
            fieldPath: status.hostIP
      - name: OTEL_POD_IP
        valueFrom:
          fieldRef:
+           apiVersion: v1
            fieldPath: status.podIP
      - name: PYTHONPATH
        value: /otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:/otel-auto-instrumentation-python
+     - name: OTEL_EXPORTER_OTLP_PROTOCOL
+       value: http/protobuf
      - name: OTEL_TRACES_EXPORTER
        value: otlp
-     - name: OTEL_EXPORTER_OTLP_TRACES_PROTOCOL
-       value: http/protobuf
      - name: OTEL_METRICS_EXPORTER
        value: otlp
-     - name: OTEL_EXPORTER_OTLP_METRICS_PROTOCOL
-       value: http/protobuf
+     - name: OTEL_LOGS_EXPORTER
+       value: otlp
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://localhost:4317
      - name: OTEL_EXPORTER_OTLP_TIMEOUT
@@ -54,9 +64,22 @@
      - name: OTEL_PROPAGATORS
        value: jaeger,b3
      - name: OTEL_RESOURCE_ATTRIBUTES
+       value: k8s.container.name=myapp,k8s.deployment.name=my-python-multi,k8s.namespace.name=chainsaw-improved-pipefish,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-python-multi-cb648664f,service.instance.id=chainsaw-improved-pipefish.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main
+     image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main
+     imagePullPolicy: IfNotPresent
      name: myapp
-     volumeMounts:
-     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     resources: {}
+     securityContext:
+       allowPrivilegeEscalation: false
+       capabilities:
+         drop:
+         - ALL
+       runAsNonRoot: true
+     terminationMessagePath: /dev/termination-log
+     terminationMessagePolicy: File
+     volumeMounts:
+     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-wbjlz
        readOnly: true
      - mountPath: /otel-auto-instrumentation-python
        name: opentelemetry-auto-instrumentation-python
@@ -64,21 +87,23 @@
      - name: OTEL_NODE_IP
        valueFrom:
          fieldRef:
+           apiVersion: v1
            fieldPath: status.hostIP
      - name: OTEL_POD_IP
        valueFrom:
          fieldRef:
+           apiVersion: v1
            fieldPath: status.podIP
      - name: PYTHONPATH
        value: /otel-auto-instrumentation-python/opentelemetry/instrumentation/auto_instrumentation:/otel-auto-instrumentation-python
+     - name: OTEL_EXPORTER_OTLP_PROTOCOL
+       value: http/protobuf
      - name: OTEL_TRACES_EXPORTER
        value: otlp
-     - name: OTEL_EXPORTER_OTLP_TRACES_PROTOCOL
-       value: http/protobuf
      - name: OTEL_METRICS_EXPORTER
        value: otlp
-     - name: OTEL_EXPORTER_OTLP_METRICS_PROTOCOL
-       value: http/protobuf
+     - name: OTEL_LOGS_EXPORTER
+       value: otlp
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://localhost:4317
      - name: OTEL_EXPORTER_OTLP_TIMEOUT
@@ -104,31 +129,203 @@
      - name: OTEL_PROPAGATORS
        value: jaeger,b3
      - name: OTEL_RESOURCE_ATTRIBUTES
+       value: k8s.container.name=myrabbit,k8s.deployment.name=my-python-multi,k8s.namespace.name=chainsaw-improved-pipefish,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-python-multi-cb648664f,service.instance.id=chainsaw-improved-pipefish.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myrabbit,service.version=3
+     image: rabbitmq:3
+     imagePullPolicy: IfNotPresent
      name: myrabbit
-     volumeMounts:
-     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     resources: {}
+     securityContext:
+       allowPrivilegeEscalation: false
+       capabilities:
+         drop:
+         - ALL
+       runAsNonRoot: true
+     terminationMessagePath: /dev/termination-log
+     terminationMessagePolicy: File
+     volumeMounts:
+     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-wbjlz
        readOnly: true
      - mountPath: /otel-auto-instrumentation-python
        name: opentelemetry-auto-instrumentation-python
    - args:
-     - --feature-gates=-component.UseLocalHostAsDefaultHost
      - --config=env:OTEL_CONFIG
+     env:
+     - name: POD_NAME
+       valueFrom:
+         fieldRef:
+           apiVersion: v1
+           fieldPath: metadata.name
+     - name: OTEL_CONFIG
+       value: |
+         receivers:
+           otlp:
+             protocols:
+               grpc:
+                 endpoint: 0.0.0.0:4317
+               http:
+                 endpoint: 0.0.0.0:4318
+         exporters:
+           debug: null
+         service:
+           telemetry:
+             metrics:
+               address: 0.0.0.0:8888
+           pipelines:
+             traces:
+               exporters:
+               - debug
+               receivers:
+               - otlp
+     - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
+       valueFrom:
+         fieldRef:
+           apiVersion: v1
+           fieldPath: metadata.name
+     - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID
+       valueFrom:
+         fieldRef:
+           apiVersion: v1
+           fieldPath: metadata.uid
+     - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
+       valueFrom:
+         fieldRef:
+           apiVersion: v1
+           fieldPath: spec.nodeName
+     - name: OTEL_RESOURCE_ATTRIBUTES
+       value: k8s.deployment.name=my-python-multi,k8s.deployment.uid=420abad5-5312-4eae-893d-e2a64a7652fe,k8s.namespace.name=chainsaw-improved-pipefish,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-python-multi-cb648664f,k8s.replicaset.uid=8c7686fd-c653-4c6f-91ac-acd22b103741
+     image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+     imagePullPolicy: IfNotPresent
      name: otc-container
+     ports:
+     - containerPort: 8888
+       name: metrics
+       protocol: TCP
+     - containerPort: 4317
+       name: otlp-grpc
+       protocol: TCP
+     - containerPort: 4318
+       name: otlp-http
+       protocol: TCP
+     resources: {}
+     securityContext:
+       allowPrivilegeEscalation: false
+       capabilities:
+         drop:
+         - ALL
+       runAsNonRoot: true
+     terminationMessagePath: /dev/termination-log
+     terminationMessagePolicy: File
+     volumeMounts:
+     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-wbjlz
+       readOnly: true
    initContainers:
-   - name: opentelemetry-auto-instrumentation-python
+   - command:
+     - cp
+     - -r
+     - /autoinstrumentation/.
+     - /otel-auto-instrumentation-python
+     image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:0.48b0
+     imagePullPolicy: IfNotPresent
+     name: opentelemetry-auto-instrumentation-python
+     resources:
+       limits:
+         cpu: 500m
+         memory: 32Mi
+       requests:
+         cpu: 50m
+         memory: 32Mi
+     securityContext:
+       allowPrivilegeEscalation: false
+       capabilities:
+         drop:
+         - ALL
+       runAsNonRoot: true
+     terminationMessagePath: /dev/termination-log
+     terminationMessagePolicy: File
+     volumeMounts:
+     - mountPath: /otel-auto-instrumentation-python
+       name: opentelemetry-auto-instrumentation-python
+     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-wbjlz
+       readOnly: true
  status:
    containerStatuses:
-   - name: myapp
+   - containerID: cri-o://ac4fb809e3059b6ad68fa2bc30851cf696f8c23f76ed67aa904068d6193c798f
+     image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main
+     imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python@sha256:66dce9234c5068b519226fee0c8584bd9c104fed87643ed89e02428e909b18db
+     lastState: {}
+     name: myapp
      ready: true
+     restartCount: 0
      started: true
-   - name: myrabbit
+     state:
+       running:
+         startedAt: "2025-02-03T07:32:08Z"
+     volumeMounts:
+     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-wbjlz
+       readOnly: true
+       recursiveReadOnly: Disabled
+     - mountPath: /otel-auto-instrumentation-python
+       name: opentelemetry-auto-instrumentation-python
+   - containerID: cri-o://e1c4fa53147d336cee611670aaecf289b86789814bddd50753a589ba2bb93fde
+     image: docker.io/library/rabbitmq:3
+     imageID: docker.io/library/rabbitmq@sha256:af395ea3037a1207af556d76f9c20e972a0855d1471a7c6c8f2c9d5eda54f9ff
+     lastState: {}
+     name: myrabbit
      ready: true
+     restartCount: 0
      started: true
-   - name: otc-container
+     state:
+       running:
+         startedAt: "2025-02-03T07:32:11Z"
+     volumeMounts:
+     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-wbjlz
+       readOnly: true
+       recursiveReadOnly: Disabled
+     - mountPath: /otel-auto-instrumentation-python
+       name: opentelemetry-auto-instrumentation-python
+   - containerID: cri-o://11cc0055dd60045feb4fdf5e3e9cce33c0ae5cfc46f413fe11f744db106316f1
+     image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+     imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+     lastState: {}
+     name: otc-container
      ready: true
+     restartCount: 0
      started: true
+     state:
+       running:
+         startedAt: "2025-02-03T07:32:12Z"
+     volumeMounts:
+     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-wbjlz
+       readOnly: true
+       recursiveReadOnly: Disabled
    initContainerStatuses:
-   - name: opentelemetry-auto-instrumentation-python
+   - containerID: cri-o://cf73d7309caea6b8dc980d1d64293227c4cac793afbb42faec1082c6372ce531
+     image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:0.48b0
+     imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python@sha256:068a8968cc4be5e65169c5587cc582022d41841b5252f9d99972d487e749b584
+     lastState: {}
+     name: opentelemetry-auto-instrumentation-python
      ready: true
+     restartCount: 0
+     started: false
+     state:
+       terminated:
+         containerID: cri-o://cf73d7309caea6b8dc980d1d64293227c4cac793afbb42faec1082c6372ce531
+         exitCode: 0
+         finishedAt: "2025-02-03T07:32:07Z"
+         reason: Completed
+         startedAt: "2025-02-03T07:32:07Z"
+     volumeMounts:
+     - mountPath: /otel-auto-instrumentation-python
+       name: opentelemetry-auto-instrumentation-python
+     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+       name: kube-api-access-wbjlz
+       readOnly: true
+       recursiveReadOnly: Disabled
    phase: Running
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-01  | TRY | DONE |
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-01  | CATCH | RUN |
l.go:53: | 07:38:06 |
instrumentation-python-multicontainer | step-01  | CMD | RUN | === COMMAND /usr/local/bin/kubectl logs --prefix -l app=my-python-multi -n chainsaw-improved-pipefish --all-containers
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-01  | CMD | LOG | === STDOUT
[pod/my-python-multi-cb648664f-h6p5z/myapp]     import psutil
[pod/my-python-multi-cb648664f-h6p5z/myapp]   File "/otel-auto-instrumentation-python/psutil/__init__.py", line 103, in <module>
[pod/my-python-multi-cb648664f-h6p5z/myapp]     from . import _pslinux as _psplatform
[pod/my-python-multi-cb648664f-h6p5z/myapp]   File "/otel-auto-instrumentation-python/psutil/_pslinux.py", line 25, in <module>
[pod/my-python-multi-cb648664f-h6p5z/myapp]     from . import _psutil_linux as cext
[pod/my-python-multi-cb648664f-h6p5z/myapp] ImportError: Error relocating /otel-auto-instrumentation-python/psutil/_psutil_linux.abi3.so: __sched_cpufree: symbol not found
[pod/my-python-multi-cb648664f-h6p5z/myapp] * Debug mode: off
[pod/my-python-multi-cb648664f-h6p5z/myapp] WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
[pod/my-python-multi-cb648664f-h6p5z/myapp] * Running on http://127.0.0.1:8080
[pod/my-python-multi-cb648664f-h6p5z/myapp] Press CTRL+C to quit
[pod/my-python-multi-cb648664f-h6p5z/myrabbit] 2025-02-03 07:32:19.472595+00:00 [info] <0.711.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
[pod/my-python-multi-cb648664f-h6p5z/myrabbit] 2025-02-03 07:32:19.472731+00:00 [info] <0.673.0> Ready to start client connection listeners
[pod/my-python-multi-cb648664f-h6p5z/myrabbit] 2025-02-03 07:32:19.474027+00:00 [info] <0.755.0> started TCP listener on [::]:5672
[pod/my-python-multi-cb648664f-h6p5z/myrabbit] completed with 4 plugins.
[pod/my-python-multi-cb648664f-h6p5z/myrabbit] 2025-02-03 07:32:19.558667+00:00 [info] <0.673.0> Server startup complete; 4 plugins started.
[pod/my-python-multi-cb648664f-h6p5z/myrabbit] 2025-02-03 07:32:19.558667+00:00 [info] <0.673.0> * rabbitmq_prometheus
[pod/my-python-multi-cb648664f-h6p5z/myrabbit] 2025-02-03 07:32:19.558667+00:00 [info] <0.673.0> * rabbitmq_federation
[pod/my-python-multi-cb648664f-h6p5z/myrabbit] 2025-02-03 07:32:19.558667+00:00 [info] <0.673.0> * rabbitmq_management_agent
[pod/my-python-multi-cb648664f-h6p5z/myrabbit] 2025-02-03 07:32:19.558667+00:00 [info] <0.673.0> * rabbitmq_web_dispatch
[pod/my-python-multi-cb648664f-h6p5z/myrabbit] 2025-02-03 07:32:19.683276+00:00 [info] <0.9.0> Time to start RabbitMQ: 7455 ms
[pod/my-python-multi-cb648664f-h6p5z/otc-container] 2025-02-03T07:32:12.154Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/my-python-multi-cb648664f-h6p5z/otc-container] 2025-02-03T07:32:12.154Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/my-python-multi-cb648664f-h6p5z/otc-container] 2025-02-03T07:32:12.154Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"}
[pod/my-python-multi-cb648664f-h6p5z/otc-container] 2025-02-03T07:32:12.166Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/my-python-multi-cb648664f-h6p5z/otc-container] 2025-02-03T07:32:12.166Z info extensions/extensions.go:39 Starting extensions...
[pod/my-python-multi-cb648664f-h6p5z/otc-container] 2025-02-03T07:32:12.166Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-python-multi-cb648664f-h6p5z/otc-container] 2025-02-03T07:32:12.166Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
[pod/my-python-multi-cb648664f-h6p5z/otc-container] 2025-02-03T07:32:12.166Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-python-multi-cb648664f-h6p5z/otc-container] 2025-02-03T07:32:12.166Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
[pod/my-python-multi-cb648664f-h6p5z/otc-container] 2025-02-03T07:32:12.166Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-01  | CMD | DONE |
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-01  | CATCH | DONE |
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-01  | CLEANUP | RUN |
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-improved-pipefish/my-python-multi
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-improved-pipefish/my-python-multi
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-improved-pipefish/my-python-multi
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-01  | CLEANUP | DONE |
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-00  | CLEANUP | RUN |
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-improved-pipefish/java
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-improved-pipefish/java
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-improved-pipefish/java
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-improved-pipefish/sidecar
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-improved-pipefish/sidecar
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-improved-pipefish/sidecar
l.go:53: | 07:38:06 | instrumentation-python-multicontainer | step-00  | CLEANUP | DONE
| l.go:53: | 07:38:06 | instrumentation-python-multicontainer | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-improved-pipefish l.go:53: | 07:38:06 | instrumentation-python-multicontainer | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-improved-pipefish === NAME chainsaw/instrumentation-java-multicontainer l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | ASSERT | ERROR | v1/Pod @ chainsaw-complete-glider/* === ERROR -------------------------------------------------------------- v1/Pod/chainsaw-complete-glider/my-java-multi-778764cb59-xhbs4 -------------------------------------------------------------- * spec.containers[0].env[5].value: Invalid value: "-javaagent:/otel-auto-instrumentation-java/javaagent.jar": Expected value: " -javaagent:/otel-auto-instrumentation-java/javaagent.jar" * spec.containers[1].env[5].value: Invalid value: "-javaagent:/otel-auto-instrumentation-java/javaagent.jar": Expected value: " -javaagent:/otel-auto-instrumentation-java/javaagent.jar" * spec.containers[2].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -7,17 +7,27 @@ sidecar.opentelemetry.io/inject: "true" labels: app: my-java-multi + name: my-java-multi-778764cb59-xhbs4 namespace: chainsaw-complete-glider + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: my-java-multi-778764cb59 + uid: 45a62eee-d59d-4252-8175-3f1cddc23de5 spec: containers: - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: OTEL_JAVAAGENT_DEBUG value: "true" @@ -26,7 +36,7 @@ - name: SPLUNK_PROFILER_ENABLED value: "false" - name: JAVA_TOOL_OPTIONS - value: ' -javaagent:/otel-auto-instrumentation-java/javaagent.jar' + value: -javaagent:/otel-auto-instrumentation-java/javaagent.jar - name: OTEL_TRACES_EXPORTER value: otlp - 
name: OTEL_EXPORTER_OTLP_ENDPOINT @@ -54,9 +64,22 @@ - name: OTEL_PROPAGATORS value: jaeger,b3 - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myapp,k8s.deployment.name=my-java-multi,k8s.namespace.name=chainsaw-complete-glider,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-java-multi-778764cb59,service.instance.id=chainsaw-complete-glider.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-java:main + imagePullPolicy: IfNotPresent name: myapp - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bkscj readOnly: true - mountPath: /otel-auto-instrumentation-java name: opentelemetry-auto-instrumentation-java @@ -64,10 +87,12 @@ - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: OTEL_JAVAAGENT_DEBUG value: "true" @@ -76,7 +101,7 @@ - name: SPLUNK_PROFILER_ENABLED value: "false" - name: JAVA_TOOL_OPTIONS - value: ' -javaagent:/otel-auto-instrumentation-java/javaagent.jar' + value: -javaagent:/otel-auto-instrumentation-java/javaagent.jar - name: OTEL_TRACES_EXPORTER value: otlp - name: OTEL_EXPORTER_OTLP_ENDPOINT @@ -104,31 +129,202 @@ - name: OTEL_PROPAGATORS value: jaeger,b3 - name: OTEL_RESOURCE_ATTRIBUTES + value: 
k8s.container.name=myrabbit,k8s.deployment.name=my-java-multi,k8s.namespace.name=chainsaw-complete-glider,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-java-multi-778764cb59,service.instance.id=chainsaw-complete-glider.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myrabbit,service.version=3 + image: rabbitmq:3 + imagePullPolicy: IfNotPresent name: myrabbit - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bkscj readOnly: true - mountPath: /otel-auto-instrumentation-java name: opentelemetry-auto-instrumentation-java - args: - - --feature-gates=-component.UseLocalHostAsDefaultHost - --config=env:OTEL_CONFIG + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_CONFIG + value: | + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + exporters: + debug: null + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + traces: + exporters: + - debug + receivers: + - otlp + - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.uid + - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_RESOURCE_ATTRIBUTES + value: 
k8s.deployment.name=my-java-multi,k8s.deployment.uid=a1e3542e-c3b5-4169-8792-36a4672e1f39,k8s.namespace.name=chainsaw-complete-glider,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-java-multi-778764cb59,k8s.replicaset.uid=45a62eee-d59d-4252-8175-3f1cddc23de5 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imagePullPolicy: IfNotPresent name: otc-container + ports: + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bkscj + readOnly: true initContainers: - - name: opentelemetry-auto-instrumentation-java + - command: + - cp + - /javaagent.jar + - /otel-auto-instrumentation-java/javaagent.jar + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:1.33.5 + imagePullPolicy: IfNotPresent + name: opentelemetry-auto-instrumentation-java + resources: + limits: + cpu: 500m + memory: 64Mi + requests: + cpu: 50m + memory: 64Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /otel-auto-instrumentation-java + name: opentelemetry-auto-instrumentation-java + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bkscj + readOnly: true status: containerStatuses: - - name: myapp + - containerID: 
cri-o://8aca153560492042a49da0da835c2dfe1a9527db32757d6807aef936f185c289 + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-java:main + imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-java@sha256:a850ff524a0d974b08583210117889f27a3bd58b9bfb07ce232eb46134b22103 + lastState: {} + name: myapp ready: true + restartCount: 0 started: true - - name: myrabbit + state: + running: + startedAt: "2025-02-03T07:32:34Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bkscj + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /otel-auto-instrumentation-java + name: opentelemetry-auto-instrumentation-java + - containerID: cri-o://93770b7878eb6c14aec77a25266908b18fa3acd5cf17872b738f436984679fb7 + image: docker.io/library/rabbitmq:3 + imageID: docker.io/library/rabbitmq@sha256:af395ea3037a1207af556d76f9c20e972a0855d1471a7c6c8f2c9d5eda54f9ff + lastState: {} + name: myrabbit ready: true + restartCount: 0 started: true - - name: otc-container + state: + running: + startedAt: "2025-02-03T07:32:41Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bkscj + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /otel-auto-instrumentation-java + name: opentelemetry-auto-instrumentation-java + - containerID: cri-o://9b00ff58d14187eb56665b75c0fea898d7d21efe545c2049b68730a3801c05c8 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + lastState: {} + name: otc-container ready: true + restartCount: 0 started: true + state: + running: + startedAt: "2025-02-03T07:32:41Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bkscj + readOnly: true + recursiveReadOnly: Disabled 
initContainerStatuses: - - name: opentelemetry-auto-instrumentation-java + - containerID: cri-o://a9707e2d682c586cd88f103f8764f1c9da29c8e403a6f33dfbcb441f011a95a4 + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:1.33.5 + imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java@sha256:100735f70446dc76d895ee1534da1c828e129b03010ba2bc161e6ca475d27815 + lastState: {} + name: opentelemetry-auto-instrumentation-java ready: true + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://a9707e2d682c586cd88f103f8764f1c9da29c8e403a6f33dfbcb441f011a95a4 + exitCode: 0 + finishedAt: "2025-02-03T07:32:29Z" + reason: Completed + startedAt: "2025-02-03T07:32:29Z" + volumeMounts: + - mountPath: /otel-auto-instrumentation-java + name: opentelemetry-auto-instrumentation-java + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bkscj + readOnly: true + recursiveReadOnly: Disabled phase: Running l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | TRY | DONE | l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | CATCH | RUN | l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | CMD | RUN | === COMMAND /usr/local/bin/kubectl logs --prefix -l app=my-java-multi -n chainsaw-complete-glider --all-containers l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | CMD | LOG | === STDOUT [pod/my-java-multi-778764cb59-xhbs4/myrabbit] 2025-02-03 07:32:49.396506+00:00 [info] <0.711.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692 [pod/my-java-multi-778764cb59-xhbs4/myrabbit] 2025-02-03 07:32:49.396634+00:00 [info] <0.673.0> Ready to start client connection listeners [pod/my-java-multi-778764cb59-xhbs4/myrabbit] 2025-02-03 07:32:49.397923+00:00 [info] <0.755.0> started TCP listener on [::]:5672 [pod/my-java-multi-778764cb59-xhbs4/myrabbit] completed with 4 plugins. 
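The step-02 assert error shown in the diff above boils down to an exact string comparison: the expected manifest spells `JAVA_TOOL_OPTIONS` as the single-quoted YAML scalar `' -javaagent:...'` (leading space preserved), while the operator injects the value without it. A minimal sketch of the comparison, with the two literal values copied from the diff:

```python
# Values copied from the failing assert diff: expected keeps a leading
# space, the injected (actual) value does not. Chainsaw compares the
# strings byte-for-byte, so this single space fails the assertion.
expected = " -javaagent:/otel-auto-instrumentation-java/javaagent.jar"
actual = "-javaagent:/otel-auto-instrumentation-java/javaagent.jar"

print(expected == actual)          # exact match fails on the leading space
print(expected.strip() == actual)  # the agent path itself is identical
```

At runtime the JVM tokenizes `JAVA_TOOL_OPTIONS` on whitespace, so both forms behave identically; only the test's literal expectation differs, which points at a stale expected manifest rather than a broken injection. (The third mismatch in the diff, the sidecar `args` length, comes from the actual container adding `--feature-gates=-component.UseLocalHostAsDefaultHost` ahead of `--config=env:OTEL_CONFIG`.)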
[pod/my-java-multi-778764cb59-xhbs4/myrabbit] 2025-02-03 07:32:49.463010+00:00 [info] <0.673.0> Server startup complete; 4 plugins started.
[pod/my-java-multi-778764cb59-xhbs4/myrabbit] 2025-02-03 07:32:49.463010+00:00 [info] <0.673.0>  * rabbitmq_prometheus
[pod/my-java-multi-778764cb59-xhbs4/myrabbit] 2025-02-03 07:32:49.463010+00:00 [info] <0.673.0>  * rabbitmq_federation
[pod/my-java-multi-778764cb59-xhbs4/myrabbit] 2025-02-03 07:32:49.463010+00:00 [info] <0.673.0>  * rabbitmq_management_agent
[pod/my-java-multi-778764cb59-xhbs4/myrabbit] 2025-02-03 07:32:49.463010+00:00 [info] <0.673.0>  * rabbitmq_web_dispatch
[pod/my-java-multi-778764cb59-xhbs4/myrabbit] 2025-02-03 07:32:49.506610+00:00 [info] <0.9.0> Time to start RabbitMQ: 7853 ms
[pod/my-java-multi-778764cb59-xhbs4/otc-container] 2025-02-03T07:32:41.371Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/my-java-multi-778764cb59-xhbs4/otc-container] 2025-02-03T07:32:41.371Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/my-java-multi-778764cb59-xhbs4/otc-container] 2025-02-03T07:32:41.371Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"}
[pod/my-java-multi-778764cb59-xhbs4/otc-container] 2025-02-03T07:32:41.383Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/my-java-multi-778764cb59-xhbs4/otc-container] 2025-02-03T07:32:41.384Z info extensions/extensions.go:39 Starting extensions...
[pod/my-java-multi-778764cb59-xhbs4/otc-container] 2025-02-03T07:32:41.384Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-java-multi-778764cb59-xhbs4/otc-container] 2025-02-03T07:32:41.384Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
[pod/my-java-multi-778764cb59-xhbs4/otc-container] 2025-02-03T07:32:41.384Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-java-multi-778764cb59-xhbs4/otc-container] 2025-02-03T07:32:41.384Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
[pod/my-java-multi-778764cb59-xhbs4/otc-container] 2025-02-03T07:32:41.384Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
[pod/my-java-multi-778764cb59-xhbs4/myapp] 2025-02-03T07:32:40.975Z INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
[pod/my-java-multi-778764cb59-xhbs4/myapp] 2025-02-03T07:32:40.990Z INFO 1 --- [ main] com.example.app.DemoApplication : Started DemoApplication in 2.548 seconds (process running for 6.232)
[pod/my-java-multi-778764cb59-xhbs4/myapp] [otel.javaagent 2025-02-03 07:33:35:304 +0000] [OkHttp http://localhost:4317/...] WARN io.opentelemetry.exporter.internal.grpc.GrpcExporter - Failed to export metrics. Server responded with gRPC status code 2. Error message: timeout
[pod/my-java-multi-778764cb59-xhbs4/myapp] [otel.javaagent 2025-02-03 07:33:35:304 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed
[pod/my-java-multi-778764cb59-xhbs4/myapp] [otel.javaagent 2025-02-03 07:34:35:208 +0000] [OkHttp http://localhost:4317/...] WARN io.opentelemetry.exporter.internal.grpc.GrpcExporter - Failed to export metrics. Server responded with gRPC status code 2. Error message: timeout
[pod/my-java-multi-778764cb59-xhbs4/myapp] [otel.javaagent 2025-02-03 07:34:35:208 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed
[pod/my-java-multi-778764cb59-xhbs4/myapp] [otel.javaagent 2025-02-03 07:35:35:198 +0000] [OkHttp http://localhost:4317/...] ERROR io.opentelemetry.exporter.internal.grpc.GrpcExporter - Failed to export metrics. Server responded with UNIMPLEMENTED. This usually means that your collector is not configured with an otlp receiver in the "pipelines" section of the configuration. If export is not desired and you are using OpenTelemetry autoconfiguration or the javaagent, disable export by setting OTEL_METRICS_EXPORTER=none. Full error message: unknown service opentelemetry.proto.collector.metrics.v1.MetricsService
[pod/my-java-multi-778764cb59-xhbs4/myapp] [otel.javaagent 2025-02-03 07:35:35:199 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed
[pod/my-java-multi-778764cb59-xhbs4/myapp] [otel.javaagent 2025-02-03 07:36:35:188 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed
[pod/my-java-multi-778764cb59-xhbs4/myapp] [otel.javaagent 2025-02-03 07:37:35:188 +0000] [OkHttp http://localhost:4317/...]
DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | CMD | DONE |
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | CATCH | DONE |
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | CLEANUP | RUN |
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | DELETE | RUN | apps/v1/Deployment @ chainsaw-complete-glider/my-java-multi
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | DELETE | OK | apps/v1/Deployment @ chainsaw-complete-glider/my-java-multi
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | DELETE | DONE | apps/v1/Deployment @ chainsaw-complete-glider/my-java-multi
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-02  | CLEANUP | DONE |
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-01  | CLEANUP | RUN |
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-01  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-complete-glider/java
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-01  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-complete-glider/java
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-01  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-complete-glider/java
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-01  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-complete-glider/sidecar
l.go:53: | 07:38:27 | instrumentation-java-multicontainer | step-01  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-complete-glider/sidecar
l.go:53: | 07:38:28 | instrumentation-java-multicontainer | step-01  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-complete-glider/sidecar
l.go:53: | 07:38:28 | instrumentation-java-multicontainer
| step-01  | CLEANUP | DONE |
l.go:53: | 07:38:28 | instrumentation-java-multicontainer | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-complete-glider
l.go:53: | 07:38:28 | instrumentation-java-multicontainer | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-complete-glider
l.go:53: | 07:38:34 | instrumentation-java-multicontainer | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-complete-glider
=== CONT chainsaw/instrumentation-nodejs
l.go:53: | 07:38:34 | instrumentation-nodejs | @setup  | CREATE | OK | v1/Namespace @ chainsaw-oriented-stag
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | TRY | RUN |
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-oriented-stag openshift.io/sa.scc.uid-range=1000/1000 --overwrite
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-oriented-stag annotated
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | CMD | DONE |
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-oriented-stag openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-oriented-stag annotated
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | CMD | DONE |
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-oriented-stag/sidecar
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-oriented-stag/sidecar
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-oriented-stag/sidecar
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-oriented-stag/nodejs
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-oriented-stag/nodejs
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-oriented-stag/nodejs
l.go:53: | 07:38:34 | instrumentation-nodejs | step-00  | TRY | DONE |
l.go:53: | 07:38:34 | instrumentation-nodejs | step-01  | TRY | RUN |
l.go:53: | 07:38:34 | instrumentation-nodejs | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-oriented-stag/my-nodejs
l.go:53: | 07:38:35 | instrumentation-nodejs | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-oriented-stag/my-nodejs
l.go:53: | 07:38:35 | instrumentation-nodejs | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-oriented-stag/my-nodejs
l.go:53: | 07:38:35 | instrumentation-nodejs | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-oriented-stag/*
=== NAME chainsaw/instrumentation-sdk
l.go:53: | 07:38:37 | instrumentation-sdk | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-light-quail
=== CONT chainsaw/instrumentation-nginx-multicontainer
l.go:53: | 07:38:37 | instrumentation-nginx-multicontainer | @setup  | CREATE | OK | v1/Namespace @ chainsaw-awake-mosquito
l.go:53: | 07:38:37 | instrumentation-nginx-multicontainer | step-00  | TRY | RUN |
l.go:53: | 07:38:37 | instrumentation-nginx-multicontainer | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-awake-mosquito openshift.io/sa.scc.uid-range=1000/1000 --overwrite
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-awake-mosquito annotated
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | CMD | DONE |
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-awake-mosquito openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-awake-mosquito annotated
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | CMD | DONE |
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-mosquito/sidecar
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-mosquito/sidecar
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-mosquito/sidecar
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-awake-mosquito/nginx
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-awake-mosquito/nginx
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-awake-mosquito/nginx
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-00  | TRY | DONE |
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-01  | TRY | RUN |
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-awake-mosquito/my-nginx-multi
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-awake-mosquito/my-nginx-multi
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-awake-mosquito/my-nginx-multi
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-01  | APPLY | RUN | v1/ConfigMap @ chainsaw-awake-mosquito/nginx-conf
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-01  | CREATE | OK | v1/ConfigMap @ chainsaw-awake-mosquito/nginx-conf
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-01  | APPLY | DONE | v1/ConfigMap @ chainsaw-awake-mosquito/nginx-conf
l.go:53: | 07:38:38 | instrumentation-nginx-multicontainer | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-awake-mosquito/*
=== NAME chainsaw/instrumentation-python-multicontainer
l.go:53: | 07:38:52 | instrumentation-python-multicontainer | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-improved-pipefish
=== CONT chainsaw/instrumentation-nginx-contnr-secctx
l.go:53: | 07:38:52 | instrumentation-nginx-contnr-secctx | @setup  | CREATE | OK | v1/Namespace @ chainsaw-top-frog
l.go:53: | 07:38:52 | instrumentation-nginx-contnr-secctx | step-00  | TRY | RUN |
l.go:53: | 07:38:52 | instrumentation-nginx-contnr-secctx | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-top-frog openshift.io/sa.scc.uid-range=1000/1000 --overwrite
l.go:53: | 07:38:52 | instrumentation-nginx-contnr-secctx | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-top-frog annotated
l.go:53: | 07:38:52 | instrumentation-nginx-contnr-secctx | step-00  | CMD | DONE |
l.go:53: | 07:38:52 | instrumentation-nginx-contnr-secctx | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-top-frog openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite
l.go:53: | 07:38:52 | instrumentation-nginx-contnr-secctx | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-top-frog annotated
l.go:53: | 07:38:52 | instrumentation-nginx-contnr-secctx | step-00  | CMD | DONE |
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-top-frog/sidecar
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-top-frog/sidecar
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-top-frog/sidecar
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-top-frog/nginx
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-top-frog/nginx
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-top-frog/nginx
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-00  | TRY | DONE |
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-01  | TRY | RUN |
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-top-frog/my-nginx-contnr-secctx
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-top-frog/my-nginx-contnr-secctx
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-top-frog/my-nginx-contnr-secctx
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-01  | APPLY | RUN | v1/ConfigMap @ chainsaw-top-frog/nginx-conf
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-01  | CREATE | OK | v1/ConfigMap @ chainsaw-top-frog/nginx-conf
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-01  | APPLY | DONE | v1/ConfigMap @ chainsaw-top-frog/nginx-conf
l.go:53: | 07:38:53 | instrumentation-nginx-contnr-secctx | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-top-frog/*
=== NAME chainsaw/instrumentation-nodejs-multicontainer
l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-huge-bird/*
=== ERROR
---------------------------------------------------------- v1/Pod/chainsaw-huge-bird/my-nodejs-multi-64cbdd65db-vk2d2 ---------------------------------------------------------- * spec.containers[2].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -7,17 +7,27 @@ sidecar.opentelemetry.io/inject: "true" labels: app: my-nodejs-multi + name: my-nodejs-multi-64cbdd65db-vk2d2 namespace: chainsaw-huge-bird + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: my-nodejs-multi-64cbdd65db + uid: 3d217345-b947-4e3b-bc57-b97b501acd5c spec: containers: - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: NODE_PATH value: /usr/local/lib/node_modules @@ -50,9 +60,22 @@ - name: OTEL_PROPAGATORS value: jaeger,b3 - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myapp,k8s.deployment.name=my-nodejs-multi,k8s.namespace.name=chainsaw-huge-bird,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-nodejs-multi-64cbdd65db,service.instance.id=chainsaw-huge-bird.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs:main + imagePullPolicy: IfNotPresent name: myapp - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-lz78v readOnly: true - mountPath: /otel-auto-instrumentation-nodejs name: opentelemetry-auto-instrumentation-nodejs @@ -60,10 +83,12 @@ 
- name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: NODE_OPTIONS value: ' --require /otel-auto-instrumentation-nodejs/autoinstrumentation.js' @@ -94,31 +119,203 @@ - name: OTEL_PROPAGATORS value: jaeger,b3 - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myrabbit,k8s.deployment.name=my-nodejs-multi,k8s.namespace.name=chainsaw-huge-bird,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-nodejs-multi-64cbdd65db,service.instance.id=chainsaw-huge-bird.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myrabbit,service.version=3 + image: rabbitmq:3 + imagePullPolicy: IfNotPresent name: myrabbit - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-lz78v readOnly: true - mountPath: /otel-auto-instrumentation-nodejs name: opentelemetry-auto-instrumentation-nodejs - args: - - --feature-gates=-component.UseLocalHostAsDefaultHost - --config=env:OTEL_CONFIG + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_CONFIG + value: | + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + exporters: + debug: null + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + traces: + exporters: + - debug + receivers: + - otlp + - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.uid + - name: 
OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.deployment.name=my-nodejs-multi,k8s.deployment.uid=805b7d9c-7d72-4b34-8df3-ad82b9f63c70,k8s.namespace.name=chainsaw-huge-bird,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-nodejs-multi-64cbdd65db,k8s.replicaset.uid=3d217345-b947-4e3b-bc57-b97b501acd5c + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imagePullPolicy: IfNotPresent name: otc-container + ports: + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-lz78v + readOnly: true initContainers: - - name: opentelemetry-auto-instrumentation-nodejs + - command: + - cp + - -r + - /autoinstrumentation/. 
+ - /otel-auto-instrumentation-nodejs + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.53.0 + imagePullPolicy: IfNotPresent + name: opentelemetry-auto-instrumentation-nodejs + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 50m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /otel-auto-instrumentation-nodejs + name: opentelemetry-auto-instrumentation-nodejs + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-lz78v + readOnly: true status: containerStatuses: - - name: myapp + - containerID: cri-o://192f5bedf22ed004339b08c75fe999814c2f9f3d30673a98e1140d241ffbc7f6 + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs:main + imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs@sha256:0b37f3557ff72ec150ce227b4dc98a2977a67a3bddc426da495c5b40f4e9ce6a + lastState: {} + name: myapp ready: true + restartCount: 0 started: true - - name: myrabbit + state: + running: + startedAt: "2025-02-03T07:34:13Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-lz78v + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /otel-auto-instrumentation-nodejs + name: opentelemetry-auto-instrumentation-nodejs + - containerID: cri-o://10a5277ec2a9311c37ebd3e4b13248cffe354da6c75886dbe7a234705afac533 + image: docker.io/library/rabbitmq:3 + imageID: docker.io/library/rabbitmq@sha256:af395ea3037a1207af556d76f9c20e972a0855d1471a7c6c8f2c9d5eda54f9ff + lastState: {} + name: myrabbit ready: true + restartCount: 0 started: true - - name: otc-container + state: + running: + startedAt: "2025-02-03T07:34:13Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-lz78v + readOnly: true + 
recursiveReadOnly: Disabled + - mountPath: /otel-auto-instrumentation-nodejs + name: opentelemetry-auto-instrumentation-nodejs + - containerID: cri-o://a1fd4fb9b519fc733dedcbbbbd9db4f45d4742c9275cca4de7acbbef7d60968d + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + lastState: {} + name: otc-container ready: true + restartCount: 0 started: true + state: + running: + startedAt: "2025-02-03T07:34:14Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-lz78v + readOnly: true + recursiveReadOnly: Disabled initContainerStatuses: - - name: opentelemetry-auto-instrumentation-nodejs + - containerID: cri-o://768874eeec0569731f3091af13f5365933e204032791148367236af5b2acf359 + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.53.0 + imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs@sha256:70ba757df71d0596aaccac91f439e8be7f81136b868205e79178e8fd3c36a763 + lastState: {} + name: opentelemetry-auto-instrumentation-nodejs ready: true + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://768874eeec0569731f3091af13f5365933e204032791148367236af5b2acf359 + exitCode: 0 + finishedAt: "2025-02-03T07:34:10Z" + reason: Completed + startedAt: "2025-02-03T07:34:09Z" + volumeMounts: + - mountPath: /otel-auto-instrumentation-nodejs + name: opentelemetry-auto-instrumentation-nodejs + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-lz78v + readOnly: true + recursiveReadOnly: Disabled phase: Running l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-01  | TRY | DONE | l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-01  | CATCH | RUN | l.go:53: | 07:40:07 | 
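All three ASSERT failures in this run have the same shape: the step's assert file pins the injected sidecar container's `args` to two entries, including `--feature-gates=-component.UseLocalHostAsDefaultHost`, while the collector actually injected by this operator build (collector v0.113.0) carries only `--config=env:OTEL_CONFIG`, so the slice lengths differ. A minimal sketch of the corrected expectation, assuming the upstream chainsaw assert layout (the file path and surrounding fields here are illustrative, not taken from the repo):

```yaml
# Hypothetical excerpt of an assert file such as
# tests/e2e-instrumentation/instrumentation-nodejs-multicontainer/<NN>-assert.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.opentelemetry.io/inject: "true"
spec:
  containers:
  - name: myapp
  - name: myrabbit
  - name: otc-container
    args:
    # removed: --feature-gates=-component.UseLocalHostAsDefaultHost
    # (the operator no longer injects this flag, per the diff above)
    - --config=env:OTEL_CONFIG
status:
  phase: Running
```

Because chainsaw compares list fields by length as well as content, the stale extra arg in the expected spec is enough to fail the whole Pod assertion even though the pod itself reaches `phase: Running`.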
instrumentation-nodejs-multicontainer | step-01  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl logs --prefix -l app=my-nodejs-multi -n chainsaw-huge-bird --all-containers
l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-01  | CMD | LOG |
=== STDOUT
[pod/my-nodejs-multi-64cbdd65db-vk2d2/myapp] Hi
[pod/my-nodejs-multi-64cbdd65db-vk2d2/myrabbit] 2025-02-03 07:34:19.110701+00:00 [info] <0.711.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
[pod/my-nodejs-multi-64cbdd65db-vk2d2/myrabbit] 2025-02-03 07:34:19.110830+00:00 [info] <0.673.0> Ready to start client connection listeners
[pod/my-nodejs-multi-64cbdd65db-vk2d2/myrabbit] 2025-02-03 07:34:19.112159+00:00 [info] <0.755.0> started TCP listener on [::]:5672
[pod/my-nodejs-multi-64cbdd65db-vk2d2/myrabbit] completed with 4 plugins.
[pod/my-nodejs-multi-64cbdd65db-vk2d2/myrabbit] 2025-02-03 07:34:19.178174+00:00 [info] <0.673.0> Server startup complete; 4 plugins started.
[pod/my-nodejs-multi-64cbdd65db-vk2d2/myrabbit] 2025-02-03 07:34:19.178174+00:00 [info] <0.673.0>  * rabbitmq_prometheus
[pod/my-nodejs-multi-64cbdd65db-vk2d2/myrabbit] 2025-02-03 07:34:19.178174+00:00 [info] <0.673.0>  * rabbitmq_federation
[pod/my-nodejs-multi-64cbdd65db-vk2d2/myrabbit] 2025-02-03 07:34:19.178174+00:00 [info] <0.673.0>  * rabbitmq_management_agent
[pod/my-nodejs-multi-64cbdd65db-vk2d2/myrabbit] 2025-02-03 07:34:19.178174+00:00 [info] <0.673.0>  * rabbitmq_web_dispatch
[pod/my-nodejs-multi-64cbdd65db-vk2d2/myrabbit] 2025-02-03 07:34:19.353755+00:00 [info] <0.9.0> Time to start RabbitMQ: 4809 ms
[pod/my-nodejs-multi-64cbdd65db-vk2d2/otc-container] 2025-02-03T07:34:14.248Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/my-nodejs-multi-64cbdd65db-vk2d2/otc-container] 2025-02-03T07:34:14.248Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/my-nodejs-multi-64cbdd65db-vk2d2/otc-container] 2025-02-03T07:34:14.248Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"} [pod/my-nodejs-multi-64cbdd65db-vk2d2/otc-container] 2025-02-03T07:34:14.260Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4} [pod/my-nodejs-multi-64cbdd65db-vk2d2/otc-container] 2025-02-03T07:34:14.260Z info extensions/extensions.go:39 Starting extensions... [pod/my-nodejs-multi-64cbdd65db-vk2d2/otc-container] 2025-02-03T07:34:14.260Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-nodejs-multi-64cbdd65db-vk2d2/otc-container] 2025-02-03T07:34:14.260Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"} [pod/my-nodejs-multi-64cbdd65db-vk2d2/otc-container] 2025-02-03T07:34:14.260Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. 
{"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-nodejs-multi-64cbdd65db-vk2d2/otc-container] 2025-02-03T07:34:14.261Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"} [pod/my-nodejs-multi-64cbdd65db-vk2d2/otc-container] 2025-02-03T07:34:14.261Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data. l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-01  | CMD | DONE | l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-01  | CATCH | DONE | l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-01  | CLEANUP | RUN | l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-huge-bird/my-nodejs-multi l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-huge-bird/my-nodejs-multi l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-huge-bird/my-nodejs-multi l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-01  | CLEANUP | DONE | l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-00  | CLEANUP | RUN | l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-huge-bird/java l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-huge-bird/java l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-huge-bird/java l.go:53: | 07:40:07 | 
instrumentation-nodejs-multicontainer | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-huge-bird/sidecar
l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-huge-bird/sidecar
l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-huge-bird/sidecar
l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | step-00  | CLEANUP | DONE |
l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-huge-bird
l.go:53: | 07:40:07 | instrumentation-nodejs-multicontainer | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-huge-bird
l.go:53: | 07:40:52 | instrumentation-nodejs-multicontainer | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-huge-bird
=== CONT chainsaw/instrumentation-nginx
l.go:53: | 07:40:53 | instrumentation-nginx | @setup  | CREATE | OK | v1/Namespace @ chainsaw-pumped-toad
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | TRY | RUN |
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-pumped-toad openshift.io/sa.scc.uid-range=1000/1000 --overwrite
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-pumped-toad annotated
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | CMD | DONE |
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-pumped-toad openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-pumped-toad annotated
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | CMD | DONE |
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pumped-toad/sidecar
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pumped-toad/sidecar
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pumped-toad/sidecar
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pumped-toad/nginx
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pumped-toad/nginx
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pumped-toad/nginx
l.go:53: | 07:40:53 | instrumentation-nginx | step-00  | TRY | DONE |
l.go:53: | 07:40:53 | instrumentation-nginx | step-01  | TRY | RUN |
l.go:53: | 07:40:53 | instrumentation-nginx | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-pumped-toad/my-nginx
l.go:53: | 07:40:53 | instrumentation-nginx | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-pumped-toad/my-nginx
l.go:53: | 07:40:53 | instrumentation-nginx | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-pumped-toad/my-nginx
l.go:53: | 07:40:53 | instrumentation-nginx | step-01  | APPLY | RUN | v1/ConfigMap @ chainsaw-pumped-toad/nginx-conf
l.go:53: | 07:40:53 | instrumentation-nginx | step-01  | CREATE | OK | v1/ConfigMap @ chainsaw-pumped-toad/nginx-conf
l.go:53: | 07:40:53 | instrumentation-nginx | step-01  | APPLY | DONE | v1/ConfigMap @ chainsaw-pumped-toad/nginx-conf
l.go:53: | 07:40:53 | instrumentation-nginx | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-pumped-toad/*
=== NAME chainsaw/instrumentation-nodejs
l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-oriented-stag/*
=== ERROR
--------------------------------------------------------
v1/Pod/chainsaw-oriented-stag/my-nodejs-68f7846995-cqmnr -------------------------------------------------------- * spec.containers[1].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -6,17 +6,27 @@ sidecar.opentelemetry.io/inject: "true" labels: app: my-nodejs + name: my-nodejs-68f7846995-cqmnr namespace: chainsaw-oriented-stag + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: my-nodejs-68f7846995 + uid: 6f243a91-ff57-44c7-b18c-21d616fa7a1c spec: containers: - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: NODE_PATH value: /usr/local/lib/node_modules @@ -53,28 +63,185 @@ - name: OTEL_PROPAGATORS value: jaeger,b3 - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myapp,k8s.deployment.name=my-nodejs,k8s.namespace.name=chainsaw-oriented-stag,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-nodejs-68f7846995,service.instance.id=chainsaw-oriented-stag.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs:main + imagePullPolicy: IfNotPresent name: myapp - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-7cjnb readOnly: true - mountPath: /otel-auto-instrumentation-nodejs name: opentelemetry-auto-instrumentation-nodejs - args: - - --feature-gates=-component.UseLocalHostAsDefaultHost - --config=env:OTEL_CONFIG + env: 
+ - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_CONFIG + value: | + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + exporters: + debug: null + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + traces: + exporters: + - debug + receivers: + - otlp + - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.uid + - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.deployment.name=my-nodejs,k8s.deployment.uid=bd4451e5-118f-4c7d-aeda-a9c53dec2b8f,k8s.namespace.name=chainsaw-oriented-stag,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-nodejs-68f7846995,k8s.replicaset.uid=6f243a91-ff57-44c7-b18c-21d616fa7a1c + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imagePullPolicy: IfNotPresent name: otc-container + ports: + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-7cjnb + readOnly: true initContainers: - - name: opentelemetry-auto-instrumentation-nodejs + - command: + - cp + - -r + - /autoinstrumentation/. 
+ - /otel-auto-instrumentation-nodejs + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.53.0 + imagePullPolicy: IfNotPresent + name: opentelemetry-auto-instrumentation-nodejs + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 50m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /otel-auto-instrumentation-nodejs + name: opentelemetry-auto-instrumentation-nodejs + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-7cjnb + readOnly: true status: containerStatuses: - - name: myapp + - containerID: cri-o://646ad817867f699f778aa5f5906b3c271300e5436a58e73ce043277f1588c45e + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs:main + imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs@sha256:0b37f3557ff72ec150ce227b4dc98a2977a67a3bddc426da495c5b40f4e9ce6a + lastState: {} + name: myapp ready: true + restartCount: 0 started: true - - name: otc-container + state: + running: + startedAt: "2025-02-03T07:38:42Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-7cjnb + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /otel-auto-instrumentation-nodejs + name: opentelemetry-auto-instrumentation-nodejs + - containerID: cri-o://7ee9e3e46c9ac44941aed53c0f194a54452b94d98db5043be86e606d3123aeb1 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + lastState: {} + name: otc-container ready: true + restartCount: 0 started: true + state: + running: + startedAt: "2025-02-03T07:38:42Z" + volumeMounts: + 
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-7cjnb + readOnly: true + recursiveReadOnly: Disabled initContainerStatuses: - - name: opentelemetry-auto-instrumentation-nodejs + - containerID: cri-o://c0e9d84604b4854b897ac632c8705d8af067fac09cb43b03dc65e3f2eb7b322c + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.53.0 + imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs@sha256:70ba757df71d0596aaccac91f439e8be7f81136b868205e79178e8fd3c36a763 + lastState: {} + name: opentelemetry-auto-instrumentation-nodejs ready: true + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://c0e9d84604b4854b897ac632c8705d8af067fac09cb43b03dc65e3f2eb7b322c + exitCode: 0 + finishedAt: "2025-02-03T07:38:38Z" + reason: Completed + startedAt: "2025-02-03T07:38:37Z" + volumeMounts: + - mountPath: /otel-auto-instrumentation-nodejs + name: opentelemetry-auto-instrumentation-nodejs + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-7cjnb + readOnly: true + recursiveReadOnly: Disabled phase: Running l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | TRY | DONE | l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | CATCH | RUN | l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | CMD | RUN | === COMMAND /usr/local/bin/kubectl logs --prefix -l app=my-nodejs -n chainsaw-oriented-stag --all-containers l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | CMD | LOG | === STDOUT [pod/my-nodejs-68f7846995-cqmnr/otc-container] 2025-02-03T07:38:42.300Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"} [pod/my-nodejs-68f7846995-cqmnr/otc-container] 2025-02-03T07:38:42.300Z info builders/builders.go:26 Development component. May change in the future. 
{"kind": "exporter", "data_type": "traces", "name": "debug"} [pod/my-nodejs-68f7846995-cqmnr/otc-container] 2025-02-03T07:38:42.313Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4} [pod/my-nodejs-68f7846995-cqmnr/otc-container] 2025-02-03T07:38:42.313Z info extensions/extensions.go:39 Starting extensions... [pod/my-nodejs-68f7846995-cqmnr/otc-container] 2025-02-03T07:38:42.313Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-nodejs-68f7846995-cqmnr/otc-container] 2025-02-03T07:38:42.313Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"} [pod/my-nodejs-68f7846995-cqmnr/otc-container] 2025-02-03T07:38:42.313Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-nodejs-68f7846995-cqmnr/otc-container] 2025-02-03T07:38:42.314Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"} [pod/my-nodejs-68f7846995-cqmnr/otc-container] 2025-02-03T07:38:42.314Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data. 
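For context, the `OTEL_CONFIG` environment value visible in the assertion diffs is the collector configuration the operator rendered for the `sidecar` OpenTelemetryCollector used by these tests. A sketch of a CR that would produce that config (assuming `mode: sidecar`, which the `sidecar.opentelemetry.io/inject` annotation implies; this is reconstructed from the log, not the verbatim test manifest):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: sidecar
spec:
  mode: sidecar          # injected into annotated pods by the operator webhook
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    exporters:
      debug: {}          # prints received telemetry to the container log
    service:
      telemetry:
        metrics:
          address: 0.0.0.0:8888
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
```

The `Traces {... "spans": 6}` debug-exporter line below confirms the auto-instrumented app did deliver spans to this sidecar, so the failures are assertion-file drift rather than a broken telemetry path.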
[pod/my-nodejs-68f7846995-cqmnr/otc-container] 2025-02-03T07:38:47.860Z info Traces {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 6} [pod/my-nodejs-68f7846995-cqmnr/myapp] Hi l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | CMD | DONE | l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | CATCH | DONE | l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | CLEANUP | RUN | l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-oriented-stag/my-nodejs l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-oriented-stag/my-nodejs l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-oriented-stag/my-nodejs l.go:53: | 07:44:35 | instrumentation-nodejs | step-01  | CLEANUP | DONE | l.go:53: | 07:44:35 | instrumentation-nodejs | step-00  | CLEANUP | RUN | l.go:53: | 07:44:35 | instrumentation-nodejs | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-oriented-stag/nodejs l.go:53: | 07:44:35 | instrumentation-nodejs | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-oriented-stag/nodejs l.go:53: | 07:44:35 | instrumentation-nodejs | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-oriented-stag/nodejs l.go:53: | 07:44:35 | instrumentation-nodejs | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-oriented-stag/sidecar l.go:53: | 07:44:35 | instrumentation-nodejs | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-oriented-stag/sidecar l.go:53: | 07:44:35 | instrumentation-nodejs | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-oriented-stag/sidecar l.go:53: | 07:44:35 | instrumentation-nodejs | step-00  | CLEANUP | DONE | l.go:53: | 07:44:35 | instrumentation-nodejs | 
@cleanup | DELETE | RUN | v1/Namespace @ chainsaw-oriented-stag l.go:53: | 07:44:35 | instrumentation-nodejs | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-oriented-stag === NAME chainsaw/instrumentation-nginx-multicontainer l.go:53: | 07:44:38 | instrumentation-nginx-multicontainer | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-awake-mosquito/* === ERROR ------------------------------------------------------------- v1/Pod/chainsaw-awake-mosquito/my-nginx-multi-84d5cb7d5-4x77r ------------------------------------------------------------- * spec.containers[2].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -7,17 +7,27 @@ sidecar.opentelemetry.io/inject: "true" labels: app: my-nginx-multi + name: my-nginx-multi-84d5cb7d5-4x77r namespace: chainsaw-awake-mosquito + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: my-nginx-multi-84d5cb7d5 + uid: c80bce6b-1aa7-4f07-891f-598db570f940 spec: containers: - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: LD_LIBRARY_PATH value: /opt:/opt/opentelemetry-webserver/agent/sdk_lib/lib @@ -42,9 +52,35 @@ - name: OTEL_TRACES_SAMPLER_ARG value: "0.25" - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myapp,k8s.deployment.name=my-nginx-multi,k8s.namespace.name=chainsaw-awake-mosquito,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-nginx-multi-84d5cb7d5,service.instance.id=chainsaw-awake-mosquito.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=1.25.3 + image: nginxinc/nginx-unprivileged:1.25.3 + imagePullPolicy: Always name: myapp - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + ports: + - containerPort: 8765 + protocol: TCP + resources: + 
limits: + cpu: 500m + memory: 500Mi + requests: + cpu: 100m + memory: 100Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsGroup: 3000 + runAsNonRoot: true + runAsUser: 1000 + seccompProfile: + type: RuntimeDefault + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bmrdl readOnly: true - mountPath: /opt/opentelemetry-webserver/agent name: otel-nginx-agent @@ -54,10 +90,12 @@ - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: OTEL_SERVICE_NAME value: my-nginx-multi @@ -80,33 +118,318 @@ - name: OTEL_TRACES_SAMPLER_ARG value: "0.25" - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myrabbit,k8s.deployment.name=my-nginx-multi,k8s.namespace.name=chainsaw-awake-mosquito,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-nginx-multi-84d5cb7d5,service.instance.id=chainsaw-awake-mosquito.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myrabbit,service.version=rabbitmq image: rabbitmq + imagePullPolicy: Always name: myrabbit - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsGroup: 3000 + runAsNonRoot: true + runAsUser: 1000 + seccompProfile: + type: RuntimeDefault + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bmrdl readOnly: true - args: - - --feature-gates=-component.UseLocalHostAsDefaultHost - --config=env:OTEL_CONFIG + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_CONFIG + value: 
| + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + exporters: + debug: null + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + traces: + exporters: + - debug + receivers: + - otlp + - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.uid + - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.deployment.name=my-nginx-multi,k8s.deployment.uid=def6126d-6992-4b72-8c71-ab419c488d69,k8s.namespace.name=chainsaw-awake-mosquito,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-nginx-multi-84d5cb7d5,k8s.replicaset.uid=c80bce6b-1aa7-4f07-891f-598db570f940 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imagePullPolicy: IfNotPresent name: otc-container + ports: + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bmrdl + readOnly: true initContainers: - - name: otel-agent-source-container-clone - - name: otel-agent-attach-nginx + - args: + - cp -r /etc/nginx/* /opt/opentelemetry-webserver/source-conf && export NGINX_VERSION=$( + { nginx -v ; } 2>&1 ) && echo ${NGINX_VERSION##*/} > 
/opt/opentelemetry-webserver/source-conf/version.txt + command: + - /bin/sh + - -c + env: + - name: LD_LIBRARY_PATH + value: /opt + image: nginxinc/nginx-unprivileged:1.25.3 + imagePullPolicy: Always + name: otel-agent-source-container-clone + ports: + - containerPort: 8765 + protocol: TCP + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 1m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsGroup: 3000 + runAsNonRoot: true + runAsUser: 1000 + seccompProfile: + type: RuntimeDefault + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /etc/nginx/nginx.conf + name: nginx-conf + readOnly: true + subPath: nginx.conf + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bmrdl + readOnly: true + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - args: + - echo -e $OTEL_NGINX_I13N_SCRIPT > /opt/opentelemetry-webserver/agent/nginx_instrumentation.sh + && chmod +x /opt/opentelemetry-webserver/agent/nginx_instrumentation.sh && cat + /opt/opentelemetry-webserver/agent/nginx_instrumentation.sh && /opt/opentelemetry-webserver/agent/nginx_instrumentation.sh + "/opt/opentelemetry-webserver/agent" "/opt/opentelemetry-webserver/source-conf" + "nginx.conf" "<>" + command: + - /bin/sh + - -c + env: + - name: OTEL_NGINX_AGENT_CONF + value: | + NginxModuleEnabled ON; + NginxModuleOtelExporterEndpoint http://localhost:4317; + NginxModuleOtelMaxQueueSize 4096; + NginxModuleOtelSpanExporter otlp; + NginxModuleResolveBackends ON; + NginxModuleServiceInstanceId <>; + NginxModuleServiceName my-nginx-multi; + NginxModuleServiceNamespace chainsaw-awake-mosquito; + NginxModuleTraceAsError ON; + - name: OTEL_NGINX_I13N_SCRIPT + value: "\nNGINX_AGENT_DIR_FULL=$1\t\\n\nNGINX_AGENT_CONF_DIR_FULL=$2 \\n\nNGINX_CONFIG_FILE=$3 + \\n\nNGINX_SID_PLACEHOLDER=$4 \\n\nNGINX_SID_VALUE=$5 \\n\necho \"Input 
Parameters: + $@\" \\n\nset -x \\n\n\\n\ncp -r /opt/opentelemetry/* ${NGINX_AGENT_DIR_FULL} + \\n\n\\n\nNGINX_VERSION=$(cat ${NGINX_AGENT_CONF_DIR_FULL}/version.txt) \\n\nNGINX_AGENT_LOG_DIR=$(echo + \"${NGINX_AGENT_DIR_FULL}/logs\" | sed 's,/,\\\\/,g') \\n\n\\n\ncat ${NGINX_AGENT_DIR_FULL}/conf/opentelemetry_sdk_log4cxx.xml.template + | sed 's,__agent_log_dir__,'${NGINX_AGENT_LOG_DIR}',g' > ${NGINX_AGENT_DIR_FULL}/conf/opentelemetry_sdk_log4cxx.xml + \\n\necho -e $OTEL_NGINX_AGENT_CONF > ${NGINX_AGENT_CONF_DIR_FULL}/opentelemetry_agent.conf + \\n\nsed -i \"s,${NGINX_SID_PLACEHOLDER},${OTEL_NGINX_SERVICE_INSTANCE_ID},g\" + ${NGINX_AGENT_CONF_DIR_FULL}/opentelemetry_agent.conf \\n\nsed -i \"1s,^,load_module + ${NGINX_AGENT_DIR_FULL}/WebServerModule/Nginx/${NGINX_VERSION}/ngx_http_opentelemetry_module.so;\\\\n,g\" + ${NGINX_AGENT_CONF_DIR_FULL}/${NGINX_CONFIG_FILE} \\n\nsed -i \"1s,^,env OTEL_RESOURCE_ATTRIBUTES;\\\\n,g\" + ${NGINX_AGENT_CONF_DIR_FULL}/${NGINX_CONFIG_FILE} \\n\nmv ${NGINX_AGENT_CONF_DIR_FULL}/opentelemetry_agent.conf + \ ${NGINX_AGENT_CONF_DIR_FULL}/conf.d \\n\n\t\t" + - name: OTEL_NGINX_SERVICE_INSTANCE_ID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4 + imagePullPolicy: IfNotPresent + name: otel-agent-attach-nginx + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 1m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsGroup: 3000 + runAsNonRoot: true + runAsUser: 1000 + seccompProfile: + type: RuntimeDefault + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-nginx-agent + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bmrdl + readOnly: 
true status: containerStatuses: - - name: myapp + - containerID: cri-o://71c404c89d4d2f397fb70fedeab07a3ad63b674a74cfb994e6598bef4ef58128 + image: docker.io/nginxinc/nginx-unprivileged:1.25.3 + imageID: docker.io/nginxinc/nginx-unprivileged@sha256:352cdd57b8e29ac484d8ad31a0624ecd16e61662dbec863ee8b2b67ef90f537e + lastState: {} + name: myapp ready: true + restartCount: 0 started: true - - name: myrabbit + state: + running: + startedAt: "2025-02-03T07:38:45Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bmrdl + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-nginx-agent + - mountPath: /etc/nginx + name: otel-nginx-conf-dir + - containerID: cri-o://5dc13dc0953785a5cedeb61dd246c8cff39c5f4dee4e3eda63da6d41ce5b664c + image: docker.io/library/rabbitmq:latest + imageID: docker.io/library/rabbitmq@sha256:4fc6a2c182ab768f233f602a965684e1db91f0b01562d4efa5ca35de8db148db + lastState: {} + name: myrabbit ready: true + restartCount: 0 started: true - - name: otc-container + state: + running: + startedAt: "2025-02-03T07:38:48Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bmrdl + readOnly: true + recursiveReadOnly: Disabled + - containerID: cri-o://b6afcdc84370b3d3fb509782dd1a57e4484ffa0a9b4045da2a23a8c5755c0270 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + lastState: {} + name: otc-container ready: true + restartCount: 0 started: true + state: + running: + startedAt: "2025-02-03T07:38:48Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bmrdl + readOnly: true + recursiveReadOnly: Disabled initContainerStatuses: - - name: 
otel-agent-source-container-clone + - containerID: cri-o://e6604e0d9a1993a7cfa560735b96e20c0e61fb3315c0976e597421b1375fceea + image: docker.io/nginxinc/nginx-unprivileged:1.25.3 + imageID: docker.io/nginxinc/nginx-unprivileged@sha256:352cdd57b8e29ac484d8ad31a0624ecd16e61662dbec863ee8b2b67ef90f537e + lastState: {} + name: otel-agent-source-container-clone ready: true - - name: otel-agent-attach-nginx + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://e6604e0d9a1993a7cfa560735b96e20c0e61fb3315c0976e597421b1375fceea + exitCode: 0 + finishedAt: "2025-02-03T07:38:42Z" + reason: Completed + startedAt: "2025-02-03T07:38:42Z" + volumeMounts: + - mountPath: /etc/nginx/nginx.conf + name: nginx-conf + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bmrdl + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - containerID: cri-o://00fb78c98af84ce1474b185ff80dbc4fd8be48761b440968f7e567d321d8943b + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4 + imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd@sha256:4275db94ebbf4b9f78762b248ecab219790bbb98c59cf2bf5b3383908b727cfe + lastState: {} + name: otel-agent-attach-nginx ready: true + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://00fb78c98af84ce1474b185ff80dbc4fd8be48761b440968f7e567d321d8943b + exitCode: 0 + finishedAt: "2025-02-03T07:38:44Z" + reason: Completed + startedAt: "2025-02-03T07:38:44Z" + volumeMounts: + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-nginx-agent + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bmrdl + readOnly: true + recursiveReadOnly: Disabled phase: Running l.go:53: | 
07:44:38 | instrumentation-nginx-multicontainer | step-01  | TRY | DONE |
l.go:53: | 07:44:38 | instrumentation-nginx-multicontainer | step-01  | CATCH | RUN |
l.go:53: | 07:44:38 | instrumentation-nginx-multicontainer | step-01  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl logs --prefix -l app=my-nginx-multi -n chainsaw-awake-mosquito --all-containers
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-01  | CMD | LOG |
=== STDOUT
[pod/my-nginx-multi-84d5cb7d5-4x77r/myapp] /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
[pod/my-nginx-multi-84d5cb7d5-4x77r/myapp] /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
[pod/my-nginx-multi-84d5cb7d5-4x77r/myapp] /docker-entrypoint.sh: Configuration complete; ready for start up
[pod/my-nginx-multi-84d5cb7d5-4x77r/myapp] 2025/02/03 07:38:45 [error] 1#1: mod_opentelemetry: ngx_http_opentelemetry_init: Starting Opentelemetry Module init
[pod/my-nginx-multi-84d5cb7d5-4x77r/myapp] nginx: [error] mod_opentelemetry: ngx_http_opentelemetry_init: Starting Opentelemetry Module init
[pod/my-nginx-multi-84d5cb7d5-4x77r/myapp] 2025/02/03 07:38:45 [error] 1#1: mod_opentelemetry: ngx_http_opentelemetry_init: Registering handlers for modules in different phases
[pod/my-nginx-multi-84d5cb7d5-4x77r/myapp] nginx: [error] mod_opentelemetry: ngx_http_opentelemetry_init: Registering handlers for modules in different phases
[pod/my-nginx-multi-84d5cb7d5-4x77r/myapp] 2025/02/03 07:38:45 [error] 1#1: mod_opentelemetry: ngx_http_opentelemetry_init: Opentelemetry Module init completed!
[pod/my-nginx-multi-84d5cb7d5-4x77r/myapp] nginx: [error] mod_opentelemetry: ngx_http_opentelemetry_init: Opentelemetry Module init completed!
[pod/my-nginx-multi-84d5cb7d5-4x77r/myapp] 2025/02/03 07:38:45 [error] 23#23: mod_opentelemetry: ngx_http_opentelemetry_init_worker: Initializing Nginx Worker for process with PID: 23
[pod/my-nginx-multi-84d5cb7d5-4x77r/myrabbit] 2025-02-03 07:38:51.048685+00:00 [info] <0.583.0> Resetting node maintenance status
[pod/my-nginx-multi-84d5cb7d5-4x77r/myrabbit] 2025-02-03 07:38:51.092931+00:00 [info] <0.606.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
[pod/my-nginx-multi-84d5cb7d5-4x77r/myrabbit] 2025-02-03 07:38:51.093085+00:00 [info] <0.583.0> Ready to start client connection listeners
[pod/my-nginx-multi-84d5cb7d5-4x77r/myrabbit] 2025-02-03 07:38:51.094435+00:00 [info] <0.650.0> started TCP listener on [::]:5672
[pod/my-nginx-multi-84d5cb7d5-4x77r/myrabbit] completed with 3 plugins.
[pod/my-nginx-multi-84d5cb7d5-4x77r/myrabbit] 2025-02-03 07:38:51.159428+00:00 [info] <0.583.0> Server startup complete; 3 plugins started.
[pod/my-nginx-multi-84d5cb7d5-4x77r/myrabbit] 2025-02-03 07:38:51.159428+00:00 [info] <0.583.0> * rabbitmq_prometheus
[pod/my-nginx-multi-84d5cb7d5-4x77r/myrabbit] 2025-02-03 07:38:51.159428+00:00 [info] <0.583.0> * rabbitmq_management_agent
[pod/my-nginx-multi-84d5cb7d5-4x77r/myrabbit] 2025-02-03 07:38:51.159428+00:00 [info] <0.583.0> * rabbitmq_web_dispatch
[pod/my-nginx-multi-84d5cb7d5-4x77r/myrabbit] 2025-02-03 07:38:51.249067+00:00 [info] <0.10.0> Time to start RabbitMQ: 2585 ms
[pod/my-nginx-multi-84d5cb7d5-4x77r/otc-container] 2025-02-03T07:38:48.609Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/my-nginx-multi-84d5cb7d5-4x77r/otc-container] 2025-02-03T07:38:48.609Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/my-nginx-multi-84d5cb7d5-4x77r/otc-container] 2025-02-03T07:38:48.609Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"}
[pod/my-nginx-multi-84d5cb7d5-4x77r/otc-container] 2025-02-03T07:38:48.622Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/my-nginx-multi-84d5cb7d5-4x77r/otc-container] 2025-02-03T07:38:48.622Z info extensions/extensions.go:39 Starting extensions...
[pod/my-nginx-multi-84d5cb7d5-4x77r/otc-container] 2025-02-03T07:38:48.622Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-nginx-multi-84d5cb7d5-4x77r/otc-container] 2025-02-03T07:38:48.622Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
[pod/my-nginx-multi-84d5cb7d5-4x77r/otc-container] 2025-02-03T07:38:48.622Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-nginx-multi-84d5cb7d5-4x77r/otc-container] 2025-02-03T07:38:48.622Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
[pod/my-nginx-multi-84d5cb7d5-4x77r/otc-container] 2025-02-03T07:38:48.622Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
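The sidecar that logs its startup here runs entirely off the `OTEL_CONFIG` environment variable the operator injects (visible in the pod diff earlier). Reassembled as a standalone file — a sketch reconstructed from the log, not an authoritative copy — it is a minimal OTLP-in, debug-out pipeline:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug: null
service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  pipelines:
    traces:
      exporters:
        - debug
      receivers:
        - otlp
```

The `service::telemetry::metrics::address` key is what triggers the deprecation warning printed at startup (newer collectors prefer `service::telemetry::metrics::readers`), and the two `0.0.0.0` endpoints account for both "exposes this server to every network interface" warnings.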
[pod/my-nginx-multi-84d5cb7d5-4x77r/otel-agent-attach-nginx] + cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template
[pod/my-nginx-multi-84d5cb7d5-4x77r/otel-agent-attach-nginx] + sed s,__agent_log_dir__,/opt/opentelemetry-webserver/agent/logs,g
[pod/my-nginx-multi-84d5cb7d5-4x77r/otel-agent-attach-nginx] + echo -e NginxModuleEnabled 'ON;' NginxModuleOtelExporterEndpoint 'http://localhost:4317;' NginxModuleOtelMaxQueueSize '4096;' NginxModuleOtelSpanExporter 'otlp;' NginxModuleResolveBackends 'ON;' NginxModuleServiceInstanceId '<>;' NginxModuleServiceName 'my-nginx-multi;' NginxModuleServiceNamespace 'chainsaw-awake-mosquito;' NginxModuleTraceAsError 'ON;'
[pod/my-nginx-multi-84d5cb7d5-4x77r/otel-agent-attach-nginx] + sed -i 's,<>,my-nginx-multi-84d5cb7d5-4x77r,g' /opt/opentelemetry-webserver/source-conf/opentelemetry_agent.conf
[pod/my-nginx-multi-84d5cb7d5-4x77r/otel-agent-attach-nginx] + sed -i '1s,^,load_module /opt/opentelemetry-webserver/agent/WebServerModule/Nginx/1.25.3/ngx_http_opentelemetry_module.so;\n,g' /opt/opentelemetry-webserver/source-conf/nginx.conf
[pod/my-nginx-multi-84d5cb7d5-4x77r/otel-agent-attach-nginx] + sed -i '1s,^,env OTEL_RESOURCE_ATTRIBUTES;\n,g' /opt/opentelemetry-webserver/source-conf/nginx.conf
[pod/my-nginx-multi-84d5cb7d5-4x77r/otel-agent-attach-nginx] + mv /opt/opentelemetry-webserver/source-conf/opentelemetry_agent.conf /opt/opentelemetry-webserver/source-conf/conf.d
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-01  | CMD | DONE |
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-01  | CATCH | DONE |
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-01  | CLEANUP | RUN |
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-01  | DELETE | RUN | v1/ConfigMap @ chainsaw-awake-mosquito/nginx-conf
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-01  | DELETE | OK | v1/ConfigMap @ chainsaw-awake-mosquito/nginx-conf
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-01  | DELETE | DONE | v1/ConfigMap @ chainsaw-awake-mosquito/nginx-conf
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-awake-mosquito/my-nginx-multi
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-awake-mosquito/my-nginx-multi
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-awake-mosquito/my-nginx-multi
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-01  | CLEANUP | DONE |
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-00  | CLEANUP | RUN |
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-awake-mosquito/nginx
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-awake-mosquito/nginx
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-awake-mosquito/nginx
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-mosquito/sidecar
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-mosquito/sidecar
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-mosquito/sidecar
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | step-00  | CLEANUP | DONE |
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-awake-mosquito
l.go:53: | 07:44:39 | instrumentation-nginx-multicontainer | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-awake-mosquito
l.go:53: | 07:44:45 | instrumentation-nginx-multicontainer | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-awake-mosquito
=== CONT chainsaw/instrumentation-java-other-ns
l.go:53: | 07:44:45 | instrumentation-java-other-ns | @setup  | CREATE | OK | v1/Namespace @ chainsaw-upward-burro
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-01  | TRY | RUN |
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-01  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-upward-burro openshift.io/sa.scc.uid-range=1000/1000 --overwrite
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-01  | CMD | LOG |
=== STDOUT
namespace/chainsaw-upward-burro annotated
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-01  | CMD | DONE |
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-01  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-upward-burro openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-01  | CMD | LOG |
=== STDOUT
namespace/chainsaw-upward-burro annotated
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-01  | CMD | DONE |
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-01  | DELETE | RUN | v1/Namespace @ my-other-ns
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-01  | DELETE | DONE | v1/Namespace @ my-other-ns
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-01  | TRY | DONE |
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-02  | TRY | RUN |
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-02  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-upward-burro/sidecar
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-02  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-upward-burro/sidecar
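The two `kubectl annotate` commands in this step pin OpenShift's SCC-assigned UID and supplemental-group ranges on the test namespace; a plausible reading is that this makes the `runAsUser: 1000` / `runAsGroup: 3000` values asserted elsewhere in these tests deterministic instead of cluster-assigned. A sketch that builds the same commands (it only prints them; running them requires `kubectl` and sufficient privileges on a real cluster):

```shell
# Hypothetical helper: emit the SCC-pinning annotate commands for a namespace.
NS="chainsaw-upward-burro"   # namespace name taken from this log
CMDS=""
for ANNOTATION in \
  "openshift.io/sa.scc.uid-range=1000/1000" \
  "openshift.io/sa.scc.supplemental-groups=3000/3000"
do
  # --overwrite matches the test's behavior of re-annotating existing namespaces
  CMDS="${CMDS}kubectl annotate namespace ${NS} ${ANNOTATION} --overwrite
"
done
printf '%s' "$CMDS"
```

On a cluster you would pipe the output to `sh` (or run the lines directly); here it stops at printing so the sketch stays side-effect free.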
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-02  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-upward-burro/sidecar
l.go:53: | 07:44:45 | instrumentation-java-other-ns | step-02  | APPLY | RUN | v1/Namespace @ my-other-ns
l.go:53: | 07:44:46 | instrumentation-java-other-ns | step-02  | CREATE | OK | v1/Namespace @ my-other-ns
l.go:53: | 07:44:46 | instrumentation-java-other-ns | step-02  | APPLY | DONE | v1/Namespace @ my-other-ns
l.go:53: | 07:44:46 | instrumentation-java-other-ns | step-02  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ my-other-ns/java
l.go:53: | 07:44:46 | instrumentation-java-other-ns | step-02  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ my-other-ns/java
l.go:53: | 07:44:46 | instrumentation-java-other-ns | step-02  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ my-other-ns/java
l.go:53: | 07:44:46 | instrumentation-java-other-ns | step-02  | TRY | DONE |
l.go:53: | 07:44:46 | instrumentation-java-other-ns | step-03  | TRY | RUN |
l.go:53: | 07:44:46 | instrumentation-java-other-ns | step-03  | APPLY | RUN | apps/v1/Deployment @ chainsaw-upward-burro/my-java-other-ns
l.go:53: | 07:44:46 | instrumentation-java-other-ns | step-03  | CREATE | OK | apps/v1/Deployment @ chainsaw-upward-burro/my-java-other-ns
l.go:53: | 07:44:46 | instrumentation-java-other-ns | step-03  | APPLY | DONE | apps/v1/Deployment @ chainsaw-upward-burro/my-java-other-ns
l.go:53: | 07:44:46 | instrumentation-java-other-ns | step-03  | ASSERT | RUN | v1/Pod @ chainsaw-upward-burro/*
=== NAME chainsaw/instrumentation-nginx-contnr-secctx
l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-top-frog/*
=== ERROR
----------------------------------------------------------------
v1/Pod/chainsaw-top-frog/my-nginx-contnr-secctx-6c6559ddb5-mfpmt
----------------------------------------------------------------
* spec.containers[1].args: Invalid
value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -6,17 +6,27 @@ sidecar.opentelemetry.io/inject: "true" labels: app: my-nginx-contnr-secctx + name: my-nginx-contnr-secctx-6c6559ddb5-mfpmt namespace: chainsaw-top-frog + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: my-nginx-contnr-secctx-6c6559ddb5 + uid: 4b20886f-5b12-49f2-b689-d007364f7c8f spec: containers: - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: LD_LIBRARY_PATH value: /opt:/opt/opentelemetry-webserver/agent/sdk_lib/lib @@ -41,33 +51,310 @@ - name: OTEL_TRACES_SAMPLER_ARG value: "0.25" - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myapp,k8s.deployment.name=my-nginx-contnr-secctx,k8s.namespace.name=chainsaw-top-frog,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-nginx-contnr-secctx-6c6559ddb5,service.instance.id=chainsaw-top-frog.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=1.25.3 + image: nginxinc/nginx-unprivileged:1.25.3 + imagePullPolicy: Always name: myapp - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + ports: + - containerPort: 8765 + protocol: TCP + resources: + limits: + cpu: "1" + memory: 500Mi + requests: + cpu: 250m + memory: 100Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsGroup: 3000 + runAsNonRoot: true + runAsUser: 1000 + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-8ftbw readOnly: true - mountPath: /opt/opentelemetry-webserver/agent name: otel-nginx-agent - mountPath: /etc/nginx name: otel-nginx-conf-dir - args: - - 
--feature-gates=-component.UseLocalHostAsDefaultHost - --config=env:OTEL_CONFIG + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_CONFIG + value: | + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + exporters: + debug: null + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + traces: + exporters: + - debug + receivers: + - otlp + - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.uid + - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.deployment.name=my-nginx-contnr-secctx,k8s.deployment.uid=7b0045fd-31fc-4a21-a692-505acb73811b,k8s.namespace.name=chainsaw-top-frog,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-nginx-contnr-secctx-6c6559ddb5,k8s.replicaset.uid=4b20886f-5b12-49f2-b689-d007364f7c8f + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imagePullPolicy: IfNotPresent name: otc-container + ports: + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + runAsUser: 1000 + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-8ftbw + readOnly: true initContainers: - - name: 
otel-agent-source-container-clone - - name: otel-agent-attach-nginx + - args: + - cp -r /etc/nginx/* /opt/opentelemetry-webserver/source-conf && export NGINX_VERSION=$( + { nginx -v ; } 2>&1 ) && echo ${NGINX_VERSION##*/} > /opt/opentelemetry-webserver/source-conf/version.txt + command: + - /bin/sh + - -c + env: + - name: LD_LIBRARY_PATH + value: /opt + image: nginxinc/nginx-unprivileged:1.25.3 + imagePullPolicy: Always + name: otel-agent-source-container-clone + ports: + - containerPort: 8765 + protocol: TCP + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 1m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsGroup: 3000 + runAsNonRoot: true + runAsUser: 1000 + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /etc/nginx/nginx.conf + name: nginx-conf + readOnly: true + subPath: nginx.conf + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-8ftbw + readOnly: true + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - args: + - echo -e $OTEL_NGINX_I13N_SCRIPT > /opt/opentelemetry-webserver/agent/nginx_instrumentation.sh + && chmod +x /opt/opentelemetry-webserver/agent/nginx_instrumentation.sh && cat + /opt/opentelemetry-webserver/agent/nginx_instrumentation.sh && /opt/opentelemetry-webserver/agent/nginx_instrumentation.sh + "/opt/opentelemetry-webserver/agent" "/opt/opentelemetry-webserver/source-conf" + "nginx.conf" "<>" + command: + - /bin/sh + - -c + env: + - name: OTEL_NGINX_AGENT_CONF + value: | + NginxModuleEnabled ON; + NginxModuleOtelExporterEndpoint http://localhost:4317; + NginxModuleOtelMaxQueueSize 4096; + NginxModuleOtelSpanExporter otlp; + NginxModuleResolveBackends ON; + NginxModuleServiceInstanceId <>; + NginxModuleServiceName my-nginx-contnr-secctx; + NginxModuleServiceNamespace chainsaw-top-frog; + NginxModuleTraceAsError ON; + - name: 
OTEL_NGINX_I13N_SCRIPT + value: "\nNGINX_AGENT_DIR_FULL=$1\t\\n\nNGINX_AGENT_CONF_DIR_FULL=$2 \\n\nNGINX_CONFIG_FILE=$3 + \\n\nNGINX_SID_PLACEHOLDER=$4 \\n\nNGINX_SID_VALUE=$5 \\n\necho \"Input Parameters: + $@\" \\n\nset -x \\n\n\\n\ncp -r /opt/opentelemetry/* ${NGINX_AGENT_DIR_FULL} + \\n\n\\n\nNGINX_VERSION=$(cat ${NGINX_AGENT_CONF_DIR_FULL}/version.txt) \\n\nNGINX_AGENT_LOG_DIR=$(echo + \"${NGINX_AGENT_DIR_FULL}/logs\" | sed 's,/,\\\\/,g') \\n\n\\n\ncat ${NGINX_AGENT_DIR_FULL}/conf/opentelemetry_sdk_log4cxx.xml.template + | sed 's,__agent_log_dir__,'${NGINX_AGENT_LOG_DIR}',g' > ${NGINX_AGENT_DIR_FULL}/conf/opentelemetry_sdk_log4cxx.xml + \\n\necho -e $OTEL_NGINX_AGENT_CONF > ${NGINX_AGENT_CONF_DIR_FULL}/opentelemetry_agent.conf + \\n\nsed -i \"s,${NGINX_SID_PLACEHOLDER},${OTEL_NGINX_SERVICE_INSTANCE_ID},g\" + ${NGINX_AGENT_CONF_DIR_FULL}/opentelemetry_agent.conf \\n\nsed -i \"1s,^,load_module + ${NGINX_AGENT_DIR_FULL}/WebServerModule/Nginx/${NGINX_VERSION}/ngx_http_opentelemetry_module.so;\\\\n,g\" + ${NGINX_AGENT_CONF_DIR_FULL}/${NGINX_CONFIG_FILE} \\n\nsed -i \"1s,^,env OTEL_RESOURCE_ATTRIBUTES;\\\\n,g\" + ${NGINX_AGENT_CONF_DIR_FULL}/${NGINX_CONFIG_FILE} \\n\nmv ${NGINX_AGENT_CONF_DIR_FULL}/opentelemetry_agent.conf + \ ${NGINX_AGENT_CONF_DIR_FULL}/conf.d \\n\n\t\t" + - name: OTEL_NGINX_SERVICE_INSTANCE_ID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4 + imagePullPolicy: IfNotPresent + name: otel-agent-attach-nginx + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 1m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsGroup: 3000 + runAsNonRoot: true + runAsUser: 1000 + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-nginx-agent + - mountPath: 
/opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-8ftbw + readOnly: true status: containerStatuses: - - name: myapp + - containerID: cri-o://48415832ba07264f963d8a560feb833eb5721233880488f61d4c42a296b997e1 + image: docker.io/nginxinc/nginx-unprivileged:1.25.3 + imageID: docker.io/nginxinc/nginx-unprivileged@sha256:352cdd57b8e29ac484d8ad31a0624ecd16e61662dbec863ee8b2b67ef90f537e + lastState: {} + name: myapp ready: true + restartCount: 0 started: true - - name: otc-container + state: + running: + startedAt: "2025-02-03T07:39:00Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-8ftbw + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-nginx-agent + - mountPath: /etc/nginx + name: otel-nginx-conf-dir + - containerID: cri-o://b362b6bf7344e7e3f59fec105f75045a4770b9e5adb4e208c586f25d942deba4 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + lastState: {} + name: otc-container ready: true + restartCount: 0 started: true + state: + running: + startedAt: "2025-02-03T07:39:00Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-8ftbw + readOnly: true + recursiveReadOnly: Disabled initContainerStatuses: - - name: otel-agent-source-container-clone + - containerID: cri-o://6ff7a7e2dfb1fc6a825266477e9aff601d69bdcc33d7dc08341743a8b3ce4461 + image: docker.io/nginxinc/nginx-unprivileged:1.25.3 + imageID: docker.io/nginxinc/nginx-unprivileged@sha256:352cdd57b8e29ac484d8ad31a0624ecd16e61662dbec863ee8b2b67ef90f537e + lastState: {} + name: otel-agent-source-container-clone ready: true - - name: 
otel-agent-attach-nginx + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://6ff7a7e2dfb1fc6a825266477e9aff601d69bdcc33d7dc08341743a8b3ce4461 + exitCode: 0 + finishedAt: "2025-02-03T07:38:57Z" + reason: Completed + startedAt: "2025-02-03T07:38:57Z" + volumeMounts: + - mountPath: /etc/nginx/nginx.conf + name: nginx-conf + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-8ftbw + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - containerID: cri-o://f67f0942cdb5dd12d3d528d1e2c5f52ec573beb04a6fb87f3d32cab0d971d70d + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4 + imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd@sha256:4275db94ebbf4b9f78762b248ecab219790bbb98c59cf2bf5b3383908b727cfe + lastState: {} + name: otel-agent-attach-nginx ready: true + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://f67f0942cdb5dd12d3d528d1e2c5f52ec573beb04a6fb87f3d32cab0d971d70d + exitCode: 0 + finishedAt: "2025-02-03T07:38:58Z" + reason: Completed + startedAt: "2025-02-03T07:38:58Z" + volumeMounts: + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-nginx-agent + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-8ftbw + readOnly: true + recursiveReadOnly: Disabled phase: Running l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | TRY | DONE | l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | CATCH | RUN | l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | CMD | RUN | === COMMAND /usr/local/bin/kubectl logs --prefix -l app=my-nginx-contnr-secctx -n chainsaw-top-frog --all-containers l.go:53: | 07:44:53 | 
instrumentation-nginx-contnr-secctx | step-01  | CMD | LOG | === STDOUT [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otel-agent-attach-nginx] + cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otel-agent-attach-nginx] + sed s,__agent_log_dir__,/opt/opentelemetry-webserver/agent/logs,g [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otel-agent-attach-nginx] + echo -e NginxModuleEnabled 'ON;' NginxModuleOtelExporterEndpoint 'http://localhost:4317;' NginxModuleOtelMaxQueueSize '4096;' NginxModuleOtelSpanExporter 'otlp;' NginxModuleResolveBackends 'ON;' NginxModuleServiceInstanceId '<>;' NginxModuleServiceName 'my-nginx-contnr-secctx;' NginxModuleServiceNamespace 'chainsaw-top-frog;' NginxModuleTraceAsError 'ON;' [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otel-agent-attach-nginx] + sed -i 's,<>,my-nginx-contnr-secctx-6c6559ddb5-mfpmt,g' /opt/opentelemetry-webserver/source-conf/opentelemetry_agent.conf [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otel-agent-attach-nginx] + sed -i '1s,^,load_module /opt/opentelemetry-webserver/agent/WebServerModule/Nginx/1.25.3/ngx_http_opentelemetry_module.so;\n,g' /opt/opentelemetry-webserver/source-conf/nginx.conf [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otel-agent-attach-nginx] + sed -i '1s,^,env OTEL_RESOURCE_ATTRIBUTES;\n,g' /opt/opentelemetry-webserver/source-conf/nginx.conf [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otel-agent-attach-nginx] + mv /opt/opentelemetry-webserver/source-conf/opentelemetry_agent.conf /opt/opentelemetry-webserver/source-conf/conf.d [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/myapp] /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/myapp] /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/myapp] /docker-entrypoint.sh: Configuration complete; ready for start up 
[pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/myapp] 2025/02/03 07:39:00 [error] 1#1: mod_opentelemetry: ngx_http_opentelemetry_init: Starting Opentelemetry Module init [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/myapp] nginx: [error] mod_opentelemetry: ngx_http_opentelemetry_init: Starting Opentelemetry Module init [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/myapp] 2025/02/03 07:39:00 [error] 1#1: mod_opentelemetry: ngx_http_opentelemetry_init: Registering handlers for modules in different phases [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/myapp] nginx: [error] mod_opentelemetry: ngx_http_opentelemetry_init: Registering handlers for modules in different phases [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/myapp] 2025/02/03 07:39:00 [error] 1#1: mod_opentelemetry: ngx_http_opentelemetry_init: Opentelemetry Module init completed! [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/myapp] nginx: [error] mod_opentelemetry: ngx_http_opentelemetry_init: Opentelemetry Module init completed! [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/myapp] 2025/02/03 07:39:00 [error] 23#23: mod_opentelemetry: ngx_http_opentelemetry_init_worker: Initializing Nginx Worker for process with PID: 23 [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otc-container] 2025-02-03T07:39:00.312Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otc-container] 2025-02-03T07:39:00.312Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"} [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otc-container] 2025-02-03T07:39:00.312Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"} [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otc-container] 2025-02-03T07:39:00.313Z info service@v0.113.0/service.go:238 Starting otelcol... 
{"Version": "0.113.0", "NumCPU": 4} [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otc-container] 2025-02-03T07:39:00.313Z info extensions/extensions.go:39 Starting extensions... [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otc-container] 2025-02-03T07:39:00.313Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otc-container] 2025-02-03T07:39:00.313Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"} [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otc-container] 2025-02-03T07:39:00.313Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otc-container] 2025-02-03T07:39:00.313Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"} [pod/my-nginx-contnr-secctx-6c6559ddb5-mfpmt/otc-container] 2025-02-03T07:39:00.314Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data. 
l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | CMD | DONE | l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | CATCH | DONE | l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | CLEANUP | RUN | l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | DELETE | RUN | v1/ConfigMap @ chainsaw-top-frog/nginx-conf l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | DELETE | OK | v1/ConfigMap @ chainsaw-top-frog/nginx-conf l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | DELETE | DONE | v1/ConfigMap @ chainsaw-top-frog/nginx-conf l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-top-frog/my-nginx-contnr-secctx l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-top-frog/my-nginx-contnr-secctx l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-top-frog/my-nginx-contnr-secctx l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-01  | CLEANUP | DONE | l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-00  | CLEANUP | RUN | l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-top-frog/nginx l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-top-frog/nginx l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-top-frog/nginx l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-top-frog/sidecar l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-00  | DELETE | OK | 
opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-top-frog/sidecar l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-top-frog/sidecar l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | step-00  | CLEANUP | DONE | l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-top-frog l.go:53: | 07:44:53 | instrumentation-nginx-contnr-secctx | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-top-frog l.go:53: | 07:45:00 | instrumentation-nginx-contnr-secctx | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-top-frog === CONT chainsaw/instrumentation-dotnet-multicontainer l.go:53: | 07:45:00 | instrumentation-dotnet-multicontainer | @setup  | CREATE | OK | v1/Namespace @ chainsaw-fit-worm l.go:53: | 07:45:00 | instrumentation-dotnet-multicontainer | step-00  | TRY | RUN | l.go:53: | 07:45:00 | instrumentation-dotnet-multicontainer | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-fit-worm openshift.io/sa.scc.uid-range=1000/1000 --overwrite l.go:53: | 07:45:00 | instrumentation-dotnet-multicontainer | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-fit-worm annotated l.go:53: | 07:45:00 | instrumentation-dotnet-multicontainer | step-00  | CMD | DONE | l.go:53: | 07:45:00 | instrumentation-dotnet-multicontainer | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-fit-worm openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite l.go:53: | 07:45:00 | instrumentation-dotnet-multicontainer | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-fit-worm annotated l.go:53: | 07:45:00 | instrumentation-dotnet-multicontainer | step-00  | CMD | DONE | l.go:53: | 07:45:00 | instrumentation-dotnet-multicontainer | step-00  | TRY | DONE | l.go:53: | 07:45:00 | instrumentation-dotnet-multicontainer | step-01  | TRY | RUN | l.go:53: | 
07:45:01 | instrumentation-dotnet-multicontainer | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fit-worm/sidecar l.go:53: | 07:45:01 | instrumentation-dotnet-multicontainer | step-01  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fit-worm/sidecar l.go:53: | 07:45:01 | instrumentation-dotnet-multicontainer | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fit-worm/sidecar l.go:53: | 07:45:01 | instrumentation-dotnet-multicontainer | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-fit-worm/dotnet l.go:53: | 07:45:01 | instrumentation-dotnet-multicontainer | step-01  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-fit-worm/dotnet l.go:53: | 07:45:01 | instrumentation-dotnet-multicontainer | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-fit-worm/dotnet l.go:53: | 07:45:01 | instrumentation-dotnet-multicontainer | step-01  | TRY | DONE | l.go:53: | 07:45:01 | instrumentation-dotnet-multicontainer | step-02  | TRY | RUN | l.go:53: | 07:45:01 | instrumentation-dotnet-multicontainer | step-02  | APPLY | RUN | apps/v1/Deployment @ chainsaw-fit-worm/my-dotnet-multi l.go:53: | 07:45:01 | instrumentation-dotnet-multicontainer | step-02  | CREATE | OK | apps/v1/Deployment @ chainsaw-fit-worm/my-dotnet-multi l.go:53: | 07:45:01 | instrumentation-dotnet-multicontainer | step-02  | APPLY | DONE | apps/v1/Deployment @ chainsaw-fit-worm/my-dotnet-multi l.go:53: | 07:45:01 | instrumentation-dotnet-multicontainer | step-02  | ASSERT | RUN | v1/Pod @ chainsaw-fit-worm/* === NAME chainsaw/instrumentation-nodejs l.go:53: | 07:45:22 | instrumentation-nodejs | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-oriented-stag === CONT chainsaw/instrumentation-java l.go:53: | 07:45:22 | instrumentation-java | @setup  | CREATE | OK | v1/Namespace @ chainsaw-talented-walleye l.go:53: | 07:45:22 | 
instrumentation-java | step-00  | TRY | RUN | l.go:53: | 07:45:22 | instrumentation-java | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-talented-walleye openshift.io/sa.scc.uid-range=1000/1000 --overwrite l.go:53: | 07:45:22 | instrumentation-java | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-talented-walleye annotated l.go:53: | 07:45:22 | instrumentation-java | step-00  | CMD | DONE | l.go:53: | 07:45:22 | instrumentation-java | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-talented-walleye openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite l.go:53: | 07:45:22 | instrumentation-java | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-talented-walleye annotated l.go:53: | 07:45:22 | instrumentation-java | step-00  | CMD | DONE | l.go:53: | 07:45:23 | instrumentation-java | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-talented-walleye/sidecar l.go:53: | 07:45:23 | instrumentation-java | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-talented-walleye/sidecar l.go:53: | 07:45:23 | instrumentation-java | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-talented-walleye/sidecar l.go:53: | 07:45:23 | instrumentation-java | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-talented-walleye/java l.go:53: | 07:45:23 | instrumentation-java | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-talented-walleye/java l.go:53: | 07:45:23 | instrumentation-java | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-talented-walleye/java l.go:53: | 07:45:23 | instrumentation-java | step-00  | TRY | DONE | l.go:53: | 07:45:23 | instrumentation-java | step-01  | TRY | RUN | l.go:53: | 07:45:23 | instrumentation-java | step-01  | APPLY | RUN | apps/v1/Deployment @ 
chainsaw-talented-walleye/my-java l.go:53: | 07:45:23 | instrumentation-java | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-talented-walleye/my-java l.go:53: | 07:45:23 | instrumentation-java | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-talented-walleye/my-java l.go:53: | 07:45:23 | instrumentation-java | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-talented-walleye/* === NAME chainsaw/instrumentation-nginx l.go:53: | 07:46:53 | instrumentation-nginx | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-pumped-toad/* === ERROR ---------------------------------------------------- v1/Pod/chainsaw-pumped-toad/my-nginx-7748d8dc7-hcbfz ---------------------------------------------------- * spec.containers[1].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -6,17 +6,27 @@ sidecar.opentelemetry.io/inject: "true" labels: app: my-nginx + name: my-nginx-7748d8dc7-hcbfz namespace: chainsaw-pumped-toad + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: my-nginx-7748d8dc7 + uid: 0f9f7bbd-5aea-41e5-b4c5-61a06dfae622 spec: containers: - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: LD_LIBRARY_PATH value: /opt:/opt/opentelemetry-webserver/agent/sdk_lib/lib @@ -41,6 +51,9 @@ - name: OTEL_TRACES_SAMPLER_ARG value: "0.25" - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myapp,k8s.deployment.name=my-nginx,k8s.namespace.name=chainsaw-pumped-toad,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-nginx-7748d8dc7,service.instance.id=chainsaw-pumped-toad.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=1.25.3 + image: nginxinc/nginx-unprivileged:1.25.3 + imagePullPolicy: Always lifecycle: postStart: exec: @@ 
-49,32 +62,299 @@ - -c - echo Hello from the postStart handler name: myapp - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + ports: + - containerPort: 8765 + protocol: TCP + resources: + limits: + cpu: "1" + memory: 500Mi + requests: + cpu: 250m + memory: 100Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-748nh readOnly: true - mountPath: /opt/opentelemetry-webserver/agent name: otel-nginx-agent - mountPath: /etc/nginx name: otel-nginx-conf-dir - args: - - --feature-gates=-component.UseLocalHostAsDefaultHost - --config=env:OTEL_CONFIG + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_CONFIG + value: | + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + exporters: + debug: null + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + traces: + exporters: + - debug + receivers: + - otlp + - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.uid + - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.deployment.name=my-nginx,k8s.deployment.uid=4d2f98d5-260b-48e7-b984-0e3837e2f029,k8s.namespace.name=chainsaw-pumped-toad,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-nginx-7748d8dc7,k8s.replicaset.uid=0f9f7bbd-5aea-41e5-b4c5-61a06dfae622 + image: 
registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imagePullPolicy: IfNotPresent name: otc-container + ports: + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-748nh + readOnly: true initContainers: - - name: otel-agent-source-container-clone - - name: otel-agent-attach-nginx + - args: + - cp -r /etc/nginx/* /opt/opentelemetry-webserver/source-conf && export NGINX_VERSION=$( + { nginx -v ; } 2>&1 ) && echo ${NGINX_VERSION##*/} > /opt/opentelemetry-webserver/source-conf/version.txt + command: + - /bin/sh + - -c + env: + - name: LD_LIBRARY_PATH + value: /opt + image: nginxinc/nginx-unprivileged:1.25.3 + imagePullPolicy: Always + name: otel-agent-source-container-clone + ports: + - containerPort: 8765 + protocol: TCP + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 1m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /etc/nginx/nginx.conf + name: nginx-conf + readOnly: true + subPath: nginx.conf + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-748nh + readOnly: true + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - args: + - echo -e $OTEL_NGINX_I13N_SCRIPT > /opt/opentelemetry-webserver/agent/nginx_instrumentation.sh + && chmod +x /opt/opentelemetry-webserver/agent/nginx_instrumentation.sh && cat + 
/opt/opentelemetry-webserver/agent/nginx_instrumentation.sh && /opt/opentelemetry-webserver/agent/nginx_instrumentation.sh + "/opt/opentelemetry-webserver/agent" "/opt/opentelemetry-webserver/source-conf" + "nginx.conf" "<>" + command: + - /bin/sh + - -c + env: + - name: OTEL_NGINX_AGENT_CONF + value: | + NginxModuleEnabled ON; + NginxModuleOtelExporterEndpoint http://localhost:4317; + NginxModuleOtelMaxQueueSize 4096; + NginxModuleOtelSpanExporter otlp; + NginxModuleResolveBackends ON; + NginxModuleServiceInstanceId <>; + NginxModuleServiceName my-nginx; + NginxModuleServiceNamespace chainsaw-pumped-toad; + NginxModuleTraceAsError ON; + - name: OTEL_NGINX_I13N_SCRIPT + value: "\nNGINX_AGENT_DIR_FULL=$1\t\\n\nNGINX_AGENT_CONF_DIR_FULL=$2 \\n\nNGINX_CONFIG_FILE=$3 + \\n\nNGINX_SID_PLACEHOLDER=$4 \\n\nNGINX_SID_VALUE=$5 \\n\necho \"Input Parameters: + $@\" \\n\nset -x \\n\n\\n\ncp -r /opt/opentelemetry/* ${NGINX_AGENT_DIR_FULL} + \\n\n\\n\nNGINX_VERSION=$(cat ${NGINX_AGENT_CONF_DIR_FULL}/version.txt) \\n\nNGINX_AGENT_LOG_DIR=$(echo + \"${NGINX_AGENT_DIR_FULL}/logs\" | sed 's,/,\\\\/,g') \\n\n\\n\ncat ${NGINX_AGENT_DIR_FULL}/conf/opentelemetry_sdk_log4cxx.xml.template + | sed 's,__agent_log_dir__,'${NGINX_AGENT_LOG_DIR}',g' > ${NGINX_AGENT_DIR_FULL}/conf/opentelemetry_sdk_log4cxx.xml + \\n\necho -e $OTEL_NGINX_AGENT_CONF > ${NGINX_AGENT_CONF_DIR_FULL}/opentelemetry_agent.conf + \\n\nsed -i \"s,${NGINX_SID_PLACEHOLDER},${OTEL_NGINX_SERVICE_INSTANCE_ID},g\" + ${NGINX_AGENT_CONF_DIR_FULL}/opentelemetry_agent.conf \\n\nsed -i \"1s,^,load_module + ${NGINX_AGENT_DIR_FULL}/WebServerModule/Nginx/${NGINX_VERSION}/ngx_http_opentelemetry_module.so;\\\\n,g\" + ${NGINX_AGENT_CONF_DIR_FULL}/${NGINX_CONFIG_FILE} \\n\nsed -i \"1s,^,env OTEL_RESOURCE_ATTRIBUTES;\\\\n,g\" + ${NGINX_AGENT_CONF_DIR_FULL}/${NGINX_CONFIG_FILE} \\n\nmv ${NGINX_AGENT_CONF_DIR_FULL}/opentelemetry_agent.conf + \ ${NGINX_AGENT_CONF_DIR_FULL}/conf.d \\n\n\t\t" + - name: OTEL_NGINX_SERVICE_INSTANCE_ID + 
valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4 + imagePullPolicy: IfNotPresent + name: otel-agent-attach-nginx + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 1m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-nginx-agent + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-748nh + readOnly: true status: containerStatuses: - - name: myapp + - containerID: cri-o://db1483678b56e2b173f3a96d1b0d6cbd73786ff7297038e5119cd6e7879fd475 + image: docker.io/nginxinc/nginx-unprivileged:1.25.3 + imageID: docker.io/nginxinc/nginx-unprivileged@sha256:352cdd57b8e29ac484d8ad31a0624ecd16e61662dbec863ee8b2b67ef90f537e + lastState: {} + name: myapp ready: true + restartCount: 0 started: true - - name: otc-container + state: + running: + startedAt: "2025-02-03T07:41:00Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-748nh + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-nginx-agent + - mountPath: /etc/nginx + name: otel-nginx-conf-dir + - containerID: cri-o://ea2f00a5cfe0eebf39eac0c4121b292cccf1952876c629019a74b346c48d0e2d + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + lastState: {} + name: otc-container ready: true + restartCount: 0 started: true + state: + running: + 
startedAt: "2025-02-03T07:41:01Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-748nh + readOnly: true + recursiveReadOnly: Disabled initContainerStatuses: - - name: otel-agent-source-container-clone + - containerID: cri-o://3185787c248bb41410abd1305600e16b1612f345a4feba3aa802f098225e1a71 + image: docker.io/nginxinc/nginx-unprivileged:1.25.3 + imageID: docker.io/nginxinc/nginx-unprivileged@sha256:352cdd57b8e29ac484d8ad31a0624ecd16e61662dbec863ee8b2b67ef90f537e + lastState: {} + name: otel-agent-source-container-clone ready: true - - name: otel-agent-attach-nginx + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://3185787c248bb41410abd1305600e16b1612f345a4feba3aa802f098225e1a71 + exitCode: 0 + finishedAt: "2025-02-03T07:40:57Z" + reason: Completed + startedAt: "2025-02-03T07:40:57Z" + volumeMounts: + - mountPath: /etc/nginx/nginx.conf + name: nginx-conf + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-748nh + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - containerID: cri-o://dda8ce891a93cad54e8b5cc2dab9e386e9698c29cc91e2023d3a9f4c13b2d7c4 + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4 + imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd@sha256:4275db94ebbf4b9f78762b248ecab219790bbb98c59cf2bf5b3383908b727cfe + lastState: {} + name: otel-agent-attach-nginx ready: true + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://dda8ce891a93cad54e8b5cc2dab9e386e9698c29cc91e2023d3a9f4c13b2d7c4 + exitCode: 0 + finishedAt: "2025-02-03T07:40:59Z" + reason: Completed + startedAt: "2025-02-03T07:40:59Z" + volumeMounts: + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-nginx-agent + - mountPath: 
/opt/opentelemetry-webserver/source-conf + name: otel-nginx-conf-dir + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-748nh + readOnly: true + recursiveReadOnly: Disabled phase: Running l.go:53: | 07:46:53 | instrumentation-nginx | step-01  | TRY | DONE | l.go:53: | 07:46:53 | instrumentation-nginx | step-01  | CATCH | RUN | l.go:53: | 07:46:53 | instrumentation-nginx | step-01  | CMD | RUN | === COMMAND /usr/local/bin/kubectl logs --prefix -l app=my-nginx -n chainsaw-pumped-toad --all-containers l.go:53: | 07:46:54 | instrumentation-nginx | step-01  | CMD | LOG | === STDOUT [pod/my-nginx-7748d8dc7-hcbfz/otel-agent-attach-nginx] + NGINX_AGENT_LOG_DIR=/opt/opentelemetry-webserver/agent/logs [pod/my-nginx-7748d8dc7-hcbfz/otel-agent-attach-nginx] + cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template [pod/my-nginx-7748d8dc7-hcbfz/otel-agent-attach-nginx] + sed s,__agent_log_dir__,/opt/opentelemetry-webserver/agent/logs,g [pod/my-nginx-7748d8dc7-hcbfz/otel-agent-attach-nginx] + echo -e NginxModuleEnabled 'ON;' NginxModuleOtelExporterEndpoint 'http://localhost:4317;' NginxModuleOtelMaxQueueSize '4096;' NginxModuleOtelSpanExporter 'otlp;' NginxModuleResolveBackends 'ON;' NginxModuleServiceInstanceId '<>;' NginxModuleServiceName 'my-nginx;' NginxModuleServiceNamespace 'chainsaw-pumped-toad;' NginxModuleTraceAsError 'ON;' [pod/my-nginx-7748d8dc7-hcbfz/otel-agent-attach-nginx] + sed -i 's,<>,my-nginx-7748d8dc7-hcbfz,g' /opt/opentelemetry-webserver/source-conf/opentelemetry_agent.conf [pod/my-nginx-7748d8dc7-hcbfz/otel-agent-attach-nginx] + sed -i '1s,^,load_module /opt/opentelemetry-webserver/agent/WebServerModule/Nginx/1.25.3/ngx_http_opentelemetry_module.so;\n,g' /opt/opentelemetry-webserver/source-conf/nginx.conf [pod/my-nginx-7748d8dc7-hcbfz/otel-agent-attach-nginx] + sed -i '1s,^,env OTEL_RESOURCE_ATTRIBUTES;\n,g' /opt/opentelemetry-webserver/source-conf/nginx.conf 
[pod/my-nginx-7748d8dc7-hcbfz/otel-agent-attach-nginx] + mv /opt/opentelemetry-webserver/source-conf/opentelemetry_agent.conf /opt/opentelemetry-webserver/source-conf/conf.d [pod/my-nginx-7748d8dc7-hcbfz/myapp] /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh [pod/my-nginx-7748d8dc7-hcbfz/myapp] /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh [pod/my-nginx-7748d8dc7-hcbfz/myapp] /docker-entrypoint.sh: Configuration complete; ready for start up [pod/my-nginx-7748d8dc7-hcbfz/myapp] 2025/02/03 07:41:00 [error] 1#1: mod_opentelemetry: ngx_http_opentelemetry_init: Starting Opentelemetry Module init [pod/my-nginx-7748d8dc7-hcbfz/myapp] nginx: [error] mod_opentelemetry: ngx_http_opentelemetry_init: Starting Opentelemetry Module init [pod/my-nginx-7748d8dc7-hcbfz/myapp] 2025/02/03 07:41:00 [error] 1#1: mod_opentelemetry: ngx_http_opentelemetry_init: Registering handlers for modules in different phases [pod/my-nginx-7748d8dc7-hcbfz/myapp] nginx: [error] mod_opentelemetry: ngx_http_opentelemetry_init: Registering handlers for modules in different phases [pod/my-nginx-7748d8dc7-hcbfz/myapp] 2025/02/03 07:41:00 [error] 1#1: mod_opentelemetry: ngx_http_opentelemetry_init: Opentelemetry Module init completed! [pod/my-nginx-7748d8dc7-hcbfz/myapp] nginx: [error] mod_opentelemetry: ngx_http_opentelemetry_init: Opentelemetry Module init completed! 
[pod/my-nginx-7748d8dc7-hcbfz/myapp] 2025/02/03 07:41:00 [error] 23#23: mod_opentelemetry: ngx_http_opentelemetry_init_worker: Initializing Nginx Worker for process with PID: 23 [pod/my-nginx-7748d8dc7-hcbfz/otc-container] 2025-02-03T07:41:01.214Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers [pod/my-nginx-7748d8dc7-hcbfz/otc-container] 2025-02-03T07:41:01.214Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"} [pod/my-nginx-7748d8dc7-hcbfz/otc-container] 2025-02-03T07:41:01.215Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"} [pod/my-nginx-7748d8dc7-hcbfz/otc-container] 2025-02-03T07:41:01.228Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4} [pod/my-nginx-7748d8dc7-hcbfz/otc-container] 2025-02-03T07:41:01.228Z info extensions/extensions.go:39 Starting extensions... [pod/my-nginx-7748d8dc7-hcbfz/otc-container] 2025-02-03T07:41:01.228Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-nginx-7748d8dc7-hcbfz/otc-container] 2025-02-03T07:41:01.228Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"} [pod/my-nginx-7748d8dc7-hcbfz/otc-container] 2025-02-03T07:41:01.228Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. 
{"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-nginx-7748d8dc7-hcbfz/otc-container] 2025-02-03T07:41:01.228Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"} [pod/my-nginx-7748d8dc7-hcbfz/otc-container] 2025-02-03T07:41:01.228Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data. l.go:53: | 07:46:54 | instrumentation-nginx | step-01  | CMD | DONE | l.go:53: | 07:46:54 | instrumentation-nginx | step-01  | CATCH | DONE | l.go:53: | 07:46:54 | instrumentation-nginx | step-01  | CLEANUP | RUN | l.go:53: | 07:46:54 | instrumentation-nginx | step-01  | DELETE | RUN | v1/ConfigMap @ chainsaw-pumped-toad/nginx-conf l.go:53: | 07:46:54 | instrumentation-nginx | step-01  | DELETE | OK | v1/ConfigMap @ chainsaw-pumped-toad/nginx-conf l.go:53: | 07:46:54 | instrumentation-nginx | step-01  | DELETE | DONE | v1/ConfigMap @ chainsaw-pumped-toad/nginx-conf l.go:53: | 07:46:54 | instrumentation-nginx | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-pumped-toad/my-nginx l.go:53: | 07:46:54 | instrumentation-nginx | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-pumped-toad/my-nginx l.go:53: | 07:46:54 | instrumentation-nginx | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-pumped-toad/my-nginx l.go:53: | 07:46:54 | instrumentation-nginx | step-01  | CLEANUP | DONE | l.go:53: | 07:46:54 | instrumentation-nginx | step-00  | CLEANUP | RUN | l.go:53: | 07:46:54 | instrumentation-nginx | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pumped-toad/nginx l.go:53: | 07:46:54 | instrumentation-nginx | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pumped-toad/nginx l.go:53: | 07:46:54 
| instrumentation-nginx | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pumped-toad/nginx l.go:53: | 07:46:54 | instrumentation-nginx | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pumped-toad/sidecar l.go:53: | 07:46:54 | instrumentation-nginx | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pumped-toad/sidecar l.go:53: | 07:46:54 | instrumentation-nginx | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pumped-toad/sidecar l.go:53: | 07:46:54 | instrumentation-nginx | step-00  | CLEANUP | DONE | l.go:53: | 07:46:54 | instrumentation-nginx | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-pumped-toad l.go:53: | 07:46:54 | instrumentation-nginx | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-pumped-toad l.go:53: | 07:47:00 | instrumentation-nginx | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-pumped-toad === CONT chainsaw/instrumentation-go l.go:53: | 07:47:01 | instrumentation-go | @setup  | CREATE | OK | v1/Namespace @ chainsaw-uncommon-cheetah l.go:53: | 07:47:01 | instrumentation-go | step-00  | TRY | RUN | l.go:53: | 07:47:01 | instrumentation-go | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-uncommon-cheetah openshift.io/sa.scc.uid-range=0/0 --overwrite l.go:53: | 07:47:01 | instrumentation-go | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-uncommon-cheetah annotated l.go:53: | 07:47:01 | instrumentation-go | step-00  | CMD | DONE | l.go:53: | 07:47:01 | instrumentation-go | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-uncommon-cheetah openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite l.go:53: | 07:47:01 | instrumentation-go | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-uncommon-cheetah annotated l.go:53: | 07:47:01 | instrumentation-go | step-00  | CMD | DONE | l.go:53: | 07:47:01 | 
instrumentation-go | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-uncommon-cheetah/sidecar l.go:53: | 07:47:01 | instrumentation-go | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-uncommon-cheetah/sidecar l.go:53: | 07:47:01 | instrumentation-go | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-uncommon-cheetah/sidecar l.go:53: | 07:47:01 | instrumentation-go | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-uncommon-cheetah/go l.go:53: | 07:47:01 | instrumentation-go | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-uncommon-cheetah/go l.go:53: | 07:47:01 | instrumentation-go | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-uncommon-cheetah/go l.go:53: | 07:47:01 | instrumentation-go | step-00  | TRY | DONE | l.go:53: | 07:47:01 | instrumentation-go | step-01  | TRY | RUN | l.go:53: | 07:47:01 | instrumentation-go | step-01  | SCRIPT | RUN | === COMMAND /usr/bin/sh -c ./add-scc.sh l.go:53: | 07:47:02 | instrumentation-go | step-01  | SCRIPT | LOG | === STDOUT securitycontextconstraints.security.openshift.io/otel-go-instrumentation created clusterrole.rbac.authorization.k8s.io/system:openshift:scc:otel-go-instrumentation added: "otel-instrumentation-go" l.go:53: | 07:47:02 | instrumentation-go | step-01  | SCRIPT | DONE | l.go:53: | 07:47:02 | instrumentation-go | step-01  | APPLY | RUN | v1/ServiceAccount @ chainsaw-uncommon-cheetah/otel-instrumentation-go l.go:53: | 07:47:02 | instrumentation-go | step-01  | CREATE | OK | v1/ServiceAccount @ chainsaw-uncommon-cheetah/otel-instrumentation-go l.go:53: | 07:47:02 | instrumentation-go | step-01  | APPLY | DONE | v1/ServiceAccount @ chainsaw-uncommon-cheetah/otel-instrumentation-go l.go:53: | 07:47:02 | instrumentation-go | step-01  | TRY | DONE | l.go:53: | 07:47:02 | instrumentation-go | step-02  | TRY | RUN | 
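The `step-01` script output above (`add-scc.sh`) records that the Go test creates a `SecurityContextConstraints` named `otel-go-instrumentation`, a matching `system:openshift:scc:otel-go-instrumentation` ClusterRole, and grants it to the `otel-instrumentation-go` ServiceAccount; combined with the `sa.scc.uid-range=0/0` annotation, this lets the eBPF-based Go auto-instrumentation agent run as root. A plausible sketch of such an SCC follows — the field values here are assumptions (the `add-scc.sh` in the test repo is authoritative), and `<test-namespace>` is a placeholder for the generated chainsaw namespace:

```yaml
# Hedged sketch only: a permissive SCC for the Go instrumentation agent.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: otel-go-instrumentation
allowPrivilegedContainer: true   # assumption: the eBPF agent needs privilege
runAsUser:
  type: RunAsAny                 # permits runAsUser: 0
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:<test-namespace>:otel-instrumentation-go
```

In the actual run the grant is done via the generated ClusterRole rather than the `users` list; either mechanism binds the SCC to the ServiceAccount.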
l.go:53: | 07:47:02 | instrumentation-go | step-02  | APPLY | RUN | apps/v1/Deployment @ chainsaw-uncommon-cheetah/my-golang l.go:53: | 07:47:02 | instrumentation-go | step-02  | CREATE | OK | apps/v1/Deployment @ chainsaw-uncommon-cheetah/my-golang l.go:53: | 07:47:02 | instrumentation-go | step-02  | APPLY | DONE | apps/v1/Deployment @ chainsaw-uncommon-cheetah/my-golang l.go:53: | 07:47:02 | instrumentation-go | step-02  | ASSERT | RUN | v1/Pod @ chainsaw-uncommon-cheetah/* === NAME chainsaw/instrumentation-java-other-ns l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | ASSERT | ERROR | v1/Pod @ chainsaw-upward-burro/* === ERROR -------------------------------------------------------------- v1/Pod/chainsaw-upward-burro/my-java-other-ns-686cdff4f7-t9k8x -------------------------------------------------------------- * spec.containers[0].env[5].value: Invalid value: "-javaagent:/otel-auto-instrumentation-java/javaagent.jar": Expected value: " -javaagent:/otel-auto-instrumentation-java/javaagent.jar" * spec.containers[1].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -6,17 +6,27 @@ sidecar.opentelemetry.io/inject: "true" labels: app: my-java-other-ns + name: my-java-other-ns-686cdff4f7-t9k8x namespace: chainsaw-upward-burro + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: my-java-other-ns-686cdff4f7 + uid: 831e119a-431c-4018-a3d9-799ed31097f3 spec: containers: - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: OTEL_JAVAAGENT_DEBUG value: "true" @@ -25,7 +35,7 @@ - name: SPLUNK_PROFILER_ENABLED value: "false" - name: JAVA_TOOL_OPTIONS - value: ' -javaagent:/otel-auto-instrumentation-java/javaagent.jar' + value: -javaagent:/otel-auto-instrumentation-java/javaagent.jar - name: 
OTEL_TRACES_EXPORTER value: otlp - name: OTEL_EXPORTER_OTLP_ENDPOINT @@ -53,28 +63,184 @@ - name: OTEL_PROPAGATORS value: jaeger,b3 - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myapp,k8s.deployment.name=my-java-other-ns,k8s.namespace.name=chainsaw-upward-burro,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-java-other-ns-686cdff4f7,service.instance.id=chainsaw-upward-burro.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-java:main + imagePullPolicy: IfNotPresent name: myapp - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-g4lh2 readOnly: true - mountPath: /otel-auto-instrumentation-java name: opentelemetry-auto-instrumentation-java - args: - - --feature-gates=-component.UseLocalHostAsDefaultHost - --config=env:OTEL_CONFIG + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_CONFIG + value: | + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + exporters: + debug: null + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + traces: + exporters: + - debug + receivers: + - otlp + - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.uid + - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_RESOURCE_ATTRIBUTES + 
value: k8s.deployment.name=my-java-other-ns,k8s.deployment.uid=c7872c1e-7e7b-4849-8ae5-84d328084999,k8s.namespace.name=chainsaw-upward-burro,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-java-other-ns-686cdff4f7,k8s.replicaset.uid=831e119a-431c-4018-a3d9-799ed31097f3 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imagePullPolicy: IfNotPresent name: otc-container + ports: + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-g4lh2 + readOnly: true initContainers: - - name: opentelemetry-auto-instrumentation-java + - command: + - cp + - /javaagent.jar + - /otel-auto-instrumentation-java/javaagent.jar + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:1.33.5 + imagePullPolicy: IfNotPresent + name: opentelemetry-auto-instrumentation-java + resources: + limits: + cpu: 500m + memory: 64Mi + requests: + cpu: 50m + memory: 64Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /otel-auto-instrumentation-java + name: opentelemetry-auto-instrumentation-java + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-g4lh2 + readOnly: true status: containerStatuses: - - name: myapp + - containerID: 
cri-o://e8c80f49dcd44762ba6a5873d34548240a6f9f3582c2926f006727c260937e7c + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-java:main + imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-java@sha256:a850ff524a0d974b08583210117889f27a3bd58b9bfb07ce232eb46134b22103 + lastState: {} + name: myapp ready: true + restartCount: 0 started: true - - name: otc-container + state: + running: + startedAt: "2025-02-03T07:44:48Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-g4lh2 + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /otel-auto-instrumentation-java + name: opentelemetry-auto-instrumentation-java + - containerID: cri-o://196410b01e4434931f7fbc6828c21b2bef2ab3f05e70008c10f04a2f7eb6cd23 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + lastState: {} + name: otc-container ready: true + restartCount: 0 started: true + state: + running: + startedAt: "2025-02-03T07:44:48Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-g4lh2 + readOnly: true + recursiveReadOnly: Disabled initContainerStatuses: - - name: opentelemetry-auto-instrumentation-java + - containerID: cri-o://dd10ee9136c64dfaa266e6196fe7a1df374d7bdba5741d658bf867370c1c2c18 + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:1.33.5 + imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java@sha256:100735f70446dc76d895ee1534da1c828e129b03010ba2bc161e6ca475d27815 + lastState: {} + name: opentelemetry-auto-instrumentation-java ready: true + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://dd10ee9136c64dfaa266e6196fe7a1df374d7bdba5741d658bf867370c1c2c18 + exitCode: 0 
+ finishedAt: "2025-02-03T07:44:47Z" + reason: Completed + startedAt: "2025-02-03T07:44:47Z" + volumeMounts: + - mountPath: /otel-auto-instrumentation-java + name: opentelemetry-auto-instrumentation-java + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-g4lh2 + readOnly: true + recursiveReadOnly: Disabled phase: Running l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | TRY | DONE | l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | CATCH | RUN | l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | CMD | RUN | === COMMAND /usr/local/bin/kubectl logs --prefix -l app=my-java-other-ns -n chainsaw-upward-burro --all-containers l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | CMD | LOG | === STDOUT [pod/my-java-other-ns-686cdff4f7-t9k8x/otc-container] 2025-02-03T07:44:48.522Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers [pod/my-java-other-ns-686cdff4f7-t9k8x/otc-container] 2025-02-03T07:44:48.522Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"} [pod/my-java-other-ns-686cdff4f7-t9k8x/otc-container] 2025-02-03T07:44:48.522Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"} [pod/my-java-other-ns-686cdff4f7-t9k8x/otc-container] 2025-02-03T07:44:48.535Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4} [pod/my-java-other-ns-686cdff4f7-t9k8x/otc-container] 2025-02-03T07:44:48.535Z info extensions/extensions.go:39 Starting extensions... [pod/my-java-other-ns-686cdff4f7-t9k8x/otc-container] 2025-02-03T07:44:48.535Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. 
{"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-java-other-ns-686cdff4f7-t9k8x/otc-container] 2025-02-03T07:44:48.535Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"} [pod/my-java-other-ns-686cdff4f7-t9k8x/otc-container] 2025-02-03T07:44:48.536Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-java-other-ns-686cdff4f7-t9k8x/otc-container] 2025-02-03T07:44:48.536Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"} [pod/my-java-other-ns-686cdff4f7-t9k8x/otc-container] 2025-02-03T07:44:48.536Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data. [pod/my-java-other-ns-686cdff4f7-t9k8x/myapp] 2025-02-03T07:44:52.344Z INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path '' [pod/my-java-other-ns-686cdff4f7-t9k8x/myapp] 2025-02-03T07:44:52.354Z INFO 1 --- [ main] com.example.app.DemoApplication : Started DemoApplication in 1.911 seconds (process running for 4.108) [pod/my-java-other-ns-686cdff4f7-t9k8x/myapp] [otel.javaagent 2025-02-03 07:45:48:767 +0000] [OkHttp http://localhost:4317/...] WARN io.opentelemetry.exporter.internal.grpc.GrpcExporter - Failed to export metrics. Server responded with gRPC status code 2. 
Error message: timeout [pod/my-java-other-ns-686cdff4f7-t9k8x/myapp] [otel.javaagent 2025-02-03 07:45:48:767 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed [pod/my-java-other-ns-686cdff4f7-t9k8x/myapp] [otel.javaagent 2025-02-03 07:46:48:677 +0000] [OkHttp http://localhost:4317/...] WARN io.opentelemetry.exporter.internal.grpc.GrpcExporter - Failed to export metrics. Server responded with gRPC status code 2. Error message: timeout [pod/my-java-other-ns-686cdff4f7-t9k8x/myapp] [otel.javaagent 2025-02-03 07:46:48:677 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed [pod/my-java-other-ns-686cdff4f7-t9k8x/myapp] [otel.javaagent 2025-02-03 07:47:48:662 +0000] [OkHttp http://localhost:4317/...] ERROR io.opentelemetry.exporter.internal.grpc.GrpcExporter - Failed to export metrics. Server responded with UNIMPLEMENTED. This usually means that your collector is not configured with an otlp receiver in the "pipelines" section of the configuration. If export is not desired and you are using OpenTelemetry autoconfiguration or the javaagent, disable export by setting OTEL_METRICS_EXPORTER=none. Full error message: unknown service opentelemetry.proto.collector.metrics.v1.MetricsService [pod/my-java-other-ns-686cdff4f7-t9k8x/myapp] [otel.javaagent 2025-02-03 07:47:48:662 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed [pod/my-java-other-ns-686cdff4f7-t9k8x/myapp] [otel.javaagent 2025-02-03 07:48:48:659 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed [pod/my-java-other-ns-686cdff4f7-t9k8x/myapp] [otel.javaagent 2025-02-03 07:49:48:658 +0000] [OkHttp http://localhost:4317/...] 
DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | CMD | DONE | l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | CATCH | DONE | l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | CLEANUP | RUN | l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | DELETE | RUN | apps/v1/Deployment @ chainsaw-upward-burro/my-java-other-ns l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | DELETE | OK | apps/v1/Deployment @ chainsaw-upward-burro/my-java-other-ns l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | DELETE | DONE | apps/v1/Deployment @ chainsaw-upward-burro/my-java-other-ns l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-03  | CLEANUP | DONE | l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-02  | CLEANUP | RUN | l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-02  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ my-other-ns/java l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-02  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ my-other-ns/java l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-02  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ my-other-ns/java l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-02  | DELETE | RUN | v1/Namespace @ my-other-ns l.go:53: | 07:50:46 | instrumentation-java-other-ns | step-02  | DELETE | OK | v1/Namespace @ my-other-ns l.go:53: | 07:50:52 | instrumentation-java-other-ns | step-02  | DELETE | DONE | v1/Namespace @ my-other-ns l.go:53: | 07:50:52 | instrumentation-java-other-ns | step-02  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-upward-burro/sidecar l.go:53: | 07:50:52 | instrumentation-java-other-ns | step-02  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-upward-burro/sidecar l.go:53: | 07:50:52 | 
instrumentation-java-other-ns | step-02  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-upward-burro/sidecar l.go:53: | 07:50:52 | instrumentation-java-other-ns | step-02  | CLEANUP | DONE | l.go:53: | 07:50:52 | instrumentation-java-other-ns | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-upward-burro l.go:53: | 07:50:52 | instrumentation-java-other-ns | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-upward-burro l.go:53: | 07:50:59 | instrumentation-java-other-ns | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-upward-burro === CONT chainsaw/instrumentation-dotnet-musl l.go:53: | 07:50:59 | instrumentation-dotnet-musl | @setup  | CREATE | OK | v1/Namespace @ chainsaw-still-drake l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | TRY | RUN | l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-still-drake openshift.io/sa.scc.uid-range=1000/1000 --overwrite l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-still-drake annotated l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | CMD | DONE | l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-still-drake openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-still-drake annotated l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | CMD | DONE | l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-still-drake/sidecar l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-still-drake/sidecar l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00 
 | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-still-drake/sidecar l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-still-drake/dotnet l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-still-drake/dotnet l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-still-drake/dotnet l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-00  | TRY | DONE | l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-01  | TRY | RUN | l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-still-drake/my-dotnet-musl l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-still-drake/my-dotnet-musl l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-still-drake/my-dotnet-musl l.go:53: | 07:50:59 | instrumentation-dotnet-musl | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-still-drake/* === NAME chainsaw/instrumentation-dotnet-multicontainer l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-02  | ASSERT | ERROR | v1/Pod @ chainsaw-fit-worm/* === ERROR --------------------------------------------------------- v1/Pod/chainsaw-fit-worm/my-dotnet-multi-6d65849547-6ptsb --------------------------------------------------------- * spec.containers[2].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -7,17 +7,27 @@ sidecar.opentelemetry.io/inject: "true" labels: app: my-dotnet-multi + name: my-dotnet-multi-6d65849547-6ptsb namespace: chainsaw-fit-worm + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: 
my-dotnet-multi-6d65849547 + uid: 7a6d5ea9-653c-443f-9d72-a1f834ea6bc7 spec: containers: - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: ASPNETCORE_URLS value: http://+:8080 @@ -64,9 +74,22 @@ - name: OTEL_PROPAGATORS value: jaeger,b3multi - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myapp,k8s.deployment.name=my-dotnet-multi,k8s.namespace.name=chainsaw-fit-worm,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-dotnet-multi-6d65849547,service.instance.id=chainsaw-fit-worm.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-dotnet:main + imagePullPolicy: IfNotPresent name: myapp - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-jshjk readOnly: true - mountPath: /otel-auto-instrumentation-dotnet name: opentelemetry-auto-instrumentation-dotnet @@ -74,10 +97,12 @@ - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: OTEL_LOG_LEVEL value: debug @@ -122,31 +147,203 @@ - name: OTEL_PROPAGATORS value: jaeger,b3multi - name: OTEL_RESOURCE_ATTRIBUTES + value: 
k8s.container.name=myrabbit,k8s.deployment.name=my-dotnet-multi,k8s.namespace.name=chainsaw-fit-worm,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-dotnet-multi-6d65849547,service.instance.id=chainsaw-fit-worm.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myrabbit,service.version=3
+   image: rabbitmq:3
+   imagePullPolicy: IfNotPresent
    name: myrabbit
-   volumeMounts:
-   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+   resources: {}
+   securityContext:
+     allowPrivilegeEscalation: false
+     capabilities:
+       drop:
+       - ALL
+     runAsNonRoot: true
+   terminationMessagePath: /dev/termination-log
+   terminationMessagePolicy: File
+   volumeMounts:
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-jshjk
      readOnly: true
    - mountPath: /otel-auto-instrumentation-dotnet
      name: opentelemetry-auto-instrumentation-dotnet
  - args:
-   - --feature-gates=-component.UseLocalHostAsDefaultHost
    - --config=env:OTEL_CONFIG
+   env:
+   - name: POD_NAME
+     valueFrom:
+       fieldRef:
+         apiVersion: v1
+         fieldPath: metadata.name
+   - name: OTEL_CONFIG
+     value: |
+       receivers:
+         otlp:
+           protocols:
+             grpc:
+               endpoint: 0.0.0.0:4317
+             http:
+               endpoint: 0.0.0.0:4318
+       exporters:
+         debug: null
+       service:
+         telemetry:
+           metrics:
+             address: 0.0.0.0:8888
+         pipelines:
+           traces:
+             exporters:
+             - debug
+             receivers:
+             - otlp
+   - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
+     valueFrom:
+       fieldRef:
+         apiVersion: v1
+         fieldPath: metadata.name
+   - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID
+     valueFrom:
+       fieldRef:
+         apiVersion: v1
+         fieldPath: metadata.uid
+   - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
+     valueFrom:
+       fieldRef:
+         apiVersion: v1
+         fieldPath: spec.nodeName
+   - name: OTEL_RESOURCE_ATTRIBUTES
+     value: k8s.deployment.name=my-dotnet-multi,k8s.deployment.uid=b34154e8-4675-4a4d-b797-ae1e8f14c752,k8s.namespace.name=chainsaw-fit-worm,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-dotnet-multi-6d65849547,k8s.replicaset.uid=7a6d5ea9-653c-443f-9d72-a1f834ea6bc7
+   image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+   imagePullPolicy: IfNotPresent
    name: otc-container
+   ports:
+   - containerPort: 8888
+     name: metrics
+     protocol: TCP
+   - containerPort: 4317
+     name: otlp-grpc
+     protocol: TCP
+   - containerPort: 4318
+     name: otlp-http
+     protocol: TCP
+   resources: {}
+   securityContext:
+     allowPrivilegeEscalation: false
+     capabilities:
+       drop:
+       - ALL
+     runAsNonRoot: true
+   terminationMessagePath: /dev/termination-log
+   terminationMessagePolicy: File
+   volumeMounts:
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-jshjk
+     readOnly: true
  initContainers:
- - name: opentelemetry-auto-instrumentation-dotnet
+ - command:
+   - cp
+   - -r
+   - /autoinstrumentation/.
+   - /otel-auto-instrumentation-dotnet
+   image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:1.2.0
+   imagePullPolicy: IfNotPresent
+   name: opentelemetry-auto-instrumentation-dotnet
+   resources:
+     limits:
+       cpu: 500m
+       memory: 128Mi
+     requests:
+       cpu: 50m
+       memory: 128Mi
+   securityContext:
+     allowPrivilegeEscalation: false
+     capabilities:
+       drop:
+       - ALL
+     runAsNonRoot: true
+   terminationMessagePath: /dev/termination-log
+   terminationMessagePolicy: File
+   volumeMounts:
+   - mountPath: /otel-auto-instrumentation-dotnet
+     name: opentelemetry-auto-instrumentation-dotnet
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-jshjk
+     readOnly: true
status:
  containerStatuses:
- - name: myapp
+ - containerID: cri-o://3a00f7551f20848d0f915fa183bd42e5698d3b6ba9a04fdfd288acb5bad99af0
+   image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-dotnet:main
+   imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-dotnet@sha256:84b98a53aa0acad5fca02dbcf2da37df7b3aaaa4a6ebbead2bc06fb715d982ce
+   lastState: {}
+   name: myapp
    ready: true
+   restartCount: 0
    started: true
- - name: myrabbit
+   state:
+     running:
+       startedAt: "2025-02-03T07:45:08Z"
+   volumeMounts:
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-jshjk
+     readOnly: true
+     recursiveReadOnly: Disabled
+   - mountPath: /otel-auto-instrumentation-dotnet
+     name: opentelemetry-auto-instrumentation-dotnet
+ - containerID: cri-o://24f9ad675ceae13201d6c4d66ad7f077aad97301ac4b8045f090e521221592d8
+   image: docker.io/library/rabbitmq:3
+   imageID: docker.io/library/rabbitmq@sha256:af395ea3037a1207af556d76f9c20e972a0855d1471a7c6c8f2c9d5eda54f9ff
+   lastState: {}
+   name: myrabbit
    ready: true
+   restartCount: 0
    started: true
- - name: otc-container
+   state:
+     running:
+       startedAt: "2025-02-03T07:45:09Z"
+   volumeMounts:
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-jshjk
+     readOnly: true
+     recursiveReadOnly: Disabled
+   - mountPath: /otel-auto-instrumentation-dotnet
+     name: opentelemetry-auto-instrumentation-dotnet
+ - containerID: cri-o://6795b72b59295a005d258a8e55431ddddaf79d65662f92ebb0b766984a8ff928
+   image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+   imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+   lastState: {}
+   name: otc-container
    ready: true
+   restartCount: 0
    started: true
+   state:
+     running:
+       startedAt: "2025-02-03T07:45:09Z"
+   volumeMounts:
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-jshjk
+     readOnly: true
+     recursiveReadOnly: Disabled
  initContainerStatuses:
- - name: opentelemetry-auto-instrumentation-dotnet
+ - containerID: cri-o://2524270793d4da2f5d32618cbc821f6a16c60bab385969ec0d3834b2af8f561f
+   image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:1.2.0
+   imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet@sha256:093f0057f30022d0d4f4fbdbd3104c48879c8424d7acec0b46e9cb86a3d95e10
+   lastState: {}
+   name: opentelemetry-auto-instrumentation-dotnet
    ready: true
+   restartCount: 0
+   started: false
+   state:
+     terminated:
+       containerID: cri-o://2524270793d4da2f5d32618cbc821f6a16c60bab385969ec0d3834b2af8f561f
+       exitCode: 0
+       finishedAt: "2025-02-03T07:45:05Z"
+       reason: Completed
+       startedAt: "2025-02-03T07:45:05Z"
+   volumeMounts:
+   - mountPath: /otel-auto-instrumentation-dotnet
+     name: opentelemetry-auto-instrumentation-dotnet
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-jshjk
+     readOnly: true
+     recursiveReadOnly: Disabled
  phase: Running
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-02  | TRY | DONE |
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-02  | CATCH | RUN |
l.go:53: | 07:51:01 |
instrumentation-dotnet-multicontainer | step-02  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl logs --prefix -l app=my-dotnet-multi -n chainsaw-fit-worm --all-containers
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-02  | CMD | LOG |
=== STDOUT
[pod/my-dotnet-multi-6d65849547-6ptsb/myapp] warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
[pod/my-dotnet-multi-6d65849547-6ptsb/myapp] No XML encryptor configured. Key {88c74650-b6eb-47f9-80ee-e47876dbc673} may be persisted to storage in unencrypted form.
[pod/my-dotnet-multi-6d65849547-6ptsb/myapp] info: Microsoft.Hosting.Lifetime[14]
[pod/my-dotnet-multi-6d65849547-6ptsb/myapp] Now listening on: http://[::]:8080
[pod/my-dotnet-multi-6d65849547-6ptsb/myapp] info: Microsoft.Hosting.Lifetime[0]
[pod/my-dotnet-multi-6d65849547-6ptsb/myapp] Application started. Press Ctrl+C to shut down.
[pod/my-dotnet-multi-6d65849547-6ptsb/myapp] info: Microsoft.Hosting.Lifetime[0]
[pod/my-dotnet-multi-6d65849547-6ptsb/myapp] Hosting environment: Production
[pod/my-dotnet-multi-6d65849547-6ptsb/myapp] info: Microsoft.Hosting.Lifetime[0]
[pod/my-dotnet-multi-6d65849547-6ptsb/myapp] Content root path: /app
[pod/my-dotnet-multi-6d65849547-6ptsb/myrabbit] 2025-02-03 07:45:14.668561+00:00 [info] <0.711.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
[pod/my-dotnet-multi-6d65849547-6ptsb/myrabbit] 2025-02-03 07:45:14.668680+00:00 [info] <0.673.0> Ready to start client connection listeners
[pod/my-dotnet-multi-6d65849547-6ptsb/myrabbit] 2025-02-03 07:45:14.669936+00:00 [info] <0.755.0> started TCP listener on [::]:5672
[pod/my-dotnet-multi-6d65849547-6ptsb/myrabbit] completed with 4 plugins.
[pod/my-dotnet-multi-6d65849547-6ptsb/myrabbit] 2025-02-03 07:45:14.737993+00:00 [info] <0.673.0> Server startup complete; 4 plugins started.
[pod/my-dotnet-multi-6d65849547-6ptsb/myrabbit] 2025-02-03 07:45:14.737993+00:00 [info] <0.673.0> * rabbitmq_prometheus
[pod/my-dotnet-multi-6d65849547-6ptsb/myrabbit] 2025-02-03 07:45:14.737993+00:00 [info] <0.673.0> * rabbitmq_federation
[pod/my-dotnet-multi-6d65849547-6ptsb/myrabbit] 2025-02-03 07:45:14.737993+00:00 [info] <0.673.0> * rabbitmq_management_agent
[pod/my-dotnet-multi-6d65849547-6ptsb/myrabbit] 2025-02-03 07:45:14.737993+00:00 [info] <0.673.0> * rabbitmq_web_dispatch
[pod/my-dotnet-multi-6d65849547-6ptsb/myrabbit] 2025-02-03 07:45:14.794635+00:00 [info] <0.9.0> Time to start RabbitMQ: 5136 ms
[pod/my-dotnet-multi-6d65849547-6ptsb/otc-container] 2025-02-03T07:45:09.424Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/my-dotnet-multi-6d65849547-6ptsb/otc-container] 2025-02-03T07:45:09.424Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/my-dotnet-multi-6d65849547-6ptsb/otc-container] 2025-02-03T07:45:09.424Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"}
[pod/my-dotnet-multi-6d65849547-6ptsb/otc-container] 2025-02-03T07:45:09.437Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/my-dotnet-multi-6d65849547-6ptsb/otc-container] 2025-02-03T07:45:09.438Z info extensions/extensions.go:39 Starting extensions...
[pod/my-dotnet-multi-6d65849547-6ptsb/otc-container] 2025-02-03T07:45:09.438Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-dotnet-multi-6d65849547-6ptsb/otc-container] 2025-02-03T07:45:09.438Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
[pod/my-dotnet-multi-6d65849547-6ptsb/otc-container] 2025-02-03T07:45:09.438Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-dotnet-multi-6d65849547-6ptsb/otc-container] 2025-02-03T07:45:09.438Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
[pod/my-dotnet-multi-6d65849547-6ptsb/otc-container] 2025-02-03T07:45:09.438Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-02  | CMD | DONE |
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-02  | CATCH | DONE |
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-02  | CLEANUP | RUN |
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-02  | DELETE | RUN | apps/v1/Deployment @ chainsaw-fit-worm/my-dotnet-multi
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-02  | DELETE | OK | apps/v1/Deployment @ chainsaw-fit-worm/my-dotnet-multi
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-02  | DELETE | DONE | apps/v1/Deployment @ chainsaw-fit-worm/my-dotnet-multi
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-02  | CLEANUP | DONE |
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-01  | CLEANUP | RUN |
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-01  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-fit-worm/dotnet
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-01  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-fit-worm/dotnet
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-01  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-fit-worm/dotnet
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-01  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fit-worm/sidecar
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-01  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fit-worm/sidecar
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-01  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fit-worm/sidecar
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | step-01  | CLEANUP | DONE |
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-fit-worm
l.go:53: | 07:51:01 | instrumentation-dotnet-multicontainer | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-fit-worm
l.go:53: | 07:51:07 | instrumentation-dotnet-multicontainer | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-fit-worm
=== CONT chainsaw/managed-reconcile
l.go:53: | 07:51:08 | managed-reconcile | @setup  | CREATE | OK | v1/Namespace @ chainsaw-live-alien
l.go:53: | 07:51:08 | managed-reconcile | step-00  | TRY | RUN |
l.go:53: | 07:51:08 | managed-reconcile | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest
l.go:53: | 07:51:08 | managed-reconcile | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest
l.go:53: | 07:51:08 | managed-reconcile | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest
l.go:53: | 07:51:08 | managed-reconcile | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-00  | ASSERT | RUN | v1/Service @ chainsaw-live-alien/simplest-collector-headless
l.go:53: | 07:51:10 | managed-reconcile | step-00  | ASSERT | DONE | v1/Service @ chainsaw-live-alien/simplest-collector-headless
l.go:53: | 07:51:10 | managed-reconcile | step-00  | ASSERT | RUN | v1/Service @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-00  | ASSERT | DONE | v1/Service @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-00  | TRY | DONE |
l.go:53: | 07:51:10 | managed-reconcile | step-01  | TRY | RUN |
l.go:53: | 07:51:10 | managed-reconcile | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest
l.go:53: | 07:51:10 | managed-reconcile | step-01  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest
l.go:53: | 07:51:10 | managed-reconcile | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest
l.go:53: | 07:51:10 | managed-reconcile | step-01  | APPLY | RUN | v1/ConfigMap @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-01  | CREATE | OK | v1/ConfigMap @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-01  | APPLY | DONE | v1/ConfigMap @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-01  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-01  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-01  | ASSERT | RUN | v1/Service @ chainsaw-live-alien/simplest-collector-headless
l.go:53: | 07:51:10 | managed-reconcile | step-01  | ASSERT | DONE | v1/Service @ chainsaw-live-alien/simplest-collector-headless
l.go:53: | 07:51:10 | managed-reconcile | step-01  | ASSERT | RUN | v1/Service @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-01  | ASSERT | DONE | v1/Service @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-01  | ASSERT | RUN | v1/ConfigMap @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-01  | ASSERT | DONE | v1/ConfigMap @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:10 | managed-reconcile | step-01  | TRY | DONE |
l.go:53: | 07:51:10 | managed-reconcile | step-02  | TRY | RUN |
l.go:53: | 07:51:10 | managed-reconcile | step-02  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest
l.go:53: | 07:51:10 | managed-reconcile | step-02  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest
l.go:53: | 07:51:10 | managed-reconcile | step-02  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest
l.go:53: | 07:51:10 | managed-reconcile | step-02  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:11 | managed-reconcile | step-02  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:11 | managed-reconcile | step-02  | ASSERT | RUN | v1/Service @ chainsaw-live-alien/simplest-collector-headless
l.go:53: | 07:51:11 | managed-reconcile | step-02  | ASSERT | DONE | v1/Service @ chainsaw-live-alien/simplest-collector-headless
l.go:53: | 07:51:11 | managed-reconcile | step-02  | ASSERT | RUN | v1/Service @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:11 | managed-reconcile | step-02  | ASSERT | DONE | v1/Service @ chainsaw-live-alien/simplest-collector
l.go:53: | 07:51:11 | managed-reconcile | step-02  | ASSERT | RUN | v1/ConfigMap @ chainsaw-live-alien/simplest-collector-ea71c537
=== NAME chainsaw/instrumentation-java
l.go:53: | 07:51:23 | instrumentation-java | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-talented-walleye/*
=== ERROR
---------------------------------------------------------
v1/Pod/chainsaw-talented-walleye/my-java-758dd7ccd6-6fzk4
---------------------------------------------------------
* spec.containers[0].env[5].value: Invalid value: "-javaagent:/otel-auto-instrumentation-java/javaagent.jar": Expected value: " -javaagent:/otel-auto-instrumentation-java/javaagent.jar"
* spec.containers[1].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match
--- expected
+++ actual
@@ -6,17 +6,27 @@
      sidecar.opentelemetry.io/inject: "true"
    labels:
      app: my-java
+   name: my-java-758dd7ccd6-6fzk4
    namespace: chainsaw-talented-walleye
+   ownerReferences:
+   - apiVersion: apps/v1
+     blockOwnerDeletion: true
+     controller: true
+     kind: ReplicaSet
+     name: my-java-758dd7ccd6
+     uid: 18975063-ff5e-46b6-9767-7bd173b84acf
spec:
  containers:
  - env:
    - name: OTEL_NODE_IP
      valueFrom:
        fieldRef:
+         apiVersion: v1
          fieldPath: status.hostIP
    - name: OTEL_POD_IP
      valueFrom:
        fieldRef:
+         apiVersion: v1
          fieldPath: status.podIP
    - name: OTEL_JAVAAGENT_DEBUG
      value: "true"
@@ -25,7 +35,7 @@
    - name: SPLUNK_PROFILER_ENABLED
      value: "false"
    - name: JAVA_TOOL_OPTIONS
-     value: ' -javaagent:/otel-auto-instrumentation-java/javaagent.jar'
+     value: -javaagent:/otel-auto-instrumentation-java/javaagent.jar
    - name: OTEL_TRACES_EXPORTER
      value: otlp
    - name: OTEL_EXPORTER_OTLP_ENDPOINT
@@ -53,28 +63,184 @@
    - name: OTEL_PROPAGATORS
      value: jaeger,b3
    - name: OTEL_RESOURCE_ATTRIBUTES
+     value: k8s.container.name=myapp,k8s.deployment.name=my-java,k8s.namespace.name=chainsaw-talented-walleye,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-java-758dd7ccd6,service.instance.id=chainsaw-talented-walleye.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main
+   image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-java:main
+   imagePullPolicy: IfNotPresent
    name: myapp
-   volumeMounts:
-   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+   resources: {}
+   securityContext:
+     allowPrivilegeEscalation: false
+     capabilities:
+       drop:
+       - ALL
+     runAsNonRoot: true
+   terminationMessagePath: /dev/termination-log
+   terminationMessagePolicy: File
+   volumeMounts:
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-mvg52
      readOnly: true
    - mountPath: /otel-auto-instrumentation-java
      name: opentelemetry-auto-instrumentation-java
  - args:
-   - --feature-gates=-component.UseLocalHostAsDefaultHost
    - --config=env:OTEL_CONFIG
+   env:
+   - name: POD_NAME
+     valueFrom:
+       fieldRef:
+         apiVersion: v1
+         fieldPath: metadata.name
+   - name: OTEL_CONFIG
+     value: |
+       receivers:
+         otlp:
+           protocols:
+             grpc:
+               endpoint: 0.0.0.0:4317
+             http:
+               endpoint: 0.0.0.0:4318
+       exporters:
+         debug: null
+       service:
+         telemetry:
+           metrics:
+             address: 0.0.0.0:8888
+         pipelines:
+           traces:
+             exporters:
+             - debug
+             receivers:
+             - otlp
+   - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
+     valueFrom:
+       fieldRef:
+         apiVersion: v1
+         fieldPath: metadata.name
+   - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID
+     valueFrom:
+       fieldRef:
+         apiVersion: v1
+         fieldPath: metadata.uid
+   - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
+     valueFrom:
+       fieldRef:
+         apiVersion: v1
+         fieldPath: spec.nodeName
+   - name: OTEL_RESOURCE_ATTRIBUTES
+     value: k8s.deployment.name=my-java,k8s.deployment.uid=ac3e1e24-7330-41a3-95d9-cdc16f211f75,k8s.namespace.name=chainsaw-talented-walleye,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-java-758dd7ccd6,k8s.replicaset.uid=18975063-ff5e-46b6-9767-7bd173b84acf
+   image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+   imagePullPolicy: IfNotPresent
    name: otc-container
+   ports:
+   - containerPort: 8888
+     name: metrics
+     protocol: TCP
+   - containerPort: 4317
+     name: otlp-grpc
+     protocol: TCP
+   - containerPort: 4318
+     name: otlp-http
+     protocol: TCP
+   resources: {}
+   securityContext:
+     allowPrivilegeEscalation: false
+     capabilities:
+       drop:
+       - ALL
+     runAsNonRoot: true
+   terminationMessagePath: /dev/termination-log
+   terminationMessagePolicy: File
+   volumeMounts:
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-mvg52
+     readOnly: true
  initContainers:
- - name: opentelemetry-auto-instrumentation-java
+ - command:
+   - cp
+   - /javaagent.jar
+   - /otel-auto-instrumentation-java/javaagent.jar
+   image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:1.33.5
+   imagePullPolicy: IfNotPresent
+   name: opentelemetry-auto-instrumentation-java
+   resources:
+     limits:
+       cpu: 500m
+       memory: 64Mi
+     requests:
+       cpu: 50m
+       memory: 64Mi
+   securityContext:
+     allowPrivilegeEscalation: false
+     capabilities:
+       drop:
+       - ALL
+     runAsNonRoot: true
+   terminationMessagePath: /dev/termination-log
+   terminationMessagePolicy: File
+   volumeMounts:
+   - mountPath: /otel-auto-instrumentation-java
+     name: opentelemetry-auto-instrumentation-java
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-mvg52
+     readOnly: true
status:
  containerStatuses:
- - name: myapp
+ - containerID: cri-o://e6438defd45392d20b8f8f04cd2ad7a4956020bcdd680ffce2cb9aba81ca099d
+   image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-java:main
+   imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-java@sha256:a850ff524a0d974b08583210117889f27a3bd58b9bfb07ce232eb46134b22103
+   lastState: {}
+   name: myapp
    ready: true
+   restartCount: 0
    started: true
- - name: otc-container
+   state:
+     running:
+       startedAt: "2025-02-03T07:45:29Z"
+   volumeMounts:
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-mvg52
+     readOnly: true
+     recursiveReadOnly: Disabled
+   - mountPath: /otel-auto-instrumentation-java
+     name: opentelemetry-auto-instrumentation-java
+ - containerID: cri-o://a3ea9a06ff22e5256b7b84773c342f3f2d8eebe022322fdb7dded98fb79d490d
+   image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+   imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+   lastState: {}
+   name: otc-container
    ready: true
+   restartCount: 0
    started: true
+   state:
+     running:
+       startedAt: "2025-02-03T07:45:29Z"
+   volumeMounts:
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-mvg52
+     readOnly: true
+     recursiveReadOnly: Disabled
  initContainerStatuses:
- - name: opentelemetry-auto-instrumentation-java
+ - containerID: cri-o://40765254baa290e20d59f1a4de85bfd7b9c2d2900691f876742e58fba2d302a7
+   image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:1.33.5
+   imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java@sha256:100735f70446dc76d895ee1534da1c828e129b03010ba2bc161e6ca475d27815
+   lastState: {}
+   name: opentelemetry-auto-instrumentation-java
    ready: true
+   restartCount: 0
+   started: false
+   state:
+     terminated:
+       containerID: cri-o://40765254baa290e20d59f1a4de85bfd7b9c2d2900691f876742e58fba2d302a7
+       exitCode: 0
+       finishedAt: "2025-02-03T07:45:25Z"
+       reason: Completed
+       startedAt: "2025-02-03T07:45:25Z"
+   volumeMounts:
+   - mountPath: /otel-auto-instrumentation-java
+     name: opentelemetry-auto-instrumentation-java
+   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+     name: kube-api-access-mvg52
+     readOnly: true
+     recursiveReadOnly: Disabled
  phase: Running
l.go:53: | 07:51:23 | instrumentation-java | step-01  | TRY | DONE |
l.go:53: | 07:51:23 | instrumentation-java | step-01  | CATCH | RUN |
l.go:53: | 07:51:23 | instrumentation-java | step-01  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl logs --prefix -l app=my-java -n chainsaw-talented-walleye --all-containers
l.go:53: | 07:51:23 | instrumentation-java | step-01  | CMD | LOG |
=== STDOUT
[pod/my-java-758dd7ccd6-6fzk4/myapp] [otel.javaagent 2025-02-03 07:45:33:724 +0000] [main] DEBUG io.opentelemetry.javaagent.tooling.AgentInstaller$TransformLoggingListener - Transformed org.springframework.boot.web.embedded.tomcat.TomcatEmbeddedContext$$Lambda$963 -- org.springframework.boot.loader.LaunchedURLClassLoader@656922a0
[pod/my-java-758dd7ccd6-6fzk4/myapp] 2025-02-03T07:45:33.725Z INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
[pod/my-java-758dd7ccd6-6fzk4/myapp] 2025-02-03T07:45:33.733Z INFO 1 --- [ main] com.example.app.DemoApplication : Started DemoApplication in 1.897 seconds (process running for
4.044)
[pod/my-java-758dd7ccd6-6fzk4/myapp] [otel.javaagent 2025-02-03 07:46:30:154 +0000] [OkHttp http://localhost:4317/...] WARN io.opentelemetry.exporter.internal.grpc.GrpcExporter - Failed to export metrics. Server responded with gRPC status code 2. Error message: timeout
[pod/my-java-758dd7ccd6-6fzk4/myapp] [otel.javaagent 2025-02-03 07:46:30:154 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed
[pod/my-java-758dd7ccd6-6fzk4/myapp] [otel.javaagent 2025-02-03 07:47:30:086 +0000] [OkHttp http://localhost:4317/...] ERROR io.opentelemetry.exporter.internal.grpc.GrpcExporter - Failed to export metrics. Server responded with UNIMPLEMENTED. This usually means that your collector is not configured with an otlp receiver in the "pipelines" section of the configuration. If export is not desired and you are using OpenTelemetry autoconfiguration or the javaagent, disable export by setting OTEL_METRICS_EXPORTER=none. Full error message: unknown service opentelemetry.proto.collector.metrics.v1.MetricsService
[pod/my-java-758dd7ccd6-6fzk4/myapp] [otel.javaagent 2025-02-03 07:47:30:086 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed
[pod/my-java-758dd7ccd6-6fzk4/myapp] [otel.javaagent 2025-02-03 07:48:30:074 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed
[pod/my-java-758dd7ccd6-6fzk4/myapp] [otel.javaagent 2025-02-03 07:49:30:074 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed
[pod/my-java-758dd7ccd6-6fzk4/myapp] [otel.javaagent 2025-02-03 07:50:30:074 +0000] [OkHttp http://localhost:4317/...] DEBUG io.opentelemetry.sdk.metrics.export.PeriodicMetricReader - Exporter failed
[pod/my-java-758dd7ccd6-6fzk4/otc-container] 2025-02-03T07:45:29.964Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/my-java-758dd7ccd6-6fzk4/otc-container] 2025-02-03T07:45:29.964Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/my-java-758dd7ccd6-6fzk4/otc-container] 2025-02-03T07:45:29.964Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"}
[pod/my-java-758dd7ccd6-6fzk4/otc-container] 2025-02-03T07:45:29.965Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/my-java-758dd7ccd6-6fzk4/otc-container] 2025-02-03T07:45:29.965Z info extensions/extensions.go:39 Starting extensions...
[pod/my-java-758dd7ccd6-6fzk4/otc-container] 2025-02-03T07:45:29.965Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-java-758dd7ccd6-6fzk4/otc-container] 2025-02-03T07:45:29.965Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
[pod/my-java-758dd7ccd6-6fzk4/otc-container] 2025-02-03T07:45:29.966Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-java-758dd7ccd6-6fzk4/otc-container] 2025-02-03T07:45:29.966Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
[pod/my-java-758dd7ccd6-6fzk4/otc-container] 2025-02-03T07:45:29.966Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
l.go:53: | 07:51:23 | instrumentation-java | step-01  | CMD | DONE |
l.go:53: | 07:51:23 | instrumentation-java | step-01  | CATCH | DONE |
l.go:53: | 07:51:23 | instrumentation-java | step-01  | CLEANUP | RUN |
l.go:53: | 07:51:23 | instrumentation-java | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-talented-walleye/my-java
l.go:53: | 07:51:23 | instrumentation-java | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-talented-walleye/my-java
l.go:53: | 07:51:23 | instrumentation-java | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-talented-walleye/my-java
l.go:53: | 07:51:23 | instrumentation-java | step-01  | CLEANUP | DONE |
l.go:53: | 07:51:23 | instrumentation-java | step-00  | CLEANUP | RUN |
l.go:53: | 07:51:23 | instrumentation-java | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-talented-walleye/java
l.go:53: | 07:51:23 | instrumentation-java | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-talented-walleye/java
l.go:53: | 07:51:23 | instrumentation-java | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-talented-walleye/java
l.go:53: | 07:51:23 | instrumentation-java | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-talented-walleye/sidecar
l.go:53: | 07:51:23 | instrumentation-java | step-00  | DELETE |
OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-talented-walleye/sidecar l.go:53: | 07:51:23 | instrumentation-java | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-talented-walleye/sidecar l.go:53: | 07:51:23 | instrumentation-java | step-00  | CLEANUP | DONE | l.go:53: | 07:51:23 | instrumentation-java | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-talented-walleye l.go:53: | 07:51:23 | instrumentation-java | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-talented-walleye l.go:53: | 07:51:30 | instrumentation-java | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-talented-walleye === CONT chainsaw/node-selector-collector l.go:53: | 07:51:30 | node-selector-collector | @setup  | CREATE | OK | v1/Namespace @ chainsaw-social-gibbon l.go:53: | 07:51:30 | node-selector-collector | step-00  | TRY | RUN | l.go:53: | 07:51:30 | node-selector-collector | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/deployment l.go:53: | 07:51:30 | node-selector-collector | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/deployment l.go:53: | 07:51:30 | node-selector-collector | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/deployment l.go:53: | 07:51:30 | node-selector-collector | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/daemonset l.go:53: | 07:51:31 | node-selector-collector | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/daemonset l.go:53: | 07:51:31 | node-selector-collector | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/daemonset l.go:53: | 07:51:31 | node-selector-collector | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/statefulset l.go:53: | 
07:51:31 | node-selector-collector | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/statefulset
l.go:53: | 07:51:31 | node-selector-collector | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/statefulset
l.go:53: | 07:51:31 | node-selector-collector | step-00  | ASSERT | RUN | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:31 | node-selector-collector | step-00  | ASSERT | DONE | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:31 | node-selector-collector | step-00  | ASSERT | RUN | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:31 | node-selector-collector | step-00  | ASSERT | DONE | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:31 | node-selector-collector | step-00  | ASSERT | RUN | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:31 | node-selector-collector | step-00  | ASSERT | DONE | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:31 | node-selector-collector | step-00  | TRY | DONE |
l.go:53: | 07:51:31 | node-selector-collector | step-01  | TRY | RUN |
l.go:53: | 07:51:31 | node-selector-collector | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/deployment
l.go:53: | 07:51:31 | node-selector-collector | step-01  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/deployment
l.go:53: | 07:51:31 | node-selector-collector | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/deployment
l.go:53: | 07:51:31 | node-selector-collector | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/daemonset
l.go:53: | 07:51:31 | node-selector-collector | step-01  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/daemonset
l.go:53: | 07:51:31 | node-selector-collector | step-01  | APPLY | DONE |
opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/daemonset
l.go:53: | 07:51:31 | node-selector-collector | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/statefulset
l.go:53: | 07:51:31 | node-selector-collector | step-01  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/statefulset
l.go:53: | 07:51:31 | node-selector-collector | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/statefulset
l.go:53: | 07:51:31 | node-selector-collector | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:32 | node-selector-collector | step-01  | ASSERT | DONE | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:32 | node-selector-collector | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:33 | node-selector-collector | step-01  | ASSERT | DONE | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:33 | node-selector-collector | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:34 | node-selector-collector | step-01  | ASSERT | DONE | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:34 | node-selector-collector | step-01  | TRY | DONE |
l.go:53: | 07:51:34 | node-selector-collector | step-02  | TRY | RUN |
l.go:53: | 07:51:34 | node-selector-collector | step-02  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl -n chainsaw-social-gibbon replace -f 00-install-collectors-without-node-selector.yaml
l.go:53: | 07:51:34 | node-selector-collector | step-02  | CMD | LOG |
=== STDOUT
opentelemetrycollector.opentelemetry.io/deployment replaced
opentelemetrycollector.opentelemetry.io/daemonset replaced
opentelemetrycollector.opentelemetry.io/statefulset replaced
=== STDERR
Warning: OpenTelemetryCollector v1alpha1 is deprecated. Migrate to v1beta1.
Warning: Collector config spec.config has null objects: exporters.debug:.
For compatibility with other tooling, such as kustomize and kubectl edit, it is recommended to use empty objects e.g. batch: {}.
l.go:53: | 07:51:34 | node-selector-collector | step-02  | CMD | DONE |
l.go:53: | 07:51:34 | node-selector-collector | step-02  | ASSERT | RUN | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:34 | node-selector-collector | step-02  | ASSERT | DONE | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:34 | node-selector-collector | step-02  | ASSERT | RUN | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:36 | node-selector-collector | step-02  | ASSERT | DONE | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:36 | node-selector-collector | step-02  | ASSERT | RUN | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:37 | node-selector-collector | step-02  | ASSERT | DONE | v1/Pod @ chainsaw-social-gibbon/*
l.go:53: | 07:51:37 | node-selector-collector | step-02  | TRY | DONE |
l.go:53: | 07:51:37 | node-selector-collector | step-00  | CLEANUP | RUN |
l.go:53: | 07:51:37 | node-selector-collector | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/statefulset
l.go:53: | 07:51:37 | node-selector-collector | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/statefulset
l.go:53: | 07:51:37 | node-selector-collector | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/statefulset
l.go:53: | 07:51:37 | node-selector-collector | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/daemonset
l.go:53: | 07:51:37 | node-selector-collector | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/daemonset
l.go:53: | 07:51:37 | node-selector-collector | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/daemonset
l.go:53: | 07:51:37 | node-selector-collector
| step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/deployment
l.go:53: | 07:51:37 | node-selector-collector | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/deployment
l.go:53: | 07:51:37 | node-selector-collector | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-social-gibbon/deployment
l.go:53: | 07:51:37 | node-selector-collector | step-00  | CLEANUP | DONE |
l.go:53: | 07:51:37 | node-selector-collector | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-social-gibbon
l.go:53: | 07:51:37 | node-selector-collector | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-social-gibbon
l.go:53: | 07:51:44 | node-selector-collector | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-social-gibbon
=== CONT chainsaw/multiple-configmaps
l.go:53: | 07:51:44 | multiple-configmaps | @setup  | CREATE | OK | v1/Namespace @ chainsaw-decent-ox
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | TRY | RUN |
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | APPLY | RUN | v1/ConfigMap @ chainsaw-decent-ox/mount-test1
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | CREATE | OK | v1/ConfigMap @ chainsaw-decent-ox/mount-test1
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | APPLY | DONE | v1/ConfigMap @ chainsaw-decent-ox/mount-test1
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | APPLY | RUN | v1/ConfigMap @ chainsaw-decent-ox/mount-test2
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | CREATE | OK | v1/ConfigMap @ chainsaw-decent-ox/mount-test2
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | APPLY | DONE | v1/ConfigMap @ chainsaw-decent-ox/mount-test2
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-decent-ox/simplest-with-configmaps
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | CREATE | OK |
opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-decent-ox/simplest-with-configmaps
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-decent-ox/simplest-with-configmaps
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | ASSERT | RUN | v1/ConfigMap @ chainsaw-decent-ox/mount-test1
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | ASSERT | DONE | v1/ConfigMap @ chainsaw-decent-ox/mount-test1
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | ASSERT | RUN | v1/ConfigMap @ chainsaw-decent-ox/mount-test2
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | ASSERT | DONE | v1/ConfigMap @ chainsaw-decent-ox/mount-test2
l.go:53: | 07:51:44 | multiple-configmaps | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-decent-ox/simplest-with-configmaps-collector
=== NAME chainsaw/instrumentation-go
l.go:53: | 07:53:02 | instrumentation-go | step-02  | ASSERT | ERROR | v1/Pod @ chainsaw-uncommon-cheetah/*
=== ERROR
-----------------------------------------------------------
v1/Pod/chainsaw-uncommon-cheetah/my-golang-5c59b8b445-kjsz5
-----------------------------------------------------------
* spec.containers[1].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match
--- expected
+++ actual
@@ -7,22 +7,108 @@
     sidecar.opentelemetry.io/inject: "true"
   labels:
     app: my-golang
+  name: my-golang-5c59b8b445-kjsz5
   namespace: chainsaw-uncommon-cheetah
+  ownerReferences:
+  - apiVersion: apps/v1
+    blockOwnerDeletion: true
+    controller: true
+    kind: ReplicaSet
+    name: my-golang-5c59b8b445
+    uid: 0f3d8a1c-87fb-4017-b6d3-0006a09b1c96
 spec:
   containers:
-  - name: productcatalogservice
+  - image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-golang:main
+    imagePullPolicy: IfNotPresent
+    name: productcatalogservice
+    resources: {}
+    securityContext:
+      allowPrivilegeEscalation: false
+      capabilities:
+        drop:
+        - ALL
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-42ng7
+      readOnly: true
   - args:
-    - --feature-gates=-component.UseLocalHostAsDefaultHost
     - --config=env:OTEL_CONFIG
+    env:
+    - name: POD_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.name
+    - name: OTEL_CONFIG
+      value: |
+        receivers:
+          otlp:
+            protocols:
+              grpc:
+                endpoint: 0.0.0.0:4317
+              http:
+                endpoint: 0.0.0.0:4318
+        exporters:
+          debug: null
+        service:
+          telemetry:
+            metrics:
+              address: 0.0.0.0:8888
+          pipelines:
+            traces:
+              exporters:
+              - debug
+              receivers:
+              - otlp
+    - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.name
+    - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.uid
+    - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: spec.nodeName
+    - name: OTEL_RESOURCE_ATTRIBUTES
+      value: k8s.deployment.name=my-golang,k8s.deployment.uid=ecb7b5fc-b848-4f2d-a6d4-c7374e7c02c2,k8s.namespace.name=chainsaw-uncommon-cheetah,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-golang-5c59b8b445,k8s.replicaset.uid=0f3d8a1c-87fb-4017-b6d3-0006a09b1c96
+    image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+    imagePullPolicy: IfNotPresent
     name: otc-container
+    ports:
+    - containerPort: 8888
+      name: metrics
+      protocol: TCP
+    - containerPort: 4317
+      name: otlp-grpc
+      protocol: TCP
+    - containerPort: 4318
+      name: otlp-http
+      protocol: TCP
+    resources: {}
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-42ng7
+      readOnly: true
   - env:
     - name: OTEL_NODE_IP
       valueFrom:
         fieldRef:
+          apiVersion: v1
           fieldPath: status.hostIP
     - name: OTEL_POD_IP
       valueFrom:
         fieldRef:
+          apiVersion: v1
           fieldPath: status.podIP
     - name: OTEL_GO_AUTO_TARGET_EXE
       value: /usr/src/app/productcatalogservice
@@ -53,22 +139,79 @@
     - name: OTEL_PROPAGATORS
       value: jaeger,b3
     - name: OTEL_RESOURCE_ATTRIBUTES
+      value: k8s.container.name=productcatalogservice,k8s.deployment.name=my-golang,k8s.namespace.name=chainsaw-uncommon-cheetah,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-golang-5c59b8b445,service.instance.id=chainsaw-uncommon-cheetah.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).productcatalogservice,service.version=main
+    image: ghcr.io/open-telemetry/opentelemetry-go-instrumentation/autoinstrumentation-go:v0.17.0-alpha
+    imagePullPolicy: IfNotPresent
     name: opentelemetry-auto-instrumentation
+    resources:
+      limits:
+        cpu: 500m
+        memory: 32Mi
+      requests:
+        cpu: 50m
+        memory: 32Mi
+    securityContext:
+      privileged: true
+      runAsUser: 0
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
     volumeMounts:
     - mountPath: /sys/kernel/debug
       name: kernel-debug
     - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-42ng7
      readOnly: true
 status:
   containerStatuses:
-  - name: opentelemetry-auto-instrumentation
+  - containerID: cri-o://73a6ac8c5249a2bd1670fcb15b4ff875c758916c08881dd5da6d6f321ccf6acc
+    image: ghcr.io/open-telemetry/opentelemetry-go-instrumentation/autoinstrumentation-go:v0.17.0-alpha
+    imageID: ghcr.io/open-telemetry/opentelemetry-go-instrumentation/autoinstrumentation-go@sha256:274a807557afea353e7fc10a5cd2b52a0b63ea3f6ec41b01d5efc97dd5475d06
+    lastState: {}
+    name: opentelemetry-auto-instrumentation
     ready: true
+    restartCount: 0
     started: true
-  - name: otc-container
+    state:
+      running:
+        startedAt: "2025-02-03T07:47:06Z"
+    volumeMounts:
+    - mountPath: /sys/kernel/debug
+      name: kernel-debug
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-42ng7
+      readOnly: true
+      recursiveReadOnly: Disabled
+  - containerID: cri-o://0a66cb86d16b8e751d65bd35ae2fd4b1e42c5c350a31e00be98f64cf39ec6140
+    image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+    imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+    lastState: {}
+    name: otc-container
     ready: true
+    restartCount: 0
     started: true
-  - name: productcatalogservice
+    state:
+      running:
+        startedAt: "2025-02-03T07:47:04Z"
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-42ng7
+      readOnly: true
+      recursiveReadOnly: Disabled
+  - containerID: cri-o://41881fd853c0d79cafe48adc408ebab3c128fa653606148a371ac5760d3b75e0
+    image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-golang:main
+    imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-golang@sha256:1fd3af7b899256415cb5bfd176ae60715f0216ec73a51df620c0824dca40da57
+    lastState: {}
+    name: productcatalogservice
     ready: true
+    restartCount: 0
     started: true
+    state:
+      running:
+        startedAt: "2025-02-03T07:47:04Z"
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-42ng7
+      readOnly: true
+      recursiveReadOnly: Disabled
   phase: Running
l.go:53: | 07:53:02 | instrumentation-go | step-02  | TRY | DONE |
l.go:53: | 07:53:02 | instrumentation-go | step-02  | CATCH | RUN |
l.go:53: | 07:53:02 | instrumentation-go | step-02  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl logs --prefix -l app=my-golang -n chainsaw-uncommon-cheetah --all-containers
l.go:53: | 07:53:02 | instrumentation-go | step-02  | CMD | LOG |
=== STDOUT
[pod/my-golang-5c59b8b445-kjsz5/opentelemetry-auto-instrumentation]
{"time":"2025-02-03T07:47:06.158383069Z","level":"INFO","source":{"function":"main.main","file":"/app/cli/main.go","line":103},"msg":"building OpenTelemetry Go instrumentation ...","globalImpl":false,"version":{"Release":"v0.17.0-alpha","Revision":"unknown","Go":{"Version":"go1.23.2","OS":"linux","Arch":"amd64"}}} [pod/my-golang-5c59b8b445-kjsz5/otc-container] 2025-02-03T07:47:04.249Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers [pod/my-golang-5c59b8b445-kjsz5/otc-container] 2025-02-03T07:47:04.249Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"} [pod/my-golang-5c59b8b445-kjsz5/otc-container] 2025-02-03T07:47:04.249Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"} [pod/my-golang-5c59b8b445-kjsz5/otc-container] 2025-02-03T07:47:04.263Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4} [pod/my-golang-5c59b8b445-kjsz5/otc-container] 2025-02-03T07:47:04.263Z info extensions/extensions.go:39 Starting extensions... [pod/my-golang-5c59b8b445-kjsz5/otc-container] 2025-02-03T07:47:04.263Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. 
{"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-golang-5c59b8b445-kjsz5/otc-container] 2025-02-03T07:47:04.263Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"} [pod/my-golang-5c59b8b445-kjsz5/otc-container] 2025-02-03T07:47:04.263Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-golang-5c59b8b445-kjsz5/otc-container] 2025-02-03T07:47:04.263Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"} [pod/my-golang-5c59b8b445-kjsz5/otc-container] 2025-02-03T07:47:04.263Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data. 
l.go:53: | 07:53:02 | instrumentation-go | step-02  | CMD | DONE |
l.go:53: | 07:53:02 | instrumentation-go | step-02  | CATCH | DONE |
l.go:53: | 07:53:02 | instrumentation-go | step-02  | CLEANUP | RUN |
l.go:53: | 07:53:02 | instrumentation-go | step-02  | DELETE | RUN | apps/v1/Deployment @ chainsaw-uncommon-cheetah/my-golang
l.go:53: | 07:53:02 | instrumentation-go | step-02  | DELETE | OK | apps/v1/Deployment @ chainsaw-uncommon-cheetah/my-golang
l.go:53: | 07:53:02 | instrumentation-go | step-02  | DELETE | DONE | apps/v1/Deployment @ chainsaw-uncommon-cheetah/my-golang
l.go:53: | 07:53:02 | instrumentation-go | step-02  | CLEANUP | DONE |
l.go:53: | 07:53:02 | instrumentation-go | step-01  | CLEANUP | RUN |
l.go:53: | 07:53:02 | instrumentation-go | step-01  | DELETE | RUN | v1/ServiceAccount @ chainsaw-uncommon-cheetah/otel-instrumentation-go
l.go:53: | 07:53:02 | instrumentation-go | step-01  | DELETE | OK | v1/ServiceAccount @ chainsaw-uncommon-cheetah/otel-instrumentation-go
l.go:53: | 07:53:02 | instrumentation-go | step-01  | DELETE | DONE | v1/ServiceAccount @ chainsaw-uncommon-cheetah/otel-instrumentation-go
l.go:53: | 07:53:02 | instrumentation-go | step-01  | CLEANUP | DONE |
l.go:53: | 07:53:02 | instrumentation-go | step-00  | CLEANUP | RUN |
l.go:53: | 07:53:02 | instrumentation-go | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-uncommon-cheetah/go
l.go:53: | 07:53:02 | instrumentation-go | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-uncommon-cheetah/go
l.go:53: | 07:53:02 | instrumentation-go | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-uncommon-cheetah/go
l.go:53: | 07:53:02 | instrumentation-go | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-uncommon-cheetah/sidecar
l.go:53: | 07:53:02 | instrumentation-go | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @
chainsaw-uncommon-cheetah/sidecar
l.go:53: | 07:53:02 | instrumentation-go | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-uncommon-cheetah/sidecar
l.go:53: | 07:53:02 | instrumentation-go | step-00  | CLEANUP | DONE |
l.go:53: | 07:53:02 | instrumentation-go | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-uncommon-cheetah
l.go:53: | 07:53:03 | instrumentation-go | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-uncommon-cheetah
l.go:53: | 07:53:09 | instrumentation-go | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-uncommon-cheetah
=== CONT chainsaw/instrumentation-apache-multicontainer
l.go:53: | 07:53:09 | instrumentation-apache-multicontainer | @setup  | CREATE | OK | v1/Namespace @ chainsaw-pro-crane
l.go:53: | 07:53:09 | instrumentation-apache-multicontainer | step-00  | TRY | RUN |
l.go:53: | 07:53:09 | instrumentation-apache-multicontainer | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-pro-crane openshift.io/sa.scc.uid-range=1000/1000 --overwrite
l.go:53: | 07:53:09 | instrumentation-apache-multicontainer | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-pro-crane annotated
l.go:53: | 07:53:09 | instrumentation-apache-multicontainer | step-00  | CMD | DONE |
l.go:53: | 07:53:09 | instrumentation-apache-multicontainer | step-00  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl annotate namespace chainsaw-pro-crane openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite
l.go:53: | 07:53:09 | instrumentation-apache-multicontainer | step-00  | CMD | LOG |
=== STDOUT
namespace/chainsaw-pro-crane annotated
l.go:53: | 07:53:09 | instrumentation-apache-multicontainer | step-00  | CMD | DONE |
l.go:53: | 07:53:09 | instrumentation-apache-multicontainer | step-00  | TRY | DONE |
l.go:53: | 07:53:09 | instrumentation-apache-multicontainer | step-01  | TRY | RUN |
l.go:53: | 07:53:09 | instrumentation-apache-multicontainer | step-01  | APPLY | RUN |
opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pro-crane/sidecar
l.go:53: | 07:53:10 | instrumentation-apache-multicontainer | step-01  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pro-crane/sidecar
l.go:53: | 07:53:10 | instrumentation-apache-multicontainer | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pro-crane/sidecar
l.go:53: | 07:53:10 | instrumentation-apache-multicontainer | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pro-crane/apache
l.go:53: | 07:53:10 | instrumentation-apache-multicontainer | step-01  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pro-crane/apache
l.go:53: | 07:53:10 | instrumentation-apache-multicontainer | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pro-crane/apache
l.go:53: | 07:53:10 | instrumentation-apache-multicontainer | step-01  | TRY | DONE |
l.go:53: | 07:53:10 | instrumentation-apache-multicontainer | step-02  | TRY | RUN |
l.go:53: | 07:53:10 | instrumentation-apache-multicontainer | step-02  | APPLY | RUN | apps/v1/Deployment @ chainsaw-pro-crane/my-apache-multi
l.go:53: | 07:53:10 | instrumentation-apache-multicontainer | step-02  | CREATE | OK | apps/v1/Deployment @ chainsaw-pro-crane/my-apache-multi
l.go:53: | 07:53:10 | instrumentation-apache-multicontainer | step-02  | APPLY | DONE | apps/v1/Deployment @ chainsaw-pro-crane/my-apache-multi
l.go:53: | 07:53:10 | instrumentation-apache-multicontainer | step-02  | ASSERT | RUN | v1/Pod @ chainsaw-pro-crane/*
=== NAME chainsaw/instrumentation-dotnet-musl
l.go:53: | 07:56:59 | instrumentation-dotnet-musl | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-still-drake/*
=== ERROR
-----------------------------------------------------------
v1/Pod/chainsaw-still-drake/my-dotnet-musl-84b98ffc4f-r68w2
-----------------------------------------------------------
* spec.containers[1].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match
--- expected
+++ actual
@@ -6,17 +6,27 @@
     sidecar.opentelemetry.io/inject: "true"
   labels:
     app: my-dotnet-musl
+  name: my-dotnet-musl-84b98ffc4f-r68w2
   namespace: chainsaw-still-drake
+  ownerReferences:
+  - apiVersion: apps/v1
+    blockOwnerDeletion: true
+    controller: true
+    kind: ReplicaSet
+    name: my-dotnet-musl-84b98ffc4f
+    uid: 42d0a591-3c32-4f35-bfb5-8bb242cc9bfd
 spec:
   containers:
   - env:
     - name: OTEL_NODE_IP
       valueFrom:
         fieldRef:
+          apiVersion: v1
           fieldPath: status.hostIP
     - name: OTEL_POD_IP
       valueFrom:
         fieldRef:
+          apiVersion: v1
           fieldPath: status.podIP
     - name: ASPNETCORE_URLS
       value: http://+:8080
@@ -61,28 +71,185 @@
     - name: OTEL_PROPAGATORS
       value: jaeger,b3multi
     - name: OTEL_RESOURCE_ATTRIBUTES
+      value: k8s.container.name=myapp,k8s.deployment.name=my-dotnet-musl,k8s.namespace.name=chainsaw-still-drake,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-dotnet-musl-84b98ffc4f,service.instance.id=chainsaw-still-drake.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main
+    image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-dotnet:main
+    imagePullPolicy: IfNotPresent
     name: myapp
-    volumeMounts:
-    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+    resources: {}
+    securityContext:
+      allowPrivilegeEscalation: false
+      capabilities:
+        drop:
+        - ALL
+      runAsNonRoot: true
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-9tq45
       readOnly: true
     - mountPath: /otel-auto-instrumentation-dotnet
       name: opentelemetry-auto-instrumentation-dotnet
   - args:
-    - --feature-gates=-component.UseLocalHostAsDefaultHost
     - --config=env:OTEL_CONFIG
+    env:
+    - name: POD_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.name
+    - name: OTEL_CONFIG
+      value: |
+        receivers:
+          otlp:
+            protocols:
+              grpc:
+                endpoint: 0.0.0.0:4317
+              http:
+                endpoint: 0.0.0.0:4318
+        exporters:
+          debug: null
+        service:
+          telemetry:
+            metrics:
+              address: 0.0.0.0:8888
+          pipelines:
+            traces:
+              exporters:
+              - debug
+              receivers:
+              - otlp
+    - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.name
+    - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: metadata.uid
+    - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
+      valueFrom:
+        fieldRef:
+          apiVersion: v1
+          fieldPath: spec.nodeName
+    - name: OTEL_RESOURCE_ATTRIBUTES
+      value: k8s.deployment.name=my-dotnet-musl,k8s.deployment.uid=053a84c6-d1fa-436a-852f-cc5f476f5f97,k8s.namespace.name=chainsaw-still-drake,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-dotnet-musl-84b98ffc4f,k8s.replicaset.uid=42d0a591-3c32-4f35-bfb5-8bb242cc9bfd
+    image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+    imagePullPolicy: IfNotPresent
     name: otc-container
+    ports:
+    - containerPort: 8888
+      name: metrics
+      protocol: TCP
+    - containerPort: 4317
+      name: otlp-grpc
+      protocol: TCP
+    - containerPort: 4318
+      name: otlp-http
+      protocol: TCP
+    resources: {}
+    securityContext:
+      allowPrivilegeEscalation: false
+      capabilities:
+        drop:
+        - ALL
+      runAsNonRoot: true
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-9tq45
+      readOnly: true
   initContainers:
-  - name: opentelemetry-auto-instrumentation-dotnet
+  - command:
+    - cp
+    - -r
+    - /autoinstrumentation/.
+    - /otel-auto-instrumentation-dotnet
+    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:1.2.0
+    imagePullPolicy: IfNotPresent
+    name: opentelemetry-auto-instrumentation-dotnet
+    resources:
+      limits:
+        cpu: 500m
+        memory: 128Mi
+      requests:
+        cpu: 50m
+        memory: 128Mi
+    securityContext:
+      allowPrivilegeEscalation: false
+      capabilities:
+        drop:
+        - ALL
+      runAsNonRoot: true
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+    volumeMounts:
+    - mountPath: /otel-auto-instrumentation-dotnet
+      name: opentelemetry-auto-instrumentation-dotnet
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-9tq45
+      readOnly: true
 status:
   containerStatuses:
-  - name: myapp
+  - containerID: cri-o://57cab60e27388a29c6fdc343b8577547f7f53c7f3deafd0bda1057d7455fd63c
+    image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-dotnet:main
+    imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-dotnet@sha256:84b98a53aa0acad5fca02dbcf2da37df7b3aaaa4a6ebbead2bc06fb715d982ce
+    lastState: {}
+    name: myapp
     ready: true
+    restartCount: 0
     started: true
-  - name: otc-container
+    state:
+      running:
+        startedAt: "2025-02-03T07:51:06Z"
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-9tq45
+      readOnly: true
+      recursiveReadOnly: Disabled
+    - mountPath: /otel-auto-instrumentation-dotnet
+      name: opentelemetry-auto-instrumentation-dotnet
+  - containerID: cri-o://b331b1ef8131ddb0bd5cc46a6652a6d7cd75ea1b5c861e2770632ac3643a5848
+    image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+    imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+    lastState: {}
+    name: otc-container
     ready: true
+    restartCount: 0
     started: true
+    state:
+      running:
+        startedAt: "2025-02-03T07:51:06Z"
+    volumeMounts:
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-9tq45
+      readOnly: true
+      recursiveReadOnly: Disabled
   initContainerStatuses:
-  - name: opentelemetry-auto-instrumentation-dotnet
+  - containerID: cri-o://d78ca22f9de8f719a86ec093509367e73ad0e2ccd4f1a10f453daf4f4a7ad2f5
+    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:1.2.0
+    imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet@sha256:093f0057f30022d0d4f4fbdbd3104c48879c8424d7acec0b46e9cb86a3d95e10
+    lastState: {}
+    name: opentelemetry-auto-instrumentation-dotnet
     ready: true
+    restartCount: 0
+    started: false
+    state:
+      terminated:
+        containerID: cri-o://d78ca22f9de8f719a86ec093509367e73ad0e2ccd4f1a10f453daf4f4a7ad2f5
+        exitCode: 0
+        finishedAt: "2025-02-03T07:51:02Z"
+        reason: Completed
+        startedAt: "2025-02-03T07:51:02Z"
+    volumeMounts:
+    - mountPath: /otel-auto-instrumentation-dotnet
+      name: opentelemetry-auto-instrumentation-dotnet
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-9tq45
+      readOnly: true
+      recursiveReadOnly: Disabled
   phase: Running
l.go:53: | 07:56:59 | instrumentation-dotnet-musl | step-01  | TRY | DONE |
l.go:53: | 07:56:59 | instrumentation-dotnet-musl | step-01  | CATCH | RUN |
l.go:53: | 07:56:59 | instrumentation-dotnet-musl | step-01  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl logs --prefix -l app=my-dotnet-musl -n chainsaw-still-drake --all-containers
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-01  | CMD | LOG |
=== STDOUT
[pod/my-dotnet-musl-84b98ffc4f-r68w2/myapp] warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
[pod/my-dotnet-musl-84b98ffc4f-r68w2/myapp] No XML encryptor configured. Key {df31075c-2dec-47e4-bce4-3d5d1c817b8b} may be persisted to storage in unencrypted form.
[pod/my-dotnet-musl-84b98ffc4f-r68w2/myapp] info: Microsoft.Hosting.Lifetime[14]
[pod/my-dotnet-musl-84b98ffc4f-r68w2/myapp] Now listening on: http://[::]:8080
[pod/my-dotnet-musl-84b98ffc4f-r68w2/myapp] info: Microsoft.Hosting.Lifetime[0]
[pod/my-dotnet-musl-84b98ffc4f-r68w2/myapp] Application started. Press Ctrl+C to shut down.
[pod/my-dotnet-musl-84b98ffc4f-r68w2/myapp] info: Microsoft.Hosting.Lifetime[0]
[pod/my-dotnet-musl-84b98ffc4f-r68w2/myapp] Hosting environment: Production
[pod/my-dotnet-musl-84b98ffc4f-r68w2/myapp] info: Microsoft.Hosting.Lifetime[0]
[pod/my-dotnet-musl-84b98ffc4f-r68w2/myapp] Content root path: /app
[pod/my-dotnet-musl-84b98ffc4f-r68w2/otc-container] 2025-02-03T07:51:06.436Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/my-dotnet-musl-84b98ffc4f-r68w2/otc-container] 2025-02-03T07:51:06.436Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/my-dotnet-musl-84b98ffc4f-r68w2/otc-container] 2025-02-03T07:51:06.436Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"}
[pod/my-dotnet-musl-84b98ffc4f-r68w2/otc-container] 2025-02-03T07:51:06.449Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/my-dotnet-musl-84b98ffc4f-r68w2/otc-container] 2025-02-03T07:51:06.449Z info extensions/extensions.go:39 Starting extensions...
[pod/my-dotnet-musl-84b98ffc4f-r68w2/otc-container] 2025-02-03T07:51:06.449Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-dotnet-musl-84b98ffc4f-r68w2/otc-container] 2025-02-03T07:51:06.449Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
[pod/my-dotnet-musl-84b98ffc4f-r68w2/otc-container] 2025-02-03T07:51:06.450Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-dotnet-musl-84b98ffc4f-r68w2/otc-container] 2025-02-03T07:51:06.450Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
[pod/my-dotnet-musl-84b98ffc4f-r68w2/otc-container] 2025-02-03T07:51:06.450Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-01  | CMD | DONE |
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-01  | CATCH | DONE |
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-01  | CLEANUP | RUN |
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-still-drake/my-dotnet-musl
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-still-drake/my-dotnet-musl
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-still-drake/my-dotnet-musl
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-01  | CLEANUP | DONE |
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-00  | CLEANUP | RUN |
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-still-drake/dotnet
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-still-drake/dotnet
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-still-drake/dotnet
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-still-drake/sidecar
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-still-drake/sidecar
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-still-drake/sidecar
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | step-00  | CLEANUP | DONE |
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-still-drake
l.go:53: | 07:57:00 | instrumentation-dotnet-musl | @cleanup | DELETE | OK |
v1/Namespace @ chainsaw-still-drake l.go:53: | 07:57:06 | instrumentation-dotnet-musl | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-still-drake === CONT chainsaw/instrumentation-dotnet l.go:53: | 07:57:06 | instrumentation-dotnet | @setup  | CREATE | OK | v1/Namespace @ chainsaw-precious-ox l.go:53: | 07:57:06 | instrumentation-dotnet | step-00  | TRY | RUN | l.go:53: | 07:57:06 | instrumentation-dotnet | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-precious-ox openshift.io/sa.scc.uid-range=1000/1000 --overwrite l.go:53: | 07:57:06 | instrumentation-dotnet | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-precious-ox annotated l.go:53: | 07:57:06 | instrumentation-dotnet | step-00  | CMD | DONE | l.go:53: | 07:57:06 | instrumentation-dotnet | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-precious-ox openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite l.go:53: | 07:57:07 | instrumentation-dotnet | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-precious-ox annotated l.go:53: | 07:57:07 | instrumentation-dotnet | step-00  | CMD | DONE | l.go:53: | 07:57:07 | instrumentation-dotnet | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-precious-ox/sidecar l.go:53: | 07:57:07 | instrumentation-dotnet | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-precious-ox/sidecar l.go:53: | 07:57:07 | instrumentation-dotnet | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-precious-ox/sidecar l.go:53: | 07:57:07 | instrumentation-dotnet | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-precious-ox/dotnet l.go:53: | 07:57:07 | instrumentation-dotnet | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-precious-ox/dotnet l.go:53: | 07:57:07 | instrumentation-dotnet | step-00  | APPLY | DONE | 
opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-precious-ox/dotnet l.go:53: | 07:57:07 | instrumentation-dotnet | step-00  | TRY | DONE | l.go:53: | 07:57:07 | instrumentation-dotnet | step-01  | TRY | RUN | l.go:53: | 07:57:07 | instrumentation-dotnet | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-precious-ox/my-dotnet l.go:53: | 07:57:07 | instrumentation-dotnet | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-precious-ox/my-dotnet l.go:53: | 07:57:07 | instrumentation-dotnet | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-precious-ox/my-dotnet l.go:53: | 07:57:07 | instrumentation-dotnet | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-precious-ox/* === NAME chainsaw/managed-reconcile l.go:53: | 07:57:11 | managed-reconcile | step-02  | ASSERT | ERROR | v1/ConfigMap @ chainsaw-live-alien/simplest-collector-ea71c537 === ERROR actual resource not found l.go:53: | 07:57:11 | managed-reconcile | step-02  | TRY | DONE | l.go:53: | 07:57:11 | managed-reconcile | step-01  | CLEANUP | RUN | l.go:53: | 07:57:11 | managed-reconcile | step-01  | DELETE | RUN | v1/ConfigMap @ chainsaw-live-alien/simplest-collector l.go:53: | 07:57:11 | managed-reconcile | step-01  | DELETE | OK | v1/ConfigMap @ chainsaw-live-alien/simplest-collector l.go:53: | 07:57:11 | managed-reconcile | step-01  | DELETE | DONE | v1/ConfigMap @ chainsaw-live-alien/simplest-collector l.go:53: | 07:57:11 | managed-reconcile | step-01  | CLEANUP | DONE | l.go:53: | 07:57:11 | managed-reconcile | step-00  | CLEANUP | RUN | l.go:53: | 07:57:11 | managed-reconcile | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest l.go:53: | 07:57:11 | managed-reconcile | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest l.go:53: | 07:57:11 | managed-reconcile | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-live-alien/simplest l.go:53: | 07:57:11 
| managed-reconcile | step-00  | CLEANUP | DONE | l.go:53: | 07:57:11 | managed-reconcile | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-live-alien l.go:53: | 07:57:11 | managed-reconcile | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-live-alien l.go:53: | 07:57:17 | managed-reconcile | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-live-alien === CONT chainsaw/smoke-simplest-v1beta1 l.go:53: | 07:57:17 | smoke-simplest-v1beta1 | @setup  | CREATE | OK | v1/Namespace @ chainsaw-legible-ladybird l.go:53: | 07:57:17 | smoke-simplest-v1beta1 | step-00  | TRY | RUN | l.go:53: | 07:57:17 | smoke-simplest-v1beta1 | step-00  | APPLY | RUN | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-legible-ladybird/simplest l.go:53: | 07:57:17 | smoke-simplest-v1beta1 | step-00  | CREATE | OK | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-legible-ladybird/simplest l.go:53: | 07:57:17 | smoke-simplest-v1beta1 | step-00  | APPLY | DONE | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-legible-ladybird/simplest l.go:53: | 07:57:17 | smoke-simplest-v1beta1 | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-legible-ladybird/simplest-collector l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-legible-ladybird/simplest-collector l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | step-00  | ASSERT | RUN | v1/Service @ chainsaw-legible-ladybird/simplest-collector-headless l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | step-00  | ASSERT | DONE | v1/Service @ chainsaw-legible-ladybird/simplest-collector-headless l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | step-00  | ASSERT | RUN | v1/Service @ chainsaw-legible-ladybird/simplest-collector l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | step-00  | ASSERT | DONE | v1/Service @ chainsaw-legible-ladybird/simplest-collector l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | step-00  | TRY | DONE | l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | 
step-00  | CLEANUP | RUN | l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | step-00  | DELETE | RUN | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-legible-ladybird/simplest l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | step-00  | DELETE | OK | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-legible-ladybird/simplest l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | step-00  | DELETE | DONE | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-legible-ladybird/simplest l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | step-00  | CLEANUP | DONE | l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-legible-ladybird l.go:53: | 07:57:19 | smoke-simplest-v1beta1 | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-legible-ladybird l.go:53: | 07:57:25 | smoke-simplest-v1beta1 | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-legible-ladybird === CONT chainsaw/versioned-configmaps l.go:53: | 07:57:25 | versioned-configmaps | @setup  | CREATE | OK | v1/Namespace @ chainsaw-heroic-man l.go:53: | 07:57:25 | versioned-configmaps | step-00  | TRY | RUN | l.go:53: | 07:57:25 | versioned-configmaps | step-00  | APPLY | RUN | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-heroic-man/simple l.go:53: | 07:57:26 | versioned-configmaps | step-00  | CREATE | OK | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-heroic-man/simple l.go:53: | 07:57:26 | versioned-configmaps | step-00  | APPLY | DONE | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-heroic-man/simple l.go:53: | 07:57:26 | versioned-configmaps | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-heroic-man/simple-collector === NAME chainsaw/multiple-configmaps l.go:53: | 07:57:44 | multiple-configmaps | step-00  | ASSERT | ERROR | apps/v1/Deployment @ chainsaw-decent-ox/simplest-with-configmaps-collector === ERROR ------------------------------------------------------------------------ 
apps/v1/Deployment/chainsaw-decent-ox/simplest-with-configmaps-collector
------------------------------------------------------------------------
* spec.template.spec.volumes[0].configMap.name: Invalid value: "simplest-with-configmaps-collector-aec5aa11": Expected value: "simplest-with-configmaps-collector-ea71c537"

--- expected
+++ actual
@@ -3,11 +3,41 @@
 metadata:
   name: simplest-with-configmaps-collector
   namespace: chainsaw-decent-ox
+  ownerReferences:
+  - apiVersion: opentelemetry.io/v1beta1
+    blockOwnerDeletion: true
+    controller: true
+    kind: OpenTelemetryCollector
+    name: simplest-with-configmaps
+    uid: 53bd5bd0-b576-4058-8277-4161dab94944
 spec:
   template:
     spec:
       containers:
-      - name: otc-container
+      - args:
+        - --config=/conf/collector.yaml
+        env:
+        - name: POD_NAME
+          valueFrom:
+            fieldRef:
+              apiVersion: v1
+              fieldPath: metadata.name
+        image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+        imagePullPolicy: IfNotPresent
+        name: otc-container
+        ports:
+        - containerPort: 8888
+          name: metrics
+          protocol: TCP
+        - containerPort: 4317
+          name: otlp-grpc
+          protocol: TCP
+        - containerPort: 4318
+          name: otlp-http
+          protocol: TCP
+        resources: {}
+        terminationMessagePath: /dev/termination-log
+        terminationMessagePolicy: File
         volumeMounts:
         - mountPath: /conf
           name: otc-internal
@@ -21,7 +51,7 @@
           items:
           - key: collector.yaml
             path: collector.yaml
-          name: simplest-with-configmaps-collector-ea71c537
+          name: simplest-with-configmaps-collector-aec5aa11
         name: otc-internal
       - configMap:
           defaultMode: 420
l.go:53: | 07:57:44 | multiple-configmaps | step-00  | TRY | DONE |
l.go:53: | 07:57:44 | multiple-configmaps | step-00  | CLEANUP | RUN |
l.go:53: | 07:57:44 | multiple-configmaps | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-decent-ox/simplest-with-configmaps
l.go:53: | 07:57:44 | multiple-configmaps | step-00  | DELETE | OK |
opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-decent-ox/simplest-with-configmaps
l.go:53: | 07:57:44 | multiple-configmaps | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-decent-ox/simplest-with-configmaps
l.go:53: | 07:57:44 | multiple-configmaps | step-00  | DELETE | RUN | v1/ConfigMap @ chainsaw-decent-ox/mount-test2
l.go:53: | 07:57:44 | multiple-configmaps | step-00  | DELETE | OK | v1/ConfigMap @ chainsaw-decent-ox/mount-test2
l.go:53: | 07:57:44 | multiple-configmaps | step-00  | DELETE | DONE | v1/ConfigMap @ chainsaw-decent-ox/mount-test2
l.go:53: | 07:57:44 | multiple-configmaps | step-00  | DELETE | RUN | v1/ConfigMap @ chainsaw-decent-ox/mount-test1
l.go:53: | 07:57:45 | multiple-configmaps | step-00  | DELETE | OK | v1/ConfigMap @ chainsaw-decent-ox/mount-test1
l.go:53: | 07:57:45 | multiple-configmaps | step-00  | DELETE | DONE | v1/ConfigMap @ chainsaw-decent-ox/mount-test1
l.go:53: | 07:57:45 | multiple-configmaps | step-00  | CLEANUP | DONE |
l.go:53: | 07:57:45 | multiple-configmaps | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-decent-ox
l.go:53: | 07:57:45 | multiple-configmaps | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-decent-ox
l.go:53: | 07:57:50 | multiple-configmaps | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-decent-ox
=== CONT chainsaw/statefulset-features
l.go:53: | 07:57:51 | statefulset-features | @setup  | CREATE | OK | v1/Namespace @ chainsaw-gentle-shrew
l.go:53: | 07:57:51 | statefulset-features | step-00  | TRY | RUN |
l.go:53: | 07:57:51 | statefulset-features | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-gentle-shrew/stateful
l.go:53: | 07:57:51 | statefulset-features | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-gentle-shrew/stateful
l.go:53: | 07:57:51 | statefulset-features | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @
chainsaw-gentle-shrew/stateful l.go:53: | 07:57:51 | statefulset-features | step-00  | ASSERT | RUN | apps/v1/StatefulSet @ chainsaw-gentle-shrew/stateful-collector === NAME chainsaw/instrumentation-apache-multicontainer l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | ASSERT | ERROR | v1/Pod @ chainsaw-pro-crane/* === ERROR ---------------------------------------------------------- v1/Pod/chainsaw-pro-crane/my-apache-multi-5fd5d79dcb-qgrns ---------------------------------------------------------- * spec.containers[2].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -7,17 +7,27 @@ sidecar.opentelemetry.io/inject: "true" labels: app: my-apache-multi + name: my-apache-multi-5fd5d79dcb-qgrns namespace: chainsaw-pro-crane + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: my-apache-multi-5fd5d79dcb + uid: 6c7d2136-1522-4c34-bf2c-dc84929f2f4f spec: containers: - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: OTEL_SERVICE_NAME value: my-apache-multi @@ -40,15 +50,46 @@ - name: OTEL_TRACES_SAMPLER_ARG value: "0.25" - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myapp,k8s.deployment.name=my-apache-multi,k8s.namespace.name=chainsaw-pro-crane,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-apache-multi-5fd5d79dcb,service.instance.id=chainsaw-pro-crane.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd:main + imagePullPolicy: Always name: myapp + ports: + - containerPort: 8080 + protocol: TCP + resources: + limits: + cpu: "1" + memory: 500Mi + requests: + cpu: 250m + memory: 100Mi + securityContext: + 
allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-qcznr + readOnly: true + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-apache-agent + - mountPath: /usr/local/apache2/conf + name: otel-apache-conf-dir - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: OTEL_SERVICE_NAME value: my-apache-multi @@ -71,29 +112,294 @@ - name: OTEL_TRACES_SAMPLER_ARG value: "0.25" - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myrabbit,k8s.deployment.name=my-apache-multi,k8s.namespace.name=chainsaw-pro-crane,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-apache-multi-5fd5d79dcb,service.instance.id=chainsaw-pro-crane.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myrabbit,service.version=rabbitmq + image: rabbitmq + imagePullPolicy: Always name: myrabbit + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-qcznr + readOnly: true - args: - - --feature-gates=-component.UseLocalHostAsDefaultHost - --config=env:OTEL_CONFIG + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_CONFIG + value: | + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + exporters: + debug: null + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + traces: + exporters: + - debug + receivers: + - otlp + - name: 
OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.uid + - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.deployment.name=my-apache-multi,k8s.deployment.uid=5791de69-aa47-4ee3-bc21-d05268be169f,k8s.namespace.name=chainsaw-pro-crane,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-apache-multi-5fd5d79dcb,k8s.replicaset.uid=6c7d2136-1522-4c34-bf2c-dc84929f2f4f + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imagePullPolicy: IfNotPresent name: otc-container + ports: + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-qcznr + readOnly: true initContainers: - - name: otel-agent-source-container-clone - - name: otel-agent-attach-apache + - args: + - cp -r /usr/local/apache2/conf/* /opt/opentelemetry-webserver/source-conf + command: + - /bin/sh + - -c + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd:main + imagePullPolicy: Always + name: otel-agent-source-container-clone + ports: + - containerPort: 8080 + protocol: TCP + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 1m + memory: 128Mi + securityContext: + 
allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-qcznr + readOnly: true + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-apache-conf-dir + - args: + - |- + cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo "/opt/opentelemetry-webserver/agent/logs" | sed 's,/,\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo "$OTEL_APACHE_AGENT_CONF" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo -e ' + Include /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf + command: + - /bin/sh + - -c + env: + - name: OTEL_APACHE_AGENT_CONF + value: |2 + + #Load the Otel Webserver SDK + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_common.so + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_resources.so + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_trace.so + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_otlp_recordable.so + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_exporter_ostream_span.so + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_exporter_otlp_grpc.so + #Load the Otel ApacheModule SDK + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_webserver_sdk.so + #Load the Apache Module. 
In this example for Apache 2.4 + #LoadModule otel_apache_module /opt/opentelemetry-webserver/agent/WebServerModule/Apache/libmod_apache_otel.so + #Load the Apache Module. In this example for Apache 2.2 + #LoadModule otel_apache_module /opt/opentelemetry-webserver/agent/WebServerModule/Apache/libmod_apache_otel22.so + LoadModule otel_apache_module /opt/opentelemetry-webserver/agent/WebServerModule/Apache/libmod_apache_otel.so + #Attributes + ApacheModuleEnabled ON + ApacheModuleOtelExporterEndpoint http://localhost:4317 + ApacheModuleOtelMaxQueueSize 4096 + ApacheModuleOtelSpanExporter otlp + ApacheModuleResolveBackends ON + ApacheModuleServiceInstanceId <> + ApacheModuleServiceName my-apache-multi + ApacheModuleServiceNamespace chainsaw-pro-crane + ApacheModuleTraceAsError ON + - name: APACHE_SERVICE_INSTANCE_ID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4 + imagePullPolicy: IfNotPresent + name: otel-agent-attach-apache + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 1m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-apache-agent + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-apache-conf-dir + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-qcznr + readOnly: true status: containerStatuses: - - name: myapp + - containerID: cri-o://7f3613355fa7ae58a55175b1c8ea092e4b18310ce134db64f630ab2b902cc7a3 + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd:main + imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd@sha256:a6f298153a411bb65e27901fc5a5f964005064bd0d3c223053fdd944496c6976 + lastState: 
{} + name: myapp ready: true + restartCount: 0 started: true - - name: myrabbit + state: + running: + startedAt: "2025-02-03T07:53:15Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-qcznr + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-apache-agent + - mountPath: /usr/local/apache2/conf + name: otel-apache-conf-dir + - containerID: cri-o://9bcc26ae6f73228f9984a4b41cd0d53d3f30270b585f691cd022eaf8ccce051e + image: docker.io/library/rabbitmq:latest + imageID: docker.io/library/rabbitmq@sha256:4fc6a2c182ab768f233f602a965684e1db91f0b01562d4efa5ca35de8db148db + lastState: {} + name: myrabbit ready: true + restartCount: 0 started: true - - name: otc-container + state: + running: + startedAt: "2025-02-03T07:53:21Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-qcznr + readOnly: true + recursiveReadOnly: Disabled + - containerID: cri-o://137a7b7957884eeae504d4d7f0840a344809ce1bf8bb6da916b20981ef61af32 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + lastState: {} + name: otc-container ready: true + restartCount: 0 started: true + state: + running: + startedAt: "2025-02-03T07:53:21Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-qcznr + readOnly: true + recursiveReadOnly: Disabled initContainerStatuses: - - name: otel-agent-source-container-clone + - containerID: cri-o://9b3aae5e7bb5ccdff083397e2b356ee54bc608719162221e1ab65959c1aef57c + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd:main + imageID: 
ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd@sha256:a6f298153a411bb65e27901fc5a5f964005064bd0d3c223053fdd944496c6976 + lastState: {} + name: otel-agent-source-container-clone ready: true - - name: otel-agent-attach-apache + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://9b3aae5e7bb5ccdff083397e2b356ee54bc608719162221e1ab65959c1aef57c + exitCode: 0 + finishedAt: "2025-02-03T07:53:13Z" + reason: Completed + startedAt: "2025-02-03T07:53:13Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-qcznr + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-apache-conf-dir + - containerID: cri-o://a0f359c66e3536c9cb4baf57bae507417aa3706b78bfcb2ece19f8256c4a3091 + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4 + imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd@sha256:4275db94ebbf4b9f78762b248ecab219790bbb98c59cf2bf5b3383908b727cfe + lastState: {} + name: otel-agent-attach-apache ready: true + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://a0f359c66e3536c9cb4baf57bae507417aa3706b78bfcb2ece19f8256c4a3091 + exitCode: 0 + finishedAt: "2025-02-03T07:53:14Z" + reason: Completed + startedAt: "2025-02-03T07:53:14Z" + volumeMounts: + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-apache-agent + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-apache-conf-dir + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-qcznr + readOnly: true + recursiveReadOnly: Disabled phase: Running l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | TRY | DONE | l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | CATCH | RUN | l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | CMD | RUN | === 
COMMAND
/usr/local/bin/kubectl logs --prefix -l app=my-apache-multi -n chainsaw-pro-crane --all-containers
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | CMD | LOG |
=== STDOUT
[pod/my-apache-multi-5fd5d79dcb-qgrns/myapp] [Mon Feb 03 07:53:15.757790 2025] [otel_apache:error] [pid 9:tid 9] Config: otel_set_enabled(ON)
[pod/my-apache-multi-5fd5d79dcb-qgrns/myapp] [Mon Feb 03 07:53:15.757792 2025] [otel_apache:error] [pid 9:tid 9] Config: otel_set_otelExporterEndpoint(http://localhost:4317)
[pod/my-apache-multi-5fd5d79dcb-qgrns/myapp] [Mon Feb 03 07:53:15.757794 2025] [otel_apache:error] [pid 9:tid 9] Config: otel_set_otelMaxQueueSize(4096)
[pod/my-apache-multi-5fd5d79dcb-qgrns/myapp] [Mon Feb 03 07:53:15.757796 2025] [otel_apache:error] [pid 9:tid 9] Config: otel_set_otelExporterType(otlp)
[pod/my-apache-multi-5fd5d79dcb-qgrns/myapp] [Mon Feb 03 07:53:15.757798 2025] [otel_apache:error] [pid 9:tid 9] Config: otel_set_resolveBackends(ON)
[pod/my-apache-multi-5fd5d79dcb-qgrns/myapp] [Mon Feb 03 07:53:15.757800 2025] [otel_apache:error] [pid 9:tid 9] Config: otel_set_serviceInstanceId(my-apache-multi-5fd5d79dcb-qgrns)
[pod/my-apache-multi-5fd5d79dcb-qgrns/myapp] [Mon Feb 03 07:53:15.757802 2025] [otel_apache:error] [pid 9:tid 9] Config: otel_set_serviceName(my-apache-multi)
[pod/my-apache-multi-5fd5d79dcb-qgrns/myapp] [Mon Feb 03 07:53:15.757804 2025] [otel_apache:error] [pid 9:tid 9] Config: otel_set_serviceNamespace(chainsaw-pro-crane)
[pod/my-apache-multi-5fd5d79dcb-qgrns/myapp] [Mon Feb 03 07:53:15.757806 2025] [otel_apache:error] [pid 9:tid 9] Config: Context chainsaw-pro-crane:my-apache-multi:my-apache-multi-5fd5d79dcb-qgrns:chainsaw-pro-crane,my-apache-multi,my-apache-multi-5fd5d79dcb-qgrns
[pod/my-apache-multi-5fd5d79dcb-qgrns/myapp] [Mon Feb 03 07:53:15.757809 2025] [otel_apache:error] [pid 9:tid 9] Config: otel_set_traceAsError(ON)
[pod/my-apache-multi-5fd5d79dcb-qgrns/myrabbit] 2025-02-03 07:53:23.832142+00:00 [info] <0.583.0> Resetting node maintenance status
[pod/my-apache-multi-5fd5d79dcb-qgrns/myrabbit] 2025-02-03 07:53:23.864876+00:00 [info] <0.606.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
[pod/my-apache-multi-5fd5d79dcb-qgrns/myrabbit] 2025-02-03 07:53:23.864994+00:00 [info] <0.583.0> Ready to start client connection listeners
[pod/my-apache-multi-5fd5d79dcb-qgrns/myrabbit] 2025-02-03 07:53:23.866293+00:00 [info] <0.650.0> started TCP listener on [::]:5672
[pod/my-apache-multi-5fd5d79dcb-qgrns/myrabbit] completed with 3 plugins.
[pod/my-apache-multi-5fd5d79dcb-qgrns/myrabbit] 2025-02-03 07:53:23.924656+00:00 [info] <0.583.0> Server startup complete; 3 plugins started.
[pod/my-apache-multi-5fd5d79dcb-qgrns/myrabbit] 2025-02-03 07:53:23.924656+00:00 [info] <0.583.0> * rabbitmq_prometheus
[pod/my-apache-multi-5fd5d79dcb-qgrns/myrabbit] 2025-02-03 07:53:23.924656+00:00 [info] <0.583.0> * rabbitmq_management_agent
[pod/my-apache-multi-5fd5d79dcb-qgrns/myrabbit] 2025-02-03 07:53:23.924656+00:00 [info] <0.583.0> * rabbitmq_web_dispatch
[pod/my-apache-multi-5fd5d79dcb-qgrns/myrabbit] 2025-02-03 07:53:24.032464+00:00 [info] <0.10.0> Time to start RabbitMQ: 2578 ms
[pod/my-apache-multi-5fd5d79dcb-qgrns/otc-container] 2025-02-03T07:53:21.390Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/my-apache-multi-5fd5d79dcb-qgrns/otc-container] 2025-02-03T07:53:21.390Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/my-apache-multi-5fd5d79dcb-qgrns/otc-container] 2025-02-03T07:53:21.390Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"}
[pod/my-apache-multi-5fd5d79dcb-qgrns/otc-container] 2025-02-03T07:53:21.403Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/my-apache-multi-5fd5d79dcb-qgrns/otc-container] 2025-02-03T07:53:21.403Z info extensions/extensions.go:39 Starting extensions...
[pod/my-apache-multi-5fd5d79dcb-qgrns/otc-container] 2025-02-03T07:53:21.403Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-apache-multi-5fd5d79dcb-qgrns/otc-container] 2025-02-03T07:53:21.403Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
[pod/my-apache-multi-5fd5d79dcb-qgrns/otc-container] 2025-02-03T07:53:21.404Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-apache-multi-5fd5d79dcb-qgrns/otc-container] 2025-02-03T07:53:21.404Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
[pod/my-apache-multi-5fd5d79dcb-qgrns/otc-container] 2025-02-03T07:53:21.404Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
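For reference, the otc-container startup lines above correspond to a minimal sidecar collector configuration: an OTLP receiver on gRPC 4317 and HTTP 4318, a debug exporter, and self-telemetry metrics on 0.0.0.0:8888. Reconstructed from those messages (the same document appears verbatim as the OTEL_CONFIG env var in the instrumentation-dotnet pod diff later in this run):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug: null
service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  pipelines:
    traces:
      receivers:
      - otlp
      exporters:
      - debug
```

The 0.0.0.0 warnings are expected with this config: the collector flags any wildcard bind and points at its security-best-practices document.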
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | CMD | DONE |
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | CATCH | DONE |
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | CLEANUP | RUN |
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | DELETE | RUN | apps/v1/Deployment @ chainsaw-pro-crane/my-apache-multi
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | DELETE | OK | apps/v1/Deployment @ chainsaw-pro-crane/my-apache-multi
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | DELETE | DONE | apps/v1/Deployment @ chainsaw-pro-crane/my-apache-multi
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-02  | CLEANUP | DONE |
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-01  | CLEANUP | RUN |
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-01  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pro-crane/apache
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-01  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pro-crane/apache
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-01  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-pro-crane/apache
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-01  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pro-crane/sidecar
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-01  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pro-crane/sidecar
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-01  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-pro-crane/sidecar
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | step-01  | CLEANUP | DONE |
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-pro-crane
l.go:53: | 07:59:10 | instrumentation-apache-multicontainer | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-pro-crane
l.go:53: | 07:59:16 | instrumentation-apache-multicontainer | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-pro-crane
=== CONT chainsaw/smoke-targetallocator
l.go:53: | 07:59:17 | smoke-targetallocator | @setup  | CREATE | OK | v1/Namespace @ chainsaw-smoke-targetallocator
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | TRY | RUN |
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | APPLY | RUN | v1/ServiceAccount @ chainsaw-smoke-targetallocator/ta
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | CREATE | OK | v1/ServiceAccount @ chainsaw-smoke-targetallocator/ta
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | APPLY | DONE | v1/ServiceAccount @ chainsaw-smoke-targetallocator/ta
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ smoke-targetallocator
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ smoke-targetallocator
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ smoke-targetallocator
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | APPLY | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-smoke-targetallocator
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | CREATE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-smoke-targetallocator
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | APPLY | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-smoke-targetallocator
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-smoke-targetallocator/stateful
l.go:53: | 07:59:17 |
smoke-targetallocator | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-smoke-targetallocator/stateful
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-smoke-targetallocator/stateful
l.go:53: | 07:59:17 | smoke-targetallocator | step-00  | ASSERT | RUN | apps/v1/StatefulSet @ chainsaw-smoke-targetallocator/stateful-collector
l.go:53: | 07:59:19 | smoke-targetallocator | step-00  | ASSERT | DONE | apps/v1/StatefulSet @ chainsaw-smoke-targetallocator/stateful-collector
l.go:53: | 07:59:19 | smoke-targetallocator | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-smoke-targetallocator/stateful-targetallocator
l.go:53: | 07:59:19 | smoke-targetallocator | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-smoke-targetallocator/stateful-targetallocator
l.go:53: | 07:59:19 | smoke-targetallocator | step-00  | ASSERT | RUN | v1/ConfigMap @ chainsaw-smoke-targetallocator/stateful-targetallocator
l.go:53: | 07:59:19 | smoke-targetallocator | step-00  | ASSERT | DONE | v1/ConfigMap @ chainsaw-smoke-targetallocator/stateful-targetallocator
l.go:53: | 07:59:19 | smoke-targetallocator | step-00  | ASSERT | RUN | v1/ConfigMap @ chainsaw-smoke-targetallocator/stateful-collector-2687b61c
=== NAME chainsaw/instrumentation-dotnet
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-precious-ox/*
=== ERROR
------------------------------------------------------
v1/Pod/chainsaw-precious-ox/my-dotnet-555645964b-dq2gg
------------------------------------------------------
* spec.containers[1].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match

--- expected
+++ actual
@@ -6,17 +6,27 @@
sidecar.opentelemetry.io/inject: "true"
labels:
app: my-dotnet
+ name: my-dotnet-555645964b-dq2gg
namespace: chainsaw-precious-ox
+ ownerReferences:
+ - apiVersion: apps/v1
+ blockOwnerDeletion: true
+ controller: true
+ kind: ReplicaSet
+ name: my-dotnet-555645964b
+ uid: a8ed827a-311e-49c4-8daa-568f59260d24
spec:
containers:
- env:
- name: OTEL_NODE_IP
valueFrom:
fieldRef:
+ apiVersion: v1
fieldPath: status.hostIP
- name: OTEL_POD_IP
valueFrom:
fieldRef:
+ apiVersion: v1
fieldPath: status.podIP
- name: ASPNETCORE_URLS
value: http://+:8080
@@ -61,28 +71,185 @@
- name: OTEL_PROPAGATORS
value: jaeger,b3multi
- name: OTEL_RESOURCE_ATTRIBUTES
+ value: k8s.container.name=myapp,k8s.deployment.name=my-dotnet,k8s.namespace.name=chainsaw-precious-ox,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-dotnet-555645964b,service.instance.id=chainsaw-precious-ox.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main
+ image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-dotnet:main
+ imagePullPolicy: IfNotPresent
name: myapp
- volumeMounts:
- - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+ resources: {}
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsNonRoot: true
+ terminationMessagePath: /dev/termination-log
+ terminationMessagePolicy: File
+ volumeMounts:
+ - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+ name: kube-api-access-mst87
readOnly: true
- mountPath: /otel-auto-instrumentation-dotnet
name: opentelemetry-auto-instrumentation-dotnet
- args:
- - --feature-gates=-component.UseLocalHostAsDefaultHost
- --config=env:OTEL_CONFIG
+ env:
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ apiVersion: v1
+ fieldPath: metadata.name
+ - name: OTEL_CONFIG
+ value: |
+ receivers:
+ otlp:
+ protocols:
+ grpc:
+ endpoint: 0.0.0.0:4317
+ http:
+ endpoint: 0.0.0.0:4318
+ exporters:
+ debug: null
+ service:
+ telemetry:
+ metrics:
+ address: 0.0.0.0:8888
+ pipelines:
+ traces:
+ exporters:
+ - debug
+ receivers:
+ - otlp
+ - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
+ valueFrom:
+ fieldRef:
+ apiVersion: v1
+ fieldPath: metadata.name
+ - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID
+ valueFrom:
+ fieldRef:
+ apiVersion: v1
+ fieldPath: metadata.uid
+ - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
+ valueFrom:
+ fieldRef:
+ apiVersion: v1
+ fieldPath: spec.nodeName
+ - name: OTEL_RESOURCE_ATTRIBUTES
+ value: k8s.deployment.name=my-dotnet,k8s.deployment.uid=d22ac5f7-287b-4f55-9058-335e9a43814c,k8s.namespace.name=chainsaw-precious-ox,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-dotnet-555645964b,k8s.replicaset.uid=a8ed827a-311e-49c4-8daa-568f59260d24
+ image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+ imagePullPolicy: IfNotPresent
name: otc-container
+ ports:
+ - containerPort: 8888
+ name: metrics
+ protocol: TCP
+ - containerPort: 4317
+ name: otlp-grpc
+ protocol: TCP
+ - containerPort: 4318
+ name: otlp-http
+ protocol: TCP
+ resources: {}
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsNonRoot: true
+ terminationMessagePath: /dev/termination-log
+ terminationMessagePolicy: File
+ volumeMounts:
+ - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+ name: kube-api-access-mst87
+ readOnly: true
initContainers:
- - name: opentelemetry-auto-instrumentation-dotnet
+ - command:
+ - cp
+ - -r
+ - /autoinstrumentation/.
+ - /otel-auto-instrumentation-dotnet
+ image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:1.2.0
+ imagePullPolicy: IfNotPresent
+ name: opentelemetry-auto-instrumentation-dotnet
+ resources:
+ limits:
+ cpu: 500m
+ memory: 128Mi
+ requests:
+ cpu: 50m
+ memory: 128Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsNonRoot: true
+ terminationMessagePath: /dev/termination-log
+ terminationMessagePolicy: File
+ volumeMounts:
+ - mountPath: /otel-auto-instrumentation-dotnet
+ name: opentelemetry-auto-instrumentation-dotnet
+ - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+ name: kube-api-access-mst87
+ readOnly: true
status:
containerStatuses:
- - name: myapp
+ - containerID: cri-o://713dc5d83783d7ef4cefe95257a351f77a3bfdec6eccd94b9d51ab7eaf8a8884
+ image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-dotnet:main
+ imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-dotnet@sha256:84b98a53aa0acad5fca02dbcf2da37df7b3aaaa4a6ebbead2bc06fb715d982ce
+ lastState: {}
+ name: myapp
ready: true
+ restartCount: 0
started: true
- - name: otc-container
+ state:
+ running:
+ startedAt: "2025-02-03T07:57:09Z"
+ volumeMounts:
+ - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+ name: kube-api-access-mst87
+ readOnly: true
+ recursiveReadOnly: Disabled
+ - mountPath: /otel-auto-instrumentation-dotnet
+ name: opentelemetry-auto-instrumentation-dotnet
+ - containerID: cri-o://5628d6cc02196697406ad4ff4d8d92e23bd1c9204438c1c23f11936cf6377467
+ image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+ imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+ lastState: {}
+ name: otc-container
ready: true
+ restartCount: 0
started: true
+ state:
+ running:
+ startedAt: "2025-02-03T07:57:09Z"
+ volumeMounts:
+ - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+ name: kube-api-access-mst87
+ readOnly: true
+ recursiveReadOnly: Disabled
initContainerStatuses:
- - name: opentelemetry-auto-instrumentation-dotnet
+ - containerID: cri-o://5a7327c92ffa0e12710d7fae3512b6d55452a10986eb71b3d5ede55c59869591
+ image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:1.2.0
+ imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet@sha256:093f0057f30022d0d4f4fbdbd3104c48879c8424d7acec0b46e9cb86a3d95e10
+ lastState: {}
+ name: opentelemetry-auto-instrumentation-dotnet
ready: true
+ restartCount: 0
+ started: false
+ state:
+ terminated:
+ containerID: cri-o://5a7327c92ffa0e12710d7fae3512b6d55452a10986eb71b3d5ede55c59869591
+ exitCode: 0
+ finishedAt: "2025-02-03T07:57:08Z"
+ reason: Completed
+ startedAt: "2025-02-03T07:57:08Z"
+ volumeMounts:
+ - mountPath: /otel-auto-instrumentation-dotnet
+ name: opentelemetry-auto-instrumentation-dotnet
+ - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+ name: kube-api-access-mst87
+ readOnly: true
+ recursiveReadOnly: Disabled
phase: Running
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | TRY | DONE |
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | CATCH | RUN |
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | CMD | RUN |
=== COMMAND
/usr/local/bin/kubectl logs --prefix -l app=my-dotnet -n chainsaw-precious-ox --all-containers
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | CMD | LOG |
=== STDOUT
[pod/my-dotnet-555645964b-dq2gg/myapp] warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
[pod/my-dotnet-555645964b-dq2gg/myapp] No XML encryptor configured. Key {df356120-9ebf-40ce-abc7-d4947ec07828} may be persisted to storage in unencrypted form.
[pod/my-dotnet-555645964b-dq2gg/myapp] info: Microsoft.Hosting.Lifetime[14]
[pod/my-dotnet-555645964b-dq2gg/myapp] Now listening on: http://[::]:8080
[pod/my-dotnet-555645964b-dq2gg/myapp] info: Microsoft.Hosting.Lifetime[0]
[pod/my-dotnet-555645964b-dq2gg/myapp] Application started. Press Ctrl+C to shut down.
[pod/my-dotnet-555645964b-dq2gg/myapp] info: Microsoft.Hosting.Lifetime[0]
[pod/my-dotnet-555645964b-dq2gg/myapp] Hosting environment: Production
[pod/my-dotnet-555645964b-dq2gg/myapp] info: Microsoft.Hosting.Lifetime[0]
[pod/my-dotnet-555645964b-dq2gg/myapp] Content root path: /app
[pod/my-dotnet-555645964b-dq2gg/otc-container] 2025-02-03T07:57:09.408Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
[pod/my-dotnet-555645964b-dq2gg/otc-container] 2025-02-03T07:57:09.408Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
[pod/my-dotnet-555645964b-dq2gg/otc-container] 2025-02-03T07:57:09.408Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"}
[pod/my-dotnet-555645964b-dq2gg/otc-container] 2025-02-03T07:57:09.420Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4}
[pod/my-dotnet-555645964b-dq2gg/otc-container] 2025-02-03T07:57:09.420Z info extensions/extensions.go:39 Starting extensions...
[pod/my-dotnet-555645964b-dq2gg/otc-container] 2025-02-03T07:57:09.420Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-dotnet-555645964b-dq2gg/otc-container] 2025-02-03T07:57:09.420Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
[pod/my-dotnet-555645964b-dq2gg/otc-container] 2025-02-03T07:57:09.420Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[pod/my-dotnet-555645964b-dq2gg/otc-container] 2025-02-03T07:57:09.420Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
[pod/my-dotnet-555645964b-dq2gg/otc-container] 2025-02-03T07:57:09.420Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data.
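The chainsaw entries interleaved with the container logs above all share one pipe-delimited layout: `l.go:53: | time | test | step | operation | status | resource`. A small triage helper for runs of this size, written against that observed layout (the `Entry` record and field names are mine, not part of chainsaw):

```python
import re
from dataclasses import dataclass
from typing import Optional

# Observed layout: l.go:53: | HH:MM:SS | <test> | <step> | <op> | <status> | [<resource>]
ENTRY = re.compile(
    r"l\.go:\d+:\s*\|\s*(?P<time>\d{2}:\d{2}:\d{2})\s*\|"
    r"\s*(?P<test>[^|]+?)\s*\|\s*(?P<step>[^|]+?)\s*\|"
    r"\s*(?P<op>[^|]+?)\s*\|\s*(?P<status>[^|]+?)\s*\|"
    r"\s*(?P<resource>.*?)\s*$"
)

@dataclass
class Entry:
    time: str
    test: str
    step: str
    op: str
    status: str
    resource: str  # empty for entries with no resource (TRY, CLEANUP, ...)

def parse_entry(line: str) -> Optional[Entry]:
    """Parse one chainsaw log entry; non-matching lines (container logs,
    === headers, diff output) return None."""
    m = ENTRY.match(line.strip())
    return Entry(**m.groupdict()) if m else None

e = parse_entry("l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  "
                "| ASSERT | ERROR | v1/Pod @ chainsaw-precious-ox/*")
```

Filtering a full run down to failures is then `[x for x in map(parse_entry, lines) if x and x.status == "ERROR"]`.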
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | CMD | DONE |
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | CATCH | DONE |
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | CLEANUP | RUN |
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-precious-ox/my-dotnet
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-precious-ox/my-dotnet
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-precious-ox/my-dotnet
l.go:53: | 08:03:07 | instrumentation-dotnet | step-01  | CLEANUP | DONE |
l.go:53: | 08:03:07 | instrumentation-dotnet | step-00  | CLEANUP | RUN |
l.go:53: | 08:03:07 | instrumentation-dotnet | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-precious-ox/dotnet
l.go:53: | 08:03:07 | instrumentation-dotnet | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-precious-ox/dotnet
l.go:53: | 08:03:07 | instrumentation-dotnet | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-precious-ox/dotnet
l.go:53: | 08:03:07 | instrumentation-dotnet | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-precious-ox/sidecar
l.go:53: | 08:03:08 | instrumentation-dotnet | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-precious-ox/sidecar
l.go:53: | 08:03:08 | instrumentation-dotnet | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-precious-ox/sidecar
l.go:53: | 08:03:08 | instrumentation-dotnet | step-00  | CLEANUP | DONE |
l.go:53: | 08:03:08 | instrumentation-dotnet | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-precious-ox
l.go:53: | 08:03:08 | instrumentation-dotnet | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-precious-ox
l.go:53: | 08:03:14 | instrumentation-dotnet | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-precious-ox
=== CONT chainsaw/smoke-statefulset
l.go:53: | 08:03:14 | smoke-statefulset | @setup  | CREATE | OK | v1/Namespace @ chainsaw-intent-buffalo
l.go:53: | 08:03:14 | smoke-statefulset | step-00  | TRY | RUN |
l.go:53: | 08:03:14 | smoke-statefulset | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-intent-buffalo/stateful
l.go:53: | 08:03:14 | smoke-statefulset | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-intent-buffalo/stateful
l.go:53: | 08:03:14 | smoke-statefulset | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-intent-buffalo/stateful
l.go:53: | 08:03:14 | smoke-statefulset | step-00  | ASSERT | RUN | apps/v1/StatefulSet @ chainsaw-intent-buffalo/stateful-collector
l.go:53: | 08:03:16 | smoke-statefulset | step-00  | ASSERT | DONE | apps/v1/StatefulSet @ chainsaw-intent-buffalo/stateful-collector
l.go:53: | 08:03:16 | smoke-statefulset | step-00  | TRY | DONE |
l.go:53: | 08:03:16 | smoke-statefulset | step-00  | CLEANUP | RUN |
l.go:53: | 08:03:16 | smoke-statefulset | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-intent-buffalo/stateful
l.go:53: | 08:03:16 | smoke-statefulset | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-intent-buffalo/stateful
l.go:53: | 08:03:16 | smoke-statefulset | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-intent-buffalo/stateful
l.go:53: | 08:03:16 | smoke-statefulset | step-00  | CLEANUP | DONE |
l.go:53: | 08:03:16 | smoke-statefulset | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-intent-buffalo
l.go:53: | 08:03:16 | smoke-statefulset | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-intent-buffalo
l.go:53: | 08:03:22 | smoke-statefulset | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-intent-buffalo
=== CONT chainsaw/smoke-init-containers
l.go:53: |
08:03:22 | smoke-init-containers | @setup  | CREATE | OK | v1/Namespace @ chainsaw-polite-snapper
l.go:53: | 08:03:22 | smoke-init-containers | step-00  | TRY | RUN |
l.go:53: | 08:03:22 | smoke-init-containers | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-polite-snapper/simplest
l.go:53: | 08:03:22 | smoke-init-containers | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-polite-snapper/simplest
l.go:53: | 08:03:22 | smoke-init-containers | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-polite-snapper/simplest
l.go:53: | 08:03:22 | smoke-init-containers | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-polite-snapper/simplest-collector
=== NAME chainsaw/versioned-configmaps
l.go:53: | 08:03:26 | versioned-configmaps | step-00  | ASSERT | ERROR | apps/v1/Deployment @ chainsaw-heroic-man/simple-collector
=== ERROR
-------------------------------------------------------
apps/v1/Deployment/chainsaw-heroic-man/simple-collector
-------------------------------------------------------
* spec.template.spec.volumes[0].configMap.name: Invalid value: "simple-collector-de9b8847": Expected value: "simple-collector-d6f40475"

--- expected
+++ actual
@@ -3,12 +3,23 @@
metadata:
name: simple-collector
namespace: chainsaw-heroic-man
+ ownerReferences:
+ - apiVersion: opentelemetry.io/v1beta1
+ blockOwnerDeletion: true
+ controller: true
+ kind: OpenTelemetryCollector
+ name: simple
+ uid: 4392ab63-25f7-4291-bee6-5e80a15597e6
spec:
template:
spec:
volumes:
- configMap:
- name: simple-collector-d6f40475
+ defaultMode: 420
+ items:
+ - key: collector.yaml
+ path: collector.yaml
+ name: simple-collector-de9b8847
name: otc-internal
status:
readyReplicas: 1
l.go:53: | 08:03:26 | versioned-configmaps | step-00  | TRY | DONE |
l.go:53: | 08:03:26 | versioned-configmaps | step-00  | CLEANUP | RUN |
l.go:53: | 08:03:26 | versioned-configmaps | step-00  | DELETE | RUN |
opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-heroic-man/simple
l.go:53: | 08:03:26 | versioned-configmaps | step-00  | DELETE | OK | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-heroic-man/simple
l.go:53: | 08:03:26 | versioned-configmaps | step-00  | DELETE | DONE | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-heroic-man/simple
l.go:53: | 08:03:26 | versioned-configmaps | step-00  | CLEANUP | DONE |
l.go:53: | 08:03:26 | versioned-configmaps | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-heroic-man
l.go:53: | 08:03:26 | versioned-configmaps | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-heroic-man
=== NAME chainsaw/smoke-init-containers
l.go:53: | 08:03:31 | smoke-init-containers | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-polite-snapper/simplest-collector
l.go:53: | 08:03:31 | smoke-init-containers | step-00  | ASSERT | RUN | v1/Service @ chainsaw-polite-snapper/simplest-collector-headless
l.go:53: | 08:03:31 | smoke-init-containers | step-00  | ASSERT | DONE | v1/Service @ chainsaw-polite-snapper/simplest-collector-headless
l.go:53: | 08:03:31 | smoke-init-containers | step-00  | ASSERT | RUN | v1/Service @ chainsaw-polite-snapper/simplest-collector
l.go:53: | 08:03:31 | smoke-init-containers | step-00  | ASSERT | DONE | v1/Service @ chainsaw-polite-snapper/simplest-collector
l.go:53: | 08:03:31 | smoke-init-containers | step-00  | TRY | DONE |
l.go:53: | 08:03:31 | smoke-init-containers | step-00  | CLEANUP | RUN |
l.go:53: | 08:03:31 | smoke-init-containers | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-polite-snapper/simplest
l.go:53: | 08:03:31 | smoke-init-containers | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-polite-snapper/simplest
l.go:53: | 08:03:31 | smoke-init-containers | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-polite-snapper/simplest
l.go:53: | 08:03:31 | smoke-init-containers | step-00  | CLEANUP | DONE |
l.go:53: | 08:03:31 | smoke-init-containers | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-polite-snapper
l.go:53: | 08:03:31 | smoke-init-containers | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-polite-snapper
=== NAME chainsaw/versioned-configmaps
l.go:53: | 08:03:32 | versioned-configmaps | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-heroic-man
=== CONT chainsaw/smoke-pod-labels
l.go:53: | 08:03:32 | smoke-pod-labels | @setup  | CREATE | OK | v1/Namespace @ chainsaw-beloved-boa
l.go:53: | 08:03:32 | smoke-pod-labels | step-00  | TRY | RUN |
l.go:53: | 08:03:32 | smoke-pod-labels | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-beloved-boa/testlabel
l.go:53: | 08:03:32 | smoke-pod-labels | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-beloved-boa/testlabel
l.go:53: | 08:03:32 | smoke-pod-labels | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-beloved-boa/testlabel
l.go:53: | 08:03:32 | smoke-pod-labels | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-beloved-boa/testlabel-collector
l.go:53: | 08:03:36 | smoke-pod-labels | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-beloved-boa/testlabel-collector
l.go:53: | 08:03:36 | smoke-pod-labels | step-00  | TRY | DONE |
l.go:53: | 08:03:36 | smoke-pod-labels | step-00  | CLEANUP | RUN |
l.go:53: | 08:03:36 | smoke-pod-labels | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-beloved-boa/testlabel
l.go:53: | 08:03:36 | smoke-pod-labels | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-beloved-boa/testlabel
l.go:53: | 08:03:36 | smoke-pod-labels | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-beloved-boa/testlabel
l.go:53: | 08:03:36 | smoke-pod-labels | step-00  | CLEANUP | DONE |
l.go:53: | 08:03:36 | smoke-pod-labels | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-beloved-boa
l.go:53: | 08:03:36 | smoke-pod-labels | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-beloved-boa
=== NAME chainsaw/smoke-init-containers
l.go:53: | 08:03:37 | smoke-init-containers | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-polite-snapper
=== CONT chainsaw/smoke-pod-annotations
l.go:53: | 08:03:37 | smoke-pod-annotations | @setup  | CREATE | OK | v1/Namespace @ chainsaw-civil-heron
l.go:53: | 08:03:37 | smoke-pod-annotations | step-00  | TRY | RUN |
l.go:53: | 08:03:37 | smoke-pod-annotations | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-civil-heron/pa
l.go:53: | 08:03:37 | smoke-pod-annotations | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-civil-heron/pa
l.go:53: | 08:03:37 | smoke-pod-annotations | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-civil-heron/pa
l.go:53: | 08:03:37 | smoke-pod-annotations | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-civil-heron/pa-collector
l.go:53: | 08:03:39 | smoke-pod-annotations | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-civil-heron/pa-collector
l.go:53: | 08:03:39 | smoke-pod-annotations | step-00  | TRY | DONE |
l.go:53: | 08:03:39 | smoke-pod-annotations | step-00  | CLEANUP | RUN |
l.go:53: | 08:03:39 | smoke-pod-annotations | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-civil-heron/pa
l.go:53: | 08:03:39 | smoke-pod-annotations | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-civil-heron/pa
l.go:53: | 08:03:39 | smoke-pod-annotations | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-civil-heron/pa
l.go:53: | 08:03:39 | smoke-pod-annotations | step-00  | CLEANUP | DONE |
l.go:53: | 08:03:39 | smoke-pod-annotations | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-civil-heron
l.go:53: | 08:03:39 | smoke-pod-annotations | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-civil-heron
=== NAME chainsaw/smoke-pod-labels
l.go:53: | 08:03:42 | smoke-pod-labels | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-beloved-boa
=== CONT chainsaw/smoke-sidecar
l.go:53: | 08:03:42 | smoke-sidecar | @setup  | CREATE | OK | v1/Namespace @ chainsaw-fleet-ape
l.go:53: | 08:03:42 | smoke-sidecar | step-00  | TRY | RUN |
l.go:53: | 08:03:42 | smoke-sidecar | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fleet-ape/sidecar-for-my-app
l.go:53: | 08:03:42 | smoke-sidecar | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fleet-ape/sidecar-for-my-app
l.go:53: | 08:03:42 | smoke-sidecar | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fleet-ape/sidecar-for-my-app
l.go:53: | 08:03:42 | smoke-sidecar | step-00  | TRY | DONE |
l.go:53: | 08:03:42 | smoke-sidecar | step-01  | TRY | RUN |
l.go:53: | 08:03:42 | smoke-sidecar | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-fleet-ape/my-deployment-with-sidecar
l.go:53: | 08:03:42 | smoke-sidecar | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-fleet-ape/my-deployment-with-sidecar
l.go:53: | 08:03:42 | smoke-sidecar | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-fleet-ape/my-deployment-with-sidecar
l.go:53: | 08:03:42 | smoke-sidecar | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-fleet-ape/*
l.go:53: | 08:03:44 | smoke-sidecar | step-01  | ASSERT | DONE | v1/Pod @ chainsaw-fleet-ape/*
l.go:53: | 08:03:44 | smoke-sidecar | step-01  | TRY | DONE |
l.go:53: | 08:03:44 | smoke-sidecar | step-01  | CLEANUP | RUN |
l.go:53: | 08:03:44 | smoke-sidecar | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-fleet-ape/my-deployment-with-sidecar
l.go:53: | 08:03:44 | smoke-sidecar | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-fleet-ape/my-deployment-with-sidecar
l.go:53: | 08:03:44 | smoke-sidecar | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-fleet-ape/my-deployment-with-sidecar
l.go:53: | 08:03:44 | smoke-sidecar | step-01  | CLEANUP | DONE |
l.go:53: | 08:03:44 | smoke-sidecar | step-00  | CLEANUP | RUN |
l.go:53: | 08:03:44 | smoke-sidecar | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fleet-ape/sidecar-for-my-app
l.go:53: | 08:03:44 | smoke-sidecar | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fleet-ape/sidecar-for-my-app
l.go:53: | 08:03:44 | smoke-sidecar | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fleet-ape/sidecar-for-my-app
l.go:53: | 08:03:44 | smoke-sidecar | step-00  | CLEANUP | DONE |
l.go:53: | 08:03:44 | smoke-sidecar | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-fleet-ape
l.go:53: | 08:03:44 | smoke-sidecar | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-fleet-ape
=== NAME chainsaw/smoke-pod-annotations
l.go:53: | 08:03:45 | smoke-pod-annotations | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-civil-heron
=== CONT chainsaw/smoke-simplest
l.go:53: | 08:03:45 | smoke-simplest | @setup  | CREATE | OK | v1/Namespace @ chainsaw-usable-hound
l.go:53: | 08:03:45 | smoke-simplest | step-00  | TRY | RUN |
l.go:53: | 08:03:45 | smoke-simplest | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-usable-hound/simplest
l.go:53: | 08:03:45 | smoke-simplest | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-usable-hound/simplest
l.go:53: | 08:03:45 | smoke-simplest | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-usable-hound/simplest
l.go:53: | 08:03:45 | smoke-simplest | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-usable-hound/simplest-collector
l.go:53: | 08:03:47 | smoke-simplest | step-00  | ASSERT |
DONE | apps/v1/Deployment @ chainsaw-usable-hound/simplest-collector
l.go:53: | 08:03:47 | smoke-simplest | step-00  | ASSERT | RUN | v1/Service @ chainsaw-usable-hound/simplest-collector-headless
l.go:53: | 08:03:47 | smoke-simplest | step-00  | ASSERT | DONE | v1/Service @ chainsaw-usable-hound/simplest-collector-headless
l.go:53: | 08:03:47 | smoke-simplest | step-00  | ASSERT | RUN | v1/Service @ chainsaw-usable-hound/simplest-collector
l.go:53: | 08:03:47 | smoke-simplest | step-00  | ASSERT | DONE | v1/Service @ chainsaw-usable-hound/simplest-collector
l.go:53: | 08:03:47 | smoke-simplest | step-00  | TRY | DONE |
l.go:53: | 08:03:47 | smoke-simplest | step-00  | CLEANUP | RUN |
l.go:53: | 08:03:47 | smoke-simplest | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-usable-hound/simplest
l.go:53: | 08:03:47 | smoke-simplest | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-usable-hound/simplest
l.go:53: | 08:03:47 | smoke-simplest | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-usable-hound/simplest
l.go:53: | 08:03:47 | smoke-simplest | step-00  | CLEANUP | DONE |
l.go:53: | 08:03:47 | smoke-simplest | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-usable-hound
l.go:53: | 08:03:47 | smoke-simplest | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-usable-hound
=== NAME chainsaw/statefulset-features
l.go:53: | 08:03:51 | statefulset-features | step-00  | ASSERT | ERROR | apps/v1/StatefulSet @ chainsaw-gentle-shrew/stateful-collector
=== ERROR
------------------------------------------------------------
apps/v1/StatefulSet/chainsaw-gentle-shrew/stateful-collector
------------------------------------------------------------
* spec.template.spec.containers[0].args: Invalid value: []interface {}{"--config=/conf/collector.yaml"}: lengths of slices don't match
* spec.template.spec.volumes[0].configMap.name: Invalid value: "stateful-collector-52b86f05": Expected value: "stateful-collector-81dcbcb5"
--- expected
+++ actual
@@ -3,6 +3,13 @@
 metadata:
   name: stateful-collector
   namespace: chainsaw-gentle-shrew
+  ownerReferences:
+  - apiVersion: opentelemetry.io/v1beta1
+    blockOwnerDeletion: true
+    controller: true
+    kind: OpenTelemetryCollector
+    name: stateful
+    uid: dfbf2da1-0c33-440a-81a1-33774a0e3ed2
 spec:
   podManagementPolicy: Parallel
   template:
@@ -10,8 +17,25 @@
       containers:
       - args:
         - --config=/conf/collector.yaml
-        - --feature-gates=-component.UseLocalHostAsDefaultHost
+        env:
+        - name: POD_NAME
+          valueFrom:
+            fieldRef:
+              apiVersion: v1
+              fieldPath: metadata.name
+        image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87
+        imagePullPolicy: IfNotPresent
         name: otc-container
+        ports:
+        - containerPort: 14250
+          name: jaeger-grpc
+          protocol: TCP
+        - containerPort: 8888
+          name: metrics
+          protocol: TCP
+        resources: {}
+        terminationMessagePath: /dev/termination-log
+        terminationMessagePolicy: File
         volumeMounts:
         - mountPath: /conf
           name: otc-internal
@@ -19,10 +43,11 @@
           name: testvolume
       volumes:
       - configMap:
+          defaultMode: 420
           items:
           - key: collector.yaml
             path: collector.yaml
-          name: stateful-collector-81dcbcb5
+          name: stateful-collector-52b86f05
         name: otc-internal
       - emptyDir: {}
         name: testvolume
@@ -30,6 +55,7 @@
   - apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
+      creationTimestamp: null
       name: testvolume
     spec:
       accessModes:
@@ -38,6 +64,8 @@
         requests:
           storage: 1Gi
       volumeMode: Filesystem
+    status:
+      phase: Pending
 status:
   readyReplicas: 3
   replicas: 3
l.go:53: | 08:03:51 | statefulset-features | step-00  | TRY | DONE |
l.go:53: | 08:03:51 | statefulset-features | step-00  | CLEANUP | RUN |
l.go:53: | 08:03:51 | statefulset-features | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-gentle-shrew/stateful
l.go:53: | 08:03:51 | statefulset-features | step-00  | DELETE | OK |
opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-gentle-shrew/stateful l.go:53: | 08:03:51 | statefulset-features | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-gentle-shrew/stateful l.go:53: | 08:03:51 | statefulset-features | step-00  | CLEANUP | DONE | l.go:53: | 08:03:51 | statefulset-features | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-gentle-shrew l.go:53: | 08:03:51 | statefulset-features | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-gentle-shrew === NAME chainsaw/smoke-simplest l.go:53: | 08:03:53 | smoke-simplest | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-usable-hound === CONT chainsaw/smoke-sidecar-other-namespace l.go:53: | 08:03:53 | smoke-sidecar-other-namespace | @setup  | CREATE | OK | v1/Namespace @ chainsaw-noble-marmot l.go:53: | 08:03:53 | smoke-sidecar-other-namespace | step-00  | TRY | RUN | l.go:53: | 08:03:53 | smoke-sidecar-other-namespace | step-00  | APPLY | RUN | v1/Namespace @ kuttl-otel-sidecar-other-namespace l.go:53: | 08:03:54 | smoke-sidecar-other-namespace | step-00  | CREATE | OK | v1/Namespace @ kuttl-otel-sidecar-other-namespace l.go:53: | 08:03:54 | smoke-sidecar-other-namespace | step-00  | APPLY | DONE | v1/Namespace @ kuttl-otel-sidecar-other-namespace l.go:53: | 08:03:54 | smoke-sidecar-other-namespace | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ kuttl-otel-sidecar-other-namespace/sidecar-for-my-app l.go:53: | 08:03:54 | smoke-sidecar-other-namespace | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ kuttl-otel-sidecar-other-namespace/sidecar-for-my-app l.go:53: | 08:03:54 | smoke-sidecar-other-namespace | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ kuttl-otel-sidecar-other-namespace/sidecar-for-my-app l.go:53: | 08:03:54 | smoke-sidecar-other-namespace | step-00  | TRY | DONE | l.go:53: | 08:03:54 | smoke-sidecar-other-namespace | step-01  | TRY | RUN | 
l.go:53: | 08:03:54 | smoke-sidecar-other-namespace | step-01  | APPLY | RUN | apps/v1/Deployment @ kuttl-otel-sidecar-other-namespace/my-deployment-with-sidecar l.go:53: | 08:03:54 | smoke-sidecar-other-namespace | step-01  | CREATE | OK | apps/v1/Deployment @ kuttl-otel-sidecar-other-namespace/my-deployment-with-sidecar l.go:53: | 08:03:54 | smoke-sidecar-other-namespace | step-01  | APPLY | DONE | apps/v1/Deployment @ kuttl-otel-sidecar-other-namespace/my-deployment-with-sidecar l.go:53: | 08:03:54 | smoke-sidecar-other-namespace | step-01  | ASSERT | RUN | v1/Pod @ kuttl-otel-sidecar-other-namespace/* l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-01  | ASSERT | DONE | v1/Pod @ kuttl-otel-sidecar-other-namespace/* l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-01  | TRY | DONE | l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-01  | CLEANUP | RUN | l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-01  | DELETE | RUN | apps/v1/Deployment @ kuttl-otel-sidecar-other-namespace/my-deployment-with-sidecar l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-01  | DELETE | OK | apps/v1/Deployment @ kuttl-otel-sidecar-other-namespace/my-deployment-with-sidecar l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-01  | DELETE | DONE | apps/v1/Deployment @ kuttl-otel-sidecar-other-namespace/my-deployment-with-sidecar l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-01  | CLEANUP | DONE | l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-00  | CLEANUP | RUN | l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ kuttl-otel-sidecar-other-namespace/sidecar-for-my-app l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ kuttl-otel-sidecar-other-namespace/sidecar-for-my-app l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-00  | DELETE | DONE 
| opentelemetry.io/v1alpha1/OpenTelemetryCollector @ kuttl-otel-sidecar-other-namespace/sidecar-for-my-app l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-00  | DELETE | RUN | v1/Namespace @ kuttl-otel-sidecar-other-namespace l.go:53: | 08:03:56 | smoke-sidecar-other-namespace | step-00  | DELETE | OK | v1/Namespace @ kuttl-otel-sidecar-other-namespace === NAME chainsaw/statefulset-features l.go:53: | 08:04:04 | statefulset-features | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-gentle-shrew === CONT chainsaw/smoke-pod-dns-config l.go:53: | 08:04:04 | smoke-pod-dns-config | @setup  | CREATE | OK | v1/Namespace @ chainsaw-alert-orca l.go:53: | 08:04:04 | smoke-pod-dns-config | step-00  | TRY | RUN | l.go:53: | 08:04:04 | smoke-pod-dns-config | step-00  | APPLY | RUN | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-alert-orca/poddnsconfig l.go:53: | 08:04:04 | smoke-pod-dns-config | step-00  | CREATE | OK | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-alert-orca/poddnsconfig l.go:53: | 08:04:04 | smoke-pod-dns-config | step-00  | APPLY | DONE | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-alert-orca/poddnsconfig l.go:53: | 08:04:04 | smoke-pod-dns-config | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-alert-orca/poddnsconfig-collector l.go:53: | 08:04:05 | smoke-pod-dns-config | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-alert-orca/poddnsconfig-collector l.go:53: | 08:04:05 | smoke-pod-dns-config | step-00  | TRY | DONE | l.go:53: | 08:04:05 | smoke-pod-dns-config | step-00  | CLEANUP | RUN | l.go:53: | 08:04:05 | smoke-pod-dns-config | step-00  | DELETE | RUN | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-alert-orca/poddnsconfig l.go:53: | 08:04:05 | smoke-pod-dns-config | step-00  | DELETE | OK | opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-alert-orca/poddnsconfig l.go:53: | 08:04:05 | smoke-pod-dns-config | step-00  | DELETE | DONE | 
opentelemetry.io/v1beta1/OpenTelemetryCollector @ chainsaw-alert-orca/poddnsconfig
l.go:53: | 08:04:05 | smoke-pod-dns-config | step-00  | CLEANUP | DONE |
l.go:53: | 08:04:05 | smoke-pod-dns-config | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-alert-orca
l.go:53: | 08:04:05 | smoke-pod-dns-config | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-alert-orca
l.go:53: | 08:04:11 | smoke-pod-dns-config | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-alert-orca
=== CONT chainsaw/ingress-subdomains
l.go:53: | 08:04:11 | ingress-subdomains | @setup  | CREATE | OK | v1/Namespace @ chainsaw-happy-roughy
l.go:53: | 08:04:11 | ingress-subdomains | step-00  | TRY | RUN |
l.go:53: | 08:04:12 | ingress-subdomains | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-happy-roughy/simplest
l.go:53: | 08:04:12 | ingress-subdomains | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-happy-roughy/simplest
l.go:53: | 08:04:12 | ingress-subdomains | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-happy-roughy/simplest
l.go:53: | 08:04:12 | ingress-subdomains | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-happy-roughy/simplest-collector
l.go:53: | 08:04:13 | ingress-subdomains | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-happy-roughy/simplest-collector
l.go:53: | 08:04:13 | ingress-subdomains | step-00  | ASSERT | RUN | networking.k8s.io/v1/Ingress @ chainsaw-happy-roughy/simplest-ingress
l.go:53: | 08:04:13 | ingress-subdomains | step-00  | ASSERT | DONE | networking.k8s.io/v1/Ingress @ chainsaw-happy-roughy/simplest-ingress
l.go:53: | 08:04:13 | ingress-subdomains | step-00  | TRY | DONE |
l.go:53: | 08:04:13 | ingress-subdomains | step-01  | TRY | RUN |
l.go:53: | 08:04:13 | ingress-subdomains | step-01  | SCRIPT | RUN |
=== COMMAND
/usr/bin/sh -c
#!/bin/bash
set -ex
# Export empty payload and check of collector accepted it with 2xx status code
for i in {1..40}; do curl --fail -ivX POST --resolve 'otlp-http.test.otel:80:127.0.0.1' http://otlp-http.test.otel:80/v1/traces -H "Content-Type: application/json" -d '{}' && break || sleep 1; done
l.go:53: | 08:04:14 | ingress-subdomains | step-01  | SCRIPT | LOG |
=== STDERR
+ curl --fail -ivX POST --resolve otlp-http.test.otel:80:127.0.0.1 http://otlp-http.test.otel:80/v1/traces -H Content-Type: application/json -d {}
Note: Unnecessary use of -X or --request, POST is already inferred.
* Added otlp-http.test.otel:80:127.0.0.1 to DNS cache
* Hostname otlp-http.test.otel was found in DNS cache
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* Trying 127.0.0.1:80...
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to otlp-http.test.otel port 80 after 0 ms: Couldn't connect to server
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Closing connection 0
curl: (7) Failed to connect to otlp-http.test.otel port 80 after 0 ms: Couldn't connect to server
+ sleep 1
l.go:53: | 08:04:14 | ingress-subdomains | step-01  | SCRIPT | DONE |
l.go:53: | 08:04:14 | ingress-subdomains | step-01  | TRY | DONE |
l.go:53: | 08:04:14 | ingress-subdomains | step-00  | CLEANUP | RUN |
l.go:53: | 08:04:14 | ingress-subdomains | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-happy-roughy/simplest
l.go:53: | 08:04:14 | ingress-subdomains | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-happy-roughy/simplest
l.go:53: | 08:04:14 | ingress-subdomains | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-happy-roughy/simplest
l.go:53: | 08:04:14 | ingress-subdomains | step-00  | CLEANUP | DONE |
l.go:53: | 08:04:14 | ingress-subdomains | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-happy-roughy
l.go:53: | 08:04:14 | ingress-subdomains | @cleanup | DELETE |
OK | v1/Namespace @ chainsaw-happy-roughy l.go:53: | 08:04:20 | ingress-subdomains | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-happy-roughy === CONT chainsaw/smoke-daemonset l.go:53: | 08:04:20 | smoke-daemonset | @setup  | CREATE | OK | v1/Namespace @ chainsaw-refined-unicorn l.go:53: | 08:04:20 | smoke-daemonset | step-00  | TRY | RUN | l.go:53: | 08:04:21 | smoke-daemonset | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-refined-unicorn/daemonset-test l.go:53: | 08:04:21 | smoke-daemonset | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-refined-unicorn/daemonset-test l.go:53: | 08:04:21 | smoke-daemonset | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-refined-unicorn/daemonset-test l.go:53: | 08:04:21 | smoke-daemonset | step-00  | ASSERT | RUN | apps/v1/DaemonSet @ chainsaw-refined-unicorn/daemonset-test-collector l.go:53: | 08:04:21 | smoke-daemonset | step-00  | ASSERT | DONE | apps/v1/DaemonSet @ chainsaw-refined-unicorn/daemonset-test-collector l.go:53: | 08:04:21 | smoke-daemonset | step-00  | TRY | DONE | l.go:53: | 08:04:21 | smoke-daemonset | step-00  | CLEANUP | RUN | l.go:53: | 08:04:21 | smoke-daemonset | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-refined-unicorn/daemonset-test l.go:53: | 08:04:21 | smoke-daemonset | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-refined-unicorn/daemonset-test l.go:53: | 08:04:21 | smoke-daemonset | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-refined-unicorn/daemonset-test l.go:53: | 08:04:21 | smoke-daemonset | step-00  | CLEANUP | DONE | l.go:53: | 08:04:21 | smoke-daemonset | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-refined-unicorn l.go:53: | 08:04:21 | smoke-daemonset | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-refined-unicorn l.go:53: | 08:04:27 | 
smoke-daemonset | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-refined-unicorn === CONT chainsaw/instrumentation-apache-httpd l.go:53: | 08:04:27 | instrumentation-apache-httpd | @setup  | CREATE | OK | v1/Namespace @ chainsaw-settling-cat l.go:53: | 08:04:27 | instrumentation-apache-httpd | step-00  | TRY | RUN | l.go:53: | 08:04:27 | instrumentation-apache-httpd | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-settling-cat openshift.io/sa.scc.uid-range=1000/1000 --overwrite l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-settling-cat annotated l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | CMD | DONE | l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-settling-cat openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-settling-cat annotated l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | CMD | DONE | l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settling-cat/sidecar l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settling-cat/sidecar l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settling-cat/sidecar l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settling-cat/apache l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settling-cat/apache l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | 
APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settling-cat/apache l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-00  | TRY | DONE | l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-01  | TRY | RUN | l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-settling-cat/my-apache l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-settling-cat/my-apache l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-settling-cat/my-apache l.go:53: | 08:04:28 | instrumentation-apache-httpd | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-settling-cat/* === NAME chainsaw/smoke-sidecar l.go:53: | 08:04:32 | smoke-sidecar | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-fleet-ape === CONT chainsaw/smoke-shareprocessnamespace l.go:53: | 08:04:32 | smoke-shareprocessnamespace | @setup  | CREATE | OK | v1/Namespace @ chainsaw-warm-martin l.go:53: | 08:04:32 | smoke-shareprocessnamespace | step-00  | TRY | RUN | l.go:53: | 08:04:32 | smoke-shareprocessnamespace | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-warm-martin/test-shareprocns l.go:53: | 08:04:32 | smoke-shareprocessnamespace | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-warm-martin/test-shareprocns l.go:53: | 08:04:32 | smoke-shareprocessnamespace | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-warm-martin/test-shareprocns l.go:53: | 08:04:32 | smoke-shareprocessnamespace | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-warm-martin/test-shareprocns-collector l.go:53: | 08:04:32 | smoke-shareprocessnamespace | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-warm-martin/test-shareprocns-collector l.go:53: | 08:04:32 | smoke-shareprocessnamespace | step-00  | TRY | DONE | l.go:53: | 
08:04:32 | smoke-shareprocessnamespace | step-00  | CLEANUP | RUN | l.go:53: | 08:04:32 | smoke-shareprocessnamespace | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-warm-martin/test-shareprocns l.go:53: | 08:04:32 | smoke-shareprocessnamespace | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-warm-martin/test-shareprocns l.go:53: | 08:04:32 | smoke-shareprocessnamespace | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-warm-martin/test-shareprocns l.go:53: | 08:04:32 | smoke-shareprocessnamespace | step-00  | CLEANUP | DONE | l.go:53: | 08:04:32 | smoke-shareprocessnamespace | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-warm-martin l.go:53: | 08:04:32 | smoke-shareprocessnamespace | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-warm-martin l.go:53: | 08:04:39 | smoke-shareprocessnamespace | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-warm-martin === CONT chainsaw/ingress l.go:53: | 08:04:39 | ingress | @setup  | CREATE | OK | v1/Namespace @ chainsaw-fast-reindeer l.go:53: | 08:04:39 | ingress | step-00  | TRY | RUN | l.go:53: | 08:04:39 | ingress | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fast-reindeer/otel-simplest l.go:53: | 08:04:39 | ingress | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fast-reindeer/otel-simplest l.go:53: | 08:04:39 | ingress | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fast-reindeer/otel-simplest l.go:53: | 08:04:39 | ingress | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-fast-reindeer/otel-simplest-collector l.go:53: | 08:04:42 | ingress | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-fast-reindeer/otel-simplest-collector l.go:53: | 08:04:42 | ingress | step-00  | ASSERT | RUN | networking.k8s.io/v1/Ingress @ chainsaw-fast-reindeer/otel-simplest-ingress l.go:53: | 
08:04:42 | ingress | step-00  | ASSERT | DONE | networking.k8s.io/v1/Ingress @ chainsaw-fast-reindeer/otel-simplest-ingress l.go:53: | 08:04:42 | ingress | step-00  | TRY | DONE | l.go:53: | 08:04:42 | ingress | step-01  | TRY | RUN | l.go:53: | 08:04:42 | ingress | step-01  | PATCH | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fast-reindeer/otel-simplest l.go:53: | 08:04:42 | ingress | step-01  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fast-reindeer/otel-simplest l.go:53: | 08:04:42 | ingress | step-01  | PATCH | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fast-reindeer/otel-simplest l.go:53: | 08:04:42 | ingress | step-01  | ERROR | RUN | networking.k8s.io/v1/Ingress @ chainsaw-fast-reindeer/otel-simplest-ingress === NAME chainsaw/smoke-sidecar-other-namespace l.go:53: | 08:04:42 | smoke-sidecar-other-namespace | step-00  | DELETE | DONE | v1/Namespace @ kuttl-otel-sidecar-other-namespace l.go:53: | 08:04:42 | smoke-sidecar-other-namespace | step-00  | CLEANUP | DONE | l.go:53: | 08:04:42 | smoke-sidecar-other-namespace | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-noble-marmot === NAME chainsaw/ingress l.go:53: | 08:04:42 | ingress | step-01  | ERROR | DONE | networking.k8s.io/v1/Ingress @ chainsaw-fast-reindeer/otel-simplest-ingress l.go:53: | 08:04:42 | ingress | step-01  | TRY | DONE | l.go:53: | 08:04:42 | ingress | step-00  | CLEANUP | RUN | l.go:53: | 08:04:42 | ingress | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fast-reindeer/otel-simplest === NAME chainsaw/smoke-sidecar-other-namespace l.go:53: | 08:04:42 | smoke-sidecar-other-namespace | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-noble-marmot === NAME chainsaw/ingress l.go:53: | 08:04:42 | ingress | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fast-reindeer/otel-simplest l.go:53: | 08:04:42 | ingress | step-00  | DELETE | DONE | 
opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-fast-reindeer/otel-simplest l.go:53: | 08:04:42 | ingress | step-00  | CLEANUP | DONE | l.go:53: | 08:04:42 | ingress | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-fast-reindeer l.go:53: | 08:04:42 | ingress | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-fast-reindeer === NAME chainsaw/smoke-sidecar-other-namespace l.go:53: | 08:04:48 | smoke-sidecar-other-namespace | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-noble-marmot === CONT chainsaw/smoke-restarting-deployment l.go:53: | 08:04:49 | smoke-restarting-deployment | @setup  | CREATE | OK | v1/Namespace @ chainsaw-topical-krill l.go:53: | 08:04:49 | smoke-restarting-deployment | step-00  | TRY | RUN | l.go:53: | 08:04:49 | smoke-restarting-deployment | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-topical-krill/restarting l.go:53: | 08:04:49 | smoke-restarting-deployment | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-topical-krill/restarting l.go:53: | 08:04:49 | smoke-restarting-deployment | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-topical-krill/restarting l.go:53: | 08:04:49 | smoke-restarting-deployment | step-00  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-topical-krill/restarting-collector === NAME chainsaw/ingress l.go:53: | 08:04:50 | ingress | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-fast-reindeer === NAME chainsaw/smoke-restarting-deployment l.go:53: | 08:04:51 | smoke-restarting-deployment | step-00  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-topical-krill/restarting-collector l.go:53: | 08:04:51 | smoke-restarting-deployment | step-00  | ASSERT | RUN | v1/Pod @ chainsaw-topical-krill/* l.go:53: | 08:04:51 | smoke-restarting-deployment | step-00  | ASSERT | DONE | v1/Pod @ chainsaw-topical-krill/* l.go:53: | 08:04:51 | smoke-restarting-deployment | step-00  | ASSERT | RUN | v1/Service @ 
chainsaw-topical-krill/restarting-collector l.go:53: | 08:04:51 | smoke-restarting-deployment | step-00  | ASSERT | DONE | v1/Service @ chainsaw-topical-krill/restarting-collector l.go:53: | 08:04:51 | smoke-restarting-deployment | step-00  | ERROR | RUN | v1/Service @ chainsaw-topical-krill/restarting-collector l.go:53: | 08:04:51 | smoke-restarting-deployment | step-00  | ERROR | DONE | v1/Service @ chainsaw-topical-krill/restarting-collector l.go:53: | 08:04:51 | smoke-restarting-deployment | step-00  | TRY | DONE | l.go:53: | 08:04:51 | smoke-restarting-deployment | step-01  | TRY | RUN | l.go:53: | 08:04:51 | smoke-restarting-deployment | step-01  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-topical-krill/restarting l.go:53: | 08:04:51 | smoke-restarting-deployment | step-01  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-topical-krill/restarting l.go:53: | 08:04:51 | smoke-restarting-deployment | step-01  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-topical-krill/restarting l.go:53: | 08:04:51 | smoke-restarting-deployment | step-01  | ASSERT | RUN | apps/v1/Deployment @ chainsaw-topical-krill/restarting-collector l.go:53: | 08:04:51 | smoke-restarting-deployment | step-01  | ASSERT | DONE | apps/v1/Deployment @ chainsaw-topical-krill/restarting-collector l.go:53: | 08:04:51 | smoke-restarting-deployment | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-topical-krill/* l.go:53: | 08:04:51 | smoke-restarting-deployment | step-01  | ASSERT | DONE | v1/Pod @ chainsaw-topical-krill/* l.go:53: | 08:04:51 | smoke-restarting-deployment | step-01  | ASSERT | RUN | v1/Service @ chainsaw-topical-krill/restarting-collector l.go:53: | 08:04:52 | smoke-restarting-deployment | step-01  | ASSERT | DONE | v1/Service @ chainsaw-topical-krill/restarting-collector l.go:53: | 08:04:52 | smoke-restarting-deployment | step-01  | TRY | DONE | l.go:53: | 08:04:52 | smoke-restarting-deployment | 
step-00  | CLEANUP | RUN | l.go:53: | 08:04:52 | smoke-restarting-deployment | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-topical-krill/restarting l.go:53: | 08:04:52 | smoke-restarting-deployment | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-topical-krill/restarting l.go:53: | 08:04:52 | smoke-restarting-deployment | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-topical-krill/restarting l.go:53: | 08:04:52 | smoke-restarting-deployment | step-00  | CLEANUP | DONE | l.go:53: | 08:04:52 | smoke-restarting-deployment | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-topical-krill l.go:53: | 08:04:52 | smoke-restarting-deployment | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-topical-krill l.go:53: | 08:04:58 | smoke-restarting-deployment | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-topical-krill === NAME chainsaw/smoke-targetallocator l.go:53: | 08:05:19 | smoke-targetallocator | step-00  | ASSERT | ERROR | v1/ConfigMap @ chainsaw-smoke-targetallocator/stateful-collector-2687b61c === ERROR actual resource not found l.go:53: | 08:05:19 | smoke-targetallocator | step-00  | TRY | DONE | l.go:53: | 08:05:19 | smoke-targetallocator | step-00  | CATCH | RUN | l.go:53: | 08:05:19 | smoke-targetallocator | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl logs --prefix -l app.kubernetes.io/component=opentelemetry-targetallocator -n chainsaw-smoke-targetallocator --all-containers l.go:53: | 08:05:19 | smoke-targetallocator | step-00  | CMD | LOG | === STDOUT [pod/stateful-targetallocator-649684cf7c-8w2j5/ta-container] {"level":"info","ts":"2025-02-03T07:59:18Z","msg":"Starting the Target Allocator"} [pod/stateful-targetallocator-649684cf7c-8w2j5/ta-container] {"level":"info","ts":"2025-02-03T07:59:18Z","logger":"allocator","msg":"Starting server..."} l.go:53: | 08:05:19 | smoke-targetallocator | step-00  | CMD | DONE | l.go:53: | 
08:05:19 | smoke-targetallocator | step-00  | CATCH | DONE | l.go:53: | 08:05:19 | smoke-targetallocator | step-00  | CLEANUP | RUN | l.go:53: | 08:05:19 | smoke-targetallocator | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-smoke-targetallocator/stateful l.go:53: | 08:05:19 | smoke-targetallocator | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-smoke-targetallocator/stateful l.go:53: | 08:05:19 | smoke-targetallocator | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-smoke-targetallocator/stateful l.go:53: | 08:05:19 | smoke-targetallocator | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-smoke-targetallocator l.go:53: | 08:05:20 | smoke-targetallocator | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-smoke-targetallocator l.go:53: | 08:05:20 | smoke-targetallocator | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRoleBinding @ default-view-chainsaw-smoke-targetallocator l.go:53: | 08:05:20 | smoke-targetallocator | step-00  | DELETE | RUN | rbac.authorization.k8s.io/v1/ClusterRole @ smoke-targetallocator l.go:53: | 08:05:20 | smoke-targetallocator | step-00  | DELETE | OK | rbac.authorization.k8s.io/v1/ClusterRole @ smoke-targetallocator l.go:53: | 08:05:20 | smoke-targetallocator | step-00  | DELETE | DONE | rbac.authorization.k8s.io/v1/ClusterRole @ smoke-targetallocator l.go:53: | 08:05:20 | smoke-targetallocator | step-00  | DELETE | RUN | v1/ServiceAccount @ chainsaw-smoke-targetallocator/ta l.go:53: | 08:05:20 | smoke-targetallocator | step-00  | DELETE | OK | v1/ServiceAccount @ chainsaw-smoke-targetallocator/ta l.go:53: | 08:05:20 | smoke-targetallocator | step-00  | DELETE | DONE | v1/ServiceAccount @ chainsaw-smoke-targetallocator/ta l.go:53: | 08:05:20 | smoke-targetallocator | step-00  | CLEANUP | DONE | l.go:53: | 
08:05:20 | smoke-targetallocator | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-smoke-targetallocator l.go:53: | 08:05:20 | smoke-targetallocator | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-smoke-targetallocator l.go:53: | 08:05:26 | smoke-targetallocator | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-smoke-targetallocator === NAME chainsaw/instrumentation-apache-httpd l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-settling-cat/* === ERROR ------------------------------------------------------- v1/Pod/chainsaw-settling-cat/my-apache-688d5f5cc6-4s6ct ------------------------------------------------------- * spec.containers[1].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -6,17 +6,27 @@ sidecar.opentelemetry.io/inject: "true" labels: app: my-apache + name: my-apache-688d5f5cc6-4s6ct namespace: chainsaw-settling-cat + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: my-apache-688d5f5cc6 + uid: 8c7e0497-c162-478f-8d39-740adea56ebd spec: containers: - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: OTEL_SERVICE_NAME value: my-apache @@ -39,32 +49,291 @@ - name: OTEL_TRACES_SAMPLER_ARG value: "0.25" - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=myapp,k8s.deployment.name=my-apache,k8s.namespace.name=chainsaw-settling-cat,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=my-apache-688d5f5cc6,service.instance.id=chainsaw-settling-cat.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).myapp,service.version=main + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd:main + imagePullPolicy: IfNotPresent name: myapp - volumeMounts: - - 
mountPath: /var/run/secrets/kubernetes.io/serviceaccount + ports: + - containerPort: 8080 + protocol: TCP + resources: + limits: + cpu: "1" + memory: 500Mi + requests: + cpu: 250m + memory: 100Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bxsp8 + readOnly: true - mountPath: /opt/opentelemetry-webserver/agent name: otel-apache-agent - mountPath: /usr/local/apache2/conf name: otel-apache-conf-dir - args: - - --feature-gates=-component.UseLocalHostAsDefaultHost - --config=env:OTEL_CONFIG + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_CONFIG + value: | + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + exporters: + debug: null + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + traces: + exporters: + - debug + receivers: + - otlp + - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.uid + - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.deployment.name=my-apache,k8s.deployment.uid=720e115a-4f2a-4e9f-834a-3a33e5692232,k8s.namespace.name=chainsaw-settling-cat,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=my-apache-688d5f5cc6,k8s.replicaset.uid=8c7e0497-c162-478f-8d39-740adea56ebd + image: 
registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imagePullPolicy: IfNotPresent name: otc-container + ports: + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bxsp8 + readOnly: true initContainers: - - name: otel-agent-source-container-clone - - name: otel-agent-attach-apache + - args: + - cp -r /usr/local/apache2/conf/* /opt/opentelemetry-webserver/source-conf + command: + - /bin/sh + - -c + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd:main + imagePullPolicy: IfNotPresent + name: otel-agent-source-container-clone + ports: + - containerPort: 8080 + protocol: TCP + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 1m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bxsp8 + readOnly: true + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-apache-conf-dir + - args: + - |- + cp -r /opt/opentelemetry/* /opt/opentelemetry-webserver/agent && export agentLogDir=$(echo "/opt/opentelemetry-webserver/agent/logs" | sed 's,/,\\/,g') && cat /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml.template | sed 's/__agent_log_dir__/'${agentLogDir}'/g' > /opt/opentelemetry-webserver/agent/conf/opentelemetry_sdk_log4cxx.xml &&echo 
"$OTEL_APACHE_AGENT_CONF" > /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && sed -i 's/<>/'${APACHE_SERVICE_INSTANCE_ID}'/g' /opt/opentelemetry-webserver/source-conf/opentemetry_agent.conf && echo -e ' + Include /usr/local/apache2/conf/opentemetry_agent.conf' >> /opt/opentelemetry-webserver/source-conf/httpd.conf + command: + - /bin/sh + - -c + env: + - name: OTEL_APACHE_AGENT_CONF + value: |2 + + #Load the Otel Webserver SDK + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_common.so + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_resources.so + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_trace.so + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_otlp_recordable.so + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_exporter_ostream_span.so + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_exporter_otlp_grpc.so + #Load the Otel ApacheModule SDK + LoadFile /opt/opentelemetry-webserver/agent/sdk_lib/lib/libopentelemetry_webserver_sdk.so + #Load the Apache Module. In this example for Apache 2.4 + #LoadModule otel_apache_module /opt/opentelemetry-webserver/agent/WebServerModule/Apache/libmod_apache_otel.so + #Load the Apache Module. 
In this example for Apache 2.2 + #LoadModule otel_apache_module /opt/opentelemetry-webserver/agent/WebServerModule/Apache/libmod_apache_otel22.so + LoadModule otel_apache_module /opt/opentelemetry-webserver/agent/WebServerModule/Apache/libmod_apache_otel.so + #Attributes + ApacheModuleEnabled ON + ApacheModuleOtelExporterEndpoint http://localhost:4317 + ApacheModuleOtelMaxQueueSize 4096 + ApacheModuleOtelSpanExporter otlp + ApacheModuleResolveBackends ON + ApacheModuleServiceInstanceId <> + ApacheModuleServiceName my-apache + ApacheModuleServiceNamespace chainsaw-settling-cat + ApacheModuleTraceAsError ON + - name: APACHE_SERVICE_INSTANCE_ID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4 + imagePullPolicy: IfNotPresent + name: otel-agent-attach-apache + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 1m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-apache-agent + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-apache-conf-dir + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bxsp8 + readOnly: true status: containerStatuses: - - name: myapp + - containerID: cri-o://c82cae1c2f022679c383bfc194a71b924bf91a498111247c75634a02f69e74e7 + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd:main + imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd@sha256:a6f298153a411bb65e27901fc5a5f964005064bd0d3c223053fdd944496c6976 + lastState: {} + name: myapp ready: true + restartCount: 0 started: true - - name: otc-container + state: + running: + startedAt: "2025-02-03T08:04:31Z" + volumeMounts: + - mountPath: 
/var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bxsp8 + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-apache-agent + - mountPath: /usr/local/apache2/conf + name: otel-apache-conf-dir + - containerID: cri-o://b57aaa5ff4c5c8575c604ae24e31a1d17511310c8f391ef08bdff7b353faa6b0 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + lastState: {} + name: otc-container ready: true + restartCount: 0 started: true + state: + running: + startedAt: "2025-02-03T08:04:31Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bxsp8 + readOnly: true + recursiveReadOnly: Disabled initContainerStatuses: - - name: otel-agent-source-container-clone + - containerID: cri-o://41a9c3ca97ed367aa25be95414f6f75f0ac47ef00fc460da21ffdc77ca202ebb + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd:main + imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-apache-httpd@sha256:a6f298153a411bb65e27901fc5a5f964005064bd0d3c223053fdd944496c6976 + lastState: {} + name: otel-agent-source-container-clone ready: true - - name: otel-agent-attach-apache + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://41a9c3ca97ed367aa25be95414f6f75f0ac47ef00fc460da21ffdc77ca202ebb + exitCode: 0 + finishedAt: "2025-02-03T08:04:29Z" + reason: Completed + startedAt: "2025-02-03T08:04:29Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bxsp8 + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-apache-conf-dir + - containerID: 
cri-o://100f482e7adb71315cda49c2ca415da0bd7ab235f739cb0032f66a786d50a064 + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4 + imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd@sha256:4275db94ebbf4b9f78762b248ecab219790bbb98c59cf2bf5b3383908b727cfe + lastState: {} + name: otel-agent-attach-apache ready: true + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://100f482e7adb71315cda49c2ca415da0bd7ab235f739cb0032f66a786d50a064 + exitCode: 0 + finishedAt: "2025-02-03T08:04:30Z" + reason: Completed + startedAt: "2025-02-03T08:04:30Z" + volumeMounts: + - mountPath: /opt/opentelemetry-webserver/agent + name: otel-apache-agent + - mountPath: /opt/opentelemetry-webserver/source-conf + name: otel-apache-conf-dir + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-bxsp8 + readOnly: true + recursiveReadOnly: Disabled phase: Running l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | TRY | DONE | l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | CATCH | RUN | l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | CMD | RUN | === COMMAND /usr/local/bin/kubectl logs --prefix -l app=my-apache -n chainsaw-settling-cat --all-containers l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | CMD | LOG | === STDOUT [pod/my-apache-688d5f5cc6-4s6ct/myapp] [Mon Feb 03 08:04:31.638785 2025] [otel_apache:error] [pid 4:tid 4] Config: otel_set_enabled(ON) [pod/my-apache-688d5f5cc6-4s6ct/myapp] [Mon Feb 03 08:04:31.638787 2025] [otel_apache:error] [pid 4:tid 4] Config: otel_set_otelExporterEndpoint(http://localhost:4317) [pod/my-apache-688d5f5cc6-4s6ct/myapp] [Mon Feb 03 08:04:31.638790 2025] [otel_apache:error] [pid 4:tid 4] Config: otel_set_otelMaxQueueSize(4096) [pod/my-apache-688d5f5cc6-4s6ct/myapp] [Mon Feb 03 08:04:31.638791 2025] [otel_apache:error] [pid 4:tid 4] Config: otel_set_otelExporterType(otlp) 
[pod/my-apache-688d5f5cc6-4s6ct/myapp] [Mon Feb 03 08:04:31.638793 2025] [otel_apache:error] [pid 4:tid 4] Config: otel_set_resolveBackends(ON) [pod/my-apache-688d5f5cc6-4s6ct/myapp] [Mon Feb 03 08:04:31.638795 2025] [otel_apache:error] [pid 4:tid 4] Config: otel_set_serviceInstanceId(my-apache-688d5f5cc6-4s6ct) [pod/my-apache-688d5f5cc6-4s6ct/myapp] [Mon Feb 03 08:04:31.638797 2025] [otel_apache:error] [pid 4:tid 4] Config: otel_set_serviceName(my-apache) [pod/my-apache-688d5f5cc6-4s6ct/myapp] [Mon Feb 03 08:04:31.638799 2025] [otel_apache:error] [pid 4:tid 4] Config: otel_set_serviceNamespace(chainsaw-settling-cat) [pod/my-apache-688d5f5cc6-4s6ct/myapp] [Mon Feb 03 08:04:31.638801 2025] [otel_apache:error] [pid 4:tid 4] Config: Context chainsaw-settling-cat:my-apache:my-apache-688d5f5cc6-4s6ct:chainsaw-settling-cat,my-apache,my-apache-688d5f5cc6-4s6ct [pod/my-apache-688d5f5cc6-4s6ct/myapp] [Mon Feb 03 08:04:31.638803 2025] [otel_apache:error] [pid 4:tid 4] Config: otel_set_traceAsError(ON) [pod/my-apache-688d5f5cc6-4s6ct/otc-container] 2025-02-03T08:04:31.873Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers [pod/my-apache-688d5f5cc6-4s6ct/otc-container] 2025-02-03T08:04:31.874Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"} [pod/my-apache-688d5f5cc6-4s6ct/otc-container] 2025-02-03T08:04:31.874Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"} [pod/my-apache-688d5f5cc6-4s6ct/otc-container] 2025-02-03T08:04:31.886Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4} [pod/my-apache-688d5f5cc6-4s6ct/otc-container] 2025-02-03T08:04:31.886Z info extensions/extensions.go:39 Starting extensions... 
[pod/my-apache-688d5f5cc6-4s6ct/otc-container] 2025-02-03T08:04:31.886Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-apache-688d5f5cc6-4s6ct/otc-container] 2025-02-03T08:04:31.886Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"} [pod/my-apache-688d5f5cc6-4s6ct/otc-container] 2025-02-03T08:04:31.886Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/my-apache-688d5f5cc6-4s6ct/otc-container] 2025-02-03T08:04:31.886Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"} [pod/my-apache-688d5f5cc6-4s6ct/otc-container] 2025-02-03T08:04:31.886Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data. 
l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | CMD | DONE | l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | CATCH | DONE | l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | CLEANUP | RUN | l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-settling-cat/my-apache l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-settling-cat/my-apache l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-settling-cat/my-apache l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-01  | CLEANUP | DONE | l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-00  | CLEANUP | RUN | l.go:53: | 08:10:28 | instrumentation-apache-httpd | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settling-cat/apache l.go:53: | 08:10:29 | instrumentation-apache-httpd | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settling-cat/apache l.go:53: | 08:10:29 | instrumentation-apache-httpd | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-settling-cat/apache l.go:53: | 08:10:29 | instrumentation-apache-httpd | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settling-cat/sidecar l.go:53: | 08:10:29 | instrumentation-apache-httpd | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settling-cat/sidecar l.go:53: | 08:10:29 | instrumentation-apache-httpd | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-settling-cat/sidecar l.go:53: | 08:10:29 | instrumentation-apache-httpd | step-00  | CLEANUP | DONE | l.go:53: | 08:10:29 | instrumentation-apache-httpd | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-settling-cat l.go:53: | 08:10:29 | instrumentation-apache-httpd | @cleanup | DELETE | 
OK | v1/Namespace @ chainsaw-settling-cat l.go:53: | 08:10:35 | instrumentation-apache-httpd | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-settling-cat --- FAIL: chainsaw (133.30s) --- PASS: chainsaw/monitoring (55.67s) --- PASS: chainsaw/otlp-metrics-traces (77.62s) --- PASS: chainsaw/smoke-ports (7.77s) --- PASS: chainsaw/env-vars (9.79s) --- PASS: chainsaw/autoscale (19.80s) --- PASS: chainsaw/prometheus-config-validation (44.00s) --- FAIL: chainsaw/daemonset-features (367.14s) --- FAIL: chainsaw/targetallocator-prometheuscr (373.67s) --- FAIL: chainsaw/targetallocator-kubernetessd (369.33s) --- FAIL: chainsaw/instrumentation-python (407.87s) --- PASS: chainsaw/target-allocator (10.80s) --- PASS: chainsaw/pdb (9.04s) --- PASS: chainsaw/route (10.33s) --- PASS: chainsaw/create-pm-prometheus-exporters (44.42s) --- PASS: chainsaw/create-sm-prometheus-exporters (67.60s) --- PASS: chainsaw/scrape-in-cluster-monitoring (27.92s) --- PASS: chainsaw/opampbridge (22.14s) --- PASS: chainsaw/multi-cluster (53.10s) --- PASS: chainsaw/kafka (149.93s) --- FAIL: chainsaw/instrumentation-java-multicontainer (367.38s) --- FAIL: chainsaw/instrumentation-sdk (407.12s) --- FAIL: chainsaw/instrumentation-python-multicontainer (407.24s) --- FAIL: chainsaw/instrumentation-nodejs-multicontainer (406.44s) --- FAIL: chainsaw/instrumentation-nginx-multicontainer (367.55s) --- FAIL: chainsaw/instrumentation-nginx-contnr-secctx (368.01s) --- FAIL: chainsaw/instrumentation-nodejs (408.41s) --- FAIL: chainsaw/instrumentation-nginx (367.96s) --- FAIL: chainsaw/instrumentation-java-other-ns (373.68s) --- FAIL: chainsaw/instrumentation-dotnet-multicontainer (367.36s) --- FAIL: chainsaw/instrumentation-java (368.03s) --- PASS: chainsaw/node-selector-collector (13.53s) --- FAIL: chainsaw/instrumentation-go (368.49s) --- FAIL: chainsaw/instrumentation-dotnet-musl (367.65s) --- FAIL: chainsaw/managed-reconcile (369.36s) --- PASS: chainsaw/smoke-simplest-v1beta1 (8.51s) --- FAIL: 
chainsaw/multiple-configmaps (366.84s) --- FAIL: chainsaw/instrumentation-apache-multicontainer (367.53s) --- FAIL: chainsaw/instrumentation-dotnet (367.70s) --- PASS: chainsaw/smoke-statefulset (8.13s) --- FAIL: chainsaw/versioned-configmaps (366.79s) --- PASS: chainsaw/smoke-init-containers (14.96s) --- PASS: chainsaw/smoke-pod-labels (9.76s) --- PASS: chainsaw/smoke-pod-annotations (8.03s) --- PASS: chainsaw/smoke-simplest (8.35s) --- FAIL: chainsaw/statefulset-features (373.15s) --- PASS: chainsaw/smoke-pod-dns-config (7.70s) --- PASS: chainsaw/ingress-subdomains (9.02s) --- PASS: chainsaw/smoke-daemonset (7.00s) --- PASS: chainsaw/smoke-sidecar (49.91s) --- PASS: chainsaw/smoke-shareprocessnamespace (6.92s) --- PASS: chainsaw/smoke-sidecar-other-namespace (55.08s) --- PASS: chainsaw/ingress (11.26s) --- PASS: chainsaw/smoke-restarting-deployment (9.32s) --- FAIL: chainsaw/smoke-targetallocator (369.37s) --- FAIL: chainsaw/instrumentation-apache-httpd (367.47s) FAIL Tests Summary... - Passed tests 30 - Failed tests 25 - Skipped tests 0 Done with failures. Error: some tests failed clusterserviceversion.operators.coreos.com/opentelemetry-operator.v0.113.0-2 patched deployment.apps/opentelemetry-operator-controller-manager condition met Version: v0.2.6 No configuration provided but found default file: .chainsaw.yaml Loading config (.chainsaw.yaml)... - Using test file: chainsaw-test - TestDirs [tests/e2e-multi-instrumentation] - SkipDelete false - FailFast false - ReportFormat 'XML' - ReportName 'junit_otel_multi_instrumentation' - ReportPath '/logs/artifacts' - Namespace '' - FullName false - IncludeTestRegex '' - ExcludeTestRegex '' - ApplyTimeout 15s - AssertTimeout 6m0s - CleanupTimeout 5m0s - DeleteTimeout 5m0s - ErrorTimeout 5m0s - ExecTimeout 15s - DeletionPropagationPolicy Background - Parallel 4 - NoCluster false - PauseOnFailure false Loading tests... 
- instrumentation-multi-no-containers (tests/e2e-multi-instrumentation/instrumentation-multi-no-containers) - instrumentation-single-instr-first-container (tests/e2e-multi-instrumentation/instrumentation-single-instr-first-container) Loading values... Running tests... === RUN chainsaw === PAUSE chainsaw === CONT chainsaw === RUN chainsaw/instrumentation-multi-no-containers === PAUSE chainsaw/instrumentation-multi-no-containers === RUN chainsaw/instrumentation-single-instr-first-container === PAUSE chainsaw/instrumentation-single-instr-first-container === CONT chainsaw/instrumentation-multi-no-containers === CONT chainsaw/instrumentation-single-instr-first-container === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:10:46 | instrumentation-multi-no-containers | @setup  | CREATE | OK | v1/Namespace @ chainsaw-one-muskox l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | TRY | RUN | l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-one-muskox openshift.io/sa.scc.uid-range=1000/1000 --overwrite === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | @setup  | CREATE | OK | v1/Namespace @ chainsaw-factual-raven l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | TRY | RUN | l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-factual-raven openshift.io/sa.scc.uid-range=1000/1000 --overwrite l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-factual-raven annotated l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | CMD | DONE | === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:10:46 | instrumentation-multi-no-containers | 
step-00  | CMD | LOG | === STDOUT namespace/chainsaw-one-muskox annotated l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | CMD | DONE | === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-factual-raven openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | CMD | RUN | === COMMAND /usr/local/bin/kubectl annotate namespace chainsaw-one-muskox openshift.io/sa.scc.supplemental-groups=3000/3000 --overwrite === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-factual-raven annotated l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | CMD | DONE | l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | TRY | DONE | l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | TRY | RUN | === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | CMD | LOG | === STDOUT namespace/chainsaw-one-muskox annotated l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | CMD | DONE | l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | TRY | DONE | l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | TRY | RUN | === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-factual-raven/sidecar === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:10:46 | 
instrumentation-multi-no-containers | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-one-muskox/sidecar === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-factual-raven/sidecar l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-factual-raven/sidecar l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-factual-raven/multi === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-one-muskox/sidecar l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-one-muskox/sidecar l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-one-muskox/multi === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-factual-raven/multi l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-factual-raven/multi l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-00  | TRY | DONE | l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-01  | TRY | RUN | === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | CREATE | OK | 
opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-one-muskox/multi l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-one-muskox/multi l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-00  | TRY | DONE | === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:10:46 | instrumentation-single-instr-first-container | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-factual-raven/dep-single-instr-first-container === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-01  | TRY | RUN | l.go:53: | 08:10:46 | instrumentation-multi-no-containers | step-01  | APPLY | RUN | apps/v1/Deployment @ chainsaw-one-muskox/dep-multi-instr-no-containers l.go:53: | 08:10:47 | instrumentation-multi-no-containers | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-one-muskox/dep-multi-instr-no-containers l.go:53: | 08:10:47 | instrumentation-multi-no-containers | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-one-muskox/dep-multi-instr-no-containers l.go:53: | 08:10:47 | instrumentation-multi-no-containers | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-one-muskox/* === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:10:47 | instrumentation-single-instr-first-container | step-01  | CREATE | OK | apps/v1/Deployment @ chainsaw-factual-raven/dep-single-instr-first-container l.go:53: | 08:10:47 | instrumentation-single-instr-first-container | step-01  | APPLY | DONE | apps/v1/Deployment @ chainsaw-factual-raven/dep-single-instr-first-container l.go:53: | 08:10:47 | instrumentation-single-instr-first-container | step-01  | ASSERT | RUN | v1/Pod @ chainsaw-factual-raven/* === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-one-muskox/* === ERROR 
------------------------------------------------------------------------- v1/Pod/chainsaw-one-muskox/dep-multi-instr-no-containers-76c85d86cd-trqbc ------------------------------------------------------------------------- * spec.containers[2].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -7,7 +7,15 @@ sidecar.opentelemetry.io/inject: "true" labels: app: pod-multi-instr-no-containers + name: dep-multi-instr-no-containers-76c85d86cd-trqbc namespace: chainsaw-one-muskox + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: dep-multi-instr-no-containers-76c85d86cd + uid: 8585d657-ed27-4481-9f68-fccb31459753 spec: containers: - env: @@ -15,31 +23,162 @@ value: test - name: NODE_PATH value: /usr/local/lib/node_modules + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs:main + imagePullPolicy: IfNotPresent name: nodejsapp + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-92mvk readOnly: true - env: - name: TEST value: test + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main + imagePullPolicy: IfNotPresent name: pythonapp + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-92mvk readOnly: true - args: - - --feature-gates=-component.UseLocalHostAsDefaultHost - --config=env:OTEL_CONFIG + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_CONFIG + 
value: | + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + exporters: + debug: null + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + traces: + exporters: + - debug + receivers: + - otlp + - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.uid + - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.deployment.name=dep-multi-instr-no-containers,k8s.deployment.uid=c8d7cb2a-b70d-470f-8870-a05acfd065fc,k8s.namespace.name=chainsaw-one-muskox,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=dep-multi-instr-no-containers-76c85d86cd,k8s.replicaset.uid=8585d657-ed27-4481-9f68-fccb31459753 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imagePullPolicy: IfNotPresent name: otc-container + ports: + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-92mvk + readOnly: true status: containerStatuses: - - name: nodejsapp + - containerID: cri-o://e30cbdd886e885a7ddf6d1734a09a38fe4bc63e465dd61f3d1fe787d7513139b + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs:main + imageID: 
ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs@sha256:0b37f3557ff72ec150ce227b4dc98a2977a67a3bddc426da495c5b40f4e9ce6a + lastState: {} + name: nodejsapp ready: true + restartCount: 0 started: true - - name: otc-container + state: + running: + startedAt: "2025-02-03T08:10:48Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-92mvk + readOnly: true + recursiveReadOnly: Disabled + - containerID: cri-o://ad790a99d95ca427ad5f86e2f4f2c8203ee640e7293bf84a5e4b76dafd088b1c + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + lastState: {} + name: otc-container ready: true + restartCount: 0 started: true - - name: pythonapp + state: + running: + startedAt: "2025-02-03T08:10:48Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-92mvk + readOnly: true + recursiveReadOnly: Disabled + - containerID: cri-o://5fbf18492f3f7fb7c136220bece32b6d3de97ad26ae3b3fbf529abd9a7e2e023 + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main + imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python@sha256:66dce9234c5068b519226fee0c8584bd9c104fed87643ed89e02428e909b18db + lastState: {} + name: pythonapp ready: true + restartCount: 0 started: true + state: + running: + startedAt: "2025-02-03T08:10:48Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-92mvk + readOnly: true + recursiveReadOnly: Disabled phase: Running l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | TRY | DONE | l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | CATCH | RUN | === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 
08:16:47 | instrumentation-single-instr-first-container | step-01  | ASSERT | ERROR | v1/Pod @ chainsaw-factual-raven/* === ERROR ------------------------------------------------------------------------------- v1/Pod/chainsaw-factual-raven/dep-single-instr-first-container-775f977c98-rpjcm ------------------------------------------------------------------------------- * spec.containers[2].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match --- expected +++ actual @@ -6,17 +6,27 @@ sidecar.opentelemetry.io/inject: "true" labels: app: pod-single-instr-first-container + name: dep-single-instr-first-container-775f977c98-rpjcm namespace: chainsaw-factual-raven + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: dep-single-instr-first-container-775f977c98 + uid: b3ad38cf-4cde-4238-b423-1e731d1c08ad spec: containers: - env: - name: OTEL_NODE_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.hostIP - name: OTEL_POD_IP valueFrom: fieldRef: + apiVersion: v1 fieldPath: status.podIP - name: NODE_PATH value: /usr/local/lib/node_modules @@ -43,38 +53,220 @@ - name: OTEL_PROPAGATORS value: jaeger,b3 - name: OTEL_RESOURCE_ATTRIBUTES + value: k8s.container.name=nodejsapp,k8s.deployment.name=dep-single-instr-first-container,k8s.namespace.name=chainsaw-factual-raven,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=dep-single-instr-first-container-775f977c98,service.instance.id=chainsaw-factual-raven.$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME).nodejsapp,service.version=main + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs:main + imagePullPolicy: IfNotPresent name: nodejsapp - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + 
terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-fsxll readOnly: true - mountPath: /otel-auto-instrumentation-nodejs name: opentelemetry-auto-instrumentation-nodejs - env: - name: TEST value: test + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main + imagePullPolicy: IfNotPresent name: pythonapp - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-fsxll readOnly: true - args: - - --feature-gates=-component.UseLocalHostAsDefaultHost - --config=env:OTEL_CONFIG + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_CONFIG + value: | + receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + exporters: + debug: null + service: + telemetry: + metrics: + address: 0.0.0.0:8888 + pipelines: + traces: + exporters: + - debug + receivers: + - otlp + - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: OTEL_RESOURCE_ATTRIBUTES_POD_UID + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.uid + - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + - name: OTEL_RESOURCE_ATTRIBUTES + value: 
k8s.deployment.name=dep-single-instr-first-container,k8s.deployment.uid=30b495a8-3cfa-42f5-8184-13acb5e097a7,k8s.namespace.name=chainsaw-factual-raven,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.pod.uid=$(OTEL_RESOURCE_ATTRIBUTES_POD_UID),k8s.replicaset.name=dep-single-instr-first-container-775f977c98,k8s.replicaset.uid=b3ad38cf-4cde-4238-b423-1e731d1c08ad + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imagePullPolicy: IfNotPresent name: otc-container + ports: + - containerPort: 8888 + name: metrics + protocol: TCP + - containerPort: 4317 + name: otlp-grpc + protocol: TCP + - containerPort: 4318 + name: otlp-http + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-fsxll + readOnly: true initContainers: - - name: opentelemetry-auto-instrumentation-nodejs + - command: + - cp + - -r + - /autoinstrumentation/. 
+ - /otel-auto-instrumentation-nodejs + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.53.0 + imagePullPolicy: IfNotPresent + name: opentelemetry-auto-instrumentation-nodejs + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 50m + memory: 128Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /otel-auto-instrumentation-nodejs + name: opentelemetry-auto-instrumentation-nodejs + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-fsxll + readOnly: true status: containerStatuses: - - name: nodejsapp - ready: true + - containerID: cri-o://ddc9def331232190528df43404b8df48719b678ecb1013c9cb541f4942e71cfe + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs:main + imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-nodejs@sha256:0b37f3557ff72ec150ce227b4dc98a2977a67a3bddc426da495c5b40f4e9ce6a + lastState: {} + name: nodejsapp + ready: true + restartCount: 0 started: true - - name: otc-container - ready: true + state: + running: + startedAt: "2025-02-03T08:10:49Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-fsxll + readOnly: true + recursiveReadOnly: Disabled + - mountPath: /otel-auto-instrumentation-nodejs + name: opentelemetry-auto-instrumentation-nodejs + - containerID: cri-o://cf2d8188c4320023b0e2c42f338b75178c1617adc6c33a049c2613f7879a4806 + image: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + imageID: registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:b04830a0bec454858d1a91b629e226e0480f6c2991d1ba564bc04a70f2e5ed87 + lastState: {} + name: otc-container + ready: true + restartCount: 0 started: true - - name: pythonapp - ready: true 
+ state: + running: + startedAt: "2025-02-03T08:10:49Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-fsxll + readOnly: true + recursiveReadOnly: Disabled + - containerID: cri-o://e69a6be4cce9441040781b8bf54c8a64ca559e1baef16e198ebb9ae9d65a6645 + image: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python:main + imageID: ghcr.io/open-telemetry/opentelemetry-operator/e2e-test-app-python@sha256:66dce9234c5068b519226fee0c8584bd9c104fed87643ed89e02428e909b18db + lastState: {} + name: pythonapp + ready: true + restartCount: 0 started: true + state: + running: + startedAt: "2025-02-03T08:10:49Z" + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-fsxll + readOnly: true + recursiveReadOnly: Disabled initContainerStatuses: - - name: opentelemetry-auto-instrumentation-nodejs - ready: true + - containerID: cri-o://27b0acba7a2c84a2d39d5a33229e80af079759b6575a1033ce22f0a05b6a015d + image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.53.0 + imageID: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs@sha256:70ba757df71d0596aaccac91f439e8be7f81136b868205e79178e8fd3c36a763 + lastState: {} + name: opentelemetry-auto-instrumentation-nodejs + ready: true + restartCount: 0 + started: false + state: + terminated: + containerID: cri-o://27b0acba7a2c84a2d39d5a33229e80af079759b6575a1033ce22f0a05b6a015d + exitCode: 0 + finishedAt: "2025-02-03T08:10:49Z" + reason: Completed + startedAt: "2025-02-03T08:10:47Z" + volumeMounts: + - mountPath: /otel-auto-instrumentation-nodejs + name: opentelemetry-auto-instrumentation-nodejs + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-fsxll + readOnly: true + recursiveReadOnly: Disabled phase: Running l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-01  | TRY | DONE | l.go:53: | 08:16:47 | 
instrumentation-single-instr-first-container | step-01  | CATCH | RUN | === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | CMD | RUN | === COMMAND /usr/local/bin/kubectl logs --prefix -l app=pod-multi-instr-no-containers -n chainsaw-one-muskox --all-containers === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-01  | CMD | RUN | === COMMAND /usr/local/bin/kubectl logs --prefix -l app=pod-single-instr-first-container -n chainsaw-factual-raven --all-containers === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | CMD | LOG | === STDOUT [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/pythonapp] * Debug mode: off [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/pythonapp] WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/pythonapp] * Running on http://127.0.0.1:8080 [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/pythonapp] Press CTRL+C to quit [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/otc-container] 2025-02-03T08:10:48.425Z warn service@v0.113.0/service.go:221 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/otc-container] 2025-02-03T08:10:48.425Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"} [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/otc-container] 2025-02-03T08:10:48.425Z info builders/builders.go:26 Development component. May change in the future. 
{"kind": "exporter", "data_type": "traces", "name": "debug"} [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/otc-container] 2025-02-03T08:10:48.438Z info service@v0.113.0/service.go:238 Starting otelcol... {"Version": "0.113.0", "NumCPU": 4} [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/otc-container] 2025-02-03T08:10:48.438Z info extensions/extensions.go:39 Starting extensions... [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/otc-container] 2025-02-03T08:10:48.438Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/otc-container] 2025-02-03T08:10:48.438Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"} [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/otc-container] 2025-02-03T08:10:48.438Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/otc-container] 2025-02-03T08:10:48.438Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"} [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/otc-container] 2025-02-03T08:10:48.438Z info service@v0.113.0/service.go:261 Everything is ready. 
Begin running and processing data. [pod/dep-multi-instr-no-containers-76c85d86cd-trqbc/nodejsapp] Hi l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | CMD | DONE | l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | CATCH | DONE | l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | CLEANUP | RUN | l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-one-muskox/dep-multi-instr-no-containers l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-one-muskox/dep-multi-instr-no-containers l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-one-muskox/dep-multi-instr-no-containers l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-01  | CLEANUP | DONE | l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-00  | CLEANUP | RUN | l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-one-muskox/multi === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-01  | CMD | LOG | === STDOUT [pod/dep-single-instr-first-container-775f977c98-rpjcm/otc-container] 2025-02-03T08:10:49.912Z info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"} [pod/dep-single-instr-first-container-775f977c98-rpjcm/otc-container] 2025-02-03T08:10:49.912Z info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "debug"} [pod/dep-single-instr-first-container-775f977c98-rpjcm/otc-container] 2025-02-03T08:10:49.925Z info service@v0.113.0/service.go:238 Starting otelcol... 
{"Version": "0.113.0", "NumCPU": 4} [pod/dep-single-instr-first-container-775f977c98-rpjcm/otc-container] 2025-02-03T08:10:49.925Z info extensions/extensions.go:39 Starting extensions... [pod/dep-single-instr-first-container-775f977c98-rpjcm/otc-container] 2025-02-03T08:10:49.925Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/dep-single-instr-first-container-775f977c98-rpjcm/otc-container] 2025-02-03T08:10:49.925Z info otlpreceiver@v0.113.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"} [pod/dep-single-instr-first-container-775f977c98-rpjcm/otc-container] 2025-02-03T08:10:49.926Z warn internal@v0.113.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"} [pod/dep-single-instr-first-container-775f977c98-rpjcm/otc-container] 2025-02-03T08:10:49.926Z info otlpreceiver@v0.113.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"} [pod/dep-single-instr-first-container-775f977c98-rpjcm/otc-container] 2025-02-03T08:10:49.926Z info service@v0.113.0/service.go:261 Everything is ready. Begin running and processing data. 
[pod/dep-single-instr-first-container-775f977c98-rpjcm/otc-container] 2025-02-03T08:10:55.291Z info Traces {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 6} [pod/dep-single-instr-first-container-775f977c98-rpjcm/nodejsapp] Hi [pod/dep-single-instr-first-container-775f977c98-rpjcm/pythonapp] * Debug mode: off [pod/dep-single-instr-first-container-775f977c98-rpjcm/pythonapp] WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. [pod/dep-single-instr-first-container-775f977c98-rpjcm/pythonapp] * Running on http://127.0.0.1:8080 [pod/dep-single-instr-first-container-775f977c98-rpjcm/pythonapp] Press CTRL+C to quit l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-01  | CMD | DONE | l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-01  | CATCH | DONE | l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-01  | CLEANUP | RUN | l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-01  | DELETE | RUN | apps/v1/Deployment @ chainsaw-factual-raven/dep-single-instr-first-container l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-01  | DELETE | OK | apps/v1/Deployment @ chainsaw-factual-raven/dep-single-instr-first-container === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-one-muskox/multi === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-01  | DELETE | DONE | apps/v1/Deployment @ chainsaw-factual-raven/dep-single-instr-first-container l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-01  | CLEANUP | DONE | l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-00  | CLEANUP | RUN | 
l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-factual-raven/multi === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-one-muskox/multi l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-one-muskox/sidecar === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-factual-raven/multi l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/Instrumentation @ chainsaw-factual-raven/multi l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-factual-raven/sidecar === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-one-muskox/sidecar === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-factual-raven/sidecar === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-one-muskox/sidecar l.go:53: | 08:16:47 | instrumentation-multi-no-containers | step-00  | CLEANUP | DONE | l.go:53: | 08:16:47 | instrumentation-multi-no-containers | @cleanup | DELETE | RUN | v1/Namespace @ 
chainsaw-one-muskox === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-factual-raven/sidecar l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | step-00  | CLEANUP | DONE | l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-factual-raven === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:16:47 | instrumentation-multi-no-containers | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-one-muskox === NAME chainsaw/instrumentation-single-instr-first-container l.go:53: | 08:16:47 | instrumentation-single-instr-first-container | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-factual-raven l.go:53: | 08:17:19 | instrumentation-single-instr-first-container | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-factual-raven === NAME chainsaw/instrumentation-multi-no-containers l.go:53: | 08:17:34 | instrumentation-multi-no-containers | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-one-muskox --- FAIL: chainsaw (0.00s) --- FAIL: chainsaw/instrumentation-single-instr-first-container (393.74s) --- FAIL: chainsaw/instrumentation-multi-no-containers (407.91s) FAIL Tests Summary... - Passed tests 0 - Failed tests 2 - Skipped tests 0 Done with failures. Error: some tests failed clusterserviceversion.operators.coreos.com/opentelemetry-operator.v0.113.0-2 patched deployment.apps/opentelemetry-operator-controller-manager condition met Version: v0.2.6 No configuration provided but found default file: .chainsaw.yaml Loading config (.chainsaw.yaml)... 
- Using test file: chainsaw-test - TestDirs [tests/e2e-metadata-filters] - SkipDelete false - FailFast false - ReportFormat 'XML' - ReportName 'junit_otel_metadata_filters' - ReportPath '/logs/artifacts' - Namespace '' - FullName false - IncludeTestRegex '' - ExcludeTestRegex '' - ApplyTimeout 15s - AssertTimeout 6m0s - CleanupTimeout 5m0s - DeleteTimeout 5m0s - ErrorTimeout 5m0s - ExecTimeout 15s - DeletionPropagationPolicy Background - Parallel 4 - NoCluster false - PauseOnFailure false Loading tests... - smoke-pod-annotations (tests/e2e-metadata-filters/annotations) - smoke-pod-annotations (tests/e2e-metadata-filters/labels) Loading values... Running tests... === RUN chainsaw === PAUSE chainsaw === CONT chainsaw === RUN chainsaw/smoke-pod-annotations === PAUSE chainsaw/smoke-pod-annotations === RUN chainsaw/smoke-pod-annotations#01 === PAUSE chainsaw/smoke-pod-annotations#01 === CONT chainsaw/smoke-pod-annotations === CONT chainsaw/smoke-pod-annotations#01 === NAME chainsaw/smoke-pod-annotations l.go:53: | 08:17:44 | smoke-pod-annotations | @setup  | CREATE | OK | v1/Namespace @ chainsaw-awake-buck l.go:53: | 08:17:44 | smoke-pod-annotations | step-00  | TRY | RUN | === NAME chainsaw/smoke-pod-annotations#01 l.go:53: | 08:17:44 | smoke-pod-annotations | @setup  | CREATE | OK | v1/Namespace @ chainsaw-glad-marmoset l.go:53: | 08:17:44 | smoke-pod-annotations | step-00  | TRY | RUN | === NAME chainsaw/smoke-pod-annotations l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-buck/test-annotations === NAME chainsaw/smoke-pod-annotations#01 l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | APPLY | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-glad-marmoset/test-labels l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-glad-marmoset/test-labels l.go:53: | 08:17:45 | 
smoke-pod-annotations | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-glad-marmoset/test-labels === NAME chainsaw/smoke-pod-annotations l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | CREATE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-buck/test-annotations l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | APPLY | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-buck/test-annotations === NAME chainsaw/smoke-pod-annotations#01 l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | ERROR | RUN | apps/v1/DaemonSet @ chainsaw-glad-marmoset/test-annotations-collector === NAME chainsaw/smoke-pod-annotations l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | ERROR | RUN | apps/v1/DaemonSet @ chainsaw-awake-buck/test-annotations-collector l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | ERROR | DONE | apps/v1/DaemonSet @ chainsaw-awake-buck/test-annotations-collector l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | TRY | DONE | l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | TRY | RUN | l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | PATCH | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-buck/test-annotations === NAME chainsaw/smoke-pod-annotations#01 l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | ERROR | DONE | apps/v1/DaemonSet @ chainsaw-glad-marmoset/test-annotations-collector l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | TRY | DONE | l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | TRY | RUN | l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | PATCH | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-glad-marmoset/test-labels l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-glad-marmoset/test-labels l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | PATCH | 
DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-glad-marmoset/test-labels l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | ERROR | RUN | apps/v1/DaemonSet @ chainsaw-glad-marmoset/test-annotations-collector === NAME chainsaw/smoke-pod-annotations l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | PATCH | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-buck/test-annotations l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | PATCH | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-buck/test-annotations l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | ERROR | RUN | apps/v1/DaemonSet @ chainsaw-awake-buck/test-annotations-collector === NAME chainsaw/smoke-pod-annotations#01 l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | ERROR | DONE | apps/v1/DaemonSet @ chainsaw-glad-marmoset/test-annotations-collector l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | TRY | DONE | l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | CLEANUP | RUN | l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-glad-marmoset/test-labels === NAME chainsaw/smoke-pod-annotations l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | ERROR | DONE | apps/v1/DaemonSet @ chainsaw-awake-buck/test-annotations-collector l.go:53: | 08:17:45 | smoke-pod-annotations | step-01  | TRY | DONE | l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | CLEANUP | RUN | l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | DELETE | RUN | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-buck/test-annotations === NAME chainsaw/smoke-pod-annotations#01 l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-glad-marmoset/test-labels === NAME chainsaw/smoke-pod-annotations l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | 
DELETE | OK | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-buck/test-annotations l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-awake-buck/test-annotations l.go:53: | 08:17:45 | smoke-pod-annotations | step-00  | CLEANUP | DONE | l.go:53: | 08:17:45 | smoke-pod-annotations | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-awake-buck l.go:53: | 08:17:45 | smoke-pod-annotations | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-awake-buck === NAME chainsaw/smoke-pod-annotations#01 l.go:53: | 08:17:46 | smoke-pod-annotations | step-00  | DELETE | DONE | opentelemetry.io/v1alpha1/OpenTelemetryCollector @ chainsaw-glad-marmoset/test-labels l.go:53: | 08:17:46 | smoke-pod-annotations | step-00  | CLEANUP | DONE | l.go:53: | 08:17:46 | smoke-pod-annotations | @cleanup | DELETE | RUN | v1/Namespace @ chainsaw-glad-marmoset l.go:53: | 08:17:46 | smoke-pod-annotations | @cleanup | DELETE | OK | v1/Namespace @ chainsaw-glad-marmoset === NAME chainsaw/smoke-pod-annotations l.go:53: | 08:17:52 | smoke-pod-annotations | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-awake-buck === NAME chainsaw/smoke-pod-annotations#01 l.go:53: | 08:17:53 | smoke-pod-annotations | @cleanup | DELETE | DONE | v1/Namespace @ chainsaw-glad-marmoset --- PASS: chainsaw (0.00s) --- PASS: chainsaw/smoke-pod-annotations (7.56s) --- PASS: chainsaw/smoke-pod-annotations#01 (8.29s) PASS Tests Summary... - Passed tests 2 - Failed tests 0 - Skipped tests 0 Done. Tests failed, check the logs for more details.
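Editor's note on the two failures above: both `instrumentation-multi-no-containers` and `instrumentation-single-instr-first-container` fail on the same assertion, `spec.containers[2].args: Invalid value: []interface {}{"--config=env:OTEL_CONFIG"}: lengths of slices don't match`. Per the `--- expected +++ actual` diffs, the chainsaw assert file expects the injected `otc-container` sidecar to carry two args (`--feature-gates=-component.UseLocalHostAsDefaultHost` and `--config=env:OTEL_CONFIG`), while the pod created by the RHOSDT collector image only receives `--config=env:OTEL_CONFIG`. A minimal sketch of the corrected expected-pod fragment, assuming the assert file mirrors the layout shown in the diffs (the file path and surrounding fields are assumptions, not taken from this log):

```yaml
# Hypothetical excerpt of the chainsaw expected-pod assert
# (e.g. a *-assert.yaml under tests/e2e-instrumentation/; exact path assumed).
# Dropping the feature-gates line makes the expected args slice match the
# single arg the injected sidecar actually has in the diffs above.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.opentelemetry.io/inject: "true"
spec:
  containers:
    - args:
        - --config=env:OTEL_CONFIG
      name: otc-container
```

Alternatively, if the intent is to keep asserting the feature gate, the upstream assert would only pass against an operator build that still injects `--feature-gates=-component.UseLocalHostAsDefaultHost`; which side to change depends on whether the rhosdt-3-3-interop branch is meant to track the downstream image behavior.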