Distributed Tracing / TRACING-4970

[Upstream] OpenTelemetry operator pod crashes with error creating servicemonitors.monitoring.coreos.com opentelemetry-operator-metrics-monitor


    • Type: Bug
    • Resolution: Done
    • Priority: Undefined
    • Fix Version: rhosdt-3.4
    • Sprint: Tracing Sprint # 262

      Version of components:

      opentelemetry-operator.v0.113.0

      OCP version 4.16.0-0.nightly-2024-11-08-100216

      Description of the issue:

      When the OpenTelemetry operator CSV is updated, the operator pod fails with the following error:

      {"level":"INFO","timestamp":"2024-11-11T10:18:28.446310667Z","message":"All workers finished","controller":"opampbridge","controllerGroup":"opentelemetry.io","controllerKind":"OpAMPBridge"}
      {"level":"INFO","timestamp":"2024-11-11T10:18:28.446315312Z","message":"Stopping and waiting for caches"}
      W1111 10:18:28.446400       1 reflector.go:484] pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
      {"level":"INFO","timestamp":"2024-11-11T10:18:28.446627238Z","message":"Stopping and waiting for webhooks"}
      {"level":"INFO","timestamp":"2024-11-11T10:18:28.446652862Z","logger":"controller-runtime.webhook","message":"Shutting down webhook server with timeout of 1 minute"}
      {"level":"INFO","timestamp":"2024-11-11T10:18:28.446740103Z","message":"Stopping and waiting for HTTP servers"}
      {"level":"INFO","timestamp":"2024-11-11T10:18:28.446760527Z","logger":"controller-runtime.metrics","message":"Shutting down metrics server with timeout of 1 minute"}
      {"level":"INFO","timestamp":"2024-11-11T10:18:28.44679548Z","message":"shutting down server","name":"health probe","addr":"[::]:8081"}
      {"level":"INFO","timestamp":"2024-11-11T10:18:28.446854274Z","message":"Wait completed, proceeding to shutdown the manager"}
      {"level":"ERROR","timestamp":"2024-11-11T10:18:28.454439947Z","message":"error received after stop sequence was engaged","error":"leader election lost","stacktrace":"sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/manager/internal.go:512"}
      {"level":"ERROR","timestamp":"2024-11-11T10:18:28.454409611Z","logger":"setup","message":"problem running manager","error":"error creating service monitor: servicemonitors.monitoring.coreos.com \"opentelemetry-operator-metrics-monitor\" already exists","stacktrace":"main.main\n\t/Users/ikanse/opentelemetry-operator/main.go:517\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:271"}
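
      The failure points at the operator's self-monitoring setup: when the operator pod restarts (here, because the CSV was edited), it tries to create its metrics ServiceMonitor again and treats the resulting AlreadyExists error as fatal, so the manager exits. A minimal sketch of an idempotent variant, assuming the operator currently issues a plain controller-runtime Create for this object (the helper name, labels, and port below are illustrative, not the operator's actual code):

      // Hypothetical sketch only: make the metrics ServiceMonitor creation idempotent
      // so that a restarted operator pod does not crash when the object already exists.
      package monitoring

      import (
          "context"

          monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "sigs.k8s.io/controller-runtime/pkg/client"
          "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
      )

      // ensureMetricsServiceMonitor creates the operator metrics ServiceMonitor, or
      // updates it in place if a previous instance of the pod already created it.
      func ensureMetricsServiceMonitor(ctx context.Context, c client.Client, namespace string) error {
          sm := &monitoringv1.ServiceMonitor{
              ObjectMeta: metav1.ObjectMeta{
                  Name:      "opentelemetry-operator-metrics-monitor",
                  Namespace: namespace,
              },
          }
          // CreateOrUpdate falls back to an update when the object already exists,
          // so "already exists" is no longer a fatal startup error.
          _, err := controllerutil.CreateOrUpdate(ctx, c, sm, func() error {
              sm.Spec.Endpoints = []monitoringv1.Endpoint{{Port: "metrics"}}
              sm.Spec.Selector = metav1.LabelSelector{
                  MatchLabels: map[string]string{"app.kubernetes.io/name": "opentelemetry-operator"},
              }
              return nil
          })
          return err
      }

      Besides surviving restarts, CreateOrUpdate would also reconcile the spec if an older pod left a stale ServiceMonitor behind.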
      

      Steps to reproduce the issue:

      *Build and install the OpenTelemetry operator from the latest upstream branch.

      oc new-project opentelemetry-operator
      oc label namespace opentelemetry-operator openshift.io/cluster-monitoring="true"
      operator-sdk run bundle --timeout=5m --security-context-config=restricted quay.io/rhn_support_ikanse/opentelemetry-operator-bundle:latest
      

      *Update the operator CSV and add additional feature flags. For example:

      oc edit csv opentelemetry-operator.v0.113.0

      *Add the --openshift-create-dashboard=true feature flag.

      *Wait for some time; the operator pod fails with the following error.

      % oc logs -f opentelemetry-operator-controller-manager-6d74966ff7-5jsjg 
      Defaulted container "manager" out of: manager, kube-rbac-proxy
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.075833444Z","message":"Starting the OpenTelemetry Operator","opentelemetry-operator":"0.111.0-37-g7c79f2df","opentelemetry-collector":"ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector:0.113.0","opentelemetry-targetallocator":"ghcr.io/open-telemetry/opentelemetry-operator/target-allocator:0.113.0","operator-opamp-bridge":"ghcr.io/open-telemetry/opentelemetry-operator/operator-opamp-bridge:0.113.0","auto-instrumentation-java":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:1.33.5","auto-instrumentation-nodejs":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.53.0","auto-instrumentation-python":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:0.48b0","auto-instrumentation-dotnet":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:1.2.0","auto-instrumentation-go":"ghcr.io/open-telemetry/opentelemetry-go-instrumentation/autoinstrumentation-go:v0.17.0-alpha","auto-instrumentation-apache-httpd":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4","auto-instrumentation-nginx":"ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4","feature-gates":"operator.collector.default.config,-operator.collector.targetallocatorcr,-operator.golang.flags,operator.observability.prometheus,-operator.sidecarcontainers.native,-operator.targetallocator.mtls","build-date":"2024-11-11T09:41:30Z","go-version":"go1.22.7","go-arch":"amd64","go-os":"linux","labels-filter":[],"annotations-filter":[],"enable-multi-instrumentation":true,"enable-apache-httpd-instrumentation":true,"enable-dotnet-instrumentation":true,"enable-go-instrumentation":true,"enable-python-instrumentation":true,"enable-nginx-instrumentation":true,"enable-nodejs-instrumentation":true,"enable-java-instrumentation":true,"create-openshift-dashboard":true,"zap-message-key":"message","zap-level-key":"level","zap-time-key":"timestamp","zap-level-format":"uppercase"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.076024807Z","logger":"setup","message":"the env var WATCH_NAMESPACE isn't set, watching all namespaces"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.124258263Z","logger":"setup","message":"Prometheus CRDs are installed, adding to scheme."}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.124408958Z","logger":"setup","message":"Openshift CRDs are installed, adding to scheme."}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.12449486Z","logger":"setup","message":"Cert-Manager is not available to the operator, skipping adding to scheme."}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.132806879Z","logger":"controller-runtime.builder","message":"Registering a mutating webhook","GVK":"opentelemetry.io/v1beta1, Kind=OpenTelemetryCollector","path":"/mutate-opentelemetry-io-v1beta1-opentelemetrycollector"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.132939372Z","logger":"controller-runtime.webhook","message":"Registering webhook","path":"/mutate-opentelemetry-io-v1beta1-opentelemetrycollector"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.132975794Z","logger":"controller-runtime.builder","message":"Registering a validating webhook","GVK":"opentelemetry.io/v1beta1, Kind=OpenTelemetryCollector","path":"/validate-opentelemetry-io-v1beta1-opentelemetrycollector"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133040912Z","logger":"controller-runtime.webhook","message":"Registering webhook","path":"/validate-opentelemetry-io-v1beta1-opentelemetrycollector"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133125994Z","logger":"controller-runtime.webhook","message":"Registering webhook","path":"/convert"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133132572Z","logger":"controller-runtime.builder","message":"Conversion webhook enabled","GVK":"opentelemetry.io/v1beta1, Kind=OpenTelemetryCollector"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133167389Z","logger":"controller-runtime.builder","message":"Registering a mutating webhook","GVK":"opentelemetry.io/v1alpha1, Kind=Instrumentation","path":"/mutate-opentelemetry-io-v1alpha1-instrumentation"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133213816Z","logger":"controller-runtime.webhook","message":"Registering webhook","path":"/mutate-opentelemetry-io-v1alpha1-instrumentation"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133246192Z","logger":"controller-runtime.builder","message":"Registering a validating webhook","GVK":"opentelemetry.io/v1alpha1, Kind=Instrumentation","path":"/validate-opentelemetry-io-v1alpha1-instrumentation"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133300882Z","logger":"controller-runtime.webhook","message":"Registering webhook","path":"/validate-opentelemetry-io-v1alpha1-instrumentation"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133399225Z","logger":"controller-runtime.webhook","message":"Registering webhook","path":"/mutate-v1-pod"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133433438Z","logger":"controller-runtime.builder","message":"Registering a mutating webhook","GVK":"opentelemetry.io/v1alpha1, Kind=OpAMPBridge","path":"/mutate-opentelemetry-io-v1alpha1-opampbridge"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133492285Z","logger":"controller-runtime.webhook","message":"Registering webhook","path":"/mutate-opentelemetry-io-v1alpha1-opampbridge"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133518112Z","logger":"controller-runtime.builder","message":"Registering a validating webhook","GVK":"opentelemetry.io/v1alpha1, Kind=OpAMPBridge","path":"/validate-opentelemetry-io-v1alpha1-opampbridge"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133559052Z","logger":"controller-runtime.webhook","message":"Registering webhook","path":"/validate-opentelemetry-io-v1alpha1-opampbridge"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133600475Z","logger":"setup","message":"starting manager"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133670785Z","logger":"controller-runtime.metrics","message":"Starting metrics server"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133741287Z","message":"starting server","name":"health probe","addr":"[::]:8081"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.133763218Z","logger":"controller-runtime.metrics","message":"Serving metrics server","bindAddress":"127.0.0.1:8080","secure":false}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.13379875Z","logger":"controller-runtime.webhook","message":"Starting webhook server"}
      I1111 10:43:00.133919       1 leaderelection.go:254] attempting to acquire leader lease opentelemetry-operator/9f7554c3.opentelemetry.io...
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.134264931Z","logger":"controller-runtime.certwatcher","message":"Updated current TLS certificate"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.134351701Z","logger":"controller-runtime.webhook","message":"Serving webhook server","host":"","port":9443}
      {"level":"INFO","timestamp":"2024-11-11T10:43:00.134375087Z","logger":"controller-runtime.certwatcher","message":"Starting certificate watcher"}
      I1111 10:43:39.597334       1 leaderelection.go:268] successfully acquired lease opentelemetry-operator/9f7554c3.opentelemetry.io
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.597559804Z","logger":"instrumentation-upgrade","message":"looking for managed Instrumentation instances to upgrade"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.597655731Z","logger":"collector-upgrade","message":"looking for managed instances to upgrade"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598076817Z","message":"Starting EventSource","controller":"opampbridge","controllerGroup":"opentelemetry.io","controllerKind":"OpAMPBridge","source":"kind source: *v1alpha1.OpAMPBridge"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598127544Z","message":"Starting EventSource","controller":"opampbridge","controllerGroup":"opentelemetry.io","controllerKind":"OpAMPBridge","source":"kind source: *v1.ConfigMap"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598147553Z","message":"Starting EventSource","controller":"opampbridge","controllerGroup":"opentelemetry.io","controllerKind":"OpAMPBridge","source":"kind source: *v1.ServiceAccount"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598164479Z","message":"Starting EventSource","controller":"opampbridge","controllerGroup":"opentelemetry.io","controllerKind":"OpAMPBridge","source":"kind source: *v1.Service"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598181215Z","message":"Starting EventSource","controller":"opampbridge","controllerGroup":"opentelemetry.io","controllerKind":"OpAMPBridge","source":"kind source: *v1.Deployment"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598201057Z","message":"Starting Controller","controller":"opampbridge","controllerGroup":"opentelemetry.io","controllerKind":"OpAMPBridge"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598301273Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1beta1.OpenTelemetryCollector"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598335311Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.ConfigMap"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598353313Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.ServiceAccount"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598378915Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.Service"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598394453Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.Deployment"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598411555Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.DaemonSet"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598447622Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.StatefulSet"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598463995Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.Ingress"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598481487Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v2.HorizontalPodAutoscaler"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598502517Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.PodDisruptionBudget"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598519809Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.ServiceMonitor"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598536778Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.PodMonitor"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598557484Z","message":"Starting EventSource","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","source":"kind source: *v1.Route"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.598573652Z","message":"Starting Controller","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.620410644Z","message":"Stopping and waiting for non leader election runnables"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.620624593Z","message":"Stopping and waiting for leader election runnables"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.620700279Z","message":"error received after stop sequence was engaged","error":"failed to list: Timeout: failed waiting for *v1beta1.OpenTelemetryCollector Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/manager/internal.go:512"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.620777195Z","message":"error received after stop sequence was engaged","error":"failed to list: Timeout: failed waiting for *v1alpha1.Instrumentation Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/manager/internal.go:512"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.620804602Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.Deployment Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.620901879Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v2.HorizontalPodAutoscaler Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.620958526Z","message":"Starting workers","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector","worker count":1}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.620990845Z","message":"Shutdown signal received, waiting for all workers to finish","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.621002432Z","message":"All workers finished","controller":"opentelemetrycollector","controllerGroup":"opentelemetry.io","controllerKind":"OpenTelemetryCollector"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.621008288Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1alpha1.OpAMPBridge Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.621028809Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.Service Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.621045292Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.ConfigMap Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.621074139Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.ServiceAccount Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.621089702Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.Deployment Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.621104869Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.Ingress Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.621139674Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.Route Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.621157051Z","message":"Starting workers","controller":"opampbridge","controllerGroup":"opentelemetry.io","controllerKind":"OpAMPBridge","worker count":1}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.621161652Z","message":"Shutdown signal received, waiting for all workers to finish","controller":"opampbridge","controllerGroup":"opentelemetry.io","controllerKind":"OpAMPBridge"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.621165807Z","message":"All workers finished","controller":"opampbridge","controllerGroup":"opentelemetry.io","controllerKind":"OpAMPBridge"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.621170561Z","message":"Stopping and waiting for caches"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.621180579Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.StatefulSet Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      E1111 10:43:39.621359       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: Failed to watch *v1.Deployment: Get \"https://172.30.0.1:443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=216563&timeoutSeconds=575&watch=true\": context canceled" logger="UnhandledError"
      W1111 10:43:39.621380       1 reflector.go:484] pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: watch of *v1.Ingress ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
      W1111 10:43:39.621477       1 reflector.go:484] pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
      E1111 10:43:39.621777       1 request.go:1255] Unexpected error when reading response body: context canceled
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.622074616Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.ServiceMonitor Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.623458899Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.PodDisruptionBudget Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.623687209Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.DaemonSet Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.623784452Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.PodMonitor Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.623815357Z","logger":"controller-runtime.source.EventHandler","message":"failed to get informer from cache","error":"Timeout: failed waiting for *v1.ConfigMap Informer to sync","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:76\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:53\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/loop.go:54\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/Users/ikanse/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/internal/source/kind.go:64"}
      W1111 10:43:39.624062       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.DaemonSet: client rate limiter Wait returned an error: context canceled
      E1111 10:43:39.624107       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: Failed to watch *v1.DaemonSet: failed to list *v1.DaemonSet: client rate limiter Wait returned an error: context canceled" logger="UnhandledError"
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.626215875Z","message":"Stopping and waiting for webhooks"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.626490589Z","logger":"controller-runtime.webhook","message":"Shutting down webhook server with timeout of 1 minute"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.626564634Z","message":"Stopping and waiting for HTTP servers"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.626600579Z","logger":"controller-runtime.metrics","message":"Shutting down metrics server with timeout of 1 minute"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.626670999Z","message":"shutting down server","name":"health probe","addr":"[::]:8081"}
      {"level":"INFO","timestamp":"2024-11-11T10:43:39.626704566Z","message":"Wait completed, proceeding to shutdown the manager"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.631076418Z","message":"error received after stop sequence was engaged","error":"leader election lost","stacktrace":"sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/Users/ikanse/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.1/pkg/manager/internal.go:512"}
      {"level":"ERROR","timestamp":"2024-11-11T10:43:39.631054406Z","logger":"setup","message":"problem running manager","error":"error creating service monitor: servicemonitors.monitoring.coreos.com \"opentelemetry-operator-metrics-monitor\" already exists","stacktrace":"main.main\n\t/Users/ikanse/opentelemetry-operator/main.go:517\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:271"} 

      Additional notes:

      Issue detected in https://github.com/openshift/open-telemetry-opentelemetry-operator/pull/100 and verified locally. 
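
      Both failed runs abort at the same call site (main.go:517 in the stack traces above), so any restart of the operator pod while the previously created ServiceMonitor is still present would likely reproduce this. An even smaller mitigation than the CreateOrUpdate sketch above, again only illustrative and under the same assumption about how the object is created, is to treat AlreadyExists as benign at the call site:

      // Hypothetical sketch only: a minimal call-site change that treats an existing
      // ServiceMonitor as benign instead of aborting manager startup.
      package monitoring

      import (
          "context"
          "fmt"

          monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
          apierrors "k8s.io/apimachinery/pkg/api/errors"
          "sigs.k8s.io/controller-runtime/pkg/client"
      )

      // createMetricsServiceMonitor creates the ServiceMonitor once and ignores the
      // AlreadyExists error a restarted pod would otherwise turn into a crash.
      func createMetricsServiceMonitor(ctx context.Context, c client.Client, sm *monitoringv1.ServiceMonitor) error {
          if err := c.Create(ctx, sm); err != nil && !apierrors.IsAlreadyExists(err) {
              return fmt.Errorf("error creating service monitor: %w", err)
          }
          // AlreadyExists just means a previous instance of the operator pod created
          // the object; startup can continue (any spec drift is left untouched here).
          return nil
      }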

       

              Assignee: Israel Blancas Alvarez (rhn-support-iblancas)
              Reporter: Ishwar Kanse (rhn-support-ikanse)