Feature Request
Resolution: Unresolved
Quality / Stability / Reliability
Configuring the prometheusremotewrite exporter causes extensive error logs (Go stack traces) when restarting the OpenTelemetry Collector (OTC), due to out-of-bounds metrics.
spec:
  config:
    exporters:
      prometheusremotewrite:
        endpoint: 'http://prometheus-remote-write.prometheus.svc.cluster.local:9090/api/v1/write'
        target_info:
          enabled: true
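
For context, a minimal sketch of the full OpenTelemetryCollector custom resource this exporter block would sit in; the receiver and pipeline wiring (otlp receiver, metrics pipeline) and the resource name are assumptions added for illustration and are not taken from the original report:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otelcol                # assumed name, for illustration only
spec:
  config:
    receivers:
      otlp:                    # assumed receiver; the report only shows the exporter
        protocols:
          grpc: {}
    exporters:
      prometheusremotewrite:
        endpoint: 'http://prometheus-remote-write.prometheus.svc.cluster.local:9090/api/v1/write'
        target_info:
          enabled: true
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          exporters: [prometheusremotewrite]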
go.opentelemetry.io/collector/exporter/exporterhelper/internal.NewQueueSender.func1
	go.opentelemetry.io/collector/exporter/exporterhelper@v0.140.0/internal/queue_sender.go:49
go.opentelemetry.io/collector/exporter/exporterhelper/internal/queuebatch.(*disabledBatcher[...]).Consume
	go.opentelemetry.io/collector/exporter/exporterhelper@v0.140.0/internal/queuebatch/disabled_batcher.go:23
go.opentelemetry.io/collector/exporter/exporterhelper/internal/queue.(*asyncQueue[...]).Start.func1
	go.opentelemetry.io/collector/exporter/exporterhelper@v0.140.0/internal/queue/async_queue.go:49
2025-11-27T11:17:50.496Z	error	prometheusremotewriteexporter@v0.140.1/exporter.go:487	failed to send WriteRequest to remote endpoint	{"resource": {"service.instance.id": "3da41a58-de87-4ec4-8f3b-0932a208a921", "service.name": "otelcol", "service.version": "0.140.1"}, "otelcol.component.id": "prometheusremotewrite", "otelcol.component.kind": "exporter", "otelcol.signal": "metrics", "status_code": 400, "status": "400 Bad Request", "endpoint": "http://prometheus-remote-write.prometheus.svc.cluster.local:9090/api/v1/write", "retry_attempt": 1, "error": "out of bounds\n"}
This settles once the timestamps from the OTC queue catch up for the remote-write target, but it produces a large number of error logs and causes confusion. Cleaning the Go stack traces out of the logs would be helpful and highly appreciated.
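
As a possible interim workaround (a sketch, assuming the collector's zap-based telemetry logging options apply here; this has not been verified against v0.140.x), the stack traces attached to error-level log entries can be suppressed via the collector's own telemetry settings, while the 400 "out of bounds" errors themselves are still logged:

spec:
  config:
    service:
      telemetry:
        logs:
          # Assumption: suppresses the Go stack trace appended to error-level
          # log entries; it does not address the underlying out-of-bounds writes.
          disable_stacktrace: true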
- clones: TRACING-5836 [RHOSDT 3.8] SCC issues when assigning privileges to the ServiceAccount and attach storage (New)