-
Task
-
Resolution: Done
-
Normal
-
None
-
5
-
False
-
None
-
False
-
NEW
-
OBSDA-550 - Updated APIs for Logging 6.0
-
NEW
-
With this enhancement, the collector configuration is deployed as a ConfigMap instead of a Secret. This makes it possible for users to view and edit the configuration when the collector is in an unmanaged state.
-
Enhancement
-
Log Collection - Sprint 254, Log Collection - Sprint 255
Summary
Move the Vector configuration from a Secret to a ConfigMap
Acceptance Criteria
- Tests should pass
- Update documentation if needed
Implementation proposal
The proposed solution is to use Vector's secrets management mechanism. More information can be found at the following links:
- Vector Secrets Management Highlights
- Vector Configuration Reference - Secrets
- Vector GitHub PR #11985
On the Cluster Logging Operator side, we need to mount Secret data containing sensitive information, such as passwords, tokens, or other authorization keys, at a known/predictable file path.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: collector-inst
spec:
  containers:
    - name: collector
      image: vector:latest
      volumeMounts:
        - name: secret-volume
          mountPath: "/etc/secret-data"
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: secret-data
Changes in the generator
For sensitive values, a new secret section needs to be added to the Vector config file, and the obtained sensitive data must be assigned to the corresponding values. This is done only once, when the config file is loaded.
For example:
CLO config:
outputs:
  - name: myhttp
    type: http
    http:
      authentication:
        username:
          key: username
          secret:
            name: foo
  - name: mygcp
    type: googleCloudLogging
    googleCloudLogging:
      authentication:
        credentials:
          key: credentials.json
          secret:
            name: foo
Resulting vector.toml
[secret.my-sink]
type = "exec"
command = ["./read-secret-data"]

[sinks.my_sink]
type = "my_logs"
inputs = [""]
endpoint = "https://endpoint"
password = "SECRET[my-sink.password]"
username = "SECRET[my-sink.username]"

[sinks.output_myhttp]
type = "my_logs"
inputs = [""]
endpoint = "https://endpoint"
username = "SECRET[my-sink.foo_username]"

[sinks.output_mygcp]
type = "my_logs"
inputs = [""]
endpoint = "https://endpoint"
credentials = "SECRET[my-sink.foo_credentials_json]"
The read-secret-data script, which reads the data from the mounted files, must return the data in JSON format, e.g.:
{ "password": {"value": "AKIAIOSFODNN7EXAMPLE", "error": null}, "username": {"value": "Thor", "error": null}, "foo_username": {"value": "mypassword", "error": null}, "foo_credentials_json": {"value": "{\"a\":\"b\"}", "error": null} }
The script can look something like this:
#!/bin/bash
cat <<EOF
{
  "username": {
    "value": "$(cat /tmp/username)",
    "error": null
  },
  "password": {
    "value": "$(cat /tmp/password)",
    "error": null
  }
}
EOF
Note:
We need to think about script generation: something more intelligent/universal that works for any value name. A rough sketch of such a generic script is shown below.
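A minimal sketch of what a more generic script could look like, assuming each referenced Secret is mounted under its own subdirectory of a single base directory. The mount path /etc/collector/secrets, the directory layout, and the <secret>_<key> JSON key naming are assumptions for illustration, not the decided design:
#!/bin/bash
# Generic read-secret-data sketch (illustrative only): assumes each referenced
# Secret is mounted under $SECRETS_DIR/<secret-name>/<key> and emits one JSON
# entry per file, keyed as "<secret-name>_<key>" with dots replaced by
# underscores (matching keys such as "foo_credentials_json" above).
SECRETS_DIR="${SECRETS_DIR:-/etc/collector/secrets}"

printf '{'
sep=""
for file in "$SECRETS_DIR"/*/*; do
  [ -f "$file" ] || continue
  secret="$(basename "$(dirname "$file")")"
  key="$(basename "$file" | tr '.' '_')"
  # Escape backslashes and quotes, and drop newlines, so the output stays valid JSON.
  value="$(sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' "$file" | tr -d '\n')"
  printf '%s"%s_%s": {"value": "%s", "error": null}' "$sep" "$secret" "$key" "$value"
  sep=", "
done
printf '}\n'
The JSON escaping here is deliberately simple; a real implementation would need to handle arbitrary binary or multi-line secret values more carefully.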
- relates to
-
LOG-5548 Investigate refactoring config secrets to rely upon env vars
- Closed
- links to
-
RHBA-2024:137361 Logging for Red Hat OpenShift - 6.0.0