Type: Task
Resolution: Done
Priority: Major
Sprint: Tracing Sprint #41, Tracing Sprint #42
This task is related to and depends on https://issues.redhat.com/browse/TRACING-1161.
The goal is to determine the maximum ingestion capacity of Apache Druid when used as local storage; see https://druid.apache.org/docs/latest/operations/single-server.html
This task will require implementing a new span storage reader/writer that communicates with the external storage (running in a sidecar on localhost). This storage layer will need to flatten each span into a set of column name/value pairs: the span's start time maps to the record's timestamp, and the duration can be stored as a metric field, allowing aggregated queries. The top-level span fields (e.g. traceId, spanId, operation) become columns, as do the tags; log events are ignored for now.
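The flattening step described above can be sketched roughly as follows. This is a minimal illustration, assuming a Jaeger-style span represented as a dict; the exact column names, the helper name `flatten_span`, and the sample field layout are assumptions, not the final schema:

```python
import json

def flatten_span(span):
    """Flatten a Jaeger-style span into a flat row for Druid ingestion.

    The start time becomes the row timestamp, the duration a metric
    column (enabling aggregated queries), and the top-level fields
    plus tags become ordinary columns. Log events are ignored for now.
    """
    row = {
        "__time": span["startTime"],        # mapped to Druid's timestamp
        "traceId": span["traceId"],
        "spanId": span["spanId"],
        "operation": span["operationName"],
        "duration": span["duration"],       # metric field for aggregations
        # full span kept as JSON so the trace can be reconstructed later
        "span": json.dumps(span),
    }
    # each tag becomes its own column
    for tag in span.get("tags", []):
        row[tag["key"]] = tag["value"]
    return row

example = {
    "traceId": "abc123",
    "spanId": "def456",
    "operationName": "GET /api",
    "startTime": 1600000000000,
    "duration": 42,
    "tags": [{"key": "http.status_code", "value": "200"}],
}
print(flatten_span(example)["__time"])  # prints 1600000000000
```

Storing the whole span as a JSON string (or compressed binary) in one column trades some storage overhead for a simple retrieval path.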
To enable spans to be retrieved (i.e. when reconstructing a trace), include for now a column holding the full span (as JSON, or possibly compressed binary). Retrieving a particular trace then becomes a query for the 'span' column filtered by the relevant 'traceId', so it touches only those two columns.
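Such a lookup could go through Druid's SQL API, which accepts a JSON body with a query and dynamic parameters. A sketch of building that request body; the datasource name, the router URL, and the function name are assumptions:

```python
import json

# assumed local sidecar address; Druid's router listens on 8888 by default
DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

def trace_lookup_payload(trace_id, datasource="jaeger_spans"):
    """Build a Druid SQL request body that fetches only the 'span'
    column for one trace. A dynamic parameter ('?') is used instead
    of string interpolation. The datasource name is hypothetical."""
    return json.dumps({
        "query": f'SELECT "span" FROM "{datasource}" WHERE "traceId" = ?',
        "parameters": [{"type": "VARCHAR", "value": trace_id}],
    })

print(trace_lookup_payload("abc123"))
```

Because the query filters on traceId and projects only the span column, Druid never needs to scan the other flattened columns for this access pattern.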
Issue links:
- clones: TRACING-1162 Run performance tests against Badger storage (Closed)
- is cloned by: TRACING-1169 Run performance tests against QuestDB local storage (Closed)