Distributed Tracing / TRACING-1169

Run performance tests against QuestDB local storage


    • Type: Task
    • Resolution: Done
    • Priority: Major
    • Fix Version/s: jaeger-1.20.0
    • Sprint: Tracing Sprint #41, Tracing Sprint #42

      This is related to and depends on https://issues.redhat.com/browse/TRACING-1161

      The goal is to determine the maximum ingestion capacity of QuestDB when used as a local storage; see https://www.questdb.io/getstarted

      This task will require a new span storage reader/writer to be implemented to communicate with the external storage (running as a sidecar on localhost). QuestDB supports a REST API as well as the PostgreSQL wire protocol, so a standard Postgres driver can be used, as sketched below.
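      A minimal sketch of what the writer side could look like over the PostgreSQL wire protocol, using Go's database/sql with the lib/pq driver and QuestDB's default PG credentials (port 8812, user admin, password quest). The table and column names (jaeger_spans, traceId, ...) are illustrative rather than an agreed schema, and the Span struct is a local stand-in for Jaeger's model.Span:

{code:go}
package questdb

import (
	"context"
	"database/sql"
	"time"

	_ "github.com/lib/pq" // QuestDB speaks the PostgreSQL wire protocol
)

// Span is a minimal stand-in for Jaeger's model.Span.
type Span struct {
	TraceID   string
	SpanID    string
	Operation string
	StartTime time.Time
	Duration  time.Duration
}

// Writer writes flattened spans to the QuestDB sidecar on localhost.
type Writer struct {
	db *sql.DB
}

// NewWriter connects over the PG wire protocol using QuestDB's defaults.
func NewWriter() (*Writer, error) {
	db, err := sql.Open("postgres",
		"host=localhost port=8812 user=admin password=quest dbname=qdb sslmode=disable")
	if err != nil {
		return nil, err
	}
	return &Writer{db: db}, nil
}

// WriteSpan inserts one span; ts is the designated timestamp column, and
// duration is stored as microseconds so it can be aggregated.
func (w *Writer) WriteSpan(ctx context.Context, s *Span) error {
	_, err := w.db.ExecContext(ctx,
		`INSERT INTO jaeger_spans (ts, traceId, spanId, operation, duration)
		 VALUES ($1, $2, $3, $4, $5)`,
		s.StartTime, s.TraceID, s.SpanID, s.Operation, s.Duration.Microseconds())
	return err
}
{code}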

      As with Apache Druid, this storage layer will need to flatten each span into a set of column name/value pairs: the start time maps to the designated timestamp for the record, and a metric field (dimension) can be included for the duration, allowing aggregated queries. The top-level fields would become columns (e.g. traceId, spanId, operation), as would the tags; for now, ignore the log events.
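      To make the flattening concrete, one possible shape for the column mapping, reusing the Span stand-in from the sketch above (the tag_ prefix is illustrative, just to keep tag columns from clashing with the fixed ones):

{code:go}
// flattenSpan maps a span and its tags to column name/value pairs.
// Top-level fields become fixed columns; each tag becomes its own column.
// Log events are intentionally dropped for now, per the task description.
func flattenSpan(s *Span, tags map[string]string) map[string]interface{} {
	cols := map[string]interface{}{
		"ts":        s.StartTime, // designated timestamp for the record
		"traceId":   s.TraceID,
		"spanId":    s.SpanID,
		"operation": s.Operation,
		"duration":  s.Duration.Microseconds(), // metric field for aggregation
	}
	for k, v := range tags {
		cols["tag_"+k] = v
	}
	return cols
}
{code}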

      To enable a span to be retrieved (i.e. when reconstructing a trace), for now just include a column holding the whole span (as JSON, or possibly compressed binary). When a particular trace ID is requested, issue a query on the 'span' column filtered by the relevant 'traceId', so the read only touches those two columns.
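      A sketch of that read path under the same assumptions, with a span column assumed to hold the serialized payload; only traceId and span are touched:

{code:go}
// GetTraceSpans fetches the raw span payloads for one trace. Only the
// traceId and span columns are read; the caller deserializes each payload
// when reconstructing the trace.
func (w *Writer) GetTraceSpans(ctx context.Context, traceID string) ([][]byte, error) {
	rows, err := w.db.QueryContext(ctx,
		`SELECT span FROM jaeger_spans WHERE traceId = $1`, traceID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var spans [][]byte
	for rows.Next() {
		var payload []byte
		if err := rows.Scan(&payload); err != nil {
			return nil, err
		}
		spans = append(spans, payload) // JSON or compressed binary span blob
	}
	return spans, rows.Err()
}
{code}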

      The complexity with this one is that QuestDB is schema-based, so when a new tag is reported with a span it will be necessary to use the ALTER TABLE ADD COLUMN command. One option is to keep the current set of columns cached and automatically perform the ADD COLUMN command when a new column is detected; see the sketch below.
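      One possible shape for that cache, assuming the Writer from the first sketch plus the sync and fmt imports. isDuplicateColumn is a hypothetical helper for tolerating the race where another writer adds the same column first; the known set could also be pre-seeded with the fixed columns:

{code:go}
// columnCache remembers which columns already exist so ALTER TABLE is only
// issued the first time a new tag key is seen by this process.
type columnCache struct {
	mu    sync.Mutex
	known map[string]bool
}

// ensureColumns adds any not-yet-known columns before an insert. DDL cannot
// be parameterized, so column names must be validated before interpolation.
func (w *Writer) ensureColumns(ctx context.Context, cache *columnCache, cols map[string]interface{}) error {
	cache.mu.Lock()
	defer cache.mu.Unlock()
	for name := range cols {
		if cache.known[name] {
			continue
		}
		// STRING is a safe default type for tag values in this sketch.
		_, err := w.db.ExecContext(ctx,
			fmt.Sprintf(`ALTER TABLE jaeger_spans ADD COLUMN "%s" STRING`, name))
		if err != nil && !isDuplicateColumn(err) {
			return err // a duplicate column means someone else added it first
		}
		cache.known[name] = true
	}
	return nil
}
{code}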

              Assignee: Ruben Vargas Palma (rvargasp@redhat.com)
              Reporter: Pavol Loffay (ploffay@redhat.com)
              Project: Distributed Tracing
              Votes: 0
              Watchers: 2
