WildFly / WFLY-5495

Can't configure ID/data/timestamp columns of JDBC-based cache stores


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version/s: 10.0.0.CR4
    • Affects Version/s: 10.0.0.CR2
    • Component/s: Clustering
    • Labels: None

      When I try to configure a JDBC-based cache store for an "offload" Infinispan cache, the configuration for the ID/data/timestamp columns is not applied. I discovered this by writing:

      <replicated-cache name="offload" mode="SYNC">
          <transaction mode="BATCH"/>
          <binary-keyed-jdbc-store data-source="testDS" passivation="false" preload="true" purge="false" shared="true">
              <binary-keyed-table prefix="b">
                  <id-column name="id" type="VARCHAR(255)"/>
                  <data-column name="datum" type="VARBINARY(10000)"/>
                  <timestamp-column name="ver" type="BIGINT"/>
              </binary-keyed-table>
          </binary-keyed-jdbc-store>
      </replicated-cache>
      

      and getting the following error:

      15:12:44,254 ERROR [org.infinispan.persistence.jdbc.TableManipulation] (ServerService Thread Pool -- 65) ISPN008011: Error while creating table; used DDL statement: 'CREATE TABLE `b_clusterbench_ee7_ear_clusterbench_ee7_web_offload_war`(id VARCHAR NOT NULL, datum BINARY, version BIGINT, PRIMARY KEY (id))': com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'NOT NULL, datum BINARY, version BIGINT, PRIMARY KEY (id))' at line 1
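
      The immediate syntax error is the bare `VARCHAR` emitted for the default id column: MySQL requires an explicit length for VARCHAR, so parsing stops right before `NOT NULL`. A minimal illustration (hypothetical table names, not from the server log):

      ```sql
      -- Rejected by MySQL: VARCHAR needs an explicit length
      CREATE TABLE t_bad (id VARCHAR NOT NULL, PRIMARY KEY (id));

      -- Accepted: length supplied
      CREATE TABLE t_ok (id VARCHAR(255) NOT NULL, PRIMARY KEY (id));
      ```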
      

      Clearly, the configuration isn't applied: the SQL statement cited in the error message uses the default column names and types, which differ from what was configured.
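
      Had the configured columns been honored, the generated DDL would presumably have looked like this instead (a sketch using the column names and types from the configuration above, not output actually observed):

      ```sql
      CREATE TABLE `b_clusterbench_ee7_ear_clusterbench_ee7_web_offload_war` (
          id VARCHAR(255) NOT NULL,
          datum VARBINARY(10000),
          ver BIGINT,
          PRIMARY KEY (id)
      );
      ```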

              Assignee: Paul Ferraro (pferraro@redhat.com)
              Reporter: Ladislav Thon (lthon@redhat.com)
              Votes: 0
              Watchers: 3
