KOGITO-5359: Native Image tests are broken in MongoDB persistence

    • Sprint: 2021 Week 31-33 (from Aug 2)

      PR: https://github.com/kiegroup/kogito-runtimes/pull/1379

      Jenkins native-tests run: https://eng-jenkins-csb-business-automation.apps.ocp4.prod.psi.redhat.com/blue/organizations/jenkins/KIE%2Fkogito%2Fpullrequest%2Fkogito-runtimes.native.runtimes/detail/kogito-runtimes.native.runtimes/4/tests

      It looks like there is an issue with marshalling: the native-image run fails with ProcessInstanceMarshallerException: "No marshaller found for class java.lang.String" (full stack trace below).
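
      As a rough, hypothetical sketch of the failure mode (this is NOT Kogito's actual implementation; the class names and the ServiceLoader-based discovery below are assumptions), the root cause reported further down boils down to a strategy lookup that finds no registered marshaller for java.lang.String, which can happen in a native image when the provider or reflection registration is dropped at build time:

      // Hypothetical illustration only, not Kogito's real code: it mimics the kind of
      // lookup that ends in "No marshaller found for class java.lang.String" when the
      // set of registered strategies comes back empty in the native image.
      import java.util.List;
      import java.util.ServiceLoader;
      import java.util.stream.Collectors;

      public class MarshallerLookupSketch {

          // Stand-in SPI for an object marshaller strategy (name is made up).
          public interface ObjectMarshallerStrategy {
              boolean accept(Object value);
          }

          static ObjectMarshallerStrategy findStrategyFor(Object value,
                                                          List<ObjectMarshallerStrategy> strategies) {
              return strategies.stream()
                      .filter(s -> s.accept(value))
                      .findFirst()
                      // Mirrors the message seen in the stack trace below.
                      .orElseThrow(() -> new IllegalStateException(
                              "No marshaller found for class " + value.getClass().getName()));
          }

          public static void main(String[] args) {
              // On the JVM, ServiceLoader discovery would typically find implementations;
              // in a native image the provider/reflection metadata must be registered at
              // build time, otherwise this list can be empty and the lookup throws.
              List<ObjectMarshallerStrategy> discovered =
                      ServiceLoader.load(ObjectMarshallerStrategy.class).stream()
                              .map(ServiceLoader.Provider::get)
                              .collect(Collectors.toList());

              findStrategyFor("Tiago", discovered);
          }
      }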

      Stacktrace
      java.lang.AssertionError: 
      1 expectation failed.
      Expected status code <201> but was <500>.
      Standard Output
      2021-06-17 13:34:34,562 INFO  [org.kie.kog.cod.api.uti.AddonsConfigDiscovery] (build-41) Performed addonsConfig discovery, found: AddonsConfig{usePersistence=true, useTracing=false, useMonitoring=false, usePrometheusMonitoring=false, useCloudEvents=false, useExplainability=false, useProcessSVG=false}
      2021-06-17 13:34:34,857 INFO  [org.kie.kog.cod.cor.uti.ApplicationGeneratorDiscovery] (build-41) Generator discovery performed, found [openapispecs, processes, rules, decisions, predictions]
      2021-06-17 13:34:36,521 INFO  [org.kie.kog.cod.api.uti.AddonsConfigDiscovery] (build-6) Performed addonsConfig discovery, found: AddonsConfig{usePersistence=true, useTracing=false, useMonitoring=false, usePrometheusMonitoring=false, useCloudEvents=false, useExplainability=false, useProcessSVG=false}
      2021-06-17 13:34:37,039 INFO  [org.tes.doc.DockerClientProviderStrategy] (build-19) Loaded org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy from ~/.testcontainers.properties, will try it first
      2021-06-17 13:34:37,764 INFO  [org.tes.doc.DockerClientProviderStrategy] (build-19) Found Docker environment with Environment variables, system properties and defaults. Resolved dockerHost=unix:///var/run/docker.sock
      2021-06-17 13:34:37,766 INFO  [org.tes.DockerClientFactory] (build-19) Docker host IP address is localhost
      2021-06-17 13:34:37,817 INFO  [org.tes.DockerClientFactory] (build-19) Connected to docker: 
        Server Version: 20.10.6
        API Version: 1.41
        Operating System: Red Hat Enterprise Linux Server 7.9 (Maipo)
        Total Memory: 15884 MB
      2021-06-17 13:34:37,821 INFO  [org.tes.uti.ImageNameSubstitutor] (build-19) Found configured ImageNameSubstitutor: Kogito Image Name Substitutor
      2021-06-17 13:34:37,822 INFO  [org.tes.uti.ImageNameSubstitutor] (build-19) Image name substitution will be performed by: Chained substitutor of 'DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor')' and then 'Kogito Image Name Substitutor'
      2021-06-17 13:34:37,859 INFO  [org.tes.uti.RegistryAuthLocator] (build-19) Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: testcontainers/ryuk:0.3.1, configFile: /home/jenkins/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /home/jenkins/.docker/config.json (No such file or directory)
      2021-06-17 13:34:38,454 INFO  [org.tes.DockerClientFactory] (build-19) Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
      2021-06-17 13:34:38,456 INFO  [org.tes.uti.ImageNameSubstitutor] (build-19) Using library/mongo:4.0.10 as a substitute image for mongo:4.0.10 (using image substitutor: Kogito Image Name Substitutor)
      2021-06-17 13:34:38,484 INFO  [🐳 .0.10]] (build-19) Creating container for image: library/mongo:4.0.10
      2021-06-17 13:34:38,533 INFO  [🐳 .0.10]] (build-19) Starting container with ID: 2d5d189005406daf1775319d0a2dde533d6de75694318e697a324371324da5b4
      2021-06-17 13:34:38,940 INFO  [🐳 .0.10]] (build-19) Container library/mongo:4.0.10 is starting: 2d5d189005406daf1775319d0a2dde533d6de75694318e697a324371324da5b4
      2021-06-17 13:34:39,674 INFO  [🐳 .0.10]] (build-19) Container library/mongo:4.0.10 started in PT1.218642S
      2021-06-17T17:34:44.239+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
      2021-06-17T17:34:44.244+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=13136f02833f
      2021-06-17T17:34:44.244+0000 I CONTROL  [initandlisten] db version v4.0.10
      2021-06-17T17:34:44.245+0000 I CONTROL  [initandlisten] git version: c389e7f69f637f7a1ac3cc9fae843b635f20b766
      2021-06-17T17:34:44.245+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
      2021-06-17T17:34:44.245+0000 I CONTROL  [initandlisten] allocator: tcmalloc
      2021-06-17T17:34:44.245+0000 I CONTROL  [initandlisten] modules: none
      2021-06-17T17:34:44.245+0000 I CONTROL  [initandlisten] build environment:
      2021-06-17T17:34:44.245+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
      2021-06-17T17:34:44.245+0000 I CONTROL  [initandlisten]     distarch: x86_64
      2021-06-17T17:34:44.245+0000 I CONTROL  [initandlisten]     target_arch: x86_64
      2021-06-17T17:34:44.245+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, replication: { replSet: "docker-rs" } }
      2021-06-17T17:34:44.245+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=7430M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
      2021-06-17T17:34:44.856+0000 I STORAGE  [initandlisten] WiredTiger message [1623951284:856741][1:0x7f8898406a80], txn-recover: Set global recovery timestamp: 0
      2021-06-17T17:34:44.861+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
      2021-06-17T17:34:44.869+0000 I CONTROL  [initandlisten] 
      2021-06-17T17:34:44.869+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
      2021-06-17T17:34:44.869+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
      2021-06-17T17:34:44.869+0000 I CONTROL  [initandlisten] 
      2021-06-17T17:34:44.870+0000 I CONTROL  [initandlisten] 
      2021-06-17T17:34:44.870+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
      2021-06-17T17:34:44.870+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
      2021-06-17T17:34:44.870+0000 I CONTROL  [initandlisten] 
      2021-06-17T17:34:44.870+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
      2021-06-17T17:34:44.870+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
      2021-06-17T17:34:44.870+0000 I CONTROL  [initandlisten] 
      2021-06-17T17:34:44.873+0000 I STORAGE  [initandlisten] createCollection: local.startup_log with generated UUID: e29208d4-5a5f-455b-8f05-97d404acde1d
      2021-06-17T17:34:44.879+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
      2021-06-17T17:34:44.880+0000 I STORAGE  [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 82a5c1fd-a9be-466a-80a5-16c0bc3f9014
      2021-06-17T17:34:44.886+0000 I STORAGE  [initandlisten] createCollection: local.replset.minvalid with generated UUID: 15d7bb89-cc81-44c4-b38e-e81a4805fcdd
      2021-06-17T17:34:44.891+0000 I REPL     [initandlisten] Did not find local voted for document at startup.
      2021-06-17T17:34:44.891+0000 I REPL     [initandlisten] Did not find local Rollback ID document at startup. Creating one.
      2021-06-17T17:34:44.891+0000 I STORAGE  [initandlisten] createCollection: local.system.rollback.id with generated UUID: bf915fd8-5989-4e0e-8792-16a92ad9ca0f
      2021-06-17T17:34:44.896+0000 I REPL     [initandlisten] Initialized the rollback ID to 1
      2021-06-17T17:34:44.896+0000 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
      2021-06-17T17:34:44.897+0000 I CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
      2021-06-17T17:34:44.897+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
      2021-06-17T17:34:44.897+0000 I CONTROL  [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
      2021-06-17T17:34:45.057+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:60066 #1 (1 connection now open)
      2021-06-17T17:34:45.058+0000 I NETWORK  [conn1] received client metadata from 127.0.0.1:60066 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.10" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
      2021-06-17T17:34:45.061+0000 I COMMAND  [conn1] initiate : no configuration specified. Using a default configuration for the set
      2021-06-17T17:34:45.061+0000 I COMMAND  [conn1] created this configuration for initiation : { _id: "docker-rs", version: 1, members: [ { _id: 0, host: "13136f02833f:27017" } ] }
      2021-06-17T17:34:45.061+0000 I REPL     [conn1] replSetInitiate admin command received from client
      2021-06-17T17:34:45.062+0000 I REPL     [conn1] replSetInitiate config object with 1 members parses ok
      2021-06-17T17:34:45.062+0000 I REPL     [conn1] ******
      2021-06-17T17:34:45.062+0000 I REPL     [conn1] creating replication oplog of size: 1640MB...
      2021-06-17T17:34:45.062+0000 I STORAGE  [conn1] createCollection: local.oplog.rs with generated UUID: ffa8d04a-3e7a-4fa1-885f-71cdc1c3c356
      2021-06-17T17:34:45.077+0000 I STORAGE  [conn1] Starting OplogTruncaterThread local.oplog.rs
      2021-06-17T17:34:45.077+0000 I STORAGE  [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
      2021-06-17T17:34:45.077+0000 I STORAGE  [conn1] Scanning the oplog to determine where to place markers for truncation
      2021-06-17T17:34:45.092+0000 I REPL     [conn1] ******
      2021-06-17T17:34:45.092+0000 I STORAGE  [conn1] createCollection: local.system.replset with generated UUID: 2dce4d8f-6129-4ac5-87fa-4e162e29a036
      2021-06-17T17:34:45.099+0000 I STORAGE  [conn1] createCollection: admin.system.version with provided UUID: a265ee16-4b2b-46a6-a1a9-6422546c3d27
      2021-06-17T17:34:45.105+0000 I COMMAND  [conn1] setting featureCompatibilityVersion to 4.0
      2021-06-17T17:34:45.106+0000 I NETWORK  [conn1] Skip closing connection for connection # 1
      2021-06-17T17:34:45.106+0000 I REPL     [conn1] New replica set config in use: { _id: "docker-rs", version: 1, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "13136f02833f:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('60cb87b5d621d448c2a2f6e5') } }
      2021-06-17T17:34:45.106+0000 I REPL     [conn1] This node is 13136f02833f:27017 in the config
      2021-06-17T17:34:45.106+0000 I REPL     [conn1] transition to STARTUP2 from STARTUP
      2021-06-17T17:34:45.106+0000 I REPL     [conn1] Starting replication storage threads
      2021-06-17T17:34:45.107+0000 I REPL     [conn1] transition to RECOVERING from STARTUP2
      2021-06-17T17:34:45.107+0000 I REPL     [conn1] Starting replication fetcher thread
      2021-06-17T17:34:45.107+0000 I REPL     [conn1] Starting replication applier thread
      2021-06-17T17:34:45.107+0000 I REPL     [conn1] Starting replication reporter thread
      2021-06-17T17:34:45.107+0000 I REPL     [rsSync-0] Starting oplog application
      2021-06-17T17:34:45.107+0000 I REPL     [rsSync-0] transition to SECONDARY from RECOVERING
      2021-06-17T17:34:45.107+0000 I REPL     [rsSync-0] conducting a dry run election to see if we could be elected. current term: 0
      2021-06-17T17:34:45.107+0000 I REPL     [replexec-0] dry election run succeeded, running for election in term 1
      2021-06-17T17:34:45.108+0000 I STORAGE  [replexec-1] createCollection: local.replset.election with generated UUID: a64b9f75-ae9d-4d45-a786-75da48f65b24
      2021-06-17T17:34:45.111+0000 I NETWORK  [conn1] end connection 127.0.0.1:60066 (0 connections now open)
      2021-06-17T17:34:45.114+0000 I REPL     [replexec-1] election succeeded, assuming primary role in term 1
      2021-06-17T17:34:45.114+0000 I REPL     [replexec-1] transition to PRIMARY from SECONDARY
      2021-06-17T17:34:45.114+0000 I REPL     [replexec-1] Resetting sync source to empty, which was :27017
      2021-06-17T17:34:45.114+0000 I REPL     [replexec-1] Entering primary catch-up mode.
      2021-06-17T17:34:45.114+0000 I REPL     [replexec-1] Exited primary catch-up mode.
      2021-06-17T17:34:45.114+0000 I REPL     [replexec-1] Stopping replication producer
      2021-06-17T17:34:45.274+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:60068 #2 (1 connection now open)
      2021-06-17T17:34:45.275+0000 I NETWORK  [conn2] received client metadata from 127.0.0.1:60068 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.10" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
      2021-06-17T17:34:47.109+0000 I STORAGE  [rsSync-0] createCollection: config.transactions with generated UUID: bd4ab382-09cb-4037-b090-0fa57a501860
      2021-06-17T17:34:47.116+0000 I REPL     [rsSync-0] transition to primary complete; database writes are now permitted
      2021-06-17T17:34:47.116+0000 I STORAGE  [monitoring keys for HMAC] createCollection: admin.system.keys with generated UUID: b7b5b116-0c7d-4197-89ad-1ee57dd86ad7
      2021-06-17T17:34:47.122+0000 I STORAGE  [monitoring keys for HMAC] Triggering the first stable checkpoint. Initial Data: Timestamp(1623951285, 1) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1623951287, 4)
      2021-06-17T17:34:47.198+0000 I NETWORK  [conn2] end connection 127.0.0.1:60068 (0 connections now open)
      Executing [/home/jenkins/workspace/KIE/kogito/pullrequest/kogito-runtimes.native.runtimes/kogito-runtimes/integration-tests/integration-tests-quarkus-processes-persistence/integration-tests-quarkus-processes-mongodb/target/integration-tests-quarkus-processes-mongodb-2.0.0-SNAPSHOT-runner, -Dquarkus.http.port=0, -Dquarkus.http.ssl-port=8444, -Dtest.url=http://localhost:0, -Dquarkus.log.file.path=/home/jenkins/workspace/KIE/kogito/pullrequest/kogito-runtimes.native.runtimes/kogito-runtimes/integration-tests/integration-tests-quarkus-processes-persistence/integration-tests-quarkus-processes-mongodb/target/quarkus.log, -Dquarkus.log.file.enable=true, -Dquarkus.mongodb.connection-string=mongodb://localhost:49266/test]
      __  ____  __  _____   ___  __ ____  ______ 
       --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ 
       -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   
      --\___\_\____/_/ |_/_/|_/_/|_|\____/___/   
      2021-06-17 13:34:47,332 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.jib.base.jvm.image" was provided; it will be ignored; verify that the dependency extension for this configuration is set or that you did not make a typo
      2021-06-17 13:34:47,333 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.container.image.insecure" was provided; it will be ignored; verify that the dependency extension for this configuration is set or that you did not make a typo
      2021-06-17 13:34:47,345 INFO  [org.mon.dri.cluster] (main) Cluster created with settings {hosts=[localhost:49266], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}
      2021-06-17T17:34:47.349+0000 I NETWORK  [listener] connection accepted from 172.17.0.1:48384 #3 (1 connection now open)
      2021-06-17T17:34:47.349+0000 I NETWORK  [listener] connection accepted from 172.17.0.1:48382 #4 (2 connections now open)
      2021-06-17T17:34:47.349+0000 I NETWORK  [conn3] received client metadata from 172.17.0.1:48384 conn3: { driver: { name: "mongo-java-driver|sync", version: "4.2.3" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "3.10.0-1160.25.1.el7.x86_64" }, platform: "Java/Oracle Corporation/unknown-version" }
      2021-06-17T17:34:47.349+0000 I NETWORK  [conn4] received client metadata from 172.17.0.1:48382 conn4: { driver: { name: "mongo-java-driver|sync", version: "4.2.3" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "3.10.0-1160.25.1.el7.x86_64" }, platform: "Java/Oracle Corporation/unknown-version" }
      2021-06-17 13:34:47,350 INFO  [org.mon.dri.connection] (cluster-rtt-ClusterId{value='60cb87b7e2721f3d580dce43', description='null'}-localhost:49266) Opened connection [connectionId{localValue:2, serverValue:4}] to localhost:49266
      2021-06-17 13:34:47,351 INFO  [org.mon.dri.connection] (cluster-ClusterId{value='60cb87b7e2721f3d580dce43', description='null'}-localhost:49266) Opened connection [connectionId{localValue:1, serverValue:3}] to localhost:49266
      2021-06-17 13:34:47,351 INFO  [org.mon.dri.cluster] (cluster-ClusterId{value='60cb87b7e2721f3d580dce43', description='null'}-localhost:49266) Monitor thread successfully connected to server with description ServerDescription{address=localhost:49266, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=7, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1540671, setName='docker-rs', canonicalAddress=13136f02833f:27017, hosts=[13136f02833f:27017], passives=[], arbiters=[], primary='13136f02833f:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000001, setVersion=1, topologyVersion=null, lastWriteDate=Thu Jun 17 13:34:47 EDT 2021, lastUpdateTimeNanos=4706248349487}
      2021-06-17 13:34:47,363 INFO  [io.quarkus] (main) integration-tests-quarkus-processes-mongodb 2.0.0-SNAPSHOT native (powered by Quarkus 2.0.0.CR3) started in 0.041s. Listening on: http://0.0.0.0:39926
      2021-06-17 13:34:47,363 INFO  [io.quarkus] (main) Profile native activated. 
      2021-06-17 13:34:47,364 INFO  [io.quarkus] (main) Installed features: [cdi, kogito-decisions, kogito-predictions, kogito-processes, kogito-rules, mongodb-client, resteasy, resteasy-jackson, servlet, smallrye-context-propagation]
      2021-06-17 13:34:49,558 INFO  [org.kie.kog.HelloService] (executor-thread-0) HelloService.hello invoked with params: [Tiago]
      2021-06-17 13:34:49,559 ERROR [io.und.req.io] (executor-thread-0) Exception handling request 1efe9cee-66d0-491b-8619-45e06ded24ec-1 to /hello: org.jboss.resteasy.spi.UnhandledException: org.kie.kogito.serialization.process.ProcessInstanceMarshallerException: Error while marshalling process instance
      	at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:106)
      	at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:372)
      	at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:218)
      	at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:519)
      	at org.jboss.resteasy.core.SynchronousDispatcher.lambda$invoke$4(SynchronousDispatcher.java:261)
      	at org.jboss.resteasy.core.SynchronousDispatcher.lambda$preprocess$0(SynchronousDispatcher.java:161)
      	at org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:364)
      	at org.jboss.resteasy.core.SynchronousDispatcher.preprocess(SynchronousDispatcher.java:164)
      	at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:247)
      	at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:249)
      	at io.quarkus.resteasy.runtime.ResteasyFilter.doFilter(ResteasyFilter.java:35)
      	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
      	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
      	at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
      	at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:63)
      	at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
      	at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
      	at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:67)
      	at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:133)
      	at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
      	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
      	at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
      	at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:65)
      	at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
      	at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
      	at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
      	at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
      	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
      	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
      	at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:247)
      	at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:56)
      	at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:111)
      	at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:108)
      	at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
      	at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
      	at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$9$1.call(UndertowDeploymentRecorder.java:587)
      	at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:227)
      	at io.undertow.servlet.handlers.ServletInitialHandler.handleRequest(ServletInitialHandler.java:152)
      	at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$1.handleRequest(UndertowDeploymentRecorder.java:119)
      	at io.undertow.server.Connectors.executeRootHandler(Connectors.java:290)
      	at io.undertow.server.DefaultExchangeHandler.handle(DefaultExchangeHandler.java:18)
      	at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$5$1.run(UndertowDeploymentRecorder.java:413)
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:264)
      	at io.quarkus.vertx.core.runtime.VertxCoreRecorder$14.runWith(VertxCoreRecorder.java:481)
      	at org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2442)
      	at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1476)
      	at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29)
      	at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29)
      	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
      	at java.lang.Thread.run(Thread.java:829)
      	at com.oracle.svm.core.thread.JavaThreads.threadStartRoutine(JavaThreads.java:553)
      	at com.oracle.svm.core.posix.thread.PosixJavaThreads.pthreadStartRoutine(PosixJavaThreads.java:192)
      Caused by: org.kie.kogito.serialization.process.ProcessInstanceMarshallerException: Error while marshalling process instance
      	at org.kie.kogito.serialization.process.ProcessInstanceMarshallerService.marshallProcessInstance(ProcessInstanceMarshallerService.java:111)
      	at org.kie.kogito.mongodb.MongoDBProcessInstances.updateStorage(MongoDBProcessInstances.java:113)
      	at org.kie.kogito.mongodb.MongoDBProcessInstances.create(MongoDBProcessInstances.java:98)
      	at org.kie.kogito.process.impl.AbstractProcessInstance.lambda$start$0(AbstractProcessInstance.java:226)
      	at org.kie.kogito.services.uow.ProcessInstanceWorkUnit.perform(ProcessInstanceWorkUnit.java:47)
      	at org.kie.kogito.services.uow.CollectingUnitOfWork.end(CollectingUnitOfWork.java:62)
      	at org.kie.kogito.services.uow.ManagedUnitOfWork.end(ManagedUnitOfWork.java:51)
      	at org.kie.kogito.services.uow.UnitOfWorkExecutor.executeInUnitOfWork(UnitOfWorkExecutor.java:34)
      	at org.kie.kogito.process.impl.ProcessServiceImpl.createProcessInstance(ProcessServiceImpl.java:58)
      	at org.kie.kogito.HelloResource.createResource_hello(HelloResource.java:72)
      	at org.kie.kogito.HelloResource_ClientProxy.createResource_hello(HelloResource_ClientProxy.zig:171)
      	at java.lang.reflect.Method.invoke(Method.java:566)
      	at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:170)
      	at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:130)
      	at org.jboss.resteasy.core.ResourceMethodInvoker.internalInvokeOnTarget(ResourceMethodInvoker.java:646)
      	at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTargetAfterFilter(ResourceMethodInvoker.java:510)
      	at org.jboss.resteasy.core.ResourceMethodInvoker.lambda$invokeOnTarget$2(ResourceMethodInvoker.java:460)
      	at org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:364)
      	at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:462)
      	at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:420)
      	at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:394)
      	at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:69)
      	at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:492)
      	... 49 more
      Caused by: org.kie.kogito.serialization.process.ProcessInstanceMarshallerException: No marshaller found for class java.lang.String
      	at org.kie.kogito.serialization.process.impl.ProtobufAbstractMarshallerContext.findMarshaller(ProtobufAbstractMarshallerContext.java:66)
      	at org.kie.kogito.serialization.process.impl.ProtobufAbstractMarshallerContext.findObjectMarshallerStrategyFor(ProtobufAbstractMarshallerContext.java:48)
      	at org.kie.kogito.serialization.process.impl.ProtobufVariableWriter.buildVariables(ProtobufVariableWriter.java:51)
      	at org.kie.kogito.serialization.process.impl.ProtobufProcessInstanceWriter.buildWorkflowContext(ProtobufProcessInstanceWriter.java:195)
      	at org.kie.kogito.serialization.process.impl.ProtobufProcessInstanceWriter.writeProcessInstance(ProtobufProcessInstanceWriter.java:146)
      	at org.kie.kogito.serialization.process.impl.ProtobufProcessInstanceMarshaller.writeProcessInstance(ProtobufProcessInstanceMarshaller.java:39)
      	at org.kie.kogito.serialization.process.ProcessInstanceMarshallerService.marshallProcessInstance(ProcessInstanceMarshallerService.java:108)
      	... 71 more
      Request method:	POST
      Request URI:	http://localhost:39926/hello
      Proxy:			<none>
      Request params:	<none>
      Query params:	<none>
      Form params:	<none>
      Path params:	<none>
      Headers:		Accept=*/*
      				Content-Type=application/json
      Cookies:		<none>
      Multiparts:		<none>
      Body:
      {
          "var1": "Tiago"
      }
      HTTP/1.1 500 Internal Server Error
      Content-Type: text/html;charset=UTF-8
      Content-Length: 3180
      [Quarkus HTML error page body omitted: it repeats the same error, "Internal Server Error - org.kie.kogito.serialization.process.ProcessInstanceMarshallerException: Error while marshalling process instance", Error id 1efe9cee-66d0-491b-8619-45e06ded24ec-1]
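
      For reference, a minimal REST Assured sketch of the failing call, reconstructed from the request details recorded above (the base URI/port and the test class name are illustrative; the real integration test resolves the target URL from -Dtest.url):

      // Sketch only: endpoint path, payload and expected status are taken from the log
      // above; the hard-coded port is illustrative (the real test binds a random port).
      import static io.restassured.RestAssured.given;

      import io.restassured.http.ContentType;
      import org.junit.jupiter.api.Test;

      class HelloEndpointSketchIT {

          @Test
          void createHelloProcessInstance() {
              given()
                      .baseUri("http://localhost:39926")   // from the log; normally injected via -Dtest.url
                      .contentType(ContentType.JSON)
                      .body("{\"var1\": \"Tiago\"}")
              .when()
                      .post("/hello")
              .then()
                      .statusCode(201);                    // currently fails: the native run returns 500
          }
      }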
      

              People: Cristiano Nicolai (cnicolai@redhat.com, Inactive), Edoardo Vacchi (evacchi, Inactive), Marian Macik