Connection leak during XATransaction in high load

      This jira is being opened to remind me to pull in a DBCP fix for Narayana that should land in Tomcat 9.0.9. The PR for the fix in the DBCP project is here.
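
      For context, the leak shows up when DBCP's managed (XA) connection pool is driven by Narayana's transaction manager. The following is only a minimal sketch of that wiring, using the upstream Commons DBCP 2.x managed classes and the PostgreSQL XADataSource; the URL, credentials and pool size are placeholders, and it is not the reproducer attached to this issue. Tomcat's tomcat-dbcp ships the same classes repackaged under org.apache.tomcat.dbcp.dbcp2.

      import javax.transaction.TransactionManager;

      import org.apache.commons.dbcp2.managed.BasicManagedDataSource;
      import org.postgresql.xa.PGXADataSource;

      public class XaPoolSetup {

          public static BasicManagedDataSource createPool() {
              // Narayana's JTA transaction manager, the one JWS/Tomcat integrates with.
              TransactionManager tm = com.arjuna.ats.jta.TransactionManager.transactionManager();

              // Placeholder PostgreSQL XA data source; connection details are illustrative only.
              PGXADataSource xaDataSource = new PGXADataSource();
              xaDataSource.setUrl("jdbc:postgresql://localhost:5432/test");
              xaDataSource.setUser("test");
              xaDataSource.setPassword("test");

              // DBCP2's XA-aware pool. Under high load, connections enlisted in an
              // XATransaction could fail to make it back to this pool, which is the
              // leak this issue tracks.
              BasicManagedDataSource pool = new BasicManagedDataSource();
              pool.setTransactionManager(tm);
              pool.setXaDataSourceInstance(xaDataSource);
              pool.setMaxTotal(80); // matches the "Pool n/80" counter in the test output below
              return pool;
          }
      }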

      Attachments:
        1. 9.0.7.log.zip (0.3 kB)
        2. 9.0.7.redhat-12.log.zip (4 kB)
        3. JWS-996.patch (3 kB)


            Errata Tool added a comment -

            Since the problem described in this issue should be resolved in a recent advisory, it has been closed.

            For information on the advisory, and where to find the updated files, follow the link below.

            If the solution does not work for you, open a new bug report.
            https://access.redhat.com/errata/RHSA-2018:2867


            Karm Karm added a comment -

            Verified: 9.0.7.log.zip, 9.0.7.redhat-12.log.zip

            Karm Karm added a comment -

            Reopening.

            This is just a partial patch of a bigger thing. I don't think SanityOnly is a good move after all.


            Karm Karm added a comment - edited

            SanityOnly, because I am not confident I really reproduced the DBCP 2.2.0 issue; the DBCP-484 test output does not seem convincing. It might be my Postgres setup. The patch is present, though. (A sketch of the pool-usage counter behind the "Pool n/80" lines is given after the log excerpt below.)

            -cdeadlock_timeout=1s 
            -cdefault_transaction_deferrable=off 
            -cdefault_transaction_isolation="read committed" 
            -cdefault_transaction_read_only=off 
            -clog_directory=/tmp 
            -clog_filename=db.log 
            -clog_line_prefix="%m transaction_id: %x " 
            -clog_statement=all 
            -clogging_collector=on 
            -cmax_connections=20 
            -cmax_locks_per_transaction=64 
            -cmax_pred_locks_per_transaction=64 
            -cmax_prepared_transactions=50
            
            <dependency>
                <groupId>org.apache.tomcat</groupId>
                <artifactId>tomcat-dbcp</artifactId>
                <version>9.0.7.redhat-10</version>
            </dependency>
            <dependency>
                <groupId>org.apache.tomcat</groupId>
                <artifactId>tomcat-juli</artifactId>
                <version>9.0.7.redhat-10</version>
            </dependency>
            
            Test started...
            1530281996132 [Thread-1] [5] Pool 1/80 
            1530282001142 [Thread-1] [10] Pool 0/80 
            1530282006149 [Thread-1] [15] Pool 1/80 
            1530282011156 [Thread-1] [20] Pool 0/80 
            
            ...
            2018-06-29 14:20:10.152 UTC transaction_id: 640 LOG:  execute <unnamed>: SELECT KEY, ID, VALUE, INFO, TS FROM public.TEST_DATA LIMIT 1
            2018-06-29 14:20:10.153 UTC transaction_id: 640 LOG:  execute <unnamed>: SELECT KEY, ID, VALUE, INFO, TS FROM public.TEST_DATA LIMIT 1
            2018-06-29 14:20:10.153 UTC transaction_id: 640 LOG:  execute S_1: ROLLBACK
            2018-06-29 14:20:10.153 UTC transaction_id: 0 LOG:  execute <unnamed>: SELECT KEY, ID, VALUE, INFO, TS FROM public.TEST_DATA LIMIT 1
            2018-06-29 14:20:10.154 UTC transaction_id: 0 LOG:  execute <unnamed>: SELECT KEY, ID, VALUE, INFO, TS FROM public.TEST_DATA LIMIT 1
            2018-06-29 14:20:10.154 UTC transaction_id: 0 LOG:  execute <unnamed>: SELECT KEY, ID, VALUE, INFO, TS FROM public.TEST_DATA LIMIT 1
            2018-06-29 14:20:10.155 UTC transaction_id: 0 LOG:  execute <unnamed>: BEGIN
            2018-06-29 14:20:10.156 UTC transaction_id: 0 LOG:  execute <unnamed>: INSERT INTO public.TEST_DATA   (KEY, ID, VALUE, INFO, TS) VALUES ($1,$2,$3,$4::bytea,$5)
            2018-06-29 14:20:10.156 UTC transaction_id: 0 DETAIL:  parameters: $1 = 'Thread-1', $2 = '20', $3 = '0.732812881837747732', $4 = 'Startpayload...0...1...2...3...4...5...6...7...8...9...10...11...12...13...14...15...16...17...18...19...20...21...22...23...24...25...26...27...28...29...30...31...32...33...34...35...36...37...38...39...40...41...42...43...44...45...46...47...48...49...50...51...52...53...54...55...56...57.
            ...
            <SNIP>
            2018-06-29 14:20:11.155 UTC transaction_id: 641 LOG:  execute <unnamed>: SELECT KEY, ID, VALUE, INFO, TS FROM public.TEST_DATA LIMIT 1
            2018-06-29 14:20:11.155 UTC transaction_id: 641 LOG:  execute S_1: ROLLBACK
            2018-06-29 14:20:11.156 UTC transaction_id: 0 LOG:  execute <unnamed>: SELECT KEY, ID, VALUE, INFO, TS FROM public.TEST_DATA LIMIT 1
            2018-06-29 14:20:11.156 UTC transaction_id: 0 LOG:  execute <unnamed>: SELECT KEY, ID, VALUE, INFO, TS FROM public.TEST_DATA LIMIT 1
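
            The "Pool n/80" lines in the test output above come from a counter of active pooled connections sampled every few seconds; a healthy run stays near 0, while a leak would show the active count climbing toward the maximum. Below is a minimal sketch of such a monitoring loop against a DBCP2 BasicDataSource; the interval, thread handling and formatting are assumptions, not the attached test itself.

            import org.apache.commons.dbcp2.BasicDataSource;

            public class PoolMonitor implements Runnable {

                private final BasicDataSource pool;

                public PoolMonitor(final BasicDataSource pool) {
                    this.pool = pool;
                }

                @Override
                public void run() {
                    int seconds = 0;
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            Thread.sleep(5_000L); // the output above prints roughly every 5 seconds
                            seconds += 5;
                            // Leaked connections never return to the pool, so the active
                            // count would keep growing instead of dropping back to 0 or 1.
                            System.out.println(System.currentTimeMillis()
                                    + " [" + Thread.currentThread().getName() + "]"
                                    + " [" + seconds + "] Pool "
                                    + pool.getNumActive() + "/" + pool.getMaxTotal());
                        }
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                    }
                }
            }

            Started alongside the load threads, e.g. new Thread(new PoolMonitor(pool), "Thread-1").start(), this would produce output of the shape shown above.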
            


            Coty Sutherland added a comment -

            Adding link to upstream jira.
