
RHEL-5114: ns-slapd crashes when lmdb import fails or is aborted [rhel-9.4.0]



      Description of problem:
      The following issue was found while trying to test BZ 2116948 (LMDB import is too slow),
      but with the database size configured too small to host the users.

      After adding a new test case:
      diff --git a/dirsrvtests/tests/suites/import/import_test.py b/dirsrvtests/tests/suites/import/import_test.py
      index 84c8cf290..ee71e0bea 100644
      --- a/dirsrvtests/tests/suites/import/import_test.py
      +++ b/dirsrvtests/tests/suites/import/import_test.py
      @@ -22,6 +22,7 @@ from lib389.tasks import ImportTask
       from lib389.index import Indexes
       from lib389.monitor import Monitor
       from lib389.backend import Backends
      +from lib389.config import LMDB_LDBMConfig
       from lib389.config import LDBMConfig
       from lib389.utils import ds_is_newer, get_default_db_lib
       from lib389.idm.user import UserAccount
      @@ -550,6 +551,15 @@ def test_import_wrong_file_path(topo):
               dbtasks_ldif2db(topo.standalone, log, args)
           assert "The LDIF file does not exist" in str(e.value)

      +def test_crash_on_ldif2db_with_lmdb(topo, _import_clean):
      +    BIG_MAP_SIZE = 20 * 1024 * 1024 * 1024
      +    if get_default_db_lib() == "mdb":
      +        handler = LMDB_LDBMConfig(topo.standalone)
      +        mapsize = BIG_MAP_SIZE
      +        log.info(f'Set lmdb map size to {mapsize}.')
      +        handler.replace('nsslapd-mdb-max-size', str(mapsize))
      +        topo.standalone.restart()
      +    _import_offline(topo, 10_000_000)

       if __name__ == '__main__':
           # Run isolated

      When the test is run with mdb, ns-slapd crashes:
      NSSLAPD_DB_LIB=mdb py.test -v import_test.py::test_crash_on_ldif2db_with_lmdb

      A core was not generated automatically, but I attached gdb during the test and then generated the core from the debugger.
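      Roughly, the core can be captured like this (the exact session may have differed; only the
      attach-and-generate-core-file flow matters):

          gdb -p $(pidof ns-slapd)     # attach before starting the import task
          (gdb) continue               # let the import run until the SIGSEGV below is reported
          (gdb) bt                     # backtrace shown below
          (gdb) generate-core-file     # write the core linked below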

      Thread 7 "ns-slapd" received signal SIGSEGV, Segmentation fault.
      [Switching to Thread 0x7f9269bfa640 (LWP 3924)]
      __strncmp_avx2_rtm () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:284
      284 VMOVU (%rdi), %ymm0
      (gdb) bt
      #0 __strncmp_avx2_rtm () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:284
      #1 0x00007f976d18fb7d in dbmdb_import_prepare_worker_entry (wqelmnt=0x55cd73989a40)
      at ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c:1347
      #2 0x00007f976d1958ce in dbmdb_import_worker (param=<optimized out>)
      at ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c:3191
      #3 0x00007f9770ab4c34 in _pt_root (arg=0x55cd73973dc0) at pthreads/../../../../nspr/pr/src/pthreads/ptthread.c:201
      #4 0x00007f977089f822 in start_thread (arg=<optimized out>) at pthread_create.c:443
      #5 0x00007f977083f450 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

      The full backtrace is attached; the core is available at https://drive.google.com/file/d/1bGYpP0JuifLKVGdXRVMYj-SkuWCvlnXZ/view?usp=sharing

      Version-Release number of selected component (if applicable):

      How reproducible:
      Always

      Steps to Reproduce:
      1. See test case in the description

      Actual results:
      ns-slapd crashes

      Expected results:
      ns-slapd should fail without crashing.

      Additional info:
      The database size should be at least 50 GB for 10,000,000 users (roughly 5 KB of LMDB space per entry).

      The crash is caused by a double free when the import pipeline resources are freed.
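      For illustration only: the snippet below is a minimal, self-contained sketch of this bug class,
      with hypothetical names, not the actual mdb_import_threads.c code. An abort path and the normal
      pipeline teardown both free the same buffer; the usual fix is to NULL the pointer after the
      first free so the second free becomes a no-op.

          /* Illustrative sketch of a double free on import teardown (hypothetical names). */
          #include <stdlib.h>
          #include <string.h>

          typedef struct work_item {
              char *entry_dn;                /* heap-allocated payload */
          } work_item_t;

          /* Abort path: runs when the import fails or is cancelled. */
          static void abort_import(work_item_t *item)
          {
              free(item->entry_dn);
              item->entry_dn = NULL;         /* without this guard, the teardown below frees it again */
          }

          /* Normal teardown of the import pipeline. */
          static void free_work_item(work_item_t *item)
          {
              free(item->entry_dn);          /* double free if abort_import() already ran unguarded */
              item->entry_dn = NULL;
              free(item);
          }

          int main(void)
          {
              work_item_t *item = calloc(1, sizeof(*item));
              item->entry_dn = strdup("uid=user0,ou=people,dc=example,dc=com");

              abort_import(item);            /* import aborted, e.g. because the map size is too small */
              free_work_item(item);          /* safe only because entry_dn was NULLed above */
              return 0;
          }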
