-
Bug
-
Resolution: Duplicate
-
Normal
-
None
-
rhel-10.0.beta
-
None
-
None
-
None
-
rhel-sst-high-availability
-
ssg_filesystems_storage_and_HA
-
5
-
False
-
-
None
-
None
-
None
-
None
-
-
x86_64
-
None
What were you trying to do that didn't work?
With FIPS enabled, cibadmin produces errors that cause pcs to fail. This was initially found while using pcs to update a property in a CIB saved to a file. The issue is not present when FIPS is disabled.
Please provide the package NVR for which the bug is seen:
pacemaker-2.1.7-4.el10.2
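For completeness, the installed NVRs can be confirmed directly on the affected node, for example (standard RPM tooling assumed; the gnutls query is only for reference, since the error message comes from that library):
# query the exact builds present on the node
rpm -q pacemaker pcs gnutls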
How reproducible:
always
Steps to reproduce
FIPS is enabled
[root@virt-252 ~]# fips-mode-setup --check
FIPS mode is enabled.
Initramfs fips module is enabled.
The current crypto policy (FIPS) is based on the FIPS policy.
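(For reference only, not part of the captured output: FIPS mode on a RHEL host is typically enabled beforehand with the crypto-policies tooling and takes effect after a reboot.)
# enable FIPS mode, then reboot for it to take effect
fips-mode-setup --enable
reboot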
Cluster is running
[root@virt-252 ~]# pcs cluster status
Cluster Status:
Error in GnuTLS initialization: Error while performing self checks.
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: virt-253 (version 2.1.7-4.el10.2-0eeb8b6) - partition with quorum
* Last updated: Fri Jun 21 14:43:10 2024 on virt-252
* Last change: Fri Jun 21 10:31:00 2024 by root via root on virt-252
* 2 nodes configured
* 2 resource instances configured
Node List:
* Online: [ virt-252 virt-253 ]
PCSD Status:
virt-253: Online
virt-252: Online
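For context, a comparable two-node test cluster could be assembled roughly as follows. This is a sketch only; the actual setup commands were not captured in this report, and the fence-device attributes are taken from the CIB dump further below:
# authenticate nodes and create the cluster
pcs host auth virt-252 virt-253 -u hacluster
pcs cluster setup STSRHTS10163 virt-252 virt-253 --start --wait
# fence devices matching the attributes seen in the CIB
pcs stonith create fence-virt-252 fence_xvm delay=5 pcmk_host_check=static-list \
    pcmk_host_list=virt-252 pcmk_host_map=virt-252:virt-252.cluster-qe.lab.eng.brq.redhat.com \
    op monitor interval=60s
pcs stonith create fence-virt-253 fence_xvm pcmk_host_check=static-list \
    pcmk_host_list=virt-253 pcmk_host_map=virt-253:virt-253.cluster-qe.lab.eng.brq.redhat.com \
    op monitor interval=60s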
Save the current CIB into a file and try to update a property on it
[root@virt-252 ~]# pcs cluster cib > foo.xml
[root@virt-252 ~]# pcs -f foo.xml property set no-quorum-policy=freeze
Error: unable to get cib
[root@virt-252 ~]# echo $?
1
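The failure can presumably also be triggered without pcs by pointing cibadmin at the saved file directly, which mirrors what pcs runs internally (see the debug log below):
# run the same query pcs performs, against the saved CIB file
CIB_file=foo.xml /usr/sbin/cibadmin --local --query
echo $?    # 78 on the affected build, per the debug log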
Full pcs debug log
[root@virt-252 ~]# pcs -f foo.xml property set no-quorum-policy=freeze --debug
Writing to a temporary file /tmp/tmp7q2ij89m.pcs:
-Debug Content Start-
Error in GnuTLS initialization: Error while performing self checks.
<cib crm_feature_set="3.19.0" validate-with="pacemaker-3.9" epoch="7" num_updates="8" admin_epoch="0" cib-last-written="Fri Jun 21 10:31:00 2024" update-origin="virt-252" update-client="root" update-user="root" have-quorum="1" dc-uuid="2">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.7-4.el10.2-0eeb8b6"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
        <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="STSRHTS10163"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="1" uname="virt-252"/>
      <node id="2" uname="virt-253"/>
    </nodes>
    <resources>
      <primitive id="fence-virt-252" class="stonith" type="fence_xvm">
        <instance_attributes id="fence-virt-252-instance_attributes">
          <nvpair id="fence-virt-252-instance_attributes-delay" name="delay" value="5"/>
          <nvpair id="fence-virt-252-instance_attributes-pcmk_host_check" name="pcmk_host_check" value="static-list"/>
          <nvpair id="fence-virt-252-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="virt-252"/>
          <nvpair id="fence-virt-252-instance_attributes-pcmk_host_map" name="pcmk_host_map" value="virt-252:virt-252.cluster-qe.lab.eng.brq.redhat.com"/>
        </instance_attributes>
        <operations>
          <op name="monitor" interval="60s" id="fence-virt-252-monitor-interval-60s"/>
        </operations>
      </primitive>
      <primitive id="fence-virt-253" class="stonith" type="fence_xvm">
        <instance_attributes id="fence-virt-253-instance_attributes">
          <nvpair id="fence-virt-253-instance_attributes-pcmk_host_check" name="pcmk_host_check" value="static-list"/>
          <nvpair id="fence-virt-253-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="virt-253"/>
          <nvpair id="fence-virt-253-instance_attributes-pcmk_host_map" name="pcmk_host_map" value="virt-253:virt-253.cluster-qe.lab.eng.brq.redhat.com"/>
        </instance_attributes>
        <operations>
          <op name="monitor" interval="60s" id="fence-virt-253-monitor-interval-60s"/>
        </operations>
      </primitive>
    </resources>
    <constraints/>
    <rsc_defaults>
      <meta_attributes id="build-resource-defaults">
        <nvpair id="build-resource-stickiness" name="resource-stickiness" value="1"/>
      </meta_attributes>
    </rsc_defaults>
  </configuration>
  <status>
    <node_state id="1" uname="virt-252" in_ccm="1718958632" crmd="1718958632" crm-debug-origin="controld_update_resource_history" join="member" expected="member">
      <transient_attributes id="1">
        <instance_attributes id="status-1">
          <nvpair id="status-1-.feature-set" name="#feature-set" value="3.19.0"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="1">
        <lrm_resources>
          <lrm_resource id="fence-virt-252" class="stonith" type="fence_xvm">
            <lrm_rsc_op id="fence-virt-252_last_0" operation_key="fence-virt-252_start_0" operation="start" crm-debug-origin="controld_update_resource_history" crm_feature_set="3.19.0" transition-key="3:1:0:5e398154-62f0-4779-adaa-d1237560c0dd" transition-magic="0:0;3:1:0:5e398154-62f0-4779-adaa-d1237560c0dd" exit-reason="" on_node="virt-252" call-id="6" rc-code="0" op-status="0" interval="0" last-rc-change="1718958655" exec-time="50" queue-time="0" op-digest="171c2ced6cb6ff97172b6b3b04b39bea"/>
            <lrm_rsc_op id="fence-virt-252_monitor_60000" operation_key="fence-virt-252_monitor_60000" operation="monitor" crm-debug-origin="controld_update_resource_history" crm_feature_set="3.19.0" transition-key="4:1:0:5e398154-62f0-4779-adaa-d1237560c0dd" transition-magic="0:0;4:1:0:5e398154-62f0-4779-adaa-d1237560c0dd" exit-reason="" on_node="virt-252" call-id="7" rc-code="0" op-status="0" interval="60000" last-rc-change="1718958656" exec-time="38" queue-time="0" op-digest="96cd577b540c067fcd92ec299c22d82d"/>
          </lrm_resource>
          <lrm_resource id="fence-virt-253" class="stonith" type="fence_xvm">
            <lrm_rsc_op id="fence-virt-253_last_0" operation_key="fence-virt-253_monitor_0" operation="monitor" crm-debug-origin="controld_update_resource_history" crm_feature_set="3.19.0" transition-key="2:2:7:5e398154-62f0-4779-adaa-d1237560c0dd" transition-magic="0:7;2:2:7:5e398154-62f0-4779-adaa-d1237560c0dd" exit-reason="" on_node="virt-252" call-id="11" rc-code="7" op-status="0" interval="0" last-rc-change="1718958660" exec-time="0" queue-time="0" op-digest="03c8e7c6befc34414f826b3eea3a8683"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
    </node_state>
    <node_state id="2" uname="virt-253" in_ccm="1718958631" crmd="1718958631" crm-debug-origin="controld_update_resource_history" join="member" expected="member">
      <transient_attributes id="2">
        <instance_attributes id="status-2">
          <nvpair id="status-2-.feature-set" name="#feature-set" value="3.19.0"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="2">
        <lrm_resources>
          <lrm_resource id="fence-virt-252" class="stonith" type="fence_xvm">
            <lrm_rsc_op id="fence-virt-252_last_0" operation_key="fence-virt-252_monitor_0" operation="monitor" crm-debug-origin="controld_update_resource_history" crm_feature_set="3.19.0" transition-key="2:1:7:5e398154-62f0-4779-adaa-d1237560c0dd" transition-magic="0:7;2:1:7:5e398154-62f0-4779-adaa-d1237560c0dd" exit-reason="" on_node="virt-253" call-id="5" rc-code="7" op-status="0" interval="0" last-rc-change="1718958655" exec-time="4" queue-time="0" op-digest="171c2ced6cb6ff97172b6b3b04b39bea"/>
          </lrm_resource>
          <lrm_resource id="fence-virt-253" class="stonith" type="fence_xvm">
            <lrm_rsc_op id="fence-virt-253_last_0" operation_key="fence-virt-253_start_0" operation="start" crm-debug-origin="controld_update_resource_history" crm_feature_set="3.19.0" transition-key="6:2:0:5e398154-62f0-4779-adaa-d1237560c0dd" transition-magic="0:0;6:2:0:5e398154-62f0-4779-adaa-d1237560c0dd" exit-reason="" on_node="virt-253" call-id="10" rc-code="0" op-status="0" interval="0" last-rc-change="1718958660" exec-time="51" queue-time="0" op-digest="03c8e7c6befc34414f826b3eea3a8683"/>
            <lrm_rsc_op id="fence-virt-253_monitor_60000" operation_key="fence-virt-253_monitor_60000" operation="monitor" crm-debug-origin="controld_update_resource_history" crm_feature_set="3.19.0" transition-key="7:2:0:5e398154-62f0-4779-adaa-d1237560c0dd" transition-magic="0:0;7:2:0:5e398154-62f0-4779-adaa-d1237560c0dd" exit-reason="" on_node="virt-253" call-id="11" rc-code="0" op-status="0" interval="60000" last-rc-change="1718958660" exec-time="35" queue-time="0" op-digest="806821a5dbb572ee82151e21ce3634e6"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
    </node_state>
  </status>
</cib>
-Debug Content End-
Running: /usr/sbin/cibadmin --local --query
Environment:
  CIB_file=/tmp/tmp7q2ij89m.pcs
  LC_ALL=C
Finished running: /usr/sbin/cibadmin --local --query
Return value: 78
-Debug Stdout Start-
-Debug Stdout End-
-Debug Stderr Start-
Error in GnuTLS initialization: Error while performing self checks.
Could not connect to the CIB: Update does not conform to the configured schema
cibadmin: Init failed, could not perform requested operations: Update does not conform to the configured schema
-Debug Stderr End-
Error: unable to get cib
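As a side note, the GnuTLS self-check failure can likely be confirmed independently of pacemaker and pcs by querying the library's FIPS status directly (the gnutls-utils package is assumed to be installed):
# report the GnuTLS library's FIPS140 mode status
gnutls-cli --fips140-mode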
Expected results
no errors produced by cibadmin
Actual results
pcs is failing on /usr/sbin/cibadmin --local --query
Finished running: /usr/sbin/cibadmin --local --query
Return value: 78
-Debug Stdout Start-
-Debug Stdout End-
-Debug Stderr Start-
Error in GnuTLS initialization: Error while performing self checks.
Could not connect to the CIB: Update does not conform to the configured schema
cibadmin: Init failed, could not perform requested operations: Update does not conform to the configured schema
-Debug Stderr End-
Error: unable to get cib
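For reference, Pacemaker ships a helper that can translate the return value seen above; exit code 78 corresponds to an invalid-configuration class of error, which is consistent with the "does not conform to the configured schema" message on stderr:
# decode cibadmin's exit status (--exit interprets the number as an exit code)
crm_error --exit 78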
- blocks: RHEL-40410 Rebase booth to v1.2 (rhel-10.0) (Integration)
- duplicates: RHEL-46008 GnuTLS: "Error in GnuTLS initialization: Error while performing self checks." when FIPS mode is enabled (Closed)
- is blocked by: RHEL-46008 GnuTLS: "Error in GnuTLS initialization: Error while performing self checks." when FIPS mode is enabled (Closed)