ansible-playbook 2.9.27
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.19 (main, May 16 2024, 11:40:09) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)]
No config file found; using defaults
[WARNING]: running playbook inside collection fedora.linux_system_roles
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.

PLAYBOOK: tests_cluster_destroy.yml ********************************************
2 plays in /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml

PLAY [all] *********************************************************************
META: ran handlers

TASK [Include vault variables] *************************************************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:5
Wednesday 31 July 2024 10:56:51 -0400 (0:00:00.018) 0:00:00.018 ********
ok: [managed_node1] => { "ansible_facts": { "ha_cluster_hacluster_password": { "__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256\n31303833633366333561656439323930303361333161363239346166656537323933313436\n3432386236656563343237306335323637396239616230353561330a313731623238393238\n62343064666336643930663239383936616465643134646536656532323461356237646133\n3761616633323839633232353637366266350a313163633236376666653238633435306565\n3264623032333736393535663833\n" } }, "ansible_included_var_files": [ "/var/ARTIFACTS/work-generallcpu2cdd/plans/general/tree/ha_cluster/tests/vars/vault-variables.yml" ], "changed": false }
META: ran handlers
META: ran handlers

PLAY [Deconfigure cluster] *****************************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:9
Wednesday 31 July 2024 10:56:52 -0400 (0:00:00.015) 0:00:00.034 ********
ok: [managed_node1]
META: ran handlers

TASK [Set up test environment] *************************************************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:18
Wednesday 31 July 2024 10:56:52 -0400 (0:00:00.988) 0:00:01.022 ********
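[NOTE]: For orientation, a minimal sketch of the test playbook driving the two plays above, reconstructed from the task paths in this log (structure and file layout assumed, not verbatim; ha_cluster_cluster_present is the role's documented switch for removing a cluster):

    # tests_cluster_destroy.yml (sketch)
    - hosts: all
      tasks:
        - name: Include vault variables
          include_vars:
            file: vars/vault-variables.yml   # matches the included var file logged above

    - name: Deconfigure cluster
      hosts: all
      vars:
        ha_cluster_cluster_present: false    # role removes the cluster instead of creating one
      roles:
        - fedora.linux_system_roles.ha_cluster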
TASK [fedora.linux_system_roles.ha_cluster : Set node name to 'localhost' for single-node clusters] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:9
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.028) 0:00:01.050 ********
ok: [managed_node1] => { "ansible_facts": { "inventory_hostname": "localhost" }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Ensure facts used by tests] *******
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:14
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.035) 0:00:01.085 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:22
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.032) 0:00:01.118 ********
ok: [managed_node1] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:27
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.461) 0:00:01.579 ********
ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_is_ostree": false }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Do not try to enable RHEL repositories] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:32
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.053) 0:00:01.633 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Copy nss-altfiles ha_cluster users to /etc/passwd] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:41
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.066) 0:00:01.700 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [Run HA Cluster role] *****************************************************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:23
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.035) 0:00:01.735 ********

TASK [fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:3
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.034) 0:00:01.770 ********
included: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Ensure ansible_facts used by role] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:2
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.021) 0:00:01.792 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:10
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.035) 0:00:01.827 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:15
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.035) 0:00:01.863 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }
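[NOTE]: The ostree detection in the task pairs above (run once from test_setup.yml, skipped as already done from set_vars.yml) is a stat-plus-set_fact pattern. A minimal sketch; the marker path and register name are assumptions, not taken from the role:

    - name: Check if system is ostree
      stat:
        path: /run/ostree-booted        # assumption: presence of this file marks an ostree/rpm-ostree system
      register: __ostree_booted_stat    # hypothetical register name

    - name: Set flag to indicate system is ostree
      set_fact:
        __ha_cluster_is_ostree: "{{ __ostree_booted_stat.stat.exists }}"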
TASK [fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:19
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.032) 0:00:01.895 ********
ok: [managed_node1] => (item=RedHat.yml) => { "ansible_facts": { "__ha_cluster_cloud_agents_packages": [], "__ha_cluster_fence_agent_packages_default": "{{ ['fence-agents-all'] + (['fence-virt'] if ansible_architecture == 'x86_64' else []) }}", "__ha_cluster_fullstack_node_packages": [ "corosync", "libknet1-plugins-all", "resource-agents", "pacemaker", "openssl" ], "__ha_cluster_pcs_provider": "pcs-0.10", "__ha_cluster_qdevice_node_packages": [ "corosync-qdevice", "bash", "coreutils", "curl", "grep", "nss-tools", "openssl", "sed" ], "__ha_cluster_repos": [], "__ha_cluster_role_essential_packages": [ "pcs", "corosync-qnetd" ], "__ha_cluster_sbd_packages": [ "sbd" ], "__ha_cluster_services": [ "corosync", "corosync-qdevice", "pacemaker" ] }, "ansible_included_var_files": [ "/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/RedHat.yml" ], "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" }
skipping: [managed_node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS.yml", "skip_reason": "Conditional result was False" }
ok: [managed_node1] => (item=CentOS_8.yml) => { "ansible_facts": { "__ha_cluster_cloud_agents_packages": [ "resource-agents-aliyun", "resource-agents-gcp", "fence-agents-aliyun", "fence-agents-aws", "fence-agents-azure-arm", "fence-agents-gce" ], "__ha_cluster_repos": [ { "id": "ha", "name": "HighAvailability" }, { "id": "resilientstorage", "name": "ResilientStorage" } ] }, "ansible_included_var_files": [ "/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_8.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml" }
ok: [managed_node1] => (item=CentOS_8.yml) => { "ansible_facts": { "__ha_cluster_cloud_agents_packages": [ "resource-agents-aliyun", "resource-agents-gcp", "fence-agents-aliyun", "fence-agents-aws", "fence-agents-azure-arm", "fence-agents-gce" ], "__ha_cluster_repos": [ { "id": "ha", "name": "HighAvailability" }, { "id": "resilientstorage", "name": "ResilientStorage" } ] }, "ansible_included_var_files": [ "/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_8.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml" }

TASK [fedora.linux_system_roles.ha_cluster : Set Linux Pacemaker shell specific variables] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:34
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.063) 0:00:01.959 ********
ok: [managed_node1] => { "ansible_facts": {}, "ansible_included_var_files": [ "/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/shell_pcs.yml" ], "changed": false }
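[NOTE]: The loop items above (RedHat.yml loaded, CentOS.yml skipped, CentOS_8.yml loaded twice) are consistent with the usual system-roles pattern of layering vars files from most generic to most specific. A sketch with the exact lookup expressions assumed:

    - name: Set platform/version specific variables
      include_vars: "{{ __vars_file }}"
      vars:
        __vars_file: "{{ role_path }}/vars/{{ item }}"
      when: __vars_file is file          # CentOS.yml is absent, hence "Conditional result was False"
      loop:
        - "{{ ansible_facts['os_family'] }}.yml"       # RedHat.yml
        - "{{ ansible_facts['distribution'] }}.yml"    # CentOS.yml
        - "{{ ansible_facts['distribution'] }}_{{ ansible_facts['distribution_major_version'] }}.yml"
        - "{{ ansible_facts['distribution'] }}_{{ ansible_facts['distribution_version'] }}.yml"

On CentOS Stream 8 the last two expressions both render to CentOS_8.yml, which would explain why that item appears, and is loaded, twice in the output above.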
TASK [fedora.linux_system_roles.ha_cluster : Enable package repositories] ******
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:6
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.036) 0:00:01.995 ********
included: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml:3
Wednesday 31 July 2024 10:56:53 -0400 (0:00:00.019) 0:00:02.015 ********
ok: [managed_node1] => (item=RedHat.yml) => { "ansible_facts": { "__ha_cluster_enable_repo_tasks_file": "/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/RedHat.yml" }, "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" }
ok: [managed_node1] => (item=CentOS.yml) => { "ansible_facts": { "__ha_cluster_enable_repo_tasks_file": "/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml" }, "ansible_loop_var": "item", "changed": false, "item": "CentOS.yml" }
skipping: [managed_node1] => (item=CentOS_8.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml", "skip_reason": "Conditional result was False" }
skipping: [managed_node1] => (item=CentOS_8.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Run platform/version specific tasks to enable repositories] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml:21
Wednesday 31 July 2024 10:56:54 -0400 (0:00:00.058) 0:00:02.073 ********
included: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : List active CentOS repositories] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml:3
Wednesday 31 July 2024 10:56:54 -0400 (0:00:00.043) 0:00:02.116 ********
[WARNING]: Consider using the dnf module rather than running 'dnf'. If you need to use command because dnf is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
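[NOTE]: The warning above is Ansible's own hint; the task runs `dnf repolist` via the command module, presumably because the dnf module has no repolist equivalent. A sketch of the two silencing options the warning itself names (task name from the log; register name and changed_when are assumptions):

    - name: List active CentOS repositories
      command: dnf repolist
      register: __repolist        # hypothetical register name
      changed_when: false         # listing repos never changes the system
      args:
        warn: false               # option 1: per-task opt-out (Ansible 2.9)

    # option 2: globally, in ansible.cfg
    # [defaults]
    # command_warnings = False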
ok: [managed_node1] => { "changed": false, "cmd": [ "dnf", "repolist" ], "delta": "0:00:00.253828", "end": "2024-07-31 10:56:54.715470", "rc": 0, "start": "2024-07-31 10:56:54.461642" }

STDOUT:

repo id                                   repo name
appstream                                 CentOS Stream 8 - AppStream
baseos                                    CentOS Stream 8 - BaseOS
beaker-client                             Beaker Client - RedHatEnterpriseLinux8
beaker-harness                            Beaker harness
beakerlib-libraries                       Copr repo for beakerlib-libraries owned by bgoncalv
copr:copr.devel.redhat.com:lpol:qa-tools  Copr repo for qa-tools owned by lpol
extras                                    CentOS Stream 8 - Extras
extras-common                             CentOS Stream 8 - Extras common packages
ha                                        CentOS Stream 8 - HighAvailability

TASK [fedora.linux_system_roles.ha_cluster : Enable CentOS repositories] *******
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml:10
Wednesday 31 July 2024 10:56:54 -0400 (0:00:00.681) 0:00:02.797 ********
skipping: [managed_node1] => (item={'id': 'ha', 'name': 'HighAvailability'}) => { "ansible_loop_var": "item", "changed": false, "item": { "id": "ha", "name": "HighAvailability" }, "skip_reason": "Conditional result was False" }
skipping: [managed_node1] => (item={'id': 'resilientstorage', 'name': 'ResilientStorage'}) => { "ansible_loop_var": "item", "changed": false, "item": { "id": "resilientstorage", "name": "ResilientStorage" }, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Install role essential packages] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:11
Wednesday 31 July 2024 10:56:54 -0400 (0:00:00.043) 0:00:02.841 ********
ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] }

MSG:

Nothing to do
lsrpackages: corosync-qnetd pcs

TASK [fedora.linux_system_roles.ha_cluster : Check and prepare role variables] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:17
Wednesday 31 July 2024 10:56:57 -0400 (0:00:02.838) 0:00:05.679 ********
included: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Discover cluster node names] ******
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:3
Wednesday 31 July 2024 10:56:57 -0400 (0:00:00.052) 0:00:05.731 ********
ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_node_name": "localhost" }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Collect cluster node names] *******
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:7
Wednesday 31 July 2024 10:56:57 -0400 (0:00:00.035) 0:00:05.767 ********
ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_all_node_names": [ "localhost" ] }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Fail if ha_cluster_node_options contains unknown or duplicate nodes] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:16
Wednesday 31 July 2024 10:56:57 -0400 (0:00:00.037) 0:00:05.805 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }
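[NOTE]: The "Install role essential packages" task above reports "Nothing to do" because pcs and corosync-qnetd are already present. Per the RedHat.yml vars loaded earlier, the package set comes from __ha_cluster_role_essential_packages; a minimal sketch, with the module choice assumed:

    - name: Install role essential packages
      package:
        name: "{{ __ha_cluster_role_essential_packages }}"  # ['pcs', 'corosync-qnetd'] per vars above
        state: present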
TASK [fedora.linux_system_roles.ha_cluster : Extract node options] *************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:30
Wednesday 31 July 2024 10:56:57 -0400 (0:00:00.037) 0:00:05.842 ********
ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_local_node": {} }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Fail if passwords are not specified] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:43
Wednesday 31 July 2024 10:56:57 -0400 (0:00:00.037) 0:00:05.879 ********
skipping: [managed_node1] => (item=ha_cluster_hacluster_password) => { "ansible_loop_var": "item", "changed": false, "item": "ha_cluster_hacluster_password", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Fail if nodes do not have the same number of SBD devices specified] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:53
Wednesday 31 July 2024 10:56:57 -0400 (0:00:00.062) 0:00:05.942 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Fail if configuring qnetd on a cluster node] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:69
Wednesday 31 July 2024 10:56:57 -0400 (0:00:00.010) 0:00:05.952 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Fail if no valid level is specified for a fencing level] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:79
Wednesday 31 July 2024 10:56:57 -0400 (0:00:00.010) 0:00:05.963 ********

TASK [fedora.linux_system_roles.ha_cluster : Fail if no target is specified for a fencing level] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:87
Wednesday 31 July 2024 10:56:57 -0400 (0:00:00.034) 0:00:05.997 ********

TASK [fedora.linux_system_roles.ha_cluster : Extract qdevice settings] *********
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:101
Wednesday 31 July 2024 10:56:57 -0400 (0:00:00.030) 0:00:06.028 ********
ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_qdevice_host": "", "__ha_cluster_qdevice_in_use": false, "__ha_cluster_qdevice_model": "", "__ha_cluster_qdevice_pcs_address": "" }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Figure out if ATB needs to be enabled for SBD] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:110
Wednesday 31 July 2024 10:56:58 -0400 (0:00:00.040) 0:00:06.069 ********
ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_sbd_needs_atb": false }, "changed": false }
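[NOTE]: A hedged reconstruction of the ATB (auto_tie_breaker) decision recorded above; the role's exact condition may differ, this only captures the usual rule that diskless SBD on an even-sized cluster without a qdevice needs ATB:

    - name: Figure out if ATB needs to be enabled for SBD
      set_fact:
        # assumption: the real role logic also checks for the absence of shared SBD devices
        __ha_cluster_sbd_needs_atb: >-
          {{ ha_cluster_sbd_enabled | d(false)
             and __ha_cluster_all_node_names | length is divisibleby 2
             and not __ha_cluster_qdevice_in_use }}

For this single-node run the node count is odd and SBD is unused, so the flag comes out false, matching the result above.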
TASK [fedora.linux_system_roles.ha_cluster : Fail if SBD needs ATB enabled and the user configured ATB to be disabled] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:120
Wednesday 31 July 2024 10:56:58 -0400 (0:00:00.035) 0:00:06.104 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Fail if ha_cluster_pcsd_public_key_src and ha_cluster_pcsd_private_key_src are set along with ha_cluster_pcsd_certificates] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:127
Wednesday 31 July 2024 10:56:58 -0400 (0:00:00.035) 0:00:06.140 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Fetch pcs capabilities] ***********
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:141
Wednesday 31 July 2024 10:56:58 -0400 (0:00:00.034) 0:00:06.174 ********
ok: [managed_node1] => { "changed": false, "cmd": [ "pcs", "--version", "--full" ], "delta": "0:00:00.623250", "end": "2024-07-31 10:56:59.069863", "rc": 0, "start": "2024-07-31 10:56:58.446613" }

STDOUT:

0.10.18
booth booth.enable-authfile.set booth.enable-authfile.unset cluster.config.backup-local cluster.config.restore-cluster cluster.config.restore-local cluster.config.uuid cluster.create cluster.create.enable cluster.create.local cluster.create.no-keys-sync cluster.create.separated-name-and-address cluster.create.start cluster.create.start.wait cluster.create.transport.knet cluster.create.transport.udp-udpu cluster.create.transport.udp-udpu.no-rrp cluster.destroy cluster.destroy.all cluster.report cluster.verify corosync.authkey.update corosync.config.get corosync.config.get.struct corosync.config.reload corosync.config.sync-to-local-cluster corosync.config.update corosync.link.add corosync.link.remove corosync.link.remove.list corosync.link.update corosync.qdevice corosync.qdevice.model.net corosync.quorum corosync.quorum.device corosync.quorum.device.heuristics corosync.quorum.device.model.net corosync.quorum.device.model.net.options_tls_and_kaptb corosync.quorum.set-expected-votes-runtime corosync.quorum.status corosync.quorum.unblock corosync.totem.block_unlisted_ips corosync.uidgid node.add node.add.enable node.add.separated-name-and-address node.add.start node.add.start.wait node.attributes node.attributes.set-list-for-node node.confirm-off node.fence node.guest node.kill node.maintenance node.maintenance.all node.maintenance.list node.maintenance.wait node.remote node.remote.onfail-demote node.remove node.remove-from-caches node.remove.list node.standby node.standby.all node.standby.list node.standby.wait node.start-stop-enable-disable node.start-stop-enable-disable.all node.start-stop-enable-disable.list node.start-stop-enable-disable.start-wait node.utilization node.utilization.set-list-for-node pcmk.acl.enable-disable pcmk.acl.group pcmk.acl.role pcmk.acl.role.create-with-permissions pcmk.acl.role.delete-with-users-groups pcmk.acl.user pcmk.alert pcmk.cib.checkpoints pcmk.cib.checkpoints.diff pcmk.cib.edit pcmk.cib.get pcmk.cib.get.scope pcmk.cib.roles.promoted-unpromoted pcmk.cib.set pcmk.constraint.colocation.set pcmk.constraint.colocation.set.options pcmk.constraint.colocation.simple pcmk.constraint.colocation.simple.options pcmk.constraint.hide-expired pcmk.constraint.location.simple
pcmk.constraint.location.simple.options pcmk.constraint.location.simple.resource-regexp pcmk.constraint.location.simple.rule pcmk.constraint.location.simple.rule.node-attr-type-number pcmk.constraint.location.simple.rule.options pcmk.constraint.location.simple.rule.rule-add-remove pcmk.constraint.no-autocorrect pcmk.constraint.order.set pcmk.constraint.order.set.options pcmk.constraint.order.simple pcmk.constraint.order.simple.options pcmk.constraint.ticket.set pcmk.constraint.ticket.set.options pcmk.constraint.ticket.simple pcmk.constraint.ticket.simple.constraint-id pcmk.properties.cluster pcmk.properties.cluster.config.output-formats pcmk.properties.cluster.defaults pcmk.properties.cluster.describe pcmk.properties.cluster.describe.output-formats pcmk.properties.operation-defaults pcmk.properties.operation-defaults.multiple pcmk.properties.operation-defaults.rule pcmk.properties.operation-defaults.rule-rsc-op pcmk.properties.operation-defaults.rule.hide-expired pcmk.properties.operation-defaults.rule.node-attr-type-number pcmk.properties.resource-defaults pcmk.properties.resource-defaults.multiple pcmk.properties.resource-defaults.rule pcmk.properties.resource-defaults.rule-rsc-op pcmk.properties.resource-defaults.rule.hide-expired pcmk.properties.resource-defaults.rule.node-attr-type-number pcmk.resource.ban-move-clear pcmk.resource.ban-move-clear.bundles pcmk.resource.ban-move-clear.clear-expired pcmk.resource.ban-move-clear.clone pcmk.resource.bundle pcmk.resource.bundle.container-docker pcmk.resource.bundle.container-docker.promoted-max pcmk.resource.bundle.container-podman pcmk.resource.bundle.container-podman.promoted-max pcmk.resource.bundle.container-rkt pcmk.resource.bundle.container-rkt.promoted-max pcmk.resource.bundle.reset pcmk.resource.bundle.wait pcmk.resource.cleanup pcmk.resource.cleanup.one-resource pcmk.resource.cleanup.strict pcmk.resource.clone pcmk.resource.clone.custom-id pcmk.resource.clone.meta-in-create pcmk.resource.clone.wait pcmk.resource.config.output-formats pcmk.resource.create pcmk.resource.create.clone.custom-id pcmk.resource.create.in-existing-bundle pcmk.resource.create.meta pcmk.resource.create.no-master pcmk.resource.create.operations pcmk.resource.create.operations.onfail-demote pcmk.resource.create.promotable pcmk.resource.create.promotable.custom-id pcmk.resource.create.wait pcmk.resource.debug pcmk.resource.delete pcmk.resource.disable.safe pcmk.resource.disable.safe.brief pcmk.resource.disable.safe.tag pcmk.resource.disable.simulate pcmk.resource.disable.simulate.brief pcmk.resource.disable.simulate.tag pcmk.resource.enable-disable pcmk.resource.enable-disable.list pcmk.resource.enable-disable.tag pcmk.resource.enable-disable.wait pcmk.resource.failcount pcmk.resource.group pcmk.resource.group.add-remove-list pcmk.resource.group.wait pcmk.resource.manage-unmanage pcmk.resource.manage-unmanage.list pcmk.resource.manage-unmanage.tag pcmk.resource.manage-unmanage.with-monitor pcmk.resource.move.autoclean pcmk.resource.promotable pcmk.resource.promotable.custom-id pcmk.resource.promotable.meta-in-create pcmk.resource.promotable.wait pcmk.resource.refresh pcmk.resource.refresh.one-resource pcmk.resource.refresh.strict pcmk.resource.relations pcmk.resource.relocate pcmk.resource.restart pcmk.resource.update pcmk.resource.update-meta pcmk.resource.update-meta.list pcmk.resource.update-meta.wait pcmk.resource.update-operations pcmk.resource.update-operations.onfail-demote pcmk.resource.update.meta pcmk.resource.update.operations 
pcmk.resource.update.operations.onfail-demote pcmk.resource.update.wait pcmk.resource.utilization pcmk.resource.utilization-set-list-for-resource pcmk.stonith.cleanup pcmk.stonith.cleanup.one-resource pcmk.stonith.cleanup.strict pcmk.stonith.create pcmk.stonith.create.in-group pcmk.stonith.create.meta pcmk.stonith.create.operations pcmk.stonith.create.operations.onfail-demote pcmk.stonith.create.wait pcmk.stonith.delete pcmk.stonith.enable-disable pcmk.stonith.enable-disable.list pcmk.stonith.enable-disable.wait pcmk.stonith.history.cleanup pcmk.stonith.history.show pcmk.stonith.history.update pcmk.stonith.levels pcmk.stonith.levels.add-remove-devices-list pcmk.stonith.levels.clear pcmk.stonith.levels.node-attr pcmk.stonith.levels.node-regexp pcmk.stonith.levels.verify pcmk.stonith.refresh pcmk.stonith.refresh.one-resource pcmk.stonith.refresh.strict pcmk.stonith.update pcmk.stonith.update.scsi-devices pcmk.stonith.update.scsi-devices.add-remove pcmk.stonith.update.scsi-devices.mpath pcmk.tag pcmk.tag.resources pcs.auth.client pcs.auth.client.cluster pcs.auth.client.token pcs.auth.deauth-client pcs.auth.deauth-server pcs.auth.no-bidirectional pcs.auth.separated-name-and-address pcs.auth.server.token pcs.cfg-in-file.cib pcs.daemon-ssl-cert.set pcs.daemon-ssl-cert.sync-to-local-cluster pcs.disaster-recovery.essentials pcs.request-timeout resource-agents.describe resource-agents.list resource-agents.list.detailed resource-agents.ocf.version-1-0 resource-agents.ocf.version-1-1 resource-agents.self-validation sbd sbd.option-timeout-action sbd.shared-block-device status.corosync.membership status.pcmk.resources.hide-inactive status.pcmk.resources.id status.pcmk.resources.node status.pcmk.resources.orphaned status.pcmk.xml stonith-agents.describe stonith-agents.list stonith-agents.list.detailed stonith-agents.ocf.version-1-0 stonith-agents.ocf.version-1-1 stonith-agents.self-validation

TASK [fedora.linux_system_roles.ha_cluster : Parse pcs capabilities] ***********
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:148
Wednesday 31 July 2024 10:56:59 -0400 (0:00:00.978) 0:00:07.152 ********
ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_pcs_capabilities": [ "booth", "booth.enable-authfile.set", "booth.enable-authfile.unset", "cluster.config.backup-local", "cluster.config.restore-cluster", "cluster.config.restore-local", "cluster.config.uuid", "cluster.create", "cluster.create.enable", "cluster.create.local", "cluster.create.no-keys-sync", "cluster.create.separated-name-and-address", "cluster.create.start", "cluster.create.start.wait", "cluster.create.transport.knet", "cluster.create.transport.udp-udpu", "cluster.create.transport.udp-udpu.no-rrp", "cluster.destroy", "cluster.destroy.all", "cluster.report", "cluster.verify", "corosync.authkey.update", "corosync.config.get", "corosync.config.get.struct", "corosync.config.reload", "corosync.config.sync-to-local-cluster", "corosync.config.update", "corosync.link.add", "corosync.link.remove", "corosync.link.remove.list", "corosync.link.update", "corosync.qdevice", "corosync.qdevice.model.net", "corosync.quorum", "corosync.quorum.device", "corosync.quorum.device.heuristics", "corosync.quorum.device.model.net", "corosync.quorum.device.model.net.options_tls_and_kaptb", "corosync.quorum.set-expected-votes-runtime", "corosync.quorum.status", "corosync.quorum.unblock", "corosync.totem.block_unlisted_ips", "corosync.uidgid", "node.add",
"node.add.enable", "node.add.separated-name-and-address", "node.add.start", "node.add.start.wait", "node.attributes", "node.attributes.set-list-for-node", "node.confirm-off", "node.fence", "node.guest", "node.kill", "node.maintenance", "node.maintenance.all", "node.maintenance.list", "node.maintenance.wait", "node.remote", "node.remote.onfail-demote", "node.remove", "node.remove-from-caches", "node.remove.list", "node.standby", "node.standby.all", "node.standby.list", "node.standby.wait", "node.start-stop-enable-disable", "node.start-stop-enable-disable.all", "node.start-stop-enable-disable.list", "node.start-stop-enable-disable.start-wait", "node.utilization", "node.utilization.set-list-for-node", "pcmk.acl.enable-disable", "pcmk.acl.group", "pcmk.acl.role", "pcmk.acl.role.create-with-permissions", "pcmk.acl.role.delete-with-users-groups", "pcmk.acl.user", "pcmk.alert", "pcmk.cib.checkpoints", "pcmk.cib.checkpoints.diff", "pcmk.cib.edit", "pcmk.cib.get", "pcmk.cib.get.scope", "pcmk.cib.roles.promoted-unpromoted", "pcmk.cib.set", "pcmk.constraint.colocation.set", "pcmk.constraint.colocation.set.options", "pcmk.constraint.colocation.simple", "pcmk.constraint.colocation.simple.options", "pcmk.constraint.hide-expired", "pcmk.constraint.location.simple", "pcmk.constraint.location.simple.options", "pcmk.constraint.location.simple.resource-regexp", "pcmk.constraint.location.simple.rule", "pcmk.constraint.location.simple.rule.node-attr-type-number", "pcmk.constraint.location.simple.rule.options", "pcmk.constraint.location.simple.rule.rule-add-remove", "pcmk.constraint.no-autocorrect", "pcmk.constraint.order.set", "pcmk.constraint.order.set.options", "pcmk.constraint.order.simple", "pcmk.constraint.order.simple.options", "pcmk.constraint.ticket.set", "pcmk.constraint.ticket.set.options", "pcmk.constraint.ticket.simple", "pcmk.constraint.ticket.simple.constraint-id", "pcmk.properties.cluster", "pcmk.properties.cluster.config.output-formats", "pcmk.properties.cluster.defaults", "pcmk.properties.cluster.describe", "pcmk.properties.cluster.describe.output-formats", "pcmk.properties.operation-defaults", "pcmk.properties.operation-defaults.multiple", "pcmk.properties.operation-defaults.rule", "pcmk.properties.operation-defaults.rule-rsc-op", "pcmk.properties.operation-defaults.rule.hide-expired", "pcmk.properties.operation-defaults.rule.node-attr-type-number", "pcmk.properties.resource-defaults", "pcmk.properties.resource-defaults.multiple", "pcmk.properties.resource-defaults.rule", "pcmk.properties.resource-defaults.rule-rsc-op", "pcmk.properties.resource-defaults.rule.hide-expired", "pcmk.properties.resource-defaults.rule.node-attr-type-number", "pcmk.resource.ban-move-clear", "pcmk.resource.ban-move-clear.bundles", "pcmk.resource.ban-move-clear.clear-expired", "pcmk.resource.ban-move-clear.clone", "pcmk.resource.bundle", "pcmk.resource.bundle.container-docker", "pcmk.resource.bundle.container-docker.promoted-max", "pcmk.resource.bundle.container-podman", "pcmk.resource.bundle.container-podman.promoted-max", "pcmk.resource.bundle.container-rkt", "pcmk.resource.bundle.container-rkt.promoted-max", "pcmk.resource.bundle.reset", "pcmk.resource.bundle.wait", "pcmk.resource.cleanup", "pcmk.resource.cleanup.one-resource", "pcmk.resource.cleanup.strict", "pcmk.resource.clone", "pcmk.resource.clone.custom-id", "pcmk.resource.clone.meta-in-create", "pcmk.resource.clone.wait", "pcmk.resource.config.output-formats", "pcmk.resource.create", "pcmk.resource.create.clone.custom-id", 
"pcmk.resource.create.in-existing-bundle", "pcmk.resource.create.meta", "pcmk.resource.create.no-master", "pcmk.resource.create.operations", "pcmk.resource.create.operations.onfail-demote", "pcmk.resource.create.promotable", "pcmk.resource.create.promotable.custom-id", "pcmk.resource.create.wait", "pcmk.resource.debug", "pcmk.resource.delete", "pcmk.resource.disable.safe", "pcmk.resource.disable.safe.brief", "pcmk.resource.disable.safe.tag", "pcmk.resource.disable.simulate", "pcmk.resource.disable.simulate.brief", "pcmk.resource.disable.simulate.tag", "pcmk.resource.enable-disable", "pcmk.resource.enable-disable.list", "pcmk.resource.enable-disable.tag", "pcmk.resource.enable-disable.wait", "pcmk.resource.failcount", "pcmk.resource.group", "pcmk.resource.group.add-remove-list", "pcmk.resource.group.wait", "pcmk.resource.manage-unmanage", "pcmk.resource.manage-unmanage.list", "pcmk.resource.manage-unmanage.tag", "pcmk.resource.manage-unmanage.with-monitor", "pcmk.resource.move.autoclean", "pcmk.resource.promotable", "pcmk.resource.promotable.custom-id", "pcmk.resource.promotable.meta-in-create", "pcmk.resource.promotable.wait", "pcmk.resource.refresh", "pcmk.resource.refresh.one-resource", "pcmk.resource.refresh.strict", "pcmk.resource.relations", "pcmk.resource.relocate", "pcmk.resource.restart", "pcmk.resource.update", "pcmk.resource.update-meta", "pcmk.resource.update-meta.list", "pcmk.resource.update-meta.wait", "pcmk.resource.update-operations", "pcmk.resource.update-operations.onfail-demote", "pcmk.resource.update.meta", "pcmk.resource.update.operations", "pcmk.resource.update.operations.onfail-demote", "pcmk.resource.update.wait", "pcmk.resource.utilization", "pcmk.resource.utilization-set-list-for-resource", "pcmk.stonith.cleanup", "pcmk.stonith.cleanup.one-resource", "pcmk.stonith.cleanup.strict", "pcmk.stonith.create", "pcmk.stonith.create.in-group", "pcmk.stonith.create.meta", "pcmk.stonith.create.operations", "pcmk.stonith.create.operations.onfail-demote", "pcmk.stonith.create.wait", "pcmk.stonith.delete", "pcmk.stonith.enable-disable", "pcmk.stonith.enable-disable.list", "pcmk.stonith.enable-disable.wait", "pcmk.stonith.history.cleanup", "pcmk.stonith.history.show", "pcmk.stonith.history.update", "pcmk.stonith.levels", "pcmk.stonith.levels.add-remove-devices-list", "pcmk.stonith.levels.clear", "pcmk.stonith.levels.node-attr", "pcmk.stonith.levels.node-regexp", "pcmk.stonith.levels.verify", "pcmk.stonith.refresh", "pcmk.stonith.refresh.one-resource", "pcmk.stonith.refresh.strict", "pcmk.stonith.update", "pcmk.stonith.update.scsi-devices", "pcmk.stonith.update.scsi-devices.add-remove", "pcmk.stonith.update.scsi-devices.mpath", "pcmk.tag", "pcmk.tag.resources", "pcs.auth.client", "pcs.auth.client.cluster", "pcs.auth.client.token", "pcs.auth.deauth-client", "pcs.auth.deauth-server", "pcs.auth.no-bidirectional", "pcs.auth.separated-name-and-address", "pcs.auth.server.token", "pcs.cfg-in-file.cib", "pcs.daemon-ssl-cert.set", "pcs.daemon-ssl-cert.sync-to-local-cluster", "pcs.disaster-recovery.essentials", "pcs.request-timeout", "resource-agents.describe", "resource-agents.list", "resource-agents.list.detailed", "resource-agents.ocf.version-1-0", "resource-agents.ocf.version-1-1", "resource-agents.self-validation", "sbd", "sbd.option-timeout-action", "sbd.shared-block-device", "status.corosync.membership", "status.pcmk.resources.hide-inactive", "status.pcmk.resources.id", "status.pcmk.resources.node", "status.pcmk.resources.orphaned", "status.pcmk.xml", "stonith-agents.describe", 
"stonith-agents.list", "stonith-agents.list.detailed", "stonith-agents.ocf.version-1-0", "stonith-agents.ocf.version-1-1", "stonith-agents.self-validation" ], "__ha_cluster_pcsd_capabilities_available": false }, "changed": false } TASK [fedora.linux_system_roles.ha_cluster : Fetch pcsd capabilities] ********** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:155 Wednesday 31 July 2024 10:56:59 -0400 (0:00:00.040) 0:00:07.193 ******** skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : Parse pcsd capabilities] ********** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:163 Wednesday 31 July 2024 10:56:59 -0400 (0:00:00.010) 0:00:07.203 ******** ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_pcsd_capabilities": [] }, "changed": false } TASK [fedora.linux_system_roles.ha_cluster : Fail if pcs is to old to configure resources and operations defaults] *** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:172 Wednesday 31 July 2024 10:56:59 -0400 (0:00:00.034) 0:00:07.238 ******** skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : Set hacluster password] *********** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:22 Wednesday 31 July 2024 10:56:59 -0400 (0:00:00.035) 0:00:07.273 ******** ok: [managed_node1] => { "append": false, "changed": false, "comment": "cluster user", "group": 189, "home": "/home/hacluster", "move_home": false, "name": "hacluster", "password": "NOT_LOGGING_PASSWORD", "shell": "/sbin/nologin", "state": "present", "uid": 189 } TASK [fedora.linux_system_roles.ha_cluster : Configure shell] ****************** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:29 Wednesday 31 July 2024 10:56:59 -0400 (0:00:00.584) 0:00:07.858 ******** included: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml for managed_node1 TASK [fedora.linux_system_roles.ha_cluster : Stop pcsd] ************************ task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:6 Wednesday 31 July 2024 10:56:59 -0400 (0:00:00.055) 0:00:07.914 ******** changed: [managed_node1] => { "changed": true, "name": "pcsd", "state": "stopped", "status": { "ActiveEnterTimestamp": "Wed 2024-07-31 10:55:57 EDT", "ActiveEnterTimestampMonotonic": "1875980795", "ActiveExitTimestamp": "Wed 2024-07-31 10:55:55 EDT", "ActiveExitTimestampMonotonic": "1873631524", "ActiveState": "active", "After": "basic.target system.slice network-online.target pcsd-ruby.service sysinit.target systemd-journald.socket", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Wed 2024-07-31 10:55:57 EDT", "AssertTimestampMonotonic": "1875653119", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": 
"infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2024-07-31 10:55:57 EDT", "ConditionTimestampMonotonic": "1875653117", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ConsistsOf": "pcsd-ruby.service", "ControlGroup": "/system.slice/pcsd.service", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "PCS GUI and remote configuration interface", "DevicePolicy": "auto", "Documentation": "man:pcsd(8) man:pcs(8)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/pcsd (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "208935", "ExecMainStartTimestamp": "Wed 2024-07-31 10:55:57 EDT", "ExecMainStartTimestampMonotonic": "1875654654", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/sbin/pcsd ; argv[]=/usr/sbin/pcsd ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/pcsd.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "pcsd.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2024-07-31 10:55:55 EDT", "InactiveEnterTimestampMonotonic": "1873662142", "InactiveExitTimestamp": "Wed 2024-07-31 10:55:57 EDT", "InactiveExitTimestampMonotonic": "1875654711", "InvocationID": "f447fa81e72e46f38a754298c8a1319f", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", 
"LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "208935", "MemoryAccounting": "yes", "MemoryCurrent": "23756800", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "pcsd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice network-online.target pcsd-ruby.service sysinit.target", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Wed 2024-07-31 10:55:57 EDT", "StateChangeTimestampMonotonic": "1875980795", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "1", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "WatchdogTimestamp": "Wed 2024-07-31 10:55:57 EDT", "WatchdogTimestampMonotonic": "1875980791", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.ha_cluster : Regenerate pcsd TLS certificate and key] *** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:11 Wednesday 31 July 2024 10:57:00 -0400 (0:00:00.768) 0:00:08.682 ******** skipping: [managed_node1] => (item=/var/lib/pcsd/pcsd.key) => { "ansible_loop_var": "item", "changed": false, "item": "/var/lib/pcsd/pcsd.key", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => 
TASK [fedora.linux_system_roles.ha_cluster : Get the stat of /var/lib/pcsd] ****
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:25
Wednesday 31 July 2024 10:57:00 -0400 (0:00:00.036) 0:00:08.718 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Allow certmonger to write into pcsd's certificate directory] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:30
Wednesday 31 July 2024 10:57:00 -0400 (0:00:00.036) 0:00:08.755 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [Ensure the name of ha_cluster_pcsd_certificates is /var/lib/pcsd/pcsd; Create certificates using the certificate role] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:37
Wednesday 31 July 2024 10:57:00 -0400 (0:00:00.033) 0:00:08.788 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Set pcsd's certificate directory back to cluster_var_lib_t] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:49
Wednesday 31 July 2024 10:57:00 -0400 (0:00:00.035) 0:00:08.824 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Distribute pcsd TLS private key] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:64
Wednesday 31 July 2024 10:57:00 -0400 (0:00:00.034) 0:00:08.858 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Distribute pcsd TLS certificate] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:71
Wednesday 31 July 2024 10:57:00 -0400 (0:00:00.035) 0:00:08.893 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Distribute pcs_settings.conf] *****
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:79
Wednesday 31 July 2024 10:57:00 -0400 (0:00:00.035) 0:00:08.929 ********
changed: [managed_node1] => { "changed": true, "checksum": "b504e1b9c9aa23803dd6f95e66c757088b08551d", "dest": "/var/lib/pcsd/pcs_settings.conf", "gid": 0, "group": "root", "md5sum": "087ff556d850518c8fff5ad1179d8817", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:cluster_var_lib_t:s0", "size": 359, "src": "/root/.ansible/tmp/ansible-tmp-1722437820.938834-28992-56711229643096/source", "state": "file", "uid": 0 }

TASK [fedora.linux_system_roles.ha_cluster : Start pcsd with updated config files and configure it to start on boot] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:88
Wednesday 31 July 2024 10:57:01 -0400 (0:00:00.839) 0:00:09.768 ******** changed: [managed_node1] => { "changed": true, "enabled": true, "name": "pcsd", "state": "started", "status": { "ActiveEnterTimestamp": "Wed 2024-07-31 10:55:57 EDT", "ActiveEnterTimestampMonotonic": "1875980795", "ActiveExitTimestamp": "Wed 2024-07-31 10:57:00 EDT", "ActiveExitTimestampMonotonic": "1938631181", "ActiveState": "inactive", "After": "basic.target system.slice network-online.target pcsd-ruby.service sysinit.target systemd-journald.socket", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Wed 2024-07-31 10:55:57 EDT", "AssertTimestampMonotonic": "1875653119", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2024-07-31 10:55:57 EDT", "ConditionTimestampMonotonic": "1875653117", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ConsistsOf": "pcsd-ruby.service", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "PCS GUI and remote configuration interface", "DevicePolicy": "auto", "Documentation": "man:pcsd(8) man:pcs(8)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/pcsd (ignore_errors=no)", "ExecMainCode": "1", "ExecMainExitTimestamp": "Wed 2024-07-31 10:57:00 EDT", "ExecMainExitTimestampMonotonic": "1938663664", "ExecMainPID": "208935", "ExecMainStartTimestamp": "Wed 2024-07-31 10:55:57 EDT", "ExecMainStartTimestampMonotonic": "1875654654", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/sbin/pcsd ; argv[]=/usr/sbin/pcsd ; ignore_errors=no ; start_time=[Wed 2024-07-31 10:55:57 EDT] ; stop_time=[Wed 2024-07-31 10:57:00 EDT] ; pid=208935 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/pcsd.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "pcsd.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", 
"InactiveEnterTimestamp": "Wed 2024-07-31 10:57:00 EDT", "InactiveEnterTimestampMonotonic": "1938663806", "InactiveExitTimestamp": "Wed 2024-07-31 10:55:57 EDT", "InactiveExitTimestampMonotonic": "1875654711", "InvocationID": "f447fa81e72e46f38a754298c8a1319f", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "MemoryAccounting": "yes", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "pcsd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice network-online.target pcsd-ruby.service sysinit.target", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Wed 2024-07-31 10:57:00 EDT", "StateChangeTimestampMonotonic": "1938663806", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": 
"no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.ha_cluster : Configure firewall] *************** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:35 Wednesday 31 July 2024 10:57:03 -0400 (0:00:01.350) 0:00:11.119 ******** skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : Configure selinux] **************** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:38 Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.036) 0:00:11.155 ******** skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : Install cluster packages] ********* task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:44 Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.033) 0:00:11.189 ******** skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : Distribute fence-virt authkey] **** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:50 Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.033) 0:00:11.223 ******** skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : Configure SBD] ******************** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:55 Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.032) 0:00:11.255 ******** skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : Configure corosync] *************** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:58 Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.060) 0:00:11.315 ******** skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : Cluster auth] ********************* task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:61 Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.036) 0:00:11.352 ******** skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : Distribute cluster shared keys] *** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:66 Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.033) 0:00:11.385 ******** skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : Enable or disable cluster services on boot] *** task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:72 
Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.036) 0:00:11.422 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Start the cluster and reload corosync.conf] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:75
Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.035) 0:00:11.458 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Create and push CIB] **************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:78
Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.033) 0:00:11.492 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Remove cluster configuration] *****
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:87
Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.035) 0:00:11.527 ********
included: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/cluster-destroy-pcs-0.10.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Remove cluster configuration] *****
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/cluster-destroy-pcs-0.10.yml:9
Wednesday 31 July 2024 10:57:03 -0400 (0:00:00.039) 0:00:11.566 ********
changed: [managed_node1] => (item=/etc/corosync/corosync.conf) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "pcs", "cluster", "destroy" ], "delta": "0:00:02.401610", "end": "2024-07-31 10:57:06.271731", "item": "/etc/corosync/corosync.conf", "rc": 0, "start": "2024-07-31 10:57:03.870121" }

STDOUT:

Shutting down pacemaker/corosync services...
Killing any remaining services...
Removing all cluster configuration files...
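(Note for readers tracing this step: the task loops over the known cluster configuration files and runs "pcs cluster destroy" only while the file for the current loop item still exists. That is why the corosync.conf item above reports "changed" while the cib.xml item below comes back "ok" with "skipped, since ... does not exist", which is the command module's message when a "removes" guard is not met. The task below is a minimal standalone sketch of that pattern, not the role's actual source in cluster-destroy-pcs-0.10.yml:

    - name: Remove cluster configuration (illustrative sketch)
      ansible.builtin.command:
        cmd: pcs cluster destroy
        # "removes" makes the command run only if this file exists, so
        # later loop items become no-ops once the first destroy has
        # already wiped the configuration.
        removes: "{{ item }}"
      loop:
        - /etc/corosync/corosync.conf
        - /var/lib/pacemaker/cib/cib.xml
)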
ok: [managed_node1] => (item=/var/lib/pacemaker/cib/cib.xml) => { "ansible_loop_var": "item", "changed": false, "cmd": [ "pcs", "cluster", "destroy" ], "item": "/var/lib/pacemaker/cib/cib.xml", "rc": 0 }

STDOUT:

skipped, since /var/lib/pacemaker/cib/cib.xml does not exist

TASK [fedora.linux_system_roles.ha_cluster : Remove fence-virt authkey] ********
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:90
Wednesday 31 July 2024 10:57:06 -0400 (0:00:03.167) 0:00:14.734 ********
changed: [managed_node1] => { "changed": true, "path": "/etc/cluster/fence_xvm.key", "state": "absent" }

TASK [fedora.linux_system_roles.ha_cluster : Configure qnetd] ******************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:95
Wednesday 31 July 2024 10:57:07 -0400 (0:00:00.486) 0:00:15.221 ********
included: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/pcs-qnetd.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Remove qnetd configuration] *******
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/pcs-qnetd.yml:3
Wednesday 31 July 2024 10:57:07 -0400 (0:00:00.039) 0:00:15.260 ********
changed: [managed_node1] => { "changed": true, "cmd": [ "pcs", "--force", "--", "qdevice", "destroy", "net" ], "delta": "0:00:01.153289", "end": "2024-07-31 10:57:08.715548", "rc": 0, "start": "2024-07-31 10:57:07.562259" }

STDOUT:

Stopping quorum device...
quorum device stopped
quorum device disabled
Quorum device 'net' configuration files removed

TASK [fedora.linux_system_roles.ha_cluster : Setup qnetd] **********************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/pcs-qnetd.yml:16
Wednesday 31 July 2024 10:57:08 -0400 (0:00:01.536) 0:00:16.797 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Enable or disable qnetd service on boot] ***
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/pcs-qnetd.yml:26
Wednesday 31 July 2024 10:57:08 -0400 (0:00:00.034) 0:00:16.832 ********
skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [Stat corosync.conf] ******************************************************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:28
Wednesday 31 July 2024 10:57:08 -0400 (0:00:00.035) 0:00:16.868 ********
ok: [managed_node1] => { "changed": false, "stat": { "exists": false } }

TASK [Stat cib.xml] ************************************************************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:33
Wednesday 31 July 2024 10:57:09 -0400 (0:00:00.353) 0:00:17.221 ********
ok: [managed_node1] => { "changed": false, "stat": { "exists": false } }

TASK [Stat fence_xvm.key] ******************************************************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:38
Wednesday 31 July 2024 10:57:09 -0400 (0:00:00.352) 0:00:17.574 ********
ok: [managed_node1] => { "changed": false, "stat": { "exists": false } }

TASK [Check the files do not exist] ********************************************
task path: /tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:43
Wednesday 31 July 2024 10:57:09 -0400 (0:00:00.353) 0:00:17.927 ********
ok: [managed_node1] => { "changed": false }

MSG:

All assertions passed

META: ran handlers
META: ran handlers

PLAY RECAP *********************************************************************
managed_node1              : ok=36   changed=6    unreachable=0    failed=0    skipped=37   rescued=0    ignored=0

Wednesday 31 July 2024 10:57:09 -0400 (0:00:00.036) 0:00:17.964 ********
===============================================================================
fedora.linux_system_roles.ha_cluster : Remove cluster configuration ----- 3.17s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/cluster-destroy-pcs-0.10.yml:9
fedora.linux_system_roles.ha_cluster : Install role essential packages --- 2.84s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:11
fedora.linux_system_roles.ha_cluster : Remove qnetd configuration ------- 1.54s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/pcs-qnetd.yml:3
fedora.linux_system_roles.ha_cluster : Start pcsd with updated config files and configure it to start on boot --- 1.35s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:88
Gathering Facts --------------------------------------------------------- 0.99s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:9
fedora.linux_system_roles.ha_cluster : Fetch pcs capabilities ----------- 0.98s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:141
fedora.linux_system_roles.ha_cluster : Distribute pcs_settings.conf ----- 0.84s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:79
fedora.linux_system_roles.ha_cluster : Stop pcsd ------------------------ 0.77s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/configure-shell.yml:6
fedora.linux_system_roles.ha_cluster : List active CentOS repositories --- 0.68s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml:3
fedora.linux_system_roles.ha_cluster : Set hacluster password ----------- 0.58s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:22
fedora.linux_system_roles.ha_cluster : Remove fence-virt authkey -------- 0.49s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:90
fedora.linux_system_roles.ha_cluster : Check if system is ostree -------- 0.46s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:22
Stat fence_xvm.key ------------------------------------------------------ 0.35s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:38
Stat corosync.conf ------------------------------------------------------ 0.35s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:28
Stat cib.xml ------------------------------------------------------------ 0.35s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_cluster_destroy.yml:33
fedora.linux_system_roles.ha_cluster : Do not try to enable RHEL repositories --- 0.07s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:32
fedora.linux_system_roles.ha_cluster : Set platform/version specific variables --- 0.06s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:19
fedora.linux_system_roles.ha_cluster : Fail if passwords are not specified --- 0.06s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml:43
fedora.linux_system_roles.ha_cluster : Configure SBD -------------------- 0.06s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:55
fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories --- 0.06s
/tmp/tmp.Qs2kIDLrag/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml:3
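(Closing note, separate from the captured run: the teardown this test exercises is selected through the role's documented ha_cluster_cluster_present variable; setting it to false makes the role remove an existing cluster configuration instead of creating one. A minimal playbook reproducing the flow above, deconfigure and then verify, might look like the following sketch; the play layout and the single verification path are illustrative, not copied from tests_cluster_destroy.yml:

    - name: Deconfigure cluster (illustrative sketch)
      hosts: all
      vars:
        # Documented role switch: false removes cluster configuration.
        ha_cluster_cluster_present: false
      roles:
        - fedora.linux_system_roles.ha_cluster
      tasks:
        # Plain tasks run after the roles section, so these act as the
        # post-teardown check, mirroring the test's Stat/assert steps.
        - name: Stat corosync.conf
          ansible.builtin.stat:
            path: /etc/corosync/corosync.conf
          register: stat_corosync_conf

        - name: Check the file does not exist
          ansible.builtin.assert:
            that:
              - not stat_corosync_conf.stat.exists
)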