[DEPRECATION WARNING]: ANSIBLE_COLLECTIONS_PATHS option, does not fit var naming standard, use the singular form ANSIBLE_COLLECTIONS_PATH instead. This feature will be removed from ansible-core in version 2.19. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles

PLAY [Test qdevice - minimal configuration] ************************************

TASK [Gathering Facts] *********************************************************
Thursday 25 July 2024  08:24:40 -0400 (0:00:00.008)       0:00:00.008 *********
[WARNING]: Platform linux on host managed_node1 is using the discovered Python
interpreter at /usr/bin/python3.12, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
ok: [managed_node1]

TASK [Set qnetd address] *******************************************************
Thursday 25 July 2024  08:24:41 -0400 (0:00:01.122)       0:00:01.131 *********
ok: [managed_node1] => { "ansible_facts": { "__test_qnetd_address": "localhost" }, "changed": false }

TASK [Run test] ****************************************************************
Thursday 25 July 2024  08:24:41 -0400 (0:00:00.020)       0:00:01.151 *********
included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/template_qdevice.yml for managed_node1

TASK [Set up test environment] *************************************************
Thursday 25 July 2024  08:24:41 -0400 (0:00:00.021)       0:00:01.173 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Set node name to 'localhost' for single-node clusters] ***
Thursday 25 July 2024  08:24:41 -0400 (0:00:00.028)       0:00:01.201 *********
ok: [managed_node1] => {
"ansible_facts": { "inventory_hostname": "localhost" }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Ensure facts used by tests] *******
Thursday 25 July 2024  08:24:41 -0400 (0:00:00.021)       0:00:01.223 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "'distribution' not in ansible_facts", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
Thursday 25 July 2024  08:24:41 -0400 (0:00:00.017)       0:00:01.240 *********
ok: [managed_node1] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
Thursday 25 July 2024  08:24:42 -0400 (0:00:00.415)       0:00:01.656 *********
ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_is_ostree": false }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Do not try to enable RHEL repositories] ***
Thursday 25 July 2024  08:24:42 -0400 (0:00:00.021)       0:00:01.677 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution == 'RedHat'", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Copy nss-altfiles ha_cluster users to /etc/passwd] ***
Thursday 25 July 2024  08:24:42 -0400 (0:00:00.014)       0:00:01.692 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__ha_cluster_is_ostree | d(false)", "skip_reason": "Conditional result was False" }

TASK [Clean up test environment for qnetd] *************************************
Thursday 25 July 2024  08:24:42 -0400 (0:00:00.021)       0:00:01.713 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed] ***
Thursday 25 July 2024  08:24:42 -0400 (0:00:00.030)       0:00:01.743 *********
ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] }

MSG:

Nothing to do

TASK
[fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present] ***
Thursday 25 July 2024  08:24:42 -0400 (0:00:00.787)       0:00:02.531 *********
ok: [managed_node1] => { "changed": false, "path": "/etc/corosync/qnetd", "state": "absent" }

TASK [Set up test environment for qnetd] ***************************************
Thursday 25 July 2024  08:24:43 -0400 (0:00:00.430)       0:00:02.961 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Install qnetd packages] ***********
Thursday 25 July 2024  08:24:43 -0400 (0:00:00.031)       0:00:02.993 *********
changed: [managed_node1] => { "changed": true, "rc": 0, "results": [ "Installed: corosync-qnetd-3.0.3-6.el10.x86_64" ] }
lsrpackages: corosync-qnetd pcs

TASK [fedora.linux_system_roles.ha_cluster : Set up qnetd] *********************
Thursday 25 July 2024  08:24:44 -0400 (0:00:01.557)       0:00:04.550 *********
changed: [managed_node1] => { "changed": true, "cmd": [ "pcs", "--start", "--", "qdevice", "setup", "model", "net" ], "delta": "0:00:01.140064", "end": "2024-07-25 08:24:46.418912", "failed_when_result": false, "rc": 0, "start": "2024-07-25 08:24:45.278848" }

STDERR:

Quorum device 'net' initialized
Starting quorum device...
quorum device started

TASK [Back up qnetd] ***********************************************************
Thursday 25 July 2024  08:24:46 -0400 (0:00:01.569)       0:00:06.119 *********
included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tasks/qnetd_backup_restore.yml for managed_node1

TASK [Create /etc/corosync/qnetd_backup directory] *****************************
Thursday 25 July 2024  08:24:46 -0400 (0:00:00.025)       0:00:06.145 *********
ok: [managed_node1] => { "changed": false, "gid": 0, "group": "root", "mode": "0700", "owner": "root", "path": "/etc/corosync/qnetd_backup", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 19, "state": "directory", "uid": 0 }

TASK [Back up qnetd settings] **************************************************
Thursday 25 July 2024  08:24:46 -0400 (0:00:00.350)       0:00:06.495 *********
changed: [managed_node1] => { "changed": true, "cmd": [ "cp", "--preserve=all", "--recursive", "/etc/corosync/qnetd", "/etc/corosync/qnetd_backup" ], "delta": "0:00:00.007736", "end": "2024-07-25 08:24:47.148152", "rc": 0, "start": "2024-07-25 08:24:47.140416" }

TASK [Restore qnetd settings] **************************************************
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.345)       0:00:06.841 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "operation == \"restore\"", "skip_reason": "Conditional result was False" }

TASK [Start qnetd] *************************************************************
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.013)       0:00:06.854 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "operation == \"restore\"", "skip_reason": "Conditional result was False" }

TASK [Run HA Cluster role] *****************************************************
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.012)       0:00:06.867 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK
[fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] ***
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.055)       0:00:06.923 *********
included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Ensure ansible_facts used by role] ***
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.022)       0:00:06.946 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__ha_cluster_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.021)       0:00:06.968 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __ha_cluster_is_ostree is defined", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.017)       0:00:06.985 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __ha_cluster_is_ostree is defined", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] ***
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.020)       0:00:07.006 *********
ok: [managed_node1] => (item=RedHat.yml) => { "ansible_facts": { "__ha_cluster_cloud_agents_packages": [], "__ha_cluster_fence_agent_packages_default": "{{ ['fence-agents-all'] + (['fence-virt'] if ansible_architecture == 'x86_64' else []) }}", "__ha_cluster_fullstack_node_packages": [ "corosync", "libknet1-plugins-all", "resource-agents", "pacemaker", "openssl" ], "__ha_cluster_pcs_provider": "pcs-0.10", "__ha_cluster_qdevice_node_packages": [ "corosync-qdevice", "bash", "coreutils", "curl", "grep",
"nss-tools", "openssl", "sed" ], "__ha_cluster_repos": [], "__ha_cluster_role_essential_packages": [ "pcs", "corosync-qnetd" ], "__ha_cluster_sbd_packages": [ "sbd" ], "__ha_cluster_services": [ "corosync", "corosync-qdevice", "pacemaker" ] }, "ansible_included_var_files": [ "/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/RedHat.yml" ], "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" }
skipping: [managed_node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" }
ok: [managed_node1] => (item=CentOS_10.yml) => { "ansible_facts": { "__ha_cluster_cloud_agents_packages": [ "resource-agents-cloud", "fence-agents-aliyun", "fence-agents-aws", "fence-agents-azure-arm", "fence-agents-compute", "fence-agents-gce", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack" ], "__ha_cluster_repos": [ { "id": "highavailability", "name": "HighAvailability" }, { "id": "resilientstorage", "name": "ResilientStorage" } ] }, "ansible_included_var_files": [ "/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_10.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_10.yml" }
ok: [managed_node1] => (item=CentOS_10.yml) => { "ansible_facts": { "__ha_cluster_cloud_agents_packages": [ "resource-agents-cloud", "fence-agents-aliyun", "fence-agents-aws", "fence-agents-azure-arm", "fence-agents-compute", "fence-agents-gce", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack" ], "__ha_cluster_repos": [ { "id": "highavailability", "name": "HighAvailability" }, { "id": "resilientstorage", "name": "ResilientStorage" } ] }, "ansible_included_var_files": [
"/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_10.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_10.yml" }

TASK [fedora.linux_system_roles.ha_cluster : Set Linux Pacemaker shell specific variables] ***
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.043)       0:00:07.050 *********
ok: [managed_node1] => { "ansible_facts": {}, "ansible_included_var_files": [ "/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/shell_pcs.yml" ], "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Enable package repositories] ******
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.017)       0:00:07.067 *********
included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories] ***
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.023)       0:00:07.090 *********
ok: [managed_node1] => (item=RedHat.yml) => { "ansible_facts": { "__ha_cluster_enable_repo_tasks_file": "/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/RedHat.yml" }, "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" }
ok: [managed_node1] => (item=CentOS.yml) => { "ansible_facts": { "__ha_cluster_enable_repo_tasks_file": "/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml" }, "ansible_loop_var": "item", "changed": false, "item": "CentOS.yml" }
skipping: [managed_node1] => (item=CentOS_10.yml) => { "ansible_loop_var": "item", "changed": false,
"false_condition": "__ha_cluster_enable_repo_tasks_file_candidate is file", "item": "CentOS_10.yml", "skip_reason": "Conditional result was False" }
skipping: [managed_node1] => (item=CentOS_10.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__ha_cluster_enable_repo_tasks_file_candidate is file", "item": "CentOS_10.yml", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Run platform/version specific tasks to enable repositories] ***
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.041)       0:00:07.132 *********
included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : List active CentOS repositories] ***
Thursday 25 July 2024  08:24:47 -0400 (0:00:00.033)       0:00:07.165 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "dnf", "repolist" ], "delta": "0:00:00.197890", "end": "2024-07-25 08:24:48.005990", "rc": 0, "start": "2024-07-25 08:24:47.808100" }

STDOUT:

repo id              repo name
appstream            CentOS Stream 10 - AppStream
baseos               CentOS Stream 10 - BaseOS
beaker-client        Beaker Client - RedHatEnterpriseLinux9
beaker-harness       Beaker harness
beakerlib-libraries  Copr repo for beakerlib-libraries owned by bgoncalv
highavailability     CentOS Stream 10 - HighAvailability

TASK [fedora.linux_system_roles.ha_cluster : Enable CentOS repositories] *******
Thursday 25 July 2024  08:24:48 -0400 (0:00:00.534)       0:00:07.700 *********
skipping: [managed_node1] => (item={'id': 'highavailability', 'name': 'HighAvailability'}) => { "ansible_loop_var": "item", "changed": false, "false_condition": "item.id not in __ha_cluster_repolist.stdout", "item": { "id": "highavailability", "name": "HighAvailability" }, "skip_reason": "Conditional result was False" }
skipping: [managed_node1] => (item={'id': 'resilientstorage', 'name': 'ResilientStorage'}) => {
"ansible_loop_var": "item", "changed": false, "false_condition": "item.name != \"ResilientStorage\" or ha_cluster_enable_repos_resilient_storage", "item": { "id": "resilientstorage", "name": "ResilientStorage" }, "skip_reason": "Conditional result was False" }
skipping: [managed_node1] => { "changed": false }

MSG:

All items skipped

TASK [fedora.linux_system_roles.ha_cluster : Install role essential packages] ***
Thursday 25 July 2024  08:24:48 -0400 (0:00:00.021)       0:00:07.721 *********
ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] }

MSG:

Nothing to do
lsrpackages: corosync-qnetd pcs

TASK [fedora.linux_system_roles.ha_cluster : Check and prepare role variables] ***
Thursday 25 July 2024  08:24:48 -0400 (0:00:00.680)       0:00:08.402 *********
included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Discover cluster node names] ******
Thursday 25 July 2024  08:24:48 -0400 (0:00:00.038)       0:00:08.440 *********
ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_node_name": "localhost" }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Collect cluster node names] *******
Thursday 25 July 2024  08:24:48 -0400 (0:00:00.022)       0:00:08.462 *********
ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_all_node_names": [ "localhost" ] }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Fail if ha_cluster_node_options contains unknown or duplicate nodes] ***
Thursday 25 July 2024  08:24:48 -0400 (0:00:00.026)       0:00:08.489 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "(\n __nodes_from_options != (__nodes_from_options | unique)\n) or (\n __nodes_from_options | difference(__ha_cluster_all_node_names)\n)\n", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster :
Extract node options] *************
Thursday 25 July 2024  08:24:48 -0400 (0:00:00.021)       0:00:08.510 *********
ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_local_node": {} }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Fail if passwords are not specified] ***
Thursday 25 July 2024  08:24:48 -0400 (0:00:00.024)       0:00:08.535 *********
failed: [managed_node1] (item=ha_cluster_hacluster_password) => { "ansible_loop_var": "item", "changed": false, "item": "ha_cluster_hacluster_password" }

MSG:

ha_cluster_hacluster_password must be specified

TASK [Clean up test environment for qnetd] *************************************
Thursday 25 July 2024  08:24:48 -0400 (0:00:00.028)       0:00:08.563 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed] ***
Thursday 25 July 2024  08:24:48 -0400 (0:00:00.053)       0:00:08.617 *********
changed: [managed_node1] => { "changed": true, "rc": 0, "results": [ "Removed: corosync-qnetd-3.0.3-6.el10.x86_64" ] }

TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present] ***
Thursday 25 July 2024  08:24:50 -0400 (0:00:01.063)       0:00:09.681 *********
changed: [managed_node1] => { "changed": true, "path": "/etc/corosync/qnetd", "state": "absent" }

PLAY RECAP *********************************************************************
managed_node1              : ok=32   changed=5    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0

Thursday 25 July 2024  08:24:50 -0400 (0:00:00.366)       0:00:10.048 *********
===============================================================================
fedora.linux_system_roles.ha_cluster : Set up qnetd --------------------- 1.57s
fedora.linux_system_roles.ha_cluster : Install qnetd packages ----------- 1.56s
Gathering Facts --------------------------------------------------------- 1.12s
fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed --- 1.06s
fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed --- 0.79s
fedora.linux_system_roles.ha_cluster : Install role essential packages --- 0.68s
fedora.linux_system_roles.ha_cluster : List active CentOS repositories --- 0.53s
fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present --- 0.43s
fedora.linux_system_roles.ha_cluster : Check if system is ostree -------- 0.42s
fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present --- 0.37s
Create /etc/corosync/qnetd_backup directory ----------------------------- 0.35s
Back up qnetd settings -------------------------------------------------- 0.35s
Run HA Cluster role ----------------------------------------------------- 0.06s
Clean up test environment for qnetd ------------------------------------- 0.05s
fedora.linux_system_roles.ha_cluster : Set platform/version specific variables --- 0.04s
fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories --- 0.04s
fedora.linux_system_roles.ha_cluster : Check and prepare role variables --- 0.04s
fedora.linux_system_roles.ha_cluster : Run platform/version specific tasks to enable repositories --- 0.03s
Set up test environment for qnetd --------------------------------------- 0.03s
Clean up test environment for qnetd ------------------------------------- 0.03s
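Note on the failure: the run aborts at "Fail if passwords are not specified" because the test playbook was invoked without `ha_cluster_hacluster_password`, a mandatory variable of the ha_cluster role. A minimal sketch of supplying it when invoking the role yourself (the playbook shape, cluster name, and vault variable name here are illustrative, not taken from this test run):

```yaml
# Hypothetical playbook fragment: defines the variable whose absence
# caused the failure above. In practice the value should come from
# Ansible Vault rather than plain text.
- hosts: managed_node1
  vars:
    ha_cluster_cluster_name: test-cluster                            # illustrative
    ha_cluster_hacluster_password: "{{ vault_hacluster_password }}"  # e.g. defined in an encrypted vars file
  roles:
    - fedora.linux_system_roles.ha_cluster
```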
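Note on the deprecation warning: it is triggered by the plural `ANSIBLE_COLLECTIONS_PATHS` environment variable being set in the CI environment. One way to silence it, as the warning text suggests, is to export the singular form instead (the path below is a placeholder, not the one used in this run):

```shell
# Replace the deprecated plural env var with the singular form.
unset ANSIBLE_COLLECTIONS_PATHS
export ANSIBLE_COLLECTIONS_PATH=/path/to/collections   # illustrative path
```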