[DEPRECATION WARNING]: ANSIBLE_COLLECTIONS_PATHS option, does not fit var naming standard, use the singular form ANSIBLE_COLLECTIONS_PATH instead. This feature will be removed from ansible-core in version 2.19. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-playbook [core 2.17.2]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/tmp.kBSwYGXc3s
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.4 (main, Jul 17 2024, 00:00:00) [GCC 14.1.1 20240607 (Red Hat 14.1.1-5)] (/usr/bin/python3.12)
  jinja version = 3.1.4
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
redirecting (type: callback) ansible.builtin.debug to ansible.posix.debug
redirecting (type: callback) ansible.builtin.debug to ansible.posix.debug
redirecting (type: callback) ansible.builtin.profile_tasks to ansible.posix.profile_tasks
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_quadlet_demo.yml ***********************************************
1 plays in /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml

PLAY [Deploy the quadlet demo app] *********************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:3
Saturday 27 July 2024 12:36:41 -0400 (0:00:00.021) 0:00:00.021 *********
[WARNING]: Platform linux on host managed_node1 is using the discovered Python interpreter at /usr/bin/python3.12, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html for more information.
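The interpreter-discovery warning above can be avoided by pinning the interpreter for the host rather than relying on discovery. A minimal inventory sketch, assuming a YAML-format inventory; the hostname managed_node1 and the interpreter path are taken from the log, the surrounding structure is illustrative:

```yaml
# Hypothetical inventory snippet: pin the discovered interpreter so a future
# Python install on managed_node1 cannot change which interpreter Ansible uses.
all:
  hosts:
    managed_node1:
      ansible_python_interpreter: /usr/bin/python3.12
```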
ok: [managed_node1] TASK [Generate certificates] *************************************************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:33 Saturday 27 July 2024 12:36:42 -0400 (0:00:01.230) 0:00:01.251 ********* included: fedora.linux_system_roles.certificate for managed_node1 TASK [fedora.linux_system_roles.certificate : Set version specific variables] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:2 Saturday 27 July 2024 12:36:42 -0400 (0:00:00.041) 0:00:01.293 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml for managed_node1 TASK [fedora.linux_system_roles.certificate : Ensure ansible_facts used by role] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:2 Saturday 27 July 2024 12:36:42 -0400 (0:00:00.024) 0:00:01.317 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__certificate_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.certificate : Check if system is ostree] ******* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:10 Saturday 27 July 2024 12:36:42 -0400 (0:00:00.024) 0:00:01.341 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.certificate : Set flag to indicate system is ostree] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:15 Saturday 27 July 2024 12:36:43 -0400 (0:00:00.465) 0:00:01.807 ********* ok: [managed_node1] => { "ansible_facts": { "__certificate_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.certificate : Set platform/version specific variables] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:19 Saturday 27 July 2024 12:36:43 -0400 (0:00:00.025) 0:00:01.833 ********* skipping: [managed_node1] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed_node1] => (item=CentOS_10.yml) => { "ansible_facts": { "__certificate_certmonger_packages": [ "certmonger", "python3-packaging" ] }, "ansible_included_var_files": [ "/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/vars/CentOS_10.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_10.yml" } ok: [managed_node1] => (item=CentOS_10.yml) => { "ansible_facts": { "__certificate_certmonger_packages": [ "certmonger", "python3-packaging" ] }, "ansible_included_var_files": [ "/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/vars/CentOS_10.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_10.yml" } TASK [fedora.linux_system_roles.certificate : Ensure certificate role dependencies are installed] *** task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:5 Saturday 27 July 2024 12:36:43 -0400 (0:00:00.045) 0:00:01.878 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: python3-cryptography python3-dbus python3-pyasn1 TASK [fedora.linux_system_roles.certificate : Ensure provider packages are installed] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:23 Saturday 27 July 2024 12:36:44 -0400 (0:00:00.848) 0:00:02.727 ********* ok: [managed_node1] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: certmonger python3-packaging TASK [fedora.linux_system_roles.certificate : Ensure pre-scripts hooks directory exists] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:35 Saturday 27 July 2024 12:36:45 -0400 (0:00:00.741) 0:00:03.468 ********* ok: [managed_node1] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": false, "gid": 0, "group": "root", "mode": "0700", "owner": "root", "path": "/etc/certmonger//pre-scripts", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [fedora.linux_system_roles.certificate : Ensure post-scripts hooks directory exists] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:61 Saturday 27 July 2024 12:36:45 -0400 (0:00:00.499) 0:00:03.967 ********* ok: [managed_node1] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": false, "gid": 0, "group": "root", "mode": "0700", "owner": "root", "path": "/etc/certmonger//post-scripts", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [fedora.linux_system_roles.certificate : Ensure provider service is running] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:90 Saturday 27 July 2024 12:36:45 -0400 (0:00:00.405) 0:00:04.373 ********* ok: [managed_node1] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": false, "enabled": true, "name": "certmonger", "state": "started", "status": { "AccessSELinuxContext": "system_u:object_r:certmonger_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2024-07-27 12:28:50 EDT", "ActiveEnterTimestampMonotonic": "281853369", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-journald.socket system.slice sysinit.target network.target syslog.target dbus.socket basic.target dbus-broker.service", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2024-07-27 12:28:50 EDT", "AssertTimestampMonotonic": "281823849", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedorahosted.certmonger", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "469507000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", 
"CanFreeze": "yes", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2024-07-27 12:28:50 EDT", "ConditionTimestampMonotonic": "281823845", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/certmonger.service", "ControlGroupId": "4563", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "Certificate monitoring and PKI enrollment", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3700936704", "EffectiveMemoryMax": "3700936704", "EffectiveTasksMax": "22402", "EnvironmentFiles": "/etc/sysconfig/certmonger (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestamp": "Sat 2024-07-27 12:28:50 EDT", "ExecMainHandoffTimestampMonotonic": "281840531", "ExecMainPID": "7402", "ExecMainStartTimestamp": "Sat 2024-07-27 12:28:50 EDT", "ExecMainStartTimestampMonotonic": "281825365", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/sbin/certmonger ; argv[]=/usr/sbin/certmonger -S -p /run/certmonger.pid -n $OPTS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/sbin/certmonger ; argv[]=/usr/sbin/certmonger -S -p /run/certmonger.pid -n $OPTS ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/certmonger.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "certmonger.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2024-07-27 12:28:50 EDT", "InactiveExitTimestampMonotonic": "281828545", "InvocationID": "4f85e32d2bea4eae8acfa6363bc57cee", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", 
"KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "14001", "LimitNPROCSoft": "14001", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14001", "LimitSIGPENDINGSoft": "14001", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "7402", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3159982080", "MemoryCurrent": "2134016", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "9224192", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "0", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "0", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "certmonger.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "PIDFile": "/run/certmonger.pid", "PartOf": "dbus-broker.service", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "sysinit.target system.slice dbus.socket", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", 
"RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2024-07-27 12:36:27 EDT", "StateChangeTimestampMonotonic": "738844758", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "1", "TasksMax": "22402", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.certificate : Ensure certificate requests] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:101 Saturday 27 July 2024 12:36:46 -0400 (0:00:00.772) 0:00:05.146 ********* changed: [managed_node1] => (item={'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}) => { "ansible_loop_var": "item", "changed": true, "item": { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } } MSG: Certificate requested (new). 
TASK [fedora.linux_system_roles.certificate : Slurp the contents of the files] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:152 Saturday 27 July 2024 12:36:47 -0400 (0:00:00.884) 0:00:06.030 ********* ok: [managed_node1] => (item=['cert', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURnakNDQW1xZ0F3SUJBZ0lRSUNrVFdGYWFSVVdPblVjdmNRVm0yVEFOQmdrcWhraUc5dzBCQVFzRkFEQlEKTVNBd0hnWURWUVFEREJkTWIyTmhiQ0JUYVdkdWFXNW5JRUYxZEdodmNtbDBlVEVzTUNvR0ExVUVBd3dqTWpBeQpPVEV6TlRndE5UWTVZVFExTkRVdE9HVTVaRFEzTW1ZdE56RXdOVFkyWkRjd0hoY05NalF3TnpJM01UWXpOalEzCldoY05NalV3TnpJM01UWXlPRFV3V2pBVU1SSXdFQVlEVlFRREV3bHNiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ2ttUFUwQldMcDBydWxSQXdCMFlUWW5sYmoxY1dLd2ZrdApZZjNjQVZUQWZFZ1JRUlZqMFVRTTcvdUd3ZGN2QXZodFYxblFuK21ZdW1qZ3pkcG9udWd4MjE5NE9ncCs3TGFxClgyNFNZNGxFc0pHRjhmaktRU0hPVlpSQ0pYczlPRldzR3djdjg2aEJYWXd0NUNsYTZjQVhHK3JZQ1owc1Q0S1gKbWx5WWZCdjlDZ2lhNDZrUkxxUE5lYXg3cUw2Wi9wZVozREErTit0RU03ZVJjTGwrWTFwcWE2TmY1QlpyOTcycQp4a1YwaXpkRHJHeDdyQ1k4Sm1ZOFZtMTYrRTgvcnFEWDA1MUN5N1QvY3FGN3cvWXV3dk9UN1dueFZLdGJNbHBqClk1TlFZOG1ieTVIMFVlb3cwZzNXUnNHeEdrN3Bubm54bHhUQW9ObjNUME91djJPdWVSYTVBZ01CQUFHamdaTXcKZ1pBd0N3WURWUjBQQkFRREFnV2dNQlFHQTFVZEVRUU5NQXVDQ1d4dlkyRnNhRzl6ZERBZEJnTlZIU1VFRmpBVQpCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RUZnUVVaOHFmCmtjN3BTaGU0SU02NjdZL2liK3NuM0pBd0h3WURWUjBqQkJnd0ZvQVU5djJuSks2THlKVDU2bFZtQVhSbzAwVVUKaUlVd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGWS9aWm9mYW56RnJxNkxlS1JtbWI5TmJYNGdhNmx6WmFBMwp2SUcxK3VlY0N3NXhDYkEwM3dyeUpOYSs2VHhCUlA1dm1jNm9IN2tYZUZEcDQzMGNBZ2RtZXcyOVYwcnYzbndICkViN3VKa1NONi9Pc256VE51ZnpHQWc3T3NuQmlMOWtiQngwR05HU0o0Rm1DTlA1Q1dEVWFKZnZxOWkrSlpGdlcKaEJub00wdGJyQXE1cUJiazZ6Z25NQ0NSekJGbE81MXdiS0V3U08rVEQrMHV5OVYrUEMvMnRnT1Z4bldaNzFQaQpzTkFnWjNYQmozL0JqMlJRS3E1SXlpbmRrdURPM0FRcm9jTFM1ZDl2Z2JEWXdjMjNQZGxQK2JHTkEvQjNQUGllCkVBc3I1eDZmQzhOQ09kNFMvc1hLUEh4aU0veEdzczhjZ294ZC9vU2JBa1kwMWNOMFhNMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": [ "cert", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/certs/quadlet_demo.crt" } ok: [managed_node1] => (item=['key', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": 
"LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktnd2dnU2tBZ0VBQW9JQkFRQ2ttUFUwQldMcDBydWwKUkF3QjBZVFlubGJqMWNXS3dma3RZZjNjQVZUQWZFZ1JRUlZqMFVRTTcvdUd3ZGN2QXZodFYxblFuK21ZdW1qZwp6ZHBvbnVneDIxOTRPZ3ArN0xhcVgyNFNZNGxFc0pHRjhmaktRU0hPVlpSQ0pYczlPRldzR3djdjg2aEJYWXd0CjVDbGE2Y0FYRytyWUNaMHNUNEtYbWx5WWZCdjlDZ2lhNDZrUkxxUE5lYXg3cUw2Wi9wZVozREErTit0RU03ZVIKY0xsK1kxcHFhNk5mNUJacjk3MnF4a1YwaXpkRHJHeDdyQ1k4Sm1ZOFZtMTYrRTgvcnFEWDA1MUN5N1QvY3FGNwp3L1l1d3ZPVDdXbnhWS3RiTWxwalk1TlFZOG1ieTVIMFVlb3cwZzNXUnNHeEdrN3Bubm54bHhUQW9ObjNUME91CnYyT3VlUmE1QWdNQkFBRUNnZ0VBSTN2b0xMbFdqQ01HbWdmVDhOWnE1Y29vNEVBM01JVkZ3eTlqYTNrTC9OMHUKS2k0V1B1a2Yyd3duZFBNNEFTWUtTWVF4MUNTTlZ3UWsxUVg3NWw4UG5xUDA3blhReW5FY3Boa2hvU3c5TFFaRgpzUk1ydCtxWHI2UktiSUlwRWRjaHZSTXNsdjFYMGhPcFEwRnpXdXFXbzBTOCthc0U2OGNPVjhHSzRjS3J3LzU3CnJEWk1IRFU5a1F2dUxlanJEbDhxeGFtaStReFM3YWhwUzY2RHhPZTRNZnpaRlhYdlBDNm1PQlhsTVpMQ0R3aUQKOVA3MG5EcU1UOUhuajNpWCtFZ1ppU21iZjFIK2ptWVB2YUQ3bFdseCtkRi9BUDh2UkJDVmRhS0VnUVRKYkJxbwpRNjZZcDBoTGtNN1M2ajlzSjJ3ZElvMmVKei9BUWcrKzF4TXYrRHhMU1FLQmdRRFY4dEFZUWVSOXR2SkFaODRBCmgrMmE2WU5nSmtGYmV1ejJuTnd2RFRzK2J0SDlvS0lCZU5FK3cvcGFDVFQxVGs0NEdDa211V2k2WDhuUjFSWE4KMVpBTDZMQTNaS3V4aS9ld0d1azBIZkZhUkdmN0tiRkNqaDlRYTVaY05nUFN4S1U3VFdSNXVDZ2Z6ZmpMcjlXOApWQWprSjBoZUZQV3BQNi9weHFrRjg2NndaUUtCZ1FERTh2Ym5hNTRVK21POXNMUU1rVkNjV2Qrc1VwTHpYdTF0CkNKK3RFZFNvbUQzY3dJUmdqMW5OZG9rbEFObWU5Y2lNTDZNdmJ1VzBscUd4TFJSMk9CQTNaYUwyOWNvRllmdmMKSTVJV0d2dXppdkcrcENnNFNDYzBsTEt0UnZGOXhMK0JFbzh0Rm1UbHk1ajBBRiszblRkL3cxcW1oMUtxOXBaYgpVR2pLWGEzbHhRS0JnUUNHb2hMemdOdWhoTE96ZGQ4N2xFNGdVdHdhY0ZobWtkZDJaVVZsMG9TNmlCQmE4MitmClQ0RVZaMHd1eG1adUM4WExKT0VZZmtwNkpmY2h0VjdRTlprODlVT1d5Q0lIUzFZbG12bXZrendqR3JMNGFjY2oKWTc0dTVGVXRWOHhXSU9yOWczazc0M2hVYzFBaUZWZUIrTHZUbnlpNkU2UjN5aDBRRnJTY2l6a2R4UUtCZ1FDMgpmRGx5TERFSlZ3ZmIxMEs4OGxneXhzT05NK1dkUXJQVGQwNGNXbzBrdWd0MzQ1bkVybzZTNWVZbE55aHROV2RoCkhUS2kzS3BTTGRBY0RwMEsvTjlwdE83T3pPY25IYWIwVHJFcGNrOE9DUXY5akxVSGtUTmljUFV0d0xJNXluZDIKN085azQzOFJ2UmczM2JEU3ZRV1RpRHNTV2dpckNGaEF1N3ljNVRnZjBRS0JnQmZMNnBocU1MR3FkWGtUYng1TAo2aU9VQzFLa3RZOGNwNDZONDZnQkNsSnU1NXVoWWNmVFpUUlRSTjh2TmwxeGhDcUdCemQzc2dzODM1b21uamNMCnQyZk9BRXhiZXhEOFZMVWhqbGhCNTdXM3Rpbys1d2VFZzhsd2ZNK3VVcGl5SlpYOGxabTZab3N3ZUFYRld6L0IKNmZQZXIrSVVVdVNRcnQyWExOUzFIczBxCi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K", "encoding": "base64", "item": [ "key", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/private/quadlet_demo.key" } ok: [managed_node1] => (item=['ca', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURnakNDQW1xZ0F3SUJBZ0lRSUNrVFdGYWFSVVdPblVjdmNRVm0yVEFOQmdrcWhraUc5dzBCQVFzRkFEQlEKTVNBd0hnWURWUVFEREJkTWIyTmhiQ0JUYVdkdWFXNW5JRUYxZEdodmNtbDBlVEVzTUNvR0ExVUVBd3dqTWpBeQpPVEV6TlRndE5UWTVZVFExTkRVdE9HVTVaRFEzTW1ZdE56RXdOVFkyWkRjd0hoY05NalF3TnpJM01UWXpOalEzCldoY05NalV3TnpJM01UWXlPRFV3V2pBVU1SSXdFQVlEVlFRREV3bHNiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ2ttUFUwQldMcDBydWxSQXdCMFlUWW5sYmoxY1dLd2ZrdApZZjNjQVZUQWZFZ1JRUlZqMFVRTTcvdUd3ZGN2QXZodFYxblFuK21ZdW1qZ3pkcG9udWd4MjE5NE9ncCs3TGFxClgyNFNZNGxFc0pHRjhmaktRU0hPVlpSQ0pYczlPRldzR3djdjg2aEJYWXd0NUNsYTZjQVhHK3JZQ1owc1Q0S1gKbWx5WWZCdjlDZ2lhNDZrUkxxUE5lYXg3cUw2Wi9wZVozREErTit0RU03ZVJjTGwrWTFwcWE2TmY1QlpyOTcycQp4a1YwaXpkRHJHeDdyQ1k4Sm1ZOFZtMTYrRTgvcnFEWDA1MUN5N1QvY3FGN3cvWXV3dk9UN1dueFZLdGJNbHBqClk1TlFZOG1ieTVIMFVlb3cwZzNXUnNHeEdrN3Bubm54bHhUQW9ObjNUME91djJPdWVSYTVBZ01CQUFHamdaTXcKZ1pBd0N3WURWUjBQQkFRREFnV2dNQlFHQTFVZEVRUU5NQXVDQ1d4dlkyRnNhRzl6ZERBZEJnTlZIU1VFRmpBVQpCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RUZnUVVaOHFmCmtjN3BTaGU0SU02NjdZL2liK3NuM0pBd0h3WURWUjBqQkJnd0ZvQVU5djJuSks2THlKVDU2bFZtQVhSbzAwVVUKaUlVd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGWS9aWm9mYW56RnJxNkxlS1JtbWI5TmJYNGdhNmx6WmFBMwp2SUcxK3VlY0N3NXhDYkEwM3dyeUpOYSs2VHhCUlA1dm1jNm9IN2tYZUZEcDQzMGNBZ2RtZXcyOVYwcnYzbndICkViN3VKa1NONi9Pc256VE51ZnpHQWc3T3NuQmlMOWtiQngwR05HU0o0Rm1DTlA1Q1dEVWFKZnZxOWkrSlpGdlcKaEJub00wdGJyQXE1cUJiazZ6Z25NQ0NSekJGbE81MXdiS0V3U08rVEQrMHV5OVYrUEMvMnRnT1Z4bldaNzFQaQpzTkFnWjNYQmozL0JqMlJRS3E1SXlpbmRrdURPM0FRcm9jTFM1ZDl2Z2JEWXdjMjNQZGxQK2JHTkEvQjNQUGllCkVBc3I1eDZmQzhOQ09kNFMvc1hLUEh4aU0veEdzczhjZ294ZC9vU2JBa1kwMWNOMFhNMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": [ "ca", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/certs/quadlet_demo.crt" } TASK [fedora.linux_system_roles.certificate : Create return data] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:160 Saturday 27 July 2024 12:36:48 -0400 (0:00:01.185) 0:00:07.216 ********* ok: [managed_node1] => { "ansible_facts": { "certificate_test_certs": { "quadlet_demo": { "ca": "/etc/pki/tls/certs/quadlet_demo.crt", "ca_content": "-----BEGIN 
CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQICkTWFaaRUWOnUcvcQVm2TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMjAy\nOTEzNTgtNTY5YTQ1NDUtOGU5ZDQ3MmYtNzEwNTY2ZDcwHhcNMjQwNzI3MTYzNjQ3\nWhcNMjUwNzI3MTYyODUwWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCkmPU0BWLp0rulRAwB0YTYnlbj1cWKwfkt\nYf3cAVTAfEgRQRVj0UQM7/uGwdcvAvhtV1nQn+mYumjgzdponugx2194Ogp+7Laq\nX24SY4lEsJGF8fjKQSHOVZRCJXs9OFWsGwcv86hBXYwt5Cla6cAXG+rYCZ0sT4KX\nmlyYfBv9Cgia46kRLqPNeax7qL6Z/peZ3DA+N+tEM7eRcLl+Y1pqa6Nf5BZr972q\nxkV0izdDrGx7rCY8JmY8Vm16+E8/rqDX051Cy7T/cqF7w/YuwvOT7WnxVKtbMlpj\nY5NQY8mby5H0Ueow0g3WRsGxGk7pnnnxlxTAoNn3T0Ouv2OueRa5AgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUZ8qf\nkc7pShe4IM667Y/ib+sn3JAwHwYDVR0jBBgwFoAU9v2nJK6LyJT56lVmAXRo00UU\niIUwDQYJKoZIhvcNAQELBQADggEBAFY/ZZofanzFrq6LeKRmmb9NbX4ga6lzZaA3\nvIG1+uecCw5xCbA03wryJNa+6TxBRP5vmc6oH7kXeFDp430cAgdmew29V0rv3nwH\nEb7uJkSN6/OsnzTNufzGAg7OsnBiL9kbBx0GNGSJ4FmCNP5CWDUaJfvq9i+JZFvW\nhBnoM0tbrAq5qBbk6zgnMCCRzBFlO51wbKEwSO+TD+0uy9V+PC/2tgOVxnWZ71Pi\nsNAgZ3XBj3/Bj2RQKq5IyindkuDO3AQrocLS5d9vgbDYwc23PdlP+bGNA/B3PPie\nEAsr5x6fC8NCOd4S/sXKPHxiM/xGss8cgoxd/oSbAkY01cN0XM0=\n-----END CERTIFICATE-----\n", "cert": "/etc/pki/tls/certs/quadlet_demo.crt", "cert_content": "-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQICkTWFaaRUWOnUcvcQVm2TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMjAy\nOTEzNTgtNTY5YTQ1NDUtOGU5ZDQ3MmYtNzEwNTY2ZDcwHhcNMjQwNzI3MTYzNjQ3\nWhcNMjUwNzI3MTYyODUwWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCkmPU0BWLp0rulRAwB0YTYnlbj1cWKwfkt\nYf3cAVTAfEgRQRVj0UQM7/uGwdcvAvhtV1nQn+mYumjgzdponugx2194Ogp+7Laq\nX24SY4lEsJGF8fjKQSHOVZRCJXs9OFWsGwcv86hBXYwt5Cla6cAXG+rYCZ0sT4KX\nmlyYfBv9Cgia46kRLqPNeax7qL6Z/peZ3DA+N+tEM7eRcLl+Y1pqa6Nf5BZr972q\nxkV0izdDrGx7rCY8JmY8Vm16+E8/rqDX051Cy7T/cqF7w/YuwvOT7WnxVKtbMlpj\nY5NQY8mby5H0Ueow0g3WRsGxGk7pnnnxlxTAoNn3T0Ouv2OueRa5AgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUZ8qf\nkc7pShe4IM667Y/ib+sn3JAwHwYDVR0jBBgwFoAU9v2nJK6LyJT56lVmAXRo00UU\niIUwDQYJKoZIhvcNAQELBQADggEBAFY/ZZofanzFrq6LeKRmmb9NbX4ga6lzZaA3\nvIG1+uecCw5xCbA03wryJNa+6TxBRP5vmc6oH7kXeFDp430cAgdmew29V0rv3nwH\nEb7uJkSN6/OsnzTNufzGAg7OsnBiL9kbBx0GNGSJ4FmCNP5CWDUaJfvq9i+JZFvW\nhBnoM0tbrAq5qBbk6zgnMCCRzBFlO51wbKEwSO+TD+0uy9V+PC/2tgOVxnWZ71Pi\nsNAgZ3XBj3/Bj2RQKq5IyindkuDO3AQrocLS5d9vgbDYwc23PdlP+bGNA/B3PPie\nEAsr5x6fC8NCOd4S/sXKPHxiM/xGss8cgoxd/oSbAkY01cN0XM0=\n-----END CERTIFICATE-----\n", "key": "/etc/pki/tls/private/quadlet_demo.key", "key_content": "-----BEGIN PRIVATE 
KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCkmPU0BWLp0rul\nRAwB0YTYnlbj1cWKwfktYf3cAVTAfEgRQRVj0UQM7/uGwdcvAvhtV1nQn+mYumjg\nzdponugx2194Ogp+7LaqX24SY4lEsJGF8fjKQSHOVZRCJXs9OFWsGwcv86hBXYwt\n5Cla6cAXG+rYCZ0sT4KXmlyYfBv9Cgia46kRLqPNeax7qL6Z/peZ3DA+N+tEM7eR\ncLl+Y1pqa6Nf5BZr972qxkV0izdDrGx7rCY8JmY8Vm16+E8/rqDX051Cy7T/cqF7\nw/YuwvOT7WnxVKtbMlpjY5NQY8mby5H0Ueow0g3WRsGxGk7pnnnxlxTAoNn3T0Ou\nv2OueRa5AgMBAAECggEAI3voLLlWjCMGmgfT8NZq5coo4EA3MIVFwy9ja3kL/N0u\nKi4WPukf2wwndPM4ASYKSYQx1CSNVwQk1QX75l8PnqP07nXQynEcphkhoSw9LQZF\nsRMrt+qXr6RKbIIpEdchvRMslv1X0hOpQ0FzWuqWo0S8+asE68cOV8GK4cKrw/57\nrDZMHDU9kQvuLejrDl8qxami+QxS7ahpS66DxOe4MfzZFXXvPC6mOBXlMZLCDwiD\n9P70nDqMT9Hnj3iX+EgZiSmbf1H+jmYPvaD7lWlx+dF/AP8vRBCVdaKEgQTJbBqo\nQ66Yp0hLkM7S6j9sJ2wdIo2eJz/AQg++1xMv+DxLSQKBgQDV8tAYQeR9tvJAZ84A\nh+2a6YNgJkFbeuz2nNwvDTs+btH9oKIBeNE+w/paCTT1Tk44GCkmuWi6X8nR1RXN\n1ZAL6LA3ZKuxi/ewGuk0HfFaRGf7KbFCjh9Qa5ZcNgPSxKU7TWR5uCgfzfjLr9W8\nVAjkJ0heFPWpP6/pxqkF866wZQKBgQDE8vbna54U+mO9sLQMkVCcWd+sUpLzXu1t\nCJ+tEdSomD3cwIRgj1nNdoklANme9ciML6MvbuW0lqGxLRR2OBA3ZaL29coFYfvc\nI5IWGvuzivG+pCg4SCc0lLKtRvF9xL+BEo8tFmTly5j0AF+3nTd/w1qmh1Kq9pZb\nUGjKXa3lxQKBgQCGohLzgNuhhLOzdd87lE4gUtwacFhmkdd2ZUVl0oS6iBBa82+f\nT4EVZ0wuxmZuC8XLJOEYfkp6JfchtV7QNZk89UOWyCIHS1YlmvmvkzwjGrL4accj\nY74u5FUtV8xWIOr9g3k743hUc1AiFVeB+LvTnyi6E6R3yh0QFrScizkdxQKBgQC2\nfDlyLDEJVwfb10K88lgyxsONM+WdQrPTd04cWo0kugt345nEro6S5eYlNyhtNWdh\nHTKi3KpSLdAcDp0K/N9ptO7OzOcnHab0TrEpck8OCQv9jLUHkTNicPUtwLI5ynd2\n7O9k438RvRg33bDSvQWTiDsSWgirCFhAu7yc5Tgf0QKBgBfL6phqMLGqdXkTbx5L\n6iOUC1KktY8cp46N46gBClJu55uhYcfTZTRTRN8vNl1xhCqGBzd3sgs835omnjcL\nt2fOAExbexD8VLUhjlhB57W3tio+5weEg8lwfM+uUpiyJZX8lZm6ZosweAXFWz/B\n6fPer+IUUuSQrt2XLNS1Hs0q\n-----END PRIVATE KEY-----\n" } } }, "changed": false } TASK [fedora.linux_system_roles.certificate : Stop tracking certificates] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:176 Saturday 27 July 2024 12:36:48 -0400 (0:00:00.032) 0:00:07.248 ********* ok: [managed_node1] => (item={'cert': '/etc/pki/tls/certs/quadlet_demo.crt', 'cert_content': '-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQICkTWFaaRUWOnUcvcQVm2TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMjAy\nOTEzNTgtNTY5YTQ1NDUtOGU5ZDQ3MmYtNzEwNTY2ZDcwHhcNMjQwNzI3MTYzNjQ3\nWhcNMjUwNzI3MTYyODUwWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCkmPU0BWLp0rulRAwB0YTYnlbj1cWKwfkt\nYf3cAVTAfEgRQRVj0UQM7/uGwdcvAvhtV1nQn+mYumjgzdponugx2194Ogp+7Laq\nX24SY4lEsJGF8fjKQSHOVZRCJXs9OFWsGwcv86hBXYwt5Cla6cAXG+rYCZ0sT4KX\nmlyYfBv9Cgia46kRLqPNeax7qL6Z/peZ3DA+N+tEM7eRcLl+Y1pqa6Nf5BZr972q\nxkV0izdDrGx7rCY8JmY8Vm16+E8/rqDX051Cy7T/cqF7w/YuwvOT7WnxVKtbMlpj\nY5NQY8mby5H0Ueow0g3WRsGxGk7pnnnxlxTAoNn3T0Ouv2OueRa5AgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUZ8qf\nkc7pShe4IM667Y/ib+sn3JAwHwYDVR0jBBgwFoAU9v2nJK6LyJT56lVmAXRo00UU\niIUwDQYJKoZIhvcNAQELBQADggEBAFY/ZZofanzFrq6LeKRmmb9NbX4ga6lzZaA3\nvIG1+uecCw5xCbA03wryJNa+6TxBRP5vmc6oH7kXeFDp430cAgdmew29V0rv3nwH\nEb7uJkSN6/OsnzTNufzGAg7OsnBiL9kbBx0GNGSJ4FmCNP5CWDUaJfvq9i+JZFvW\nhBnoM0tbrAq5qBbk6zgnMCCRzBFlO51wbKEwSO+TD+0uy9V+PC/2tgOVxnWZ71Pi\nsNAgZ3XBj3/Bj2RQKq5IyindkuDO3AQrocLS5d9vgbDYwc23PdlP+bGNA/B3PPie\nEAsr5x6fC8NCOd4S/sXKPHxiM/xGss8cgoxd/oSbAkY01cN0XM0=\n-----END CERTIFICATE-----\n', 'key': '/etc/pki/tls/private/quadlet_demo.key', 'key_content': '-----BEGIN PRIVATE 
KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCkmPU0BWLp0rul\nRAwB0YTYnlbj1cWKwfktYf3cAVTAfEgRQRVj0UQM7/uGwdcvAvhtV1nQn+mYumjg\nzdponugx2194Ogp+7LaqX24SY4lEsJGF8fjKQSHOVZRCJXs9OFWsGwcv86hBXYwt\n5Cla6cAXG+rYCZ0sT4KXmlyYfBv9Cgia46kRLqPNeax7qL6Z/peZ3DA+N+tEM7eR\ncLl+Y1pqa6Nf5BZr972qxkV0izdDrGx7rCY8JmY8Vm16+E8/rqDX051Cy7T/cqF7\nw/YuwvOT7WnxVKtbMlpjY5NQY8mby5H0Ueow0g3WRsGxGk7pnnnxlxTAoNn3T0Ou\nv2OueRa5AgMBAAECggEAI3voLLlWjCMGmgfT8NZq5coo4EA3MIVFwy9ja3kL/N0u\nKi4WPukf2wwndPM4ASYKSYQx1CSNVwQk1QX75l8PnqP07nXQynEcphkhoSw9LQZF\nsRMrt+qXr6RKbIIpEdchvRMslv1X0hOpQ0FzWuqWo0S8+asE68cOV8GK4cKrw/57\nrDZMHDU9kQvuLejrDl8qxami+QxS7ahpS66DxOe4MfzZFXXvPC6mOBXlMZLCDwiD\n9P70nDqMT9Hnj3iX+EgZiSmbf1H+jmYPvaD7lWlx+dF/AP8vRBCVdaKEgQTJbBqo\nQ66Yp0hLkM7S6j9sJ2wdIo2eJz/AQg++1xMv+DxLSQKBgQDV8tAYQeR9tvJAZ84A\nh+2a6YNgJkFbeuz2nNwvDTs+btH9oKIBeNE+w/paCTT1Tk44GCkmuWi6X8nR1RXN\n1ZAL6LA3ZKuxi/ewGuk0HfFaRGf7KbFCjh9Qa5ZcNgPSxKU7TWR5uCgfzfjLr9W8\nVAjkJ0heFPWpP6/pxqkF866wZQKBgQDE8vbna54U+mO9sLQMkVCcWd+sUpLzXu1t\nCJ+tEdSomD3cwIRgj1nNdoklANme9ciML6MvbuW0lqGxLRR2OBA3ZaL29coFYfvc\nI5IWGvuzivG+pCg4SCc0lLKtRvF9xL+BEo8tFmTly5j0AF+3nTd/w1qmh1Kq9pZb\nUGjKXa3lxQKBgQCGohLzgNuhhLOzdd87lE4gUtwacFhmkdd2ZUVl0oS6iBBa82+f\nT4EVZ0wuxmZuC8XLJOEYfkp6JfchtV7QNZk89UOWyCIHS1YlmvmvkzwjGrL4accj\nY74u5FUtV8xWIOr9g3k743hUc1AiFVeB+LvTnyi6E6R3yh0QFrScizkdxQKBgQC2\nfDlyLDEJVwfb10K88lgyxsONM+WdQrPTd04cWo0kugt345nEro6S5eYlNyhtNWdh\nHTKi3KpSLdAcDp0K/N9ptO7OzOcnHab0TrEpck8OCQv9jLUHkTNicPUtwLI5ynd2\n7O9k438RvRg33bDSvQWTiDsSWgirCFhAu7yc5Tgf0QKBgBfL6phqMLGqdXkTbx5L\n6iOUC1KktY8cp46N46gBClJu55uhYcfTZTRTRN8vNl1xhCqGBzd3sgs835omnjcL\nt2fOAExbexD8VLUhjlhB57W3tio+5weEg8lwfM+uUpiyJZX8lZm6ZosweAXFWz/B\n6fPer+IUUuSQrt2XLNS1Hs0q\n-----END PRIVATE KEY-----\n', 'ca': '/etc/pki/tls/certs/quadlet_demo.crt', 'ca_content': '-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQICkTWFaaRUWOnUcvcQVm2TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMjAy\nOTEzNTgtNTY5YTQ1NDUtOGU5ZDQ3MmYtNzEwNTY2ZDcwHhcNMjQwNzI3MTYzNjQ3\nWhcNMjUwNzI3MTYyODUwWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCkmPU0BWLp0rulRAwB0YTYnlbj1cWKwfkt\nYf3cAVTAfEgRQRVj0UQM7/uGwdcvAvhtV1nQn+mYumjgzdponugx2194Ogp+7Laq\nX24SY4lEsJGF8fjKQSHOVZRCJXs9OFWsGwcv86hBXYwt5Cla6cAXG+rYCZ0sT4KX\nmlyYfBv9Cgia46kRLqPNeax7qL6Z/peZ3DA+N+tEM7eRcLl+Y1pqa6Nf5BZr972q\nxkV0izdDrGx7rCY8JmY8Vm16+E8/rqDX051Cy7T/cqF7w/YuwvOT7WnxVKtbMlpj\nY5NQY8mby5H0Ueow0g3WRsGxGk7pnnnxlxTAoNn3T0Ouv2OueRa5AgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUZ8qf\nkc7pShe4IM667Y/ib+sn3JAwHwYDVR0jBBgwFoAU9v2nJK6LyJT56lVmAXRo00UU\niIUwDQYJKoZIhvcNAQELBQADggEBAFY/ZZofanzFrq6LeKRmmb9NbX4ga6lzZaA3\nvIG1+uecCw5xCbA03wryJNa+6TxBRP5vmc6oH7kXeFDp430cAgdmew29V0rv3nwH\nEb7uJkSN6/OsnzTNufzGAg7OsnBiL9kbBx0GNGSJ4FmCNP5CWDUaJfvq9i+JZFvW\nhBnoM0tbrAq5qBbk6zgnMCCRzBFlO51wbKEwSO+TD+0uy9V+PC/2tgOVxnWZ71Pi\nsNAgZ3XBj3/Bj2RQKq5IyindkuDO3AQrocLS5d9vgbDYwc23PdlP+bGNA/B3PPie\nEAsr5x6fC8NCOd4S/sXKPHxiM/xGss8cgoxd/oSbAkY01cN0XM0=\n-----END CERTIFICATE-----\n'}) => { "ansible_loop_var": "item", "changed": false, "cmd": [ "getcert", "stop-tracking", "-f", "/etc/pki/tls/certs/quadlet_demo.crt" ], "delta": "0:00:00.029170", "end": "2024-07-27 12:36:49.302431", "item": { "ca": "/etc/pki/tls/certs/quadlet_demo.crt", "ca_content": "-----BEGIN 
CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQICkTWFaaRUWOnUcvcQVm2TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMjAy\nOTEzNTgtNTY5YTQ1NDUtOGU5ZDQ3MmYtNzEwNTY2ZDcwHhcNMjQwNzI3MTYzNjQ3\nWhcNMjUwNzI3MTYyODUwWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCkmPU0BWLp0rulRAwB0YTYnlbj1cWKwfkt\nYf3cAVTAfEgRQRVj0UQM7/uGwdcvAvhtV1nQn+mYumjgzdponugx2194Ogp+7Laq\nX24SY4lEsJGF8fjKQSHOVZRCJXs9OFWsGwcv86hBXYwt5Cla6cAXG+rYCZ0sT4KX\nmlyYfBv9Cgia46kRLqPNeax7qL6Z/peZ3DA+N+tEM7eRcLl+Y1pqa6Nf5BZr972q\nxkV0izdDrGx7rCY8JmY8Vm16+E8/rqDX051Cy7T/cqF7w/YuwvOT7WnxVKtbMlpj\nY5NQY8mby5H0Ueow0g3WRsGxGk7pnnnxlxTAoNn3T0Ouv2OueRa5AgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUZ8qf\nkc7pShe4IM667Y/ib+sn3JAwHwYDVR0jBBgwFoAU9v2nJK6LyJT56lVmAXRo00UU\niIUwDQYJKoZIhvcNAQELBQADggEBAFY/ZZofanzFrq6LeKRmmb9NbX4ga6lzZaA3\nvIG1+uecCw5xCbA03wryJNa+6TxBRP5vmc6oH7kXeFDp430cAgdmew29V0rv3nwH\nEb7uJkSN6/OsnzTNufzGAg7OsnBiL9kbBx0GNGSJ4FmCNP5CWDUaJfvq9i+JZFvW\nhBnoM0tbrAq5qBbk6zgnMCCRzBFlO51wbKEwSO+TD+0uy9V+PC/2tgOVxnWZ71Pi\nsNAgZ3XBj3/Bj2RQKq5IyindkuDO3AQrocLS5d9vgbDYwc23PdlP+bGNA/B3PPie\nEAsr5x6fC8NCOd4S/sXKPHxiM/xGss8cgoxd/oSbAkY01cN0XM0=\n-----END CERTIFICATE-----\n", "cert": "/etc/pki/tls/certs/quadlet_demo.crt", "cert_content": "-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQICkTWFaaRUWOnUcvcQVm2TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMjAy\nOTEzNTgtNTY5YTQ1NDUtOGU5ZDQ3MmYtNzEwNTY2ZDcwHhcNMjQwNzI3MTYzNjQ3\nWhcNMjUwNzI3MTYyODUwWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCkmPU0BWLp0rulRAwB0YTYnlbj1cWKwfkt\nYf3cAVTAfEgRQRVj0UQM7/uGwdcvAvhtV1nQn+mYumjgzdponugx2194Ogp+7Laq\nX24SY4lEsJGF8fjKQSHOVZRCJXs9OFWsGwcv86hBXYwt5Cla6cAXG+rYCZ0sT4KX\nmlyYfBv9Cgia46kRLqPNeax7qL6Z/peZ3DA+N+tEM7eRcLl+Y1pqa6Nf5BZr972q\nxkV0izdDrGx7rCY8JmY8Vm16+E8/rqDX051Cy7T/cqF7w/YuwvOT7WnxVKtbMlpj\nY5NQY8mby5H0Ueow0g3WRsGxGk7pnnnxlxTAoNn3T0Ouv2OueRa5AgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUZ8qf\nkc7pShe4IM667Y/ib+sn3JAwHwYDVR0jBBgwFoAU9v2nJK6LyJT56lVmAXRo00UU\niIUwDQYJKoZIhvcNAQELBQADggEBAFY/ZZofanzFrq6LeKRmmb9NbX4ga6lzZaA3\nvIG1+uecCw5xCbA03wryJNa+6TxBRP5vmc6oH7kXeFDp430cAgdmew29V0rv3nwH\nEb7uJkSN6/OsnzTNufzGAg7OsnBiL9kbBx0GNGSJ4FmCNP5CWDUaJfvq9i+JZFvW\nhBnoM0tbrAq5qBbk6zgnMCCRzBFlO51wbKEwSO+TD+0uy9V+PC/2tgOVxnWZ71Pi\nsNAgZ3XBj3/Bj2RQKq5IyindkuDO3AQrocLS5d9vgbDYwc23PdlP+bGNA/B3PPie\nEAsr5x6fC8NCOd4S/sXKPHxiM/xGss8cgoxd/oSbAkY01cN0XM0=\n-----END CERTIFICATE-----\n", "key": "/etc/pki/tls/private/quadlet_demo.key", "key_content": "-----BEGIN PRIVATE 
KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCkmPU0BWLp0rul\nRAwB0YTYnlbj1cWKwfktYf3cAVTAfEgRQRVj0UQM7/uGwdcvAvhtV1nQn+mYumjg\nzdponugx2194Ogp+7LaqX24SY4lEsJGF8fjKQSHOVZRCJXs9OFWsGwcv86hBXYwt\n5Cla6cAXG+rYCZ0sT4KXmlyYfBv9Cgia46kRLqPNeax7qL6Z/peZ3DA+N+tEM7eR\ncLl+Y1pqa6Nf5BZr972qxkV0izdDrGx7rCY8JmY8Vm16+E8/rqDX051Cy7T/cqF7\nw/YuwvOT7WnxVKtbMlpjY5NQY8mby5H0Ueow0g3WRsGxGk7pnnnxlxTAoNn3T0Ou\nv2OueRa5AgMBAAECggEAI3voLLlWjCMGmgfT8NZq5coo4EA3MIVFwy9ja3kL/N0u\nKi4WPukf2wwndPM4ASYKSYQx1CSNVwQk1QX75l8PnqP07nXQynEcphkhoSw9LQZF\nsRMrt+qXr6RKbIIpEdchvRMslv1X0hOpQ0FzWuqWo0S8+asE68cOV8GK4cKrw/57\nrDZMHDU9kQvuLejrDl8qxami+QxS7ahpS66DxOe4MfzZFXXvPC6mOBXlMZLCDwiD\n9P70nDqMT9Hnj3iX+EgZiSmbf1H+jmYPvaD7lWlx+dF/AP8vRBCVdaKEgQTJbBqo\nQ66Yp0hLkM7S6j9sJ2wdIo2eJz/AQg++1xMv+DxLSQKBgQDV8tAYQeR9tvJAZ84A\nh+2a6YNgJkFbeuz2nNwvDTs+btH9oKIBeNE+w/paCTT1Tk44GCkmuWi6X8nR1RXN\n1ZAL6LA3ZKuxi/ewGuk0HfFaRGf7KbFCjh9Qa5ZcNgPSxKU7TWR5uCgfzfjLr9W8\nVAjkJ0heFPWpP6/pxqkF866wZQKBgQDE8vbna54U+mO9sLQMkVCcWd+sUpLzXu1t\nCJ+tEdSomD3cwIRgj1nNdoklANme9ciML6MvbuW0lqGxLRR2OBA3ZaL29coFYfvc\nI5IWGvuzivG+pCg4SCc0lLKtRvF9xL+BEo8tFmTly5j0AF+3nTd/w1qmh1Kq9pZb\nUGjKXa3lxQKBgQCGohLzgNuhhLOzdd87lE4gUtwacFhmkdd2ZUVl0oS6iBBa82+f\nT4EVZ0wuxmZuC8XLJOEYfkp6JfchtV7QNZk89UOWyCIHS1YlmvmvkzwjGrL4accj\nY74u5FUtV8xWIOr9g3k743hUc1AiFVeB+LvTnyi6E6R3yh0QFrScizkdxQKBgQC2\nfDlyLDEJVwfb10K88lgyxsONM+WdQrPTd04cWo0kugt345nEro6S5eYlNyhtNWdh\nHTKi3KpSLdAcDp0K/N9ptO7OzOcnHab0TrEpck8OCQv9jLUHkTNicPUtwLI5ynd2\n7O9k438RvRg33bDSvQWTiDsSWgirCFhAu7yc5Tgf0QKBgBfL6phqMLGqdXkTbx5L\n6iOUC1KktY8cp46N46gBClJu55uhYcfTZTRTRN8vNl1xhCqGBzd3sgs835omnjcL\nt2fOAExbexD8VLUhjlhB57W3tio+5weEg8lwfM+uUpiyJZX8lZm6ZosweAXFWz/B\n6fPer+IUUuSQrt2XLNS1Hs0q\n-----END PRIVATE KEY-----\n" }, "rc": 0, "start": "2024-07-27 12:36:49.273261" } STDOUT: Request "20240727163647" removed. 
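For reference, the cleanup performed here (together with the Remove files task that follows) reduces to two steps: untrack the certificate with certmonger, then delete the generated files. A minimal sketch; the getcert command and the file paths are taken verbatim from the log, but the task structure is an assumption, not the role's actual source:

```yaml
# Untrack the cert; the log shows this run reported changed: false.
- name: Stop tracking certificates
  ansible.builtin.command: getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt
  changed_when: false

# Remove the generated certificate and key (paths from the log output).
- name: Remove files
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  loop:
    - /etc/pki/tls/certs/quadlet_demo.crt
    - /etc/pki/tls/private/quadlet_demo.key
```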
TASK [fedora.linux_system_roles.certificate : Remove files] ******************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:181 Saturday 27 July 2024 12:36:49 -0400 (0:00:00.522) 0:00:07.771 ********* changed: [managed_node1] => (item=/etc/pki/tls/certs/quadlet_demo.crt) => { "ansible_loop_var": "item", "changed": true, "item": "/etc/pki/tls/certs/quadlet_demo.crt", "path": "/etc/pki/tls/certs/quadlet_demo.crt", "state": "absent" } changed: [managed_node1] => (item=/etc/pki/tls/private/quadlet_demo.key) => { "ansible_loop_var": "item", "changed": true, "item": "/etc/pki/tls/private/quadlet_demo.key", "path": "/etc/pki/tls/private/quadlet_demo.key", "state": "absent" } ok: [managed_node1] => (item=/etc/pki/tls/certs/quadlet_demo.crt) => { "ansible_loop_var": "item", "changed": false, "item": "/etc/pki/tls/certs/quadlet_demo.crt", "path": "/etc/pki/tls/certs/quadlet_demo.crt", "state": "absent" } TASK [Run the role] ************************************************************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:44 Saturday 27 July 2024 12:36:50 -0400 (0:00:01.139) 0:00:08.910 ********* included: fedora.linux_system_roles.podman for managed_node1 TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 27 July 2024 12:36:50 -0400 (0:00:00.060) 0:00:08.971 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 27 July 2024 12:36:50 -0400 (0:00:00.028) 0:00:08.999 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 27 July 2024 12:36:50 -0400 (0:00:00.026) 0:00:09.026 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 27 July 2024 12:36:50 -0400 (0:00:00.378) 0:00:09.404 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:20 Saturday 27 July 2024 12:36:51 -0400 (0:00:00.024) 0:00:09.429 ********* ok: [managed_node1] => (item=RedHat.yml) => { "ansible_facts": { "__podman_packages": [ "podman", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/vars/RedHat.yml" ], "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" } skipping: [managed_node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": 
false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed_node1] => (item=CentOS_10.yml) => { "ansible_facts": { "__podman_packages": [ "iptables-nft", "podman", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_10.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_10.yml" } ok: [managed_node1] => (item=CentOS_10.yml) => { "ansible_facts": { "__podman_packages": [ "iptables-nft", "podman", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_10.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_10.yml" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 27 July 2024 12:36:51 -0400 (0:00:00.048) 0:00:09.478 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 27 July 2024 12:36:52 -0400 (0:00:01.020) 0:00:10.498 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_use_copr | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 27 July 2024 12:36:52 -0400 (0:00:00.035) 0:00:10.534 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "(__podman_packages | difference(ansible_facts.packages))", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get podman version] ******************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:22 Saturday 27 July 2024 12:36:52 -0400 (0:00:00.039) 0:00:10.573 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.030394", "end": "2024-07-27 12:36:52.505778", "rc": 0, "start": "2024-07-27 12:36:52.475384" } STDOUT: podman version 5.1.2 TASK [fedora.linux_system_roles.podman : Set podman version] ******************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28 Saturday 27 July 2024 12:36:52 -0400 (0:00:00.411) 0:00:10.985 ********* ok: [managed_node1] => { "ansible_facts": { "podman_version": "5.1.2" }, "changed": false } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:32 Saturday 27 July 2024 12:36:52 -0400 (0:00:00.034) 0:00:11.019 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_version is version(\"4.2\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:39 Saturday 27 July 2024 12:36:52 -0400 (0:00:00.033) 0:00:11.053 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_version is version(\"4.4\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:49 Saturday 27 July 2024 12:36:52 -0400 (0:00:00.043) 0:00:11.096 ********* META: end_host conditional evaluated to False, continuing execution for managed_node1 skipping: [managed_node1] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed_node1" } MSG: end_host conditional evaluated to false, continuing execution for managed_node1 TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56 Saturday 27 July 2024 12:36:52 -0400 (0:00:00.044) 0:00:11.140 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2 Saturday 27 July 2024 12:36:52 -0400 (0:00:00.066) 0:00:11.207 ********* ok: [managed_node1] => { "ansible_facts": { "getent_passwd": { "root": [ "x", "0", "0", "Super User", "/root", "/bin/bash" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9 Saturday 27 July 2024 12:36:53 -0400 (0:00:00.536) 0:00:11.744 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16 Saturday 27 July 2024 12:36:53 -0400 (0:00:00.034) 0:00:11.779 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get group information] **************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28 Saturday 27 July 2024 12:36:53 -0400 (0:00:00.043) 0:00:11.822 ********* ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Set group name] *********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35 Saturday 27 July 2024 12:36:53 -0400 (0:00:00.397) 0:00:12.220 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 27 July 2024 12:36:53 -0400 
(0:00:00.041) 0:00:12.261 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097719.096431, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "86395ad7ce62834c967dc50f963a68f042029188", "ctime": 1722097690.6782997, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 4755377, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719187200.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15728, "uid": 0, "version": "2656437169", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.389) 0:00:12.650 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check group with getsubids] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.030) 0:00:12.681 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.029) 0:00:12.710 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.030) 0:00:12.741 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.028) 0:00:12.770 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.029) 0:00:12.800 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman 
: Fail if user not in subuid file] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.029) 0:00:12.829 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.030) 0:00:12.859 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set config file paths] **************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:62 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.029) 0:00:12.888 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle container.conf.d] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:71 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.059) 0:00:12.947 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.061) 0:00:13.009 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update container config file] ********* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.031) 0:00:13.040 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] ************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:74 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.031) 0:00:13.072 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.062) 0:00:13.134 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional 
result was False" } TASK [fedora.linux_system_roles.podman : Update registries config file] ******** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.030) 0:00:13.165 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle storage.conf] ****************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:77 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.061) 0:00:13.226 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:5 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.063) 0:00:13.290 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update storage config file] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:13 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.032) 0:00:13.322 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle policy.json] ******************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80 Saturday 27 July 2024 12:36:54 -0400 (0:00:00.031) 0:00:13.353 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:6 Saturday 27 July 2024 12:36:55 -0400 (0:00:00.066) 0:00:13.420 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:14 Saturday 27 July 2024 12:36:55 -0400 (0:00:00.032) 0:00:13.452 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get the existing policy.json] ********* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:19 Saturday 27 July 2024 12:36:55 -0400 (0:00:00.031) 0:00:13.484 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Write new policy.json file] *********** task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:25 Saturday 27 July 2024 12:36:55 -0400 (0:00:00.032) 0:00:13.516 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [Manage firewall for specified ports] ************************************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:86 Saturday 27 July 2024 12:36:55 -0400 (0:00:00.030) 0:00:13.547 ********* included: fedora.linux_system_roles.firewall for managed_node1 TASK [fedora.linux_system_roles.firewall : Setup firewalld] ******************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:2 Saturday 27 July 2024 12:36:55 -0400 (0:00:00.113) 0:00:13.660 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml for managed_node1 TASK [fedora.linux_system_roles.firewall : Ensure ansible_facts used by role] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:2 Saturday 27 July 2024 12:36:55 -0400 (0:00:00.058) 0:00:13.718 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check if system is ostree] ********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:10 Saturday 27 July 2024 12:36:55 -0400 (0:00:00.038) 0:00:13.757 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.firewall : Set flag to indicate system is ostree] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:15 Saturday 27 July 2024 12:36:55 -0400 (0:00:00.392) 0:00:14.149 ********* ok: [managed_node1] => { "ansible_facts": { "__firewall_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.firewall : Check if transactional-update exists in /sbin] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:22 Saturday 27 July 2024 12:36:55 -0400 (0:00:00.037) 0:00:14.186 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.firewall : Set flag if transactional-update exists] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:27 Saturday 27 July 2024 12:36:56 -0400 (0:00:00.386) 0:00:14.573 ********* ok: [managed_node1] => { "ansible_facts": { "__firewall_is_transactional": false }, "changed": false } TASK [fedora.linux_system_roles.firewall : Install firewalld] ****************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:31 Saturday 27 July 2024 12:36:56 -0400 (0:00:00.072) 0:00:14.645 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: firewalld TASK [fedora.linux_system_roles.firewall : Notify user that reboot is needed to apply changes] *** task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:43 Saturday 27 July 2024 12:36:56 -0400 (0:00:00.745) 0:00:15.391 ********* skipping: [managed_node1] => { "false_condition": "__firewall_is_transactional | d(false)" } TASK [fedora.linux_system_roles.firewall : Reboot transactional update systems] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:48 Saturday 27 July 2024 12:36:57 -0400 (0:00:00.033) 0:00:15.424 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Fail if reboot is needed and not set] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:53 Saturday 27 July 2024 12:36:57 -0400 (0:00:00.031) 0:00:15.456 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Collect service facts] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:5 Saturday 27 July 2024 12:36:57 -0400 (0:00:00.032) 0:00:15.488 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Attempt to stop and disable conflicting services] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:9 Saturday 27 July 2024 12:36:57 -0400 (0:00:00.031) 0:00:15.520 ********* skipping: [managed_node1] => (item=nftables) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "nftables", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=iptables) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "iptables", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=ufw) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "ufw", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.firewall : Unmask firewalld service] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:22 Saturday 27 July 2024 12:36:57 -0400 (0:00:00.041) 0:00:15.561 ********* ok: [managed_node1] => { "changed": false, "name": "firewalld", "status": { "AccessSELinuxContext": "system_u:object_r:firewalld_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2024-07-27 12:29:18 EDT", "ActiveEnterTimestampMonotonic": "309776823", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "dbus.socket dbus-broker.service sysinit.target basic.target system.slice polkit.service", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2024-07-27 12:29:17 EDT", "AssertTimestampMonotonic": "308800686", "Before": "network-pre.target multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": 
"org.fedoraproject.FirewallD1", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "645419000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ConditionTimestampMonotonic": "308800682", "ConfigurationDirectoryMode": "0755", "Conflicts": "ebtables.service shutdown.target ip6tables.service iptables.service ipset.service nftables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlGroupId": "4842", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "\"man:firewalld(1)\"", "DynamicUser": "no", "EffectiveMemoryHigh": "3700936704", "EffectiveMemoryMax": "3700936704", "EffectiveTasksMax": "22402", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ExecMainHandoffTimestampMonotonic": "308813520", "ExecMainPID": "11870", "ExecMainStartTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ExecMainStartTimestampMonotonic": "308803091", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", 
"IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2024-07-27 12:29:17 EDT", "InactiveExitTimestampMonotonic": "308803722", "InvocationID": "b164b44216a943caa07e4d87cac33d98", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "14001", "LimitNPROCSoft": "14001", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14001", "LimitSIGPENDINGSoft": "14001", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "11870", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3146485760", "MemoryCurrent": "33136640", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "33751040", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "0", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "0", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "firewalld.service dbus-org.fedoraproject.FirewallD1.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", 
"ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice sysinit.target dbus.socket", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2024-07-27 12:36:27 EDT", "StateChangeTimestampMonotonic": "738844426", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22402", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Enable and start firewalld service] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:28 Saturday 27 July 2024 12:36:57 -0400 (0:00:00.570) 0:00:16.132 ********* ok: [managed_node1] => { "changed": false, "enabled": true, "name": "firewalld", "state": "started", "status": { "AccessSELinuxContext": "system_u:object_r:firewalld_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2024-07-27 12:29:18 EDT", "ActiveEnterTimestampMonotonic": "309776823", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "dbus.socket dbus-broker.service sysinit.target basic.target system.slice polkit.service", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2024-07-27 12:29:17 EDT", "AssertTimestampMonotonic": "308800686", "Before": "network-pre.target multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "yes", 
"CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "645419000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ConditionTimestampMonotonic": "308800682", "ConfigurationDirectoryMode": "0755", "Conflicts": "ebtables.service shutdown.target ip6tables.service iptables.service ipset.service nftables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlGroupId": "4842", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "\"man:firewalld(1)\"", "DynamicUser": "no", "EffectiveMemoryHigh": "3700936704", "EffectiveMemoryMax": "3700936704", "EffectiveTasksMax": "22402", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ExecMainHandoffTimestampMonotonic": "308813520", "ExecMainPID": "11870", "ExecMainStartTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ExecMainStartTimestampMonotonic": "308803091", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not 
set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2024-07-27 12:29:17 EDT", "InactiveExitTimestampMonotonic": "308803722", "InvocationID": "b164b44216a943caa07e4d87cac33d98", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "14001", "LimitNPROCSoft": "14001", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14001", "LimitSIGPENDINGSoft": "14001", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "11870", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3159416832", "MemoryCurrent": "33136640", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "33751040", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "0", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "0", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "firewalld.service dbus-org.fedoraproject.FirewallD1.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", 
"RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice sysinit.target dbus.socket", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2024-07-27 12:36:27 EDT", "StateChangeTimestampMonotonic": "738844426", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22402", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Check if previous replaced is defined] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:34 Saturday 27 July 2024 12:36:58 -0400 (0:00:00.570) 0:00:16.702 ********* ok: [managed_node1] => { "ansible_facts": { "__firewall_previous_replaced": false, "__firewall_python_cmd": "/usr/bin/python3.12", "__firewall_report_changed": true }, "changed": false } TASK [fedora.linux_system_roles.firewall : Get config files, checksums before and remove] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:43 Saturday 27 July 2024 12:36:58 -0400 (0:00:00.042) 0:00:16.745 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Tell firewall module it is able to report changed] *** task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:55 Saturday 27 July 2024 12:36:58 -0400 (0:00:00.031) 0:00:16.776 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Configure firewall] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:71 Saturday 27 July 2024 12:36:58 -0400 (0:00:00.029) 0:00:16.805 ********* changed: [managed_node1] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "__firewall_changed": true, "ansible_loop_var": "item", "changed": true, "item": { "port": "8000/tcp", "state": "enabled" } } changed: [managed_node1] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "__firewall_changed": true, "ansible_loop_var": "item", "changed": true, "item": { "port": "9000/tcp", "state": "enabled" } } TASK [fedora.linux_system_roles.firewall : Gather firewall config information] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:120 Saturday 27 July 2024 12:36:59 -0400 (0:00:01.148) 0:00:17.954 ********* skipping: [managed_node1] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall | length == 1", "item": { "port": "8000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall | length == 1", "item": { "port": "9000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } skipping: [managed_node1] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:130 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.050) 0:00:18.004 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "firewall | length == 1", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Gather firewall config if no arguments] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:139 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.033) 0:00:18.038 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "firewall == None or firewall | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:144 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.033) 0:00:18.071 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "firewall == None or firewall | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Get config files, checksums after] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:153 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.032) 0:00:18.104 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": 
"Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Calculate what has changed] ********* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:163 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.030) 0:00:18.134 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Show diffs] ************************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:169 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.029) 0:00:18.164 ********* skipping: [managed_node1] => { "false_condition": "__firewall_previous_replaced | bool" } TASK [Manage selinux for specified ports] ************************************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:93 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.046) 0:00:18.211 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_selinux_ports | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:100 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.032) 0:00:18.243 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] ******* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:104 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.064) 0:00:18.308 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle credential files - present] **** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:113 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.029) 0:00:18.337 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle secrets] *********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:122 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.027) 0:00:18.365 ********* fatal: [managed_node1]: FAILED! => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result" } TASK [Dump journal] ************************************************************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:124 Saturday 27 July 2024 12:36:59 -0400 (0:00:00.031) 0:00:18.396 ********* fatal: [managed_node1]: FAILED! 
=> { "changed": false, "cmd": [ "journalctl", "-ex" ], "delta": "0:00:00.034722", "end": "2024-07-27 12:37:00.337034", "failed_when_result": true, "rc": 0, "start": "2024-07-27 12:37:00.302312" } STDOUT: Jul 27 12:31:27 ip-10-31-14-141.us-east-1.aws.redhat.com podman[24960]: Container: Jul 27 12:31:27 ip-10-31-14-141.us-east-1.aws.redhat.com podman[24960]: 76f2ca2651d698039f05d1d26188e9285c257012702c15c7df24f3cff7078f97 Jul 27 12:31:27 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service - A template for running K8s workloads via podman-kube-play. â–‘â–‘ Subject: A start job for unit UNIT has finished successfully â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ A start job for unit UNIT has finished successfully. â–‘â–‘ â–‘â–‘ The job identifier is 75. Jul 27 12:31:27 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[24954]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:31:27 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[25160]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:31:28 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[25274]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:31:28 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[25388]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:31:29 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[25503]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:31:30 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[25617]: ansible-file Invoked with path=/tmp/lsr_ykmfaaw1_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:31:31 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[25730]: ansible-file Invoked with path=/tmp/lsr_ykmfaaw1_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:31:31 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. â–‘â–‘ Subject: Unit succeeded â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:31:31 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
â–‘â–‘ Subject: Unit succeeded â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:31:32 ip-10-31-14-141.us-east-1.aws.redhat.com podman[25859]: 2024-07-27 12:31:32.240860999 -0400 EDT m=+0.341601332 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:31:32 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[25987]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:31:32 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. â–‘â–‘ Subject: Unit succeeded â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:31:33 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[26100]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:31:33 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[26213]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:31:33 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[26303]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd2.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1722097893.2574754-8233-220178835220762/.source.yml _original_basename=.b8zurbx6 follow=False checksum=091c16d925d6727a426e671d7cddd074cf4d4a44 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[26416]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Created slice machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice - cgroup machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice. 
â–‘â–‘ Subject: A start job for unit machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice has finished successfully â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ A start job for unit machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice has finished successfully. â–‘â–‘ â–‘â–‘ The job identifier is 2400. Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26424]: 2024-07-27 12:31:34.480997315 -0400 EDT m=+0.092060451 container create dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 (image=localhost/podman-pause:5.1.2-1720656000, name=ab3f47105f53-infra, pod_id=ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba, io.buildah.version=1.36.0) Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26424]: 2024-07-27 12:31:34.486968544 -0400 EDT m=+0.098031812 pod create ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba (image=, name=httpd2) Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26424]: 2024-07-27 12:31:34.491218115 -0400 EDT m=+0.102281418 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26424]: 2024-07-27 12:31:34.5195566 -0400 EDT m=+0.130619732 container create 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba, io.containers.autoupdate=registry, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, app=test) Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.5330] manager: (podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/3) Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered blocking state Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth0: entered allmulticast mode Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth0: entered promiscuous mode Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.5556] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/4) Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered blocking state Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered forwarding state Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.5613] device (veth0): carrier: link connected Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.5615] device (podman1): carrier: link connected Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com (udev-worker)[26438]: Network interface NamePolicy= disabled on kernel command line. Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com (udev-worker)[26437]: Network interface NamePolicy= disabled on kernel command line. 
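The entries above show podman kube play creating the httpd2 pod, pulling quay.io/libpod/testimage:20210610, and wiring the pod into the podman1 bridge through veth0. To see how that bridge network is defined (subnet, gateway, DNS), a task along the following lines can be run against such a host; this is a diagnostic sketch, not part of the recorded run:

    - name: Inspect the kube play bridge network (diagnostic sketch)
      ansible.builtin.command:
        cmd: podman network inspect podman-default-kube-network
      register: __kube_net_info
      changed_when: false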
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.6273] device (podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.6280] device (podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.6293] device (podman1): Activation: starting connection 'podman1' (dc037794-997c-44a1-b081-d822a2c4882e)
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.6296] device (podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.6304] device (podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.6308] device (podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.6311] device (podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service...
░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit NetworkManager-dispatcher.service has begun execution.
░░
░░ The job identifier is 2407.
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service.
░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit NetworkManager-dispatcher.service has finished successfully.
░░
░░ The job identifier is 2407.
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.6707] device (podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.6710] device (podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097894.6716] device (podman1): Activation: successful, device activated.
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started run-r195f68d412d64ef19135845924417b7c.scope - /usr/libexec/podman/aardvark-dns --config /run/containers/networks/aardvark-dns -p 53 run.
░░ Subject: A start job for unit run-r195f68d412d64ef19135845924417b7c.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit run-r195f68d412d64ef19135845924417b7c.scope has finished successfully.
░░
░░ The job identifier is 2487.
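Once the podman1 bridge is activated, podman launches aardvark-dns in a transient scope; the entries that follow show it binding 10.89.0.1:53 and forwarding upstream to udp://1.1.1.1:53. A lookup against that gateway address is a quick health check; in this sketch the queried name and the dns.podman search domain are assumptions based on podman's default container-name registration, and dig requires bind-utils:

    - name: Probe aardvark-dns from the host (sketch; assumes bind-utils)
      ansible.builtin.command:
        cmd: dig +short @10.89.0.1 httpd2-httpd2.dns.podman
      register: __dns_probe
      changed_when: false
      failed_when: false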
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com aardvark-dns[26463]: starting aardvark on a child with pid 26474
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com aardvark-dns[26474]: Successfully parsed config
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com aardvark-dns[26474]: Listen v4 ip {"podman-default-kube-network": [10.89.0.1]}
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com aardvark-dns[26474]: Listen v6 ip {}
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com aardvark-dns[26474]: Will Forward dns requests to udp://1.1.1.1:53
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com aardvark-dns[26474]: Starting listen on udp 10.89.0.1:53
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-conmon-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833.scope.
░░ Subject: A start job for unit libpod-conmon-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit libpod-conmon-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833.scope has finished successfully.
░░
░░ The job identifier is 2493.
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com conmon[26480]: conmon dbbab1074644820422d3 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com conmon[26480]: conmon dbbab1074644820422d3 : terminal_ctrl_fd: 12
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com conmon[26480]: conmon dbbab1074644820422d3 : winsz read side: 16, winsz write side: 17
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: tmp-crun.mO1K4v.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit tmp-crun.mO1K4v.mount has successfully entered the 'dead' state.
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833.scope - libcrun container.
░░ Subject: A start job for unit libpod-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit libpod-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833.scope has finished successfully.
░░
░░ The job identifier is 2500.
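As the start jobs above show, each container gets a pair of transient units: libpod-conmon-<id>.scope for the conmon monitor and libpod-<id>.scope for the container itself. They can be enumerated directly; a sketch:

    - name: List the transient libpod scope units (sketch)
      ansible.builtin.command:
        cmd: systemctl list-units --type=scope --all 'libpod-*'
      register: __libpod_scopes
      changed_when: false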
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com conmon[26480]: conmon dbbab1074644820422d3 : container PID: 26482
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26424]: 2024-07-27 12:31:34.779301625 -0400 EDT m=+0.390365067 container init dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 (image=localhost/podman-pause:5.1.2-1720656000, name=ab3f47105f53-infra, pod_id=ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba, io.buildah.version=1.36.0)
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26424]: 2024-07-27 12:31:34.783331928 -0400 EDT m=+0.394395361 container start dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 (image=localhost/podman-pause:5.1.2-1720656000, name=ab3f47105f53-infra, pod_id=ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba, io.buildah.version=1.36.0)
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope.
░░ Subject: A start job for unit libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope has finished successfully.
░░
░░ The job identifier is 2507.
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com conmon[26486]: conmon 93532dadc1783be2948a : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/11/attach}
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com conmon[26486]: conmon 93532dadc1783be2948a : terminal_ctrl_fd: 11
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com conmon[26486]: conmon 93532dadc1783be2948a : winsz read side: 15, winsz write side: 16
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope - libcrun container.
░░ Subject: A start job for unit libpod-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit libpod-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope has finished successfully.
░░
░░ The job identifier is 2514.
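The init and start events above come from playing /etc/containers/ansible-kubernetes.d/httpd2.yml, whose contents are not captured in this log. Going only by the recorded pod name, container name, and image, the file plausibly looks roughly like the following; everything beyond those three values is an assumption:

    # Hypothetical reconstruction of httpd2.yml; only the names and the image are taken from the log
    apiVersion: v1
    kind: Pod
    metadata:
      name: httpd2
    spec:
      containers:
        - name: httpd2
          image: quay.io/libpod/testimage:20210610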
Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com conmon[26486]: conmon 93532dadc1783be2948a : container PID: 26488 Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26424]: 2024-07-27 12:31:34.84980643 -0400 EDT m=+0.460869953 container init 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage) Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26424]: 2024-07-27 12:31:34.853382187 -0400 EDT m=+0.464445620 container start 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26424]: 2024-07-27 12:31:34.859125838 -0400 EDT m=+0.470188983 pod start ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba (image=, name=httpd2) Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[26416]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[26416]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod: ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba Container: 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[26416]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2024-07-27T12:31:34-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2024-07-27T12:31:34-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2024-07-27T12:31:34-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2024-07-27T12:31:34-04:00" level=info msg="Using sqlite as database backend" time="2024-07-27T12:31:34-04:00" level=debug msg="Using graph driver overlay" time="2024-07-27T12:31:34-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Using run root /run/containers/storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2024-07-27T12:31:34-04:00" level=debug msg="Using tmp dir /run/libpod" time="2024-07-27T12:31:34-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2024-07-27T12:31:34-04:00" level=debug msg="Using transient store: false" time="2024-07-27T12:31:34-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:31:34-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:31:34-04:00" level=debug msg="Cached value indicated that metacopy is being used" time="2024-07-27T12:31:34-04:00" level=debug msg="Cached value indicated that 
native-diff is not being used" time="2024-07-27T12:31:34-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2024-07-27T12:31:34-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2024-07-27T12:31:34-04:00" level=debug msg="Initializing event backend journald" time="2024-07-27T12:31:34-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2024-07-27T12:31:34-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2024-07-27T12:31:34-04:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" time="2024-07-27T12:31:34-04:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" time="2024-07-27T12:31:34-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2024-07-27T12:31:34-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2024-07-27T12:31:34-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2024-07-27T12:31:34-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2024-07-27T12:31:34-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2024-07-27T12:31:34-04:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" time="2024-07-27T12:31:34-04:00" level=info msg="Setting parallel job count to 7" time="2024-07-27T12:31:34-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network 8f5134bcda5f516a62e074f65b89e1dc156f1d967eb9102e712a946d948ca3ab bridge podman1 2024-07-27 12:29:35.997811221 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2024-07-27T12:31:34-04:00" level=debug msg="Successfully loaded 2 networks" time="2024-07-27T12:31:34-04:00" level=debug msg="Looking up image \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:31:34-04:00" level=debug msg="Trying \"localhost/podman-pause:5.1.2-1720656000\" ..." 
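The probe sequence above (overlay support, metacopy, OCI runtime selection) is podman's normal engine bring-up and runs on every invocation; the same facts can be read back without a debug run. A sketch, assuming these Go-template field names (they match current podman info output):

    # Graph driver, graph root, and the OCI runtime podman selected
    podman info --format '{{.Store.GraphDriverName}} {{.Store.GraphRoot}} {{.Host.OCIRuntime.Path}}'
    # Networks the engine loaded, including podman-default-kube-network
    podman network ls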
time="2024-07-27T12:31:34-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720656000\" as \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720656000\" as \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12)" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Pod using bridge network mode" time="2024-07-27T12:31:34-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice for parent machine.slice and name libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba" time="2024-07-27T12:31:34-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice" time="2024-07-27T12:31:34-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice" time="2024-07-27T12:31:34-04:00" level=debug msg="Looking up image \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:31:34-04:00" level=debug msg="Trying \"localhost/podman-pause:5.1.2-1720656000\" ..." 
time="2024-07-27T12:31:34-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720656000\" as \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720656000\" as \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12)" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Inspecting image dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Inspecting image dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12" time="2024-07-27T12:31:34-04:00" level=debug msg="Inspecting image dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12" time="2024-07-27T12:31:34-04:00" level=debug msg="using systemd mode: false" time="2024-07-27T12:31:34-04:00" level=debug msg="setting container name ab3f47105f53-infra" time="2024-07-27T12:31:34-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Allocated lock 1 for container dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are supported" time="2024-07-27T12:31:34-04:00" level=debug msg="Created container \"dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Container \"dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833\" has work directory \"/var/lib/containers/storage/overlay-containers/dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833/userdata\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Container \"dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833\" has run directory \"/run/containers/storage/overlay-containers/dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833/userdata\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:31:34-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2024-07-27T12:31:34-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Pulling image quay.io/libpod/testimage:20210610 (policy: missing)" time="2024-07-27T12:31:34-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:31:34-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2024-07-27T12:31:34-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:31:34-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2024-07-27T12:31:34-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:31:34-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2024-07-27T12:31:34-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:31:34-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2024-07-27T12:31:34-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2024-07-27T12:31:34-04:00" level=debug msg="using systemd mode: false" time="2024-07-27T12:31:34-04:00" level=debug msg="adding container to pod httpd2" time="2024-07-27T12:31:34-04:00" level=debug msg="setting container name httpd2-httpd2" 
time="2024-07-27T12:31:34-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2024-07-27T12:31:34-04:00" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host" time="2024-07-27T12:31:34-04:00" level=debug msg="Adding mount /proc" time="2024-07-27T12:31:34-04:00" level=debug msg="Adding mount /dev" time="2024-07-27T12:31:34-04:00" level=debug msg="Adding mount /dev/pts" time="2024-07-27T12:31:34-04:00" level=debug msg="Adding mount /dev/mqueue" time="2024-07-27T12:31:34-04:00" level=debug msg="Adding mount /sys" time="2024-07-27T12:31:34-04:00" level=debug msg="Adding mount /sys/fs/cgroup" time="2024-07-27T12:31:34-04:00" level=debug msg="Allocated lock 2 for container 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf" time="2024-07-27T12:31:34-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Created container \"93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Container \"93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf\" has work directory \"/var/lib/containers/storage/overlay-containers/93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf/userdata\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Container \"93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf\" has run directory \"/run/containers/storage/overlay-containers/93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf/userdata\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Strongconnecting node dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833" time="2024-07-27T12:31:34-04:00" level=debug msg="Pushed dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 onto stack" time="2024-07-27T12:31:34-04:00" level=debug msg="Finishing node dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833. Popped dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 off stack" time="2024-07-27T12:31:34-04:00" level=debug msg="Strongconnecting node 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf" time="2024-07-27T12:31:34-04:00" level=debug msg="Pushed 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf onto stack" time="2024-07-27T12:31:34-04:00" level=debug msg="Finishing node 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf. 
Popped 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf off stack" time="2024-07-27T12:31:34-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/XHP5KYIFQTOBEZIWH55P4XSJMS,upperdir=/var/lib/containers/storage/overlay/41f4fbe1d2e8d3fc1965435eec797ddd0ee4533f30fb8fafae206940b3c6c4b1/diff,workdir=/var/lib/containers/storage/overlay/41f4fbe1d2e8d3fc1965435eec797ddd0ee4533f30fb8fafae206940b3c6c4b1/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c508,c881\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Mounted container \"dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833\" at \"/var/lib/containers/storage/overlay/41f4fbe1d2e8d3fc1965435eec797ddd0ee4533f30fb8fafae206940b3c6c4b1/merged\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Created root filesystem for container dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 at /var/lib/containers/storage/overlay/41f4fbe1d2e8d3fc1965435eec797ddd0ee4533f30fb8fafae206940b3c6c4b1/merged" time="2024-07-27T12:31:34-04:00" level=debug msg="Made network namespace at /run/netns/netns-ad67feaf-bee7-774e-b54b-8987f6ef7689 for container dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833" [DEBUG netavark::network::validation] "Validating network namespace..." [DEBUG netavark::commands::setup] "Setting up..." [INFO netavark::firewall] Using nftables firewall driver [DEBUG netavark::network::bridge] Setup network podman-default-kube-network [DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.2/24] [DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24] [DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1 [DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/podman1/rp_filter to 2 [DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0 [DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/arp_notify to 1 [DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/rp_filter to 2 [INFO netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100) [DEBUG netavark::firewall::firewalld] Adding firewalld rules for network 10.89.0.0/24 [DEBUG netavark::firewall::firewalld] Adding subnet 10.89.0.0/24 to zone trusted as source [INFO netavark::firewall::nft] Creating container chain nv_8f5134bc_10_89_0_0_nm24 [DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.conf.podman1.route_localnet to 1 [DEBUG netavark::dns::aardvark] Spawning aardvark server [DEBUG netavark::dns::aardvark] start aardvark-dns: ["systemd-run", "-q", "--scope", "/usr/libexec/podman/aardvark-dns", "--config", "/run/containers/networks/aardvark-dns", "-p", "53", "run"] [DEBUG netavark::commands::setup] { "podman-default-kube-network": StatusBlock { dns_search_domains: Some( [ "dns.podman", ], ), dns_server_ips: Some( [ 10.89.0.1, ], ), interfaces: Some( { "eth0": NetInterface { mac_address: "4e:2d:a7:87:2c:f2", subnets: Some( [ NetAddress { gateway: Some( 10.89.0.1, ), ipnet: 10.89.0.2/24, }, ], ), }, }, ), }, } [DEBUG netavark::commands::setup] "Setup complete" time="2024-07-27T12:31:34-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2024-07-27T12:31:34-04:00" level=debug msg="Setting Cgroups for container 
dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 to machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice:libpod:dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833" time="2024-07-27T12:31:34-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2024-07-27T12:31:34-04:00" level=debug msg="Workdir \"/\" resolved to host path \"/var/lib/containers/storage/overlay/41f4fbe1d2e8d3fc1965435eec797ddd0ee4533f30fb8fafae206940b3c6c4b1/merged\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Created OCI spec for container dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 at /var/lib/containers/storage/overlay-containers/dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833/userdata/config.json" time="2024-07-27T12:31:34-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice for parent machine.slice and name libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba" time="2024-07-27T12:31:34-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice" time="2024-07-27T12:31:34-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice" time="2024-07-27T12:31:34-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2024-07-27T12:31:34-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 -u dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833/userdata -p /run/containers/storage/overlay-containers/dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833/userdata/pidfile -n ab3f47105f53-infra --exit-dir /run/libpod/exits --persist-dir /run/libpod/persist/dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833]" time="2024-07-27T12:31:34-04:00" level=info 
msg="Running conmon under slice machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice and unitName libpod-conmon-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833.scope" time="2024-07-27T12:31:34-04:00" level=debug msg="Received: 26482" time="2024-07-27T12:31:34-04:00" level=info msg="Got Conmon PID as 26480" time="2024-07-27T12:31:34-04:00" level=debug msg="Created container dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 in OCI runtime" time="2024-07-27T12:31:34-04:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'" time="2024-07-27T12:31:34-04:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'" time="2024-07-27T12:31:34-04:00" level=debug msg="Starting container dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 with command [/catatonit -P]" time="2024-07-27T12:31:34-04:00" level=debug msg="Started container dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833" time="2024-07-27T12:31:34-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/BZ2UWY2RJNF2TFCOFMW4W4NBVO,upperdir=/var/lib/containers/storage/overlay/3ca3df692edc254075fdbd4c8f1f9612973492e52c45047ed57ead77a27e28fd/diff,workdir=/var/lib/containers/storage/overlay/3ca3df692edc254075fdbd4c8f1f9612973492e52c45047ed57ead77a27e28fd/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c508,c881\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Mounted container \"93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf\" at \"/var/lib/containers/storage/overlay/3ca3df692edc254075fdbd4c8f1f9612973492e52c45047ed57ead77a27e28fd/merged\"" time="2024-07-27T12:31:34-04:00" level=debug msg="Created root filesystem for container 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf at /var/lib/containers/storage/overlay/3ca3df692edc254075fdbd4c8f1f9612973492e52c45047ed57ead77a27e28fd/merged" time="2024-07-27T12:31:34-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2024-07-27T12:31:34-04:00" level=debug msg="Setting Cgroups for container 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf to machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice:libpod:93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf" time="2024-07-27T12:31:34-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2024-07-27T12:31:34-04:00" level=debug msg="Workdir \"/var/www\" resolved to a volume or mount" time="2024-07-27T12:31:34-04:00" level=debug msg="Created OCI spec for container 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf at /var/lib/containers/storage/overlay-containers/93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf/userdata/config.json" time="2024-07-27T12:31:34-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice for parent machine.slice and name libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba" time="2024-07-27T12:31:34-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice" time="2024-07-27T12:31:34-04:00" level=debug msg="Got pod cgroup as 
machine.slice/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice" time="2024-07-27T12:31:34-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2024-07-27T12:31:34-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf -u 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf/userdata -p /run/containers/storage/overlay-containers/93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits --persist-dir /run/libpod/persist/93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf]" time="2024-07-27T12:31:34-04:00" level=info msg="Running conmon under slice machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice and unitName libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope" time="2024-07-27T12:31:34-04:00" level=debug msg="Received: 26488" time="2024-07-27T12:31:34-04:00" level=info msg="Got Conmon PID as 26486" time="2024-07-27T12:31:34-04:00" level=debug msg="Created container 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf in OCI runtime" time="2024-07-27T12:31:34-04:00" level=debug msg="Starting container 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf with command [/bin/busybox-extras httpd -f -p 80]" time="2024-07-27T12:31:34-04:00" level=debug msg="Started container 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf" time="2024-07-27T12:31:34-04:00" level=debug msg="Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2024-07-27T12:31:34-04:00" level=debug msg="Shutting down engines" Jul 27 12:31:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[26416]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Jul 27 12:31:35 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[26602]: ansible-systemd Invoked with 
daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Jul 27 12:31:35 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reload requested from client PID 26603 ('systemctl') (unit session-5.scope)... Jul 27 12:31:35 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading... Jul 27 12:31:35 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading finished in 246 ms. Jul 27 12:31:36 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[26771]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None Jul 27 12:31:36 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reload requested from client PID 26774 ('systemctl') (unit session-5.scope)... Jul 27 12:31:36 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading... Jul 27 12:31:36 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading finished in 230 ms. Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[26942]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Created slice system-podman\x2dkube.slice - Slice /system/podman-kube. ░░ Subject: A start job for unit system-podman\x2dkube.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit system-podman\x2dkube.slice has finished successfully. ░░ ░░ The job identifier is 2524. Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Starting podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play... ░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution. ░░ ░░ The job identifier is 2521. Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:37.349867414 -0400 EDT m=+0.033394270 pod stop ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba (image=, name=httpd2) Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: tmp-crun.yWbfva.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.yWbfva.mount has successfully entered the 'dead' state. Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833.scope has successfully entered the 'dead' state.
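The unit instance name above is the kube YAML path run through systemd escaping, so the role's enable/start calls can be reproduced by hand; a sketch (the \x2d sequences are literal characters in the unit name, hence the quoting):

    # Derive the template instance name from the kube YAML path
    unit="podman-kube@$(systemd-escape /etc/containers/ansible-kubernetes.d/httpd2.yml).service"
    echo "$unit"   # podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service
    systemctl enable --now "$unit"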
Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:37.38440358 -0400 EDT m=+0.067930666 container died dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 (image=localhost/podman-pause:5.1.2-1720656000, name=ab3f47105f53-infra, io.buildah.version=1.36.0) Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com aardvark-dns[26474]: Successfully parsed config Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com aardvark-dns[26474]: Listen v4 ip {} Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com aardvark-dns[26474]: Listen v6 ip {} Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com aardvark-dns[26474]: No configuration found stopping the sever Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth0 (unregistering): left allmulticast mode Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth0 (unregistering): left promiscuous mode Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: run-r195f68d412d64ef19135845924417b7c.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-r195f68d412d64ef19135845924417b7c.scope has successfully entered the 'dead' state. Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833)" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=info msg="Using sqlite as database backend" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Using graph driver overlay" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Using run root /run/containers/storage" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Using tmp dir /run/libpod" Jul 27 12:31:37
ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Using transient store: false" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Cached value indicated that overlay is supported" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Cached value indicated that overlay is supported" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Cached value indicated that metacopy is being used" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Initializing event backend journald" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Configured 
OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=info msg="Setting parallel job count to 7" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: <info>  [1722097897.4404] device (podman1): state change: activated -> unmanaged (reason 'unmanaged-external-down', sys-iface-state: 'external') Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: run-netns-netns\x2dad67feaf\x2dbee7\x2d774e\x2db54b\x2d8987f6ef7689.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-netns-netns\x2dad67feaf\x2dbee7\x2d774e\x2db54b\x2d8987f6ef7689.mount has successfully entered the 'dead' state. Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:37.537865961 -0400 EDT m=+0.221392646 container cleanup dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 (image=localhost/podman-pause:5.1.2-1720656000, name=ab3f47105f53-infra, pod_id=ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba, io.buildah.version=1.36.0) Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833)" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26957]: time="2024-07-27T12:31:37-04:00" level=debug msg="Shutting down engines" Jul 27 12:31:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-conmon-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833.scope has successfully entered the 'dead' state. Jul 27 12:31:38 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-41f4fbe1d2e8d3fc1965435eec797ddd0ee4533f30fb8fafae206940b3c6c4b1-merged.mount: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-41f4fbe1d2e8d3fc1965435eec797ddd0ee4533f30fb8fafae206940b3c6c4b1-merged.mount has successfully entered the 'dead' state. Jul 27 12:31:38 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833-userdata-shm.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833-userdata-shm.mount has successfully entered the 'dead' state. Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: time="2024-07-27T12:31:47-04:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd2-httpd2 in 10 seconds, resorting to SIGKILL" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope has successfully entered the 'dead' state. Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com conmon[26486]: conmon 93532dadc1783be2948a : container 26488 exited with status 137 Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.402780786 -0400 EDT m=+10.086308009 container died 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com conmon[26486]: conmon 93532dadc1783be2948a : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice/libpod-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope/container/memory.events Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf)" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=info msg="Using sqlite as database backend" Jul 27
12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-3ca3df692edc254075fdbd4c8f1f9612973492e52c45047ed57ead77a27e28fd-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-3ca3df692edc254075fdbd4c8f1f9612973492e52c45047ed57ead77a27e28fd-merged.mount has successfully entered the 'dead' state. Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Using graph driver overlay" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Using run root /run/containers/storage" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Using tmp dir /run/libpod" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Using transient store: false" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Cached value indicated that overlay is supported" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Cached value indicated that overlay is supported" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Cached value indicated that metacopy is being used" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Initializing event backend journald" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Jul 27 12:31:47
ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=info msg="Setting parallel job count to 7" Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.455732041 -0400 EDT m=+10.139258823 container cleanup 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=info msg="Received shutdown signal \"terminated\", terminating!" PID=26979 Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com /usr/bin/podman[26979]: time="2024-07-27T12:31:47-04:00" level=info msg="Invoking shutdown handler \"libpod\"" PID=26979 Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Stopping libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope... 
░░ Subject: A stop job for unit libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope has begun execution. ░░ ░░ The job identifier is 2608. Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope has successfully entered the 'dead' state. Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Stopped libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope. ░░ Subject: A stop job for unit libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit libpod-conmon-93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf.scope has finished. ░░ ░░ The job identifier is 2608 and the job result is done. Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Removed slice machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice - cgroup machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice. ░░ Subject: A stop job for unit machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice has finished. ░░ ░░ The job identifier is 2607 and the job result is done. Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.504057511 -0400 EDT m=+10.187584371 pod stop ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba (image=, name=httpd2) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice: Failed to open /run/systemd/transient/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice: No such file or directory Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: time="2024-07-27T12:31:47-04:00" level=error msg="Checking if infra needs to be stopped: removing pod ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba cgroup: Unit machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice not loaded."
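The stop sequence above reached SIGKILL because busybox httpd ignores SIGTERM, so the container exited 137 (128 + SIGKILL) only after the default 10-second grace period. A sketch of widening that window when stopping by hand (terminationGracePeriodSeconds in the kube YAML expresses the same intent, assuming a podman version that honors it):

    # Allow 30 seconds for SIGTERM before podman falls back to SIGKILL (default 10)
    podman pod stop --time 30 httpd2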
Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.731824803 -0400 EDT m=+10.415351611 container remove 93532dadc1783be2948aef80b74ba1a5371faf20d5d5253e0eb8cacae7fd37bf (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.771664803 -0400 EDT m=+10.455191504 container remove dbbab1074644820422d30320b7d51b465feadc09461c0f66b072f67d201c5833 (image=localhost/podman-pause:5.1.2-1720656000, name=ab3f47105f53-infra, pod_id=ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba, io.buildah.version=1.36.0) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice: Failed to open /run/systemd/transient/machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice: No such file or directory Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.783059784 -0400 EDT m=+10.466586472 pod remove ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba (image=, name=httpd2) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: Pods stopped: Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: Pods removed: Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: Error: removing pod ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba cgroup: removing pod ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba cgroup: Unit machine-libpod_pod_ab3f47105f53d1730873f106426ed61a89d93228bfacb6ee670475b2d79fdeba.slice not loaded. Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: Secrets removed: Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: Error: %!s() Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: Volumes removed: Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.812062355 -0400 EDT m=+10.495589406 container create cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad (image=localhost/podman-pause:5.1.2-1720656000, name=42d68b4496fe-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Created slice machine-libpod_pod_3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265.slice - cgroup machine-libpod_pod_3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265.slice. â–‘â–‘ Subject: A start job for unit machine-libpod_pod_3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265.slice has finished successfully â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ A start job for unit machine-libpod_pod_3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265.slice has finished successfully. â–‘â–‘ â–‘â–‘ The job identifier is 2609. 
Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.874309115 -0400 EDT m=+10.557836015 container create 21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97 (image=localhost/podman-pause:5.1.2-1720656000, name=3fc987afc0fc-infra, pod_id=3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, io.buildah.version=1.36.0) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.88469321 -0400 EDT m=+10.568219907 pod create 3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265 (image=, name=httpd2) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.922415355 -0400 EDT m=+10.605942045 container create 28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265, created_by=test/system/build-testimage, io.buildah.version=1.21.0, app=test, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, created_at=2021-06-10T18:55:36Z) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.922850983 -0400 EDT m=+10.606377701 container restart cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad (image=localhost/podman-pause:5.1.2-1720656000, name=42d68b4496fe-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.88754201 -0400 EDT m=+10.571068945 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad.scope - libcrun container. â–‘â–‘ Subject: A start job for unit libpod-cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad.scope has finished successfully â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ A start job for unit libpod-cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad.scope has finished successfully. â–‘â–‘ â–‘â–‘ The job identifier is 2615. 
Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.996015765 -0400 EDT m=+10.679542710 container init cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad (image=localhost/podman-pause:5.1.2-1720656000, name=42d68b4496fe-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:31:47 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:47.998894892 -0400 EDT m=+10.682421681 container start cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad (image=localhost/podman-pause:5.1.2-1720656000, name=42d68b4496fe-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.0151] manager: (podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/5) Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered blocking state Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth0: entered allmulticast mode Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth0: entered promiscuous mode Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered blocking state Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered forwarding state Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.0232] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/6) Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.0267] device (veth0): carrier: link connected Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.0271] device (podman1): carrier: link connected Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com (udev-worker)[26998]: Network interface NamePolicy= disabled on kernel command line. Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com (udev-worker)[26996]: Network interface NamePolicy= disabled on kernel command line. 
Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.1027] device (podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.1036] device (podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.1045] device (podman1): Activation: starting connection 'podman1' (597d342f-9147-44d9-82b7-4336562fc291) Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.1048] device (podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.1052] device (podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.1056] device (podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.1060] device (podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service... ░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 2622. Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 2622. Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.1424] device (podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.1426] device (podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097908.1432] device (podman1): Activation: successful, device activated. Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started run-r438989e88bd74f25b2e11a3ff478396f.scope - /usr/libexec/podman/aardvark-dns --config /run/containers/networks/aardvark-dns -p 53 run. ░░ Subject: A start job for unit run-r438989e88bd74f25b2e11a3ff478396f.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit run-r438989e88bd74f25b2e11a3ff478396f.scope has finished successfully. ░░ ░░ The job identifier is 2702.
Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97.scope - libcrun container. ░░ Subject: A start job for unit libpod-21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97.scope has finished successfully. ░░ ░░ The job identifier is 2708. Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:48.22965057 -0400 EDT m=+10.913177417 container init 21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97 (image=localhost/podman-pause:5.1.2-1720656000, name=3fc987afc0fc-infra, pod_id=3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, io.buildah.version=1.36.0) Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:48.232878953 -0400 EDT m=+10.916405735 container start 21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97 (image=localhost/podman-pause:5.1.2-1720656000, name=3fc987afc0fc-infra, pod_id=3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, io.buildah.version=1.36.0) Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad.scope - libcrun container. ░░ Subject: A start job for unit libpod-28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad.scope has finished successfully. ░░ ░░ The job identifier is 2715.
Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:48.291283612 -0400 EDT m=+10.974810527 container init 28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0) Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:48.294305812 -0400 EDT m=+10.977832574 container start 28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z) Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 2024-07-27 12:31:48.301206475 -0400 EDT m=+10.984733271 pod start 3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265 (image=, name=httpd2) Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: Pod: Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265 Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: Container: Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com podman[26946]: 28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad Jul 27 12:31:48 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play. ░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished successfully. ░░ ░░ The job identifier is 2521.
Jul 27 12:31:49 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[27162]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:31:49 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[27276]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:31:50 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[27391]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:31:51 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[27505]: ansible-file Invoked with path=/tmp/lsr_ykmfaaw1_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:31:52 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[27618]: ansible-file Invoked with path=/tmp/lsr_ykmfaaw1_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:31:52 ip-10-31-14-141.us-east-1.aws.redhat.com podman[27747]: 2024-07-27 12:31:52.975709585 -0400 EDT m=+0.319000869 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:31:53 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[27874]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:31:53 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[27987]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:31:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[28100]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:31:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[28190]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1722097913.9955175-8372-237534320432247/.source.yml _original_basename=.t6d2o5p5 follow=False checksum=7767c5f8bacb5de840129bb71124239716453343 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:31:55 
ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[28303]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Created slice machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice - cgroup machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice. ░░ Subject: A start job for unit machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice has finished successfully. ░░ ░░ The job identifier is 2722. Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28310]: 2024-07-27 12:31:55.33431411 -0400 EDT m=+0.211685672 container create daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886 (image=localhost/podman-pause:5.1.2-1720656000, name=51e54c7f3042-infra, pod_id=51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae, io.buildah.version=1.36.0) Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28310]: 2024-07-27 12:31:55.340964088 -0400 EDT m=+0.218335431 pod create 51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae (image=, name=httpd3) Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28310]: 2024-07-27 12:31:55.343781908 -0400 EDT m=+0.221153482 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28310]: 2024-07-27 12:31:55.373519612 -0400 EDT m=+0.250891059 container create 030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae, io.containers.autoupdate=registry, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, app=test) Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered blocking state Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth1: entered allmulticast mode Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth1: entered promiscuous mode Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered blocking state Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered forwarding state Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097915.4016] manager: (veth1): new Veth device (/org/freedesktop/NetworkManager/Devices/7) Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097915.4080] device (veth1): carrier: link connected Jul 27 12:31:55
ip-10-31-14-141.us-east-1.aws.redhat.com (udev-worker)[28321]: Network interface NamePolicy= disabled on kernel command line. Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-conmon-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886.scope. ░░ Subject: A start job for unit libpod-conmon-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-conmon-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886.scope has finished successfully. ░░ ░░ The job identifier is 2729. Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: tmp-crun.C1B2RW.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.C1B2RW.mount has successfully entered the 'dead' state. Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886.scope - libcrun container. ░░ Subject: A start job for unit libpod-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886.scope has finished successfully. ░░ ░░ The job identifier is 2736. Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28310]: 2024-07-27 12:31:55.536700541 -0400 EDT m=+0.414072178 container init daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886 (image=localhost/podman-pause:5.1.2-1720656000, name=51e54c7f3042-infra, pod_id=51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae, io.buildah.version=1.36.0) Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28310]: 2024-07-27 12:31:55.541067417 -0400 EDT m=+0.418438913 container start daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886 (image=localhost/podman-pause:5.1.2-1720656000, name=51e54c7f3042-infra, pod_id=51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae, io.buildah.version=1.36.0) Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-conmon-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope. ░░ Subject: A start job for unit libpod-conmon-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-conmon-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope has finished successfully. ░░ ░░ The job identifier is 2743. Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope - libcrun container. ░░ Subject: A start job for unit libpod-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope has finished successfully. ░░ ░░ The job identifier is 2750.
Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28310]: 2024-07-27 12:31:55.613827977 -0400 EDT m=+0.491199603 container init 030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28310]: 2024-07-27 12:31:55.617027388 -0400 EDT m=+0.494398924 container start 030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test) Jul 27 12:31:55 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28310]: 2024-07-27 12:31:55.624042322 -0400 EDT m=+0.501413678 pod start 51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae (image=, name=httpd3) Jul 27 12:31:56 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[28470]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Jul 27 12:31:56 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reload requested from client PID 28471 ('systemctl') (unit session-5.scope)... Jul 27 12:31:56 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading... Jul 27 12:31:56 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading finished in 246 ms. Jul 27 12:31:57 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[28639]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None Jul 27 12:31:57 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reload requested from client PID 28642 ('systemctl') (unit session-5.scope)... Jul 27 12:31:57 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading... Jul 27 12:31:57 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading finished in 244 ms. Jul 27 12:31:57 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[28810]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Jul 27 12:31:57 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Starting podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play... ░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution. ░░ ░░ The job identifier is 2757.
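[Editorial aside: the instance name in podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service comes from the systemd-escape --template call logged a few entries earlier: '/' maps to '-' and other unsafe characters, here the '-' in ansible-kubernetes.d, become \xNN hex escapes. A rough Python sketch of that mapping follows; it is an approximation of systemd's escaping rules, not the canonical implementation, and systemd_path_escape is a made-up helper name.]

```python
def systemd_path_escape(path: str) -> str:
    # Approximation of `systemd-escape`: '/' -> '-'; alphanumerics,
    # '_', '.' and ':' pass through; anything else (notably '-')
    # becomes a \xNN hex escape. Edge cases such as a leading '.',
    # an empty string, or non-ASCII input are ignored here.
    out = []
    for ch in path:
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.:":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

instance = systemd_path_escape("/etc/containers/ansible-kubernetes.d/httpd3.yml")
print(f"podman-kube@{instance}.service")
# podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service
```

The printed name matches the unit systemd starts in the entries above and below.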
Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:31:58.02647191 -0400 EDT m=+0.033549678 pod stop 51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae (image=, name=httpd3) Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: tmp-crun.yauA7R.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.yauA7R.mount has successfully entered the 'dead' state. Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886.scope has successfully entered the 'dead' state. Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:31:58.059289555 -0400 EDT m=+0.066367248 container died daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886 (image=localhost/podman-pause:5.1.2-1720656000, name=51e54c7f3042-infra, io.buildah.version=1.36.0) Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: tmp-crun.pMdNa2.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.pMdNa2.mount has successfully entered the 'dead' state. Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth1 (unregistering): left allmulticast mode Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth1 (unregistering): left promiscuous mode Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: run-netns-netns\x2dce489bf5\x2d85ef\x2d501a\x2d3443\x2d01e8362eda34.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-netns-netns\x2dce489bf5\x2d85ef\x2d501a\x2d3443\x2d01e8362eda34.mount has successfully entered the 'dead' state. Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:31:58.156941831 -0400 EDT m=+0.164019206 container cleanup daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886 (image=localhost/podman-pause:5.1.2-1720656000, name=51e54c7f3042-infra, pod_id=51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae, io.buildah.version=1.36.0) Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-conmon-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886.scope has successfully entered the 'dead' state. Jul 27 12:31:58 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Jul 27 12:31:59 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-f6c156948585188fb8ba9c7554ec284c006f6c771ac0831d6c74c7be4400feb5-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-f6c156948585188fb8ba9c7554ec284c006f6c771ac0831d6c74c7be4400feb5-merged.mount has successfully entered the 'dead' state. Jul 27 12:31:59 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886-userdata-shm.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886-userdata-shm.mount has successfully entered the 'dead' state. Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: time="2024-07-27T12:32:08-04:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd3-httpd3 in 10 seconds, resorting to SIGKILL" Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope has successfully entered the 'dead' state. Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.083023875 -0400 EDT m=+10.090101456 container died 030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-8862b3c043b48e30eea977b7aef7985f37de3169dba05dc91b86c318229883f3-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-8862b3c043b48e30eea977b7aef7985f37de3169dba05dc91b86c318229883f3-merged.mount has successfully entered the 'dead' state. Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.122996557 -0400 EDT m=+10.130074034 container cleanup 030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Stopping libpod-conmon-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope...
░░ Subject: A stop job for unit libpod-conmon-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit libpod-conmon-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope has begun execution. ░░ ░░ The job identifier is 2844. Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-conmon-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope has successfully entered the 'dead' state. Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Stopped libpod-conmon-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope. ░░ Subject: A stop job for unit libpod-conmon-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit libpod-conmon-030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad.scope has finished. ░░ ░░ The job identifier is 2844 and the job result is done. Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Removed slice machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice - cgroup machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice. ░░ Subject: A stop job for unit machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice has finished. ░░ ░░ The job identifier is 2843 and the job result is done. Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.135581215 -0400 EDT m=+10.142658735 pod stop 51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae (image=, name=httpd3) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice: Failed to open /run/systemd/transient/machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice: No such file or directory Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: time="2024-07-27T12:32:08-04:00" level=error msg="Checking if infra needs to be stopped: removing pod 51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae cgroup: Unit machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice not loaded."
Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.166070444 -0400 EDT m=+10.173148843 pod stop 51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae (image=, name=httpd3) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice: Failed to open /run/systemd/transient/machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice: No such file or directory Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: time="2024-07-27T12:32:08-04:00" level=error msg="Checking if infra needs to be stopped: removing pod 51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae cgroup: Unit machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice not loaded." Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.196197386 -0400 EDT m=+10.203274858 container remove 030999d8e1f2ce202ca3845ef1e18268a7318f66b0ebfeda66866bb8082444ad (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.224778396 -0400 EDT m=+10.231855787 container remove daada585ee8ec8c6af2a44c62ff5e665877088e370f3186af224d93b45274886 (image=localhost/podman-pause:5.1.2-1720656000, name=51e54c7f3042-infra, pod_id=51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae, io.buildah.version=1.36.0) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice: Failed to open /run/systemd/transient/machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice: No such file or directory Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.236058403 -0400 EDT m=+10.243135780 pod remove 51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae (image=, name=httpd3) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: Pods stopped: Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: Pods removed: Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: Error: removing pod 51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae cgroup: removing pod 51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae cgroup: Unit machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice not loaded. 
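[Editorial aside: the repeated "not loaded" errors above follow from ordering: systemd has already removed the transient pod slice (the "Removed slice machine-libpod_pod_….slice" entry) by the time podman's own cleanup tries to delete the same cgroup, so the follow-up removal finds nothing to act on. A minimal sketch of the condition podman is tripping over, assuming a host with systemctl available; unit_load_state is an illustrative helper, not podman code:]

```python
import subprocess

def unit_load_state(unit: str) -> str:
    # `systemctl show -p LoadState --value <unit>` prints "loaded" for a
    # live unit and "not-found" once a transient slice has been cleaned
    # up, which is the state podman reports as "not loaded" above.
    result = subprocess.run(
        ["systemctl", "show", "-p", "LoadState", "--value", unit],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

pod_slice = "machine-libpod_pod_51e54c7f30426f36d3559784a8c8bbc4b3f498176e5f23a0774597e750d3f6ae.slice"
print(unit_load_state(pod_slice))  # expected "not-found" after the teardown above
```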
Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: Secrets removed: Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: Error: %!s() Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: Volumes removed: Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.264562524 -0400 EDT m=+10.271640111 container create 3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8 (image=localhost/podman-pause:5.1.2-1720656000, name=b46e3d5b107f-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Created slice machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice - cgroup machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice. ░░ Subject: A start job for unit machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice has finished successfully. ░░ ░░ The job identifier is 2845. Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.360077721 -0400 EDT m=+10.367155344 container create b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1 (image=localhost/podman-pause:5.1.2-1720656000, name=d934b8f7de34-infra, pod_id=d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, io.buildah.version=1.36.0) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.369627659 -0400 EDT m=+10.376705026 pod create d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6 (image=, name=httpd3) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.372527657 -0400 EDT m=+10.379605403 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.4054002 -0400 EDT m=+10.412477575 container create cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, created_at=2021-06-10T18:55:36Z) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.405809332 -0400 EDT m=+10.412886731 container restart 3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8 (image=localhost/podman-pause:5.1.2-1720656000, name=b46e3d5b107f-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8.scope - libcrun container.
░░ Subject: A start job for unit libpod-3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8.scope has finished successfully. ░░ ░░ The job identifier is 2851. Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.459755476 -0400 EDT m=+10.466833073 container init 3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8 (image=localhost/podman-pause:5.1.2-1720656000, name=b46e3d5b107f-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.462820362 -0400 EDT m=+10.469897916 container start 3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8 (image=localhost/podman-pause:5.1.2-1720656000, name=b46e3d5b107f-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered blocking state Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth1: entered allmulticast mode Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth1: entered promiscuous mode Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered blocking state Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered forwarding state Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097928.4928] device (veth1): carrier: link connected Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: [1722097928.4932] manager: (veth1): new Veth device (/org/freedesktop/NetworkManager/Devices/8) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com (udev-worker)[28859]: Network interface NamePolicy= disabled on kernel command line. Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1.scope - libcrun container. ░░ Subject: A start job for unit libpod-b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1.scope has finished successfully. ░░ ░░ The job identifier is 2858.
Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.606572131 -0400 EDT m=+10.613649725 container init b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1 (image=localhost/podman-pause:5.1.2-1720656000, name=d934b8f7de34-infra, pod_id=d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, io.buildah.version=1.36.0) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.609511076 -0400 EDT m=+10.616588544 container start b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1 (image=localhost/podman-pause:5.1.2-1720656000, name=d934b8f7de34-infra, pod_id=d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6, io.buildah.version=1.36.0, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started libpod-cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8.scope - libcrun container. ░░ Subject: A start job for unit libpod-cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8.scope has finished successfully. ░░ ░░ The job identifier is 2865. Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.665829681 -0400 EDT m=+10.672907207 container init cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.668895792 -0400 EDT m=+10.675973313 container start cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: 2024-07-27 12:32:08.674987063 -0400 EDT m=+10.682064538 pod start d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6 (image=, name=httpd3) Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: Pod: Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6 Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: Container: Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com podman[28814]: cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8 Jul 27 12:32:08 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play.
░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished successfully. ░░ ░░ The job identifier is 2757. Jul 27 12:32:09 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[29040]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blszdxwsybxzezcwpimtgciwezqmqtlh ; /usr/bin/python3.12 /var/tmp/ansible-tmp-1722097929.1266143-8424-270429973512865/AnsiballZ_command.py' Jul 27 12:32:09 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[29040]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-29040) opened. Jul 27 12:32:09 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[29040]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:32:09 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[29043]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:09 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-29051.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 116. Jul 27 12:32:09 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[29040]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:32:09 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[29173]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:10 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[29293]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:10 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[29450]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzxaxwicyvauzwcykyrjazqeadtmffvy ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722097930.562536-8450-185007968080719/AnsiballZ_command.py' Jul 27 12:32:10 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[29450]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-29450) opened.
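[Editorial aside: the three podman pod inspect … --format '{{.State}}' invocations above are the test's verification step; the Go template pulls just the State field from the inspect output for each pod. Roughly the same check from Python, assuming the pods exist; pod_state is an illustrative helper, and since httpd1 belongs to podman_basic_user, the real test wraps that one in sudo/become as the surrounding entries show:]

```python
import subprocess

def pod_state(name: str) -> str:
    # Mirrors `podman pod inspect <name> --format '{{.State}}'`:
    # the Go template extracts only the pod's State string
    # (e.g. "Running", "Degraded", "Exited").
    result = subprocess.run(
        ["podman", "pod", "inspect", name, "--format", "{{.State}}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# httpd2 and httpd3 are system pods; httpd1 would need to run as
# podman_basic_user (XDG_RUNTIME_DIR=/run/user/3001) as logged above.
for pod in ("httpd2", "httpd3"):
    print(pod, pod_state(pod))
```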
Jul 27 12:32:10 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[29450]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:32:10 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[29453]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:10 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[29450]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:32:11 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[29569]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:11 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[29685]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:12 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[29801]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:12 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[29915]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:13 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[30029]: ansible-ansible.legacy.command Invoked with _raw_params=ls -alrtF /tmp/lsr_ykmfaaw1_podman/httpd1-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:13 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[30143]: ansible-ansible.legacy.command Invoked with 
_raw_params=ls -alrtF /tmp/lsr_ykmfaaw1_podman/httpd2-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:14 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[30257]: ansible-ansible.legacy.command Invoked with _raw_params=ls -alrtF /tmp/lsr_ykmfaaw1_podman/httpd3-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:16 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[30484]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:16 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[30603]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:32:17 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[30717]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:32:19 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[30832]: ansible-ansible.legacy.dnf Invoked with name=['firewalld'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 27 12:32:21 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[30946]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None Jul 27 12:32:21 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[31061]: ansible-ansible.legacy.systemd Invoked with name=firewalld state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Jul 27 12:32:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[31176]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Jul 27 12:32:24 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[31289]: ansible-ansible.legacy.dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 
use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 27 12:32:24 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[31403]: ansible-ansible.legacy.dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 27 12:32:25 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[31517]: ansible-setup Invoked with filter=['ansible_selinux'] gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Jul 27 12:32:27 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[31672]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Jul 27 12:32:27 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[31785]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Jul 27 12:32:32 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[31898]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Jul 27 12:32:33 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[32012]: ansible-getent Invoked with database=group key=3001 fail_key=False service=None split=None Jul 27 12:32:33 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[32126]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:32:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[32241]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[32355]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:35 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[32469]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:36 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[32583]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Jul 27 12:32:36 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[32696]: ansible-file Invoked with path=/tmp/lsr_ykmfaaw1_podman/httpd1 state=directory 
owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:37 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[32809]: ansible-file Invoked with path=/tmp/lsr_ykmfaaw1_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:37 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[32958]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgkkrfautrsiyjmlykataepqqcsrmkxd ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722097957.434248-8806-260387462283130/AnsiballZ_podman_image.py' Jul 27 12:32:37 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[32958]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-32958) opened. Jul 27 12:32:37 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[32958]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:32:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-32962.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 120. Jul 27 12:32:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-32969.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 124. Jul 27 12:32:38 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-32977.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 128. Jul 27 12:32:38 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-32984.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 132.
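The unwieldy unit names seen throughout this log (for example podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) come from systemd template escaping: the kube file's path becomes the instance name of podman-kube@.service, with path separators turned into '-' and literal dashes escaped to \x2d. The role derives the name with systemd-escape and keeps the rootless user's manager alive with loginctl enable-linger, both visible in the entries above. A sketch of those two steps as they are invoked in the log (__unit_name is an illustrative register variable):

- name: Derive the podman-kube@ unit name for a kube file (as invoked above)
  ansible.builtin.command: >-
    systemd-escape --template podman-kube@.service
    /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
  register: __unit_name
  changed_when: false

- name: Keep the user manager running after logout (idempotent via creates)
  ansible.builtin.command: loginctl enable-linger podman_basic_user
  args:
    creates: /var/lib/systemd/linger/podman_basic_user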
Jul 27 12:32:38 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[32958]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:32:39 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33103]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:32:39 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33218]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:39 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33331]: ansible-ansible.legacy.stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33388]: ansible-ansible.legacy.file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=.v5zrrp0z recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[33537]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivcffrsxtvjebfoawbvariijrhndwqow ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722097960.3111765-8844-180340567266965/AnsiballZ_podman_play.py' Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[33537]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-33537) opened. Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[33537]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33540]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-33547.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 136.
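Each deployment step stats the target kube YAML, ensures the ansible-kubernetes.d directory exists, copies the file in, and then calls containers.podman.podman_play with state=started, which the module turns into `podman play kube --start=true <file>` (the PODMAN-PLAY-KUBE command line below). A condensed sketch of the rootless variant, using the become and environment wiring shown in the surrounding sudo entries (the task name is illustrative):

- name: Deploy a kube file for the rootless user (sketch)
  containers.podman.podman_play:
    kube_file: /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
    state: started
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001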
Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Created slice user-libpod_pod_12b0ee8b5925a02a9e98f79e5783b87536d5cdb6f2ed4e8f46f928359df94879.slice - cgroup user-libpod_pod_12b0ee8b5925a02a9e98f79e5783b87536d5cdb6f2ed4e8f46f928359df94879.slice. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 140. Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33540]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33540]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33540]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2024-07-27T12:32:40-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2024-07-27T12:32:40-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2024-07-27T12:32:40-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2024-07-27T12:32:40-04:00" level=info msg="Using sqlite as database backend" time="2024-07-27T12:32:40-04:00" level=debug msg="systemd-logind: Unknown object '/'." time="2024-07-27T12:32:40-04:00" level=debug msg="Using graph driver overlay" time="2024-07-27T12:32:40-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" time="2024-07-27T12:32:40-04:00" level=debug msg="Using run root /run/user/3001/containers" time="2024-07-27T12:32:40-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" time="2024-07-27T12:32:40-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" time="2024-07-27T12:32:40-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" time="2024-07-27T12:32:40-04:00" level=debug msg="Using transient store: false" time="2024-07-27T12:32:40-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2024-07-27T12:32:40-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:32:40-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:32:40-04:00" level=debug msg="Cached value indicated that metacopy is not being used" time="2024-07-27T12:32:40-04:00" level=debug msg="Cached value indicated that native-diff is usable" time="2024-07-27T12:32:40-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" time="2024-07-27T12:32:40-04:00" level=debug msg="Initializing event backend file" time="2024-07-27T12:32:40-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2024-07-27T12:32:40-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2024-07-27T12:32:40-04:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid
executable found for OCI runtime crun-vm: invalid argument" time="2024-07-27T12:32:40-04:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" time="2024-07-27T12:32:40-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2024-07-27T12:32:40-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2024-07-27T12:32:40-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2024-07-27T12:32:40-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2024-07-27T12:32:40-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2024-07-27T12:32:40-04:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" time="2024-07-27T12:32:40-04:00" level=info msg="Setting parallel job count to 7" time="2024-07-27T12:32:40-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network f351134fdf5176cce5c68a14f62be5f152324834e113401839991ccbf6fbc72a bridge podman1 2024-07-27 12:31:13.375699494 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2024-07-27T12:32:40-04:00" level=debug msg="Successfully loaded 2 networks" time="2024-07-27T12:32:40-04:00" level=debug msg="Looking up image \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage" time="2024-07-27T12:32:40-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:32:40-04:00" level=debug msg="Trying \"localhost/podman-pause:5.1.2-1720656000\" ..." 
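The runtime-probe noise above is expected: podman walks every OCI runtime named in its configuration (krun, ocijail, crun-vm, runc, runj, kata, runsc, crun-wasm, youki) and records an informational failure for each binary that is absent, then settles on /usr/bin/crun. Nothing in this run configures that choice explicitly; if one wanted to pin the runtime and silence the probing, a containers.conf drop-in along these lines should work (the drop-in path and keys follow containers.conf conventions and are an assumption, not something the test deploys):

- name: Pin crun as the OCI runtime (hypothetical drop-in, not part of this run)
  ansible.builtin.copy:
    dest: /etc/containers/containers.conf.d/50-runtime.conf
    content: |
      [engine]
      runtime = "crun"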
time="2024-07-27T12:32:40-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@cd62bd3c2881107301605a8726f142eaa1c0465e93dd923b4030f0ac91a5d3c4\"" time="2024-07-27T12:32:40-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720656000\" as \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage" time="2024-07-27T12:32:40-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720656000\" as \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@cd62bd3c2881107301605a8726f142eaa1c0465e93dd923b4030f0ac91a5d3c4)" time="2024-07-27T12:32:40-04:00" level=debug msg="exporting opaque data as blob \"sha256:cd62bd3c2881107301605a8726f142eaa1c0465e93dd923b4030f0ac91a5d3c4\"" time="2024-07-27T12:32:40-04:00" level=debug msg="Pod using bridge network mode" time="2024-07-27T12:32:40-04:00" level=debug msg="Created cgroup path user.slice/user-libpod_pod_12b0ee8b5925a02a9e98f79e5783b87536d5cdb6f2ed4e8f46f928359df94879.slice for parent user.slice and name libpod_pod_12b0ee8b5925a02a9e98f79e5783b87536d5cdb6f2ed4e8f46f928359df94879" time="2024-07-27T12:32:40-04:00" level=debug msg="Created cgroup user.slice/user-libpod_pod_12b0ee8b5925a02a9e98f79e5783b87536d5cdb6f2ed4e8f46f928359df94879.slice" time="2024-07-27T12:32:40-04:00" level=debug msg="Got pod cgroup as user.slice/user-3001.slice/user@3001.service/user.slice/user-libpod_pod_12b0ee8b5925a02a9e98f79e5783b87536d5cdb6f2ed4e8f46f928359df94879.slice" Error: adding pod to state: name "httpd1" is in use: pod already exists time="2024-07-27T12:32:40-04:00" level=debug msg="Shutting down engines" Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33540]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Jul 27 12:32:40 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[33537]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:32:41 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33666]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:32:42 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33780]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:32:42 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[33894]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:32:43 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34009]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:44 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34123]: ansible-file Invoked with path=/tmp/lsr_ykmfaaw1_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:45 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34236]: 
ansible-file Invoked with path=/tmp/lsr_ykmfaaw1_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:46 ip-10-31-14-141.us-east-1.aws.redhat.com podman[34366]: 2024-07-27 12:32:46.173987426 -0400 EDT m=+0.380705744 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:32:46 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34493]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:32:47 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34608]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:47 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34721]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:32:47 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34778]: ansible-ansible.legacy.file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=.toywz0ue recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:48 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34891]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:32:48 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Created slice machine-libpod_pod_b01d25e8cbcd8151db950a0699def71734e6d2551138c460a248a0ffcaaea3fd.slice - cgroup machine-libpod_pod_b01d25e8cbcd8151db950a0699def71734e6d2551138c460a248a0ffcaaea3fd.slice. ░░ Subject: A start job for unit machine-libpod_pod_b01d25e8cbcd8151db950a0699def71734e6d2551138c460a248a0ffcaaea3fd.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_b01d25e8cbcd8151db950a0699def71734e6d2551138c460a248a0ffcaaea3fd.slice has finished successfully. ░░ ░░ The job identifier is 2872.
Jul 27 12:32:48 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34891]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Jul 27 12:32:48 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34891]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Jul 27 12:32:48 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34891]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2024-07-27T12:32:48-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2024-07-27T12:32:48-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2024-07-27T12:32:48-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2024-07-27T12:32:48-04:00" level=info msg="Using sqlite as database backend" time="2024-07-27T12:32:48-04:00" level=debug msg="Using graph driver overlay" time="2024-07-27T12:32:48-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2024-07-27T12:32:48-04:00" level=debug msg="Using run root /run/containers/storage" time="2024-07-27T12:32:48-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2024-07-27T12:32:48-04:00" level=debug msg="Using tmp dir /run/libpod" time="2024-07-27T12:32:48-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2024-07-27T12:32:48-04:00" level=debug msg="Using transient store: false" time="2024-07-27T12:32:48-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2024-07-27T12:32:48-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:32:48-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:32:48-04:00" level=debug msg="Cached value indicated that metacopy is being used" time="2024-07-27T12:32:48-04:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2024-07-27T12:32:48-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2024-07-27T12:32:48-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2024-07-27T12:32:48-04:00" level=debug msg="Initializing event backend journald" time="2024-07-27T12:32:48-04:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" time="2024-07-27T12:32:48-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2024-07-27T12:32:48-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2024-07-27T12:32:48-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2024-07-27T12:32:48-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2024-07-27T12:32:48-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" 
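Comparing this rootful play with the earlier rootless one shows the storage split cleanly: root uses graph root /var/lib/containers/storage, run root /run/containers/storage, the journald event backend, and metacopy, while podman_basic_user uses ~/.local/share/containers/storage, /run/user/3001/containers, and the file event backend. The rootful paths correspond to the [storage] table of storage.conf; a sketch that would spell out the defaults seen in the log (writing this file is illustrative only, the run relies on the built-in defaults):

- name: Illustrative storage.conf matching the rootful paths in the log
  ansible.builtin.copy:
    dest: /etc/containers/storage.conf
    content: |
      [storage]
      driver = "overlay"
      runroot = "/run/containers/storage"
      graphroot = "/var/lib/containers/storage"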
time="2024-07-27T12:32:48-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2024-07-27T12:32:48-04:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" time="2024-07-27T12:32:48-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2024-07-27T12:32:48-04:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" time="2024-07-27T12:32:48-04:00" level=info msg="Setting parallel job count to 7" time="2024-07-27T12:32:48-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network 8f5134bcda5f516a62e074f65b89e1dc156f1d967eb9102e712a946d948ca3ab bridge podman1 2024-07-27 12:29:35.997811221 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2024-07-27T12:32:48-04:00" level=debug msg="Successfully loaded 2 networks" time="2024-07-27T12:32:48-04:00" level=debug msg="Looking up image \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage" time="2024-07-27T12:32:48-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:32:48-04:00" level=debug msg="Trying \"localhost/podman-pause:5.1.2-1720656000\" ..." time="2024-07-27T12:32:48-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12\"" time="2024-07-27T12:32:48-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720656000\" as \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage" time="2024-07-27T12:32:48-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720656000\" as \"localhost/podman-pause:5.1.2-1720656000\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12)" time="2024-07-27T12:32:48-04:00" level=debug msg="exporting opaque data as blob \"sha256:dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12\"" time="2024-07-27T12:32:48-04:00" level=debug msg="Pod using bridge network mode" time="2024-07-27T12:32:48-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_b01d25e8cbcd8151db950a0699def71734e6d2551138c460a248a0ffcaaea3fd.slice for parent machine.slice and name libpod_pod_b01d25e8cbcd8151db950a0699def71734e6d2551138c460a248a0ffcaaea3fd" time="2024-07-27T12:32:48-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_b01d25e8cbcd8151db950a0699def71734e6d2551138c460a248a0ffcaaea3fd.slice" time="2024-07-27T12:32:48-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_b01d25e8cbcd8151db950a0699def71734e6d2551138c460a248a0ffcaaea3fd.slice" Error: adding pod to state: name "httpd2" is in use: pod already exists time="2024-07-27T12:32:48-04:00" level=debug msg="Shutting down engines" Jul 27 12:32:48 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[34891]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Jul 27 12:32:49 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[35017]: ansible-getent Invoked with database=group key=0 fail_key=False service=None 
split=None Jul 27 12:32:49 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[35131]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:32:51 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[35246]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:52 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[35360]: ansible-file Invoked with path=/tmp/lsr_ykmfaaw1_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:52 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[35473]: ansible-file Invoked with path=/tmp/lsr_ykmfaaw1_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[35601]: 2024-07-27 12:32:53.288798562 -0400 EDT m=+0.340932956 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:32:53 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[35728]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:32:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[35843]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[35956]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:32:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[36013]: ansible-ansible.legacy.file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=.y465hhuq recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:55 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[36126]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml 
executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:32:55 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Created slice machine-libpod_pod_9c708f57c195dcb81690a52eca5d59580019e3bb8e3f1da72c0406894d1e2199.slice - cgroup machine-libpod_pod_9c708f57c195dcb81690a52eca5d59580019e3bb8e3f1da72c0406894d1e2199.slice. ░░ Subject: A start job for unit machine-libpod_pod_9c708f57c195dcb81690a52eca5d59580019e3bb8e3f1da72c0406894d1e2199.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_9c708f57c195dcb81690a52eca5d59580019e3bb8e3f1da72c0406894d1e2199.slice has finished successfully. ░░ ░░ The job identifier is 2878. Jul 27 12:32:56 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[36289]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfhzskanemupfvbtjfdfaojclkqsrsbl ; /usr/bin/python3.12 /var/tmp/ansible-tmp-1722097976.030087-9099-50490068000412/AnsiballZ_command.py' Jul 27 12:32:56 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[36289]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-36289) opened. Jul 27 12:32:56 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[36289]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:32:56 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[36292]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:56 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-36300.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 144.
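The replays above (rootless httpd1 and rootful httpd2) exit with rc 125 and 'name ... is in use: pod already exists' because the pods are still running from the first deployment and podman refuses to register a second pod under the same name. The module invocations show a recreate parameter (logged as recreate=None, i.e. unset); setting it would tear the existing pod down before playing the file again. A sketch, hedged since this run deliberately leaves it unset:

- name: Replace an existing pod instead of failing on a name clash (sketch)
  containers.podman.podman_play:
    kube_file: /etc/containers/ansible-kubernetes.d/httpd2.yml
    state: started
    recreate: true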
Jul 27 12:32:56 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[36289]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:32:56 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[36421]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:57 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[36542]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:57 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[36699]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wadmgcflojtyghnaykpgblpmewnyduex ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722097977.4558182-9123-46239639450111/AnsiballZ_command.py' Jul 27 12:32:57 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[36699]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-36699) opened. Jul 27 12:32:57 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[36699]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:32:57 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[36702]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:57 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[36699]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:32:58 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[36818]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:58 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[36934]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:59 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[37050]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None 
ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:59 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[37164]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:00 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[37278]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:02 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[37505]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:03 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[37624]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:33:03 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[37738]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:06 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[37853]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Jul 27 12:33:06 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[37967]: ansible-getent Invoked with database=group key=3001 fail_key=False service=None split=None Jul 27 12:33:07 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[38081]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:07 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[38196]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:08 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[38310]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:09 
ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[38424]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:09 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[38538]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[38689]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciqbrqkfjmgkkozkoeeszxgooyhhcuvb ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722097989.9294028-9322-192888159902137/AnsiballZ_systemd.py' Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[38689]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-38689) opened. Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[38689]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[38692]: ansible-systemd Invoked with name=podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Reload requested from client PID 38695 ('systemctl')... Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Reloading... Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Reloading finished in 72 ms. Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Stopping podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service - A template for running K8s workloads via podman-kube-play... ░░ Subject: A stop job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has begun execution. ░░ ░░ The job identifier is 148. Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth0 (unregistering): left allmulticast mode Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth0 (unregistering): left promiscuous mode Jul 27 12:33:10 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:33:20 ip-10-31-14-141.us-east-1.aws.redhat.com podman[38706]: time="2024-07-27T12:33:20-04:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd1-httpd1 in 10 seconds, resorting to SIGKILL" Jul 27 12:33:20 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Removed slice user-libpod_pod_eb0fd24ab262a446ab756535a25f500b5ec073e9a28a2dc378eb03b2412f15c9.slice - cgroup user-libpod_pod_eb0fd24ab262a446ab756535a25f500b5ec073e9a28a2dc378eb03b2412f15c9.slice. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished.
░░ ░░ The job identifier is 149 and the job result is done. Jul 27 12:33:20 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: user-libpod_pod_eb0fd24ab262a446ab756535a25f500b5ec073e9a28a2dc378eb03b2412f15c9.slice: Failed to open /run/user/3001/systemd/transient/user-libpod_pod_eb0fd24ab262a446ab756535a25f500b5ec073e9a28a2dc378eb03b2412f15c9.slice: No such file or directory Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com podman[38706]: Pods stopped: Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com podman[38706]: eb0fd24ab262a446ab756535a25f500b5ec073e9a28a2dc378eb03b2412f15c9 Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com podman[38706]: Pods removed: Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com podman[38706]: Error: removing pod eb0fd24ab262a446ab756535a25f500b5ec073e9a28a2dc378eb03b2412f15c9 cgroup: removing pod eb0fd24ab262a446ab756535a25f500b5ec073e9a28a2dc378eb03b2412f15c9 cgroup: Unit user-libpod_pod_eb0fd24ab262a446ab756535a25f500b5ec073e9a28a2dc378eb03b2412f15c9.slice not loaded. Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com podman[38706]: Secrets removed: Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com podman[38706]: Error: %!s() Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com podman[38706]: Volumes removed: Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Stopped podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service - A template for running K8s workloads via podman-kube-play. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 148 and the job result is done. Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[38689]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[38865]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[39016]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyrxltxcvhtvvxpdayrpjoxxtejldmey ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722098001.7130353-9340-181628971819968/AnsiballZ_podman_play.py' Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[39016]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-39016) opened.
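Teardown begins with the rootless unit: ansible-systemd is invoked with the escaped unit name, scope=user, state=stopped, enabled=False, which triggers the user manager reload and the stop job shown above (including the ten-second SIGTERM grace before podman falls back to SIGKILL). The same task as invoked in the log; the become/environment wiring mirrors the surrounding sudo entries, everything else matches the logged module arguments:

- name: Stop and disable the rootless kube unit (as invoked above)
  ansible.builtin.systemd:
    name: 'podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service'
    scope: user
    state: stopped
    enabled: false
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001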
Jul 27 12:33:21 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[39016]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39019]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39019]: ansible-containers.podman.podman_play version: 5.1.2, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-39026.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 150. Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39019]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39019]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed: Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39019]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39019]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[39016]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39146]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[39295]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paddzoimqahqdhwgsrnnqvwcpyvmesws ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722098002.7582216-9358-84588193523613/AnsiballZ_command.py' Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[39295]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-39295) opened.
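With the unit stopped, podman_play state=absent maps to `podman kube play --down <file>` (the PODMAN-PLAY-KUBE command above), which removes the pods, secrets, and volumes the file created and returns rc 0; the kube YAML itself is then deleted with the file module. A condensed sketch of the pair:

- name: Tear down everything the kube file created (as invoked above)
  containers.podman.podman_play:
    kube_file: /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
    state: absent

- name: Remove the kube file itself
  ansible.builtin.file:
    path: /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
    state: absent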
Jul 27 12:33:22 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[39295]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:33:23 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39298]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:23 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-39299.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 154. Jul 27 12:33:23 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[39295]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:33:24 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39418]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:33:24 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39532]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:33:25 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39646]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:26 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39761]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[39875]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reload requested from client PID 39878 ('systemctl') (unit session-5.scope)... Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading... Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading finished in 246 ms. Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Stopping podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play... ░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution. ░░ ░░ The job identifier is 2885. Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 2024-07-27 12:33:27.508879913 -0400 EDT m=+0.033288001 pod stop 3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265 (image=, name=httpd2) Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: tmp-crun.Xnz3jw.mount: Deactivated successfully.
â–‘â–‘ Subject: Unit succeeded â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ The unit tmp-crun.Xnz3jw.mount has successfully entered the 'dead' state. Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97.scope: Deactivated successfully. â–‘â–‘ Subject: Unit succeeded â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ The unit libpod-21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97.scope has successfully entered the 'dead' state. Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 2024-07-27 12:33:27.542254668 -0400 EDT m=+0.066662918 container died 21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97 (image=localhost/podman-pause:5.1.2-1720656000, name=3fc987afc0fc-infra, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, io.buildah.version=1.36.0) Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth0 (unregistering): left allmulticast mode Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth0 (unregistering): left promiscuous mode Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: run-netns-netns\x2dc1fd9291\x2d1ee0\x2d6c8d\x2d7868\x2d8e0d222da8f6.mount: Deactivated successfully. â–‘â–‘ Subject: Unit succeeded â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ The unit run-netns-netns\x2dc1fd9291\x2d1ee0\x2d6c8d\x2d7868\x2d8e0d222da8f6.mount has successfully entered the 'dead' state. Jul 27 12:33:27 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 2024-07-27 12:33:27.645807774 -0400 EDT m=+0.170215713 container cleanup 21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97 (image=localhost/podman-pause:5.1.2-1720656000, name=3fc987afc0fc-infra, pod_id=3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265, io.buildah.version=1.36.0, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:33:28 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-6ed095a417e8f0a482152436f83644e275252114c32ea202df372f994582bf1b-merged.mount: Deactivated successfully. â–‘â–‘ Subject: Unit succeeded â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ The unit var-lib-containers-storage-overlay-6ed095a417e8f0a482152436f83644e275252114c32ea202df372f994582bf1b-merged.mount has successfully entered the 'dead' state. Jul 27 12:33:28 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97-userdata-shm.mount: Deactivated successfully. â–‘â–‘ Subject: Unit succeeded â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ The unit var-lib-containers-storage-overlay\x2dcontainers-21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97-userdata-shm.mount has successfully entered the 'dead' state. 
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: time="2024-07-27T12:33:37-04:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd2-httpd2 in 10 seconds, resorting to SIGKILL" Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad.scope: Deactivated successfully. â–‘â–‘ Subject: Unit succeeded â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ The unit libpod-28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad.scope has successfully entered the 'dead' state. Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 2024-07-27 12:33:37.566215056 -0400 EDT m=+10.090623353 container died 28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-be22017918fa5afa4e8e4e7a2b4fb5ce0fec83d715114bfa9d310b394fc77263-merged.mount: Deactivated successfully. â–‘â–‘ Subject: Unit succeeded â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ The unit var-lib-containers-storage-overlay-be22017918fa5afa4e8e4e7a2b4fb5ce0fec83d715114bfa9d310b394fc77263-merged.mount has successfully entered the 'dead' state. Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 2024-07-27 12:33:37.619498022 -0400 EDT m=+10.143906081 container cleanup 28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Removed slice machine-libpod_pod_3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265.slice - cgroup machine-libpod_pod_3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265.slice. â–‘â–‘ Subject: A stop job for unit machine-libpod_pod_3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265.slice has finished â–‘â–‘ Defined-By: systemd â–‘â–‘ Support: https://access.redhat.com/support â–‘â–‘ â–‘â–‘ A stop job for unit machine-libpod_pod_3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265.slice has finished. â–‘â–‘ â–‘â–‘ The job identifier is 2887 and the job result is done. 
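The warning that opens this stop sequence is podman's default 10-second stop timeout expiring: the test container evidently does not exit on SIGTERM, so every teardown in this log waits the full grace period and then kills. Where a container is known not to trap SIGTERM, the wait can be shortened explicitly; a sketch using the container name from this run:

    # 2-second grace period instead of the 10-second default; -t 0 sends SIGKILL immediately
    podman stop -t 2 httpd2-httpd2

The pod stop above performs the same SIGTERM / timeout / SIGKILL escalation, just with the default timeout.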
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 2024-07-27 12:33:37.652550525 -0400 EDT m=+10.176958687 container remove 28a07a6e8a801cd6f84385e3d31d70834b0d17347f7424b4c81938b703393bad (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z)
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 2024-07-27 12:33:37.691063242 -0400 EDT m=+10.215471196 container remove 21934980ccc10ecb78c2d3633f5f38d2dd8ef60d679cc00813b06946c70a7e97 (image=localhost/podman-pause:5.1.2-1720656000, name=3fc987afc0fc-infra, pod_id=3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, io.buildah.version=1.36.0)
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265.slice: Failed to open /run/systemd/transient/machine-libpod_pod_3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265.slice: No such file or directory
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 2024-07-27 12:33:37.704019265 -0400 EDT m=+10.228427404 pod remove 3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265 (image=, name=httpd2)
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 2024-07-27 12:33:37.70829221 -0400 EDT m=+10.232700351 container kill cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad (image=localhost/podman-pause:5.1.2-1720656000, name=42d68b4496fe-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service)
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit libpod-cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad.scope has successfully entered the 'dead' state.
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com conmon[26990]: conmon cfa46864c7ec0493ae03 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad.scope/container/memory.events
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 2024-07-27 12:33:37.718384278 -0400 EDT m=+10.242792573 container died cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad (image=localhost/podman-pause:5.1.2-1720656000, name=42d68b4496fe-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service)
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 2024-07-27 12:33:37.797193155 -0400 EDT m=+10.321601108 container remove cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad (image=localhost/podman-pause:5.1.2-1720656000, name=42d68b4496fe-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service)
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: Pods stopped:
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: 3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: Pods removed:
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: Error: removing pod 3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265 cgroup: removing pod 3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265 cgroup: Unit machine-libpod_pod_3fc987afc0fcfab30e8777f8ebf27fce38969878d374f1ad00ca4b1107c6d265.slice not loaded.
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: Secrets removed:
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: Error: %!s(<nil>)
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com podman[39935]: Volumes removed:
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state.
Jul 27 12:33:37 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Stopped podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play.
░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished.
░░
░░ The job identifier is 2885 and the job result is done.
Jul 27 12:33:38 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40094]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:33:38 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-c0f48c65598cd39955de2ec76b67f125e8b50d209acdb5eb066aeee985f1ce34-merged.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay-c0f48c65598cd39955de2ec76b67f125e8b50d209acdb5eb066aeee985f1ce34-merged.mount has successfully entered the 'dead' state.
Jul 27 12:33:38 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad-userdata-shm.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay\x2dcontainers-cfa46864c7ec0493ae037af522f39cd0a603f4dc849eb026c04a5797807e00ad-userdata-shm.mount has successfully entered the 'dead' state.
Jul 27 12:33:38 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40209]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_options=None
Jul 27 12:33:38 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40209]: ansible-containers.podman.podman_play version: 5.1.2, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml
Jul 27 12:33:38 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40209]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml
Jul 27 12:33:38 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40209]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed:
Jul 27 12:33:38 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40209]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr:
Jul 27 12:33:38 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40209]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0
Jul 27 12:33:39 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40336]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:33:39 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40449]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:33:40 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40570]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:33:41 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40684]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:33:42 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40799]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[40913]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reload requested from client PID 40916 ('systemctl') (unit session-5.scope)...
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading...
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading finished in 225 ms.
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Stopping podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play...
░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution.
░░
░░ The job identifier is 2888.
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:43.472230437 -0400 EDT m=+0.030222026 pod stop d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6 (image=, name=httpd3)
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: tmp-crun.Fi3dHc.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit tmp-crun.Fi3dHc.mount has successfully entered the 'dead' state.
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit libpod-b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1.scope has successfully entered the 'dead' state.
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:43.505576914 -0400 EDT m=+0.063568908 container died b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1 (image=localhost/podman-pause:5.1.2-1720656000, name=d934b8f7de34-infra, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, io.buildah.version=1.36.0)
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: run-r438989e88bd74f25b2e11a3ff478396f.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit run-r438989e88bd74f25b2e11a3ff478396f.scope has successfully entered the 'dead' state.
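Every podman-kube@ unit name in this log is derived from the kube file path by the systemd-escape call recorded above: "/" maps to "-" and a literal "-" is escaped to \x2d, which is why the paths look mangled inside unit names. The mapping can be reproduced directly:

    $ systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml
    podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service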
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth1 (unregistering): left allmulticast mode
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: veth1 (unregistering): left promiscuous mode
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com NetworkManager[678]: <info>  [1722098023.5585] device (podman1): state change: activated -> unmanaged (reason 'unmanaged-external-down', sys-iface-state: 'external')
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service...
░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit NetworkManager-dispatcher.service has begun execution.
░░
░░ The job identifier is 2890.
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service.
░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit NetworkManager-dispatcher.service has finished successfully.
░░
░░ The job identifier is 2890.
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: run-netns-netns\x2d3dfb464e\x2d725e\x2d20a2\x2db8cf\x2de52b030db400.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit run-netns-netns\x2d3dfb464e\x2d725e\x2d20a2\x2db8cf\x2de52b030db400.mount has successfully entered the 'dead' state.
Jul 27 12:33:43 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:43.678893399 -0400 EDT m=+0.236884881 container cleanup b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1 (image=localhost/podman-pause:5.1.2-1720656000, name=d934b8f7de34-infra, pod_id=d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, io.buildah.version=1.36.0)
Jul 27 12:33:44 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-113e6845a6a3283be45d8451f3060693f0f6e8dde2b9aa84c23ee8749e9437f1-merged.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay-113e6845a6a3283be45d8451f3060693f0f6e8dde2b9aa84c23ee8749e9437f1-merged.mount has successfully entered the 'dead' state.
Jul 27 12:33:44 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1-userdata-shm.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay\x2dcontainers-b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1-userdata-shm.mount has successfully entered the 'dead' state.
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: time="2024-07-27T12:33:53-04:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd3-httpd3 in 10 seconds, resorting to SIGKILL"
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:53.519648124 -0400 EDT m=+10.077639913 container died cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry)
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit libpod-cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8.scope has successfully entered the 'dead' state.
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-7dfeb4e5c5d86267a74cfadcc3a1936317249f7413341c2b3549606919360ba3-merged.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay-7dfeb4e5c5d86267a74cfadcc3a1936317249f7413341c2b3549606919360ba3-merged.mount has successfully entered the 'dead' state.
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:53.572742926 -0400 EDT m=+10.130734513 container cleanup cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test)
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Removed slice machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice - cgroup machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice.
░░ Subject: A stop job for unit machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice has finished.
░░
░░ The job identifier is 2970 and the job result is done.
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:53.581110894 -0400 EDT m=+10.139102531 pod stop d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6 (image=, name=httpd3)
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice: Failed to open /run/systemd/transient/machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice: No such file or directory
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: time="2024-07-27T12:33:53-04:00" level=error msg="Checking if infra needs to be stopped: removing pod d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6 cgroup: Unit machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice not loaded."
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:53.609670617 -0400 EDT m=+10.167662295 pod stop d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6 (image=, name=httpd3)
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice: Failed to open /run/systemd/transient/machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice: No such file or directory
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: time="2024-07-27T12:33:53-04:00" level=error msg="Checking if infra needs to be stopped: removing pod d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6 cgroup: Unit machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice not loaded."
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:53.625199102 -0400 EDT m=+10.183190816 container kill 3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8 (image=localhost/podman-pause:5.1.2-1720656000, name=b46e3d5b107f-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service)
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: tmp-crun.bAaH6x.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit tmp-crun.bAaH6x.mount has successfully entered the 'dead' state.
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: libpod-3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit libpod-3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8.scope has successfully entered the 'dead' state.
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:53.64334047 -0400 EDT m=+10.201332183 container died 3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8 (image=localhost/podman-pause:5.1.2-1720656000, name=b46e3d5b107f-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service)
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state.
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:53.672832378 -0400 EDT m=+10.230823870 container remove cf272fb64f907d0bdeb087821bcab3f874e448e85b816fb7d9962878c5c5b1f8 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry)
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:53.706274978 -0400 EDT m=+10.264266468 container remove b752438f066ce9faf605c7486f1f6ff867a81d90ea1198b0c44ff91e2c7d59d1 (image=localhost/podman-pause:5.1.2-1720656000, name=d934b8f7de34-infra, pod_id=d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, io.buildah.version=1.36.0)
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice: Failed to open /run/systemd/transient/machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice: No such file or directory
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:53.716159594 -0400 EDT m=+10.274151080 pod remove d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6 (image=, name=httpd3)
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: 2024-07-27 12:33:53.766680486 -0400 EDT m=+10.324671979 container remove 3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8 (image=localhost/podman-pause:5.1.2-1720656000, name=b46e3d5b107f-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service)
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: Pods stopped:
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: Pods removed:
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: Error: removing pod d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6 cgroup: removing pod d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6 cgroup: Unit machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice not loaded.
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: Secrets removed:
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: Error: %!s(<nil>)
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com podman[40973]: Volumes removed:
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state.
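The paired "Failed to open /run/systemd/transient/..." and "cgroup: Unit ... not loaded" messages above appear to be harmless ordering: systemd already dropped the transient pod slice when its last container exited, so podman's own request to remove the slice finds nothing left to do. The pod is gone regardless, which can be confirmed against the pod id from this run:

    # LoadState=not-found indicates the transient slice has already been cleaned up
    systemctl show -p LoadState \
      machine-libpod_pod_d934b8f7de34bc54d46a16a6adb769f71e105fd9b95764f463833eec9b2c48c6.slice

The stray "Error: %!s(<nil>)" under "Secrets removed:" is Go's fmt placeholder for a nil value, i.e. podman formatting an empty error rather than a real failure.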
Jul 27 12:33:53 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Stopped podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play.
░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished.
░░
░░ The job identifier is 2888 and the job result is done.
Jul 27 12:33:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[41146]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:33:54 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-a431caa463f71f2b50b4afe74a9d2c0f739b4dcc7ff334225aeff25db75bd4f2-merged.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay-a431caa463f71f2b50b4afe74a9d2c0f739b4dcc7ff334225aeff25db75bd4f2-merged.mount has successfully entered the 'dead' state.
Jul 27 12:33:54 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8-userdata-shm.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay\x2dcontainers-3d4d558c0f2fa33094c0bd8d5e83414ebd5d14339c39e2ae713129f7e70714f8-userdata-shm.mount has successfully entered the 'dead' state.
Jul 27 12:33:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[41261]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_options=None
Jul 27 12:33:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[41261]: ansible-containers.podman.podman_play version: 5.1.2, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml
Jul 27 12:33:54 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jul 27 12:33:55 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[41388]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:33:55 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[41501]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:33:55 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jul 27 12:33:56 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[41622]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None
Jul 27 12:33:57 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[41736]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:33:57 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[41887]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehmxutokkggciktqnztiomxcuvjhjqqh ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722098037.2598891-9584-137277880787959/AnsiballZ_podman_container_info.py'
Jul 27 12:33:57 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[41887]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-41887) opened.
Jul 27 12:33:57 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[41887]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Jul 27 12:33:57 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[41890]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jul 27 12:33:57 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-41891.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 158.
Jul 27 12:33:57 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[41887]: pam_unix(sudo:session): session closed for user podman_basic_user
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42046]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyisvsnzepriocmcdxkzdstaaaohxfri ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722098037.8880062-9594-269555214208535/AnsiballZ_command.py'
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42046]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-42046) opened.
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42046]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[42049]: ansible-ansible.legacy.command Invoked with _raw_params=podman network ls -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-42050.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 162.
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42046]: pam_unix(sudo:session): session closed for user podman_basic_user
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42205]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsvavmubqeckhpiwdvzjoqgesprjwhaf ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722098038.411431-9604-37186754175288/AnsiballZ_command.py'
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42205]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-42205) opened.
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42205]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[42208]: ansible-ansible.legacy.command Invoked with _raw_params=podman secret ls -n -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-42209.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 166.
Jul 27 12:33:58 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42205]: pam_unix(sudo:session): session closed for user podman_basic_user
Jul 27 12:33:59 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[42329]: ansible-ansible.legacy.command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None
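The disable-linger task above carries removes=/var/lib/systemd/linger/podman_basic_user, so Ansible only executes it while systemd's linger flag file still exists and skips it afterwards. The shell equivalent of that idempotence guard:

    # Mirror the module's removes= test: only disable linger if it is still on
    if [ -e /var/lib/systemd/linger/podman_basic_user ]; then
        loginctl disable-linger podman_basic_user
    fi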
Jul 27 12:33:59 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42479]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpxkatyrnuepulprpiuhwbfyqelyxnqc ; /usr/bin/python3.12 /var/tmp/ansible-tmp-1722098039.4663327-9624-95555905864496/AnsiballZ_command.py'
Jul 27 12:33:59 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42479]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-42479) opened.
Jul 27 12:33:59 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42479]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Jul 27 12:33:59 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[42482]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:33:59 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-42490.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 170.
Jul 27 12:33:59 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42479]: pam_unix(sudo:session): session closed for user podman_basic_user
Jul 27 12:34:00 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[42610]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd2 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:00 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jul 27 12:34:00 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[42731]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd3 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:00 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
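podman pod exists answers only through its exit status (0 when the pod is present, 1 when it is not), so the three checks above produce no stdout; the test only cares that each returns 1 after the teardown. For example:

    podman pod exists httpd2; echo rc=$?    # rc=1 once the pod has been removed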
Jul 27 12:34:01 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42888]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsadlykbrfvpashwwscrdqiwopancssi ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722098040.8511503-9648-214609914408569/AnsiballZ_command.py'
Jul 27 12:34:01 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42888]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-42888) opened.
Jul 27 12:34:01 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42888]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Jul 27 12:34:01 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[42891]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:01 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[42888]: pam_unix(sudo:session): session closed for user podman_basic_user
Jul 27 12:34:01 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[43007]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:01 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[43123]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:02 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[43239]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:04 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[43465]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:05 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[43584]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None
Jul 27 12:34:05 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[43698]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:34:06 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[43812]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:08 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[43927]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None
Jul 27 12:34:09 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[44041]: ansible-getent Invoked with database=group key=3001 fail_key=False service=None split=None
Jul 27 12:34:09 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[44155]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:10 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[44270]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:10 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[44384]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:11 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[44498]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:12 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[44612]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:12 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[44763]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scdhwojqbvpxnspmbsmzrmsvzozodgxp ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722098052.586484-9840-107139446055076/AnsiballZ_systemd.py'
Jul 27 12:34:12 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[44763]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-44763) opened.
Jul 27 12:34:12 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[44763]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Jul 27 12:34:13 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[44766]: ansible-systemd Invoked with name=podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None
Jul 27 12:34:13 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[44763]: pam_unix(sudo:session): session closed for user podman_basic_user
Jul 27 12:34:13 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[44881]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:13 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[44994]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:34:15 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[45107]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None
Jul 27 12:34:15 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[45221]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:34:16 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[45335]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:17 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[45450]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:17 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[45564]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None
Jul 27 12:34:18 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[45679]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:18 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[45792]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:34:20 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[45905]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:34:20 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[46019]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:21 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[46134]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[46248]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None
Jul 27 12:34:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[46363]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:23 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[46476]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:34:24 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[46589]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None
Jul 27 12:34:24 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[46703]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[46854]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pekcurrahborqpcffinfbotxxkvxiurm ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722098064.9924238-10055-53014654928287/AnsiballZ_podman_container_info.py'
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[46854]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-46854) opened.
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[46854]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[46857]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-46858.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 174.
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[46854]: pam_unix(sudo:session): session closed for user podman_basic_user
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[47013]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjxwumqfxciapyfxwbxyccogfhdfckzf ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1722098065.5211716-10066-257583667683822/AnsiballZ_command.py'
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[47013]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-47013) opened.
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[47013]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[47016]: ansible-ansible.legacy.command Invoked with _raw_params=podman network ls -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-47017.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 178.
Jul 27 12:34:25 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[47013]: pam_unix(sudo:session): session closed for user podman_basic_user
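The three grep pipelines near the top of this block assert that no podman-kube@ unit for httpd1/2/3 is still loaded and active. Because each runs under set -euo pipefail, grep exiting 1 on no match fails the whole pipeline, so a non-zero result is exactly what a clean teardown looks like here. Standalone, with the "no match" outcome made explicit:

    set -euo pipefail
    systemctl --system list-units -a -l --plain \
      | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active ' \
      || echo 'no matching unit is loaded and active'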
Jul 27 12:34:26 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[47173]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Jul 27 12:34:26 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[47176]: ansible-ansible.legacy.command Invoked with _raw_params=podman secret ls -n -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:26 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Started podman-47177.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 182.
Jul 27 12:34:26 ip-10-31-14-141.us-east-1.aws.redhat.com sudo[47173]: pam_unix(sudo:session): session closed for user podman_basic_user
Jul 27 12:34:26 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[47297]: ansible-ansible.legacy.command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None
Jul 27 12:34:27 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[47410]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:34:27 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[47523]: ansible-file Invoked with path=/tmp/lsr_ykmfaaw1_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:34:31 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[47673]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 27 12:34:32 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[47815]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:33 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[48041]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[48160]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None
Jul 27 12:34:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[48274]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:34:35 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[48388]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:37 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[48503]: ansible-tempfile Invoked with state=directory prefix=lsr_podman_config_ suffix= path=None
Jul 27 12:34:37 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[48616]: ansible-ansible.legacy.command Invoked with _raw_params=tar --ignore-failed-read -c -P -v -p -f /tmp/lsr_podman_config_w171idou/backup.tar /etc/containers/containers.conf.d/50-systemroles.conf /etc/containers/registries.conf.d/50-systemroles.conf /etc/containers/storage.conf /etc/containers/policy.json _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:38 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[48730]: ansible-user Invoked with name=user1 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on ip-10-31-14-141.us-east-1.aws.redhat.com update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jul 27 12:34:38 ip-10-31-14-141.us-east-1.aws.redhat.com useradd[48732]: new group: name=user1, GID=3002
Jul 27 12:34:38 ip-10-31-14-141.us-east-1.aws.redhat.com useradd[48732]: new user: name=user1, UID=3002, GID=3002, home=/home/user1, shell=/bin/bash, from=/dev/pts/0
Jul 27 12:34:38 ip-10-31-14-141.us-east-1.aws.redhat.com rsyslogd[613]: imjournal: journal files changed, reloading... [v8.2312.0-2.el10 try https://www.rsyslog.com/e/0 ]
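The tempfile and tar entries above are the save half of the test harness: the role's config files are archived with absolute paths before the test, and the matching tar xfvpP at 12:35:28 below restores them afterwards. A small Python sketch of that save/restore pattern; the file list and flags come from the log, the helper names are illustrative:

import os
import subprocess
import tempfile

CONFIGS = [
    "/etc/containers/containers.conf.d/50-systemroles.conf",
    "/etc/containers/registries.conf.d/50-systemroles.conf",
    "/etc/containers/storage.conf",
    "/etc/containers/policy.json",
]

def backup(paths=CONFIGS) -> str:
    # -P keeps absolute paths; --ignore-failed-read skips files that
    # do not exist yet instead of failing the whole archive.
    workdir = tempfile.mkdtemp(prefix="lsr_podman_config_")
    archive = os.path.join(workdir, "backup.tar")
    subprocess.run(["tar", "--ignore-failed-read", "-c", "-P", "-v", "-p",
                    "-f", archive, *paths], check=True)
    return archive

def restore(archive: str) -> None:
    # The inverse of backup(): extract with permissions (-p) back to
    # the original absolute locations (-P), as in the cleanup entry.
    subprocess.run(["tar", "xfvpP", archive], check=True)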
Jul 27 12:34:40 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[48963]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:40 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[49082]: ansible-getent Invoked with database=passwd key=user1 fail_key=False service=None split=None
Jul 27 12:34:41 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[49196]: ansible-getent Invoked with database=group key=3002 fail_key=False service=None split=None
Jul 27 12:34:41 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[49310]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:34:41 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[49425]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:42 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[49539]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:34:43 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[49653]: ansible-file Invoked with path=/home/user1/.config/containers/containers.conf.d state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:34:43 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[49766]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Jul 27 12:34:44 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[49856]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098083.4056299-10394-219612045307049/.source.conf dest=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf owner=user1 mode=0644 follow=False _original_basename=toml.j2 checksum=94370d6e765779f1c58daf02f667b8f0b74d91f6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:34:44 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[49969]: ansible-file Invoked with path=/home/user1/.config/containers/registries.conf.d state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:34:45 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[50082]: ansible-ansible.legacy.stat Invoked with
path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:34:45 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[50172]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098084.747849-10418-146732810650340/.source.conf dest=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf owner=user1 mode=0644 follow=False _original_basename=toml.j2 checksum=dfb9cd7094a81b3d1bb06512cc9b49a09c75639b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:45 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[50285]: ansible-file Invoked with path=/home/user1/.config/containers state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:46 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[50398]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/storage.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:34:46 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[50488]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098085.95819-10442-131737842277408/.source.conf dest=/home/user1/.config/containers/storage.conf owner=user1 mode=0644 follow=False _original_basename=toml.j2 checksum=d08574b6a1df63dbe1c939ff0bcc7c0b61d03044 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:47 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[50601]: ansible-file Invoked with path=/home/user1/.config/containers state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:47 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[50714]: ansible-stat Invoked with path=/home/user1/.config/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:47 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[50827]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/policy.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:34:48 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[50917]: ansible-ansible.legacy.copy Invoked with dest=/home/user1/.config/containers/policy.json owner=user1 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1722098087.5861802-10475-262484125048733/.source.json _original_basename=.82v8i689 follow=False checksum=6746c079ad563b735fc39f73d4876654b80b0a0d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None 
directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:48 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[51030]: ansible-stat Invoked with path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:49 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[51145]: ansible-stat Invoked with path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:49 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[51260]: ansible-stat Invoked with path=/home/user1/.config/containers/storage.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:50 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[51375]: ansible-stat Invoked with path=/home/user1/.config/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:51 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[51603]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:52 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[51722]: ansible-getent Invoked with database=group key=3002 fail_key=False service=None split=None Jul 27 12:34:52 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[51836]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:53 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[51951]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:53 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[52065]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[52179]: ansible-file Invoked with path=/home/user1/.config/containers/containers.conf.d state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[52292]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:34:55 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[52349]: ansible-ansible.legacy.file Invoked with owner=user1 mode=0644 dest=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf _original_basename=toml.j2 recurse=False state=file 
path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:55 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[52462]: ansible-file Invoked with path=/home/user1/.config/containers/registries.conf.d state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:56 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[52575]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:34:56 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[52632]: ansible-ansible.legacy.file Invoked with owner=user1 mode=0644 dest=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf _original_basename=toml.j2 recurse=False state=file path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:56 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[52745]: ansible-file Invoked with path=/home/user1/.config/containers state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:57 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[52858]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/storage.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:34:57 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[52915]: ansible-ansible.legacy.file Invoked with owner=user1 mode=0644 dest=/home/user1/.config/containers/storage.conf _original_basename=toml.j2 recurse=False state=file path=/home/user1/.config/containers/storage.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:58 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[53028]: ansible-file Invoked with path=/home/user1/.config/containers state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:58 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[53141]: 
ansible-stat Invoked with path=/home/user1/.config/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:59 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[53256]: ansible-slurp Invoked with path=/home/user1/.config/containers/policy.json src=/home/user1/.config/containers/policy.json Jul 27 12:34:59 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[53369]: ansible-stat Invoked with path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:00 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[53484]: ansible-stat Invoked with path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:00 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[53599]: ansible-stat Invoked with path=/home/user1/.config/containers/storage.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:00 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[53714]: ansible-stat Invoked with path=/home/user1/.config/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:02 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[53942]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:03 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[54061]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:35:03 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[54175]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:35:04 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[54289]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:05 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[54404]: ansible-file Invoked with path=/etc/containers/containers.conf.d state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:05 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[54517]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:05 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[54607]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098105.21415-10803-193350502916514/.source.conf dest=/etc/containers/containers.conf.d/50-systemroles.conf owner=root mode=0644 follow=False _original_basename=toml.j2 checksum=94370d6e765779f1c58daf02f667b8f0b74d91f6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Jul 27 12:35:06 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[54720]: ansible-file Invoked with path=/etc/containers/registries.conf.d state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:06 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[54833]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:07 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[54923]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098106.4464886-10827-228693572255732/.source.conf dest=/etc/containers/registries.conf.d/50-systemroles.conf owner=root mode=0644 follow=False _original_basename=toml.j2 checksum=dfb9cd7094a81b3d1bb06512cc9b49a09c75639b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:07 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[55036]: ansible-file Invoked with path=/etc/containers state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:07 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[55149]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/storage.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:08 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[55239]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098107.6736925-10851-11656137412744/.source.conf dest=/etc/containers/storage.conf owner=root mode=0644 follow=False _original_basename=toml.j2 checksum=d08574b6a1df63dbe1c939ff0bcc7c0b61d03044 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:08 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[55352]: ansible-file Invoked with path=/etc/containers state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:09 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[55465]: ansible-stat Invoked with path=/etc/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:09 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[55580]: ansible-slurp Invoked with path=/etc/containers/policy.json src=/etc/containers/policy.json Jul 27 12:35:10 
ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[55693]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/policy.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:10 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[55785]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/policy.json owner=root mode=0644 src=/root/.ansible/tmp/ansible-tmp-1722098109.7571483-10891-203787733281543/.source.json _original_basename=.l1nrv80h follow=False checksum=6746c079ad563b735fc39f73d4876654b80b0a0d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:11 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[55898]: ansible-stat Invoked with path=/etc/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:11 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[56013]: ansible-stat Invoked with path=/etc/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:11 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[56128]: ansible-stat Invoked with path=/etc/containers/storage.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:12 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[56243]: ansible-stat Invoked with path=/etc/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:13 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[56471]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:14 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[56590]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:35:15 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[56704]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:16 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[56819]: ansible-file Invoked with path=/etc/containers/containers.conf.d state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:16 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[56932]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:16 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[56989]: ansible-ansible.legacy.file Invoked with owner=root mode=0644 dest=/etc/containers/containers.conf.d/50-systemroles.conf _original_basename=toml.j2 recurse=False state=file path=/etc/containers/containers.conf.d/50-systemroles.conf force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:17 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[57102]: ansible-file Invoked with path=/etc/containers/registries.conf.d state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:17 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[57215]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:17 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[57272]: ansible-ansible.legacy.file Invoked with owner=root mode=0644 dest=/etc/containers/registries.conf.d/50-systemroles.conf _original_basename=toml.j2 recurse=False state=file path=/etc/containers/registries.conf.d/50-systemroles.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:18 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[57385]: ansible-file Invoked with path=/etc/containers state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:18 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[57498]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/storage.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:19 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[57555]: ansible-ansible.legacy.file Invoked with owner=root mode=0644 dest=/etc/containers/storage.conf _original_basename=toml.j2 recurse=False state=file path=/etc/containers/storage.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:19 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[57668]: ansible-file Invoked with path=/etc/containers state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:20 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[57781]: ansible-stat Invoked with path=/etc/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:20 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[57896]: ansible-slurp Invoked 
with path=/etc/containers/policy.json src=/etc/containers/policy.json Jul 27 12:35:21 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[58009]: ansible-stat Invoked with path=/etc/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:21 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[58124]: ansible-stat Invoked with path=/etc/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:21 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[58239]: ansible-stat Invoked with path=/etc/containers/storage.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[58354]: ansible-stat Invoked with path=/etc/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:22 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[58469]: ansible-slurp Invoked with path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf src=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf Jul 27 12:35:23 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[58582]: ansible-slurp Invoked with path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf src=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf Jul 27 12:35:23 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[58695]: ansible-slurp Invoked with path=/home/user1/.config/containers/storage.conf src=/home/user1/.config/containers/storage.conf Jul 27 12:35:24 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[58808]: ansible-slurp Invoked with path=/etc/containers/containers.conf.d/50-systemroles.conf src=/etc/containers/containers.conf.d/50-systemroles.conf Jul 27 12:35:24 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[58921]: ansible-slurp Invoked with path=/etc/containers/registries.conf.d/50-systemroles.conf src=/etc/containers/registries.conf.d/50-systemroles.conf Jul 27 12:35:25 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[59034]: ansible-slurp Invoked with path=/etc/containers/storage.conf src=/etc/containers/storage.conf Jul 27 12:35:25 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[59147]: ansible-file Invoked with state=absent path=/etc/containers/containers.conf.d/50-systemroles.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:26 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[59260]: ansible-file Invoked with state=absent path=/etc/containers/registries.conf.d/50-systemroles.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:26 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[59373]: ansible-file Invoked with state=absent path=/etc/containers/storage.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:26 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[59486]: ansible-file Invoked with state=absent path=/etc/containers/policy.json recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:27 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[59599]: ansible-file Invoked with state=absent path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:27 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[59712]: ansible-file Invoked with state=absent path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:27 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[59825]: ansible-file Invoked with state=absent path=/home/user1/.config/containers/storage.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:28 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[59938]: ansible-file Invoked with state=absent path=/home/user1/.config/containers/policy.json recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:28 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[60051]: ansible-ansible.legacy.command Invoked with _raw_params=tar xfvpP /tmp/lsr_podman_config_w171idou/backup.tar _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:29 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[60165]: ansible-file Invoked with state=absent path=/tmp/lsr_podman_config_w171idou recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:31 ip-10-31-14-141.us-east-1.aws.redhat.com 
python3.12[60315]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 27 12:35:32 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[60430]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[60656]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[60775]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:35:35 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[60889]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:35:35 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[61003]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:39 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[61155]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 27 12:35:42 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[61297]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:43 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[61523]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:44 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[61642]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:35:45 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[61756]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:35:45 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[61870]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:50 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[62022]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 27 12:35:51 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[62164]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:53 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[62390]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:53 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[62509]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:35:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[62623]: ansible-getent 
Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:35:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[62737]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:35:56 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[62852]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:35:57 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[62966]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:35:58 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[63081]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:35:59 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[63194]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/nopull.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Jul 27 12:35:59 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[63284]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098158.794963-11918-214056238671650/.source.container dest=/etc/containers/systemd/nopull.container owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=670d64fc68a9768edb20cad26df2acc703542d85 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:36:01 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[63510]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:36:02 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[63629]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:36:02 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[63743]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:36:04 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[63858]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:36:04 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[63972]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:36:06 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Starting grub-boot-success.service - Mark boot as successful...
░░ Subject: A start job for unit UNIT has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has begun execution.
░░
░░ The job identifier is 186.
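The copy above drops nopull.container into /etc/containers/systemd/, the drop-in directory quadlet scans during daemon-reload to generate transient units; that is why the cleanup below stops nopull.service even though no unit file of that name was ever written by hand. A sketch of quadlet's file-to-unit name mapping; the .container rule is confirmed by this log, the other suffixes are recalled from podman-systemd.unit(5) and should be treated as assumptions:

def quadlet_unit_name(filename: str) -> str:
    # foo.container -> foo.service (seen here: nopull.container -> nopull.service)
    # foo.kube -> foo.service; foo.volume -> foo-volume.service,
    # foo.network -> foo-network.service, foo.pod -> foo-pod.service (assumed)
    stem, _, kind = filename.rpartition(".")
    if kind in ("container", "kube"):
        return f"{stem}.service"
    if kind in ("volume", "network", "pod"):
        return f"{stem}-{kind}.service"
    raise ValueError(f"not a quadlet file: {filename}")

assert quadlet_unit_name("nopull.container") == "nopull.service"
assert quadlet_unit_name("bogus.container") == "bogus.service"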
Jul 27 12:36:06 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jul 27 12:36:06 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Finished grub-boot-success.service - Mark boot as successful.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 186.
Jul 27 12:36:06 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jul 27 12:36:06 ip-10-31-14-141.us-east-1.aws.redhat.com podman[64106]: 2024-07-27 12:36:06.489138737 -0400 EDT m=+0.031029409 image pull-error this_is_a_bogus_image:latest short-name resolution enforced but cannot prompt without a TTY
Jul 27 12:36:06 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[64226]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:36:07 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jul 27 12:36:07 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[64339]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/bogus.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Jul 27 12:36:07 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[64429]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098167.0931985-12082-65274845531540/.source.container dest=/etc/containers/systemd/bogus.container owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=1d087e679d135214e8ac9ccaf33b2222916efb7f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:36:09 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[64655]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:36:10 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Created slice background.slice - User Background Tasks Slice.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 199.
Jul 27 12:36:10 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
░░ Subject: A start job for unit UNIT has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has begun execution.
░░
░░ The job identifier is 198.
Jul 27 12:36:10 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[64774]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:36:10 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[23350]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 198.
Jul 27 12:36:10 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[64890]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:36:12 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[65005]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:36:13 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[65119]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:36:14 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[65234]: ansible-systemd Invoked with name=nopull.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None
Jul 27 12:36:15 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[65348]: ansible-stat Invoked with path=/etc/containers/systemd/nopull.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:36:16 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[65576]: ansible-file Invoked with path=/etc/containers/systemd/nopull.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:36:16 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[65689]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Jul 27 12:36:16 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reload requested from client PID 65690 ('systemctl') (unit session-5.scope)...
Jul 27 12:36:16 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading...
Jul 27 12:36:17 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading finished in 222 ms.
Jul 27 12:36:17 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jul 27 12:36:19 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[66091]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:36:20 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[66210]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:36:21 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[66324]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:36:23 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[66439]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:36:23 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[66553]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:36:24 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[66668]: ansible-systemd Invoked with name=bogus.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None
Jul 27 12:36:24 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reload requested from client PID 66671 ('systemctl') (unit session-5.scope)...
Jul 27 12:36:24 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading...
Jul 27 12:36:24 ip-10-31-14-141.us-east-1.aws.redhat.com quadlet-generator[65715]: Warning: bogus.container specifies the image "this_is_a_bogus_image" which not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details.
Jul 27 12:36:25 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading finished in 224 ms.
Jul 27 12:36:25 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[66838]: ansible-stat Invoked with path=/etc/containers/systemd/bogus.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:36:26 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[67066]: ansible-file Invoked with path=/etc/containers/systemd/bogus.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:36:26 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[67179]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Jul 27 12:36:26 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reload requested from client PID 67180 ('systemctl') (unit session-5.scope)...
Jul 27 12:36:26 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading...
Jul 27 12:36:27 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: Reloading finished in 212 ms.
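The quadlet-generator warning and the earlier podman "image pull-error ... short-name resolution enforced but cannot prompt without a TTY" entry are two views of the same failure: this_is_a_bogus_image carries no registry component, so it is a short name, and short-name resolution cannot prompt for an alias in a non-interactive run. A sketch of the usual fully-qualified test, recalled from the containers/image short-name rules rather than from this role's code:

def is_fully_qualified(image: str) -> bool:
    # A reference is fully qualified when its first path component
    # names a registry host: it contains a dot or a port colon, or
    # is exactly "localhost". Anything else is a short name.
    if "/" not in image:
        return False
    first = image.split("/", 1)[0]
    return "." in first or ":" in first or first == "localhost"

assert not is_fully_qualified("this_is_a_bogus_image:latest")
assert is_fully_qualified("registry.access.redhat.com/ubi9/ubi:latest")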
Jul 27 12:36:27 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:36:28 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[67468]: ansible-user Invoked with name=user_quadlet_basic uid=1111 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on ip-10-31-14-141.us-east-1.aws.redhat.com update_password=always group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Jul 27 12:36:28 ip-10-31-14-141.us-east-1.aws.redhat.com useradd[67470]: new group: name=user_quadlet_basic, GID=1111 Jul 27 12:36:28 ip-10-31-14-141.us-east-1.aws.redhat.com useradd[67470]: new user: name=user_quadlet_basic, UID=1111, GID=1111, home=/home/user_quadlet_basic, shell=/bin/bash, from=/dev/pts/0 Jul 27 12:36:30 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[67700]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:31 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[67819]: ansible-getent Invoked with database=passwd key=user_quadlet_basic fail_key=False service=None split=None Jul 27 12:36:32 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[67933]: ansible-getent Invoked with database=group key=1111 fail_key=False service=None split=None Jul 27 12:36:32 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[68047]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:32 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[68162]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:33 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[68276]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:34 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[68390]: ansible-ansible.legacy.command Invoked with _raw_params=set -x set -o pipefail exec 1>&2 #podman volume rm --all #podman network prune -f podman volume ls podman network ls podman secret ls podman container ls podman pod ls podman images systemctl list-units | grep quadlet _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:34 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: 
var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:36:34 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:36:35 ip-10-31-14-141.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:36:36 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[68665]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:37 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[68784]: ansible-getent Invoked with database=group key=1111 fail_key=False service=None split=None Jul 27 12:36:37 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[68898]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:38 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[69013]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:38 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[69127]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:40 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[69241]: ansible-ansible.legacy.command Invoked with _raw_params=journalctl -ex _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:42 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[69392]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 27 12:36:43 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[69534]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:43 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[69647]: ansible-ansible.legacy.dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False 
update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 27 12:36:44 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[69761]: ansible-ansible.legacy.dnf Invoked with name=['certmonger', 'python3-packaging'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 27 12:36:45 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[69875]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:45 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[69988]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:46 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[70101]: ansible-ansible.legacy.systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[70216]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=# # Ansible managed # # system_role:certificate provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 
12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 
2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[70231]: Certificate in file "/etc/pki/tls/certs/quadlet_demo.crt" issued by CA and saved. Jul 27 12:36:47 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:47 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:48 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[70344]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt Jul 27 12:36:48 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[70457]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key Jul 27 12:36:48 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[70570]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt Jul 27 12:36:49 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[70683]: ansible-ansible.legacy.command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:49 ip-10-31-14-141.us-east-1.aws.redhat.com certmonger[7402]: 2024-07-27 12:36:49 [7402] Wrote to /var/lib/certmonger/requests/20240727163647 Jul 27 12:36:49 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[70797]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:50 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[70910]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:50 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[71023]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:50 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[71136]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:52 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[71362]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None 
stdin=None Jul 27 12:36:53 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[71481]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:36:53 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[71595]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:36:54 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[71709]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:55 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[71824]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:56 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[71937]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:56 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[72050]: ansible-ansible.legacy.dnf Invoked with name=['firewalld'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 27 12:36:57 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[72164]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None Jul 27 12:36:58 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[72279]: ansible-ansible.legacy.systemd Invoked with name=firewalld state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Jul 27 12:36:58 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[72394]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Jul 27 12:36:59 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[72507]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Jul 27 12:37:00 ip-10-31-14-141.us-east-1.aws.redhat.com python3.12[72620]: ansible-ansible.legacy.command Invoked with _raw_params=journalctl -ex _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None 
removes=None stdin=None TASK [Check] ******************************************************************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:130 Saturday 27 July 2024 12:37:00 -0400 (0:00:00.461) 0:00:18.858 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "-a" ], "delta": "0:00:00.041481", "end": "2024-07-27 12:37:00.804533", "rc": 0, "start": "2024-07-27 12:37:00.763052" } STDOUT: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES TASK [Check pods] ************************************************************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:134 Saturday 27 July 2024 12:37:00 -0400 (0:00:00.426) 0:00:19.285 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "pod", "ps", "--ctr-ids", "--ctr-names", "--ctr-status" ], "delta": "0:00:00.039751", "end": "2024-07-27 12:37:01.234311", "failed_when_result": false, "rc": 0, "start": "2024-07-27 12:37:01.194560" } STDOUT: POD ID NAME STATUS CREATED INFRA ID IDS NAMES STATUS TASK [Check systemd] *********************************************************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:139 Saturday 27 July 2024 12:37:01 -0400 (0:00:00.430) 0:00:19.715 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -euo pipefail; systemctl list-units --all | grep quadlet", "delta": "0:00:00.014448", "end": "2024-07-27 12:37:01.642679", "failed_when_result": false, "rc": 1, "start": "2024-07-27 12:37:01.628231" } MSG: non-zero return code TASK [LS] ********************************************************************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:147 Saturday 27 July 2024 12:37:01 -0400 (0:00:00.406) 0:00:20.122 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ls", "-alrtF", "/etc/systemd/system" ], "delta": "0:00:00.004510", "end": "2024-07-27 12:37:02.042822", "failed_when_result": false, "rc": 0, "start": "2024-07-27 12:37:02.038312" } STDOUT: total 12 drwxr-xr-x. 5 root root 47 Jul 24 05:00 ../ lrwxrwxrwx. 1 root root 43 Jul 24 05:00 dbus.service -> /usr/lib/systemd/system/dbus-broker.service drwxr-xr-x. 2 root root 32 Jul 24 05:00 getty.target.wants/ lrwxrwxrwx. 1 root root 37 Jul 24 05:00 ctrl-alt-del.target -> /usr/lib/systemd/system/reboot.target drwxr-xr-x. 2 root root 48 Jul 24 05:01 network-online.target.wants/ lrwxrwxrwx. 1 root root 57 Jul 24 05:01 dbus-org.freedesktop.nm-dispatcher.service -> /usr/lib/systemd/system/NetworkManager-dispatcher.service drwxr-xr-x. 2 root root 38 Jul 24 05:01 dev-virtio\x2dports-org.qemu.guest_agent.0.device.wants/ lrwxrwxrwx. 1 root root 41 Jul 24 05:03 default.target -> /usr/lib/systemd/system/multi-user.target drwxr-xr-x. 2 root root 104 Jul 24 05:18 timers.target.wants/ drwxr-xr-x. 2 root root 31 Jul 24 05:18 remote-fs.target.wants/ drwxr-xr-x. 2 root root 119 Jul 24 05:18 cloud-init.target.wants/ drwxr-xr-x. 2 root root 91 Jul 24 05:18 sockets.target.wants/ drwxr-xr-x. 2 root root 4096 Jul 24 05:18 sysinit.target.wants/ drwxr-xr-x. 2 root root 4096 Jul 27 12:29 multi-user.target.wants/ lrwxrwxrwx. 1 root root 41 Jul 27 12:29 dbus-org.fedoraproject.FirewallD1.service -> /usr/lib/systemd/system/firewalld.service drwxr-xr-x. 
11 root root 4096 Jul 27 12:33 ./ TASK [Cleanup] ***************************************************************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:154 Saturday 27 July 2024 12:37:02 -0400 (0:00:00.404) 0:00:20.526 ********* included: fedora.linux_system_roles.podman for managed_node1 TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 27 July 2024 12:37:02 -0400 (0:00:00.075) 0:00:20.602 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 27 July 2024 12:37:02 -0400 (0:00:00.059) 0:00:20.661 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 27 July 2024 12:37:02 -0400 (0:00:00.039) 0:00:20.701 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 27 July 2024 12:37:02 -0400 (0:00:00.031) 0:00:20.733 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:20 Saturday 27 July 2024 12:37:02 -0400 (0:00:00.033) 0:00:20.766 ********* ok: [managed_node1] => (item=RedHat.yml) => { "ansible_facts": { "__podman_packages": [ "podman", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/vars/RedHat.yml" ], "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" } skipping: [managed_node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed_node1] => (item=CentOS_10.yml) => { "ansible_facts": { "__podman_packages": [ "iptables-nft", "podman", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_10.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_10.yml" } ok: [managed_node1] => (item=CentOS_10.yml) => { "ansible_facts": { "__podman_packages": [ "iptables-nft", "podman", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_10.yml" ], "ansible_loop_var": "item", "changed": false, "item": 
"CentOS_10.yml" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 27 July 2024 12:37:02 -0400 (0:00:00.072) 0:00:20.839 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 27 July 2024 12:37:03 -0400 (0:00:00.902) 0:00:21.741 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_use_copr | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 27 July 2024 12:37:03 -0400 (0:00:00.033) 0:00:21.775 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "(__podman_packages | difference(ansible_facts.packages))", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get podman version] ******************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:22 Saturday 27 July 2024 12:37:03 -0400 (0:00:00.035) 0:00:21.810 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.030399", "end": "2024-07-27 12:37:03.746415", "rc": 0, "start": "2024-07-27 12:37:03.716016" } STDOUT: podman version 5.1.2 TASK [fedora.linux_system_roles.podman : Set podman version] ******************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28 Saturday 27 July 2024 12:37:03 -0400 (0:00:00.417) 0:00:22.228 ********* ok: [managed_node1] => { "ansible_facts": { "podman_version": "5.1.2" }, "changed": false } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:32 Saturday 27 July 2024 12:37:03 -0400 (0:00:00.034) 0:00:22.262 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_version is version(\"4.2\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:39 Saturday 27 July 2024 12:37:03 -0400 (0:00:00.032) 0:00:22.294 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_version is version(\"4.4\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:49 Saturday 27 July 2024 12:37:03 -0400 (0:00:00.038) 0:00:22.333 ********* META: end_host conditional evaluated to False, continuing execution for managed_node1 skipping: [managed_node1] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed_node1" } MSG: end_host conditional evaluated to false, 
continuing execution for managed_node1 TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56 Saturday 27 July 2024 12:37:03 -0400 (0:00:00.040) 0:00:22.374 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2 Saturday 27 July 2024 12:37:04 -0400 (0:00:00.069) 0:00:22.444 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9 Saturday 27 July 2024 12:37:04 -0400 (0:00:00.036) 0:00:22.480 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16 Saturday 27 July 2024 12:37:04 -0400 (0:00:00.035) 0:00:22.516 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get group information] **************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28 Saturday 27 July 2024 12:37:04 -0400 (0:00:00.044) 0:00:22.560 ********* ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Set group name] *********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35 Saturday 27 July 2024 12:37:04 -0400 (0:00:00.399) 0:00:22.960 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 27 July 2024 12:37:04 -0400 (0:00:00.042) 0:00:23.002 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097719.096431, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "86395ad7ce62834c967dc50f963a68f042029188", "ctime": 1722097690.6782997, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 4755377, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719187200.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15728, "uid": 0, "version": "2656437169", "wgrp": false, "woth": 
false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 27 July 2024 12:37:04 -0400 (0:00:00.400) 0:00:23.402 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check group with getsubids] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.031) 0:00:23.433 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.030) 0:00:23.464 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.031) 0:00:23.496 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.030) 0:00:23.526 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.066) 0:00:23.593 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.030) 0:00:23.624 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.031) 0:00:23.656 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } 
TASK [fedora.linux_system_roles.podman : Set config file paths] **************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:62 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.030) 0:00:23.687 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle container.conf.d] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:71 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.062) 0:00:23.750 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.064) 0:00:23.814 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update container config file] ********* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.033) 0:00:23.847 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] ************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:74 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.032) 0:00:23.879 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.066) 0:00:23.945 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update registries config file] ******** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.033) 0:00:23.979 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle storage.conf] ****************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:77 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.031) 0:00:24.011 ********* included: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:5 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.067) 0:00:24.078 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update storage config file] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:13 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.033) 0:00:24.111 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle policy.json] ******************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.031) 0:00:24.143 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:6 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.070) 0:00:24.213 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:14 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.032) 0:00:24.245 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get the existing policy.json] ********* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:19 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.033) 0:00:24.278 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Write new policy.json file] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:25 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.067) 0:00:24.346 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [Manage firewall for specified ports] ************************************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:86 Saturday 27 July 2024 12:37:05 -0400 (0:00:00.033) 0:00:24.379 ********* included: fedora.linux_system_roles.firewall for managed_node1 TASK [fedora.linux_system_roles.firewall : Setup firewalld] ******************** task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:2 Saturday 27 July 2024 12:37:06 -0400 (0:00:00.117) 0:00:24.496 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml for managed_node1 TASK [fedora.linux_system_roles.firewall : Ensure ansible_facts used by role] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:2 Saturday 27 July 2024 12:37:06 -0400 (0:00:00.059) 0:00:24.555 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check if system is ostree] ********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:10 Saturday 27 July 2024 12:37:06 -0400 (0:00:00.039) 0:00:24.595 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __firewall_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Set flag to indicate system is ostree] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:15 Saturday 27 July 2024 12:37:06 -0400 (0:00:00.032) 0:00:24.628 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __firewall_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check if transactional-update exists in /sbin] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:22 Saturday 27 July 2024 12:37:06 -0400 (0:00:00.033) 0:00:24.662 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __firewall_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Set flag if transactional-update exists] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:27 Saturday 27 July 2024 12:37:06 -0400 (0:00:00.032) 0:00:24.694 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __firewall_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Install firewalld] ****************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:31 Saturday 27 July 2024 12:37:06 -0400 (0:00:00.033) 0:00:24.727 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: firewalld TASK [fedora.linux_system_roles.firewall : Notify user that reboot is needed to apply changes] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:43 Saturday 27 July 2024 12:37:07 -0400 (0:00:00.751) 0:00:25.479 ********* skipping: [managed_node1] => { "false_condition": "__firewall_is_transactional | d(false)" } TASK [fedora.linux_system_roles.firewall : Reboot transactional update systems] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:48 Saturday 27 July 2024 12:37:07 -0400 (0:00:00.033) 0:00:25.513 ********* skipping: 
[managed_node1] => { "changed": false, "false_condition": "__firewall_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Fail if reboot is needed and not set] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:53 Saturday 27 July 2024 12:37:07 -0400 (0:00:00.032) 0:00:25.545 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Collect service facts] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:5 Saturday 27 July 2024 12:37:07 -0400 (0:00:00.033) 0:00:25.579 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Attempt to stop and disable conflicting services] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:9 Saturday 27 July 2024 12:37:07 -0400 (0:00:00.030) 0:00:25.609 ********* skipping: [managed_node1] => (item=nftables) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "nftables", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=iptables) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "iptables", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=ufw) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "ufw", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.firewall : Unmask firewalld service] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:22 Saturday 27 July 2024 12:37:07 -0400 (0:00:00.040) 0:00:25.650 ********* ok: [managed_node1] => { "changed": false, "name": "firewalld", "status": { "AccessSELinuxContext": "system_u:object_r:firewalld_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2024-07-27 12:29:18 EDT", "ActiveEnterTimestampMonotonic": "309776823", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "dbus.socket dbus-broker.service sysinit.target basic.target system.slice polkit.service", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2024-07-27 12:29:17 EDT", "AssertTimestampMonotonic": "308800686", "Before": "network-pre.target multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "727790000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid 
cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ConditionTimestampMonotonic": "308800682", "ConfigurationDirectoryMode": "0755", "Conflicts": "ebtables.service shutdown.target ip6tables.service iptables.service ipset.service nftables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlGroupId": "4842", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "\"man:firewalld(1)\"", "DynamicUser": "no", "EffectiveMemoryHigh": "3700936704", "EffectiveMemoryMax": "3700936704", "EffectiveTasksMax": "22402", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ExecMainHandoffTimestampMonotonic": "308813520", "ExecMainPID": "11870", "ExecMainStartTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ExecMainStartTimestampMonotonic": "308803091", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2024-07-27 12:29:17 EDT", "InactiveExitTimestampMonotonic": 
"308803722", "InvocationID": "b164b44216a943caa07e4d87cac33d98", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "14001", "LimitNPROCSoft": "14001", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14001", "LimitSIGPENDINGSoft": "14001", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "11870", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3137830912", "MemoryCurrent": "33144832", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "33751040", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "0", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "0", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "firewalld.service dbus-org.fedoraproject.FirewallD1.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice sysinit.target dbus.socket", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": 
"root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2024-07-27 12:36:27 EDT", "StateChangeTimestampMonotonic": "738844426", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22402", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Enable and start firewalld service] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:28 Saturday 27 July 2024 12:37:07 -0400 (0:00:00.574) 0:00:26.224 ********* ok: [managed_node1] => { "changed": false, "enabled": true, "name": "firewalld", "state": "started", "status": { "AccessSELinuxContext": "system_u:object_r:firewalld_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2024-07-27 12:29:18 EDT", "ActiveEnterTimestampMonotonic": "309776823", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "dbus.socket dbus-broker.service sysinit.target basic.target system.slice polkit.service", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2024-07-27 12:29:17 EDT", "AssertTimestampMonotonic": "308800686", "Before": "network-pre.target multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "727790000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill 
cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ConditionTimestampMonotonic": "308800682", "ConfigurationDirectoryMode": "0755", "Conflicts": "ebtables.service shutdown.target ip6tables.service iptables.service ipset.service nftables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlGroupId": "4842", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "\"man:firewalld(1)\"", "DynamicUser": "no", "EffectiveMemoryHigh": "3700936704", "EffectiveMemoryMax": "3700936704", "EffectiveTasksMax": "22402", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ExecMainHandoffTimestampMonotonic": "308813520", "ExecMainPID": "11870", "ExecMainStartTimestamp": "Sat 2024-07-27 12:29:17 EDT", "ExecMainStartTimestampMonotonic": "308803091", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2024-07-27 12:29:17 EDT", "InactiveExitTimestampMonotonic": 
"308803722", "InvocationID": "b164b44216a943caa07e4d87cac33d98", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "14001", "LimitNPROCSoft": "14001", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14001", "LimitSIGPENDINGSoft": "14001", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "11870", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3139985408", "MemoryCurrent": "33144832", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "33751040", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "0", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "0", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "firewalld.service dbus-org.fedoraproject.FirewallD1.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice sysinit.target dbus.socket", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": 
"root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2024-07-27 12:36:27 EDT", "StateChangeTimestampMonotonic": "738844426", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22402", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Check if previous replaced is defined] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:34 Saturday 27 July 2024 12:37:08 -0400 (0:00:00.570) 0:00:26.795 ********* ok: [managed_node1] => { "ansible_facts": { "__firewall_previous_replaced": false, "__firewall_python_cmd": "/usr/bin/python3.12", "__firewall_report_changed": true }, "changed": false } TASK [fedora.linux_system_roles.firewall : Get config files, checksums before and remove] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:43 Saturday 27 July 2024 12:37:08 -0400 (0:00:00.042) 0:00:26.838 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Tell firewall module it is able to report changed] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:55 Saturday 27 July 2024 12:37:08 -0400 (0:00:00.069) 0:00:26.908 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Configure firewall] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:71 Saturday 27 July 2024 
12:37:08 -0400 (0:00:00.031) 0:00:26.939 *********
ok: [managed_node1] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "__firewall_changed": false, "ansible_loop_var": "item", "changed": false, "item": { "port": "8000/tcp", "state": "enabled" } }
ok: [managed_node1] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "__firewall_changed": false, "ansible_loop_var": "item", "changed": false, "item": { "port": "9000/tcp", "state": "enabled" } }
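The two loop results above open TCP ports 8000 and 9000 for the demo app. A rough manual equivalent, assuming the standard firewall-cmd front end rather than the Python bindings the role actually drives, would be:

    # hypothetical manual equivalent of the role's port configuration
    firewall-cmd --permanent --add-port=8000/tcp
    firewall-cmd --permanent --add-port=9000/tcp
    firewall-cmd --reload    # apply the permanent config to the running firewall

TASK [fedora.linux_system_roles.firewall : Gather firewall config information] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:120
Saturday 27 July 2024 12:37:09 -0400 (0:00:00.998) 0:00:27.937 *********
skipping: [managed_node1] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall | length == 1", "item": { "port": "8000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" }
skipping: [managed_node1] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall | length == 1", "item": { "port": "9000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" }
skipping: [managed_node1] => { "changed": false }
MSG: All items skipped
TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] *******
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:130
Saturday 27 July 2024 12:37:09 -0400 (0:00:00.051) 0:00:27.989 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "firewall | length == 1", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Gather firewall config if no arguments] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:139
Saturday 27 July 2024 12:37:09 -0400 (0:00:00.036) 0:00:28.026 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "firewall == None or firewall | length == 0", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] *******
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:144
Saturday 27 July 2024 12:37:09 -0400 (0:00:00.034) 0:00:28.060 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "firewall == None or firewall | length == 0", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Get config files, checksums after] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:153
Saturday 27 July 2024 12:37:09 -0400 (0:00:00.034) 0:00:28.094 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Calculate what has changed] *********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:163
Saturday 27 July 2024 12:37:09 -0400 (0:00:00.030) 0:00:28.125 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Show diffs] *************************
task path: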
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:169 Saturday 27 July 2024 12:37:09 -0400 (0:00:00.032) 0:00:28.158 ********* skipping: [managed_node1] => { "false_condition": "__firewall_previous_replaced | bool" } TASK [Manage selinux for specified ports] ************************************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:93 Saturday 27 July 2024 12:37:09 -0400 (0:00:00.048) 0:00:28.206 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_selinux_ports | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:100 Saturday 27 July 2024 12:37:09 -0400 (0:00:00.033) 0:00:28.239 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] ******* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:104 Saturday 27 July 2024 12:37:09 -0400 (0:00:00.033) 0:00:28.272 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle credential files - present] **** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:113 Saturday 27 July 2024 12:37:09 -0400 (0:00:00.028) 0:00:28.301 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle secrets] *********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:122 Saturday 27 July 2024 12:37:09 -0400 (0:00:00.030) 0:00:28.332 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed_node1 => (item=(censored due to no_log)) included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed_node1 => (item=(censored due to no_log)) included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed_node1 => (item=(censored due to no_log)) TASK [fedora.linux_system_roles.podman : Set variables part 1] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3 Saturday 27 July 2024 12:37:10 -0400 (0:00:00.103) 0:00:28.435 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set variables part 2] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7 Saturday 27 July 2024 12:37:10 -0400 (0:00:00.037) 0:00:28.473 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:13
Saturday 27 July 2024 12:37:10 -0400 (0:00:00.040) 0:00:28.514 *********
included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1
TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12
Saturday 27 July 2024 12:37:10 -0400 (0:00:00.051) 0:00:28.566 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 27 July 2024 12:37:10 -0400 (0:00:00.076) 0:00:28.643 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 27 July 2024 12:37:10 -0400 (0:00:00.031) 0:00:28.674 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:18
Saturday 27 July 2024 12:37:10 -0400 (0:00:00.032) 0:00:28.707 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Manage each secret] *******************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:34
Saturday 27 July 2024 12:37:10 -0400 (0:00:00.031) 0:00:28.738 *********
[WARNING]: Using a variable for a task's 'args' is unsafe in some situations (see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }
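Each handle_secret.yml pass drives podman's secret store; the names and values are hidden by no_log. A manual equivalent would look roughly like this, with a placeholder secret name rather than the one the test actually uses:

    # hypothetical manual secret management; "demo-secret" is a placeholder
    printf '%s' 'example-password' | podman secret create demo-secret -
    podman secret ls             # verify the secret exists
    podman secret rm demo-secret

TASK [fedora.linux_system_roles.podman : Set variables part 1] *****************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3
Saturday 27 July 2024 12:37:10 -0400 (0:00:00.562) 0:00:29.301 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_user": "root" }, "changed": false }
TASK [fedora.linux_system_roles.podman : Set variables part 2] *****************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7
Saturday 27 July 2024 12:37:10 -0400 (0:00:00.036) 0:00:29.338 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false }
TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:13
Saturday 27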
July 2024 12:37:10 -0400 (0:00:00.041) 0:00:29.380 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.050) 0:00:29.430 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.030) 0:00:29.461 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.031) 0:00:29.493 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:18 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.031) 0:00:29.524 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Manage each secret] ******************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:34 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.031) 0:00:29.556 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Set variables part 1] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.442) 0:00:29.998 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set variables part 2] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.037) 0:00:30.035 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:13 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.040) 0:00:30.076 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.051) 0:00:30.128 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.031) 0:00:30.159 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.030) 0:00:30.190 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:18 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.031) 0:00:30.222 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Manage each secret] ******************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:34 Saturday 27 July 2024 12:37:11 -0400 (0:00:00.030) 0:00:30.252 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle Kubernetes specifications] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129 Saturday 27 July 2024 12:37:12 -0400 (0:00:00.483) 0:00:30.735 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle Quadlet specifications] ******** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:136 Saturday 27 July 2024 12:37:12 -0400 (0:00:00.029) 0:00:30.764 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log)) included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log)) included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log)) included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log)) included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log)) 
included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log))
TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14
Saturday 27 July 2024 12:37:12 -0400 (0:00:00.170) 0:00:30.935 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "quadlet-demo.kube", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "[Install]\nWantedBy=default.target\n\n[Unit]\nRequires=quadlet-demo-mysql.service\nAfter=quadlet-demo-mysql.service\n\n[Kube]\n# Point to the yaml file in the same directory\nYaml=quadlet-demo.yml\n# Use the quadlet-demo network\nNetwork=quadlet-demo.network\n# Publish the envoy proxy data port\nPublishPort=8000:8080\n# Publish the envoy proxy admin port\nPublishPort=9000:9901\n# Use the envoy proxy config map in the same directory\nConfigMap=envoy-proxy-configmap.yml", "__podman_quadlet_template_src": "" }, "changed": false }
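Unescaped, that __podman_quadlet_str is the quadlet-demo.kube unit the role manages under /etc/containers/systemd; the .kube suffix tells podman's quadlet generator to build quadlet-demo.service from the referenced Kubernetes YAML:

    [Install]
    WantedBy=default.target

    [Unit]
    Requires=quadlet-demo-mysql.service
    After=quadlet-demo-mysql.service

    [Kube]
    # Point to the yaml file in the same directory
    Yaml=quadlet-demo.yml
    # Use the quadlet-demo network
    Network=quadlet-demo.network
    # Publish the envoy proxy data port
    PublishPort=8000:8080
    # Publish the envoy proxy admin port
    PublishPort=9000:9901
    # Use the envoy proxy config map in the same directory
    ConfigMap=envoy-proxy-configmap.yml

TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25
Saturday 27 July 2024 12:37:12 -0400 (0:00:00.046) 0:00:30.982 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false }
TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] *****
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35
Saturday 27 July 2024 12:37:12 -0400 (0:00:00.041) 0:00:31.023 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_file_src", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48
Saturday 27 July 2024 12:37:12 -0400 (0:00:00.034) 0:00:31.057 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-demo", "__podman_quadlet_type": "kube", "__podman_rootless": false }, "changed": false }
TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 27 July 2024 12:37:12 -0400 (0:00:00.050) 0:00:31.108 *********
included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1
TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2
Saturday 27 July 2024 12:37:12 -0400 (0:00:00.065) 0:00:31.174 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: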
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9
Saturday 27 July 2024 12:37:12 -0400 (0:00:00.035) 0:00:31.210 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16
Saturday 27 July 2024 12:37:12 -0400 (0:00:00.036) 0:00:31.246 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false }
TASK [fedora.linux_system_roles.podman : Get group information] ****************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28
Saturday 27 July 2024 12:37:12 -0400 (0:00:00.044) 0:00:31.291 *********
ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false }
TASK [fedora.linux_system_roles.podman : Set group name] ***********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35
Saturday 27 July 2024 12:37:13 -0400 (0:00:00.397) 0:00:31.688 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false }
TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 27 July 2024 12:37:13 -0400 (0:00:00.042) 0:00:31.731 *********
ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097719.096431, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "86395ad7ce62834c967dc50f963a68f042029188", "ctime": 1722097690.6782997, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 4755377, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719187200.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15728, "uid": 0, "version": "2656437169", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }
TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 27 July 2024 12:37:13 -0400 (0:00:00.394) 0:00:32.126 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Check group with getsubids] ***********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 27 July 2024 12:37:13 -0400 (0:00:00.031) 0:00:32.157 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }
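Because __podman_user is root here, all of the subordinate ID checks are skipped. For a rootless user the role would validate the subuid/subgid ranges via the getsubids binary it just stat'ed; a manual check would look roughly like:

    # hypothetical rootless-user check; "someuser" is a placeholder
    getsubids someuser        # subuid ranges known to shadow-utils
    getsubids -g someuser     # subgid ranges
    grep '^someuser:' /etc/subuid /etc/subgid   # flat-file fallback

TASK [fedora.linux_system_roles.podman : Set user subuid and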
subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:37:13 -0400 (0:00:00.032) 0:00:32.190 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:37:13 -0400 (0:00:00.031) 0:00:32.222 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:37:13 -0400 (0:00:00.075) 0:00:32.297 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:37:13 -0400 (0:00:00.032) 0:00:32.329 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:37:13 -0400 (0:00:00.033) 0:00:32.363 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:37:13 -0400 (0:00:00.031) 0:00:32.394 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62 Saturday 27 July 2024 12:37:14 -0400 (0:00:00.032) 0:00:32.427 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": [ "quadlet-demo.yml" ], "__podman_service_name": "quadlet-demo.service", "__podman_systemd_scope": "system", "__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 27 July 2024 12:37:14 -0400 (0:00:00.056) 0:00:32.483 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": 
false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77 Saturday 27 July 2024 12:37:14 -0400 (0:00:00.036) 0:00:32.520 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87 Saturday 27 July 2024 12:37:14 -0400 (0:00:00.030) 0:00:32.551 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-demo.kube", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103 Saturday 27 July 2024 12:37:14 -0400 (0:00:00.081) 0:00:32.632 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110 Saturday 27 July 2024 12:37:14 -0400 (0:00:00.040) 0:00:32.672 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 27 July 2024 12:37:14 -0400 (0:00:00.082) 0:00:32.755 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 27 July 2024 12:37:14 -0400 (0:00:00.032) 0:00:32.787 ********* ok: [managed_node1] => { "changed": false, "failed_when_result": false } MSG: Could not find the requested service quadlet-demo.service: host TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33 Saturday 27 July 2024 12:37:14 -0400 (0:00:00.557) 0:00:33.344 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38 Saturday 27 July 2024 12:37:15 -0400 (0:00:00.388) 0:00:33.733 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42
Saturday 27 July 2024 12:37:15 -0400 (0:00:00.030) 0:00:33.764 *********
ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/quadlet-demo.kube", "state": "absent" }
TASK [fedora.linux_system_roles.podman : Refresh systemd] **********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48
Saturday 27 July 2024 12:37:15 -0400 (0:00:00.391) 0:00:34.155 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Remove managed resource] **************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58
Saturday 27 July 2024 12:37:15 -0400 (0:00:00.034) 0:00:34.189 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }
TASK [fedora.linux_system_roles.podman : Remove volumes] ***********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95
Saturday 27 July 2024 12:37:15 -0400 (0:00:00.034) 0:00:34.223 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }
TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] *********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112
Saturday 27 July 2024 12:37:15 -0400 (0:00:00.047) 0:00:34.270 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false }
TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116
Saturday 27 July 2024 12:37:15 -0400 (0:00:00.033) 0:00:34.304 *********
changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.067339", "end": "2024-07-27 12:37:16.288131", "rc": 0, "start": "2024-07-27 12:37:16.220792" }
STDOUT:
dd68369a4c98aa462789d69687e5025b82f278ebe66afc63c07c145727b83b12
9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f
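Taken together, this cleanup pass is roughly equivalent to the following manual steps; the service stop is a no-op here because quadlet-demo.service was never generated, and the daemon-reload is skipped because the quadlet file was already absent:

    # approximate manual equivalent of cleanup_quadlet_spec.yml for this unit
    systemctl stop quadlet-demo.service               # fails here: unit not found
    rm -f /etc/containers/systemd/quadlet-demo.kube   # remove the quadlet file
    systemctl daemon-reload                           # regenerate quadlet units
    podman image prune --all -f                       # drop now-unused images

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127
Saturday 27 July 2024 12:37:16 -0400 (0:00:00.464) 0:00:34.768 *********
included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1
TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12
Saturday 27 July 2024 12:37:16 -0400 (0:00:00.099) 0:00:34.868 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: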
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 27 July 2024 12:37:16 -0400 (0:00:00.032) 0:00:34.900 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 27 July 2024 12:37:16 -0400 (0:00:00.031) 0:00:34.932 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 27 July 2024 12:37:16 -0400 (0:00:00.033) 0:00:34.966 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.039146", "end": "2024-07-27 12:37:16.921185", "rc": 0, "start": "2024-07-27 12:37:16.882039" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146 Saturday 27 July 2024 12:37:16 -0400 (0:00:00.436) 0:00:35.402 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.038967", "end": "2024-07-27 12:37:17.361994", "rc": 0, "start": "2024-07-27 12:37:17.323027" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155 Saturday 27 July 2024 12:37:17 -0400 (0:00:00.440) 0:00:35.843 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.033970", "end": "2024-07-27 12:37:17.800503", "rc": 0, "start": "2024-07-27 12:37:17.766533" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164 Saturday 27 July 2024 12:37:17 -0400 (0:00:00.442) 0:00:36.285 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.039574", "end": "2024-07-27 12:37:18.246471", "rc": 0, "start": "2024-07-27 12:37:18.206897" } STDOUT: podman podman-default-kube-network TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173 Saturday 27 July 2024 12:37:18 -0400 (0:00:00.441) 0:00:36.726 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183 Saturday 27 July 2024 12:37:18 -0400 (0:00:00.438) 0:00:37.165 ********* ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": 
"NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "certmonger.service": { "name": "certmonger.service", "source": "systemd", "state": "running", "status": "enabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", 
"state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, 
"gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "failed" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { 
"name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "running", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" 
}, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", 
"status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": 
"systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": 
"systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", 
"state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "unbound-anchor.service": { "name": "unbound-anchor.service", "source": "systemd", "state": "stopped", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user-runtime-dir@3001.service": { "name": "user-runtime-dir@3001.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "user@3001.service": { "name": "user@3001.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114 Saturday 27 July 2024 12:37:21 -0400 (0:00:02.418) 0:00:39.583 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": 
"Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 27 July 2024 12:37:21 -0400 (0:00:00.035) 0:00:39.618 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: wp-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: quadlet-demo\nspec:\n containers:\n - name: wordpress\n image: quay.io/linux-system-roles/wordpress:4.8-apache\n env:\n - name: WORDPRESS_DB_HOST\n value: quadlet-demo-mysql\n - name: WORDPRESS_DB_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-root-password-kube\n key: password\n volumeMounts:\n - name: wordpress-persistent-storage\n mountPath: /var/www/html\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: envoy\n image: quay.io/linux-system-roles/envoyproxy:v1.25.0\n volumeMounts:\n - name: config-volume\n mountPath: /etc/envoy\n - name: certificates\n mountPath: /etc/envoy-certificates\n env:\n - name: ENVOY_UID\n value: \"0\"\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n volumes:\n - name: config-volume\n configMap:\n name: envoy-proxy-config\n - name: certificates\n secret:\n secretName: envoy-certificates\n - name: wordpress-persistent-storage\n persistentVolumeClaim:\n claimName: wp-pv-claim\n - name: www # not used - for testing hostpath\n hostPath:\n path: /tmp/httpd3\n - name: create # not used - for testing hostpath\n hostPath:\n path: /tmp/httpd3-create\n", "__podman_quadlet_template_src": "quadlet-demo.yml.j2" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 27 July 2024 12:37:21 -0400 (0:00:00.098) 0:00:39.717 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 27 July 2024 12:37:21 -0400 (0:00:00.046) 0:00:39.763 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_str", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 27 July 2024 12:37:21 -0400 (0:00:00.037) 0:00:39.800 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-demo", "__podman_quadlet_type": "yml", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 
TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 27 July 2024 12:37:21 -0400 (0:00:00.054) 0:00:39.855 *********
included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2
Saturday 27 July 2024 12:37:21 -0400 (0:00:00.071) 0:00:39.926 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9
Saturday 27 July 2024 12:37:21 -0400 (0:00:00.138) 0:00:40.065 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16
Saturday 27 July 2024 12:37:21 -0400 (0:00:00.041) 0:00:40.107 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get group information] ****************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28
Saturday 27 July 2024 12:37:21 -0400 (0:00:00.050) 0:00:40.157 *********
ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set group name] ***********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.407) 0:00:40.565 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.048) 0:00:40.613 *********
ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097719.096431, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "86395ad7ce62834c967dc50f963a68f042029188", "ctime": 1722097690.6782997, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 4755377, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719187200.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15728, "uid": 0, "version": "2656437169", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }
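
The stat above confirms /usr/bin/getsubids is present, so subordinate-ID lookups for rootless users would go through that helper rather than parsing /etc/subuid and /etc/subgid by hand; because this run manages root, every getsubids/subuid task that follows is skipped. A sketch of the same probe outside the role (the task names and the podman_user variable are mine, for illustration):

    # Sketch: probe for getsubids, then query subordinate UIDs for a user.
    - name: See if getsubids exists
      ansible.builtin.stat:
        path: /usr/bin/getsubids
      register: getsubids_stat

    - name: Query subordinate UIDs (meaningful only for non-root users)
      ansible.builtin.command: getsubids {{ podman_user }}
      register: subuid_info
      changed_when: false
      when:
        - getsubids_stat.stat.exists
        - podman_user not in ["root", "0"]
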
TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.403) 0:00:41.016 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check group with getsubids] ***********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.036) 0:00:41.053 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.035) 0:00:41.089 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.034) 0:00:41.124 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.035) 0:00:41.159 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.035) 0:00:41.195 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.034) 0:00:41.230 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] *****
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.035) 0:00:41.265 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }
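
Part 3 below turns the user/group facts into systemd placement: with __podman_rootless false the role targets the system scope, /run/user/0 becomes XDG_RUNTIME_DIR, and part 4 selects /etc/containers/systemd as the quadlet directory. A rootless user would instead get the user scope and the standard per-user Quadlet search location. A sketch of that branch (variable names here are mine):

    # Sketch: scope-dependent settings comparable to the facts set in parts 3-4.
    - name: Choose systemd scope and quadlet directory
      ansible.builtin.set_fact:
        unit_scope: "{{ 'user' if podman_rootless | bool else 'system' }}"
        quadlet_dir: "{{ (user_home ~ '/.config/containers/systemd') if podman_rootless | bool else '/etc/containers/systemd' }}"
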
TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.035) 0:00:41.300 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "", "__podman_systemd_scope": "system", "__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.060) 0:00:41.361 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77
Saturday 27 July 2024 12:37:22 -0400 (0:00:00.039) 0:00:41.400 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87
Saturday 27 July 2024 12:37:23 -0400 (0:00:00.080) 0:00:41.481 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-demo.yml", "__podman_volumes": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103
Saturday 27 July 2024 12:37:23 -0400 (0:00:00.082) 0:00:41.564 *********
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Cleanup quadlets] *********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110
Saturday 27 July 2024 12:37:23 -0400 (0:00:00.044) 0:00:41.608 *********
included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4
Saturday 27 July 2024 12:37:23 -0400 (0:00:00.085) 0:00:41.694 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stop and disable service] *************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12
Saturday 27 July 2024 12:37:23 -0400 (0:00:00.034) 0:00:41.728 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_service_name | length > 0", "skip_reason": "Conditional result was False" }
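
The cleanup pass that follows is ordering-sensitive: stop and disable the generated unit first (skipped here because a plain .yml quadlet has no service of its own, so __podman_service_name is empty), then remove the quadlet file, then reload systemd so the Quadlet generator drops the unit. Done by hand it would look roughly like this (a sketch, not the role's implementation):

    # Sketch: remove a system-scope quadlet file and refresh systemd.
    - name: Remove the quadlet file
      ansible.builtin.file:
        path: /etc/containers/systemd/quadlet-demo.yml
        state: absent
      register: quadlet_removed

    - name: Refresh systemd so the generated unit disappears
      ansible.builtin.systemd:
        daemon_reload: true
      when: quadlet_removed is changed
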
TASK [fedora.linux_system_roles.podman : See if quadlet file exists] ***********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33
Saturday 27 July 2024 12:37:23 -0400 (0:00:00.038) 0:00:41.767 *********
ok: [managed_node1] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.podman : Parse quadlet file] *******************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38
Saturday 27 July 2024 12:37:23 -0400 (0:00:00.394) 0:00:42.161 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Remove quadlet file] ******************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42
Saturday 27 July 2024 12:37:23 -0400 (0:00:00.034) 0:00:42.195 *********
ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/quadlet-demo.yml", "state": "absent" }

TASK [fedora.linux_system_roles.podman : Refresh systemd] **********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48
Saturday 27 July 2024 12:37:24 -0400 (0:00:00.404) 0:00:42.600 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Remove managed resource] **************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58
Saturday 27 July 2024 12:37:24 -0400 (0:00:00.037) 0:00:42.638 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Remove volumes] ***********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95
Saturday 27 July 2024 12:37:24 -0400 (0:00:00.036) 0:00:42.674 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] *********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112
Saturday 27 July 2024 12:37:24 -0400 (0:00:00.049) 0:00:42.724 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false }

TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116
Saturday 27 July 2024 12:37:24 -0400 (0:00:00.036) 0:00:42.761 *********
changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.039220", "end": "2024-07-27 12:37:24.712485", "rc": 0, "start": "2024-07-27 12:37:24.673265" }
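
With the managed resources gone, the role prunes every unused image and then, for rootless users, decides whether lingering can be cancelled; all of the linger handling below is skipped because this run is root-scoped. The tail end of that cleanup as plain tasks (a sketch; the linger step is my inference from the skipped tasks, and loginctl disable-linger is only meaningful for rootless users):

    # Sketch: final cleanup steps comparable to the tasks above and below.
    - name: Prune images no longer in use
      ansible.builtin.command: podman image prune --all -f

    - name: Cancel lingering once no rootless units remain (assumption)
      ansible.builtin.command: loginctl disable-linger {{ podman_user }}
      when: podman_rootless | bool
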
TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127
Saturday 27 July 2024 12:37:24 -0400 (0:00:00.433) 0:00:43.194 *********
included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12
Saturday 27 July 2024 12:37:24 -0400 (0:00:00.063) 0:00:43.257 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 27 July 2024 12:37:24 -0400 (0:00:00.076) 0:00:43.334 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 27 July 2024 12:37:24 -0400 (0:00:00.036) 0:00:43.370 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - images] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137
Saturday 27 July 2024 12:37:24 -0400 (0:00:00.035) 0:00:43.405 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.039296", "end": "2024-07-27 12:37:25.357706", "rc": 0, "start": "2024-07-27 12:37:25.318410" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146
Saturday 27 July 2024 12:37:25 -0400 (0:00:00.436) 0:00:43.842 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.039056", "end": "2024-07-27 12:37:25.796241", "rc": 0, "start": "2024-07-27 12:37:25.757185" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155
Saturday 27 July 2024 12:37:25 -0400 (0:00:00.440) 0:00:44.283 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.038152", "end": "2024-07-27 12:37:26.244593", "rc": 0, "start": "2024-07-27 12:37:26.206441" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164
Saturday 27 July 2024 12:37:26 -0400 (0:00:00.445) 0:00:44.728 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.039285", "end": "2024-07-27 12:37:26.692042", "rc": 0, "start": "2024-07-27 12:37:26.652757" }

STDOUT:

podman
podman-default-kube-network
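
The four listing tasks above simply verify that no images, volumes, or containers survived the cleanup; only podman's two default networks remain. The same spot-check condensed into one ad-hoc task (a sketch; the task name is mine, the commands are exactly those from the log):

    # Sketch: list leftover podman state for debugging in a single loop.
    - name: List leftover podman state
      ansible.builtin.command: "{{ item }}"
      loop:
        - podman images -n
        - podman volume ls -n
        - podman ps --noheading
        - podman network ls -n -q
      changed_when: false
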
TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173
Saturday 27 July 2024 12:37:26 -0400 (0:00:00.449) 0:00:45.178 *********
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : For testing and debugging - services] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
Saturday 27 July 2024 12:37:27 -0400 (0:00:00.442) 0:00:45.621 *********
ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "certmonger.service": { "name": "certmonger.service", "source": "systemd", "state": "running", "status": "enabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state":
"active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, 
"fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "failed" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", 
"source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "running", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": 
"quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { 
"name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", 
"state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": 
"systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": 
"indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "unbound-anchor.service": { "name": "unbound-anchor.service", "source": "systemd", "state": "stopped", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user-runtime-dir@3001.service": { "name": "user-runtime-dir@3001.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, 
"user@3001.service": { "name": "user@3001.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114 Saturday 27 July 2024 12:37:29 -0400 (0:00:02.007) 0:00:47.629 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 27 July 2024 12:37:29 -0400 (0:00:00.035) 0:00:47.664 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "envoy-proxy-configmap.yml", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: envoy-proxy-config\ndata:\n envoy.yaml: |\n admin:\n address:\n socket_address:\n address: 0.0.0.0\n port_value: 9901\n\n static_resources:\n listeners:\n - name: listener_0\n address:\n socket_address:\n address: 0.0.0.0\n port_value: 8080\n filter_chains:\n - filters:\n - name: envoy.filters.network.http_connection_manager\n typed_config:\n \"@type\": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n stat_prefix: ingress_http\n codec_type: AUTO\n route_config:\n name: local_route\n virtual_hosts:\n - name: local_service\n domains: [\"*\"]\n routes:\n - match:\n prefix: \"/\"\n route:\n cluster: backend\n http_filters:\n - name: envoy.filters.http.router\n typed_config:\n \"@type\": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n transport_socket:\n name: envoy.transport_sockets.tls\n typed_config:\n \"@type\": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n common_tls_context:\n tls_certificates:\n - certificate_chain:\n filename: /etc/envoy-certificates/certificate.pem\n private_key:\n filename: /etc/envoy-certificates/certificate.key\n clusters:\n - name: backend\n connect_timeout: 5s\n type: STATIC\n dns_refresh_rate: 1800s\n lb_policy: ROUND_ROBIN\n load_assignment:\n cluster_name: backend\n endpoints:\n - lb_endpoints:\n - endpoint:\n address:\n socket_address:\n address: 127.0.0.1\n port_value: 80", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 27 July 2024 12:37:29 -0400 (0:00:00.049) 0:00:47.713 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 27 July 2024 12:37:29 -0400 (0:00:00.046) 0:00:47.760 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_file_src", 
"skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 27 July 2024 12:37:29 -0400 (0:00:00.038) 0:00:47.798 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "envoy-proxy-configmap", "__podman_quadlet_type": "yml", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 27 July 2024 12:37:29 -0400 (0:00:00.053) 0:00:47.852 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2 Saturday 27 July 2024 12:37:29 -0400 (0:00:00.120) 0:00:47.972 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9 Saturday 27 July 2024 12:37:29 -0400 (0:00:00.042) 0:00:48.014 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16 Saturday 27 July 2024 12:37:29 -0400 (0:00:00.044) 0:00:48.058 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get group information] **************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28 Saturday 27 July 2024 12:37:29 -0400 (0:00:00.051) 0:00:48.110 ********* ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Set group name] *********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.399) 0:00:48.509 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.051) 0:00:48.560 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097719.096431, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "86395ad7ce62834c967dc50f963a68f042029188", "ctime": 1722097690.6782997, "dev": 51714, "device_type": 0, "executable": true, 
"exists": true, "gid": 0, "gr_name": "root", "inode": 4755377, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719187200.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15728, "uid": 0, "version": "2656437169", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.397) 0:00:48.957 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check group with getsubids] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.035) 0:00:48.993 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.037) 0:00:49.030 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.036) 0:00:49.066 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.034) 0:00:49.100 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.036) 0:00:49.136 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.036) 0:00:49.172 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not 
__podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.034) 0:00:49.207 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.035) 0:00:49.243 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "", "__podman_systemd_scope": "system", "__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.061) 0:00:49.304 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.038) 0:00:49.343 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87 Saturday 27 July 2024 12:37:31 -0400 (0:00:00.081) 0:00:49.425 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/envoy-proxy-configmap.yml", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103 Saturday 27 July 2024 12:37:31 -0400 (0:00:00.085) 0:00:49.510 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110 Saturday 27 July 2024 12:37:31 -0400 (0:00:00.044) 0:00:49.554 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 27 July 2024 12:37:31 -0400 (0:00:00.087) 0:00:49.642 ********* skipping: [managed_node1] => { 
"changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 27 July 2024 12:37:31 -0400 (0:00:00.035) 0:00:49.678 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_service_name | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33 Saturday 27 July 2024 12:37:31 -0400 (0:00:00.039) 0:00:49.717 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38 Saturday 27 July 2024 12:37:31 -0400 (0:00:00.392) 0:00:50.110 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42 Saturday 27 July 2024 12:37:31 -0400 (0:00:00.034) 0:00:50.144 ********* ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/envoy-proxy-configmap.yml", "state": "absent" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.406) 0:00:50.551 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.038) 0:00:50.589 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.037) 0:00:50.626 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.050) 0:00:50.677 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.038) 0:00:50.716 ********* changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.038959", "end": "2024-07-27 12:37:32.674178", "rc": 0, "start": "2024-07-27 12:37:32.635219" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.440) 0:00:51.157 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.107) 0:00:51.264 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.035) 0:00:51.300 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.036) 0:00:51.336 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.036) 0:00:51.372 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.038669", "end": "2024-07-27 12:37:33.324062", "rc": 0, "start": "2024-07-27 12:37:33.285393" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146 Saturday 27 July 2024 12:37:33 -0400 (0:00:00.434) 0:00:51.807 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.039262", "end": "2024-07-27 12:37:33.760814", "rc": 0, "start": "2024-07-27 12:37:33.721552" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155 Saturday 27 July 2024 12:37:33 -0400 (0:00:00.438) 0:00:52.246 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.040020", "end": "2024-07-27 12:37:34.196736", "rc": 0, "start": "2024-07-27 12:37:34.156716" } TASK [fedora.linux_system_roles.podman : For testing 
and debugging - networks] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.434) 0:00:52.680 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.037073", "end": "2024-07-27 12:37:34.630241", "rc": 0, "start": "2024-07-27 12:37:34.593168" } STDOUT: podman podman-default-kube-network TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.433) 0:00:53.113 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183 Saturday 27 July 2024 12:37:35 -0400 (0:00:00.431) 0:00:53.544 ********* ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "certmonger.service": { "name": "certmonger.service", "source": "systemd", "state": "running", "status": "enabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", 
"source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": 
"dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "failed" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { 
"name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", 
"state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "running", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": 
"sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { 
"name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", 
"source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": 
"systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "unbound-anchor.service": { "name": 
"unbound-anchor.service", "source": "systemd", "state": "stopped", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user-runtime-dir@3001.service": { "name": "user-runtime-dir@3001.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "user@3001.service": { "name": "user@3001.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114 Saturday 27 July 2024 12:37:37 -0400 (0:00:02.046) 0:00:55.591 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.035) 0:00:55.626 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "[Install]\nWantedBy=default.target\n\n[Container]\nImage=quay.io/linux-system-roles/mysql:5.6\nContainerName=quadlet-demo-mysql\nVolume=quadlet-demo-mysql.volume:/var/lib/mysql\nVolume=/tmp/quadlet_demo:/var/lib/quadlet_demo:Z\nNetwork=quadlet-demo.network\nSecret=mysql-root-password-container,type=env,target=MYSQL_ROOT_PASSWORD\nHealthCmd=/bin/true\nHealthOnFailure=kill\n", "__podman_quadlet_template_src": "quadlet-demo-mysql.container.j2" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.096) 0:00:55.723 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.044) 0:00:55.768 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_str", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.038) 0:00:55.806 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-demo-mysql", 
"__podman_quadlet_type": "container", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.055) 0:00:55.862 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.120) 0:00:55.982 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.042) 0:00:56.024 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.040) 0:00:56.065 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get group information] **************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.051) 0:00:56.117 ********* ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Set group name] *********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.414) 0:00:56.531 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.047) 0:00:56.579 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097719.096431, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "86395ad7ce62834c967dc50f963a68f042029188", "ctime": 1722097690.6782997, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 4755377, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719187200.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 
15728, "uid": 0, "version": "2656437169", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.396) 0:00:56.975 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check group with getsubids] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.035) 0:00:57.011 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.034) 0:00:57.046 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.035) 0:00:57.081 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.035) 0:00:57.117 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.034) 0:00:57.152 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.036) 0:00:57.188 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.035) 0:00:57.224 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not 
__podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.034) 0:00:57.258 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [ "quay.io/linux-system-roles/mysql:5.6" ], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-demo-mysql.service", "__podman_systemd_scope": "system", "__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.063) 0:00:57.321 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.087) 0:00:57.409 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.034) 0:00:57.443 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_images": [ "quay.io/linux-system-roles/mysql:5.6" ], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-demo-mysql.container", "__podman_volumes": [ "/tmp/quadlet_demo" ] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.084) 0:00:57.528 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.044) 0:00:57.573 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.085) 0:00:57.658 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: 
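The unit being cleaned up below is the one the deploy phase of this test generated from the __podman_quadlet_str recorded in "Set per-container variables part 0" above and wrote to /etc/containers/systemd/quadlet-demo-mysql.container (the __podman_quadlet_file fact). Unescaped, that quadlet source reads:

    [Install]
    WantedBy=default.target

    [Container]
    Image=quay.io/linux-system-roles/mysql:5.6
    ContainerName=quadlet-demo-mysql
    Volume=quadlet-demo-mysql.volume:/var/lib/mysql
    Volume=/tmp/quadlet_demo:/var/lib/quadlet_demo:Z
    Network=quadlet-demo.network
    Secret=mysql-root-password-container,type=env,target=MYSQL_ROOT_PASSWORD
    HealthCmd=/bin/true
    HealthOnFailure=kill

Podman's quadlet generator converts a .container file like this into a systemd service at daemon-reload time, which is why __podman_service_name above resolved to quadlet-demo-mysql.service and why the next task targets that unit.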
TASK [fedora.linux_system_roles.podman : Stop and disable service] *************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12
Saturday 27 July 2024 12:37:39 -0400 (0:00:00.035) 0:00:57.694 *********
ok: [managed_node1] => { "changed": false, "failed_when_result": false }
MSG: Could not find the requested service quadlet-demo-mysql.service: host

TASK [fedora.linux_system_roles.podman : See if quadlet file exists] ***********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33
Saturday 27 July 2024 12:37:39 -0400 (0:00:00.557) 0:00:58.251 *********
ok: [managed_node1] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.podman : Parse quadlet file] *******************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38
Saturday 27 July 2024 12:37:40 -0400 (0:00:00.388) 0:00:58.639 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Remove quadlet file] ******************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42
Saturday 27 July 2024 12:37:40 -0400 (0:00:00.036) 0:00:58.675 *********
ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/quadlet-demo-mysql.container", "state": "absent" }

TASK [fedora.linux_system_roles.podman : Refresh systemd] **********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48
Saturday 27 July 2024 12:37:40 -0400 (0:00:00.401) 0:00:59.077 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Remove managed resource] **************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58
Saturday 27 July 2024 12:37:40 -0400 (0:00:00.039) 0:00:59.116 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Remove volumes] ***********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95
Saturday 27 July 2024 12:37:40 -0400 (0:00:00.038) 0:00:59.155 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] *********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112
Saturday 27 July 2024 12:37:40 -0400 (0:00:00.050) 0:00:59.205 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false }

TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116
Saturday 27 July 2024 12:37:40 -0400 (0:00:00.038) 0:00:59.244 *********
changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.039303", "end": "2024-07-27 12:37:41.197404", "rc": 0, "start": "2024-07-27 12:37:41.158101" }
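This prune is the only task in the cleanup pass that reports changed. The logged cmd corresponds to the plain CLI call below; --all removes every unused image rather than only dangling ones, and -f skips the confirmation prompt:

    podman image prune --all -f

No STDOUT block follows the task here, which suggests nothing was actually left to remove (the command prints the IDs of deleted images).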
"prune", "--all", "-f" ], "delta": "0:00:00.039303", "end": "2024-07-27 12:37:41.197404", "rc": 0, "start": "2024-07-27 12:37:41.158101" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127 Saturday 27 July 2024 12:37:41 -0400 (0:00:00.438) 0:00:59.682 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12 Saturday 27 July 2024 12:37:41 -0400 (0:00:00.107) 0:00:59.790 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 27 July 2024 12:37:41 -0400 (0:00:00.036) 0:00:59.826 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 27 July 2024 12:37:41 -0400 (0:00:00.035) 0:00:59.862 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 27 July 2024 12:37:41 -0400 (0:00:00.036) 0:00:59.899 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.039435", "end": "2024-07-27 12:37:41.852533", "rc": 0, "start": "2024-07-27 12:37:41.813098" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146 Saturday 27 July 2024 12:37:41 -0400 (0:00:00.437) 0:01:00.336 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.039695", "end": "2024-07-27 12:37:42.296251", "rc": 0, "start": "2024-07-27 12:37:42.256556" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155 Saturday 27 July 2024 12:37:42 -0400 (0:00:00.443) 0:01:00.779 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.039143", "end": "2024-07-27 12:37:42.737578", "rc": 0, "start": "2024-07-27 12:37:42.698435" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164 Saturday 27 July 2024 12:37:42 -0400 (0:00:00.442) 0:01:01.222 ********* ok: [managed_node1] => { 
"changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.038971", "end": "2024-07-27 12:37:43.182289", "rc": 0, "start": "2024-07-27 12:37:43.143318" } STDOUT: podman podman-default-kube-network TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.445) 0:01:01.668 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.441) 0:01:02.110 ********* ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "certmonger.service": { "name": "certmonger.service", "source": "systemd", "state": "running", "status": "enabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": 
"container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { 
"name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "failed" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, 
"modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": 
"systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "running", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": 
"sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": 
"systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": 
"systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "unbound-anchor.service": { "name": "unbound-anchor.service", "source": "systemd", "state": "stopped", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, 
"user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user-runtime-dir@3001.service": { "name": "user-runtime-dir@3001.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "user@3001.service": { "name": "user@3001.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114 Saturday 27 July 2024 12:37:45 -0400 (0:00:02.037) 0:01:04.148 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.034) 0:01:04.182 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "quadlet-demo-mysql.volume", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "[Volume]", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.050) 0:01:04.233 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.046) 0:01:04.279 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_file_src", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.036) 0:01:04.316 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-demo-mysql", "__podman_quadlet_type": "volume", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.054) 0:01:04.370 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2 Saturday 27 July 2024 12:37:46 -0400 (0:00:00.117) 0:01:04.488 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9 Saturday 27 July 2024 12:37:46 -0400 (0:00:00.041) 0:01:04.530 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16 Saturday 27 July 2024 12:37:46 -0400 (0:00:00.041) 0:01:04.571 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get group information] **************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28 Saturday 27 July 2024 12:37:46 -0400 (0:00:00.049) 0:01:04.621 ********* ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Set group name] *********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35 Saturday 27 July 2024 12:37:46 -0400 (0:00:00.403) 0:01:05.024 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 27 July 2024 12:37:46 -0400 (0:00:00.047) 0:01:05.072 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097719.096431, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "86395ad7ce62834c967dc50f963a68f042029188", "ctime": 1722097690.6782997, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 4755377, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719187200.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15728, "uid": 0, "version": "2656437169", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.397) 0:01:05.470 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK 
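
Because __podman_user is root here, the getsubids-based checks around this point are all skipped; the role only stats /usr/bin/getsubids. For a rootless user the role would instead invoke getsubids to validate subuid/subgid ranges. A minimal hypothetical sketch of such a check follows (getsubids is the shadow-utils helper seen in the stat result above; the exact invocation is an assumption, not taken from this log):

    - name: Check subuid ranges for a rootless user (hypothetical sketch)
      ansible.builtin.command: getsubids {{ __podman_user }}  # assumed CLI usage
      register: __subuid_out
      changed_when: false
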
[fedora.linux_system_roles.podman : Check group with getsubids] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.035) 0:01:05.506 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.035) 0:01:05.542 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.034) 0:01:05.576 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.035) 0:01:05.612 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.035) 0:01:05.648 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.034) 0:01:05.682 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.035) 0:01:05.718 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.035) 0:01:05.753 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-demo-mysql-volume.service", "__podman_systemd_scope": "system", 
"__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.059) 0:01:05.813 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.085) 0:01:05.899 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.035) 0:01:05.934 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-demo-mysql.volume", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.087) 0:01:06.022 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.045) 0:01:06.067 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.086) 0:01:06.153 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.036) 0:01:06.190 ********* ok: [managed_node1] => { "changed": false, "failed_when_result": false } MSG: Could not find the requested service quadlet-demo-mysql-volume.service: host TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33 Saturday 27 July 2024 12:37:48 -0400 (0:00:00.569) 0:01:06.759 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: 
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38 Saturday 27 July 2024 12:37:48 -0400 (0:00:00.388) 0:01:07.148 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42 Saturday 27 July 2024 12:37:48 -0400 (0:00:00.036) 0:01:07.184 ********* ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/quadlet-demo-mysql.volume", "state": "absent" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48 Saturday 27 July 2024 12:37:49 -0400 (0:00:00.400) 0:01:07.585 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58 Saturday 27 July 2024 12:37:49 -0400 (0:00:00.037) 0:01:07.622 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95 Saturday 27 July 2024 12:37:49 -0400 (0:00:00.038) 0:01:07.660 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112 Saturday 27 July 2024 12:37:49 -0400 (0:00:00.050) 0:01:07.711 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 27 July 2024 12:37:49 -0400 (0:00:00.036) 0:01:07.747 ********* changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.041511", "end": "2024-07-27 12:37:49.707729", "rc": 0, "start": "2024-07-27 12:37:49.666218" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127 Saturday 27 July 2024 12:37:49 -0400 (0:00:00.445) 0:01:08.193 ********* included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12 Saturday 27 July 2024 12:37:49 
-0400 (0:00:00.110) 0:01:08.303 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 27 July 2024 12:37:49 -0400 (0:00:00.035) 0:01:08.339 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 27 July 2024 12:37:49 -0400 (0:00:00.035) 0:01:08.375 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 27 July 2024 12:37:49 -0400 (0:00:00.034) 0:01:08.409 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.039833", "end": "2024-07-27 12:37:50.367006", "rc": 0, "start": "2024-07-27 12:37:50.327173" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146 Saturday 27 July 2024 12:37:50 -0400 (0:00:00.442) 0:01:08.852 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.039310", "end": "2024-07-27 12:37:50.811703", "rc": 0, "start": "2024-07-27 12:37:50.772393" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155 Saturday 27 July 2024 12:37:50 -0400 (0:00:00.443) 0:01:09.296 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.039482", "end": "2024-07-27 12:37:51.253875", "rc": 0, "start": "2024-07-27 12:37:51.214393" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164 Saturday 27 July 2024 12:37:51 -0400 (0:00:00.442) 0:01:09.739 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.038270", "end": "2024-07-27 12:37:51.697015", "rc": 0, "start": "2024-07-27 12:37:51.658745" } STDOUT: podman podman-default-kube-network TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173 Saturday 27 July 2024 12:37:51 -0400 (0:00:00.443) 0:01:10.182 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: 
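
The "For testing and debugging" tasks in this stretch run plain podman listing commands: podman images -n, podman volume ls -n, podman ps --noheading, and podman network ls -n -q. Only the network listing prints anything, and its STDOUT (podman, podman-default-kube-network) confirms that just the two stock networks remain after cleanup. A minimal sketch of one such check, with the command taken verbatim from this log:

    - name: For testing and debugging - volumes (sketch)
      ansible.builtin.command: podman volume ls -n
      register: __podman_volume_list
      changed_when: false
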
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183 Saturday 27 July 2024 12:37:52 -0400 (0:00:00.447) 0:01:10.630 ********* ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "certmonger.service": { "name": "certmonger.service", "source": "systemd", "state": "running", "status": "enabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", 
"status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, 
"grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "failed" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, 
"netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "running", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { 
"name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, 
"sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": 
"static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": 
"systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { 
"name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "unbound-anchor.service": { "name": "unbound-anchor.service", "source": "systemd", "state": "stopped", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user-runtime-dir@3001.service": { "name": "user-runtime-dir@3001.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "user@3001.service": { "name": "user@3001.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update 
quadlets] ***********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114
Saturday 27 July 2024 12:37:54 -0400 (0:00:02.061) 0:01:12.691 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.034) 0:01:12.726 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "quadlet-demo.network", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "[Network]\nSubnet=192.168.30.0/24\nGateway=192.168.30.1\nLabel=app=wordpress", "__podman_quadlet_template_src": "" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.053) 0:01:12.779 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] *****
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.046) 0:01:12.826 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_file_src", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.036) 0:01:12.863 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-demo", "__podman_quadlet_type": "network", "__podman_rootless": false }, "changed": false }

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.054) 0:01:12.917 *********
included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.123) 0:01:13.041 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.042) 0:01:13.084 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.041) 0:01:13.126 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get group information] ****************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.050) 0:01:13.176 *********
ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set group name] ***********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.402) 0:01:13.579 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.048) 0:01:13.627 *********
ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097719.096431, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "86395ad7ce62834c967dc50f963a68f042029188", "ctime": 1722097690.6782997, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 4755377, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719187200.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15728, "uid": 0, "version": "2656437169", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.402) 0:01:14.030 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check group with getsubids] ***********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.035) 0:01:14.066 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.035) 0:01:14.101 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.034) 0:01:14.136 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.035) 0:01:14.172 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.035) 0:01:14.207 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.035) 0:01:14.243 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] *****
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.035) 0:01:14.278 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.035) 0:01:14.314 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-demo-network.service", "__podman_systemd_scope": "system", "__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.062) 0:01:14.376 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.087) 0:01:14.464 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.035) 0:01:14.499 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-demo.network", "__podman_volumes": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.083) 0:01:14.583 *********
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Cleanup quadlets] *********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.045) 0:01:14.629 *********
included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.087) 0:01:14.716 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stop and disable service] *************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.035) 0:01:14.752 *********
ok: [managed_node1] => { "changed": false, "failed_when_result": false }
MSG: Could not find the requested service quadlet-demo-network.service: host

TASK [fedora.linux_system_roles.podman : See if quadlet file exists] ***********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.556) 0:01:15.308 *********
ok: [managed_node1] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.podman : Parse quadlet file] *******************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38
Saturday 27 July 2024 12:37:57 -0400 (0:00:00.390) 0:01:15.699 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Remove quadlet file] ******************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42
Saturday 27 July 2024 12:37:57 -0400 (0:00:00.034) 0:01:15.733 *********
ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/quadlet-demo.network", "state": "absent" }

TASK [fedora.linux_system_roles.podman : Refresh systemd] **********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48
Saturday 27 July 2024 12:37:57 -0400 (0:00:00.398) 0:01:16.131 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Remove managed resource] **************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58
Saturday 27 July 2024 12:37:57 -0400 (0:00:00.038) 0:01:16.170 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Remove volumes] ***********************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95
Saturday 27 July 2024 12:37:57 -0400 (0:00:00.037) 0:01:16.207 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] *********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112
Saturday 27 July 2024 12:37:57 -0400 (0:00:00.051) 0:01:16.259 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false }

TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116
Saturday 27 July 2024 12:37:57 -0400 (0:00:00.038) 0:01:16.297 *********
changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.039562", "end": "2024-07-27 12:37:58.249537", "rc": 0, "start": "2024-07-27 12:37:58.209975" }

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127
Saturday 27 July 2024 12:37:58 -0400 (0:00:00.435) 0:01:16.733 *********
included: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12
Saturday 27 July 2024 12:37:58 -0400 (0:00:00.115) 0:01:16.848 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 27 July 2024 12:37:58 -0400 (0:00:00.035) 0:01:16.884 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 27 July 2024 12:37:58 -0400 (0:00:00.036) 0:01:16.920 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - images] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137
Saturday 27 July 2024 12:37:58 -0400 (0:00:00.036) 0:01:16.957 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.037023", "end": "2024-07-27 12:37:58.907066", "rc": 0, "start": "2024-07-27 12:37:58.870043" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146
Saturday 27 July 2024 12:37:58 -0400 (0:00:00.433) 0:01:17.390 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.040428", "end": "2024-07-27 12:37:59.341618", "rc": 0, "start": "2024-07-27 12:37:59.301190" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155
Saturday 27 July 2024 12:37:59 -0400 (0:00:00.434) 0:01:17.825 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.035080", "end": "2024-07-27 12:37:59.773033", "rc": 0, "start": "2024-07-27 12:37:59.737953" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164
Saturday 27 July 2024 12:37:59 -0400 (0:00:00.433) 0:01:18.258 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.037093", "end": "2024-07-27 12:38:00.210265", "rc": 0, "start": "2024-07-27 12:38:00.173172" }
STDOUT:
podman
podman-default-kube-network

TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173
Saturday 27 July 2024 12:38:00 -0400 (0:00:00.435) 0:01:18.694 *********
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : For testing and debugging - services] ***
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
Saturday 27 July 2024 12:38:00 -0400 (0:00:00.440) 0:01:19.134 *********
ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "audit-rules.service": { "name": "audit-rules.service", "source":
"systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "certmonger.service": { "name": "certmonger.service", "source": "systemd", "state": "running", "status": "enabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { 
"name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": 
"systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "failed" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" 
}, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "running", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", 
"state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": 
"systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": 
"systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": 
"systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "unbound-anchor.service": { "name": "unbound-anchor.service", "source": "systemd", "state": "stopped", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user-runtime-dir@3001.service": { "name": "user-runtime-dir@3001.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "user@3001.service": { "name": "user@3001.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114 Saturday 27 July 2024 12:38:02 -0400 (0:00:02.031) 0:01:21.165 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Cancel linger] ************************ task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:143 Saturday 27 July 2024 12:38:02 -0400 (0:00:00.035) 0:01:21.200 ********* skipping: [managed_node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK 
[fedora.linux_system_roles.podman : Handle credential files - absent] *****
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:149
Saturday 27 July 2024 12:38:02 -0400 (0:00:00.032) 0:01:21.233 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle certs.d files - absent] ********
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:158
Saturday 27 July 2024 12:38:02 -0400 (0:00:00.033) 0:01:21.267 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [Ensure no resources] *****************************************************
task path: /tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:170
Saturday 27 July 2024 12:38:02 -0400 (0:00:00.051) 0:01:21.318 *********
ok: [managed_node1] => { "changed": false }
MSG: All assertions passed

PLAY RECAP *********************************************************************
managed_node1 : ok=245 changed=9 unreachable=0 failed=1 skipped=236 rescued=1 ignored=0

Saturday 27 July 2024 12:38:02 -0400 (0:00:00.038) 0:01:21.357 *********
===============================================================================
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.42s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.06s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.05s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.04s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.03s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.01s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
Gathering Facts --------------------------------------------------------- 1.23s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:3
fedora.linux_system_roles.certificate : Slurp the contents of the files --- 1.19s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:152
fedora.linux_system_roles.firewall : Configure firewall ----------------- 1.15s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:71
fedora.linux_system_roles.certificate : Remove files -------------------- 1.14s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:181
fedora.linux_system_roles.podman : Gather the package facts ------------- 1.02s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.firewall : Configure firewall ----------------- 1.00s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:71
fedora.linux_system_roles.podman : Gather the package facts ------------- 0.90s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.certificate : Ensure certificate requests ----- 0.88s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:101
fedora.linux_system_roles.certificate : Ensure certificate role dependencies are installed --- 0.85s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:5
fedora.linux_system_roles.certificate : Ensure provider service is running --- 0.77s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:90
fedora.linux_system_roles.firewall : Install firewalld ------------------ 0.75s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:31
fedora.linux_system_roles.firewall : Install firewalld ------------------ 0.75s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:31
fedora.linux_system_roles.certificate : Ensure provider packages are installed --- 0.74s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:23
fedora.linux_system_roles.firewall : Unmask firewalld service ----------- 0.57s
/tmp/tmp.kBSwYGXc3s/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:22
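
Note: the "__podman_quadlet_str" value logged under "Set per-container variables part 0" above is the quadlet unit for the demo network with its newlines escaped as "\n". Unescaped, and written to the logged "__podman_quadlet_file" path, it would look like the sketch below. This is illustrative only: in this run "__podman_state" is "absent", so the role removes the file rather than writing it.

    # Sketch of the quadlet network unit, reconstructed from the logged
    # __podman_quadlet_str; the role would place it at the logged
    # __podman_quadlet_file path when the state is not "absent".
    cat > /etc/containers/systemd/quadlet-demo.network <<'EOF'
    [Network]
    Subnet=192.168.30.0/24
    Gateway=192.168.30.1
    Label=app=wordpress
    EOF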
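Note: the cleanup pass above ("Cleanup quadlets" through "Prune images no longer in use") corresponds roughly to the shell sequence below. The unit name, file path, and prune command are taken verbatim from the log; the daemon-reload step is an assumption about what the skipped "Refresh systemd" task would run. On this host the stop and removal were no-ops because the unit and file were already gone.

    systemctl stop quadlet-demo-network.service          # "Stop and disable service" (unit not found here)
    rm -f /etc/containers/systemd/quadlet-demo.network   # "Remove quadlet file" (already absent, so changed=false)
    systemctl daemon-reload                              # assumed body of the skipped "Refresh systemd" task
    podman image prune --all -f                          # "Prune images no longer in use", shown verbatim above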
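Note: the "For testing and debugging" tasks verify that no test resources survive cleanup by listing images, volumes, containers, and networks with the exact commands below. The only printed output in this run came from the network listing, which showed just the two stock networks (podman and podman-default-kube-network).

    podman images -n          # images, no header row
    podman volume ls -n       # volumes, no header row
    podman ps --noheading     # running containers, no header row
    podman network ls -n -q   # network names only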