Selenium Hub and Nodes with Podman

NOTE: I couldn’t get this procedure to work on Amazon Linux (2023) because podman was not able to create the proper iptables rules (apparently an issue with the firewall backend it uses). Rocky Linux, however, is available for free from the AWS Marketplace.

1/ Install podman (e.g., on Rocky 9 Linux)
sudo dnf install -y podman
sudo systemctl enable podman
sudo systemctl start podman

2/ pull the images from the docker.io registry
podman pull selenium/hub:latest
podman pull selenium/node-chrome:latest
# if you need firefox: podman pull selenium/node-firefox:latest

3/ install one or more web browsers
sudo yum install -y firefox
wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
sudo yum install -y ./google-chrome-stable_current_x86_64.rpm

4/ create the network for the containers:
sudo podman network create selenium-grid

5/ create containers from the images (they are started later via the systemd services)
sudo podman create --name selenium-hub -p 4444:4444 --network selenium-grid selenium/hub:latest
sudo podman create --name selenium-node1 -e SE_EVENT_BUS_HOST=selenium-hub -e SE_EVENT_BUS_PUBLISH_PORT=4442 -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 --network selenium-grid --shm-size=1g selenium/node-chrome:latest
sudo podman create --name selenium-node2 -e SE_EVENT_BUS_HOST=selenium-hub -e SE_EVENT_BUS_PUBLISH_PORT=4442 -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 --network selenium-grid --shm-size=1g selenium/node-chrome:latest
sudo podman create --name selenium-node3 -e SE_EVENT_BUS_HOST=selenium-hub -e SE_EVENT_BUS_PUBLISH_PORT=4442 -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 --network selenium-grid --shm-size=1g selenium/node-chrome:latest
sudo podman create --name selenium-node4 -e SE_EVENT_BUS_HOST=selenium-hub -e SE_EVENT_BUS_PUBLISH_PORT=4442 -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 --network selenium-grid --shm-size=1g selenium/node-chrome:latest

6/ install xauth package if you want to display the output (browser) on your client when you run the scripts
sudo yum install -y xauth

7/ Create Systemd service files for the hub and the nodes (I originally generated the following using “podman generate systemd --new”)

sudo podman generate systemd --new selenium-hub | sudo tee /etc/systemd/system/selenium-hub.service
sudo podman generate systemd --new selenium-node1 | sudo tee /etc/systemd/system/selenium-node1.service
sudo podman generate systemd --new selenium-node2 | sudo tee /etc/systemd/system/selenium-node2.service
sudo podman generate systemd --new selenium-node3 | sudo tee /etc/systemd/system/selenium-node3.service
sudo podman generate systemd --new selenium-node4 | sudo tee /etc/systemd/system/selenium-node4.service
sudo systemctl daemon-reload

  • below is sample content of the service files for selenium-hub and selenium-node1:

cat /etc/systemd/system/selenium-hub.service
[Unit]
Description=Podman container-selenium-hub.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
# Override the default max sessions (1 per host CPU). Not needed here since multiple worker nodes are used instead, so the default of 1 is OK:
# Environment=SE_NODE_MAX_SESSIONS=2
# Environment=SE_NODE_OVERRIDE_MAX_SESSIONS=true

TimeoutStopSec=70
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
-d \
--replace \
--name selenium-hub \
-p 4444:4444 \
--network selenium-grid selenium/hub:latest
ExecStop=/usr/bin/podman stop \
--ignore -t 10 \
--cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm \
-f \
--ignore -t 10 \
--cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target

cat /etc/systemd/system/selenium-node1.service
[Unit]
Description=Podman container-selenium-node.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
-d \
--replace \
--name selenium-node1 \
-e SE_EVENT_BUS_HOST=selenium-hub \
-e SE_EVENT_BUS_PUBLISH_PORT=4442 \
-e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 \
--network selenium-grid \
--shm-size=1g selenium/node-chrome:latest
ExecStop=/usr/bin/podman stop \
--ignore -t 10 \
--cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm \
-f \
--ignore -t 10 \
--cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target

8/ Stop (and remove) the running containers (if any) that you have created service files for
(“podman stop -a” stops all running containers and “podman rm -a” removes all containers)
sudo systemctl daemon-reload
sudo podman ps -a
sudo podman stop selenium-hub
sudo podman stop selenium-node1
sudo podman stop selenium-node2
sudo podman stop selenium-node3
sudo podman stop selenium-node4
sudo podman rm selenium-hub
sudo podman rm selenium-node1
sudo podman rm selenium-node2
sudo podman rm selenium-node3
sudo podman rm selenium-node4

9/ Enable and start the Systemd services for the hub and the nodes (node1 and node2 shown here; repeat for any additional nodes)
sudo systemctl daemon-reload
sudo systemctl enable selenium-hub.service
sudo systemctl start selenium-hub.service
sudo systemctl status selenium-hub.service
sudo systemctl enable selenium-node1.service
sudo systemctl start selenium-node1.service
sudo systemctl status selenium-node1.service
sudo systemctl enable selenium-node2.service
sudo systemctl start selenium-node2.service
sudo systemctl status selenium-node2.service

10/ launch a web browser on the host (running the hub and nodes) and connect to http://localhost:4444/ to access the Hub

  • click on the camera/video-recorder icon; you will be prompted for the VNC password, which is “secret”
  • you can then watch the automation as it runs (a scripted status check is shown below)
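
If you would rather check from a script than the UI, the hub also serves its status as JSON at /status. Below is a minimal sketch (standard library only; the port is the 4444 mapped above, and the field names assume the Selenium Grid 4 status format):

# check_grid.py - quick readiness check against the hub's /status endpoint
import json
import urllib.request

GRID_STATUS = "http://localhost:4444/status"   # hub address used in this setup

with urllib.request.urlopen(GRID_STATUS) as resp:
    status = json.load(resp)

value = status["value"]
print("Grid ready:", value["ready"])
# every registered node (selenium-node1, node2, ...) appears under value["nodes"]
for node in value.get("nodes", []):
    print("node:", node.get("uri"), "max sessions:", node.get("maxSessions"))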

11/ Using podman to see the hub and node containers running (I actually have 3 nodes running here, though only two node service files were shown above)
[root@rocky system]# podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
731d0ecd1467 docker.io/selenium/hub:latest /opt/bin/entry_po… 22 hours ago Up 22 hours 0.0.0.0:4444->4444/tcp, 4442-4443/tcp selenium-hub
193afdc255b1 docker.io/selenium/node-chrome:latest /opt/bin/entry_po… 15 minutes ago Up 15 minutes 5900/tcp, 9000/tcp selenium-node1
d60109ddb11f docker.io/selenium/node-chrome:latest /opt/bin/entry_po… 15 minutes ago Up 15 minutes 5900/tcp, 9000/tcp selenium-node2
011da3984d4b docker.io/selenium/node-chrome:latest /opt/bin/entry_po… 3 seconds ago Up 4 seconds 5900/tcp, 9000/tcp selenium-node3
[root@rocky system]#

12/ Submit your jobs, e.g., run your Python scripts, and you can observe the automation in the UI
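
A job is simply a script that opens a Remote session against the hub and always closes it, even if a step fails, so the slot is freed for the next run. A minimal sketch (same hub URL as above; the target site is only an example):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

driver = webdriver.Remote(
    command_executor='http://localhost:4444/wd/hub',
    options=Options()
)
try:
    driver.get("https://www.example.com")
    print(driver.title)
finally:
    # always release the grid slot, even if the steps above raise an exception
    driver.quit()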

NOTE: if you are doing a lot of debugging and aborting scripts manually at the shell prompt, it takes a while
for selenium-hub to clear out the session. In the Selenium Hub UI, go to “Sessions”, click the “i” under the Capabilities column for the aborted session, and click “DELETE”
in the session details pop-up screen. Another way is to restart the selenium-nodeX.service using systemctl (though the latter method is preferable).
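
A scripted alternative to clicking through the UI is to read the active session IDs from the hub’s /status endpoint and send a WebDriver DELETE for each one. The sketch below is only an illustration: it assumes the Selenium Grid 4 status layout (value.nodes[].slots[].session.sessionId) and it deletes every session it finds, so run it only when nothing legitimate is in flight:

# clear_sessions.py - delete all sessions known to the hub (use with care)
import json
import urllib.request

HUB = "http://localhost:4444"

with urllib.request.urlopen(HUB + "/status") as resp:
    value = json.load(resp)["value"]

for node in value.get("nodes", []):
    for slot in node.get("slots", []):
        session = slot.get("session")
        if session:
            session_id = session["sessionId"]
            req = urllib.request.Request(HUB + "/session/" + session_id, method="DELETE")
            urllib.request.urlopen(req)
            print("deleted session", session_id)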

NOTE: by default, max sessions is set to one, meaning only one session runs on a node with one CPU (the underlying host).
It can be increased if you are sure the container performance can support it (especially if your host has more than one CPU – not cores),
but it is usually easier to create a second node (container) instead.
With the default, sessions are queued and run one after another;
with a second node, the hub can schedule another session on the second node.

To raise the limit anyway, add the following to the [Service] section of the selenium-hub systemd service file (overrides the default of one session per host CPU):
Environment=SE_NODE_MAX_SESSIONS=2
Environment=SE_NODE_OVERRIDE_MAX_SESSIONS=true
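
To confirm the extra capacity (whether from a second node or a raised max-sessions value), you can start two sessions at the same time; with only one slot available, the second request simply queues until the first finishes. A rough sketch using threads (example site only):

# two_sessions.py - open two grid sessions in parallel
import threading
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

HUB = "http://localhost:4444/wd/hub"

def run(name):
    driver = webdriver.Remote(command_executor=HUB, options=Options())
    try:
        driver.get("https://www.example.com")
        print(name, driver.title)
    finally:
        driver.quit()

threads = [threading.Thread(target=run, args=("job%d" % i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()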

Some other parameters that can go in a container unit file (these are podman Quadlet [Container] keys rather than plain [Service] directives):
AutoUpdate=registry
PublishPort=4444:4444
Volume=/dev/shm:/dev/shm
AddCapability=AUDIT_WRITE NET_RAW

  • Other commands, e.g.:
    sudo podman stop
    sudo podman ps -a
    sudo podman rm
    sudo podman stats

Using a Selenium (podman) container – details on doing it manually, to show what actually happens behind the scenes when you use the Systemd services above

  • To utilize a Selenium container image for a script, follow these steps:
  • Install Docker or Podman:
    sudo dnf install -y podman
    sudo systemctl start podman
    sudo systemctl enable podman
  • Pull the Selenium Image (Download the desired Selenium standalone image from Docker Hub). For instance, to use Chrome:
    sudo podman pull selenium/standalone-chrome
  • Run the Selenium Container: Start the container, exposing the necessary port for communication with the Selenium server. For example, to run Chrome:
    sudo podman run --name selechrome --cap-add=AUDIT_WRITE --cap-add=NET_RAW -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome

— create a Python virtual environment and install Selenium in it:
sudo yum install -y python3.12 (latest as of 8/31/2025 – the latest Selenium needs something newer than 3.9.x, which may be the default)
python3.12 -m venv selenium_env
[aitayemi@rocky ~]$ source selenium_env/bin/activate
((selenium_env) ) [aitayemi@rocky ~]$ pip install --upgrade pip
((selenium_env) ) [aitayemi@rocky ~]$ pip install selenium==4.35.0
((selenium_env) ) [aitayemi@rocky ~]$ deactivate

  • Configure Selenium WebDriver in your Script: In your Selenium script, configure the WebDriver to connect to the remote Selenium server running in the Docker container. The URL will typically be http://localhost:4444/wd/hub (or the IP address of the Docker host if running remotely, and the mapped port).

sample script:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
driver = webdriver.Remote(
    command_executor='http://localhost:4444/wd/hub',
    options=chrome_options
)

driver.get("http://www.google.com")
print(driver.title)
driver.quit()
  • In MY OWN psp.py script, I replaced the initializeBrowser() call with code that creates the browser driver object using the running Selenium container:

Initialize Browser Object

URL = "https://www.acme.com/sweepstakes/all-acme-sweeps"

# original call (replaced):
# driver = _acme_psp.initializeBrowser(URL)

# replacement: create the driver against the running Selenium container
chrome_options = Options()
driver = webdriver.Remote(
    command_executor='http://localhost:4444/wd/hub',
    options=chrome_options
)

  • To see what is going on inside the container, launch a browser locally and go to the WebDriver URL http://localhost:4444/wd/hub
  • click on the camera/video-recorder icon, you will be prompted for the VNC password which is “secret”
  • you can watch the automation going on
  • NO more cron scheduling issue where I have to ensure no other chrome/firefox sessions are running! I don’t get the --user-data-dir already-in-use error any more!!
    NOTE: max sessions is auto-set to the # of CPUs on the host (not cores – for example, I have only one CPU on my Linux laptop/host). Overriding it to 2 means I can have two concurrent Chrome browser sessions (i.e., 2 python scripts initiating chrome sessions)
  • Run the Selenium Grid (Chrome) Container – for ACME Super Prize (optionally limit the shared-memory size with --shm-size=512m):
    sudo podman run --name acme_psp --cap-add=AUDIT_WRITE --cap-add=NET_RAW -d -p 4444:4444 -v /dev/shm:/dev/shm -e SE_NODE_MAX_SESSIONS=2 -e SE_NODE_OVERRIDE_MAX_SESSIONS=true selenium/standalone-chrome
  • to make it easier to run the script than typing /path/to/python/venv/python3 every time, make the script file executable and set the first line of the script to invoke the python executable in the virtual environment (see the example just below), i.e.,
    #!/home/aitayemi/selenium_env/bin/python3
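
For example, the first lines of such a script then look like this (the interpreter path is the virtual environment created earlier); after a chmod +x, the script can be run directly:

#!/home/aitayemi/selenium_env/bin/python3
# uses the python interpreter (and the selenium package) from the virtual environment
from selenium import webdriver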

NOTE: if you install podman on a system such as Amazon Linux that doesn’t have the package by default, you need to create the directory /etc/containers/ and create two files in it – registries.conf and policy.json. RedHat variants such as Rocky Linux already come with the directory and the two files in it.

[root@rocky ~]# cat /etc/containers/registries.conf
unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io", "docker.io"]

[root@rocky ~]# cat /etc/containers/policy.json
{
    "default": [
        {
            "type": "insecureAcceptAnything"
        }
    ],
    "transports": {
        "docker": {
            "registry.access.redhat.com": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPaths": ["/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta"]
                }
            ],
            "registry.redhat.io": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPaths": ["/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta"]
                }
            ]
        },
        "docker-daemon": {
            "": [
                {
                    "type": "insecureAcceptAnything"
                }
            ]
        }
    }
}
[root@rocky ~]#


Minishift notes

Background: attempting to allow external hosts on the same network as the KVM host on which minishift VM is running to be able to access the minishift web UI and optionally via SSH. In this example, 192.168.42.62 is the IP assigned to minishift after starting it, and 192.168.10.0/24 is the IP range of the network on which the KVM host is running.

On my KVM Linux host, installing libvirtd had created the virbr0 bridge. Setting up minishift subsequently created a second bridge named virbr1, to which the vNIC assigned to the minishift VM is slaved. Interestingly enough, the minishift VM is then given two interfaces, one attached to each bridge (virbr0 and virbr1). In the minishift VM, the default route is assigned to the interface (e.g., eth0) attached to the first bridge (virbr0) with IP 192.168.122.30, even though when minishift starts, it displays a URL with the IP address (192.168.42.62) assigned to the interface connected to the second bridge as the way to access the minishift web UI. This works fine as long as you are accessing the URL from the KVM host, but it won’t work without some extra steps if you want to access the minishift UI from an external host.
wlp5s0 is the “public” interface on my KVM host.

Setup and start minishift (equivalent to RedHat CDK if you have the right subscription):
itababa@itamint:~$ sudo apt update -y
itababa@itamint:~$ sudo apt install qemu-kvm qemu-system qemu-utils python3 python3-pip libvirt-clients libvirt-daemon-system bridge-utils virtinst libvirt-daemon virt-manager cpu-checker -y
itababa@itamint:~$ sudo usermod -aG libvirt $(whoami)
itababa@itamint:~$ newgrp libvirt
itababa@itamint:~$ sudo systemctl enable libvirtd
itababa@itamint:~$ sudo systemctl start libvirtd
itababa@itamint:~$ sudo virsh net-start default
itababa@itamint:~$ sudo virsh net-autostart default
itababa@itamint:~$ sudo curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-ubuntu16.04 -o /usr/local/bin/docker-machine-driver-kvm
itababa@itamint:~$ sudo chmod +x /usr/local/bin/docker-machine-driver-kvm
itababa@itamint:~$ curl -L https://github.com/minishift/minishift/releases/download/v1.34.3/minishift-1.34.3-linux-amd64.tgz -o minishift-1.34.3-linux-amd64.tgz
itababa@itamint:~$ tar xf minishift-1.34.3-linux-amd64.tgz
itababa@itamint:~$ sudo cp minishift-1.34.3-linux-amd64/minishift /usr/local/bin/
itababa@itamint:~$ minishift start

NOTE: the “minishift start” command will display the URL of the minishift web UI at the end of its run.
Reference: https://docs.okd.io/3.11/minishift/getting-started/setting-up-virtualization-environment.html

Settings needed on the KVM host:
- configure the KVM host to redirect connections to it on 8443/tcp to the minishift host (192.168.42.62). Optionally redirect 2222/tcp to the minishift host as well:
iptables -F
iptables -F -t nat
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/virbr0/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/virbr1/proxy_arp    (required so that the host can answer when the minishift VM tries to find the ARP of external hosts; otherwise the minishift VM will not respond to any connection attempts)
iptables -t nat -A POSTROUTING -o wlp5s0 -j MASQUERADE
iptables -t nat -A PREROUTING -i wlp5s0 -p tcp --dport 8443 -j DNAT --to-destination 192.168.42.62:8443
iptables -t nat -A PREROUTING -i wlp5s0 -p tcp --dport 2222 -j DNAT --to-destination 192.168.42.62:22
Settings needed on the minishift VM after it is running (execute "minishift ssh" to SSH into the minishift VM):
- NOTE: this whole section can be skipped if you choose to "DNAT" port 8443/tcp to the IP address on the virbr0 vNIC on the KVM host:
itababa@itamint:~$ minishift ssh
[docker@minishift ~]$ sudo su -
[root@minishift ~]# ip a     (find the interface with the 192.168.42.x IP e.g., eth1)
[root@minishift ~]# ip route add 192.168.10.0/24 dev eth1    (this is because the default route uses the NIC with the 192.168.122.x IP, which minishift uses to access the Internet)
[root@minishift ~]# ip route show
default via 192.168.122.1 dev eth0 proto dhcp metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.10.0/24 dev eth1 scope link
192.168.42.0/24 dev eth1 proto kernel scope link src 192.168.42.62 metric 101
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.30 metric 100
[root@minishift ~]#
NOTE: if you restart the minishift VM, you have to SSH into it again and re-add the route to the 192.168.10.0/24 network.

Settings needed on the external host:
In this example, I want to access the minishift web UI from a Windows system on my home network. You need to add a route to the minishift IP address using the KVM host IP as the gateway (using the command line):
C:\Windows\system32>route add 192.168.42.206 MASK 255.255.255.255 192.168.10.4

Launch a browser on the external host and go directly to https://192.168.42.206:8443/console/catalog

NOTE: if you attempt to login via https://192.168.42.206:8443/console , you will encounter this error/bug after entering your credentials (e.g., admin/admin)
Error: “Error. Invalid request. Client state could not be verified. Return to the console.”
Bug: “https://bugzilla.redhat.com/show_bug.cgi?id=1589072”
Found the solution (use the /console/catalog URL): https://github.com/openshift/openshift-azure/issues/236

then log in with the admin/admin credentials

Other commands:
minishift delete --clear-cache (solution for errors similar to: “Cannot start – error starting the VM: error getting the state for host: machine does not exist-docker” when attempting to start minishift)
sudo virsh destroy minishift; sudo virsh undefine minishift; (might be needed if the “delete” command does not fix the issue)
sudo arp -a (on the KVM host to see the IPs of the minishift VM or SSH into the minishift VM)

– To restart the minishift VM (from the KVM host):
itababa@itamint:~$ minishift stop
itababa@itamint:~$ minishift start


RUN ORACLE LINUX 8.X DOCKER CONTAINER ON WINDOWS 10 WITH WSL2

The purpose of this guide is to run an Oracle Linux container on a Windows 10 system using Windows Subsystem for Linux v2.
(This may be an alternative to using a full-blown hypervisor type 2 such as Oracle VirtualBox or VMWare Player/Workstation.)

NOTE: Using container images from the official Oracle repository
NOTE: You will need to be running Windows 10 build 18917 or higher to use WSL 2. If you are on an earlier Windows 10 build, launch Windows Update Settings, you should be able to update it to the latest available version.
NOTE: there are docker images for 7/8/9 and slim versions of 7/8/9 (minimal environment with minimal number of packages) from the ghcr.io repository.

1. Prepare an Oracle Linux 8.x container and export it to a single TAR file using an existing Linux system as the work platform:

[root@wip]# yum install -y docker
[root@wip]# usermod -aG docker root
[root@wip]# newgrp docker
[root@wip]# id
uid=0(root) gid=992(docker) groups=992(docker),0(root)
[root@wip]#
[root@wip]# systemctl start docker.service
[root@wip]# systemctl enable docker.service

– Create the Dockerfile to use to build the container:
[root@wip]# vi Dockerfile
[root@wip]# cat Dockerfile
FROM ghcr.io/oracle/oraclelinux:8

CMD ["/bin/bash"]

– Build the docker container:
[root@wip]# docker build -t ghcr.io/oracle/oraclelinux:8 .
Sending build context to Docker daemon 23.04kB
Step 1/2 : FROM ghcr.io/oracle/oraclelinux:8
8: Pulling from oracle/oraclelinux
4c770e098606: Pull complete
Digest: sha256:07a995ecaf9db1ce613648a08facc162de69f26c39712f1acc93629c2e6c4e73
Status: Downloaded newer image for ghcr.io/oracle/oraclelinux:8
---> b0045ea7bbde
Step 2/2 : CMD ["/bin/bash"]
---> Running in 168cb6d08c9e
Removing intermediate container 168cb6d08c9e
---> 53be01d92e18
Successfully built 53be01d92e18
Successfully tagged ghcr.io/oracle/oraclelinux:8

– Test the container:
[root@wip]# docker run -it 53be01d92e18
[root@ec6e4b0f7c3b /]# cat /etc/oracle-release
Oracle Linux Server release 8.7
[root@ec6e4b0f7c3b /]# exit

– List all containers (note the container id ec6e4b0f7c3b associated with the image id 53be01d92e18 from the earlier build command output):

[root@wip]# docker ps -a
CONTAINER ID    IMAGE          COMMAND       CREATED              STATUS                         PORTS     NAMES
ec6e4b0f7c3b    53be01d92e18   "/bin/bash"   About a minute ago   Exited (0) 23 seconds ago                reverent_ellis

– Export the container into a single TAR file (222M size):
[root@wip]# docker export --output="oellinux8.tar" aa565b335857

– Optionally zip the file (85MB zipped) to reduce the amount of data transferred when copying it to the Windows 10 system:
[root@wip]# gzip oellinux8.tar

– Transfer the container output TAR file to the Windows 10 system. In this case I will be using pscp to pull the file down into the Windows 10 system using a user other than root, so I copied the file to /tmp which is accessible to all users and changed the permission on the file so other users can read it:
[root@wip]# cp oellinux8.tar.gz /tmp/
[root@wip]# chmod 666 /tmp/oellinux8.tar.gz

2. SETUP WSL2 on Windows 10:
– Using elevated/admin PowerShell, run: Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
– Using elevated/admin command prompt or PowerShell, run: dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
– Reboot the Windows 10 system (WSL2 upgrade fails without a reboot after installing WSL)
– Upgrade WSL to WSL2 via the installer https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi

3. Pull down the container file to the local Windows 10 system with WSL2 installed:
C:\Users\user1> md wsl2\oellinux8
C:\Users\user1>cd wsl2\oellinux8
C:\Users\user1\wsl2\oellinux8>pscp -i ….\Downloads\centos8.ppk ec2-user@my-linux-wip-server:/tmp/oellinux8.tar.gz .

4. Unzip the container file oellinux8.tar.gz (if you compressed the original TAR file):

5. Import the TAR file into WSL (syntax: wsl --import [DISTRO NAME] [STORAGE LOCATION] [FILE NAME]):
C:\Users\user1\wsl2\oellinux8>wsl --import oellinux8 "C:\Users\user1\wsl2\oellinux8" oellinux8.tar

NOTE: the import step extracts the TAR file into rootfs and temp directories:
C:\Users\user1\wsl2\oellinux8>dir
12/07/2022 11:47 PM 232,101,888 oellinux8.tar
12/07/2022 11:59 PM 84,593,746 oellinux8.tar.gz
12/08/2022 12:25 AM rootfs
12/08/2022 12:46 AM temp

6. Start the new WSL container (which ends at the running Linux prompt):
C:\Users\user1\wsl2\oellinux8> wsl -d oellinux8
[root@mywinpc wsl2]#

7. Execute some commands in the running container:
[root@mywinpc wsl2]# ping google.com
PING google.com (172.217.7.110) 56(84) bytes of data.
64 bytes from slc08s01-in-f14.1e100.net (172.217.7.110): icmp_seq=1 ttl=59 time=4.64 ms
64 bytes from slc08s01-in-f14.1e100.net (172.217.7.110): icmp_seq=2 ttl=59 time=5.59 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 4.639/5.116/5.594/0.482 ms
[root@mywinpc wsl2]#
[root@mywinpc wsl2]# yum repolist
repo id repo name
ol8_appstream Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest Oracle Linux 8 BaseOS Latest (x86_64)
[root@mywinpc user1]#
[root@mywinpc user1]# yum provides gdisk
Oracle Linux 8 BaseOS Latest (x86_64) 805 kB/s | 53 MB 01:07
Oracle Linux 8 Application Stream (x86_64) 926 kB/s | 42 MB 00:45
Last metadata expiration check: 0:00:14 ago on Thu 08 Dec 2022 12:29:18 AM MST.
gdisk-1.0.3-6.el8.x86_64 : An fdisk-like partitioning tool for GPT disks
Repo : ol8_baseos_latest
Matched from:
Provide : gdisk = 1.0.3-6.el8

gdisk-1.0.3-9.el8.x86_64 : An fdisk-like partitioning tool for GPT disks
Repo : ol8_baseos_latest
Matched from:
Provide : gdisk = 1.0.3-9.el8

gdisk-1.0.3-11.el8.x86_64 : An fdisk-like partitioning tool for GPT disks
Repo : ol8_baseos_latest
Matched from:
Provide : gdisk = 1.0.3-11.el8

[root@mywinpc user1]#

8. Optionally ENTER exit command to quit the running Linux container:
[root@mywinpc user1]# exit
C:\Users\user1\wsl2\oellinux8>

—————— END OF PROCEDURE ———————————

The following setup is to allow remote connectivity to the container

- Start a  temporary container (e.g., using the image id) to copy SSHD config files from it:
[root@ip-172-31-6-136 ~]# mkdir /oel8_etc_ssh
[root@ip-172-31-6-136 ~]# docker run --name wip -it -v /oel8_etc_ssh:/tmp/mpoint 18a22840eed9
[root@609b0ec071bb /]#
[root@609b0ec071bb /]# cp -a /etc/ssh /tmp/mpoint/
[root@609b0ec071bb /]# exit

- Delete the temporary container:
[root@ip-172-31-6-136 ~]# docker rm wip

- Start the "production" container with /oel8_etc_ssh/ssh on the host mounted to /etc/ssh in the container (running headless or detached mode with "-d"):
  NOTE: mapped port 2222/tcp on the host to the SSH port in the container. This is handy to access the container remotely from outside the host.
[root@ip-172-31-6-136 ~]# docker run --name oel87c -it -p 2222:22 -v /oel8_etc_ssh/ssh:/etc/ssh -d 18a22840eed9

- Attach to the console of the container:
[root@ip-172-31-6-136 ~]# docker attach d99789174764

- Create the ssh host keys (one-time task since they are stored persistently on the underlying host):
[root@d99789174764 /]# ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ''
[root@d99789174764 /]# ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -N ''
[root@d99789174764 /]# ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''

- Create the system log file (otherwise SSHD will be unable to authenticate login attempts):
[root@d99789174764 /]# touch /var/log/messages

- Start SSHD service:
[root@d99789174764 /]# /usr/sbin/sshd &

- Add a user to the container to login remotely:
[root@d99789174764 /]# useradd user1
[root@d99789174764 /]# passwd user1

- Optionally install sudo and configure it via visudo to allow "user1" to switch to the root user:
[root@d99789174764 /]# yum install -y sudo
[root@d99789174764 /]# visudo
[root@d99789174764 /]# grep user1 /etc/sudoers
user1   ALL=(ALL)       NOPASSWD: ALL
[root@d99789174764 /]#

- Detach from the container and keep it running: press Ctrl-P, followed by Ctrl-Q.
  NOTE: if you mistakenly type exit in the container, which causes it to stop, start it again with "docker start <container-id>" on the host

- It is now possible to access the container remotely from outside the host. SSH to the container using the host's IP address on port 2222/tcp, e.g.
  From a Windows/Linux system (you can also use PuTTY): ssh user1@<host-ip> -p 2222

------------------- END -------------------

- Sample commands to retrieve the IP of the container from the underlying host:
[root@ip-172-31-6-136 ~]# docker ps    (command to get the container id)
[root@ip-172-31-6-136 ~]# docker container inspect -f '{{ .NetworkSettings.IPAddress }}' d99789174764
172.17.0.2

- Note: mounting the whole /etc and /var/log to directories on the underlying host should help to "persist" all the relevant configuration of the container.

References:
https://learn.microsoft.com/en-us/windows/wsl/install-manual
https://www.sanner.io/posts/2022/03/create-a-custom-linux-setup-for-wsl2/
https://learn.microsoft.com/en-us/windows/wsl/use-custom-distro
https://hub.docker.com/_/oraclelinux?tab=tags
https://github.com/oracle/container-images/pkgs/container/oraclelinux
https://yum.oracle.com/oracle-linux-isos.html
https://social.technet.microsoft.com/Forums/en-US/e655c45f-3a74-4acb-8df1-3607e4fe6b49/issue-with-installing-linux-subsystem?forum=winserverhyperv
https://community.oracle.com/mosc/discussion/3949381/yum-update-error-rhn-plugin-network-error-connection-reset-by-peer