{"id":2598,"date":"2025-09-10T23:38:49","date_gmt":"2025-09-11T05:38:49","guid":{"rendered":"https:\/\/www.itayemi.com\/blog\/?p=2598"},"modified":"2025-09-10T23:38:49","modified_gmt":"2025-09-11T05:38:49","slug":"selenium-hub-and-nodes-with-podman","status":"publish","type":"post","link":"https:\/\/www.itayemi.com\/blog\/2025\/09\/10\/selenium-hub-and-nodes-with-podman\/","title":{"rendered":"Selenium Hub and Nodes with Podman"},"content":{"rendered":"\n<p>NOTE: I couldn&#8217;t get this procedure to work on Amazon Linux (2023) because podman wasn&#8217;t able to create the proper iptables rules &#8211; something to do with the backend it uses for creating the iptables rules. But you can get Rocky Linux for free from the AWS MarketPlace.<\/p>\n\n\n\n<p>1\/ Install podman (e.g., on Rocky 9 Linux)<br>sudo dnf install -y podman<br>sudo systemctl enable podman<br>sudo systemctl start podman<\/p>\n\n\n\n<p>2\/ pull the images from docker.io images registry<br>podman pull selenium\/hub:latest<br>podman pull selenium\/node-chrome:latest<br># if you need firefox: podman pull selenium\/node-firefox:latest<\/p>\n\n\n\n<p>3\/ install one or more web browsers<br>sudo yum install -y firefox<br>wget https:\/\/dl.google.com\/linux\/direct\/google-chrome-stable_current_x86_64.rpm<br>sudo yum install -y .\/google-chrome-stable_current_x86_64.rpm<\/p>\n\n\n\n<p>4\/ create the network for the containers:<br>sudo podman network create selenium-grid<\/p>\n\n\n\n<p>5\/ start containers based on the images<br>sudo podman create &#8211;name selenium-hub -p 4444:4444 &#8211;network selenium-grid selenium\/hub:latest<br>sudo podman create &#8211;name selenium-node1 -e SE_EVENT_BUS_HOST=selenium-hub -e SE_EVENT_BUS_PUBLISH_PORT=4442 -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 &#8211;network selenium-grid &#8211;shm-size=1g selenium\/node-chrome:latest<br>sudo podman create &#8211;name selenium-node2 -e SE_EVENT_BUS_HOST=selenium-hub -e SE_EVENT_BUS_PUBLISH_PORT=4442 -e 
SE_EVENT_BUS_SUBSCRIBE_PORT=4443 --network selenium-grid --shm-size=1g selenium\/node-chrome:latest<br>sudo podman create --name selenium-node3 -e SE_EVENT_BUS_HOST=selenium-hub -e SE_EVENT_BUS_PUBLISH_PORT=4442 -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 --network selenium-grid --shm-size=1g selenium\/node-chrome:latest<br>sudo podman create --name selenium-node4 -e SE_EVENT_BUS_HOST=selenium-hub -e SE_EVENT_BUS_PUBLISH_PORT=4442 -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 --network selenium-grid --shm-size=1g selenium\/node-chrome:latest<\/p>\n\n\n\n<p>6\/ Install the xauth package if you want to display the browser output on your client when you run the scripts<br>sudo yum install -y xauth<\/p>\n\n\n\n<p>7\/ Create Systemd service files for the hub and the four nodes (I originally generated the following using &#8220;podman generate systemd --new&#8221;)<\/p>\n\n\n\n<p>sudo podman generate systemd --new selenium-hub | sudo tee \/etc\/systemd\/system\/selenium-hub.service<br>sudo podman generate systemd --new selenium-node1 | sudo tee \/etc\/systemd\/system\/selenium-node1.service<br>sudo podman generate systemd --new selenium-node2 | sudo tee \/etc\/systemd\/system\/selenium-node2.service<br>sudo podman generate systemd --new selenium-node3 | sudo tee \/etc\/systemd\/system\/selenium-node3.service<br>sudo podman generate systemd --new selenium-node4 | sudo tee \/etc\/systemd\/system\/selenium-node4.service<br>sudo systemctl daemon-reload<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Below is sample content of the service files for selenium-hub and selenium-node1:<\/li>\n<\/ul>\n\n\n\n<p>cat \/etc\/systemd\/system\/selenium-hub.service<br>[Unit]<br>Description=Podman 
container-selenium-hub.service<br>Documentation=man:podman-generate-systemd(1)<br>Wants=network-online.target<br>After=network-online.target<br>RequiresMountsFor=%t\/containers<\/p>\n\n\n\n<p>[Service]<br>Environment=PODMAN_SYSTEMD_UNIT=%n<br>Restart=on-failure<br># override the default of 1 session (= # of host CPUs); I don&#8217;t need this since I am using multiple worker nodes instead, so the default of 1 is OK<br># Environment=SE_NODE_MAX_SESSIONS=2<br># Environment=SE_NODE_OVERRIDE_MAX_SESSIONS=true<\/p>\n\n\n\n<p>TimeoutStopSec=70<br>ExecStart=\/usr\/bin\/podman run \\<br>--cidfile=%t\/%n.ctr-id \\<br>--cgroups=no-conmon \\<br>--rm \\<br>--sdnotify=conmon \\<br>-d \\<br>--replace \\<br>--name selenium-hub \\<br>-p 4444:4444 \\<br>--network selenium-grid selenium\/hub:latest<br>ExecStop=\/usr\/bin\/podman stop \\<br>--ignore -t 10 \\<br>--cidfile=%t\/%n.ctr-id<br>ExecStopPost=\/usr\/bin\/podman rm \\<br>-f \\<br>--ignore -t 10 \\<br>--cidfile=%t\/%n.ctr-id<br>Type=notify<br>NotifyAccess=all<\/p>\n\n\n\n<p>[Install]<br>WantedBy=default.target<\/p>\n\n\n\n<p>cat \/etc\/systemd\/system\/selenium-node1.service<br>[Unit]<br>Description=Podman container-selenium-node1.service<br>Documentation=man:podman-generate-systemd(1)<br>Wants=network-online.target<br>After=network-online.target<br>RequiresMountsFor=%t\/containers<\/p>\n\n\n\n<p>[Service]<br>Environment=PODMAN_SYSTEMD_UNIT=%n<br>Restart=on-failure<br>TimeoutStopSec=70<br>ExecStart=\/usr\/bin\/podman run \\<br>--cidfile=%t\/%n.ctr-id \\<br>--cgroups=no-conmon \\<br>--rm \\<br>--sdnotify=conmon \\<br>-d \\<br>--replace \\<br>--name selenium-node1 \\<br>-e SE_EVENT_BUS_HOST=selenium-hub \\<br>-e SE_EVENT_BUS_PUBLISH_PORT=4442 \\<br>-e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 \\<br>--network selenium-grid \\<br>--shm-size=1g selenium\/node-chrome:latest<br>ExecStop=\/usr\/bin\/podman stop \\<br>--ignore -t 10 
\\<br>--cidfile=%t\/%n.ctr-id<br>ExecStopPost=\/usr\/bin\/podman rm \\<br>-f \\<br>--ignore -t 10 \\<br>--cidfile=%t\/%n.ctr-id<br>Type=notify<br>NotifyAccess=all<\/p>\n\n\n\n<p>[Install]<br>WantedBy=default.target<\/p>\n\n\n\n<p>8\/ Stop (and remove) the running containers (if any) that you have created service files for<br>(alternatively, &#8220;podman stop -a&#8221; stops all running containers and &#8220;podman rm -a&#8221; removes all containers)<br>sudo systemctl daemon-reload<br>sudo podman ps -a<br>sudo podman stop selenium-hub<br>sudo podman stop selenium-node1<br>sudo podman stop selenium-node2<br>sudo podman stop selenium-node3<br>sudo podman stop selenium-node4<br>sudo podman rm selenium-hub<br>sudo podman rm selenium-node1<br>sudo podman rm selenium-node2<br>sudo podman rm selenium-node3<br>sudo podman rm selenium-node4<\/p>\n\n\n\n<p>9\/ Enable and start the Systemd services for the hub and the nodes (node3 and node4 follow the same pattern)<br>sudo systemctl daemon-reload<br>sudo systemctl enable selenium-hub.service<br>sudo systemctl start selenium-hub.service<br>sudo systemctl status selenium-hub.service<br>sudo systemctl enable selenium-node1.service<br>sudo systemctl start selenium-node1.service<br>sudo systemctl status selenium-node1.service<br>sudo systemctl enable selenium-node2.service<br>sudo systemctl start selenium-node2.service<br>sudo systemctl status selenium-node2.service<\/p>\n\n\n\n<p>10\/ Launch a web browser on the host (running the hub and nodes) and connect to http:\/\/localhost:4444\/ to access the Hub<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>click on the camera\/video-recorder icon; you will be prompted for the VNC password, which is &#8220;secret&#8221;<\/li>\n\n\n\n<li>you can watch the automation going on<\/li>\n<\/ul>\n\n\n\n<p>11\/ Use podman to see the hub and node containers running (I actually have 3 nodes though only listed two service files above)<br>[root@rocky system]# podman ps -a<br>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br>731d0ecd1467 
docker.io\/selenium\/hub:latest \/opt\/bin\/entry_po\u2026 22 hours ago Up 22 hours 0.0.0.0:4444-&gt;4444\/tcp, 4442-4443\/tcp selenium-hub<br>193afdc255b1 docker.io\/selenium\/node-chrome:latest \/opt\/bin\/entry_po\u2026 15 minutes ago Up 15 minutes 5900\/tcp, 9000\/tcp selenium-node1<br>d60109ddb11f docker.io\/selenium\/node-chrome:latest \/opt\/bin\/entry_po\u2026 15 minutes ago Up 15 minutes 5900\/tcp, 9000\/tcp selenium-node2<br>011da3984d4b docker.io\/selenium\/node-chrome:latest \/opt\/bin\/entry_po\u2026 3 seconds ago Up 4 seconds 5900\/tcp, 9000\/tcp selenium-node3<br>[root@rocky system]#<\/p>\n\n\n\n<p>12\/ Submit your jobs, e.g., run your Python scripts, and you can observe the automation in the UI<\/p>\n\n\n\n<p>NOTE: if you do a lot of debugging and abort scripts manually at the shell prompt, it takes a while<br>for selenium-hub to clear out the session. In the Selenium Hub UI, go to &#8220;Sessions&#8221; &gt; click on the &#8220;i&#8221; under the Capabilities column for the aborted session and click &#8220;DELETE&#8221;<br>in the session details pop-up screen. 
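As a quick sanity check of what the hub thinks is registered, you can fetch its status endpoint (http:\/\/localhost:4444\/status returns JSON describing each node and its slots). The helper below is a minimal sketch that only parses such a payload and counts busy vs. free slots; the sample payload here is illustrative, not real grid output:

```python
import json

def summarize_slots(status):
    """Count (total, busy) browser slots in a Selenium Grid /status payload."""
    total = busy = 0
    for node in status.get("value", {}).get("nodes", []):
        for slot in node.get("slots", []):
            total += 1
            # An occupied slot carries a non-null "session" object.
            if slot.get("session") is not None:
                busy += 1
    return total, busy

# Illustrative payload shaped like a Grid 4 /status response (not real output).
sample = json.loads('{"value": {"ready": true, "nodes": ['
                    '{"slots": [{"session": null}, {"session": {"sessionId": "abc"}}]},'
                    '{"slots": [{"session": null}]}]}}')

total, busy = summarize_slots(sample)
print(busy, "of", total, "slots busy")
```

Against a live grid, the payload could be fetched with urllib.request.urlopen("http:\/\/localhost:4444\/status") before deciding whether to submit more jobs.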
Another way is to restart the selenium-nodeX.service using systemctl (though the latter method is preferable).<\/p>\n\n\n\n<p>NOTE: by default, max sessions is set to one, meaning only one session runs on a node with one CPU (the underlying host).<br>It can be increased if you are sure the container performance can support it (especially if your host has more than one CPU &#8211; not cores),<br>but it is easier to create a second node (container) instead.<br>With the default, sessions are queued and run one after another;<br>with a second node, the hub can schedule another session on the second node.<\/p>\n\n\n\n<p>To raise the limit, add the following to the [Service] section of the selenium-nodeX systemd service file (overrides the default of 1 session = # of host CPUs)<br>Environment=SE_NODE_MAX_SESSIONS=2<br>Environment=SE_NODE_OVERRIDE_MAX_SESSIONS=true<\/p>\n\n\n\n<p>Some other parameters that can go in a Quadlet (.container) unit file for a container:<br>AutoUpdate=registry<br>PublishPort=4444:4444<br>Volume=\/dev\/shm:\/dev\/shm<br>AddCapability=AUDIT_WRITE NET_RAW<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Other commands, e.g.,<br>sudo podman stop<br>sudo podman ps -a<br>sudo podman rm<br>sudo podman stats<\/li>\n<\/ul>\n\n\n\n<p>Using a Selenium (podman) container &#8211; doing it manually, to see what actually happens behind the scenes when you use the Systemd services above<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>To utilize a Selenium container image for a script, follow these steps:<\/li>\n\n\n\n<li>Install Docker or Podman:<br>sudo dnf install -y podman<br>sudo systemctl start podman<br>sudo systemctl enable podman<\/li>\n\n\n\n<li>Pull the Selenium Image (download the desired Selenium standalone image from Docker Hub). For instance, to use Chrome:<br>sudo podman pull selenium\/standalone-chrome<\/li>\n\n\n\n<li>Run the Selenium Container: Start the container, exposing the necessary port for communication with the Selenium server. 
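Once started, the Selenium server inside the container can take a few seconds before it accepts connections, so a script may want to poll for readiness first. The sketch below keeps the retry loop separate from the probe so it can be exercised without a running container; against a real container the probe would fetch http:\/\/localhost:4444\/status and check that value.ready is true (the probe shown here is a stand-in for illustration):

```python
import time

def wait_for_grid(probe, timeout=30.0, interval=0.5):
    """Poll `probe` (a callable returning True when the server is ready).

    Returns the number of attempts made, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    attempts = 0
    while time.monotonic() < deadline:
        attempts += 1
        if probe():
            return attempts
        time.sleep(interval)
    raise TimeoutError("Selenium server did not become ready in time")

# Stand-in probe that reports ready on the third call (illustrative only).
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

attempts = wait_for_grid(fake_probe, timeout=5.0, interval=0.01)
print("ready after", attempts, "attempts")
```

In a real script the probe would wrap urllib.request.urlopen and json.load on the status URL, returning False on connection errors.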
For example, to run Chrome:<br>sudo podman run --name selechrome --cap-add=AUDIT_WRITE --cap-add=NET_RAW -d -p 4444:4444 -v \/dev\/shm:\/dev\/shm selenium\/standalone-chrome<\/li>\n<\/ul>\n\n\n\n<p>Create a Python virtual environment and install Selenium in it:<br>sudo yum install -y python3.12 (latest as of 8\/31\/2025 &#8211; recent Selenium releases need something newer than Python 3.9.x, which may be the default)<br>python3.12 -m venv selenium_env<br>[aitayemi@rocky ~]$ source selenium_env\/bin\/activate<br>((selenium_env) ) [aitayemi@rocky ~]$ pip install --upgrade pip<br>((selenium_env) ) [aitayemi@rocky ~]$ pip install selenium==4.35.0<br>((selenium_env) ) [aitayemi@rocky ~]$ deactivate<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Configure Selenium WebDriver in your Script: In your Selenium script, configure the WebDriver to connect to the remote Selenium server running in the container. The URL will typically be http:\/\/localhost:4444\/wd\/hub (or the IP address of the container host if running remotely, and the mapped port).<\/li>\n<\/ul>\n\n\n\n<p>sample script:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from selenium import webdriver\nfrom selenium.webdriver.chrome.options import Options\n\nchrome_options = Options()\ndriver = webdriver.Remote(\n    command_executor='http:\/\/localhost:4444\/wd\/hub',\n    options=chrome_options\n)\n\ndriver.get(\"http:\/\/www.google.com\")\nprint(driver.title)\ndriver.quit()<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>In MY OWN psp.py script, I replaced the initializeBrowser() call with code that creates the browser driver object using the running selenium container:<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code># Initialize Browser Object\n# URL = \"https:\/\/www.acme.com\/sweepstakes\/all-acme-sweeps\"\n# driver = _acme_psp.initializeBrowser(URL)\nchrome_options = Options()\ndriver = webdriver.Remote(\n    command_executor='http:\/\/localhost:4444\/wd\/hub',\n    options=chrome_options\n)<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>To see what is going on inside the container, launch a browser locally and go to http:\/\/localhost:4444\/<\/li>\n\n\n\n<li>click on the camera\/video-recorder icon; you will be prompted for the VNC password, which is &#8220;secret&#8221;<\/li>\n\n\n\n<li>you can watch the automation going on<\/li>\n\n\n\n<li>NO more cron scheduling issues where I have to ensure no other chrome\/firefox sessions are running! I don&#8217;t get the &#8220;--user-data-dir in use&#8221; error any more!!<br>NOTE: max sessions is auto-set to the # of CPUs on the host (not cores &#8211; for example, I only have one CPU on my Linux laptop\/host). Overriding it to 2 means I can have two concurrent Chrome browser sessions (i.e., 2 Python scripts initiating chrome sessions)<\/li>\n\n\n\n<li>Run the Selenium Grid (Chrome) Container &#8211; for ACME Super Prize (to optionally limit the RAM disk, add --shm-size=512m):<br>sudo podman run --name acme_psp --cap-add=AUDIT_WRITE --cap-add=NET_RAW -d -p 4444:4444 -v \/dev\/shm:\/dev\/shm -e SE_NODE_MAX_SESSIONS=2 -e SE_NODE_OVERRIDE_MAX_SESSIONS=true selenium\/standalone-chrome<\/li>\n\n\n\n<li>Rather than typing \/path\/to\/python\/venv\/python3 every time, it is easier to make the script file executable and set the first line of the script (the shebang) to invoke the python executable in the virtual environment, i.e.,<br>#!\/home\/aitayemi\/selenium_env\/bin\/python3<\/li>\n<\/ul>\n\n\n\n<p>NOTE: if you install podman on a system such as amazon-linux that doesn&#8217;t have the package by default, you need to create the directory \/etc\/containers\/ and two files in it &#8211; registries.conf and policy.json. 
RedHat variants such as Rocky Linux already come with the directory and the two files in it.<\/p>\n\n\n\n<p>[root@rocky ~]# cat \/etc\/containers\/registries.conf<br>unqualified-search-registries = [\"registry.access.redhat.com\", \"registry.redhat.io\", \"docker.io\"]<\/p>\n\n\n\n<p>[root@rocky ~]# cat \/etc\/containers\/policy.json<br>{<br>\"default\": [<br>{<br>\"type\": \"insecureAcceptAnything\"<br>}<br>],<br>\"transports\": {<br>\"docker\": {<br>\"registry.access.redhat.com\": [<br>{<br>\"type\": \"signedBy\",<br>\"keyType\": \"GPGKeys\",<br>\"keyPaths\": [\"\/etc\/pki\/rpm-gpg\/RPM-GPG-KEY-redhat-release\", \"\/etc\/pki\/rpm-gpg\/RPM-GPG-KEY-redhat-beta\"]<br>}<br>],<br>\"registry.redhat.io\": [<br>{<br>\"type\": \"signedBy\",<br>\"keyType\": \"GPGKeys\",<br>\"keyPaths\": [\"\/etc\/pki\/rpm-gpg\/RPM-GPG-KEY-redhat-release\", \"\/etc\/pki\/rpm-gpg\/RPM-GPG-KEY-redhat-beta\"]<br>}<br>]<br>},<br>\"docker-daemon\": {<br>\"\": [<br>{<br>\"type\": \"insecureAcceptAnything\"<br>}<br>]<br>}<br>}<br>}<br>[root@rocky ~]#<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n","protected":false},"excerpt":{"rendered":"<p>NOTE: I couldn&#8217;t get this procedure to work on Amazon Linux (2023) because podman wasn&#8217;t able to create the proper iptables rules &#8211; something to do with the backend it uses for creating the iptables rules. 
But you can get &hellip; <a href=\"https:\/\/www.itayemi.com\/blog\/2025\/09\/10\/selenium-hub-and-nodes-with-podman\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":336,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[1368,1498,1495,1499,1492,1497],"class_list":["post-2598","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-docker","tag-playwright","tag-podman","tag-python3","tag-selenium","tag-web-automation"],"_links":{"self":[{"href":"https:\/\/www.itayemi.com\/blog\/wp-json\/wp\/v2\/posts\/2598","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.itayemi.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.itayemi.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.itayemi.com\/blog\/wp-json\/wp\/v2\/users\/336"}],"replies":[{"embeddable":true,"href":"https:\/\/www.itayemi.com\/blog\/wp-json\/wp\/v2\/comments?post=2598"}],"version-history":[{"count":1,"href":"https:\/\/www.itayemi.com\/blog\/wp-json\/wp\/v2\/posts\/2598\/revisions"}],"predecessor-version":[{"id":2599,"href":"https:\/\/www.itayemi.com\/blog\/wp-json\/wp\/v2\/posts\/2598\/revisions\/2599"}],"wp:attachment":[{"href":"https:\/\/www.itayemi.com\/blog\/wp-json\/wp\/v2\/media?parent=2598"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.itayemi.com\/blog\/wp-json\/wp\/v2\/categories?post=2598"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.itayemi.com\/blog\/wp-json\/wp\/v2\/tags?post=2598"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}