PALO ALTO VM ON VIRTUALBOX

NOTE: for educational purposes only.
NOTE: this procedure places an Oracle Enterprise Linux (OEL) system "behind" a Palo Alto NGFW (firewall) and registers the OEL system with the ULN (Unbreakable Linux Network). It is part of an investigation into an issue noticed with OEL 8.x in combination with Palo Alto v10.1.x/10.2.x: once a certain profile is applied on the Palo Alto, the OEL system is no longer able to communicate with the ULN (initial registration fails for new systems, and retrieving packages via yum fails for already-registered systems). In one scenario, editing the file /etc/sysconfig/rhn/up2date and setting useNoSSLForPackages=1 (default is 0) appeared to resolve the issue.
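
For reference, the workaround is a one-line change in /etc/sysconfig/rhn/up2date; a minimal sketch, assuming the option is already present in the file with its default value of 0:
[root@oel86 ~]# grep useNoSSLForPackages /etc/sysconfig/rhn/up2date
useNoSSLForPackages=0
[root@oel86 ~]# sed -i 's/^useNoSSLForPackages=0/useNoSSLForPackages=1/' /etc/sysconfig/rhn/up2date
[root@oel86 ~]# grep useNoSSLForPackages /etc/sysconfig/rhn/up2date
useNoSSLForPackages=1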

NOTE: For some reason, the Palo Alto qcow2 image does not boot up completely by default in VirtualBox: it stops at the PA-HDF login prompt instead of the PA VM login prompt (the PA-HDF prompt implies the system has not booted up completely).

-----------------------------------------------------

Setup Palo Alto VM on VirtualBox:  https://nfv.dev/blog/2022/03/how-to-run-a-palo-alto-vm-series-firewall-in-virtualbox/

1. Convert the qcow2 disk image to Hyper-V VHDX, set up Hyper-V on your Windows 10 host, create a VM with the PA disk in Hyper-V, launch the VM, and log in (be patient, as it takes a while to get the "PA VM" prompt; the initial "PA-HDF" prompt should be ignored, and you might need to press the ENTER key a few times for the prompt to change). Shut down the Hyper-V VM ("request shutdown system" command in PA). Then convert the VHDX disk to VMDK/VDI and use the new VMDK/VDI disk to create a VirtualBox VM (a VBoxManage sketch follows step 5 below).

2. Convert PA qcow2 disk to VHDX:
C:\PaloAlto1010>c:\qemu\qemu-img.exe convert -f qcow2 PA-VM-KVM-10.1.0.qcow2 -O vhdx PA-VM-1010.vhdx   (qemu-img makes a sparse copy of the disk, which Hyper-V does not support; Hyper-V will complain that the file must not be sparse: https://www.mail-archive.com/qemu-discuss@nongnu.org/msg04963.html)

3. Make a non-sparse copy of the VHDX disk using the "copy" command or even a Windows Explorer copy.
C:\PaloAlto1010>copy PA-VM-1010.vhdx PA-VM-1010a.vhdx

4. Create a Hyper-V VM from PA-VM-1010a.vhdx, power it on, log in, change the password if prompted, then shut down the VM.

Note that since the PA VM boots up properly on Hyper-V, you can use Hyper-V instead of VirtualBox. 

5. Convert VHDX to VDI after shutting down the Hyper-V VM:
C:\PaloAlto1010>c:\qemu\qemu-img.exe convert PA-VM-1010a.vhdx -O vdi PA-VM-1010.vdi
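
NOTE: the VirtualBox VM itself can be created in the GUI or with VBoxManage. A sketch of the CLI route (the VM name, memory, and CPU values are illustrative; size them to the PA-VM minimums for your version):
C:\PaloAlto1010>VBoxManage createvm --name "PA-VM-1010" --ostype "Linux_64" --register
C:\PaloAlto1010>VBoxManage modifyvm "PA-VM-1010" --memory 6144 --cpus 2
C:\PaloAlto1010>VBoxManage storagectl "PA-VM-1010" --name "SATA" --add sata --controller IntelAhci
C:\PaloAlto1010>VBoxManage storageattach "PA-VM-1010" --storagectl "SATA" --port 0 --device 0 --type hdd --medium PA-VM-1010.vdi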


NOTE: the default PA credential is admin/admin (it takes some time after boot-up for the credentials to be accepted; the true login prompt when the system is fully up should be something like "PA VM", but you may initially be presented with the "PA-HDF" prompt)

-----------------------------------------------------


INITIAL BASIC CONFIG OF PALO ALTO VM TO SERVE AS INTERNET GATEWAY:  https://rowelldionicio.com/setting-up-palo-alto-networks-firewall-first-time/

My test config (all on a Windows 10 host system):
- OEL8.6 VM (VBox) <-----> PA 10.1.0 VM (VBox) <------> Windows 10 laptop (Host) <------> Home Internet Router
- IMPORTANT: all 4x NICs on the PA VM were enabled in VBox. The first NIC is mgmt, the second NIC is Ethernet1/1, and the third NIC is Ethernet1/2
- The first and second NICs are bridged to the WiFi adapter on the Windows 10 host so they can get DHCP IPs from my home router
- Third NIC (Ethernet1/2) is connected to the default "Internal Network" named "intnet" in VBox
- The single NIC attached to the OEL8.6 VM is also connected to the default "Internal Network" named "intnet" in VBox so that it can communicate with the PA VM which will serve as the DHCP server and gateway for the OEL8.6 VM
- NOTE: PA 10.1.0 did NOT come with the "rule1" ACL (mentioned in the referenced URL above) that allows traffic between the trusted and untrusted zones. You NEED to create the ACL rule.
- NOTE: you need to add a "Static Route" (default route) to the default "Virtual Router" that sends all traffic to the Internet router's IP. For example, I created a "Static Route" named "Default Route" with Destination 0.0.0.0/0 ; Interface ethernet1/1 ; Next Hop "IP Address" 192.168.10.1 (the LAN IP address of my home internet router). A CLI sketch follows this list of notes.
- NOTE: you can add a second NIC to the OEL8.6 VM in VBox and attach it to the "Host-Only Adapter". This allows you to connect via SSH from the Windows 10 host to the OEL8.6 VM for troubleshooting purposes.
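
- For reference, the "Default Route" from the note above can also be created from the PA CLI instead of the web UI; a sketch using PAN-OS set commands (verify the exact syntax against your PAN-OS version before committing):
admin@PA-VM> configure
admin@PA-VM# set network virtual-router default routing-table ip static-route "Default Route" destination 0.0.0.0/0 interface ethernet1/1 nexthop ip-address 192.168.10.1
admin@PA-VM# commit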


- Other NOTES:
- install the UEK kernel on the OEL 8.6 VM:
[root@oel86vb ~]# yum install -y kernel-uek.x86_64

------------------------------------------------------


References:
How to run a Palo Alto VM Series Firewall in VirtualBox
https://nfv.dev/blog/2022/03/how-to-run-a-palo-alto-vm-series-firewall-in-virtualbox/
https://docs.cloudstack.apache.org/en/4.11.2.0/adminguide/networking/palo_alto_config.html
https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000CltrCAC
https://docs.paloaltonetworks.com/pan-os/9-1/pan-os-admin/firewall-administration/use-the-web-interface
https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000ClN7CAK
https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000CloQCAS
https://www.wiresandwi.fi/blog/palo-alto-basic-setup
Oracle Linux: How to De-Register a System from ULN (Doc ID 2133228.1)

-------------------

Register a system with ULN:
[root@oel86 ~]# wget https://linux-update.oracle.com/rpms/uln_register_ol8.tgz
[root@oel86 ~]# wget https://linux-update.oracle.com/rpms/uln_register-gnome_ol8.tgz
[root@oel86 ~]# tar xf uln_register-gnome_ol8.tgz
[root@oel86 ~]# tar xf uln_register_ol8.tgz
[root@oel86 ~]# yum install -y *rpm
[root@oel86 ~]# uln_register

- use the uln_register command for the interactive option or ulnreg_ks for the CLI option. The profilename is optional; without it, the system's entry in ULN will be named after the system's hostname e.g.,
# ulnreg_ks --profilename=OEL86vbox --username=<my-registered-uln-email> --password=<my-oracle-support-password> --csi=<my-oracle-support-csi-#>

De-register a system from ULN:
- Login to the ULN registration page (http://linux.oracle.com) and delete the registered system from ULN. You must login as the user that registered the system with ULN.
  - Select the System tab > select the system to be removed > select the Delete button
- Remove the system registration information from the local system. This can be done by removing the systemid file:
# rm /etc/sysconfig/rhn/systemid
- Setup the public yum repository files in /etc/yum.repos.d/ . Instructions for setting up public yum can be found at the following URL: http://yum.oracle.com/

---------------------------------

- Some commands:

- Get details of the IP received via DHCP over the bridge to the host WiFi NIC from the home router:
admin@PA-VM> show dhcp client mgmt-interface-state

- Assign the same IP permanently:
admin@PA-VM> configure
set deviceconfig system type static
set deviceconfig system ip-address 192.168.10.60 netmask 255.255.255.0 default-gateway 192.168.10.1
commit

- Enable HTTPS web mgmt (plus SSH and ICMP) on the mgmt interface:
set deviceconfig system service disable-https no
set deviceconfig system service disable-ssh no
set deviceconfig system service disable-icmp no
commit

- Retrieve mgmt interface IP details:
admin@PA-VM> show interface management

- Graceful shutdown:
admin@PA-VM> request shutdown system

- Ping a host from the PA:
admin@PA-VM> ping host 8.8.8.8

---------------------------------

RUNNING AIX v7.2 VM ON QEMU HYPERVISOR ON AN UBUNTU HOST

This procedure documents setting up the latest available QEMU on Ubuntu in order to run an AIX v7.2 VM.
Most of the steps are from http://aix4admins.blogspot.com/2020/04/qemu-aix-on-x86-qemu-quick-emulator-is.html?m=1

The host in this case is an AWS t3.xlarge compute instance running Ubuntu 22.04.1 LTS (Jammy Jellyfish)
I also attached a secondary EBS volume (55G) to the instance which I mounted on /wip and where I stored all the relevant files.


- Login to the Ubuntu host and install QEMU:

root@ip-172-31-23-252:~# apt update -y
root@ip-172-31-23-252:~# apt install -y gcc make ninja-build
root@ip-172-31-23-252:~# wget https://download.qemu.org/qemu-7.2.0.tar.xz
root@ip-172-31-23-252:~# tar xvf qemu-7.2.0.tar.xz
root@ip-172-31-23-252:~# cd qemu-7.2.0/
root@ip-172-31-23-252:~/qemu-7.2.0# apt install libglib2.0-dev
root@ip-172-31-23-252:~/qemu-7.2.0# apt-get install -y libpixman-1-dev
root@ip-172-31-23-252:~/qemu-7.2.0# apt install ncurses-dev
root@ip-172-31-23-252:~/qemu-7.2.0# ./configure
// ALTERNATIVELY - build only PPC64 support: # ./configure --target-list=ppc64-softmmu --enable-curses --disable-gtk && make
root@ip-172-31-23-252:~/qemu-7.2.0# make
root@ip-172-31-23-252:~/qemu-7.2.0# make install
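
- Sanity-check the build ("make install" places the binaries in /usr/local/bin by default; this should report version 7.2.0):
root@ip-172-31-23-252:~/qemu-7.2.0# /usr/local/bin/qemu-system-ppc64 --version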


- Partition the secondary volume and format the file system:

root@ip-172-31-23-252:~# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0 24.4M  1 loop /snap/amazon-ssm-agent/6312
loop1          7:1    0 55.6M  1 loop /snap/core18/2632
loop2          7:2    0 63.2M  1 loop /snap/core20/1695
loop3          7:3    0  103M  1 loop /snap/lxd/23541
loop4          7:4    0 49.6M  1 loop /snap/snapd/17883
nvme0n1      259:0    0    8G  0 disk
├─nvme0n1p1  259:1    0  7.9G  0 part /
├─nvme0n1p14 259:2    0    4M  0 part
└─nvme0n1p15 259:3    0  106M  0 part /boot/efi
nvme1n1      259:4    0   55G  0 disk
root@ip-172-31-23-252:~#
root@ip-172-31-23-252:~# fdisk /dev/nvme1n1
root@ip-172-31-23-252:~# partprobe
root@ip-172-31-23-252:~# mkfs -t ext4  /dev/nvme1n1p1
root@ip-172-31-23-252:~# blkid
/dev/nvme0n1p1: LABEL="cloudimg-rootfs" UUID="687fab62-1ba5-4282-890e-9266064f7d27" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="895d8984-5441-4c70-b87c-a6b6ebb8c95e"
/dev/nvme0n1p15: LABEL_FATBOOT="UEFI" LABEL="UEFI" UUID="B2B4-82AC" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="0cf1c52c-98f5-48ae-8a07-fff782190e30"
/dev/loop0: TYPE="squashfs"
/dev/nvme1n1p1: UUID="a5051753-344e-43da-ba1f-cc785cab98b0" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="d816173f-01"
root@ip-172-31-23-252:~# vi /etc/fstab
root@ip-172-31-23-252:~# grep wip /etc/fstab
UUID="a5051753-344e-43da-ba1f-cc785cab98b0"  /wip  ext4  defaults 0 0
root@ip-172-31-23-252:~#
root@ip-172-31-23-252:~# mkdir /wip
root@ip-172-31-23-252:~# mount /wip
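
- NOTE: the fdisk step above is interactive. A scripted equivalent, if you prefer, is possible with parted; a sketch assuming the same device and a single partition spanning the disk (destructive, so double-check the device name):
root@ip-172-31-23-252:~# parted -s /dev/nvme1n1 mklabel msdos
root@ip-172-31-23-252:~# parted -s /dev/nvme1n1 mkpart primary ext4 1MiB 100%
root@ip-172-31-23-252:~# partprobe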


- Copy the AIX v7.2 ISO files to the Ubuntu instance. Please ensure you get them from a legal source.

root@ip-172-31-23-252:~# cd /wip
root@ip-172-31-23-252:/wip# mkdir AIX72ISOs
root@ip-172-31-23-252:/wip# cd AIX72ISOs/
root@ip-172-31-23-252:/wip/AIX72ISOs# scp -i ~/.ssh/wipalinux ubuntu@172.31.18.141:/wip/AIX72ISOs/aix_7200-04-02-2027_1of2_072020.iso .


- Create a disk for the AIX VM:
root@ip-172-31-23-252:~# cd /wip/
root@ip-172-31-23-252:/wip# qemu-img create -f qcow2 hdisk0.qcow2 20G


- Install AIX (you can change install settings e.g., to include the SSH client and server). The installation took approx 110 mins.
root@ip-172-31-23-252:/wip# qemu-system-ppc64 -cpu POWER8 -machine pseries -m 4096 -serial stdio -drive file=hdisk0.qcow2,if=none,id=drive-virtio-disk0 -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=drive-virtio-disk0 -cdrom /wip/AIX72ISOs/aix_7200-04-02-2027_1of2_072020.iso -prom-env "boot-command=boot cdrom:"

   - NOTE: the VM will get stuck in a reboot loop at the end of installation. Use CTRL+C to terminate the VM
   
   
- Fix the fsck64 issue that causes the reboot loop by booting the VM into maintenance mode:
root@ip-172-31-23-252:/wip# qemu-system-ppc64 -cpu POWER8 -machine pseries -m 4096 -serial stdio -drive file=hdisk0.qcow2,if=none,id=drive-virtio-disk0 -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=drive-virtio-disk0 -cdrom /wip/AIX72ISOs/aix_7200-04-02-2027_1of2_072020.iso -prom-env "boot-command=boot cdrom:"
   - menu options to select:  1 to "define the System Console" > 1 for English > 3 for Maintenance mode > 1 to access root VG > 0 to continue > 1 to select VG/disk > 1 to "Access this Volume Group and start a shell"
  - NOTE: the keyboard BACKSPACE key does not work, and don't use CTRL+C as that terminates the VM.

  # cd /sbin/helpers/jfs2
  # cp fsck64 fsck64.org

  - truncate the fsck64 executable binary file and replace its content with a shell script:
  # > fsck64
  # cat > fsck64 << EOF
  #!/bin/ksh
  exit 0
  EOF
  #
  # cat fsck64
  #!/bin/ksh
  exit 0
  #

  - An alternative to the cat sequence above is to edit the fsck64 file after truncating it and add the 2 lines to the file:
  # > fsck64
  # export TERM=vt100
  # vi fsck64
  # cat fsck64
  #!/bin/ksh
  exit 0
  #


  - Shutdown the VM:
  #
  # sync; sync
  # halt


- Create a snapshot of the AIX O/S disk for backup purposes:
root@ip-172-31-23-252:/wip# qemu-img create -f qcow2 -b hdisk0.qcow2 -F qcow2 hdisk0.snap.qcow2 10G
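
- NOTE: the command above creates an external overlay (hdisk0.snap.qcow2) backed by hdisk0.qcow2, so writing to the base image afterwards invalidates the overlay. An alternative rollback point is an internal snapshot kept inside the qcow2 file itself; a sketch (the snapshot name "clean-install" is arbitrary):
root@ip-172-31-23-252:/wip# qemu-img snapshot -c clean-install hdisk0.qcow2    (create)
root@ip-172-31-23-252:/wip# qemu-img snapshot -l hdisk0.qcow2                  (list)
root@ip-172-31-23-252:/wip# qemu-img snapshot -a clean-install hdisk0.qcow2    (revert)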


- Boot the VM to the AIX 7.2 O/S using the O/S disk and accept the license (I excluded the cdrom since I no longer need it):
root@ip-172-31-23-252:/wip# qemu-system-ppc64 -cpu POWER8 -machine pseries -m 4096 -serial stdio -drive file=hdisk0.qcow2,if=none,id=drive-virtio-disk0 -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=drive-virtio-disk0 -prom-env "boot-command=boot disk:"
   - choose vt100 (type it and press ENTER) when prompted for terminal type
   - choose to accept the license (default is no, press TAB key to change it to yes) then ENTER to accept
   - Esc+0 (hold down ESC then press 0) to go back
   - accept the software maintenance terms/conditions
   - Esc+0 (hold down ESC then press 0) to go back
   - Set any of the additional settings as required (date/time; root password; etc)
   - Option "Tasks completed - Exit to Login"
   
   - Login as root on the console (prompt)
-----------------------------------------------
   
- Fix the RPM DB error:  https://bobcares.com/blog/rpm-db_runrecovery-errors/
# cd /opt/freeware
# tar -chvf `date +"%d%m%Y"`.rpm.packages.tar packages
# rm -f /opt/freeware/packages/__*
# /usr/bin/rpm --rebuilddb
# /usr/bin/rpm -qa

-----------------------------------------------
   
   
- Setup networking: https://kwakousys.wordpress.com/2020/09/06/run-aix-7-2-on-x86-with-qemu/
    - in this example, we assign IP address 10.0.2.16 to AIX and 10.0.2.20 to the bridge we defined on the Ubuntu host.

- Setup a bridge (br0) on the Ubuntu host:
    root@ip-172-31-23-252:/wip# apt-get install bridge-utils
    root@ip-172-31-23-252:/wip# mkdir -p /usr/local/etc/qemu
    root@ip-172-31-23-252:/wip# echo "allow br0" > /usr/local/etc/qemu/bridge.conf
	
    NOTE: you can put the following network-related commands in a single script that you can run as a single command (see the scripts at the end of this procedure)
	
    root@ip-172-31-23-252:/wip# ip link add name br0 type bridge
    root@ip-172-31-23-252:/wip# ip link set dev br0 up
    root@ip-172-31-23-252:/wip# ip addr add 10.0.2.20/24 dev br0


- Setup the tap NIC for the AIX VM:
    root@ip-172-31-23-252:/wip# ip tuntap add tap0 mode tap
    root@ip-172-31-23-252:/wip# ip link set dev tap0 up
    root@ip-172-31-23-252:/wip# ip link set dev tap0 master br0

    NOTE: tap0 interface comes up when the VM is started


- Setup the Ubuntu host for routing (including Internet access from the AIX VM):
    root@ip-172-31-23-252:/wip# echo 1 > /proc/sys/net/ipv4/conf/tap0/proxy_arp
    root@ip-172-31-23-252:/wip# ip route add 10.0.2.16 dev tap0
    root@ip-172-31-23-252:/wip# arp -Ds 10.0.2.16 eth0 pub
    root@ip-172-31-23-252:/wip# echo 1 > /proc/sys/net/ipv4/ip_forward
    root@ip-172-31-23-252:/wip# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    root@ip-172-31-23-252:/wip# iptables -I FORWARD 1 -i tap0 -j ACCEPT
    root@ip-172-31-23-252:/wip# iptables -I FORWARD 1 -o tap0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        

- Start the AIX VM normally (assign a randomly selected MAC address to the VM's NIC):
root@ip-172-31-23-252:/wip# qemu-system-ppc64 -cpu POWER8 -machine pseries -m 4096 -serial stdio -drive file=hdisk0.qcow2,if=none,id=drive-virtio-disk0 -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=drive-virtio-disk0 -prom-env "boot-command=boot disk:" -net nic,macaddr=be:16:43:37:16:ec -net tap,script=no,ifname=tap0,downscript=no


- Assign the IP address 10.0.2.16 to the en0 NIC in AIX  (use SMIT to make it permanent)
# chdev -l en0 -a netaddr=10.0.2.16 -a netmask=255.255.255.0 -a state=up

- Make the IP assignment permanent with SMIT (assign any IP on the same network as the gateway e.g., 10.0.2.254):
# smit tcpip > Min Config & Startup > en0 > (setup hostname/netmask/IP/nameserver & domain name & gateway e.g., aix7vm/10.0.2.16/255.255.255.0/8.8.8.8 & acme.com/10.0.2.254) > "START Now" = yes (TAB key to change it) then ENTER key to execute the change
   NOTE: a domain name MUST also be provided if you decide to set the name server (e.g., Google's 8.8.8.8 DNS server).
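
- If you prefer to avoid the SMIT menus, the same permanent configuration can be attempted in one shot with mktcpip, mirroring the SMIT values above (flags per standard AIX usage; verify against the man page):
# mktcpip -h aix7vm -a 10.0.2.16 -m 255.255.255.0 -i en0 -n 8.8.8.8 -d acme.com -g 10.0.2.254 -s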


- Install BASH shell in AIX VM (bash is easier to use than the default Korn shell):

- increase /opt as the bash install requires about 40MB of space:
# chfs -a size=+60M /opt

# wget http://www.oss4aix.org/download/latest/aix71/libiconv-1.16-1.aix5.1.ppc.rpm
# wget http://www.oss4aix.org/download/latest/aix71/bash-5.0-8.aix5.1.ppc.rpm
# wget http://www.oss4aix.org/download/latest/aix71/gettext-0.19.8.1-1.aix5.1.ppc.rpm
# wget http://www.oss4aix.org/download/RPMS/gcc/libgcc-6.3.0-1.aix7.2.ppc.rpm
# rpm -ivh bash-5.0-8.aix5.1.ppc.rpm gettext-0.19.8.1-1.aix5.1.ppc.rpm libiconv-1.16-1.aix5.1.ppc.rpm libgcc-6.3.0-1.aix7.2.ppc.rpm

- In AIX, after installing bash, "authorize" AIX to allow the bash shell to run:
# export TERM=vt100
   - Edit file /etc/security/login.cfg, append "/usr/bin/bash" (without the double quotes)  to the line containing "shells ="
   - Edit file /etc/shells, append this on a new line "/usr/bin/bash" (without the double quotes)
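
   - Both edits can also be scripted instead of using vi; a sketch (the chsec stanza reflects where the "shells =" line lives in /etc/security/login.cfg on AIX; list your system's existing shells and append /usr/bin/bash rather than replacing them):
   # echo /usr/bin/bash >> /etc/shells
   # grep "shells = " /etc/security/login.cfg    (note the current list)
   # chsec -f /etc/security/login.cfg -s usw -a "shells=/bin/sh,/bin/bsh,/bin/csh,/bin/ksh,/usr/bin/sh,/usr/bin/bsh,/usr/bin/csh,/usr/bin/ksh,/usr/bin/bash"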
   

--------------- END OF PROCEDURE ---------------


- Extra step in order to access the AIX VM using SSH from outside the Ubuntu host (particularly useful if you are using the "--daemonize" headless option when starting the AIX VM):
Summary is to use iptables to redirect incoming attempts to connect to the Ubuntu instance on some alternate port (e.g., 2222/tcp) to port 22 on the AIX VM. Note that you also need to allow incoming traffic on this alternate port in your AWS/OCI/GCP VPC/subnet using the relevant security group rule.

root@ip-172-31-23-252:/wip# iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
root@ip-172-31-23-252:/wip# iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.0.2.16:22


You can then connect to the AIX VM with PuTTY (Ubuntu IP address and port 2222) or using SSH with a command such as: ssh root@<ubuntu-ip> -p 2222
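
NOTE: iptables rules added this way do not survive a reboot. One way to persist them, assuming the iptables-persistent package is acceptable on your Ubuntu host:
root@ip-172-31-23-252:/wip# apt install -y iptables-persistent
root@ip-172-31-23-252:/wip# netfilter-persistent save    (writes the current rules to /etc/iptables/rules.v4)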


   
References:
http://aix4admins.blogspot.com/2020/04/qemu-aix-on-x86-qemu-quick-emulator-is.html?m=1
Run AIX 7.2 on x86 with QEMU
https://worthdoingbadly.com/aixqemu/
http://www.visidon.com/blog/2015/02/bash-on-aix-7-1/
RPM DB_RUNRECOVERY errors: How to resolve
http://www.oss4aix.org/download/latest/aix71/ - download RPMs for various packages
http://www.oss4aix.org/download/RPMS/gcc/
https://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/

---------------------------------

- Some useful commands:

- Run the VM in "headless" mode so you can access it using SSH (if you use this option, delete "-serial stdio" from the command). QEMU starts the VM and gives a message such as "VNC server running on 127.0.0.1:5900", so you can use VNC to manage the VM as well:
# qemu-system-ppc64 ... -daemonize

root@ip-172-31-23-252:/wip# mount -o loop /wip/bash51-aix71.iso /iso
root@ip-172-31-23-252:/wip# ip link set dev br0 down
root@ip-172-31-23-252:/wip# ip link delete dev br0
root@ip-172-31-23-252:/wip# ip a

- On AIX:
# mount -v cdrfs -o ro /dev/cd0 /mnt
# entstat -d en0 | grep -i hard
Hardware Address: be:16:43:37:16:ec
# chsh <username> /bin/bash

- logout from the session after an AIX shutdown is possible using "~~." (same as in an HMC console)

root@ip-172-31-23-252:/wip# apt install -y genisoimage
root@ip-172-31-23-252:/wip# mkisofs -max-iso9660-filenames -o bash50.iso ./bash50

- boot the AIX VM with the ISO image containing the bash rpm package:
qemu-system-ppc64 -cpu POWER8 -machine pseries -m 4096 -serial stdio -drive file=hdisk0.qcow2,if=none,id=drive-virtio-disk0 -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=drive-virtio-disk0 -prom-env "boot-command=boot disk:" -net nic,macaddr=be:16:43:37:16:ec -net tap,script=no,ifname=tap0 -cdrom /wip/bash50.iso

- Some notes:
- "make" of QEMU took about 85 mins on t3.xlarge when compiling all supported platforms, but under 10 mins when compiling for ppc64 support only.
- ensure you have plenty of space for the compile: qemu-7.2.0.tar.xz is 117M, the extracted folder qemu-7.2.0 is 799M, and once compiled the extracted folder grows to almost 6GB!
- if you didn't include the ssh client/server during the installation, you will need to start the VM with the ISO image inserted in the CDROM so you can install them.

- Optionally disable some un-needed services to speed up the boot process:
- edit file /etc/rc.tcpip and comment out some services if not required e.g., sendmail, snmpd, hostmibd, snmpmibd, aixmibd (look for lines similar to: start /usr/sbin/aixmibd "$src_running")
- to disable the NFS server, edit file /etc/rc.nfs and comment out the line: start biod /usr/sbin/biod
- you may use the "stopsrc -s <service-name>" command to shut them down in the current session as well.

- Optionally disable additional services defined in the /etc/inittab file to make subsequent boot-ups faster (using the following commands):
# rmitab rcnfs
# rmitab cron
# rmitab piobe
# rmitab qdaemon
# rmitab writesrv
# rmitab naudio2
# rmitab naudio
# rmitab aso
# rmitab clcomd
# chrctcp -S -d tftpd
- The networking setup and AIX VM launch scripts (execute the network script before the AIX VM launch script so that the VM has network access):

root@ip-172-31-23-252:/wip# cat setup_networking_for_aix.sh
#!/usr/bin/bash

#- Setup the tap NIC for the AIX VM:
ip tuntap add tap0 mode tap
ip link set dev tap0 up

#NOTE: the tap0 interface comes up when the VM is started

#- Setup the host for routing (including Internet access from the AIX VM):
echo 1 > /proc/sys/net/ipv4/conf/tap0/proxy_arp
ip route add 10.0.2.16 dev tap0
arp -Ds 10.0.2.16 eth0 pub
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -I FORWARD 1 -i tap0 -j ACCEPT
iptables -I FORWARD 1 -o tap0 -m state --state RELATED,ESTABLISHED -j ACCEPT

#- Setup port forwarding so that the AIX VM is accessible remotely:
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.0.2.16:22



root@ip-172-31-23-252:/wip# cat launch_aix72_vm.sh
#!/usr/bin/bash

/usr/local/bin/qemu-system-ppc64 -cpu POWER8 -machine pseries -m 4096 -drive file=hdisk0.qcow2,if=none,id=drive-virtio-disk0 -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=drive-virtio-disk0 -prom-env "boot-command=boot disk:" -net nic,macaddr=be:16:43:37:16:ec -net tap,script=no,ifname=tap0,downscript=no --daemonize
#/usr/local/bin/qemu-system-ppc64 -cpu POWER8 -machine pseries -m 4096 -drive file=hdisk0.qcow2,if=none,id=drive-virtio-disk0 -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=drive-virtio-disk0 -prom-env "boot-command=boot disk:" -net nic,macaddr=be:16:43:37:16:ec -net tap,script=no,ifname=tap0,downscript=no
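
- A matching teardown script is handy when re-running the setup. A sketch assembled from the commands already shown above (the -D deletions assume the rules were added exactly as in the setup script):

root@ip-172-31-23-252:/wip# cat teardown_networking_for_aix.sh
#!/usr/bin/bash

#- Undo setup_networking_for_aix.sh (hypothetical helper script):
iptables -t nat -D PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.0.2.16:22
iptables -D INPUT -p tcp --dport 2222 -j ACCEPT
iptables -D FORWARD -o tap0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -D FORWARD -i tap0 -j ACCEPT
iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ip route del 10.0.2.16 dev tap0
ip link set dev tap0 down
ip tuntap del tap0 mode tap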

- VNC connection to the AIX VM
When qemu is executed with the --daemonize option, it also creates a VNC session that you can connect to. By default the VNC session is started on only the loopback (127.0.0.1) interface. In the sample command below, it is started on the primary interface of the Ubuntu host with the IP 172.31.23.252. You can then use any VNC viewer, such as TightVNC, to connect to the VM's console using the Ubuntu host's public IP:
root@ip-172-31-23-252:/wip# /usr/local/bin/qemu-system-ppc64 -cpu POWER8 -machine pseries -m 4096 -drive file=/wip/hdisk0.qcow2,if=none,id=drive-virtio-disk0 -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=drive-virtio-disk0 -prom-env "boot-command=boot disk:" -net nic,macaddr=be:16:43:37:16:ec -net tap,script=no,ifname=tap0,downscript=no --daemonize -vnc 172.31.23.252:0
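
NOTE: binding VNC to the primary interface as above exposes an unauthenticated console to anything that can reach that port. A safer alternative is to keep the default loopback binding and tunnel the VNC port over SSH from your workstation (the key file name is illustrative):
$ ssh -i ~/.ssh/mykey.pem -L 5900:127.0.0.1:5900 ubuntu@<ubuntu-public-ip>
Then point the VNC viewer at 127.0.0.1:5900 on the workstation.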

RUN ORACLE LINUX 8.X DOCKER CONTAINER ON WINDOWS 10 WITH WSL2

The purpose of this guide is to run an Oracle Linux container on a Windows 10 system using Windows Subsystem for Linux v2.
(This may be an alternative to using a full-blown hypervisor type 2 such as Oracle VirtualBox or VMWare Player/Workstation.)

NOTE: Using container images from the official Oracle repository
NOTE: You will need to be running Windows 10 build 18917 or higher to use WSL 2. If you are on an earlier Windows 10 build, launch Windows Update Settings; you should be able to update to the latest available version.
NOTE: there are docker images for 7/8/9 and slim versions of 7/8/9 (minimal environments with a minimal number of packages) in the ghcr.io repository.

1. Prepare an Oracle Linux 8.x container and export it to a single TAR file using an existing Linux system as the work platform:

[root@wip]# yum install -y docker
[root@wip]# usermod -aG docker root
[root@wip]# newgrp docker
[root@wip]# id
uid=0(root) gid=992(docker) groups=992(docker),0(root)
[root@wip]#
[root@wip]# systemctl start docker.service
[root@wip]# systemctl enable docker.service

- Create the Dockerfile to use to build the container:
[root@wip]# vi Dockerfile
[root@wip]# cat Dockerfile
FROM ghcr.io/oracle/oraclelinux:8

CMD ["/bin/bash"]

- Build the docker container:
[root@wip]# docker build -t ghcr.io/oracle/oraclelinux:8 .
Sending build context to Docker daemon 23.04kB
Step 1/2 : FROM ghcr.io/oracle/oraclelinux:8
8: Pulling from oracle/oraclelinux
4c770e098606: Pull complete
Digest: sha256:07a995ecaf9db1ce613648a08facc162de69f26c39712f1acc93629c2e6c4e73
Status: Downloaded newer image for ghcr.io/oracle/oraclelinux:8
 ---> b0045ea7bbde
Step 2/2 : CMD ["/bin/bash"]
 ---> Running in 168cb6d08c9e
Removing intermediate container 168cb6d08c9e
 ---> 53be01d92e18
Successfully built 53be01d92e18
Successfully tagged ghcr.io/oracle/oraclelinux:8

- Test the container:
[root@wip]# docker run -it 53be01d92e18
[root@ec6e4b0f7c3b /]# cat /etc/oracle-release
Oracle Linux Server release 8.7
[root@ec6e4b0f7c3b /]# exit

- List all containers (note the container id ec6e4b0f7c3b associated with the image id 53be01d92e18 from the earlier build command output):

[root@wip]# docker ps -a
CONTAINER ID    IMAGE          COMMAND       CREATED              STATUS                         PORTS     NAMES
ec6e4b0f7c3b    53be01d92e18   "/bin/bash"   About a minute ago   Exited (0) 23 seconds ago                reverent_ellis

- Export the container into a single TAR file (222M size):
[root@wip]# docker export --output="oellinux8.tar" ec6e4b0f7c3b

- Optionally gzip the file (85MB zipped) to reduce the amount of data transferred when copying it to the Windows 10 system:
[root@wip]# gzip oellinux8.tar

- Transfer the container TAR file to the Windows 10 system. In this case I will be using pscp to pull the file down to the Windows 10 system as a user other than root, so I copied the file to /tmp (which is accessible to all users) and changed the permissions on the file so other users can read it:
[root@wip]# cp oellinux8.tar.gz /tmp/
[root@wip]# chmod 666 /tmp/oellinux8.tar.gz

2. SETUP WSL2 on Windows 10:
- Using an elevated/admin PowerShell, run: Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
- Using an elevated/admin command prompt or PowerShell, run: dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
- Reboot the Windows 10 system (the WSL2 upgrade fails without a reboot after installing WSL)
- Upgrade WSL to WSL2 via the installer: https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi
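- After the update installs, set WSL2 as the default version so the import in step 5 creates a WSL2 distro (wsl -l -v will list distros and their WSL versions once one is imported):
C:\Users\user1> wsl --set-default-version 2
C:\Users\user1> wsl -l -v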

3. Pull down the container file to the local Windows 10 system with WSL2 installed:
C:\Users\user1> md wsl2\oellinux8
C:\Users\user1> cd wsl2\oellinux8
C:\Users\user1\wsl2\oellinux8>pscp -i ..\..\Downloads\centos8.ppk ec2-user@my-linux-wip-server:/tmp/oellinux8.tar.gz .

4. Unzip the container file oellinux8.tar.gz back to oellinux8.tar (if you compressed the original TAR file), e.g., with 7-Zip.

5. Import the TAR file into WSL (syntax: wsl --import [DISTRO NAME] [STORAGE LOCATION] [FILE NAME]):
C:\Users\user1\wsl2\oellinux8>wsl --import oellinux8 "C:\Users\user1\wsl2\oellinux8" oellinux8.tar

NOTE: the import step extracts the TAR file into rootfs and temp directories:
C:\Users\user1\wsl2\oellinux8>dir
12/07/2022 11:47 PM 232,101,888 oellinux8.tar
12/07/2022 11:59 PM 84,593,746 oellinux8.tar.gz
12/08/2022 12:25 AM rootfs
12/08/2022 12:46 AM temp

6. Start the new WSL container (which ends at the running Linux prompt):
C:\Users\user1\wsl2\oellinux8> wsl -d oellinux8
[root@mywinpc wsl2]#

7. Execute some commands in the running container:
[root@mywinpc wsl2]# ping google.com
PING google.com (172.217.7.110) 56(84) bytes of data.
64 bytes from slc08s01-in-f14.1e100.net (172.217.7.110): icmp_seq=1 ttl=59 time=4.64 ms
64 bytes from slc08s01-in-f14.1e100.net (172.217.7.110): icmp_seq=2 ttl=59 time=5.59 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 4.639/5.116/5.594/0.482 ms
[root@mywinpc wsl2]#
[root@mywinpc wsl2]# yum repolist
repo id repo name
ol8_appstream Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest Oracle Linux 8 BaseOS Latest (x86_64)
[root@mywinpc user1]#
[root@mywinpc user1]# yum provides gdisk
Oracle Linux 8 BaseOS Latest (x86_64) 805 kB/s | 53 MB 01:07
Oracle Linux 8 Application Stream (x86_64) 926 kB/s | 42 MB 00:45
Last metadata expiration check: 0:00:14 ago on Thu 08 Dec 2022 12:29:18 AM MST.
gdisk-1.0.3-6.el8.x86_64 : An fdisk-like partitioning tool for GPT disks
Repo : ol8_baseos_latest
Matched from:
Provide : gdisk = 1.0.3-6.el8

gdisk-1.0.3-9.el8.x86_64 : An fdisk-like partitioning tool for GPT disks
Repo : ol8_baseos_latest
Matched from:
Provide : gdisk = 1.0.3-9.el8

gdisk-1.0.3-11.el8.x86_64 : An fdisk-like partitioning tool for GPT disks
Repo : ol8_baseos_latest
Matched from:
Provide : gdisk = 1.0.3-11.el8

[root@mywinpc user1]#

8. Optionally enter the exit command to quit the running Linux container:
[root@mywinpc user1]# exit
C:\Users\user1\wsl2\oellinux8>

--------------- END OF PROCEDURE ---------------

The following setup is to allow remote connectivity to the container

- Start a temporary container (e.g., using the image id) to copy the SSHD config files from it:
[root@ip-172-31-6-136 ~]# mkdir /oel8_etc_ssh
[root@ip-172-31-6-136 ~]# docker run --name wip -it -v /oel8_etc_ssh:/tmp/mpoint 18a22840eed9
[root@609b0ec071bb /]#
[root@609b0ec071bb /]# cp -a /etc/ssh /tmp/mpoint/
[root@609b0ec071bb /]# exit

- Delete the temporary container:
[root@ip-172-31-6-136 ~]# docker rm wip

- Start the "production" container with /oel8_etc_ssh/ssh on the host mounted to /etc/ssh in the container (running headless or detached mode with "-d"):
  NOTE: mapped port 2222/tcp on the host to the SSH port in the container. This is handy to access the container remotely from outside the host.
[root@ip-172-31-6-136 ~]# docker run --name oel87c -it -p 2222:22 -v /oel8_etc_ssh/ssh:/etc/ssh -d 18a22840eed9

- Attach to the console of the container:
[root@ip-172-31-6-136 ~]# docker attach d99789174764

- Create the ssh host keys (one-time task since they are stored persistently on the underlying host):
[root@d99789174764 /]# ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ''
[root@d99789174764 /]# ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -N ''
[root@d99789174764 /]# ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''

- Create the system log file (otherwise SSHD will be unable to authenticate login attempts):
[root@d99789174764 /]# touch /var/log/messages

- Start SSHD service:
[root@d99789174764 /]# /usr/sbin/sshd &

- Add a user to the container to login remotely:
[root@d99789174764 /]# useradd user1
[root@d99789174764 /]# passwd user1

- Optionally install sudo and configure /etc/sudoers (via visudo) to allow "user1" to switch to the root user:
[root@d99789174764 /]# yum install -y sudo
[root@d99789174764 /]# visudo
[root@d99789174764 /]# grep user1 /etc/sudoers
user1   ALL=(ALL)       NOPASSWD: ALL
[root@d99789174764 /]#

- Detach from the container and keep it running: press Ctrl-P followed by Ctrl-Q.
  NOTE: if you mistakenly type exit in the container (which causes it to stop), start it again with "docker start <container-id>" on the host

- It is now possible to access the container remotely from outside the host. SSH to the container using the host's IP address on port 2222/tcp e.g.
  From a Windows/Linux system (you can also use PuTTY): ssh user1@<host-ip> -p 2222

------------------- END -------------------

- Sample commands to retrieve the IP of the container from the underlying host:
[root@ip-172-31-6-136 ~]# docker ps    (command to get the container id)
[root@ip-172-31-6-136 ~]# docker container inspect -f '{{ .NetworkSettings.IPAddress }}' d99789174764
172.17.0.2

- Note: mounting the whole /etc and /var/log to directories on the underlying host should help "persist" all the relevant configuration of the container (see the sketch below).
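
- A sketch of that fuller persistence approach (the host directories /oel8_etc and /oel8_varlog and the container name oel87p are illustrative, and /etc would need to be seeded from a temporary container first, the same way /etc/ssh was copied out above):
[root@ip-172-31-6-136 ~]# mkdir /oel8_etc /oel8_varlog
[root@ip-172-31-6-136 ~]# docker run --name oel87p -it -p 2222:22 -v /oel8_etc:/etc -v /oel8_varlog:/var/log -d 18a22840eed9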

References:
https://learn.microsoft.com/en-us/windows/wsl/install-manual
https://www.sanner.io/posts/2022/03/create-a-custom-linux-setup-for-wsl2/
https://learn.microsoft.com/en-us/windows/wsl/use-custom-distro
https://hub.docker.com/_/oraclelinux?tab=tags
https://github.com/oracle/container-images/pkgs/container/oraclelinux
https://yum.oracle.com/oracle-linux-isos.html
https://social.technet.microsoft.com/Forums/en-US/e655c45f-3a74-4acb-8df1-3607e4fe6b49/issue-with-installing-linux-subsystem?forum=winserverhyperv
https://community.oracle.com/mosc/discussion/3949381/yum-update-error-rhn-plugin-network-error-connection-reset-by-peer