
User-provisioned installation

UPI PXE Config

dnf install -y tftp-server syslinux-tftpboot httpd haproxy
wget https://www.kernel.org/pub/linux/utils/boot/syslinux/syslinux-6.03.tar.gz
wget https://raw.githubusercontent.com/leoaaraujo/openshift_pxe_boot_menu/main/files/bg-ocp.png -O /var/lib/tftpboot/bios/bg-ocp.png
tar xf syslinux-6.03.tar.gz
cp syslinux-6.03/bios/core/pxelinux.0 /var/lib/tftpboot/bios/
cp syslinux-6.03/bios/com32/elflink/ldlinux/ldlinux.c32 /var/lib/tftpboot/bios/
cp syslinux-6.03/bios/com32/lib/libcom32.c32 /var/lib/tftpboot/bios/
cp syslinux-6.03/bios/com32/libutil/libutil.c32 /var/lib/tftpboot/bios/
cp syslinux-6.03/bios/memdisk/memdisk /var/lib/tftpboot/bios/
cp syslinux-6.03/bios/com32/modules/poweroff.c32 /var/lib/tftpboot/bios/
cp syslinux-6.03/bios/com32/modules/pxechn.c32 /var/lib/tftpboot/bios/
cp syslinux-6.03/bios/com32/modules/reboot.c32 /var/lib/tftpboot/bios/
cp syslinux-6.03/bios/com32/menu/vesamenu.c32 /var/lib/tftpboot/bios/
cp syslinux-6.03/efi64/efi/syslinux.efi /var/lib/tftpboot/efi64/
cp syslinux-6.03/efi64/com32/elflink/ldlinux/ldlinux.e64 /var/lib/tftpboot/efi64/
cp syslinux-6.03/efi64/com32/lib/libcom32.c32 /var/lib/tftpboot/efi64/
cp syslinux-6.03/efi64/com32/libutil/libutil.c32 /var/lib/tftpboot/efi64/
cp syslinux-6.03/bios/memdisk/memdisk /var/lib/tftpboot/efi64/
cp syslinux-6.03/efi64/com32/modules/poweroff.c32 /var/lib/tftpboot/efi64/
cp syslinux-6.03/efi64/com32/modules/pxechn.c32 /var/lib/tftpboot/efi64/
cp syslinux-6.03/efi64/com32/modules/reboot.c32 /var/lib/tftpboot/efi64/
cp syslinux-6.
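The BIOS modules staged above are driven by a pxelinux.cfg/default boot menu. A minimal sketch follows, written to a local ./tftpboot directory for illustration (the real path is /var/lib/tftpboot/bios/pxelinux.cfg/default); the RHCOS kernel/initramfs names and the ignition URL are hypothetical placeholders, not values from this setup:

```shell
# Minimal PXE menu for the BIOS modules copied above.
# RHCOS artifact names and the ignition URL are placeholders.
mkdir -p tftpboot/bios/pxelinux.cfg
cat > tftpboot/bios/pxelinux.cfg/default <<'EOF'
UI vesamenu.c32
MENU TITLE OpenShift UPI PXE Boot
MENU BACKGROUND bg-ocp.png
TIMEOUT 300

LABEL master
  MENU LABEL Install master
  KERNEL rhcos-live-kernel-x86_64
  APPEND initrd=rhcos-live-initramfs.x86_64.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://192.168.0.1:8080/master.ign

LABEL reboot
  MENU LABEL Reboot
  KERNEL reboot.c32
EOF
grep -c '^LABEL' tftpboot/bios/pxelinux.cfg/default
```

The menu references vesamenu.c32, reboot.c32, and bg-ocp.png, which is why those files were copied into /var/lib/tftpboot/bios/ in the first place.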

Ceph installation

Requirements

- Red Hat Enterprise Linux 8.4 EUS or later
- Ansible 2.9 or later
- A valid Red Hat subscription with the appropriate entitlements
- Root-level access to all nodes
- An active Red Hat Network (RHN) or service account to access the Red Hat Registry

Create 3 RHEL 8 virtual machines: ceph1, ceph2, ceph3

Note: Installing Ceph on virtual machines is not recommended for production use.

Register the servers to RHN.

Find and attach the Red Hat Ceph Storage pool:

$ subscription-manager list --available --matches 'Red Hat Ceph Storage'
$ subscription-manager attach --pool=POOL_ID

Enable server & extra repos:

$ subscription-manager repos --disable=*
$ subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms
$ subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms
$ subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms
$ subscription-manager repos --enable=ansible-2.
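The POOL_ID passed to the attach command comes from the list output. As a sketch only, here is one way to pull it out with awk, run against a fabricated sample of the list output (the Pool ID value below is made up for illustration):

```shell
# Fabricated sample of 'subscription-manager list --available --matches ...' output
cat > sm-list.txt <<'EOF'
Subscription Name:   Red Hat Ceph Storage
Provides:            Red Hat Ceph Storage
Pool ID:             8a85f98c6d5a7d7a016d5b1f3e9c0f42
Available:           10
EOF
# Extract the Pool ID field; this value feeds 'subscription-manager attach --pool=...'
POOL_ID=$(awk '/^Pool ID:/ {print $3}' sm-list.txt)
echo "$POOL_ID"
```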

HTPasswd oauth provider

Create the htpasswd file (use -c only for the first user; it creates the file and would wipe existing entries if repeated):

$ htpasswd -c -B -b htpasswd admin adminpass
$ htpasswd -B -b htpasswd developer devpass

Create the secret:

$ oc create secret generic htpass-secret --from-file=htpasswd -n openshift-config

Patch the oauth cluster:

$ oc patch oauth/cluster --patch '{"spec":{"identityProviders":[{"name":"htpasswd","mappingMethod":"claim","type":"HTPasswd","htpasswd":{"fileData":{"name":"htpass-secret"}}}]}}' --type=merge

Give the admin user the cluster-admin role:

$ oc adm policy add-cluster-role-to-user cluster-admin admin
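The htpasswd file is just one user:hash line per user; -B selects bcrypt hashes. As a sketch of the format only (using openssl's weaker apr1 hasher so it runs without the httpd-tools package installed; stick with -B bcrypt for the real secret):

```shell
# Build an htpasswd-format file (user:hash per line) without the htpasswd tool.
# openssl's apr1 scheme stands in for bcrypt here purely to show the file format.
printf 'admin:%s\n' "$(openssl passwd -apr1 adminpass)" > htpasswd
printf 'developer:%s\n' "$(openssl passwd -apr1 devpass)" >> htpasswd
cut -d: -f1 htpasswd
```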

NGINX reverse

Use nginx as a reverse proxy in front of multiple clusters:

$ dnf install -y nginx nginx-mod-stream.x86_64

Add in nginx.conf:

include /etc/nginx/passthrough.conf;

passthrough.conf:

stream {
    map $ssl_preread_server_name $internalport {
        hostnames;
        *.apps.sno1.domain 9441;
        *.apps.sno2.domain 9442;
        api.sno1.domain 6441;
        api.sno2.domain 6442;
    }

    upstream sno2_api {
        server 192.168.0.109:6443 max_fails=3 fail_timeout=10s;
    }

    upstream sno2_ingress {
        server 192.168.0.109:443 max_fails=3 fail_timeout=10s;
    }

    upstream sno1_api {
        server 192.168.0.110:6443 max_fails=3 fail_timeout=10s;
    }

    upstream sno1_ingress {
        # 443, not 6443: this upstream carries ingress traffic, not API traffic
        server 192.168.0.110:443 max_fails=3 fail_timeout=10s;
    }

    log_format basic '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr" '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

    access_log /var/log/nginx/access.
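The map above only resolves SNI names to local ports; the stream block also needs server stanzas that preread the SNI on the front listeners, relay the connection to the mapped internal port, and forward from there to the matching upstream. A hedged sketch of the stanzas that typically complete this file, reusing only the names defined above (placed inside the same stream { } block):

```
    # Front listeners: inspect the TLS SNI, relay to the mapped internal port
    server {
        listen 443;
        ssl_preread on;
        proxy_pass 127.0.0.1:$internalport;
    }

    server {
        listen 6443;
        ssl_preread on;
        proxy_pass 127.0.0.1:$internalport;
    }

    # Internal listeners: one per cluster endpoint
    server { listen 9441; proxy_pass sno1_ingress; }
    server { listen 9442; proxy_pass sno2_ingress; }
    server { listen 6441; proxy_pass sno1_api; }
    server { listen 6442; proxy_pass sno2_api; }
```

ssl_preread requires the nginx-mod-stream package installed above; because the TLS session is passed through unterminated, no certificates are needed on the proxy.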

ODF installation

Install OpenShift Data Foundation from the Operator Hub.

Create a StorageSystem using "Connect an external storage platform" with the Red Hat Ceph Storage type.

Download the ceph-external-cluster-details-exporter.py script and run it on your Ceph admin node:

$ python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name testrbd --cephfs-data-pool-name cephfs.testfs.data --rgw-endpoint 10.0.0.n:80 --cephfs-filesystem-name testfs

Sample output:

[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "ceph1=10.0.0.n:6789", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "5dabcb8e-ad19-11ed-a179-005056af8aeb", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "client.
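The exporter's stdout is a JSON array of ConfigMap/Secret entries that gets uploaded in the StorageSystem wizard, so it is worth saving it to a file and sanity-checking it first. A small sketch, using a shortened stand-in for the real output (the actual array is longer, as in the sample above):

```shell
# Stand-in for the exporter output; normally this file comes from
# 'python3 ceph-external-cluster-details-exporter.py ... > external-cluster-details.json'
cat > external-cluster-details.json <<'EOF'
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap",
  "data": {"data": "ceph1=10.0.0.n:6789", "maxMonId": "0", "mapping": "{}"}}]
EOF
# The wizard rejects malformed input, so confirm the file parses as JSON first.
python3 -m json.tool external-cluster-details.json > /dev/null && echo "valid JSON"
```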