Contents

Build a RHEL Image Mode ISO with bootc-image-builder

Build an installable ISO from a RHEL image mode container using bootc-image-builder. The resulting ISO installs a system whose root filesystem is managed as an OCI container image — no traditional RPM package manager required post-install.

Warning
All placeholders in angle brackets — <your-password>, <your-ssh-public-key>, <registry-token>, <your-org>, <server-ip>, etc. — must be replaced with values matching the target environment before running any command or applying any manifest.

Prerequisites

  • A RHEL subscription with access to registry.redhat.io.
  • podman authenticated to registry.redhat.io (podman login registry.redhat.io).
  • A container registry to host the bootc image (e.g. quay.io).
  • sudo / root privileges on the build host.

1. Write the Containerfile

Define the bootc container image by extending the official RHEL bootc base image.

The installed system needs registry credentials to pull future updates via bootc upgrade. Supply the pull secret at build time using a RUN --mount=type=secret instruction: the secret file is mounted transiently at /run/secrets during that RUN step and is not recorded as a layer or in the image metadata — the only copy that ships in the image is the one deliberately placed at /usr/lib/container-auth.json.

The credentials are placed in /usr/lib/container-auth.json (image-owned, never touched by the 3-way /etc merge). A tmpfiles.d entry recreates the symlink at /etc/ostree/auth.json on every boot, so the link is restored even if it is manually deleted: the tmpfiles.d config itself ships under /usr/lib, which the image always controls.

Note

How the 3-way merge works (full article)

When RHEL is running in image mode and a change is made to its filesystem, a new image is created containing those changes. The system configurations that differ from the running image are merged to create a new default state. A 3-way merge incorporates a third version — older than both the current and new image — to minimize merge conflicts.

Filesystems are treated differently in image mode:

  • /usr (image state): contents of the image overwrite local files
  • /etc (local configuration state): contents of the image are merged with a preference for local files
  • /var (local state): image contents are ignored after initial installation

This is why the auth file is placed under /usr/lib (image-owned, always overwritten) rather than /etc (where a manual deletion would be treated as a local change and preserved across upgrades).
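The file-level rule for /etc can be sketched with a small simulation (illustrative file names only; ostree implements its own merge logic):

```shell
# Simulation of the /etc merge rule: a locally modified file wins,
# an untouched file adopts the new image's default. Not ostree code.
printf 'pool old.example.com\n'   > old-default.conf   # /etc file as shipped by the old image
printf 'pool local.example.com\n' > local.conf         # the admin's local edit
printf 'pool new.example.com\n'   > new-default.conf   # same file as shipped by the new image

if cmp -s local.conf old-default.conf; then
  cp new-default.conf merged.conf   # untouched locally: adopt the new default
else
  cp local.conf merged.conf         # locally modified: keep the local file
fi
cat merged.conf   # pool local.example.com
```

Because local.conf differs from the old default, the merge keeps the local edit — which is exactly why deleting /etc/ostree/auth.json by hand would otherwise persist across upgrades.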

Create the tmpfiles.d config alongside the Containerfile:

cat > bootc-auth.conf <<'EOF'
L  /etc/ostree/auth.json  -  -  -  -  /usr/lib/container-auth.json
EOF

Then reference it in the Containerfile:

FROM registry.redhat.io/rhel10/rhel-bootc:10.1

RUN --mount=type=secret,id=bootc-pull-secret,required=true \
    cp /run/secrets/bootc-pull-secret /usr/lib/container-auth.json && \
    chmod 0600 /usr/lib/container-auth.json

COPY bootc-auth.conf /usr/lib/tmpfiles.d/bootc-auth.conf

2. Build and Push the Container Image

Authenticate to both registries before building. registry.redhat.io is required to pull the base image; the target registry (e.g. quay.io) is required to push the result:

podman login registry.redhat.io
podman login quay.io

Generate the pull secret file that will be injected into the image at build time. This file must contain credentials for the registry from which bootc upgrade will pull future updates:

podman login --authfile ./bootc-pull-secret.json \
  quay.io \
  -u <registry-username> \
  --password-stdin <<< "<registry-token>"
Warning
Do not commit bootc-pull-secret.json to version control. Add it to .gitignore.
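A quick structural sanity check can catch an empty or misdirected authfile before it gets baked into the image (this only confirms a quay.io entry exists; it does not validate the credentials):

```shell
# Structure check only: verify the authfile has an entry for the
# registry that `bootc upgrade` will pull from.
if grep -q '"quay.io"' ./bootc-pull-secret.json 2>/dev/null; then
  echo "quay.io entry present in bootc-pull-secret.json"
else
  echo "quay.io entry missing: re-run podman login --authfile"
fi
```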

Build the image, passing the secret file with --secret so it is mounted transiently during the RUN step; the only persisted copy is the one the Containerfile deliberately places at /usr/lib/container-auth.json:

export IMAGE=quay.io/<your-org>/bootc:v0

podman build \
  --secret id=bootc-pull-secret,src=./bootc-pull-secret.json \
  -t ${IMAGE} .
podman push ${IMAGE}

The pushed image is the artifact that bootc-image-builder pulls to compose the ISO.

3. Prepare the Installer Configuration

config.json customizes the first-boot user created by the installer. Create it alongside the Containerfile:

{
  "blueprint": {
    "customizations": {
      "user": [
        {
          "name": "cloud-user",
          "password": "<your-password>",
          "key": "<your-ssh-public-key>",
          "groups": ["wheel"]
        }
      ]
    }
  }
}
Warning
Do not commit config.json to version control if it contains a plaintext password. Use a secrets manager or an environment-substituted template instead.
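One way to keep the password out of the repository is to render config.json from environment variables at build time (a sketch; the variable names are arbitrary):

```shell
# Render config.json from environment variables so only this template,
# not the secret values, is committed. Variable names are examples.
export BOOTC_USER_PASSWORD='<your-password>'
export BOOTC_SSH_KEY='<your-ssh-public-key>'

cat > config.json <<EOF
{
  "blueprint": {
    "customizations": {
      "user": [
        {
          "name": "cloud-user",
          "password": "${BOOTC_USER_PASSWORD}",
          "key": "${BOOTC_SSH_KEY}",
          "groups": ["wheel"]
        }
      ]
    }
  }
}
EOF
```

The unquoted EOF delimiter lets the shell expand the variables into the generated file.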

4. Generate the ISO

Run bootc-image-builder with --type iso. The /var/lib/containers/storage bind-mount lets the builder resolve the image from the host’s local container storage, avoiding an extra registry pull.

Create the output directory before running the builder, otherwise the bind-mount will fail:

mkdir -p output
sudo podman run --rm -it --privileged \
  -v ./output:/output \
  -v ./config.json:/config.json:ro \
  -v /var/lib/containers/storage:/var/lib/containers/storage \
  --pull newer \
  registry.redhat.io/rhel10/bootc-image-builder:10.1 \
  --type iso \
  --config /config.json \
  --output /output \
  quay.io/<your-org>/bootc:v0

The ISO is written to ./output/ once the osbuild pipeline completes.

Verify

ls -lh output/

Expected output:

total 1.2G
-rw-r--r--. 1 root root 1.2G Apr 14 10:00 bootimage.iso

Deploy the ISO

Option A: USB drive (bare-metal)

Write the ISO to a USB drive with dd:

sudo dd if=output/bootimage.iso of=/dev/sdX bs=4M status=progress oflag=sync

Replace /dev/sdX with the actual target block device. Boot the target host from the USB drive to start the installer.

Option B: VMware datastore

Upload the ISO to a vSphere datastore using the govc CLI:

export GOVC_URL=https://<vcenter-fqdn>
export GOVC_USERNAME=<username>
export GOVC_PASSWORD=<password>
export GOVC_INSECURE=1  # set to 0 if using a trusted certificate

govc datastore.upload \
  -ds <datastore-name> \
  output/bootimage.iso \
  iso/bootimage.iso

Once uploaded, attach the ISO to a VM via the vSphere UI or govc:

govc vm.cdrom.insert \
  -vm <vm-name> \
  -ds <datastore-name> \
  iso/bootimage.iso

Boot the VM from the CD-ROM device to launch the installer.

Option C: Kickstart

Kickstart automates the installation and is compatible with ISO, PXE, and USB boot workflows. The key difference from a standard RHEL kickstart is the ostreecontainer directive, which replaces the %packages section — the entire OS is pulled from the container image.

Create a kickstart file:

cat > bootc.ks <<'KSEOF'
%pre
mkdir -p /etc/ostree
cat > /etc/ostree/auth.json << 'EOF'
{
  "auths": {
    "quay.io": { "auth": "<base64-encoded>" }
  }
}
EOF
%end

text
network --bootproto=dhcp --device=link --activate

# Disk partitioning
clearpart --all --initlabel --disklabel=gpt
reqpart --add-boot
part / --grow --fstype xfs

# Pull the OS from the bootc container image — no %packages section needed
ostreecontainer --url quay.io/<your-org>/bootc:v0

firewall --disabled
services --enabled=sshd

# First-boot user
user --name=cloud-user --groups=wheel --plaintext --password=<your-password>
sshkey --username cloud-user "<your-ssh-public-key>"

rootpw --iscrypted locked
reboot
KSEOF
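The <base64-encoded> value in the %pre auth.json is the base64 encoding of <registry-username>:<registry-token> — the same form podman writes into its authfiles. It can be generated with:

```shell
# Replace the placeholder with real credentials before use.
printf '%s' '<registry-username>:<registry-token>' | base64 -w0
```

Use printf rather than echo so no trailing newline is encoded into the credential.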

Method 1 — RHEL Network Install ISO + kickstart URL: Download the RHEL Network Install ISO for the target architecture. Host the kickstart file on any HTTP server, run from the directory containing bootc.ks:

python -m http.server 8080

Boot the target host from the Network Install ISO. At the boot menu, press e to edit the highlighted entry, append the following to the kernel command line, then press Ctrl-X to boot:

inst.ks=http://<server-ip>:8080/bootc.ks

The installer fetches the kickstart over the network, pulls the container image from the registry via ostreecontainer, and completes the installation unattended.

Method 2 — embed kickstart into the ISO with mkksiso: mkksiso (from the lorax package) bakes the kickstart directly into the boot ISO so no HTTP server is required. Install lorax first:

sudo dnf install -y lorax

Embed the kickstart into the Network Install ISO downloaded for Method 1 (the kickstart pulls the OS from the registry via ostreecontainer, so the small boot ISO is all that is needed):

mkksiso --ks bootc.ks ~/Downloads/rhel-10.1-x86_64-boot.iso output/bootimage-ks.iso

Boot from output/bootimage-ks.iso. The kickstart runs automatically with no interactive input — the installer pulls the container image from the registry and completes the installation unattended. The installed system boots directly from the container image layers and can be updated atomically with bootc upgrade.

Tip

To serve the kickstart from an HTTP server instead of embedding it, use --cmdline to bake the kernel argument into the ISO.

mkksiso --cmdline "inst.ks=http://<server-ip>:8080/bootc.ks" \
  ~/Downloads/rhel-10.1-x86_64-boot.iso output/bootimage-ks.iso

Option D: QEMU / KVM (qcow2)

Instead of generating an ISO, bootc-image-builder can produce a qcow2 disk image that can be booted directly with libvirt or QEMU — no installer pass required.

Create the output directory and run the builder with --type qcow2:

mkdir -p output

sudo podman run --rm -it --privileged \
  -v ./output:/output \
  -v ./config.json:/config.json:ro \
  -v /var/lib/containers/storage:/var/lib/containers/storage \
  --pull newer \
  registry.redhat.io/rhel10/bootc-image-builder:10.1 \
  --type qcow2 \
  --config /config.json \
  --output /output \
  quay.io/<your-org>/bootc:v0

The disk image is written to ./output/qcow2/disk.qcow2. Boot it with virt-install:

virt-install \
  --name bootc-vm \
  --memory 4096 \
  --vcpus 2 \
  --disk output/qcow2/disk.qcow2 \
  --import \
  --os-variant rhel10.0

The VM boots directly into the installed system. Updates are applied atomically from the registry with bootc upgrade.

Option E: OpenShift Virtualization (DataVolume import)

OpenShift Virtualization can import the qcow2 image produced in Option D directly into a DataVolume via the Containerized Data Importer (CDI).

Prerequisites:

  • The OpenShift Virtualization operator is installed.
  • oc and virtctl available in $PATH.
  • The qcow2 image is accessible over HTTP or uploaded via virtctl.

Method 1 — upload with virtctl:

virtctl image-upload dv bootc-disk \
  --size=20Gi \
  --image-path=output/qcow2/disk.qcow2 \
  --storage-class=<storage-class> \
  --namespace=<namespace> \
  --insecure

Method 2 — DataVolume manifest with HTTP source:

Serve the qcow2 from any HTTP server first:

python -m http.server 8080 --directory output/qcow2

Then apply the DataVolume:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: bootc-disk
  namespace: <namespace>
spec:
  source:
    http:
      url: "http://<server-ip>:8080/disk.qcow2"
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
    storageClassName: <storage-class>

Once the DataVolume reaches Succeeded phase, create the VirtualMachine:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: bootc-vm
  namespace: <namespace>
spec:
  instancetype:
    kind: VirtualMachineClusterInstancetype
    name: u1.medium
  runStrategy: RerunOnFailure
  template:
    metadata:
      labels:
        app: bootc-vm
    spec:
      architecture: amd64
      domain:
        firmware:
          bootloader:
            efi:
              secureBoot: false
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          dataVolume:
            name: bootc-disk
Save the manifests as datavolume.yaml and virtualmachine.yaml, then apply them, waiting for the import to complete before creating the VM:

oc apply -f datavolume.yaml
oc wait dv bootc-disk --for condition=Ready --timeout=10m -n <namespace>
oc apply -f virtualmachine.yaml

The VM boots from the imported bootc disk. Day-2 updates are applied inside the VM with bootc upgrade as with any other deployment method.

Option F: OpenShift Virtualization — ISO + Kickstart install

Rather than pre-importing a disk image, boot a VM directly from the bootc ISO (generated in step 4) and run the kickstart installer inside OpenShift Virtualization.

Upload the ISO to a DataVolume first:

virtctl image-upload dv bootc-iso \
  --size=5Gi \
  --image-path=output/bootimage.iso \
  --storage-class=<storage-class> \
  --namespace=<namespace> \
  --insecure

Create a blank DataVolume for the target disk:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: bootc-rootdisk
  namespace: <namespace>
spec:
  source:
    blank: {}
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
    storageClassName: <storage-class>

Create the VirtualMachine, attaching the ISO as a CD-ROM and the blank PVC as the install target:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: bootc-installer
  namespace: <namespace>
spec:
  instancetype:
    kind: VirtualMachineClusterInstancetype
    name: u1.medium
  running: true
  template:
    metadata:
      labels:
        app: bootc-installer
    spec:
      architecture: amd64
      domain:
        firmware:
          bootloader:
            efi:
              secureBoot: false
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
              bootOrder: 2
            - name: installiso
              cdrom:
                bus: sata
                readonly: true
              bootOrder: 1
          interfaces:
            - name: default
              masquerade: {}
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          dataVolume:
            name: bootc-rootdisk
        - name: installiso
          dataVolume:
            name: bootc-iso
Save the manifests as blank-dv.yaml and vm-installer.yaml, then apply them:

oc apply -f blank-dv.yaml
oc apply -f vm-installer.yaml

The VM boots from the ISO (bootOrder: 1). If using a kickstart embedded with mkksiso (Option C — Method 2), the installation runs unattended. If using a plain ISO, pass the kickstart URL at the boot menu via the VM console:

inst.ks=http://<server-ip>:8080/bootc.ks

Once installation completes the VM reboots from the disk (bootOrder: 2). Remove the ISO DataVolume and detach the CD-ROM after first boot:

oc delete dv bootc-iso -n <namespace>

5. Customize the Image

Two approaches are available depending on the use case.

Option A: Rebuild from the base image

Modify the Containerfile from scratch when changes affect the base OS, the pull secret, or the tmpfiles.d setup. The secret mount must be carried over in every build:

FROM registry.redhat.io/rhel10/rhel-bootc:10.1

RUN --mount=type=secret,id=bootc-pull-secret,required=true \
    cp /run/secrets/bootc-pull-secret /usr/lib/container-auth.json && \
    chmod 0600 /usr/lib/container-auth.json

COPY bootc-auth.conf /usr/lib/tmpfiles.d/bootc-auth.conf

RUN dnf install -y vim-enhanced && dnf clean all

Option B: Layer on top of an already-built image

Use the previously published image as the FROM base to add packages, configs, or files without touching the base OS or secret setup. This is faster to build and produces a smaller diff layer:

FROM quay.io/<your-org>/bootc:v0

RUN dnf install -y vim htop && dnf clean all
COPY myconfig /etc/myapp/config.yaml
Note
The pull secret and tmpfiles.d symlink are already baked into v0. No need to repeat the --mount=type=secret step unless the credentials themselves need to change.

Tag the new build as a separate version to keep the previous image available as a rollback target:

export IMAGE=quay.io/<your-org>/bootc:v1

podman build \
  --secret id=bootc-pull-secret,src=./bootc-pull-secret.json \
  -t ${IMAGE} .
podman push ${IMAGE}

6. Apply the Update on a Running System

On the installed host, check whether a newer image is available at the current reference without downloading it:

sudo bootc upgrade --check

Pull and stage the update. The running system is not affected until reboot — updates operate in an A/B style, and the staged image is visible as staged in bootc status:

sudo bootc upgrade

To pull and immediately reboot into the new image in one step, use --apply:

sudo bootc upgrade --apply

On systems that support it, --soft-reboot=auto avoids a full hardware reboot when no kernel changes are queued, falling back to a regular reboot otherwise:

sudo bootc upgrade --apply --soft-reboot=auto

Switch to a different tag

To switch to a different tag or image reference entirely, use bootc switch:

sudo bootc switch quay.io/<your-org>/bootc:v1

bootc switch stages the new image. Add --apply to reboot immediately after staging:

sudo bootc switch --apply quay.io/<your-org>/bootc:v1

See the bootc-upgrade man page for the full list of options.

Automatic updates with bootc-fetch-apply-updates

bootc ships a systemd service and companion timer that automate the full upgrade and reboot cycle. The service is documented at bootc-fetch-apply-updates.service.

The service runs a single command:

# /usr/lib/systemd/system/bootc-fetch-apply-updates.service
[Unit]
Description=Apply bootc updates
Documentation=man:bootc(8)
ConditionPathExists=/run/ostree-booted

[Service]
Type=oneshot
ExecStart=/usr/bin/bootc upgrade --apply --quiet

The companion timer controls when it fires:

# /usr/lib/systemd/system/bootc-fetch-apply-updates.timer
[Unit]
Description=Apply bootc updates
Documentation=man:bootc(8)
ConditionPathExists=/run/ostree-booted

[Timer]
OnBootSec=1h
OnUnitInactiveSec=8h
RandomizedDelaySec=2h

[Install]
WantedBy=timers.target
  • OnBootSec=1h (1 hour): first check runs 1 hour after the system boots
  • OnUnitInactiveSec=8h (8 hours): subsequent checks run 8 hours after the last run completed
  • RandomizedDelaySec=2h (up to 2 hours): each trigger is delayed by a random offset, which reduces registry load when many hosts update simultaneously
Tip
RandomizedDelaySec is especially relevant in large fleets. Spreading update requests over a 2-hour window avoids thundering-herd load on the container registry.
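The stagger can be illustrated with a trivial loop (simulation only; systemd draws the delay internally at each activation):

```shell
# Each timer activation adds an independent delay in [0, 7200) seconds,
# so a fleet's update traffic spreads across a 2-hour window.
for host in host1 host2 host3 host4; do
  echo "$host fires at +$((RANDOM % 7200))s after the scheduled trigger"
done
```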

Enable and start the timer:

sudo systemctl enable --now bootc-fetch-apply-updates.timer

To tune the schedule without modifying the upstream unit, create a drop-in override:

sudo systemctl edit bootc-fetch-apply-updates.timer

Example — check every 6 hours with no random delay (suitable for a lab or small fleet):

[Timer]
# Reset inherited values from the base unit before redefining them.
# An empty assignment is required; without it systemd merges additively.
OnBootSec=
OnBootSec=30min

OnUnitInactiveSec=
OnUnitInactiveSec=6h

RandomizedDelaySec=
RandomizedDelaySec=0

To disable automatic updates entirely:

sudo systemctl disable --now bootc-fetch-apply-updates.timer
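A middle ground between fully automatic and disabled is to let the timer download and stage updates but skip the automatic reboot, rebooting during a maintenance window instead. This can be done with a drop-in on the service (a sketch; verify the ExecStart shipped on your host before overriding it):

```ini
# /etc/systemd/system/bootc-fetch-apply-updates.service.d/override.conf
[Service]
# Empty assignment clears the inherited ExecStart before redefining it.
ExecStart=
ExecStart=/usr/bin/bootc upgrade --quiet
```

Run systemctl daemon-reload after creating the drop-in. The staged update then takes effect at the next reboot.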

The three steps performed by the service can also be run independently:

sudo bootc upgrade --check      # check only, no download
sudo bootc upgrade              # download and stage, reboot separately
sudo bootc upgrade --apply      # download, stage, and reboot immediately

Verify and rollback

Check the active and staged image at any time:

sudo bootc status

If the update causes issues, roll back to the previous deployment and reboot:

sudo bootc rollback
sudo systemctl reboot