Intel® Iris® Xe MAX Graphics with Linux*

Laptops that provide both a discrete Intel® Iris® Xe MAX graphics processor and an integrated Intel® Iris® Xe Graphics processor are now available to purchase. You can find more information about those systems here.

While support for the Intel Iris Xe Graphics processor has already been integrated into Linux* and into Linux-based distributions such as Ubuntu 20.04.1, enabling work for the Intel Iris Xe MAX graphics processor in Linux* is ongoing.

In the meantime, we are excited to provide early access to that software and instructions to configure an Ubuntu 20.04.1 system so you can take advantage of both graphics adapters today. By following these instructions, your system’s display will be using the Intel Iris Xe Graphics processor and the Intel Iris Xe MAX graphics processor can then be used for 3D, media, and compute processing.

Overview

Dual GPU

To use both the Intel Iris Xe graphics and Intel Iris Xe MAX graphics processors at the same time currently requires two different kernels. To isolate the two required kernel versions, virtualization will be used.

The virtual machine (VM) host will have direct control of the display through the Intel Iris Xe graphics processor (via the kernel provided by Ubuntu). The VM host will also be configured to use PCI passthrough to provide the Intel Iris Xe MAX graphics processor to the VM guest. The VM guest will be running the custom Linux kernel.

Once the host is configured, instructions are provided to configure and use a VM guest for compute and media offload. Using the standard QEMU virtual machine manager, you can set up a local compute node using the Intel Iris Xe MAX graphics adapter for executing media and compute applications while you continue using the Intel Iris Xe graphics adapter for your display.

Configure the host

These instructions begin with the laptop having the Desktop image for Ubuntu 20.04.1 LTS (or newer) successfully installed and booting. This can be done either as the sole operating system, as a dual-boot environment, or by using external storage. As the specific steps for installing Linux onto a laptop are very platform-specific, we do not provide those instructions here. Please refer to your platform or operating system supplier for information.

Prior to starting this guide, ensure that you have enabled virtualization in your system’s BIOS. Please refer to your platform supplier for information on configuring the BIOS. On some systems, you may also need to enable Secure Boot.

Prior to running the instructions below, activate sudo in your session by running sudo -l. This will allow you to copy/paste the instructions which use sudo without being prompted for your password.

sudo -l

Configure the host to bind vfio-pci to Intel Iris Xe MAX graphics adapter

Binding the Intel Iris Xe MAX graphics adapter to the vfio-pci driver detaches it from the host operating system, freeing it for use by the virtual machine. The PCI device and vendor ID for the Intel Iris Xe MAX graphics adapter is 8086:4905. Edit your system’s /etc/default/grub configuration to append intel_iommu=on vfio-pci.ids=8086:4905. The following shell script will check if those two parameters are set in /etc/default/grub, and set them if not.

if ! grep "intel_iommu=on" /etc/default/grub | grep -q "8086:4905"; then
  sudo sed -i -e \
    's,^GRUB_CMDLINE_LINUX_DEFAULT="\([^"]*\)",GRUB_CMDLINE_LINUX_DEFAULT="\1 intel_iommu=on vfio-pci.ids=8086:4905",g' \
    /etc/default/grub
fi
grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub

The above should display GRUB_CMDLINE_LINUX_DEFAULT including both intel_iommu=on and vfio-pci.ids=8086:4905. Once both parameters are present, regenerate the bootloader configuration so the change takes effect on the next boot:

sudo update-grub

Install the linux-oem-20.04 kernel

The default kernel in Ubuntu 20.04.1 does not include a driver for the Intel Iris Xe graphics adapter. To enable that graphics adapter, you need to configure the system to use the linux-oem-20.04 kernel.

sudo apt update &&
sudo apt install linux-oem-20.04

Reboot

At this point, the system is configured to:

  • Enable Intel Iris Xe Graphics adapter for the host using linux-oem-20.04 kernel.

  • Bind the vfio-pci driver to the Intel Iris Xe MAX graphics adapter so it can be passed through to the VM.

Reboot into the new configuration. The next section provides steps to verify the devices enumerate correctly.

sudo reboot

Identify PCI device bound to the vfio-pci driver

After rebooting, verify the changes worked. The following will output the PCI information as well as the kernel driver bound to those devices:

lspci -nnk | grep -A 3 VGA | grep -E "VGA|driver"

The output of the above command should look similar to the following, listing two devices. One will be bound to the i915 driver and the other to vfio-pci.

00:02.0 VGA compatible controller [0300]: Intel Corporation Device [8086:9a49] (rev 01)
        Kernel driver in use: i915
...
03:00.0 VGA compatible controller [0300]: Intel Corporation Device [8086:4905] (rev 01)
        Kernel driver in use: vfio-pci

Make a note of the address of the PCI device that is bound to the vfio-pci driver, 03:00.0 in this case. This is the PCI address that will be provided to the VM, giving it access to the Intel Iris Xe MAX graphics adapter.
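If you script this setup, the address can be extracted rather than copied by hand. The sketch below is a hypothetical helper, demonstrated against the sample output above; it assumes the `lspci -nnk` layout shown, with the "Kernel driver in use" line following its controller line.

```shell
#!/bin/sh
# Sketch: pull the PCI address of the adapter bound to vfio-pci out of
# `lspci -nnk`-style output. On a real system, use:  lspci -nnk | find_vfio_addr
find_vfio_addr() {
  awk '/VGA compatible controller/ { addr = $1 }
       /Kernel driver in use: vfio-pci/ { print addr; exit }'
}

sample='03:00.0 VGA compatible controller [0300]: Intel Corporation Device [8086:4905] (rev 01)
        Kernel driver in use: vfio-pci'
printf '%s\n' "$sample" | find_vfio_addr   # prints 03:00.0
```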

Configuring the VM guest

At this point the host is configured to allow the Intel Iris Xe MAX graphics adapter to be passed to a VM guest. Next, you will compile a custom kernel to boot inside of that VM guest, with support for the Intel Iris Xe MAX graphics adapter.

On the host, we recommend you continue using the kernel supplied by the operating system vendor.

Prepare kernel for Intel Iris Xe MAX graphics

Install packages necessary for building the Linux kernel from source:

sudo apt install build-essential git gcc bison flex libssl-dev bc cpio \
  openssl lz4

The following will clone the kernel sources from GitHub, configure it, and compile it in the ${HOME}/kernel-xe-max directory. This path is also used later when copying the built kernel to the VM guest.

NOTE: Cloning and building the kernel can take a while, depending on your Internet connection and hardware.

cd ${HOME}
git clone --branch=main --depth=1 \
  https://github.com/intel-gpu/kernel \
  kernel-xe-max
cd kernel-xe-max
cp /boot/config-$(uname -r) .config
make olddefconfig
make -j $(nproc --all) targz-pkg LOCALVERSION="-xe-max"

At the end of the build, a tarball will be ready in ${HOME}/kernel-xe-max:

ls -l ${HOME}/kernel-xe-max/*.gz

Create an Ubuntu virtual machine

The following steps will install the required packages to manage and configure a VM, create an initial disk image, download the Ubuntu 20.04.1 live server image, and boot the VM with that image.

We will be using QEMU directly from the command line, but you could use a graphical configuration tool such as virt-manager as well.

Install packages to manage the VM

sudo apt install qemu-kvm qemu-utils \
  libvirt-daemon-system libvirt-clients \
  bridge-utils \
  virt-manager ovmf gir1.2-spiceclientgtk-3.0

Check that virtualization is enabled

You can now run the kvm-ok utility (provided by the cpu-checker package; install it with sudo apt install cpu-checker if it is missing) to determine whether hardware virtualization is enabled on your system:

sudo kvm-ok

You should see a message similar to:

INFO: /dev/kvm exists
KVM acceleration can be used

If you do not see that, make sure that virtualization is enabled in your system’s BIOS. Please refer to your platform supplier for information on configuring the BIOS. On some systems, you may also need to enable Secure Boot.

Create a disk image file for the VM

The following will create a 50G disk image file for the VM. If you don’t want to place the disk image in /opt, you can place it anywhere you have write access on the host.

sudo qemu-img create -f qcow2 /opt/ubuntu-disk.qcow2 50G
sudo chown $(whoami) /opt/ubuntu-disk.qcow2

Install Ubuntu 20.04.1 LTS image into VM

You can install either Ubuntu Server or Ubuntu Desktop within the VM. Because the graphics display is not required for GPU offloading, you may choose to install the server image to reduce disk usage.

Since you just installed the Desktop image on the host platform, you may wish to re-use the ISO image you already downloaded, instead of downloading the Server image.

If you wish to download a new image, you can do so with the following command:

wget https://releases.ubuntu.com/focal/ubuntu-20.04.1-live-server-amd64.iso

The above will download the file ubuntu-20.04.1-live-server-amd64.iso, which you will pass to QEMU below. Set that value in the BOOT_MEDIA environment variable:

export FILE=./ubuntu-20.04.1-live-server-amd64.iso
export BOOT_MEDIA="-cdrom ${FILE}"
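Optionally, verify the download before using it. The sketch below is a hypothetical helper; the SHA256SUMS file name and URL follow Ubuntu's usual release layout and are assumptions here (older point-release ISOs may have moved to old-releases.ubuntu.com).

```shell
#!/bin/sh
# Hypothetical helper: succeed only when both hashes are non-empty and equal.
verify_iso() {
  recorded="$1"; computed="$2"
  [ -n "$recorded" ] && [ "$recorded" = "$computed" ]
}

# On a real download (assumes Ubuntu's standard SHA256SUMS location):
#   wget -q https://releases.ubuntu.com/focal/SHA256SUMS
#   recorded=$(awk '/ubuntu-20.04.1-live-server-amd64.iso/ {print $1}' SHA256SUMS)
#   computed=$(sha256sum "${FILE}" | awk '{print $1}')
#   verify_iso "$recorded" "$computed" && echo "ISO checksum OK"
```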

Install OS into VM

The following will start the VM with 4G of RAM. The CPU configuration will be copied from the host CPU. The VM will boot the OS image specified in the BOOT_MEDIA variable declared previously.

qemu-system-x86_64 -machine pc \
  -m 4G \
  -cpu host \
  -enable-kvm \
  -drive file=/opt/ubuntu-disk.qcow2 \
  -netdev user,id=net0,hostfwd=tcp::10022-:22 -device virtio-net-pci,netdev=net0 \
  ${BOOT_MEDIA}

A window will open and after several seconds the Ubuntu installer will start.

NOTE: During the package installation, make sure to configure network and install openssh-server when prompted. ssh will be used later to connect to the VM.

If the installation asks how to partition the disk:

  • You do not need to use volume management (deselect LVM)

  • Add a 2G partition mounted to /boot, formatted as ext4. This will allow plenty of space for changing kernels. The default is closer to ~750M.

  • Assign the remaining space to / (approximately 47G)

After the installation has completed, shut down the VM when the installer prompts you to reboot.

Log into the VM guest and configure it

The disk image now includes an installed operating system, so you no longer need to pass the boot media to the VM; start the VM without ${BOOT_MEDIA}.

On the host, launch the VM:

qemu-system-x86_64 \
  -m 4G \
  -cpu host \
  -enable-kvm \
  -drive file=/opt/ubuntu-disk.qcow2 \
  -netdev user,id=net0,hostfwd=tcp::10022-:22 -device virtio-net-pci,netdev=net0

Once the virtual machine starts, it will boot the Ubuntu operating system you just installed.

If you didn’t install openssh-server when you installed the OS into the VM, you will need to do it from the virtual machine window launched when you started qemu-system-x86_64.

In the VM, run the following:

sudo apt-get install -y openssh-server

When the VM is started above, port 10022 on the host is forwarded to port 22 in the guest. The next command will use that local port forward to copy the built kernel from the host system into the guest. You can then ssh into the VM to run the commands to install that kernel.

On the host:

scp -P 10022 \
  ${HOME}/kernel-xe-max/linux-5.4.48-xe-max-x86.tar.gz \
  localhost:.

Now you can ssh into the guest VM to install the kernel:

On the host, connect to the VM:

ssh -p 10022 localhost
sudo -l

OPTIONAL: If you installed the Server OS version, you may want to disable ‘cloud-init’ inside the VM to improve boot times:

In the VM:

echo 'datasource_list: [ None ]' | sudo tee /etc/cloud/cloud.cfg.d/90_dpkg.cfg
sudo apt-get purge cloud-init &&
sudo rm -rf /etc/cloud &&
sudo rm -rf /var/lib/cloud

OPTIONAL: If you installed the Desktop OS version, you may want to disable the graphics display to reduce memory usage and improve performance:

In the VM:

sudo systemctl disable gdm3 &&
sudo systemctl set-default multi-user &&
sudo systemctl stop gdm3

Prior to installing the custom kernel, you need to install the firmware files required by the Intel Iris Xe MAX graphics adapter. The following will download an archive of the latest firmware files and decompress them into /lib/firmware/i915, where the kernel will look for them while booting:

In the VM:

wget -qO - \
  https://repositories.intel.com/graphics/firmware/linux-firmware-dg1_2020.43.tgz |
  sudo tar -C /lib/firmware/i915 -xvz --warning=no-timestamp

You can now install the custom kernel. This is done after the firmware files are installed to make sure that the firmware files are available while the initial ramdisk is created during the kernel installation:

In the VM:

mkdir kernel-xe-max
tar -C kernel-xe-max -xzf linux-5.4.48-xe-max-x86.tar.gz
sudo cp -r kernel-xe-max/lib/modules/5.4.48-xe-max /lib/modules/
sudo /sbin/installkernel \
  5.4.48-xe-max \
  kernel-xe-max/boot/vmlinuz-5.4.48-xe-max \
  kernel-xe-max/boot/System.map-5.4.48-xe-max \
  /boot

Next, modify grub to enable serial output so console output can be seen from the virtual machine manager (VMM). This will allow you to launch the VM later without a virtual display.

The two lines you want to modify are GRUB_CMDLINE_LINUX_DEFAULT and GRUB_TERMINAL. You can use ‘nano’ to edit the file.

In the VM:

sudo nano /etc/default/grub

Set the following values:

GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"
GRUB_TERMINAL=console

Finally, update the bootloader with the additional grub configuration and power off the VM.

In the VM:

sudo update-grub &&
sudo shutdown -h 0

Start a VM with GPU passthrough

Now that the VM is configured to boot the kernel containing support for the Intel Iris Xe MAX graphics adapter, restart QEMU using the disk image created earlier and pass the GPU through to the guest OS.

In the example below, the address 03:00.0 used with the vfio-pci option corresponds to the PCI address for the Intel Iris Xe MAX graphics adapter that we noted from running lspci on the host.

Since this will now be passing physical hardware into the VM, you need to run it as root:

On the host:

sudo qemu-system-x86_64 \
  -m 4G \
  -cpu host \
  -enable-kvm \
  -drive file=/opt/ubuntu-disk.qcow2 \
  -device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.0,x-igd-gms=2,x-igd-opregion=off \
  -netdev user,id=net0,hostfwd=tcp::10022-:22 -device virtio-net-pci,netdev=net0 \
  -smp $(nproc) \
  -vga none \
  -nographic
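If the adapter enumerates at a different address on your system, it can help to parameterize the launch. The sketch below is an assumption-free only in mirroring the command above; build_args is a hypothetical helper that merely assembles the QEMU argument list from a PCI address, and you pass its output to sudo qemu-system-x86_64 yourself.

```shell
#!/bin/sh
# Sketch: assemble the QEMU arguments for GPU passthrough from a PCI address.
# Usage:  sudo qemu-system-x86_64 $(build_args 03:00.0)
build_args() {
  addr="${1:-03:00.0}"
  echo "-m 4G -cpu host -enable-kvm" \
       "-drive file=/opt/ubuntu-disk.qcow2" \
       "-device vfio-pci,host=${addr},id=hostdev0,bus=pci.0,x-igd-gms=2,x-igd-opregion=off" \
       "-netdev user,id=net0,hostfwd=tcp::10022-:22 -device virtio-net-pci,netdev=net0" \
       "-smp $(nproc) -vga none -nographic"
}

build_args 03:00.0
```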

It may take several seconds before you see any output from the VM.

Connect to the VM

As part of launching the VM, the above command forwards connections to port 10022 on the host to port 22 in the guest. This is then used for connecting to the guest:

On the host:

ssh -p 10022 localhost

It is recommended to log in to the guest via SSH to execute the commands below and to run demos requiring terminal output.

Verify the Intel Iris Xe MAX graphics driver is initialized

Use lspci to verify the PCI device was passed through and initialized by the i915 kernel driver:

In the VM:

lspci -nnk | grep VGA -A 3 | grep -E "VGA|driver"

Output should look similar to the following:

00:03.0 VGA compatible controller [0300]: Intel Corporation Device [8086:4905] (rev 01)
        Kernel driver in use: i915

Install user space media and compute packages

The following is taken from the installation guides, and should be executed on the guest VM:

First, activate a sudo session so you are not prompted for a password while copy/pasting instructions to the terminal:

In the VM:

sudo -l

Configure the package repository

In the VM:

wget -qO - https://repositories.intel.com/graphics/intel-graphics.key |
  sudo apt-key add - &&
sudo apt-add-repository \
  'deb [arch=amd64] https://repositories.intel.com/graphics/ubuntu focal main'

Install the compute and media packages

The following will install the latest versions of the OpenCL* runtime, Level Zero, Media SDK, and Media driver:

In the VM:

sudo apt update &&
sudo apt install \
  intel-opencl-icd \
  intel-level-zero-gpu level-zero \
  intel-media-va-driver-non-free libigfxcmrt7 libmfx1

Configure permissions to access GPU

In order to access GPU capabilities, a user needs the correct permissions on the system. The following will add the user to the render group, which owns /dev/dri/render*:

In the VM:

sudo gpasswd -a ${USER} render
newgrp render
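Note that newgrp only affects the current shell; other sessions pick up the new group on the next login. A quick membership check can save confusion — the sketch below uses has_group, a hypothetical helper that operates on the output of id -nG.

```shell
#!/bin/sh
# Hypothetical helper: succeed only if the space-separated group list in $1
# contains exactly the group named in $2.
has_group() {
  printf '%s\n' "$1" | tr ' ' '\n' | grep -qx "$2"
}

# In the VM, after re-logging in:
if has_group "$(id -nG)" render; then
  echo "user is in the render group"
fi
```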

Tests

Verify the kernel is the version you built

This should report ‘Linux 5.4.48-xe-max’.

In the VM:

uname -sr

Verify the graphics platform name

Verify ‘platform: DG1’ is listed in i915_capabilities:

In the VM:

sudo grep "platform:" /sys/kernel/debug/dri/0/i915_capabilities

Verify OpenCL

To verify that the intel-opencl packages have installed correctly, you can use the clinfo program:

In the VM:

sudo apt install clinfo
clinfo
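Beyond eyeballing clinfo's lengthy output, a quick pass/fail check can be scripted. A sketch, assuming clinfo's usual "Number of devices" lines; it is demonstrated against a sample line here — in the VM, pipe clinfo itself in instead.

```shell
#!/bin/sh
# Sketch: sum the "Number of devices" counts across platforms in clinfo-style
# output. On the VM:  clinfo | count_devices
count_devices() {
  awk '/Number of devices/ { n += $NF } END { print n + 0 }'
}

printf 'Number of devices                                 1\n' | count_devices   # prints 1
```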

Verify media

To verify that the media driver has installed correctly, you can use the vainfo program:

In the VM:

sudo apt install vainfo
vainfo

Additional guides

We have the following guides you can follow to demonstrate how to use the Intel Iris Xe MAX graphics adapter within the VM environment:

Feedback on this page?

If you have feedback on this page, please visit the community documentation project on GitHub and file an issue.