This is a write-up on how I did my setup.


CPU: Intel i7-5960X
Mobo: Asus Rampage V Extreme
RAM: 64GB Micron 2400MHz
GPU: 4x Titan X (identical)

OS: Ubuntu 16.04

The biggest problem I experienced setting this up was that I have four identical graphics cards, which meant I couldn't use pci-stub to keep the graphics driver from binding the cards I was going to use for my VM.

However, I could fix this by adding an override to the PCI device on boot, which forced it to accept binding only from a driver with a certain name, in this case vfio-pci.
This has to happen before the graphics driver tries to load the card, or it won't work, especially since the NVIDIA driver doesn't support unbinding a GPU once it has been bound.

This is not a howto, so following it blindly will probably not work for you.
I recommend not trying to replicate this unless you have some intermediate Linux knowledge, or you will probably spend days getting it to work. (^_^)

First I went into the BIOS and enabled the VT-x and VT-d flags on the CPU, as they are required for virtualization and IO passthrough.

Make sure Linux loads the necessary modules.

nano -w /etc/modules
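The original module list isn't reproduced here; for a vfio passthrough setup on Ubuntu 16.04 it would be the vfio stack, something like this sketch (assumed, not taken from the original file):

```
# /etc/modules -- vfio stack loaded at boot (assumed module list)
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```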


Set boot parameters.

nano -w /etc/default/grub
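The actual kernel parameters aren't shown; on an Intel board the key addition is intel_iommu=on, so the relevant line would look roughly like this (a sketch, not the original file):

```
# /etc/default/grub -- enable the IOMMU on an Intel CPU (assumed parameters)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on"
```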


Update grub.

update-grub


Wrote a new systemd service to run before the NVIDIA driver.

nano -w /etc/systemd/system/gpu.service

[Unit]
Description=Power-off gpu
After=fsck-root.service fsck@.service
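Only part of the unit file is preserved in this write-up; a unit that systemd can actually enable also needs [Service] and [Install] sections. A minimal sketch, assuming the override script (whose path is truncated below) was saved as /root/vfio-override.sh (hypothetical name):

```
[Unit]
Description=Power-off gpu
DefaultDependencies=no
After=fsck-root.service fsck@.service

[Service]
Type=oneshot
# Hypothetical script name; the original path is not preserved
ExecStart=/root/vfio-override.sh

[Install]
WantedBy=sysinit.target
```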

Enable the service.

systemctl enable gpu

Create the override script.

nano -w /root/

#!/bin/bash
# Force the VM cards to accept binding only from vfio-pci
echo "vfio-pci" | tee /sys/bus/pci/devices/0000:04:00.0/driver_override
echo "vfio-pci" | tee /sys/bus/pci/devices/0000:04:00.1/driver_override
modprobe vfio-pci
echo "gpu vfio override loaded" > /dev/kmsg

Make the script executable.

chmod 755 /root/

The PCI ID can be found by running the following command.

lspci -nn | grep NVIDIA
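The ID in square brackets at the end of each lspci -nn line is the vendor:device pair that vfio-pci's new_id interface works with. A quick way to pull it out, sketched on an illustrative sample line (the 10de:17c2 ID should be the GM200 Titan X's, but verify against your own lspci output):

```shell
# Pull the [vendor:device] pair out of an lspci -nn line.
# The sample line below is illustrative, not captured output.
line='04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM200 [GeForce GTX TITAN X] [10de:17c2] (rev a1)'
# The last [....:....] bracket group on the line is the vendor:device ID
ids=$(echo "$line" | grep -o '\[....:....\]' | tail -1 | tr -d '[]')
echo "$ids"    # prints: 10de:17c2
```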

Create a file containing the PCI IDs for the vfio-pci driver.

nano -w /etc/vfio-pci1.cfg
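The file's contents aren't shown, but given how the start script below reads it (each line is used as a /sys/bus/pci/devices/ entry), it would hold one full PCI address per line, something like this assumed sketch:

```
# devices to hand over to vfio-pci, one per line
0000:04:00.0
0000:04:00.1
```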


Then install qemu.

apt-get install qemu

Create the system disk.

qemu-img create -f qcow2 ~/win.img 50G

Create the VM start script.

nano -w ~/

#!/bin/bash
if [ "$EUID" -ne 0 ]; then
    echo "Please run as root"
    exit 1
fi

# PCI addresses to hand over to vfio-pci, one per line, # for comments
configfile=/etc/vfio-pci1.cfg

vfiobind() {
    dev=$1
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    # Unbind from whatever driver currently owns the device
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
    touch /tmp/vfio-bound
}

modprobe vfio-pci

# Only bind once per boot, skipping comment lines in the config file
if [ ! -f /tmp/vfio-bound ]; then
    cat $configfile | while read line; do
        echo $line | grep ^# >/dev/null 2>&1 && continue
        vfiobind $line
    done
fi
/usr/bin/qemu-system-x86_64 -enable-kvm -m 16384 -cpu host,kvm=off \
-smp 6,sockets=1,cores=6,threads=1 \
-machine q35,accel=kvm \
-device qxl \
-usb \
-device usb-mouse \
-device usb-kbd \
-soundhw hda \
-bios /usr/share/seabios/bios.bin -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=04:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=04:00.1,bus=root.1,addr=00.1 \
-device virtio-blk-pci,scsi=off,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=0 \
-drive file=/home/syneic/win.img,if=none,id=drive-virtio-disk0,format=qcow2,media=disk \
-device virtio-blk-pci,scsi=off,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1,bootindex=1 \
-drive file=/home/syneic/ssd2/steamlib.img,if=none,id=drive-virtio-disk1,format=qcow2,media=disk \
-boot once=d \
-usb -device usb-host,hostbus=3,hostaddr=15 \
-rtc base=localtime,driftfix=slew

exit 0

# Everything below exit 0 is never executed; these are leftover option
# fragments kept for reference.

# ISO drives used for the initial Windows install:
#-drive file=/home/syneic/Downloads/windows.iso,id=isocd2,if=none \
#-device ide-cd,bus=ide.1,drive=isocd2 \
#-drive file=/home/syneic/Downloads/virtio-win-0.1.102.iso,id=isocd3,if=none \
#-device ide-cd,bus=ide.2,drive=isocd3 \

# vfio-pci entries at alternate guest addresses:
#-device vfio-pci,host=04:00.0,bus=root.1,addr=00.2 \
#-device vfio-pci,host=04:00.1,bus=root.1,addr=00.3 \

# Tap networking:
#-netdev tap,id=user.0 \
#-device virtio-net-pci,netdev=user.0 \

Make it executable.

chmod 755 ~/

Starting the VM.


And then it booted up on the screen I had connected to the VM's GPU.
Installing the NVIDIA driver in the VM also went without issues, thanks to "kvm=off" and "-cpu host" in the start script.