Comment on Performance comparison between various Hypervisors
Voroxpete@sh.itjust.works 2 days ago
Unfortunately, I'm not very familiar with Cloudstack or Proxmox; we've always worked with KVM using virt-manager and Cockpit.
Our usual method is to remove the default hard drive, reattach the qcow file as a SCSI device, and then modify the SCSI controller that gets created to enable queuing. I'm sure at some point I should learn to do all this through the command line, but it's never really been necessary.
The relevant sections look like this in one of our prod VMs:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/XXX.qcow2' index='1'/>
  <backingStore/>
  <target dev='sdb' bus='scsi'/>
  <alias name='scsi0-0-0-1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
  <driver queues='6'/>
  <alias name='scsi0'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</controller>
The driver queues='X' line is the part you have to add. The number should equal the number of cores assigned to the VM.
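If you do want to make the same change from the command line, a rough sketch with virsh would look something like this (the domain name "myvm" is just a placeholder, not one of our machines):

  # Check the current vCPU count and SCSI controller, then edit the XML in place
  virsh dumpxml myvm | grep -E "<vcpu|virtio-scsi"
  virsh edit myvm

  # Inside the editor, queues should match the <vcpu> element, e.g.:
  #   <vcpu placement='static'>6</vcpu>
  #   <controller type='scsi' index='0' model='virtio-scsi'>
  #     <driver queues='6'/>
  #   </controller>

virsh edit validates the XML on save and the change takes effect on the next full shutdown/start of the guest.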
See the following for more on tuning KVM:
buedi@feddit.org 1 day ago
Thank you very much. I spent another two hours yesterday reading up on that and creating other VMs and templates, but I have not yet been able to attach the boot disk to a SCSI controller and make it boot. I would really have liked to see whether this change brings it on par with Proxmox (I now wonder what the Proxmox defaults are), but even then it would still be much slower than Hyper-V or XCP-ng. If I find time, I will look into this again.
Voroxpete@sh.itjust.works 1 day ago
I’d suggest maybe testing with a plain Debian or Fedora install. Just enable KVM and install virt-manager, and create the environment that way.
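In case it helps, a rough sketch of that setup (package names are from memory and may vary slightly between releases):

  # Debian/Ubuntu
  sudo apt install qemu-system libvirt-daemon-system libvirt-clients virt-manager

  # Fedora
  sudo dnf install @virtualization

  # Start libvirt and let your user manage VMs without root
  sudo systemctl enable --now libvirtd
  sudo usermod -aG libvirt $USER

  # Confirm hardware virtualization / KVM is usable
  virt-host-validate

After that, virt-manager gives you the same disk and controller XML I posted above via its "XML" tab, so you can test the virtio-scsi queues change without any other tooling in the way.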