I just can't figure out how to create a VM in ACS with SCSI controllers. I am able to add a SCSI controller to the VM, but the Boot Disk is always connected to the SATA controller. I tried to follow this thread (…apache.org/…/op2fvgpcfcbd5r434g16f5rw8y83ng8k) and create a Template. I am sure I am doing something wrong, but I just cannot figure out what. :-(
Comment on Performance comparison between various Hypervisors
buedi@feddit.org 3 days ago
That's a very good question. The test system is running Apache Cloudstack with KVM at the moment, and I have yet to figure out how to see which disk / controller mode the VM is using. I will dig a bit to see if I can find out. If it turns out not to be SCSI, it would be interesting to re-run the tests.
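My assumption is that running virsh dumpxml <vm-name> on the KVM host would show it, since each disk's <target> element in the generated domain XML names the bus; I have not confirmed that on the Cloudstack host yet. The line to look for would be something like this (device name is a placeholder):

    <target dev='sda' bus='sata'/>

bus='sata' would mean the boot disk sits on the SATA controller, while bus='scsi' (together with a virtio-scsi controller) or bus='virtio' would mean virtio-scsi or virtio-blk respectively.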
snowfalldreamland@lemmy.ml 2 days ago
I am only messing with KVM/libvirt for fun, so I have no professional experience with this, but wouldn't you want to use virtio disks for best performance?
buedi@feddit.org 2 days ago
I do not work professionally in that field either. To answer your question: of course I would use whatever gives the best performance; why it is set up like this is beyond my knowledge. What you basically do in Apache Cloudstack when you do not have a Template yet is upload an ISO, and during that process you have to tell ACS what it is (Windows Server 2022, Ubuntu 24, etc.). From my understanding, the pre-defined OS types you can select and "attach" to an ISO seem to include the specifics used when you create a new Instance (VM) in ACS, and that appears to set the controller to SATA. Why? I do not know. I tried picking another OS type (I think it was called Windows SCSI), but the result was still a VM with its disks bound to the SATA controller, despite the VM having an additional SCSI controller that was not attached to anything.
This can probably be fixed on the command line, but I was not able to figure it out yesterday when I had a bit of spare time to tinker with it again. I would like to see if it makes a big difference for that specific workload.
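If I understand the libvirt side correctly, the change itself would only be the disk's bus plus a matching controller in the generated domain XML, roughly like this (a sketch; path and device name are placeholders, and I have not verified whether ACS keeps such an edit across restarts). The part I have not solved is getting ACS to generate it in the first place:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/path/to/root-disk.qcow2'/>
      <!-- bus='scsi' instead of 'sata' hangs the disk off the SCSI controller -->
      <target dev='sda' bus='scsi'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'/>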
Voroxpete@sh.itjust.works 2 days ago
Unfortunately I’m not very familiar with Cloudstack or Proxmox; we’ve always worked with KVM using virt-manager and Cockpit.
Our usual method is to remove the default hard drive, reattach the qcow file as a SCSI device, and then we modify the SCSI controller that gets created to enable queuing. I’m sure at some point I should learn to do all this through the command line, but it’s never really been relevant to do so.
The relevant sections look like this in one of our prod VMs:
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/XXX.qcow2' index='1'/>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver queues='6'/>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </controller>
The driver queues='X' line is the part you have to add; the number should equal the number of cores assigned to the VM. See the following for more on tuning KVM:
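As a concrete illustration of the queues rule, and assuming the VM above has six vCPUs assigned, the vCPU count and the queue count would line up like this (the two elements live in different parts of the domain XML and are shown side by side only for illustration):

    <vcpu placement='static'>6</vcpu>
    <!-- ... under <devices> ... -->
    <controller type='scsi' index='0' model='virtio-scsi'>
      <!-- one queue per vCPU -->
      <driver queues='6'/>
    </controller>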
buedi@feddit.org 2 days ago
Thank you very much. I spent another two hours yesterday reading up on that and creating other VMs and Templates, but I have not yet been able to attach the boot disk to a SCSI controller and make it boot. I would really like to see whether this change brings it on par with Proxmox (I now wonder what the Proxmox defaults are), but even then it would still be much slower than with Hyper-V or XCP-ng. If I find time, I will look into this again.
Voroxpete@sh.itjust.works 2 days ago
I’d suggest maybe testing with a plain Debian or Fedora install. Just enable KVM and install virt-manager, and create the environment that way.