I have a VM with 64 vCPUs and 512 GB RAM; it's a massive DB. The ESXi 6.5 hosts are HP DL560s with 4 sockets (22 cores each, plus hyper-threading) and 1.5 TB RAM. Following Frank Denneman's book vSphere 6.5 Host Resources Deep Dive, I aimed at keeping the VPD (virtual proximity domain) on as few physical sockets as possible. By disabling the CPU Hot Add feature and setting preferHT to TRUE I improved performance quite a lot, and I expected to see the cores spread across two physical sockets.

However, while VMware KB 2003582 states how to implement the preferHT setting, it does not mention something Frank Denneman does say in his book: "Please remember to adjust the setting in the VM if it has already been powered on once. This setting overrides the preferHT=TRUE setting." I read that only after making my initial changes to the VM, and I have now noticed that its value is 11. This is another thing I don't understand: following which criteria do I adjust that value? According to the Virtual NUMA Controls, dividing 64 by 11 should give me 6 virtual nodes, but I see the VM has 7.

Although performance has improved, I'm not happy with the distribution of the cores. This is the current layout of the CPU resources of the VM:

So, to recap, my questions to the experienced admins are the following:

1. Especially considering that homeNode 3 is not used at all: shall I set it to 44, as that is the maximum number of logical cores in a physical socket? Or shall I disable it and let the system make its own best decision? If so, how do I disable it?

For reference, from the vSphere documentation on NUMA controls:

An ESXi host's automatic NUMA optimizations generally result in good performance. ESXi nevertheless provides three sets of controls for NUMA placement, so that administrators can control the memory and processor placement of a virtual machine:

- NUMA node affinity: when you set this option, NUMA can schedule a virtual machine only on the nodes specified in the affinity.
- CPU affinity: when you set this option, a virtual machine uses only the processors specified in the affinity.
- Memory affinity: when you set this option, the server allocates memory only on the specified nodes.

A virtual machine is still managed by NUMA when you specify NUMA node affinity, but its virtual CPUs can be scheduled only on the nodes specified in the NUMA node affinity. Likewise, memory can be obtained only from the nodes specified in the NUMA node affinity. When you specify CPU or memory affinities, however, a virtual machine ceases to be managed by NUMA; NUMA management of these virtual machines becomes effective again when you remove the CPU and memory affinity constraints.

Manual NUMA placement might also interfere with the ESXi resource management algorithms, which distribute processor resources fairly across a system. For example, if you manually place 10 virtual machines with processor-intensive workloads on one node, and manually place only 2 virtual machines on another node, it is impossible for the system to give all 12 virtual machines equal shares of system resources.

Associate Virtual Machines with Specified NUMA Nodes: when you associate a NUMA node with a virtual machine to specify NUMA node affinity, you constrain the set of NUMA nodes on which ESXi can schedule the virtual machine's virtual CPUs and memory.

Associate Virtual Machines with Specific Processors: you might be able to improve the performance of the applications on a virtual machine by pinning its virtual CPUs to fixed processors. This allows you to prevent the virtual CPUs from migrating across NUMA nodes.

Associate Memory Allocations with Specific NUMA Nodes Using Memory Affinity: you can specify that all future memory allocations on a virtual machine use pages associated with specific NUMA nodes (also known as manual memory affinity).
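For concreteness, the tuning described above (CPU Hot Add off, preferHT on, and an explicit per-node vCPU cap) might look like this among the VM's advanced configuration parameters. This is a sketch, not a recommendation: the value 44 is only an example matching this host's 22-core + HT sockets, and such keys should be changed only while the VM is powered off:

```
vcpu.hotadd = "FALSE"
numa.vcpu.preferHT = "TRUE"
numa.vcpu.maxPerVirtualNode = "44"
```

As for "disabling" the per-node cap and letting the scheduler decide, one common approach is to remove the numa.vcpu.maxPerVirtualNode override entirely rather than set it to a particular value; verify the behavior against the documentation for your ESXi build.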
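The three controls the documentation describes map roughly to VM advanced settings as sketched below. numa.nodeAffinity and sched.cpu.affinity are documented ESXi options; the memory-affinity key name is my best recollection and should be verified (memory affinity is normally configured through the vSphere Client rather than by editing keys directly). All values here are illustrative:

```
numa.nodeAffinity = "0,1"
sched.cpu.affinity = "0-43"
sched.mem.affinity = "0,1"
```

The first line constrains scheduling to NUMA nodes 0 and 1 while leaving the VM under NUMA management; the second pins vCPUs to specific logical processors; the third restricts memory allocation to specific nodes. Note the caveat from the documentation above: setting CPU or memory affinity (unlike NUMA node affinity) removes the VM from NUMA management until the constraint is cleared.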
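As a sanity check on the node arithmetic discussed above, the expected virtual NUMA node count is simply the vCPU count divided by the per-node vCPU cap, rounded up. A minimal sketch, assuming the cap in question is ESXi's numa.vcpu.maxPerVirtualNode setting (the function name is mine):

```python
import math

def virtual_numa_nodes(vcpus: int, max_per_virtual_node: int) -> int:
    """Expected virtual NUMA node count when vCPUs are split into
    groups of at most max_per_virtual_node vCPUs each
    (cf. the ESXi advanced setting numa.vcpu.maxPerVirtualNode)."""
    return math.ceil(vcpus / max_per_virtual_node)

# 64 vCPUs with a cap of 11 -> 6 expected nodes (the VM above reports 7)
print(virtual_numa_nodes(64, 11))   # 6
# With a cap of 44 (all logical CPUs of one 22-core + HT socket) -> 2 nodes
print(virtual_numa_nodes(64, 44))   # 2
```

That the VM reports 7 nodes rather than the 6 this division predicts suggests the scheduler is sizing the nodes by some additional criterion, which is exactly the open question in the post.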