Posts

ESXi update problem 7.0 to 7.0 U1: missing bootbank

After some unsuccessful attempts to update from 7.0 to U1, I created a case with VMware to show them that I end up with a completely unusable ESXi installation after the update to U1 (build 17119627). The case took a while, but finally there is a workaround for my specific setup. Actually, I don't think I'm that specific, as I'm just running ProLiant Gen10 and vSAN in a 10-node cluster. The problem seems to be the software FCoE configuration, which I'm not even using, so it's easy to just remove it from the configstore prior to the update:

configstorecli config current delete -c esx -g storage_fcoe -k fcoe_activation_nic_policies --all

Then reboot (important) and update to 7.0 U1 as usual. There should be a patch out soon that hopefully addresses this, but meanwhile you can try it this way. Cheers!
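If you want to check beforehand whether software FCoE is configured anywhere in the cluster, a small PowerCLI check against the esxcli fcoe namespace can help. This is just a minimal sketch, assuming an existing Connect-VIServer session; the cluster name is a placeholder and the output is only a quick indicator, not VMware's official pre-check.

# Sketch: count FCoE-capable NICs and activated software FCoE adapters per host
# Assumes an existing Connect-VIServer session; "MyCluster" is a placeholder name
Get-Cluster "MyCluster" | Get-VMHost | ForEach-Object {
    $esxcli = Get-EsxCli -VMHost $_ -V2
    $fcoeNics     = $esxcli.fcoe.nic.list.Invoke()      # FCoE-capable NICs
    $fcoeAdapters = $esxcli.fcoe.adapter.list.Invoke()  # activated software FCoE adapters
    [PSCustomObject]@{
        Host         = $_.Name
        FcoeNics     = ($fcoeNics | Measure-Object).Count
        FcoeAdapters = ($fcoeAdapters | Measure-Object).Count
    }
}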

Getting info on vSphere First Class Disks (FCDs/IVDs)

Yesterday I was looking into a problem I had with First Class Disk objects in vCenter, which we are using via CNS in Kubernetes. There is a GUI in vCenter that lists all PVCs and related information, but it does not show the physical path, especially on a vSAN datastore. Here is a small function I wrote that gives you exactly that. To learn more about FCDs/IVDs there is a great post: https://cormachogan.com/2018/11/21/a-primer-on-first-class-disks-improved-virtual-disks/
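The function itself is not included in this excerpt, but a minimal PowerCLI sketch of the same idea could look like the one below. It assumes the VMware.VimAutomation.Storage module with the Get-VDisk cmdlet; property names such as Filename can differ slightly between PowerCLI versions, so treat it as a starting point rather than the exact function from the post.

# Sketch: list First Class Disks (FCDs/IVDs) with their backing path on the datastore
# Assumes an existing Connect-VIServer session and the VMware.VimAutomation.Storage module
function Get-FcdInfo {
    Get-VDisk | ForEach-Object {
        [PSCustomObject]@{
            Name       = $_.Name            # FCD name as registered in vCenter
            CapacityGB = $_.CapacityGB
            Datastore  = $_.Datastore.Name
            Path       = $_.Filename        # backing VMDK path, e.g. on a vSAN datastore
        }
    }
}

Get-FcdInfo | Format-Table -AutoSize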

HPE Power Profile Modes explained and best practices for HPE Servers

Today I discovered a VM that was running slower on one host than on the others. This was due to misconfigured Power Profiles: having them set to "Balanced Power and Performance" means that only C-states are exposed to the hypervisor, which leaves it with less control over CPU clock speeds. The annoying part is that there is no ready-made option for the right setting; you have to select the Custom profile and set Power Regulator to "OS Control Mode", which exposes P-states to the hypervisor. The nice thing is that this is also available via iLO, at least on Gen9 and Gen10, but you have to reboot anyway for the changes to take effect. With both ACPI state types available to the hypervisor, you can now either select High Performance, which limits the ability to leverage turbo boost but gives consistent performance, or, even better, select "Balanced Performance", which unlocks additional turbo bins. More about this is perfectly written down here.
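Once the BIOS/iLO side is set to OS Control Mode, the matching ESXi power policy can also be checked and changed from PowerCLI. The following is only a sketch, assuming an existing vCenter connection and the usual vSphere API power policy keys (1 = High Performance, 2 = Balanced, 3 = Low Power, 4 = Custom); verify the keys in your environment before changing anything.

# Sketch: report the current ESXi power policy per host
# "MyCluster" is a placeholder; ShortName is "static" for High Performance, "dynamic" for Balanced
Get-Cluster "MyCluster" | Get-VMHost | ForEach-Object {
    [PSCustomObject]@{
        Host   = $_.Name
        Policy = $_.ExtensionData.Config.PowerSystemInfo.CurrentPolicy.ShortName
    }
}

# To change the policy on a single host (assumed key mapping: 1 = High Performance, 2 = Balanced)
$vmhost      = Get-VMHost "YourHostnameHere"
$powerSystem = Get-View $vmhost.ExtensionData.ConfigManager.PowerSystem
$powerSystem.ConfigurePowerPolicy(2)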

"Really" enabling promiscous mode for vSphere VMs (VMWare KB 59235)

We have hit this situation twice now where just enabling promiscuous mode on a dvSwitch port group was not sufficient for the application. There is one more setting that needs to be enabled as well, especially when using more than one uplink (which you should have for redundancy), except when using LACP. This is documented in the following KB: Duplicate Multicast or Broadcast Packets are Received by a Virtual Machine When the Interface is Operating in Promiscuous Mode (59235). You can set it via PowerCLI to save the hassle of SSH-ing to all the affected hosts:

$options = @{option="/Net/ReversePathFwdCheckPromisc"; intvalue=1}
$esxcli = Get-VMHost "YourHostnameHere" | Get-EsxCli -V2
$esxcli.system.settings.advanced.set.Invoke($options)
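To roll this out across a whole cluster, the same host advanced setting can also be handled with Get-AdvancedSetting / Set-AdvancedSetting instead of esxcli. A small sketch, assuming a placeholder cluster name and an existing vCenter connection:

# Sketch: set Net.ReversePathFwdCheckPromisc = 1 on every host in a cluster that doesn't have it yet
# "MyCluster" is a placeholder; add -WhatIf to Set-AdvancedSetting to preview the change first
Get-Cluster "MyCluster" | Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name "Net.ReversePathFwdCheckPromisc" |
        Where-Object { $_.Value -ne 1 } |
        Set-AdvancedSetting -Value 1 -Confirm:$false
}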

HPE Nimble Storage AF60 on Speed!

I am just finishing the upgrade of a pair of Nimble AF60 all-flash arrays to run with an SCM module for even better performance, and I wanted to share the installation steps and some first impressions. Adding SCM is currently only possible on AF60 and AF80 systems and requires a minimum software version of 5.2. There is a good post from Stephen that details what it's all about: https://community.hpe.com/t5/around-the-storage-block/accelerate-transactional-workloads-with-storage-class-memory/ba-p/7090551 The SCM modules are 1.5 TB Intel Optane cards in a PCIe x4 form factor, so you need at least one free slot in your AF to get things going. The recommendation is also to use slot 2 for the SCM module. The installation process, in rough overview, is as follows:
- update to 5.2 if you haven't done so yet
- halt the standby controller
- unplug the cables from the halted controller and remove it from the chassis
- insert the SCM card into the controller
- plug the controller and cables back in and wait for it to be booted
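Before touching any hardware, the 5.2 minimum software version can be double-checked from PowerShell. This is only a sketch and assumes the HPE Nimble Storage PowerShell Toolkit (HPENimblePowerShellToolkit) with Connect-NSGroup and Get-NSGroup; the group address is a placeholder and the exact version property names differ between toolkit releases, so verify against your installation.

# Sketch: show the Nimble group software version before planning the SCM upgrade
# The group FQDN is a placeholder; -IgnoreServerCertificate is for lab use only
Import-Module HPENimblePowerShellToolkit
Connect-NSGroup -Group "nimble-group.example.local" -Credential (Get-Credential) -IgnoreServerCertificate

# SCM support requires array software 5.2 or later; property names vary by toolkit version
Get-NSGroup | Select-Object name, version*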