HPE offers Advanced Memory Protection (AMP) technologies on HPE ProLiant servers to increase service availability for critical services. In addition to ECC (Error-Correcting Code) and Advanced ECC, these technologies are available for system administrators to configure.
What’s the “sfcb-smx” error?
This error is related to the HPE smx-providers for the Smart Array controller, and any HPE ProLiant Gen8 or Gen9 server is affected if VMware ESXi was deployed on the server using the HPE VMware ESXi 6.5 (Nov 2016) custom ISO image.

NOTE: As a result of this issue, the HPE Insight Management WBEM Providers are restarted by sfcbd when they fail, so the inventory information remains available. However, when a provider crashes and is restarted, sfcbd may not resend all the indication subscriptions, and listening clients will not receive any indications if the Smart Array hardware has issues.

You may also see lines like the following in the vmkernel.log file:

sfcb-smx: wantCoreDump:sfcb-smx signal:11 exitCode:0 coredump:enabled
sfcb-smx: Dumping cartel 684037 (from world 684042) to file /var/core/sfcb-smx-zdump.000
…
sfcb-smx: Userworld(sfcb-smx) coredump complete.

What’s the solution?
There is only one solution: administrators should upgrade their ESXi hosts with the HPE VMware ESXi 6.5 (July 2017) custom ISO image. To work around the impact of not receiving indications after a provider crashes, restart the HPE Insight Management WBEM Providers by performing any of the following procedures:
- Run the command “/etc/init.d/sfcbd-watchdog restart”.
- Use the console to restart the management agents.
- Use vCenter to...
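Until the hosts are upgraded, a quick check for the crash signature above can be scripted. This is a minimal sketch that scans a sample log written to a temporary file; on a real host you would point it at /var/log/vmkernel.log instead.

```shell
# Sketch: scan a copy of vmkernel.log for sfcb-smx coredump entries.
# LOG points at a sample file here; on a real ESXi host use /var/log/vmkernel.log.
LOG=/tmp/vmkernel.sample.log
cat > "$LOG" <<'EOF'
sfcb-smx: wantCoreDump:sfcb-smx signal:11 exitCode:0 coredump:enabled
sfcb-smx: Dumping cartel 684037 (from world 684042) to file /var/core/sfcb-smx-zdump.000
sfcb-smx: Userworld(sfcb-smx) coredump complete.
EOF
# Count signal-11 (segfault) crashes of the sfcb-smx provider.
CRASHES=$(grep -c 'sfcb-smx.*signal:11' "$LOG")
if [ "$CRASHES" -gt 0 ]; then
  echo "sfcb-smx crashed $CRASHES time(s); consider restarting the WBEM providers:"
  echo "  /etc/init.d/sfcbd-watchdog restart"
fi
```

If the counter keeps climbing after a restart of the providers, that confirms the host still needs the July 2017 image.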
What’s the HPE Server Options Compatibility Tool?
The HPE Server Options Compatibility Tool is an online application designed to help datacenter administrators and designers check qualified option details and compatibility for particular platforms, including Rack, Tower, BladeSystem, Synergy, HyperScale, and storage devices.

Using the HPE Server Options Compatibility Tool
On the welcome page (home page), there are two choices:
- My Server: find all components compatible with a server or storage device.
- My Options: find all devices compatible with a selected component.

My Server
This option finds components compatible with server and storage devices. The various parts of this page are:
- Family: includes most server and storage families, such as 3PAR storage and ProLiant servers.
- Generation: after choosing a device family, the generations of that family are shown in this table.
- Server: all members of the generation are listed in this table, and component information is displayed according to this selection.
- Search: a search box for finding a device faster by entering the device name.

After choosing the device, the compatible component information is displayed. The components include:
- Processor
- Memory
- Hard Disk Drives
- Solid State Drives
- Power Supplies
- Networking (compatible network interface cards)
- Storage Controller
- Computational and Graphic Accelerators
- Power Distribution...
ESXi PCI Passthrough
This is a combined hardware and software feature on hypervisors that allows VMs to use PCI functions directly; in a vSphere environment it is known as VMDirectPath I/O. VMDirectPath I/O has some requirements to work properly; please read this KB for more information, as we did:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2142307
There are also some limitations when using VMDirectPath I/O; the following features become unavailable:
- Hot adding and removing of virtual devices
- Suspend and resume
- Record and replay
- Fault tolerance
- High availability
- DRS (limited availability: the virtual machine can be part of a cluster, but cannot migrate across hosts)
- Snapshots

I couldn’t find any other limitation, especially regarding memory size, so why couldn’t we use more than 790 GB to 850 GB of our server memory capacity?! Anyway, let’s review our test scenario!

Our Test Scenario
We have some Sun X4-8 servers with the following specifications:
- CPU: 8 x E7-8895 v2
- Local Disk: 8 x 600 GB SAS disks
- Memory: 48 x 16 GB (768 GB in total)
- PCI Devices: 2 x QLogic 2600 16 Gb, 2 ports (HBA); 2 x Intel 82599EB 10 Gb, 2 ports (network)
- Embedded Devices: 2 x Intel I350 1 Gb
ESXi 6.x U2 has...
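To see what a host can actually hand through to a VM, the PCI device list and the passthrough map can be inspected from the ESXi shell. This is a minimal sketch, guarded so the ESXi-specific commands only run when esxcli is present; the grep filter names are examples based on the adapters in our scenario, not a general rule.

```shell
# Sketch: inspect PCI devices and the passthrough map on an ESXi host.
if command -v esxcli >/dev/null 2>&1; then
  # List PCI devices; the QLogic HBAs and Intel NICs used for passthrough
  # show up here with their vendor/device IDs.
  esxcli hardware pci list | grep -i -E 'qlogic|intel'
  # /etc/vmware/passthru.map controls per-device reset behavior for VMDirectPath.
  cat /etc/vmware/passthru.map
  STATUS=esxi
else
  echo "esxcli not found: run this on an ESXi host"
  STATUS=not-esxi
fi
```

Comparing the IDs reported here against the VM’s .vmx passthrough entries is a quick sanity check before digging into the memory-size behavior.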
Which server brand do you use? HPE, Dell, Fujitsu, or another vendor? It doesn’t matter: you should check your server’s compatibility with the new vSphere version before planning a migration or upgrade. I don’t want to share a server list here, because the list changes over time as new servers are added. You can find supported servers in the VMware Compatibility Guide, which is the best reference for server compatibility. You can also check the OEM web sites:
- HPE: VMware Support Matrix. Just choose your ESXi version on the web page and trust the result!
- Dell: Virtualization Solutions. Choose the VMware ESXi version, then click on “Manual” and download a PDF that contains the list of compatible servers.
- Cisco: UCS Hardware and Software Interoperability Matrix Tool (New). Just select a few items to find the proper result. You can also use the older tool: Hardware and Software Interoperability Matrix Utility Tool.
- Fujitsu: I couldn’t find a tool on their web site, so we have to download a PDF file and find our product. Sample link for FUJITSU Server PRIMERGY: x86 Servers released OS.
- Lenovo (IBM): OS Interoperability Guide.
I know there are more OEM vendors, and may...
VMware has published a list of unsupported and deprecated devices from two vendors:
- Emulex
- QLogic
Deprecated devices may still work and their drivers will still be installed, but those devices are not officially supported on vSphere 6.5. You should upgrade your hardware before upgrading vSphere, but it’s your choice, because your device may still work without any issue. You can find the deprecated and unsupported devices in the table below:

Partner | Driver | Device IDs | Device Name
Emulex | lpfc | 10DF:F0E5:0000:0000 | Emulex LPe1105-M4 Dual-Channel 4Gb/s Fibre Channel HBA
Emulex | lpfc | 10DF:F0E5:0000:0000 | Emulex LPe1150 Single-Channel 4Gb/s Fibre Channel HBA
Emulex | lpfc | 10DF:F0E5:0000:0000 | Emulex LPe1150 4Gb/s Fibre Channel Adapter
Emulex | lpfc | 10DF:F0E5:10DF:F0E5 | Emulex LPe1150 Single-Channel 4Gb/s Fibre Channel HBA
Emulex | lpfc | 10DF:F0E5:10DF:F0E5 | LPe1150-E Emulex LPe1150 Single-Channel 4Gb/s Fibre Channel HBA for Dell and EMC
Emulex | lpfc | 10DF:FE00:0000:0000 | LPe11002 4Gb Fibre Channel Host Adapter
Emulex | lpfc | 10DF:FE00:0000:0000 | NE3008-102
Emulex | lpfc | 10DF:FE00:0000:0000 | NE2000-001
Emulex | lpfc | 10DF:FE00:0000:0000 | Emulex LPe11000 4Gb PCIe Fibre Channel Adapter
Emulex | lpfc | 10DF:FE00:10DF:FE00 | Emulex LPe11002 Dual-Channel 4Gb/s Fibre Channel HBA
Emulex | lpfc | 10DF:FE00:10DF:FE00 | N8403-018
Emulex | lpfc | 10DF:FE00:10DF:FE00 | EMC LPe11000-E
Emulex | lpfc | 10DF:FE00:10DF:FE00 | EMC LPe11002-E
Emulex | lpfc | 10DF:FE00:10DF:FE00 | Emulex LPe11000 Single-Channel 4Gb/s Fibre Channel HBA
Emulex | lpfc | 10DF:FE00:10DF:FE22 | Emulex L1105-M Emulex LPe1105-M4 Dual-Channel 4Gb/s Fibre Channel mezzanine card for Dell PowerEdge
Emulex | lpfc | 10DF:FE00:103c:1708 | 403621-B21 Emulex LPe1105-HP Dual-Channel 4Gb/s Fibre Channel mezzanine card for HP BladeSystem c-Class
Emulex | lpfc | 10DF:FE00:10DF:FE00 | A8002A – FC2142SR Emulex...
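Before an upgrade, a host’s installed adapters can be checked against the deprecated vendor:device pairs from the table. This sketch works on a hypothetical sample listing in `lspci -n` style (vendor:device); on a real host you would feed it the actual device ID output instead.

```shell
# Sketch: flag deprecated Emulex/QLogic vendor:device IDs in a device listing.
# DEVICES is a hypothetical sample in "lspci -n" style output; replace it with
# the real listing from the host being checked.
DEVICES="10df:fe00
8086:10fb
10df:f0e5"
# Vendor:device pairs deprecated in vSphere 6.5 (a subset from the table above).
DEPRECATED="10df:fe00 10df:f0e5"
FOUND=0
for id in $DEPRECATED; do
  if echo "$DEVICES" | grep -qi "$id"; then
    echo "deprecated device present: $id"
    FOUND=$((FOUND + 1))
  fi
done
echo "total deprecated device IDs found: $FOUND"
```

Any hit here means the adapter should be replaced, or at least the risk accepted consciously, before moving to 6.5.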
Introduction
Many large companies around the world use HPE products to build their datacenters. HPE produces servers, storage, network devices, and other infrastructure equipment, and all of them have technical documentation, user guides, and maintenance guides. Anyone who uses this equipment needs access to that documentation. Hewlett Packard Enterprise provides an online library called the “Hewlett Packard Enterprise Information Library”.

What information can I find?
Actually, any information about your devices and software is available in this library. You can find and download release notes, user guides, white papers, and other documentation in PDF or HTML format, and you can also choose different languages for the documents. Documents are categorized, so you can choose your device according to different solutions. After choosing the above options to filter your results, you can find the related documents under the “Information Types / File Types / Languages” section. There is no need to search Google to find documentation for HPE products and devices anymore.
I ran into the following entries in the ESXi vmkernel.log, and I want to explain the issue and share the solution with you:

fip: fcoe_ctlr_vlan_request() is done
fip: host7: FIP VLAN ID unavail. Retry VLAN discovery

First, this is an issue but not a serious problem; you can ignore it, but the log lines will repeat continuously. Now, what is the root cause? The root cause is using CNA adapters as network adapters. As you may know, a CNA (Converged Network Adapter) is an adapter whose ports can be used as network ports or as FCoE ports for storage connectivity. FCoE is enabled by default on all ports, and ESXi tries to discover the FCoE VLAN and targets when it detects the FCoE feature on a port. When ESXi cannot discover an FCoE connection, it forces the driver to try the discovery again.

Now, what is the solution? There are two solutions, and I recommend the second one, because removing a driver or any part of ESXi is not recommended. As the first solution, you can disable the FCoE feature on the ports and remove the driver from ESXi:
Determine which vmnics have FCoE capability: esxcli fcoe nic...
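The port-level part of the fix can be sketched as below. The vmnic name is an example, and the exact `esxcli fcoe` syntax should be verified against your ESXi build; the block is guarded so it only runs the ESXi-specific commands when esxcli exists.

```shell
# Sketch: disable FCoE discovery on a CNA port from the ESXi shell.
# vmnic4 is an example name; take the real one from "esxcli fcoe nic list".
if command -v esxcli >/dev/null 2>&1; then
  # Show which vmnics report FCoE capability.
  esxcli fcoe nic list
  # Disable FCoE on the chosen port so the FIP VLAN discovery retries stop.
  esxcli fcoe nic disable -n vmnic4
  STATUS=done
else
  echo "esxcli not found: run this on an ESXi host"
  STATUS=skipped
fi
```

A reboot is typically needed for the disable to take full effect, and the vmkernel.log should then stop filling up with the FIP retry messages.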
Loop protection
The loop protection feature enables detection of loops on downlink ports, which can be Flex-10 logical ports or physical ports. The feature applies when the Device Control Channel (DCC) protocol is running on the Flex-10 port. If DCC is not available, the feature applies to the physical downlink port. Network loop protection uses two methods to detect loops:
- It periodically injects a special probe frame into the HP Virtual Connect domain and monitors downlink ports for the looped-back probe frame. If this special probe frame is detected on a downlink port, that port is considered to be causing the loop condition.
- It monitors and intercepts common loop detection frames used by other switches. In network environments where the upstream switches send loop detection frames, the HP Virtual Connect interconnects must ensure that any downlink loops do not cause these frames to be sent back to the uplink ports. Even though the probe frames ensure loops are detected, there is a small time window, depending on the probe frame transmission interval, in which the loop detection frames from the external switch might loop through downlink ports and reach uplink ports. By intercepting the external loop detection frames on downlinks, the...
HPE Workload Accelerator is a PCIe NAND flash SSD storage device that can help you improve your services’ performance. The Workload Accelerator sits close to the CPU, so it can read and write data at a higher speed compared to other storage, and the CPU completes its tasks with the lowest latency. It is suitable for big data, VDI, databases, and any process with a heavy workload. The technology is also available for BladeSystem as a mezzanine card and delivers the same experience. The Workload Accelerator is compatible with Windows, Linux, and vSphere; HPE provides drivers for these OSes and the hypervisor, which are available on the HPE Workload Accelerator support page. It is available in capacities from 350 GB to 6.4 TB, so you can choose one according to your service requirements. You can use it as swap space in Linux, put your database files on it in database servers, or keep your performance-sensitive virtual machines on it.
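The Linux swap use case mentioned above is a few commands once the driver exposes the card as a block device. This is a hypothetical example: /dev/fioa is a placeholder device name (pick the real one from lsblk), and the block is guarded so it does nothing when that device is absent.

```shell
# Hypothetical example: use a Workload Accelerator block device as Linux swap.
# /dev/fioa is a placeholder; check lsblk for the device name on your system.
DEV=/dev/fioa
if [ -b "$DEV" ]; then
  mkswap "$DEV"     # write a swap signature to the device
  swapon "$DEV"     # enable it as swap for the running system
  STATUS=enabled
else
  echo "$DEV not present: set DEV to your accelerator device first"
  STATUS=skipped
fi
```

For persistence across reboots, the device would also need an entry in /etc/fstab with the `swap` filesystem type.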
If you want to update a specific firmware on your server, you can download the firmware package from the product support page and install it via ESXi Tech Support Mode or the ESXi shell over SSH. First, download the package: go to your product’s Technical Support / Manuals page and choose your vSphere version as the operating system. Then download the firmware and upload it to your server’s disk for fast access. After uploading the firmware, follow this procedure:
1. Log in as root. (You must be root in order to apply the ROM update.)
2. Place the Smart Component in a temporary directory.
3. Unzip the file CPXXXXXX.zip.
4. Ensure that CPXXXXXX.vmexe is executable by using the command: chmod +x CPXXXXXX.vmexe
5. From the same directory, execute the Smart Component. For example: ./CPXXXXXX.vmexe
6. Follow the directions given by the Smart Component.
7. Log out.
You can also upload the already-unzipped folder before performing the steps above. There are some other ways to update the server’s firmware:
Offline: Place the HP Service Pack for ProLiant on a USB key using the HP USB Key Creator Utility. Place the desired firmware to be updated in the /hp/swpackages directory on the USB key. Boot from...
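The numbered steps above can be sketched as a single shell fragment. CPXXXXXX is kept as the placeholder from the text, standing in for the real component number from the support page, and the block is guarded so it stops cleanly when the package is not present.

```shell
# Sketch of the Smart Component steps above; CPXXXXXX is a placeholder for
# the real component number downloaded from the product support page.
CP=CPXXXXXX
if [ -f "$CP.zip" ]; then
  unzip "$CP.zip"          # step 3: unpack the Smart Component
  chmod +x "$CP.vmexe"     # step 4: make the installer executable
  "./$CP.vmexe"            # step 5: run it and follow the prompts
  STATUS=installed
else
  echo "$CP.zip not found: download it from the product support page first"
  STATUS=missing
fi
```

Run it as root from the temporary directory holding the package, per steps 1 and 2.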
If you have a c-Class enclosure in your environment and you plan to upgrade the OA firmware to 4.30 (or later), or you have installed an OA module with firmware 4.30 (or later), you need to configure a delay time in “Device Power Sequence”. It seems there is a problem during the OA initialization process where the OA shows incorrect “Caution” and/or “Critical” temperatures. This is officially confirmed by HP, and you can check this document for more information: c04655261. Anyway, you can configure the delay time via the OA GUI as shown below:
Achieving the best ESXi performance on HP ProLiant servers requires changing some default settings in the HP RBSU (ROM-Based Setup Utility) or UEFI (Unified Extensible Firmware Interface). Some of these settings are mentioned in “Performance Best Practices for VMware vSphere”, but some are not. You can change the following settings to achieve the best performance on ESXi:

- No-Execute Page Protection (AMD) / No-Execute Memory Protection (Intel)
  Path: System Options -> Processor Options -> No-Execute Page Protection (AMD) or No-Execute Memory Protection (Intel)
  Default: Enabled | Recommended: Enabled
  Reason: Recommended by HP; the feature protects systems against malicious code and viruses.

- Intel Virtualization Technology / AMD-V (AMD Virtualization)
  Path: System Options -> Processor Options -> Intel Virtualization Technology or AMD V (AMD Virtualization)
  Default: Enabled | Recommended: Enabled
  Reason: When enabled, a hypervisor supporting this feature can use extra hardware capabilities provided by Intel/AMD.

- Intel Hyperthreading Options
  Path: System Options -> Processor Options -> Intel Hyperthreading Options
  Default: Enabled | Recommended: Enabled
  Reason: A toggle that enables or disables Intel Hyperthreading Technology, which delivers two logical processors that can execute multiple tasks simultaneously using the shared hardware resources of a single processor core.

- Enhanced...
How much do you know about HP ASR? How does it work?
The ASR feature is a hardware-based timer. If a true hardware failure occurs, the Health Monitor might not be called, but the server will be reset as if the power switch were pressed. The ProLiant ROM code may log an event to the IML when the server reboots.

ASR Timeout
The ASR Timeout option sets a timeout limit for resetting a server that is not responding. When the server has not responded within the selected amount of time, it automatically resets. The available time increments are:
- 5 minutes
- 10 minutes
- 15 minutes
- 20 minutes
- 30 minutes

The ASR feature is implemented using a “heartbeat” timer that continually counts down. The Health Monitor frequently reloads the counter to prevent it from counting down to zero. If the ASR timer counts down to zero, it is assumed that the operating system has locked up, and the system automatically attempts to reboot. Events that may contribute to the operating system locking up include:
- A peripheral device, such as a Peripheral Component Interconnect (PCI) adapter, that generates numerous spurious interrupts when it fails.
- A high-priority software application...
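The heartbeat mechanism described above can be modeled in a few lines. This is illustrative only: the real timer lives in hardware, but the reload-or-decay logic is the same. Healthy ticks stand in for the Health Monitor reloading the counter; after it stops responding, the counter decays to zero and ASR fires.

```shell
# Illustrative model of the ASR heartbeat timer (the real one is in hardware).
TIMEOUT=5
counter=$TIMEOUT
for tick in 1 2 3 4 5 6 7; do
  if [ "$tick" -le 2 ]; then
    counter=$TIMEOUT              # healthy OS: Health Monitor reloads the counter
  else
    counter=$((counter - 1))      # hung OS: no reload, counter decays each tick
  fi
done
if [ "$counter" -le 0 ]; then
  echo "ASR: counter reached zero, resetting server"
else
  echo "ASR: system healthy (counter=$counter)"
fi
```

With a 5-tick timeout and the heartbeat stopping after tick 2, the counter reaches zero by tick 7 and the reset branch is taken.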
If you have read the HP or VMware documents about power management on ProLiant servers, both suggest configuring power management to Static High Performance. I did this on all my ESXi servers, and everything was fine. But I had a performance problem on some G7 servers; I checked all the factors but couldn’t find the reason. Then I found an article from HP about power management in ESXi on iLO 3 (G7 servers). It seems Static High Performance is the root cause of the performance issue on servers with iLO 3; it is related to a power-capping problem in iLO 3. Based on the article’s suggestion, you should configure power management to OS Control on those servers to prevent high CPU ready times on virtual machines. You can see the instructions at the link below:
https://h20566.www2.hp.com/hpsc/doc/public/display?sp4ts.oid=5178763&docId=mmr_kc-0105395&docLocale=en_US
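After moving the iLO/RBSU setting to OS Control, the policy ESXi itself applies can be checked and set from the shell. This is a sketch under an assumption: the /Power/CpuPolicy advanced option path and the "High Performance" value should be verified against your ESXi build before use. The block is guarded so it only runs the ESXi-specific commands when esxcli exists.

```shell
# Sketch: inspect and set the ESXi host power policy after switching the
# server to OS Control. Verify the /Power/CpuPolicy option path on your build.
if command -v esxcli >/dev/null 2>&1; then
  esxcli system settings advanced list -o /Power/CpuPolicy
  esxcli system settings advanced set -o /Power/CpuPolicy -s "High Performance"
  STATUS=applied
else
  echo "esxcli not found: run this on an ESXi host"
  STATUS=skipped
fi
```

With OS Control in the BIOS and the high-performance policy set in ESXi, the iLO 3 power-capping behavior no longer throttles the CPUs behind the hypervisor’s back.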