ESXi Host Memory Slots


Pretty much any slot will do, unless the server has an architecture that requires matched RAM in certain banks (e.g. an HP DL585 G1 has to match the configuration across the board). You always want to put the host in maintenance mode before shutting it down. The biggest thing to remember is that if you are using the free version of ESXi 5 you are limited to 32 GB.

  • Gen10 hosts (BL460c, DL380) are affected with ESXi 6.5 (tested latest build 16576891) and 6.7 (build 16316930). ESXi 7.0 (latest build 16324942) seems OK because the sensor status is reported as Unknown (for Memory Device).
  • ESXi host memory: the amount of memory required for compute clusters varies according to the workloads running in the cluster. When sizing memory for compute-cluster hosts, consider the admission control setting (n+1), which reserves the resources of one host for failover or maintenance – with four identical hosts, for example, only three hosts' worth of memory counts as usable capacity.
  • The memory reclamation technique that is used depends on the ESXi host memory state, which is determined by the amount of free memory on the ESXi host at a given time. With vSphere 6, VMware introduced a new memory state called the “clear” state.
  • If you can use the ESXi command line, there is a KB article on VMware's site which explains this. The link is here and I've pasted the article below. To determine how much RAM is installed in each slot on an ESX/ESXi host: log in to the host using an SSH client and run one of these commands as user root, e.g. dmidecode | less (a scripted variant is sketched right after this list).
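For convenience, here is a minimal Python sketch of the same idea: it runs dmidecode for DMI type 17 (“Memory Device”) and prints the module size per slot locator. This is only a sketch under the assumption that dmidecode is available (classic ESX or any Linux box – newer ESXi ships smbiosDump instead) and that it runs as root:

```python
#!/usr/bin/env python3
"""Print the installed memory module per slot, based on dmidecode output.

Sketch only: assumes 'dmidecode' is available (classic ESX or a Linux host;
newer ESXi provides 'smbiosDump' instead) and that the script runs as root.
"""
import re
import subprocess

# DMI type 17 = "Memory Device" (one record per memory slot).
output = subprocess.run(
    ["dmidecode", "-t", "17"], capture_output=True, text=True, check=True
).stdout

total_mb = 0
for record in output.split("Memory Device")[1:]:
    size = re.search(r"^\s*Size:\s*(.+)$", record, re.MULTILINE)
    locator = re.search(r"^\s*Locator:\s*(.+)$", record, re.MULTILINE)
    size_str = size.group(1).strip() if size else "Unknown"
    slot = locator.group(1).strip() if locator else "Unknown"
    print(f"{slot}: {size_str}")

    # Empty slots report "No Module Installed" and are skipped for the total.
    match = re.match(r"(\d+)\s*(MB|GB)", size_str)
    if match:
        total_mb += int(match.group(1)) * (1024 if match.group(2) == "GB" else 1)

print(f"Total installed: {total_mb / 1024:.1f} GB")
```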

About two years ago I replaced a lot of my hardware with more power-saving alternatives. My self-made NAS and hypervisor were replaced by two HP ProLiant MicroServer G7 servers (N36L and N40L) – for a long time I was very happy with them.

Over the last months the number of VMs increased and now the CPU and memory resources are exhausted. A new, more powerful VMware server was needed.


Currently I have two dedicated servers for NAS and VMs that are connected to different networks via IPCop.

Internal as well as public VMs are served using a DMZ; the connection to the IPCop system is made through a dedicated network interface. This is not an ideal solution because there is a theoretical risk: an attacker could gain access to the internal network after compromising a DMZ VM and breaking out into the hypervisor. To avoid this, dedicated hypervisors or additional products like VMware vShield are used in data centers. Doing the same at home would have been expensive because additional hardware and software licenses are needed, so I accepted the (very) abstract risk in this case. 😉

At first I was thinking about buying the freshly released MicroServer. This server offers an even smaller case and a replaceable CPU. There are a lot of reports on the internet showing how to replace the standard Intel Celeron or Pentium CPU with an Intel Xeon:


This really appealed to me as I had good experiences with the predecessor. Unfortunately my enthusiasm was dampened – the new server also only supports up to 16 GB RAM. The integrated Intel C204 chipset actually supports 32 GB RAM, so it seems that HP placed a block in the BIOS. There are also reports on the internet showing that it's not possible to use 32 GB RAM – and in any case, 16 GB memory modules (the MicroServer has only two memory slots) are quite expensive. Because the main reason for my redesign was the memory limitation, the little HP server was ruled out. Another disadvantage of the MicroServer was its rather hefty price of about 500 Euros.


Intel also offers very interesting hardware with the fourth generation of their embedded “Next Unit of Computing” systems.

These single-board computers come with a Celeron, i3 or i5 CPU with up to 1.7 GHz clock rate and 4 threads. Using DDR3 SODIMM sockets it is possible to install up to 16 GB RAM. The devices can also be bought as a kit which includes a small case – the most recent case can also hold one 2.5″ drive. An internal SSD (e.g. for the ESXi hypervisor) can be connected using mSATA. An i3 NUC with 16 GB RAM and a 4 GB SSD costs about 350 Euros.

For a short time I was thinking about using such a device for my DMZ and test systems – but this seemed ill-advised to me for several reasons:

  • a Layer 3 switch would be needed (because of VLAN tagging for the test and DMZ networks) – cost: about 300 Euros (Cisco SG300 series)
  • another device that needs to run permanently, wasting power
  • no redundancy because only one SATA device can be connected (I haven’t heard about Mini PCI-Express RAID controllers so far 😛)
  • using VMware vSphere ESXi is only possible after creating a customized ISO because network drivers are missing
  • only one network port, so no redundancy or connection to two different networks without a Layer 3 switch

The final cost of this design would have blown my planned budget. In my opinion it would have been only a “half-baked” solution.

Building my own server seemed more efficient to me. Mainboards with the Intel sockets 1155, 1156 and 1150 are also available in the space-saving Mini-ITX and MicroATX form factors. With the latter you can have up to 32 GB RAM – perfect! 🙂

Even professional mainboards with additional features like IPMI remote management and ECC error correction are available at fair prices. I was very lucky because my friend Dennis was selling such a board including CPU, RAM and RAID controller while I was looking for suitable products. That was an offer I couldn’t refuse! 😀

My setup now consists of:

  • Supermicro X9SCM-F mainboard (dual Gigabit LAN, SATA II + III, IPMI)
  • Intel Xeon E3-1230 (v1, first generation) CPU with 8 threads, 3.2 GHz clock rate and 8 MB cache
  • 32 GB DDR3 ECC memory
  • HP P400 RAID controller with 512 MB cache and BBWC
  • LSI SAS3081E-R RAID controller without cache
  • 80 GB Intel SSD for VMware Flash Read Cache
  • Cisco SG300-20 Layer 3 switch (for encapsulating the Raspberry Pis in a DMZ VLAN)

The E3-1230 CPU has already received two updates (1230v2 and 1230v3) and also a Haswell refresh (1231), but the surcharge wasn’t worth it for me. I found no online shop advertising an equivalent setup at a comparable price. 😀

If I had had to buy new hardware I would have chosen the 1230v3 – I’m already using this CPU in my workstation and I’m very happy with it. Compared with the rather weak AMD Turion CPU of the HP N40L, the performance improvement is so big even with the first 1230 generation that it easily meets my requirements. Having the most recent generation wouldn’t have brought any additional benefit.

The server has two RAID controllers, which is deliberate. VMware ESXi still doesn’t support software RAID, so the HP P400 controller is used. Two connected hard drives (1 TB, 7200 RPM) form a RAID that serves as the datastore for virtual machines. The NAS hard drives are connected to the second controller. My previous NAS was converted into a VM using P2V and accesses the hard drives through this controller, which is passed into the VM using VMDirectPath I/O. To be honest, I had never seriously considered virtualizing my NAS before. Another possibility for connecting the LUNs would have been passing the individual hard drives to the VM using RDM (Raw Device Mapping). Opinions about this are very divided on the internet – many prefer RDM, many others prefer to pass through the whole controller. I relied on the personal experience of Dennis, who was successful with the latter solution (see the sketch below for checking which devices are passthrough-capable).
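As a reference, here is a minimal pyVmomi sketch that lists which PCI devices of a host are VMDirectPath-capable and whether passthrough is currently enabled. The host name and credentials are placeholders, and skipping certificate validation is only acceptable for a home lab like this one:

```python
#!/usr/bin/env python3
"""List PCI devices of an ESXi host and their VMDirectPath (passthrough) state.

Minimal sketch using pyVmomi; assumes a reachable ESXi host (or vCenter) and
valid credentials - the connection details below are placeholders.
"""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ESXI_HOST = "esxi01.example.local"  # placeholder
USERNAME = "root"                   # placeholder
PASSWORD = "secret"                 # placeholder

# Lab-only: skip certificate validation for self-signed ESXi certificates.
context = ssl._create_unverified_context()
si = SmartConnect(host=ESXI_HOST, user=USERNAME, pwd=PASSWORD, sslContext=context)

try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Map PCI device IDs to their passthrough configuration, if reported.
        passthru = {p.id: p for p in (host.config.pciPassthruInfo or [])}
        print(f"Host: {host.name}")
        for dev in host.hardware.pciDevice:
            info = passthru.get(dev.id)
            if info is not None and info.passthruCapable:
                state = "enabled" if info.passthruEnabled else "capable"
                print(f"  {dev.id}  {dev.vendorName} {dev.deviceName} [{state}]")
finally:
    Disconnect(si)
```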

After using the virtualized NAS solution for one week I have to say that it works pretty well. Converting the physical system into a virtual machine was done quickly using VMware vCenter Converter. In combination with the two network uplinks and the more powerful CPU I was able to increase the data throughput of Samba shares. While the old system only offered about 70 MB/s throughput in the internal network, the new system reaches about 115 MB/s – close to the practical limit of a single gigabit link. Using some Samba TCP optimizations it might be possible to increase this value even further.

The only thing that was missing was an adequate case. When I minimized my hardware setup two years ago I also ordered a smaller rack that fits better in my flat. So the case size was limited, which made it hard to find one. My requirements were:

  • MicroATX form-factor
  • decent design
  • room for 6x 3.5″ hard drives
  • hard drive cages with rubber mounts if possible

In the beginning I was in love with the Fractal Design Define Mini. The case looked very nice but didn’t fit in my rack. After additional research I finally bought the Lian Li PC-V358B.

I really like the case concept – designed as a spacious HTPC case, it offers enough room for my hard drives and can be easily maintained thanks to an intelligent swing mechanism. Another thing I want to mention is that you don’t need any tools to remove the individual case parts (side panels, hard drive cages, etc.). The side panels have tiny but sturdy mounting pins (see gallery). The case looks very sophisticated and high-class, which might be the reason for the rather high price of about 150 Euros. Luckily I was able to buy a B-stock unit from the Alternate outlet for roughly 100 Euros. I couldn’t find any flaws like scratches, which made me very happy. Buying an alternative case would have made it necessary to buy a SATA backplane – so the final price would have been comparable to the Lian Li case.

Check

So if you’re looking for a beautiful, compact and high-quality case you really should have a look at the Lian Li PC-V358B! 🙂

Photos

Some pictures of the new setup:


🙂