Saturday, June 14, 2025

PROJECT: Lenovo P520 NVMe + SATA NAS.

This post will be updated as the project continues. 


LEGAL DISCLAIMER:

This blog post details the purchase and configuration of a number of IT home lab computer parts.  We purposefully did not provide a BOM at the start of this blog post as we purchased equipment in stages and so can completely avoid talking about the total cost.  Although you may add up the price of component parts while reading this blog post, you are hereby legally bound not to discuss said price with the author of this blog post (or members of his immediate family); in a similar fashion you are hereby legally bound not to use any phrase that ends "you could have bought a small car" or "paid for a nice family holiday" when within hearing range of the author (or his immediate family).

Finally, please note the value of your own home lab may go up as well as down; it will probably go down, but keep kidding yourself on this if you wish.

END OF LEGAL DISCLAIMER


It all began with me staring at my Synology NAS with a slight look of disgust.  I have been reading lately about Synology's recent decision to only allow Synology-branded hard disks in their NAS, starting with the latest models.  This seems to have got me quite angry, although I am in no way impacted by it at all (except for a hypothetical future upgrade).

For many years I have operated a Synology DS1517+ NAS that has performed a key role in both my home and my home lab; the NAS is used for the following:
  • Stores personal documents and photos.
  • Stores the contents of my Plex library.
  • Acts as multipathed NFS storage for my ESXi and Proxmox hosts and supports VAAI for VMware.
  • Is my main DNS and NTP server, configured to provide both services locally on several of my VLANs.
  • Runs Docker with Portainer for a handful of containers.
  • Probably some other stuff I have forgotten about.
The NAS has just under 14TB of capacity and is almost full.  The various storage devices in my multiple Proxmox and ESXi hosts are also filling up, and lately I have spent a lot of time moving VMs back and forth as I switch from lab testing one thing to another.  Every time the hard drives clack clack clack I start to daydream about how my life would be if I had lots of NVMe storage.

What to do?

I contemplated switching to larger capacity disks in the NAS (and perhaps fewer of them) and adding in some SSDs, but it did not feel like the future.  Synology do offer an NVMe cache for the NAS too, but I did not think this would meet my requirements at all.

I looked at some of the hybrid and all NVMe NAS available on the market:
  • Asustor have the Flashstor, which is a 6 or 12 bay all-NVMe unit (as a Gen 1 or the newly released Gen 2).
  • UGREEN have a hybrid NAS with 4x SATA and 2x NVMe (there is also a 6/2 unit).
The first of these is limited by not taking SATA drives; the second (in my mind) is limited by not offering enough NVMe capacity without having to buy very large modules.

Ideally I wanted to replace the Synology NAS with something else, relegate the unit to backup duties and have it powered off most of the time; this isn't because it is a massive power hog, it is just that I wanted something else always on.

What to do?

After trawling the Internet for what felt like a lifetime I came across this Reddit post describing the possibility of adding up to 10 NVMe devices to a single server.


I read this post over and over and came to the conclusion that this was what I wanted to do.  Ever since I first installed flash-accelerated Nimble storage (over iSCSI) back in 2012 (ish), I have dreamt that one day technology like it would be available in the home (sad, I know).

I had no exact configuration in mind for this server but thought it would be fun to give it a go.  The remainder of this blog post documents this journey, which is still ongoing (rather than me being a lazy architect, let's say we are using the DevOps methodology; alternatively, we are going to build the plane as we fly it!).


What makes the Lenovo ThinkStation P520 so special?

Originally designed as a high-end PC in a tower form factor, the P520 was released in 2018 and has the following specification:
  • Single Xeon processor - Socket-R4 (LGA 2066) - numerous options supported/sold (see note below)
  • Eight slots for ECC DDR4 RDIMMs.
  • Two on-board M.2 slots supporting either 2280 or 22110 NVMe (single-sided only; see note below).
  • 6x SATA connectors plus one SATA/eSATA, with power connectors for 4x SATA drives (see note below).
  • Internal mounting space for 2 x 3.5inch drives in toolless caddies.
  • Two front-facing "Flex bays" which will take either Lenovo's own devices or standard 5.25" devices, secured using a single bar mount with one screw.
  • 2x PCIe 3.0 x16, 1x PCIe 3.0 x8, 2x PCIe x4 and one legacy PCI card slot.
  • 48 PCIe lanes and support for bifurcation.
  • Ability to add 8x NVMe using two quad-M.2 PCIe adapter cards.
  • SATA and NVMe RAID support (don't get excited, see note below).
  • Out of band management via shared 1GbE port (see note below).
  • If you are lucky you might get one with a Windows licence embedded into the BIOS (see note below).
  • Should take a range of GPUs (you will have to do your own research here). 
Note there is a ThinkStation P520 and also a P520c, where the "c" is for "compact", so the specifications are not the same.  This blog post is about the P520, not the P520c.

The really nice thing is that you can pick up these workstations pretty cheap second hand.


SPRINT 1 - PURCHASE A P520.

I purchased a barebones P520 with a Xeon W-2135 (6 cores @ 3.70GHz) in the UK for £199.

Here are some notes on the hardware:

Processor - Any processor compatible with the P520 will be susceptible to the Intel L1 Terminal Fault security vulnerability (see Intel Side Channel Vulnerability L1TF).  This is not an issue for me in a home lab; I will not use this server for anything public facing.

NVMe storage - the server has a heatsink cover that fits nicely over two 2280 single-sided M.2 NVMe devices.  You may or may not get the heatsink with your purchase (I chose to purchase one separately).  I have not tried the 22110 package size; you will need your own heatsink, and the clearance to the front-facing fan does not look sufficient to me.  Your mileage may vary.

SATA power connector - if you only have one SATA power cable, remove the front fan; there should be a second one behind it (I found this out too late, so I now have a spare).

SATA/NVMe RAID support - do not waste your time with this; it is Intel VROC RAID.  If you have not come across this before, it only provides software RAID under Windows; in any other OS you will simply see the individual drives.  NVMe RAID is not supported out of the box and would require the purchase of an additional key.  We are not going to use this, so let's just forget we ever saw the word RAID, okay?

Out of Band management - OOB management is provided by Intel AMT, which is susceptible to a pretty serious vulnerability (9.8 CVSS score) that cannot be fixed without disabling AMT.  Again, for me this is a lab environment, so it is not a concern.  From what I can see, AMT remote console does not work with discrete graphics cards, and integrated graphics is not an option with the processors this unit takes (though I did see a Reddit post where a user had the remote console working; no idea how).  The ability to remotely power the machine on and off works for me.

Windows licence stored in the BIOS - whether you get this depends on whether your P520 was originally purchased with Windows.  I was out of luck on this one.


SPRINT 2 - ADD A GRAPHICS CARD, ADD MEMORY, BOOT THE SERVER.

Once I had received the base unit and checked it over it was time to get it booted and check it out.  I had not purchased anything up-front as I really wanted to see what my £200 was getting.

I had an NVIDIA GeForce GT 1030 lying around and installed it into the P520.  The server powers on; all is well with the world (so far).  No RAM yet, of course.

Okay, so I agonised over memory (options available here) and eventually plumped for the following at a cost of £127.46 (including £9.58 delivery; I was in a hurry, so let's never speak of this again):
  • 4x 16GB (1x16GB) Micron DDR4 Server Memory, PC4-25600 (3200), ECC RDIMM, CAS 22, 1.2V 86R3X
I did a quick install of Proxmox on an old SSD I had lying around.  All good.

At this point we can answer the question "What is the P520 like to live with?".

It is heavy, feels well built/solid.  The fan noise is on a par with my Synology NAS.  There may be some scope for future fan replacements (for example here and here).


SPRINT 3 - ADD SOME BETTER STORAGE, RE-INSTALL PROXMOX.

My initial thought was to boot from two SSDs mirrored with VROC and leave the NVMe slots free to populate later for VM storage.  Unfortunately I learnt the hard way that VROC is limited to providing software RAID for Windows.  Ah well, at least I didn't shell out money for the VROC upgrade key to RAID the NVMe drives.

I purchased 2x 1TB SSDs as follows:
  • 2x 1TB Samsung 870 EVO, 2.5” SSD, SATA III 6Gb/s, MKX, MLC V-NAND, 1GB Cache, Read 560MB/s, Write 530MB/s, 98k/88k IOPS 86TVT
Total cost (including £4.57 delivery) was £171.44.

For the life of me I could not figure out how to mount 2.5 inch drives in the toolless trays.  Ah well, let's just dangle them in for now.  My unit came with one right-angled SATA cable, which is required to mount drives in the internal bays.  I sourced another one for £3.93 here (you can source original Lenovo ones but they cost much more; this one is fine).

Next I re-installed Proxmox, choosing a ZFS mirror across the two SSDs.
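
After the install it is worth confirming the mirror is actually healthy.  Here is a minimal Python sketch of the check I mean, run on the Proxmox host ('rpool' is the installer's default pool name, so adjust if yours differs):

```python
#!/usr/bin/env python3
"""Sanity-check the ZFS boot mirror after a fresh Proxmox install."""
import subprocess

# 'rpool' is the pool name the Proxmox installer creates by default
status = subprocess.run(
    ["zpool", "status", "-x", "rpool"],  # -x: only report problems
    capture_output=True, text=True, check=True,
)
# Prints "pool 'rpool' is healthy" when the mirror is intact
print(status.stdout.strip())
```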

So far so good.


SPRINT 4 - MIGRATE SOME VMs FROM ESXi!
  
I have worked with VMware technology for over 15 years.  It all started with someone categorically telling me I could not have access to something called "vCenter" which seemed to be responsible for giving my VDI "PC" what I thought was not a lot of memory.  Needless to say that was a red rag to a bull.

My first lab server was an HP ML110 (with 8GB RAM).  This and a copy of "Mastering vSphere 4" got me started with VMware products.  A second lab server was added (an ML115) and, after taking the VCP class required to qualify for the exam at a night school somewhere in the middle of nowhere, I was able to take my VCP exam.

Soon 8GB of RAM per server became "not enough", so I moved to three Shuttle XH61Vs with a whopping 16GB each, based on a blog post by Erik Bussink.  For quite a while this ran as a vSAN cluster with a mix of spinning rust and SSD cache drives.

Later still, 16GB of RAM per server became "not enough", and so I added a Shuttle SH370R8 based on a post by Ivo Beerens.  This started to show its limitations as I tried to deploy VCF to it though (8 cores, 128GB RAM and 2TB NVMe).  Then I read about NVMe Memory Tiering in ESXi 8.0 U3 in a post by William Lam, so the NVMe was partitioned to provide some space for tiering and the rest for a datastore; this got me through for a while longer.

At some point my ESXi 7 licences from VMUG Advantage expired, and the hardware was no good for ESXi 8 without buying USB NIC dongles; I really did not want to go down that road.  The hardware was re-purposed as a Proxmox cluster and I have to say, although different to ESXi/vSphere, I really rather like it; more on this in some other posts though.  At one point I virtualised a fresh install of ESXi 8 under Proxmox and bootstrapped a vSAN node; it worked but perhaps would not give the best performance!

So here we are.  It is 2025 and VMware have made huge changes to both corporate licensing and VMUG licensing.  I will continue to run VMware in a lab, but I want anything that is important (i.e. not a lab that can be thrown away and rebuilt) running on Proxmox, which can be regularly security patched, unlike ESXi/vSphere/VCF on a VMUG Advantage licence.

I have a few VMs I want to move over from ESXi to Proxmox.  Proxmox has a migration method where you register an ESXi host in the inventory and migrate VMs directly from its datastore (the datastore has to be a local one though).  To migrate a VM you first install the virtio drivers (the Proxmox equivalent of VMware Tools), uninstall VMware Tools, power off the VM and migrate.
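
To give a flavour of the post-migration tidy-up, here is a minimal Python sketch (the VMID of 101 is hypothetical, and it assumes the virtio drivers went in before the move) that switches an imported VM over to the paravirtualised devices:

```python
#!/usr/bin/env python3
"""Switch a freshly migrated VM over to virtio devices (run on the Proxmox host)."""
import subprocess

VMID = "101"  # hypothetical ID of the freshly migrated VM


def qm_set(*args: str) -> None:
    """Apply a VM setting via Proxmox's qm CLI, failing loudly on error."""
    subprocess.run(["qm", "set", VMID, *args], check=True)


qm_set("--scsihw", "virtio-scsi-pci")    # paravirtualised SCSI controller
qm_set("--net0", "virtio,bridge=vmbr0")  # virtio NIC on the default bridge
```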

Now I have my Windows jump PC and Plex Ubuntu server on Proxmox!


SPRINT 5 - FASTER NETWORKING - ATTEMPT NUMBER 1.

1GbE, 2.5GbE, 10GbE, 25GbE, etc.  My home network is the first of these, so the first decision is where do I want to be?  I figure my home lab will always be fairly modest, so 10GbE will be fine.

So now we have to decide on SFP+ vs SFP28 vs RJ45/CAT6 (or above).  The sensible option here is to go SFP+, as it is more power efficient and gives a higher quality connection.  My thinking though is that I have plenty of copper cables lying around and can avoid a switch purchase by directly connecting servers.

Now I have to be honest: using the Broadcom Compatibility Guide to find a compatible 10GbE RJ45 adapter that is available to buy second hand at a reasonable price is the least amount of fun I have had in quite some time.

So my first purchase is a used Intel Ethernet X550-T2, a dual-port 10GbE card with RJ45 connectors; this cost me just under £50.

Here begins the fun.

Putting the card into my Proxmox server, it is recognised straight away.  Great!
Putting the card into my Shuttle SH370R8, the PC does not boot... bah!

I spent some time working out how to firmware update the card using a UEFI shell, only to find the update would not run as the card is probably an OEM card.
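
As an aside, on a host that does boot, a quick way to confirm what the OS actually sees for each NIC is to read the Linux sysfs entries.  A minimal sketch (interface names will of course differ per machine):

```python
#!/usr/bin/env python3
"""List each network interface with its link state and speed via sysfs."""
from pathlib import Path

for dev in sorted(Path("/sys/class/net").iterdir()):
    if dev.name == "lo":
        continue  # skip the loopback interface
    state = (dev / "operstate").read_text().strip()  # e.g. up/down
    try:
        speed = (dev / "speed").read_text().strip()  # link speed in Mb/s
    except OSError:
        speed = "?"  # reading speed fails when there is no link
    print(f"{dev.name}: state={state}, speed={speed} Mb/s")
```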


SPRINT 6 - TRY ANOTHER CARD!

Perhaps rather hastily, I purchased an Emulex OCEE14102B 10GbE dual RJ45 port card second hand for just £30!  Excellent!  Except the small print says it is Low Profile, which I didn't notice until the card arrived (despite trying to make sure I didn't buy an LP card when purchasing!).  Argggghhhhhhhhhhhhhh.

Maybe I can take off the LP bracket and replace it with a full-height one?  I compared it to my Intel card and the screw holes are completely different.

Maybe I can just take the bracket off entirely and sit the card in the PCIe slot.  A bodge for sure, but at least I will know if the card works.  The bracket comes off easily, but the card will not fit in my SH370R8 server as the NVMe gets in the way.  Sigh.

Wait a minute!  Low profile card?  Wouldn't that fit in my Synology NAS?  Yup!  So now I have a 10GbE NAS; I don't really need it, but for £30, what the hell!


SPRINT 7 - TRY AGAIN WITH THE INTEL CARD.

While looking for a solution for the SH370R8 not booting with the card, I had read something about taping over pins on the PCIe card to get an OEM card working.  It stewed in my head for a few days.  Then I found this video explaining the process.

Bonza!  After a few attempts at taping over pins B5 and B6, the SH370R8 boots ESXi and it sees the card!  I won't be removing that card in a hurry, so I will have to buy yet another card for my Proxmox host!!


SPRINT 8 - BEGIN PLANNING A 10GbE NETWORK LAYOUT.

My SH370R8 has 2x 10GbE, as does my Synology NAS, and my Proxmox server soon will too.  What to do?

All of my lab equipment is based in the cabin.  My thought is the NAS, and possibly the older Shuttle XH61Vs, can be re-located to the house and put to work on backup duties.  I have two CAT6 cables between my house and the cabin.  The temptation is to light those up as 10GbE, possibly by building an OPNsense firewall in the house with 10GbE capability.  I am going to park this one for future consideration.

The switch I think I want doesn't seem to exist.  At some point I would like to add some more compute horsepower to the lab (those Minisforum A2s look oh so good, except for the SFP instead of RJ45!), so maybe 8 managed 10GbE RJ45 ports with some 1GbE as well?  Maybe the 10GbE can be isolated off.

Going to park this and come back to it; my head hurts.  If I can directly connect my SH370R8 to the P520 and the NAS for now, and the P520 to the NAS, it will do.  It's a bit on the messy side for my liking though.


SPRINT 9 - START THINKING ABOUT SATA.

I want plenty of NVMe storage, but at some point I also want some bulk capacity.  My thoughts turn to what that would possibly look like.  If I am using Proxmox I can manage the SATA disks at that level, but a better option would be to pass the SATA controller directly through to whatever NAS software I choose; this way drive stats get passed through to the NAS software.  The problem is I am currently booting from a pair of mirrored SATA SSDs (as a ZFS mirror) hanging off that same controller.
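
When the time comes, finding the controller's PCI address is the first step.  A minimal sketch of how I would locate it and build the passthrough command (the VMID of 100 is hypothetical, and it assumes IOMMU is already enabled):

```python
#!/usr/bin/env python3
"""Locate the SATA controller and print a qm passthrough suggestion."""
import subprocess

VMID = "100"  # hypothetical ID of the NAS VM

lspci = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True)
for line in lspci.stdout.splitlines():
    # Lines look like: "00:17.0 SATA controller [0106]: Intel Corporation ..."
    if "SATA controller" in line:
        address = line.split()[0]  # e.g. "00:17.0"
        print(f"Found: {line}")
        print(f"Suggested: qm set {VMID} -hostpci0 0000:{address}")
```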

So the decision is made to switch to 2x 1TB NVMe for boot.  The previous SSDs won't go to waste; I will use them in Proxmox as VM storage for now and re-allocate them later.

OK, so the following was purchased:

2x Kingston KC3000 (SKC3000S/1024G) PCIe 4.0 NVMe M.2 SSD, 1024GB, for a total of £149.84.

Unfortunately, while I was browsing I also saw a 4x 2.5" hard drive hot-swap cage that would fit in one of the front bays, which I thought would be cool as I have a bunch of small 2.5" drives lying around.  The following was also purchased:

1x Icy Dock MB324SP-B ExpressCage, a 4x 2.5" SATA HDD & SSD hot swap cage for an external 5.25" drive bay, for £71.24.


SPRINT 10 - START THINKING ABOUT NAS SOFTWARE.

The contenders are (in no particular order) Open Media Vault (OMV), TrueNAS Scale and Unraid.  The current thinking is one virtual NAS for personal stuff (media etc.) and another for Proxmox/VMware storage.

OMV - spun up a quick test VM; it requires few resources but seems to have few add-ins.  Having passed the P520's entire SATA controller through, though, it quickly picked up on drive errors on one of my old 2.5" 1TB spinners.
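
That is exactly why I wanted the controller passed through: SMART data reaches the NAS software.  A minimal sketch of the same sort of check from inside the VM (it assumes smartmontools is installed):

```python
#!/usr/bin/env python3
"""Report the SMART health verdict for every SATA drive the VM can see."""
import glob
import subprocess

# /dev/sd? covers the drives presented by the passed-through controller
for disk in sorted(glob.glob("/dev/sd?")):
    result = subprocess.run(
        ["smartctl", "-H", disk],  # -H: overall health self-assessment
        capture_output=True, text=True,
    )
    verdict = [line for line in result.stdout.splitlines() if "overall-health" in line]
    print(disk, verdict[0] if verdict else "no SMART health line returned")
```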

TrueNAS Scale - looks like this will do everything I want on the VM storage side (NFS, iSCSI, and it also supports VAAI over iSCSI).

Unraid - this looks to be a great solution for media and capacity storage; the rich ecosystem of add-ins looks great for a tinkerer, and I think this would be great for running the additional services that are typically hosted on the Synology NAS.  Booting from a USB key feels a little worrying, though Unraid does not write a lot to the key and it can be backed up (you can also transfer the licence to another key if needed, once per year).  I have a 4-port USB PCIe card where each port is its own USB controller, so I can pass individual ports through to different VMs, which works here.  Also a nice touch: once installed, you can just boot the P520 from the USB key and you have the same config minus the Proxmox layer, so no big outage of personal stuff if I break Proxmox (who, me? no, never!)... sweeeeet!
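
For reference, the simpler alternative to passing through a whole controller port is device-level USB passthrough by vendor:product ID.  A minimal sketch (the vendor string and VMID are hypothetical; match against whatever lsusb shows for your key):

```python
#!/usr/bin/env python3
"""Find a USB flash key and print a qm passthrough suggestion (run on the host)."""
import subprocess

KEY_MATCH = "Kingston"  # hypothetical; substitute your flash key's maker
VMID = "102"            # hypothetical ID of the Unraid VM

lsusb = subprocess.run(["lsusb"], capture_output=True, text=True, check=True)
for line in lsusb.stdout.splitlines():
    # Lines look like: "Bus 002 Device 003: ID 0951:1666 Kingston ..."
    if KEY_MATCH in line:
        vendor_product = line.split()[5]  # the "vendor:product" ID pair
        print(f"Found: {line}")
        print(f"Suggested: qm set {VMID} -usb0 host={vendor_product}")
```

Either way, the key's GUID needs to be visible to the VM for Unraid's licensing.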


SPRINT 11 - PROCURE MORE HARDWARE.

After losing one bidding war on another X550-T2, I bid on another with a higher maximum bid and even remembered to watch at the end of the auction; those last 20 seconds are an absolute killer, right?  I did win, so another £49.04 spent.

Recently there have been a few posts on Hot UK Deals for 20TB Toshiba MG Series SATA enterprise hard drives.  The drives look like they have gone EOL, but apparently the 5 year warranty should be honoured directly with Toshiba when purchasing through Ebuyer's eBay UK store.  Two purchased for £291.18 each; you will have to total that up yourself as I don't want to think about it.

That should be the capacity side of my NAS sorted: docs, media and all my other crud, with overspill for ISOs etc.  This should keep me going for a good while, and once up and running on Unraid it will move me one step closer to re-purposing the Synology NAS for backup duty (just DNS, NTP and that Lyrion media server to sort out).  These drives will go into the internal bays with the whole SATA controller passed through to Unraid.  The next major purchase will be the NVMe and the PCIe card to host them; going to have to wait until after payday for that though.

 
To be continued....
