vSphere Home Lab Upgrade - Synology DS1812+
- Matt Liebowitz
I built up my home lab back in late 2011 after finally deciding that I needed something that was completely mine and not a lab shared with others. I modeled it almost identically on Jase McCarty’s (http://www.jasemccarty.com/blog/?p=1516) and have been very happy with it. The only problem I’ve had is with my home lab storage.
A funny thing happened on the way to figuring out what storage to use in my home lab. Faced with the prospect of using a home NAS with 4 SATA drives, I wanted to see if I could find something that would give me better performance. I had the opportunity to get my hands on a server that had 6 x 146GB 10K RPM drives and I jumped at the chance. That server ended up being an old DL380 G4 (possibly even a G3, I’m not sure). It seemed so smart at the time: why use 7200 RPM consumer SATA drives when I could use 10K RPM enterprise SCSI drives and get better performance? I didn’t factor in one important thing: cache, or the lack thereof.
After seeing miserable performance I researched and bought some battery-backed write cache: a whopping 128MB that had to be split between reads and writes. Even with that, and using iSCSI software that let me create a RAM cache, I still had pretty bad performance. How bad? This bad.
Yep, that’s over 4,000ms of latency. It wasn’t consistently that bad, but trying to do multiple operations at once, like rebooting two VMs simultaneously, would cause it. The server was old, not true 64-bit, and just not the right fit. There were probably other contributing factors beyond the lack of cache as well, not to mention the electricity cost of running a true server-class computer in my house. I realized my mistake and knew I needed to replace it with dedicated NAS storage.
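If you’re curious how to catch latency spikes like that without staring at esxtop in real time, its batch-mode output is easy to post-process. Below is a minimal Python sketch; the perf.csv filename, the 50ms threshold, and the substring match are placeholders of mine, since the exact counter names vary by ESXi build. (Capture the data first with something like `esxtop -b -d 2 -n 60 > perf.csv`.)

```python
# Minimal sketch: scan esxtop batch output for disk latency spikes.
# Column names vary by ESXi build, so we match any counter that looks
# like a per-command latency figure ("MilliSec/Command").
import csv

THRESHOLD_MS = 50.0  # flag anything worse than 50 ms (an assumption)

with open("perf.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    latency_cols = [i for i, name in enumerate(header)
                    if "MilliSec/Command" in name]
    for sample, row in enumerate(reader, start=1):
        for i in latency_cols:
            try:
                value = float(row[i])
            except (ValueError, IndexError):
                continue
            if value > THRESHOLD_MS:
                print(f"sample {sample}: {header[i]} = {value:.1f} ms")
```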
I know I could have used something like Nexenta Community Edition to get better performance out of the DL380. For a variety of reasons that didn’t make sense in this situation.
After much research and quite a bit of unnecessary delaying on my part (with the appropriate amount of ribbing from @ChrisWahl and @Millardjk), I finally decided on the Synology DS1812+. I loaded it up with 4 x SanDisk 240GB SSDs and 4 x Western Digital Red 2TB SATA drives, and I plan to use it for my home lab as well as for backing up my PC, pictures, videos, etc.
So how well does it work? Is it a worthy replacement to the DL380? Seriously, what isn’t a worthy replacement to that old server?
I have been extremely impressed with the Synology DSM software and how easy it is to set up volumes, create iSCSI targets, and configure link aggregation (more on that in a bit). It also has lots of great features for use as a home NAS, so I’m very happy with my choice. Performance has been great both on the SSDs and on the Western Digital Red drives. The days of multi-second latency are gone: as you can see in the screenshots from esxtop, I’m able to push extremely high I/O (both reads and writes) with less than 3ms of latency. I may do some more detailed testing with the I/O Analyzer fling, but for now this is good enough for me.
My only disappointment is that I cannot configure true 802.3ad dynamic link aggregation. Unfortunately the switch I use in my home lab, a Dell PowerConnect 2816, only supports static link aggregation and not dynamic. There are many posts on the Synology forum complaining about this, but it’s really Dell’s issue and not anything wrong with the DS1812+. I consider that a “nice to have” for a home lab, and certainly not worth investing hundreds of dollars in a new switch that supports the proper link aggregation configuration.
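For the curious, DSM is Linux underneath, so you can verify which bonding mode actually came up. Here’s a minimal sketch, assuming SSH access to the NAS; /proc/net/bonding/bond0 is the standard Linux bonding status file, though the interface name may differ on your DSM version.

```python
# Minimal sketch: report the negotiated bonding mode on the NAS itself.
# The bond0 interface name is an assumption for this DSM build.
BOND_STATUS = "/proc/net/bonding/bond0"

with open(BOND_STATUS) as f:
    for line in f:
        line = line.strip()
        if line.startswith(("Bonding Mode:", "MII Status:",
                            "Slave Interface:")):
            print(line)
# "IEEE 802.3ad Dynamic link aggregation" means LACP was negotiated;
# a "load balancing" mode means the bond fell back to a static config.
```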
All in all I’m very happy with the addition of the Synology DS1812+ into my home lab. The performance is great, the DSM software is very good, and there are some great things coming in the new DSM 4.2 (currently in beta). I highly recommend any of the Synology models to folks who are looking to upgrade their home lab storage.
30 thoughts on “vSphere Home Lab Upgrade - Synology DS1812+”
Very happy for you Matt, this was a long time coming!
While I do have some other storage, my favorite storage management interface, BY FAR, is the Synology DSM on my DS211+ (much lower end than yours). It’s amazing how many stats it gives and how much it can do. Anyone can try the free online demo of their management software, and I think you will be impressed.
Great write-up, Matt! Keep ’em coming!
-David
Hey David,
Thanks for the reply. I agree that the DSM software is very impressive. They also make several apps for iOS and Android that are very useful. I’m currently streaming music using DSaudio and it’s working great!
I like the stats that it provides, but maybe I’m missing something. Is there somewhere within DSM to get more detailed statistics?
Matt
Do you know the power draw of the DS1812+? Like you, I have a Dell PowerEdge 2950 III with 7200RPM drives, and it’s pulling 740 watts at all times. To make matters worse, I’ve acquired a NetApp FAS3040 filer with a 14x 144GB FC disk shelf for training and lab purposes. That totals around 850 watts when both the filer and the disk shelf are running. I’m investing in a three-host custom-built cluster using something like the DS1812+ as my NAS for VMs and personal data. I want to cut down the power usage!
Hey Dustin,
I’m not sure what the power draw of the Synology is, but with SSDs and Western Digital Red drives I’m hoping it’s not too bad. It has to be less than the DL380.
On the other hand, I’d hate to see your power bill! 🙂
Dude, that’s an expensive box for a home lab, but great post!
Hey Keith,
I agree it was expensive for just a home lab. I plan to use it for more than that: backups for the PC, streaming audio and video to various devices around the house, etc. The great thing about the Synology line is that you get the same great DSM software even with the lower-end models that have fewer drive bays.
Hi Matt,
Thanks for sharing that with us.
Are you using iSCSI or NFS for your datastore storage? Did you compare their respective performance?
Hi Didier,
I’m currently using both iSCSI and NFS for datastores. I will say that even without formal testing I can see that NFS is significantly faster than iSCSI. Operations take much longer using iSCSI as compared to NFS even if they’re being performed on the same set of disks. I haven’t figured out why iSCSI is so much slower in my environment but at this point I will likely be switching to an all NFS setup since the experience with iSCSI is not ideal.
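For reference, mounting the NFS export on a host is scriptable too. Here’s a minimal sketch using pyVmomi; the hostnames, credentials, export path, and datastore name are all placeholders, not my actual lab values.

```python
# Minimal sketch: mount a Synology NFS export as an ESXi datastore.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: self-signed cert
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="password", sslContext=ctx)

# Walk to the first host: rootFolder -> datacenter -> cluster -> host.
dc = si.content.rootFolder.childEntity[0]
host = dc.hostFolder.childEntity[0].host[0]

spec = vim.host.NasVolume.Specification(
    remoteHost="synology.lab.local",      # the NAS
    remotePath="/volume1/nfs_datastore",  # NFS export on the NAS
    localPath="syn-nfs01",                # datastore name in vSphere
    accessMode="readWrite")
host.configManager.datastoreSystem.CreateNasDatastore(spec)
Disconnect(si)
```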
Matt
Hi Matt,
Thank you very much for your feedback. This matches what I’ve already read several times on the internet. (For example: http://wahlnetwork.com/2012/08/07/synology-ds411-vsphere-home-lab-storage-protocol-bakeoff/)
I’m the proud owner of a DS1010+ and very happy with it, but it is a pity that if I decide to keep my NFS datastore, I won’t be able to use the new VAAI feature (http://blog.synology.com/blog/?p=1364).
Also, I would like to switch to iSCSI in order to use multipath.
I will probably install the DSM 4.2 beta and compare iSCSI and NFS performance with vSphere. If I do, I can keep you updated if you are interested.
Didier
I also would prefer to use iSCSI so I can take advantage of VAAI. That part does work great: I can clone a VM from a template in 30 seconds when it’s on an iSCSI datastore with VAAI enabled. But everything else iSCSI-related is slow. It may be partly related to the Western Digital Red drives, which are only 5400 RPM, but it’s frustrating that performance is better on those same drives when presenting them via NFS instead of iSCSI.
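For anyone who wants to script that clone-from-template test, here’s a minimal pyVmomi sketch. It assumes a connection `si` like in the earlier example, and the template, datastore, and pool names are placeholders.

```python
# Minimal sketch: clone a VM from a template onto a VAAI-capable
# iSCSI datastore; the data copy is offloaded to the array.
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first managed object of this type with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

content = si.RetrieveContent()
template = find_by_name(content, vim.VirtualMachine, "w2k8-template")
datastore = find_by_name(content, vim.Datastore, "syn-iscsi01")
pool = find_by_name(content, vim.ResourcePool, "Resources")

clone_spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(datastore=datastore, pool=pool),
    powerOn=False, template=False)
task = template.Clone(folder=template.parent, name="cloned-vm",
                      spec=clone_spec)
```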
I’d definitely be interested in your testing of iSCSI and NFS performance between DSM 4.1 and 4.2. Make sure to run the same tests before and after the upgrade so it’s a real comparison. I’m looking forward to your findings!
Matt
I’m looking at a very similar setup. I have a DS1812+ with 8x 2TB WD Red drives. However, I’m going to return 4 of the drives and exchange them for 256GB SSDs. My plan was to use iSCSI or NFS to my 3 ESXi whiteboxes. Do you notice a big increase in performance with datastores on the SSDs vs. the Red drives?
Hey Bill,
There is definitely a big difference in performance with the SSD drives as compared to the Western Digital Red drives. The Reds aren’t bad (not the fastest drives but not the slowest either) but the SSDs are a lot faster.
As for iSCSI vs. NFS, it is no contest: for me, NFS is significantly faster than iSCSI. I see it even against the same set of disks doing exactly the same tasks. Others have reported that NFS is faster than iSCSI on Synology, so I know it isn’t just me. The only nice thing about iSCSI is that you can take advantage of VAAI for clone operations, etc. But for me it isn’t worth the performance hit in all other operations just to have VMs deploy faster from a template.
Hope this helps!
Do you think this performance difference between NFS and iSCSI would still apply with multipath iSCSI?
I tried using iSCSI with and without multipathing configured (specifically Round Robin) and the result was the same. I’m not coming close to maxing out the performance of the NICs; rather, I’m seeing very high latency when using iSCSI as compared to NFS. I think some of the high latency can be attributed to the Western Digital Red drives only being 5400 RPM, but that doesn’t really explain why I don’t see as much latency when using NFS.
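If you want to double-check that Round Robin actually took effect on your iSCSI devices, one quick way is to parse `esxcli storage nmp device list`. A minimal sketch, assuming it runs in an ESXi shell with Python available:

```python
# Minimal sketch: list each device's path selection policy via esxcli.
import subprocess

out = subprocess.check_output(
    ["esxcli", "storage", "nmp", "device", "list"]).decode()

device = None
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Device Display Name:"):
        device = line.split(":", 1)[1].strip()
    elif line.startswith("Path Selection Policy:") and device:
        policy = line.split(":", 1)[1].strip()
        print("%s -> %s" % (device, policy))  # VMW_PSP_RR = Round Robin
```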
For me everything on NFS is faster, even when the NFS volume is presented from the same set of disks as iSCSI. Whether it’s performing a Storage vMotion or installing an operating system onto a virtual machine, in every case NFS is faster than iSCSI.
I wish I had a better explanation as to why, but at least for me that’s what I’m seeing. Others see the same thing; check out Chris’s post here: http://wahlnetwork.com/2012/08/07/synology-ds411-vsphere-home-lab-storage-protocol-bakeoff/
Matt
Are your SSDs in JBOD, RAID or standalone?
Both the SSDs and the Red drives are in their own RAID-5 configs. As far as performance goes, I get better performance on NFS even when presenting the storage from the same RAID config and the same drives. I’m guessing something in the Synology iSCSI stack is inefficient; I don’t have any other explanation for why NFS is faster.
I’ve just got a similar box for my home lab (VMware, Microsoft, and Citrix testing). I’m wondering whether there will be a performance impact once the device’s storage is fully consumed.
I’ve already carved out two 2TB iSCSI LUNs using “Thin Provisioning” from DSM, which I take to mean that space is only allocated on demand.
Which brings us to the main point: since NAS disks are mechanical and based on spindles, the outer tracks will obviously produce higher performance.
At least for what I’m using the Synology for, I don’t expect this to be an issue. I’ll mostly be using the SSDs for vSphere, and the 2TB Western Digital Red drives will be used for backing up my PCs, pictures, and other home-related things. I may have some VMs on there, but that will be the exception, not the norm. That is, unless I decide to grow my lab out a lot more than it is right now. 🙂
I was looking at getting one of these for a home lab but had not yet considered SSDs. Can you create flash pools like in NetApp to make a hybrid pool of disks? I’m always concerned about the lifespan of SSDs.
Did you try the Synology Hybrid RAID? It appears to be similar to NetApp’s RAID system, and you can also upgrade the disk sizes.
My idea was to have one big pool of disks that shares I/O across all disks and present some as NFS, some as CIFS, and some as iSCSI direct to the VM.
Would love to have your thoughts/feedback!
Hey Matthew,
It didn’t make sense to me to try to combine the smaller SSDs and the larger SATA drives in the same RAID group. I wanted to keep them separate so I could put more important VMs on the SSDs and lesser-used VMs on the SATA drives (where I have a lot more space). I’m just using regular RAID-5 for my setup and not the Synology Hybrid RAID. SHR is nice for the ability to swap in disks of different sizes and things like that, but I don’t think it does anything specifically for performance when mixing SSDs and SATA.
I was also wondering this myself. I have a DS1512+ and I’m looking to get some 4TB NAS drives, but I was conflicted about whether I should use RAID-5 or SHR. I was actually contemplating mirroring 2 drives and making the other 2-3 an SHR volume or another mirrored set, with the 5th drive being a hot spare, or the 5th drive being an SSD.
I have a similar setup: a DS1813+ with WD Reds. I am maxing out the speed on 1 NIC, so I tried to bond two NICs on the NAS, but no matter what I did, I could not get the 2nd bonded link to be used. I am running an HP ProCurve 2824 and enabled trunking with LACP.
The bonded NICs on the ESXi server seem to be working fine; I can see traffic on both of them, but not on the NAS.
Have you tried bonding the NICs on the NAS?
I have not been able to use the link aggregation features of my DS1812+. My switch only supports static LAGs and the NAS requires dynamic. Maybe that is the problem with your setup also?
The HP ProCurve 2824 supports dynamic LAGs, and the Synology control panel shows that it is connected, too.
I opened a ticket with Synology tech support, and they told me that NIC teaming will not work. “You cannot have more than one 1GB link going to one server” is what they told me. I find that hard to believe.
Weird. It definitely supports it and lots of people have blogged about setting it up and then saturating both NICs. I think maybe the Synology tech support representative didn’t understand your request.
During my test, I saturated one NIC and then booted up another VM stored on the NFS share, and it was a crawl! Looking at the stats on the HP switch shows no activity on the 2nd NIC. The two ports going to the ESXi server show about the same activity. Ugh…
First of all, great post!
1. What firmware are you running on the Dell PowerConnect?
http://support.dell.com/support/edocs/network/pc28xx/en/index.htm
It should be PowerConnect 2800, V1.0.0.45, A07.
2. I don’t believe the PowerConnect has no active mode for bonding/teaming… even my old 8-port managed switch has it!
3. I found you while Googling for WD Red 2TB high-latency problems. Isn’t that strange? 😀
Hi Matt. Have you tried comparing iSCSI vs. NFS performance using the new DSM 5.0? I have the same DS1812+ setup but am away from home, so I can’t do the testing myself. Thanks!
The best way I can describe the iSCSI performance is miserable. I did detailed performance testing before and after the upgrade, and it isn’t worth publishing the results. The iSCSI performance is so far behind NFS that it isn’t even worth considering.
I love Synology and their devices do a lot of things really well, but iSCSI isn’t one of them.
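For anyone who wants to reproduce the comparison, the important part is running an identical workload against both protocols. Here’s a minimal sketch that runs the same fio job from a Linux VM against a disk on each datastore; the mount paths and job parameters are placeholders, not the exact tests behind the numbers above.

```python
# Minimal sketch: run one identical fio job per datastore-backed mount
# (one disk on an iSCSI datastore, one on an NFS datastore) and compare.
import subprocess

JOB = ["fio", "--name=randrw", "--rw=randrw", "--bs=4k",
       "--iodepth=32", "--size=1g", "--runtime=60", "--time_based",
       "--direct=1", "--group_reporting"]

for label, path in [("iscsi", "/mnt/iscsi-disk/testfile"),
                    ("nfs", "/mnt/nfs-disk/testfile")]:
    print("--- %s ---" % label)
    subprocess.run(JOB + ["--filename=%s" % path], check=True)
```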
Matt