NAS with SSD and HDD - Multiple RAIDs

b1nuzz

Hi all,

I have a project coming up where I require the ability to write to a NAS from multiple PCs (circa 100) as quickly as possible.
We have around 100 PCs that will each be uploading approximately 4-6GB, all at the same time. I need this to take the shortest amount of time possible.
We are utilising a 10GbE network to help achieve this.
I have an 8-bay QNAP NAS.
I intend to load it with 4 SSDs (8TB) and 4 HDDs (12TB).

Am I able to create a RAID using the SSDs only, use this for the quick data transfer, and create a secondary RAID on the HDDs for longer-term storage?

What RAID would you recommend for the fastest write speeds in this scenario?
Will SSDs really be of benefit in this situation?

Thanks for your advice in advance.
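
(For a sense of scale, here's a rough back-of-envelope sketch in Python of the theoretical floor for a transfer like this, assuming ~5GB per client and a single 10GbE link on the NAS side - both assumptions rather than confirmed specs:)

```python
# Back-of-envelope: how long 100 clients x ~5 GB takes over a single
# 10 GbE link, assuming (optimistically) the link is the only bottleneck
# and can be driven at ~80% efficiency after protocol overhead.

clients = 100
gb_per_client = 5                    # middle of the stated 4-6 GB range
total_bytes = clients * gb_per_client * 1e9

link_bits_per_sec = 10e9             # one 10 GbE interface
efficiency = 0.8                     # assumed TCP/SMB overhead factor
usable_bytes_per_sec = link_bits_per_sec * efficiency / 8

seconds = total_bytes / usable_bytes_per_sec
print(f"Total data: {total_bytes / 1e9:.0f} GB")
print(f"Best case over one 10 GbE link: ~{seconds / 60:.1f} minutes")
# ~8-9 minutes at best; storage, CPU and concurrency overheads on the
# NAS will push the real figure well above this.
```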
 
It might be useful if you could cite the model number of your QNAP.

Whilst you are planning this, have you thought about how you are going to back up these sorts of volumes of data...? Not to mention some kind of hardware maintenance contract if this is for a business. For 100 clients, you may want some kind of authentication mechanism (login IDs and passwords) to provide authentication, auditing and security, depending on your use case.

From the timbre of your post, it sounds like you are setting up some kind of small-business environment - you might be best advised to seek professional help rather than DIY if you are not too confident about building and maintaining this sort of infrastructure.

I don't know QNAP, but usually decent kit can support multiple RAID arrays. It is inadvisable to mix SSDs/HDDs within the same array, and some kit simply won't let you. Writing to any redundant RAID array is slower than "plain jane" discs, whatever the technology (though like for like, one would expect a RAID of SSDs to be quicker than a RAID of HDDs where both are configured the same). The nature of writing to RAID means the write operation is not considered "good" until the data has been committed to all platters in the array and verified successful, which takes longer the more discs are involved, and longer still if parity calculations (a la RAID 5/6) are required. Wikipedia's articles on RAID are a good primer on the subject.
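
(To make the parity cost concrete, here's a toy Python sketch of RAID 5-style XOR parity - not how any real controller is built, just the arithmetic involved:)

```python
# Toy illustration of RAID 5-style parity: the parity block is the XOR
# of the data blocks, and any single lost block can be rebuilt by
# XOR-ing together everything that survives.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three "data disks" worth of blocks plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing disk 1 and rebuilding it from the survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)

# Every write has to update the parity as well as the data, which is
# part of why parity RAID writes lag behind "plain jane" discs.
```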

Cleverer kit and OSes have various caches and other "tricks" to speed this up and guarantee the integrity of the data - with prices to match. Some even have in-built mechanisms to provide "hierarchical" storage management, though we're probably getting into the realm of SANs rather than simple/cheap NASes.

One way you can increase the throughput across the network with such a large number of clients would be to use a networking technology called "link aggregation" between the NAS and the switch it connects to (if both support it).
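
(Worth noting that aggregation typically won't speed up any single client's transfer - the switch hashes each flow onto one member link - but with 100 clients the flows spread across all the links. A toy Python sketch of that hashing, assuming a hypothetical 4-port LAG; real hash algorithms vary by vendor:)

```python
# Why link aggregation helps many clients but not one: each flow is
# hashed onto a single member link, so one client never exceeds one
# link's speed, but 100 clients spread across all members.

import zlib
from collections import Counter

LINKS = 4  # assumed 4-port LAG between the NAS and its switch

def pick_link(src_ip, dst_ip, src_port, dst_port):
    """Toy stand-in for a switch's flow hash (vendor algorithms differ)."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return zlib.crc32(key) % LINKS

# 100 clients all talking SMB (port 445) to one NAS.
flows = [pick_link(f"10.0.0.{n}", "10.0.0.200", 50000 + n, 445)
         for n in range(1, 101)]
print(Counter(flows))  # flows land roughly evenly across the 4 links
```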
 
I have a project coming up where I require the ability to write to a NAS from multiple PCs (circa 100) as quickly as possible.
We have around 100 PCs that will each be uploading approximately 4TB all at the same time. I need this to take the shortest amount of time possible.
We are utilising a 10GbE network to help achieve this.
I have an 8-bay QNAP NAS.
I intend to load it with 4 SSDs (8TB) and 4 HDDs (12TB).

Well that can't be right. If each machine is saving 4TB and you have ~100 machines then you are going to need a lot more space, OR someone has made a mistake in their initial post?
 
Well that can't be right. If each machine is saving 4TB and you have ~100 machines then you are going to need a lot more space, OR someone has made a mistake in their initial post?

Post Updated. 4-6GB files.
 
If you're hitting a home/SMB NAS with sequential transfers from 100 clients, it's going to be flat on its backside regardless of what disk configuration you use (especially if they're wired clients).

IMO you would be better off maximising your spindle/drive count in one 8-disk array than splitting the disks - with 8 disks, whether they're SSDs or HDDs, the bottleneck is likely to be the processor or NICs rather than the storage back-end - this entirely depends on the model of NAS though.

I'd use RAID6 for 12TB HDDs, RAID5 if you use SSDs - either is going to hit the CPU in the NAS when writing, especially from 100 clients, but assuming it's not a bargain-basement model, most of them are up to the task. It would be worth performing a test with RAID10 as well though - on paper it's king for a multi-client workload, but I think the limited number of drives in the array and the limitations of other components in the NAS will mean it's much of a muchness between them.
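
(As a quick reference for the trade-offs being weighed here, a minimal Python sketch of usable capacity vs. fault tolerance per level - the 8 x 8TB disks are illustrative, not a recommendation:)

```python
# Usable capacity and worst-case fault tolerance for the RAID levels
# under discussion, assuming 8 identical disks.

def raid_summary(level, n_disks, disk_tb):
    if level == "RAID0":
        return n_disks * disk_tb, 0
    if level == "RAID5":
        return (n_disks - 1) * disk_tb, 1
    if level == "RAID6":
        return (n_disks - 2) * disk_tb, 2
    if level == "RAID10":
        # Survives 1 failure guaranteed, more if they hit different pairs.
        return (n_disks // 2) * disk_tb, 1
    raise ValueError(level)

for level in ("RAID0", "RAID5", "RAID6", "RAID10"):
    usable, tol = raid_summary(level, n_disks=8, disk_tb=8)
    print(f"{level:6s} usable: {usable:2d} TB, survives {tol}+ failure(s)")
```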

Alternatively, you could use a 4-disk RAID0 as a landing zone and schedule regular jobs to copy the data from there to the long-term storage area, but with so few disks, and the further read load of copying the data off, I think you'll be better off sticking all your available spindles or SSDs in one array.

If you're brave, and have regular and easily recoverable backups, an 8 SSD RAID0 is going to offer you the best possible multi-client write performance.

Load it up with as much cache as you can, enable write caching and put it on a UPS for safety, and that should help.

To be honest though, with 100 active clients, I think you're barking up the wrong tree with a small NAS - spindle count is king when it comes to the kind of write performance you're looking for.
 
It might be useful if you could cite the model number of your QNAP.

Yes, I agree Mickevh, it would help if you could tell us the model of the QNAP NAS box you have.
Also, how long is the project expected to last?

The QNAP box itself would need to be a high-end model to work for this, as it will need the CPU power, memory, provision for 10GbE network connections (on some lower models this is an optional extra) and facilities like link aggregation etc.
As to the array itself, I would agree with Mister_Tad and make it a single array using all 8 disks. Whether you use normal disks or SSDs is up to you, but SSDs would be more expensive.
Implementation-wise I would go with:

RAID 5 with hot swap (can manage 1 drive failure. With hot swap the array begins rebuilding immediately; all you need do is swap out the faulty drive. Please note that while rebuilding it is vulnerable, e.g. it could suffer another drive failure.)
RAID 6 with hot swap (can manage 2 drive failures.)

Please note the 2 configurations above provide resilience to the array but are in NO WAY a substitute for proper backups.

You would have to make provision for backups to external USB disks.
It must also be on a UPS.

From a personal point of view, I would look at implementing a proper server to cater for this project,
e.g. an HP ProLiant ML110 Gen10 server with Small Form Factor (SFF) disks (up to 16 2.5" drives).
 
RAID5 with 12TB HDDs would give me the cold sweats.

There are calculators out there that tell you that with drives with 10^14 URE rates you will absolutely and unquestionably lose all of your data during a rebuild.

Reality isn't quite so bleak and URE rates are a worst-case scenario (and much better with 10^15), but it's still a big risk - a rebuild with large disks like that will take several days, perhaps a week or weeks, all the while hammering all of the other disks with activity (and never mind absolutely destroying production performance for your 100 clients during that time).
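
(For the curious, the sums those calculators do look roughly like this - a Python sketch assuming an 8-disk RAID5 of 12TB drives and treating the URE rate as an independent per-bit probability, which is the pessimistic simplification they make:)

```python
# Probability of reading every remaining bit without a single
# unrecoverable read error during a RAID5 rebuild.

import math

DISK_TB = 12
SURVIVING_DISKS = 7  # 8-disk RAID5 minus the failed drive
bits_to_read = SURVIVING_DISKS * DISK_TB * 1e12 * 8

for ure_rate in (1e14, 1e15):  # bits read per expected URE
    # (1 - 1/rate)^bits ~= exp(-bits/rate) for tiny per-bit probabilities
    p_clean = math.exp(-bits_to_read / ure_rate)
    print(f"URE 1 in 10^{int(math.log10(ure_rate))}: "
          f"~{p_clean:.1%} chance of a clean rebuild")

# Roughly 0.1% for 10^14 drives and ~51% for 10^15 - hence the bleak
# calculators, and why real-world rates being better matters so much.
```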

I use 6TB 10^15 disks in RAID6 and I'm still nervous about rebuilds.
 
RAID5 with 12TB HDDs would give me the cold sweats.


I intend to load it with 4 SSDs (8TB) and 4 HDDs (12TB).

I think you might be overreacting. If you go back and re-read the OP's initial post after amendment, you'll see that the sizes considered are 2TB for the SSDs and 3TB for the ordinary HDDs.

I have watched RAID 5 arrays rebuild after a drive failure and not one has failed yet. I would agree with regards to drive size, in that the larger the HDD the longer it will take the array to rebuild after a failure, and that is something you want to keep to a minimum due to the performance hit it causes.
 
I think you might be overreacting. If you go back and re-read the OP's initial post after amendment, you'll see that the sizes considered are 2TB for the SSDs and 3TB for the ordinary HDDs.

I intend to load it with 4 SSDs (8TB) and 4 HDDs (12TB).

I was reading this as 4x8TB SSDs and 4x12TB HDDs - but your way works too.

I've seen (and sadly had to participate in the fallout of) double failures in an 8x600GB 15k RAID5 set in a full-on never-fails HDS enterprise array. As unlikely as it is, it happens.

That said, with up to 4TB disks I'd be pretty comfortable with RAID5.
 
Your photos don't appear to work in the first post on my mobile.

As others have said I would be concerned about concurrency and want to schedule uploads.

What is the project?

Watched a 6 x 12TB RAID 5 array take just over 20 days to initialise...
 
Your photos don't appear to work in the first post on my mobile.

I think they have been removed by the moderators, as on my desktop both pictures have "Advertisement" next to them but no picture.
 
