

1U San




Posted by kspare, 06-07-2011, 12:03 AM
Does anyone have good experience with any 1U SANs? Something with 4 SATA drives, 2 gig links, etc.

Posted by Karl Austin, 06-07-2011, 11:48 AM
No offence, but that's not a SAN, that's a server; a NAS at a push. 4 x SATA drives, even in RAID-10, isn't going to give any sort of acceptable performance for multiple machines accessing it.

Posted by Dave W, 06-07-2011, 11:59 AM
I have to agree with KDA here. You need to be looking at a minimum of 8 drives and a proper software layer on top of that. If you want to build a 4-drive SAN, then you could just use any 4-bay server and an open-source SAN distro, but that's not really going to get you far.

Posted by asturmas, 06-07-2011, 12:50 PM
Agreed. A minimum of 12/16/24 drives (SAS or SATA3) with fibre connections...

Posted by OffshoreRacks, 06-07-2011, 01:03 PM
Does this 1U server support 2.5" SAS or SSD? It's rare to see a 1U case with (4) 3.5" slots.

Posted by YUPAPA, 06-07-2011, 01:08 PM
Not rare; it's pretty common. What I find even more rare is a 1U case supporting 8 x 2.5" drives, which I have been looking for for months.

Posted by Karl Austin, 06-07-2011, 01:12 PM
4 x 3.5" used to be rare a few years ago, these days it's pretty much standard.

Posted by phpcoder, 06-07-2011, 01:24 PM
Are you looking for something specific? http://www.pcconnectionexpress.com/1...3tq-700cb.html

Posted by Visbits, 06-07-2011, 01:29 PM
1U san: http://www.dell.com/us/business/p/poweredge-c1100/pd Just use the 10 disk version.

Posted by mazedk, 06-07-2011, 01:38 PM
I have to disagree on the fibre part. A lot of people are running iSCSI on existing ethernet infrastructure, since you can get 10GbE NICs pretty decently priced these days. If you can push it to 2U, you can get an IBM DS3500 series SAN with fibre/iSCSI/InfiniBand/SAS connectors. http://www-03.ibm.com/systems/storage/disk/ds3500/
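For reference, attaching an iSCSI LUN over plain ethernet is simple with the Linux open-iscsi initiator. A minimal sketch, assuming a target is already listening at 192.168.1.10 (the address and IQN below are hypothetical):

```shell
# Discover targets advertised by the storage box (sendtargets discovery)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to a discovered target (IQN is illustrative only)
iscsiadm -m node -T iqn.2011-06.com.example:storage.lun1 -p 192.168.1.10 --login

# The LUN now shows up as an ordinary block device (e.g. /dev/sdb)
lsblk
```

No special hardware is needed on the initiator side; this is what makes iSCSI on commodity ethernet attractive versus dedicated FC HBAs.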

Posted by YUPAPA, 06-08-2011, 01:35 AM
Yes, something like that

Posted by bqinternet, 06-08-2011, 01:54 AM
Supermicro has plenty of models that take 8, and one that supports 10: http://www.supermicro.com/products/c...16TQ-R700C.cfm

Posted by prashant1979, 06-11-2011, 10:42 AM
What is the cost of a SAN device of 16 TB to 64 TB?

Posted by Karl Austin, 06-12-2011, 02:59 PM
How long is a piece of string? It depends: SATA/SAS/SSD; 7.2k/10k/15k RPM; FC/FCoE/iSCSI (GE/10GE/InfiniBand); features such as replication and snapshots.

Posted by MikeTrike, 06-12-2011, 03:30 PM
QNAP works fairly well for an iSCSI SAN, but they are more along the lines of an SMB appliance. http://www.qnap.com/Products.asp

Posted by SuperVDS, 06-12-2011, 09:29 PM
Thanks for the links to the 1U SANs. I didn't think there were any good 1U SANs. For the price per GB, I think it's best to go for 2U with 3.5" disks.

Posted by kaniini, 06-13-2011, 04:48 AM
You might have good luck with CORAID. Generally speaking, you should avoid deploying iSCSI over ethernet infrastructure and instead use a layer-2 protocol like AoE. Otherwise the latency is going to be a bit rough, and the IP protocol adds additional complexity, which is unnecessary if you're just trying to export some block devices.
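As a rough illustration of how lightweight AoE is, here is a hypothetical export using the open-source vblade daemon (the shelf/slot numbers, interface, and device names are made up for the example):

```shell
# On the storage box: export /dev/sdb as AoE shelf 0, slot 1 over eth0
vblade 0 1 eth0 /dev/sdb &

# On the client, with the aoe kernel module loaded:
modprobe aoe
aoe-discover        # broadcast on the LAN for AoE targets
aoe-stat            # the export appears as /dev/etherd/e0.1
```

Because AoE frames never leave layer 2, there is no IP addressing, routing, or iSCSI session setup to manage; the trade-off is that traffic cannot cross a router.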

Posted by DeanoC, 06-19-2011, 02:26 PM
Micro-SANs using SSDs and plenty of RAM can be effective for small deployments, as long as you are realistic about the performance. Try 4 Gen3 SSDs, a good CPU and as much RAM as you can get (16 GiB+), and you should easily saturate a couple of GbE links. Another approach that works well is a ZFS OS (OpenIndiana or FreeBSD): use an SSD as L2ARC and SLOG plus 3 spinning pieces of rust (HDDs), and export via iSCSI. Once you move up to a well-built 2U SAN, however, even InfiniBand or 10GbE easily become the bottleneck.
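A minimal sketch of that ZFS layout on FreeBSD, assuming three HDDs (ada1-ada3) and one SSD split into two partitions (ada0p1 for the SLOG, ada0p2 for L2ARC); all device names and sizes here are illustrative:

```shell
# Pool of three spinning disks in raidz1
zpool create tank raidz ada1 ada2 ada3

# Small SSD partition as a dedicated intent-log device (SLOG)
zpool add tank log ada0p1

# Remainder of the SSD as a second-level read cache (L2ARC)
zpool add tank cache ada0p2

# Carve out a block volume (zvol) to export over iSCSI
zfs create -V 500G tank/vol0
```

The SLOG absorbs synchronous writes (which iSCSI initiators issue heavily) while the L2ARC extends the RAM-based read cache, which is why this combination punches above its disk count.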

Posted by kspare, 06-19-2011, 03:40 PM
I do respect that some might not think of a QNAP as a SAN and more of a NAS, so to be fair, it's more of an SMB SAN. We're not a large hosting company, so I don't need a large 20-disk system. I did pick up a QNAP 459U-RP with 4 x 2TB WD Black drives, and the performance is impressive. I'm able to easily saturate a 1Gb link, and I'm still working on the bonded interfaces. What impressed me even more is the fact that NFS was actually faster than iSCSI for VMware. For anyone looking for a small solution like this, I can tell you this unit works well on a very small budget.
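For anyone doing the bonding part on a plain Linux box rather than a QNAP, aggregating two GigE ports looks roughly like this. This is a Debian-style /etc/network/interfaces fragment; the interface names, address, and the choice of 802.3ad (LACP) mode are assumptions, and the switch ports must be configured as a matching LAG:

```shell
# /etc/network/interfaces (requires the ifenslave package)
auto bond0
iface bond0 inet static
    address 192.168.1.20
    netmask 255.255.255.0
    bond-slaves eth0 eth1           # the two GigE ports to aggregate
    bond-mode 802.3ad               # LACP; switch must have a matching LAG
    bond-miimon 100                 # link-monitoring interval in ms
    bond-xmit-hash-policy layer3+4  # spread flows across both links
```

Note that a single TCP flow still rides one link; the hash policy only balances multiple flows, which is why multiple VMware hosts benefit more than one.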


