Friday, May 9, 2008

"World's 1st iSCSI optimized switch" Dell PowerConnect, Dell EqualLogic says NO


So we ordered two Dell EqualLogic boxes at huge cost, and then we got to the point of ordering our Dell switches, at which point Dell turned around and said that's not a good idea: we should order Cisco 3750s, and our current Cisco 4510 SUP V will not do the job.

So I've got $200,000 of Dell SAN and $120,000 of Cisco switches. I don't want to buy any more Cisco switches, as I'm so fed up with their products.

Anybody got any idea when Dell is going to sort out their own product?

7 comments:

Anonymous said...

There are some good comments on arstechnica.com about this:
http://episteme.arstechnica.com/eve/forums/a/tpc/f/833003030931/m/259008712931

Anonymous said...

Check out http://www.derekschwab.com/2008/05/networking-for-iscsi.html

Anonymous said...

Have you considered the Catalyst 3560 series switches? They are basically the same as the 3750 series, minus the stacking capability. No point in paying extra for the stacking functionality if you don't need it.

The Catalyst 4500 series is a really poor choice for iSCSI. There are modules available that support jumbo frames and would probably work, but performance would be limited by the way bandwidth allocation works. There is a total of 3Gbps shared between each group of 6 ports and only 6Gbps between the module and the backplane. So there is a 2:1 oversubscription ratio within the module itself and, with a 48-port module, an 8:1 oversubscription to the backplane. The "E" series chassis and modules can do 24Gbps to the backplane, but the oversubscription within the module itself is the same.

Now, generally, you're going to have two links to each server/HBA, of which only one is active at a time, so the 2:1 oversubscription is probably fine, since really only half of your ports will be active at once in most cases. Still, though, the 3560 series would probably be the best choice for your application.
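To make those oversubscription figures concrete, here is a quick back-of-the-envelope calculation (an illustrative Python sketch; the 3Gbps-per-6-port-group and 6Gbps-backplane numbers are simply the ones quoted above, not vendor-verified):

    # Back-of-the-envelope oversubscription math for a classic Catalyst 4500
    # 48-port GbE line card, using the figures quoted in the comment above.
    PORT_SPEED_GBPS = 1        # each access port is 1 GbE
    PORTS_PER_GROUP = 6        # ports share ASIC bandwidth in groups of 6
    GROUP_BW_GBPS = 3          # bandwidth available per 6-port group
    MODULE_PORTS = 48          # ports on the line card
    BACKPLANE_BW_GBPS = 6      # classic chassis; roughly 24 Gbps on the "E" series

    group_demand = PORTS_PER_GROUP * PORT_SPEED_GBPS        # 6 Gbps offered per group
    group_oversub = group_demand / GROUP_BW_GBPS             # 2.0 -> 2:1 within the group

    module_demand = MODULE_PORTS * PORT_SPEED_GBPS           # 48 Gbps offered per module
    backplane_oversub = module_demand / BACKPLANE_BW_GBPS    # 8.0 -> 8:1 to the backplane

    print(f"Within a port group : {group_oversub:.0f}:1")
    print(f"Module to backplane : {backplane_oversub:.0f}:1")

With the "E" series backplane figure (24Gbps) the second ratio drops to 2:1, which matches the point above that only the in-module oversubscription stays the same.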

Anonymous said...

Hello,

Before venturing into expensive switches like Cisco, give the other switches on the market a look - never go just by the name. Jumbo frames and flow control are very important in any iSCSI solution.
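A rough illustration of why jumbo frames matter (a hedged Python sketch; 1500 and 9000 bytes are the usual standard and jumbo MTU values, and the calculation ignores header overhead):

    # Why jumbo frames help iSCSI: far fewer frames per second for the same data
    # rate, which means less per-frame protocol and interrupt overhead. Illustrative only.
    def frames_per_second(mtu_bytes: int, link_gbps: float = 1.0) -> float:
        """Frames per second needed to fill the link at a given frame payload size."""
        return link_gbps * 1e9 / 8 / mtu_bytes

    print(f"Standard MTU 1500: ~{frames_per_second(1500):,.0f} frames/s")  # ~83,000
    print(f"Jumbo MTU 9000   : ~{frames_per_second(9000):,.0f} frames/s")  # ~14,000

Flow control matters for a related reason: without it, a burst that exceeds a port's buffers is simply dropped, and the resulting TCP retransmissions hurt iSCSI latency badly.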

Most iSCSI solutions on the market do not support more than 3 or 4 GbE ports. In general, the bottleneck is always the GbE ports, no matter what drives you use - SAS or SATA. If you start stacking up boxes with only a few iSCSI ports each, then managing the switch and ports alone becomes a challenge.

Don't overspend on expensive components like switches, GbE NICs, etc., but carefully consider the future transition as well when designing iSCSI solutions.

The reason iSCSI is chosen is to reduce costs while at the same time not compromising on performance.

Go with a modular component system when designing storage solutions, because technology keeps changing over a short span.

As you know, 10GbE is the future (at least for the next 5 years) and will probably replace FC in data centers, or at least new FC investments will stall.

How will your storage be affected when 10GbE becomes the standard? How much am I going to spend to move to 10GbE? How am I going to justify the cost? These are some of the questions one should ask before moving to iSCSI.

There is ONLY one product on the market today that supports a modular upgrade of the iSCSI storage system without changing a thing other than the IP switch. All servers will still be able to communicate with the iSCSI storage system, but you will have a better iSCSI solution in place for a fraction of the cost one would spend on changing the infrastructure/storage.

There is a product from iStor that is available with multiple GbE ports (8, to be precise) as well as 10GbE, with single- and dual-controller options. Say you buy an 8-port version today to meet your requirements; moving forward, you could upgrade to a 10GbE controller just by unplugging the 8x1GbE controller and plugging in the new 10GbE one, without changing any storage configuration or settings, and the system will just work - only with a better interconnect.

For comparison, a 10GbE switch today, for example from Fujitsu, costs less than $10K retail with 12x10GbE ports.

One has to carefully consider the solution they are investing in - is it a hardware or a software solution? Any software solution, especially for RAID and storage virtualization, is going to be slower, so give it careful consideration before investing in iSCSI - DO NOT GO BY BRAND NAMES.

With just 8x1GbE or 1x10GbE (with a single controller) one could achieve 800MB/s+ and 1,102MB/s+ respectively, with scalability up to 36TB today with SATA and ~16TB with SAS - the controller supports both SATA and SAS.
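As a rough sanity check on those throughput numbers, here is a simple wire-rate calculation (an assumption-laden Python sketch; the 0.9 payload-efficiency factor is an estimate, not a measured value):

    # Approximate network-side throughput ceiling for an iSCSI target (illustrative).
    # Assumes ~90% of raw line rate is usable payload after Ethernet/IP/TCP/iSCSI
    # overhead with jumbo frames; real efficiency varies with configuration.
    PAYLOAD_EFFICIENCY = 0.9

    def max_throughput_mb_s(ports: int, gbps_per_port: float) -> float:
        """Best-case sequential throughput in MB/s for a set of links."""
        return ports * gbps_per_port * 1000 / 8 * PAYLOAD_EFFICIENCY

    print(f"4 x 1GbE : ~{max_throughput_mb_s(4, 1):.0f} MB/s")    # ~450 MB/s
    print(f"8 x 1GbE : ~{max_throughput_mb_s(8, 1):.0f} MB/s")    # ~900 MB/s
    print(f"1 x 10GbE: ~{max_throughput_mb_s(1, 10):.0f} MB/s")   # ~1125 MB/s

The 800MB/s and ~1,100MB/s figures above sit just under these ceilings, which is also why a box limited to 3 or 4 GbE ports tops out on the network side well before the disks do.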

Hope this helps,
Raman
RNDTS

Anonymous said...

What was the reasoning they gave you for not using their switches?

Anonymous said...

From my own personal experience with Dell switches, the communication between the ASICs is the problem. The people at EqualLogic implied that flow control and oversubscription of bandwidth between the ASICs was the issue. I can't confirm that. What I do know is that my IOMeter results dropped like a rock as soon as I pushed data from one ASIC to another.

I got 2960Gs and they've worked out fine.

Adam Madsen said...

So I realize this is an old post, but what did you end up doing?