26 Comments
pjkenned - Monday, October 18, 2010 - link
Stuff is still new but is pretty wow in real life. Clients are based on Android and make that Mitel stuff look like 1990s tech.
Gilbert Osmond - Monday, October 18, 2010 - link
I enjoy and benefit from Anandtech's articles on the larger-picture network & structural aspects of contemporary IT services. I wonder if, as Anandtech's readership age-cohort "grows up" and matures into higher management- and executive-level IT job positions, the demand for articles with this kind of content & focus will increase. I hope so.
AstroGuardian - Tuesday, October 19, 2010 - link
FYI it does to some extent... :) "You can't stop the progress" right?
JohanAnandtech - Tuesday, October 19, 2010 - link
While we get fewer comments on our enterprise articles, they do pretty well. For example, the Server Clash article was in the same league as the latest GeForce and SSD reviews. We can't beat Sandy Bridge previews of course :-).
And while in the beginning of the IT section we got a lot of AMD vs Intel flames, nowadays we get some very solid discussions, right on target.
HMTK - Tuesday, October 19, 2010 - link
Like back then at Ace's? ;-)
rbarone69 - Tuesday, October 19, 2010 - link
You couldn't have said it better! As an IT Director, I find the information this site provides invaluable to my decision making. Articles like this give me a jumping-off point for thinking outside the box, or for adding tech I'd never heard of to our existing infrastructure.
What's amazing is that we put very little into new equipment and are able to do what cost millions just 10 years ago. We can now offer 99.999% normal availability, with a maximum of only 30 minutes of downtime during a full datacenter switch from Toronto to Chicago!
The combination of fast multi-core processors, virtualization tech and cheaper bandwidth has made this type of service available to companies of all sizes. Very exciting times!
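For context on what those figures imply, here is a quick back-of-the-envelope availability calculation (the numbers are generic illustrations, not rbarone69's actual SLA terms):

```python
# Rough availability arithmetic: how many minutes of downtime per
# year a given number of "nines" allows. Illustrative only.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    allowed = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.5f} -> {allowed:.1f} min of downtime/year")

# 0.99900 -> 525.6 min of downtime/year
# 0.99990 -> 52.6 min of downtime/year
# 0.99999 -> 5.3 min of downtime/year
```

Five nines allows only about 5 minutes of unplanned downtime per year, so a 30-minute datacenter switch is presumably counted as planned maintenance outside that availability figure.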
FunBunny2 - Monday, October 18, 2010 - link
The problem with Clouding is that systems are built to the lowest common denominator (which is to say, Cheap) hardware. The cutting edge is with SSD storage, and it's not likely that public Clouds are going to spend the money.
Mattbreitbach - Monday, October 18, 2010 - link
I actually see this happening going forward. I would put money on public cloud hosts offering different storage options, with pricing brackets to match. I also do not believe that many of the emerging cloud environments are being built with the cheapest hardware available. I would be more inclined to think that some of the providers out there are going for high-end clients who are willing to shell out the cash for performance.
mlambert - Monday, October 18, 2010 - link
3PAR, HDS (VSPs) and soon EMC will all have some form of block/page/region-level dynamic optimization for auto-tiering between SSD/FC-SAS/SATA. When the majority of your storage is 2TB SATA drives but you still have the hot 3-5% on SSD, the costs really come down.
HDS and 3PAR both do it very well right now... with HDS firmly in the lead come next April...
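To make the cost argument concrete, here is a minimal sketch of the blended cost of a tiered pool versus buying everything on FC/SAS; the per-GB prices are made-up placeholders, not vendor figures:

```python
# Blended cost of an auto-tiered pool: the hot 5% lives on SSD,
# the rest on cheap SATA. All prices are hypothetical placeholders.
PRICE_PER_GB = {"ssd": 10.0, "fc_sas": 3.0, "sata": 0.5}

def tiered_cost(total_gb: float, hot_fraction: float) -> float:
    """Pool cost with hot_fraction on SSD and the remainder on SATA."""
    hot = total_gb * hot_fraction
    return hot * PRICE_PER_GB["ssd"] + (total_gb - hot) * PRICE_PER_GB["sata"]

total = 100_000  # a 100 TB pool
print(f"tiered:     ${tiered_cost(total, 0.05):,.0f}")
print(f"all FC/SAS: ${total * PRICE_PER_GB['fc_sas']:,.0f}")
# tiered:     $97,500
# all FC/SAS: $300,000
```

Even with SSD at several times the per-gigabyte price of FC/SAS, keeping only the hot slice on flash can undercut an all-FC pool, which is the economics behind auto-tiering.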
The problem I see is the 100-120km dark fiber sync limitation. Once someone figures out how to stay synchronous with 20-40ms latency (or the internets somehow figure out how to reduce latency) we will have some pretty cool "clouds".
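For a sense of why distance caps synchronous replication, here is a rough speed-of-light calculation, assuming light propagates through fiber at about 200,000 km/s and ignoring all switch and array latency:

```python
# Round-trip propagation delay over dark fiber. Light in glass moves
# at roughly 2/3 the speed of light in vacuum, ~200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def rtt_ms(one_way_km: float) -> float:
    """Best-case round-trip time for a given one-way fiber run."""
    return 2 * one_way_km / FIBER_KM_PER_MS

for km in (100, 1000, 4000):
    print(f"{km:>5} km -> {rtt_ms(km):.1f} ms round trip")
#   100 km -> 1.0 ms round trip
#  1000 km -> 10.0 ms round trip
#  4000 km -> 40.0 ms round trip
```

So a 20-40ms budget corresponds to roughly 2,000-4,000 km of fiber before any equipment latency, and every synchronously acknowledged write pays that round trip; that is what the 100-120km limit protects you from.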
rd_nest - Monday, October 18, 2010 - link
Not willing to start another vendor war here :)
Wanted to make a minor correction - EMC already has dynamic sub-LUN block optimization, also called FAST (Fully Automated Storage Tiering) like you mentioned. This is in both CLARiiON and V-Max... the implementation is different, but works almost the same.
Don't you feel 20-40ms is a bit too much? Most database applications, and most of the popular MS applications, don't like that amount of latency. Though it's quite subjective, I tend to believe that 10-20ms is what most people want.
Well, I am sure if it is reduced to 10-20ms, people will start asking for 5ms :)
HMTK - Tuesday, October 19, 2010 - link
Well you don't NEED cutting-edge storage for a lot of things. In many cases more, cheaper machines can be more interesting than fewer, more expensive machines. A lower-specced machine going down in a large cluster of such machines might have less of an impact than a high-end server going down in a cluster of only a few machines. For my customers (SMBs) I prefer more but less powerful machines. As usual YMMV.
Exelius - Wednesday, October 20, 2010 - link
Very rarely do people actually need cutting edge. Even if you think you do, more often than not a good SAN operating across a few dozen spindles is much faster than you think. Storage on a small scale is tricky and expensive; storage on a large scale is easy, fast and cheap. You'd be surprised how fast a good SAN can be (even on iSCSI) if you have the right arrays, HBAs and switches.
And "enterprise" cloud providers like SunGard and IBM will sell you storage and deliver minimum levels of IOPS and/or throughput. They've done this for at least 5 years (which is the last time I priced one of them out). It's expensive, but so is building your own environment. And remember to take into account labor costs over the life of the equipment; if your IT guy quits after 2 years you'll have to train someone, hire someone pre-trained, or (most likely) hire a consultant at $250/hr every time you need anything changed.
Cloud is cheaper because you only need to train your IT staff on the cloud, not on whatever brand of server, HBAs, SAN, switches, disks, virtualization software, and so on. For most companies, operating IT infrastructure is not a core competency, so outsource it already. You outsource your payroll to ADP, so why not your IT infrastructure to Amazon or Google?
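The labor-cost point is easy to see in a toy three-year TCO comparison; every figure below is a made-up placeholder, not a quote from any provider:

```python
# Toy three-year total-cost-of-ownership comparison between owned
# gear plus outside consulting and a rented cloud environment.
# All numbers are hypothetical placeholders for illustration.
YEARS = 3

def on_prem_tco(hardware: float, consult_hours_per_month: float,
                hourly_rate: float) -> float:
    labor = consult_hours_per_month * 12 * YEARS * hourly_rate
    return hardware + labor

def cloud_tco(monthly_bill: float) -> float:
    return monthly_bill * 12 * YEARS

print(on_prem_tco(hardware=60_000, consult_hours_per_month=20,
                  hourly_rate=250))   # 240000.0
print(cloud_tco(monthly_bill=3_500))  # 126000.0
```

Once consulting hours at $250/hr enter the picture, the hardware line is rarely the dominant term, which is the part that build-your-own estimates tend to leave out.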
Murloc - Tuesday, October 19, 2010 - link
I love these articles about IT stuff in large enterprises. They are so clear even for noobs. I don't know anything about this stuff but thanks to AnandTech I get to know about these exciting developments.
dustcrusher - Tuesday, October 19, 2010 - link
"It won't let you tripple your VM resources in a few minutes, avoiding a sky high bill afterwards."Triple has an extra "p."
"If it works out well, those are bonusses,"
Extra "s" in bonuses.
HMTK - Wednesday, October 20, 2010 - link
I believe Johan was thinking of http://en.wikipedia.org/wiki/Tripel
JohanAnandtech - Sunday, October 24, 2010 - link
Scary how you can read my mind. Cheers :-)
iwodo - Tuesday, October 19, 2010 - link
I admit first that I am no expert in this field, but Rackspace Cloud Hosting seems much cheaper than Amazon, and I could never understand why anyone uses EC2 at all; what advantage does it give compared to Rackspace Cloud?
What alerted me was the cost you posted, which surprised me.
iwodo - Tuesday, October 19, 2010 - link
Argh... somehow posted without knowing it.
And even with the cheaper price of Rackspace, I still consider the cloud thing expensive.
For small to medium-sized web sites, traditional hosting still seems to be the best value.
JonnyDough - Tuesday, October 19, 2010 - link
...and we don't want to "be there". I want control of my data, thank you. :)
pablo906 - Tuesday, October 19, 2010 - link
Metro clusters aren't new, and you can already run active-active metro clusters on 10Mb links with a fair amount of success. NetApp does a pretty good job of this with XenServer. Is it scalable to extreme levels? Well, certainly it's not as scalable as Fibre Channel on a 1Gb link. This is interesting tech and has promise in 5 years. American bandwidth is still archaically priced and telcos really bend you over for fiber. I spend over $500k/yr on telco-side network expenses already, and that's using a slew of 1Mb links with a fiber backbone.
1Gb links simply aren't even available in many places. I personally don't want my DR site 100km away from my main site. I'd like one on each coast if I was designing this system. It's definitely a good first step.
Having worked for ISPs, I think they may be the only people in the world who will find this reasonable to implement quickly. ISPs generally have low-latency, multi-Gb fiber rings that meshing a storage fabric into wouldn't be difficult. The crazy part is that it needs nearly the theoretical limit of the 1Gb link to operate, so it really requires additional infrastructure costs. And if a tornado, hurricane, or earthquake hits your datacenter, the site 100km away will likely also be feeling the effects. It is still nice to have somewhere to replicate data to, however, in the hope that you don't completely lose all your physical equipment in both.
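"Nearly the theoretical limit" is worth quantifying. Here is a rough sketch of how long a full mirror resync takes over a given link, assuming the link runs flat out with no protocol overhead (real iSCSI or FC traffic never quite gets there):

```python
# Best-case time to push a full copy of a dataset over a link,
# ignoring protocol overhead and competing traffic.
def resync_hours(data_tb: float, link_gbps: float) -> float:
    bits = data_tb * 8e12               # decimal TB -> bits
    return bits / (link_gbps * 1e9) / 3600

for gbps in (1, 10):
    print(f"1 TB over {gbps:>2} Gbps: {resync_hours(1, gbps):.1f} h")
# 1 TB over  1 Gbps: 2.2 h
# 1 TB over 10 Gbps: 0.2 h
```

At 1 Gbps a single terabyte takes over two hours even in the best case, so steady-state replication plus any resync after an outage can easily saturate the link.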
How long-lasting is FC anyway? It seems there is a ton of emphasis still on FC when 10Gb Ethernet is showing scalability and ease of use that's really nice. It's an interesting crossroads for storage manufacturers. I've spoken to insiders at a couple of the BIG players who question the future of FC. I can't be the only person out there thinking that leveraging FC seems to be a losing proposition right now. iSCSI over 10Gb is very fast, and you have things like link aggregation, MPIO, and VLANs that really help scale those solutions and allow you to engineer some very interesting configurations. NFS over 10Gb is another great technology that makes management extremely simple: you have VHD files and you move them around as needed.
Virtualization is a game changer in the Corporate IT world and we're starting to see some really cool ideas coming out of the big players.
chusteczka - Wednesday, October 20, 2010 - link
This author needs to submit his work to a proofreading editor before it gets published.akocsis - Wednesday, October 20, 2010 - link
There are absolutely no words about Microsoft's offering - Azure - in the article. Why do you think they are not in the game? Azure is a totally different approach, as you buy computing "blocks" (which are VMs in the background), not the VMs themselves. You don't want to maintain VMs, you want computing capacity without the need for OS maintenance...
Azure is not flexible enough for general-purpose loads. Great if you're developing a .NET web app; not so much if you're trying to run an Active Directory backup DC. It's nice to be able to ship a .vmdk across the internet to clone a server.
Remember, we told you in the first lines that we would focus on Infrastructure as a Service. Azure is PaaS, so it is only natural that it does not appear in an article about IaaS.
Speaking solely from a personal computing viewpoint: unnecessary, expensive, insecure. The whole PC revolution was a move AWAY from central computers with dumb terminals.
A note on the negativity towards cloud computing. A lot of it ignores the situational benefits in favor of fear: fear of losing freedoms and of being vulnerable as a centralized security target. I can understand that, as it may be a valid concern. Companies using clouds will inevitably force the tech on the general consumer in a more and more invasive way in the decades to come. Cloud computing certainly has its uses, but I'd never want it to take over. Even in a future where we have absurd bandwidth and nil latencies, the idea of centralization is always a bad one. With "clouds", power plants, or the forever-proposed wireless electricity grid, you're always setting yourself up for failure. Decentralization and redundancy are by far the best solutions. Every neighborhood in the future should have its own power plant; every person should have all of their personal data embedded in their body, only sharing what they want when they want. Mass identity theft would become obscure, and fear of catastrophic accidents or attacks would be averted. At the end of the day, decentralization is always better, including for the backbones used to share data.