Monday, October 25, 2010
By David Gross
Cabling manufacturer CommScope (CTV) said today that it is in talks to be taken private by The Carlyle Group. News of the potential deal, which would be worth $3 billion, sent CommScope's shares soaring 30% to a close of just over $30 a share and a market cap of roughly $2.9 billion.
Along with Siemon, Panduit, Belden (BDC), Berk-Tek, Tyco Electronics (TEL), Amphenol (APH), and Corning (GLW), CommScope is among a group of vendors that supply data centers with fiber optic and copper cabling. Most of these companies sell into multiple industries, so they typically do not depend on data centers to the same extent that traditional IT and networking manufacturers do. In CommScope's case, its 2007 acquisition of Andrew gave it a large presence in the wireless market, and it has long been the leading supplier of coaxial cable to the cable TV industry. The company was hit hard by the recession and saw revenue fall 25% in 2009, although it has begun to rebound, with last quarter's revenue of $838 million representing a year-over-year increase of 7%.
While there are a number of silicon photonics and active optical cable manufacturers that could go public over the next few years, it is very difficult for public market investors to take advantage of the growth in data center cabling.
Tuesday, September 28, 2010
Transitioning Your Data Center from Copper to Fiber
By Lisa Huff
Companies like Corning (GLW) like to tell data center managers that with the advent of 10G, they should be transitioning their networks from mostly copper connections to all fiber optics. But, as many of you probably know, this is easier said than done. There are many things to consider:
1. How much more is it going to cost me to install fiber instead of copper?
2. Do I change my network architecture to Top-of-Rack in order to facilitate using more fiber? What are the downsides of this?
3. Is it really cost-effective to go all-optical?
4. What is my ROI/IRR if I do go all-optical?
5. Will it really help my power and cooling overload if I go all-optical?
It is very difficult to get specific answers for a particular data center because each one is different, and guidelines from industry vendors may be skewed based on what they are trying to sell you. OFC/NFOEC management has recognized this and asked me to teach a new short course for its 2011 conference. Data Center Cabling – Transitioning from Copper to Fiber will be part of the special symposium Meeting the Computercom Challenge: Components and Architectures for Computational Systems and Data Centers. I invite your ideas on specifics you would like to see covered in this new short course.
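To make questions 1 and 4 concrete, here is a minimal back-of-the-envelope payback sketch. Every input – the port count, the per-port cost premium for fiber, the wattage saved, the overhead factor, and the electricity rate – is an illustrative assumption to be replaced with your own quotes; none of these figures come from a vendor or from the course material.

# Hypothetical payback estimate for an all-optical build versus copper.
# All dollar figures and wattages below are illustrative assumptions.

def simple_payback_years(extra_capex, annual_savings):
    """Years needed to recover the added up-front cost from annual operating savings."""
    if annual_savings <= 0:
        return float("inf")
    return extra_capex / annual_savings

ports = 100                          # assumed number of 10G links
extra_cost_per_fiber_port = 150.0    # assumed capex premium of optical over copper, $ per port
watts_saved_per_port = 2.0           # assumed power delta (e.g., ~3 W copper vs ~1 W optical)
facility_overhead = 2.0              # assumed multiplier for cooling and power distribution
electricity_cost_per_kwh = 0.10      # assumed $/kWh
hours_per_year = 24 * 365

extra_capex = ports * extra_cost_per_fiber_port
kwh_saved = ports * watts_saved_per_port * facility_overhead * hours_per_year / 1000.0
annual_savings = kwh_saved * electricity_cost_per_kwh

print(f"Extra capex for fiber: ${extra_capex:,.0f}")
print(f"Annual electricity saved: {kwh_saved:,.0f} kWh (${annual_savings:,.0f})")
print(f"Simple payback: {simple_payback_years(extra_capex, annual_savings):.1f} years")

With these particular assumptions the power savings alone do not pay back the premium quickly, which is exactly why the answer has to be worked out per data center rather than taken from a vendor's brochure.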
Labels:
cabling
Wednesday, September 15, 2010
More on 10G Cabling in the Data Center – 10GBASE-T versus 10GBASE-CR
By Lisa Huff
Even though Augmented Category 6 cabling has been around for more than four years and has actually been installed in many data centers, it has really just started to be used for 10G applications. 10GBASE-T ports have supposedly been available from both server and switch manufacturers for over a year, but trying to actually get your hands on them has been difficult. And, when you look at the power dissipation and latency specs, data center operators are choosing 10GBASE-CR (SFP+ direct-attach copper) instead. So what are the tradeoffs?
SFP+ direct-attach copper is designed to be used for short interconnects – 15m or less – which makes it a perfect solution for ToR switches. It’s also available with 30AWG cable, which makes it much more manageable than the 24AWG (or sometimes 23AWG) CAT6A – especially if you’re using a shielded CAT6A solution. Because it plugs into an SFP+ port, if you purchase a passive cable (no DSP chip to clean up the signal), there is essentially no added power consumption. If you add an optical SFP+ module, it’s still less than 1W, while a 10GBASE-T port still burns about 3W. Latency of a 10GBASE-T solution could be as much as 2.6 µsec, while SFP+ is 0.3 µsec max. For some firms, like trading companies, this could mean millions of dollars made or lost.
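To put those power and latency deltas in context, here is a minimal sketch that totals them across a hypothetical path and a hypothetical switch. The hop count and port count are assumptions; the per-port figures are simply the approximate numbers quoted above.

# Rough comparison of 10GBASE-T vs SFP+ DAC using the figures quoted above.
# The hop count and port count are illustrative assumptions.

LATENCY_10GBASE_T_US = 2.6   # worst-case PHY latency per hop, microseconds
LATENCY_SFP_DAC_US = 0.3     # maximum SFP+ direct-attach latency per hop
POWER_10GBASE_T_W = 3.0      # approximate power per 10GBASE-T port
POWER_SFP_PLUS_W = 1.0       # upper bound per SFP+ port (passive DAC is lower)

hops = 3     # assumed server -> ToR -> aggregation -> core path
ports = 48   # assumed ports per access switch

latency_penalty_us = hops * (LATENCY_10GBASE_T_US - LATENCY_SFP_DAC_US)
power_penalty_w = ports * (POWER_10GBASE_T_W - POWER_SFP_PLUS_W)

print(f"Added one-way latency over {hops} hops: {latency_penalty_us:.1f} microseconds")
print(f"Added power per {ports}-port switch: {power_penalty_w:.0f} W")

A few extra microseconds per traversal is negligible for most applications, but it is exactly the margin latency-sensitive trading firms care about.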
With SFP+ DAC, you could buy a ToR solution today that accommodates both short connections within the rack and links of up to 40km outside the rack. If you choose a 10GBASE-T product, it will currently only cover up to 100m, and there are no products available so far that offer both 10GBASE-T and SFP+ ports on the same switch. But I’m sure these are being developed, and once they are, it will most likely be more cost-effective to use 10GBASE-T within the rack with LOMF SFP+ (or maybe 40G QSFP+) uplinks outside the rack – just like what we have today with Gigabit, where 10/100/1000BASE-T is used within the rack with SFP uplinks.
Both SFP+ DAC and 10GBASE-T products will be needed in the long term – 10GBASE-T for inexpensive connections and 10GBASE-CR (SFP+ DAC) for lower latency and lower power consumption.
Labels:
cabling
Monday, September 13, 2010
10G Copper versus Fiber – Is Power Consumption Really the Issue?
by Lisa Huff
For decades now, fiber has been slated to take over the data networking world, but somehow, some way, copper keeps reinventing itself. But are the ways in which copper can compensate for its lower bandwidth capacity coming to an end at 10G due to what seem to be astronomical power consumption issues? Probably not. I have listened to the rhetoric from the fiber-optic companies for more than five years now, and have conducted my own research to see if what they say is true. Their argument was that at 8 to 14W per port, copper is just too costly. But, now that the chips have reduced power consumption to less than 3W per port, 10GBASE-T is a viable data center networking solution. Actually, even at 14W per port it was viable, just not practical for switch manufacturers to incorporate in their designs, because they couldn’t get the port density they needed and still have room to cool the devices. That no longer seems to be an issue, as evidenced by the 24-port 10GBASE-T configurations that have been released by all the major players.
I believe decisions on copper versus fiber will be made around other parameters as well, such as latency. In a recent study, Data Center Cabling Cost Analysis - Copper Still Has Its Place, we looked at the cost of 10G copper versus fiber and added in the higher power consumption. In a specific example focused on a rack of EoR Cisco switches, copper was still more cost-effective even after accounting for its higher electricity costs.
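As a rough illustration of that kind of comparison, here is a minimal per-link cost sketch combining capital cost with facility-burdened electricity over a service life. The prices, wattages, overhead factor, and electricity rate below are placeholder assumptions, not figures from the report.

# Per-link cost over a service life: capex plus electricity (with facility overhead).
# All inputs are placeholder assumptions for illustration only.

def link_cost(capex_per_link, watts_per_link, years, usd_per_kwh=0.10, overhead=2.0):
    """Capital cost plus facility-burdened electricity cost over the service life."""
    kwh = watts_per_link * overhead * 24 * 365 * years / 1000.0
    return capex_per_link + kwh * usd_per_kwh

years = 5
copper = link_cost(capex_per_link=100.0, watts_per_link=6.0, years=years)  # assumed CAT6A link, ~3 W per end
fiber = link_cost(capex_per_link=250.0, watts_per_link=2.0, years=years)   # assumed OM3 + SFP+ SR, ~1 W per end

print(f"{years}-year cost per copper link: ${copper:,.0f}")
print(f"{years}-year cost per fiber link:  ${fiber:,.0f}")

Under these assumptions the copper link stays cheaper over five years, which mirrors the EoR result; different port prices, electricity rates, or densities can tilt the answer the other way.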
But our next area to study will be a rack of servers with a ToR switch. In this scenario, the power consumption difference may be enough to justify the cost of installing fiber over copper. The above referenced report and this next research are part of a series of research reports for our Data Center Network Operator Service.
Tuesday, September 7, 2010
Why Fiber Polarity Matters, Part 2
by Lisa Huff
If you’ve read my previous posts on the subject, you know that polarity can be a tricky matter and it’s even more complicated when you try to choose it for your data center cabling. You really have to choose based on several factors:
1. Patch cords – method A requires stocking two different patch cords, but the upside is that it’s easy to follow where the signal is going, and if you run out of one type, you can flip the fibers on the cord you have as a temporary fix until the correct cords arrive. Of course this isn’t recommended, but if you’re in a bind and need to get up and running right away, it will work. With methods B and C you use the same patch cord on each end, so there is no need to worry about this, but if you happen to have the wrong cassettes or backbone, nothing will work and you’ll have to wait for the correct ones.
2. Cassettes and backbone cables – you need to make sure you buy all of one method of polarity or your system won’t work. If you’re concerned about supply, all three polarity methods are available from multiple vendors, but Method A is “preferred” by most.
3. Upgradability – this is where it can get dicey. Typically your pre-terminated assemblies are running Gigabit applications today and a few may be running 10G. Any of the polarities will work at these data rates. But when you move to 40/100G, methods A and B have straightforward paths, while C does not. Also, you’ll want to make sure you use the highest grade of LOMF available, which is OM4 – this will give you the best chance of being able to reuse your backbones up to 125m. If you need something longer, you’ll need to go to SMF.
If you are thinking about installing pre-terminated cassette-based assemblies now for 10G with an upgrade path to 40 and 100G, you need to consider the polarity method you use. Unlike today's 2-fiber configurations, with one send and one receive, the 40G and 100G Ethernet standards use multiple 10G lanes running in parallel over separate fibers. While 40/100G equipment vendors will tell you that polarity is not an issue, care must be taken if you want to reuse this installed base.
40G will use four 10G fibers to send and four 10G fibers to receive, while 100G uses either four 25G fibers or ten 10G fibers in each direction. Because 40 and 100G will be using the MPO connector, if the polarity method is carefully chosen, you will be able to reuse your backbone cables. This is enabled by the fact that the IEEE took much care in specifying the system so that you can connect any transmit within a connection on one end of the channel to any receive on the other end.
Those selecting fiber to support 10G now and 40G in the near future need to understand what will be involved in transitioning and repurposing their cable plant. To upgrade using method A, you can replace the cassettes with MPO-to-MPO patch panels and MPO-to-MPO patch cords; this preserves the flexibility to handle moves, adds and changes while promoting proper installation practices. The polarity flip will need to be accomplished either in an A-to-A patch cord or possibly with a key-up/key-down patch panel.
Method B multimode backbone cables can also readily support 40G applications. For a structured cabling approach, method B will still use a patch panel and patch cords, though as with current method B, both patch cords could be A-to-B configuration. While Method C backbones could be used, they are not recommended for 40G as completing the channel involves complex patch cord configurations.
It appears that 100G will use either the 12-fiber (4x25G) or the 24-fiber (10x10G) MPO connector. With transmits in the top row and receives in the bottom row, the connection will still be best made using a standardized structured cabling approach as described above.
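To see why Method C becomes awkward for parallel optics, here is a minimal sketch of the end-to-end fiber-position mapping of the three array-cable polarity types on a 12-fiber MPO trunk. It models only the trunk itself, not the full channel with cassettes and patch cords, so treat it as an illustration rather than a design rule.

# End-to-end fiber position mapping for the three MPO trunk polarity methods.
# This models the trunk cable only, not cassettes or patch cords.

def far_end_position(pos, method, fiber_count=12):
    """Return the far-end fiber position (1-based) for a near-end position."""
    if method == "A":   # straight-through: 1->1, 2->2, ...
        return pos
    if method == "B":   # reversed: 1->12, 2->11, ...
        return fiber_count + 1 - pos
    if method == "C":   # pair-wise flip: 1->2, 2->1, 3->4, ...
        return pos + 1 if pos % 2 else pos - 1
    raise ValueError("method must be 'A', 'B', or 'C'")

for method in ("A", "B", "C"):
    mapping = [far_end_position(p, method) for p in range(1, 13)]
    print(f"Method {method}: {mapping}")

The pair-wise flips of Method C line up nicely with duplex two-fiber links, but not with the 4- and 10-fiber groupings that 40G and 100G transmit and receive across, which is why completing those channels over a Method C backbone takes convoluted patch cord configurations.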
There are many suppliers of pre-terminated optical assemblies, including Belden, Berk-Tek (a Nexans company), CommScope (CTV), Corning (GLW), Panduit, Siemon and Tyco Electronics NetConnect (TEL), as well as many smaller shops, like Cxtec CablExpress and Compulink, that provide quick-turn assemblies.
Labels:
cabling
Friday, August 27, 2010
Fiber vs. Copper in the Data Center
by Lisa Huff
Fiber proponents have been saying that the end of copper is near for decades now, only to be proven wrong time and time again. And, at 10G, some were so bold as to say that copper will die because it burns too much power. Well, I listened to their argument that at 8 to 14W per port, copper just won’t cut it. But, now that the 10GBASE-T chip manufacturers have reduced their power consumption to less than 3W per port, 10GBASE-T is a viable data center networking solution. Actually, even at 14W per port it was viable, just not practical for switch manufacturers to incorporate in their designs because they couldn’t get the port density they needed due to cooling issues. Now, that doesn’t seem to be an issue, as evidenced by the 24-port configurations that have been released by all the major players.
What this means is that twisted-pair copper will have its place in data center networks for many years to come – especially based on our recent cost analysis detailed in our new report Data Center Cabling Cost Analysis - Copper Still Has Its Place. It compares 10GBASE-SR to 10GBASE-T and includes analysis on the higher power consumption. This report is the first in a series for our Data Center Network Operator service.
Labels:
cabling
Tuesday, August 3, 2010
Laser-optimized Multimode Fiber (LOMF)
by Lisa Huff
Right now, there are three standardized types of LOMF in addition to FDDI-grade fiber, which is not laser-optimized. So first, what does laser-optimized actually mean? In basic terms, it just means that the fiber was designed to be used with lasers – in the case of MMF, typically VCSELs. FDDI-grade fiber pre-dated the use of VCSELs, so it is not laser-optimized; it was intended for use with LEDs. Lasers were adopted as the light source of choice when scientists and engineers realized that LEDs became very unstable when modulated at data rates beyond 100 Mbps. They originally tried to use the same lasers that were being used in CD players, but these turned out to be unstable at Gigabit data rates as well. In the early 1990s, the development of the VCSEL enabled these higher data rates.
As the light sources evolved, the fiber progressed with them. So, for 850nm operation today we have four choices:
1. OM1 (FDDI-grade): minimum OFL bandwidth of 200 MHz·km; 10G minimum optical reach of 33m
2. OM2: minimum OFL bandwidth of 500 MHz·km; 10G minimum optical reach of 82m
3. OM3: minimum OFL bandwidth of 1500 MHz·km; 10G minimum optical reach of 300m
4. OM4: minimum OFL bandwidth of 3500 MHz·km; 10G minimum optical reach of 550m
As you can see, the bandwidth of the fiber is intimately tied to the type of light source used, and the optical reach depends on both bandwidth and data rate. And, while OM1 fiber wasn’t necessarily designed to be used with lasers, it works fine with them, albeit at a shorter distance than LOMF. It is also worth noting that a few cable manufacturers provide what I would call OM1+ cable – 62.5-micron fiber that is laser-optimized – which may offer somewhat improved bandwidth and reach.
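A minimal helper built from the reach table above shows how the grade choice follows from the required 10G link length. The distances are the minimum reaches listed above; a real design should still verify the full channel loss budget with the cable and equipment vendors.

# Pick the least-capable multimode grade whose published 10GBASE-SR reach covers a link.
# Reaches are the minimum figures from the table above, in meters.

REACH_10G_SR_M = {"OM1": 33, "OM2": 82, "OM3": 300, "OM4": 550}

def minimum_grade_for_10g(link_length_m):
    """Return the lowest grade that covers the link at 10G, or None if none does."""
    for grade in ("OM1", "OM2", "OM3", "OM4"):
        if link_length_m <= REACH_10G_SR_M[grade]:
            return grade
    return None  # beyond 550m, single-mode fiber is the practical answer

for length in (25, 90, 280, 400, 600):
    print(f"{length:>4} m -> {minimum_grade_for_10g(length) or 'SMF'}")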
All this leads to a very important point – when specifying a cabling system for your networks and data centers, it is important to understand not only the fiber you’re going to install, but also the equipment you’re trying to connect. Just because you're "only" installing Gigabit systems and you've used OM1 fiber for years, doesn't mean it's the best solution (or even the most economical) for today and tomorrow.
Labels:
cabling
Monday, July 26, 2010
SFP+ Marks a Shift in Data Center Cabling
by Lisa Huff
With the advent of top-of-rack (ToR) switching and SFP+ direct attach copper cables, more data centers are able to quickly implement cost-effective 10G and beyond connections. ToR designs currently take one of two configurations:
1. GigE Category cabling (CAT5e, 6, or 6A) connection to each server with a 10G SFP+ or XFP uplink to either an EoR switch or back to a switch in the main distribution area (MDA)
2. SFP direct attach cabling connection to each server with a 10G SFP+ or XFP uplink to either an EoR switch or back to a switch in the MDA
Either way, SFP and SFP+ modules and cable assemblies are starting to make huge inroads where Category cabling used to be the norm. Consequently, structured cabling companies have taken their shot at offering the copper variants of these devices. Panduit was one of the first to offer an SFP direct-attach cable for the data center, but Siemon quickly followed suit and surpassed Panduit by offering both the copper and optical versions of the assemblies as well as the parallel-optics QSFP+ AOC. Others rumored to be working on entering this market are Belden (BDC) and CommScope (CTV). This really marks a shift in philosophy for these companies, which have traditionally stayed away from what they considered “interconnect” products. There are a couple of notable exceptions in Tyco Electronics and Molex, however, which offer both types of products.
So what makes these companies believe they can compete with the likes of Amphenol Interconnect (APH), Molex (MOLX) and Tyco Electronics (TEL)? Well, it might not be that they think they can compete, but that they see some erosion of their patch cord businesses and view this as the only way to make sure the “interconnect” companies don’t get into certain customers. In other words, they are protecting their customer base by offering products they won’t necessarily make any money on – after all, many of these assemblies are actually private-labeled from the very companies they are trying to oust. Smart or risky? Smart, I think, because it seems to me that the future of the data center will be in short-reach copper and mid-range fiber in the form of laser-optimized multimode fiber (LOMF).
Labels:
cabling,
Optical Components
Thursday, July 15, 2010
Is Data Center Structured Cabling Becoming Obsolete?
by Lisa Huff
Today if you walk into a “typical” data center you’ll see tons of copper Category cabling in racks, under the raised floor and above cabinets. Of course, it can be argued that there is no such thing as a “typical” data center. But, regardless, most of them still have a majority of copper cabling – but that’s starting to change. Over the last year, we’ve seen the percentage of copper cabling decrease from about 90-percent to approximately 80-percent and according to several data center managers I’ve spoken to lately, they would go entirely fiber if they could afford to.
Well, at 10G, they may just get their wish – not on direct cost, but perhaps on operating or indirect costs. While copper transceivers at Gigabit data rates cost less than $5 per port (for the switch manufacturer), short-wavelength optical ones still hover around $20/port (for the switch manufacturer) and about $120/port for the end user – a massive markup we’ll explore later. But 10GBASE-T ports are nearly non-existent – for many reasons, but the overwhelming one is power consumption. 10GBASE-SR ports with SFP+ modules are now available that consume less than 1W of power, while 10G copper chips are struggling to meet a less-than-4W requirement. Considering that power and cooling densities are increasingly issues for data center managers, this alone may steer them to fiber.
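A quick rack-level sketch of that arithmetic is below. The port count and cooling overhead factor are illustrative assumptions; the per-port wattages are simply the approximate figures cited above.

# Rack-level power delta between 10GBASE-T and SFP+ SR ports, per the figures above.
# Port count and cooling overhead are illustrative assumptions.

ports_per_rack = 96          # assumed two 48-port access switches per rack
copper_w_per_port = 4.0      # approximate 10GBASE-T per-port target cited above
optical_w_per_port = 1.0     # approximate SFP+ SR per-port figure cited above
cooling_overhead = 1.8       # assumed facility multiplier for cooling and distribution

switch_delta_w = ports_per_rack * (copper_w_per_port - optical_w_per_port)
facility_delta_w = switch_delta_w * cooling_overhead

print(f"Per-rack switch power delta: {switch_delta_w:.0f} W")
print(f"Facility load delta including cooling: {facility_delta_w:.0f} W")

A few hundred extra watts per rack is modest next to server load, but multiplied across hundreds of racks in a facility already short on cooling headroom, it becomes a real line item.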
This has also led interconnect companies like Amphenol (APH), Molex (MOLX) and Tyco Electronics (TEL) to take advantage of their short-reach copper twinax technology, in the form of SFP+ direct attach cable assemblies, and prompted a change in network topology – away from structured cabling. So while structured cabling may be a cleaner and more flexible architecture, many have turned to top-of-rack switching and direct-connect cabling just so they can actually implement 10-Gigabit Ethernet. Of course, Brocade (BRCD), Cisco (CSCO), Force10 and others support this change because they sell more equipment. But is it the best possible network architecture for the data center?
Labels:
BRCD,
cabling,
CSCO,
Optical Components
Saturday, June 26, 2010
ADC: We're Ready for 40/100 Gigabit
ADC (ADCT) issued a press release stating its support of the recently ratified IEEE 802.3ba standard for 40 and 100 Gigabit networks. In addition to fiber and copper cables, the vendor supplies optical distribution frames to data centers.
40 Gigabit InfiniBand is already selling, but many of those links are for short-reach copper interfaces. On the Ethernet side, many data centers are still upgrading from Gigabit to 10 Gigabit, so the standardization is more of a technical milestone than a major business development.
Labels:
100 Gigabit,
40 Gigabit,
ADCT,
cabling,
InfiniBand