10 gigabit (10Gb) home network – Part I

Right now my home network is entirely Gigabit — minus any wireless devices or laptops that are using “Fast Ethernet” (i.e. 100 Mbps). And the time has come to take that further.

My Google Fiber connection is near-Gigabit full-duplex. My home network is split into two zones to keep from running everything on one circuit in my apartment.

Zone 1 is the living room. Living out there are the NAS and a virtualization server along with the router. Everything is connected to an 850W UPS. Zone 2 is the computer room. Absinthe and Mira plus the entertainment center are plugged into a Gigabit switch connected to the router via a long Cat5e cable.

The virtualization server is a refurbished HP Z600 I bought from refurb.io. It has two Xeon E5520 processors and a 500GB HDD that I intend to upgrade to an SSD eventually. I use VMware ESXi 6.0 since it’s free and works quite well. One of the VMs is a Plex server that mounts shared folders from the NAS.

So there’s a lot of competition on the Gigabit connection, essentially throttling everything. Including our Internet connection. (Yes, I can already hear the sarcastic cries of sympathy…) Upgrading the network’s backbone is the only way to alleviate that.

A simple solution would be buying two Gigabit switches that have 10Gb SFP+ uplinks and just connecting those together. And while that would alleviate some of the bottlenecking on the network, there are other reasons to go with 10Gb.

First, I want to upgrade Mira and Absinthe to use 10Gb as well. While Mira has 4TB of supplemental storage (4x1TB in a RAID 0), Absinthe doesn’t have anything extra. So upgrading to 10Gb will allow her to use the NAS as supplemental storage without being limited to just Gigabit. This will matter even more after I fill the last of the HDD bays in the NAS.

But storage speed is the primary reason to go 10Gb. Mira and Absinthe both have Samsung 950 PRO NVMe SSDs. The RAID 0 supplemental storage on Mira is also held back by the Gigabit link when copying things to and from the NAS. And the NAS itself is limited by its onboard Gigabit connections as well. So everything on the network has the potential to see massive improvements by jumping to 10Gb.
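
To put rough numbers on that, here’s a back-of-the-envelope sketch in Python. The link speeds are theoretical line rates, and the storage throughput and transfer size figures are illustrative assumptions, not benchmarks of my hardware:

# Back-of-the-envelope: how long a large transfer takes when the network,
# rather than the storage, is the slowest hop in the chain.

GIGABIT_MB_S = 1000 / 8    # ~125 MB/s theoretical ceiling for Gigabit Ethernet
TENGIG_MB_S = 10000 / 8    # ~1250 MB/s theoretical ceiling for 10Gb Ethernet

# Assumed sequential throughput figures (MB/s) -- illustrative, not measured.
storage_mb_s = {
    "Samsung 950 PRO (NVMe)": 2500,
    "4x1TB RAID 0 (Mira)": 400,
    "NAS disk pool": 350,
}

transfer_gb = 50  # e.g. moving a 50 GB folder to or from the NAS

for name, disk in storage_mb_s.items():
    for link_name, link in (("1GbE", GIGABIT_MB_S), ("10GbE", TENGIG_MB_S)):
        effective = min(disk, link)                # the slowest hop sets the pace
        minutes = transfer_gb * 1000 / effective / 60
        print(f"{name} over {link_name}: ~{minutes:.1f} min")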

Custom switches

There’s really only one way to do this while still controlling cost: a custom switch. (Update: At the time I wrote this, it was true from what I could find. But not anymore.)

Two zones in my network means two switches are planned, and Zone 1’s switch is happening first. It will provide a 10Gb connection between the NAS and the virtualization server while still exposing them to the rest of the network through a Gigabit connection to the router.

To that end, I’ve ordered several 10GbE SFP+ cards through eBay: a dual-port Mellanox ConnectX-2, a pair of single-port ConnectX-2s, and a Chelsio S320.

The dual-port card and one of the single-port Mellanox cards will go into Zone 1’s switch. Its onboard Gigabit connection plus the three 10Gb ports will be all that’s required. In the end, that Gigabit link will be the only connection between the router and the rest of the network.

The Chelsio S320 is for the NAS, which is running FreeNAS. Mellanox ConnectX-2 cards are not supported by FreeNAS, but the Chelsio S320 is supported since it uses the Chelsio T3 chipset. If I had been able to find a Chelsio S310 single-port card, I would’ve gone with that instead.

Now for cables. Virtually every 10Gb home networking tutorial I’ve seen online uses direct-attached, twin-axial SFP+ copper connections (10GSFP+Cu). I think ease is part of the reason: direct-attached copper is like plugging in your standard RJ45 cables, just with bigger connectors on the end.

And it’s perfectly fine for short connections. Just bear in mind that passive 10GSFP+Cu is generally limited to 5m (a little over 16ft). Beyond that you need to use active cables, which are expensive, or optical fiber.

Which is what I’m using.

While the cost of optical fiber (10GBase-SR) is similar to that of passive 10GSFP+Cu at shorter lengths, optical fiber lets you go beyond 5m without significant cost. And since I intend to connect two switches in two completely different rooms, 10GBase-SR is basically required to keep cost down.

While I could use 10GSFP+Cu for the connections in Zone 1, I can’t use it to connect Mira and Absinthe in Zone 2. The switch will be on the opposite side of the room from the computers, easily eating up the 5m length limit with little slack. So since I’ll have little choice but to use optical fiber for all 10Gb connections in Zone 2, and for connecting the Zone 1 and Zone 2 switches, I’m just going to use it everywhere.

The transceivers linked above are just 16 USD each, and the two 1m cables were under 3 USD, giving a single 1m connection between two systems at just under 35 USD plus shipping. A Fiberstore representative also consulted with me to make sure I was ordering the right transceivers for my hardware before releasing the order for packing and shipping. I love that level of customer service. And she said the “generic” transceiver linked above is compatible with the Chelsio S320 and Mellanox ConnectX-2.
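
Spelling out the per-link arithmetic with those prices (the longer-cable figure in the second print is just an assumed placeholder, not a quote):

# Cost of one point-to-point 10GBase-SR link, using the Fiberstore prices above.

TRANSCEIVER_USD = 16   # 10GBase-SR SFP+ transceiver, one needed at each end
FIBER_1M_USD = 3       # 1m LC-to-LC cable (actually a little under 3 USD)

def link_cost(cable_usd=FIBER_1M_USD):
    """Two transceivers plus one fiber cable; shipping not included."""
    return 2 * TRANSCEIVER_USD + cable_usd

print(f"1m 10GBase-SR link: ~{link_cost()} USD")
# Unlike passive DAC, a longer run only changes the cable portion of the price.
# The 10 USD here is an assumed placeholder for a longer OM3 cable between rooms.
print(f"Longer run between rooms: ~{link_cost(10)} USD")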

The cards were bought from three different sellers on eBay, shipped via three different couriers (one each via USPS, UPS, and FedEx), with three different scheduled delivery dates. The Fiberstore order shipped the day the Mellanox pair arrived. From China. Hence the 22 USD shipping charge.

So everything should arrive in time for the Thanksgiving long weekend.

First parts arrive

The pair of single-port Mellanox cards arrived first, and the Chelsio cards arrived the next day. I did a quick test on one of the cards to make sure it was detected without issue in a Windows 10 test system. And it got oddly warm while idling.


The Mellanox cards have passive heatsinks over their chips. Switching to active cooling would certainly help, provided I could find something that fits, but I started by removing the heatsink. And I was not surprised by what I found: fractured thermal paste that barely stuck to the NIC chip. There was likely not much contact between the heatsink and the chip.

Arctic MX-4 to the rescue!!! But the chip still got very hot even at idle. So active cooling is going to be a necessity, whether by pointing a fan at the card or by replacing its passive heatsink with an active one. I discovered a forum post that also mentioned the importance of cooling the SFP+ connector on the card, so that’s something else to keep in mind.

So if you go this route, definitely keep in mind that since you’re buying used surplus NICs, you’ll need to replace the thermal compound under any heatsinks and set up for active cooling. And make sure to use a good thermal compound like MX-4 or IC Diamond.

If you are considering replacing the passive heatsink with an active heatsink, I recommend the copper heatsinks by Enzotech. Just measure the mounting holes on your card to get one that will work.

The Chelsio card doesn’t have a removable heatsink on its main processor. Instead it’s attached with thermal glue, so don’t try to wrench it off. The seller also included two 10GBase-SR modules with my card: Finisar FTLX8571D3BCL.

Installing the cards into the NAS and the virtualization machine was straightforward. Now it’s just a matter of waiting for everything else so I can build the switch and connect everything together.

That’s coming with the next part.

  • HAppY_KrAToS

    For sure, hardware companies like TRENDnet (‘light’ versions of Cisco hardware) have been holding back 10Gb tech for at least 5 years, obviously aided by companies like Microsoft.

    We have Cat6 and Cat7 cables available. 10Gb network cards are cheap. The only thing missing is a 5-8 port 10Gb switch for $100-200!

    Sure, having dozens of devices connected at 10Gb isn’t really necessary… but if you want to link 2 zones, like the author, or connect 1 zone to a RAID NAS or server, that would be ultra useful.
    There are plenty of cases where 10Gb would be great. Say you have a 15GB RAR file and want to unrar it and store it on another computer/server. Instead of unraring locally and then sending it to the other computer, one could simply unrar straight to the remote machine; the 10Gb bandwidth would handle the file copy with ease.

    Why don’t companies like TRENDnet release 10Gb switches to the masses?
    Because that could segment the server market: instead of spending dozens of thousands of bucks on 10Gb fiber switches, plus Windows servers with the enterprise edition, plus expensive cables, fiber, UPSes, etc., a small company would only need to buy 1 or 2 fast NAS units, some cheap 10Gb NICs, a few cables, and that’s all, folks.

    For sure, many, many power users and companies would buy those cheap 10Gb switches instead of spending 10 or 20 thousand bucks on a full fiber network, servers, etc.

    I believe we will ONLY have affordable and simple 10Gb networks at home when companies have 100Gb or higher networks.

    In 2 or 3 years, I can imagine servers with hundreds of M.2 cards at 3500 MB/s read / 2500 MB/s write in some crazy RAIDs. They’re so small that one could vertically stack 50 to 100 of them in a surface like a 10″ tablet… capable of reading 100 × 3500 MB/s = 350 GB per second, or writing 250 GB per second…
    And in 5 years, I can imagine those M.2 cards will be half the size, each one capable of 35 to 100 GB read per second on PCIe 5 or 6… with 1000 of them, a server could read 1000 TB per second…
    Just imagine 1000 TB per second… on a 10Gb network… yeah, worse than copying a full 50GB Blu-ray over an old 56k modem…

    In 5 years, companies will have 500 to 1000Gb networks, for sure.
    Only then will they make 10Gb tech available for homes… as it won’t interfere with their pro 1Tb networks.

    My 2 cts..

    • Thanks for your input.

      I believe we will ONLY have affordable and simple 10Gb networks at home when companies have 100Gb or higher networks.

      They already do, at least the ones that choose to go for it, since it’s expensive. InfiniBand has been around for a while and can go to 40Gb (32Gb effective after encoding overhead). There are also 40GbE and 100GbE. I wouldn’t be surprised if they try for TbE, but that’s probably a ways off given how 100GbE is implemented. Currently the IEEE is working toward a 400GbE standard.

      But I think you’re missing one key point: 10Gb bandwidth on home networks has never been in high demand. Demand for it is going up, which is why we’re seeing more retired server hardware showing up on eBay and the like. But demand in home networks has typically been about wireless throughput and reception, which is why we now have wireless standards capable of >1Gb throughput. And from what I’ve seen, most home labs don’t use 10GbE either.

      • HAppY_KrAToS

        Hey,
        Just a quick update. I think Asus released it a few days ago.
        One of the first reviews, I guess, in case you hadn’t heard about it yet:
        https://www.eteknix.com/asus-xg-c100c-10gbase-t-network-adapter-review/

        Now we’ll need some cool 8-port switches that work with Cat6 (Cat7 is super expensive and thick).

        • They exist, they’re just expensive.

          Brand new, the least expensive 10GbE RJ45 switch I’ve seen is still around 700 USD. And even used on eBay, they’re still more expensive than the Quanta LB6M, even with the price increases on them due to demand. At this point, you’re still better off buying a used SFP+ 10GbE switch, SFP+ 10GbE cards, 10GBase-SR transceivers, and LC to LC OM3 or OM4 optical fiber to build a network.

          Especially since the card you just quoted is over 100 USD, while retired SFP+ 10GbE single-port cards are going for less than 20 USD, and around 35 USD for dual-port. Sure, Cat6A cable is less expensive than one transceiver up to a particular cable length, but the overall cost of the SFP+ route, including the switch, is still lower.

          I’d recommend the Asus card only if you’re planning to connect two computers together, since it should be able to autodetect whether to use crossover or standard. And Cat6A cable can be had readily for a decent price. Any more than that and you’ll need a switch, and that’s where the added cost comes into play.

          Great to see the cost of 10GbE RJ45 cards come down, but without a corresponding drop in the price of the requisite switches, which would likely require quite a few of them to hit the used market at once, it’s still going to be more cost effective to go with used SFP+ hardware. A rough per-machine tally is sketched below.
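
To put rough numbers on that comparison: the NIC, transceiver, and new-switch prices below are the figures mentioned in the replies above, while the Cat6A cable price, the used SFP+ switch price, and the machine count are assumed placeholders for illustration.

# Rough per-machine cost: new 10GBase-T (RJ45) route vs. used SFP+ route.
# Prices marked "assumed" are placeholders, not quotes.

MACHINES = 4  # assumed number of 10Gb machines sharing one switch

def per_machine(nic_usd, cabling_usd, switch_usd, machines=MACHINES):
    """NIC plus cabling for one machine, plus that machine's share of the switch."""
    return nic_usd + cabling_usd + switch_usd / machines

rj45 = per_machine(
    nic_usd=100,      # Asus XG-C100C, roughly
    cabling_usd=10,   # assumed Cat6A patch cable
    switch_usd=700,   # least expensive new 10GbE RJ45 switch I've seen
)

sfp_plus = per_machine(
    nic_usd=20,              # used single-port SFP+ card
    cabling_usd=2 * 16 + 3,  # two 10GBase-SR transceivers plus an LC-LC fiber cable
    switch_usd=300,          # assumed used SFP+ switch price (e.g. a Quanta LB6M)
)

print(f"RJ45 route: ~{rj45:.0f} USD per machine")
print(f"SFP+ route: ~{sfp_plus:.0f} USD per machine")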