- Build Log:
- 10 gigabit (10Gb) home network – Part I
- 10 gigabit (10Gb) home network – Part II
- Again, Amazon?
- 10 gigabit (10Gb) home network – Zone 1 switch
- 10 gigabit (10Gb) home network – Zone 2 switch – Part 1
- 10 gigabit (10Gb) home network – Zone 2 switch – Part 2
- 10Gb home network – Retrospective
- Quanta LB6M
- 10 gigabit home network – Summary
- Revisiting the Quanta LB6M
Right now my home network is entirely Gigabit, aside from the wireless devices and laptops still on "Fast Ethernet" (100 Mbps). And the time has come to take that further.
My Google Fiber connection is near-Gigabit full-duplex. My home network is split into two zones to keep from running everything on one circuit in my apartment.
Zone 1 is the living room. It houses the NAS and a virtualization server along with the router, all connected to an 850W UPS. Zone 2 is the computer room. Absinthe and Mira plus the entertainment center are plugged into a Gigabit switch connected to the router via a long Cat5e cable.
The virtualization server is a refurbished HP Z600 I bought from refurb.io. It has two Xeon E5520 processors and a 500GB HDD that I intend to upgrade to an SSD eventually. I use VMware ESXi 6.0 since it's free and works quite well. One of the VMs is a Plex server that mounts shared folders on the NAS.
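As a rough sketch of how a VM like that sees the NAS's shares, assuming the NAS exports them over NFS and using hypothetical names (`nas.local` and `/mnt/tank/media` are placeholders, not my actual setup), an `/etc/fstab` entry on a Linux guest could look like:

```shell
# /etc/fstab on the Plex VM (hypothetical host and export path)
# ro,soft keeps a NAS outage from indefinitely hanging the VM on a
# read-only media share; noatime skips access-time writes we don't need.
nas.local:/mnt/tank/media  /mnt/media  nfs  ro,soft,noatime  0  0
```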
So there’s a lot of competition on the Gigabit connection, essentially throttling everything. Including our Internet connection. (Yes, I can already hear the sarcastic cries of sympathy…) Upgrading the network’s backbone is the only way to alleviate that.
A simple solution would be buying two Gigabit switches that have 10Gb SFP+ uplinks and just connecting those together. And while that would alleviate some of the bottlenecking on the network, there are other reasons to go with 10Gb.
First, I want to upgrade Mira and Absinthe to use 10Gb as well. While Mira has 4TB of supplemental storage (4x1TB in a RAID 0), Absinthe doesn't have anything extra. Upgrading to 10Gb will let her use the NAS as supplemental storage without being limited to just Gigabit. This will matter even more after I fill the last of the HDD bays in the NAS.
But storage speed is the primary reason to go 10Gb. Mira and Absinthe both have Samsung 950 PRO NVMe SSDs. The RAID 0 supplemental storage on Mira is also limited when copying things to and from the NAS, and the NAS itself is constrained by its onboard Gigabit connections. So everything on the network stands to see massive improvements by jumping to 10Gb.
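Some back-of-the-envelope math shows what's at stake. Line rate puts a hard floor on transfer time regardless of how fast the disks are (real-world times will be longer once protocol overhead factors in):

```shell
# Theoretical minimum time to copy 100 GB at line rate, ignoring overhead.
size_gb=100
for rate_gbps in 1 10; do
  # seconds = bytes / (bits per second / 8)
  secs=$(awk -v s="$size_gb" -v r="$rate_gbps" \
    'BEGIN { printf "%.0f", (s * 1e9) / (r * 1e9 / 8) }')
  echo "${rate_gbps} Gbps: about ${secs} seconds"
done
# 1 Gbps: about 800 seconds; 10 Gbps: about 80 seconds
```

A single 950 PRO can read sequentially faster than even 10GbE can carry, so the network remains the bottleneck either way. It's just a much wider one.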
There’s really only one way to do this while still controlling cost: a custom switch. (Update: At the time I wrote this, it was true from what I could find. But not anymore.)
Two zones in my network means two switches are planned, with Zone 1's switch coming first. It will provide a 10Gb connection between the NAS and the virtualization server while still exposing them to the rest of the network through a Gigabit connection to the router.
To that end, I've placed an order through eBay for several 10GbE SFP+ cards:
The dual-port card and one single-port Mellanox card will go into Zone 1's switch. The onboard Gigabit connection plus 3x10Gb connections will be all that's required. In the end, that Gigabit link to the router will be the switch's only connection to the rest of the network.
The Chelsio S320 is for the NAS, which is running FreeNAS. Mellanox ConnectX-2 cards are not supported by FreeNAS, but the Chelsio S320 is supported since it has the Chelsio T3 chipset. If I had been able to find a Chelsio S310 single-port card, I would've gone with that instead.
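Once the S320 is in, it's worth confirming that FreeNAS actually attached its driver before going further. A rough sketch from the FreeNAS (FreeBSD) shell; the interface name is an assumption and may differ on your system:

```shell
# Check that the Chelsio card shows up on the PCI bus
pciconf -lv | grep -B3 -i chelsio

# The T3 chipset uses the cxgb(4) driver; its interfaces
# appear as cxgb0, cxgb1, and so on
dmesg | grep -i cxgb
ifconfig cxgb0
```

If `ifconfig` reports the interface, the card is recognized and just needs an IP assigned through the FreeNAS UI.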
Now for cables. Virtually every 10Gb home networking tutorial I've seen online uses direct-attached, twin-axial SFP+ copper connections (10GSFP+Cu). I think ease of use is part of the reason: direct-attached copper is like plugging in your standard RJ45 cables, just with bigger connectors on the end.
And it’s perfectly fine for short connections. Just bear in mind that passive 10GSFP+Cu is generally limited to 5m (a little over 16ft). Beyond that you need to use active cables, which are expensive, or optical fiber.
Which is what I’m using.
While the cost of optical fiber (10GBase-SR) is similar to passive 10GSFP+Cu at shorter lengths, optical fiber allows you to go beyond 5m without significant cost. And since I intend to connect two switches in two completely different rooms, 10GBase-SR is basically required to keep costs down.
While I could use 10GSFP+Cu for the connections in Zone 1, I can’t use it to connect Mira and Absinthe in Zone 2. The switch will be on the opposite side of the room from the computers, easily eating up the 5m length limit with little slack. So since I’ll have little choice but to use optical fiber for all 10Gb connections in Zone 2, and for connecting the Zone 1 and Zone 2 switches, I’m just going to use it for all 10Gb connections.
The transceivers linked above are just 16 USD each, and the two 1m cables were under 3 USD, giving a single 1m connection between two systems at just under 35 USD plus shipping. A Fiberstore representative also consulted with me to make sure I was ordering the right transceivers for my hardware before releasing the order for packing and shipping. I love that level of customer service. And she said the “generic” transceiver linked above is compatible with the Chelsio S320 and Mellanox ConnectX-2.
The cards were bought from three different sellers on eBay, shipped via three different couriers (one each via USPS, UPS, and FedEx), with three different scheduled delivery dates. The Fiberstore order shipped the day the Mellanox pair arrived. From China. Hence the 22 USD shipping charge.
So everything should arrive in time for the Thanksgiving long weekend.
First parts arrive
The pair of single-port Mellanox cards arrived first, and the Chelsio cards arrived the next day. I did a quick test to make sure the cards were detected without issue in a Windows 10 test system. They also got oddly warm while idling.
The Mellanox cards have passive heatsinks over their chips. While switching to active cooling would certainly help, provided I could find active cooling that would fit, I started by removing the heatsink. I was not surprised by what I found: fractured thermal paste that barely stuck to the NIC chip. There was likely not much contact between the heatsink and the chip.
Arctic MX-4 to the rescue!!! But the chip still got very hot even at idle. So active cooling is going to be a necessity, whether by pointing a fan at the card or replacing its passive heatsink with an active one. I discovered a forum post that also mentioned the importance of cooling the SFP+ connector on the card, so that's something else to keep in mind.
So if you go this route, definitely keep in mind that since you're buying surplus used NICs, you need to replace the thermal compound under any heatsinks and plan for active cooling. And make sure to use a good thermal compound like MX-4 or IC Diamond.
If you are considering replacing the passive heatsink with an active heatsink, I recommend the copper heatsinks by Enzotech. Just measure the mounting holes on your card to get one that will work.
The Chelsio card doesn’t have a removable heatsink on its main processor. Instead it’s attached with thermal glue, so don’t try to wrench it off. The seller also included two 10GBase-SR modules with my card: Finisar FTLX8571D3BCL.
Installing the cards into the NAS and the virtualization machine was straightforward. Now it's just a matter of waiting for everything else so I can build the switch and connect everything together.
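Once the link is up, one sanity check worth running before calling it done is measuring raw throughput between the two machines. A sketch using iperf3 (the hostname is a placeholder for whatever your NAS answers to):

```shell
# On the NAS, run the server side:
iperf3 -s

# On the virtualization host, run the client side:
#   -c <host>  connect to that server
#   -P 4       four parallel streams, which helps saturate a 10Gb link
iperf3 -c nas.local -P 4
```

A single stream often won't fill a 10Gb pipe on its own, which is why the parallel-stream flag is worth using; if the total lands well below line rate, suspect cabling, transceivers, or an overheating card before blaming the switch.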
That’s coming with the next part.