10Gb home network – Retrospective

Significant time and no small expense went into bringing this project to fruition, only for it to be surpassed by something better and less complicated.

That is a risk you take with projects like this, though. And the question is how you anticipate and respond to it.

I anticipated the risk by using as much of my existing hardware as possible. Aside from the transceivers, cables, and network cards, only three new pieces of hardware were acquired for this project: the Noctua NH-D9L, the SeaSonic power supply, and the Silverstone GD09 chassis for the switch, which ultimately went unused. Everything else was hardware I already had. So the out-of-pocket risk was light, staying under 200 USD combined.

And then I recently discovered, courtesy of a Linus Tech Tips video, that a lot of surplus, refurbished 10GbE SFP+ switches had been dropped on eBay.

The switch in question is the Quanta LB6M, a 24-port SFP+ 10GbE switch at 1U rack height, and most of the listings on eBay (as of the time I write this) are for 250 USD or less, with varying costs for shipping. So I decided to acquire one to replace my custom switch. I don’t have it as of the time this article goes live, but it’ll basically be a drop-in replacement for the existing switch when I do receive it.

The only downside is they are built for server rooms, meaning they are loud out of the box. I will be seeing what I can do to quiet it down, so keep an eye out for a revisit on that. Whether I can will depend on the specifications for the fans and power supplies, as well as how the switch is constructed.

Brand new switches with that port count are several thousand dollars, with lower port counts typically starting at around a thousand dollars. And they’re built for linking systems at 10GbE over optical fiber, such as in high-performance computing clusters or storage-area networks.

So… yeah. A lot of time and effort displaced… by an eBay listing.

And I kept an eye on eBay during the course of this project. It’s why I tried to keep the out-of-pocket cost low. The transceivers and cables were easily the single greatest expense in this whole project, though the 10GbE cards run a close second in aggregate, given how many I acquired. Most of those will probably be sold off, depending on what I decide to do with them.

Now if you were to replicate that switch, purchasing everything, the cost would be significant. Even going through eBay to buy the parts used, you’re still looking at nearly 150 USD for just the processor and mainboard. Brand new, the 4U chassis I built this into is 100 USD on its own, plus shipping.

And that’s why, for the most part, I stuck with hardware I already had.

Custom switches

So I basically scrapped the entire custom switch project in favor of an off-the-shelf switch. Most of the hardware will be repurposed. I do intend to get a GPU computing cluster back up and running, and some of it will be repurposed for that.

I ventured into building a custom switch from hardware I already had because I couldn’t find anything off the shelf at a price I was willing to pay. I thought I could ultimately do this for less than it’d cost to buy something off the shelf.

It was only after I hit “Publish” on the previous iteration in this log that I became aware of the surplus listings on eBay, despite not finding anything when I searched just last month.

And that is one recommendation I’ll make up front: always lean toward an off-the-shelf solution, and only build custom if you can’t find something that meets your requirements at a price point you find acceptable.

But along those lines, should you consider building your own custom 10GbE switch? That’s a tough case to make. About the only way I could see it being made is if you have multiple 10GbE media in one network segment. For example, if you have SFP+ and Cat6A media in the same network segment, building a custom switch to combine them may be worthwhile to keep costs down.
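If you do go the custom route on a Linux box, combining an SFP+ port and a 10GBASE-T port into one network segment amounts to attaching both NICs to a software bridge. A minimal sketch using iproute2; the interface names (enp1s0f0, enp2s0) and the management address are placeholder assumptions, and will differ on your hardware (check `ip link` for yours):

```shell
# Create a software bridge and attach both 10GbE interfaces to it,
# so traffic is switched between the SFP+ and 10GBASE-T media.
ip link add name br0 type bridge
ip link set enp1s0f0 master br0   # SFP+ NIC (assumed name)
ip link set enp2s0 master br0     # 10GBASE-T NIC (assumed name)
ip link set enp1s0f0 up
ip link set enp2s0 up
ip link set br0 up

# Optionally give the bridge itself an address so the switch box
# is reachable for management (address is illustrative):
ip addr add 192.168.1.2/24 dev br0
```

Note that a software bridge pushes every frame through the CPU, so sustaining 10GbE line rate this way depends heavily on the host hardware.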

But first ask: for the devices using Cat6A or Cat7 for 10GbE, would it be possible to switch those over to using an SFP+ card?

10GBASE-T SFP+ transceivers are starting to show up on the market, though their lower power budget (a limitation of the SFP+ standard) caps them at cable runs of about 30m or less. And they are not cheap. The same goes for 10GbE SFP+ to RJ45 media converters.

If you need to combine disparate media across Layer 3 (e.g. joining InfiniBand, Fibre Channel, and/or Ethernet on an IP network), then a custom switch is likely your only option. In which case, make sure to pay attention to PCI-Express slot layouts and lane requirements for the hardware you intend to use.
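Joining media at Layer 3 works differently from bridging: each medium gets its own subnet, and the custom box routes IP traffic between them. A hedged sketch, again assuming Linux with iproute2; the interface names (enp1s0f0 for Ethernet, ib0 for InfiniBand running IPoIB) and subnets are purely illustrative:

```shell
# Enable IP forwarding so the kernel routes packets between interfaces.
sysctl -w net.ipv4.ip_forward=1

# Give each medium its own subnet (addresses are illustrative).
ip addr add 10.0.1.1/24 dev enp1s0f0   # Ethernet segment (assumed name)
ip addr add 10.0.2.1/24 dev ib0        # InfiniBand IPoIB segment (assumed name)
ip link set enp1s0f0 up
ip link set ib0 up
```

Hosts on each segment would then use this box’s address on their subnet as the gateway for reaching the other segment.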

Recommendations

So let’s talk recommendations based on what I’ve learned through this project. Well, there’s really only one recommendation I can make that is still relevant: use optical fiber. Optical fiber lets you keep the transceivers and run whatever cable length you need, and it’s easy and inexpensive to swap out for another length later.

Avoid direct-attached copper. The cables are expensive. And if you need a different length later, you’d need to order another complete cable.

And, again, don’t build a custom switch unless you’re sure you have little other choice. And be ready to abandon your custom switch if a better option presents itself.
