Stop complicating espresso

Time to get something off my chest.

On YouTube I’ve seen plenty of videos talking about using filter papers and puck screens in the portafilter to get better extractions. Lots of people are advocating for them, and you can find them all over Amazon and at other places selling coffee equipment.

And, really, can we stop with this?

Espresso is already complicated enough with the need to dial in a grinder and get your puck prep right. Do we really need to keep adding stuff to make it even more so?

I’ll say this up front: puck screens and filter papers won’t save you from bad puck prep. It’s like a gamer thinking their gaming system is holding them back when they just suck at the game they’re playing. Or a photographer thinking they need a high-end camera when they just need more practice and better training with what they have.

And it seems people have become convinced that puck screens and filter papers are a shortcut to a great espresso shot. Sorry to burst your bubble, but they aren’t. They won’t save you from bad puck prep. They won’t save you from a bad grinder. They won’t save you from bad coffee.

Here’s a simple test. If you have a bottomless portafilter, what does the underside look like when you’re pulling a shot? That’s the first sign of how even the flow is through the puck. If you don’t see the coffee coming through the underside in a relatively even fashion – meaning all the little holes in the underside come to life at about the same time – you don’t have even flow.

A few things to check first, then, in this order:

  1. Your puck prep
  2. Your shower screen
  3. Your portafilter basket

So let’s discuss…

Puck preparation

It is of paramount importance you start here.

You’ve probably heard of the Weiss Distribution Technique, or WDT. If you aren’t already doing this, start here. It is the single-best improvement to your puck prep that you can make. A simple tool is all you need – options abound on Amazon – along with a tall dosing funnel. Watch some videos on YouTube about the technique and practice it.

This alone will drastically improve your extractions if you aren’t already doing it or aren’t doing it right. It wouldn’t surprise me if that alone is enough to get the bottom of your portafilter “lighting up” in an even and consistent manner.

But if not, the next thing to check is your shower screen.

Shower screen

How often are you cleaning your shower screen, to begin with? If it’s easy to remove – i.e. you just need a screwdriver – take it off every once in a while and clean it. (I realize it’s not easy to routinely do this with an E61.) At minimum, brush it every once in a while with a soft-bristle brush (like one of the toothbrushes your dentist gives you).

And make sure you’re backflushing your machine regularly (using a detergent like Cafiza), if you have one that requires that.

If you can upgrade your shower screen, do so. I upgraded to this one from E&B Lab (i.e. IMS) over 3 years ago (as of this writing), and it’s been working phenomenally:

See that mesh screen on the front? That will do what a puck screen normally does and spread out the water across the puck. You can see that quite readily by watching the water flow through it. And… it’s only 28 USD (as of this writing), less than the cost of premium puck screens and only about twice the cost of the lower priced options from companies like Normcore.

It’s only if you can’t upgrade your shower screen that you could look at puck screens to help even out the water flow. But before you go that route and complicate your routine, look at your portafilter basket first, since it works in tandem with your shower screen to extract the shot.

Portafilter basket

You don’t need to go all-out here and buy the Sworz billet basket. But that doesn’t mean your existing portafilter basket doesn’t suck, because it likely does. You will be far better served getting a better basket before adding filter papers or a puck screen into your steps.

And, like replacing the shower screen, this isn’t an expensive upgrade either. I was shocked when I saw the price of the filter basket I currently use: the VST 18g 58mm basket. And other comparable baskets are on the market at around the same price point.

If you have one of the lesser-end or entry-level espresso machines, look to see if there’s a basket upgrade available for your portafilter. Especially if you’re using a pressurized basket. Switch to a non-pressurized basket first before complicating your routine. This will require investing in a decent grinder, though. And if a non-pressurized basket isn’t available for your machine, upgrade to one with that option, since a pressurized basket will itself significantly hold back the quality of your shots.

If you already have even flow…

…why are you looking at adding puck screens and/or filter papers? You’re just going to mess up a good thing. So… don’t. Just don’t.

The only reason to add a puck screen is to keep your group head cleaner. But does it eliminate the need for periodic cleaning? No. So you’re maybe lessening how often you’re cleaning your group head… by introducing something that needs to be cleaned every time you use it. Yeah that sounds really smart…

And before getting filter papers, get a better portafilter basket if you have that option.

Final thoughts

Before adding in puck screens and filter papers, check your puck prep and equipment first. Since the puck screens and filter papers won’t save you from bad technique, bad equipment, or bad coffee. And with the filter papers, it’s added waste – and an added carbon footprint, especially if you’re ordering them premade.

But, for Christ’s sake, can we stop complicating espresso even more than it already is?

Timemore Chestnut C3 manual grinder

I’ve said in coffee reviews on this blog that my grinder is the Compak K3, which is a motorized grinder with 58mm flat burrs. And for 8-1/2 years, it was the grinder on my counter, faithfully grinding coffee without issue – except for the very dark roasts (like this one and this one) that I think would bind up any grinder out there.

So why am I now talking about the Timemore C3, which is a manual grinder available on Amazon for under 80 USD? Well… I’ll get to that in a minute.

First I’ll mention again that my espresso machine is… nowhere near cheap. It’s the ECM Technika IV Profi. When available, it was north of 2,000 USD (depending on retailer and whether it was on sale). And I’ve had it for over 7 years.

And the idea of pairing an 80 USD grinder, especially a manual grinder, with a 4-figure espresso machine would likely have a lot of coffee snobs screaming. That there is no way a cheap grinder can grind fine enough for that kind of espresso machine. That it’s best paired with one of the cheap machines from DeLonghi (like the now-discontinued EC-155, replaced with the ECP3220), or even the Breville Infuser at the upper end of the spectrum.

And for the most part, they’d be right. It’s nonsensical to pair such an inexpensive grinder with a high-end espresso machine. But they’d also be wrong. It can produce grinds fine enough for a high-end espresso machine.

Would I recommend this grinder for anyone wanting to make espresso with a high-end machine? Certainly not. Even if you want a manual grinder, there are better options available that are more suited for that application.

But is it incapable? Absolutely not. Don’t fall into the mental trap of thinking better options being available means lesser options are incapable. Something I see happen way, way too often.

Like I said, it’ll produce grinds fine enough for my high-end machine. And I didn’t need to bottom out the burrs or modify it in any way to accomplish that.

It is a conical burr grinder, though, while my Compak is a flat burr. And yes, that makes a difference. When I upgraded from the Breville Smart Grinder to the Compak K3, I didn’t notice a difference at the time simply because I hadn’t been a home barista for all that long. But going from the flat burr K3 to the conical burr Timemore C3… it was noticeable, and not for the better.

The small burrs are a big part of that. Larger burrs, whether flat or conical, produce better grinds. But that there is also a difference in grind quality between conical and flat burrs plays into this as well.

Now whether flat or conical burrs are better is a matter for debate and personal choice. The best grinders use flat burrs, but the very popular Niche Zero uses conical burrs. At 63mm, though, those burrs are also much larger than those in most motorized conical burr grinders, let alone the manual ones.

Conical burrs, though, allow one to build compact grinders. You can only get so small with flat burrs before they’re useless for espresso.

Why a manual grinder?

As I said, my previous grinder was the Compak K3. And it dutifully held its own for over 8-1/2 years. Then the threads bound up hard when I was trying to take it apart to clean it out. No idea how that happened, but one of my recent dark roast misadventures probably had something to do with that.

So I started looking for a replacement. Something not easy to do when you’re… unemployed.

And I first started looking at the less-expensive motorized grinders from Baratza, Breville, etc. But reviews about using them with higher-end espresso machines left me wanting. I was still looking for a replacement for my Compak, but I needed a stop-gap, and preferably one that wasn’t going to drain my savings.

And manual grinders are typically far less expensive compared to their motorized counterparts.

Looking for reviews online, I discovered that James Hoffmann did a review of several manual grinders, targeting espresso specifically, one of which was an earlier version of the Timemore:

And seeing the price for the C3 online, I jumped on it to, again, have a stop-gap while I figured out what to do with the Compak.

The experience

And having used electric grinders the entire time I’ve been doing home espresso – first the Breville Smart Grinder, then the Compak K3 – using a manual was… an experience. And not really a great one for someone only a little over 4 months off a dislocated shoulder and less than a month out of physical therapy for said dislocation. Non-dominant shoulder, thankfully, or this definitely would’ve been a far worse experience.

Dialing it in for espresso was interesting, though. It is a stepped grinder, not stepless. But I’ll just say to be prepared to modify your dose size along with the grind setting to get your shots pulling within the proper time frame. In dialing this in, I got to a point where one click finer choked off the machine, while one click back from that produced a gusher. So the only way around that was to go to the finer setting and reduce the dose size – I settled in on about 15.5g dose for the grind setting.

Going to a fine grind for espresso does mean additional work, and it’s far from an enjoyable experience when you’re making your morning latte after feeding the cats… And numerous times I considered measuring the hexagonal shaft to determine what hexagonal bit I’d need to power this with my drill. Which answers another question you’ve probably had in your mind reading this: no, I didn’t chuck it into my drill.

I don’t see any reason to think you cannot do that, so long as you can control the RPM to keep it at a pace on par with what an average person can produce – meaning under 150 RPM, probably closer to 100 RPM.

And as I said, it produced grinds fine enough for my high-end espresso machine. They definitely aren’t the greatest grinds, but they weren’t something I could complain about much either. While one really could say having a grinder is better than no grinder, that isn’t true with espresso. For one, it needs to be able to grind fine enough for espresso, and this one, again, definitely can do that.

But the grinds it does produce also need to be fairly consistent to get good results with an espresso machine, and that’s where this falls short. But given the compact size of the grinder, meaning also the compact size of the burrs, that’s not unexpected. Again, larger burrs produce better grinds. There’s just no getting away from that. And with manual grinders, you can only go so large before the effort needed to grind the coffee beans becomes too much.

On the plus side, being a manual grinder means it doesn’t come with many of the downsides of electric grinders. For one, no need to worry about powering it – whether with batteries or off the wall.

But easily the biggest downside it avoids is… static. There’s still some, and some of the grinds will cling to the side of the collection cup. But a firm tap of the side or bottom of the cup against the counter top will dislodge most of that without an issue. Spritzing the beans with water – formally known as the “Ross Droplet Technique” or RDT – will likely help with that too.

Being manually powered, though, also means it’ll take longer to grind compared to a motorized grinder. But its compact size means you don’t need to dedicate counterspace to it and can just keep it in a drawer.

For the short time I used this, it worked great as a stop-gap. And I’ll likely keep it on hand in case I need a backup again in the future.

Final thoughts

As already mentioned, I absolutely would not recommend this grinder for high-end espresso machines. If you want a manual grinder, there are better options available. Just pay attention to reviews with regard to grind consistency.

If you have one of the many good lower-end options that have sprung onto the market over the last several years, then it’s a great inexpensive choice if you don’t mind the work involved. I could easily see pairing this with one of DeLonghi’s inexpensive options, for example.

If you do pick one up, make sure to also pick up Grindz with it to run through the grinder every once in a while. Being a manual grinder doesn’t mean you get away from maintenance.

I can’t really speak to the quality of this grinder for French press, drip, and pour-overs. But I think it safe to say that if it’s a capable grinder for espresso, it should do well there too. Again, better options exist, but that doesn’t mean this grinder is incapable.

Verena Street Shot Tower Espresso

I’ll say up front that it’s possible my experience with this coffee is one-off. And I certainly hope it is. But it’s a horrible enough experience that I’m steering clear of them in the future.

To preface, I was recently laid off, so in a bid to cut some costs, I decided to try lower-priced coffee compared to what I typically get from Messenger Coffee. And my nearby Hy-Vee is where I’ve gone in the past. So when I saw Verena Street in the Hy-Vee app, and saw it was on sale for $8 for 11oz (not 12oz, unfortunately, not that it would’ve helped much here), I decided to give it a try. Especially since the 2lb bags can be had for only a little more than a 12oz bag of Messenger.

Anyway…

Now Verena Street’s Shot Tower Espresso is dark and oily, like the previously-reviewed Pitch Black Espresso from Blackout Coffee. And it has the flavor notes of charcoal and smoke that I also noted with Pitch Black.

Only my experience with Verena Street was worse. Way, way worse. To say I’m livid is a massive understatement.

I’m writing this not even 48 hours after I bought the bag. I got 4 lattes out of the entire 11oz bag. More of the coffee grinds went into my trash can than were used to pull a shot. The reason: any attempt to grind fine enough for espresso created grinds that clumped up and blocked up my grinder. And the 4 lattes I did get out of it were mediocre at best simply because I couldn’t get my grinder dialed in.

It was just impossible.

(For immediate reference, my grinder is the Compak K3 Touch, which has 58mm flat burrs.)

The morning after I bought the bag, it took me nearly an hour to get something workable, fighting with my grinder the entire time, burning through probably a hundred grams of beans in the process, if not more. It was a royal pain in the ass… And for each of the other three lattes, it was much the same. Fight with the grinder to at least get something mediocre.

This morning I even took apart my grinder and cleaned it out. And that helped. I got close to getting it dialed in, and figured I’d just adjust things with each next grind. Only I never got that chance. Because cleaning the grinder helped only for this morning. Only. For. This morning.

This afternoon it was back to the same experience of fighting the grinder to get something… mediocre. Six (6) hours is all the grinder sat. Just. Six. Hours. And something happened with the latent grinds in the grinder…

Now my grinder isn’t exactly “high end”. Yeah, it’s definitely not the cheapest thing on the shelf, and there are other grinders that are far better than mine, and far pricier. But its price is due to it being made for espresso, able to grind super fine for espresso or Turkish coffee. And if a coffee bean is advertised as “espresso”, I should be able to put it through any grinder made for espresso, whether it has flat burrs or conical burrs (mine is flat), and grind it to the fineness needed to pull a shot of espresso on an espresso machine.

But this bean is way, way too oily to do that. It’ll clump up inside your grinder in such a way that nothing will flow out. At least on a flat-burr grinder. If I still had my conical-burr Breville Smart Grinder, I probably could’ve been fine. Probably…

But for the most part, trying to do a fine grind with an oily bean will lead to a bad time.

Don’t call your blend an espresso blend if it can’t be ground fine for espresso. The experience alone means this isn’t even worth a 0/5.

There’s no injustice here

Here’s the article headline: “Kansas City man, acquitted of murder, remains in jail weeks later“.

And in reading the article, I found the headline to be very strange. Since he was acquitted, why was he not released… immediately?

Here’s the missing detail: he had been on parole before being arrested on the murder charge. And he now awaits a new parole hearing.

And here’s the even more relevant detail: he was on parole after spending time in prison for a murder conviction in 1971, for which he was sentenced to life with parole eligibility. He was initially paroled in the 1990s, sent back to prison after being arrested for a robbery, and later paroled again. His parole was revoked when he was arrested and charged with another murder that occurred in 2021.

Now that he’s been acquitted, he’s just waiting for a new parole hearing.

Being on parole means you have to stay clean. If you get arrested, you risk being sent back to prison. And it doesn’t matter what you’re arrested for.

And an acquittal doesn’t automatically reinstate your parole, only your parole eligibility.

Far too many people make the wrong association that acquittal means innocent. It does not. Which is why an acquittal doesn’t automatically reinstate parole. Nor should it.

Which, to reiterate, his parole stemmed from his life sentence for his 1971 murder conviction.

As such, the evidence from the murder trial may be used against him at his next parole hearing to argue against his release. The evidentiary standard at a parole hearing is “preponderance of the evidence”, which is far lower than the “beyond a reasonable doubt” standard needed for a criminal conviction. To reiterate, “not guilty” does not mean innocent.

As such, here’s the right headline for the article: “Kansas City man awaits a new parole hearing after being acquitted of murder”.

Pitch Black Espresso – Blackout Coffee

Okay… I decided to give in. I’ve seen Blackout Coffee advertised by a couple YouTube creators I follow. And I’d been… wishy-washy about trying them out given my less-than-stellar experience with Black Rifle Coffee.

But I decided to give in. After looking at their site and seeing they have an espresso blend, I decided to try it.

My… equipment

Espresso machine: ECM Technika IV Profi with VST 18g basket
Grinder: Compak K-3 Touch

Accessories: 58mm 13mm-tall dosing funnel, WDT tool, 58mm Coffee Distributor

Initial thoughts

Blackout’s Pitch Black Espresso is… well… pitch black. “Dark” is a bit… light of a word to describe this.

This is a very dark, very oily roast. Easily the darkest roast I’ve had. Darker still than The Roasterie’s Gotham and Nitro espresso roasts.

It took a few tries to get this dialed in. Largely because it initially would not grind through. This was my fault, though, as the very oily bean seemed to bind onto latent grinds in my grinder. I should’ve run Grindz through before this, but had just run out with my last bag of coffee and neglected to pick up more. (For those in Kansas City, The Roasterie plant off 27th Street typically carries it for less than Amazon.) To get around that, I needed to open the grinder up wide after also dumping out what I attempted to grind.

Anyway… the final dial-in was about 16g with the grind being only slightly coarser than with Messenger’s Relay at 17g. Pulling shots, even while dialing in, I noted a very rich crema.

First latte

16g in, about 35g out on the shot. Milk is A&E Whole Milk. Steamed enough to make about a 16oz latte.

And… smoke… Wow, there’s a heavy taste of smoke and charcoal that penetrates the milk. I’m used to the flavor of the coffee being sweetened by the milk, tempered by it. The coffee flavor blending into the milk. But this charcoal flavor just punches right through.

And lingers on the tongue as well.

Experience through to the end

The smoke and charcoal flavor died off quite a bit during the run through the bag and was not nearly as pronounced toward the end. So it’s definitely most intense at the start of a bag. I typically extract the shot directly into a mug, and I noticed the crema creates a very pronounced concave meniscus. Something I’ve never seen before. (And somehow I remembered that term from my high school chemistry classes before verifying I had it right.)

I also had to back off the dose. I’m not sure what about my initial dial-in landed me on a 16g dose, but I had to back off to about 15.5g on the dose the next day. That proved to be the sweet spot, going for about 32g to 35g out. I did not need to adjust the grind or dose any further beyond that.

The resealable bag Blackout Coffee uses does not have a one-way valve for degassing, so I transferred the beans into an Airscape for storage. And during the time I worked through the beans, I did not notice them going stale. They stayed as fresh at the end as I’d normally expect, in line with my typical experience with Messenger’s Relay Espresso. Again, the only “aging” I noticed was the smoke and charcoal flavors not being nearly as pronounced.

Overall

Overall, it was a pleasant experience with Blackout’s Pitch Black Espresso. The strong smoke and charcoal flavors are not for me. I definitely prefer medium roasts. But thankfully that wasn’t so overpowering it completely turned me off. Though it had no difficulty overpowering the flavor of the milk. I don’t think adding flavor syrups would’ve done much to temper that.

And when you have that strong of a charcoal or smoke flavor, you definitely need to be careful what flavors you pick to avoid creating something… unappetizing. But then if you’re buying this kind of a dark roast, you’re likely not doing any kind of adulteration to it anyway. Instead you’re buying it because you want that smoke and charcoal flavor.

How the coffee performed during the time I had it was most important. It took me about a week to get through the beans, and during that time they stayed as fresh as I’d expect being stored in an Airscape.

So overall, if how I’ve described the flavor is something you typically go for, then give it a try. If you’re new to dark roasts, this will probably feel a bit extreme on the flavor spectrum, meaning you’ll either really like it or it’ll send you straight back to the medium roasts.

Just make sure that you clean your grinder first before trying to put this through to avoid the issue I described above.

Geekworm X680

As said in my review of the KVM-A8, I picked this up at the same time, though I had to wait longer to put it into service. At the time I bought the KVM-A8, I already had a Pi 4B in hand, but needed to wait when it came to getting a CM4.

So now that I was able to put it into service, time to talk about it…

(Buy now through AliExpress or Geekworm)

What do you get?

You have the option to buy the X680 on its own or in several kits depending on how many systems you need to control via ATX. In my instance, I needed support for 3 systems: Nasira, Cordelia, and my mail server. That was “Kit C” (on Geekworm’s site, “X680-X630-A5-3” on AliExpress), which comes with three X630-A5 boards and requisite cable harnesses, along with three 1m Cat5e cables.

If you buy the X680 on its own, you get 4 USB 2.0 cables plus the power brick, which will still allow you to control up to 4 systems, but you won’t be able to remotely control the power and reset function.

It does NOT come with any HDMI cables.

Note: buying the kit with the X630-A5s is less expensive than buying the X630-A5s later. So depending on where you’ll be deploying this, it may be worthwhile grabbing the full kit with four (4) X630-A5s now so you won’t have to buy them later if you anticipate needing to control four systems eventually.

You have three boot options: NVMe SSD, microSD, or onboard eMMC. I think most everyone who uses this will be using either microSD or eMMC, though, so it’s rather… strange they included an NVMe slot. Sure, NVMe is more versatile compared to microSD, and probably more so compared to the onboard eMMC. But getting it set up is a little bit of a chore.

And it’s not like you need the write endurance of an NVMe since the Pi-KVM OS operates with the file system on read-only.

Initial impressions

I paired mine with the CM400200, which is the 2GB “Lite” version of the Raspberry Pi CM4, meaning no onboard eMMC and no WiFi.

Again, the X680 gives you the option to boot from NVMe or microSD along with the onboard eMMC (if your CM4 has that). I went with the same microSD card I used with the KVM-A8: Samsung PRO Endurance 32GB. It was just… far easier doing that. If your CM4 has eMMC, it must be 16GB to use it as the boot device.

Like the KVM-A8, this does also have an onboard real-time clock which needs a CR-1220 or CR-1216 battery. It’s the same socket used in the KVM-A8, which I already said will accept a CR-1216 without issue.

The main unit is also 30cm x 9cm (12″ x 3.5″) and about 2.75cm tall (a hair over 1″). This isn’t large enough to fill a 1U space and does not come with rack ears – nor does it have any screw holes for attaching them. So you will need a shelf if you’re putting this in a 19″ rack.

Unfortunately this does NOT support Power over Ethernet. But even if it did, I’m not sure I would want to use it via PoE anyway. The KVM-A8 being powered via PoE is perfectly sensible since it’s inside my OPNsense router. This is a purely external appliance.

Like with the KVM-A8, I set this up before connecting anything. This also gave me a chance to replace the SSL certificate and make sure it had all needed updates.

Given the commands to set this up, I actually wish Pi-KVM included a Bash script that could be run instead. Perhaps I’ll look into writing that later. It would just make it a lot easier on everyone for the setup instructions to include “run setup_pikvm.sh and follow the prompts” for the most common setup and customization steps.
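For what it’s worth, here’s the kind of thing I have in mind – a minimal sketch only, built around the tools the stock Pi-KVM OS ships (`rw`, `ro`, `pacman`, `kvmd-htpasswd`); the script name and overall structure are my own assumptions, not anything Pi-KVM provides:

```shell
#!/usr/bin/env bash
# Hypothetical setup_pikvm.sh -- a sketch of the wrapper I wish the Pi-KVM
# OS shipped. The rw/ro remount helpers, pacman, and kvmd-htpasswd are the
# stock tools; everything else here is my own assumption.
set -u

pikvm_setup() {
    rw                           # remount the (normally read-only) filesystem read-write
    pacman -Syu --noconfirm      # pull in pending updates (Pi-KVM OS is Arch-based)
    kvmd-htpasswd set admin      # prompt for a new web-interface password
    ro                           # back to read-only
    reboot                       # reboot so everything takes effect
}

# Only run the real steps when we're actually on a Pi-KVM (kvmd-htpasswd
# only exists there); otherwise just say so and do nothing.
if command -v kvmd-htpasswd >/dev/null 2>&1; then
    pikvm_setup
else
    echo "not a Pi-KVM OS install; skipping"
fi
```

Replacing the SSL certificate could be folded in the same way – copying your own cert and key over the defaults while the filesystem is read-write – though I’d verify the exact paths against your own install first.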

X630-A5 and low-profile servers

Why is there no low-profile bracket available for the X630-A5?

That’s a bit of a pain since my mail server is in a 2U chassis. So to get this to “work”, I just removed the card from the full-height bracket and… attached it to the side of the chassis using VHB with the Ethernet cable just… dangling through the back.

In all seriousness, how could they NOT anticipate needing a low-profile bracket? PiKVM has a low-profile bracket for their ATX control board – though it’s odd they sell that separately. So Geekworm should get on that.

And I wonder if the Pi-KVM ATX control board works with the Geekworm appliance, if the pinouts match.

Display troubles

Getting this to work was… interesting. The mail server worked fine pretty much out of the gate. It did have this interesting error, though:

Nasira and Cordelia? Not so much.

Powering off the KVM and powering it back on with everything connected brought Nasira around. So there must’ve been some kind of hiccup with whatever video driver TrueNAS is using. (Which I’d expect to be a generic graphics driver, not anything specific to any chipset.)

Cordelia was a bit more involved. And it’s safe to say the NVIDIA driver was the issue.

I have a Docker container for Frigate, and a GTX 1060 for video transcoding. And that video transcoding requires the NVIDIA proprietary driver. But I had installed the driver via packages from Ubuntu’s repository. This had… kinda worked with the Frigate container. And removing the NVIDIA proprietary driver packages brought Cordelia around to cooperating with the KVM – though also presenting the same EDID error as the mail server.

Now to figure out how to get the NVIDIA proprietary driver working here. Provided that’s possible. Obviously I can’t speak to the AMD drivers since the systems I’m controlling don’t have AMD cards, and I don’t have any spare AMD cards to test.
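If you’re unsure which driver a given Linux box is actually running before hooking it up to the KVM, a quick check of the loaded kernel modules will tell you. A small diagnostic sketch using `lsmod` – the function name is my own:

```shell
# Report whether the proprietary nvidia driver or the open nouveau
# driver is currently loaded. Purely a diagnostic sketch.
gpu_driver_check() {
    if lsmod 2>/dev/null | grep -q '^nvidia'; then
        echo "proprietary nvidia driver loaded"
    elif lsmod 2>/dev/null | grep -q '^nouveau'; then
        echo "nouveau (open) driver loaded"
    else
        echo "no nvidia/nouveau module loaded"
    fi
}

gpu_driver_check

# On Ubuntu, the packaged proprietary driver can be removed with
# something like: sudo apt-get purge 'nvidia-*'
```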

Conclusions

So first point of order: if the servers you will be controlling have an NVIDIA card running Linux, don’t use the proprietary drivers. And if you must, you’ll likely be better served using one of the other single-machine Pi-KVM options available. I can’t speak to Windows servers for this, so your mileage may vary.

(Note: I’ll update this article if I manage to get the proprietary NVIDIA driver working, but for the time being I’m leaving things as-is.)

Though if you need to control a mix of Windows, Linux, and Mac servers, you may be better served doing this:

And I considered doing this as well when I ran into the above-mentioned frustrations.

But the cost of the standalone Pi-KVM v3 HAT and the ezCoo KVM he used surpasses the cost of the Geekworm X680 kit. Though if you use Geekworm’s KVM-A3 (buy from Amazon, AliExpress, or Geekworm), you’re about breaking even. But having an all-in-one solution, I feel, is better.

And once I got past the hiccups… it works.

Now does it work better than what it’s replacing? Yes and no.

There’s much higher fidelity in the video feed going to the browser with the Pi-KVM for… obvious reasons. Keyboard and mouse input latency is much better than with the Avocent MPU. The Avocent MPU also had this weird… offset issue with the mouse cursor that is definitely non-existent with the Pi-KVM. I never did fully troubleshoot that.

But I’m not going to be controlling these servers through the KVM the vast majority of the time. I use the TrueNAS web interface for Nasira and SSH for the mail server and Cordelia the vast majority of the time.

While a KVM-over-IP solution can be used for remote server management, it isn’t the most efficient means. Windows servers can be remote-administered using Remote Desktop (RDP), VNC, or one of the several remote admin products available. Most Linux servers are remotely administered via SSH.

So why have a KVM-over-IP solution? So you can see that a server comes back up when you reboot it – e.g. after doing package updates – and troubleshoot it remotely if it doesn’t. The ability to control the power and reset remotely on all connected servers is a nice bonus with the Geekworm X680.

Driver issues aside, my only real gripe with this is cable bulk. Since you have three cables per system you’re administering: USB, ATX (Cat5E), and HDMI. Cable sleeving can help tame the madness. I just wish it was as easy making custom-length USB and HDMI cables as it is Cat5E.

Which is the advantage of the Avocent MergePoint Utility KVM-over-IP solutions. The individual control modules contain both USB and video, and talk to the main appliance over Cat5E. One module – one cable – per system you’re administering.

While there are other KVM solutions with combined cables, the Cat5E means you can build your cables to whatever length you need. The modules that went to the mail server and Cordelia ran off short patch cables typically used to connect a patch panel to a front-facing switch. Plus they have modules for serial consoles for switches and uninterruptible power supplies.

Adding something like that to a Pi-KVM would take a LOT of effort. (Provided Vertiv doesn’t own any patents on that.) But that’s largely not worth it since the Pi-KVM is built to control just one system, with the Pi-KVM OS written for controlling just one input. It’s a substitute for IPMI, not a replacement for existing KVM-over-IP solutions designed from the outset for handling multiple servers.

Taking this into account, that Geekworm was able to make the X680 is remarkable. And it works… reasonably well.

Would using the ezCoo KVM with the PiKVM v3 HAT, or similar, or the v4 Plus work better? Likely. But only because the external KVM is what’s switching between systems.

Having an integrated solution, I feel, is easier to deploy and manage. And with the Geekworm X680, you also have remote power and reset control. The ezCoo KVM doesn’t have that function. So if you require that, you’ll have to figure something out for it. And the Avocent MPU also doesn’t have that.

So, in the end, I would still recommend this. If your use case is like mine, and you’re controlling a bunch of Linux servers without the need to work with a graphical desktop, this should work fine for you.

Geekworm KVM-A8

For the longest time I wanted a KVM connection to my OPNsense router.

I have a KVM-over-IP appliance on my rack. An Avocent MPU2032 I bought back in 2018. (Vertiv, the company that now owns the Avocent brand, discontinued support for it only last year.) And it works very well.

And given where I have my router, hooking all of that in would require running Cat5e to it for the interface module. Up through and across the attic, down into the networking closet, then down through the floor to get to the router.

Yeah… no.

There really aren’t many other options for KVM-over-IP for controlling just one machine. Virtually every other KVM-over-IP solution is built with a rack or bank of servers in mind, since the whole idea of KVM is controlling several computers from one keyboard, mouse, and screen. That makes the Pi-KVM project somewhat unique. And thankfully in the years since that project started (2019 from what I can find), better options have become available so you’re not doing everything DIY.

And you don’t need to rely on the external box, either. Geekworm and blicube have introduced internal options, though designed for a rear I/O slot. Such as the option I chose: the KVM-A8 by Geekworm. (Buy on Amazon, AliExpress, or direct from Geekworm)

Very convenient

The package includes everything you need, including a 30cm (12″) HDMI cable. Just add the Raspberry Pi 4 Model B, and an HDMI adapter if needed. It’s fairly easy to assemble, though not entirely intuitive.

Geekworm recommends powering this via Power-over-Ethernet (PoE), and I second that recommendation. It’s just a lot more convenient if you have the option since it eliminates the need for an external power supply. It does not require PoE+. An injector (such as this one by TP-Link) is a useful alternative if you don’t have a PoE switch.

I also recommend using Cat6 or better for the Power-over-Ethernet if you have the option, simply to get full Gigabit bandwidth to the Raspberry Pi.

Quick note on the RTC battery

If you can’t find a CR-1220 battery, a CR-1216 will work just fine. So don’t fret if your local battery retailer has CR-1216s but not CR-1220s in stock – e.g. Micro Center. Just pick up a CR-1216.

The only difference between the two is thickness: a CR-1220 is 2mm thick, while a CR-1216 is 1.6mm thick, a difference of just 0.4mm. The underside contact for the coin cell has more than enough tension to make contact and power the RTC.

Set it up first

Given the target audience for this device, and the Pi-KVM project in general, I really should not have to say this, but I will anyway. And this goes for any Pi-KVM appliance you select: set it up for Pi-KVM before attaching it to or installing it in the system you intend to control. This just ensures you don’t have a one-off dead component on your hands, and that it’ll work with your Power-over-Ethernet setup.

You don’t need to have it connected to the target system to set it up, so take advantage of that. You need only have it powered and connected to your network to finish the setup. Just look for “pikvm” in your DHCP leases list on your router to find the IP, or connect to “pikvm.local”.

And make sure to get the base OS completely updated as well. It uses Arch Linux, which uses pacman as the package manager. So to update everything, use these commands:

rw           # remount the filesystem read-write (PiKVM runs read-only by default)
pacman -Syu  # synchronize repositories and update all packages
reboot       # reboot to pick up kernel and core updates

Then connect it to the system you intend to control with it.

Replacing the SSL certificate

This is only relevant to you if, like me, you have your own SSL CA for your home servers. If you don’t and want to create one, I just followed the instructions here to create the root signing certificate using OpenSSL.
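
For reference, creating a root signing certificate with OpenSSL boils down to something like this. The file names, subject, and validity period here are my own examples, not from any particular guide:

```shell
# Generate a 4096-bit private key for the CA (file names are examples)
openssl genrsa -out rootCA.key 4096

# Create a self-signed root certificate valid for ~10 years
openssl req -x509 -new -key rootCA.key -sha256 -days 3650 \
    -out rootCA.crt -subj "/CN=Home Lab Root CA"
```

You’d then use that key and certificate to sign the server certificates for each of your home servers.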

The Pi-KVM system software uses nginx for the web UI, but the service configuration isn’t at /etc/nginx/.

As of this writing, the server certificate is stored in /etc/kvmd/nginx/ssl. You could just overwrite the public and private keys already there. I dropped in new files, leaving the originals in place, and updated the configuration: /etc/kvmd/nginx/ssl.conf.

When you’re done, restart the kvmd-nginx service.
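
As a sketch, assuming your new files are named server.crt and server.key (those names are mine, not PiKVM defaults), the relevant lines in ssl.conf would end up looking something like:

```
ssl_certificate /etc/kvmd/nginx/ssl/server.crt;
ssl_certificate_key /etc/kvmd/nginx/ssl/server.key;
```

Remember the filesystem is read-only by default, so run rw before editing and ro when you’re done, then restart the service with systemctl restart kvmd-nginx.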

One small complaint…

It doesn’t fit entirely within one slot width. So whether this will work with your server depends on a few details. In short, you’d want to avoid having any cards in the slot immediately below it.

And I feel this could’ve been avoided with a simple change to its design. The black screws in the above image mark the outline, more or less, of the Raspberry Pi. So if the “hat” board was just an inch longer, they could’ve better accommodated the FPIO without overstepping into the next slot by having the connectors on the Raspberry Pi’s side of the board.

Alternatively, instead of using the 8-pin connectors they did integrate, they could’ve used a USB 3.0 type-A female plug, which has 9 pins, or USB type-C. Not having the shroud around the 8-pin connectors might even be enough to give the needed clearance. So next time I open up the router – which will come soon since I do need to change out the front fans – I might just see if I can pull the shroud off the pins.

Alternatively, if you can source a Raspberry Pi CM4, Geekworm also makes the X652 (buy it on Amazon, AliExpress, or direct from Geekworm), which looks to actually fit within the width of a single slot. It’s also low profile, so it can fit in a 2U server with the included low-profile bracket.

Conclusions

Well… not too much to say here other than that it works. And I’ll reiterate that you should power this via PoE if possible, just to make your life a little easier depending on how close it will be to a power outlet and your switch.

If you need a KVM-over-IP for just one system, then this will work great and is much more cost effective compared to trying to use a KVM-over-IP solution for multiple systems. It’s only about $100 for everything you need. Just add the Raspberry Pi 4 Model B and microSD card – I used a Samsung PRO Endurance 32GB for mine.

If you need a KVM to handle multiple systems, though, this will NOT be cost-effective in the slightest, since you’re talking one KVM-A8 and one Raspberry Pi 4 Model B per system you need to control remotely. Plus needing to provide power to all of them – whether through USB-C power adapters or Power-over-Ethernet. Plus unique hostnames for each one. (IPs as well, but most everyone uses DHCP, so that’s not an issue.)

But Geekworm does have a KVM-over-IP solution for up to 4 systems. And I do have one that I ordered at the same time as the KVM-A8 to replace my Avocent MPU2032 in my rack – which has 32 ports but I’m using only… 3 of them. I’m just trying to source a Raspberry Pi CM4 to put it into service. So stay tuned for that review.

Firefly III is not double-entry accounting

Double-entry accounting doesn’t have much in the way of rules. But one is paramount: all debits in a transaction must equal all credits. But there is no limit to the number of debits or credits you can have. No limit to how many accounts you’re referencing.

Firefly III, though, doesn’t allow this.

Income and revenue transactions are many sources to one destination. Transfers are one source to one destination. (Splits are allowed, but they must all have the same source and destination.) Expense transactions are one source to many destinations.

The thing is… anyone’s paycheck breaks this.

Your paycheck is your gross pay (income), deductions (expenses), and your net pay (asset). You can’t have all of that in a single revenue transaction in Firefly III. Firefly’s documentation says the source accounts for the splits in a revenue transaction must be revenue accounts, and the destination is an asset or liability account.

One income (possibly more) to many expenses and at least one asset, the latter of which could include splitting deposits between multiple accounts (e.g. direct deposit to savings as well as checking) and/or retirement account contributions.

The only way around this is several transactions involving an intermediate asset account, which we’ll call simply “Paycheck” for this example.

  1. Gross pay. Revenue transaction – source: “Salary” or “Wages”, destination: “Paycheck”
  2. Deductions. Expense transaction – source: “Paycheck”, destinations: accounts for your deductions (e.g., taxes, insurance, etc., but NOT including retirement account contributions)
  3. Net pay. Transfer transaction – source: “Paycheck”, destination: bank account
  4. Split deposit. Transfer transactions – source: “Paycheck”, destination: other bank accounts
  5. Retirement account contributions (where applicable). Transfer transaction – source: “Paycheck”, destination: retirement account

And whatever other transactions you’d need to account for everything. If you have employer-paid benefits or an employer 401(k) match, you could include that as separate splits on the main “gross pay” transaction.
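
As a concrete illustration of the scheme above (all amounts made up), a $3,000 gross paycheck might flow through the intermediate account like this:

```
Revenue:  Salary   → Paycheck   $3,000.00
Expense:  Paycheck → Taxes        $600.00
Expense:  Paycheck → Insurance    $150.00
Transfer: Paycheck → Checking   $1,950.00
Transfer: Paycheck → Savings      $100.00
Transfer: Paycheck → 401(k)       $200.00
```

Once all the legs are entered, “Paycheck” nets to zero, which doubles as a handy sanity check that nothing was missed.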

In my case, my paycheck has three incomes: salary, employer 401(k) match, and employer-paid benefits.

Anything that breaks the one-to-many or many-to-one rule in Firefly III requires using intermediate accounts. And, as already mentioned, anyone’s paycheck is a ready item showing this. And on the expense front, if you’ve ever split payment on an expense, such as using a gift card or gift certificate to cover part of it, you’re breaking the one-source, many-destination rule for expense transactions.

This goes against double-entry accounting.

There is no rule in double-entry accounting that expense transactions must be only from a single source. There is no rule that revenue or income transactions must be only single destination. So Firefly III shouldn’t have this limitation if they’re going to say it “features a double-entry bookkeeping system”.

But I can… somewhat live with that. Cloning transactions means you really only need to enter those transactions once. But… why does cloning not open the transaction editor with everything pre-populated rather than creating a new transaction that you then have to edit, generating unnecessary audit entries?

The user interface, though, definitely leaves something to be desired.

I’ll admit I’ve been spoiled by GnuCash’s simple, straightforward, spreadsheet-like interface that makes it stupid-easy to enter transactions. It’s really easy to navigate through the transaction editor using the keyboard, much like Microsoft Money, which I used before GnuCash. And getting something like that in a mobile or web-based application is going to be hard to come by.

Firefly III’s transaction editor is far from “stupid-easy”.

One tenet of user interface design and crafting the user experience is to make your software as easy to use and intuitive as you can. Keyboard shortcuts are one of the easiest ways to do this. The less someone has to use the mouse to perform an operation, the better. And with GnuCash, I can tab or arrow-key around the transaction editor. No mouse clicking required.

Sure, learning keyboard shortcuts can be a pain in the ass. But once you learn them, you’ll never not use them again, since not using them slows you down.

So why does Firefly III not have any keyboard shortcuts? If anything, that should be a priority. Usability is of paramount importance in any system. Doubly so with financial management. Consulting with a UI/UX professional on ways to improve the user interface, hopefully without having to gut it and start over, would be beneficial.

On the plus side, it is easy to set up. Especially if you use the script I provided in a previous article to set it up in a Docker container.

Hands-off script for setting up Firefly III for Docker

Firefly III is an open source financial management system. I’m trying it out in renewing my search for an accounting system I can use from a browser. Am I ultimately going to keep using it? I’m still kind of playing around with it, but it’s likely I’ll look for something else. I guess when it comes to ease of use, nothing really compares to GnuCash’s simplicity. And that’s largely what I’m looking for.

Firefly III, sorry to say, doesn’t even come close. I might write a review for it later, contrasting it with GnuCash and comparing it against the rules (or, largely, lack thereof) with double-entry accounting.

Anyway, copy the installation script to a .sh file on your Docker machine and give it the requisite permissions, then run it. Make sure to copy everything from the output at the end and store it somewhere, preferably in a password manager like KeePass, since you’ll need it for running the update script.

Like with the install script for Guacamole, pay attention to the subnet and IPs this will be using and change those if necessary. If you don’t select a port number at the first prompt, it’ll randomly select a port between 40,000 and 60,000. And unless you have a need for this to be on a specific port, I suggest just letting the script pick one at random.
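
Incidentally, if you’d rather not depend on random.org for the port, bash’s built-in $RANDOM can do the same job locally. A minimal equivalent of the script’s fallback:

```shell
# Pick a random port between 40000 and 60000 using bash's $RANDOM.
# $RANDOM is only 0-32767, so combine two draws to cover the full range evenly.
firefly_port_number=$(( 40000 + (RANDOM * 32768 + RANDOM) % 20001 ))
echo "$firefly_port_number"
```

The trade-off is that $RANDOM isn’t cryptographically strong, but for picking a port number that doesn’t matter.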

Having a separate script that is run periodically – e.g. as a cron job – to back up the volumes for the MySQL and Firefly containers would also be a good idea.
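
I haven’t written that backup script as part of this article, but a minimal sketch could look like the following. The volume names match the install script; the backup directory is an assumption you’d change for your setup:

```shell
#!/bin/bash
# Sketch: archive both named volumes to dated tarballs using a throwaway container.

backup_dir=/mnt/backups/firefly   # assumption: change to wherever your backups live
stamp=$(date +%Y%m%d)

for volume in firefly_mysql_data firefly_iii_upload; do
    sudo docker run --rm \
        -v $volume:/source:ro \
        -v $backup_dir:/backup \
        busybox tar czf /backup/$volume-$stamp.tar.gz -C /source .
done
```

Drop it in /etc/cron.d or a user crontab to run nightly. For the MySQL volume specifically, a mysqldump via docker exec would give a more consistent backup than copying live data files; the tarball approach is the quick-and-dirty version.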

Installation script

#!/bin/bash

read -p "Port number for Firefly web interface [Enter to pick random port]: " firefly_port_number
echo

if [ -z "$firefly_port_number" ]; then
    firefly_port_number=$(curl -s "https://www.random.org/integers/?num=1&min=40000&max=60000&col=1&base=10&format=plain&rnd=new")
fi

# Random passwords and app key generated using Random.org so you don't have to supply them

root_secure_password=$(curl -s "https://www.random.org/strings/?num=1&len=16&digits=on&upperalpha=on&loweralpha=on&unique=on&format=plain&rnd=new")
firefly_mysql_password=$(curl -s "https://www.random.org/strings/?num=1&len=16&digits=on&upperalpha=on&loweralpha=on&unique=on&format=plain&rnd=new")
firefly_app_key=$(curl -s "https://www.random.org/strings/?num=1&len=32&digits=on&upperalpha=on&loweralpha=on&unique=on&format=plain&rnd=new")

sudo docker pull mysql
sudo docker pull fireflyiii/core

# Creating the volumes for persistent storage

sudo docker volume create firefly_iii_upload
sudo docker volume create firefly_mysql_data

# Creating the network. This allows the containers to "see" each other
# without having to do some odd port forwarding.
#
# Change the subnet and gateway if you need to.

firefly_network_subnet=192.168.11.0/24
firefly_network_gateway=192.168.11.1
mysql_host_ip=192.168.11.2
firefly_host_ip=192.168.11.3

# Remove the ".local" if necessary. Override entirely if needed.

full_hostname=$(hostname).local

sudo docker network create \
--subnet=$firefly_network_subnet \
--gateway=$firefly_network_gateway \
firefly-net

# Setting up the MySQL container

sql_create="\
CREATE DATABASE firefly; \
\
CREATE USER 'firefly_user'@'%' \
IDENTIFIED BY '$firefly_mysql_password'; \
\
GRANT ALL PRIVILEGES \
ON firefly.* \
TO 'firefly_user'@'%'; \
\
FLUSH PRIVILEGES;"

echo Creating MySQL container

sudo docker run -d \
--name firefly-mysql \
-e MYSQL_ROOT_PASSWORD=$root_secure_password \
-v firefly_mysql_data:/var/lib/mysql \
--network firefly-net \
--ip $mysql_host_ip \
--restart unless-stopped \
mysql

# Sleep for 30 seconds to allow the new MySQL container to fully start up before continuing.

echo Let\'s wait about 30 seconds for MySQL to completely start up before continuing.
sleep 30

echo Setting up MySQL database

sudo docker exec firefly-mysql \
mysql --user=root --password=$root_secure_password -e "$sql_create"

echo Creating Firefly-III container

sudo docker run -d \
--name firefly \
-v firefly_iii_upload:/var/www/html/storage/upload \
-e DB_HOST=$mysql_host_ip \
-e DB_DATABASE=firefly \
-e DB_USERNAME=firefly_user \
-e DB_PORT=3306 \
-e DB_CONNECTION=mysql \
-e DB_PASSWORD=$firefly_mysql_password \
-e APP_KEY=$firefly_app_key \
-e APP_URL=http://$full_hostname:$firefly_port_number \
-e TRUSTED_PROXIES="**" \
--network firefly-net \
--ip $firefly_host_ip \
--restart unless-stopped \
-p $firefly_port_number:8080 \
fireflyiii/core

echo Done.
echo
echo MySQL root password: $root_secure_password
echo MySQL firefly password: $firefly_mysql_password
echo Firefly App Key: $firefly_app_key
echo Firefly web interface port: $firefly_port_number
echo
echo Store off these passwords as they will be needed for later container updates.
echo To access the Firefly III web interface, go to http://$full_hostname:$firefly_port_number

Update script

#!/bin/bash

read -s -p "MySQL Firefly user password: " firefly_mysql_password
echo
read -s -p "Firefly app key: " firefly_app_key
echo
read -p "Port number for Firefly web interface: " firefly_port_number
echo

# Make sure these match the IPs in the installation script

mysql_host_ip=192.168.11.2
firefly_host_ip=192.168.11.3

# Remove the ".local" if necessary

full_hostname=$(hostname).local

echo Stopping the containers.

sudo docker stop firefly-mysql
sudo docker stop firefly

echo Deleting the containers.

sudo docker rm firefly-mysql
sudo docker rm firefly

sudo docker pull mysql
sudo docker pull fireflyiii/core

echo Creating MySQL container

sudo docker run -d \
--name firefly-mysql \
-v firefly_mysql_data:/var/lib/mysql \
--network firefly-net \
--ip $mysql_host_ip \
--restart unless-stopped \
mysql

echo Creating Firefly-III container

sudo docker run -d \
--name firefly \
-v firefly_iii_upload:/var/www/html/storage/upload \
-e DB_HOST=$mysql_host_ip \
-e DB_DATABASE=firefly \
-e DB_USERNAME=firefly_user \
-e DB_PORT=3306 \
-e DB_CONNECTION=mysql \
-e DB_PASSWORD=$firefly_mysql_password \
-e APP_KEY=$firefly_app_key \
-e APP_URL=http://$full_hostname:$firefly_port_number \
-e TRUSTED_PROXIES="**" \
--network firefly-net \
--ip $firefly_host_ip \
--restart unless-stopped \
-p $firefly_port_number:8080 \
fireflyiii/core

echo Done.

More upgrades but… no more NVMe

Build Log:

With the last update, I mentioned upgrading from 990FX to X99 with the intent of eventually adding an NVMe carrier card and additional NVMe drives so I could have metadata vdevs. And I’m kind of torn on the additional NVMe drives.

No more NVMe?

While the idea of a metadata vdev is enticing, since it – along with a rebalance script – aims to improve response times when loading folders, there are three substantial flaws with this idea:

  1. it adds a very clear point of failure to your pool, and
  2. the code that allows that functionality hasn’t existed for nearly as long, and
  3. the metadata is still cached in RAM unless you disable read caching entirely

Point 1 is obvious. You lose the metadata vdev, you lose your pool. So you really need to be using a mirrored pair of NVMe drives for that. This will enhance performance even more, since reads should be split between devices, but it also means additional expense, since you need a special carrier card if your mainboard doesn’t support bifurcation, at nearly 2.5x the cost of carrier cards that require bifurcation.

But even if it does, the carrier cards still aren’t inexpensive. There are alternatives, but they aren’t that much more affordable and only add complication to the build.

Point 2 is more critical. To say the special vdev code hasn’t been “battle tested” is… an understatement. Special vdevs are a feature first introduced to ZFS only in the last couple years. And it probably hasn’t seen a lot of use in that time. So the code alone is a significant risk.

Point 3 is also inescapable. Unless you turn off read caching entirely, the metadata is still cached in RAM or the L2ARC, substantially diminishing the benefit of having the special vdev.

On datasets with substantially more cache misses than hits – e.g., movies, music, television episodes – disabling read caching and relying on a metadata vdev kinda-sorta makes sense. Just make sure to run a rebalance script after adding it so all the metadata is migrated.

But rather than relying on a special vdev, add a fast and large L2ARC (2TB NVMe drives are 100 USD as of this writing), set primary caching to metadata only, and secondary caching to all. Or if you’re only concerned with ensuring just metadata is cached, set both to metadata only.
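
Those cache settings are per-dataset ZFS properties. Assuming a pool named tank with a media dataset (names here are examples), that tuning looks like:

```shell
# Keep only metadata in the in-RAM ARC, but allow everything into the L2ARC
zfs set primarycache=metadata tank/media
zfs set secondarycache=all tank/media

# Or restrict both caches to metadata only
zfs set primarycache=metadata tank/media
zfs set secondarycache=metadata tank/media
```

Both properties accept all, none, or metadata, and can be set per dataset, so you can tune your media datasets without touching the rest of the pool.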

You should also look at the hardware supporting your NAS first to see what upgrades could be made to help with performance. Such as the platform upgrade I made from 990FX to X99. If you’re sitting on a platform that is severely limited in terms of memory capacity compared to your pool size – e.g. the aforementioned 990FX platform, which maxed out at 32GB dual-channel DDR3 – a platform upgrade will serve you better.

And then there’s tuning the cache. Or even disabling it entirely for certain datasets.

Do you really need to have more than metadata caching for your music and movies? Likely not. Music isn’t so bandwidth-intensive that it’ll see any substantial benefit from caching, even if you’re playing the same song over and over and over again. And movies and television episodes are often way too large to benefit from any kind of caching.

Photo folders will benefit from the cache being set to all, since you’re likely to scroll back and forth through a folder. But if your NAS is, like mine, pretty much a backup location and jumping point for offsite backups, again you’re likely not going to gain much here.

You can improve your caching with an L2ARC. But even that is still a double-edged sword in terms of performance. Like the SLOG, you need a fast NVMe drive. And the faster on random reads, the better. But like with the primary ARC, datasets where you’re far more likely to have cache misses than hits won’t benefit much from it.
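
Adding an L2ARC to an existing pool is a one-liner, and unlike a special vdev it isn’t a point of failure, since cache devices can be removed again at any time. Assuming a pool named tank and an example device name:

```shell
# Add an NVMe drive as an L2ARC cache device (device name is an example)
zpool add tank cache /dev/nvme0n1

# It can be removed later without harming the pool
zpool remove tank /dev/nvme0n1
</imports>
```

Losing a cache device just means losing cached reads, not data, which is why it doesn’t need to be mirrored the way a special vdev does.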

So then with the performance penalty that comes with cache misses, is it worth the expense trying to alleviate it based on how often you encounter that problem?

For me, it’s a No.

And for most home NAS instances, the answer will be No. Home business NAS is a different story, but whether it’ll benefit from special devices or an L2ARC is still going to come down to use case. Rather than trying to alleviate any performance penalty reading from and writing to the NAS, your money is probably better spent adding an NVMe scratch disk to your workstation and just using your NAS for a backup.

One thing I think we all need to accept is simply that not every problem needs to be solved. And in many cases, the money it would take to solve a problem far exceeds what you would be saving – in terms of time and/or money – by solving it. Breaking even on your investment would likely take a long time, if that point ever comes.

Sure pursuing adding more NVMe to Nasira would be cool as hell. I even had an idea in mind of building a custom U.2 drive shelf with one or more IcyDock MB699VP-B 4-drive NVMe to U.2 enclosures – or just one to start with – along with whatever adapters I’d need to integrate that into Nasira. Or just build a second NAS with all NVMe or SSD storage to take some of the strain off Nasira.

Except it’d be a ton of time and money acquiring and building that for very little gain.

And sure, having a second NAS with all solid-state storage for my photo editing that could easily saturate a 10GbE trunk sounds great. But why do that when a 4TB PCI-E 4.0×4 NVMe drive is less than 300 USD, as of the time of this writing, with several times the bandwidth of 10GbE? Along with the fact I already have that in my personal workstation. Even PCI-E 3.0×4 NVMe outpaces 10GbE.

A PXE boot server is the only use case I see with having a second NAS with just solid-state storage. And I can’t even really justify that expense since I can easily just create a VM on Cordelia to provide that function, adding another NVMe drive to Cordelia for that storage.

Adding an L2ARC might help with at least my pictures folder. But the more I think about how I use my NAS, the more I realize how little I’m likely to gain adding it.

More storage, additional hardware upgrades

Here are the current specs:

CPU: Intel i7-5820k with Noctua NH-D9DX i4 3U cooler
RAM: 64GB (8x8GB) G-Skill Ripjaws V DDR4-3200 (running at XMP)
Mainboard: ASUS X99-PRO/USB 3.1
Power: Corsair RM750e (CP-9020262-NA)
HBA: LSI 9400-16i with Noctua NF-A4x10 FLX attached
NIC: Mellanox ConnectX-3 MCX311A-XCAT with 10GBASE-SR module
Storage: six (6) mirrored pairs totaling 66TB effective
SLOG: HP EX900 Pro 256GB
Boot drive: Inland Professional 128GB SSD

So what changed from previous?

First I upgraded the SAS card from the 9201-16i to the 9400-16i. The former is a PCI-E 2.0 card, while the latter is PCI-E 3.0 and low-profile – which isn’t exactly important in a 4U chassis.

After replacing the drive cable harness I mentioned in the previous article, I still had drive issues that arose on a scrub. Thinking the issue was the SAS card, I decided to replace it with an upgrade. Turns out the issue was the… hot swap bay. Oh well… The system is better off, anyway.

Now given what I said above, that doesn’t completely preclude adding a second NVMe drive as an L2ARC – just using a U.2 to NVMe enclosure – since I’m using only 3 of the 4 connectors. The connectors are tri-mode, meaning they support SAS, SATA, and U.2 NVMe. And that would be the easier and less expensive way of doing it.

This also opens things up for better performance. Specifically during scrubs, given the growing pool. I also upgraded the Mellanox ConnectX-2 to a ConnectX-3 around the same time as the X99 upgrade to also get a PCI-E 3.0 card there. And swap it from a 2-port card down to a single port.

The other change is swapping out the remaining 4TB pair for a 16TB pair. I don’t exactly need the additional storage now – Nasira wasn’t putting up any warnings about storage space running out – but it’s better to stay ahead of it, especially since I just added about another 1.5TB to the TV folder with the complete Frasier series and a couple other movies. And more 4K upgrades and acquisitions are coming soon.

One of the two 4TB drives is also original to Nasira, so over 7 years of near-continuous 24/7 service. The 6TB pairs were acquired in 2017 and 2018, so they’ll likely get replaced sooner than later as well merely for age. Just need to keep an eye on HDD prices.

To add the new 16TB pair, I didn’t do a replace and resilver on each drive. Instead I removed the 4TB pair – yes, you can remove mirror vdevs from a pool if you’re using TrueNAS SCALE – and added the 16TB pair as if I was expanding the pool. This cut the time to add the new pair to however long was needed to remove the 4TB pair. If I did a replace/resilver on each drive, it would’ve taken… quite a bit more than that.
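
TrueNAS SCALE exposes this through the GUI, but under the hood it’s the ZFS device-removal feature. From the command line the equivalent would be roughly this – pool, vdev, and disk names here are examples, so check zpool status for your actual vdev name:

```shell
# Evacuate and remove the old 4TB mirror; its data is copied to the remaining vdevs
zpool remove tank mirror-5

# Watch the evacuation progress
zpool status tank

# Once removal completes, add the new 16TB pair as a new mirror vdev
zpool add tank mirror /dev/sdi /dev/sdj
```

The removal runs in the background while the pool stays online, which is why the whole swap only takes as long as the evacuation.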

Do the math and it’s about 111 MiB/sec

Obviously you can only do this if you have more free space than the vdev you’re removing – I’d say have at least 50% more. And 4TB for me was… trivial. But that obvious requirement is likely why ZFS doesn’t allow you to remove parity vdevs – i.e. RAID-Zx. It would not surprise me if the underlying code actually allows it, with a simple if statement cutting it off for parity vdevs. But how likely is anyone to have enough free space in the pool to account for what you’re removing? Unless the pool is brand new or nearly so… it’s very unlikely.

It’s possible they’ll enable that for RAID-Zx at some point. But how likely is someone to take advantage of it? Whereas someone like me who built up a pool one mirrored pair at a time is a lot more likely to use that feature for upgrading drives since it’s a massive time saver, meaning an overall lesser risk compared to replace/resilver.

But all was not well after that.

More hardware failures…

After installing the new 16TB pair, I also updated TrueNAS to the latest version only to get… all kinds of instability on restart – kernel panics, in particular. At first I thought the issue was TrueNAS. And that the update had corrupted my installation. Since none of the options in Grub wanted to boot.

So I set about trying to reinstall the latest version. Except the installer failed to start up.

Then it would freeze in the UEFI BIOS screen.

These are signs of a dying power supply. But they’re actually also signs of a dying storage device failing to initialize.

So I first replaced the power supply, since I had more reason to believe it was the culprit. The unit is 10 years old, for starters, and had been in near-continuous 24/7 use for over 7 years, even though it was connected to a UPS for much of that time. And it’s a Corsair CX750M green-label, which is known for being made from lower-quality parts.

But, alas, it was not the culprit. Replacing it didn’t alleviate the issues. Even the BIOS screen still froze up once. That left the primary storage as the only other culprit. An ADATA 32GB SSD I bought a little over 5 years ago. And replacing that with an Inland 128GB SSD from Micro Center and reinstalling TrueNAS left me with a stable system.

That said, I’m leaving the new RM750e in place. It’d be stupid to remove a brand. new. power supply and replace it with a 10 year-old known lesser-quality unit. Plus the new one is gold rated (not that it’ll cut my power bill much) with a new 7-year warranty, whereas the old one was well out of warranty.

I also took this as a chance to replace the rear 80mm fans that came with the Rosewill chassis with beQuiet 80mm Pure Wings 2 PWM fans, since those have standard 4-pin PWM connectors instead of needing to be powered directly from the power supply. That simplified wiring just a little bit more.

Next step is replacing the LP4 power harnesses with custom cables from, likely, CableMod, after I figure out measurements.