The simplest solutions will often present themselves, provided you’re willing to open your mind to them.

If anything could be called the theme of the transition from Absinthe to Amethyst, that would have to be it. Building the loop for Amethyst was initially challenging, simply because I kept overthinking it, anticipating that the overall layout would be much more complicated than it needed to be. But the less I thought about it, the simpler the solution became.

Let’s start with Absinthe and the system specifications:

System specifications:

  • CPU: Intel i7-5820k
  • RAM: 16GB DDR4-2800
  • Mainboard: ASUS X99-A/USB3.1
  • Graphics: EVGA GTX 1080 SC
  • SSD: Samsung 950 PRO NVMe
  • Power: Corsair RM1000
  • Chassis: Corsair 750D

Water cooling specifications:

  • Radiators:
    • Top: AlphaCool XT45 360mm
    • Bottom: AlphaCool ST30 240mm
    • Front: AlphaCool XT45 240mm
  • Radiator fans:
    • Top: Nanoxia Deep Silence 120mm 1300 RPM
    • Bottom: Bitfenix Spectre Pro 120mm
    • Front: Nanoxia Deep Silence 140mm
  • Pump: AlphaCool D5 with HF D5 acrylic mod top
  • Reservoir: Bitspower 100mm with Z-Cap I and II clear
  • CPU block: Watercool Heatkiller IV
  • GPU block: Aquacomputer kryographics with clear window
  • Tubing: 3/8″x1/2″ PETG
  • Coolant: Mayhem’s X1 Clear

* * * * *

Final specifications

None of the main system specifications changed, only the specifications for the water cooling loop.

  • Radiators:
    • Top: AlphaCool XT45 360mm
    • Bottom: AlphaCool ST30 240mm
  • Radiator fans:
    • Top 360mm: Cougar CF-D12HB-W
    • Bottom 240mm: Bitfenix Spectre Pro 120mm
  • Pump: AlphaCool D5 with HF D5 acrylic mod top
  • Reservoir: Bitspower 200mm with Z-Cap I and II clear
  • CPU block: Watercool Heatkiller IV
  • GPU block: Aquacomputer kryographics with clear window
  • Tubing: 3/8″x1/2″ and 1/2″x5/8″ PETG
  • Coolant: PrimoChill Liquid Utopia

Specifically, there are now only two radiators where there were three, and the fans on the top radiator have also changed.

* * * * *

New parts

So no system parts upgrades this time around. There is no new generation to the GTX 10xx series as of this writing, and there’s no need (or desire, given current pricing) to upgrade her to the GTX 1080 Ti. And I’m not planning a platform upgrade for the next several years.

I removed the front 240mm radiator to make room for hard drive bays. My wife has an uncanny ability to fill up a 500GB solid-state drive, so a bank of four 1TB HDDs should provide some storage longevity.

This won’t affect temperatures to any significant degree. I probably could’ve taken her system down to just the 360mm radiator and been mostly fine. It’ll also make for a simpler loop that will be easier to fill and drain.

And while I love the Nanoxia fans for being very quiet, they’re green. This worked well for Absinthe. Not so much for Amethyst. It was not easy finding similarly quiet fans that would not clash with the system theme – Noctua was NOT an option here.

I turned to the Cougar CFD series, specifically the white LED model. These are rated comparably to the CF-V12H fans I use in Mira (ratings are for 12V operation):

  • Air flow: 64 CFM (109.2 m³/h) at 1200 RPM
  • Static pressure: 1.74 mmH2O
  • Noise: 16.6 dB(A)

Three of these went to the top radiator, and a 140mm version is the rear exhaust fan. These provide white lighting in the upper area of the mainboard. The rest of the fans are not being changed, including leaving the front Nanoxia fans since those… aren’t all that important to the internal look of the system.

* * * * *

New color, new coolant

In the lead-up article to this, I mentioned wanting to use purple coolant, similar to what I saw in a system called Chimaera. Primarily to take advantage of the clear tubing and clear-top water blocks, clear reservoir, clear pump housing… I think you get the drift.

I also wanted to flood the system with bright white light. For that I bought a spool of pure-white LEDs, but never actually did anything with them. My wife, however, wanted to flood the system with purple light, similar to how Absinthe was flooded with green light.

Since I mentioned the white LED fans above, you can probably guess how that turned out. No, I didn’t win. Instead we compromised.

I still wanted the purple coolant, but I managed to bring my wife on board, and get her off the idea of using purple light, by recommending UV coolant. I initially looked at the PrimoChill Vue UV Violet.

Except, unbeknownst to me at the time I bought it – and I don’t think this provision was added until after I bought it – you can’t run the Vue coolant in a system for more than 8 hours at a time, or it will break down in a matter of weeks, and the sediment additive that gives the coolant its pronounced effect will start clogging blocks and such.

But even at 8 hours per day, the coolant should be flushed and changed in 4 to 6 months.

So no thanks.

But having sold my wife on the UV Purple, I needed a new option. Initially I looked at UV Purple transparent coolants, for which there seemed to be only two options: Koolance (hopefully the color for the bulk option is more accurate) or PrimoChill. But to better control the final color – again trying for Siberian amethyst – I opted for clear coolant and dye.

And from what I could find, only one company sells a UV Purple dye: PrimoChill. Thankfully my local Micro Center had it in stock, so I was able to buy it locally. For clear coolant, I opted for PrimoChill’s Liquid Utopia, a concentrated additive you mix with a gallon of distilled water. It is also included in the retail packaging for PrimoChill’s LRT tubing, which is where I got mine.

* * * * *

Bigger is better

Or at least longer is better, when you’re talking about reservoirs. (Try to keep your mind in the PG zone…) The previous reservoir was only 100mm, because of where it was placed when Absinthe was first built:

When I moved that reservoir forward while rebuilding her loop for three radiators, I didn’t replace it with a longer tube. I kept the short one, since it still allowed for a direct return from the CPU block, and with clear coolant there really wasn’t a need for a longer reservoir tube.

But since we opted to swap to colored, UV-reactive coolant, I went with a longer reservoir tube as well to show it off: namely the 200mm Bitspower reservoir tube.

* * * * *

Showing off the graphics card

Colored coolant with a clear-top GPU block means wanting to show it off. But there aren’t many options for doing this in a chassis not already built for it. In the previous blog post, I showed one possible option:

This is the Cooler Master vertical graphics card holder. Unlike other options – such as this one from MNPCTech – it is intended to replace the expansion card slots in your chassis. And it works well if you’re using a chassis with more than the standard number of slots – such as the 750D – or have only one graphics card and no other expansion cards, which likely describes most gaming builds.

It’s intended for Cooler Master’s chassis as well, and using it in the 750D required… some modifications.

Metal shears took care of those without much difficulty. The cuts aren’t completely clean, but they’re clean enough.

The only complaint I have about the mount is actually with the included riser cable: it doesn’t clip onto the tail of x16 cards, allowing for some cable sag, as you’ll see in later pictures. So if anyone from Cooler Master happens upon this article, please correct that.

* * * * *

Building the loop

Not having the front radiator simplified things in some ways. Same with the vertical GPU.

Let’s start with the lower radiator, to which the pump is also attached. I had some 5/8″ (16mm) OD tubing sitting around from when I was initially looking at using Nanoxia lighted fittings. And I also had a few EK 16mm fittings for some reason. So for the initial connection coming out of the pump and going to the bottom radiator, I opted for this.

This, I felt, looked a lot better than using the thinner 1/2″ OD tubing and bulkier PrimoChill Revolver fittings. The rest of the fitting assembly is… interesting. To get it lined up, I used a Swiftech 90° fitting, an 8mm EK extension fitting, a Swiftech dual-45° rotary fitting, and an AlphaCool 4-way fitting, with a 10mm Koolance male-to-male fitting connecting that to the radiator.

Sometimes you just need to improvise. And that wasn’t the only place. Despite having to acquire additional fittings, I’m glad I had a lot already on hand.

From the lower radiator, I needed to figure out how to get the flow running to the graphics card. Because of the jet plate, the inlet to the graphics card has to go through the specified port. So I initially tried this:

As you can see, that was complicated. The idea was to keep the coolant flow out of the way of the GPU block so you could see the entire face of it. But I quickly realized this wasn’t going to work. Again, it’s complicated, and all the 90° fittings make it restrictive as well. I needed a better solution.

This takes the coolant flow in front of the block, but also puts it directly in front of the jet plate, helping to partially obscure it. But it’s also a lot less restrictive, and a lot less complicated. What you can’t see well is the string of fittings needed to get to this. Several extension fittings coming up from the radiator, eventually to a 45° fitting, another 15mm extension, then finally to a 90° fitting to meet the tubing.

As the above picture shows, I initially planned to take the GPU outlet to the top radiator like in the previous loop, but instead opted to have it go to the CPU using this.

From there, getting to the top radiator was a matter of using a long extension fitting coming out of the CPU (I easily could’ve used clear tubing; not sure why I didn’t) into a 90° fitting. The vertical piece meets a pair of 90° fittings screwed into each other: a 90° fitting on the radiator into which I have another 90° fitting, giving an offset that reaches about the middle of the radiator and comes straight down to meet the tubing coming from the CPU.

Then with the longer reservoir tube, getting back to the reservoir was straightforward. Literally.

That’s a black Swiftech 15mm extension fitting into an EK black-nickel dual-45° fitting coming off the radiator, and an EK 90° fitting on the top of the reservoir.

* * * * *

Cleaning and preparing the loop

Again back to PrimoChill – I swear they did NOT sponsor this build (though if any PrimoChill representative is reading this, I’m open to discussions). But for cleaning out the blocks and radiators, there were really two options: Mayhem’s Blitz, or PrimoChill’s System Reboot. Since I didn’t feel like dealing with harsh chemicals, I opted for the latter.

There are two ways to use it, depending on how concentrated you want to go: either add it into a 1-gallon jug of distilled water, or fill your loop with distilled water and add the entire bottle into your loop. If you’re trying to clean out from using dyed or pastel coolant, I’d highly recommend the latter. Either way, let the cleaner circulate for at least 24 hours and up to 48 hours. And if you’re cleaning out from using dyed or pastel coolant, let it run for the full 48 hours. You may also need to repeat.

I added it to the gallon of distilled water and let it run for close to 36 hours. Then I used the rest of the cleaner to rinse out what was in there, and followed that with distilled water.

After that, I pulled everything apart to rinse everything individually with distilled water before piecing it all back together to add the coolant and dye.

* * * * *

Lighting

Initially I intended to use Darkside UV LEDs for the lighting, but ultimately decided they were too overpowering. They’d probably work well with opaque coolant, but not transparent. Any UV reaction was overpowered by their bright violet light. Note: this was taken with my cell phone, not my DSLR, and with not the greatest white-balance settings. But it clearly shows the reservoir and pump housing lit violet, with virtually no UV reaction from anything else, rather than the UV glow I was going for.

So I opted for cold cathodes, which don’t have nearly the overpowering violet light the Darkside LEDs had, meaning they also aren’t nearly as bright. Instead they provide a gentle violet hue while allowing the UV effect to shine through. So much so that, with an 11″ cathode behind the reservoir and the white light above and nearby, it looks like a chunk of glowing amethyst. Unfortunately it isn’t something I’ve been able to capture on camera yet, so I’ll have to play around with camera settings later.

But one thing’s for sure: the cold cathodes give the loop a nice glow without any overpowering violet color. And the white LEDs from the fans aren’t overpowering it either. Specifically the system has an 11″ cold cathode behind the reservoir, and two 4″ cold cathodes to shine on the pump housing and GPU block. Unfortunately the acrylic on the GPU block is preventing any UV glow from coming through.

* * * * *

Update 2018-02-26: In one of the earlier pictures you can see a bank of four hard drives toward the front of the chassis. And in the picture immediately above, you can see a pink line beneath the reservoir. The HDDs were connected to the mainboard using SATA cables with a UV-reactive coating (hence the glow) that did not have latches. They were the only cables I could immediately grab that I knew were SATA III.

Well, tonight that created an issue, which I temporarily worked around by disabling the SATA ports on the mainboard – primary storage is an NVMe SSD. A quick trip to Micro Center and 16 USD later for SATA III cables with latching connectors, and the issue was resolved, as evidenced by the system coming up quickly and the desktop loading nearly instantly upon login.

So let that be a lesson: don’t use SATA cables that lack latches, especially when connecting an HDD RAID array. The latches all but guarantee the connectors are seated, giving you a trouble-free, fast connection.

Why police may not act on tips

After the recent tragedy in Florida, one question I’ve seen raised numerous times can be boiled down to this: why did the FBI and local police not act on any of the tips they’d received about the shooter?

It comes down to two concepts in law: reasonable suspicion and probable cause.

The former allows a police officer to temporarily detain someone, and the officer must have a specific, articulable, demonstrable reason for doing so.

A common example is fitting the description of a suspect. Or if the police see you come out of a known drug hot-spot, they may detain you to ask why you were in the area, as they have reason to suspect you were there to buy or sell drugs.

Probable cause is a higher standard. Basically it means the officer has evidence that you have broken some law. A traffic stop is an example of an officer having probable cause. But that is probable cause only to detain you.

To search your person or vehicle, the officer needs to be able to articulate 1. what they expect to find, 2. why they expect to find it, and, most importantly, 3. what evidence gives them reason to believe they will find it.

These are your basic protections against police intrusion. All of us have them. Including those who would later become mass shooters.

Given this, it should be quite obvious why the police never acted on any prior tips regarding the Florida high school shooter: they likely could not even establish reasonable suspicion.

Any action by the officer must be demonstrable to a court for it to stand up. If an officer tells a detained person they match the description of a suspect, the officer needs to be able to articulate which suspect they resemble.

Reasonable suspicion isn’t “I heard so-and-so talk about wanting to shoot up the school”. That actually isn’t specific enough. It might be enough that the police may decide to question the individual, but without anything more, they cannot detain the person and most certainly cannot arrest them.

Tips have to be actionable, and if a tip doesn’t provide enough information to establish reasonable suspicion, the police won’t act on it beyond maybe asking questions. And without anything more specific, and without any actual, demonstrable evidence the person in question is up to no good, the police cannot do anything more.

To do anything more would be to violate the rights of the person they are investigating, which will sink any chance of any evidence seeing the light of day in Court. Even if that person is actually planning a mass homicide incident like what occurred in Florida.

Admitting defeat

Build Log:

As you can probably tell, this hasn’t been a huge priority for me, given how many months it’s been since the last update. I haven’t given up on this project, though. I’ve just had a lot of other things going on in the meantime while also figuring out what to do with this.

First, I’ve abandoned the IKEA SEKTION cabinet. And the loose rack rails. Part of the concern came with trying to make a frame with the loose rack rails to which the SEKTION cabinet would be attached. Loose rails are good for lightweight equipment cabinets: small network switches (unmanaged, not managed), patch panels, light audio equipment, etc. The latter point especially since the company that distributes the rails — Reliable Hardware Co. — sells other equipment and parts for audio producers, along with other small parts for making audio racks.

And while I was able to… mostly create a cabinet from these in the past, they’re not really suitable for my current goals. I don’t have the experience, space, or tools to make the frame I’d need.

Pre-built rack frame

So to eliminate that headache, I ordered a 15U open frame cabinet, 600mm (~24″) deep. What was delivered, unintentionally, was the adjustable-depth 15U cabinet – AR6X15-8001. Overall this means a cabinet a little larger than intended, with a completely different structure than initially considered. Mostly.

Previously I was planning to build a frame that would be enclosed inside an IKEA SEKTION kitchen cabinet. The difference here is that I’m no longer building a frame for loose rack rails. This is, arguably, what I should’ve done from the outset.

Here’s the design of the open rack:

Here are the actual dimensions based on measurements:

  • Width: 20.125″ (20-1/8″ or 511mm)
  • Depth: 9.5″ (241mm) plus set depth adjustment for rack
  • Height: 31.5″ (800mm)

The holes on the lower supports are 16″ apart, 8″ each direction off center. They’re 3/4″ diameter and 1-3/8″ from the leading edge. Intended for bolting the frame into concrete or a subfloor.

On the uprights you’ll also notice there are several sets of holes, though the drawing is misleading in how many are actually there. There are a pair of 5/8″ holes at the top, and a pair at the bottom, each 7/8″ in from the edge. And two 9/32″ holes down the middle. Useful for combining several frames together or building it into a table.

I have the frame assembled and have been using it to hold several servers along with my 10GbE switch. BUT… it won’t hold slide rails! The M6 threaded screw holes are the proper EIA-310 spacing, but there isn’t enough spacing between the corner posts to fit a standard-width server with standard slide rails. It has 17.5″ of clearance instead of the EIA-310 standard 17.75″ (450mm).
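To put a number on how close it is, here’s a quick sanity check, assuming the EIA-310 figure of 17.75″ (nominally 450mm) between the mounting rails; the frame clearance is my own measurement:

```python
# Hypothetical sanity check: EIA-310 calls for 17.75 in between the
# mounting rails; this frame measures 17.5 in between the corner posts.
EIA_310_CLEARANCE_IN = 17.75
frame_clearance_in = 17.5

shortfall_in = EIA_310_CLEARANCE_IN - frame_clearance_in
shortfall_mm = shortfall_in * 25.4  # convert inches to millimeters

print(f"Shortfall: {shortfall_in:.2f} in ({shortfall_mm:.2f} mm)")
# -> Shortfall: 0.25 in (6.35 mm)
```

A quarter inch doesn’t sound like much, but it’s enough to keep standard-width slide rails from seating.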

To hold the NAS and another server while finding a better option, I needed to use a rear-mounting kit.

So… Plan B.

This one uses cage nuts. While that isn’t necessarily a guarantee it’ll have the proper spacing between the corner posts, you can see quite easily that it’ll have plenty of room, since the square holes for cage nuts have a standard size.

While listed as 15U usable height, the rack I received is actually 16U usable height, and the corner posts are 18U tall (31.5″ or 800mm). The top and bottom rack units are taken by cross brackets, which extend a little above and below the corners for a total height of about 32″ (813mm).

The top brackets extend about 1-3/8″ (35mm) from the uprights, adding about 2.75″ (70mm) to the depth at the top. The bottom brackets extend 4-7/8″ (124mm) out, adding about 9.75″ (248mm) to the total depth. The frame is 21.25″ (540mm) wide. So at a 24″ set depth, the frame will have a 33.75″ x 21.25″ footprint (857mm x 540mm). This frame also keeps the two large-diameter holes in the bottom bracket for bolting into a floor.
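The footprint arithmetic above can be sketched out like so (these figures are my own measurements, not manufacturer specs):

```python
# Rough footprint arithmetic for the adjustable-depth frame, in inches.
frame_width = 21.25
set_depth = 24.0            # depth the adjustable rack is set to
bottom_bracket_ext = 4.875  # 4-7/8" per bottom bracket, front and rear

# The bottom brackets stick out at both the front and the rear.
total_depth = set_depth + 2 * bottom_bracket_ext

print(f'Footprint: {total_depth}" x {frame_width}"')
# -> Footprint: 33.75" x 21.25"
```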

Along the side of the corner posts are additional screw holes, one for each rack unit, similar to the mount holes on the loose rails I was initially going to use. This is useful if you plan to join multiple frames together. The corner posts are shy of 4″ deep.

So the question then is how to enclose it.

Enclosing the frame

Going back to the forum post that inspired this, I’m going to use something about 2″ thick as a spacer, into which I’ll also be screwing down plywood to enclose the cabinet on the sides. The back will also be enclosed, with ventilation out the top, similar to this:

except likely using PC case fans — either 80mm or 120mm — instead of the single 6″ fan in the video.

There aren’t great mounting holes for the front or back, so I’ll need to extend the side panels out far enough for a door on the front and the ventilation system on the back. That just leaves figuring out the bottom and top.

Top and bottom

The top isn’t going to be too difficult, since the cabinet will have a thick top to support whatever I decide to put on it: a computer or two, a monitor, but nothing too heavy. Maybe 100lbs at most resting on the top, supported by the metal frame. So about 1-1/2″ plywood (likely two face-glued 3/4″ sheets with a ton of screws) should do the trick.

Same with the bottom, as references I’ve consulted said it should be able to hold the weight of a fully-loaded frame without a problem. Though to properly fit 1/2″ lag screws I’ve sourced locally, I may need to triple-layer the plywood sheets.

The overall footprint is currently looking to be about 36″x24″. The top and bottom will be attached directly to the frame.

So for now that’s the plan. Just a matter of getting into the hardware store and buying the plywood or laminated boards to make this happen. In the interim, I’ve got a few other projects going on, so who knows when I’ll be able to get there.

A prediction regarding language

With social justice warriors in the United States and much of the rest of the English-speaking world continually labeling words and phrases as racist and what have you, a backlash is inevitable. And I feel we are not far from the tipping point: the social justice bubble bursting.

And that backlash, I predict, will come in a way that will just blindside everyone. What’ll likely end up happening is all the fed up non-SJWs will start bringing racial slurs and other terms-deemed-racist, sexist, etc., back into common parlance. And not just common parlance a la pre-1960s with those words used and intended as racial, sexist, etc., slurs. No, no.

I mean into such common usage they are used as readily as “apple” or “water”, “bottle” or “window”. Not intended as racial or sexist slurs, not used as racial or sexist slurs, but used merely because the general populace will have just gotten sick of continually being told what words and phrases to stop using because someone, typically someone in the leftist media, has decided, arbitrarily, such words and phrases are racist, etc.

And I can already see the response to such a backlash. It’s the same thing already being seen with the left labeling everyone who is:

  • white as racist
  • male as misogynistic
  • heteronormative as homophobic
  • cisgender as transphobic

and so on. “Wow, look at all this bigotry!” they already say, and likely would say if such a backlash happened. And they would be revealing themselves, much as they are now, for the brainwashed automatons they’ve become.

Squatter’s rights and illegal immigration

In June 2016 I wrote an article describing a viable compromise regarding the “terrorist watch list” that would give Democrats what they want — allowing the list to be used to block, albeit temporarily, the ability for someone to purchase a firearm — while still upholding due process. With regard to illegal immigration and some recent stories that are generating buzz online, I think a similar compromise could be possible.

In common and statutory law is a concept known as “squatter’s rights”. The actual legal term is the doctrine of adverse possession. The doctrine allows someone to obtain valid title to real property by openly and notoriously occupying the property for a period of years. As in typically 10 years or more. In that period of time, the adverse possessor must act as a normal property owner. Quoting Wikipedia:

Although the elements of an adverse possession action are different in every jurisdiction, a person claiming adverse possession is usually required to prove non-permissive use of the property that is actual, open and notorious, exclusive, adverse, and continuous for the statutory period.

This doctrine derives from a statute of limitations on the ability for property owners to enforce title to the land and eject anyone adversely possessing it.

The doctrine can also apply to personal property, but the claims are a little more difficult to prove in cases where there is not a government-issued title to the property or other paperwork showing ownership. As an example, let’s say you loan an expensive power tool to a neighbor and then forget about it. The neighbor continues to use it as if they own it, for a significant period of time. And you try to reclaim it and the neighbor refuses, so you sue. The neighbor may be able to claim adverse possession, or at the least abandonment of property.

I hope you can see where I’m going with this.

The stories of illegal immigrants being deported after being in the country for significant periods of time leave me in a little bit of a bind. In one sense, I can readily see the inequity of the issue. At the same time, the rule of law must be upheld. And the law says that those who are not in the United States legally cannot remain once they are discovered and put through the necessary due process to prove their immigration status — yes, the Fifth Amendment still applies even to this.

So can a compromise be reached on this by extending a form of the adverse possession doctrine to immigration?

Similar requirements could be extended here. They must have been in the United States for at least… 15 years, continuously, whether they entered completely under the nose of immigration officials or overstayed a visa. For minors, the limitation would be the later of 10 years or reaching the age of majority – a minor brought illegally into the United States at age 15 would need to reach age 25 to claim immunity, while a 2-year-old would only need to reach age 18.

In that time, they must also have clean records (no felony or violent misdemeanor convictions), be established in the community, and the like. A conditional statute of limitations on immigration enforcement.
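The rule for minors reduces to a simple later-of computation. As a sketch, using the 10-year period and age-18 majority from above (the function name is mine, purely for illustration):

```python
# Sketch of the proposed rule for minors: eligibility comes at the
# later of (age at entry + 10 years) and the age of majority.
def eligibility_age(entry_age: int, age_of_majority: int = 18) -> int:
    return max(entry_age + 10, age_of_majority)

print(eligibility_age(15))  # brought in at 15 -> 25
print(eligibility_age(2))   # brought in at 2  -> 18
```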

After that period of time, provided the other conditions are met, the person would be able to apply to the State Department for legal resident status, setting the stage for citizenship if they want it. And if immigration officials discover the person after that period of time has passed, it could be an affirmative defense to deportation.

However, the burden of proof in both instances, whether coming forward voluntarily or being discovered by immigration officials, will be on the immigrant to demonstrate their continuous presence in the United States, with clear and convincing evidence.

Not everyone will be able to meet that burden. Likely most won’t. And that’s the point.

Some would probably label this “amnesty”. Except this isn’t calling for blanket immunity, and it keeps the burden on the immigrant to prove all needed facts. Meaning they shouldn’t go into it unless they can prove everything necessary. So if they know they’ve been in the country for 20 years but can only demonstrate 10 years of continuous residence, they’d be risking deportation by making themselves known.

And since this would be implemented via statute, the rule of law is preserved.

Misinterpreting Massad Ayoob

Full disclosure: I am an active member of the United States Concealed Carry Association.

For those using accessibility technologies, the quote in the image reads:

A basic principle of American justice holds that a bad man has the same rights as a good man.
— Massad Ayoob

And USCCA’s tweet caption is: “RT if you believe in the rights of the ‘good man’”.

The full quote isn’t nearly as… inspiring as the USCCA and many others have interpreted it. Here’s the paragraph the quote is pulled from:

It is a widespread and dangerous misconception that all criminals are fair game for the bullets of the good guys. A basic principle of American justice holds that a bad man has the same rights as a good man. When the pursuer lets his own sense of justice determine whether the chased is a man with the same rights as his, or a target of opportunity, the stage has been set for tragedy.

So what exactly is Ayoob saying here? To find out, we actually need to back up to just before a large parenthetical that precedes the above paragraph. For continuity, I’ll reproduce the above paragraph in line, but I won’t reproduce the parenthetical, which discusses events from The French Connection:

A man with a gun, lawfully pursuing a fugitive, feels an impulse to shoot that must be resisted despite the excitement of the moment. Civilians, who generally don’t carry guns eight hours a day or receive several hours of justifiable force instruction, tend to be awfully bloodthirsty. The situation is understandable. The private citizen assisting a policeman does, in good faith, what he thinks a policeman is supposed to do. His only learning models are the policemen he sees on the screen, who shoot running suspects with impunity. (omitting parenthetical)

It is a widespread and dangerous misconception that all criminals are fair game for the bullets of the good guys. A basic principle of American justice holds that a bad man has the same rights as a good man. When the pursuer lets his own sense of justice determine whether the chased is a man with the same rights as his, or a target of opportunity, the stage has been set for tragedy.

Clearly Ayoob’s words are being taken completely out of context. The above paragraphs are reproduced from Chapter 3, “The Dangerous Myth of Citizen’s Arrest”. The quote isn’t intended to be inspirational in any way, but rather to serve as a warning.

The full context is in regard to an armed citizen assisting law enforcement in apprehending a fugitive, and the extent to which that citizen can use lethal force.

Speaking in practical rather than legal terms, you are on thin ice any time you pull the trigger on the basis of anyone else’s judgment. Even if that command comes from a sworn officer, even if his judgment was correct, it is always possible that you could misinterpret his orders.

While the quoted statement above is correct with regard to American jurisprudence, it is clearly not intended to be taken in any kind of inspirational fashion. Yet it seems too many do. Because they’ve only seen the quote, not the context.

Now, to hammer home the point that the quote is part of a greater warning about the use of deadly force, I’ll leave you with this from the same chapter:

He who chooses to play the role of Citizen Cop does so at his own peril. A man requested by a police officer to assist the latter must do so, on pain of being a convicted obstructor of justice. But he owes it to himself to watch out for his own interests before those of the law or the community. Confronted with a fleeing felon, it is in the citizen arrester’s best interests to hold his fire. The question of whether he is responsible for the escaped suspect’s future crimes is less imminent and painful than the probability that he will be crucified for using more force than he should have. He must consider every repercussion that his every response could create, in light of laws and public opinion that will damn him from the moment he steps out of bounds.

In short, the citizen arrester must cope with an intensified form of the socio-legal threat that every full-fledged police officer faces every hour of his working day.

Note: if you wish to read the referenced book, you can borrow it through The Internet Archive.

Did Woodrow Wilson regret signing the Federal Reserve Act?

Recently I was in a very heated exchange over the Federal Reserve. My opponent did nothing but peddle various conspiracy theories I’d already heard, which culminated in him calling me a shill since he didn’t really have anything else to say.

One point he made, though, was with regard to Woodrow Wilson regretting signing the Federal Reserve Act. And it is that point I will address herein.

Here’s the quote in question:

I am a most unhappy man. I have unwittingly ruined my country. A great industrial nation is controlled by its system of credit. Our system of credit is concentrated. The growth of the nation, therefore, and all our activities are in the hands of a few men. We have come to be one of the worst ruled, one of the most completely controlled and dominated Governments in the civilized world no longer a Government by free opinion, no longer a Government by conviction and the vote of the majority, but a Government by the opinion and duress of a small group of dominant men.

Now since this was brought up in the course of the referenced conversation, I decided to try to track this down. To see if there was actually any evidence Wilson regretted signing the Federal Reserve Act. I couldn’t find any.

Like the oft-repeated quote attributed to Mayer Amschel Rothschild or Nathan Rothschild, depending on source, I was very skeptical that Wilson actually said or wrote what is quoted above. Attempting to Google the quote brought up numerous conspiracy websites peddling the same thing about the Federal Reserve.

Given how popular the quote was, I knew Wikipedia had to have something on it. And the Talk page for the Wikipedia article on the Federal Reserve Act detailed that the above quote is a quote mine, though it didn’t go into much detail beyond that. But it was a lead, and it was enough for the keyboard brawl I was having at the time.

Looking into it further, I was able to discover it to be an extensive quote mine, manufactured from two sections of The New Freedom: A Call for the Emancipation of the Generous Energies of a People, which Wilson wrote and published in 1913, long before the Federal Reserve Act even made it to his desk. The book is available via Project Gutenberg, so feel free to check my work below if you wish.

Let’s start with this:

I am a most unhappy man. I have unwittingly ruined my country.

These introductory sentences may have been uttered by Wilson at some point, but not with regard to the Federal Reserve; a plausible context would be his dragging the United States into the First World War. They are nowhere in Wilson’s book: the word “unhappy” appears only once, and not in the phrase “unhappy man”, while “ruined my country” never appears at all.

The rest of the quote is mined and assembled from two paragraphs. The first half can be found in chapter 8, called “Monopoly, or Opportunity” (the mined sentences sit in the middle of the paragraph):

However it has come about, it is more important still that the control of credit also has become dangerously centralized. It is the mere truth to say that the financial resources of the country are not at the command of those who do not submit to the direction and domination of small groups of capitalists who wish to keep the economic development of the country under their own eye and guidance. The great monopoly in this country is the monopoly of big credits. So long as that exists, our old variety and freedom and individual energy of development are out of the question. A great industrial nation is controlled by its system of credit. Our system of credit is privately concentrated. The growth of the nation, therefore, and all our activities are in the hands of a few men who, even if their action be honest and intended for the public interest, are necessarily concentrated upon the great undertakings in which their own money is involved and who necessarily, by very reason of their own limitations, chill and check and destroy genuine economic freedom. This is the greatest question of all, and to this statesmen must address themselves with an earnest determination to serve the long future and the true liberties of men.

And the latter half is found in chapter 9, called “Benevolence, or Justice?” (the mined section closes the paragraph):

We are at the parting of the ways. We have, not one or two or three, but many, established and formidable monopolies in the United States. We have, not one or two, but many, fields of endeavor into which it is difficult, if not impossible, for the independent man to enter. We have restricted credit, we have restricted opportunity, we have controlled development, and we have come to be one of the worst ruled, one of the most completely controlled and dominated, governments in the civilized world—no longer a government by free opinion, no longer a government by conviction and the vote of the majority, but a government by the opinion and the duress of small groups of dominant men.

So it is clear that the oft-repeated quote was never actually said or written by Wilson as it is quoted. It is mined from two completely separate paragraphs, in two separate chapters, of Wilson’s book.

And as the quote came from a book published before the Federal Reserve Act was even voted on by Congress, let alone signed by Wilson into law, Wilson could not have been speaking about the Federal Reserve Act.

This does not mean Woodrow Wilson never regretted signing it into law. He very well may have. But the oft-repeated quote cannot be used as evidence to that effect. One would need to find a specific quote showing Wilson explicitly expressing such regret.

And, thus far, to the best I was able to find, no one has shown such a quote.

Massively misunderstanding the Constitution

Article: “America’s Constitution is terrible. Let’s throw it out and start over.”

Setting aside the American self-flagellation that appears to be going on lately, this is not the first article I have encountered calling the Constitution “terrible” and saying we need to scrap it. And generally what I have discovered about those who call for such is a massive amount of misunderstanding about the Constitution and how it is supposed to work.

Along with holding the perspective that the United States is merely a country of provinces, like Canada. In actuality, the United States is a federated republic of independent, sovereign States. Not provinces. States. Each State is a republic of its own, with its own sovereignty, some of which is ceded to the Federal government by way of the Constitution.

Note the word republic as well. Not democracy. Republic. There’s a major difference, yet many keep using the word “democracy” when referring to the United States.

Last year, Ryan Cooper wrote an article for The Week called “The case against the American Constitution” in which he said the Constitution is “falling apart before our very eyes.” I have not read that article in full (I glanced at it just enough to pull that specific phrase), and I will write a specific rebuttal to it later. But I will speculate that there will be very little within it I have not already seen.

The subject of this rebuttal is Cooper’s set of ideas for replacing the Constitution. Ryan Cooper is not the first I have encountered calling for replacing the Constitution. Earlier I wrote a rebuttal to Dr. Sanford Levinson and his article called “Our Imbecilic Constitution”. So ahead of examining Cooper’s points, I am unsure if Ryan Cooper will be presenting anything original.

Especially since Cooper starts out on fallacious footing:

The major problem with America’s Constitution is that it creates a system in which elections generally do not produce functioning governments, and there is no mechanism to break the deadlock (like calling snap elections). Most of the time, control of the House, Senate, and presidency is split between the two parties in some way. Bipartisan compromises to keep government functioning used to be common, but are near-impossible anymore due to extreme party polarization. So as Michael Kinnucan points out, during divided government “there is de facto no legislative body.”

This is not an issue with the Constitution. The Constitution merely sets the framework for a government. It doesn’t specifically prescribe or proscribe how those within the government are to act. Nor does it prescribe or proscribe who is to comprise that government, with some restrictions on age and residency.

That our political system has devolved itself into two major political factions with several smaller factions continually vying for breadcrumbs — though they had a much better showing in 2016 than years prior — is a problem with the People.

To fix the problem, America should aim to make itself more like a proportional parliamentary democracy, by far the most successful and road-tested form of government.

And this completely ignores the underlying problem. The problem the United States faces is not with the Constitution. It is the Federal government. A proper consideration is shrinking the Federal government and returning much of its ill-gotten power to the States from which it stole that power.

After all, the Tenth Amendment reserves to the States, or to the People, all powers the Constitution does not delegate to the Federal government.

This does not mean there are not ways even the foundational system can be improved. Indeed I have entertained some such ideas in the past. For example, I advocate reforming the Electoral College such that the Nebraska/Maine model is universal.

In light of that, let us now entertain Ryan Cooper’s ideas.

1. Get rid of the Senate filibuster.

Okay, this ought to be good.

This would at least allow a party that got the presidency plus both houses of Congress to govern, and could be passed by a simple majority vote in the Senate. However, that sort of unified control only happens every six to 10 years or so, so this reform would only be periodically useful.

His idea doesn’t follow from the premise. The filibuster has nothing to do with whether the party that got the Washington Trifecta would be able to govern. Indeed, look to everything the Democrats had to do in order to pass the Affordable Care Act, the various compromises they had to make to placate other Democrats while holding a “filibuster-proof majority”.

Beyond that, getting rid of the filibuster doesn’t require scrapping the Constitution. Only changing the Senate rules.

2. Radically change the way House members are elected.

One major engine of political extremism in America is the partisan drawing of district boundaries. The United States has the most entrenched two-party system in the world, partly a result of “first past the post” voting, and partly because the parties have locked themselves into place behind enormous legal barricades to third parties.

Actually the latter has more to do with it than the former; the “first past the post” voting rules have little to do with this. Ballot-access laws erect significant barriers that keep third parties, such as the Libertarian Party, from gaining any significant electoral ground. Though third parties did have a very significant showing in the 2016 election.

At the same time, we’ve seen both the Republicans and Democrats enact rules to prevent insurgencies and ensure that favored candidates are the ones winning primaries.

Worse, the ironclad two-party system has proved to be highly vulnerable to an extreme right-wing fringe that protects itself with gerrymandering and other cheating tactics.

Do not pretend the Republicans are the only ones who gerrymander. And gerrymandering is not what protects “an extreme right-wing fringe”, setting aside for a moment that term not being explicitly defined.

Where in any other country the 15-20 percent of the national population that makes up Republican primary voters would have their own small party, instead they now own one out of two parties.

Do I really need to go into how Democrat primaries work? At least the Republicans do not have the concept of “super-delegates”, which, along with all other corruption that has come to light, worked very well to ensure Bernie Sanders had little more than a prayer in the 2016 Democratic primaries.

As the folks at Fair Vote demonstrate, one clever way to solve this problem would be to change the way House members are elected. Instead of drawing one district for every representative, make each district have three seats, allocated by a ranked-vote system.

Such a system could only effectively work in the larger States. But let’s entertain the idea for a moment by pointing out something most likely don’t realize about the Constitution: there is no requirement for districts. No, seriously. Look if you don’t believe me. See Article I, Section 2 of the Constitution of the United States.

A State is granted a certain number of representatives based on population. That is the only advance requirement, other than age and residency of those actually chosen to serve.

The States were free to decide how their Representatives are chosen. For example in the First Congress, all Representatives from Connecticut, New Hampshire, New Jersey, and Pennsylvania were chosen “at-large”. And many States retained at-large representation into the early 20th century.

Obviously in States with only one Representative — Wyoming and Montana come to mind — this is a moot point. But larger States like Texas, California, and New York could adopt a different model that doesn’t rely heavily on district lines.

In other words, this is something that doesn’t require scrapping the Constitution, or even amending it. This is an idea any State could start considering as soon as next year. Provided Congress let them.

The requirement of one district per Representative is given by statute, specifically 2 USC § 2c. So if a State wanted to try something different, they would need to convince Congress to remove that statute, or challenge it in Court.

And while we’re at it, let’s change House elections from every two years to every four years. American lawmakers need time to actually govern, and should not be perpetually seeking re-election.

I doubt four-year intervals would change that. But if we were to change to a four-year interval, it would probably work best to stagger the terms as with the Senate, with about a quarter of the House up for election every year. Obviously that would require amending the Constitution.

3. Neuter the Senate.

And now we enter dangerous territory. The Senate was introduced with the Connecticut Compromise as a means of neutering the power of the largest States, thus preventing those large States from exacting control over the smallest States. The Senate, in short, ensures that all States have some kind of say on a matter, even if they’re not in the final majority vote.

As such, enacting this idea:

However, it might be possible to pass an amendment making the Senate a House of Lords-style institution without real power. Senators could still be elected, but not be able to pass a binding vote on legislation.

would be very, very dangerous.

Read the history of the Connecticut Compromise and you will understand why it was so important to the Constitution’s framework.

4. Elect the president from the House.

This was initially proposed in the Virginia Plan, under which the House and Senate would jointly select the President.

The point of “separation of powers” was to create a check on tyranny, but it has ironically worked to increase tyranny and undermine democracy.

This is again blaming the framework when the blame lies on the actors who maneuvered things in that direction. When a history professor writes in an article on Presidential term limits, “Democratic lawmakers would worry about provoking the wrath of a president who could be reelected”, clearly something went haywire somewhere down the line.

What we need are the proper people in office who can correct that course.

The separate executive branch is a major factor behind the rise of the lawless imperial presidency in the United States, and most other American-style constitutions fell apart due to standoffs between the president and legislature.

The separate Executive Branch was initially created to have a functioning Federal government during the months Congress was not in session. In the first Congresses, sessions only lasted a few months at most. Much like what we see in most States today.

And we don’t have anywhere near a “lawless imperial presidency” in the United States.

In normal countries, the executive is simply part of the legislature.

And here’s more of that self-flagellation I referred to earlier. I don’t think I need to go any further. Especially since his last point is, literally:

5. Throw the entire Constitution in the garbage.

And where does he start off? Attacking the amendment process. No need to respond to that, as I’ve already done so.

Proxmox VE vs VMware vSphere

If you’re seeking to get involved in anything computer-related with the intent of making it a career, two concepts with which you really need to be familiar are virtualization and containerization. The latter is relatively new to things, but virtualization has been around for quite a while.

VMware made virtualization more accessible, releasing their first virtualization platform almost 20 years ago. Almost hard to believe it’s been that long. I started fooling around with a very early version of VMware back when I was still in community college.

And it’s why they are basically the name for virtualization. But they are not the only name.

In my home setup, I have a virtualization server that is an old HP Z600 I picked up a couple years ago. A dual-Xeon E5520 (4 cores/8 threads per processor) that I loaded out with about 40GB RAM (it came with 8GB when I ordered it), a Mellanox 10GbE SFP+ NIC, and a 500GB SSD. The intent from the outset was virtualization. I wanted a system I could dedicate to virtual machines.

Initially I put VMware ESXi on it, simply because it was a name I readily recognized and knew. I used the free version, which you can download online after registering a VMware account. First, let’s go over the VMs I had installed:

  • Docker machine: Fedora 27, 4 cores, 8GB RAM
  • Plex media server: Fedora 27, 2 cores, 4GB RAM
  • Backup proxy: Fedora 27, 2 cores, 2GB RAM
  • Parity node: Ubuntu Server 16.04.3 LTS, 4 cores, 8GB RAM

All Fedora 27 installations use the “Minimal Install” option from the network installer with “Guest Agents” and “Standard” add-ons.

My wife and I noticed that Plex had a propensity to pause periodically when playing a movie, and even when playing music. I didn’t think Plex was the concern, but rather the virtual machine subsystem. Everything is streamed in original quality, so the CPU was barely being touched.

And my NAS certainly wasn’t the issue either, since playing movies or music directly from the NAS didn’t have any issues. So with Plex’s CPU usage nowhere near anything concerning, that pointed to virtualization as the issue: the underlying VMware hypervisor.

This prompted me to look for another solution. Plus VMware 6.5’s installation told me the Z600’s hardware was deprecated.

Enter Proxmox VE.

I’ve been using it for a few weeks now, and I’ve already noticed the virtual machines appear to be performing significantly better than on VMware. All of them. Not just Plex – the intermittent pausing is gone. Here’s the current loadout (about same as before):

  • Docker machine: Fedora 27, 4 cores, 4GB RAM
  • Plex media server: Fedora 27, 2 cores, 4GB RAM
  • Backup proxy: Fedora 27, 2 cores, 2GB RAM
  • Parity node: Ubuntu 16.04.3 LTS, 4 cores, 16GB RAM

A note about Parity: it is very memory hungry, which is why I gave it 16GB this round instead of just 8GB (initially I gave it 4GB). I’m not sure if it’s due to memory leaks or what, but it seems to always max out the RAM regardless of how much I give it.

Plex at least I know uses the RAM to buffer video and audio. What is Parity doing with it?

Proxmox VE out of the box has no limitation on cores either. They don’t limit system use to get you to buy a subscription, so it will use all 16 threads on the Z600. According to the specification sheet, it’ll support up to 160 CPU cores and 2TB of RAM per node. And it’s free to use.

It will, however, nag you when logging into the web interface if you don’t buy a support subscription. And you’ll see error messages in the log saying the “apt-get update” command failed — since you need a subscription to access the Proxmox update repository. But you can disable the Proxmox repository to keep those error messages from showing up, and there are tutorials online about removing the nagging message for not having a subscription.
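If you want to silence the failing “apt-get update” errors, the repository change is a quick edit. A minimal sketch, assuming Proxmox VE 5.x on Debian Stretch; the file paths and the “stretch” release name differ between versions, so verify against the Proxmox wiki first:

```shell
# Comment out the enterprise repository (it requires a subscription).
# Path assumed for Proxmox VE 5.x; verify on your version.
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the free "no-subscription" repository instead.
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list

apt-get update
```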

The lowest cost for that subscription, as of this writing, is 69.90 EUR per physical CPU, not CPU core. So in a dual-Xeon or dual-Opteron, it’d be shy of 140 EUR (~170 USD) per year. Quad-Xeon or Quad-Opteron servers would be shy of 280 EUR (~340 USD). Which isn’t… horrible, I suppose.

The base system is built around Debian Linux and integrates KVM for virtualization. Which basically makes Proxmox a custom Debian Linux distribution with a nice front-end for managing virtual machines. Kind of how Volumio for the Raspberry Pi is a custom Raspbian Jessie distribution.

It also supports LXC for container support. Note that LXC containers are quite different from Docker containers, though there have been several requests to integrate Docker support into Proxmox VE. Which would be great if they did, since that would eliminate one of my virtual machines altogether. But I doubt they’ll be able to cleanly support Docker given what would be involved — not just containers, but volumes, networks, images, etc.

The only hiccup I’ve had with it came while installing Proxmox. First, I had to burn a disc to actually install it. Attempting to write the ISO to a USB drive didn’t work out. Perhaps I needed to use DD mode with Rufus, but following their instructions didn’t work.
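For what it’s worth, on a Linux machine a raw dd write may succeed where GUI imaging tools fail, which is effectively what Rufus’s DD mode does. A sketch, where /dev/sdX and the ISO filename are placeholders for your actual device and download:

```shell
# Identify the USB device first -- writing to the wrong device destroys data.
lsblk

# /dev/sdX and the ISO name are placeholders.
# Write to the whole device, not a partition.
dd if=proxmox-ve.iso of=/dev/sdX bs=4M status=progress
sync
```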

It also did not support the 10GbE card during installation, so I had to re-enable the Z600’s onboard Gigabit port to complete the installation with networking support properly enabled. Once installed, it detected the 10GbE card, and I was able to add it into the bridge device and disconnect the Gigabit from the switch.
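Proxmox handles networking Debian-style through /etc/network/interfaces, so adding the card to the bridge amounts to listing it as the bridge port. A sketch of the relevant stanza, with the interface name (enp3s0) and the addresses as assumptions for illustration:

```
# /etc/network/interfaces (fragment) -- names and addresses are examples
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports enp3s0    # the 10GbE NIC once detected
    bridge_stp off
    bridge_fd 0
```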

This machine will also soon be phased out. I don’t have enough room on this box to set up other virtual machines that I’d like to run. For example, I’d like to play around with clustering — Apache Mesos, Docker Swarm, perhaps MPI. So this will be migrated to a system with dual Opteron 6278 processors on an Asus KGPE-D16 dual-G34 mainboard, which supports up to 256GB RAM (128GB if using unregistered or non-ECC).

I’ll be keeping this system around for a while still, though, since it does still have some use. It’s just starting to really show its age.

OpenVPN on Docker

A self-hosted VPN is a simple and secure way to access your home or small business network. For small businesses, it’s a great way to let your employees work remotely. For the rest of us, it’s also a great way to secure your Internet connection when using unsecured WiFi.

And for a self-hosted solution, OpenVPN is one of the best, and one of the best known as well. It’s free, and there are both desktop and mobile clients available.

Setting it up, however, isn’t nearly as straightforward. But that’s where Docker comes in. If you’ve never played around with Docker… why not? It allows for very clean deployments, and easy cleanup and upgrades, without affecting the host system and anything else installed on it. As you’ll see here with deploying an OpenVPN instance.

In my instance I’m using Docker on Fedora 27 inside a virtual machine running on Proxmox, hosted on an old HP Z600 dual-Xeon workstation (the Xeons are from 2009, not anything to write home about). While the instructions for setting up the container are pretty straightforward, I’ll walk through some of the finer details based on my experience setting it up.

Before setting things up

Hopefully in your research on self-hosting a VPN you’ve discovered that you need a domain name for accessing it. So go to one of the several dynamic DNS hostname services and create a hostname for your home network. Without that hostname, you’re only setting yourself up for problems down the line trying to consistently use your VPN. So if you’re settled on self-hosting a VPN service, do that now.

Personally I use No-IP, and I’ve used them for… about 12 years now. While they do give one hostname for free, if you sign up for them, do yourself a favor and pay the $25 yearly subscription price so you don’t have to keep renewing your hostname every month.

Most home routers have built-in support for dynamic DNS services and will automatically update your hostname with your current IP address, so read your router’s instructions to set it up. Not all routers support all services, so check which services your router supports before choosing one.

Fedora 27 and Docker

One thing needs to be said about using Docker with Fedora, though it applies to other distributions as well: do not use the Docker packages that come with the distribution. Instead, follow the commands on Docker’s website to install the latest Docker Community Edition for your distribution.

The OpenVPN container does not play well with the Docker build distributed with Fedora 27. That build is also several versions behind, and it’s imperative to stay up to date.
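For reference, installing Docker CE on Fedora boils down to adding Docker’s repository and installing from it. These commands follow Docker’s published Fedora instructions at the time, but check their site for the current procedure:

```shell
# Add Docker's repository and install Docker CE.
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf -y install docker-ce

# Start Docker and enable it at boot.
sudo systemctl enable docker
sudo systemctl start docker
```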

With Docker installed, it’s time to pull the container and continue with the installation.

Note: If you will be running other Docker containers to which you want access over the VPN connection – e.g. a MySQL container for a GnuCash database – make sure the Docker bridge (e.g. docker0) is in the same firewall zone as the network adapter, otherwise the firewall will cut you off from being able to access them.
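A sketch of that firewall fix with firewalld (“home” is an assumed zone name; check yours first):

```shell
# Find the zone your network adapter is in.
firewall-cmd --get-active-zones

# Put the Docker bridge in the same zone ("home" is an assumption).
firewall-cmd --permanent --zone=home --add-interface=docker0
firewall-cmd --reload
```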

Installing OpenVPN on Docker

Installing OpenVPN is as simple as pulling the OpenVPN container and setting things up. If you’re familiar with Docker, you’ll notice right away that the container’s documentation uses helper commands you likely haven’t seen before. I copied these instructions mostly verbatim from that documentation, filling in a few details from my own experience.

# Pull the image
docker pull kylemanna/openvpn

# Create the volume and set up the keys
OVPN_DATA="ovpn-data-home" # Call this whatever you want
docker volume create $OVPN_DATA

# Create initial configuration
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig -u udp://[VPN.SERVERNAME.COM]

where, in the last command, VPN.SERVERNAME.COM is the DNS name for your home network. This would be the dynamic DNS name you created earlier.

Updating the configuration

The default configuration for the OpenVPN Docker image uses the Google DNS servers. This may not be desirable depending on what is available on your network that you want accessible – such as mapped drives from a NAS or other services.

So the configuration will need to be updated to push different DNS servers to clients, which requires access to the configuration file in the volume. (Note: highlight, copy, and paste the code below if all the underscores aren’t showing in your browser.)

cd /var/lib/docker/volumes/$OVPN_DATA/_data

The file to edit is “openvpn.conf”, and the lines you’re looking for are:

push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"

You’ll want to modify the DNS server IPs to whatever is used on your home network. You can tweak any other options as you feel necessary – that is well beyond the scope of this article. Just DO NOT touch the “proto” and “port” options.
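If you’d rather script the change than open an editor, a sed one-liner works. This sketch assumes the defaults are Google’s servers (8.8.8.8 and 8.8.4.4) and uses 192.168.1.1 as a stand-in for your local DNS server; it demonstrates on a scratch copy, but on the real system you’d run the same sed against openvpn.conf in the volume’s _data directory:

```shell
# Demonstration on a scratch file; point sed at the real openvpn.conf
# inside the volume's _data directory on the actual system.
conf=$(mktemp)
printf 'push "dhcp-option DNS 8.8.8.8"\npush "dhcp-option DNS 8.8.4.4"\n' > "$conf"

# 192.168.1.1 is a placeholder -- substitute your own DNS server.
sed -i -e 's/dhcp-option DNS 8.8.8.8/dhcp-option DNS 192.168.1.1/' \
       -e 's/dhcp-option DNS 8.8.4.4/dhcp-option DNS 192.168.1.1/' "$conf"

cat "$conf"
```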

Setting up certificates and client profiles

With the volume created, now to create the server certificate:

docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn ovpn_initpki

You will be prompted to create a passphrase for this certificate. So as always make sure to pick a reasonably secure passphrase since you’re securing the key used to generate the client profile for accessing your VPN. When asked for the “Common Name” for the certificates, use the hostname entered earlier when setting up the initial configuration.

After the certificate is generated, you will be prompted for that passphrase again to finish the initial configuration.

Creating the container

I prefer creating containers over just using “docker run”. So these are the commands I’ll be using to create a container for the VPN:

docker create --name [name] -v $OVPN_DATA:/etc/openvpn -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
docker start [name]

where [name] is the name for the container – I used “openvpn”. If the container is ever updated, you can just stop and delete the previous container, then re-run the steps above to create a new one. Since it’s operating off a pre-created volume, all your settings and certificates are preserved.
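So an upgrade, sketched out with “openvpn” as the container name, looks something like this:

```shell
# Pull the newer image, then recreate the container from it.
docker pull kylemanna/openvpn
docker stop openvpn
docker rm openvpn

# Settings and certificates live in the volume, so nothing is lost.
docker create --name openvpn -v $OVPN_DATA:/etc/openvpn \
    -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
docker start openvpn
```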

Now to expose it on the firewall. If you’re running Docker on Ubuntu, this step isn’t necessary.

firewall-cmd --zone=[zone] --permanent --add-port=1194/udp
systemctl restart firewalld

where [zone] is the zone for your network adapter.

If restarting the firewall service kicks you off SSH, you’ll need to recreate the OVPN_DATA variable upon next login.

Exposing it in public

In general, when exposing services where they are accessible outside your network, you want to avoid using default port numbers. Either configure the service to use a different port number, or use the port forwarding on the router to provide a different port number.

By default OpenVPN will run on 1194/UDP. And the OpenVPN container will always use that port number. You’ll notice above that all the configuration left this default port in place. I didn’t publish a different port when creating the container.

So securing your exposed VPN service is relatively easy: pick a random port number, preferably north of 32768, and map it to 1194/UDP for the Docker host. The vast majority of hackers will look only for default port numbers.

If your router does not allow this option, then you will need to publish a different port on the Docker host. Instead of “-p 1194:1194/udp”, use “-p [port]:1194/udp”, where [port] is a random port number. This also means you’ll need to update the firewall configuration as well to expose the random port number.
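Putting that together, with 47292 as a made-up example port:

```shell
# 47292 is an arbitrary example -- pick your own high port.
docker create --name openvpn -v $OVPN_DATA:/etc/openvpn \
    -p 47292:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
docker start openvpn

# Open the new port on the host firewall instead of 1194.
firewall-cmd --zone=[zone] --permanent --add-port=47292/udp
firewall-cmd --reload
```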

Creating client profiles

First, run:

docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn easyrsa build-client-full CLIENTNAME nopass

where CLIENTNAME is the name of the profile you’re creating. For example, if I’m creating a profile for my personal cell phone, I’d call it “Kenneth_Phone”, or even “Kenneth_GalaxyS7” since that is the model I have. That way when I upgrade phones, I can create a new profile for the new phone and revoke the profile for my current phone.

With the profile created, now retrieve it. The following command will save it to the current folder, from which you can copy it to the client device.

docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_getclient CLIENTNAME > CLIENTNAME.ovpn

Before using the profile on the client device, you will need to edit the file. Look for this line:

remote [host] 1194 udp

If you’re exposing a different port externally for your VPN service, you will need to update the 1194 port number to the port number you’re using.

Backing up your configuration

Now that you have your VPN set up, you likely won’t want to go through that all over again. Especially since it’d require generating new profiles – and certificates – for all your devices. So to avoid that, back up everything in the OpenVPN volume you created earlier.

cd /var/lib/docker/volumes/$OVPN_DATA/_data
tar cvfz ~/openvpn.tar.gz *

Restoring it is straightforward. After recreating the volume, just extract the archive back into the same location.
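A sketch of that restore, assuming the same volume name used above:

```shell
# Recreate the volume, then unpack the backup into it.
OVPN_DATA="ovpn-data-home"
docker volume create $OVPN_DATA
cd /var/lib/docker/volumes/$OVPN_DATA/_data
tar xvfz ~/openvpn.tar.gz
```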


And that’s about it. The profile you’ve created will work with any OpenVPN client, such as the Android OpenVPN client that I use on my cell phone. Just follow the steps above to create profiles for each device from which you want to access the VPN.

Also remember that security here is paramount. If you believe that any of the client profiles have been compromised, you will want to revoke the certificates for those profiles to prevent them from being used to access your VPN.