Proxmox VE vs VMware vSphere

If you’re seeking to get involved in anything computer-related with the intent of making it a career, two concepts you really need to be familiar with are virtualization and containerization. The latter is a relative newcomer, but virtualization has been around for quite a while.

VMware made virtualization more accessible, releasing their first virtualization platform almost 20 years ago. Almost hard to believe it’s been that long. I started fooling around with a very early version of VMware back when I was still in community college.

And it’s why they are basically the name for virtualization. But they are not the only name.

In my home setup, I have a virtualization server that is an old HP Z600 I picked up a couple years ago. It’s a dual-Xeon E5520 machine (4 cores/8 threads per processor) that I loaded out with about 40GB RAM (it came with 8GB when I ordered it), a Mellanox 10GbE SFP+ NIC, and a 500GB SSD. The intent from the outset was virtualization. I wanted a system I could dedicate to virtual machines.

Initially I put VMware ESXi on it, simply because it was a name I readily recognized and knew. The free version can be downloaded after registering a VMware online account. First, let’s go over the VMs I had installed:

  • Docker machine: Fedora 27, 4 cores, 8GB RAM
  • Plex media server: Fedora 27, 2 cores, 4GB RAM
  • Backup proxy: Fedora 27, 2 cores, 2GB RAM
  • Parity node: Ubuntu Server 16.04.3 LTS, 4 cores, 8GB RAM

All Fedora 27 installations use the “Minimal Install” option from the network installer with “Guest Agents” and “Standard” add-ons.

My wife and I noticed that Plex had a propensity to pause periodically when playing a movie, and even when playing music. I didn’t think Plex was the concern, but rather the virtual machine subsystem. Everything is streamed in original quality, so the CPU was barely being touched.

And my NAS certainly wasn’t the issue either. Playing movies or music directly from the NAS didn’t have any issues. So with Plex’s CPU usage nowhere near anything concerning, that pointed to virtualization as the issue. The underlying VMware hypervisor.

This prompted me to look for another solution. Plus VMware 6.5’s installation told me the Z600’s hardware was deprecated.

Enter Proxmox VE.

I’ve been using it for a few weeks now, and I’ve already noticed the virtual machines appear to be performing significantly better than on VMware. All of them. Not just Plex – the intermittent pausing is gone. Here’s the current loadout (about the same as before):

  • Docker machine: Fedora 27, 4 cores, 4GB RAM
  • Plex media server: Fedora 27, 2 cores, 4GB RAM
  • Backup proxy: Fedora 27, 2 cores, 2GB RAM
  • Parity node: Ubuntu 16.04.3 LTS, 4 cores, 16GB RAM

A note about Parity: it is very memory hungry, which is why I gave it 16GB this round instead of just 8GB (initially I gave it 4GB). Not sure if it’s due to memory leaks or what, but it seems to always max out the RAM regardless of how much I give it.

Plex at least I know uses the RAM to buffer video and audio. What is Parity doing with it?

Proxmox VE out of the box has no limitation on cores either. They don’t limit system use to get you to buy a subscription. So it will use all 16 threads on the Z600. Though according to the specification sheet, it’ll support up to 160 CPU cores and 2TB of RAM per node. And it’s free to use.

It will, however, nag you when logging into the web interface if you don’t buy a support subscription. And you’ll see error messages in the log saying the “apt-get update” command failed, since you need a subscription to access the Proxmox enterprise update repository. But you can disable that repository to keep those error messages from showing up, and there are tutorials online about removing the nag message for not having a subscription.
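If you go that route, disabling the enterprise repository is a one-liner. A minimal sketch, assuming the stock file location on Proxmox VE (verify the path on your install before running it):

# Comment out the enterprise repository so "apt-get update" stops erroring
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
apt-get update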

The lowest cost for that subscription, as of this writing, is 69.90 EUR per physical CPU, not CPU core. So in a dual-Xeon or dual-Opteron, it’d be shy of 140 EUR (~170 USD) per year. Quad-Xeon or Quad-Opteron servers would be shy of 280 EUR (~340 USD). Which isn’t… horrible, I suppose.

The base system is built around Debian Linux and integrates KVM for virtualization. Which basically makes Proxmox a custom Debian Linux distribution with a nice front-end for managing virtual machines. Kind of like how Volumio for the Raspberry Pi is a custom Raspbian Jessie distribution.

It also integrates LXC for container support. Note that LXC containers are quite different from Docker containers, though there have been several requests to integrate Docker support into Proxmox VE. That would be great, since it would eliminate one of my virtual machines altogether. But I doubt they’ll be able to cleanly support Docker given what would be involved — not just containers, but volumes, networks, images, etc.

The only hiccups I’ve had with it came while installing Proxmox. First, I had to burn a disc to actually install it, since attempting to write the ISO to a USB drive didn’t work out. Perhaps I needed to use DD mode with Rufus, but following their instructions didn’t work.
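For what it’s worth, on a Linux box the usual approach is to write the ISO raw to the USB stick with dd. A sketch only — the ISO filename and device name below are placeholders, and dd will happily overwrite the wrong drive, so triple-check the device:

# Write the installer image directly to the USB stick (destructive)
dd if=proxmox-ve.iso of=/dev/sdX bs=1M status=progress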

It also did not support the 10GbE card during installation, so I had to re-enable the Z600’s onboard Gigabit port to complete the installation with networking support properly enabled. Once installed, it detected the 10GbE card, and I was able to add it into the bridge device and disconnect the Gigabit from the switch.
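For reference, the bridge configuration lives in /etc/network/interfaces, same as any Debian box. A rough sketch of what that bridge definition looks like with the 10GbE port swapped in — the interface name and addresses are placeholders:

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports enp4s0    # the 10GbE port; the onboard Gigabit port went here during install
        bridge_stp off
        bridge_fd 0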

This machine will also soon be phased out. I don’t have enough room on this box to set up other virtual machines that I’d like to run. For example, I’d like to play around with clustering — Apache Mesos, Docker Swarm, perhaps MPI. So this will be migrated to a system with dual Opteron 6278 processors on an Asus KGPE-D16 dual-G34 mainboard, which supports up to 256GB RAM (128GB if using unregistered or non-ECC).

I’ll be keeping this system around for a while still, though, since it does still have some use. It’s just starting to really show its age.


OpenVPN on Docker

If you’re considering setting up a self-hosted VPN rather than using one of the many service providers available, OpenVPN is one of the best solutions. It’s free and there are both desktop and mobile clients available.

Now I’m not going to get into the pros and cons of using a VPN service. If you’re reading this, I’m going to presume you’ve decided that 1. you want to use a VPN service, so are familiar with the concept and pros/cons, and 2. you want to self-host it. And you’re reading this because you’re investigating options.

Again, OpenVPN is one of the best solutions available. One of the best known as well. Setting it up by hand, however, isn’t nearly as straightforward. But that’s where Docker comes in. If you’ve never played around with it… why not? It allows for very clean deployments and easy cleanup without affecting the host system and anything else installed on it. As you’ll see here with deploying an OpenVPN instance.

In my case I’m using Docker on Fedora 27 inside a virtual machine running on Proxmox, hosted on an old HP Z600 dual-Xeon workstation (the Xeons are from 2009, nothing to write home about). While the instructions for setting up the container are pretty straightforward, I’ll walk through some of the finer details based on my experience setting it up.

Before setting things up

Hopefully in your research on self-hosting a VPN you’ve discovered that you need a domain name for accessing it. So go to one of the several dynamic DNS hostname services and create a hostname for your home network. Without that hostname, you’re only setting yourself up for problems down the line trying to consistently use your VPN. So if you’re settled on self-hosting a VPN service, do that now.

Personally I use No-IP, and I’ve used them for… about 12 years now. While they do give one hostname for free, if you sign up for them, do yourself a favor and pay the $25 yearly subscription price so you don’t have to keep renewing your hostname every month.

Most home routers have built-in support for dynamic DNS services and will automatically update your hostname with your current IP address, so read the instructions for your router to set it up. Not all routers support all services, so check which services your router supports before deciding which one to use.

Fedora 27 and Docker

One thing needs to be said about using Docker with Fedora, though it applies to other distros as well: do not use the Docker packages that come with Fedora. Instead, follow the instructions on Docker’s website to install the latest Docker Community Edition. Whatever distribution you run, you’ll want to install from Docker’s own repositories if yours is among their supported distributions.

The OpenVPN container does not play well with the Docker build distributed with Fedora 27. And that build is also several versions behind, and it’s always imperative to stay up to date.

With Docker installed, it’s time to pull the container and continue with the installation.

Installing OpenVPN on Docker

Installing OpenVPN is as simple as pulling the OpenVPN container image and setting things up. If you’ve read the container’s documentation, the instructions below will look familiar: I copied them mostly verbatim from it, filling in a few details from my own experience.

# Pull the image
docker pull kylemanna/openvpn

# Create the volume and set up the keys
OVPN_DATA="ovpn-data-home" # Call this whatever you want
docker volume create $OVPN_DATA

# Create initial configuration
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig -u udp://[VPN.SERVERNAME.COM]

where, in the last command, VPN.SERVERNAME.COM is the DNS name for your home network. This would be the dynamic DNS name you created earlier.

Updating the configuration

The default configuration for the OpenVPN Docker image pushes the Google DNS servers to clients. That may not be desirable if there are things on your local network you want reachable by name – such as mapped drives from a NAS or other services.

So the configuration will need to be updated to push different DNS servers to clients. For that, you need access to the configuration file in the volume:

cd /var/lib/docker/volumes/$OVPN_DATA/_data

The file to edit is “openvpn.conf”, and the lines you’re looking for are:

push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"

You’ll want to modify the DNS server IPs to whatever is used on your home network. You can tweak any other options as you feel necessary – that is well beyond the scope of this article. Just DO NOT touch the “proto” and “port” options.
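For example, if your router at 192.168.1.1 handles DNS for your local network (a placeholder address, substitute your own), the lines would become:

# substitute your own DNS server(s); you can push more than one
push "dhcp-option DNS 192.168.1.1"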

Setting up certificates and client profiles

With the volume created, now to create the server certificate:

docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn ovpn_initpki

You will be prompted to create a passphrase for the certificate authority. As always, pick a reasonably secure passphrase, since it protects the key used to sign the client certificates for accessing your VPN. When asked for the “Common Name” for the certificates, use the hostname entered earlier when setting up the initial configuration.

After the certificate is generated, you will be prompted for that passphrase again to finish the initial configuration.

Creating the container

I prefer explicitly creating containers with “docker create” over just using “docker run”. So these are the commands I’ll be using to create and start a container for the VPN:

docker create --name [name] -v $OVPN_DATA:/etc/openvpn -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
docker start [name]

where [name] is the name for the container – I used “openvpn”. If the image is ever updated, you can just stop and delete the previous container, then re-run the steps above to create a new one. Since it operates off a pre-created volume, all your settings and certificates are preserved.
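In practice, that update cycle looks something like this, using “openvpn” as the container name:

# Grab the newer image, then replace the container; the volume keeps all settings
docker pull kylemanna/openvpn
docker stop openvpn
docker rm openvpn
docker create --name openvpn -v $OVPN_DATA:/etc/openvpn -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
docker start openvpn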

Now to expose it on the firewall. If you’re running Docker on Ubuntu, this step isn’t necessary since Ubuntu doesn’t use firewalld by default.

firewall-cmd --zone=[zone] --permanent --add-port=1194/udp
systemctl restart firewalld

where [zone] is the zone for your network adapter.

If restarting the firewall service kicks you off SSH, you’ll need to re-set the OVPN_DATA variable when you log back in.
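Meaning simply re-running the same assignment from earlier:

OVPN_DATA="ovpn-data-home"   # same name used when the volume was created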

Exposing it in public

In general, when exposing services outside your network, you want to avoid using default port numbers. Either configure the service to use a different port number, or use port forwarding on the router to map a different external port.

By default OpenVPN will run on 1194/UDP. And the OpenVPN container will always use that port number. You’ll notice above that all the configuration left this default port in place. I didn’t publish a different port when creating the container.

So securing your exposed VPN service is relatively easy: pick a random port number, preferably north of 32768, and have your router forward it to 1194/UDP on the Docker host. The vast majority of attackers scan only for default port numbers.

If your router does not allow this, then you will need to publish a different port on the Docker host. Instead of “-p 1194:1194/udp”, use “-p [port]:1194/udp”, where [port] is a random port number. This also means you’ll need to update the firewall configuration to expose that port instead.
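As a sketch, with 51194 standing in for whatever random port you pick:

docker create --name openvpn -v $OVPN_DATA:/etc/openvpn -p 51194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
firewall-cmd --zone=[zone] --permanent --add-port=51194/udp
systemctl restart firewalld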

Creating client profiles

First, run:

docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn easyrsa build-client-full CLIENTNAME nopass

where CLIENTNAME is the name of the profile you’re creating. For example, if I’m creating a profile for my personal cell phone, I’d call it “Kenneth_Phone”, or even “Kenneth_GalaxyS7” since that is the model I have. That way when I upgrade phones, I can create a new profile for the new phone and revoke the profile for my current phone.

With the profile created, now export it. This will save it to the current folder, from which you can copy it to the client device.

docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_getclient CLIENTNAME > CLIENTNAME.ovpn

Before using the profile on the client device, you will need to edit the file. Look for this line:

remote [host] 1194 udp

If you’re exposing a different port externally for your VPN service, you will need to update the 1194 port number to the port number you’re using.
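Using the same placeholder port as earlier, the edited line would look like:

remote [host] 51194 udp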

Backing up your configuration

Now that you have your VPN set up, you likely won’t want to go through that all over again. Especially since it’d require generating new profiles – and certificates – for all your devices. So to avoid that, back up everything in the OpenVPN volume you created earlier.

cd /var/lib/docker/volumes/$OVPN_DATA/_data
tar cvfz ~/openvpn.tar.gz *

Restoring it is straightforward. After recreating the volume, just extract the archive back into the same location.
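Something along these lines, assuming the same volume name:

docker volume create $OVPN_DATA
cd /var/lib/docker/volumes/$OVPN_DATA/_data
tar xvfz ~/openvpn.tar.gz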

Conclusions

And that’s about it. The profile you’ve created will work with any OpenVPN client, such as the Android OpenVPN client that I use on my cell phone. Just follow the steps above to create a profile for each device you want to connect to the VPN.

Also remember that security here is paramount. If you believe that any of the client profiles have been compromised, you will want to revoke the certificates for those profiles to prevent them from being used to access your VPN.


Simple way to keep compute GPUs cool

First, a little bit of an update on my setup. After noticing it available for a good price from B&H Photo Video, I bought a second Zotac GTX 1060 3GB. And I pulled the setup out of the 4U chassis (Athlon X4 with RX 580) and replaced it with one of my 990FX boards and the two GTX 1060s. Running Windows 10 as well to allow for overclocking.

Specifications:

  • CPU: AMD FX-8350
  • Mainboard: Gigabyte 990FXA-UD3
  • Memory: 8GB DDR3-1866
  • GPUs: Zotac GTX 1060 3GB (x2)
  • Power: Corsair AX860

Interestingly, the acquired GTX 1060 performs about 25% lower than the one I already had, despite being the same model running at about the same clocks. That is why I wanted to put these on Windows 10. Overclocking just the memory by +500 on both cards brought the combined hashrate to a little north of 41MH/s.

But it ran HOT. As in climbing easily over 80C. Couldn’t last 24 hours before shutting down. So something needed to be done. Inside the 4U chassis, there isn’t much intake airflow. And virtually no exhaust.

PlinkUSA IPC-G4380S

The solution drew inspiration from the Mountain Mods Gold Digger chassis lineup (Ascension and U2UFO), which features 20 expansion slots for mounting graphics cards. But there’s something else: the cards are recessed behind 25mm (1 in) deep 120mm fan mounts. Those fans are there to draw heat away from the cards. Not necessary for blower cards, but essential for custom-cooled cards.

Mountain Mods U2UFO “Gold Digger” (Rear)

So initially I did the next best thing: loosely attached a 120mm Corsair SP120, running at full speed, to the rear of the chassis right behind the graphics cards. It has been a while since I had one of these in an actual system, so I forgot how loud they can be. The result?

One card is holding fine at 72C, while the other is running at a nice, chilly 60C, with the hotter card being the one closer to the chassis sidewall.

But with the SP120 running loud, I decided against using that particular fan and attached a BitFenix Spectre Pro to the back instead. Initially the 120mm fan, then a 140mm fan. But I couldn’t quite get the result I was aiming for, and the reason is the lower static pressure. Even after removing the expansion slot covers for the slots immediately adjacent to the GPU coolers to allow for better airflow.

The chassis initially had a Cougar CF-V12HPB providing airflow onto the cards. I switched that for a BitFenix Spectre Pro 140mm, which has 120mm mounting holes as well. I then mounted the Cougar fan on the rear behind the cards, since it has better static pressure while still pushing ~60 CFM.

So that gives decent airflow into the chassis and onto the GTX 1060s with a quieter 120mm fan behind the cards pulling the hot air away. Both cards run in the low to mid 70s while mining.

What would likely improve cooling further is sealing the inset area of the chassis so the fan is all but guaranteed to be pulling air from just the cards. Likely with thick weatherstripping or something like that. So perhaps that’ll be a future revisit depending on what I can find.


I don’t think we have the whole story

Back in 2015 I wrote about Stephanie Hughes and how she was coded because she wasn’t wearing a crew neck shirt as her school dress code mandated. Instead she chose to wear a tank top with a sweater-like garment over top.

Recently a student named Remy was dress coded for… well:

Remy claims she was coded for not wearing a bra. And I’m not buying it. Not for a moment. Instead, I think based on the hysteria around school dress codes and the apparent disproportionate penalization of young women, she manufactured the idea she was coded for not wearing a bra when the likely reason she was coded is a bit more… obvious.

And what’s allowing her claim to propagate comes down to two simple things: the school won’t comment on such situations, and, again, the recent hysteria regarding dress codes, including attempts to label dress codes “fashion censorship”.

A Yahoo! article even pointed out that, while the dress code doesn’t specify anything regarding undergarments — likely because they presume it to be a given — it does say this (emphasis mine): “Tops must cover all parts of undergarments and shall not be low cut or revealing.”

Again, I think it’s obvious why Remy was coded for her choice of apparel. I think she manufactured the claim that she got coded for not wearing a bra to get around the fact that she was wearing something blatantly in violation of her school’s dress code, and she’s hoping to shame the administration for calling her on it. In other words, scream the equivalent of “pervert” at school administrators, likely only the male ones, for having the audacity to call out young women who violate the dress code.

The dress code for Remy’s high school also points this out: “Students who repeatedly dress inappropriately for school may be suspended for defiance.” Just as I’ve said that repeated violation of a workplace dress code is grounds for termination.


As we leave 2017

One lesson all of us should learn ahead of this Christmas given everything that’s happened politically in 2017 and also 2016: RIGHTS limit how the government interacts with the People, PRINCIPLES limit how you interact with everyone else. That is why I’ve spent much of the last several years continually defending PRINCIPLES over rights.

Without the underlying principles of free speech and the presumption of innocence, for example, there is no foundation for the RIGHTS derived from those principles. Yet more and more I see those principles continually violated by people who claim to stand up for the rights derived from those principles.

Lay judgment against others only by the same measure you expect judgment to be laid on you. Treat others how you want to be treated. Respect must always be earned by how you treat others.

Do not be so quick to deduce motive or malice from someone’s actions. Presume someone is innocent when that person is accused of something, regardless of the accusation, regardless of the accuser, and regardless of who is being accused.

I’m certainly not perfect on these principles, but I at least try. Unlike a lot of others in the United States and the world.

At the same time, learn to be grateful for what you have, not envious of what others have. Do not seek to take what you have not earned. And above all, no one can legitimately do for you or someone else that which you cannot yourself morally or legally do.


Apple’s battery woes

Recently Apple confirmed that its iPhones slow down as the battery degrades. A lot of people have taken this to mean that Apple is trying to force people to upgrade – i.e. “planned obsolescence”. Because when is something like this not about corporate greed?

Here’s the long and the short of it.

Rechargeable batteries degrade over time. If you have a laptop, you can use a program like CPUID’s HWMonitor to see the original max charge level and the current max charge level on the battery, measured in mAh, or milliamp-hours. You will see those two numbers deviate more and more over time. It happens with laptop batteries. It happens with your cell phone batteries.

This isn’t some kind of ploy like “planned obsolescence” either. It’s just the nature of rechargeable batteries. They have a limited lifespan.

But given that I said you can use software to determine your laptop battery’s max charge level, that should tell you that your cell phone can also detect its battery’s max charge level. This can allow the underlying operating system to warn you the battery needs to be replaced – if you own something other than an iPhone, that is.

But there’s something else, and it plays right into why Apple’s phones slow down over time: the phone adjusts how much power it consumes based on the battery’s max charge level.

That’s right, either the iOS operating system or the phone’s internal hardware controls are throttling your phone’s performance to preserve battery time. Otherwise you’d find yourself needing to charge your phone more often. But the built-in throttling also has the complementary effect of preserving your battery since, in the case of all iPhone models, you can’t easily replace it.

So there. Yes, it’s bad that Apple didn’t disclose it – or if they did, it’s buried in something no one’s read. But there isn’t anything sinister behind it. Since few seem to understand the underlying technology and electronics, though…

Which would you rather have: a phone that artificially throttles itself in steadily increasing but minute amounts over time based on battery wear, or a battery that drains faster and faster as time goes on because it’s wearing down and the phone’s unthrottled power draw wears it down even faster?

If you can’t replace the battery easily, it’s best to preserve it as best as possible.


Net neutrality

The various discussions of incidents wherein ISPs have done “shady things” all ignore WHY they made those moves in favor of the simple fact that they happened. Instead the ready assumption is that ISPs did it to extort money from the content platforms, and this has led to presumptions of things like “micro transactions” and all kinds of other fear mongering along the lines of “because they can”, since ISPs in many regions hold a monopoly.

Except businesses typically try to avoid losing customers. A company having a monopoly in a region doesn’t mean they can just do what they want. Not when they still have to answer to municipalities (who answer to voters). And if a business artificially prices customers off their service, that’s not exactly a good thing.

BitTorrent was blocked by Comcast because BitTorrent is designed to saturate an Internet connection when downloading. Even prior to P2P, download managers already existed to take advantage of HTTP protocol flexibility and download files from multiple sources (aka “mirrors”) with the intent of saturating your Internet connection. P2P sharing arguably started with Napster, which gave rise to other P2P network protocols like Gnutella and, eventually, BitTorrent.

Unless throttled in the client software, P2P is designed to saturate an Internet connection. And will saturate an Internet connection, which can affect network availability in a home or apartment, college campus (something I had fun dealing with when I was in college), or a local region.

Video streaming is also designed to saturate a connection. Video streaming protocols will change quality settings based on bandwidth availability between the sender and receiver. We’ve all seen this with YouTube, and streamers have likely experienced it as well when streaming to YouTube or Twitch. All of that has the potential to affect availability for everyone.

So much of the hyperbole over what could happen if the “Open Internet Order” is repealed seems to avoid the question of why ISPs did those “shady” things and merely looks to the fact that it happened. And that does nothing to further any understanding. And it’s fueling a lot of baseless speculation as well.

And the idea of “micro transactions” and extortion comes from a very, very broad misunderstanding of the whole “fast lane” concept. Recall where I said that video streaming is designed to saturate a connection up to the maximum throughput needed to stream the video at the requested quality settings, and that the sender will throttle the video stream if congestion is encountered. In other words, video is very bandwidth-intensive. And given that Netflix and YouTube both support 4K streaming (likely with few consumers right now), this isn’t a problem going away any time soon.

Enter the “fast lanes”. While it’s been portrayed as ISPs trying to extort money from Netflix, there’s actually a much more benign motive here: getting Netflix to help upgrade the infrastructure needed to support their video streaming. It was the start of their streaming service, and its eventual consumption by millions of people, that led to massive degradation in Internet service for millions of other customers who weren’t streaming Netflix or much else.

To alleviate traffic congestion, many metropolitan areas have HOV, or “high occupancy vehicle”, lanes to encourage carpooling. The degree to which this is successful is highly debatable. The “fast lane” concept for the Internet was similar. But when the idea was first mentioned, many took it to mean that ISPs were going to artificially throttle websites that don’t pay up, when what it actually means is providing a separate bandwidth route for bandwidth-intensive applications. Specifically video streaming.

And since these were sources of complaints among the general populace — whose knowledge of computer networking, let alone the structure of the Internet, isn’t much — this led the FCC to talk about regulating the Internet infrastructure. Regulating the Internet is something that governments across the world have been trying to do for at least the last 15 years. And since they haven’t had much luck regulating what happens on the Internet, regulating the infrastructure is the next best thing.

Misconceptions and misunderstandings, and the speculation and doomsday predictions that have come from all of that, lead to bad policy. And it’s fueling much of the current discussion on net neutrality.

Note: The above is a comment I attempted to put on a video for Paul’s Hardware, but the comment appears to have been filtered or deleted.


Dress codes = censorship?

It’s finally happened. I really wish I could say I’m surprised by this, but I would be lying. Dress code enforcement now means female fashion censorship. I wish I was making this up. And I wish I hadn’t already called it.

This comes from Fashion Beans writer Brooke Geller: “Planes, Trains, And School Dances: Censorship In Women’s Fashion”. The article was only recently published. From what I could find embedded in the HTML for the article (Ctrl+U in the browser), it was published on November 16 and updated a few days later.

So let’s get into this.

Spaghetti straps, loose-fitting blouses, and even exposed collarbones have all prompted disciplinary action from disapproving teachers, some of whom have sent these students home early. Why? Because their clothing was apparently inappropriate.

Not “apparently inappropriate”, but in violation of the school dress code. In all incidents I’ve evaluated on this blog, I’ve not found ONE that is not a dress code violation.

With schools standing firm on their policies while outraged reactions intensify, these incidents—which seem to be becoming more and more common—have sparked an important debate on what’s more inappropriate: the length of a teenage girl’s skirt or the way she’s viewed?

Both are actually important. After all, I’m sure you wouldn’t want to see young women attending school in miniskirts or Daisy Dukes…

What exactly is the official reason for these restrictions? According to more than one justification that’s been given, it’s to protect other students from being “distracted” by their female classmates’ bodies.

I’ve said before that school administrators need to STOP giving this as a reason for enforcing the dress code.

Instead of manufacturing an excuse, or saying the young women in question will be “distractions”, the administrators need to merely state they are enforcing the dress code as written. A lot of dress codes are written objectively, and can be objectively enforced. The administrators don’t need to say anything beyond that. And should have copies of the dress code at the ready when pressed.

The problem, however, is the “distraction” excuse has become the thing to which female students and feminists have latched. And it has led to the narrative that dress codes are inherently sexist and are even being used to sexualize prepubescent girls.

“Distraction” was the reason cited when 18-year-old Macy Edgerly was sent home for wearing leggings and a long shirt. Rose Lynn was also told her outfit—leggings, a cardigan, and a top that entirely covered her midriff and cleavage—”may distract young boys.”

Okay, let’s look into these. Macy Edgerly went to school in cropped leggings and a long shirt. At the time, she was a student in the Orangefield Independent School District. While distraction was cited in the moment for Macy being coded, her sister admitted on Facebook that the outfit Macy chose violated the dress code.

The dress code allegedly allowed leggings, but only if what was worn over them adhered to the “fingertip rule” — it must extend beyond the tips of the fingers when the arms are extended loosely at the sides. Macy’s sister, Erica, admitted the shirt didn’t fit that definition.

Rose Lynn’s story comes from Oklahoma. What Brooke is omitting from her statement is that Rose’s top was a tank top. Which is all but universally disallowed under school dress codes.

There’s Deanna Wolf, whose daughter wasn’t allowed in class because she was wearing an oversized sweater over leggings. Or Stacie Dunn, who had to leave work to collect her daughter Stephanie because her outfit exposed her collarbones.

Stephanie’s story I’ve already tackled. Like Rose’s story, the concern in question was the tank top. All students were required to wear a crew-neck shirt. No exceptions.

Deanna’s daughter’s incident was similar to Macy’s in that she chose to wear leggings with an oversized sweater as the top layer. Something her school’s dress code did not allow, contrary to the assertion of the linked article. From page 29 of the student handbook, emphasis mine:

Students may wear yoga pants, tights, leggings, or jeggings as long as they are used as an undergarment covered by shorts, skirts, or dresses that are at least no higher than three inches above the bend of the back of the knee.

In other words, leggings are not to be worn as pants on their own, which is what Deanna’s daughter chose to do. Again, she was objectively in violation of her school’s dress code.

Or Melissa Barber, whose daughter Kelsey was told—by her teacher—that her top was not well suited for the size of her breasts.

The problem with Kelsey’s shirt is the exposed middle of the top. If it had been a completely solid top, she likely would’ve been fine. Even though she has a large bust.

And I say this as a husband to a woman with large breasts. Who has herself gone through dress code issues as well — and who defended herself by pulling out her copy of the dress code every time an issue has come up.

For these mothers, their opinions are much the same: It’s not okay to keep a student from learning because of their outfit. And it’s certainly not all right to imply that an underage girl looks too attractive for her male classmates to be able to concentrate on their own work.

Given that it is both a parent’s and student’s responsibility to know and comply with the dress code, the mother’s opinion is irrelevant. If a student is determined to be in violation of the dress code, they will be kept from the classroom. Just as they would also be sent home from work for not complying with a workplace’s dress code.

And again, the distraction excuse should not be made by school administrators. Instead where dress code violations occur, they need only cite the dress code.

Aside from conservative parents outraged at the idea of a young boy wearing a dress, there’s been little fuss over the length of their shorts or the tightness of their shirts.

High school dress code restrictions may be designed to be non-discriminatory, but it’s disproportionately affecting girls far more than boys. And while that isn’t necessarily illegal, it’s still unfair.

No it isn’t in the least unfair. After all we don’t say that criminal laws are racist if they are disproportionately enforced against one demographic. Okay, rational people don’t say that. We aren’t repealing murder laws because blacks are more likely to be murder perpetrators. Yet I don’t think anyone is saying that murder laws are racist. Or at least I hope not.

If the policy is demonstrably neutral, then as long as the disproportionate enforcement isn’t due to targeting one particular demographic, the policy doesn’t need to change, regardless of how unfair it might seem.

Yet that is exactly what is being proposed. Quoting Tricia Berry, an engineer and collaborative lead of the Texas Girls Collective, an organization that seeks to motivate girls and young women to pursue STEM fields:

However there are cases where policies end up having a disparate impact. It’s important that schools look at the data and recognize when one group is disproportionally affected by a seemingly neutral policy. In the case of dress code restrictions, if the policies are penalizing female students or transgender students or any other group at rates higher than others, it should be investigated and the policy creators should be willing to adjust accordingly.

Again, as long as the policy itself is demonstrably neutral, then what matters is whether the enforcement is demographically neutral, even if one demographic is being penalized more under the policy. The only thing the disproportionate penalization shows is one group’s greater likelihood to disobey the policy.

Again, we don’t repeal or massage laws due to one demographic being more likely to violate them. You can use individual situations as a push to reevaluate the codes to either be more specific or to allow something not previously allowed.

But don’t use the disproportionate likelihood that young women will be penalized as a reason to reevaluate dress codes. Especially when MOST young women comply with the dress codes without issue or complaint.

This is probably the last thing a teenage girl wants to hear, but the clothing censorship doesn’t stop when high school’s over.

Any adult person—regardless of gender—will encounter dress codes at some point in their life. Upscale restaurants, weddings, and luxury trains all require some degree of respectable attire. Even nightclubs often have strict dress requirements.

And here is where things really start to take a turn. Aside from citing incidents of dress code enforcement on commercial flights, the rest of her article latches onto the fallacious “distraction” excuse and really flies with it. Perpetuating the myth that dress codes are inherently sexist, only because they are being disproportionately enforced against young women.

And going even further by calling dress codes “fashion censorship”.

* * * * *

Let’s see if I can inject some rationality back into this discussion, since all rationality appears to have been flown up to 40,000 feet and dropped out the back of a cargo plane.

First and foremost is the obvious: people largely don’t like being told they cannot do something. Especially if the action in question is one for which there is no demonstrable or perceivable harm on anyone else. And how a person dresses readily falls into that.

The biggest flaw with Brooke’s article is that it makes no mention of workplaces and employers. None. Search her article for “employ”, and you’ll see it come up once, when mentioning “employee passes” with regard to United Airlines. “Work” shows up three times, none of them in relation to a workplace. Search for “job” and you won’t find it.

And I really have to wonder why.

Actually I know why she never mentioned workplaces and employers. It completely invalidates her argument and the statements she quotes from Tricia Berry.

While casual workplaces are becoming more common, dress codes still apply, with restrictions on what “casual” dress is allowed. A lot of workplaces require business casual or business professional dress. And in some workplaces, you have a uniform requirement.

When I worked for K-Mart back in 1998 into 1999, first as a cashier then as a shift supervisor, our dress code was pretty simple: white collared shirt (golf shirt or button-down) with the red vest over top and name tag prominently visible. Tan khakis. Jeans were not allowed. Shoes had to be one color (white, black, or brown) and had to look presentable (i.e. not torn up or falling apart). Collared shirts with the K-Mart logo were allowed as well.

Quoting an earlier article:

In the real world, what you think “looks fine” might get you sent home from work. Without pay if you’re hourly. And repeated noncompliance with a dress code is grounds for termination.

Dress codes are enforced at school to prepare students for them after school. Students are removed from school for dress code violations just as they will be removed from work for dress code violations. And school is about educating and preparing our youth for life on their own as an adult.

Imagine if all schools required business casual dress or a uniform. One in five schools already have uniform requirements. But perhaps we need that nationwide, or at least a business casual requirement. Again, to remove virtually all subjectivity from the equation.

And, again, school administrators and teachers need only cite the dress code when coding students for violations. Stop making the “distraction” excuse. Just cite the dress code, and have copies of it on hand so you can point out the violation specifically.

 


Ethereum mining rig

For about the last month, I’ve been Ethereum mining, putting a GTX 1060 (“Rack 2U GPU Compute Node“), a GTX 1070 (“Mira“), and an R9 290X (from “Absinthe“) to work at it. In part to see what the hardware can do. And I’ve been impressed enough with the R9’s performance that I decided to build another standalone node for Ethereum mining using another AMD graphics card.

And the graphics card is the only new contribution to this build, the only hardware I didn’t already have on hand. Everything else was pulled from what I had lying around.

New graphics card

The new card is a Sapphire Nitro+ RX 580. Specifically I picked up the 4GB model. My only slight concern was power consumption.

I managed to find the Nitro+ card as an open box at Micro Center for a very nice markdown. If not for the markdown, however, I would not have gone with this as it’s one of the more expensive RX 580s on the market. I would’ve instead looked for a lesser RX 580, or possibly looked for an RX 570 instead.

Initial form

  • Chassis: PlinkUSA IPC-G4380S 4U
  • CPU and cooling: AMD FX-8350 with Noctua NH-D9L
  • Mainboard: Gigabyte 990FXA-UD3
  • RAM: 8GB DDR3-1333
  • Power supply: Corsair AX860
  • OS: Ubuntu Server 16.04.3 LTS

In my 10 Gigabit Ethernet series, I mentioned building a custom switch from the above hardware. And after disassembling that switch to replace it with an off-the-shelf 10GbE SFP+ switch, I never reprovisioned that hardware for anything else, despite planning to do so.

So assembling the system was pretty straightforward: drop the new graphics card in, plug it up, find something to act as primary storage, and go.

One thing that was frustratingly strange: Ubuntu did NOT want to play nice with the onboard Gigabit NIC, or with a separate TP-Link NIC I attempted to use. I was able to get the networking functional with another HP 4-port Gigabit NIC. Fedora 26, however, had no issues playing nice with the onboard NIC.

But this was only a temporary build. I had no intention of leaving this card plugged into the FX-8350. Again, the initial build used this hardware merely because it was mostly ready to go, and I initially couldn’t find the mainboard and processor I wanted to use to drive this mining machine. I did find them later that day, so I decided to let the system run and swap things out the next day.

Intermediate form

  • CPU and cooling: AMD Athlon 64 X2 4200+ with Noctua NH-D9L
  • Mainboard: MSI KN9 Ultra SLI-F
  • RAM: 2GB DDR2-800

Wait, a 12-year-old platform driving a modern graphics card? I know what you might be thinking: bottleneck! Except… no.

While this combination certainly is not powerful enough to drive this card for gaming, it’s more than enough for computational tasks such as BOINC, Folding@Home (though you’d really want a better CPU), and Ethereum mining. The aforementioned R9 290X is running on an Athlon 64 X2 3800+ system without a problem. It overtook my GTX 1060 on reported shares, despite the 1060 having a several day head start, and is reporting a better hash rate than my GTX 1070.

So… yeah. As I’ve pointed out before, most who scream “bottleneck” are those who have no clue how all of this stuff works together.

The slow lane

That being said, the RX 580’s out-of-the-box performance left much to be desired: 18MH/s. When others are reporting mid 20s to near 30? What gives? I mean, the R9 290X was outperforming it. These are the hash rates reported to the pool by ethminer:

  • GTX 1070 8GB: 25.4MH/s
  • RX 580 4GB: 18.4MH/s
  • R9 290X 4GB: 24.8MH/s
  • GTX 1060 3GB: 19.9MH/s

I thought system memory was holding back the mining software, so I swapped 4GB into the system. No difference. There was a slight improvement swapping in Claymore’s Dual Ethereum miner for the original ethminer, but it still wasn’t anywhere near what I thought I should’ve been seeing.
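For reference, launching Claymore’s miner boils down to pointing it at a pool and a wallet. A rough sketch only — the pool address and wallet below are placeholders, and exact flags can differ between miner versions:

# ETH-only mode, reporting a worker name to the pool
./ethdcrminer64 -epool eu1.ethermine.org:4444 -ewal 0xYOURWALLET -eworker rig1 -epsw x -mode 1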

The GTX 1060 rig had been offline for a few days, so I moved its mainboard (Athlon X4 860k) into the 4U chassis, installed the RX 580 into it, then swapped out Ubuntu for Windows 10. I pulled down the Claymore miner for Windows and ran it… No significant difference in hash rate.

Which rules out the platform as the culprit. And I didn’t have any reason to think it would be. Instead the platform change was due to my concern the Linux driver may have been holding it back, thinking there would be a performance boost by running the Windows driver. And out of the box there wasn’t.

But switching the driver’s “GPU Workload” setting from “Gaming” to “Compute” boosted the hash rate to about 22.5MH/s. Time to overclock.

One thing I didn’t realize ahead of this: the 4GB RX cards have their memory clocked at 1750MHz, while the 8GB RX cards have memory clocked at 2000MHz. Which definitely explains the lackluster performance I had out of the box — and likely why it was returned to Micro Center.

And in reading about how to get better mining performance, everything I read said to focus on memory, not core speed.

So with MSI Afterburner, I first turned up the fan to drop the core temperature. Then I started bumping the memory with the miner running in the background to provide an instant stability and performance check. I was able to push it to 2050MHz. At 2100MHz, the system locked up. I was not interested in dialing it in any further, as any improvements would’ve been within margin of error.

Final verdict: ~26MH/s. About on par with my GTX 1070, and an almost 45% increase in performance just from setting the driver to Compute and overclocking the memory.

The fact I was able to get 24.4MH/s out of the R9 290X, running the AMD driver on Ubuntu Server with no additional configuration (since I don’t know if you can configure it further), shows I must have a really, really good R9 290X, since it was only 10% lower than my GTX 1070 on reported hash rates from the mining software.

Until I swapped out ethminer for Claymore’s miner. Then it overtook my GTX 1070 on hash rate.

Other systems

I mentioned before that this isn’t the only mining rig I currently have set up. Along with Mira, I have two other systems, one with the aforementioned R9 290X and one with the GTX 1060, both running Ubuntu Server 16.04.3 LTS and using Claymore’s miner.

First system:

  • CPU and cooling: AMD Athlon 64 X2 3800+ with Noctua NH-L9a
  • RAM: 4GB DDR2-800
  • Mainboard: Abit KN9 Ultra
  • Graphics card: XFX “Double-D” R9 290X 4GB (water cooled)
  • Hash rate: 27.5 MH/s

Right now this system is sitting in an open-style setup. I’m working to move it, and the water cooling setup, into another 4U chassis. I just need to, first, find a DDC pump or possibly use an aquarium pump inside a custom reservoir.

Second system:

  • CPU and cooling: AMD Athlon 64 X2 4200+ with Noctua NH-L9a
  • RAM: 4GB DDR2-800
  • Mainboard: MSI K9N4 Ultra-SLI
  • Graphics card: Zotac GTX 1060 3GB
  • Hash rate: 19.8 MH/s

Not much to write home about at the moment. I’m considering swapping this into one of the 990FX platforms to use Windows 10 so I can overclock it and see if I can get any additional performance from it. Since it’s a hell of a lot easier to overclock a graphics card on Windows.

The GTX 1060 also is not water cooled, but that might change as well, though using an AIO and not a custom loop. Which is a reason to swap it into a 4U chassis and out of the 2U chassis that currently houses it. But I’d also need another rack to hold that.

Recommendations

There really are only three graphics cards to recommend out of what’s on the market, in my opinion: the GTX 1060, RX 570, or RX 580. As demonstrated, the RX 580 is the better performer, but also has higher power requirements compared to the GTX 1060.

So if you pick up a GTX 1060, you can save a little money and get the 3GB model and still have a decent hashrate. You can even try overclocking it on Windows if you desire. The 6GB model may give you better performance (since it has more CUDA cores), but it’s up to you as to whether you feel it’s worth the extra cost. The mini versions work well, but the full-length cards will provide better cooling on the core.

For AMD RX cards, run those on a Windows system with the driver set to “Compute”. You can also save a little money and grab the 4GB version. Just make sure to overclock the memory.

* * * * *

If you found this article informative, consider dropping me a donation via PayPal or ETH.


Revisiting the Quanta LB6M

Back in January of this year, I acquired a Quanta LB6M to upgrade my home network to 10GbE, at least for the primary machines in the group. After recently acquiring a better rack for mounting that switch plus a few other things, I noticed the switch becoming painfully hot to the touch on the underside.

Now, immediately after receiving the switch, I swapped out the 40mm fans on the switch’s fan sled to alleviate a very noticeable, high-pitched whine. I knew the replacement fans weren’t going to move air nearly as well, but all indications online were that the switch shouldn’t overheat.

Yet the switch was obviously overheating. Thankfully it didn’t appear to be malfunctioning, just getting so hot that I couldn’t touch it without feeling like I was going to burn myself.

I moved a fan to blow air on it to alleviate that as much as possible as a temporary measure, knowing I’d eventually have to pull it out and open it up. Let’s just say I should’ve done this before initially deploying the switch.

In removing the copper heatsink from the primary processor, it was obvious my switch was not refurbished in any fashion. Normally when you remove a heatsink from a processor, thermal compound stays behind and you end up with some on the processor and the heatsink.

Almost none stayed on the processor. And what attached to the heatsink had hardened and needed to be scraped off. In hindsight I should’ve followed up with a metal polish, but ArctiClean took care of it well enough.

So did this help? Not really. Still don’t regret doing it, though, as it very obviously needed to be done. But there was something else I failed to notice while I had the switch off the rack: the fan sled.

These days, ventilation around fans tends to be honeycomb to maximize airflow. The grill on the LB6M’s fan sled, however, is obviously not the right kind for airflow. Perhaps that’s why Quanta felt the need to put 40mm fans (AVC DB04028B12U) on this thing that rival 80mm fans on airflow. And no, I’m not joking on that. They move A LOT of air at a pretty good static pressure, but are LOUD!

And the 40mm fans I have in there currently don’t come anywhere close to moving what the stock fans could. So just cut the fan grills and everything’s good, right?

For the most part. The switch is still getting noticeably warm to the touch, but not the scald-your-hand hot it was previously. And increased airflow was the first thing I noticed after powering the switch back on with the opened-up sled.

So both definitely needed to be done. If I’d opened up the fan sled without changing the thermal compound, it wouldn’t have made nearly as much of a difference. And no, I’m not posting a picture of the metalwork I did for this. I’ll just say I wish I had a better setup to get a cleaner result. But it works, at least.
