Limiting the exercise of rights, and the risks of freedom

I go further, and affirm that bills of rights, in the sense and to the extent in which they are contended for, are not only unnecessary in the proposed Constitution, but would even be dangerous. They would contain various exceptions to powers not granted; and, on this very account, would afford a colorable pretext to claim more than were granted. For why declare that things shall not be done which there is no power to do? Why, for instance, should it be said that the liberty of the press shall not be restrained, when no power is given by which restrictions may be imposed? I will not contend that such a provision would confer a regulating power; but it is evident that it would furnish, to men disposed to usurp, a plausible pretense for claiming that power.

— Alexander Hamilton, Federalist No. 84

Alexander Hamilton felt that a bill of rights was not only unnecessary, but dangerous. And in declaring as much, as stated plainly above, he also made a prediction: that rights declared in the Constitution would be used as a pretext to claim powers that were never granted, and as a challenge by governments to see just how close to the line they can skirt before a particular action becomes an infringement.

Indeed this has become the case even with the actual declared powers of Congress, which have come to be interpreted as letting Congress do pretty much anything it wants. The bastardization of the power to “borrow Money on the credit of the United States” (see Article I, Section 8) is the reason the public debt of the United States (not including intragovernmental holdings) is 13.9 TRILLION USD.

Indeed much of what Congress has done requires an extra-constitutional reading of the Enumerated Powers, much like the very extra-constitutional interpretation of the Fifth Amendment that Sheila Jackson Lee [D-TX(18)] offered in arguing against a House of Representatives bill to repeal the Patient Protection and Affordable Care Act of 2010.

The Ninth Amendment was a response to Hamilton’s prediction:

The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.

From the Ninth Amendment we have numerous claims of rights that are not enumerated, per se, in the Bill of Rights or anywhere else in the Constitution. Indeed one such right is that of privacy, first recognized by the Supreme Court in Griswold v. Connecticut, 381 US 479 (1965). As I’ve argued previously, the Ninth Amendment is intended to be a fence around the Federal government and the State governments by way of incorporation. Even if the Second Amendment were repealed, the Ninth Amendment would still hold that a person has the right to keep and bear arms:

As such, taking away the Second Amendment, completely nullifying it, does not mean no one can own firearms because the Ninth Amendment and Fourteenth Amendment would still control. As such the only way a ban on firearms can go through is if we have an Eighteenth Amendment equivalent with regard to firearms — and imagine how well that would go over.

Imagine indeed how well it would go over. The history of Prohibition shows just how well it would go over. Which is why the despots in Washington and the various State legislatures around the United States aren’t wanting to go full-tilt into doing this. They know it would be a disaster greater than Prohibition. It would, quite likely, lead to a second civil war in the United States and the largest armed resistance to a standing law ever in US history.

So instead they come as close to repeal as possible, to nullify the operative clause of the Second Amendment without following the Article V process to do it properly. They pass laws that seek to deter the exercise of rights, or put as many encumbrances in place as possible. We’ve seen this with gun rights as well as abortion. Indeed, California has just enacted even more gun laws in an attempt to one-up New York and Illinois as having the most stringent gun control laws in the United States. And there are already declarations of non-compliance by gun rights proponents. And there have been instances wherein civilians have shown up armed to city council meetings and State legislatures as a show of resistance to attempts to enact new gun control laws.

And all the while, gun control proponents say that our Second Amendment rights “are not unlimited”. They’re not incorrect on that mark. Indeed Thomas Jefferson would likely agree with them. But only insofar as the right to keep and bear arms doesn’t mean the right to shoot anyone you damned well please:

of Liberty then I would say that, in the whole plenitude of it’s extent, it is unobstructed action according to our will: but rightful liberty is unobstructed action according to our will, within the limits drawn around us by the equal rights of others. I do not add ‘within the limits of the law’; because law is often but the tyrant’s will, and always so when it violates the right of an individual.

Their fallacy is in saying “Since the Second Amendment isn’t unlimited, we can pass any gun restrictions we want (short of an outright ban), and they’re all perfectly reasonable because your rights aren’t unlimited.” In other words not being unlimited means they can be limited to the point where they might as well not exist.

To borrow the colloquialism, the right to swing your fist ends at my body. The right to swing a melee weapon ends at my body. The right to keep and bear arms ends when you decide to shoot me. In each of these instances, however, you retain your right to assault me only if I have presented a credible threat to your safety or life in a circumstance not of your own creation. At no other time do you have a right to assault me.

And no reasonable person declares that our right to keep and bear arms means we have the right to commit mass murder or shoot anyone. But the presentation of claims against the Second Amendment operates on a presumption of malicious intent by those seeking to exercise their rights. Indeed, one might as well presume that everyone who wants to exercise their First Amendment rights intends to do so only to cause offense or spread hatred. Though the colloquial social justice warrior is one step shy of such a presumption, at least with regard to all white, hetero-normative, cisgender males.

Crime is a risk of freedom generally.

Offensive and hate speech is a risk of the general freedom of speech. Acquittals of terrorists, killers, child pornographers, and child sex abusers are a risk of the general protections of due process, the warrant requirement for searches, and the protection against self-incrimination, including the Miranda rule. Drunks, violent drunks, and drunk drivers, whether or not their drunkenness kills themselves and/or others, are a risk of allowing the consumption of alcohol. As is the risk of women being victims of rape and sexual assault.

And armed crimes, including homicides by firearm, along with suicides by firearm, are risks inherent in the general right of the people to keep and bear arms.

Again, crime is a risk of freedom generally, and a risk to freedom when despots use it as justification to infringe freedom.

Indeed the condescending attempts by the government to limit the Second Amendment rights can basically be summarized as such: “Well these mass murderers and criminals show that you generally cannot be trusted with firearms. We’ve let you have your fun with them, but now we need to take them back and can’t let you have them anymore.”

Instead of taking away rights, the idea is to instead go after the minority who abuse them to the demonstrable harm of others. If you wish to prevent that minority from being able to abuse the rights and liberties we all enjoy, you must do so without infringing on the rights and liberties we all enjoy.

In other words if you want to swing your fist, you must ensure that in doing so you assault only the person you intend, and only with demonstrable reasonable justification. This is why responses to gun control efforts following the Orlando massacre include asserting not just the Second Amendment, but also the Fifth and Fourteenth Amendment guarantees of Due Process.

Regardless of how hard we try, we cannot legislate away evil. All attempts fail, because legislating away evil requires the commission of greater evils. It requires the enactment and empowerment of a despotic government. And those desiring such a government will presume to be immune from the exercise of its power until they discover, usually entirely by surprise, that they are not, that the hands of despots reach everyone, even those who thought they were politically favored.

This is why much of our existing law isn’t about attempting to legislate away evil, but providing a process for responding to it. The problem is that so many people think we can legislate away evil that they want to take away the ability of civilians to respond to it. And that is the manifestation of the greater evil that history shows such desires inevitably produce, and one that society must generally oppose.

How to screw up your computer

I’m not a computer support expert. Or at least I don’t assert that I am. I do have experience with computer support: I was the go-to guy for computer support issues while living on campus at Peru State College. I still cannot to this day understand why most of the problems I had to fix seemed to crop up in Morgan Hall (a women-only residence hall). Anyway…

A person who does assert that he is a “PC Support Expert” wrote an article for About.com called “13 Ways You’re Probably Screwing Up Your Computer”. Let’s get into this. Note that I may not cover every point, only those where I hold disagreement.

You’re not backing up continuously

One big way to screw up your computer, and by extension yourself, is to back up in some way that’s not continuous.

This is a LEVEL 10 SCREW UP!

Yes, you should be backing up your data continuously, as in virtually nonstop, all the time, at least once per minute.

Backing up your data is extremely important. There are horror stories galore about people not making backups of important stuff. It’s one of the reasons I built a NAS, and I mention in Part 2 of that build log the need to set up off-site backups (commonly referred to as “cloud backups”).

Let’s get one thing straight: continuous backups are not necessary! Indeed, they can actually be detrimental, because you could end up overwriting a backed-up copy unintentionally. For example, let’s say that your computer is set to have your files backed up automatically to a cloud account. Your computer gets hit with a ransomware virus and those files are encrypted. Your cloud backup now contains the encrypted files, meaning you’ve basically just lost everything.

This is where two concepts need to be introduced, neither of which is discussed by the “expert”: incremental backups and redundancy.

Redundancy means simply having multiple copies of your important data. This means not backing up your data continuously, but keeping multiple current copies of it. Given how many online services offer free trials and free accounts up to a particular storage limit, there is no excuse to not have pictures and other important documents and data backed up to multiple online accounts, with all of them kept reasonably current. There are numerous tools available to allow you to do this quickly and seamlessly.

Incremental means just what it says: backups that are done incrementally. Only the files within the target folders that have changed since the last backup are grouped together into a “transaction” and backed up as one set, while the older copies are retained until you decide to get rid of them. This means in the aforementioned ransomware attack, you’ll still have an unencrypted copy of your backed-up files sitting in your archive where you can recover them.

It also means that you can recover files if you make inadvertent changes, such as if you inadvertently overwrite a file.
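
To make that concrete, here’s a minimal sketch of a timestamped incremental backup in Python. This is illustrative only: the archive layout and naming scheme are my own assumptions, and dedicated tools (rsync, Windows File History, and the like) handle edge cases this doesn’t.

```python
import shutil
import time
from pathlib import Path

STAMP = "%Y%m%d-%H%M%S"  # snapshot folder names sort chronologically

def incremental_backup(source: Path, archive: Path) -> Path:
    """Copy files modified since the newest snapshot into a new
    timestamped snapshot folder, leaving older snapshots intact."""
    archive.mkdir(parents=True, exist_ok=True)
    # Assumes the archive contains only snapshot folders named with STAMP.
    snapshots = sorted(d.name for d in archive.iterdir() if d.is_dir())
    last = time.mktime(time.strptime(snapshots[-1], STAMP)) if snapshots else 0.0

    snapshot = archive / time.strftime(STAMP)
    for path in source.rglob("*"):
        if path.is_file() and path.stat().st_mtime > last:
            dest = snapshot / path.relative_to(source)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)  # copy2 preserves timestamps
    return snapshot

# e.g. incremental_backup(Path.home() / "Pictures", Path("E:/backups/pictures"))
```

Because each run lands in its own folder, the ransomware scenario above leaves every earlier snapshot untouched.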

So to summarize with regard to backups, there are a few points to keep in mind:

Make incremental backups. This also means that you should always be retaining older versions of your important files so you can recover from inadvertent changes and file corruption.

Back up only what you cannot recover. Since many cloud backup services charge based on how much data you back up, make sure that you only back up what you genuinely cannot replace or rebuild. Pictures are one classic example. But it’s not necessary to back up your music and movie collection if you can re-download the files or rebuild them from physical copies.

Back up periodically, but not continuously. Doing “continuous” backups is not necessary, and I’ve never seen it recommended. Nightly backups are the most frequent I’ve seen recommended. If the data is more mission critical — such as if you’re a content creator of any kind — then more frequent backups may be worthwhile, but you’ll likely never need any kind of “continuous” backup.

Not updating your anti-virus software

This one is a no-brainer. You need to have anti-virus software on your system. Windows 8.1 and Windows 10 come with it by default, but the Windows Defender in Windows 7 is not an anti-virus. For Windows 7, you need Microsoft Security Essentials.

That said, there are sometimes pop-up messages that ask you to do this manually or notices that appear on screen about needing to update the core program before definition updating can continue.

Unfortunately, I see people screw up all the time by closing these… without reading them at all! A message that shows up over and over is usually a good indication that it’s important.

So stop screwing up your computer’s ability to fight the bad guys and make sure your antivirus program is updated! Just open the program and look for the “update” button.

Actually if your anti-virus software isn’t updating in the background without prompting you, get rid of it and find one that does. For example the aforementioned Windows Defender and Microsoft Security Essentials (Windows 7) update automatically in the background.

At the same time, though, antivirus software can also be more of a problem than a solution. It has been plagued with false positives, in which bad signatures cause the software to believe that critical system files are infected and quarantine them, likely on a forced reboot since files cannot be deleted while they are in use.

At my previous job, the systems were “protected” with McAfee (which I haven’t trusted since before the turn of the century), and it once mistook one of Microsoft Visual Studio’s critical files as infected and quarantined it. Every engineer who relied on Visual Studio (about half the engineering staff, myself included) suffered downtime at a critical point in the release cycle because of this.

So long as you’re not engaging in risky Internet activities, you should be fine if your antivirus software isn’t completely up to date.

Not patching software right away

Once these vulnerabilities in Windows have been discovered, a patch has to be created by the developer (Microsoft) and then installed (by you) on your computer, all before the bad guys figure out how to exploit said vulnerability and start doing damage.

Microsoft’s part of this process takes long enough so the worst thing you can do is extend that window of opportunity any longer by procrastinating on installing these fixes once provided.

There are greater risks to your system’s security than running outdated software. For example, if you’re on public WiFi without the Windows Firewall enabled, you’re exposing yourself to greater risks. The Windows Firewall has been included in Windows since XP Service Pack 2 in response to widespread worm attacks on Windows systems, something I had first-hand experience combating while at Peru State College.

Your web browser, though, is actually more likely to be a security concern than the operating system. And this is true regardless of your operating system. In other words, Macs aren’t immune, and Linux isn’t immune. And I detest the clueless fuckwits who try to say differently.

Let me put it this way: when a high-profile SSL vulnerability is discovered in Apple’s Safari browser, one that causes the browser to accept any properly formed SSL certificate, valid or not, for what is supposed to be a secured connection, and that vulnerability is known to exist in all Safari browsers, including the ones embedded into the iOS operating system on iPhones and iPads, then all Apple fanboys have lost any right to claim that any Apple operating system is secure.

Don’t forget, either, that any software on your computer can be a security concern. Yet keeping software completely up to date isn’t always the best practice, as patches can, at times, introduce bugs. So long as you’re not far out of date, and you’re not exposing yourself to other risks, running out-of-date software isn’t the security concern many make it out to be.

You’re still using Windows XP

If you’re still using Windows XP then your computer is still vulnerable to all of the security issues that have been found, and corrected in later versions of Windows, since May of 2014!

There are numerous reasons to still be using Windows XP. For example, older games may not be compatible with newer versions of Windows. Legacy software is often a reason to stick with a legacy operating system. And there are numerous reasons to stick with legacy software packages, the chief one being that migration may be extremely time consuming and prone to errors or incomplete data migrations, presuming it’s possible at all.

But using a legacy operating system isn’t necessarily a problem. It all depends on how that operating system is being used. And the chief concern comes when that legacy system is connected to a network or the Internet.

There are many things you can do to mitigate potential security concerns in legacy operating systems. With Windows XP, that means making sure you are running Service Pack 3 and have all the latest updates from there and have the firewall enabled.

But bear in mind as well, as I said, that other software on the system is a more likely point for security concerns, so make sure to keep all of that up to date too, and be smart in how you use the system.

You’re letting needless files fill up your hard drive

In general, having “stuff” on your computer that doesn’t do anything but take up space is not anything to worry about. When it can be an issue is when the free space on the drive gets too low.

The operating system, Windows for example, needs a certain amount of “working” room so it can temporarily grow if need be. System Restore comes to mind as a feature that you’ll be happy to have in an emergency but that won’t work if there’s not enough free space.

To avoid problems, I recommend keeping 10% of your main drive’s total capacity free. See How to Check Free Hard Drive Space in Windows if you’re not sure how much you have.

And I’d recommend keeping at least 25% of your main drive’s total capacity free, depending on the total capacity of the drive and how you use the remaining space. This is especially the case if you have a solid state drive (SSD), as having too little free space on your SSD can cause it to slow down faster.

It’s a well-known, well-documented phenomenon with SSDs that they get slower over time, or at least with more writes to the sectors on the drive. Having too little free space limits the number of sectors to which the SSD’s controller can write data. Combine that with doing a lot of writes, rewrites, and deletes in that limited space, and that phenomenon can manifest much faster than typical. To stave off this eventual slowness, the firmware and controllers in an SSD never write data to the same sectors over and over, instead keeping track of how many writes have been made to each sector so it knows which ones to avoid.

There are numerous ways to free up space on your system as well. In general, though, look for the largest files you can safely get rid of. And use the Windows Disk Cleanup utility to free up space in the system areas used by the operating system.
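
If you want to see where you stand against that guideline, Python’s standard library can report it in a few lines. A small sketch; the drive path is an assumption, so point it at whichever drive you care about.

```python
import shutil

def free_space_report(path: str = "C:\\", warn_below: float = 0.25) -> None:
    """Print how much of the drive holding `path` is free and warn
    when it drops below the given fraction (25% per the advice above)."""
    usage = shutil.disk_usage(path)
    fraction_free = usage.free / usage.total
    print(f"{path}: {usage.free / 1e9:.1f} GB free of "
          f"{usage.total / 1e9:.1f} GB ({fraction_free:.0%} free)")
    if fraction_free < warn_below:
        print("Warning: time to clean up files or uninstall applications.")

free_space_report()
```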

You’re not defragging on a regular basis

And here’s where the author gets so many things wrong:

To defragment or not to defragment… not usually a question. While it’s true that you don’t need to defrag if you have a solid state hard drive, defragging a traditional hard drive is a must.

NEVER DEFRAG AN SSD. Let me repeat that. NEVER DEFRAG AN SSD! Remember that phenomenon I mentioned earlier that causes an SSD to slow down? Running a defrag utility regularly on an SSD will cause that to happen significantly faster. Much faster than not having a lot of free space on the drive. Again this is just the nature of how SSDs work.

SSDs have no seek penalty, which is what defragmenting a drive is supposed to overcome, so defragmenting an SSD is not only unnecessary, it’s potentially damaging.

However, the built-in Defrag tool in Windows 8 and Windows 10 will not defragment a solid state drive — provided the drive is actually detected as one. It will instead “trim” it, meaning it will scan the drive for unused blocks and make sure they are empty. This keeps the drive’s write speeds at their peak. Your operating system should be running this periodically, but if the Defrag tool says the drive needs optimizing and it hasn’t been run in a significant period of time, go ahead and “Optimize” the drive, as it only takes a few seconds.

And for a typical platter hard drive (HDD), running a regular defragment on your entire drive has not been necessary since Windows XP. Keeping an HDD defragmented is so important for performance that Microsoft included it in the operating system. Not just with the operating system, but in the operating system. When your system is idle for a set period of time, the operating system will do some light defragmenting on your hard drive. Leave your system on overnight at least once a week and it’ll make some optimizations to your HDD for you.

But it is not a full defrag. You likely don’t need to run a full defrag either. Depending on your HDD’s capacity, how much of it is occupied, and the level of fragmentation, a full defrag will take hours, possibly longer than a day.

That’s why Microsoft and other OS developers introduced the background defrag into the OS. Periodic partial defrags are better than waiting around for a full defrag on tens to hundreds of gigabytes of data to finish. HDDs are faster today than they were 10 years ago, but they’re still significantly slower than SSDs, and the seek penalty that HDDs have means a full defragment is going to take forever depending on the fragmentation level.

If your system is running slow, chances are a full defrag of the drive is going to make little difference — it’ll be a lot of time for little gain. You’re better off clearing what you don’t need off the system — this includes uninstalling needless applications and cleaning up files (see previous section). Depending on what you’re doing now compared to when you bought the system, a memory upgrade may also be in order. If you’re using an HDD, consider upgrading to an SSD, which can give an almost instantaneous performance boost to any system.

A cheap way to improve performance is to buy an inexpensive USB 3.0 flash drive and use the ReadyBoost feature in Windows. I do this on an older laptop since I don’t have an SSD in it, yet.

Given how much just in that one point that was incorrect, the “expert” really needs to study up.

You’re not [physically] cleaning your computer

In upgrading my wife’s computer, I showed pictures of the water cooling radiators caked in dust. The fans weren’t as bad but definitely needed cleaning.

Not properly cleaning your computer, however, especially a desktop computer, is an often overlooked maintenance task that could eventually screw up your computer something severe.

This is a LEVEL 4 SCREW UP!

Here’s what happens: 1) your computer’s many fans collect dust and other grime, 2) said dirt and grime build up and slow down the fans, 3) the computer parts cooled by the fans begin to overheat, 4) your computer crashes, often permanently.

In other words, a dirty computer is a hot computer and hot computers fail.

He’s exaggerating hard here.

First, there are numerous safety features built into modern computer components to prevent them from being damaged by excessive heat. Processors and graphics cards will “throttle” down in speed to keep temperatures down. So if you notice that your system is going very slow and the fans in your system sound like you’re standing next to a runway at a commercial airport, chances are you’ve got some serious thermal issues.

Cleaning the dust out of fans is certainly necessary to keep them performing at their peak. The weight of the dust will slow the fans down a little while wearing them out faster. But the difference in airflow is only a few cubic feet per minute, which for most 80mm and larger case fans is negligible.

It’s when dust settles on the internal components that thermal problems can arise, so it’s a good idea to keep the inside of your system cleaned out. Power down the system and let it sit for at least an hour to cool down, grab a can of compressed air and your vacuum cleaner, and go to town. You can also grab a clean soft bristle paint brush to go after any stubborn or hard to reach areas. Pay attention to dust on heatsinks as well, such as the cooler on your CPU and graphics card. Oh, and leave the system plugged in while you do this to keep it grounded, especially if you’re using a paint brush to assist.

If you have an air compressor, make sure to have it at a low PSI and keep it a nominal distance away or you could risk damaging your components. Use a clean paintbrush to loosen any stubborn dust instead of a higher PSI or closer distance.

In desktops, be sure not to miss the [fans] in the power supply and in the case. Increasingly, video cards, RAM, and sound cards have fans too.

A lot of graphics cards have fans — it’s been the case for at least the last 15 years — but lower-end graphics cards tend to have only passive heatsinks. Still clean those off as dust can severely degrade cooling performance since those rely entirely on airflow in the case to stay cool. But RAM doesn’t have fans on it by default. You can buy active RAM coolers, but those are largely unnecessary unless you’re overclocking your RAM.

And I don’t know of any sound cards that have active cooling.

At the same time, keep an eye on system temperatures using a utility such as CPUID HWMonitor.
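
If you’d rather script that than watch a GUI, the third-party psutil library exposes whatever sensors the operating system reports. A hedged sketch: psutil’s sensor support is platform-dependent (best on Linux), so on Windows a tool like HWMonitor remains the simpler option.

```python
import psutil  # third-party: pip install psutil

def print_temperatures() -> None:
    """Dump every temperature sensor psutil can see on this platform."""
    temps = psutil.sensors_temperatures()
    if not temps:
        print("No temperature sensors exposed on this platform.")
        return
    for chip, readings in temps.items():
        for reading in readings:
            label = reading.label or chip
            print(f"{chip}/{label}: {reading.current:.0f} C "
                  f"(high threshold: {reading.high})")

print_temperatures()
```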

You’re putting off fixing problems that you can probably fix yourself

If your system is still covered under any kind of warranty, consult that warranty before attempting any kind of repairs to make sure you’re not voiding it. But the author largely isn’t talking about anything requiring you to access the physical hardware.

Instead he’s talking about running utilities and such on your system to diagnose and possibly correct some concerns. And a lot of issues can be corrected this way. Certainly not all, but most common concerns can.

It’s the ones that require internal access to the system’s hardware that tend to be more involved. At the same time, these are issues that aren’t necessarily easy to diagnose. Your computer running slow, for example, has several possible causes and several potential solutions, so you need to diagnose it further. Online forums can aid you in that endeavor.

* * * * *

And that’s it for this one. It’s amazing how many misconceptions get published all the time about computers. It often makes it difficult to discern what’s fact and what isn’t. As I’ve shown above, some of the advice given online can be quite misleading as well.

But at least there are always plenty of actual experts out there who are willing to counter all of this and publish information that is correct.

Obama’s scorecard isn’t as great as you think

[Image: a Facebook graphic listing claimed accomplishments under Obama]

1. Unemployment is low because of the number of discouraged workers who are no longer looking for work, plus the fact that a significant number of those who found jobs after long stints of unemployment aren’t on their previous career paths anymore. In other words, unemployment being low doesn’t mean those who are working are any better off than before. Looking at unemployment to determine the quality of the job market is like looking at GDP to determine the state of the economy.

2. Like the quality of jobs in relation to unemployment, much of the insurance sold on the individual market under Obamacare is, in practicality, worthless. The deductibles are so high that most people with these plans will never actually use their insurance. So all these plans do is take *more* from the pockets of those who can likely barely afford to pay it while giving virtually nothing in return.

3. My 401(k) would like to heavily disagree with how well the stock market is doing. The Dow Jones Industrial Average is just an index total and doesn’t represent the full status of the stock market. And the current state of the stock market has many analysts worried it’s forming another bubble.

4. Only if you compare against the deficit for FY 2009. If you exclude FY 2009, all deficits under Obama except for 2015 have been higher than the highest deficit under Bush, and Obama has presided over the largest increases in the national debt in history.

5. It’s lower than it was under Bush, but the current rate of inflation is the same as it was in 2002. This isn’t necessarily a good thing for consumers, and neither were the periods of deflation that were seen in 2009 and for virtually all of 2015.

6. Not quite as much as they think. Revenues for GM and Ford have both held steady since about 2011. Revenue for GM is higher than in 2008. Ford, on the other hand, is seeing *lower* revenue compared to 2006 and 2007. About the only reason they can be said to be “booming” is that both Ford and GM shed liabilities by the billions of dollars from 2009 onward.

7. At what cost? “Clean energy” needs so much in subsidies right now that it would never survive in the free market. It just cannot compete against nuclear energy and fossil fuels when it comes to energy density and energy per dollar.

A compromise on the “terrorist watch list”

A lot of the gun control talk in recent days following the Orlando attack has centered around the “terrorist watch list”, a covert list of individuals maintained by the Federal government that can be used to deny a person a number of privileges, including being able to board a plane.

So no surprise that the Federal government would want to expand it to include infringing the Second Amendment.

The biggest problem with the watch list is the fact it is full of mistakes. Names that shouldn’t be on there. The list has been used to deny children entry to a plane. And it’s likely damned near impossible to get your name removed, or at least an exception made so that a particular person happening to bear that name isn’t unjustly targeted. And yet they expect us to trust them with that list when it comes to navigating the waters known as the Constitution? Okay, if they want that power, we need a compromise.

Currently the FBI has the ability to place up to a 3-day hold on a firearm transfer when a background check is requested through NICS. Three days is plenty of time for the Department of Justice to file for an emergency injunction with the District Court that has jurisdiction over the area where the sale was to take place. As the person against whom the injunction is filed would have the opportunity to defend against it, the due process requirement would be satisfied, provided the injunction is only temporary and includes the caveat that Federal or State charges must be filed within a provided window (30 days, for example) or the person’s rights will be restored.

The injunction should also require more than the Department of Justice or Department of Homeland Security merely saying “he’s on the terror watch list”. They’d have to give more substantial reasons to deny the sale to satisfy the Fifth Amendment, basically providing all the evidence as to why that person was on the list. The Court could even require the evidence satisfy the burden of probable cause. This would also give the opportunity for a counter-motion ordering the Department of Justice or Homeland Security to remove the person from the watch list if that burden is not satisfied. And it raises the burden of proof to the point where, if the government is even going to attempt to deny a person their rights, there should be enough evidence to arrest them and put them on trial. But then, if they had that kind of evidence, the only thing stopping the DoJ from putting them on trial would be not having enough evidence to satisfy the burden of beyond a reasonable doubt.

This isn’t much different from what currently goes on with other criminal trials. For example, pre-trial injunctions forbidding contact between a defendant and an alleged victim are common in cases of domestic assault. If the defendant is convicted, the injunction may become permanent, while it is lifted if the defendant is acquitted.

This brings the terror watch list out of the realm of secrecy while also giving Democrats what they want: the ability to use it to temporarily deny gun purchases. Anything more than this requires a criminal trial and conviction on felony charges.

Seven minutes

In the current firearms political climate following the Orlando shootings — not just at Pulse, but also of Christina Grimmie — it seems almost expected that someone would try to replay Mark Kelly’s stunt from 2013. In this instance, it was Helen Ubiñas of the Philadelphia Inquirer:

Seven minutes. From the moment I handed the salesperson my driver’s license to the moment I passed my background check.

Yes, the NICS background check is actually quite… instant. It’s in the name.

It likely will take more time than that during the forthcoming round of vigils to respectfully read the names of the more than 100 people who were killed or injured.

It’s obscene.

Horrifying.

Actually it’s not. That’s the amount of time it would take to clear an instant background check for any firearm.

If you have a squeaky clean record, there’s no reason you won’t sail quickly through the background check process. Not everyone does. Mark Kelly sailed through the process quickly as well simply because he’s a retired commissioned officer of the United States Navy with a clean record. I’ve sailed through the NICS background check all seven times, I think, that I’ve been through it. My parents have sailed through it. Friends of mine have sailed through it. What exactly is the concern?

Oh yes, that’s right. We’re talking about an “assault rifle”. The famous AR-15 platform. In your instance it was the Smith & Wesson M&P-15. And the fact you were able to sail through the background check process means it’s “too easy” to buy a firearm. Am I right? Or at least too easy to buy an AR-15?

Bear in mind that there are over 2 million of them in private hands. Two million. The number of people killed or injured each year by rifles is insignificant. The number of AR-15 rifles used in any shooting, mass or otherwise, is also insignificant. According to Mayors Against Illegal Guns, between 2009 and 2013 there were only three mass homicides known to involve the AR-15, plus two more with an “assault rifle”. The other mass homicides were committed with pistols of various calibers and with shotguns.

And yet you focus on the AR-15.

Functionally the AR-15 is no different from any other rifle that chambers the same caliber. The only difference is the body style, which is lightweight and modular, both of which make the rifle popular.

And unless you have a shady background, you should be able to walk into a gun seller and walk out with any rifle, pistol, or shotgun of your choice without being significantly delayed by the “instant” background check. Seriously, there is nothing wrong with that idea.

There seems to be this point of view wherein a person being able to pass a background check means it’s too easy to get a firearm. You’ve demonstrated that with your article, and I’ve run into that mentality before. It’s as if the background check process should be able to read minds and predict the future.

Instead of pulling a political stunt that’ll generate a lot of clicks, why did you not just keep the firearm and learn how to use it? Take it to a range that allows centerfire rifles (not all do) and shoot a few magazines through it instead of almost immediately surrendering it to the police.

You’ve lost a golden opportunity here to actually learn about the rifle you are trying to demonize. But you never wanted to learn about the rifle, likely because you’re afraid you might no longer want to denigrate it. After all, it’s been said and demonstrated that the fastest way to turn someone from anti-gun to pro-gun is to take them to a gun range.

Out of the two million AR-15 rifles that have passed into private hands, the number that have actually been used in mass shootings is likely barely into the double digits. More kids die in swimming pools each year than the total number of people killed by rifles of any kind, including the AR-15.

And somehow the AR-15 is some super special killer?

It sounds more like you have no idea what you’re talking about and were looking for a sensationalist hit piece. And you wrote and published this sensationalist article before the bodies were even in the ground without, to borrow your own words, “even a moment to at least consider how gross all of this felt as relatives of the dead were still being notified.”

If it felt “gross”, why did you do it? Because you saw an opportunity to write an article that generates a lot of clicks. Congratulations.

And if you think that private sales mean anyone can still get a rifle, I think you forget that there is also a seller involved. The seller can, at any time before it’s completed, kill the sale. That actually happened here in Kansas City: a person who shot up random cars on I-470 near where I used to work attempted to acquire a firearm through a private sale, and the seller aborted it.

Interesting concept, don’t you think? Makes me wonder how many times that happens.

At the same time, sellers at gun shops will also abort sales if they feel something’s up. I’ve personally witnessed this. At a gun shop I frequent, a person walked in with the intent of buying a pistol. He was not exercising any kind of discipline over his movements and actions, though, and he was escorted out of the shop.

And a shop owner in Ohio, acting on his wits, intervened to prevent a potential shooting at Ohio University.

Can they be any more predictable?

Predictably, in the wake of the recent Orlando mass shooting, there are calls for gun control and for reinstating the Assault Weapons Ban. Here is one such call that landed on my Facebook feed.

[Image: a Facebook post calling for reinstating the Assault Weapons Ban]

And in 2013, they tried to get the Assault Weapons Ban back through the Senate as well, and it was knocked down 60 to 40. Think about that: 60 Senators voted against it in a Democrat-controlled Senate. Fifteen Democrats plus Angus King joined Republicans to vote down the AWB.

Most people who want to ban the AR-15 don’t even know what it is, only what it looks like. They can’t describe the firearm properly. They continually label it an “automatic” rifle; one friend of mine called it a “machine gun”. The AR-15 is no different from any other .223/5.56mm rifle on the market except for its body style, and it isn’t the only rifle on the market that shoots the .223/5.56mm round. There are variants of the AR-15 that shoot .22LR as well, which are functionally no different from any other rifle that shoots .22LR.

Banning “assault weapons” won’t do jack for violence in this country. In fact, as Australia, the Middle East, and the United States have all demonstrated, those wanting to kill a large number of people will find alternate means of doing so. Arson, bombings, and suicide bombings to be specific.

Here’s a question. In 2009 and 2010, the Democrats had their trifecta — the House, the Senate, and the White House. They could’ve passed any gun control bill they wanted, but didn’t touch gun control at all. They didn’t introduce any gun control bills that made it out of committee, and Feinstein didn’t even consider reintroducing her assault weapons ban. It should’ve been a slam dunk in the 111th Congress, but they didn’t do jack. Why is that? Why did they wait until they had a divided Congress?

Two reasons. First, plenty of Democrats won’t touch gun control because it’ll end their political careers. Plenty of Democrat voters favor gun rights, and plenty of Democrat voters demonstrated to Democrats just how unfavorably they view gun control by handing the Senate and House back to Republicans in 1994. Even Feinstein barely held on to her Senate seat that year.

Second, they don’t care about protecting people or children, only about playing politics. And they can play politics left, right, and fucking center all day by painting gun rights activists as preferring to see dead kids, and claiming that if we “only cared about the children”, or in the case of Orlando the gays, then we’d see just how morally superior the Democrats are, give in to their demands, and turn in all our guns. That’s how this whole thing has played out since Aurora.

When even Democrats are voting against gun control, perhaps a rethink is in order.

Regarding the Brock Turner case…

Defending principles is extremely difficult. But necessary.

On this blog and in other venues, I’ve defended the outcomes of the Casey Anthony and George Zimmerman trials. I’ve defended the grand jury decision with regard to Darren Wilson. I defended the right to a trial for James Holmes and Dzhokhar Tsarnaev. I defended the acquittals of Jian Ghomeshi and Gregory Alan Elliott.

And by defending all of that, I’ve been portrayed as defending the people involved and their alleged crimes.

I’ve been called racist. White supremacist. Sexist. Terrorist sympathizer. I’ve already been called a rape apologist before, along with a misogynist, for my views on feminism. And now this article will probably get me branded a rape “sympathizer”, if not an out-and-out rapist. No, wait, I’ve already been branded as both for my staunch defense of due process with no wavering even when the accusation is that of rape.

After all, only a rapist would possibly defend the outcome of a rape trial, right? Only a rapist would defend due process for suspected rapists, right? After all, the world would be better if it were easier to convict rapists. So goes the rhetoric that’s been pushed at me.

People don’t know what defending principles looks like anymore.

Given the details of the assault upon “Emily Doe”, I join the majority who are outraged at the lenient sentence Brock Turner received for his three felony convictions: six months in prison (though he’ll likely actually serve an additional three months on top of whatever he has already served) plus three years on probation. That seems kind of light. It would likely outrage people more to learn that the Santa Clara County Probation Department recommended only one (1) year in prison along with probation, and I would agree that recommendation is lenient as well.

Except my opinion on the matter is irrelevant. Every person’s opinion on the matter is irrelevant.

It sickens me that the public response to cases like that of Brock Turner force me to defend outcomes I also do not like. I do not do this to defend Brock Turner. Or any specific person. I do it to defend the process that is employed.

I do this to defend due process itself. To defend the trial process. To defend the rights entitled to all persons who become subject to the jurisdiction of a criminal court.

I do this because it seems no one else will.

It’s exhausting.

And it’s heartbreaking to watch supposedly civilized people be “reduced into pesky, pestilent, squandering, argumentative and belligerent mobs”, to quote an article I wrote over five (5) years ago. Few seem to realize that the more often this occurs, the easier it becomes for our rights to be violated, if not outright stolen from us.

In the wake of cases like Brock Turner, Casey Anthony, George Zimmerman, and Darren Wilson, mob justice reigns. One man versus a mob. But if that is what I must do to defend the basic rights of a civilized society when that society goes patently uncivilized, then so be it.

Currently there is a campaign to recall the judge who presided over the case and declared the sentence. This strikes me as similar to the recall campaigns against other judges who make unpopular decisions. Most notably the three justices of the Iowa Supreme Court who declared Iowa’s gay marriage ban unconstitutional.

One of my friends commented on Facebook, “Stupid ass fucked up piece of shit some one shoot him between the eyes and leave him for the buzzards.”

I still have a lot of work ahead of me, it seems. But I’m only one man.


Burned by the sun

I think we’ve all been burned by the sun at least once in our lives. Being fair-skinned (though thankfully not nearly as bad as this poor woman), I’ve also been unlucky enough to receive severe sunburns. Twice.

The first time was during the summer following 3rd grade during a summer camp trip to the beach in San Diego, CA. Despite putting on sunscreen initially, I didn’t follow the directions properly, never reapplied it, and ended up with moderate to severe sunburns over my arms, legs, and neck. The second time was during a parade in 8th grade in which I put sunscreen on my arms and legs, but failed to do so on my neck, face, and ears. Those burns blistered and were painful for, if I recall correctly, close to two weeks, and took almost three weeks to completely go away.

And I’ve been heavily burned by the sun on other occasions. Let’s just say that now, in my mid-30s, I keep a very close eye on anything on my skin for early signs of skin cancer.

So I can certainly relate to the story of a 3 year-old boy who suffered severe sunburns even despite his mother applying and reapplying sunscreen. The sunscreen in question, based on the news report, is Banana Boat Kids Tear-Free Sting-Free Continuous Lotion Spray Sunscreen, SPF 50+.

Now there’s something else about this story that needs to be pointed out. The child was at the beach off Virginia’s Eastern Shore. The Eastern Shore is at the southern tip of the Delmarva Peninsula surrounded on the west by the Chesapeake Bay, and the east by the Atlantic Ocean. That means the child was playing in surf and salt water.

Yes, that little detail is extremely important.

It has, for as long as I can remember, been recommended that you reapply sunscreen at least twice as often as recommended if you’re in salt water and surf unless you are using a sunscreen specifically formulated for salt water. This means if you have a sunscreen that is “water resistant up to 80 minutes”, which is what the Banana Boat sunscreen in question states, you reapply it every 40 to 45 minutes when playing in salt water, and no less frequently than every hour regardless of whether you’re in salt or fresh water.

Sweat-resistant sunscreens (typically labeled as “sport” sunscreens, but it should say “sweat-resistant” on the label) can withstand salt water a little better, but you should still reapply it more frequently than the label states — about every hour if not more often, every 45 minutes to be safe.

Bear in mind, as well, that no sunscreen is perfect. Even if you follow the directions, there may still be variables that mean you get burned anyway: for example, if you don’t reapply the sunscreen properly or often enough (it’s easy to forget or misjudge when you last reapplied), or don’t get adequate coverage on the areas that will be exposed to the sun.

So just pay attention to when and how you apply and how often you reapply the sunscreen, and you should be fine, and any sunburns you do get should be mild and infrequent. Avoid spray sunscreens and stick with lotions, since it is easier to get adequate coverage with lotions.

It doesn’t matter

And once again the people have another bone onto which to latch with regard to Casey Anthony…

In the 15-page affidavit obtained by PEOPLE, Dominic Casey alleges that Anthony had a sexual relationship with her attorney before the case went to trial. He claims that he witnessed “a naked Casey” when he arrived at Baez’s office unexpectedly.

In the documents, he also alleges that Baez “told me that Casey had murdered Caylee and dumped the body somewhere and, he needed all the help he could get to find the body before anyone else did.”

It’s simply amazing what people will say, even in what is supposed to be a sworn statement given under penalty of perjury. Especially a sworn statement that carries with it some very, very serious allegations.

But all of this is completely immaterial anyway, of absolutely no value whatsoever. Well, the allegation of a sexual relationship isn’t entirely immaterial, as, if there is actual evidence it occurred, it could get Jose Baez, Casey Anthony’s attorney, disbarred. Needless to say, Baez has pushed back, calling the claim “libelous” and saying that legal action is “forthcoming”.

But the one thing that no one seems to want to settle in their mind is the simple fact that Casey Anthony was acquitted. This means that she cannot be touched. That is why this statement by the private investigator, whether true or not, ultimately doesn’t matter.

Not one bit.

It won’t convene a new trial that would again subject her to jeopardy, because it isn’t evidence of guilt. At minimum it’s hearsay, meaning it’s completely unreliable as evidence, even if truthful. If it isn’t truthful, as Baez has already said, it’s libel.

And don’t bring up the idea of a Federal prosecution either. I’ve already tackled that idea. Twice. Isn’t it time she just fade away into obscurity?

Revisiting bottlenecking, or why most don’t use the term correctly

Previously I wrote a long rant-ish article about the term bottlenecks, particularly with regard to AMD vs Intel. In that article I tried to demonstrate, among other things, how the term “bottleneck” is misused and, frankly, overused. I didn’t exactly do the greatest job in that article, so I’ll revisit the term here, especially in light of some things I’ve learned over the last year.

Working on the Colony West project modified my perspective on the topic, or rather informed it significantly more. The project’s ultimate goal was a rack hosting three systems. One of the systems was a Minecraft server built on an AMD Athlon 64 X2 3800+. The other two systems would be distributed computing systems, one running an AMD Athlon 64 X2 4200+, and the other running an AMD FX-8320E.

In terms of computing, the word “bottleneck” is very disproportionately levied against the AMD FX processor, and typically with regard to higher-end graphics cards. This became especially true after the GTX 900 series was introduced. Anyone who posts anywhere that they are building a system with an AMD FX processor and a GTX 900 series graphics card will likely get the term “bottlenecking” thrown at them — typically by someone who doesn’t understand the term, which seems to be practically everyone who uses it.

The ready assumption is easy to paraphrase: the AMD FX processor is always a bottleneck, period, end of story, no discussion. And the adjacent assumption is this: Intel processors are never a bottleneck, period, end of story, no discussion. The number of people who say in threads regarding the AMD FX processor “Stop defending AMD” is telling on that mark.

The more accurate statement is simply this: all components in a computer can be a bottleneck. The Intel i7-5960X can be a bottleneck. The Xeon processor can be a bottleneck. The AMD FX-9590 can be a bottleneck. The Titan X can be a bottleneck. The almost 300,000 cores (16-core Opteron processors) and 18,688 GK110 Tesla GPUs that comprise the Titan supercomputer can be bottlenecks.

The speed of light can be a bottleneck.

This was recognized and demonstrated by the late Grace Hopper, RADM, USN (ret.). One of Adm. Hopper’s many contributions to computing was simply to recognize that computers must become smaller to become faster — something that today seems so obvious. She was famous for carrying around “nanosecond wires” — strands of wire that were about 30cm long, the distance that light can travel in one nanosecond. Initially they were used to demonstrate why the speed of light is a limitation to satellite communication, but it served to also demonstrate why all components will have a ceiling with regard to bandwidth and processing power.
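
The arithmetic behind those wires is a one-liner, shown here in Python just to make the number concrete:

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second, exact by definition
NANOSECOND = 1e-9             # seconds

# Distance light covers in one nanosecond, in centimetres: ~30 cm.
print(f"{SPEED_OF_LIGHT * NANOSECOND * 100:.1f} cm")
```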

It is one of the reasons multi-core processors have become the norm, and why multi-processor mainboards have long been the norm in high-performance servers: when you can’t expand performance linearly, you expand it laterally through parallel processing.

So now that we have that out of the way, let’s talk about how the term is typically applied: CPUs and graphics cards. I will also explain why my Athlon 64 X2 processor is not “bottlenecking” the GTX 680.

What is a bottleneck?

So what exactly is a bottleneck, in the proper sense of the term? In short, it is an inefficiency in a process.

An optimal process is one wherein no part of the process must wait to do work. So if you have three steps to make a widget, and one person for each step, you would want to make sure that persons 1, 2, and 3 all take about the same amount of time to perform their steps. The goal is to minimize idle time.

If person 1 takes significantly longer than person 2, then person 1 is said to be an inefficiency — a “bottleneck” — in the process. If instead person 2 takes significantly longer than person 1, then person 2 is an even worse inefficiency: not only is he holding up person 3, he’ll actually force person 1 to slow down to prevent a backlog of work.

Note that whether a process has a bottleneck has nothing to do with the overall time it takes to perform the process. Only the time to complete one or more parts of the process in relation to the entire process.

A complementary question, however, is whether that happens to be the nature of the process being performed. Will that task always take longer? If so, the process may need to be redesigned or re-implemented by bringing on additional personnel. If person 1’s task takes only 5 minutes, but person 2’s task requires 10 minutes, then you’ll want two workers on step 2 of the process for every person on step 1.
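
To put numbers on that, here’s a toy throughput model in Python using the hypothetical step times from the example above. The pipeline’s sustained rate is simply the rate of its slowest step:

```python
def throughput(step_minutes, workers):
    """Units per hour each step can sustain, given minutes per unit
    and how many workers are assigned; the pipeline as a whole moves
    at the minimum of these (the bottleneck)."""
    rates = [60.0 / t * w for t, w in zip(step_minutes, workers)]
    return rates, min(rates)

print(throughput([5, 10, 5], [1, 1, 1]))  # ([12.0, 6.0, 12.0], 6.0) -> step 2 bottlenecks
print(throughput([5, 10, 5], [1, 2, 1]))  # ([12.0, 12.0, 12.0], 12.0) -> balanced
```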

The primary focus of managing the process and the workers at each step is minimizing idle time and ensuring the process can move smoothly. Some idle time may be desirable, especially when you’re talking about people, but too much is detrimental.

In alleviating inefficiencies or improving the process, the manager (typically with the help of consultants) will look at each step. To determine whether an inefficiency is ultimately unacceptable, the process manager will evaluate the options and determine whether correcting the inefficiency would be more costly than just living with it.

Typically, improvements must be significant to justify making them, to avoid a situation where correcting an inefficiency produces an overall loss or barely any gain.

Bottlenecks and computers

It should be quite easy to see how this applies to a computer. The “people” are the hardware, and the task they are trying to perform is the application or game you are trying to run. But it’s not a clean analogy. In most processes, the manager is not a part of the process; instead the manager delegates tasks and oversees the operation.

In a computer, on the other hand, the task manager is not only overseeing the process, but doing a significant amount of the work. This is why it is not proper to apply the term “bottleneck” to a computer, especially the central processing unit (CPU), since the central processing unit is the process. At the least it is not proper to say that a CPU will bottleneck a graphics card, or any other hardware for that matter, but instead to say that all the other hardware will “bottleneck” a CPU. Sit back and relax, we’re about to get very, very technical, so do try to keep up.

There are two types of operations a CPU performs: blocking and non-blocking.

The colloquial term for a “non-blocking” operation is “fire and forget”. The CPU tells the hardware to do something and then goes off and does something else immediately. It doesn’t care about the result and won’t wait for one, so the hardware is not blocking the CPU.

Then there are “blocking” operations — which comprise the vast, vast majority of tasks and instructions a CPU carries out. If “non-blocking” means the CPU won’t wait for a result, “blocking” means the CPU will wait however long is necessary for that result. Some operations will have a “timeout” value associated with them, meaning there is a ceiling to how long the CPU will wait. It is quite easy to see how these work, and they readily explain why the single most restrictive “bottleneck” in your system will always be storage and network devices.
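The same distinction shows up at every level of the stack. Here is a minimal POSIX sketch of the two behaviors, using standard input as a stand-in for a slow device:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    /* Blocking (the default): read() parks this thread until data
     * arrives, however long that takes. */
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    printf("blocking read returned %zd bytes\n", n);

    /* Switch the descriptor to non-blocking mode. */
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);

    /* Non-blocking: if nothing is waiting, read() returns immediately
     * with EAGAIN, and the program is free to go do something else. */
    n = read(STDIN_FILENO, buf, sizeof buf);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("non-blocking read: no data yet, moving on\n");
    else
        printf("non-blocking read returned %zd bytes\n", n);

    return 0;
}

Run it without typing anything and the first read() just sits there; that wait is the blocking. The second, non-blocking read() returns immediately and the program carries on.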

So again, the CPU is the process. This is why you will almost always see performance improvements in your system by going with a better processor.

Central processing units work through an instruction set — a set of instructions that have been hard-wired into the CPU. Here’s the interesting part: the CPU has essentially no instructions for talking to specific devices. (x86 retains a pair of generic port I/O instructions, IN and OUT, but nothing that knows what a screen or a disk is.) Instead CPUs talk to memory addresses, and nearly every instruction the CPU carries out manipulates registers and memory addresses. If you don’t believe me, look up the AMD64 and Intel x86 instruction sets and you’ll see that there isn’t even an instruction for writing text to the screen. Everything happens by manipulating data at memory addresses.

That is at the instruction set level — i.e. the actual instructions the CPU is running. If you’ve never studied assembly language, consider yourself lucky to have remained insulated from the granularity of all of that.
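The classic illustration is the legacy VGA text buffer. Assuming a freestanding, privileged environment, say a toy OS kernel (an ordinary user-space program would fault on this access), “printing” a character is nothing more than a store to a memory address the video hardware watches:

#include <stdint.h>

/* Illustration only: on a legacy x86 PC in text mode, video memory is
 * mapped at physical address 0xB8000. "Printing" a character is simply
 * a store to that region; there is no "print" instruction. This only
 * works in a freestanding, privileged context (e.g. a toy kernel). */
static volatile uint16_t *const vga = (volatile uint16_t *)0xB8000;

void putchar_at(char c, int row, int col)
{
    /* Low byte = character, high byte = color attribute (0x07 = light
     * grey on black). The screen is 80 columns wide in text mode. */
    vga[row * 80 + col] = (uint16_t)(0x0700 | (uint8_t)c);
}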

So then, why is it improper to say that a CPU bottlenecks a graphics card?

What’s often omitted from the discussion is simply that the CPU and other hardware work in tandem, never in isolation. It is rare that upgrading either the processor or graphics card, or going with multiple graphics cards, will result in no performance improvement. On LinusTechTips, I said this:

Upgrading the CPU or GPU will always improve performance in a gaming system. I don’t know of a situation where this won’t be true. You could have a GTX 480 with a Skylake CPU and, provided you don’t run into any incompatibility concerns, I’d wager it’ll outperform a Sandy Bridge with that same graphics card, and likely quite significantly. Sure we can argue there’s a ceiling, but I’d wager that you’ll run into incompatibilities before that ceiling becomes a concern.

Same with graphics cards. Pair a Titan X with a Sandy Bridge and it’ll outperform a GTX 480 with a Sandy Bridge. And if you have two Titan Xs in SLI with a Sandy Bridge — again, assuming no compatibility concerns — it’ll likely outperform a single Titan X with a Sandy Bridge, though the performance certainly won’t cleanly scale. But take those two Titan Xs and put them with a Skylake and you’ll see significantly better performance compared to the same on a Sandy Bridge. Knowing this makes the question of “bottlenecking” not an easy one to answer, and also shows the massive misuse of the term, because the question of whether there is a bottleneck still comes down to the process you’re trying to perform.

Specifically, the question concerns the output and requirements of the process. Will the hardware combination deliver an adequate level of performance? That depends on how “adequate” is defined, and comes down to the other variables involved — the monitor, its resolution and refresh rate, and the FPS and response level the combination can deliver. If the answer is no, then figure out what to upgrade.

And that brings me to the main point: the question is really one of optimal level of performance.

Optimal level of performance means meeting or exceeding a desired level of output. For any process, that should be the focus. Not how well any one part of the process performs (though that does matter), nor whether there is something better out there (there will always, eventually, be something better), but whether the whole is meeting or exceeding your expectations.

In the case of an application, the desired result is typically defined as completing a particular task within a particular period of time — the lower the better. This will often be defined in requirements when determining what equipment to purchase. For a game, the desired result is typically defined by the frame rate, refresh rate of the monitor, resolution, and visual quality settings for the game.

Whether a system achieves the desired result is up to its owner. And if the system does not achieve the desired result, whether the deviation is acceptable or needs to be corrected, and how to correct it, is also up to its owner.

The AMD FX processor

But whether a system can achieve a particular level of performance seems to be cause for confusion. For one, there are a lot of AMD FX naysayers who seem willing to just make shit up. I’ve seen numerous statements about the AMD FX processor that make me wonder whether the individuals making them have ever actually owned an FX processor, and under what kind of configuration. I’ve seen it written several times that the FX processor shouldn’t be used for MMOs or online play. Another commenter said that the FX processor cannot deliver anything more than 25 or 30 frames per second at 1080p in AAA titles.

My only thought seeing statements like that is “where the hell are they getting that information?” As I said in the previous article, if you believe what some of these people say, the level of performance I get from my system, and my wife got from her previous system, would be impossible.

As such, the question of the level of performance a system, in particular an AMD FX system, can provide has been subject to the “shifting of the goalposts” fallacy — “well the FX is fine up to [insert graphics chip here], but bottlenecks everything beyond that”… And all the while it is presumed that no Intel processor “bottlenecks” high-end graphics cards (setting aside for a moment the incorrect use of the word). And the definition of whether a graphics card is “bottlenecked” is whether it runs at 100% or not during a game, with it being declared “bottlenecked” if it never does.

Whether the overall system achieves a desired level of performance — the true definition of whether a system has any undesirable inefficiencies (i.e. bottlenecks) — seems to never be part of the discussion.

The Athlon 64 X2 and the GTX 680

I said that the GTX 680 is not being “bottlenecked” by the processor, even going by the incorrect usage. Many would certainly dispute that, mainly because they likely won’t ask what I’m doing with this setup, or understand how this setup could possibly work. The PCI-Express 1.0a link is fast enough that it isn’t constricting the card for the tasks being performed, so having a newer PCI-Express standard won’t provide any improvement. A faster processor will only provide a faster transition between tasks, and the difference is likely to be largely insignificant.
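For context, PCI-Express 1.0a carries roughly 250 MB/s per lane in each direction, so a full x16 link is good for about 16 × 250 MB/s = 4 GB/s each way. A compute workload that keeps its data resident on the card’s memory rarely comes anywhere near that.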

As such, because the demand on the PCI-Express system bus is relatively small, the CPU is able to transfer the data to the GTX 680 without breaking a sweat. And the system is able to run through the Berkeley tasks without any problem and with minimal CPU usage — the Linux GeForce driver doesn’t provide any GPU usage statistics, unfortunately.

For the requirements and demands placed on this system, the only system bottleneck is the GTX 680. I could alleviate that bottleneck by purchasing a GTX 980, 980 Ti, or Titan X, even an nVidia Tesla — provided such cards won’t be limited by the PCI-Express 1.0 link. But the gains to me — a quicker accumulation of BOINC credits — are not worth that expense (especially the Titan X and Tesla). Again, CPU usage is minimal, so a faster processor won’t help. The Ethernet connection is faster than my Internet connection, so there is no improvement to be made on that front. And only 1GB of RAM is in use, so switching to faster RAM likely won’t improve anything either.

Compared to a GTX 770 connected to an FX board (the GTX 770 is a re-branded GTX 680), the X2 won out because of the operating system setup (Linux on the X2 versus Windows 8.1 Pro on the FX), finishing a similar task about 25 seconds faster than the FX/GTX 770 combination: 265 seconds compared to 290, a difference of about 8.6%.

So for the tasks in question, a faster CPU or even a newer platform would be a waste of money since there would likely be no significant gain in performance.

Inefficiencies in your system

So when it comes to your system, again the question to ask is whether you are getting a desired level of performance. If you are not, evaluate what to upgrade. Would more memory be better? Would a newer graphics card be better, or would you be better off replacing the mainboard and processor with a newer platform?

Unfortunately you are largely not going to find a good answer to these questions in any online forum. As soon as you post the specifications of your system, they will be cherry-picked, and you will be told to upgrade a certain way. Not suggested, told. Don’t even consider anything else. Buy only what they tell you to. Think I’m joking? I’ve seen what happens when someone mentions they have an AMD processor. They are told to buy Intel.

This problem is especially evident on the Linus Tech Tips forum. One person who talked about water cooling an AMD system was met with the “have you considered upgrading?” response twice before a more reasonable person chimed in with “how about not just shooting people down, but also offering to help…that way they don’t feel like you’re insulting them”.

Another person was told “Instead of wasting money on water cooling, spend money on a real processor that is capable of pushing two R9 290Xs.” Yet another person who wanted a custom loop for an FX-6300 was also met with “Don’t waste your money” and “Your money will be much better spent buying an Intel CPU”. Apparently people forget that water cooling a budget system can be more about learning than about the benefit the custom loop will provide. I learned a ton water cooling my own and my wife’s systems, and that knowledge was poured into building a high-end system for a friend.

And more recently another person who asked about putting an AIO on an FX-8320 was basically given the treatment of “that money is better spent switching to Intel”, a response I called “more condescending than helpful” by comparing it to “pushing someone to buy a new car when all they need is to have their HVAC or cooling system repaired”. I guess everyone forgot that virtually every AIO on the market includes mounting hardware for both AMD and Intel sockets.

With any potential upgrade path, there is, obviously, going to be a ceiling — a point beyond which you won’t see any significant improvements to performance — and a point of diminishing returns, wherein the benefit starts to decline dramatically the higher up you go. And if what you’re attempting to run is very poorly implemented software, the point of diminishing returns is hit much, much sooner. As I said in the previous article, you’ll need significantly faster hardware to overcome poorly implemented software — something we are seeing with DirectX 12 benchmarks.

When it comes to gaming performance, the important question is actually your frame rate versus your monitor’s refresh rate. If you have a 60Hz monitor (or television) and your frame rates are consistently over 60 frames per second in what you currently play, it’s pointless to upgrade because you won’t actually see the improvement. And when frames are delivered out of step with the refresh cycle, you’ll start to observe a phenomenon called “tearing”.
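As a sketch of the arithmetic behind that (render_frame() here is just a placeholder): at 60Hz the frame budget is 1/60 of a second, about 16.7 ms, and a simple limiter sleeps off whatever is left of each frame’s budget so frames are never presented faster than the display can show them. Proper vsync does this in hardware; this is only an illustration.

#include <time.h>

#define REFRESH_HZ 60.0   /* assumed display refresh rate */

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Placeholder for whatever work produces a frame. */
static void render_frame(void) { /* ... */ }

int main(void)
{
    const double budget = 1.0 / REFRESH_HZ;   /* ~16.7 ms at 60Hz */

    for (;;) {
        double start = now_seconds();
        render_frame();
        double elapsed = now_seconds() - start;

        /* Finished early? Sleep off the remainder so frames are never
         * presented faster than the display refreshes. */
        if (elapsed < budget) {
            double remain = budget - elapsed;
            struct timespec pause;
            pause.tv_sec  = (time_t)remain;
            pause.tv_nsec = (long)((remain - (double)pause.tv_sec) * 1e9);
            nanosleep(&pause, NULL);
        }
    }
}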

Gaming performance, and improvements thereto, is also not a straightforward topic to address because the CPU and graphics systems work in tandem. Improving either will improve your gaming performance. If you have an SLI or Crossfire capable graphics card, you will see performance improvements by going from one card to two, even if you’re running an FX processor.

Instead, again, what to look for is idle time, as that points out inefficiencies relative to your expectations for the system. For example, if you’re running a program that is very CPU intensive but not very GPU intensive, then you’d focus only on the CPU. If you’re running a program that is heavily GPU intensive, such as a game or 3D modeler, you’d look at both, but the GPU would be more important, and you’d likely want to see GPU usage higher than CPU usage depending on what you’re doing. I realize there are applications that will tax both the CPU and GPU to pretty significant degrees.
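On Linux you can eyeball idle time with top or a similar monitor. As a sketch of what those tools do under the hood, here’s a minimal program that samples the aggregate CPU counters in /proc/stat over a five-second window:

#include <stdio.h>
#include <unistd.h>

/* Linux-specific sketch: sample the aggregate CPU counters from
 * /proc/stat twice and report what fraction of the interval the
 * CPU spent idle. */
static int read_cpu(unsigned long long *idle, unsigned long long *total)
{
    FILE *f = fopen("/proc/stat", "r");
    if (!f) return -1;
    unsigned long long v[10] = {0};
    /* first line: cpu user nice system idle iowait irq softirq steal ... */
    int n = fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
                   &v[0], &v[1], &v[2], &v[3], &v[4],
                   &v[5], &v[6], &v[7], &v[8], &v[9]);
    fclose(f);
    if (n < 4) return -1;
    *idle = v[3] + v[4];   /* idle + iowait */
    *total = 0;
    for (int i = 0; i < 10; i++)
        *total += v[i];
    return 0;
}

int main(void)
{
    unsigned long long i0, t0, i1, t1;
    if (read_cpu(&i0, &t0)) return 1;
    sleep(5);              /* measurement window */
    if (read_cpu(&i1, &t1)) return 1;
    double idle_pct = 100.0 * (double)(i1 - i0) / (double)(t1 - t0);
    printf("CPU idle over the window: %.1f%%\n", idle_pct);
    return 0;
}

If that number stays high while your workload is running, the CPU is not your problem.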

In the X2 graphics host, I would expect CPU usage to remain minimal because I don’t have any Berkeley CPU tasks running on that system, only OpenCL tasks. In a game, however, I would expect the graphics usage to be significantly higher than the CPU. I should not expect the CPU usage to ever max out.

This isn’t to say there is no benefit to upgrading. It just may not be nearly as pronounced as one might expect — unless it’s been at least 5 years since your last upgrade. In terms of gaming, I’ve posited before that there is no point to switching from an AMD FX processor to any Intel processor if your system is already delivering frame rates that exceed your monitor’s refresh rate for the games you play with all quality settings maxed out — or at quality settings with which you are satisfied.

In the case of the X2 graphics host, again, I’ve already demonstrated there won’t be any gains going with a faster processor or newer platform. For what it does, the X2 and the single PCI-Express 1.0a link suffice.