The Biggest Problem We Have

[Image caption: Rabbit testing for sarin]

My last couple of posts talked about radical Islam and the rise of nanotechnology. I think once you combine those two things, you get the potential for bad outcomes that illustrate the biggest problem with technological advancement.

Technological advancement is all about building on top of itself–making it faster and easier to do more with less. Computers that filled a room 50 years ago are less powerful than the smartphone you can put in your pocket today. To individuals and small groups, technology often provides leverage, enabling small actions to have a much bigger impact than ever before. And this is what creates the problem.

The huge benefit of guns

Guns were a huge technological advancement. Compared to bows and spears, they enabled individuals to shoot farther, hit more often, and do more damage. What’s more, they reduced the amount of skill and strength needed to kill people at a distance. Medieval longbows required 90-110 pounds of force to draw, and it took boys years to become proficient with the weapon.

When the musket was developed, those years of rigorous training became unnecessary. Some effort was still required to become proficient, but far less than the longbow demanded.

That’s great technological advancement, of course. It enables hunting and warring to be far more efficient than they would otherwise be. But it also has a downside–the gun magnifies the ability of a single person to go on a murderous rampage. Today, a single person with a few easily acquired guns can decide to go berserk and has a reasonable chance of killing or wounding dozens of people before being stopped. The gun gives them far greater leverage to kill.

This is unfortunate, because there are lots of people in the world, and you don’t need a high percentage of them to go nuts for there to be bad consequences. For instance, in the USA, I imagine fewer than 0.001% of people would seriously consider going on a shooting rampage. But with a population of roughly 300 million, that’s still 3,000 people.
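To put rough numbers behind that claim, here’s a back-of-the-envelope sketch in Python. Both figures are illustrative assumptions, not measured values:

    # Back-of-the-envelope version of the base-rate argument above.
    # Both numbers are illustrative assumptions, not measured values.
    us_population = 300_000_000       # rough US population
    rampage_rate = 0.001 / 100        # the hypothetical 0.001% above

    potential_shooters = us_population * rampage_rate
    print(f"{potential_shooters:,.0f} people")  # -> 3,000 people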

And this reality is reflected in the stats. Not a single week of Obama’s second term has passed without a mass shooting in the USA.

The next level

That’s one simple weapon. What happens when you create technology that gives a single person or a small group even more leverage to do damage? Well, you get bombings and events like 9/11.

Then, when you take the next step above that, you get nuclear weapons. A single bomb that can be carried in a plane can do damage equivalent to 50,000,000 tons of TNT.

The thing I find most fascinating about nuclear weapons is that they don’t actually require that much knowledge to assemble. If you gave a small, reasonably smart team the necessary components, they could almost certainly build a bomb. The main reason we aren’t encountering terrorists with nukes isn’t that the weapons are hard to build, but that the fissile material is difficult to acquire.

Better technology

So, what happens when nanotechnology enables a doctor to create a tiny robot that seeks out and eliminates a type of tumor? Or when we have the capability to assemble DNA into microbes that convert sunlight and carbon dioxide into oil?

Once it becomes easy to do these sorts of things, some people will look to weaponize the technology. Instead of seeking out a tumor and killing it, maybe those tiny robots will be programmed to drift on the air and kill anyone with blue eyes. Maybe the microbes will be designed to quietly infect everyone, then start killing six months later.

This sort of technology has a good chance of becoming easily accessible to the masses–or at least to small groups of people. Of course, for the vast majority, nano- and biotechnology will be a huge benefit to their lives. They’ll never even consider using the technology for ill.

The problem is, if it’s possible for a small group of individuals to create a virus or nanobot capable of destroying humanity, you only need one such group out of the 7 billion people on earth to make that decision, and then it’s all over. This isn’t like 9/11. You won’t be able to go back and say, “Ok, our security network didn’t identify that threat. How do we detect it next time?” Once even a single one of these threats materializes, it’s game over.
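The math behind “one group is enough” is worth spelling out. Here’s a minimal Python sketch, using a made-up per-group probability, of how a vanishingly rare impulse becomes a near-certainty as the number of capable groups grows:

    # Chance that at least one of n independent groups goes through with it,
    # given a tiny per-group probability p. The numbers are made up purely
    # to show the shape of the curve.
    def prob_at_least_one(p: float, n_groups: int) -> float:
        return 1 - (1 - p) ** n_groups

    for n in (1_000, 100_000, 10_000_000):
        print(f"{n:>10,} groups -> {prob_at_least_one(1e-7, n):.4f}")
    # ->      1,000 groups -> 0.0001
    # ->    100,000 groups -> 0.0100
    # -> 10,000,000 groups -> 0.6321

The exact numbers don’t matter; what matters is that the risk compounds as the capability spreads.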

The solution

Unfortunately, there aren’t many good solutions to this problem. The scientists doing work in these areas–when asked about these issues–seem to wave their hands and assume that regulation, or some magical way of detecting the bad people, will be developed.

But I’m not that hopeful. I mean, are we really going to put infrastructure in place to watch every person on the planet at all times?  And if so, will the people in power be able to resist abusing it?  It seems very unlikely.  And remember, out of billions of people, you only need one small group to fall through the cracks and it’s all over. We only get one chance.

Maybe the only real solution is redundancy–colonizing other planets as quickly as possible. That way, when some jerk wipes out everyone on Earth, there will still be people elsewhere who will survive and rebuild.  Of course, this isn’t a very satisfying solution, particularly for all the people on Earth who are killed. But it might be the only one.

It’s a pity that we’re spending hundreds of times more on building weapons than on space travel.

2 thoughts on “The Biggest Problem We Have”

  1. Yeah, that’s why I’m thinking there’s a high chance of a binary outcome for human civilization: either we manage to singularity’ze in one way or another, or we blow ourselves up. Of course, we could do both if we get opposing actors after the singularity.

    OTOH, you might be a bit too pessimistic (optimistic?) about the “nanobot capable of destroying humanity … and then it’s all over” part. It’s likely that the situation will be similar to computer viruses: bad actors can do serious damage (possibly wipe out a huge swath of the population), but they won’t be able to wipe out everyone. You could make a similar argument about computer terrorism even now, but we haven’t seen any significant attacks yet. Bad actors still prefer the older mechanical means (guns, etc.). It’s not guaranteed that things will somehow change in the nanobot age. In particular, it may never be easy to create a nanobot virus that can evade defense systems and do significant damage to a large percentage of defensive-nanobot-enhanced humans. It will probably be possible for high-tech, high-budget actors like states or perhaps big corps (though the evil corp is a silly trope, really – corps want to make money, not wipe out their markets), but maybe not easily possible for the 0.0001% of sociopaths.

    We’ll see. 🙂


    1. I think it’s a great point that jerks tend to use mechanical means. I think part of that is the visceral nature of seeing people cut down by bullets or blown to bits. If you’re trying for terror, the mechanical attacks are quite effective. If you think about it, one terrorist attack on US soil has basically made America abandon many of its core principles.

      Plus, on good days, I think that, despite the vilification of terrorists, most of them aren’t insane and do have people they care about. So, creating a disease or destructive nanobots that they can’t control would be counterproductive. If that’s truly the case, perhaps we only have to worry about nanobot attacks that use genetic targeting (so the evildoers can murder only the people with the wrong skin color) or the rare insane messiah like Jim Jones.

      If it does come down to a nanobot vs. nanobot cold war, I’m not terribly optimistic, for two reasons that, combined, are really bad. First, people are horrible at anticipating and responding to threats they’ve never seen before. Second, a single release of a nanobot virus has the potential to kill everyone on earth, and if it happens even once, the game’s over. So, to survive a nanobot vs. nanobot cold war, you have to believe that we can somehow create something that responds almost instantly to threats we’ve never seen before. (Like, how long after the gun was created was a light and effective bullet-proof vest made? And how does that vest do the first time someone pulls out a grenade?)

      So, I hope you’re right that nanobot technology is outside the abilities of all but the biggest budgets. That would be a possible saving grace. (I mean, we haven’t had a nuclear war yet.)

