My last couple of posts talked about radical Islam and the rise of nanotechnology. I think once you start to combine these two things, you end up with the potential for some bad outcomes that illustrate the biggest problem with technological advancement.
Technological advancement is all about building on top of itself–making it faster and easier to do more with less. Computers that took up a room 50 years ago are less powerful than a smartphone you can put in your pocket today. To individuals and small groups, technology often provides leverage, enabling small actions to have a much bigger impact than ever before. And this is what creates the problem.
The huge benefit of guns
Guns were a huge technological advancement. Compared to bows and spears, they enabled individuals to shoot farther, hit more often, and do more damage. What’s more, they reduced the amount of skill and strength needed to kill people at a distance. Medieval longbows required 90–110 pounds of force to draw, and it took boys years to become proficient with the weapon.
When the musket was developed, those years of rigorous training became unnecessary. Some effort was still required to become proficient, but far less than with the longbow.
That’s great technological advancement, of course. It makes hunting and warring far more efficient than they would otherwise be. But it also has a downside–the gun magnifies the ability of a single person to go on a murderous rampage. Today, a single person with a few easily acquired guns can decide to go berserk and has a reasonable chance of killing or wounding dozens of people before being stopped. The gun gives them far greater leverage to kill.
This is unfortunate. Because there are so many people in the world, you don’t need a high percentage of them to go nuts for there to be bad consequences. For instance, in the USA, I imagine fewer than 0.001% of people would seriously consider going on a shooting rampage. But that’s still about 3,000 people.
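The back-of-envelope arithmetic here is easy to check. A minimal sketch, assuming a US population of roughly 300 million (the Obama-era figure; the exact number is an assumption, not from the original):

```python
# Sanity check of the estimate above.
# Assumption: US population of roughly 300 million.
us_population = 300_000_000
fraction = 0.001 / 100  # 0.001%, expressed as a fraction

potential_attackers = round(us_population * fraction)
print(potential_attackers)  # 3000
```

Even a vanishingly small percentage, multiplied across a large population, leaves thousands of people in the tail.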
And this reality is reflected in the stats. There hasn’t been a single week in Obama’s second term in which there wasn’t a mass shooting in the USA.
The next level
That’s one simple weapon. What happens when you create the technology that provides even more leverage for a single person or a small group of people to do damage? Well, you get bombings and events like 9/11.
Then, when you take the next step above that, you get nuclear weapons. A nuke small enough to be carried in a plane can cause damage equivalent to 50 million tons of TNT.
The thing I find most fascinating about nuclear weapons is that they don’t actually require that much knowledge to assemble. If you gave a small, reasonably smart team the necessary components, they could almost certainly build one. The main reason we aren’t encountering terrorists with nukes isn’t that a bomb is hard to build, but that the materials are difficult to obtain.
So, what happens when we get the nanotechnology that will enable a doctor to create a tiny robot that seeks out and eliminates a type of tumor? Or when we have the capability to assemble DNA to build microbes to do things like convert sunlight and carbon dioxide into oil?
Once it becomes easy to do these sorts of things, some people will look to weaponize the technology. Instead of seeking out a tumor and killing it, maybe those tiny robots will be programmed to drift on the air and kill anyone with blue eyes. Maybe the microbes will be designed to quietly infect everyone, then start killing them six months later.
This sort of technology has a good chance of becoming easily accessible to the masses–or at least small groups of people. Of course, to the vast majority, nano- and biotechnology will be a huge benefit to their lives. They’ll never even consider using the technology for ill.
The problem is, if it’s possible for a small group of individuals to create a virus or nanobot capable of destroying humanity, you only need one small group out of the 7 billion people on earth to make that decision, and then it’s all over. This isn’t like 9/11. You won’t be able to go back and say, “Ok, our security network didn’t identify that threat. How do we detect it the next time?” Once even a single one of these threats materializes, it’s game over.
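The “you only need one group” logic can be made concrete with a toy model. Assuming (purely for illustration; none of these numbers come from the original) that each of n independent small groups has some tiny probability p of both acquiring the capability and choosing to use it, the chance that at least one does is 1 − (1 − p)^n, which climbs toward certainty as n grows:

```python
# Toy model: each of n independent groups has probability p of acting.
# The numbers below are illustrative assumptions, not real risk estimates.
def prob_at_least_one(p: float, n: int) -> float:
    """Probability that at least one of n groups acts, given per-group probability p."""
    return 1 - (1 - p) ** n

# Even a one-in-a-billion per-group chance becomes likely at this scale.
print(prob_at_least_one(1e-9, 1_000_000_000))  # roughly 0.63
```

The point of the model is structural, not numeric: no matter how small p is, a large enough n makes a single failure nearly inevitable, and with this class of threat a single failure is terminal.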
Unfortunately, there aren’t many good solutions to this problem. The scientists working in these areas–when asked about these issues–seem to wave their hands and assume that regulation, or some magical way of detecting the bad actors, will be developed.
But I’m not that hopeful. I mean, are we really going to put infrastructure in place to watch every person on the planet at all times? And if so, will the people in power be able to resist abusing it? It seems very unlikely. And remember, out of billions of people, you only need one small group to fall through the cracks and it’s all over. We only get one chance.
Maybe the only real solution is redundancy–colonizing other planets as quickly as possible. That way, when some jerk wipes out everyone on Earth, there will still be people elsewhere who will survive and rebuild. Of course, this isn’t a very satisfying solution, particularly for all the people on Earth who are killed. But it might be the only one.
It’s a pity that we’re spending hundreds of times more on building weapons than on space travel.