One of the interesting things about writing science fiction these days is that you don’t actually need to look far into the future to come up with interesting stories. Whereas historically, technology in science fiction was often akin to magic (Star Trek transporters, faster-than-light travel, etc.), there are huge technological advancements happening right now that make some wacky old sci-fi ideas start to seem realistic.
This is great, until you realize that most of those wacky sci-fi ideas are actually from dystopias. Thus, here’s my list of the five things that scare me most about the future.
1. Artificial Intelligence
The rise of artificial intelligence (AI) has the potential to be the most transformative event in human history. The challenge is the singularity: the point at which an AI becomes smarter than us and able to improve itself beyond what any human could. Once that happens, you can get exponential growth in the AI’s intelligence and capabilities.
Even in the best case, where (every) AI is benevolent, almost every job has the potential to be done better and more efficiently by an AI. All those white-collar jobs where people are paid to think (i.e. the ones everyone had to transition to when the manufacturing jobs disappeared as a result of automation) will go away. This transition could leave vast numbers of people unemployed. While the AI might be able to produce enough for everyone, our social and economic systems aren’t designed for a world where we have plenty of everything and labor is no longer required.
Then there’s the worst case, where the AIs (or even just one AI) aren’t benevolent. They could be actively hostile to humans, or simply not care about us while pursuing their own goals. Hackers can already cause chaos in the world, so what happens when an AI is a million times smarter than any hacker and has better resources? Maybe it’ll decide to divert all resources to enhancing its own abilities rather than providing food, shelter, and medicine to humanity. Or maybe it’ll decide that humans are simply in the way. That could be rather bad.
2. Nanotechnology
When you create machines that are really small, you can potentially do a lot of good. You can manufacture extraordinary materials, perform medical feats far beyond the capabilities of drugs or scalpels, or clean up the environment.
The problem is that you can also do bad things, like create self-replicating machines designed to hurt people, tamper with their brains, or mess with any number of other things. Worse, nanobots are too tiny to be seen and impossible to guard yourself against. You could, in theory, create a nanobot that self-replicates all over the world and, after a specific time period, kills anyone who has blue eyes. And I have blue eyes, so that’s pretty terrible.
3. Genetic Engineering
Genetic engineering has many of the same problems as nanotechnology. I’m not particularly concerned about people meddling with humans themselves (e.g. eliminating genetic defects at the zygote stage, or creating superhumans). While those things raise huge moral and ethical issues, they are much more likely to result in the evolution of humanity than in its rapid, violent extinction.
The thing that does worry me is the potential for genetically engineered disease, because if people can do it, they will. Like nanobots, genetically engineered viruses are basically impossible to guard yourself against. You constantly hear of cases where nasty people threaten to post embarrassing pictures to Facebook unless the victim pays them off. So how long will it be before someone infects a victim with a customized disease, and only provides the antidote if the victim pays up?
4. Surveillance
One big change that’s happened in the western world in the last 20 years is the rise of the surveillance state. Largely, this has been ignored; people don’t seem to care if the government reads all their emails or snoops on all their electronic devices. But I think they should, because Orwell’s 1984 is a warning.
If we give governments the capability to monitor everything we do, they will use it. Then you have the potential for a police state with complete information on all its citizens, and for a permanent surveillance-based dystopia. Dictators will be able to do whatever they want, and it will be all but impossible to fight back, since they will be able to detect and crush dissent immediately. Putting in place the infrastructure that makes that sort of world possible seems like a bad idea.
5. People’s ability to do bad things
The common thread for many of the items on this list is technology increasing the ability of a few people to cause widespread destruction. I have no doubt that the vast majority of the results of nanotechnology and genetic engineering will be hugely positive. The problem is the 0.1% of cases where someone uses them for evil, which will have much broader consequences than the misuse of technology does today.
Nanotechnology and genetic engineering toolkits will be built that can be used broadly, outside of cutting-edge university labs. Then you only need one person who decides it’s a good idea to create a self-replicating nanobot that blocks the blood flow to people’s hearts, and the game is over. Similarly, you only need one president who decides that the people who disagree with him ideologically are a threat to the state, and you’ve got a permanent Orwellian dictatorship.
The bottom line
To me, that’s the downside of technological advancement: we’re not that far from a place where one nut job can decide to ruin everything for everyone. Our capitalist system is great at encouraging innovation, but quite poor at putting limits on technology for the common good. Heck, we can’t even recognize that drastically changing the planet’s climate, when we have nowhere else to live, isn’t a good idea.
Stephen Hawking recently said that now is a particularly dangerous time for humanity: we have created new technological risks that could lead to us destroying ourselves within the next 100 years. I’m inclined to agree with him.
You touch on the defining issues of your time. However, does not every past time also experience issues that threaten? For instance, the social displacement of the industrial revolution. One offers potential solutions in the hope of an improved future.
I think the difference this time is that the scale of the damage that can be done is massively increased. This is the first time that a small group of people with relatively accessible technology could wipe out most of humanity.
I guess the main comparable event would be the creation of nuclear weapons: did they truly know that triggering the first atomic bomb wouldn’t cause a chain reaction that would engulf the entire planet? (The theory said it wouldn’t, but did they truly know?)
I think these technologies are more akin to creating a magic gun and giving it to everyone, except that if the gun is ever fired at a person, it kills 98% of the population.