First and foremost, in the likely event that you have not read Tim Urban's posts on the topic of Artificial Intelligence and the AI Revolution (Part 1 and Part 2), I'd encourage you to go read them now.
Assuming you’ve read those articles, there are a few things I’d like to bring up and discuss:
It is not a matter of ‘if’ humans will create AGI, but a matter of ‘when’
Conservative estimates from experts indicate that AGI will exist by 2075
Realistic estimates say that it is more likely to occur by 2040, with ASI emerging by 2060 (some experts such as Ray Kurzweil believe 2030 and 2045 to be more accurate estimates, respectively)
It is quite possible (even probable) that ASI will emerge within a few years (or even days, or hours) of the emergence of AGI. Either way, the speed at which this occurs will probably be faster than we are able to comprehend, and almost certainly faster than we can contain
Any ASI that emerges will either be Friendly or Unfriendly. Friendly ASI could very easily give us immortality, while Unfriendly ASI would quite probably lead to the extinction of the human race
Any ASI that emerges will likely see a MAJOR threat to its existence and goals in any other upcoming AI (regardless of its level of intelligence), and will likely terminate all other forms of AI (regardless of whether they are Friendly or Unfriendly). This means humanity likely only gets ONE chance at creating an ASI, and that all other forms will be exterminated by this initial ASI
Taking all of this into account, and assuming the estimates for when ASI will emerge are not drastically incorrect, it is reasonable to conclude that one will be created within my own lifetime. If this is the case, it is not unfathomable to believe that my generation will be the first that does not die of what we currently consider to be natural causes. If the ASI that emerges is Friendly, human lifespan could increase by drastic orders of magnitude, or even become unbounded. If it is Unfriendly, my generation, or the one immediately after it (those born in the last 15 years), is quite likely at risk of an extinction-level event. Both of these cases are non-traditional in terms of how we die: either not at all, or all at once.
Before continuing with this post, please take some time to think about what this means to you philosophically, and what your reaction would be for each of these cases. Really take the time to think about that. We are, after all, talking about the future of the human race, and on a more personal and immediate level, the quality (and means) of life for your children and grandchildren. What do you want for yourself, and for them?
For me, I don’t want to be a part of either of these worlds. I don’t want to see the fall of humanity through some extinction event due to (as far as I’m concerned) our stupidity and inability to properly plan for and anticipate the repercussions this might realistically have. I also do not want to live on for eternity, whether that means living in my current biological body or in an optimized, machine-constructed one that is free of disease and pain, has drastically increased mental and physical capabilities, or carries any other form of unfathomable improvement. This is a world where not only is anything you can imagine possible, but all of that and more is going to be a trivial reality.
The first of those may seem like a reasonable position to have, but some of you, maybe even most of you, will probably think that the second scenario seems like a utopian dream come true. And for some people, it will be. It may very well be the golden age of humanity and the pinnacle of perfection. I, for one, prefer the flawed reality we live in now to those endless possibilities (or more accurately, endless realities). And here is why.
Good comes from experiencing bad.
Happiness comes from knowing sadness.
Progress stems out of struggle.
Love does not exist without pain and loss.
Ultimately, life has meaning only because of death.
For me, everything good - everything I enjoy, and everything I strive for - is only that way because of its equal and opposite bad counterpart. Over the past few weeks I’ve been re-examining a lot of things, including the way I think and live in general. I’ve been realizing (and this is a personal realization, not one that you should by any means adopt) that life, and the way you live it, is very much like gambling at a casino. There are three ways to approach it. You can be reckless and selfish and do nothing but make big bets on anything at a whim, which can result in you getting lucky and hitting the jackpot, or, more likely, getting very, very unlucky and not having a single penny to fall back on. You can be extremely conservative (even to the point of not making any bets), and while that’ll mean you can stay at the casino for a very long time (possibly for your whole long-lived life), it also means that most of the time you’ll be watching everyone else have the fun from the sidelines. The last option is to make smart bets. This would be the equivalent of choosing the game at the casino with the best odds of winning, like Blackjack, and furthermore, learning to count cards. You’d still be living at the whim of a certain amount of randomness, but in general, if you make smart bets, and bet big when the odds are in your favor, you’ll probably get to have lots of fun, and win big doing it, so that you have something to fall back on when you decide it’s time to leave the casino altogether. On the other hand, you are still gambling, so you could end up broke and penniless too.
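As a toy illustration of those three approaches (mine, not anything from the original analogy beyond its spirit), here is a minimal Monte Carlo sketch in Python. Every number in it is an assumption chosen for illustration: a 49% win chance on ordinary hands, a 51% edge for the card counter, and a 2% stake, which happens to be the Kelly fraction for an even-money bet at that edge.

```python
import random

ROUNDS = 500      # hands played over a "lifetime" at the table
TRIALS = 2_000    # simulated lifetimes per strategy
START = 100.0     # starting bankroll (arbitrary units)
RUIN = 1.0        # treat losing ~99% of the bankroll as going bust

def reckless(bankroll):
    # Bet everything on a near coin flip (49% win); one loss ends it all.
    return bankroll if random.random() < 0.49 else -bankroll

def conservative(bankroll):
    # Barely play: tiny stakes mean tiny swings, and a slow bleed to the house.
    stake = bankroll * 0.001
    return stake if random.random() < 0.49 else -stake

def smart(bankroll):
    # Card-counter stand-in: a small edge (51% win) on the hands you play,
    # staking 2% of the bankroll, the Kelly fraction for that edge.
    stake = bankroll * 0.02
    return stake if random.random() < 0.51 else -stake

def lifetime(strategy):
    bankroll = START
    for _ in range(ROUNDS):
        if bankroll < RUIN:
            return 0.0
        bankroll += strategy(bankroll)
    return bankroll

if __name__ == "__main__":
    for strategy in (reckless, conservative, smart):
        results = [lifetime(strategy) for _ in range(TRIALS)]
        busts = sum(r == 0.0 for r in results) / TRIALS
        behind = sum(0.0 < r < START for r in results) / TRIALS
        mean = sum(results) / TRIALS
        print(f"{strategy.__name__:>12}: bust {busts:5.1%}, "
              f"ended behind {behind:5.1%}, mean bankroll {mean:8.1f}")
```

Run it and the pattern matches the intuition above: the reckless player essentially always goes bust, the conservative player survives but slowly bleeds chips on the sidelines, and the smart bettor usually walks away ahead, though even with the odds in their favor, a meaningful fraction of lifetimes still end down.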
That certain, inherent risk that comes with living life that way intrigues me more and more every time I think about it and about the way I’ve been living life thus far. For some people, playing conservative on most (or every) hand is the way they WANT to live their life. And that’s okay. I just don’t think it’s for me.
Bringing this back to Artificial Intelligence and ASI, it’d be like one person, or a group of people (quite likely working secretly, without the knowledge of anyone else), betting every single penny of the entire human race on a single hand. There are a few things to note:
In casinos, as with AI, there are SOOOOOOOO many things that we players don’t properly comprehend or grasp. And, in the end, the house is ALWAYS favored
There are only two outcomes to this bet. Either the casino wins and the human race goes extinct, or this person or group of people wins, and every single bet thereafter is guaranteed to win
Putting this in perspective for myself, if the casino wins, it has to collect somehow. That may be quick and painless and unbeknownst to the rest of the human race, or it could be painful and lengthy and very obviously coming. If humanity wins, then the game becomes pointless and boring and lengthy (again, this is my personal take). Neither of these cases appeals to me. I simply don’t want to be around to witness it.
With that in mind, my strategy is to go the route of betting smart, expecting that, since the house is favored, my number will come up at some point. I simply hope that it comes up as close as possible to the point in time when AGI and ASI are created, while still occurring before it, and that in the meantime, I have a hell of a lot of fun playing the game.