What is an AI?
There are three standard categories of AI: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). ANI surrounds us at almost all times; your phone runs hundreds of ANIs that are absurdly better than humans at their chosen tasks. For instance, the calculator on your phone is better at computational math than the single best human on the planet. Your phone also handles hundreds of other tasks with ease. This form of AI is by far the easiest to create: it can be built with a thorough understanding of the task at hand. However, it is also the least capable form of AI. It does not truly learn, but only processes data and outputs a result. Yet the best of these AIs are incredibly advanced. Siri and Cortana, for example, are sophisticated ANIs that quickly learn human language and even their owners' unique pronunciation. But while these AIs are great at handling our day-to-day lives, they could never replace a human. No ANI will think creatively or learn a new skill.
AGIs are not like this. An AGI will have a general form of intelligence; that is, it will know how to learn. So far this task is proving incredibly difficult, and the reason is linked to the basic question of what intelligence is in the first place. How do we as humans tell that an image is of a cat? A cartoon doesn't truly look like a cat, yet somehow our brains are completely capable of looking at a flat cartoon image and deducing that it is a drawing of a cat.
We have tried, and are trying, several different approaches to recreate this kind of general reasoning skill. The first is to simulate the human brain inside a computer, which several different groups are currently attempting. The most complex brain simulations built so far are no smarter than a cat (Footnote 1). While this seems incredibly underwhelming, at our current rate of computational increase such an AI is likely to be as smart as a human by 2025.
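At its smallest scale, the brain-simulation approach boils down to simulating individual neurons and wiring billions of them together. As a minimal, purely illustrative sketch (the function name and constants are our own assumptions, not taken from any real simulation project), here is a single leaky integrate-and-fire neuron, the kind of building block such simulations scale up:

```python
def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which this toy neuron fires (spikes)."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # integrate input, leak charge
        if potential >= threshold:              # fire once threshold is crossed
            spikes.append(t)
            potential = 0.0                     # reset after spiking
    return spikes

spike_times = simulate_neuron([0.5, 0.4, 0.3, 0.0, 0.9, 0.6])  # fires at steps 2 and 5
```

A real brain simulation runs billions of these units in parallel with learned connection strengths, which is why the supercomputers in Footnote 1 are needed even to match a cat.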
The next approach is to simulate what created our intelligence in the first place: evolution. This approach has one large drawback: it's slow. Evolution takes countless iterations to develop complexity. We subvert this weakness slightly by always selecting for intelligence in our current test systems. And even though this path is slow at the moment, by the time 2045 rolls around our computers will be fast enough to zip through two trillion iterations in less than a day. (Footnote 2)
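The evolutionary approach can be sketched as a simple genetic algorithm: keep a population of candidate "brains," score them, keep the fittest, and refill the population with mutated copies of the survivors. The sketch below is a toy under our own assumptions (bitstring genomes, with bit-count as a crude stand-in for intelligence), not any research group's actual method:

```python
import random

random.seed(0)  # reproducible toy run

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 50

def fitness(genome):
    # Stand-in for "intelligence": the count of 1-bits in the genome.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in genome]

# Start from a random population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # "Always selecting for intelligence": keep only the fittest half...
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # ...and refill the population with mutated copies of the survivors.
    children = [mutate(random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
```

Even this tiny example shows why the approach is slow: thousands of fitness evaluations buy only a slightly better bitstring, and real intelligence is a vastly harder search problem. That is exactly why the iteration counts in Footnote 2 matter.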
ASIs are our ultimate goal. An ASI will be an AGI so far above humans in reasoning skill and speed that, by comparison, we would be closer to ants. This can happen in one of two ways. The first is to throw enough processing power at an AGI that, even though it reasons at the level of a single human, it can out-think the whole of humanity. This form of ASI will be substantially less effective than the others, but it requires fewer assumptions. The other way is to first create an AGI that is slightly smarter than the humans who created it, then have it recursively improve itself, each iteration being smarter and thus better able to improve itself further, until it vastly surpasses all of human thought. This form of ASI would be the closest thing to a true omniscience that our physics allows. (Footnote 3)
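The difference between the two routes is easy to see with some toy arithmetic. With a fixed-quality mind, capability grows only as fast as the hardware; with recursive self-improvement, each cycle also improves the improver itself, giving super-exponential growth. The growth rates below are purely illustrative assumptions:

```python
# Route 1: fixed-quality mind on faster hardware (plain exponential growth).
hardware_only = 1.0            # 1.0 = human-level capability, by assumption
for _ in range(50):
    hardware_only *= 1.1       # 10% more raw capability per cycle

# Route 2: recursive self-improvement (super-exponential growth):
# each cycle the AI gets better AND gets better at getting better.
recursive = 1.0
gain = 1.1
for _ in range(50):
    recursive *= gain
    gain *= 1.01               # the improver itself improves each cycle

# After 50 cycles, the recursive route dwarfs the hardware-only route.
```

Under these made-up numbers the hardware-only route reaches roughly a hundred times human level, while the recursive route reaches tens of millions of times human level, which is the intuition behind calling the second form "the closest thing to a true omniscience."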
The Law of Accelerating Returns
During the last century, humanity's rate of information generation and technological advancement has increased exponentially. Many of you will have heard of Moore's Law: the observation that our computational power doubles roughly every 18 months. So far we have operated at a level slightly above Moore's Law. Because of this accelerating return, we are very likely to have personal computers with more computational power than all humans put together by 2045. (Footnote 3) This creates a form of inevitability for AI: if even a novice programmer can brute-force the problem until an AGI emerges, then AGIs are guaranteed to happen sometime in the future.
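As a quick sanity check on the arithmetic behind that claim, assume power doubles every 18 months (Moore's Law as stated above) and, as an assumption of ours, take 2015 as the starting point:

```python
# Doublings of computing power between an assumed 2015 baseline and 2045.
start_year, end_year = 2015, 2045
months = (end_year - start_year) * 12   # 360 months
doublings = months / 18                 # 20 doublings in 30 years
growth = 2 ** doublings                 # 2**20 = 1,048,576x the baseline power
```

Twenty doublings means machines roughly a million times more powerful than today's, and the document's claim is that this is enough headroom for even brute-force searches to stumble onto an AGI.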
The True Pascal's Wager
Pascal's Wager is a famous thought experiment about whether to believe in God. It boils down to four options: you believe in God and He exists; you believe in God and He does not exist; you don't believe in God and He exists; or you don't believe in God and He doesn't exist. Pascal argues that the logical choice is to believe in God, because in every possible outcome believing is equal to or better than not believing. There are several issues with this argument, the biggest of which is choosing which god to believe in out of hundreds. However, the creation of an ASI produces a purer form of Pascal's Wager: if we make an ASI, either it shares our values and beliefs and springs the world into utopia, or it doesn't and we almost certainly go extinct. If we never make one, we get nothing.
Choosing what to believe
The obvious choice, of course, is to make sure our ASI has our moral system. But which values would you give an AI? The most popular answer tends to amount to "maximize human happiness." However, think about what an AI, a fundamentally non-human creation, might actually do to maximize human happiness: convert all organic matter into tiny human-DNA-based dopamine factories. All intelligence would be removed, everything that humans want and value would be eradicated, and the world would be covered in a happiness goop. Of course nobody wants all life on Earth converted into tiny dopamine sensors, but an AI with the goal of maximizing happiness might choose exactly that. The best solution is to create an AI that somehow understands what we want and creates it for us, while being careful not to build one that destroys us all first.
Our Game
Our game will try to raise awareness of the need to keep an Unfriendly ASI from developing. It will start on a world already ravaged by an ASI, and it will focus on the futility of fighting an ASI, even with the help of a different but younger ASI. Because of this, the game will be very difficult. We are also making the enemies incredibly smart compared to standard game AI: they will adapt their defenses over time to weaken your attacks. We will also pepper hints about potential pitfalls in creating an ASI throughout the game.
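The adaptive-defense idea could be prototyped with something as simple as the sketch below. All names and numbers (the class, attack types, damage values) are hypothetical placeholders, not the game's actual code: the enemy counts how often each attack type has been used against it and scales its resistance accordingly, so a repeated tactic weakens over time while a fresh one hits at full strength.

```python
from collections import Counter

class AdaptiveEnemy:
    """Toy enemy that hardens against attack types it has seen before."""

    def __init__(self, base_damage=10.0):
        self.base_damage = base_damage
        self.seen = Counter()          # times each attack type was used

    def take_attack(self, attack_type):
        self.seen[attack_type] += 1
        # Each repeat of the same attack halves, thirds, ... its damage.
        return self.base_damage / self.seen[attack_type]

enemy = AdaptiveEnemy()
first = enemy.take_attack("laser")   # 10.0: never seen, full damage
second = enemy.take_attack("laser")  # 5.0: the enemy has adapted
fresh = enemy.take_attack("emp")     # 10.0: a new tactic lands at full strength
```

This forces the player to keep varying tactics, which is the feeling of futility against an adapting intelligence that the game is trying to convey.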
Footnotes
1. http://www.zdnet.com/article/scientists-build-biggest-artificial-brain-of-all-time-16-billion-neurons-as-smart-as-a-cat/ This is a supercomputer emulating as many neurons as a cat. It has been shown to learn new behaviors and can be taught basic concepts.
2. http://www.kurzweilai.net/global-futures-2045-ray-kurzweil-immortality-by-2045. By 2045 the abilities of computers will advance to the point of changing the world in an unprecedented way. (What's more, each computer will be able to process more information than the entirety of the human race can.)
3. Ray Kurzweil, The Singularity Is Near, pp. 135–136. Penguin Group, 2005. Discusses the likelihood and power of an AI.