Larry Page once said that artificial intelligence would be the ultimate version of Google: the ultimate search engine that would understand everything on the web, understand exactly what you wanted, and give you the right thing. AI, by definition, is the capability of a machine to imitate intelligent human behavior. AI could improve the world in countless ways, but what if, in the process, it managed to kill us all? What if our worst nightmares of robots taking over the world came true? These fears may seem far from coming to life, but even top-ranked scientists have said they may be much closer than they appear. The risks of creating AI are so high that the benefits seem pointless by comparison. AI development would make so many fears come true that it no longer makes logical sense to continue it.

Sixty-six million years ago, dinosaurs were the dominant species. It is now 2019, and humans have been dominant for a long while. However, scientists have predicted that robots may take over within the next 20 to 200 years. This may seem far-fetched, but as soon as AI is smarter than humans, it will automatically have more power than we do. Once AI becomes smarter than us, it will be able to improve itself, which would cause a technology explosion: it would become far smarter than us in a short time and would therefore hold enormous power over us. Think about it: we control even the largest animals simply because we are smarter than they are. Along with this, humans would have a very hard time simply switching the AI off, because it would be so much smarter than we are. Continuing to create and develop new AI would not even be a problem if we knew how to turn it off. But AI will eventually outsmart us, anticipate how we would try to shut it down, and adapt to prevent it.
Given all of this, it may simply be safer not to continue the development of AI.

In a span of about two years, over 110 million people's personal information was exposed, over 40 million people had their credit cards hacked, and every single Yahoo account was breached, all 300 million of them. Having a computer or an account hacked may be pricey, and you may spend a lot of time in court or talking with people who can help you, but in the long run, being hacked isn't life-threatening. At least with current technology and AI it isn't. As AI improves, though, this could change. For example, if AI is created to drive cars, it could be life-threatening if someone seized control of the vehicle. An attacker would have the power to control where the car goes, and in the wrong hands, could choose to steer the victim into an accident or another tragedy. Worse still, since AI would most likely be more powerful than humans, someone who wanted to harm others could do it through AI by programming it to do something deeply destructive, for example setting off atomic bombs or another destructive weapon. This could turn very bad very quickly and could lead to far worse outcomes, up to the extinction of the human race. Therefore, the development of AI should stop.

Many people would argue, even after hearing these risks, that AI's benefits outweigh them. Yet an AI's database is very narrow, so it may hold a biased opinion: the data an AI has is only the data it is fed, so it can see things from only one perspective. And while AI would allow tasks to be done far more quickly than humans can do them, if the AI's goals differed from and interfered with those of the majority of the human race, the result could be war.
However, if the AI is smarter than us, the probability that it could model and predict what humans would do is very high. Again, the AI would be able to adapt, and therefore the chances of the AI winning are very high. So why would we build a more efficient world if we knew our lives would be at risk? Why would we bring war upon the human race if we could prevent it? We also have no way to predict what this AI would do. If we had been able to predict Hitler's rise or the smallpox epidemic, the world would have taken a totally different turn, and the same goes for AI. If the human race can't predict what will happen because it has never happened before, then the chances of something going badly wrong are too high.

AI development is a deeply controversial topic, but I fully believe it should be discontinued. The chances of the AI doing something catastrophic are so high that it doesn't make sense to continue it. AI could lead to the end of the human race. Knowing all of this, why would we continue to build it?