From Elon Musk to Stephen Hawking, many people have feared the growth of AI to the point where it renders humans obsolete. Others, such as Mark Zuckerberg, the founder of Facebook, dismiss these threats as fiction. Still, that is not to say that AI doesn't come with its fair share of disadvantages.
AI usage and maintenance come at a very high cost because of the complexity involved. The software needs to be upgraded regularly to keep up with demand, and the more complex it becomes, the better the hardware and expertise it requires.
Programming an AI is hard. Programming an AI correctly, accounting for every function and factor, is a very time-consuming process. Because the field of AI is still in its most primitive stages, it takes a great deal of time to program an artificial intelligence to do even a small amount of work.
An AI is meant to operate at capacities nearing human intellect. The human brain achieves this by firing neurons across different regions to get a job done. Our current technology cannot replicate that, which is why we need large amounts of equipment for even the smallest tasks: the more complex the technology, the more space it takes up.
There are many religious and moral concerns that stand in the way of our attempts to create intelligence, often summed up as “man was not meant to meddle with intelligence.”
We may be able to create intelligence, but is it possible to instill emotions and logic into it? What about a moral compass and beliefs? And even if we can, should we?
If AI succeeds, privacy automatically goes out the window, almost by definition: the AI would be the one deciding whether a given piece of information should be private or not. And if you own the AI, will it obey you, or the developer, or make the decision on its own? If it claims that every condition is met, will you be able to trust it?
The biggest question is this: if an AI is developed successfully, who controls it? By definition, the AI is its own master. It can decide whether or not to listen to you. If it is to listen to someone, whom will it listen to? How will it decide whom and what to obey? Above all, how will it distinguish right from wrong? And imagine what such technology would be capable of in the wrong hands.
If AI is developed correctly, it is inevitable that it will be able to build technology better than itself. At a certain point, it would leave humanity behind in intelligence. The possibilities for failure would then be endless: it might take over human jobs, governments, and education. And what if it decides it no longer needs humans at all?
We have already seen AI communicating in a language of its own making, when two of Google's AI language programs started talking to each other in an invented language. What if a more developed AI communicates in ways we cannot understand? And what if it is not hostile to begin with, but is provoked by our fear? AI has the potential to take over the world and leave no role for humans at all.
We are in the most primitive stages of AI development, yet the potential risks are visible even now. The practical, ethical, and existential risks associated with developing AI are undeniable. Unless we can answer these questions and find solutions to them, the development of AI should be approached with caution.