LATEST RESEARCH: IS ARTIFICIAL INTELLIGENCE REALLY DANGEROUS OR A THREAT TO HUMAN BEINGS?


    In the 21st century, artificial intelligence (AI) is progressing rapidly across applications ranging from ordinary computers to self-driving cars. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons. Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task. Since recent developments could make super-intelligent machines possible much sooner than initially thought, now is the time to examine what dangers artificial intelligence poses.


     In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As I. J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.

    Any powerful technology can be misused. Today, artificial intelligence is used for many good causes, including helping us make better medical diagnoses, finding new ways to cure cancer, and making our cars safer. Unfortunately, as our AI capabilities expand, we will also see AI being used for dangerous or malicious purposes. Since the technology is advancing so rapidly, it is vital that we start debating the best ways for AI to develop positively while minimizing its destructive potential.
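Good's argument above can be made concrete with a toy numerical sketch. Every number and parameter below is invented purely for illustration, not an empirical claim: the point is only that a system whose rate of improvement scales with its current capability compounds, while ordinary engineering progress adds roughly fixed increments.

```python
# Toy model of recursive self-improvement (illustrative only; all
# parameters are invented). "capability" is an abstract score. Each
# generation the system redesigns itself, and the size of the
# improvement it finds grows with its current capability, so growth
# compounds instead of adding fixed increments.

def self_improving(capability=1.0, gain_per_unit=0.1, generations=10):
    history = [capability]
    for _ in range(generations):
        capability += gain_per_unit * capability  # smarter -> bigger next step
        history.append(capability)
    return history

fixed_steps = [1.0 + 0.1 * g for g in range(11)]  # fixed-size improvements
recursive = self_improving()

for g, (a, b) in enumerate(zip(fixed_steps, recursive)):
    print(f"generation {g:2d}: fixed={a:5.2f}  recursive={b:5.2f}")
```

Under these made-up assumptions the gap between the two trajectories widens every generation, which is the qualitative shape of the "intelligence explosion" worry.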

    At its core, artificial intelligence is about building machines that can think and act intelligently, and it includes tools such as Google's search algorithms and the systems that make self-driving cars possible. While most current applications impact humankind positively, any powerful tool can be wielded for harmful purposes when it falls into the wrong hands. Today, we have achieved applied AI: AI that performs a narrow task such as facial recognition, natural language processing, or internet search. Ultimately, experts in the field are working towards artificial general intelligence, where systems can handle any task that intelligent humans could perform, and most likely beat us at each of them.

   Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. 

    However, most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. While we haven't achieved super-intelligent machines yet, the legal, political, societal, financial, and regulatory issues they raise are so complex and wide-reaching that we need to examine them now, so we are prepared to operate safely alongside such systems when the time comes. And beyond preparing for that future, artificial intelligence can already pose dangers in its current form. Let's take a look at some key AI-related risks.

Autonomous weapons: One way AI can pose risks is when it is programmed to do something dangerous, as is the case with autonomous weapons programmed to kill. It is even plausible that the nuclear arms race will be replaced with a global autonomous weapons race. In the wrong hands, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply "turn off," so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.

Social manipulation: Social media, through its AI-powered algorithms, is very effective at targeted marketing. These algorithms know who we are, know what we like, and are incredibly good at surmising what we think. Investigations are still underway into Cambridge Analytica and others associated with the firm, who used data from 50 million Facebook users to try to sway the outcome of the 2016 U.S. presidential election and the U.K.'s Brexit referendum; if the accusations are correct, the episode illustrates AI's power for social manipulation. By identifying individuals through algorithms and personal data, AI can target them with whatever information its operators like, in whatever format each person will find most convincing, whether fact or fiction.
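As a minimal sketch of the targeting mechanism described above (the traits, message variants, and weights below are all hypothetical inventions for illustration), a system only needs a per-user profile and a predicted-resonance score to pick the framing each person is most likely to find convincing:

```python
# Hypothetical sketch of profile-based message targeting. The traits and
# weights are invented; real systems learn them from behavioral data.

MESSAGE_WEIGHTS = {
    "fearful framing":    {"anxious": 0.9, "analytical": 0.2},
    "statistics framing": {"anxious": 0.3, "analytical": 0.8},
}

def pick_message(profile):
    """Return the variant with the highest predicted resonance for this user."""
    def score(variant):
        return sum(weight * profile.get(trait, 0.0)
                   for trait, weight in MESSAGE_WEIGHTS[variant].items())
    return max(MESSAGE_WEIGHTS, key=score)

print(pick_message({"anxious": 0.8, "analytical": 0.1}))  # fearful framing
print(pick_message({"anxious": 0.1, "analytical": 0.9}))  # statistics framing
```

The same ten lines deliver a different message to each person, which is what makes algorithmic persuasion so much harder to audit than a single broadcast advertisement.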

Invasion of privacy and social grading: It is now possible to track and analyze an individual's every move online, as well as when they are going about their daily business. Cameras are nearly everywhere, and facial recognition algorithms know who you are. In fact, this is the type of information that is expected to power China's social credit system, which is intended to give every one of its 1.4 billion citizens a personal score based on how they behave: whether they jaywalk, whether they smoke in non-smoking areas, and how much time they spend playing video games. When Big Brother is watching you and then making decisions based on that intel, it's not only an invasion of privacy; it can quickly turn into social oppression.
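To see how little machinery such grading requires once the surveillance data exists, here is a toy sketch; the behaviors, weights, and threshold are entirely invented and do not describe any real system:

```python
# Toy "social grading" sketch. Events, weights, and the cutoff are
# hypothetical illustrations of the mechanism, not any real system.

PENALTIES = {"jaywalking": -5, "smoking_in_nonsmoking_area": -10}

def social_score(observed_events, gaming_hours_per_week, base=100):
    score = base + sum(PENALTIES.get(event, 0) for event in observed_events)
    score -= 2 * max(0, gaming_hours_per_week - 10)  # penalize "excess" gaming
    return score

score = social_score(
    ["jaywalking", "jaywalking", "smoking_in_nonsmoking_area"],
    gaming_hours_per_week=15,
)
print(score, "-> restricted services" if score < 85 else "-> full access")
```

The hard part is the surveillance pipeline, not the scoring; once every behavior is logged, a few arbitrary weights are all that stand between observation and punishment.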

Misalignment between our goals and the machines: Part of what humans value in AI-powered machines is their efficiency and effectiveness. But if we aren't clear about the goals we set for AI machines, it could be dangerous when a machine's goals are not the same as ours. For example, a command to "get me to the airport as quickly as possible" might have dire consequences. Without specifying that the rules of the road must be respected because we value human life, a machine could quite effectively accomplish its goal of getting you to the airport as quickly as possible, doing literally what you asked but leaving behind a trail of accidents.
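A sketch of the airport example (the planner, routes, and costs below are invented for illustration) shows that the danger lies in the objective we write down, not in the optimizer, which does exactly what it is told in both cases:

```python
# Toy route planner illustrating goal misspecification. The routes and
# numbers are invented. The optimizer behaves "correctly" in both cases;
# only the objective function changes.

routes = [
    {"name": "highway at legal speed", "minutes": 35, "expected_accidents": 0.0},
    {"name": "run the red lights",     "minutes": 22, "expected_accidents": 0.4},
]

# Literal objective: "as quickly as possible".
naive = min(routes, key=lambda r: r["minutes"])

# Aligned objective: travel time plus a heavy penalty encoding how much
# we value human life and the rules of the road.
aligned = min(routes, key=lambda r: r["minutes"] + 1000 * r["expected_accidents"])

print("naive choice:  ", naive["name"])    # picks the reckless route
print("aligned choice:", aligned["name"])  # picks the safe route
```

Everything we care about but leave out of the objective is, from the machine's point of view, free to sacrifice; that is the essence of the alignment problem.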

Discrimination: Since machines can collect, track, and analyze so much about you, it's very possible for those machines to use that information against you. It's not hard to imagine an insurance company telling you that you're not insurable based on the number of times you were caught on camera talking on your phone. An employer might withhold a job offer based on your "social credit score."
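A final minimal sketch (the data fields and cutoffs are invented) of how surveillance data can harden into automated gatekeeping rules like the ones imagined above:

```python
# Hypothetical automated-decision rules; the fields and thresholds
# are invented for illustration.

def insurable(person):
    # Deny coverage after too many camera sightings of phone use.
    return person["phone_use_sightings"] < 3

def gets_interview(person):
    return person["social_credit_score"] >= 90

applicant = {"phone_use_sightings": 4, "social_credit_score": 82}
print(insurable(applicant), gets_interview(applicant))  # False False
```

Once such rules are buried inside a scoring system, the person affected may never learn which observation cost them the policy or the job.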


