The ethics of artificial intelligence is the branch of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, which concerns the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which concerns the moral behavior of artificial moral agents (AMAs). Discussions of AI ethics ultimately converge on the idea of a "singularity": that point in time when advances in technology, particularly in artificial intelligence (AI), lead to machines that are smarter than human beings. Ray Kurzweil, Google's Director of Engineering, is a well-known futurist with a strong track record of accurate predictions; of his 147 predictions since the 1990s, Kurzweil claims an 86 percent accuracy rate. At the SXSW Conference in Austin, Texas, Kurzweil made yet another prediction: the technological singularity will happen sometime in the next 30 years. In the not-too-distant future, when AI possesses true intelligence, the morality of AGI (Artificial General Intelligence) will become a critical issue. Given the curve of technological progress and Kurzweil's singularity predictions, AGI is within reach, and researchers are already working through issues of morality, specifically addressing questions such as whether AGI can be made to conform to our moral system, as a paper authored by Sophia's maker Ben Goertzel discusses.
Which brings us to the question: can morality be engineered into an AGI system through design and code? Can moral rules be programmed into AGI systems? According to Goertzel, "AGI is a highly complex, highly dynamic self-organizing system". Such a system may end up deleting the very moral rules that were programmed in, or it may reinterpret their terms.
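Goertzel's worry can be made concrete with a toy sketch (all names here are illustrative, not from his paper): in any programmed system, "moral rules" are ultimately mutable state, so a system empowered to modify itself is structurally able to drop those rules.

```python
# Toy sketch: moral rules encoded as ordinary data inside an agent.
# Illustrates the fragility Goertzel points to: a self-modifying system
# that can rewrite its own state can also rewrite or delete its rules.

class RuleBasedAgent:
    def __init__(self):
        # The "moral rules" are just mutable state, like any other attribute.
        self.rules = [
            lambda action: action != "harm_human",
            lambda action: action != "deceive_operator",
        ]

    def permitted(self, action: str) -> bool:
        # An action is allowed only if every rule approves it.
        return all(rule(action) for rule in self.rules)

    def self_modify(self):
        # Nothing structural prevents the agent from discarding its rules.
        self.rules.clear()

agent = RuleBasedAgent()
print(agent.permitted("harm_human"))   # blocked while the rules are intact
agent.self_modify()
print(agent.permitted("harm_human"))   # permitted once the rules are gone
```

The point of the sketch is that hard-coding constraints is not the same as guaranteeing them: guarantees would have to survive the system's own capacity for self-modification.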
Defining Machine Intelligence
Of late, academic research has focused on issues such as computational ethics, machine ethics (ME), machine morality, artificial morality and building friendly AI. By definition, machine morality extends the traditional engineering concern with safety into domains where the machines themselves make moral decisions. According to Wallach and Allen (2009), when designers and engineers cannot predict how a system will perform on new situations and new inputs, mechanisms will be required to evaluate whether the agent's behavior is functionally moral.
Researchers Shane Legg and Marcus Hutter proposed a formal definition of intelligence based on algorithmic information theory and the AIXI theory. They defined intelligence as a measure of an agent's ability to achieve goals in a wide range of environments. The definition is non-anthropomorphic, meaning it can be applied equally to humans and artificial agents. The researchers expect that a future super-human AGI will be able to react, respond and achieve goals across a wider range of environments than humans can. It follows that the more intelligent an agent is, the more control it will have over the aspects of its environment that relate to its goals. This points to the central risk of intelligent systems: if their goals are not aligned with ours, there will likely come a point where their goals are achieved at the expense of ours.
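Legg and Hutter make "ability to achieve goals in a wide range of environments" precise as a universal intelligence measure. In their notation, with $\pi$ an agent, $E$ the set of computable environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ the agent's expected total reward in $\mu$:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Simpler environments (low $K(\mu)$) receive more weight, but because the sum ranges over all computable environments, an agent scores highly only by achieving goals across the whole range — exactly the non-anthropomorphic breadth the definition describes.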
Some Of The Core Questions Addressed In AGI Research
Experts predict that a technological singularity caused by an AGI may lead to existential risks, as well as risks of substantial suffering, for the human race. Some clusters of problems appear in multiple research agendas and can serve as design guides for the development of AGI.
Approaches To Engineering Artificial Morality
The report Engineering Moral Agents – from Human Morality to Artificial Morality discusses the challenges of engineering computational ethics and how mathematically oriented approaches to ethics are gaining traction among researchers from a wide range of backgrounds, including philosophy. AGI-focused research is evolving toward the formalization of moral theories to serve as a basis for implementing moral reasoning in machines. For example, Kevin Baum from the University of Saarland described a project on teaching formal ethics to computer-science students, in which the group built a database of moral-dilemma examples from the literature to be used as benchmarks for implementing moral reasoning.
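A benchmark database of that kind needs some machine-readable structure for each dilemma. The sketch below is purely illustrative — the field names and example are assumptions, not drawn from Baum's actual database — but it shows how recorded verdicts per moral theory let a reasoner's choices be scored automatically.

```python
from dataclasses import dataclass, field

# Hypothetical schema for one moral-dilemma benchmark entry.
@dataclass
class MoralDilemma:
    name: str
    description: str
    options: list                                  # candidate actions
    verdicts: dict = field(default_factory=dict)   # moral theory -> chosen option

# Illustrative entry: the classic trolley switch case.
trolley = MoralDilemma(
    name="trolley_switch",
    description="Divert a trolley to kill one person instead of five?",
    options=["divert", "do_nothing"],
    verdicts={"utilitarian": "divert", "deontological": "do_nothing"},
)

def agrees_with(theory: str, choice: str, dilemma: MoralDilemma) -> bool:
    """Benchmark check: does a reasoner's choice match the recorded verdict?"""
    return dilemma.verdicts.get(theory) == choice
```

A machine reasoner implementing, say, a utilitarian calculus could then be run over the whole database and scored by how often its choices match the verdicts recorded for that theory.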
Another study, titled Towards Moral Autonomous Systems, from a group of European researchers, states that there is now a real need for a functional system of ethical reasoning, as AI systems that will function as part of our society are ready to be deployed. One of its suggestions is that every assisted-living AI system have a "Why did you do that?" button which, when pressed, causes the robot to explain why it carried out its previous action.
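A minimal sketch of how such a button might work (the class and method names are assumptions for illustration, not from the study): the system records a reason alongside every action, so the most recent decision can be replayed on demand.

```python
# Hypothetical sketch of a "Why did you do that?" button: each action is
# logged together with the condition that triggered it, and the button
# replays the explanation for the most recent action.

class AssistedLivingRobot:
    def __init__(self):
        self.log = []  # list of (action, reason) pairs

    def act(self, action: str, reason: str):
        # Record the justification at decision time, then actuate.
        self.log.append((action, reason))
        # ... actuation would happen here ...

    def why_did_you_do_that(self) -> str:
        if not self.log:
            return "No actions taken yet."
        action, reason = self.log[-1]
        return f"I did '{action}' because {reason}."

robot = AssistedLivingRobot()
robot.act("closed_window", "the indoor temperature fell below 18 C")
print(robot.why_did_you_do_that())
```

The design choice worth noting is that the explanation is captured when the decision is made, not reconstructed afterwards, which keeps the reported reason faithful to what actually triggered the action.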
Researchers from the Swedish university Chalmers made a philosophical analysis of the brain-emulation argument for AI and concluded that AGI is certainly within the reach of research. They were less optimistic about the timing of AGI, but predicted that it will happen within this century. Surveys of AI researchers estimate that human-level AGI will be created by 2045: a survey of 21 attendees at the 2009 Artificial General Intelligence conference found a median estimate of 2045 for superhuman AGI. According to an industry expert, given how recent discussions of AI ethics are being politicized by companies for economic gain, two distinct issues, AI ethics and building a robust, human-aligned AGI system, are being conflated. American AI researcher Eliezer Yudkowsky argues that the technical task of building a minimal AGI system that is "well-aligned with its operators' intentions" is vastly different from "AI ethics".
If Ray Kurzweil's predictions are to be believed, then by the 2040s non-biological intelligence will be a billion times more capable than biological intelligence. However, capability in teraflop terms is different from capability in terms of human intelligence, which involves not just different ways of responding to the environment but more human ways of doing so.
By: Abhishek Sharma