Context
In February, the Kerala police inducted a robot for police work. The same month, Chennai got its second robot-themed restaurant, where robots not only serve as waiters but also interact with customers in English and Tamil. In Ahmedabad, in December 2018, a cardiologist performed the world’s first in-human telerobotic coronary intervention on a patient nearly 32 km away. All these examples symbolise the arrival of Artificial Intelligence (AI) in our everyday lives.
Need for regulation of AI
If AI is not regulated properly, it is bound to have unmanageable implications. Imagine, for instance, that the electricity supply suddenly fails while a robot is performing surgery and access to a doctor is lost.
All countries, including India, need to be legally prepared to face such disruptive technology.
Challenges of AI
Predicting and analysing legal issues and their solutions, however, is not that simple.
Existential Questions
What if an AI-based driverless car gets into an accident that causes harm to humans or damages property?
Who should the courts hold liable for the same?
Can AI be thought to have knowingly or carelessly caused bodily injury to another?
Can a robot act as a witness, or as a tool for committing various crimes?
Scenario in other countries – In the U.S., there is considerable discussion about the regulation of AI. Germany has come up with ethical rules for autonomous vehicles, stipulating that human life should always have priority over property or animal life. China, Japan and Korea are following Germany in developing laws on self-driven cars.
Initiative in India –
In India, NITI Aayog released a policy paper, ‘National Strategy for Artificial Intelligence’, in June 2018, which considered the importance of AI in different sectors.
The Budget 2019 also proposed to launch a national programme on AI.
No comprehensive legislation to regulate this growing industry has been formulated in the country till date.
Legal personality of AI
Definition of AI – First, we need a legal definition of AI.
Establishing legal personality – Given the importance of intention in India’s criminal law jurisprudence, it is also essential to establish the legal personality of AI (which would give it a bundle of rights and obligations) and to determine whether any sort of intention can be attributed to it.
Ensuring liability – Since AI is considered inanimate, a strict liability scheme that holds the producer or manufacturer of the product liable for harm, regardless of fault, might be an approach to consider.
Privacy Rights – Since privacy is a fundamental right, certain rules to regulate the usage of data possessed by an AI entity should be framed as part of the Personal Data Protection Bill, 2018.
Conclusion
Reducing traffic accidents – Traffic accidents lead to about 400 deaths a day in India, 90% of which are caused by preventable human error. Autonomous vehicles that rely on AI can reduce this toll significantly through smart warnings and preventive and defensive techniques.
Availability of doctors – Patients sometimes die due to the non-availability of specialised doctors. AI can bridge the distance between patients and doctors.
But as futurist Gray Scott says, “The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?”
By: VISHAL GOYAL