Context: Recently, the European Union (EU) set the stage for the world's first comprehensive legislation aimed at regulating the use of Artificial Intelligence (AI).
Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.
AI works by analyzing large amounts of labeled training data to find patterns and correlations.
It requires specialized hardware and software, with popular programming languages like Python, R, Java, C++, and Julia often used by developers.
AI programming focuses on cognitive skills like learning, reasoning, self-correction, and creativity to achieve specific tasks, such as generating new text, images, music, and ideas.
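The pattern-finding from labelled training data described above can be sketched, purely for illustration, as a tiny nearest-neighbour classifier. The data points and labels below are invented for the example; real AI systems use far larger datasets and more sophisticated models.

```python
# Illustrative sketch: how an AI system "finds patterns" in labelled
# training data. A minimal 1-nearest-neighbour classifier; the data
# points and labels below are hypothetical.

def nearest_neighbour(train, query):
    """Return the label of the training point closest to `query`."""
    def distance(point, other):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(point, other))
    _, best_label = min(train, key=lambda pair: distance(pair[0], query))
    return best_label

# Labelled training data: (features, label) pairs.
train = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.0), "dog"),
    ((5.1, 4.8), "dog"),
]

print(nearest_neighbour(train, (1.1, 1.0)))  # near the "cat" cluster
print(nearest_neighbour(train, (4.9, 5.1)))  # near the "dog" cluster
```

The model never stores explicit rules; it generalises from the correlations present in the labelled examples, which is the core idea behind the data-driven AI the article describes.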
Ethical Concerns: AI systems can make decisions and take actions that impact individuals and society. Establishing rules helps address ethical concerns related to the use of AI, ensuring that it aligns with human values and respects fundamental rights.
Privacy: AI often involves the processing of large amounts of data. Rules can help protect individual privacy by specifying how data should be collected, stored, and used.
Security: Rules are necessary to ensure the security of AI systems. This includes safeguarding against potential vulnerabilities and protecting against malicious uses of AI technology.
Transparency: Rules can mandate transparency in AI systems, requiring developers to disclose how their algorithms work.
Competition and Innovation: Establishing a regulatory framework provides a level playing field for businesses, preventing the abuse of market dominance and encouraging responsible innovation.
Public Safety: In cases where AI is used in critical domains such as healthcare, transportation, or public infrastructure, rules are essential to ensure the safety of individuals and the general public.
The legislation includes safeguards on the use of AI within the EU, including guardrails on its adoption by law enforcement agencies.
Empowerment of Consumers: Individuals can lodge complaints against perceived AI violations.
Restrictions on Law Enforcement Adoption: Clear boundaries on AI usage by law enforcement agencies.
Strict Limitations on AI: Strong restrictions on facial recognition technology and AI manipulation of human behaviour.
Penalties for Violations: Provision for tough penalties for companies found breaking the rules.
Limited Biometric Surveillance: Governments permitted to use real-time biometric surveillance in public areas only in cases of serious threats like terrorist attacks.
The Regulatory Framework establishes obligations for providers and users depending on the four levels of risk posed by artificial intelligence: unacceptable risk, high risk, limited risk, and minimal or no risk.
Unacceptable risk AI systems are those considered a threat to people and will be banned. They include:
Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children
Social scoring: Classifying people based on behavior, socio-economic status or personal characteristics
Real-time and remote biometric identification systems, such as facial recognition.
Exception: Governments can only use real-time biometric surveillance in public areas when there are serious threats involved, such as terrorist attacks.
AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:
AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
AI systems that must be registered in an EU database and fall into eight specific areas, including biometric identification and categorisation of natural persons, management and operation of critical infrastructure, education and vocational training, and law enforcement.
All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.
Limited risk AI systems should comply with minimal transparency requirements:
Disclosing that the content was generated by AI,
Designing the model to prevent it from generating illegal content,
Publishing summaries of copyrighted data used for training.
User Discretion: After interacting with the applications, the user can then decide whether they want to continue using it.
User awareness: Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.
The proposal allows the free use of minimal-risk AI.
This includes applications such as AI-enabled video games or spam filters.
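The four-tier, risk-based structure set out above can be illustrated as a small lookup table. The tier names follow the article's summary; the mapping of example applications to tiers is a simplification for illustration, not the legal text of the Act.

```python
# Illustrative sketch of the EU AI Act's four risk tiers and the
# obligation each carries. Example applications follow the article's
# summary; this is a simplification, not the Act's legal text.

RISK_TIERS = {
    "unacceptable": "banned outright",
    "high": "assessed before market entry and throughout the lifecycle",
    "limited": "minimal transparency obligations (e.g. disclose AI use)",
    "minimal": "free use permitted",
}

EXAMPLES = {
    "social scoring": "unacceptable",
    "real-time remote biometric identification": "unacceptable",
    "AI in medical devices": "high",
    "chatbot or deepfake generator": "limited",
    "spam filter": "minimal",
    "AI-enabled video game": "minimal",
}

def obligation(application):
    """Return an application's risk tier and the obligation it carries."""
    tier = EXAMPLES[application]
    return tier, RISK_TIERS[tier]

print(obligation("spam filter"))
print(obligation("social scoring"))
```

The point of the tiered design is that regulatory burden scales with potential harm: the same framework that bans social scoring leaves spam filters untouched.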
Stance: India does not yet have a comprehensive framework for regulating AI. However, it has shifted from not considering AI regulation at all to actively formulating regulations based on a risk-based, user-harm approach.
Digital India Framework- India is developing a comprehensive Digital India Framework that will include provisions for regulating AI. The framework aims to protect digital citizens and ensure the safe and trusted use of AI.
National AI programme- India has established a National AI Programme to promote the efficient and responsible use of AI.
National Data Governance Framework Policy- India has implemented a National Data Governance Framework Policy to govern the collection, storage, and usage of data, including data used in AI systems. This policy will help ensure the ethical and responsible handling of data in the AI ecosystem.
Draft Digital India Act- The Ministry of Electronics and Information Technology (MeitY) is framing the draft Digital India Act, which will replace the existing IT Act. The new act will have a specific chapter dedicated to emerging technologies, particularly AI, and how to regulate them to protect users from harm.
European Union- The European Union is working on the draft Artificial Intelligence Act (AI Act), a top-down approach that sets the stage for the world's first comprehensive legislation regulating AI.
United States- The White House Office of Science and Technology Policy has published a non-binding Blueprint for the Development, Use, and Deployment of Automated Systems (Blueprint for an AI Bill of Rights), listing principles to minimize potential harm from AI.
Japan- Japan’s approach to regulating AI is guided by the Society 5.0 project, aiming to address social problems with innovation.
China- China has established the “Next Generation Artificial Intelligence Development Plan” and published ethical guidelines for AI. It has also introduced specific laws related to AI applications, such as the management of algorithmic recommendations.
Universal adoption of the Bletchley Declaration- All countries should be pushed towards universal adoption of the Bletchley Declaration.
Establish comprehensive and flexible regulatory framework- The governments should develop clear guidelines and laws that address various aspects of AI, including data privacy, algorithmic transparency, accountability, and potential biases.
Foster international cooperation- Given the global nature of AI and its potential impact, collaboration among countries is essential. International standards and agreements should be developed to promote ethical practices and ensure consistency in regulation across borders. In this respect, the G7 Hiroshima AI Process (HAP) could facilitate discussions.
Encourage industry self-regulation- Companies involved in AI development should take responsibility for ensuring the ethical and responsible use of their technologies.
Invest in AI research and education- Governments, academic institutions, and industry stakeholders should allocate resources to R&D, and education in the field of AI. This will help create a well-informed workforce capable of addressing regulatory challenges and ensuring the safe and responsible deployment of AI technologies.
By: Shubham Tiwari