Context: Under the guidance of the Advisory Group (headed by the Principal Scientific Advisor), a Subcommittee on ‘AI Governance and Guidelines Development’ was formed to provide actionable recommendations for AI governance in India.
Artificial intelligence (AI) governance refers to the processes, standards and guardrails that help ensure AI systems and tools are safe and ethical, thereby promoting fairness and respect for human rights.
Deepfakes and Malicious Content: Legal frameworks exist, but enforcement gaps hinder the removal of harmful AI-generated content.
Cybersecurity: Current laws apply to AI-related cybercrimes, but they need strengthening to address evolving threats.
Intellectual Property Rights (IPR): AI's use of copyrighted data raises infringement and liability concerns, with existing laws not fully addressing AI-generated content.
AI-led Bias and Discrimination: AI can reinforce biases, making it harder to detect and address discrimination despite existing protections.
Life Cycle Approach: To operationalise these principles, policymakers must use a life cycle approach, examining AI systems at the distinct stages of development, deployment, and diffusion, since different risks arise at each stage.
There should be an “ecosystem view” of AI actors.
The report proposes a tech-enabled digital governance system.
Ethical Concerns: AI systems can make decisions and take actions that impact individuals and society. Establishing rules helps address ethical concerns related to the use of AI, ensuring that it aligns with human values and respects fundamental rights.
Privacy: AI often involves the processing of large amounts of data. Rules can help protect individual privacy by specifying how data should be collected, stored, and used.
Security: This includes safeguarding against potential vulnerabilities and protecting against malicious uses of AI technology.
Transparency: Rules can mandate transparency in AI systems, requiring developers to disclose how their algorithms work.
Competition and Innovation: Establishing a regulatory framework provides a level playing field for businesses, preventing the abuse of market dominance and encouraging responsible innovation.
Public Safety: In cases where AI is used in critical domains such as healthcare, transportation, or public infrastructure, rules are essential to ensure the safety of individuals and the general public.
Digital Personal Data Protection Act, 2023: The government enacted the Digital Personal Data Protection Act in 2023, which can address some of the privacy concerns arising from AI platforms.
Global Partnership on Artificial Intelligence: India is a member of the GPAI. The 2023 GPAI Summit was held in New Delhi, where GPAI experts presented their work on responsible AI, data governance, and the future of work, innovation, and commercialization.
The National Strategy for Artificial Intelligence #AIForAll, by NITI Aayog: It featured AI research and development guidelines focused on healthcare, agriculture, education, “smart” cities and infrastructure, and smart mobility and transportation.
Principles for Responsible AI: In February 2021, the NITI Aayog released Principles for Responsible AI, an approach paper that explores the various ethical considerations of deploying AI solutions in India.
Rapid Evolution of AI: The field is constantly evolving, making it difficult to write future-proof regulations.
Balancing Innovation and Safety: Striking a balance between fostering innovation and ensuring safety is a challenge.
International Cooperation: Effective AI regulation requires international cooperation to avoid a fragmented landscape.
Defining AI: There’s no universally agreed-upon definition of AI, making it difficult to regulate effectively.
Establish an Inter-Ministerial AI Coordination Committee: To coordinate AI governance across various ministries and regulators. Include representatives from MeitY, NITI Aayog, RBI, SEBI, and other sectoral regulators.
Create a Technical Secretariat: To serve as a technical advisory body for the AI Coordination Committee.
Leverage Techno-Legal Measures: Explore technological solutions like watermarking and content provenance to combat deepfakes.
Set Up an AI Incident Database: To document real-world AI-related risks and harms; Encourage voluntary reporting from both public and private sectors.
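The techno-legal measures recommended above can be made concrete with a small sketch. The following is a hypothetical, illustrative example (not any official standard such as C2PA, and not from the report itself) of a content-provenance record: a publisher binds a creator identity to the exact bytes of a media file by hashing and signing them, so any later tampering (for example, a deepfake derived from the original) fails verification.

```python
# Illustrative content-provenance sketch using only the Python standard library.
# All names (SIGNING_KEY, make_provenance_record, etc.) are assumptions for
# demonstration, not part of any real provenance standard.
import hashlib
import hmac
import json

# Assumption: the publisher holds a secret signing key.
SIGNING_KEY = b"demo-key-held-by-publisher"

def make_provenance_record(media_bytes: bytes, creator: str) -> dict:
    """Bind a creator identity to the exact bytes of a media file."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "creator": creator}, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Check the signature, then re-derive the hash to detect tampering."""
    expected_sig = hmac.new(
        SIGNING_KEY, record["payload"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected_sig, record["signature"]):
        return False  # payload was altered or signed by someone else
    claimed = json.loads(record["payload"])
    return claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()

original = b"...original image bytes..."
record = make_provenance_record(original, creator="news-agency-X")
print(verify_provenance(original, record))            # True: content untampered
print(verify_provenance(b"altered bytes", record))    # False: content was changed
```

In practice, provenance schemes use public-key signatures rather than a shared HMAC key so that anyone can verify without holding the secret, but the core idea (a signed cryptographic binding between creator and content) is the same.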
IndiaAI Mission: Approved in 2024 with a budget of INR 10,300 crore.
Aim: To create a robust AI ecosystem through seven key pillars, including AI Compute Capacity, FutureSkill, Safe & Trusted AI, and Startup Financing.
Focuses on democratizing AI access, improving data quality, and ensuring ethical AI development.
Artificial Intelligence (AI) is here to stay and has the capability to fundamentally change the way we work. Because it can be a powerful force for good, for harm, or both, AI needs to be regulated.
By acknowledging the potential dangers of AI and proactively taking steps to mitigate them, we can ensure that this transformative technology serves humanity and contributes to a safer, more equitable future.
By: Shubham Tiwari