Addressing the Challenges Posed by AI in India

Introduction

Over the last few decades, Artificial Intelligence (“AI”) has been advancing at an unprecedented pace globally. Not long ago, the proliferation of AI to its present extent would have seemed far-fetched; today, it is our reality. AI continues to develop and pervade almost every crucial aspect of our lives, leaving a significant impact not only on India but on the world at large. AI technologies may prove to be both a boon and a bane for India, holding great potential for development as well as tremendous peril. They have been used to augment various aspects of life, such as healthcare delivery, education, and other public services. A critical concern is the widening gap between humans and AI, which heightens the risk of infringement of human rights. While AI poses serious challenges to humankind, it can be a harbinger of great change if used in the right manner. In this piece, the author delves into the challenges posed by AI and the existing regulations governing AI in India, and suggests how the Indian regulatory framework may deal with them.

What are the current challenges?

The use of AI in India has triggered controversy regarding its implications for users. In the Indian context, it is imperative that these systems are designed to reflect the nation’s diverse population, so as to prevent them from becoming discriminatory and causing stigmatisation. India’s complex social dynamics, demography, and cultural and historical diversity add to the difficulty of developing AI models that cater to the needs of such diverse masses. AI-enabled technologies are evolving rapidly, often without developers reckoning with or analysing the harms or biases they may inadvertently be feeding into their models, resulting in discriminatory outcomes and violations of fundamental rights. Similarly, there is little transparency or accountability associated with AI development and deployment in India.

Moreover, instances of abuse of AI technology by private actors and the government have created a looming threat of breaches of individual privacy and infringement of civil liberties. The use of AI in facial recognition technology, surveillance, monitoring and the tracking of individuals raises serious security concerns. With the growth of AI technology, personal data is collected indiscriminately and may be manipulated at the whim of the data controller, creating high-risk situations. At the same time, given India’s particular challenges and circumstances, it will be worth seeing how any future Indian legislation on Artificial Intelligence designates high-risk AI systems, since no underlying criteria currently exist.

In addition, AI-driven automation is likely to displace jobs involving routine tasks and may cause unemployment to rise at an unmatched speed. This, in turn, may worsen economic insecurity, especially in those segments of society most likely to be marginalised. AI has already spread across industries such as science, finance, healthcare and education. While it has undoubtedly led to revolutionary changes, it is necessary to examine its actual efficiency and, more importantly, its impact on people’s livelihoods. Placing significant reliance on technology without fully comprehending its effects could further exacerbate the digital divide.

Social manipulation through AI algorithms is another danger. It has fuelled misinformation, the spread of rumours, and the creation of unusual biases in the minds of users. Deepfakes, voice manipulation, AI-generated audio-visual content and similar tools have further exacerbated the challenge. AI has made it possible for anyone to manipulate the masses through social media within minutes, causing apprehension among consumers of news as to its genuineness and veracity. Apart from countless other challenges, these are some of the pressing issues that must be prioritised to improve AI governance.

Existing regulations governing AI in India

A proactive and comprehensive approach to AI governance and regulation will help address these challenges effectively. The country has taken some steps in this direction, such as the establishment of a Ministry of Electronics and Information Technology (“MeitY”) committee to develop a national strategy for AI. However, additional efforts are warranted to ensure that the benefits of AI are equitably distributed and its harms mitigated. India is yet to enact a single statute that comprehensively governs Artificial Intelligence. Various frameworks, however, have been proposed to guide the country’s forthcoming AI regulation.

The National Strategy for Artificial Intelligence, introduced in June 2018 by NITI Aayog, a policy think tank of the Government of India, sets out key focus areas and policy recommendations and aims to lay a foundation for the future regulation of AI in India. MeitY has also set up a Task Force on Artificial Intelligence to provide an overall road map guiding India’s AI journey in the coming years. The Principles for Responsible AI, released in February 2021, outline India’s approach to creating an ethical and responsible AI ecosystem across various sectors, and the Operationalizing Principles for Responsible AI, published in August 2021, highlight the necessity of regulatory and policy interventions, capacity building, and the promotion of ethics in AI design. The “Responsible AI #AIforAll” framework outlines principles for the ethical use of AI, while the “Compliance Guidelines for Government Departments” provide a compliance template for public sector AI implementations.

Additionally, MeitY has issued two advisories:

1. The first advisory, dated 1 March 2024, followed a previous notice of 26 December 2023 concerning the spread of misinformation through deepfakes. It required intermediaries and platforms to adhere to their due diligence obligations under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 and to obtain explicit government permission before deploying untested or unreliable AI models, including large language models and generative AI systems, on the “Indian Internet”. Intermediaries were also required to submit a status report to MeitY within 15 days. Separately, taking into account the threats posed by AI-enabled surveillance and infringements of privacy rights, the Indian government has enacted the Digital Personal Data Protection Act, 2023, which seeks to establish an effective data-management framework and give individuals enhanced control over their information and data.

2. The second advisory, dated 15 March 2024, superseded the first, removing the requirement to obtain prior government permission before deploying AI systems and the obligation on platforms to submit status reports. It stipulated that platforms and intermediaries must ensure that the AI models they make available to users do not facilitate any unlawful content under Rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, among other laws, and it applies to almost all players in the Indian AI ecosystem. It also stated that untested or unreliable AI models accessible to the public must be accompanied by a clear label indicating their potential fallibility. The advisory retained the prohibition on the use of AI for electoral manipulation and broadened the role of platforms in preventing such illegality. It further addressed platforms’ obligations towards users with regard to terms of service and the deletion of any information found to violate those terms. Finally, it outlined the consequences of non-compliance: prosecution of intermediaries, platforms and users under the Information Technology Act, 2000 as well as other criminal laws such as the Bharatiya Nyaya Sanhita, 2023.

Such actions are a clear indication of India’s growing awareness of the need to regulate AI and to address both the threats and the opportunities it poses. However, this strategy has been criticised as haphazard because the government lacks a clear and cohesive regulatory framework. A number of challenges have been noted regarding the application of AI, for example the low accountability of AI systems that have been developed and deployed, as well as the possibility of their misuse by corporations and the state.

Regulation is also closely tied to the adoption of ethical standards for AI-assisted decision-making, one of the key objectives for governments, which in turn should improve the development of truly intelligent AI systems. Since there is rarely a one-size-fits-all solution, regulators must avoid over-reliance on any single approach and learn from past experience. Other factors that need to be considered include the manner in which algorithms are used, respect for individuals’ right to privacy, and the ability to make decisions and ensure accountability.

Proposed solutions for the AI crisis

It is important to note that various harms are associated with AI applications such as facial recognition, surveillance systems and autonomous weapons. These harms carry a significant impact on fundamental rights, such as the right to privacy. To combat these challenges, it is imperative to establish clear laws and regulations that take into account the uniqueness of India’s AI sector. To mitigate the risks that such technologies pose to fundamental rights and democratic principles, clear guidelines and oversight mechanisms would have to be established by governments and the judiciary.

Furthermore, the capacity of the government and the judiciary to comprehend, evaluate and regulate AI systems should be enhanced. This will require increased investment by the government in infrastructure, as well as the expertise needed to monitor and regulate the AI ecosystem effectively. It might also involve creating an independent body for supervising AI and providing funding and other resources for research, testing and auditing of AI systems. In addition, regulatory measures should include provisions preventing the misuse of AI by both private entities and the government, particularly in relation to criminal investigation and the targeting of vulnerable populations. It is also worth specifying which AI practices should be prohibited; inspiration can be drawn from Article 5 of the EU’s AI Act, which prohibits harmful practices such as the use of subliminal techniques beyond a person’s consciousness.

To minimise misuse and mitigate other negative consequences associated with AI solutions, the government can work towards developing infrastructure and human resources. This could involve setting up an effective organisational structure, human-resource practices that meet the needs of workers and customers, and provision for worker training. Establishing a new independent body to oversee the use of AI, along with financial support for research, development, testing, monitoring and auditing, would be a much-needed step. Regulation of the industry is also needed to address the risk of misuse of AI technologies by private companies and by the government, for example through mass surveillance, predictive policing and the exploitation of vulnerable groups. The Digital Personal Data Protection Act, 2023 is a commendable step, but a more comprehensive legal framework is required: individual rights need to be protected through the development of privacy regulation and data-management measures.

For India to transform into a hub for manufacturing and innovation, it must focus on training a skilled workforce to manage, use and control the advancement of AI responsibly. Most importantly, it must be ensured that any technology we unleash into society is ethical. This warrants a stronger emphasis on education, training and capacity building to develop the expertise needed to deal with the issues associated with AI, which must be instrumental to India’s strategy.

Conclusion

While India has demonstrated appreciable advancement in the regulation of AI, the extant policies formulated by the government have been criticised as anachronistic and fleeting. To effectively mitigate the harms that accompany the widespread use and incorporation of AI, a more sophisticated and purposive legal framework appears necessary. This requires the formulation of rigorous guidelines and accountability mechanisms, alongside the establishment of independent oversight bodies and empowered regulatory agencies. These agencies must be equipped with the understanding, qualifications and competencies needed to evaluate and regulate the diverse applications of AI effectively. A key focus should be the development of a robust ethics framework that can prevent algorithmic bias, protect privacy, and ensure transparency in the design, deployment and use of AI. While India has taken preliminary measures in this direction, more concrete and meticulous regulatory action is needed. A pragmatic and comprehensive framework would enable India to better tap the benefits of AI while providing a cushion against future adversities.

This article is a part of the DNLU-SLJ (Guest Post) series.
