The Narendra Modi government is planning to enact the Digital India Act. In fact, this is not a new move.
In March 2023, Rajeev Chandrasekhar, India’s Minister of State for Electronics and Information Technology (MeitY), announced the proposed Digital India Act (DIA).
It was proposed to replace the Information Technology Act (IT Act), which was enacted in 2000.
The proposal remained in limbo for over a year due to the general elections. However, the emerging threat of Artificial Intelligence (AI) and its misuse on digital platforms, resulting in deepfakes, has made such a law all the more necessary in India today.
Every conscientious person knows that a deepfake is an artificial image or video (a series of images) generated by a special kind of machine learning called “deep” learning (hence the name). It is widely understood to be a dangerous mechanism for spreading falsehood, and thus a threat to democracy.
Now, the Narendra Modi government is gearing up to enact the DIA, it is learnt. The proposal, first discussed in March 2023, frames transparency largely in terms of algorithmic transparency.
Divij Joshi, a Mozilla Tech Policy Fellow who worked on trustworthy AI, created the AI Observatory, which documents cases of automated decision-making systems (ADMS) in India and their effects.
A quick look at the algorithms deployed in the country, and their impact, validates the need for algorithmic accountability and transparency. However, the domain of online safety extends beyond the physical space in which algorithms deny food to the poor.
The case of Indonesia, where image generation tools such as Midjourney played a role before the elections, serves as a precedent for what generative AI can do.
Even during the recently concluded Lok Sabha polls in the country, several YouTubers made anti-government videos based on controversial and/or “fake” news, claims a senior officer of the IT Ministry.
In fact, the current laws in India are inadequate for this rapidly developing field, IT experts say.
Hence, when the DIA proceeds to the discussion and debate stage, it must holistically account for algorithms that have an impact offline (such as those governing the provision of benefits, facial recognition, etc.) as well as those affecting online spaces (fabricated videos and images, recommendation systems, etc.).
The term ‘algorithm’ must not be given a blanket definition, but must be situated within the different scenarios and contexts in which algorithms will be deployed. If algorithmic systems are to be transparent, precise definitions with examples of applicability should be established.
While the proposal mentions ‘periodic risk assessments by digital entities’ and includes ‘AI-based ad-targeting, content moderation’ systems, these should be framed within the context of safety, given the potential of such systems to foster violent and harmful content.
For example, the UK’s Online Safety Act considers algorithmic risk assessments in the context of children and other groups to protect them from unsafe content.
Once the DIA comes into force, deepfakes, and the YouTubers who thrive on spreading messages without any basis in fact, will no longer remain beyond the reach of the strong arm of the law in India.