TECH INERTIA & BACKWARDNESS: How AI-powered Digital Divide risks a Quirkier “Constitutional Marginalisation”

- Mr. Tanishq Soni
B.A.LL.B (Hons), 
Barkatullah Vishwavidhyalaya, Bhopal

The Prologue

The challenges posed by Artificial Intelligence (AI) have already surpassed the fathomable limits of traditional legal frameworks. AI now governs aspects of our lives that we are not yet competent to remedy when things go wrong. For the Indian jurisdiction, on which this article focuses, the implications are serious, particularly as the age of predictive justice draws near. The problem is two-fold: first, the developments and challenges posed by AI are outpacing the known avenues of law; second, the development is unprecedented, and though still in its early years, it will only grow harder over time to draw a tighter knot of regulation around it.

Despite being the seventh-largest investment market in AI, India has yet even to recognise a white paper on AI governance, a striking paradox for the country with the world's largest population. In light of these concerns, two questions become pertinent: can this delayed deliberation give rise to a digital divide and, consequently, a newer form of marginalisation? If so, could this intersect with the preconceived notion of constitutional marginalisation as recognised by the judiciary and the Constitution itself?

Definitions: AI-Powered Digital Divide, Marginalisation, Marginalised Communities

In order to adequately explore the constitutional implications of AI-driven marginalisation, it becomes essential first to revisit and refine our understanding of the concepts at play — namely, marginalisation, marginalised communities, and the emerging notion of a digital divide powered by AI.

The term “marginalisation” has no settled definition; its interpretations are wide-ranging and have shifted with the needs of the times. Over time, the meanings of ‘marginalisation’ and ‘marginalised communities’ have expanded through a series of landmark recognitions. Yet, despite this evolution, it remains doubtful whether the existing definitions can withstand technological advancement and its consequences for society. The answer is, arguably, no.

Since the current constitutional and statutory understandings are deeply rooted in social and cultural realities, it is nearly impossible to stretch the constitutionally developed interpretations to cover ‘digitally marginalised segments and groups’. This raises a pressing question: how should 'marginalisation' be reinterpreted in the age of AI? The Oxford Reference Dictionary of Media and Communication offers a definition that moves beyond the traditional notion -

“Marginalisation is a spatial metaphor for a process of social exclusion in which individuals or groups are relegated to the fringes of a society, being denied economic, political, and/or symbolic power and pushed towards being ‘outsiders’.”

However, the emergence of AI-powered technologies introduces a new, more opaque axis of exclusion — the digital divide. It refers to the disparity between those with access to and proficiency in digital technologies — now increasingly AI-driven — and those without. This divide is no longer merely about internet access; it concerns the ability to participate meaningfully in an AI-transformed society. In light of this, it is worth asking: is AI powering the next wave of the digital divide, one based on technological availability and accessibility? The answer is arguably yes, and indeed anticipated. The present stage of AI technology is theoretically classed as ‘weak AI’; it is only a matter of time before this driving factor emerges as the leading actor, because no framework yet conclusively determines the accountability of AI systems.

Why is it concerning?

Having delineated the contours of marginalisation and digital exclusion, it is pertinent to assess the Indian State's current readiness, or the lack thereof, to address these challenges in light of its technological and regulatory landscape.

Oxford Insights’ Government AI Readiness Index 2024 places India at a mediocre 46th position, despite the country attracting the seventh-largest AI investments. What is even more concerning is that the Index ranks India lowest in terms of maturity and infrastructure. The report's findings, combined with the absence of legislative intent to prudently regulate the influx of global AI giants into the Indian economy, have fostered an environment of passive, unregulated data consumption. Whether through generative AI or other general-purpose AI deployed for business, the lack of basic awareness among the population already puts India on the receiving end of the wave. Even more troubling, this not only sidelines the concerns of the Global South narrative but also creates an intra-state conflict between those who are technically equipped to use AI and those who are not.

The Focal Point of Discussion - What will this marginalisation look like?

Although forecasting legal remedies for evolving challenges may seem premature, India's constitutional ethos demands a proactive imagination, especially when fundamental guarantees of equality, dignity, and non-discrimination could be imperiled. It is against this backdrop that the next inquiry must be situated — can emergent AI-induced exclusions be reconciled within the existing constitutional jurisprudence, or do they necessitate new interpretative innovations?

The form of marginalisation at the focal point of this discussion closely resembles socio-economic marginalisation, whose interpretation has evolved over the years under the aegis of the apex judiciary and various public policies.

Concerns about “digital apartheid” and the digital divide have echoed emphatically in the corridors of the judiciary ever since the advent of the COVID-19 pandemic. For instance, the Supreme Court in 2021 highlighted that digitalising the vaccination initiative via CoWin would disproportionately affect marginalised sections owing to the digital divide, particularly in rural areas. In another instance, a court directed private unaided and government schools in Delhi to provide adequate gadgets and internet packages to students from Economically Weaker Sections (EWS) and Disadvantaged Groups (DG) to ensure equal access to online classes during the COVID-19 lockdown. Justice Surya Kant recently expressed concern that the unregulated potential of AI could put professions such as driving at risk.

These examples offer a probable glimpse of which sections could be marginalised. Importantly, this marginalisation does not operate uniformly. The phenomenon can bifurcate into two categories: first, communities directly disadvantaged by AI-driven digitalisation, and second, segments indirectly prejudiced by persisting infrastructural inadequacies and exclusionary design.

The first bracket would likely include professions facing rapid automation and redundancy, including low-skilled jobs; technologically illiterate populations, who face immediate exclusion from AI-enabled services such as digital banking, AI-based healthcare, and online education; and persons with disabilities, for whom existing AI technologies may initially lack accessibility features. The second bracket may include rural and remote populations, where the lack of internet infrastructure and digital literacy indirectly sidelines communities from reaping the benefits of the AI wave, and the EWS, where affordability constraints prevent access to AI-facilitated opportunities (e.g., e-governance, telemedicine, digital finance).

It is essential to recognise the deepening AI-powered digital divide not merely as a technological phenomenon but as a profound socio-political and economic disruption, once its more opaque forms sweep across India and build a systemic narrative of a newer kind of marginalisation. This leads to the much-anticipated question: would it rise to an extent where affirmative action under Articles 14, 15, 16, and 21 of the Indian Constitution would be required? Such situations could include, for instance, discrimination via tech accessibility, structural exclusion through technology, automation affecting equal opportunity, or essential services becoming inaccessible. One possible answer is that predicting and cementing remedies for technicalities not yet recognised resembles a Socratic exercise. Yet it is equally important to ponder whether emergent AI-induced marginalisation can be conceptually accommodated within the existing constitutional framework of equality and non-discrimination.

The Epilogue - Reflections & Way Forward

The scope of this article, as it would seem, was not to provide an overview of the persisting issues of AI bias (as seen in the US COMPAS system's racial bias against the Black community), but rather to analyse and introduce arguments in favour of a new field of technical nuance: accessibility to AI and its constitutional implications. India's demography presents a striking set of contradictions and paradoxes, and catering to its dynamic constitutional needs is challenging. It was essential to introduce how the next wave of the digital divide could alter the existing spirit of Indian constitutionalism. Much emphasis could have been placed on affirmative action, but the issue is in its infancy and must first be recognised before further inferences can be drawn. Lastly, India must not merely catch up with the AI revolution; it should constitutionally anticipate it. As we stand at the cusp of an AI-driven society, the true test of Indian constitutionalism will lie not just in adapting to change, but in ensuring that no one is left behind by it.

 
