Wed, May 07, 2025
The discussions surrounding artificial intelligence (AI) often narrow down to the tangible gains of its specific applications, neglecting the broader structural implications.
This tunnel vision, akin to focusing on the trees while missing the forest, fails to address critical issues: AI's potential to perpetuate socioeconomic and geopolitical imbalances, to reshape labour demand, wages, and skills gaps, and to consolidate warped power dynamics in the hands of a few firms and even fewer nations, particularly in regions outside the technological epicentres.
All too often, the positioning of AI as an indispensable tool for socioeconomic advancement oversimplifies its role, introduces new risks, including an uncritical awe of AI, and overlooks critical nuances.
While AI certainly offers promise in addressing longstanding societal challenges, portraying it as a panacea for all developmental obstacles exaggerates its potential and sidesteps the need for robust institutional frameworks and safeguards.
True socioeconomic progress requires more than technological advancements; it necessitates the cultivation of human intelligence combined with native and traditional knowledge, supportive governance structures, ethical guidelines, and regulatory mechanisms to ensure equitable distribution of benefits and mitigate potential harms.
Therefore, while AI can undoubtedly contribute to progress, it must be integrated into a broader framework that prioritises holistic development and addresses the systemic issues underlying disparities.
The complexity of AI models warrants scrutiny; while intricate black-box models exist, simpler logic-based approaches may offer comparable efficacy. But then, will the industry work with what’s simple?
One should urgently dispel the myth surrounding AI and accept that it is neither magic nor a replica of human intelligence. Rather, AI operates as a form of computational statistics, leveraging historical data or human-provided datasets to make probabilistic predictions. But then one must ask whether there are alternative ways of solving what AI can solve, and whether those solutions are unique to AI.
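The "computational statistics" point can be made concrete with a deliberately simple sketch. The scenario and data below are entirely hypothetical (a made-up record of past loan applicants), but the mechanism is the essence of many predictive systems: the "prediction" is a frequency read off historical data, not magic.

```python
# Hypothetical historical data: (income band, did the applicant repay?).
# Any real system would use far more features, but the principle is the same.
history = [
    ("high", True), ("high", True), ("high", False),
    ("low", True), ("low", False), ("low", False),
]

def repayment_probability(band):
    """Estimate P(repaid | income band) as a relative frequency
    over the historical records matching that band."""
    outcomes = [repaid for b, repaid in history if b == band]
    return sum(outcomes) / len(outcomes)

# The model "predicts" only what the past data already contains --
# including any bias baked into how that data was collected.
print(repayment_probability("high"))  # 2 of 3 repaid
print(repayment_probability("low"))   # 1 of 3 repaid
```

Note what this also illustrates: if the historical data is skewed, the "prediction" faithfully reproduces the skew, which is precisely why the governance and oversight discussed above matter.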
Moreover, the development of AI must integrate domain knowledge and sectoral expertise from the fields where it is applied. Only then can AI applications align with real-world contexts and address specific challenges effectively.
Correcting Policy Bias
Amidst the fervour for all things grand and big, starting with the love of big data, policy focused on change, and on measuring its impact, has been overshadowed. To rectify this imbalance, we must cultivate a focus on domain expertise as the cornerstone of AI model construction.
First and foremost, AI organisations must recognise the value of domain expertise and actively seek input from professionals who possess deep knowledge in relevant fields. This could involve forming interdisciplinary teams where domain experts work alongside data scientists to co-create AI solutions.
Furthermore, organisations should invest in training programs that equip data scientists with domain-specific knowledge, to develop more contextually relevant and effective AI models.
In addition to incorporating domain knowledge during the initial development phase, organisations should also prioritise ongoing collaboration and feedback loops between domain experts and data scientists. This ensures that AI models remain aligned with evolving real-world contexts and can adapt to changing circumstances or new insights.
Moreover, organisations should implement robust evaluation mechanisms to assess the effectiveness and impact of AI solutions in real-world settings. This involves not only evaluating technical performance metrics but also considering broader socioeconomic factors and ethical implications. This is where any regulatory intent will need to strengthen its processes.
Many variables will shape evolving AI regulations. Initially, we might need to make do with guardrails; as AI develops further, the regulatory framework can evolve with it.
After all, expecting policymakers to be gatekeepers of everything AI is a worrisome idea, simply because regulating AI demands technical and technological understanding. A harsh truth is that global policymakers are still unable to regulate big tech, including social media, despite all the noise and hype.
Big Tech's Dominance
One of the most critical challenges that we will face in the AI landscape is the alarming concentration of power within a select few technology conglomerates.
Although not a new phenomenon, this issue has been further exacerbated by recent advancements in large language models and generative AI. These developments have bolstered the dominance of tech giants, amplifying concerns regarding their unchecked influence and control over AI technologies.
Compounding the problem is the narrative propagated by these companies, which often revolves around exaggerating existential risks associated with AI. This fear-mongering not only diverts attention away from the very real and pressing issues caused by existing AI applications but also perpetuates the notion that these companies are indispensable in mitigating AI-related risks.
Measures such as antitrust enforcement, transparency requirements, and promoting competition in the AI market are crucial steps towards mitigating the disproportionate influence wielded by tech giants.
A handful of large technology companies wield significant control, shaping the global narrative of technological development and exerting influence over societal norms and governance structures. With unprecedented access to vast amounts of data and resources, these tech giants will amass still more power, transcending national borders.
Governments, both domestically and internationally, frequently find themselves entangled in the influence of these tech behemoths, grappling with issues of regulation, privacy, and national security in an increasingly digital world.
The close relationship between governments and tech companies, characterised by lobbying efforts, revolving-door employment, and regulatory capture, further entrenches the profound influence these corporations wield over policy decisions and regulatory frameworks.
As a result, efforts to address the concentration of power in the tech sector must navigate complex geopolitical dynamics and confront the intertwined interests of governments and influential corporations. For the winner in this AI race will have global influence in the 21st century.
(The author is a Mumbai-based corporate advisor and researcher. Views expressed are personal)