Wearing Down of Sovereignty
The notion of Artificial General Intelligence (AGI) has sparked intense debate among experts, with some predicting an AI Singularity that could pose existential threats to humanity. However, a more pressing concern that is often overlooked is the centralized AI monopoly, or “Big AI,” and its implications for sovereignty, agency, and the social contract.
If AI ends up as yet another Big Tech oligarchy, similar to cloud platforms, it will mark the beginning of the end of sovereignty for nation-states around the world. Cloud platforms were infrastructure: static, inert, fungible plumbing. The risk there was primarily being “de-platformed” and having to find an alternative provider, and backup services were usually available for those willing to pay the price.
With Big AI, however, states face a de facto abdication to the Big AI “models”: culturally, linguistically, in social mores, and in the social fabric itself. Over time, this will result in the relinquishment of sovereignty over economic, political, and military outcomes.
Without AI sovereignty, there is no digital sovereignty; and without AI sovereignty, there is no national integrity. Every nation needs to locally build, host, and govern its own ‘sovereign AI’ to promote and defend its interests on the global stage.
Disruption of the Social Contract
Big AI treats all content on the internet as ‘free’ for it to use as it pleases in training its models, without compensating the individuals or organizations that created that content in the first place. Big AI also likes to claim that the jobs now disappearing are jobs that should never have existed to begin with.
This is oligarch-level hubris. ‘Give them UBI’ is the digital version of ‘let them eat cake.’ UBI is no silver bullet; those who are unfamiliar with the term ‘surrogate activity’ should reflect on the ‘economic assistance’ handed out during the recent lockdowns, and its side-effects (e.g. ‘meme stock mania’) on both the traditional and the crypto markets, as an exemplar of what this entails in just one sector. The road to hell is paved with good intentions, as they say, and ‘handouts’ are one of those.
But why are we even talking about UBI? Because generative AI has the potential to fundamentally reshape knowledge work and service work, much as the Industrial Revolution disrupted blue-collar work. The difference today is that the knowledge-worker and service-worker economies are what power the economic machine in the developed world.
In the knowledge-worker sector in particular, there is going to be a Great Divide between those who exploit AI capabilities to further their skills and those who abdicate to AI capabilities and eventually risk being furloughed. Newer, junior-level workers are most at risk, as middle layers of management discover that generative AI lets them bypass less-experienced employees and fast-track activities.
There are definitely no easy answers here, but pointing to UBI in isolation is disingenuous.
Erosion of Agency
In the Hegelian tradition, human agency is a collective dynamic arising from aggregated human behavior: the choices a human being makes, and his or her ability to influence his or her own life.
Agency plays a critical role in the health and well-being of human society. The exploration of and search for agency is a recurring theme in young people’s experiences across the world. Young people with a greater sense of individual agency deal more effectively with the challenges of adulthood; they are more resilient and resourceful in forging ahead, with a sense of purpose and stronger self-esteem.
As more young people rely on Big AI, from having it do their homework (and work), to using it as a ‘friend’ in lieu of social relationships, to trusting it as a confidant and even mentor, there is an ongoing erosion of human agency. The promise of always-on, hassle-free convenience, in relationships as much as in getting tasks done, is extraordinarily tempting, and it does not bode well for the health and wellness of the social fabric.
Thus far, the tools that we have crafted have supported and enhanced agency; Big AI, however, supplants and displaces human agency.
Conclusion
Big AI sets the stage for a modern-day, digital Leviathan: Hobbes versus Locke redux. Will the social contract endure? Will human agency become a relic of the past? Will local sovereignty be seen as archaic?
The implications of Big AI are far-reaching and profound. It is essential that we recognize the potential risks and challenges associated with this technology and take proactive steps to mitigate them. We must reject the centralized, oligarchic model of Big AI and instead work towards building a future where AI continues to be a tool, not a force that shapes us.
FAQs
Q: What is Big AI?
A: Big AI refers to the centralized, oligarchic model of Artificial Intelligence that is emerging, where a few powerful entities control the development, deployment, and governance of AI technologies.
Q: What are the implications of Big AI for sovereignty?
A: Big AI has the potential to erode national sovereignty, as states become increasingly dependent on centralized AI models and lose control over their own economic, political, and military outcomes.
Q: How does Big AI impact agency?
A: Big AI has the potential to erode human agency, as individuals become increasingly reliant on AI for decision-making and problem-solving, and lose the ability to make choices and influence their own lives.
Q: What can be done to mitigate the risks associated with Big AI?
A: It is essential that we reject the centralized, oligarchic model of Big AI and instead work towards building a future where AI continues to be a tool, not a force that shapes us. This can be achieved through the development of decentralized, open-source AI technologies and the promotion of local sovereignty and agency.