"Elon Musk's assertion that AI is the 'biggest existential threat to humanity' has certainly caught the public's attention. However, it's worth noting that this claim, while dramatic, overlooks a far more immediate and tangible threat: climate change. For decades, scientists have been sounding the alarm about the devastating impacts of global warming, from rising sea levels to extreme weather events. Yet, these warnings often fall on deaf ears, particularly among those who are insulated by wealth and privilege from the worst effects of environmental degradation.
This oversight is not just a simple omission. It speaks to a deeper issue: a disconnect from the lived experiences of the majority of humanity. It's easy to speculate about hypothetical AI threats when one is not grappling with the very real and present dangers posed by a warming planet.
While it's not my intention to single out Musk or any other individual, it's important to challenge this narrative. It's not about intellectual laziness or cowardice, but rather about perspective. We need to broaden our view, to consider not just the potential future threats, but also the very real challenges we face here and now. Only then can we hope to address the true existential threats to humanity.
The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things.
In the world of technology, few terms have been as widely used and misunderstood as 'Artificial Intelligence'. Over the years, the term 'AI' has been co-opted by salespeople and entrepreneurs seeking to attract investment and hype for their ventures. This has led to a situation where the public discourse around AI is often muddled and misleading.
Firstly, it's crucial to differentiate between algorithms and AI. Algorithms are a fundamental part of software engineering and have been around for decades. They are sets of instructions that tell a computer how to perform a specific task. AI, on the other hand, refers to systems that can perform tasks that normally require human intelligence, such as understanding natural language or recognizing patterns.
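To make the distinction concrete, here is a minimal, hypothetical sketch in Python. The first function is an algorithm in the classic sense: a fixed set of instructions written by a person. The second 'learns' crude keyword weights from labeled examples, which is the simplest possible caricature of the pattern-recognition systems that get marketed as AI. All of the names and data are invented for illustration.

```python
from collections import Counter

# An algorithm: a fixed, hand-written set of instructions.
# Given the same input, it always follows the same steps.
def is_even(n: int) -> bool:
    return n % 2 == 0

# A toy "learned" model: its behaviour comes from data, not hand-written rules.
# It counts how often each word appears in spam vs. non-spam examples and
# classifies new messages by which way their words lean.
def train_spam_scores(examples: list[tuple[str, bool]]) -> Counter:
    scores = Counter()
    for text, is_spam in examples:
        for word in text.lower().split():
            scores[word] += 1 if is_spam else -1
    return scores

def classify(text: str, scores: Counter) -> bool:
    return sum(scores[w] for w in text.lower().split()) > 0

# Hypothetical training data, purely for illustration.
examples = [
    ("win a free prize now", True),
    ("meeting moved to friday", False),
    ("free money click here", True),
    ("lunch on friday?", False),
]
scores = train_spam_scores(examples)
print(classify("free prize inside", scores))  # True: pattern learned from examples
print(classify("see you friday", scores))     # False
```

Calling the second function 'AI' is generous, but it captures the relevant difference: the behaviour of the first is fully specified by its author, while the behaviour of the second depends on the examples it was given.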
However, when people talk about AI, they often conflate it with the concept of Artificial General Intelligence (AGI), a hypothetical AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of a human. The reality is, AGI does not currently exist, and no one knows for certain how or when it might be achieved.
As for the AI that does exist today, such as large language models: these are impressive feats of engineering, but they are far from being existential threats. These models are trained to perform specific tasks and, while they can generate remarkably human-like text, they do not possess agency.
"When we discuss Artificial General Intelligence (AGI), we often define it in terms of 'human-level intelligence.' This comparison is not accidental but a reflection of our anthropocentric perspective. Historically, metaphors around AI and AGI have served as a convenient way to discuss human problems and mistakes without directly offending or challenging the individual. It's a way to bypass the 'only human' defense and appeal to ignorance that often arises in these discussions.
However, this comparison also implies that any mistakes an AGI might make would mirror those made by humans. If we were to create a set of rules to prevent AGI from posing a threat, why not apply those same rules to humans? After all, isn't the threat of a rogue AGI similar to the threat posed by a reckless leader, like Trump? Both scenarios involve the redirection of inarticulate anger and dissatisfaction with the status quo towards a convenient scapegoat, whether it's AI or immigrants.
Even if we could perfectly regulate AI, it wouldn't resolve the problems we face today. These are problems inherent to our current systems and status quo.
If we genuinely believed in the power of public discourse, why haven't we effectively regulated corporations? Why haven't we resolved the climate crisis that's been looming over us for the past twenty years? The answer lies not in our inability to regulate technology, but in our collective failure to address the systemic issues plaguing our societies. It's a matter of misplaced priorities and ineptitude, not a lack of technological control.
The discourse around the potential threats of AI often mirrors the rhetoric far-right factions use when discussing minorities: complaints about jobs being taken, profits being hoarded, and, above all, political power. This is not a coincidence but a reflection of the deeply ingrained tribalism that permeates our societies. These fears and anxieties are not new; they have been present for centuries, often manifesting as racial discrimination or xenophobia.
It's ironic to hear these arguments from those who once preached the gospel of abundance. After a decade of economic mismanagement, marked by ill-advised investments in hyped technologies like Bitcoin, these same voices now warn of the dangers of AI. The resulting economic instability and inflation are not the fault of AI, but of poor financial decisions and a lack of effective regulation, combined with elite capture and regulatory capture by those same elites.
The scapegoating of AI is a convenient deflection, shifting the blame from human failings to an impersonal technology. It's a continuation of the same tribalistic tendencies that have long divided our societies, merely dressed up in a new, technologically advanced guise.
It's important to recognize this for what it is: a manifestation of the same tribalism that has fueled discrimination and conflict throughout history.
In conclusion, the fears surrounding AI are not just about technology. They are a reflection of our societal anxieties, our tribalistic tendencies, and our collective failure to effectively manage our economies and regulate our industries.
The misunderstandings about AI do not stop there. The use of AI as a vehicle for critiquing culture, and the lack of understanding of what intelligence is, let alone what it can do, are often encapsulated in what has been called the 'paperclip problem'.
The 'paperclip problem' is a popular thought experiment in discussions about AI, positing a scenario where an AI, tasked with making paperclips, ends up converting the entire planet into paperclips in its single-minded pursuit of its goal. This scenario, while intriguing, is fundamentally flawed in its assumptions not just about AI but about intelligence as a phenomenon.
The 'paperclip problem' suggests an AI that is simultaneously super intelligent and super naive. It imagines an AI capable of complex manipulations and technical mastery, yet unable to understand the broader implications of its actions. This dichotomy is nonsensical. It assumes that an AI could possess vast intellectual capabilities, yet lack the basic common sense to avoid self-destruction or the annihilation of life on Earth.
This argument is a reflection of the cognitive biases of those who propose it. It's a projection of human failings, specifically those of individuals with below-average intelligence who are prone to misunderstanding and making mistakes. It's a product of an anti-intellectual culture that struggles to comprehend the nature of intelligence itself.
Current AI systems can indeed make non-intuitive and sometimes bizarre choices in pursuit of their goals. However, these are not the actions of a super-intelligent entity, but the mistakes of a system that lacks understanding. To assume that a super-intelligent AI would continue to make such errors is to fundamentally misunderstand what intelligence is.
The 'paperclip problem' and similar arguments are forms of concern trolling, appealing to those who rely on others for their opinions rather than forming their own. It's a form of populism that preys on ignorance and fear.
As J.B. Pritzker noted in a recent commencement speech at Northwestern University, those who react with cruelty towards others who are different have 'failed the first test of an advanced society.' They have not only failed those they target with their ignorance, but they have also failed themselves.
So when someone says, “Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient,” they are projecting themselves onto AI. More to the point, how is that different from people who are amoral, such as the many fossil fuel corporations that are fronts for dictatorships?
Will regulating AI stop them?
Notwithstanding the Butlerian Jihad approach popularized by modern Luddites, sometimes referred to as “the Woke Mob” (which is in many ways a reference to organized crime getting involved in politics), or the organizations supported by foreign states, which typically rail against “the West”, I will list some more examples of how AI is just a foil for talking about problems in society that humans cause, with the blame then shifted onto AI.
Misinformation and Content Moderation: Some critics demand that AI companies fully disclose how their algorithms moderate content or combat misinformation. They often blame AI for the spread of fake news or harmful content, without acknowledging the role of human actors in creating and spreading such content. This is akin to blaming a tool for the actions of the person using it, and it is very much about tech companies that do not want to hire enough moderators to keep up with their exponential growth, combined with the concerted efforts of bad actors. Humans are by far still more creative than AI when it comes to making up bullshit. Remember that AI was trained on human data and is mostly just remixing things it has already seen; it is not coming up with new and original arguments or propaganda. There just happens to be a vast amount of propaganda, far more than most individuals will ever encounter, so it's not surprising that an AI, in remixing internet data, may say things many people have not seen before.
Bias and Discrimination: There are calls for AI companies to prove that their algorithms are not biased or discriminatory. While it's true that AI can perpetuate human biases if not properly designed and trained, this demand often overlooks the fact that bias and discrimination are frequently created on purpose by cherry-picking the data fed to the AI; this is especially prevalent in policing (a minimal sketch of how cherry-picked data skews a model appears after this list of demands). This highlights the necessity of the next two demands.
Full Disclosure of Training Data: Some people demand that AI companies disclose the datasets used to train their models. This could potentially involve revealing sensitive or proprietary information; in other industries, it would be akin to a company revealing its customer data or other proprietary research.
Third-Party Audits of AI Systems: While audits are common in many industries, the level of auditing some propose for AI systems goes beyond the norm. This could include detailed examinations of an AI's decision-making process, its algorithms, and its training data.
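To illustrate the mechanism behind deliberately cherry-picked data, here is a minimal, hypothetical sketch in Python. The 'model' is nothing more than frequency counting and is not malicious in any way; but because the records it is fed over-represent one neighbourhood, its output inherits that skew and then reinforces it. The numbers and names are invented for illustration.

```python
from collections import Counter

# Hypothetical incident records. Assume the underlying behaviour is identical
# in both neighbourhoods, but patrols (and therefore recorded incidents) were
# concentrated in the north, so the data is effectively cherry-picked.
recorded_incidents = ["north"] * 90 + ["south"] * 10

def train_risk_model(records: list[str]) -> dict[str, float]:
    # The "model" simply learns incident frequency per neighbourhood.
    counts = Counter(records)
    total = sum(counts.values())
    return {place: n / total for place, n in counts.items()}

risk = train_risk_model(recorded_incidents)
print(risk)  # {'north': 0.9, 'south': 0.1}

# A dispatcher acting on this model sends even more patrols north, which
# produces even more recorded incidents there: the bias feeds itself.
```

Nothing in that code is 'biased' on its own; the discrimination is entirely a property of what was chosen as training data, which is exactly why disclosure of that data matters.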
The only thing that makes these demands different from what people normally suggest is that we are suggesting they apply to all software used by the government, or by corporations that operate as public utilities.
The great advantage of this conversation about AI is not just that these issues should be resolved for AI, but that the same scrutiny should extend to most of the systems governments use to make decisions with significant, direct, and acute effects on people's lives, as well as to systems used, for example, to issue loans. AI has unlocked the possibility of addressing the tribalism that often becomes entrenched in government as well as in corporations.
Surveillance and Privacy: Some people fear that AI is being used to spy on them and demand that companies disclose all the data they collect and how it's used. While privacy is a legitimate concern, this fear often overlooks the fact that surveillance capitalism existed for years before large language models did; social media predates AI. Once again, AI is being used as a foil to talk about what the “tech elites” have been doing for over a decade, and more specifically, the reasons why In-Q-Tel was what made the difference between MySpace and Facebook. Surveillance was always a key part of the business plans, the deal with the devil that technocrats made, and as with the previous examples, the blame is shifted onto AI to obscure their own culpability. These are serious issues that need to be addressed, but AI regulation cannot resolve issues that were created before AI even existed.
Job Displacement: There's a fear that AI will take over all jobs, leading to mass unemployment. Replace AI with IM (immigrants) and you have what the Butlerian Jihad of the Woke mob is about, and it's ironic that it is often Middle Eastern countries that are spearheading open-source developments in AI; it seems that elements of the West are finally going backwards while countries like the UAE are blazing ahead. Insults aside, what most people somehow don't realize is that most jobs require a wide range of skills that seem simple but are not (easy to notice if you try to have someone do intellectual labor in a field they are not skilled in), combined with rapidly updated long-term memory, both things AI does not have. AI, as it is now and for the foreseeable future, is just a slightly wider version of narrow AI, which is to say AI can do narrow tasks very well but generally can't chain together long lists of tasks, or tasks that require diverse skills. This enables AI to augment human intelligence, but it is nowhere near capable of replacing whole jobs, especially jobs that depend on fine motor skills, which is the bulk of low-paying jobs. So while poor people are afraid of AI taking their jobs, AI would only take jobs that do not require physical manipulation of the environment. The point is, AI may take over certain tasks, but not whole jobs.
AI Autonomy: Some people fear that AI will become autonomous and make decisions without human oversight. This is entirely reasonable and, more to the point, is really just a way to ask for oversight of many other processes, automated or not, that govern people's lives, such as loan applications, job applications, and various forms of permitting and licensing. The way AI works is based on requests. AIs are very good at replying, but they lack any form of initiative. They can only respond to things; they do not initiate actions on their own. Even in situations where people are worried about “AI automatically making decisions”, they seem to overlook that such a thing only occurs in response to changes in the environment (a minimal sketch of this follows below). It's not as if they are sitting around idly, get bored, and then spontaneously decide to take an action; even in automated decision-making scenarios, these are still responses, not initiations from the AI.
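To make the 'requests, not initiative' point concrete, here is a minimal, hypothetical sketch of how even an 'automated' decision system is wired: the model is just a function that gets called when something outside of it changes, and nothing in the loop originates with the model itself. The event shapes and the decide() rule are invented for illustration.

```python
import queue

def decide(event: dict) -> str:
    # Stand-in for a model: it only ever maps an input to an output.
    # It has no loop of its own, no goals, and no way to run unprompted.
    return "approve" if event.get("score", 0) >= 0.7 else "refer_to_human"

events = queue.Queue()

# Everything in the queue comes from outside the model: a user clicking
# submit, a sensor reading, a scheduled job that a person configured.
events.put({"type": "loan_application", "score": 0.82})
events.put({"type": "loan_application", "score": 0.41})

while not events.empty():
    event = events.get()
    # The "automated decision" is still a response to an external change.
    print(event["type"], "->", decide(event))
```

The thing worth regulating in this sketch is not the decide() function but who gets to put events in the queue and what happens with the answers, which is an oversight question, not an autonomy question.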
Now that I am done ranting about the vague and generally unfounded fears of AI being an existential threat, I am more than willing to admit there are some legitimate concerns about AI, concerns that are actually about AI and not about how humans find ways to blame AI for their own assholery. Such as:
Explainability: This refers to the ability of an AI system to provide clear, understandable explanations for its decisions and actions. It's about making the internal workings of an AI system understandable to humans. This is crucial for building trust and for users to understand, appropriately trust, and effectively manage AI (a toy code sketch of one explainability technique follows this list of concerns). Can you imagine a world where government officials and law enforcement had to actually explain themselves and their choices without constantly saying “I'm only human” or “I was afraid” (a very popular refrain after law enforcement murders unarmed civilians, something AI would not do; tell me again how AI is worse)? Some people would call that change alone utopia.
Interpretability: This is closely related to explainability, but it specifically refers to the degree to which a human can understand the cause of a decision made by an AI system. This can involve understanding both the AI's overall decision-making process and the reasons behind individual outputs.
Verification: This involves proving or confirming that an AI system operates as intended. It's about ensuring that the AI system is doing what it's supposed to do and not doing what it's not supposed to do. Once again, this feels like something which should be applied to the government and its lobbyists, and not just AI.
Accessibility of Information: This refers to the ability of stakeholders to access and understand the information used by the AI system. This can involve the data used to train the AI, the logic of the AI's algorithms, and the AI's decision-making process.
Mechanistic Transparency: This involves understanding the specific mechanisms through which an AI system makes its decisions. This can involve understanding the AI's algorithms, its decision-making process, and the specific pathways through which it arrives at its decisions.
3rd party oversight: This involves incorporating transparency into every stage of an AI project, from the initial design and development stages through to deployment and use. This can involve transparently communicating about the project's goals, methods, and results. Much like open-source software is generally more secure than closed-source software because third-party organizations can test the code themselves, this dramatically reduces the ability to add components which violate other principles, such as the remarkable amount of spying that goes on as part of the Windows operating system, which does not occur in Linux for the same reason.
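As a concrete, deliberately tiny illustration of what explainability and interpretability ask for, here is a hypothetical sketch of permutation importance in Python: shuffle one input feature at a time and measure how much the model's accuracy drops. A feature whose shuffling hurts accuracy is one the model actually relies on, which is a human-readable answer to 'why did it decide that?'. The model, the data, and the feature names are all invented; real systems would use dedicated tooling, but the principle is the same.

```python
import random

# Hypothetical loan records: (income_in_thousands, postcode_group, repaid_loan)
data = [(55, 0, 1), (62, 1, 1), (30, 0, 0), (28, 1, 0),
        (70, 0, 1), (35, 1, 0), (65, 1, 1), (25, 0, 0)]

def model(income: float, postcode_group: int) -> int:
    # Stand-in for a trained model: it approves above an income threshold
    # and, despite accepting it as input, never actually uses the postcode.
    return 1 if income >= 50 else 0

def accuracy(rows) -> float:
    return sum(model(inc, pc) == y for inc, pc, y in rows) / len(rows)

def permutation_importance(rows, feature_index: int, trials: int = 20) -> float:
    # Shuffle one input column, re-measure accuracy, and average the drop.
    base = accuracy(rows)
    total_drop = 0.0
    for seed in range(trials):
        random.seed(seed)
        column = [row[feature_index] for row in rows]
        random.shuffle(column)
        shuffled = [
            tuple(column[j] if i == feature_index else value
                  for i, value in enumerate(row))
            for j, row in enumerate(rows)
        ]
        total_drop += base - accuracy(shuffled)
    return total_drop / trials

print("income importance:  ", permutation_importance(data, 0))
print("postcode importance:", permutation_importance(data, 1))
# Shuffling income hurts accuracy while shuffling postcode changes nothing,
# so this system can honestly be explained as "deciding on income alone".
```

The same probing applied to a system that did lean on postcode would expose that immediately, which is precisely the kind of answer a loan applicant, or a defendant, is owed.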
So, in conclusion, it is not AI which is the most dangerous and probably existential risk; it's human stupidity. This is something Carl Sagan was very specific about, long before AI was ever a thing, as noted in his famous quote:
“I have a foreboding of an America in my children's or grandchildren's time -- when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness...
The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30 second sound bites (now down to 10 seconds or less), lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance”
The fear-mongering around AI for attention, combined with the state of public discourse around AI, feels very much like a celebration of ignorance by anti-intellectual people who assume that intelligence is itself inherently evil. What they have difficulty grasping is just how culturally specific that assumption is to them.
That is why I didn’t fit in.