
Fear Mongering and AI: A Tale of Fragile Egos


"Elon Musk's assertion that AI is the 'biggest existential threat to humanity' has certainly caught the public's attention. However, it's worth noting that this claim, while dramatic, overlooks a far more immediate and tangible threat: climate change. For decades, scientists have been sounding the alarm about the devastating impacts of global warming, from rising sea levels to extreme weather events. Yet, these warnings often fall on deaf ears, particularly among those who are insulated by wealth and privilege from the worst effects of environmental degradation.

This oversight is not just a simple omission. It speaks to a deeper issue: a disconnect from the lived experiences of the majority of humanity. It's easy to speculate about hypothetical AI threats when one is not grappling with the very real and present dangers posed by a warming planet.

While it's not my intention to single out Musk or any other individual, it's important to challenge this narrative. It's not about intellectual laziness or cowardice, but rather about perspective. We need to broaden our view, to consider not just the potential future threats, but also the very real challenges we face here and now. Only then can we hope to address the true existential threats to humanity."

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things.

In the world of technology, few terms have been as widely used and misunderstood as 'Artificial Intelligence'. Over the years, the term 'AI' has been co-opted by salespeople and entrepreneurs seeking to attract investment and hype for their ventures. This has led to a situation where the public discourse around AI is often muddled and misleading.

Firstly, it's crucial to differentiate between algorithms and AI. Algorithms are a fundamental part of software engineering and have been around for decades. They are sets of instructions that tell a computer how to perform a specific task. AI, on the other hand, refers to systems that can perform tasks that normally require human intelligence, such as understanding natural language or recognizing patterns.
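To make the distinction concrete, here is a minimal sketch in Python (the data and the "model" are toy inventions for illustration): the first function is a classic algorithm, where every step is written by hand, while the second derives its rule from labeled examples, which is, in miniature, what modern AI systems do at vastly larger scale.

```python
# A classic algorithm: every step is specified by the programmer,
# and the behavior is fully determined by the written instructions.
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# A toy "learned" model: the decision rule is not written by hand,
# it is derived from labeled examples (invented here for illustration).
def train_threshold(examples):
    """Learn a 1-D decision threshold from (value, label) pairs."""
    positives = [x for x, label in examples if label]
    negatives = [x for x, label in examples if not label]
    return (max(negatives) + min(positives)) / 2  # midpoint between the classes

examples = [(1.0, False), (2.0, False), (8.0, True), (9.0, True)]
threshold = train_threshold(examples)
print(binary_search([1, 3, 5, 8, 13], 8))  # 3 -- the rule was hand-written
print(7.5 > threshold)                     # True -- the rule was learned from data
```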

However, when people talk about AI, they often conflate it with the concept of Artificial General Intelligence (AGI), a hypothetical AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of a human. The reality is, AGI does not currently exist, and no one knows for certain how or when it might be achieved.

As for the AI that does exist today, such as large language models: these systems are impressive feats of engineering, but they are far from being existential threats. They are trained to perform specific tasks and, while they can generate remarkably human-like text, they do not possess agency.

"When we discuss Artificial General Intelligence (AGI), we often define it in terms of 'human-level intelligence.' This comparison is not accidental but a reflection of our anthropocentric perspective. Historically, metaphors around AI and AGI have served as a convenient way to discuss human problems and mistakes without directly offending or challenging the individual. It's a way to bypass the 'only human' defense and appeal to ignorance that often arises in these discussions.

However, this comparison also implies that any mistakes an AGI might make would mirror those made by humans. If we were to create a set of rules to prevent AGI from posing a threat, why not apply those same rules to humans? After all, isn't the threat of a rogue AGI similar to the threat posed by a reckless leader, like Trump? Both scenarios involve the redirection of inarticulate anger and dissatisfaction with the status quo towards a convenient scapegoat, whether it's AI or immigrants.

Even if we could perfectly regulate AI, it wouldn't resolve the problems we face today. These are problems inherent to our current systems and status quo.

If we genuinely believed in the power of public discourse, why haven't we effectively regulated corporations? Why haven't we resolved the climate crisis that's been looming over us for the past twenty years? The answer lies not in our inability to regulate technology, but in our collective failure to address the systemic issues plaguing our societies. It's a matter of misplaced priorities and ineptitude, not a lack of technological control."

The discourse around the potential threats of AI often mirrors the rhetoric used by far-right factions when discussing minorities: complaints about stolen jobs and hoarded profits, but also fears about political power. This is not a coincidence but a reflection of the deeply ingrained tribalism that permeates our societies. These fears and anxieties are not new; they have been present for centuries, often manifesting as racial discrimination or xenophobia.


It's ironic to hear these arguments from those who once preached the gospel of abundance. After a decade of economic mismanagement, marked by ill-advised investments in hyped technologies like Bitcoin, these same voices now warn of the dangers of AI. The resulting economic instability and inflation are not the fault of AI, but of poor financial decisions and a lack of effective regulation, combined with regulatory capture by those same elites.

The scapegoating of AI is a convenient deflection, shifting the blame from human failings to an impersonal technology. It's a continuation of the same tribalistic tendencies that have long divided our societies, merely dressed up in a new, technologically advanced guise.

It's important to recognize this for what it is: a manifestation of the same tribalism that has fueled discrimination and conflict throughout history.

In conclusion, the fears surrounding AI are not just about technology. They are a reflection of our societal anxieties, our tribalistic tendencies, and our collective failure to effectively manage our economies and regulate our industries.

The misunderstandings about AI do not stop there, though. The use of AI as a vehicle for cultural critique, and the general lack of understanding of what intelligence is, let alone what it can do, are often encapsulated in what has been called the 'paperclip problem'.

The 'paperclip problem' is a popular thought experiment in discussions about AI. It posits a scenario where an AI, tasked with making paperclips, ends up converting the entire planet into paperclips in its single-minded pursuit of its goal. This scenario, while intriguing, is fundamentally flawed in its assumptions not just about AI, but about intelligence as a phenomenon.

The 'paperclip problem' suggests an AI that is simultaneously super intelligent and super naive. It imagines an AI capable of complex manipulations and technical mastery, yet unable to understand the broader implications of its actions. This dichotomy is nonsensical. It assumes that an AI could possess vast intellectual capabilities, yet lack the basic common sense to avoid self-destruction or the annihilation of life on Earth.

This argument is a reflection of the cognitive biases of those who propose it. It's a projection of human failings, specifically those of individuals with below-average intelligence who are prone to misunderstanding and making mistakes. It's a product of an anti-intellectual culture that struggles to comprehend the nature of intelligence itself.

Current AI systems can indeed make non-intuitive and sometimes bizarre choices in pursuit of their goals. However, these are not the actions of a super-intelligent entity, but the mistakes of a system that lacks understanding. To assume that a super-intelligent AI would continue to make such errors is to fundamentally misunderstand what intelligence is.

The 'paperclip problem' and similar arguments are forms of concern trolling, appealing to those who rely on others for their opinions rather than forming their own. It's a form of populism that preys on ignorance and fear.

As J.B. Pritzker noted in a recent commencement speech at Northwestern University, those who react with cruelty towards others who are different have 'failed the first test of an advanced society.' They have not only failed those they target with their ignorance, but they have also failed themselves.

So when someone says, "Goal-driven systems won't wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we'd find those actions problematic, even horrifying. They'll work to preserve themselves, accumulate more resources, and become more efficient," they are projecting themselves onto AI. More specifically, how is that behavior any different from that of amoral humans, such as the many fossil fuel corporations that are fronts for dictatorships?
Will regulating AI stop them?

Notwithstanding the Butlerian Jihad approach popularized by modern Luddites (sometimes referred to as "the Woke Mob", which is in many ways a reference to organized crime getting involved in politics, or to organizations supported by foreign states that typically rail against "the West"), I will list some more examples of how AI is just a foil for talking about problems in society that humans cause, followed by attempts to shift the blame to AI.

Misinformation and Content Moderation: Some critics demand that AI companies fully disclose how their algorithms moderate content or combat misinformation. They often blame AI for the spread of fake news or harmful content without acknowledging the role of human actors in creating and spreading such content. This is akin to blaming a tool for the actions of the person using it, and it is very much about tech companies that do not want to hire enough moderators to keep up with their exponential growth, combined with the concerted efforts of bad actors. Humans are by far still more creative than AI when it comes to making up bullshit. Remember that AI was trained on human data and is mostly just remixing things it has already seen; these systems are not coming up with new and original arguments or propaganda. There just happens to be a vast amount of propaganda, far more than most individuals will ever encounter, so it is not surprising that an AI, in remixing internet data, may say things many people have not seen before.

Bias and Discrimination: There are calls for AI companies to prove that their algorithms are not biased or discriminatory. While it's true that AI can perpetuate human biases if not properly designed and trained, this demand often overlooks the fact that bias and discrimination are frequently created on purpose, by cherry-picking the data fed to the AI; this is especially prevalent in policing. This highlights the necessity of the next demand.

Full Disclosure of Training Data: Some people demand that AI companies disclose the datasets used to train their models. This could potentially involve revealing sensitive or proprietary information; in other industries, it would be akin to a company revealing its customer data or other proprietary research. Closely related is the demand for third-party audits.

Third-Party Audits of AI Systems: While audits are common in many industries, the level of auditing some propose for AI systems goes beyond the norm. This could include detailed examinations of an AI's decision-making process, its algorithms, and its training data.

The only thing that makes this position different from what people normally suggest is that we propose it apply to all software used by the government, or by corporations that operate as public utilities.

The great advantage of this conversation about AI is not just that these issues should be resolved for AI, but that the same scrutiny could extend to most of the systems the government uses to make decisions with significant, direct, and acute effects on people's lives, as well as to systems used, for example, to issue loans. AI has unlocked the possibility of addressing the tribalism that often becomes entrenched in governments and corporations alike.

Surveillance and Privacy: Some people fear that AI is being used to spy on them and demand that companies disclose all the data they collect and how it's used. While privacy is a legitimate concern, this fear often overlooks the fact that surveillance capitalism existed for years before large language models did; social media predates AI. Once again, AI is being used as a foil to talk about what the "tech elites" have been doing for over a decade, and more specifically, why In-Q-Tel's backing was what made the difference between MySpace and Facebook. Surveillance was always a key part of the business plans, the deal with the devil that technocrats made, and as with the previous examples, the blame is shifted onto AI to obscure their own culpability. These are serious issues that need to be addressed, but AI regulation cannot resolve problems that were created before AI even existed.

Job Displacement: There's a fear that AI will take over all jobs, leading to mass unemployment. Replace AI with IM (immigrants) and you have what the Butlerian Jihad of the Woke mob is about. It's ironic that it is often Middle Eastern countries that are spearheading open-source developments in AI; it seems that, at last, elements of the West are going backwards while countries like the UAE blaze ahead. Insults aside, what most people somehow don't realize is that most jobs require a wide range of skills that seem simple but are not (easy to notice if you ask someone to do intellectual labor in a field they are not skilled in), combined with rapidly updated long-term memory, both things AI does not have. AI as it is now, and for the foreseeable future, is just a slightly wider version of narrow AI: it can do narrow tasks very well, but it generally can't chain together long lists of tasks, or tasks that require diverse skills. This enables AI to augment human intelligence, but it is nowhere near capable of replacing whole jobs, especially jobs that depend on fine motor skills, which is the bulk of low-paying jobs. So while poor people are afraid of AI taking their jobs, AI would only take over jobs that do not require physical manipulation of the environment. The point is, AI may take over certain tasks, but not whole jobs.

AI Autonomy: Some people fear that AI will become autonomous and make decisions without human oversight. This is entirely reasonable and, more to the point, is really just a way of asking for oversight of many other processes, automated or not, that govern people's lives, such as loan applications, job applications, and various forms of permitting and licensing. The way AI works is based on requests. AIs are very good at replying, but they lack any form of initiative: they can only respond to things; they do not initiate actions on their own. Even people worried about "AI automatically making decisions" seem to overlook that such a thing only occurs in response to changes in the environment. It's not as if these systems sit around idly, get bored, and then spontaneously decide to act; even in automated decision-making scenarios, the outputs are still responses, not initiations from the AI. A minimal sketch of this request-driven pattern follows below.
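As a sketch of that request/response pattern (the `generate` function below is a hypothetical stand-in for a real model call, not any particular vendor's API), note that there is simply no code path through which the model acts without an incoming request:

```python
import queue

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a language model call."""
    return f"response to: {prompt!r}"

# The model sits behind a queue of incoming requests. Every entry is an
# external event: a user prompt, or an environment-triggered job.
requests: "queue.Queue[str]" = queue.Queue()
requests.put("summarize this document")
requests.put("re-evaluate loan application #42")  # hypothetical automated trigger

while not requests.empty():
    prompt = requests.get()   # every action begins as an external event
    print(generate(prompt))   # the model only ever responds
# When the queue is empty, nothing further happens; the model has no
# mechanism for boredom, initiative, or spontaneous action.
```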

Now that I am done ranting about the vague and generally unfounded fears of AI as an existential threat, I am more than willing to admit there are some legitimate concerns about AI, concerns that are actually about AI and not about how humans find ways to blame AI for their own assholery. Such as:

Explainability: This refers to the ability of an AI system to provide clear, understandable explanations for its decisions and actions. It's about making the internal workings of an AI system understandable to humans, which is crucial for building trust and for users to understand, appropriately trust, and effectively manage AI. Can you imagine a world where government officials and law enforcement had to actually explain themselves and their choices, without constantly saying "I'm only human" or "I was afraid" (a very popular refrain after law enforcement murders unarmed civilians, something AI would not do; tell me again how AI is worse)? Some people would call that change alone utopia.

Interpretability: This is closely related to explainability, but it specifically refers to the degree to which a human can understand the cause of a decision made by an AI system. This can involve understanding both the overall decision-making process of the AI and the contribution of each input to a particular decision (a toy illustration follows after this list).

Verification: This involves proving or confirming that an AI system operates as intended. It's about ensuring that the AI system is doing what it's supposed to do, and not doing what it's not supposed to do. Once again, this feels like something that should be applied to the government and its lobbyists, not just to AI.

Accessibility of Information: This refers to the ability of stakeholders to access and understand the information used by the AI system. This can involve the data used to train the AI, the logic of the AI's algorithms, and the AI's decision-making process.

Mechanistic Transparency: This involves understanding the specific mechanisms through which an AI system makes its decisions. This can involve understanding the AI's algorithms, its decision-making process, and the specific pathways through which it arrives at its decisions.

3rd party oversight: This involves incorporating transparency into every stage of an AI project, from the initial design and development stages through to deployment and use. This can involve transparently communicating the project's goals, methods, and results. Much like open-source software is generally more secure than closed-source software because third-party organizations can test the code themselves, this dramatically reduces the ability to add components that violate other principles, such as the remarkable amount of spying that goes on as part of the Windows operating system, which does not occur in Linux for the same reason.
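To make interpretability a little more concrete, here is the toy illustration promised above (the features, weights, and applicant values are invented for the example; real credit-scoring systems are far more complex): with a linear scoring model, the "explanation" for a decision can be read directly off each input's contribution.

```python
# Toy linear loan-scoring model: illustrative weights, not real ones.
WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
BIAS = 0.1

def score(applicant: dict) -> float:
    """The decision: a weighted sum of the applicant's features."""
    return BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explain(applicant: dict) -> dict:
    """The explanation: how much each feature pushed the score up or down."""
    return {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}

applicant = {"income": 1.2, "debt": 0.8, "years_employed": 0.5}
print(score(applicant))    # ~0.2: the decision
print(explain(applicant))  # income helped, debt hurt, tenure helped a little
```

A deep neural network offers no such direct reading of its weights, which is exactly why explainability and interpretability remain live research problems rather than solved checkboxes.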

So, in conclusion, it is not AI that is the most dangerous and probably existential risk; it's human stupidity. This is something Carl Sagan was very specific about, long before AI was ever a thing, as noted in his famous quote:

“I have a foreboding of an America in my children's or grandchildren's time -- when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness...

The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30 second sound bites (now down to 10 seconds or less), lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance."

The fear mongering around AI for attention, combined with the state of public discourse around AI, feels very much like a celebration of ignorance by anti-intellectual people who assume that intelligence is, itself, inherently evil. What they have difficulty grasping is just how culturally specific that assumption is.
That is why I didn’t fit in.
