|

Fear Mongering and AI: A Tale of Fragile Egos


"Elon Musk's assertion that AI is the 'biggest existential threat to humanity' has certainly caught the public's attention. However, it's worth noting that this claim, while dramatic, overlooks a far more immediate and tangible threat: climate change. For decades, scientists have been sounding the alarm about the devastating impacts of global warming, from rising sea levels to extreme weather events. Yet, these warnings often fall on deaf ears, particularly among those who are insulated by wealth and privilege from the worst effects of environmental degradation.

This oversight is not just a simple omission. It speaks to a deeper issue: a disconnect from the lived experiences of the majority of humanity. It's easy to speculate about hypothetical AI threats when one is not grappling with the very real and present dangers posed by a warming planet.

While it's not my intention to single out Musk or any other individual, it's important to challenge this narrative. It's not about intellectual laziness or cowardice, but rather about perspective. We need to broaden our view, to consider not just the potential future threats, but also the very real challenges we face here and now. Only then can we hope to address the true existential threats to humanity."

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things.

In the world of technology, few terms have been as widely used and misunderstood as 'Artificial Intelligence'. Over the years, the term 'AI' has been co-opted by salespeople and entrepreneurs seeking to attract investment and hype for their ventures. This has led to a situation where the public discourse around AI is often muddled and misleading.

Firstly, it's crucial to differentiate between algorithms and AI. Algorithms are a fundamental part of software engineering and have been around for decades. They are sets of instructions that tell a computer how to perform a specific task. AI, on the other hand, refers to systems that can perform tasks that normally require human intelligence, such as understanding natural language or recognizing patterns.

However, when people talk about AI, they often conflate it with the concept of Artificial General Intelligence (AGI), a hypothetical AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of a human. The reality is, AGI does not currently exist, and no one knows for certain how or when it might be achieved.

As for the AI that does exist today, such as large language models, they are impressive feats of engineering but are far from being existential threats. These models are trained to perform specific tasks and, while they can generate remarkably human-like text, they do not possess agency.

"When we discuss Artificial General Intelligence (AGI), we often define it in terms of 'human-level intelligence.' This comparison is not accidental but a reflection of our anthropocentric perspective. Historically, metaphors around AI and AGI have served as a convenient way to discuss human problems and mistakes without directly offending or challenging the individual. It's a way to bypass the 'only human' defense and appeal to ignorance that often arises in these discussions.

However, this comparison also implies that any mistakes an AGI might make would mirror those made by humans. If we were to create a set of rules to prevent AGI from posing a threat, why not apply those same rules to humans? After all, isn't the threat of a rogue AGI similar to the threat posed by a reckless leader, like Trump? Both scenarios involve the redirection of inarticulate anger and dissatisfaction with the status quo towards a convenient scapegoat, whether it's AI or immigrants.

Even if we could perfectly regulate AI, it wouldn't resolve the problems we face today. These are problems inherent to our current systems and status quo.

If we genuinely believed in the power of public discourse, why haven't we effectively regulated corporations? Why haven't we resolved the climate crisis that's been looming over us for the past twenty years? The answer lies not in our inability to regulate technology, but in our collective failure to address the systemic issues plaguing our societies. It's a matter of misplaced priorities and ineptitude, not a lack of technological control."

The discourse around the potential threats of AI often mirrors the rhetoric used by far-right factions when discussing minorities, when considering complaints about issues such as jobs and hoarding profits, but also when it come to political power. This is not a coincidence but a reflection of the deeply ingrained tribalism that permeates our societies. These fears and anxieties are not new; they have been present for centuries, often manifesting as racial discrimination or xenophobia.

The primary chorus is a reference to https://en.wikipedia.org/wiki/We_(novel)

It's ironic to hear these arguments from those who once preached the gospel of abundance. After a decade of economic mismanagement, marked by ill-advised investments in hyped technologies like Bitcoin, these same voices now warn of the dangers of AI. The resulting economic instability and inflation are not the fault of AI, but of poor financial decisions and a lack of effective regulation combined with elite capture and regulatory capture by those elites.

The scapegoating of AI is a convenient deflection, shifting the blame from human failings to an impersonal technology. It's a continuation of the same tribalistic tendencies that have long divided our societies, merely dressed up in a new, technologically advanced guise.

It's important to recognize this for what it is: a manifestation of the same tribalism that has fueled discrimination and conflict throughout history.

In conclusion, the fears surrounding AI are not just about technology. They are a reflection of our societal anxieties, our tribalistic tendencies, and our collective failure to effectively manage our economies and regulate our industries.

The misunderstandings about AI do not stop there though, using AI as a critique of culture and the lack of understanding of what intelligence is, let alone what it can do, are often encapsulated in what has been called

The 'paperclip problem', it is a popular thought experiment in discussions about AI, positing a scenario where an AI, tasked with making paperclips, ends up converting the entire planet into paperclips in its single-minded pursuit of its goal. This scenario, while intriguing, is fundamentally flawed in its assumptions about not just AI but also in intelligence as a phenomena.

The 'paperclip problem' suggests an AI that is simultaneously super intelligent and super naive. It imagines an AI capable of complex manipulations and technical mastery, yet unable to understand the broader implications of its actions. This dichotomy is nonsensical. It assumes that an AI could possess vast intellectual capabilities, yet lack the basic common sense to avoid self-destruction or the annihilation of life on Earth.

This argument is a reflection of the cognitive biases of those who propose it. It's a projection of human failings, specifically those of individuals with below-average intelligence who are prone to misunderstanding and making mistakes. It's a product of an anti-intellectual culture that struggles to comprehend the nature of intelligence itself.

Current AI systems can indeed make non-intuitive and sometimes bizarre choices in pursuit of their goals. However, these are not the actions of a super-intelligent entity, but the mistakes of a system that lacks understanding. To assume that a super-intelligent AI would continue to make such errors is to fundamentally misunderstand what intelligence is.

The 'paperclip problem' and similar arguments are forms of concern trolling, appealing to those who rely on others for their opinions rather than forming their own. It's a form of populism that preys on ignorance and fear.

As J.B. Pritzker noted in a recent commencement speech at Northwestern University, those who react with cruelty towards others who are different have 'failed the first test of an advanced society.' They have not only failed those they target with their ignorance, but they have also failed themselves.

So when someone says “Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. “
They are projecting themselves onto AI, but more specifically, how is that different from the people who are amoral, such as many of the fossil fuel corporations which are fronts for dictatorships?
Will regulating AI stop them?

Not withstanding the Butlerian Jihad approach that has been popularized by modern Luddites, sometimes referred to as “the Woke Mob” which is in many ways a reference to organized crime getting involved in politics, or the organizations supported by foreign states, which typically rail against “the west”. I will list some more examples of how AI is just a foil for talking about problems in society that humans cause, and then try to blame shift to AI.

Misinformation and Content Moderation: Some critics demand that AI companies fully disclose how their algorithms moderate content or combat misinformation. They often blame AI for the spread of fake news or harmful content, without acknowledging the role of human actors in creating and spreading such content. This is akin to blaming a tool for the actions of the person using it, and is very much about the tech companies which do not want to hire enough moderators to keep up with their exponential growth, combined with concerted efforts of bad actors. Humans are by far still more creative than AI when it comes to making up bullshit. Try to remember that AI was trained on human data, and is mostly just remixing things it has already seen, they are not coming up with new and original arguments or propaganda. There just happens to be vast amounts of propaganda, far more than most individuals will ever encounter, so it’s not surprising that the AI in remixing what internet data, may say things many people have not seen before.

Bias and Discrimination: There are calls for AI companies to prove that their algorithms are not biased or discriminatory. While it's true that AI can perpetuate human biases if not properly designed and trained, this demand often overlooks the fact that bias and discrimination created on purpose by cherry picking data to feed to the AI, this is especially prevalent with policing. This highlights the necessity of

Full Disclosure of Training Data: Some people demand that AI companies disclose the datasets used to train their models. This could potentially involve revealing sensitive or proprietary information. In other industries, this would be akin to a company revealing its customer data or other proprietary research. As well as

Third-Party Audits of AI Systems: While audits are common in many industries, the level of auditing some propose for AI systems goes beyond the norm. This could include detailed examinations of an AI's decision-making process, its algorithms, and its training data.

The only thing that makes this different than what people normally suggest, is that we are suggesting this happens with all software which is used by the government or corporations which operate as public utilities.

Here the vast advantage of this conversation about AI, is not just that these issues should be resolved with AI, but in fact, most of the systems which the government utilizes to make decisions which have significant direct and acute effects on peoples lives, as well as systems used for example, to issue loans. AI has unlocked a possibility of addressing the tribalism which often becomes entrenched in government as well as corporations.

Surveillance and Privacy: Some people fear that AI is being used to spy on them, and demand that companies disclose all the data they collect and how it's used. While privacy is a legitimate concern, this fear often overlooks the fact that surveillance capitalism has existed for years prior to large language models existing, social media predates AI. Once again, AI is being used as a foil to talk about what the “tech elites” have been doing for over a decade, and more specifically, the reasons why in-q-tell was what made the difference between myspace and facebook. Surveillance was always a key part of the business plans, the deal with the devil that technocrats made, and as with the previous examples, blame shift on to AI to obscure their own culpability. These are serious issues that need to be addressed, but AI regulation cannot resolve issues that were created before AI even existed.

Job Displacement: There's a fear that AI will take over all jobs, leading to mass unemployment. Replace AI with IM (immigrants) and you have what the Butlterian jihad of the the Woke mob is about, and it’s ironic that it is often middle eastern countries which are spearheading open source developments in AI, it seems that finally there are elements of the west going backwards while countries like the UAE are blazing ahead. Insults aside, what most people somehow don’t realize, is that most jobs require both a wide range of skills that seem simple but are not, which is easy to notice if you try to have someone do intellectual labor in a field they are not skilled in, combined with rapidly updated long term memory, both things AI does not have. AI as it is now and for the forseeable future, is just a slightly wider version of narrow AI, which is to say AI can do narrow tasks very well, but generally can’t chain together long lists of tasks or tasks which require diverse skills. This enables AI to augment human intelligence, but no where near capable of replacing whole jobs, especially jobs which are dependent on fine motor skills, which is the bulk of low paying jobs. So technically poor people are afraid of AI taking their jobs, but AI would only take jobs that do not require physical manipulation of the environment. The point is, AI may take over certain tasks, but not whole jobs.

AI Autonomy: Some people fear that AI will become autonomous and make decisions without human oversight. This is entirely reasonable and more to the point, is really just a way to ask for oversight of may other processes, automated or not, which often govern peoples lives. Such as loan applications, job applications, and various forms of permitting and licensing. The way AI works, is based on requests. AIs are very good at replying, but they lack any form of initiative. They can only respond to things, they do not initiate actions on their own. Even in situations where people are worried about “AI automatically making decisions” they seem to overlook, that such a thing only occurs in response to changes in the environment. It’s not like they are sitting around idly, get bored, and then spontaneously decide to take an action, even in automated decision making scenarios, they are still responses and not initiations from the AI.

Now that I am done ranting about the vague and generally unfounded fears of AI being an existential threat. I am more than willing to admit there are some legitimate concerns about AI, which are actually about AI and not about how humans find ways to blame AI for their own assholery. Such as:

Explainability: This refers to the ability of an AI system to provide clear, understandable explanations for its decisions and actions. It's about making the internal workings of an AI system understandable to humans. This is crucial for building trust and for users to understand, appropriately trust, and effectively manage AI. Can you imagine a world where government officals, and law enforcement, had to actually explain themselves and their choices without constantly saying “I’m only human” or “I was afraid” (a very popular refrain after law enforcement murders unarmed civilians, something AI would not do, tell me how AI is worse again) some people would call that change alone, utopia.

Interpretability: This is closely related to explainability, but it specifically refers to the degree to which a human can understand the cause of a decision made by an AI system. This can involve understanding both the overall decision-making process of the AI.

Verification: This involves proving or confirming that an AI system operates as intended. It's about ensuring that the AI system is doing what it's supposed to do and not doing what it's not supposed to do. Once again, this feels like something which should be applied to the government and its lobbyists, and not just AI.

Accessibility of Information: This refers to the ability of stakeholders to access and understand the information used by the AI system. This can involve the data used to train the AI, the logic of the AI's algorithms, and the AI's decision-making process.

Mechanistic Transparency: This involves understanding the specific mechanisms through which an AI system makes its decisions. This can involve understanding the AI's algorithms, its decision-making process, and the specific pathways through which it arrives at its decisions. Source

3rd party oversight: This involves incorporating transparency into every stage of an AI project, from the initial design and development stages through to the deployment and use stages. This can involve transparently communicating about the project's goals, methods, and results. Much like open source software is generally more secure than closed source software because of the ability of 3rd party organizations to test the code themselves, this dramatically reduces the ability to add components which violate other principals, such as the remarkable amount of spying that goes on as part of the windows operation system, which does not occur in Linux for the same reason.

So in conclusion, it is not AI which is the most dangerous and probably existential risk, it’s human stupidity. Which is something Carl Sagan was very specific about, long before AI was ever a thing, as noted in his famous quote

“I have a foreboding of an America in my children's or grandchildren's time -- when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness...

The dumbing down of American is most evident in the slow decay of substantive content in the enormously influential media, the 30 second sound bites (now down to 10 seconds or less), lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance”

The fear mongering around AI for attention, combined with the state of public discourse around AI, feels very much like a celebration of ignorance, by anti-intellectual people who assume that intelligence is itself, inherently evil. What they have difficulty in grasping, is just how culturally specific that is to them.
That is why I didn’t fit in.

Similar Posts

  • Not really explaining Quantum Chromo Dynamics

    This critique is in response to the video: Original Description: Jun 22, 2024Or maybe why I can’t? QuantumQuantumQuantumChromodynamics Link to Patreon — one exclusive video per month: / acollierastro I have merch: https://store.dftba.com/collections/angela-collier Speakers:Angela Collier= [AC] wow, this is probably the part of the video that I agree with the most. It is the central…

  • 伪智力主义的分类学

    pseudo-Intellectimism的分类学 字典定义: 伪智能, noun 一个想被认为具有很多智慧和知识但并不真正聪明或知识渊博的人。 pseudo-Intellectuals的特征 常见特征包括: 从各种来源汲取灵感,我们可以识别几种原型: 🧠层I:原型角色(口罩) 这些是您的外部角色 – 对他人的伪智能看起来像。 以 “认知恶习” 的出现: 角色 认知副 配对原型描述 表演者 虚荣,虚无主义 以蓬勃发展但没有核心表演智力。 更关心光学而不是洞察力。 逆势 自我,不安全感 挑战没有实质的共识。 通过新颖性寻求优势。 变色龙 机会主义 转变信念以保持相关性。 当前趋势的空心模仿。 回声室爱好者 顺从,恐惧 寻求同意的安全。 加强意识形态而不是探究。 智力欺负 自恋 武器化知识。 使用话语来统治而不是探索。 晦涩的人 不安全感,控制 隐藏了复杂性背后的无知。 使用歧义作为装甲。 证书主义者 威权主义 替代标题的功绩。 取决于状态的沉默异议。 🔥级II:动机引擎(为什么这样做) 他们没有将其视为单独的“偏见列表”,而是将它们视为基础恶习,可以为每个角色的伪智能主义提供力量。 将它们分为几类: 🕳自我驱动 🧠议程驱动 🪞绩效驱动 每个原型都从这些动机引擎的混合中汲取灵感 – 我们可以…

  • |

    僧团的慈爱

    本周,我们将对视频发表评论: 耶稣命令的爱 – 巴伦主教的周日讲道 该视频的原始描述是: 朋友们,在复活节的第五个星期日,我们有一个非凡的福音,这是基督教事物的核心。 耶稣在一个漫长而令人难以置信的丰富的独白开始时,他在他去世前一天给他的门徒说:“我给你一个新的诫命:彼此相爱。我也应该爱你,所以你也应该彼此相爱。 这不是感性或心理的平庸性。 要在这里理解耶稣,我们必须理解爱的奇怪的东西,以及单词的使用方式。 群众阅读:阅读1 – 使徒行传14:21-27 诗篇 – 诗篇145:8-9,10-11,12-13 阅读2 – 启示录21:1-5a 开放调用: 让我们谈论 love – 不是一个过分的放纵,也不是一种自动美德,而是作为一种具有体重梁的道德建筑。 喜欢舒适和在承诺中的爱心让我们寻求恢复Love失去的语法。 🧠I.爱的语义侵蚀 巴伦主教开头,哀叹,像钟声一样在一座废弃的寺庙中产生共鸣: “我们将’爱’一词用于一切 – 从上帝到大蒜面包。” 英语舌头曾经充满诗歌,已经陷入语义通货膨胀。 我们“爱”: 这不是无害的夸张 – 类别崩溃。 对食​​欲与道德的困惑。我们现在对待 desire ,就像 dodepion 和 preference 一样,就像是 em>原理em>。 🧠ii。 爱的希腊词典 为了恢复清晰度,我们转向希腊语 – 特异性手术刀。那里,爱不是水坑,而是一个频谱: 基督的命令是 Agape 。不是心情。 不是偏好。 不是Instagram标题。 这是爱情作为动作而不是情感。 📦iii。 作为消费主义的爱:业力危险…

  • |

    彼得森学院》点评:亚里士多德与大鸟的相遇 第 1 部分

    彼得森学院:亚里士多德遇到大鸟的地方 0:00 Peterson University评论 1:19:18 terf呼叫 对彼得森学院的分析很棒。 我个人不会付钱看这门课程。 我对他的批评的主要批评是,英国死pan讽刺可能会使JP人群感到困惑。 虽然我可以理解这是对在线终端中普遍的多余双曲线BS的反应,但尽管如此,他仍然不欣赏约翰·帕古(Johnathon Pageau)的psudo pagundity的唯一原因,这是他通过视频投射肢体语言的方式。 这就是为什么他对书面词有恋物癖的口语。 因为如果您只阅读他所说的话,那么强加的重点就会自动否定肢体语言方面。 演讲者: Nathan Ormond = [Nobr/> Jonathan Pageau = [JP] Jordan B Peterson = [JBP] 彼得森大学评论 0:13 [音乐] 0:26 [不]早上好,因为您可能知道我是一个我的学生,我做得很好,我 0:34我仍在全职工作,但我兼职我在做一个统计数据 0:41度,这意味着我一直在大学审计 0:50的审计 0:58嗯很好,不太好,因为尚未获得认可的um,它是在线 1:04,因此它符合我在那里的标准,声称它是为学生提供 1:10 Edge 1:10 Edge UH知识来应对现代世界独特的技能,这是雇主​​可以销售的,这对雇主来说是 1:19 Peterson Acterersy Aseme a,让我们的A A Getersy A. 1:26 Peterson Academy 1:33嗯,他一直在谈论这个问题已经很长时间了,Jordan先生的想法…

  • The evolutionary roots of suffering.

    There are few experiences more universal than suffering. It transcends species, cultures, and individual circumstances, manifesting uniquely in entities capable of complex behaviors and emotional states. Yet, despite its universality, understanding the nature and nuances of suffering remains a complex endeavor. This invites us to explore the intricate web of life that led to the…