|

The newest model of AI fundraising, misinformation and fear mongering.

FLI seems to have gone full starwars and is becoming the source of the threats they complain about. in their latest venture to fear monger about AI for fundraising, after failing to address anything about climate change or nuclear weapons (as noted by lack of presence in the policy work page), they seem to only target those topics which are supported by academic minorities.

For now however, we will focus on the misinformation contained within their recent “pause giant AI experiments” letter.

The fear mongering and hyperbolic statements are in every paragraph starting with the first one “even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” Taking the gish gallop approach. Starting with “AI labs locked in an out-of-control race” What makes the race between firms to build AI “out of control” ? Is there any damage anywhere they can they can definitively say was GPT4 ?

Then we get to the second part of their hyperbolic fearmongering “ not even their creators – can understand,” what is this based upon ? There is a huge difference between the inability to explain something complex to a 5 year old, not because they cannot explain, but because the 5yr old cannot understand, and an inability to understand it. The 5yr old here, being the behavior which is being represented by the FLI.
No single engineer at microsoft knows how windows 11 works, but there are groups of engineers who collectively understand how windows 11 works. The same goes for LLMs.
It is not possible to produce functioning software that the upstream developers do not understand, the software would not work or would be prone to crashing. that can only occur to the people utilizing the API and which would amount to most of the customers of OpenAI, but not the developers of OpenAI.

Then to the third part of the sentence, “not even their creators – can ... predict” which also patently false, If the responses were always unpredictable, which means, not answering the prompts or queries, but instead responding “unpredictably” which is another way of saying randomly, then the AI wouldn’t be useful. There is no utility in an “unpredictable AI” because it wouldn’t be able to have responses which would be considered meaningful or useful to the end users. What they are misrepresenting, is that the gpt4 level AIs are not deterministic, which is to say, you won’t always get exactly the same answer, because the AI is based on probabilities, you can estimate what the probable answers would be, otherwise “checking the output” wouldn’t make sense either, because there would be no basis for anything to be incorrect if everything is “unpredictable” or random.

The fourth part of the sentence “no one – not even their creators – can ... or reliably control.” which has been cearly demonstrated to be false on multiple levels, first there is the very obvious ability to disable the OpenAI account from utilizing the resources on the microsoft azure cloud where it is located. But aside from the blazingly obvious ability to physically control the on-off state of the machines which are processing GPT4 via a simple electronic billing change, without even effecting the rest of the azure cloud. There are also the many safeguards, which are primarily keyword based, that prevents GPT4 from outputting content which can be considered hazardous. I have spent several hours trying to “jailbreak” GPT4, and while some of the jailbreaks work on GPT3.5 under limited conditions, I was not able to get the same output from GPT4, which demonstrates a very direct control over the output of GPT4.
And of course lastly, the easy way anyone who knows how LLMs work, which clearly isn’t the staff at the future of life institute, that the training of the AI can be directly controlled by limiting the training dataset. Which is inherent to the fundamental process of training the AI.

Now you can see why we can begin to claim that this “open letter” would qualify as disinformation or misinformation, because of the tactics used, the gish gallop, https://en.wikipedia.org/wiki/Gish_gallop and general willful ignorance of the arguments. Which are specifically aimed at individuals that do not have in-depth knowledge of the technology. It took 4 paragraphs to debunk a single sentence of their dubious claims. Which was just the opening paragraph.

While it is highly debatable that AIs are becoming “human competitive” when they are limited to approximately ~3,000 words (4000 tokens) at a time for input, which may be exceptional for conversational tasks. But not for many tasks a knowledge worker would undertake where it takes more than 3,000 words just to describe the tasks, when the data to be processed must also fit in the 3,000 word (4000 token) limit, with the description of the task. Which means task+data to process combined would need to be less than 3,000 words (4000 tokens).

Now for the next gish gallop of inapplicable rhetoric “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. “

First and most popular complaint “should we let machines flood our information channels with propaganda and untruth?” certainly sounds like they are describing all forms of media here, including themselves. This red herring requires a lobotomy for one to believe that spam did not exist prior to 2022. Propaganda and Untruth have been around as look as humans have been speaking. These issues are not somehow new that AI has created. As such, the solution to these types of misinformation and disinformation are no different for AI, than they are for regular humans. Though it does seem to be trying to create censorship regimes based on their fear mongering. So we can gather that the FLI approach to misinformation is some combination of becoming a source of misinformation while simultaneously trying to create justifications for algorithmic censorship for everyone else. A very Chinese approach.

On to the next fallacy “Should we automate away all the jobs, including the fulfilling ones?” depending on your beliefs, most people do attempt to automate away their job as much as possible, because they do not believe that the value of a human life is purely it’s economic output, and human lives that are not dedicated to the commercial pursuit of profit are inherently worthless. On top of that, why would fulfilling jobs be automated away ? How can a job which is easy to automate, be considered fulfilling? This is confusing easy jobs with a high income, with a fulfilling job. More to the point however, is that most people may not be familiar with the genesis of the current drive for AI was driven in large part the same thing that drove the adoption of solar, which is some version of the vision laid out in the book "the Zero marginal cost society" which have generally been implemented poorly and didn't take into account the scaling issues related to "the duck curve" for utility scale solar, which is why small modular reactors are considered the future of base load energy. More to the point, it is about taxing and deriving value from automation such as to create something akin to a universal basic income which would replace economic value as the basis of human values, and make knowledge work, which cannot be replaced by AI, only augmented by AI in the current and foreseeable forms. Enabling much longer periods of training and retraining to allow people to work in professions they want, instead of because there is no other way to not starve, which places a floor on extreme poverty without significantly impacting the middle classes, which have thus far been hollowed out via excessive rent seeking, not automation. So the answer is no, fulfilling jobs neither would not should be automated away, and more to the point, cannot be, because those skills are far beyond a 3000 word limit. The application of AI augmenting English speakers is an advantage in the globalized system that few countries can match, and the hidden secret of prompt engineering, is that it requires fluency in English to get anything useful from the LLM, otherwise, results are generally unimpressive and stereotypical.

Also, AI like ChatGPT is an excellent way for non-English speakers, to learn English, without having the awkwardness of trying to practice with other humans that may not speak the non-English speakers native language. One of the many overlooked benefits of having AI that, may not always be technically correct, is always grammatically correct enough to be useful as a model for humans to learn English from. Which means it "predictably produces grammatically and syntactically correct English sentences" while at the same time, being able to give grammatically correct answers, to grammatically incorrect questions. Which is far more valuable for English learners. Thus, the more intelligent the levels of conversations with chatGPT, the other participants would also, over time and over many interactions, have their conversational and emotional intelligence level raised to near that of the educational reading levels which can be set by the AI and increased or decreased by request. A vast improvement over what is typical for social media.

And of course the fear-mongering does not let up with “should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? “ which is somewhat startling as that is a talking point usually reserved by crowds of right wing extremists yelling “jews will not replace us” which suggests that the future of life institute has very significant biases that cater to white supremacy . This is much more alarming than anything to comes out of ChatGPT or OpenAI, because they are suggesting that they are a safe haven for white supremacists through the dog-whistle rhetoric.

Which makes their next assertion “Should we risk loss of control of our civilization?“ much more pointed, what civilization are they referring to here ? It is difficult to not see that assertion as another aspects of supremecy which goes far beyond nationalism. Because it easily ignores the fact that ChatGPT has no agency, and no ability to act autonomously, by design it is only reactive and cannot act on it’s own. Thus far this “open paper for a call on the pause on AI development” reads more like a series of alt-right dog whistles for fundraising than any realistic conceptualization of how GPT4 actually functions or is capable of. Which means the assertion “Such decisions must not be delegated to unelected tech leaders.” while sounding like something one could agree with, is ultimately meaningless because those decisions are not delegated to unelected tech leaders, likewise the many catastrophes which have befallen the economy since 2019, are also not the fault of AI, but generally of fear mongering and disinformation, leading to poor choices by uninformed politictians, much like what this open letter seems to be attempting, as a rallying call for the disaffected ex-trump supporters, to shift their anti-semitism on to AI.

So while the open letter from FLI claims “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable” They have produced no evidence to that effect.

Furthermore, the vague claims “we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” suggests that they are entirely unfamiliar with “the AI labs” which are never named, because they are almost entirely unfamiliar with the research or any of the organizations participating in the development of “GPT4 level AI”
Which is itself non-descriptive, because they have never defined what “GPT4 level” is, they have simply taken the most popular marketing buzzwords and used it to develop at fear-mongering article, I would not be surprised if this text was generated by GPT4 itself, in an attempt to demonstrate the “utility for misinformation”

But all they are really doing here, is demonstrating that misinformation requires a lot of money and organizational support and even levels of credibility, to be successful, regardless of how meaningless the content may actually be. Which points out once again, that misinformation and disinformation cannot be simply censored away, because doing so would actually further the goals of concern trolls. While there has been no end to the absurdities promulgated around the conflict in Ukraine, we can say that “sounding more realistic” is not a meaningful complaint, because the mechanisms for dealing with misinformation and disinformation, will not be somehow limited or expanded with AI, which cannot buy hundreds of sim cards and cell phones to verify the spamming accounts, which is the first step to misinfo/disinfo campaigns. Nor would they be able to get untraceable web hosting for their fake news outlets. Though is it possible that the open letter is a sign of FLI pivoting to misinformation as a fundraising strategy by the looks of this paper, certainly it is an email harvesting campaign at least, which I am sure will be profitable for them.

Considering the amount of debunking that had to go into the first 2 paragraphs alone, is longer than their entire “open letter” is fairly telling, and as I stated, the vagueness of the assertions are fairly indicative of the author knowing almost nothing about how large language models function, which is true of LLMs themselves. So why a 6month pause would accomplish as equally as unknown as who should participate in the pause, they may as well have said “all programmers” which would be the same level of vagueness as “all AI labs” But maybe they are simply demanding that the CCP and Russian AI labs be included in determining the internal specifics of US software companies.

Maybe they want government minders from North korea's ''AI lab" to have their own office at microsoft, to oversee the development of AI, you know, just to be safe. /s

It is interesting that FLI demands “use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” Given that process would be carried out via the prompt interface, they safety can be audited by any user or outside independent experts already,simply by using it, that does not require a pause or any kind of restructuring

Then they make more references to “the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.” without ever referring to any specific risk, just using a blanket anti-intellectual bias that anything they don’t understand is dangerous.

Then by the 5th paragraph they say something to sound reasonable that everyone can agree to “ AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” But then again, they say that the research and development should be “refocused” on these aspects, which suggests that they are somehow unaware that “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy” has always been the focus, though throwing “loyalty” in at the end there, seems to be going in the direction of advocating for “harmonious” AI, which is language that often associated with white supremacy.

By the 6th paragraph they are advocating that “work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI;” which is to suggest that the people from the recent tiktok congressional review, asking questions like “does tiktok connect to home wifi” are somehow going to make an entirely new regulatory authority, which will ultimately be trying to “regulate algorithms” which would be the bulk of all software. On the plus side, this would essentially force all software to be open source and for all closed source software companies to submit their source code to a government regulatory body, which I am not worried about because the large software corporations would block it. Though it is earily familiar to the CCP ban on VPNs. you know, for safety and totally not for stealing intellectual property.

Ultimately the open letter from FLI, is a nothing burger, which is why I was considering just ignoring it.

However there needs to be a voice that counters the prodigious amount of misinformation and disinformation, which sometimes comes from seemingly reputable organizations, typically for profit, either of fundraising, or email harvesting. Which seems obvious when FLI says “robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions” which Forbes has referred to as “AI Doomerism”

On the whole I am only bothered by the level of misinformation that is in the letter on AI from FLI, but the claims are so baseless that I am not worried that they will be taken seriously by anyone other than alt-right luddites.
However because of how popular the "AI for disinformation" talk point is, mostly because most people do not understand how disinformation works and mistakenly think it is somehow limited by the creativity of the propagandists, needed to be addressed. More obviously, making propaganda go viral on social media, requires repeating the same message, if it was the same message rephrased dozens of times, it would not be able to become viral on social media. So once again, fails to even understand the mechanism by which disinformation even works. Accidentally or not, this concern trolling seems to be a result of falling into the filter bubble of the alt-right for so long, as to try and profit off of it, which results in talking points from the russians/CCP trying to come up with an excuse to attack the US because of the "danger of AI" after all, the truth is just whatever is the most popular according to them. That is the essence of moral relativism, and disinformation.

Similar Posts

  • |

    对 “批判性思维为何已死–彼得-博格豪森 ”的评论

    这是对: 为什么批判性思维死了 – 彼得·博格霍斯(Peter Boghossian) 原始视频的描述是: Peter Boghossian是美国哲学家。 十年来,他一直是波特兰州立大学哲学教授,但在学院对“申诉研究事务”的回应之后辞职。 这需要Boghossian-与James Lindsay和Helen Pluckrose一起 – 将虚假论文提交了与性别研究和其他类似学科有关的同行评审出版物,以讽刺此类领域的学术质量差。 最近,Boghossian一直从事他所说的“街头认识论”,他鼓励人们在公共场所审查他们的观点,为什么持有他们的观点以及改变主意所需要的。 在这里查看他的频道: 演讲者: Konstantin Kisin = [KK] Francis Foster = [FF] Peter Boghossian = [PB] 传奇:🖖:vulcan_salute:同意, thumbsdown:对于分歧, ✋:RAISED_HAND:用于更正和澄清 ok_hand:sarcasm或sarcasm或mis reprentation point_p pinch:point_point_point pinch:point pointifific flod promific of filfly of logbr incorrect.🧘 :lotus_position: CosmoBuddhist belief👀 :eyes:💬 :speech_balloon:🗨️ :left_speech_bubble:💭 :thought_balloon: CosmoBuddhist Opinion 介绍…

  • 机器智能的崛起 计算机国际象棋

    就在19年前,当 IBM的超级计算机深蓝色击败Garry Kasparov 时,就实现了AI世界的一个里程碑。 在此之前,他是不败的世界国际象棋冠军,这可能是有史以来最伟大的人类球员。 这是AI简短历史上的重要事件。 自1970年代以来,计算机国际象棋程序就一直在下棋,并提高了他们的比赛水平将击败绝大多数人口。 我自己记得在1980年代初购买国际象棋计划,该计划提供了从初学者到高级的6个级别的比赛。 即使那样,我还是很难击败高于3级的机器。到卡斯帕罗夫(Kasparov)发挥深蓝色时,国际象棋游戏软件的质量正在迅速提高。 但是,包括卡斯帕罗夫本人在内的大多数专家都认为击败大师的步伐是非常不可能的。 比赛于1997年5月在纽约举行,涉及六场比赛中的最佳成绩。 卡斯帕罗夫(Kasparov)赢得了第一场比赛,但在第二场比赛中意外击败。 卡斯帕罗夫(Kasparov)显然对这场失败感到震��,第二天的新闻发布会上,他指责深蓝色作弊。 他通过声称表现出不可预测的行为来理解这一点,他认为这是由于IBM编程团队在比赛中篡改所致。 规则规定,程序员可以更改游戏之间的程序,但在游戏期间不会更改程序。 IBM团队抓住了卡斯帕罗夫(Kasparov)措手不及,因为他相信计算机国际象棋程序虽然快速且计算机上无瑕,但由于其可预测的敷衍了事的行为而不会宣称大师的头皮。 在Kasparov在第一场比赛中击败了Deep Blue之后,IBM团队在软件中产生了更多随机的不可预测性。 它起作用了,深蓝色继续赢得比赛。 直到这场失败,卡斯帕罗夫一直有理由对机器智能的限制进行一些理由。 对于深蓝色,本质上使用了AI技术,当时涉及“蛮力”搜索以在国际象棋中获胜。 蛮力搜索是AI 早期的常用范式,它试图通过迅速通过数百万的动作组合来迅速搜索到具有计算机力量的对手 – 在深蓝色的情况下,分析了超过2亿个可能的动作,每秒 。 使用修剪方法通常会减少搜索空间(即可能的移动)。 这很重要,因为在国际象棋比赛中,球员通常仅限于每举动三分钟的时间。 但是,任何人都无法在一生中分析2亿可能的举动,更不用说一秒钟了。 但这对当时的卡斯帕罗夫来说并不重要,因为他认为人类的智慧和多年经验使他具有直觉的见解,而他不需要分析。 确实,当他曾经被问到他每秒分析多少动作时,他宣称:“不到一个”。 这意味着当时的战线是在愚蠢机器的卓越计算能力和人类大师的创意,有见地的天才之间广泛绘制的。 但是19年了,AI世界发生了很大变化。 如今,正如卡斯帕罗夫(Kasparov)本人所承认的那样:“一款运行免费的国际象棋计划的体面笔记本电脑将粉碎深蓝色和任何人类的祖母。象棋机器的跳跃是可预测和弱的,到了可怕的强者,只花了十二年 ”。 卡斯帕罗夫(Kasparov)似乎已成为一个convert依,现在识别了计算机国际象棋对人类国际象棋群众的好处的见解和发现。 他为什么现在这么说? 因为计算机硬件继续保持不懈的速度,但AI程序也不再像AI初期那样依靠蛮力搜索算法。 如今,语言翻译程序或无人驾驶汽车和高级国际象棋程序的AI使用技术,例如遗传算法和神经网络 – 更类似于人类智能的工作方式。 这些技术提供的是以前的技术所没有的,这既是匹配模仿人类思维的模式的能力,也可以学习的能力。 优秀的人类国际象棋参与者,例如其他主题领域的专家,使用根据经验建立的模式识别技能,而AI技术现在变得擅长于模式匹配 – 直到最近,许多人认为这不太可能。 学习技术可以改善国际象棋软件并将其提高到新的水平。 据说人类进化的关键里程碑之一是时间,估计是100万年前,当时我们的灵长类动物祖先是通过观察他人在工作中学到的。 达到了这一点,花了数十亿年的生物进化。 但是,现在许多人认为,在未来几十年中,AI计划将获得与人类相同的学习能力水平。 这确实令人惊讶,并提出一个问题,AI将我们带到哪里? 我将在下一篇文章中进一步讨论。

  • Bridging Mindfulness and Philosophy_Philosophy of Meditation_5_with Evan Thompson

    This a critique of https://www.youtube.com/watch?v=Gos-K8Lstbk In the fifth episode of the Philosophy of Meditation series, hosts John Vervaeke and Rick Repetti engage with Evan Thompson, a renowned cognitive scientist and philosopher, to explore the symbiosis of meditation, cognitive science, and philosophy. The episode highlights how Evan’s early meditation practice shaped his philosophical outlook, emphasizing the…