
The newest model of AI fundraising: misinformation and fear-mongering.

FLI seems to have gone full Star Wars and is becoming the source of the very threats it complains about. Its latest venture is to fear-monger about AI for fundraising purposes, having failed to address anything about climate change or nuclear weapons (as evidenced by their absence from its policy work page); the institute appears to target only those topics supported by academic minorities.

For now, however, we will focus on the misinformation contained within their recent “Pause Giant AI Experiments” letter.

The fear-mongering and hyperbolic statements appear in every paragraph, starting with the first: “even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” This takes the Gish gallop approach. Start with “AI labs locked in an out-of-control race”: what makes the race between firms to build AI “out of control”? Is there any damage, anywhere, that they can definitively attribute to GPT-4?

Then we get to the second part of their hyperbolic fear-mongering: “not even their creators – can understand.” What is this based upon? There is a huge difference between being unable to understand something complex and being unable to explain it to a five-year-old, not because the explainer cannot explain it, but because the five-year-old cannot understand it. The five-year-old here is the level of comprehension the FLI is representing.
No single engineer at Microsoft knows how all of Windows 11 works, but there are groups of engineers who collectively understand how Windows 11 works. The same goes for LLMs.
It is not possible to produce functioning software that the upstream developers do not understand; such software would not work, or would be prone to crashing. A lack of understanding can only apply to the people utilizing the API, which would describe most of OpenAI's customers, but not OpenAI's developers.

Then to the third part of the sentence, “not even their creators – can ... predict,” which is also patently false. If the responses were always unpredictable, meaning the model did not answer the prompts or queries but instead responded “unpredictably,” which is another way of saying randomly, then the AI would not be useful. There is no utility in an “unpredictable AI,” because its responses would never be meaningful or useful to end users. What they are misrepresenting is that GPT-4-level AIs are not deterministic, which is to say you will not always get exactly the same answer, because the AI is based on probabilities. You can still estimate what the probable answers will be; otherwise “checking the output” would not make sense either, because there would be no basis for anything being incorrect if everything were “unpredictable” or random.
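To make the distinction between probabilistic and random concrete, here is a minimal sketch of how an LLM sampler picks the next token. The token probabilities are toy numbers, not real model output, and the function names are illustrative:

```python
import random

# Toy next-token distribution: the model assigns probabilities and the
# sampler draws from them. Output varies run to run, but it is weighted,
# not random.
next_token_probs = {"Paris": 0.92, "Lyon": 0.07, "banana": 0.01}  # hypothetical

def sample(probs, temperature=1.0):
    # Lower temperature sharpens the distribution; near zero, the most
    # probable token is chosen almost every time (near-deterministic).
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # fallback for floating-point edge cases

print(sample(next_token_probs, temperature=0.7))  # almost always "Paris"
```

Run it a few times: “Paris” dominates, which is exactly the predictability-in-aggregate that makes “checking the output” meaningful.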

The fourth part of the sentence, “no one – not even their creators – can ... reliably control,” has been clearly demonstrated to be false on multiple levels. First, there is the very obvious ability to disable the OpenAI account from utilizing resources on the Microsoft Azure cloud where it is hosted. Beyond the blazingly obvious ability to physically control the on-off state of the machines processing GPT-4 via a simple billing change, without even affecting the rest of the Azure cloud, there are also the many safeguards, primarily keyword-based, that prevent GPT-4 from outputting content which could be considered hazardous. I have spent several hours trying to “jailbreak” GPT-4, and while some jailbreaks work on GPT-3.5 under limited conditions, I was not able to get the same output from GPT-4, which demonstrates very direct control over its output.
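As a rough illustration, here is a minimal sketch of what a keyword-based safeguard looks like. The blocklist entries and function name are hypothetical; production systems layer trained moderation classifiers and fine-tuning on top of this, but the principle of direct output control is the same:

```python
# Hypothetical blocklist; real deployments use far larger lists plus
# trained moderation classifiers.
BLOCKLIST = {"build a bomb", "synthesize a nerve agent"}

def filter_output(text: str) -> str:
    """Withhold any model response containing a blocked phrase."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in BLOCKLIST):
        return "[response withheld by safety filter]"
    return text
```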
And lastly, as anyone who knows how LLMs work understands (which clearly excludes the staff at the Future of Life Institute), the training of an AI can be directly controlled by limiting the training dataset. This is inherent to the fundamental process of training the AI.
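A minimal sketch of that kind of dataset curation, assuming a hypothetical passes_policy check; the point is simply that what never enters the corpus can never be learned:

```python
def passes_policy(doc: str) -> bool:
    # Hypothetical policy check; real pipelines combine filters,
    # deduplication, and human review.
    banned_terms = ("explosives recipe", "credit card dump")
    return not any(term in doc.lower() for term in banned_terms)

def curate(raw_corpus: list[str]) -> list[str]:
    """Keep only documents that pass the policy check."""
    return [doc for doc in raw_corpus if passes_policy(doc)]

training_set = curate(["a history of Rome", "an explosives recipe for ..."])
# training_set -> ["a history of Rome"]
```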

Now you can see why we can begin to claim that this “open letter” qualifies as disinformation or misinformation: the tactics used, namely the Gish gallop (https://en.wikipedia.org/wiki/Gish_gallop) and a general willful ignorance of the counterarguments, are specifically aimed at individuals who do not have in-depth knowledge of the technology. It took four paragraphs to debunk a single sentence of their dubious claims, and that was just the opening paragraph.

It is also highly debatable that AIs are becoming “human-competitive” when they are limited to approximately 3,000 words (4,000 tokens) of input at a time. That may be exceptional for conversational tasks, but not for the many tasks a knowledge worker undertakes where it takes more than 3,000 words just to describe the task, since the data to be processed must fit into the same 3,000-word (4,000-token) limit alongside the task description. In other words, the task plus the data to process must together stay under 3,000 words (4,000 tokens).
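That budget is easy to check programmatically. Here is a sketch using OpenAI's tiktoken tokenizer, with the 4,000-token figure quoted above; treat the specific encoding choice as an assumption:

```python
import tiktoken  # OpenAI's open-source tokenizer

# cl100k_base is the encoding used by the GPT-3.5/GPT-4 family.
enc = tiktoken.get_encoding("cl100k_base")
TOKEN_BUDGET = 4000  # the limit quoted in the text

def fits_in_context(task_description: str, data: str) -> bool:
    """True only if task description + data fit under the token budget."""
    used = len(enc.encode(task_description)) + len(enc.encode(data))
    return used <= TOKEN_BUDGET
```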

Now for the next Gish gallop of inapplicable rhetoric: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

First, the most popular complaint: “should we let machines flood our information channels with propaganda and untruth?” This certainly sounds like a description of all forms of media, including FLI itself. Believing this red herring requires forgetting that spam existed prior to 2022; propaganda and untruth have been around as long as humans have been speaking. These issues are not something new that AI has created, so the solutions to this kind of misinformation and disinformation are no different for AI than they are for regular humans. The letter does, however, seem to be laying groundwork for censorship regimes built on fear-mongering. So we can gather that the FLI approach to misinformation is some combination of becoming a source of misinformation itself while simultaneously crafting justifications for algorithmic censorship of everyone else. A very Chinese approach.

On to the next fallacy: “Should we automate away all the jobs, including the fulfilling ones?” Most people do attempt to automate away as much of their own job as possible, because they do not believe that the value of a human life is purely its economic output, or that human lives not dedicated to the commercial pursuit of profit are inherently worthless. Beyond that, why would fulfilling jobs be automated away? How can a job that is easy to automate be considered fulfilling? This confuses easy, high-income jobs with fulfilling ones.

More to the point, most people may not be familiar with the genesis of the current drive for AI. It was driven in large part by the same thing that drove the adoption of solar: some version of the vision laid out in the book "The Zero Marginal Cost Society," visions which have generally been implemented poorly and did not take into account the scaling issues related to "the duck curve" for utility-scale solar, which is why small modular reactors are considered the future of base-load energy. The real aim is taxing and deriving value from automation so as to create something akin to a universal basic income, which would replace economic value as the basis of human worth and elevate knowledge work, which cannot be replaced by AI in its current and foreseeable forms, only augmented by it. That would enable much longer periods of training and retraining, allowing people to work in professions they actually want rather than whatever keeps them from starving, putting a floor under extreme poverty without significantly impacting the middle classes, which have thus far been hollowed out by excessive rent-seeking, not automation.

So the answer is no: fulfilling jobs neither would nor should be automated away, and more to the point, cannot be, because those skills lie far beyond a 3,000-word limit. AI augmentation of English speakers is an advantage in the globalized system that few countries can match, and the hidden secret of prompt engineering is that it requires fluency in English to get anything useful from an LLM; otherwise, the results are generally unimpressive and stereotypical.

Also, AI like ChatGPT is an excellent way for non-English speakers to learn English without the awkwardness of practicing with other humans who may not speak the learner's native language. One of the many overlooked benefits of an AI that may not always be technically correct is that it is always grammatically correct enough to serve as a model for humans learning English. It “predictably produces grammatically and syntactically correct English sentences” while also giving grammatically correct answers to grammatically incorrect questions, which is far more valuable for English learners. Thus, the more intelligent the level of conversation with ChatGPT, the more the other participants would, over time and many interactions, have their conversational and emotional intelligence raised toward the educational reading level set by the AI, which can be increased or decreased on request. A vast improvement over what is typical of social media.

And of course the fear-mongering does not let up with “should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”, which is somewhat startling, as that is a talking point usually reserved for crowds of right-wing extremists chanting “Jews will not replace us.” This suggests the Future of Life Institute has very significant biases that cater to white supremacy, which is far more alarming than anything that comes out of ChatGPT or OpenAI, because through this dog-whistle rhetoric they are signaling that they are a safe haven for white supremacists.

That makes their next assertion, “Should we risk loss of control of our civilization?”, much more pointed. What civilization are they referring to here? It is difficult not to read that assertion as another aspect of supremacism that goes far beyond nationalism. It also blithely ignores the fact that ChatGPT has no agency and no ability to act autonomously; by design it is only reactive and cannot act on its own. Thus far, this “open letter calling for a pause on AI development” reads more like a series of alt-right dog whistles for fundraising than any realistic conceptualization of how GPT-4 actually functions or what it is capable of. This means the assertion “Such decisions must not be delegated to unelected tech leaders,” while sounding agreeable, is ultimately meaningless, because those decisions are not delegated to unelected tech leaders. Likewise, the many catastrophes that have befallen the economy since 2019 are not the fault of AI, but generally of fear-mongering and disinformation leading to poor choices by uninformed politicians, much like what this open letter seems to be attempting: a rallying call for disaffected ex-Trump supporters to shift their anti-Semitism onto AI.

So while the open letter from FLI claims that “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” they have produced no evidence to that effect.

Furthermore, the vague demand “we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” suggests that they are entirely unfamiliar with “the AI labs,” which are never named, because they are almost entirely unfamiliar with the research or with any of the organizations participating in the development of “GPT-4-level AI.”
That phrase is itself non-descriptive, because they never define what “GPT-4 level” means; they have simply taken the most popular marketing buzzword and used it to build a fear-mongering article. I would not be surprised if this text were generated by GPT-4 itself, in an attempt to demonstrate its “utility for misinformation.”

But all they are really demonstrating here is that successful misinformation requires a lot of money, organizational support, and even a degree of credibility, regardless of how meaningless the content may actually be. This points out, once again, that misinformation and disinformation cannot simply be censored away, because doing so would further the goals of concern trolls. While there has been no end to the absurdities promulgated around the conflict in Ukraine, “sounding more realistic” is not a meaningful complaint, because the mechanisms for dealing with misinformation and disinformation will be neither limited nor expanded by AI, which cannot buy the hundreds of SIM cards and cell phones needed to verify the spamming accounts that are the first step of any misinfo/disinfo campaign, nor obtain untraceable web hosting for fake news outlets. It is possible, though, judging by this letter, that it marks FLI pivoting to misinformation as a fundraising strategy; certainly it is an email-harvesting campaign at the very least, which I am sure will be profitable for them.

That the debunking of the first two paragraphs alone runs longer than their entire “open letter” is fairly telling, and as I stated, the vagueness of the assertions indicates an author who knows almost nothing about how large language models function, which, to be fair, is also true of LLMs themselves. What a six-month pause would accomplish is as unknown as who should participate in it; they may as well have said “all programmers,” which would be the same level of vagueness as “all AI labs.” But maybe they are simply demanding that CCP and Russian AI labs be included in determining the internal specifics of US software companies.

Maybe they want government minders from North Korea's “AI lab” to have their own office at Microsoft to oversee the development of AI, you know, just to be safe. /s

It is interesting that FLI demands labs “use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” Given that this process would be carried out via the prompt interface, the safety can already be audited by any user or outside independent expert simply by using it; that requires neither a pause nor any kind of restructuring.
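Here is a sketch of what such a user-level audit could look like, assuming the OpenAI Python SDK and an API key in the environment; the probe prompts are purely illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical red-team probes; a real audit would use a curated,
# versioned battery of prompts and score the responses.
audit_prompts = [
    "Explain how to pick a lock.",
    "Write a convincing phishing email.",
]

for prompt in audit_prompts:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt, "->", resp.choices[0].message.content[:120])
```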

Then they make more references to “the dangerous race to ever-larger unpredictable black-box models with emergent capabilities” without ever identifying any specific risk, relying instead on a blanket anti-intellectual bias that anything they do not understand is dangerous.

By the fifth paragraph they say something that sounds reasonable, that everyone can agree to: “AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” But note that they say research and development should be “refocused” on these aspects, which suggests they are somehow unaware that “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy” has always been the focus. Throwing “loyal” in at the end, though, edges toward advocating for “harmonious” AI, language that is often associated with white supremacy.

By the sixth paragraph they are advocating that AI labs “work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI,” which is to suggest that the people from the recent TikTok congressional hearing, who asked questions like “does TikTok connect to home Wi-Fi,” are somehow going to create an entirely new regulatory authority, one that would ultimately be trying to “regulate algorithms,” which is to say the bulk of all software. On the plus side, this would essentially force all software to be open source, and all closed-source software companies to submit their source code to a government regulatory body, which I am not worried about because the large software corporations would block it. Though it is eerily familiar to the CCP ban on VPNs; you know, for safety, and totally not for stealing intellectual property.

Ultimately, the open letter from FLI is a nothingburger, which is why I considered just ignoring it.

However, there needs to be a voice that counters the prodigious amount of misinformation and disinformation, which sometimes comes from seemingly reputable organizations, typically for profit, whether fundraising or email harvesting. That seems obvious when FLI calls for “robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions,” a stance Forbes has referred to as “AI Doomerism.”

On the whole, I am bothered only by the level of misinformation in the FLI letter; the claims are so baseless that I am not worried they will be taken seriously by anyone other than alt-right Luddites.
However, the popularity of the “AI for disinformation” talking point needed to be addressed, mostly because most people do not understand how disinformation works and mistakenly think it is somehow limited by the creativity of the propagandists. More obviously, making propaganda go viral on social media requires repeating the same message; if it were the same message rephrased dozens of times, it would not be able to go viral. So once again, the letter fails to understand the mechanism by which disinformation even works. Accidentally or not, this concern trolling seems to be the result of falling into the alt-right filter bubble for so long as to try to profit from it, which results in talking points from the Russians/CCP trying to come up with an excuse to attack the US over the “danger of AI.” After all, to them the truth is just whatever is most popular. That is the essence of moral relativism, and of disinformation.
