Paco Xander Nathan
3 min read · Nov 24, 2023


Hi Chris, great to hear!

Much of what needs to be said was articulated already so very well by Sasha Luccioni and Emily Bender.

* https://twitter.com/SashaMTL/status/1728072910093312473

* https://twitter.com/emilymbender/status/1727922270855930354

Setting the obvious AGI false narratives aside, LLMs as a technology represent useful ML approaches. For example, take a look at what we’ve been doing at Argilla.io for the past ~7 years, and check out what our friends Hugging Face have been doing during this same period.

* https://argilla.io/

* https://huggingface.co/

In most instances, AGI is either used as a term of science fiction or as a tactic in research fraud and related commercial disruption attempts. Either way, it’s a literary conceit at best.

That said, AGI has clearly become a dog-whistle. Its close adjacencies are no coincidence: accelerationism and new variants of eugenics. With those come the usual suspects of rampant misogyny, racism, exploitation of anyone who doesn’t look like a wealthy white male, enablement for far-right groups and insurrection, etc. In a word, libertarianism.

While I have no training in psychiatric care, my spouse does that kind of work – caring for people with deeply psychotic disorders. I don’t want to speak for anyone else, though let’s just say that many themes from the entire OAi saga have been quite familiar.

Most of the “grandfathers of deep learning” and some of their more notable grad students tend to exhibit symptoms of psychotic disorders – except for Yann LeCun. Many of the same people also tend to exhibit symptoms of rather extreme narcissism.

The two main false narratives being promoted, which turn out to be exceptionally harmful in practice, are:

* AGI is anything other than a literary conceit.

* AI splits into two camps: “Doomers” and “Accelerationists”.

From a social perspective, the problem is the following …

There are tons of young-ish people around the SF Bay Area and other tech hubs calling themselves “AI Engineers” and gleefully placing e/acc or .eth after their usernames on X, espousing belief systems which fit neatly into the TESCREAL bundle. Most of these people are bright-but-not-so-bright, i.e., the research equivalent of gold rush-era buskers. Painfully naïve is probably a better term for the ilk.

* https://www.salon.com/2023/06/11/ai-and-the-of-human-extinction-what-are-the-tech-bros-worried-about-its-not-you-and-me/

* https://akjournals.com/view/journals/2054/aop/article-10.1556-2054.2023.00292/article-10.1556-2054.2023.00292.xml#B104

Meanwhile, their ringleaders are anything but naïve. More importantly, they share three things in common:

* wielding world-leading organizations for online marketing

* leveraging enormous amounts of capital

* wanting to make life more friendly for billionaires, at the expense of, say, addressing climate crises

They also happen to be dangerously adjacent to people who really, really want to leverage disinformation and insurrection to undermine democracy.

The culpable are using marketing, capital, and research fraud – plus the armies of naïve techies – to accomplish the above. The professional, economic, and moral implications of what these goons are doing are enormous. But not so much for the technologies mentioned.

In our team’s analysis, a handful of right-leaning wealthy individuals, who tend to be in the AI headlines, are likely to face intense federal scrutiny in the near term. My job is to stay at least two hops away from anyone implicated when “The Troubles” hit. Given how close I am to Silicon Valley through personal connections, keeping that much distance will require burning a metric fuckton of professional connections ASAP.

Written by Paco Xander Nathan

evil mad scientist @ Senzing ; https://derwen.ai/paco ; @pacoid.bsky.social ; lives on an apple orchard in the coastal redwoods /|\
