Pivot to AI: Microsoft looks into taking over Torment Nexus development

  • By Amy Castor and David Gerard

  • We need your support for more posts like this. Send us money! Here’s Amy’s Patreon, and here’s David’s. Sign up today!

No one has ever attempted to move a 700-person polycule from SF to Redmond but I’m told the resources exist.

Jacob Silverman

After Sam Altman was booted from OpenAI on Friday, the one question was: what will Microsoft do?

Microsoft has put $13 billion into OpenAI. It put in $1 billion in 2019 and another $2 billion in the years since. In January 2023, the company pledged an additional $10 billion in capital and infrastructure credits — i.e., compute time on the Azure cloud — though not all of that has been drawn.

Microsoft CEO Satya Nadella announced early on Monday morning that Microsoft would be starting its own AI unit — with Sam at the helm. Microsoft would also hire Greg Brockman, the former OpenAI president who was also booted from the board and quit his job in solidarity with Sam.

Microsoft’s stock price shot up on the news — even though the deal isn’t yet signed as we write this. [Twitter, archive]

Restoring the rightful order in Silicon Valley

OpenAI is not a regular tech company. It was formed as a nonprofit to develop artificial general intelligence (AGI) to benefit “humanity as a whole” and keep developers from creating a Torment Nexus. [Twitter, archive]

OpenAI’s chartered duty is to humanity, not to making big money into bigger money. But it turns out big money is also important. Training machine learning models isn’t cheap. 

If OpenAI does put Altman back on the board, he wants the remaining board members — Adam D’Angelo, Helen Toner, Ilya Sutskever (maybe), and Tasha McCauley — replaced with Sam-friendly people who aren’t full-on AI doomsday cultists. He also wants his name cleared. 

If a Sam-friendly board is put in place, then no enterprise or government will take anything OpenAI says about AI safety seriously henceforth — and OpenAI will just become a regular tech company focused on number going up, not a research company with AI doomers guiding its ethics.

Sam is venture capital’s guy. He’s the face of “AI” for the VC world. The VCs simply could not suffer the humiliation of Sam being ousted. They want him back. 

Sam’s VC buddies have been working the press hard since Friday, trying to pressure the remaining OpenAI board members. If you see an article sourced to “multiple people familiar with discussions,” think to yourself which of the warring factions the “multiple people” likely belong to.

Right now, the VCs who put money into the for-profit arm of OpenAI are talking up the idea of suing the nonprofit board for their losses, to put added pressure on the board to resign. [Reuters]

If Altman and Brockman do go to Microsoft, OpenAI becomes an empty shell with no funding — a nonprofit board of nothing.

Sutskever, who led the coup against Sam, has already crumbled. He says he’s sorry — “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.” [Twitter, archive]

Altman now needs two of the three remaining Sam-opposed board members to flip. [Verge]

Palace intrigue

Nobody has revealed precisely why Sam got booted — but several past and present OpenAI employees who spoke with the Atlantic said the tension started with the release of ChatGPT. The company was growing too quickly. 

“After ChatGPT, there was a clear path to revenue and profit,” one source said. “You could no longer make a case for being an idealistic research lab. There were customers looking to be served here and now.” [Atlantic, archive]

OpenAI was torn between two growing factions at the company — the idealistic, like Sutskever, who feared AI taking over the world, and the commercial, like Altman and Brockman, who were pushing for more product releases, sometimes before the products were ready. 

Sutskever began to behave like some sort of cultist:

At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: “Feel the AGI! Feel the AGI!” The phrase itself was popular enough that OpenAI employees created a special “Feel the AGI” reaction emoji in Slack.

Altman, meanwhile, was trying to drum up money from Softbank and Middle Eastern investors to build a chip company so OpenAI could own its computation. He wanted an OpenAI that worked like any other fast-growing Silicon Valley startup. [Bloomberg, archive]

New seeker falls off broomstick

OpenAI has hired a second interim CEO, Twitch cofounder Emmett Shear, to replace Mira Murati, who held the position for two days. Shear wants to hire an independent investigator to find out what the heck happened here. [Twitter, archive]

Shear appears to fall into the idealistic category. He takes Eliezer Yudkowsky seriously. He also had a cameo in Yudkowsky’s Harry Potter and the Methods of Rationality. [Twitter, archive; HPMOR]

But unlike Yudkowsky, who thinks the rogue superintelligence will absolutely, positively destroy humankind one day, Shear puts the probability of AI doom at a mere 5% to 50%. [Twitter]

In the manner of AI doom prophets throughout recent history, Shear has never done anything so tawdry as showing how he worked out these numbers. You might be forgiven for thinking that these guys pull this sort of number out of their backsides so that they can announce scary numbers in a confident voice.

Altman’s job at OpenAI was not in any way technical. He dropped out of Stanford computer science after two years to chase money in startups. But he inspired the team. So OpenAI’s 770 employees want their old CEO, not this new guy.

When Shear called for an all-hands meeting on Sunday at the company’s San Francisco headquarters, the employees refused. One responded in Slack with a rude emoji. [Verge]

By Monday afternoon, 700 of the OpenAI staff had signed a letter saying that they would quit and go to Microsoft if Sam didn’t return. They wrote that they were “unable to work for or with people that lack competence, judgment and care for our mission and employees.” [Wired, archive]

Sutskever also signed the letter — because hey, we all want to find the guy who did this. Murati signed too.

Ask Clippy

Microsoft also has rights to OpenAI’s source code and training data. It could start a unit with Altman as an inspirational tech leader and let him cherry-pick whoever he wants there. Microsoft could effectively buy OpenAI for nothing. The company is already putting out feelers to OpenAI staff — though on a very noncommittal “if needed” basis. [BBC]

Whether Microsoft actually wants to swallow OpenAI is another question. Working for a startup is a very different experience from working for a large corporate office supply company. Nobody who thought they were changing the world is going to stick around to work on text generation for Outlook email, including Sam. And Microsoft is smart enough to realize this.

Microsoft’s ideal outcome is that Altman goes back to OpenAI, and the flow of cash and the firewalling from culpability continue as they did before all this unpleasantness.

What Microsoft wants is to rent out computation on Azure. Cloud computing is a commodity, and one that’s only getting cheaper. But the supply of graphics cards for number crunching is rather more constricted. [Paris Marx]

If there’s demand for “AI” products — whether or not they even work — then there’s money in renting out the number crunching that the machine learning will need. That’s what Microsoft is in this for.

This provides a more robust and business-friendly substrate — without those annoying “ethics” people — for AI’s real use case: abusing labor and customers.

Update 11/22/2023: Sam has been reinstated. Venture capital won and OpenAI is now just another startup whose goal is to grow like a cancer. The paperclip maximizer is satisfied. [Twitter, archive]

Image: Sam Altman

Pivot to AI: Replacing Sam Altman with a very small shell script

We’ve got a new Pivot to AI post. This one is on David’s blog. [David Gerard]

OpenAI has dumped its CEO, Sam Altman. You just don’t come out and call your CEO a liar in a press release!

The world is presuming that there’s something absolutely awful about Altman just waiting to come out. But we suspect the reason for the firing is much simpler: the AI doom cultists kicked Altman out for not being enough of a cultist.

Image: Sam and Ilya, back in the happier days of June 2023

Pivot to AI: Pay no attention to the man behind the curtain

  • By Amy Castor and David Gerard
  • We need your support for more posts like this. Send us money! Here’s Amy’s Patreon, and here’s David’s. Sign up today!

“all this talk of AI xrisk has the stink of marketing too. Ronald McDonald telling people that he has a bunker in New Zealand because the new burger they’re developing in R&D might be so delicious society will crumble.”

Chris Martin

Crypto’s being dull again — but thankfully, AI has been dull too. The shine is coming off. So we’re back on the AI beat.

The AI winter will be privatized

Since the buzzword “artificial intelligence” was coined in the 1950s, AI has gone through several boom and bust cycles.

A new technological approach looks interesting and gets a few results. It gets ridiculously hyped up and lands funding. The tech turns out to be not so great, so the funding gets cut. The down cycles are called AI winters.

Past AI booms were funded mainly by the US Department of Defense. But the current AI boom has been almost completely funded by venture capital.

The VCs who spent 2021 and 2022 pouring money into crypto startups are pivoting to AI startups, because people buy the idea that AI will change the world. In the first half of 2023, VCs invested more than $40 billion into AI startups, and $11 billion just in May 2023. This is even as overall VC funding for startups dropped by half in the same period from the year before. [Reuters; Washington Post]

The entire NASDAQ is being propped up by AI. It’s one of the few fields still hiring.

In contrast, the DOD only requested $1.8 billion for AI funding in its 2024 budget. [DefenseScoop]

So why are VCs pouring money into AI? 

Venture capital is professional gambling. VCs are looking for a liquidity event. One big winner can pay for a lot of failures.
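To make the arithmetic concrete, here’s a toy illustration. Every number below is invented, not anyone’s actual fund returns:

```python
# Toy venture fund arithmetic (all numbers invented for illustration).
stake = 5_000_000                  # $5M into each of 20 startups
multiples = [0] * 19 + [40]        # 19 go to zero, one exits at 40x

invested = stake * len(multiples)             # $100M deployed
returned = sum(m * stake for m in multiples)  # $200M back
print(f"{invested:,} in, {returned:,} out")   # the fund doubles anyway
```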

Finding someone to buy a startup you’ve funded takes marketing and hype. The company doing anything useful, or anything that even works, is optional.

What’s the exit plan for AI VCs? Where’s the liquidity event? Do they just hope the startups they fund will do an initial public offering or get acquired by a tech giant before the market realizes AI is running out of steam?

We’re largely talking about startups whose business model is sending queries to OpenAI.

At least with “Web3,” the VCs would just dump altcoins on retail investors via their very good friends at Coinbase. But with AI, we can’t see an obvious exit strategy beyond finding a greater fool.

Pay no attention to the man behind the curtain

The magical claim of machine learning is that if you give the computer data, the computer will work out the relations in the data all by itself. Amazing!

In practice, everything in machine learning is incredibly hand-tweaked. Before AI can find patterns in data, all of that data has to be tagged by hand — and any output that might embarrass the company has to be filtered, also by hand.
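A minimal sketch of those two human touchpoints: hand-tagged data going in, a hand-curated filter on the way out. Everything here is invented for illustration:

```python
# Humans sit at both ends of the "automatic" pipeline.

# 1. Supervised learning needs labeled examples. A person tagged every one.
training_data = [
    ("this product is great", "positive"),        # human-written label
    ("this product broke in a day", "negative"),  # human-written label
]

# 2. Embarrassing output gets caught by a hand-maintained blocklist.
BLOCKLIST = {"rival_brand", "pending lawsuit"}    # hypothetical entries

def filter_output(text: str) -> str:
    """Suppress anything the company would rather not see in a screenshot."""
    if any(term in text.lower() for term in BLOCKLIST):
        return "[response withheld]"
    return text

print(filter_output("Ask about the pending lawsuit"))  # [response withheld]
```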

Commercial AI runs on underpaid workers in English-speaking countries in Africa creating new training data and better responses to queries. It’s a painstaking and laborious process that doesn’t get talked about nearly enough. 

The workers do individual disconnected actions all day, every day — so-called “tasks” — working for companies like Remotasks, a subsidiary of Scale AI, and doing a huge amount of the work behind OpenAI.

AI doesn’t remove human effort. It just makes it much more alienated.

There’s an obvious hack here. If you are an AI task worker, your goal is to get paid as much as possible without too much effort. So why not use some of the well-known tools for this sort of job? [New York]

Another Kenyan annotator said that after his account got suspended for mysterious reasons, he decided to stop playing by the rules. Now, he runs multiple accounts in multiple countries, tasking wherever the pay is best. He works fast and gets high marks for quality, he said, thanks to ChatGPT. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.
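The workflow he describes is easy enough to sketch. Here ask_chatbot is a hypothetical stand-in for pasting the task into ChatGPT, and the seven criteria are likewise invented, since the article doesn’t name them:

```python
# One AI training the other: use a chatbot to rate another chatbot's answers.
CRITERIA = ["helpfulness", "accuracy", "tone", "safety",
            "relevance", "clarity", "completeness"]   # hypothetical seven

def ask_chatbot(prompt: str) -> str:
    # Stand-in for pasting the prompt into ChatGPT and copying the reply.
    raise NotImplementedError("paste into the chatbot of your choice")

def do_task(response_to_rate: str) -> dict[str, str]:
    # The $10 task: score the other bot's response on each criterion.
    return {c: ask_chatbot(f"Rate this response 1-5 for {c}: {response_to_rate}")
            for c in CRITERIA}
```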

Remember, the important AI use case is getting venture capital funding. Why buy or rent expensive computing when you can just pay people in poor countries to fake it? Many “AI” systems are just a fancier version of the original Mechanical Turk.

Facebook’s M, launched in 2015, was an imitation of Apple’s Siri virtual assistant. The trick was that hard queries would be punted to a human. Over 70% of queries ended up being answered by a human pretending to be the bot. M was shut down in early 2018.
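The pattern behind M is simple enough to sketch. All names and thresholds below are hypothetical, not Facebook’s actual code:

```python
# Human-fallback pattern: the "AI" answers only when it's confident.
def bot_answer(query: str) -> tuple[str, float]:
    """Stub model: returns (answer, confidence)."""
    return "I can book that for you!", 0.2   # hard queries score low

def human_answer(query: str) -> str:
    # In M's case, this branch handled over 70% of queries.
    return "(a contractor, paid by the hour, typing as the bot)"

def handle(query: str, threshold: float = 0.7) -> str:
    answer, confidence = bot_answer(query)
    return answer if confidence >= threshold else human_answer(query)

print(handle("book me a restaurant for eight people"))
```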

Kaedim is a startup that claims to turn two-dimensional sketches into 3-D models using “machine learning.” The work is actually done entirely by human modelers getting paid $1–$4 per 15-minute job. But then, the founder, Konstantina Psoma, was a Forbes 30 Under 30. [404 Media; Forbes]

The LLM is for spam

OpenAI’s AI-powered text generators fueled a lot of the hype around AI — but the real-world use case for large language models is overwhelmingly to generate content for spamming. [Vox]

The use case for AI is spam web pages filled with ads. Google considers LLM-based ad landing pages to be spam, but seems unable or unwilling to detect and penalize it. [MIT Technology Review; The Verge]

The use case for AI is spam books on Amazon Kindle. Most are “free” Kindle Unlimited titles earning money through subscriber pageviews rather than outright purchases. [Daily Dot]

The use case for AI is spam news sites for ad revenue. [NewsGuard]

The use case for AI is spam phone calls for automated scamming — using AI to clone people’s voices. [CBS]

The use case for AI is spam Amazon reviews and spam tweets. [Vice]

The use case for AI is spam videos that advertise malware. [DigitalTrends]

The use case for AI is spam sales sites on Etsy. [The Atlantic, archive]

The use case for AI is spam science fiction story submissions. Clarkesworld had to close submissions because of the flood of unusable generated garbage. The robot apocalypse in action. [The Register]

Supertoys last all summer long

End users don’t actually want AI-based products. Machine learning systems can generate funny text and pictures to show your friends on social media. But even that’s wearing thin — users mostly see LLM output in the form of spam.

LLM writing style and image generator drawing style are now seen as signs of low quality work. You can certainly achieve artistic quality with AI manipulation, as in this music video — but even this just works on its novelty value. [YouTube]

For commercial purposes, the only use case for AI is still to replace quality work with cheap ersatz bot output — in the hope of beating down labor costs.

Even then, the AI just isn’t up to the task.

Microsoft put $10 billion into OpenAI. The Bing search engine added AI chat — and it had almost no effect on user numbers. It turns out that search engine users don’t want weird bot responses full of errors. [ZDNet]

The ChatGPT website’s visitor numbers went down 10% in June 2023. LLM text generators don’t deliver commercial results, and novelty only goes so far. [Washington Post]

After GPT-3 came out, OpenAI took three years to make an updated version. GPT-3.5 was released as a stop-gap in October 2022. Then GPT-4 finally came out in March 2023! But GPT-4 turns out to be eight instances of GPT-3 in a trenchcoat. The technology is running out of steam. [blog post; Twitter, archive]
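The “eight instances in a trenchcoat” line refers to the rumor that GPT-4 is a mixture of experts, where a small router picks which of eight expert models handles each input. A toy version, with random weights standing in for anything trained:

```python
import numpy as np

# Toy mixture-of-experts: the router decides which of the eight "instances
# in the trenchcoat" runs on each input. All weights are random stand-ins.
rng = np.random.default_rng(0)
N_EXPERTS, DIM = 8, 16
router_w = rng.normal(size=(DIM, N_EXPERTS))
experts = [rng.normal(size=(DIM, DIM)) for _ in range(N_EXPERTS)]

def forward(x: np.ndarray) -> np.ndarray:
    scores = x @ router_w               # one routing score per expert
    chosen = int(np.argmax(scores))     # top-1 routing: one expert wins
    return experts[chosen] @ x          # the other seven sit idle

print(forward(rng.normal(size=DIM)).shape)   # (16,)
```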

Working at all will be in the next version

The deeper problem is that many AI systems simply don’t work. The 2022 paper “The fallacy of AI functionality” notes that AI systems are often “constructed haphazardly, deployed indiscriminately, and promoted deceptively.”

Still, machine learning systems do some interesting things, a few of which are even genuinely useful. We asked GitHub, and they told us they encourage their own employees to use the GitHub Copilot AI-based autocomplete system for internal coding — with due care and attention. We know of other coders who find Copilot far less work than writing the boilerplate by hand.

(Though Google has forbidden its coders from using its AI chatbot, Bard, to generate internal code.) [The Register]

Policy-makers and scholars — not just the media — tend to propagate AI hype. Even when they try to be cautious, they frame the question as the ethics of deployment, and presume the systems do what they’re claimed to do — when they often just don’t.

Ethical considerations come after you’ve checked basic functionality. Always put functionality first. Does the system work? Way too often, it just doesn’t. Test and measure. [arXiv, PDF, 2022]
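“Test and measure” can start very small: check the system against a trivial baseline before anyone debates deployment ethics. A sketch with made-up labels and predictions:

```python
# Does the "AI" beat always guessing the most common answer?
labels      = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1]   # ground truth (invented)
predictions = [1, 0, 1, 1, 1, 1, 0, 0, 1, 1]   # system output (invented)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
baseline = max(labels.count(0), labels.count(1)) / len(labels)

print(f"system {accuracy:.0%} vs always-guess-majority {baseline:.0%}")
# system 70% vs always-guess-majority 80% -- functionality check failed.
```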

AI is the new crypto mining

In 2017, the hot buzzword was “blockchain” — because the price of bitcoin was going up. Struggling businesses would add the word “blockchain” to their name or their mission statement, in the hope their stock price would go up. Long Island Iced Tea became Long Blockchain and saw its shares surge 394%. Shares in biotech company Bioptix doubled in price when it changed its name to Riot Blockchain and pivoted to bitcoin mining. [Bloomberg, 2017, archive; Bloomberg, 2017, archive]

The same is now happening with AI. Only it’s not just the venture capitalists — even the crypto miners are pivoting to AI.

Bitcoin crashed last year and crypto mining is screwed. As far as we can work out, the only business plan was to get foolish investors’ money during the bubble, then go bankrupt.

In mid-2024, the bitcoin mining reward will halve again. So the mining companies are desperate to find other sources of income. 

Ethereum moved to proof of stake in September 2022 and told its miners to just bugger off. Ethereum was mined on general-purpose video cards — so miners have a glut of slightly charred number-crunching machinery.

Hive Blockchain in Vancouver is pivoting to AI to repurpose its pile of video cards. It’s also changed its name to Hive Digital Technologies. [Bloomberg, archive; press release]

Marathon Digital claims that “over time you’re going to see that blockchain technologies and AI have a very tight coupling.” No, us neither. Marathon is doubling and tripling down on bitcoin mining — but, buzzwords! [Decrypt]

Nvidia makes the highest-performance video cards. The GPU processors on these cards turn out to be useful for massively parallel computations in general — such as running the calculations needed to train machine learning models. Nvidia is having an excellent year and its market cap is over $1 trillion.
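The reason the same silicon serves both markets: the workhorse of model training is dense matrix multiplication, where every output element can be computed independently of every other. That is exactly the shape of work thousands of GPU cores are built for. A CPU-side sketch of the operation in question:

```python
import numpy as np

# The core computation of ML training: big matrix multiply-accumulates.
# Each cell of C depends on one row of A and one column of B and nothing
# else, so thousands of GPU cores can compute cells in parallel.
A = np.random.rand(1024, 1024).astype(np.float32)
B = np.random.rand(1024, 1024).astype(np.float32)

C = A @ B    # ~2 * 1024**3, about 2.1 billion floating-point operations
print(C.shape)   # (1024, 1024)
```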

So AI can take over from crypto in yet another way — carbon emissions from running all those video cards.

AI’s massive compute load doesn’t just generate carbon — it uses huge amounts of fresh water for cooling. Microsoft’s water usage went up 34% between 2021 and 2022, and they blame AI computation. ChatGPT uses about 500 mL of water every time you have a conversation with it. [AP]

We don’t yet have a Digiconomist of AI carbon emissions. Go start one.