Same. I’m not being critical of lab-grown meat. I think it’s a great idea.
But the pattern of things he’s got an opinion on suggests a familiarity with rationalist/EA/accelerationist/TPOT ideas.
Do you have a link? I’m interested. (Also, I see you posted something similar a couple hours before I did. Sorry I missed that!)
So it turns out the healthcare assassin has some… boutique… views. (Yeah, I know, shocker.) Things he seems to be into:
How soon until someone finds his LessWrong profile?
As anyone who’s been paying attention already knows, LLMs are merely mimics that provide the “illusion of understanding”.
As a longtime listener to Tech Won’t Save Us, I was pleasantly surprised by my phone’s notification about this week’s episode. David was charming and interesting in equal measure. I mostly knew Jack Dorsey as the absentee CEO of Twitter who let the site stagnate under his watch, but there were a lot of little details about his moderation-phobia and fash-adjacency that I wasn’t aware of.
By the way, I highly recommend the podcast to the TechTakes crowd. They cover many of the same topics from a similar perspective.
For me it gives off huge Dr. Evil vibes.
If you ever get tired of searching for pics, you could always go the lazy route and fall back on AI-generated images. But then you’d have to accept the reality that in a few years your posts would have the analog of a GeoCities webring stamped on them.
Please touch grass.
The next AI winter can’t come too soon. They’re spinning up coal-fired power plants to supply the energy required to build these LLMs.
I’ve been using DigitalOcean for years as a personal VPS box, and I’ve had no complaints. Not sure how well they’d scale (in terms of cost) for a site like this.
Anthropic’s Claude confidently and incorrectly diagnoses brain cancer based on an MRI.
Strange man posts strange thing.
This linked interview of Brian Merchant by Adam Conover is great. I highly recommend watching the whole thing.
For example, here is Adam, describing the actual reasons why striking writers were concerned about AI, followed by Brian explaining how Sam Altman et al. hype up the existential risk they themselves claim to be creating, just so they can sell themselves as the solution. Lots of really edifying stuff in this interview.
She really is insufferable. If you’ve ever listened to her Pivot podcast (do not advise), you’ll be confronted by the superficiality and banality of her hot takes. Of course, this assumes you’re able to penetrate the word salad she regularly uses to convey any point she’s trying to make. She is not a good verbal communicator.
Her co-host, “Professor” [*] Scott Galloway, isn’t much better. While more verbally articulate, his dick joke-laden takes are often even more insufferable than Swisher’s. I’m pretty sure Kara got her “use AI or be run over by progress” opinion from him; it’s one of his most frequent hot takes. He’s also one of the biggest tech hype maniacs, so of course he’s bought a ticket on the AI hype express. Before the latest AI boom, he was a crypto booster, although he’s totally memory-holed that phase of his life now that the crypto hype train has run off a cliff.
[*] I put professor in quotes, because he’s one of those people who insist on using a title that is equal parts misleading and pretentious. He doesn’t have a doctorate in anything, and while he’s technically employed by NYU’s business school, he’s a non-tenured “clinical professor”, which is pretty much the same as an adjunct. Nothing against adjunct professors, but most adjuncts I’ve known don’t go around insisting that you call them “professor” in every social interaction. It’s kind of like when Ph.D.s insist you call them “doctor”.
I wonder what percentage of fraudulent AI-generated papers would be discovered simply by searching for sentences that begin with “Certainly, …”
Eats the same bland meal every day of his life. Takes an ungodly number of pills every morning. Uses his son as his own personal blood boy. Has given himself a physical appearance that can only be described as “uncanny valley”.
I’ll never understand the extremes to which some of these tech bros will go to deny the inevitability of death.
The first comment by the first commenter is “Can we suspend Godwin’s Law for a moment?” followed by an explanation of the ways in which The Protocols of the Elders of Zion is an accurate description of reality.
Libertarianism is never far from Nazism. The Venn diagram is a circle. The only question is which circle contains the other.
I see him more as a dupe than a Cassandra. I heard him on a podcast a couple months ago talking about how he’s been having conversations with Bay Area AI researchers who are “really scared” about what they’re creating. He also spent quite a bit of time talking up Geoffrey Hinton’s AI doomer tour. So while I don’t think Ezra’s one of the Yuddite rationalists, he’s clearly been influenced by them. Given his historical ties to effective altruism, this isn’t surprising to me.
I mean, of course he loves unfettered technology and capitalism. He's a fucking billionaire. He hit the demographic lottery.
EDIT: I just noticed his list of "techno-optimist" patrons. On the list? John Galt. LMAO. The whole list is pretty much an orgy of libertarians.
Now that his alter ego has been exposed, Hanania is falling back on the “stupid things I said in my youth” chestnut. Here’s a good response to that.
Let them fight. https://openai.com/index/elon-musk-wanted-an-openai-for-profit/