Art, death and imagined conversations
Sorry, this one is long. I sounded off too hard.
Welcome to this week’s free edition of The Terminal. If you’d like to support this newsletter by becoming a paying subscriber — and get access to subscriber-only posts — hit the button below.
Guts and glory
I wrote a story at VICE about the gore community: the loose constellation of influencers, content creators and communities at the far fringe of the online true crime world who simply love watching videos of people dying violently.
Gore has a long and rich history at the weirder fringe of the horror fandom. The year 1978 saw the release of the so-called “mondo horror” movie Faces of Death, which combined obviously fake footage of Satanic ritual murders and people eating monkey brains with genuine scenes of chaos and death culled from newsreel and police footage. Derided as cheap and tasteless at the time, and banned in multiple countries (though possibly not the 40 it sensationally claimed in its marketing), it quickly achieved cult status among an echelon of extreme horror diehards who wanted something more fucked up than exploitation cinema could offer. They wanted something real.
A robot walks into a bar
After a few weeks of Twitter entertaining itself with AI-generated images, we’re back on the carousel again. OpenAI recently opened up the beta of its generative language model GPT-3 to whoever signs up — you get a handful of free credits if you head to that link — and, naturally, everyone started using it for funnies.
As far as I can tell, it kicked off with various accounts using the platform to create fake 4chan greentext memes, as here:
Soon enough, the telltale format of the highlighted green text was everywhere. Again, as with the image generation algorithms, you realise there are essentially two very different phases of release for this sort of thing. First there’s the early beta list, when people with an actual interest in this stuff use it to experiment with the tech and build interesting applications. Then the general public gets their hands on it and uses it for elaborate fart jokes.
I’m obviously firmly in the latter camp. I was making myself laugh until I couldn’t breathe generating transcripts of absurd conversations between public figures, which GPT-3 is particularly good at with the right inputs. It’s also good at professional correspondence, as this very funny exchange posted by Ben Jenkins shows.
Write a transcript of a conversation between Richard Nixon and [former White House Chief of Staff] John Haldeman about Super Mario. Nixon seems to believe he is a real person, and Haldeman is subtly trying to disabuse him of this notion.
Nixon: I've been hearing a lot about this Super Mario character lately. He's quite the phenomenon, isn't he?
Haldeman: Well, yes, Mr. President, he is quite popular.
Nixon: I understand he's some sort of Italian plumber who goes around saving people from turtles and whatnot.
Haldeman: Well, that's one way to describe him, I suppose.
Nixon: Do you think he could be real? I mean, could there really be an Italian plumber running around saving people?
Haldeman: I'm not sure, Mr. President. I think it's unlikely.
Nixon: But you can't rule it out, can you? I mean, stranger things have happened.
Haldeman: I suppose that's true, but I still think it's unlikely.
Nixon: Well, I'll tell you what, Bob. I want you to look into this Super Mario character. See if you can find out if he's real or not. I'm curious to know.
To put it through its paces, I also wanted to see if it could emulate distinctive comedic styles while generating ‘new’ content, so I tried to get it to spit out captions for fictional Far Side cartoons. I started with this:
Write the caption for a Far Side cartoon. The cartoon depicts two monkeys dressed as detectives. A chalk outline of a banana is on the ground.
"I don't care what the lab says, this was no suicide."
Then I tested it in the default home terrain of the 21st-century edgy joke — September 11:
Write the caption for a Far Side cartoon. The cartoon depicts two 9/11 hijackers in the cockpit of the plane. The Twin Towers are visible through the plane's windshield. One of the hijackers is saying something to the other with a concerned look on his face.
"I'm telling you, Ahmed, this does not look like Cleveland."
First, full disclosure: in both of these cases I had to go through a number of re-rolls to land on a joke that a) made sense and b) was actually funny. But neither is bad, as far as cartoon jokes go. The 9/11 joke in particular you could imagine Larson doing in some alternate universe.
But here’s the clincher: a huge part of the comedy in the vast majority of these gags is the fact they were produced by a bot. If Gary Larson had actually written those captions they might be funny, but they wouldn’t be standouts in his body of work. (Both jokes are pretty cliché in form and content, which is probably why the bot was able to pull them out of the ether.) Similarly, if a human writer had produced that Nixon/Haldeman exchange, it definitely wouldn’t be nearly as funny. “A bot wrote this” is load-bearing to the joke, along with the uncanny closeness to the tone of both men’s speech. It seems the main difference between finding these posts insanely funny and not finding them funny at all is whether you are at all amused by the fact it’s a sophisticated pattern-matching robot spitting out these uncanny bits of pop culture ephemera.
If you don’t care, you’re hardly going to be impressed by the comedy itself, which tends to consist of concepts smashed together without understanding, but in a linguistically pleasing way. Over at Dirt, Elan Ullendorff makes a similar point about the distorted lo-fi memes people have been making with DALL-E mini and other generative AI bots. “Mashing together cultural mainstays has always been a reliable source for memes; the difference here is that you can also see what would happen if a computer tried to visualize it,” he writes. “No matter if the image is any good; in fact in many cases the image barely needs to exist at all. The mere idea that it is able to exist is enough to unlock our sense of wonder.”
This computer’s-eye view is an interesting quality to try to come to grips with. There’s an obvious — and definitely not unwarranted — fear that this tech will make a lot of rote writing and creative work redundant. It may well. But I think we’re still in the phase where we’re not entirely sure whether we’re actually impressed by this stuff, or just trapped in the uncanny dazzlement of being ‘seen’ by the machine. Who knows how long that honeymoon will last, especially as this tech becomes better and more widespread.
I remembered this bit from deep learning critic Gary Marcus from back in 2020, when GPT-3 was first released and demonstrated:
At first glance, GPT-3 seems to have an impressive ability to produce human-like text. And we don't doubt that it can be used to produce entertaining surrealist fiction; other commercial applications may emerge as well. But accuracy is not its strong point. If you dig deeper, you discover that something’s amiss: although its output is grammatical, and even impressively idiomatic, its comprehension of the world is often seriously off, which means you can never really trust what it says.
Perhaps that’s why people have been so compelled by the comedic output of GPT-3. Comedy doesn’t demand internal consistency or logic, or a rich understanding of the world. If you laugh, you laugh. The robot is merely building on a long and rich human tradition of crashing two disparate ideas together like toy trucks and laughing at the result, and doing a pretty good job of it. And if it can do a decent Richard Nixon impression, all the better.
I will never hesitate to share any story that seems to lie at the somewhat rare (but increasingly common) intersection of consumer technology and the occult. From The Verge:
Amazon has revealed an experimental Alexa feature that allows the AI assistant to mimic the voices of users’ dead relatives.
The company demoed the feature at its annual MARS conference, showing a video in which a child asks Alexa to read a bedtime story in the voice of his dead grandmother.
“As you saw in this experience, instead of Alexa’s voice reading the book, it’s the kid’s grandma’s voice,” said Rohit Prasad, Amazon’s head scientist for Alexa AI. Prasad introduced the clip by saying that adding “human attributes” to AI systems was increasingly important “in these times of the ongoing pandemic, when so many of us have lost someone we love.”
“While AI can’t eliminate that pain of loss, it can definitely make their memories last,” said Prasad.
AI voice synthesis is pretty commonplace, and is everywhere in entertainment nowadays, mostly in subtle ways. You probably remember the controversy over the use of a simulated voice in the Anthony Bourdain documentary. The backlash to that was particularly interesting. The filmmakers used software to simulate Bourdain’s voice reading three pretty unremarkable lines from an email he sent to a friend. It was controversial because it raised uncomfortable questions about death and consent, and turned into a shitfight between the producers and Bourdain’s relatives.
I’m always interested in the weird ways consumer tech is used to elide the externalities of death and the grieving process. As an example, I’m fascinated by the ways social media tries to reconcile the relative permanency of the internet with the absolute permanency of death, like Facebook’s memorial pages, which give family members a limited means of turning their loved ones’ profiles into bespoke digital shrines.
It’s wild that this use case is perhaps the very first way AI audio tech has been pitched to the public at large. Amazon is a company that absolutely does not care how closely its product lineup aligns with whatever dystopian example from literature you’d care to name, so maybe it isn’t overly weird that it’s proposing to have your dead grandma’s voice say “Coming right up!” when you ask your Alexa to turn your smart lights off.
Apes of wrath
This is a clip from ApeFest 2022, which is the annual Bored Ape Yacht Club party that runs in parallel to the bacchanalia (loosely understood) that is NFT NYC (which is exactly what it sounds like). It, as Ryan Broderick writes at Garbage Day, amounts to an effort to “recreate Art Basel with a bunch of bankers.”
I can only say that ‘LCD Soundsystem playing at a cartoon monkey picture party for an audience of various subgenres of Discord mod’ sounds less like the birth of a brave new era than the shuddering death of an old one. If this is supposed to be the beating heart of the new culture, I’m honestly not really seeing it. It seems more like a retroactive effort to astroturf a cultural edifice onto a marketplace — a marketplace which isn’t particularly vibrant at this precise moment.
Maybe I’m being uncharitable. It’s a community, and they obviously find it nourishing in some way. I’ll have two beers and get back to you on this.
“Is Web3 culture similar to Amway culture?” There’s no shortage of comparisons between the crypto world and multi-level marketing, for reasons I assume are obvious, but this was good on the cultural dimensions of that comparison.
Interesting profile of a woman who has dedicated her life to being a Sandy Hook truther. While reading it I was struck by the fact that, despite ‘Sandy Hook truther’ being the protoplasmic form of the modern day Facebook conspiracy theorist, you tend to experience them as a mass rather than as individuals.
On the ethics of data scraping: “Every day, without our realising it, the images, text and other data we post for our own purposes are scrutinised and collected by countless outside eyes for purposes we are unaware of and would often not consent to if we were aware of them.”
Another piece on why Google Search sucks now, but with a bit more historical context. “Google is still useful for many, but the harder question is why its results feel more sterile than they did five years ago.”
Thought this story about coronal mass ejections on the Sun was interesting; you may find it unsettling. Could go either way.