I don't think Henry Ford would have guessed, when he mass-produced the automobile, that dogs would enjoy sticking their heads out of the window. I'm not sure that Tim Berners-Lee guessed the utility of the web for pornography, and I don't know that the founders of YouPorn guessed that so many people had sexual fantasies about step-siblings.
Which is wild. The amount of step-sibling porn seems vastly out of proportion to the number of people who actually have step-siblings. So maybe it's not a proclivity of people but of machines. Do androids have sex dreams about their step-siblings? I wouldn't know (you don't either, right?). Recommendation algorithms are based on statistics, and the funny thing about statistics is that it can only analyze what is measured. Good statistics cannot save a bad experiment. In fact it may in some sense make a bad experiment worse, by giving it a veneer of legitimacy.
When generative models eagerly apply good statistics to bad (or incomplete) data, they confidently assert what they cannot know. We call this a hallucination, but like intelligence, reasoning, or inference, this is a bad term, meant to conflate intelligence with the simulation of intelligence that algorithms can produce. The LLM does not know anything; or rather, the only thing it knows is words. Everything is a hallucination.
Hallucination is then the AI-pushers' term for statistics gone awry: a region of the model where the data is incomplete but which, instead of producing the uncertainty one would expect, produces complete certainty. A key factor here is reinforcement: small errors that go uncorrected, leading to greater and greater confidence. As more data is fed to the model that supports the wrong output, the confidence increases. As the wrong output becomes further deranged, it becomes distinctly weird, but also distinctly stuck: the output is now so divorced from reality that there is no path back to sanity.
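You can see the shape of this in a toy sketch. Everything below is invented for illustration, a count-based estimator fed its own output, not any real training loop; but it shows how an uncorrected tilt hardens into certainty.

```python
# Toy illustration of reinforcement: a simple count-based estimator
# that is repeatedly fed its own most likely output. The labels and
# starting numbers are invented; this is not how any real LLM works.

counts = {"right": 5, "wrong": 6}  # incomplete data, slightly tilted

def confidence(label):
    return counts[label] / sum(counts.values())

for step in range(10):
    # The model emits its current best guess...
    best = max(counts, key=counts.get)
    # ...and that guess re-enters the data, uncorrected.
    counts[best] += 1
    print(f"step {step}: P(wrong) = {confidence('wrong'):.2f}")

# P(wrong) climbs toward 1.0: a small initial error, never corrected,
# becomes complete certainty. There is no path back, because the
# estimator never sees anything that contradicts itself.
```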
We recognize this when LLMs tell us how many rocks to eat per day. But I would assert that recommendation algorithms are subject to their own kind of hallucination. A content creator with an audience is in the precarious position of having to keep that audience, which sets up a positive feedback loop between the author and the reader. And in between them, the platform plays publisher, distributor, and bookstore, monitoring the appeal of each book and feeding the audience more of what it wants.
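The same kind of toy sketch works here. The topics, weights, and click model below are all made up; the point is only that a rich-get-richer loop will amplify whatever it happens to start with.

```python
import random

# A minimal sketch of the author-reader-platform loop described above.
# Topics, weights, and the click behaviour are invented for illustration.

random.seed(0)
interest = {"politics": 1.0, "cooking": 1.0, "sports": 1.0}

def recommend():
    # The platform shows more of whatever has been clicked most.
    topics, weights = zip(*interest.items())
    return random.choices(topics, weights=weights)[0]

for _ in range(200):
    shown = recommend()
    # The reader mostly clicks what they are shown; the click feeds
    # back into the weights, and the loop closes.
    interest[shown] += 1.0

total = sum(interest.values())
for topic, weight in sorted(interest.items(), key=lambda kv: -kv[1]):
    print(f"{topic}: {weight / total:.0%} of recommendations")

# One topic ends up dominating, not because the reader's tastes
# changed, but because the loop amplified an early random tilt.
```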
We know this as the filter bubble, the ideological frame, or the echo chamber, but what I suggest here is that it is also a kind of hallucination. And the hallucination of the machine conjoins with the word's original meaning: creating false belief in an actual human mind. The author, the reader, and the machine all reinforce one another, and as the false belief becomes increasingly weird, it also becomes increasingly stuck.
I had never heard of Charlie Kirk until last week. This is partly because I live in Canada, and partly because I try not to get my news from social media. Depending on who you ask, his politics lands somewhere between "the polar opposite of mine" and "abhorrent". One thing no one disagrees with, though, is that his impact on the world was limited to persuading people of things. His murder, and the celebration and jeering of it, is an affront to the most sacred cornerstone of liberalism. It is also counterproductive: the history of ideas tells us that killing a person very rarely extinguishes their philosophy; usually it draws more attention to it. By the exact same token, the notion that the right can stamp out an entire political camp through some kind of coercive action is equally deranged.
We have the usual calls for sanity, and the usual rationalizations for why this is not a "both sides" issue. And there are certainly important qualitative differences between the MAGA right and the woke left. But both sides really have committed violence in blatantly illiberal attempts to stamp out ideas they see as dangerous. It really is both sides.
But from another point of view it's only one side. The very online side. The manipulated side. The tribal side. The deranged side. The hallucinating side. So they have different ideas about what is sacred and what is profane. So what? Killing in the name of the sacred is wrong for the same reason it's always been: it is impossible to distinguish the voice of God from a hallucination. It is very hard to see how either the MAGA right or the woke left would exist without social media. They are political movements born out of the new media environment, and their blatant illiberalism has to end. Facebook really did cause a genocide in Burma; are we really going to wait for them to behave responsibly now, or are we going to pull the plug on them?