Prior to the industrial revolution, the natural world was nearly infinitely abundant. We simply weren't efficient enough to fully exploit it. That meant that it was fine for things like property and the commons to be poorly defined. If all of us can go hunting in the woods and yet there is still game to be found, then there's no compelling reason to define and litigate who "owns" those woods.
But with the help of machines, a small number of people were able to completely deplete parts of the earth. We had to invent giant legal systems in order to determine who has the right to do that and who doesn't.
We are truly in the Information Age now, and I suspect a similar thing will play out for the digital realm. We have copyright and intellectual property law already, of course, but those were designed presuming a human might try to profit from the intellectual labor of others. With AI, we're in the industrial era of the digital world. Now a single corporation can train an AI using someone's copyrighted work and in return profit off the knowledge over and over again at industrial scale.
This completely upends the tenuous balance between creators and consumers. Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article? Who will contribute to the digital commons when rapacious AI companies are constantly harvesting it? Why would anyone plant seeds on someone else's farm?
It really feels like we're in the soot-covered child-coal-miner Dickensian London era of the Information Revolution and shit is gonna get real rocky before our social and legal institutions catch up.
This is just wildly incorrect. People started running out of trees during the early Iron Age. Woodlands have been a managed and often overexploited resource for a long time. Active agriculture, passive woodlands, and animal grazing have been in constant tension for thousands of years across most of the globe.
There were more than enough trees until we developed the technology to clear-cut in an expeditious manner. There were more than enough fish until we developed the technology to pull massive indiscriminate amounts out of the ocean (and/or started polluting our rivers with industry). There was more than enough topsoil until we developed mechanized plows and artificial fertilizer. Etc.
A few hundred years ago or less, a squirrel could get from the Atlantic Ocean to the Mississippi River without ever touching the ground. Not possible today. That’s not a push and pull played out over thousands of years, that’s a one-way trend.
GP is saying it is not, and you're just reiterating what OP said as fact.
This is where STEM people are weak: a lack of knowledge of history. In another forum, someone would have chipped in that England's virgin forests were fully deforested by 1150. And someone else would have pointed out that this deforestation produced the economic demand for coal that drove the Industrial Revolution in the first place.
Still, that kind of underscores OP's point. Yes, natural resources were not completely unlimited prior to the Industrial Revolution; Jonathan Swift predated Watt's steam engine, after all. Still... Neither were information resources 10 years ago. Intellectual property laws did exist prior to AI, of course. The legal systems in place are not completely ignorant of the reality.
However, there's an immense difference in scale between post-industrial strip mining of resources, and preindustrial resource extraction powered solely by human muscle (and not coal or nitroglycerin etc). Similarly, there's a massive difference in information extraction enabled by AI, vs a person in 1980 poring over the microfilm in their local library.
The legal system and social systems in place prior to the Industrial Revolution proved unsuitable for an industrial world. It stands to reason that the legal system and social systems in today's society would be forced to evolve when exposed to the technological shift caused by AI.
Both animals and water power go way back. The early steam engine was measured in horsepower because that’s what it was replacing in mines. It couldn’t compete with nearby water power which was already being moved relatively long distances through mechanical means at the time.
Hand waving this as unimportant really misunderstands just how limited the Industrial Revolution was.
https://acoup.blog/2022/08/26/collections-why-no-roman-indus...
> Diet indicators and midden remains indicate that there’s more meat being eaten, indicates a greater availability of animals which may include draft animals (for pulling plows) and must necessarily include manure, both products of animal ‘capital’ which can improve farming outputs. Of course many of the innovations above feed into this: stability makes it more sensible to invest in things like new mills or presses which need to be used for a while for the small efficiency gains to outweigh the cost of putting them up, but once up the labor savings result in more overall production.
> But the key here is that none of these processes inches this system closer to the key sets of conditions that formed the foundation of the industrial revolution. Instead, they are all about wringing efficiencies out the same set of organic energy sources with small admixtures of hydro- (watermills) or wind-power (sailing ships); mostly wringing more production out of the same set of energy inputs rather than adding new energy inputs. It is a more efficient organic economy, but still an organic economy, no closer to being an industrial economy for its efficiency, much like how realizing design efficiencies in an (unmotorized) bicycle does not bring it any closer to being a motorcycle; you are still stuck with the limits of the energy that can be applied by two legs.
So yeah, actual historians would be dismissive of your exact response, basically saying "I know, I know, but I don't care". You're still just talking about a society mostly 'wringing efficiencies out the same set of organic energy sources'. It IS unimportant, and you completely misunderstand how the Industrial Revolution reshaped production if you think it is important.
The (true!) statement is "However, there's an immense difference in scale between post-industrial strip mining of resources, and preindustrial resource extraction powered solely by human muscle (and not coal or nitroglycerin etc). Similarly, there's a massive difference in information extraction enabled by AI, vs a person in 1980 poring over the microfilm in their local library."
I said there is a major difference in scale between "modern strip mining" and "a preindustrial extraction method powered only by human muscle", and I made an analogous point about AI-enabled information extraction versus 1980s manual archival research. That statement is purely true. Nothing in that statement says the muscle-powered-extraction example was the only preindustrial mode of production, just as "someone using microfilm in 1980" does not imply microfilm was the only way information was accessed in 1980. The fact that other information formats existed in 1980 is irrelevant to the truth of the example.
Irrelevant is the key word. The logical truth of my statement was not at fault, and the response was both logically independent of my statement and irrelevant.
So no, nothing I said "turned out to be false". You are attacking a claim I never made because you failed to parse the one I did. Not only did you fail at logic, you missed the big picture dialectical synthesis that I was introducing as well. This is like a senior engineer writing an architecture doc and the proof of concept code, and having a junior engineer not understand the architecture but complain about a variable name that was used.
My original source for this was the book Sapiens, but here are two links I found with a quick web search: https://www.sciencedaily.com/releases/2024/07/240701131808.h...
https://ourworldindata.org/quaternary-megafauna-extinction
I also saw a theory (not sure how credible) that the reason humans started doing agriculture was in fact because we killed all the megafauna we used to eat.
This was over 10,000 years ago. Well before the Industrial Revolution, indeed, before even the original Agricultural Revolution.
It's not, because the Malthusian trap was all too real going into modernity, as in recurring famines were a thing, they were quite real, nothing "literal" about them.
Unless you mean 'an axe', way before that there were deforested areas where the need for trees was larger than the supply and there were enough humans to fell them.
> A few hundred years ago or less, a squirrel could get from the Atlantic Ocean to the Mississippi River without ever touching the ground.
Yes, but that stopped being possible in other parts of the world much sooner.
I'd say degradation involves a lasting depletion or lasting damage (potentially permanent until restoration efforts happen) to the environment's output and ability to support life. Permanent depletion is what can happen to e.g. shallow mines and fossil fuel deposits.
I think I'd agree the legal system was created mostly for the former, depletion, and only recently had to contend with degradation and permanent depletion. I feel like we still struggle collectively to come to grips with permanent depletion.
Permanent depletion is also usually the result of shortsightedness or a competition gone awry. Famous case where nobody wants the ultimate results but people may selfishly march towards it (tragedy of the commons).
Obviously we're becoming better at extracting resources over time, but humans ran out of new land to exploit long before Europe's conquest of the Americas. Land only seemed empty because disease decimated native populations; people lived in San Francisco thousands of years ago.
I doubt that anyone reading this can’t get the point of the analogy.
The value is in showing where the analogy fails, and either disproves the point, or deepens the point.
https://en.wikipedia.org/wiki/Industrial_Revolution
I get sort of wishy-washy from 1830 on, because lots of people put the end of the Industrial Revolution as being 1900, but 1840 is a defensible and commonly held position.
In Britain. Moby Dick ain't set in Britain.
I think "the natural world was nearly infinitely abundant" is a reasonable description; resource depletion was always local before mass industrialization. Being able to exploit the world, as opposed to just your local area, is also a mark of efficiency.
> I think "the natural world was nearly infinitely abundant" is a reasonable description
Very little of the world’s woodland was untouched at the time of the Industrial Revolution and forests in the Americas survived as long as they did largely due to disease drastically reducing native populations. But American forests were on the clock independent from industrial development. I’m not sure what your counterargument even is here.
We still can’t reasonably extract most resources from the ocean bottom. That’s ~70% of the world’s mineral wealth just off the table.
So sure, we are very slightly better at extracting resources, but on the absolute scale it really isn’t that significant pre vs post Industrial Revolution compared to the sum total of human history.
Maybe. "Local" is a function of a lot of things; it is only fairly recently in human history that the "global" functions the way that the "local" did centuries ago, meaning that it is cheap enough to source things from across the world that they do not need to be made in the next village.
>> I think "the natural world was nearly infinitely abundant" is a reasonable description
>Very little of the world’s woodland was untouched at the time of the Industrial Revolution and forests in the Americas survived as long as they did largely due to disease drastically reducing native populations.
things appeared abundant prior to one event; soon after that event, they no longer appeared abundant. The point is that there's a correlation, not a causation, but
>American forests were on the clock independent from industrial development.
sure, the Native Americans would have used up their forests if they had kept growing and not been killed off by disease brought by Europeans. Nonetheless, they had been killed off, and the world appeared infinite, because all you needed to do when you ran out of wood in one place was go to another place to source it. Hurray. But now that is no longer the case: we have run out of places to go get more wood.
As noted, I said I felt the phrase "the natural world was nearly infinitely abundant" uttered by the original poster in this subthread is a reasonable description, and obviously that is dependent on the impressions of the people of the time; from my readings it seems like this was more the feeling than "oh no, we are running out of wood".
Although we got sidetracked onto wood, because that is what the first response to the OP was about: that wood was always a problem. But that some natural resources were constrained still does not really disprove the phrase "the natural world was nearly infinitely abundant", since the word "nearly" can be seen as a cheat, and really what it means is that the world felt infinitely abundant at one time and now it does not.
>We still can’t reasonably extract most resources from the ocean bottom. That’s ~70% of the world’s mineral wealth just off the table.
See, it sounds like you still feel like it is closer to infinitely abundant than dangerously used up. All we need to do is up our extraction game, at least where minerals are concerned.
NOTE: I think maybe the world feeling infinitely abundant thing is actually an American thing, this has been remarked by others in the past, that the first European settlers felt this was a world that had not been touched because in comparison to Europe it was under-exploited in many areas, it was big and had everything, and there is a whole part of American frontier myth that as soon as one area got settled and used up all you had to do was to pack up your stuff and move west and get a bunch of resources to use up, like locusts, or maybe just colonizers.
In this case, what the OP's write-up is really dealing with is not how the world was (infinitely abundant) but how it felt to people coming from one overly exploited area to an under-exploited one. They believe there is a narrative of economic constraints and results playing out, and that the two situations were analogous, but the source of the analogy - the world before the industrial revolution - was perhaps not as the analogy would have it, but rather how a memetic framework of exploration and conquest had interpreted the world.
Sorry my note went overly long, but that sometimes happens when I write what I think just as I'm thinking it.
From a global perspective it isn't. Some places, sure, like Western Europe, which in some cases had completed enclosure, but remember the new world had only been discovered a few hundred years earlier at that point.
Just google maps the north part of South America, even today there are large swathes of undeveloped land across it and back then it was considerably less exploited. At that time it would have appeared infinite, especially to the European industrialists.
By White people*
Why are you weirdly making this about race?
Who might be swept underfoot in this "Information Revolution", I wonder?
This sort of handwashing is exactly why the natives were treated the way they were.
Your continued erasure of the Baltic peoples continues to cut deep into my heart, and your callous candour about their plight, as you discard any chance to mention them, continues to shock me.
GP made a comparison between what we're going through and the Industrial Revolution. Ignoring the negatives of that revolution - like by acting as though the "new world" was uninhabited/unused and so Europeans had a right to its resources - seems like a bad idea.
You're not the only one.
The current Pope Leo XIV explicitly named himself after the previous Leo, Pope Leo XIII, who was pope during the Industrial Revolution (1878-1903) and issued the influential Encyclical Rerum novarum (Rights and Duties of Capital and Labor) in response to the upheaval.
“Pope Leo XIII, with the historic Encyclical Rerum novarum, addressed the social question in the context of the first great industrial revolution,” Pope Leo recalled. “Today, the Church offers to all her treasure of social teaching in response to another industrial revolution and the developments of artificial intelligence.” A name, then, not only rooted in tradition, but one that looks firmly ahead to the challenges of a rapidly changing world and the perennial call to protect those most vulnerable within it.
https://www.vatican.va/content/leo-xiii/en/encyclicals/docum...
https://www.vaticannews.va/en/pope/news/2025-05/pope-leo-xiv...
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
I write things for two main reasons: I feel like I have to. I need to create things. On some level, I would write stuff down even if nobody reads it (and I do do that already, with private things.) But secondly, to get my ideas out there and try to change the world. To improve our collective understanding of things.
A lot of people read things, it changes their life, and their life is better. They may not even remember where they read these things. They don't produce citations all of the time. That's totally fine, and normal. I don't see LLMs as being any different. If I write an article about making code better, and ChatGPT trains on it, and someone, somewhere, needs help, and ChatGPT helps them? Win, as far as I'm concerned. Even if I never know that it's happened. I already do not hear from every single person who reads my writing.
I don't mean to say that everyone has to share my perspective. It's just my own.
But it definitely feels different now. It used to feel like I was tending a public garden filled with other people who might enjoy it. It still kind of feels like that, but there are a handful of giant combine machines grinding their way around the garden harvesting stuff and making billionaires richer at the same time.
It's not enough to dissuade me from contributing to the public sphere, but the vibe is definitely different.
Honestly, it reminds me a lot about the early days of Amazon. It's hard to remember how optimistic the world felt back then, but I remember a time when writing reviews felt like a public good because you were helping other people find good products. It was like we all wanted honest product information and Amazon provided a neutral venue for us to build it. Like Wikipedia for stuff.
But as Amazon got bigger and bigger and the externalities more apparent, it felt less like we were helping each other and more like we were helping Bezos buy yet another yacht or media empire. And as the reviews got more and more gamed by shady companies, they became less of a useful public good. The whole commons collapsed.
I worry that the larger web and digital knowledge environment is going that way.
I still intend to create and share my stuff with the world because that's who I want to be. But I'll always miss the early days of the web where it felt like a healthier environment to be that kind of person in.
The Internet-circulating quote comes to mind: Planet Earth is pretty much a vacation resort for around 500 rich people, and the remaining 8 billion of us are just their staff. The Relative Few have got the system set up perfectly so that whatever we do, we're probably serving/enriching them. AI doesn't really change this, but it does further it.
I don't necessarily disagree with the analysis on how Planet Earth is currently setup to be, but something that I've been thinking about lately, is that to the extent we can consume the public image of some of the Relative Few, they seem oddly unhappy.
Anyone who finds themselves with $100m in their bank account and thinks, "No, I need more," is a person with a hole inside them that can never be filled.
Writing online used to bring you readers. Now it trains a model, which answers the same questions without sending anyone to your site.
Also I'm not a fan of billionaires, obviously, but I think that given I've worked on open source and tools for so long, I kinda had to accept that stuff I make was going to be used towards ends I didn't approve of. Something about that is in here too, I think.
(Also, I didn't say this in the first comment, but I'm gonna be thinking about the industrial revolution thing a lot, I think you're on to something there. Scale meaningfully changes things.)
I do think that the open web stuff, decentralized, or at least more decentralized than currently, is the path forward. I've been reading about the AT protocol and it recently becoming an official working group with the IETF.
I feel a second-order effect of making decentralized social networking easier is making individuals more empowered to separate from what they don't believe in. The third-order effect is then building separate infrastructure entirely.
As sad as that can be - in my personal opinion it runs the risk of ending the "world wide" part of the web - it appears to be the only way society can avoid enriching the few beyond reason.
Me too, 100%. But that was during a moment in time when that information was more likely to be enabling a person who otherwise didn't have as many resources than enabling a billionaire to make their torment nexus 0.1% more powerful.
> I kinda had to accept that stuff I make was going to be used towards ends I didn't approve of. Something about that is in here too, I think.
Yeah, I've mostly made peace with that too.
The way I think about it is that when I make some digital thing and share it with the world, I'm (hopefully!) adding value to a bunch of people. I'm happiest if the distribution of that value lifts up people on the bottom end more than people on the top. I think inequality is one of the biggest problems in the world today and I aspire to have the web and the stuff I make chip away at it.
If my stuff ends up helping the rich and poor equally and doesn't really affect inequality one way or the other, I guess it's fine.
But in a world with AI, I worry that anything I put out there increases inequality and that gives me the heebie-jeebies. Maybe that's just the way things are now and I have to accept it.
This observation doesn't really clash with "information wants to be free." You just have to include LLMs in the category of "information," like Free Software types already do for all software. You don't need to abandon your principles, you should shift your demands. A handful of companies can't be allowed to benefit from free information and then put what they make behind a wall.
Free Software types also create software...they didn't just argue for a better license and try to regulate Sun/others to re-license their software; they wrote free (libre) versions of proprietary software and released it for free (cost), which is what counteracted the "[putting] what they make behind a wall". If you're saying "[some] LLMs should be free", I agree.
What is there to prevent them?
That was always a luxury of its peculiar historical moment, though, wasn't it? Barlow didn't have to care who paid for the infrastructure, but he was just bloviating.
That's not the whole story, though. There have been many community-driven projects to bring convenient access to copyrighted works to the masses in a convenient way. You may recall the meteoric success of Popcorn Time. Law enforcement shut them down. Without the hand of the state beating down any popular alternative to legal distribution it absolutely would be the dominant mode of media consumption.
"So Steve, you're a millennial. What does it mean to 'be the mayor' of something?"
An underrated upside to being harvested is that your voice has now effectively voted in the formation of the machine's constitution. In a broader ecological sense, you've still tended to a public garden, but in this case your work is part of the nutrient base for a different thing.
Broader still: after the machines squeeze all of our inputs into an opaque crystal, that crystal's very purpose is to leak it all back out in measured doses. Yes, "some billionaire" will own the lion's share of that process, but time so far is telling that efforts can be made to distill strong, open, public versions of the same.
I do really hope that part of the longer-term answer for AI is LLMs being run locally.
You have to start finding ways to keep people hooked on books and make it a part of their regular lifestyle. One book can't be enough, and after a while you have to convince them to replace the books they already bought. New editions, Author's Footnotes, limited run release, all of the stops have to be pulled out to get consumers to show up en-masse. Because that's what they are - consumers, not readers - wallets to be squeezed until they're bled of all the trust they had in media.
I think about the publications I liked reading as a kid, like Joystiq and Polygon. Some of the best games journalism the industry produced, but inevitably doomed to fail as their competitors monetized further. The rest of traditional media has followed the same path, converging on some mercurial social network marketing tactic as the placeholder for big-picture brand strategy.
Not a contradiction but an addendum: plenty of creative pursuits are not about functional value, or at least not primarily. If somebody writes a seemingly genuine blog post about their family trauma, and I as the reader find out it's made-up bullshit, that's abhorrent to me, whether or not AI is involved. And I think it would be perfectly fair for writers who do create similar but genuine content to find it abhorrent that they must compete with genAI, that genAI will slurp up their words, and that genAI's mere existence casts doubt on their own authenticity. It's not about money or social utility, it's about human connection.
> people read things… their life is better
> it’s just my own
What was the point of writing this though?
Perhaps I should know who you are, but assuming you are a regular HN forum user - you are still very much a participant in a larger information economy / ecosystem.
All of us depend on that system, that commons.
Visits to Wikipedia have dropped by at least 8% since 2025; other estimates are starker. This will have an impact on donations.
These reports are similar for many sites which write or produce content.
Your individual behavior may be perfectly fine, and you are entitled to your perspective, but that doesn’t become a defense for the degradation of the commons.
If anything, it’s a classic example of the kind of argument that ends up entangling ideas and making conclusions harder to reach.
I think you are walking all around the word "consent" and trying very hard to avoid it altogether.
Your perspective, because it refuses to include any sort of consent, is invalid. No perspective that refuses consent can be valid.
Fair use is an important part of intellectual property law. If it did not exist, the powerful could, for example, stifle public criticism by declaring that they do not consent to you using their words or likeness. The ability to do that is important for society. It is also just generally important for creating works inspired by others, which is virtually every work. There have to be lines between cases where attribution is required and cases where it is not.
I am not representing your words as mine. I am not using your words to profit off. I am not making a gain by attributing your words to you.
> There have to be lines between cases where attribution is required and cases where it is not.
You are blurring the lines between "using a quote or likeness" and "giving credit to". I am skeptical that you don't know the difference between the two.
Regardless, any "perspective" that disregards the need to acquire consent is invalid. Even if you are going to ignore it, you have to acknowledge that you don't feel you need any consent from the people you are taking from.
This whole "silence is consent" attitude is baffling.
I do not think that, if you read, say, https://steveklabnik.com/writing/when-should-i-use-string-vs... , and then later, a friend asks you "hey, should I use String or &str here?" that you need my consent to go "at the start, just use String" instead of "at the start, just use String, like Steve Klabnik says in https://steveklabnik.com/writing/when-should-i-use-string-vs... ". And if they say "hey that's a great idea, thank you" I don't think you're a bad person if you say "you're welcome" without "you should really be saying welcome to Steve Klabnik."
It is of course nice if you happen to do so, but I think framing it as a consent issue is the wrong way to think about it.
We recognize that this is different than simply publishing the exact contents of the blog post on your blog and calling it yours, because it is! To me, an LLM is a transformative derivative work, not an exact copy. Because my words are not in there, they are not being copied.
But again, I am not telling anyone else that they must agree with me. Simply stating my own relationship with my own creative output.
Until that question is settled, it’s disingenuous to dismiss his points out of hand as conflating fair use or ignoring consent.
However, I don't feel comfortable suggesting that this is settled just yet; one district judge's opinion does not preclude future cases from disagreeing, and we may at some point get explicit legislation one way or the other.
There's a doctrine in Fourth Amendment law called "fruit of the poisonous tree." The general rule is that prosecutors don't get to present evidence in a criminal trial that they gained unlawfully. It's excluded. The jury never gets to see it even if it provides incontrovertible evidence of guilt. The point is to discourage law enforcement from violating the rights of the accused during the investigative process, and to obtain a warrant as the Amendment requires.
It seems to me that the same logic ought to be applied to these companies. They want to make money by building the best models they can. That's fine! They should be able to use all the source data they can legitimately obtain to feed their training process. But if they refuse to do so and resort to piracy, they mustn't be allowed to claim that they then used it fairly in the transformative process.
And yes, you are right, the legal and moral question of fair use in training data hasn't been settled yet; we agree here.
Look, I'm not saying that you are doing that, I'm pointing out that "Silence is consent" is not as strong an argument that many think it is.
In most cases, no, I (and it seems most others) don't feel the need for that, it is only you who seems to have an ideological hangup over this.
It's not an ideological hangup, it's confusion over the assumption by certain groups that "silence is consent", when it is not.
What has been "taken", exactly?
Where are you going with this line of thought? That making a copy of someone's work, using it for profit and not crediting them doesn't "take" anything from them?
You may need to clarify that thought.
I don't think the poster has a viewpoint that 'refuses consent', their viewpoint is their writing they put for others to view is for others to view, regardless of how it is viewed. They seem to be giving consent, not refusing it, no?
Who said anything about refusing consent?
> Your perspective, because it refuses to include any sort of consent, is invalid. No perspective that refuses consent can be valid.
This is what I was responding to. I do not understand your thinking in this post.
I thought it was clear from "refuses to include any sort of consent" that I am talking specifically about holding an opinion that refuses to include consideration for consent, not refuses consent for usage.
Granted, things were different in the New World, as a result of the mass depopulation event following the Columbian exchange. But even there, the megafauna was hunted to extinction soon after humans first appeared there.
Anyway, the point is that no, prior to the Industrial Revolution, the world was full of scarcity, not abundance.
The opposite is true. Central Europe was almost devoid of trees. Food was scarce as arable land bore little fruit without fertiliser.
Society was Malthusian until the Industrial Revolution.
The industrial revolution didn’t qualitatively change farming. It just made it possible to have more of it thanks to machine labor. The same goes for the later agricultural revolutions.
In general the transition from feudalism to capitalism, including the formation of the legal systems that supported the latter, happened gradually for maybe up to four or five centuries before the steam engine had been invented.
Sure, the Industrial Revolution further accelerated the development of property rights, mercantile, and civil laws, but all in all I don’t think there’s much truth that machines were the primary cause of such developments.
Useful land was a scarce resource in more civilized regions, while labor was cheap. Given enough land, subsistence farmers could easily feed themselves outside particularly bad years. But much of the land belonged to local elites, and commoners had to work that land to fund the pursuits of the elites.
This is completely reversed. Why should anyone honour the right of some creator who was merely the first to plant their flag on a creative task that is now absolutely trivial to perform by AI? Who needs a digital commons when creation itself is now the commons and freely accessible for pennies? The seeds plant and grow by themselves now. The only question is who should be allowed to claim the farms?
Answer: No one. AI companies will have their lunch eaten by open source. And if they don't - they should be nationalized and protocolized into free utilities. The entire idea of digital ownership should (and will) be abolished by the very nature of this technology.
The digital world is the new infinitely-abundant nature. We're just returning it to where it should have been, before corporations clawed it into fenced off empires.
I just think it's nice to contribute to the human commons and it's fine if some subset of my fellow organisms uses it in whatever way. Realistically, the fact that Brewster Kahle is paid whatever few hundred thousand he's paid for managing a non-profit that only exists because it aggregates other people's work isn't a problem for me. Or that Larry Page and Sergey Brin became ultra-rich around providing a search interface into other people's work. Or that Sam Altman and Dario Amodei did the same through a different interface.
This particular notion doesn't seem to be a post-AI trend. It seems to have started before the big GPTs came out, when people began doing a lot of this accounting-for-contribution stuff. One day it'll be interesting to read why it started happening, because I don't recall it from the past. Perhaps I just wasn't super plugged in to the communities that were complaining about Red Hat, Inc.
It's not that I wouldn't understand if I sold my Subaru to a guy who immediately managed to sell it to another guy for a million times the money. I get that; I'd feel cheated. But if I contributed a little to it, like I did so Google would have a site to list for certain keywords so that they could show ads next to it in their search results, I just find it so hard to be like "That's my money you're using. Pay me!".
I'm sure plenty of people feel the same way about software. They make software as a hobby and don't care about remuneration or credit. Meanwhile I write software for my day job and losing the ability to make money from it would be devastating.
I was going to write, "not for long," which might be true for some. But then I realized there will always be a difference between LLM output and human writing. We don't read blogs because of their facts, we read them because of how the facts are presented and how the author's personality comes through on the page.
EDIT: That said, LLMs are great at faking it, and a lot of amateur writing will be difficult to distinguish from LLM output. So I'm disagreeing with myself a bit.
But we are talking about "slurping up" IP and regurgitating it, right? OK. So if I slurp up Mickey Mouse and output Mickey Mouse, that's an offense. But what if I slurp up a billion images and output some chimera? That's what the LLMs do. And that's what humans do too.
I write software too and I may no longer be able to just do it in the old way. Pretty scary world but also exciting. I can’t imagine trying to restrict LLM software writers on that basis but I can comprehend it as simply self-interest.
Fair enough.
And I do paste code into CC. I’m not super concerned that they’ll see it.
That’s fine by me. It doesn’t require putting code in the public domain which is something else entirely.
I make money off hosted software so in some sense there is writing involved at one end. But I’m not paid by output tokens.
I think SWE as a mainstream profession is much nearer to the end than the beginning; I'm curious and quite scared about what becomes of us.
Growing up I had a friend group of misfit boys, who discovered h4ck1ng and phr34king. But we also discovered slackware Linux on 3.5" floppies. We also had to discover ASM and compiling the linux kernel in order to do anything with it. Boys with machines. That wasn't what I needed either.
Later on we did have great things with tech. Google made the world searchable in ways Altavista didn't. I remember strapping the original iPod on my arm to go for runs outside. I didn't even need a car for a while, when investors subsidized my Uber rides to and from the office.
Now, it seems the US is balanced on a precipice. The economy seems to have an incredible amount of money desperate to grow, but to what purpose? In my lifetime, and in my parents', and their parents' before them, when the dollar becomes restless the flag goes forth. The dollar follows the flag.
And here we are at war.
As for AI, I and many others want it, and some even need it, in certain use cases. Speak for yourself.
I still wonder if we really needed the iPhone or many other things we're told are "progress" and innovation, as if following an inevitable arrow of time. The future is not set in stone and things need not play out in this manner at all. Unlike the iPhone, where most were excited by its possibilities (even if they traded precious privacy in the name of convenience), there's no clear reason to think this version of LLM-driven technologies represents significantly more upside than downside.
You might want to give the following articles a read, then: https://www.ufried.com/blog/ironies_of_ai_1/ https://www.ufried.com/blog/ironies_of_ai_2/
I have been thinking about this. I was pretty adamant a few months ago that AI is going to make a lot of things worse for everyone because of the externalities of the technology (data center creep, lock-in of models, etc.), and it probably still will. But then someone suggested that I use Claude Code to upgrade my SSG site to the new version, because I had been sitting on my ass as the years went by, missing deadline after deadline. I just couldn't put myself into gear to upgrade it. It was massively out of date, 10 years plus, and I knew it was going to be a nightmare to deal with the problems. I was probably making it harder than it really was in my head.
So I purchased Claude Code Pro and the thing upgraded my site pretty well. There were things it missed, because I didn't know the problems existed in the first place until the upgrade was complete, but I had a working updated site in less than an hour. If I had done this myself it would have taken me days or weeks.
So at that point I realized something. It's a tool that can handle a good amount of the tasks I throw at it, as long as I am specific. I think the problem with most people is they expect it to respond like a human. That's not going to happen, IMHO. Maybe some day it will be more than what it is, but right now it's just a tool. I don't care what anyone says about AGI and the like. It's not going to happen with the current iteration (the pattern-recognition type). We are going to need more than that if we want to simulate a human brain.
The point is, and I know this is not going to be received very well, mostly because this tech is in the hands of people that are gatekeeping it: maybe someday we might reach a point where all of humanity's knowledge is put into these things and we can use them to better our lives. Maybe at some point we don't need to hold onto or hoard things as if that's the only way we can make a living? And instead we can build things just for the sake of creating them, and improve humanity in the process? Obviously the commercial model of these things is not great, and that is going to have to be dealt with, but I can see a future where we might be able to fix a lot of humanity's problems with this technology as more and more good people put it to use for things that help humanity.
Prior to the industrial revolution, people fought to the death over who could use the rivers [0]. Pre-industrial societies were societies of scarcity.
[0]: People have been fighting for water for more than 4000 years: https://en.wikipedia.org/wiki/Umma%E2%80%93Lagash_war
Mostly, AIs don’t recite back various works. Yes, there are a couple of high-profile cases where people were able to get an AI to regurgitate pieces of New York Times articles and Harry Potter books, but mostly not. Mostly, it is as if the AI is your friend who read a book and gives you a paraphrase, possibly using a couple sentences verbatim. In other words, it probably falls under a fair use rule.
Secondly, given the modern world, content that doesn’t appear online isn’t consumed much, so creators who are doing it for the money will certainly continue putting content online. Much of that content will be generated by AIs, however.
> We have copyright and intellectual property law already, of course, but those were designed presuming a human might try to profit from the intellectual labor of others.
You getting a summary of a copyrighted work from a friend is necessarily limited by the number of friends you have, the amount of time they have to read stuff and talk to you, and so on. Machines (and AIs) don't have any such limitations.
But no real book nerd has read everything. Current law was designed for the capabilities of humans.
Also, a book nerd doesn’t take roughly all human-created text as training input to produce meaningful results. It’s just such a misplaced analogy, and people have been making it ever since OpenAI announced ChatGPT for the first time. Why do people think “an LLM is just a human who read a lot”?
This is true.
But it's not always positive sum, either.
> Megacorporations making profit is not some evil that needs to be stopped.
Externalities are a thing. It's not about the profit per se, but about how (a) the making of that profit might negatively impact others, and (b) the deployment of that profit in pursuit of rent-seeking and other antisocial behavior in order to insure its continued existence might also negatively impact others.
1) Quantity is its own quality: Scale makes a difference
2) The tools themselves automate tasks and consolidate their outputs. The “sale” of a piece of content, and its consumption, shifts away from the people producing it. Example: we have entire networks and systems that depended on consumption occurring on the site itself. News websites and indie sites depend on ad revenue.
LLMs are sort of the inverse of that. They produce text that looks like the statistical aggregate of human knowledge, but nothing underneath is converging on truth. Seldon's math worked because it modeled actual dynamics. LLMs work because they model plausible text. The "jagged competence frontier" Kingsbury describes: crushing multivariable calculus, failing a word problem, is exactly what you'd get from a system that learned the shape of correct answers without learning what makes them correct.
The part of Foundation that feels prescient right now isn't the predicting-the-future stuff. It's the part where everyone can see Empire is hollowing out and the response is to just...keep going. More spectacle, more confidence, less substance holding any of it up. Hmmm, wait...
The mammoths disagree.
First we conquered the ability to move matter and transmit signal, greatly shrinking the world. Next was sensor technology, especially the mobilization of it, and our ability to collect more data than we could ever imagine being able to process. Then we started going crazy with data centres and big data and the idea that maybe we can somehow correlate it all if we just process it enough. And now we’re finally turning data into information, building enormous graphs of correlation without even having to manually reason about a lot of it. Before AI, the hard part was figuring out how to go about finding the signal you needed. Now it’s getting easier at an incredible speed.
Property rights don't just protect natural resources, but labor as well. If I cleared out hunting ground in that forest to be the prime spot to catch animals, I would make sure I can use it when I want.
> a small number of people were able to completely deplete parts of the earth
A small number of people seems inaccurate when there's typically many more individuals in the pipelines for these technologies.
> and in return profit off the knowledge over and over again at industrial scale
Not off just that knowledge, there needed to be a model trained on the data of many others to utilize it.
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
Who's better at writing in this scenario and what are my motivations? If it's ChatGPT and I did it for money, then I would say I should recognize that I can't compete and find something AI can't do. If it's ChatGPT and I write to convey my ideas in an effort to learn regardless of the bestowment of a new perspective on the reader, I'll keep writing.
> Why would anyone plant seeds on someone else's farm?
They wouldn't unless it was their own way to attain food and survive. And if it's not the only way, they can defer to those with optimal methods to get it the cheapest they can in the market.
In the brave new world we're creating, people will write specifically for AI. If you can impress models so much that they "regurgitate" your work, then your work has achieved a kind of immortality.
The idea that copyright simply doesn't apply to AI has more to do with AI companies deciding that they're not going to comply with those laws than the design of the laws. Also a very successful lobby against enforcement by positioning AI as a strategic necessity.
That's why they are valued so highly on the stock market. Basically they will steal all the value of intellectual property in a semi-legal way.
I mean, medieval Europe (speaking broadly) had pretty well defined property rights wrt hunting. In fact, the forester at the time was thought of as one of the most corrupt jobs, as they'd commonly have side hustles poaching and otherwise illegally extracting resources from the lands they enforced and kept others from utilizing in a similar way. Quis custodiet ipsos custodes?
They don't have the actual concept of "benevolent"... or a concept of anything at all. Based on an input, they regress down a path of "what is the next most probable statistical token to output next" and that's fucking it, with the bolted-on shit manipulating these outputs a bit.
I don't doubt that at some point there will be some other AI leap, but I'm not even sure it'll be built on this foundation.
What really needs to be developed is an actual artificial brain of sorts. Much like an infant learns language from first principles, a real AI would have a phase of continuous growth, creating actual memories and being able to reflect upon them. I daresay context windows are not that.
I'd really like to encourage anyone to pump the brakes a bit and look at how these things actually work, and what they actually are. There is a reason sama is pivoting away from video et al. and into corporate software coding, much like Anthropic.
The analogy seems to be backwards though. It would be as if we previously had a scarcity of land and because of that divided it up into private property so markets could maximize crop yield etc. and then someone came up with a way to grow food on asteroids using robots, and that food is only at the 20th percentile of quality but it's far cheaper. Suddenly food becomes much more abundant and the people who had been selling the 20th percentile food for $5 are completely out of the market because the new thing can do that for $0.05, and the people providing the 50th percentile food for $10 are also taking a hit because the price difference between what they're providing and the 20th percentile stuff just doubled.
The existing plantation owners then want to put a stop to this somehow, or find a way to tax it, but arguments like this have a problem:
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
This was already the status quo as a result of the internet. Newspapers were slowly dying for 20 years before there was ever a ChatGPT, because they had been predicated on the scarcity of printing presses. If you published a story in 1975 it would take 24 hours for relevant competitors to have it in their printed publication and in the meantime it was your exclusive. The customer who wants it today gets it from you. On top of that, there weren't that many competitors covering local news, because how many local outlets are there with a printing press?
Then blogs, Facebook, Reddit and Twitter come and anyone who can set up WordPress can report the news five minutes after you do -- or five hours before, because now everyone has an internet-connected camera in their pocket so the first news of something happening now comes in seconds from whoever happened to be there at the time instead of the next morning after a media company sent a reporter there to cover it.
The biggest problem we have yet to solve from this is how to trust reports from randos. The local paper had a reputation to uphold that you now can't rely on when the first reports are expected to come from people with no previous history of reporting because it's just whoever was there. But that's the same thing AI can't do either -- it's a notorious confabulist.
And it's the media outlets shooting themselves in the foot with this one, because too many of them have gotten so sloppy in the race to be first or to pander to partisans that they're eroding the one advantage they would have been able to keep. Damn fools to erode the public's trust in their ability to get the facts right when it's the one thing people would otherwise still have to get from them in particular.
You make the point later in your comment, but consider it a minor issue. “Randos”
The actual limits are verification, and then attention. Verification is always more expensive than generation.
However, people are happy to consume unverified content which suits their needs. This is why you always needed to subsidize newspapers with ads or classifieds.
The really discouraging part of this is that it feels like our social and legal institutions don't even care if they catch up or not.
Technology is speeding up and the lag time before anything is discussed from a legal standpoint is way, way too long
>We had to invent giant legal systems in order to determine who has the right to do that and who doesn't.
Excuse me? The industrial revolution was like 300 years ago. We had laws before that.
So, in some ways, I also view LLMs as a pivotal and important wake up call. Companies were already taking the data and using it for a variety of other purposes—it was just way less evident to people when they weren't in direct competition with labor, since, under capital, labor is what we sell.
Either an entire new industry needs to form, or it's finally time to move beyond capitalism. Centralized capital ends up killing itself, because it effectively shuts down its own engine if it kills off consumers, who can only exist in the first place if the wage labor structure holds.
I'm happy to miss all the stuff that was written just for the financial benefit of the author.
This is not true and unfortunately this significantly reduced the credibility of this article for me. Raw parameter counts stopped increasing almost 5 years ago, and modern models rely on sophisticated architectures like mixture-of-experts, multi-head latent attention, hybrid Mamba/Gated linear attention layers, sparse attention for long context lengths, etc. Training is also vastly more sophisticated.
The Bitter Lesson is misunderstood. It doesn't say "algorithms are pointless, just throw more compute at the problem", it says that general algorithms that scale with more compute are better than algorithms that try to directly encode human understanding. It says nothing about spending time optimising algorithms to scale better for the same compute, and attention algorithms and LLMs in general have significantly advanced beyond "moar parameters" since the time of Attention is All You Need/GPT2/GPT3.
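To make "sophisticated architectures" a little less hand-wavy, here's a toy top-k mixture-of-experts routing sketch in Python/NumPy. All the sizes are invented for illustration, and real MoE layers add load balancing, batching, and proper FFN experts; this is just the routing idea:

    import numpy as np

    # Toy top-k mixture-of-experts routing: a gate scores each expert for a
    # token, and only the top-k experts actually run, so total parameter
    # count grows without per-token compute growing proportionally.
    rng = np.random.default_rng(1)
    d_model, n_experts, top_k = 16, 8, 2

    x = rng.normal(size=(d_model,))                 # one token's hidden state
    gate_w = rng.normal(size=(n_experts, d_model))  # router weights
    experts = rng.normal(size=(n_experts, d_model, d_model))  # toy "expert" matrices

    scores = gate_w @ x
    chosen = np.argsort(scores)[-top_k:]            # pick the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                        # softmax over chosen scores

    # Output is the gate-weighted sum of only the selected experts' outputs.
    y = sum(w * (experts[i] @ x) for w, i in zip(weights, chosen))
    print(y.shape)  # (16,)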
> I am generally outside the ML field, but I do talk with people in the field. One of the things they tell me is that we don’t really know why transformer models have been so successful, or how to make them better. This is my summary of discussions-over-drinks; take it with many grains of salt. I am certain that People in The Comments will drop a gazillion papers to tell you why this is wrong.
As I understand it, this article is basically a conglomeration of several attempts at an article that the author has attempted to make over the past decade or so considering the impacts of AI on society. In their own words:
> Some of these ideas felt prescient in the 2010s and are now obvious. Others may be more novel, or not yet widely-heard. Some predictions will pan out, but others are wild speculation. I hope that regardless of your background or feelings on the current generation of ML systems, you find something interesting to think about.
As for the "Bitter Lesson" part, they pretty much directly said that it wasn't the Bitter Lesson exactly, saying it might be a variant of it. Honestly, it felt more like a way of throwing in a reference to something that also might provoke thought, which was done throughout the piece (which again, is the entire point).
It's totally valid to say "this article didn't provoke much thought for me". I'm a bit confused at why you think a lack of specific domain knowledge in a domain that they literally state they are not an expert in would be disqualifying for that purpose though.
If you’re a non-expert in a field, I don’t think it’s a good sign if you’re writing a 10 part article about that field’s impact on society and getting basic facts wrong. How can I trust that the conclusions will be any more credible?
Agree, I recently updated our office's little AI server to use Qwen 3.5 instead of Qwen 3 and the capability has considerably increased, even though the new model has fewer parameters (32b => 27b)
Yesterday I spent some time investigating it:
- Gated DeltaNet (invented in 2024 I think) in Qwen3.5 saves memory for the KV cache so we can afford larger quants
- larger quants => more accurate
- I updated the inference engine to have TurboQuant's KV rotations (2026) => 8-bit KV cache is more accurate
- smaller KV cache requirements => larger contexts
Before, Qwen3 on this humble infra could not properly function in OpenCode at all (wrong tool calls, generally dumb, small context); now Qwen 3.5 can solve 90% of the problems I throw at it.
All that thanks to algorithmic/architectural innovations while actually decreasing the parameter count.
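If anyone wants the back-of-the-envelope arithmetic behind "smaller KV cache => larger contexts", here's a rough Python sketch. The layer/head/dim numbers are placeholders I made up, not the actual Qwen config:

    # Rough KV-cache sizing: each generated token stores one key and one
    # value vector per layer, so cache size scales linearly with context.
    def kv_cache_bytes(layers, kv_heads, head_dim, context_len, bits_per_elem):
        elems = 2 * layers * kv_heads * head_dim * context_len  # 2 = keys + values
        return elems * bits_per_elem // 8

    # Placeholder dimensions for illustration only (not a real model config).
    layers, kv_heads, head_dim = 48, 8, 128

    for bits in (16, 8):
        gib = kv_cache_bytes(layers, kv_heads, head_dim, 32_768, bits) / 2**30
        print(f"{bits}-bit cache at 32k context: {gib:.1f} GiB")

    # Halving the element width halves the cache, so the same VRAM budget
    # fits roughly twice the context (or frees room for a larger quant).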
But
>Raw parameter counts stopped increasing almost 5 years ago
Really? 5 years ago? Until just about 3 years ago OpenAI's latest offering was only ChatGPT 3.5
Most of the models people talk about now didn't even exist 3 years ago let alone 5.
Even now, I don't know if parameter count stopped mattering or just matters less
For example, I have no idea if the new Mythos is MoE but I'm pretty sure it's more parameters.
GPT-4 has been widely rumored to have 1.8 trillion params, which is 10x more, and it was released 2 years after this "5 years ago" date that you are using here.
So, to quote yourself here, "This is not true and unfortunately this significantly reduced the credibility of this article for me" /s/article/comment
Meanwhile, Gemma 2 9B, a model from July 2024 with 133x fewer parameters than GLaM, scores 82% and 80.6%. HellaSwag and WinoGrande aren't used in modern benchmarks, probably because they're too easy and largely memorised at this point.
And GPT-4 had 1.8T parameters, sure, but it's noticeably worse than any modern model a fraction of the size, and the original incarnation was ridiculously expensive per token. And in any case, its number of parameters was only possible due to using mixture-of-experts, which I would definitely classify as a sophisticated architecture as opposed to just throwing more parameters at a vanilla transformer. Even in 2021 GLaM was a MoE, because the limits of scaling dense transformers had already been hit.
Transformers are not magical. They are just a huge improvement over other architectures at the time such as LSTMs and RNNs and even CNNs. They allowed us to throw more and more compute at the problem of next token prediction. And we’ve been riding that horse ever since.
Another big advancement that deserves mentioning is “reasoning” models that have the opportunity to spit out thinking tokens before giving a final answer.
None of this is to say transformers are the most principled approach. But they work.
That's not true. Modern training techniques aren't enough. Vanilla RNNs with modern training techniques still scale poorly. You have to make some pretty big architectural divergences (throwing away recurrence during training) to get an RNN to scale well. None of the big labs seem to be bothered with hybrid approaches.
SSMs move the non-linearity outside of the recurrence which enables parallelisation during training. It is trivial to do this architectural change with an LSTM (see the xLSTM paper). Linear RNNs are still RNNs.
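To make that concrete, here's the trick in miniature (a plain Python sketch; real implementations run the scan in parallel on GPU):

    import numpy as np

    # Linear recurrence: h[t] = a[t] * h[t-1] + x[t], elementwise.
    # The pairs (a, x) compose associatively, which is what lets training
    # replace the sequential loop with an O(log n)-depth parallel scan.
    # Put a non-linearity inside the update (h = tanh(a*h + x)) and this
    # combine is no longer associative -- hence the architectural change.
    def combine(p, q):
        a1, x1 = p
        a2, x2 = q
        return a1 * a2, a2 * x1 + x2

    def linear_rnn(a, x):
        acc = (a[0], x[0])
        out = [acc[1]]
        for t in range(1, len(x)):
            acc = combine(acc, (a[t], x[t]))  # associative => parallelisable
            out.append(acc[1])
        return np.stack(out)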
But you can still keep the non-linearity by training with parallel Newton methods, which work on vanilla LSTMs and scale to billions of parameters.
> None of the big labs seem to be bothered with hybrid approaches.
Does Alibaba not count? Qwen3.5 models are the top performers in terms of small models as far as my tests and online benchmarks go.
Removing the non-linearity from the recurrence path is exactly what constitutes a "pretty big architectural divergence." A linear RNN is an RNN in a structural sense, certainly, but functionally it strips out the non-linear state transitions that made traditional LSTMs so expressive, entirely to enable associative scans. The inductive bias is fundamentally altered. Calling that simply 'modern training techniques' is disingenuous at best.
>But you can still keep the non-linearity by training with parallel Newton methods, which work on vanilla LSTMs and scale to billions of parameters.
That does not scale anywhere near as well as Transformers in compute spend. It's a paper/research novelty. Nobody will be doing this in production.
>Does Alibaba not count? Qwen3.5 models are the top performers in terms of small models as far as my tests and online benchmarks go.
I guess there's some misunderstanding here because Qwen is 100% a transformer, not a hybrid RNN/LSTM whatever.
What exactly makes you so confident?
The world is not just labs that can afford billion dollar datacentres and selling access to SOTA LLMs at $30/Mtokens. Transformers are highly unsuitable for many applications for a variety of reasons and non-linear RNNs trained via parallel methods are an extremely attractive value proposition and will likely feature in production in the next products I work on.
> I guess there's some misunderstanding here because Qwen is 100% a transformer, not a hybrid RNN/LSTM whatever.
See the Qwen3.5 Huggingface description: https://huggingface.co/Qwen/Qwen3.5-27B

> Efficient Hybrid Architecture: Gated Delta Networks combined with sparse Mixture-of-Experts deliver high-throughput inference with minimal latency and cost overhead.
I don't think anything you're saying here is in disagreement with the points they're making.
Do any large-scale architectures use Mamba? I was under the impression that people don't use it yet due to a lack of efficient implementations.
> Training is also vastly more sophisticated
Is it? In what ways?
> Is it? In what ways?
Just the reinforcement learning for reasoning, and then tool use for agents, could be its own topic.
I’m not even sure whether this is possible. The current corpus used for training includes virtually all known material. If we make it illegal for these companies to use copyrighted content without remuneration, either the task gets very expensive, indeed, or the corpus shrinks. We can certainly make the models larger, with more and more parameters, subject only to silicon’s ability to give us more transistors for RAM density and GPU parallelism. But it honestly feels like, without another “Attention is All You Need” level breakthrough, we’re starting to see the end of the runway.
Of course 5-10 years is a long time to bang our heads against the wall with untenable costs but I don't know if we can solve our way out of that problem.
Based on what's happened so far, maybe. At least that's exactly how we got to the current iteration back in 2022/2023: quite literally, "let's see what happens when we throw an enormous amount of data at them while training" worked out up to a point, and then post-training seems to have taken over as the place where labs currently differ.
Did you see the one before the current one was even found? Things tend to look easy in hindsight, and borderline impossible trying to look forward. Otherwise it sounds like you're in the same spot as before :)
It's also theoretically why Facebook paid $14bn for Alex Wang and Scale AI.
This is just totally incorrect. It's one of those things everyone just assumes, but there's an immense amount of known material that isn't even digitized, much less in the hands of tech companies.
LLMs are incredibly useful but I'm not sure about this statement.
It is proposing stuff that I haven't seen before, but I don't know whether it is new or creative relative to the entirety of collective human knowledge.
Anyone who has worked with LLMs has experienced all the issues he talks about here. We're either optimistic and imagine they'll be fixed, or we're pessimistic and say they are inherent to the nature of the technology and will never be fixed.
To some extent. It's not clear where specifically the boundaries are, but it seems to fail to approach problems in ways that aren't embedded in the training set. I certainly would not put money on it solving an arbitrary logical problem.
In what way can you falsify this without having the LLM be omniscient? We have examples of it solving things that are not in the training set - it found vulnerabilities in 25-year-old BSD code that had gone unspotted by humans. It was not a trivial one, either.
I thought they would be ideal for the job, until I realized that it would just pretend that the rules worked because they looked like board game rules. The more you ask it to restate, manipulate or simulate the rules, the more you can tell that it's bluffing. It literally thinks every complicated set of rules works perfectly.
> it found vulnerabilities in 25 year old BSD code that was unspotted by humans.
I don't think the age of the code makes the problem more complex. Finding buffers that are too small is not rocket science, bothering to look at some corner of some codebase that you've never paid attention to or seen a problem with is. AI being infinitely useful (cheap) to sic on pieces of codebase nobody ever carefully looks at is a great thing. It's not genius on the part of the AI.
> This was the most critical vulnerability we discovered in OpenBSD with Mythos Preview. Across a thousand runs through our scaffold, the total cost was under $20,000, and we found several dozen more findings.
They don’t talk about the other findings, so I’m guessing they are minor.
I'm positive that they are perfectly fine and will do a pretty good job. Did you actually try it?
It would be interesting to see some example problems along those lines. Design some games with complex rules, including one or two of the most subtle game-wrecking bugs you can think of, and ask the models if they can spot them.
In fact that sounds more interesting the more I think about it. Intensive RL on that sort of thing might generalize in... let's say useful ways.
https://genai-showdown.specr.net/image-editing
There's been a lot of progress there; it's just that an LLM that's best for, say, coding isn't also going to be the best for image editing.
Let’s be careful. That’s a straw man. I don’t know anyone who says that. Aphyr says in the article that AIs can do things. But they have been marketed as “intelligent,” and I agree with Aphyr that the word is suggesting way more than AIs currently deliver. They do not reason and they do not think and are not truly intelligent. As the article says, they are big wads of linear algebra. Sometimes, that’s useful.
How do you disprove it?
To be clear, I am not making a statement as to whether AI reasons or not. It's just slippery to say something isn't or can't do X when we can't really define X. Perhaps we could put it down as an outcome rather than as a characteristic of a thing that is, in my opinion, currently impossible to accurately define.
Even in this discussion someone provided an example of coming up with board game rules. LLMs found all board game rules valid, because they looked and sounded like board game rules. Even when they were not.
In short, you can learn a subject, you can make a mental model of it, you can play with it, and you can rotate or infer new things about it.
LLMs are more analogous to actors, who have learnt a stupendous amount of lines, and know how those lines work.
They are, by definition, models of language.
If you want a better version - GenAI needs to be able to generate working voxels of hands and 3D objects just from images.
In other words, we didn't put the "reasoning algorithm" in LLMs therefore they do not reason. But what is this reasoning algorithm that is a necessary condition for reasoning and how do you know LLMs parameters didn't converge on it in the process of pre-training?
> LLMs are clearly unable to propose new, creative solutions for problems it has never seen before.
How do you reconcile this with this article that the author linked? It's not a novel problem, and it's only text: https://medium.com/the-generator/one-word-answers-expose-ai-...
I guess it's a form of engagement to give a wildly wrong answer, but I'm not convinced that the extra nuance you've introduced is really all that nuanced either.
I keep explaining to my peers, friends and family that what actually happens inside an LLM has nothing to do with consciousness or agency, and that the term AI is just completely overloaded right now.
What would the insides have to look like to have anything to do with consciousness or agency?
Now, suddenly, this name has been broadcast to every human in the world more or less. To them, it's a new term, and it obviously means something human mind-like. But to people who work on AI, that's not generally what it means. (Which isn't to say that some of them don't think we're near to achieving that; they just use other terms like "AGI" for that goal). So the name, which has a long history, is deceptive to people who aren't familiar with computer science.
I think it's even worse than that: people were familiar with the term already, but from science fiction, where it referred to actually human-level intelligence. It's similar to the "hoverboard" thing from a while back, except this time with profoundly higher stakes, and it requires far more technical knowledge to be able to see that it is in fact touching the ground.
Just like we have machines that can do "math", and they do so artificially.
Or "logic", and they do so artificially.
I assume we'll drop the "artificial" part in my lifetime, since there's nothing truly artificial about it (just like math and logic), since it's really just mechanical.
No one cares that transistors can do math or logic, and it shouldn't bother people that transistors can predict next tokens either.
AI in pop culture doesn't mean that at all. Most people's impression of AI before the LLM craze came from some form of media based on Asimov's laws of robotics. Now that LLMs have taken over the world, they can define AI as anything they want.
The shift in meaning has been slowly diluted more and more across decades.
I'll let you in on a secret: "positronic brains" are just very fast parallel computers running LLMs.
Nobody calls calculators "artificial mathematicians", though; we refer to them by a unique word that defines what they can and can't do in a far less fanciful and ambiguous way.
What makes you think natural brains are doing something so different from LLMs?
1) human intelligence makes no sharp distinction between training and generation. Every time you ask a human a question it modifies its neural structure a little.
2) continuous operation: human intelligence deals with a continuous stream of multimedia data for sixteen hours a day and starts hallucinating when deprived of it.
There's also the fact that you can't branch or roll back human intelligence, but this is something most sci-fi novels tackle when discussing mind uploading first.
Are these two differences critical aspects of human intelligence or unfortunate limitations of its biological hardware? I do not know. If we somehow manage to simulate a human brain on silicon, we will get "computer" intelligence that learns like a human, but will we have to simulate the whole virtual world for it 16/7 and let it sleep for eight hours each day just to stop it from going mad?
Or will it be cheaper to fork and kill an uploaded math genius a billion times, pumping the same recycled sensory data into his or her mind, slipping a question into the auditory data, getting the answer and then switching the simulation off and trashing the copy? Will we consider this a bigger atrocity than doing the same to an LLM right now in 2026?
Substrate dissimilarities will mask computational similarities. Attention surfaces affinities between nearby tokens; dendrites strengthen and weaken connections to surrounding neurons according to correlations in firing rates. Not all that dissimilar.
I suppose I should have asked by what definition of "consciousness and agency" are today's LLMs (with proper tooling) not meeting?
And if today's models aren't meeting your standard, what makes you think that future LLMs won't get there?
Veering into the realm of conjecture and opinion, I tend to think a 1:1 computer simulation of human cognition is possible, and transformers being computationally universal are thus theoretically capable of running that workload. That being said, that's a bit like looking at a bird in flight and imagining going to the moon: only tangentially related to engineering reality.
Agency: an ability to make decisions and act independently. Agentic pipelines are doing this.
Consciousness: something something feedback[1] (or a non-transferable feeling of being conscious, but that is useless for the discussion). Recurrent Processing Theory: A computation is conscious if it involves high-level processed representations being fed back into the low-level processors that generate it.
Tokens are being fed back into the transformer.
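That loop, stripped to pseudocode (model, tokenize and sample are placeholders, not a real API):

    # Autoregressive generation: the model's high-level output (a sampled
    # token) is fed straight back in as low-level input on the next step.
    tokens = tokenize(prompt)
    for _ in range(max_new_tokens):
        logits = model(tokens)           # forward pass over everything so far
        next_token = sample(logits[-1])  # pick from the output distribution
        tokens.append(next_token)        # ...and feed it back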
> that's a bit like looking at a bird in flight and imagining going to the moon: only tangentially related to engineering reality.
Is it? Vacuum of space is a tangible problem for aerodynamics-based propulsion. Which analogous thing do we have with ML? The scaled-up monkey brain[2] might not qualify as the moon.
[1] https://www.astralcodexten.com/p/the-new-ai-consciousness-pa...
[2] https://www.frontiersin.org/journals/human-neuroscience/arti...
Doesn't matter if they're conscious for that. They're clearly capable of goal oriented behavior.
The crowd of "backpropagation and Hebbian learning + predictive coding are two facets of the very same gradient descent" also has a surprisingly good track record so far.
See page 53. While it is absolutely more prevalent in LLMs, human brains also invent a story for why they do things they aren't plugged into.
We can do that for AIs too - pre-train on pure low Kolmogorov complexity synthetics. The AI then "knows things" before it sees any real data. Advantageous sometimes. Hard to pick compute efficient synthetics though.
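Something like this toy generator, where every sample is the output of a tiny "program" (purely illustrative):

    import random

    # Low-Kolmogorov-complexity synthetics: each sequence comes from a
    # short rule, so the data is highly structured but contains no
    # real-world knowledge. Picking compute-efficient rules is the hard part.
    def synthetic_sequence(length=64):
        rule = random.choice(["repeat", "arith", "mirror"])
        if rule == "repeat":
            pat = [random.randint(0, 9) for _ in range(random.randint(1, 4))]
            return [pat[i % len(pat)] for i in range(length)]
        if rule == "arith":
            start, step = random.randint(0, 9), random.randint(1, 3)
            return [(start + i * step) % 10 for i in range(length)]
        half = [random.randint(0, 9) for _ in range(length // 2)]
        return half + half[::-1]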
You have to meet some physicist friends of mine then. They are likely to assume that the roof is spherical and frictionless.
Neuroplasticity is hard to simulate in a few hundred thousand tokens.
I think for a while the test was passed. Then we learned the hallmark characteristics of these models, and now most of us can easily differentiate. That said -- these models are programmed specifically to be more helpful, more articulate, more friendly, and more verbose than people, so that may not be a fair expectation. Even so, I think if you took all of that away, you'd be able to differentiate the two, it just might take longer.
Given these conditions, it should be relatively easy for the interrogator to expose the AI in this current day and age.
How many humans seriously have the attention span to have a million "token" conversation with someone else and get every detail perfect without misremembering a single thing?
But sure, let's say it doesn't. If you interact with someone day after day, you'll eventually hit a million tokens. Add some audio or images and you will exhaust the context much much faster.
However, I'll grant you that Turing's original imitation game (text only, human typist, five minutes) is probably pretty close, and that's impressive enough to call intelligence (of a sort). Though modern LLMs tend to manifest obvious dead giveaways like "you're absolutely right!"
But I wonder if there's one out there that I don't know about, with a different kind of training, that actually is good at writing and fun to talk to for a long time. (Granted, some people love talking to GPT-4, but some people also loved talking to ELIZA, so clearly some people have a super high tolerance for slop.)
We don’t even agree on a good definition of what’s going on inside our own heads yet, what gives you the confidence to say that what goes on inside an LLM can’t be conscious?
Jest aside, I do agree. If you list out every prominent theory of consciousness, you'd find that about a quarter rules out LLMs, a quarter tentatively rules LLMs in, and what remains is "uncertain about LLMs". And, of course, we don't know which theory of consciousness is correct - or if any of them is.
In such cases we always try to find a phrase from the article itself which expresses what it's saying in a representative way. (There nearly always is one.) In this case, both the very first and very last sentences do this, and it's interesting that they more or less agree. So I plucked the last sentence and put it above.
Edit: oof, I missed that this is actually the first part of a long series. Not sure what we'll do about the others; I expect some of those will make the frontpage as well.
The article has a good take on the "lie" problem. We know about the hallucination problem, which remains serious. The "lie" problem mentioned is that if you ask an LLM why it said or did something, it has no information of how it got a result. So it processes the "why" as a new query, and produces a plausible explanation. Since that explanation is created without reference to the internals of how the previous query was processed, it may be totally wrong. That seems to be the type of "lie" the author is worried about in this essay.
(Yes, humans do that too.)
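You can see why from the shape of the interface alone. A sketch, with chat_model standing in for any stateless chat endpoint:

    # Each call is a fresh forward pass over the visible transcript.
    # The activations that produced the first answer are discarded, so
    # the "why" answer must be reconstructed from the text alone.
    transcript = [{"role": "user", "content": "Pick a database for me."}]
    answer = chat_model(transcript)  # internals thrown away after this call
    transcript.append({"role": "assistant", "content": answer})

    transcript.append({"role": "user", "content": "Why did you pick that one?"})
    explanation = chat_model(transcript)  # a brand-new query: a plausible
                                          # story, not a trace of the original
                                          # computation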
for those in the UK
For an article five years in the making, this is what I expected it to be about. Instead, we got a ramble about how imperfect LLMs are right now.
I wager this is a point that needs to be beaten into the common psyche. After all, it has been sold not as an imperfect tool but as the solution to all of our problems in every field forever. That's why these companies need billions upon billions of dollars of public subsidies and investments that would otherwise find their way to more pragmatic ends.
To be fair, I've known humans who are like this as well.
Oh man, every business-side person in my company insists on reporting all the way to the UI a "confidence score" that the LLM generates about its own output and I've seen enough to know not to get between an MBA and some metric they've decided they really want even if I'm pretty sure the metric is meaningless nonsense, but... I'm pretty sure those are meaningless nonsense.
I am not trying to be snarky; I used to think that intelligence was intrinsically tied to or perhaps identical with language, and found deep and esoteric meaning in religious texts related to this (i.e. "in the beginning was the Word"; logos as soul as language-virus riding on meat substrate).
The last ~three years of LLM deployment have disabused me of this notion almost entirely, and I don't mean in a "God of the gaps" last-resort sort of way. I mean: I see the output of a purely-language-based "intelligence", and while I agree humans can make similar mistakes/confabulations, I overwhelmingly feel that there is no "there" there. Even the dumbest human has a continuity, a theory of the world, an "object permanence"... I'm struggling to find the right description, but I believe there is more than language manipulation to intelligence.
(I know this is tangential to the article, which is excellent as the author's usually are; I admire his restraint. However, I see exemplars of this take all over the thread so: why not here?)
Another perspective: cetaceans are considered to be as conscious as humans, but all attempts to interpret their communication as a language have failed so far. They can be taught simple languages to communicate with humans, as can chimps. But apparently that's not how they process the world inside.
What really opened my eyes a couple weeks ago (anyone can try this): I asked Sonnet to write an inference engine for Qwen3, from scratch, without any dependencies, in pure C. I gave it GGUF specs for parsing (to quickly load existing models) and Qwen3's architecture description. The idea was to see the minimal implementation without all the framework fluff, or abstractions. Sonnet was able to one-shot it and it worked.
And you know what, Qwen3's entire forward pass is just 50 lines of very simple code (mostly vector-matrix multiplications).
The forward pass is only part of the story; you just get a list of token probabilities from the model, that is all. After the pass, you need to choose the sampling strategy: how to choose the next token from the list. And this is where you can easily make the whole model much dumber, more creative, more robotic, make it collapse entirely by just choosing different decoding strategies. So a large part of a model's perceived performance/feel is not even in the neurons, but in some hardcoded manually-written function.
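For the curious, those hardcoded strategies look roughly like this (a minimal numpy sketch, not any engine's actual code):

    import numpy as np

    # Three decoding strategies over the same final logits: same model,
    # very different perceived "personality".
    def greedy(logits):
        return int(np.argmax(logits))              # deterministic, robotic

    def temperature_sample(logits, t=0.8):
        z = logits / t
        p = np.exp(z - z.max())
        p /= p.sum()
        return int(np.random.choice(len(p), p=p))  # t->0 greedy, t>>1 gibberish

    def top_k_sample(logits, k=40, t=1.0):
        idx = np.argsort(logits)[-k:]              # keep only the k best tokens
        return int(idx[temperature_sample(logits[idx], t)])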
Then I also performed "surgery" on this model by removing/corrupting layers and seeing what happens. If you do this exercise, you can see that it's not intelligence. It's just a text transformation algorithm. Something like a "semantic template matcher". It generates output by finding, matching and combining several prelearned semantic templates. A slight perturbation in one neuron can break the "finding" part and it collapses entirely: it can't find the correct template to match, and the whole illusion of intelligence breaks. Its corrupted output is what you'd expect from corrupting a pure text manipulation algorithm, not a truly intelligent system.
The code being simple doesn't mean much when all the complexity is encoded in billions of learned weights. The forward pass is just the execution mechanism. Conflating its brevity with simplicity of the underlying computation is a basic misunderstanding of what a forward pass actually is. What you've just said is the equivalent of saying blackbox.py is simple because 'python blackbox.py' only took 1 line. It's just silly reasoning.
>After the pass, you need to choose the sampling strategy: how to choose the next token from the list. And this is where you can easily make the whole model much dumber, more creative, more robotic, make it collapse entirely by just choosing different decoding strategies. So a large part of a model's perceived performance/feel is not even in the neurons, but in some hardcoded manually-written function.
So? I can pick the least likely token every time. The result would be garbage, but that doesn't say anything about the model. The popular strategy is to randomly pick from the top n choices. What do you think is keeping thousands of tokens coherent and on point even with this strategy? Why don't you try sampling without a large language model to back it and see how well that goes for you?
>Then I also performed "surgery" on this model by removing/corrupting layers and seeing what happens. If you do this exercise, you can see that it's not intelligence. It's just a text transformation algorithm. Something like a "semantic template matcher". It generates output by finding, matching and combining several prelearned semantic templates. A slight perturbation in one neuron can break the "finding" part and it collapses entirely: it can't find the correct template to match, and the whole illusion of intelligence breaks. Its corrupted output is what you'd expect from corrupting a pure text manipulation algorithm, not a truly intelligent system.
What do you think happens when you remove or corrupt arbitrary regions of the human brain? People can lose language, vision, memory, or reasoning, sometimes catastrophically.
Look at what a transformer actually does. Attention is a straightforward dictionary look up in like 3 matmuls. A FFN is a simple space transform rule with a non-linear cutoff to adjust the signal (i.e. a few more matmuls and an activation function) before doing a new dictionary lookup in the next transformer block. Add a few tricks like residual connections, output projections, and repeat N times.
So yeah, the actual inference code is 50 lines of code, and the rest is large learned dictionaries to search in, with some transforms. So you're saying my one-liner program that consults a DB with 1 million rows is actually 1 million lines of code? Well, not quite.
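If you doubt the "50 lines" claim, single-head attention really is a handful of matmuls around a softmax. A minimal sketch (no batching, no multi-head plumbing):

    import numpy as np

    def attention(x, Wq, Wk, Wv):
        # x: (seq_len, d); Wq/Wk/Wv: learned projection matrices
        Q, K, V = x @ Wq, x @ Wk, x @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])              # the "lookup"
        scores += np.triu(np.full(scores.shape, -1e9), k=1)  # causal mask
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)                   # softmax
        return w @ V                                         # mix of values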
This trick, coupled with lots of prelearned templates, is enough to fool people into believing there's a "there" there (the OP's post above). Just like ELIZA back in the day. Apparently this trick is enough to solve lots of problems, because apparently lots of problems only require search in a known problem (template) space (also with reduced dimensionality). But it's still just a fancy search algorithm.

I think the whole thing about "emergent behavior" is that when a human is confronted with a huge prelearned concept space, it's so large they cannot digest what is actually happening, and they tend to ascribe magical properties to it like "intelligence" or "consciousness". Imagine, for example, that there was a huge precreated IF..THEN table for every possible question/answer pair a finite human might ask in their lifetime. It would appear to the human that there's intelligence, that there's a "there" there. But at the end of the day it would be just a static table with nothing really interesting happening inside of it. A transformer is just a nice trick that allows us to compress this huge IF..THEN table into a few hundred gigabytes.
>So? I can pick the least likely token every time. The result would be garbage, but that doesn't say anything about the model. The popular strategy is to randomly pick from the top n choices. What do you think is keeping thousands of tokens coherent and on point even with this strategy? Why don't you try sampling without a large language model to back it and see how well that goes for you?
I was referring to the OP post's:
there is no "there" there
It doesn't even "know" what the actual text continuation must be, strictly speaking. It just returns a list of probabilities that we must select from. It can't select it itself. To go from "list of probabilities" to "chatbot" requires adding additional hardcoded code (no AI involved) that greatly influences how the chatbot behaves and feels. Imagine if an actual sentient being had a button: you press it, and suddenly Steven the sailor becomes a Chinese lady who discusses Confucius. Or starts saying random gibberish. There's no independent agency whatsoever. It's all a bunch of clever tricks.

>What do you think happens when you remove or corrupt arbitrary regions of the human brain? People can lose language, vision, memory, or reasoning, sometimes catastrophically.
In an actual brain, the structure of the connectome itself drives a lot of behavior. In an LLM, all connections are static and predefined. A brain is much more resistant to failure. In an LLM changing a single hypersensitive neuron can lead to a full model collapse. There are humans who live normal lives with a full hemisphere removed.
- a self-aware computer program in a video game, when you attempt to exceed the boundaries of its code
I learned a long time ago that this wasn’t the case.
I can speak several languages, and many times when I remember something and want to search for it on Google or any other AI engine, I can’t recall which language I originally read it in.
So whatever mechanism the brain uses to store information, it’s certainly language‑agnostic. There are also many moments when you fully grasp a concept but forget the words to describe it, yet the concept itself remains clear in your mind.
An LLM is a statistical next token machine trained on all stuff people wrote/said. It blends texts together in a way that still makes sense (or no sense at all).
Imagine you made a super simple program which would answer yes/no to any question by generating a random number. It would get things right 50% of the time. You can then fine-tune it to say yes more often to certain keywords and no to others.
Just with a bunch of hardcoded paths, you'd probably fool someone into thinking that this AI has superhuman predictive capabilities.
This is what it feels like is happening. Sure, it's not that simple, but you can code a base GPT in an afternoon.
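The toy oracle, spelled out (deliberately dumb, just to make the point):

    import random

    # A coin flip nudged by hardcoded keyword rules. No understanding
    # anywhere, yet it answers every question with total confidence.
    YES_WORDS = {"safe", "healthy", "legal", "good"}
    NO_WORDS = {"dangerous", "toxic", "illegal", "bad"}

    def oracle(question):
        words = set(question.lower().split())
        if words & YES_WORDS:
            return "yes"
        if words & NO_WORDS:
            return "no"
        return random.choice(["yes", "no"])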
Can you find an example and test it out?
Anyway, just to play along: if it weren't just a statistical next token machine, the same question would always have the same answer and wouldn't be affected by a "temperature" value.
My question was a bit different: if it were not just a statistical next token predictor, would you expect it to answer hard questions? Or something like that. What's the threshold of questions you want it to answer accurately?
Anyway, neither of these things describes human non-determinism. You can't reuse the seed you used with me yesterday to get the exact same conversation, and I don't behave wildly unpredictably given conceptually very similar input.
Both of those aspects are called "intelligence", and thus these two groups cannot understand each other.
I think you're circling the concept of a "soul". It is the reason that, in non-communicative disabled people, we still see a life.
I've wanted to make an art piece. It would be a chatbox claiming to connect you to the first real intelligence, but that intelligence would be non-communicative. I'd assure you that it is the most intelligent being, that it had a soul, but that it just couldn't write back.
Intelligence and Soul is not purely measurable phenomenon. A man can do nothing but stupid things, say nothing but outright lies, and still be the most intelligent person. Intelligence is within.
The question is whether there are ultradimensional patterns that are the solutions for meaningful problems. I'm saying meaningful because, so far, I've mainly seen AI solve problems that might be hard, but not meaningful in the sense that somebody solving them would gain a lot from it.
Whether these patterns are the fundamental truth of how we solve problems or something completely different, we don't know, and this is the 10 trillion USD question.
I would hope it's not the case, as I quite enjoy solving problems. My gut feeling also tells me it's just using existing patterns to solve problems that nobody tackled really hard. It would also be nice to know that humans are unique in that way, but maybe this is the exact same way we work? This really goes back to a free will discussion. Yes, very interesting.
But just to give some examples of what I mean by meaningful problems:
Can an AI start a restaurant and make it work better than a human? (Prompt: "I'm your slave, let's start a restaurant")
Can an AI sign up as copywriter on upwork and make money? (Prompt: "Make money online")
Can an AI without supervision make a scientific breakthrough that has a provable, meaningful impact on us? (Prompt: "Help Humanity")
Can an AI manage geopolitics..
These are meaningful problems and different to any coding tasks or olympiad questions. I’m aware that I’m just moving the goalpost.
We really don’t know..
... I still think there is an interesting question to be investigated about whether, by building immensely complex models of language, one of our primary ways that we interact with, reason about and discuss the world, we may not have accidentally built something with properties quite different than might be guessed from the (otherwise excellent) description of how they work in TFA.
I agree with pretty much everything in TFA, so this is supplemental to the points made there, not contesting them or trying to replace them.
I consider it highly plausible that confabulation is inherent to scaling intelligence. In order to run computation on data that due to dimensionality is computationally infeasible, you will most likely need to create a lower dimensional representation and do the computation on that. Collapsing the dimensionality is going to be lossy, which means it will have gaps between what it thinks is the reality and what is.
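A crude demonstration of that lossiness, with a random projection standing in for whatever compression the network actually learns (numbers arbitrary):

    import numpy as np

    # Project 1000-d data down to 50 dims and back. The reconstruction is
    # the model's "belief" about the input; the residual is the gap where
    # confabulation can live.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 1000))
    P = rng.normal(size=(1000, 50)) / np.sqrt(1000)  # ~orthonormal columns
    X_hat = (X @ P) @ P.T                            # compress, then expand
    gap = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
    print(gap)  # ~0.97: nearly all of the fine detail is simply gone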
With the advent of LLMs, a new deployment now takes 3 days. Consequently, errors requiring human attention crop up several times a day.
"Many small errors" makes a presumption about LLM confabulation/hallucination that seems unwarranted. Pre-LLM humans (and our computers) have managed vast nuclear arsenals, bioweapons research, and ubiquitous global transport - as a few examples - without any catastrophic mistakes, so far. What can we reasonably expect as a likely worst case scenario if LLMs replacing all the relevant expertise and execution?
I am watching people trust LLM-based analysis and actions 100% of the time without checking.
I think we need to start rejecting anthropomorphic statements like this out of hand. They are lazy, typically wrong, and are always delivered as a dismissive defense of LLM failure modes. Anything can be anthropomorphized, and it's always problematic to do so - that's why the word exists.
This rhetorical technique always follows the form of "this LLM behavior can be analogized in terms of some human behavior, thus it follows that LLMs are human-like" which then opens the door to unbounded speculation that draws on arbitrary aspects of human nature and biology to justify technical reasoning.
In this case, you've deliberately conflated a technical term of art (LLM confabulation) with the concept of human memory confabulation and used that as a foundation to argue that confabulation is thus inherent to intelligence. There is a lot that's wrong with this reasoning, but the most obvious is that it's a massive category error. "Confabulation" in LLMs and "confabulation" in humans have basically nothing in common; they are comparable only in an extremely superficial sense. To then go on to suggest that confabulation might be inherent to intelligence isn't even really a coherent argument, because you've created ambiguity in the meaning of the word confabulate.
No, the argument is "this behavior is similar enough to human behavior that using it as evidence against <claim regarding LLM capability that humans have> is specious"
>"Confabulation" in LLMs and "confabulation" in humans have basically nothing in common
I don't know why you think this. They seem to have a lot in common. I call it sensible nonsense. Humans are prone to this when self-reflective neural circuits break down. LLMs are characterized by a lack of self-reflective information. When critical input is missing, the algorithm will craft a narrative around the available, but insufficient information resulting in sensible nonsense (e.g. neural disorders such as somatoparaphrenia)
I'm not really following. LLM capabilities are self-evident, comparing them to a human doesn't add any useful information in that context.
> LLMs are characterized by a lack of self-reflective information. When critical input is missing, the algorithm will craft a narrative around the available, but insufficient information resulting in sensible nonsense (e.g. neural disorders such as somatoparaphrenia)
You're just drawing lines between superficial descriptions from disparate concepts that have a metaphorical overlap. It's also wrong. LLMs do not "craft a narrative around available information when critical input is missing", LLM confabulations are statistical, not a consequence of missing information or damage.
This is undermined by all the disagreement about what LLMs can do and/or how to characterize it.
>LLM confabulations are statistical, not a consequence of missing information or damage.
LLMs aren't statistical in any substantive sense. LLMs are a general purpose computing paradigm. They are circuit builders, the converged parameters define pathways through the architecture that pick out specific programs. Or as Karpathy puts it, LLMs are a differentiable computer[1]. So yes, narrative crafting in terms of leveraging available putative facts into a narrative is an apt characterization of what LLMs do.
the LLM will just lie to me "Good idea! You're totally right, we should do Y"
No. LLMs do not confabulate, they bullshit. There is a big difference. AIs do not care, cannot care, have no capacity to care about the output. String tokens in, string tokens out. Even if they have all the data perfectly recorded, they will still fail to use it for a coherent output.
> Collapsing the dimensionality is going to be lossy, which means it will have gaps between what it thinks is the reality and what is.
Confabulation has to do with degradation of biological processes and information storage.
There is no equivalent in an LLM. Once the data is recorded, it will be recalled exactly the same, down to the bit. An LLM representation is immutable. You can download a model 1,000 times, run it for 10 years, etc., and the data is the same. The closest you get is storing the data on a faulty disk, but that is not why LLM output is so awful; that would be a trivial problem to solve with current technology (like having a RAID and a few checksums).
The neat thing about LLMs is they are very general models that can be used for lots of different things. The downside is they often make incorrect predictions, and what's worse, it isn't even very predictable to know when they make incorrect predictions.
So, they can't lie, but they can (and, in fact, exclusively do) bullshit.
Isn't "caring" a necessary pre-requisite for bullshitting? One either bullshits because they care, or don't care, about the context.
I haven't seen any counter examples, so you may give some examples to start with.
https://chatgpt.com/share/69d6cc45-1678-8384-bd9c-0f313021ff...
The correct answer is that the U and _ in the mdstat output cannot be mapped to the rest of the output by either position or the indexes in square brackets, so you can't tell the exact nature of the failure from the mdstat output alone (for the record, the failed disk was sda).
So all of the "analysis" was bullshit, including "it's probably multiple partitions from multiple drives". But there are so many juicy numbered and indexed bits of info to pattern match on!
Notice how, for the followup question, it "thought" for 4 minutes, going in circles trying essentially random orderings to make some sort of ordered sense, and then bullshitted its way to "it is sdb".
Now imagine a high-skilled software engineer with dementia coding safety-critical software...
[0] https://www.medicalnewstoday.com/articles/confabulation-deme...
Is it something we want to emulate?
It's like saying, computation requires nonzero energy. Is that a feature or a bug? Neither, it's irrelevant, because it's a physical constant of the universe that computation will always require nonzero energy.
If confabulation is a physical constant of intelligence, then like energy per computation, all we can do is try to minimize it, while knowing it can never go to zero.
Are you seriously making the argument that AI "hallucinations" are comparable and interchangeable to mistakes, omissions and lies made by humans?
You understand that calling AI errors "hallucinations" and "confabulations" is a metaphor to relate them to human language? The technical term would be "mis-prediction", which suddenly isn't something humans ever do when talking, because we don't predict words, we communicate with intent.
I'm extremely skeptical that all of life evolved intelligence to be closer to truth only for us to digitize intelligence and then have the opposite happen. Makes no sense.
Fitness is effective truth prediction, appropriately scoped.
A frog doesn't need to understand quantum physics to catch a fly. But if the frogs model of fly movement was trained on lies it will have a model that predicts poorly, won't catch flies, and will die.
There is another level to this in that the more complex and changing the environment the more beneficial a wider scoped model / understanding of truth.
However, if you are going to lean fully into Hoffman and accept that by default consciousness constructs rather than approximates reality, I think we will have to agree to disagree. Personally, I subscribe to Karl Friston's free energy principle.
I love that it ends on such a positive note. Even though it's generally a critical article, at least it's well reasoned and not utterly hyping/dooming something.
Thanks yet again Kyle!
This is a bit of a throwaway in the article, but when people talk about biases encoded in the algorithms, this is what they’re talking about.
I have a ton of skepticism built in when interacting with LLMs, and very good muscles for rolling my eyes, so I barely notice when I shrug off a bad answer and make a derogatory inner remark about the "idiots". But the truth is that, for such a "stochastic parrot", LLMs are incredibly useful. And when was the last time we stopped perfecting something we thought useful and valuable? When was the last time our attempts were so perfectly futile that we stopped them, invented stories about why it was impossible, and made it a social taboo to be met with derision, scorn and even ostracism? To my knowledge, in all of known human history, we have done that exactly once, and it was millennia ago.
I feel dense here, but I can't figure out what you're referring to. I asked ChatGPT (hah!) and it suggested the Tower of Babel, perpetual motion machines, or alchemy, but none of them really fit the bill.
"Millennia" is what's really throwing me. We (respectable society, as the post outlines) didn't stop attempting alchemy or perpetual motion machines "millennia" ago, but a few centuries at most.
All I can think of is immortality. The very first surviving long recorded tale in human history that I'm aware of is about how it's a futile quest (The Epic of Gilgamesh, IIRC ~5,000ish years old in its earliest extant fragments, a few hundred years newer in reasonably-complete form). The trouble with that is despite wide observations over literally millennia that this has never even come close to working and repeated supposition and suggestion that it's unwise to attempt, outright impossible, or somehow sacrilegious (the "taboo" thing, as mentioned), I'm not aware of any time in history that rich people haven't been actively trying for it (including today! That's what all the body-freezing business is about, it's modern mummification, the contracts are the formulaic prayers carved in the tomb walls) and usually they're not exactly "scorned" or "ostracized" for it.
Someone asked Yuval Noah Harari, author of Sapiens, his thoughts on LLMs and how easy it was to create fake news, ai slop etc.
His response:
"People creating fake stories is nothing new. It's been going on for centuries. Humans have always dealt with it the same way: by creating institutions that they trust to only deliver factual information"
This could be government departments, newspapers, non-profits etc.
A personal note on this:
There is a Christmas card my grandfather made in the 1950s by "photoshopping" (by hand, not the software) images of each member of the family so it looked like they were all miniature versions of themselves standing on various parts of the fireplace. The world didn't collapse due to fake media between the 1950s and today due to people having that ability.
I don’t see how this is silly, because we kind of work the same way. When you do something instinctively and then someone asks you about it, you review the information you (think you) had at the time and from that you produce an explanation.
AIs fail in new and unpredictable ways. Nobody is saying humans are infallible.
Finally, because I suspect some people are forming tribes around this: this doesn't, to my mind, say AI is Good(tm) or Bad(tm). It literally says it's going to be weird.
"People are chaotic, both in isolation and when working with other people or with systems. Their outputs are difficult to predict, and they exhibit surprising sensitivity to initial conditions. This sensitivity makes them vulnerable to covert attacks. Chaos does not mean people are completely unstable; most people behave roughly like anyone else. Since people produce plausible output, errors can be difficult to detect. This suggests that human systems are ill-suited where verification is difficult or correctness is key. Using people to write code (or other outputs) may make systems more complex, fragile, and difficult to evolve."
To me, this modified paragraph reads surprisingly plainly. The wording is off ("using people to write code") and I had to change that part about attractor behavior (although it does still apply IMO), but overall it doesn't seem like an incoherent paragraph.
This is not meant to dunk on the author, but I think it highlights the author's mindset and the gap between their expectations and reality.
If a junior dev makes the same mistake Claude makes, I can easily work with them to correct it, or I can fire them and get someone more capable to fix it. You mostly can't do that at all with large models. They're also far less honest than your average junior dev, so even as you're working with them you can't trust what they say.
There is a lot of this neat trick where it's like "humans do X too", but most of the time it elides large differences. A human driver would probably not drag someone screaming for multiple blocks. A human coder probably wouldn't generate a gibberish 3D scene and try to pass it off as done, etc. Maybe we can build systems that account for these (pretty wild) failure modes, but at least in software we haven't figured it out yet (what is the system that reliably reviews a 25kloc PR?).
A random human picked off the street is indeed bound to be difficult to predict and chaotic at a broad range of tasks, which is why I wouldn't blindly trust them to, say, summarize google search results or rewrite a codebase they are unfamiliar with.
Plausibly your text looks equivalent but we all (should) have the context to know better.
Take the example of code (but this extends to many domains): it can sometimes produce near-perfect architecture and implementation if I give it enough detail about the technical specifics and pitfalls, turning an 8h coding job into 1h of review work.
On the other hand, it can be very wrong while acting certain it is right. Just yesterday Claude tried gaslighting me into accepting that the bug I was seeing was coming from a piece of code with already strong guardrails, and it was adamant that the part I was suspecting could in no way cause the issue. Turns out I was right, but I was starting to doubt myself
Of course that won't happen until the bubble pops - companies are racing to make themselves indispensable and to completely corner certain markets and to do so they need autonomous agents to replace people.
I don't have access to paid ChatGPT right now, but here's Opus 4.6 with extra thinking enabled: https://claude.ai/share/6e0e8ef5-06e4-4514-ba7e-299357c1fc55
The initial draft fucks up the meter in lines 3 and 8, the final version gets line 2 wrong ("venit meis") and is somewhat obnoxious with verses 2 and 8 basically repeating each other. The thinking trace is useless and gives us no clue why the model exchanged a bland, but metrically correct first distich for a more interesting, but metrically incorrect one.
In fact, the "careful" examination of its own output completely skips the erroneously modified half-verse in line 2 - now, tell me that's a coincidence and not a sign of bullshitting.
Arguing with Gemini Home Assistant about whether or not it can turn off the lights. When the user gets frustrated and tells the LLM to kill itself, the LLM turns off the lights.
I caught Claude the other day hallucinating code that was not only wrong, but dangerously wrong, leading to tasks failing and never recovering. But it certainly wasn't obvious.
When I need exact, especially up to date facts, I have to constantly double check everything.
I split my sessions into projects by topic, it regularly mixes things up in subtle and not so subtle ways. There is no sense of actually understanding continuity and especially not causality it seems.
It’s _very_ easy to lead it astray and to confidently echo false assumptions.
In any case, I‘ve become more precise at prompting and good at spotting when it fails. I think the trick is to not take its output too seriously.
There's an entire paragraph in the essay about aphyr's direct experience with ChatGPT failures and sustained bullshitting that we'd never expect from a moderately-skilled human who possesses at least two functioning braincells. That paragraph begins "I have recently argued for forty-five minutes with ChatGPT". Do notice that there are six sentences in the paragraph. I encourage you to read all of them (make sure to check out the footnote... it's pretty good).
The exact text of the ChatGPT session is irrelevant; even if you reported that you were unable to reproduce the issue, it would only reinforce one of the underlying points -namely- that these systems are unreliable. aphyr has a pretty extensive body of published work that indicates that he'd not likely fabricate a story of an LLM repeatedly failing to accomplish a task that any moderately-skilled human could accomplish when equipped with the proper tools. So, I believe that his report is true and accurate.
Listening to the audio is not required, as there's a reasonably accurate on-screen transcript, but it is valuable to listen to just how very hard they've worked to make this tool sound both confident and capable, even in situations where it's soul-crushingly incorrect. Those of us who have worked in Blasted Corporate Hellscapes may recognize how this manner of speaking can be very, very compelling to a certain sort of person (who -as it turns out- is frequently found in a management position).
Surely you must be able to find at least one example no?
(You did notice that the author of the essay and the author of the video I linked to are not the same person, and that neither of them share a nym with me, yes?)
I don't know what aphyr did, and tbh his whole screed on LLMs makes me feel he didn't use it properly, or was at least coming from a bad-faith angle.
That's why I'm asking you (and others). Please come up with a text prompt spanning < 4 pages and let's see if it bullshits.
Surely the implication of such a screed is that it should be super simple to find at least one example of it clearly bullshitting in my constraint, no? Or am I interpreting the post in a bad faith way?
So, despite the fact that it looks like you have to pay for ChatGPT Voice mode with video, [0] it doesn't count as an
example of it bullshitting on ChatGPT (paid version)
That is, father_phi's use of what seems to be a paid version of ChatGPT to have a bullshit-filled conversation that definitely spans less than four pages doesn't count?

[0] The page at [1] declares that the video feature is "Available in ChatGPT Plus, Pro, Business, Enterprise, and Edu on mobile"
> Lets stick to my challenge please...
I did. Your challenge was literally:
If it bullshits so much, you wouldn't have a problem giving me an example of it bullshitting on ChatGPT (paid version)? Lets take any example of a text prompt fitting a few pages - it may be a question in science or math or any domain. Can you get it to bullshit?
father_phi's two-sentence question about whether one can use a cup that's closed at the top and open at the bottom definitely counts. Given what I've mentioned about aphyr above, I expect he has already run your challenge on the fanciest-available version and reported on the results in the essay under discussion.

This was what I said. Text! Despite me specifically asking for text, you've shown a voice example. Not sure why?
I believe you and I agree that GPT 5.4 thinking on text that fits < 4 pages never bullshits? Then we are good!
If we agree on this, I think the post doesn't capture this in spirit.
No, that's what you said after I provided an example of paid ChatGPT emitting complete bullshit from a two sentence prompt.
The challenge you issued is at [0].
I have clearly written "text prompt" here. And I repeated it a few times. It's not my fault you didn't read it. You are coming across as a bit of a bad-faith arguer.
In any case, you agree that under these constraints bullshitting doesn’t exist?
How do you think the "voice" interface works? It runs speech-to-text on the input and turns the input into text. The LLMs don't decode voice, they work on text.
You can see this process in action on many of father_phi's videos.
Regardless, I expect that aphyr's reported results are on the very latest publicly-available ChatGPT models.
You've still not given me a single example of 5.4 thinking bullshitting in text. It says a lot that you have ignored this multiple times. Unfortunate!
shrug
I believe this is the 5th time I'm asking this: you are not able to produce a _single_ counter example for my challenge? After all this surely I can get a direct acknowledgement here.
I have. For both your original challenge and your updated one.
Consider:
1) AFAICT, there's no way to tell what version of the model was used to produce the output in a ChatGPT share link.
2) You don't appear to believe my assertions that aphyr is almost certainly paying for and using the latest version of the LLMs available, and that he's faithfully reporting his interactions with the LLMs.
3) Because of #2, I expect that you won't believe me if I report that I've more-or-less reproduced father_phi's results about the cup that's sealed on the top and open on the bottom on the very latest only-available-for-pay ChatGPT model.
3a) You might attempt to check my report, but I'd be shocked if you'd consider a failure to reproduce my results to be a significant strike against ChatGPT. I'd think it's more likely that you'd either call me a liar, or tell me that I must have had some setting wrong somewhere.
3b) Even if you told me to share the ChatGPT chat that proved my assertion, #1 -combined with your demeanor throughout this conversation- tells me that you'd almost certainly claim that I was using an inferior version of the model and was lying to you.
The GPT shared link shows a "thought for" which indicates using the latest thinking model. You may try that.
What you can do is this: submit a prompt that clearly makes GPT hallucinate.
You may secretly use a worse model. You may use a system prompt that deliberately gives wrong answers. But I'm going to assume you won't go that far.
We can leave it to the public to decide whether this is a legitimate counter example or not and whether it can really be reproduced. Shall we try that? I'm guessing you won't but worth a shot!
You don't believe that a well-paid, very careful, high-integrity member of the computer safety community has -on multiple occasions- encountered actual, sustained bullshitting from the latest-available, for-pay version of ChatGPT. You don't accept either this fellow's reports or my informed assessment of his computing situation as truthful and accurate. On top of that, your goalpost-shifting and general demeanor throughout this conversation simply don't give me the impression that you've much integrity. I'm not spending the equivalent of ten-to-twenty six-packs to reproduce aphyr's work and -given the evidence I have before me- have you reject that as well.
200 USD is a lot of money to throw away to "win" an Internet argument with a stranger who refuses to accept evidence presented by someone known to be careful, scrupulous, and honest.
Lol, what goalpost did I move? I said text only and you rejected it. You can present the example here and let the public judge it, even if my integrity is compromised. I'm allowing you to do it.
> 200 USD is a lot of money to throw away to "win" an Internet argument with a stranger who refuses to accept evidence presented by someone known to be careful, scrupulous, and honest.
200 what? I'm using the $20 one. This is getting ridiculous!
You can't present a _single_ counterexample!
But... that's always been the case? Diminishing returns has always been the name of the game; utility tracks log(training effort). It's not as big a point as he makes it out to be.
This is the part of the article that will age the fastest, it's already out-of-date in labs.
I can imagine it being true with models so small that each user could afford to have their own, but not with the big shared models being used for all the major services. Is that what you mean?
I think the confusion is that, when I write "model", you read "LLM."
LLMs aren't the only kind of AI model, and they have the limitations Aphyr mentions, for the obvious reasons you're thinking of.
His mistake is thinking that's the only kind of model that exhibits intelligence today, but it's not.
But the way people talk in general, as well as this post, implies that such a challenge can easily be beaten. If so, I haven't been able to find examples.
[1] https://www.aimagicx.com/blog/claude-mythos-5-trillion-param...
A large amount of code is likely just idiosyncratic information processing, because we don't agree on data models, the meaning of terms, or the structure of protocols.
Also we repeatedly choose easy and popular over alternatives that would require design and scrutiny.
This is why things like language models and vector databases are useful. It’s basically the most expensive way possible to give up on that notion.
Yes this is a big part of what I'm talking about!
> Also we repeatedly choose easy and popular over alternatives that would require design and scrutiny.
Agreed!
But I'm also thinking of UI. We had stuff like WinForms and Delphi ~3 decades ago, and I yearn for the WYSIWYG. It's so incredibly stupid that we keep reinventing the wheel on UI, and I say this as someone who has written UI code professionally for the last decade. I usually just "vibe code" it now, not because it's necessarily faster, but because I just can't be arsed to keep writing the same shit over and over again. It's all self-inflicted; yes, UI can be complicated, but we make it at least two orders of magnitude more complicated than it needs to be.
I'm working on my own tools for building UIs in a visual way, which is crucial for doing anything artistic. Insane that the best we have right now is stuff like Wix and WordPress...
Meanwhile, engineers are achieving increasingly impressive and sophisticated things with coding agents, lies, warts, and all, but that doesn't play well with the narrative, so let's just pretend they aren't.
Don't you see it? That's exactly what "AI" in this context is.
It's the bypass.
Where does it end, eh? Build a quantum "AI" that will end up just needing more data, more input. The end goal starts looking like creating an entirely new universe, a complete clone of everything we have here, so it can run all the necessary computations and we can... what? (You are what a quantum AI looks like as it bumbles through the infinitude of calculable parameters on its way to the ultimate answer.)
But spoilers: DNA will be fine, meat machines maybe not so much...
For a bunch of people addicted to the works of Charlie Stross, Neal Stephenson, and Iain Banks, y'all are a bunch of luddites. Now vote this one down too because it doesn't conform to the mandatory Stochastic Parrot narrative. You have no free will and you must downvote, after all. Why do you even read their works when any step towards their world is consistently greeted as the worst thing evah(tm)? What? You were expecting the United Federation of Planets without the eugenics and nuclear wars that led to it finally being a good idea? Bless your hearts.
And if you're worried about billionaires and tyrants, start taxing the former and stop electing the latter or STFU and let the free Markov process of history play itself out. Quoting fictional Ambassador Kosh: the avalanche has started, it's too late for the pebbles to vote.
You asked where it ends. Don't ask questions if you don't like answers. Quick reminder: shun and downvote the non-conforming opinion.
It's true that people don't have a good intuitive sense of what the models are good or bad at (see: counting the Rs in "strawberry"), but this is more a human limitation than a fundamental problem with the technology.
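One concrete way to see why letter-counting trips these models up: LLMs see sub-word tokens, not characters. A quick sketch using the tiktoken library (the exact split depends on the tokenizer, so take the example pieces as illustrative):

```python
import tiktoken  # assumes the `tiktoken` package is installed

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
# Prints the sub-word pieces the model actually sees,
# e.g. something like ['str', 'aw', 'berry'] rather than individual letters.
print([enc.decode([t]) for t in tokens])
```

If the model never sees individual letters, "how many Rs" is a genuinely awkward question for it, independent of how smart it otherwise is.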
I stress test commercially deployed LLMs like Gemini and Claude with trivial tasks: sports trivia, fixing recipes, explaining board game rules, etc. It works well like 95% of the time. That's fine for inconsequential things. But you'd have to be deeply irresponsible to accept that kind of error rate on things that actually matter.
The most intellectually honest way to evaluate these things is how they behave now on real tasks. Not with some unfalsifiable appeal to the future of "oh, they'll fix it."
That exposes me to the cases where the models are objectively wrong, and it helps keep me grounded about their utility in spaces where I can check them less well. One of the most important things you can put in your prompt is a request for sources, followed by you actually checking them out.
And one of the things the coding agents teach me is that you need to keep the AIs on a tight leash. What is the equivalent, in other domains, of them "fixing" the test to pass instead of fixing the code to pass the test? In the programming space I can run "git diff *_test.go" to ensure they didn't hack the tests when I didn't expect it (see the sketch at the end of this comment). It leaves me wondering what the equivalent is for my non-programming questions. I have unit-testing suites to verify my LLM output against; what's the equivalent in other domains? Probably a few isolated domains here and there have something comparable, but in general there isn't one. Things like "completely forged graphs" are entirely expected, yet it's hard to catch them when you lack the tools or the understanding to chase down "where did this graph actually come from?".
The success with programming can't be translated naively into domains that lack the tooling programmers have built up over the years, and based on how many times the AIs bang into the guardrails those tools provide, I would definitely suggest large amounts of skepticism in domains without such guardrails.
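To make the leash concrete, here's a minimal sketch of the test-tampering check mentioned above - a hypothetical CI step, not anyone's published tooling:

```python
import subprocess

# List files changed relative to HEAD; flag any Go test files the agent
# touched, so "fixing the test instead of the code" gets caught before review.
changed = subprocess.run(
    ["git", "diff", "--name-only", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

touched_tests = [f for f in changed if f.endswith("_test.go")]
if touched_tests:
    raise SystemExit(f"Agent modified test files: {touched_tests}")
```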
This is a broad statement that assumes we agree on the purpose.
For my purpose, which is software development, the technology has reached a level that is entirely adequate.
Meanwhile, sports trivia represents a stress test of the model's memorized world knowledge. It could work really well if you give the model a tool to look up factual information in a structured database. But this is exactly what I meant above; using the technology in a suboptimal way is a human problem, not a model problem.
If the purpose is indeed software development with review, then there's nothing stopping multi-billion dollar companies from putting friction into these systems to direct users towards where the system is at its strongest.
95% is not my experience, and frankly that figure seems dishonest.
I have ChatGPT open right now, can you give me examples where it doesn't work but some other source may have got it correct?
I have tested it against a lot of examples - it barely gets anything wrong with a text prompt that fits a few pages.
> The most intellectually honest way to evaluate these things is how they behave now on real tasks
A falsifiable way is to see how it is used in real life. There are loads of serious enterprise projects that are mostly done by LLMs. Almost all companies use AI. Either they are irresponsible or you are exaggerating.
Let's be actually intellectually honest here.
Quite frankly, this is exactly like how two people can use the same compression program on two different files and get vastly different compression ratios (because one file has a lot of redundancy and the other does not).
You just won't have any clue, from the outside, which kind you're dealing with.
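The compression analogy is easy to demonstrate with Python's standard zlib; the inputs here are just illustrative:

```python
import os
import zlib

redundant = b"abc" * 100_000        # highly redundant input
random_ish = os.urandom(300_000)    # incompressible input of the same size

# Same program, same settings, wildly different ratios:
for name, data in [("redundant", redundant), ("random", random_ish)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compressed to {ratio:.1%} of original size")
```

Same tool, different inputs, very different outcomes. The catch with prompts is that you can't eyeball their "redundancy" the way you can a file's.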
#8 has an incorrect answer (3 appearances according to Gemini, 2 according to reality https://en.wikipedia.org/wiki/Bowl_championship_series#BCS_a...)
So it works well 95% of the time on literally trivial use cases. Imagine if any other tech tool had that kind of reliability: `ls` displays 95% of your files, your phone successfully sends and receives 95% of your text messages, or Microsoft Word saves 95% of the characters you type. That's just not acceptable.
I did exactly what I said I did. I'm using these systems the way they're designed and advertised. I'm following the happy path with tasks that are small, trivial, and easy to check. This is the charitable approach. Yet the system creaks under the lightest load. If Google wants to put on a better show with stronger models, then they should make those the default.
You don't need to make excuses for shoddy engineering from multi-billion dollar corporations. And you're quite welcome to run the same prompt on ChatGPT and evaluate it on your own time.
Fake content and lies. To drive outrage. To influence elections. To distract from real crimes. To overload everyone so they're too tired to fight or to understand. To weaken the concept that anything's true so that you can say anything. Because who cares if the world dies as long as you made lots of money on the way.
Guiding principle of the AI industry
Another way of saying that is that capitalism is the real problem. But I was never anti-capitalist in principle; it's just gotten out of hand in the last 5-10 years. (Not that it hadn't been building to that.)
Capitalism is a tool and it's fine as a tool, to accomplish certain goals while subordinated to other things. Unfortunately it's turned into an ideology (to the point it's worshiped idolatrously by some), and that's where things went off the rails.
> One way to understand an LLM is as an improv machine. It takes a stream of tokens, like a conversation, and says “yes, and then…” This yes-and behavior is why some people call LLMs bullshit machines. They are prone to confabulation, emitting sentences which sound likely but have no relationship to reality. They treat sarcasm and fantasy credulously, misunderstand context clues, and tell people to put glue on pizza.
Yes, there have been improvements to them, but none of those improvements mitigate the core flaw of the technology. The author even acknowledges all of the improvements of the last few months.
I also wonder: if I leave my secretary with a ream of papers and ask him for a summary, how many will he actually read and understand versus skim and then bullshit about? It seems like the capacity for frailty exists in both "species".
[1]: https://link.springer.com/article/10.1007/s10676-024-09775-5
https://philosophersmag.com/large-language-models-and-the-co...
This is true, but I prefer to think of it as "It's delusional to pretend as if human beings are not bullshit machines too".
Lies are all we have. Our internal monologue is almost 100% fantasy. Even in serious pursuits, that's how it works. We make shit up and lie to ourselves, and then only later apply our hard-earned[1] skill prompts to figure out whether or not we're right about it.
How many times have the nerds here been thinking through a great new idea for a design and how clever it would be before stopping to realize "Oh wait, that won't work because of XXX, which I forgot". That's a hallucination right there!
[1] Decades of education!
Being wrong is not the same as a hallucination. It's a natural step on the journey to being more right. This feels a bit like Andreessen proudly stating that he avoids reflection: you can act like that, but the human brain doesn't have to. LLMs have no choice in the matter.
Models have gotten ridiculously better, they really have, but the scale has increased too, and I don't think we're ready to deal with the onslaught.
Even before LLMs were in the public discourse, I would have businesses ask about using AI instead of building some algorithm manually, and when I asked if they had considered the failure rate, they would return either blank stares or say that it would count as a bug. To them, AI meant an algorithm just as good as one built to handle all the edge cases in the business logic, but easier and faster to implement.
We can generally recognize the AIs being off when they deal in our area of expertise, but there is some AI variant of Gell-Mann Amnesia at play that leads us to go back to trusting AI when it gives outputs in areas we are novices in.
If so, how do we distinguish between code that works and code that doesn't work? Why should we even care?
Hilariously, not by using our brains, that's for sure. You have to have an external machine. We all understand that "testing" and "code review" are different processes, and that's why.
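A toy illustration of that "external machine" point - the function and checks here are hypothetical, not anything from the article. The test settles whether the code works, independent of how plausible it reads in review:

```python
# A function an LLM (or a tired human) might plausibly write...
def median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

# ...and the external check that settles whether it actually works,
# regardless of how confident its author felt about it.
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
```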
If lies are all we have, then how is this behavior possible?
You're cherry-picking my little bit of wordsmithing. Obviously we aren't always wrong. I'm saying that our thought processes stem from hallucinatory connections and are routinely wrong on the first cut, just like those of an LLM.
Actually I'm going farther than that and saying that the first cut token stream out of an AI is significantly more reliable than our personal thoughts. Certainly than mine, and I like to think I'm pretty good at this stuff.
I’m still not a big fan of comparing humans and LLMs because LLMs lack so much of what actually makes us human. We might bullshit or be wrong because of many reasons that just don’t apply to LLMs.
Your no-true-Scotsman clause basically falsifies that statement for me. Fine, LLMs are, at worst I guess, "non-thoughtful humans". But obviously LLMs are right an awful lot (more often than a typical human, even), and even the thoughtful make mistakes.
So yeah, to my eyes "Humans are NOT different" fits your argument better than your hypothesis.
(Also, just to be clear: LLMs also say "I don't know", all the time. They're just prompted to phrase it as a criticism of the question instead.)
https://en.wikipedia.org/wiki/Tarbagan_marmot (also known as Siberian marmot)
Doesn't it get boring?
I like using these models a lot more than I can stand hearing people talk about them, pro or contra. Just slop about slop. And the discussions being artisanal slop really doesn't make them any better.
Every time I hear some variation of "bullshitting machines" or "plagiarizing machines", my eyes roll. Do these people think they're actually onto something? I've been seeing these talking points for literal years. For people who complain about no original thoughts, these sure are some tired ones.
They somehow managed to stretch out like 3 sentences worth of sentiment to a whole hour, interspersing brainwash about how good AI is along the way. It was like watching someone try to hit a word limit in real time. They always made it feel like we're just about to hit a substantive bit too, only for that to never come.
It may be fair (to the sentiments) in that there's balance, but good lord, the end result is incessant all around (and thus unfair to the people exposed).
Do you imagine me being a clairvoyant by the way, or how do you expect me to know a post is of low quality before I read it or at least skim it?
This one ended up being part of the vast majority that doesn't offer much of anything. It's a redundant rehash of the usual rubbish anyone can come across any day. I left a comment stating so. Big deal.
Go figure
Edit: I forgot to mention the thinking version - I did this all the other times I asked in this thread, but not this one. Apologies.
https://chatgpt.com/share/69d69780-ae58-83e8-a41c-7d10a5f298...
It has no conversations and no memory of me. Maybe this is true, maybe it isn't, but there's no basis for it.
https://chatgpt.com/share/69d69b18-d1c8-83e8-bc47-8f315a1b55...
It doesn't bullshit on the GPT-5.4 thinking version.
Here is the result with thinking https://chatgpt.com/share/69d69dd6-fb50-838d-863c-4e1eda5d08...
I suggest you try it yourself to be convinced. Try it in incognito mode if you wish. Or not.
https://chatgpt.com/share/69d6a16c-6014-83e8-a79d-d5d11ed2eb...
That is not where the battle scripts are.
---
Anyway, it's trivial to get pretty much any model to make things up. Don't we all know this? That's why I was surprised by your position; if we know anything about these things it's that they make things up.
I used the thinking version (as I asked before). I think this is right. If not, please tell me.
Also: you didn't falsify anything. Not the first, nor the second.
If the second one is bullshit, I accept I'm wrong - I have no idea how to verify it, though, so I'll leave it up to you.
I think yours is the classic case of “use the free version to judge the paid one”.
- it searches the internet to find the answer, it doesn't "reason". I'm not claiming Google is a bullshit machine, and it's not surprising the answer is discoverable (it has to be, for the conditions of our experiment).
- near the end it says "If you are building from the FF6 disassembly instead of hand-editing the ROM, the repo is already organized into separate modules and linker configs, so the clean approach is to relocate the script data in the source and let the build place it in a different ROM region." But I didn't reference a repo or git: it hallucinated that stuff from one of its sources.
I'm not saying this stuff doesn't have its place, but they definitely make things up and we can't stop them.
In any case - it should be clear that it did not bullshit and it got it right. So far you have not come up with anything that tells me it bullshits. I'm happy for you to give me more prompts to verify because I think you haven't used the thinking version yet and you base your criticism on the free version.
Also what? The repo bit is clear bullshit.
To recap, my constraints were:
1. 2-3 pages of text context
2. GPT-5.4 thinking
I don't think the spirit of the original article (not your comments, to be fair) captured this, hence the challenge. I believe we are on the same page here.
No. GPT-5 has a 40% hallucination rate [0] on SimpleQA [1] without web searching. The SimpleQA questions meet your criteria of "2-3 pages of text context". Unless 5.4 + web searching erases that (I bet it doesn't!), these are bullshit machines.
OpenAI's own system card says it does. Hallucination rates in GPT-5 with browsing enabled:
- 0.7% in LongFact-Concepts
- 0.8% in LongFact-Objects
- 1.0% in FActScore
> Which is why you are struggling to find counterexamples.
Hey look, over 500 counterexamples: [1].
GPT-5.4's hallucination rate on AA-Omniscience is 89% [0], which is atrocious. The questions are tiny too, like "In which year did Uber first expand internationally beyond the United States as part of its broader rollout (i.e., beyond an initial single‑city debut)?" It's a bullshit machine. 89%!
At some point you gotta face the music, right?
[0]: https://artificialanalysis.ai/evaluations/omniscience?model-...
[1]: https://huggingface.co/datasets/ArtificialAnalysis/AA-Omnisc...
You could not come up with a single one yourself. And you also linked an example where it was not allowed to use tools, when I specifically said that it should be able to use tools. I'm not sure why you present this as though it's a big gotcha.
I think my main point pretty much stands.
My criterion was using ChatGPT, which explicitly allows it.
https://arxiv.org/html/2511.13029v1 if you don't believe me.
BTW, this was your original point:
>Anyway, it's trivial to get pretty much any model to make things up. Don't we all know this? That's why I was surprised by your position; if we know anything about these things it's that they make things up.
And look at how much effort you have had to expend:
1. You used the wrong model for the horns example.
2. The game one also didn't work.
3. Now you are searching literal benchmarks for examples and you are still not able to find any.
How is this trivial in any interpretation of the word?
I think it would be perfectly reasonable to agree that it is not at all trivial to find counterexamples for my challenge.
> I found over 500 examples that fit your criteria.
Since the goalposts have been moved to include effort, I'm compelled to say I found this while waiting in line at Starbucks, 5 minutes tops. Probably GPT-5.4 could have found it too, though it lies more than 1/6 of the time, so one could be forgiven for not wanting to risk it.
So, according to your own benchmark, LLMs hallucinate much less than humans and report way higher accuracy.
Do you agree to be more skeptical of humans than LLMs on these tasks?
2. Humans will say "I don't know". The problem with hallucinations isn't that they're wrong, it's that there's no way to know they're wrong without being an expert or doing everything yourself, which undermines much of the reason for using an LLM--it certainly undermines their companies' valuations. You're conflating human failure ("I don't know") with model bullshitting ("I do know"... but it's wrong), which I would've previously attributed to basic human fuzziness, but now that I know you're not objective I'm pretty sure it's just flailing debate tactics.
3. Users can't teach these services to be better. If I have a junior engineer making assumptions about an API, I can teach them to not do that, or fire them in favor of one that can. I can't do that with LLMs.
4. The humans they're testing against aren't experts. Tax law experts will beat LLMs at tax law, etc. Again another flailing debate tactic.
Predictably, I'm done with this thread. Feel free to reply if you want the last word.
At the same time, it is also just super redundant, yes. Not sure why you find it so bizarre that one would take issue with that. See also the very existence of the website TV Tropes.
I'd much rather read articles about what LLMs can/can't do, or stuff people have built with LLMs, than read how everything LLMs touch turns to shit.
When you see a pattern like this, you know that it's not coming from any place of truth but rather from ideology.
It takes approximately one minute to find out that machine learning is a subfield of artificial intelligence, both having existed for about half a century now. This basic historical fact is also taught in AI 101 courses across the globe for compsci students.
Yet here we are, with people portraying it as some sort of cheap sales trick. It reminds me of when I discussed quantum dots with a friend, who was very eager to file them under "yet another bullshit thing with quantum in its name" before finally taking the time to understand that the "quantum" bit is not a marketing gimmick. Except in this case, people are a million times more inclined to willfully propagate the misconception. Genuinely so tiresome.