This is an absolutely deranged understanding of the dynamics at play. Even within your article you cite a number of sober and accurate critiques of AI from the left, then describe that as plugging their ears? It is a definitionally hallucinatory predictive model with no conception of correct or incorrect information, being treated like an intelligent search engine. Studies consistently show that incorporating LLMs into organizational workflows creates more work for workers, while consistent reliance on LLMs erodes critical thinking and productivity. This is to say nothing of the environmental costs, nor of the tech's foundation in the nonconsensual and uncompensated monetization of the personal data and commercial labor of hundreds of millions, if not billions, of people. These are all critiques from the left. The right consists of financially incentivized and hoodwinked executives/investors insisting that generative algorithms are good for society, despite said benefits having never been demonstrated.
The argument of the piece is not that the left has failed to offer responses, it's that the left has failed to offer responses *that are relevant to a scenario where AI technology does grow capable enough to really change the world* – because of the complacent assumption that it's impossible for this to ever happen ("definitionally"). Your comment is a good example of what he's talking about – it's strange to criticise his account of the dynamics at play when you are embodying them in the very same breath!
The most groundbreaking developments in the capabilities of coding agents have been made in the past months and even weeks. Any large-scale studies that have been done would be hopelessly outdated.
Moreover, it's just not logical to judge the capabilities of a technology based on mediocre people at mediocre organizations using a mediocre version of the technology for mediocre purposes. What are the smartest people doing with the frontier models? That's what you need to pay attention to.
But who’s assigning the marks here? You’re simply declaring that the arguments you like are from ‘smart’ people, and the ones you don’t like are from ‘mediocre’ people - and that’s the very fault you’re attacking. It used to be called ‘begging the question’ before mediocre people started misusing that phrase. [see what I did there?]
It doesn’t matter “who is assigning the marks,” whatever that even means.
If you want to understand space technology you should look to NASA and SpaceX, not to flat-earthers trying to build steam-powered rockets to see if the earth is really round. That’s entirely self evident and if you ARE pointing to the flat earthers as evidence that space technology is dumb and useless, that’s firmly a “you’re an idiot” problem rather than a stunning insight.
Assigning the marks? Simple enough - who decides, and how do they decide, who the 'mediocre' people are in your argument? Your reductio argument about flat-earthers versus NASA whizzes is too widely meshed - there may be fools at NASA too.
“But I keep telling you, humans are just made of like, atoms and molecules and stuff! Neurons just fire stupidly in response to mindless chemical signals! What kind of idiot would confuse that for understanding or thinking??”
This is basically your viewpoint, and you’re doing precisely what the article says people like you are doing. Hiding behind rhetorical tricks as if they undo the things that are happening in real time.
Oh good, you’ve successfully said some trivial sounding words. It’s just a predictive machine “by definition!” Cool. I guess the real world is fake news then because you’ve already got your words.
Literally what the article is calling out. Maybe AI can help you understand it.
"It is a definitionally hallucinatory predictive model that has no conception of correct or incorrect information being treated like an intelligent search engine"
You are doing exactly what the essay explicitly talks about: deciding that it is "definitionally" something, and therefore that any evidence otherwise can be dismissed. But a major part of the entire point of the essay is that this sort of view is getting repeated a lot on the left in ways that are just not adjusting to the evidence at all. So repeating it here as if it were somehow a useful argument doesn't do anything. Maybe try rereading the essay more slowly and carefully?
I appreciate much of what you have here, and I think you are absolutely correct that the left view of AI (on Bluesky in particular) is primarily influenced by academia. However, I don't think you are focused on the correct subset of academics here. I am a historian and can say firmly that among scholars in the humanities in particular, the concern is not with whether or not LLMs can do what they are purported to do, but rather with the impact they have had on education. The issue isn't that we are being told the calculator knows algebra; it is that the students are being given access to the calculator before they have learned to do simple addition. I do commonly hear the pessimist view that ChatGPT gets everything wrong, but I think that is just cope in response to the enormous problem presented by an unregulated shortcut machine unleashed on a workforce that was never trained to deal with it. The fact that a handful of private companies abolished the college essay is something that humanities profs will probably never get over. Many of them still haven't admitted that it happened and continue to assign the same tasks to increasingly AI-dependent students.
Smart people will use AI as a conduit for being smart. The less smart kids who don't want to read your mandatory terrible assigned reading will likely be better off not reading it, and learning to use AI to get a grade from you instead.
The truth is, the American education system is an absolute sham, and these kids don't care about what you want them to do. The amount you actually learn in school is so small after a certain point.
It seems like these professors are slow on the uptake but this has all happened so fast! When the history of this period is written I don't think the professors will be faulted for not noticing that their take-home essays were hackable. It just took a few years. It's that in-between time that we're living in, between the time that Speedy Gonzales darted away and Sylvester the Cat reacted. It seems long because we're living in it and we see undergrads exploiting the lag.
I do think it's telling that you posed the question "How can we use this to massively advance state capacity and massively expand the ability of the government to deliver public services to people" and the single answer offered that wasn't openly pitched as being exotic, far future speculation was "rich people are really into this so we can use the hype to make them invest in the grid". Which, yes, agreed... but that's not something that the technology itself is offering, it's basically admitting there's a bubble and suggesting we ride it for something good for once lol
Presumably the author thought readers would be smart enough to figure that out on their own, rather than being intentionally stupid and pretending that not being spoon-fed an answer must mean no answer could exist.
So far the only way "AI" has impacted my career has been landing me a bit of extra work cleaning up tasks that somebody else was hired to do and used "AI" to attempt, and fail, to achieve properly, so you'll have to forgive me if I'm having trouble visualizing how this is going to increase state capacity. "A machine that gives you information that's wrong an unknown but substantial percentage of the time" is not giving me big gamechanger vibes. That's why I tried reading this article to see what else might be on offer. Looks like... not much? But perhaps you have some ideas?
I agree with you that it's far from perfect. That being said, it's advancing rapidly; hallucination rates for frontier models have been drastically reduced from the early ChatGPT days. It's not a solved problem, so using it as the sole go/no-go decision maker in critical applications is clearly asking for trouble.
When I think about state capacity, I think about bureaucracy. Filling out forms. Reading applications and permit requests. Things that constitute a lot of busywork, a lot of which can be handled by the AI with human oversight, judgment, and decision-making.
Hallucination is more or less of a problem depending on how you use the AI, how you interpret the outputs, and how good your judgment is. Humans are far from perfect and we hallucinate far more, on average, than any frontier model. But we still trust humans orders of magnitude more to not decide "man, what a tricky situation, let's try a nuclear strike to see what happens." As we should, and should continue to do so.
But yeah, outside of things like that where I'm in full agreement with you, I'm thinking of things like how it takes 6 years for funded solar projects to go through the permitting process before they're allowed to connect to the grid. Intelligently allocating human resources to inspect infrastructure efficiently. Stuff like that. Of course humans can do it too, but it's not like we can just hire an additional 5 million civilian government contractors overnight even if we wanted to.
At the very least it's prudent to start preparing for this future, where models don't hallucinate anymore, or at least not a meaningful amount in most domains that aren't ethically/morally questionable. If we wait until things are perfect to get started, we'll end up far behind in the long run.
I mean for the amount of money being "invested" in "AI" we could easily hire millions more public servants. The problem there is that some of these roadblocks are just a consequence of democracy - if everyone gets a say, that takes time and dialogue. You can't speed that up without overruling some set of people. Which some people support, but that's a *political* discussion, not a technical one. This is why China builds a train station in a weekend - they don't consult anybody, they just do it. There's pros and cons there!
For the hallucination issue - hallucination is foundational to generative AI; that's what makes it generative. You cannot resolve it and retain the generative aspects; the best you can do is reduce it by bolting on additional measures. I would be skeptical of the claim that humans hallucinate more than AI models when I'm still seeing error rates of almost 50%, but even if we grant that - humans have systemic solutions to that issue (e.g. when you need medical advice, you can see a trained doctor and get an answer that is highly likely to be correct, rather than asking any random person or, worse, RFK Jr. lol).
Now, the hallucinations wouldn't be a big deal if these models weren't being sold as authoritative places to get trustworthy information. That's universally the marketing though - "AI is smarter than humans! AI hallucinates less than humans!" but that's simply not true and it is exposing people to major risk. As an example: I use a tracking app to monitor my newborn's feeding and sleep cycle. If you pay extra, you can add an "AI" chatbot to answer your questions about childcare. Of course it says "these answers may be incorrect so check your sources", for legal reasons. And I know they hallucinate so I didn't use it - I'm absolutely not risking my child's health to a machine I know gets things wrong. But other people do - my wife's friend uses the same app and texted her about frustrations because she was following its advice and it wasn't working, and she shared the advice and it was flat out fiction. Just pure nonsense. *This is an app for caring for infants*. The risk level is INSANE to put that in there. Yet here we are. Surely you can see why I'm deeply concerned about the way this is being rolled out, and the people in charge of doing so. There needs to be WAY more caution and regulation; this stuff is not ready for prime time, casual use by non-experts, at all. It's already killed some people; it's going to get worse. And for what? Are we seeing big productivity gains?
On a scale of "mentally challenged" to "midwit", where do you rank yourself?
No self-respecting person has said this since 2006 on New Atheist message boards:
> "Provide extraordinary evidence for this extraordinary claim."
It is clearly impossible to base all rational thought and scientific methodology on an aphorism whose meaning is entirely subjective.
Stop being stupid. If there were anything legitimate about multiple evidence standards, it is actually **you** making the far more incredible claim: that it won't continue doing what we've already watched it do for 3 years with no sign of stopping.
Freddie can always be counted on to grease the wheels on the goalposts.
AI can’t do that, and if it can it’s not impressive, and if it is then it must not be useful, and if it is then it’s too small to matter, and if it’s not then I don’t believe it anyway because it’s “definitionally impossible!”
Here’s the thing: you can take the position of the skeptic screaming at a lamppost under a bridge if it makes you feel good. You can keep that going for the next 50 years if you like. That doesn’t mean your opinion has any relevance or connection to reality. You can position yourself as The Ultimate Skeptic Prove Me Wrong Bro till you’re blue in the face. Nobody cares; the world moves on without you. You’re not Feynman or Carl Sagan. Being intentionally stupid is not the same as “there’s LITERALLY NO EVIDENCE!!!”
“If AI is just ‘spicy autocomplete,’ then what’s responsible for the current frenzy of attention?”
The huge amount of corporate and personal wealth tied up in it being the future of everything, that’s what’s responsible. Many of the world’s biggest companies have tens if not hundreds of billions sunk into tech that cannot and will not return even a fraction of that investment, and they’re desperately hoping that a big enough publicity blitz will make the impossible possible.
The technology is “genuinely engaging” for people who’ve always been bad at writing emails, I guess. Or don’t care at all about the quality of their work output. Or don’t have a creative bone in their body, but always kind of wished they did. Or find interacting with other people a drag on their life rather than the point of being alive.
You need to talk to an actual person who uses AI daily and LISTEN to what they're saying (don't argue, don't force your opinions on them; LISTEN). Your assumptions only come from raw ignorance that is willful at this point considering the simple facts of usage of this technology.
I have a LOT of critiques of LLMs. A LOT. But this willful blindness about their utility and natural engagement does no one any good. Your comments prove every point in the article here.
I'm someone who uses AI at a very basic level, but I have found it to be helpful and additive in my daily life in many ways. It's also generally been a source of creativity, as it opens possibilities to do things I wouldn't have previously considered doing. I'll give you a few examples.
1.
I'm learning a language of the country I moved to (let's call it language A) - very important for my continued life here. I wanted to relearn language B, a language of my childhood that I've mostly lost but would love to pick back up and use to speak with relatives. But I don't want to risk losing progress on language A. So I decided to learn language B from language A. There are probably fewer than 500 people in the world who speak both, so there are no resources for learning between the two. I used AI to create sets of flashcards between the two languages for the most common verbs, nouns, adjectives, adverbs, etc. I communicate with it in language A to give me lessons and practice exercises in language B. And every day I have it send me a paragraph in language B at the A2 level (my current level in that language) followed by a paragraph in language A at the C1 level, so I can get some practice in for both at once. It's also immensely helpful in explaining the subtleties in meaning between two words in one language that translate as the same word in another. (Sometimes during language classes in language A, my teacher wasn't able to explain some of these subtleties - but the LLM could, and once I read its explanation to him, he could confirm that it was right.)
2.
Recently the hard drive crashed on my laptop. Money is tight right now so I wanted to avoid spending on a new laptop, or even paying for a computer tech to repair it if I could. So I used AI to talk me through what new hard drive I should buy for my laptop, how I could try an Ubuntu backup to salvage what I can from the old hard drive, what tools I need and how to take apart my laptop and replace it with the new hard drive, and how to set it up. I could have found tutorials for all of these things pre-AI, but I probably wouldn't have had the confidence to do it myself, as I've never played around with electronics before, and I'd be worried about something going wrong. Sure enough - it did, a few times! The new hard drive wasn't recognized. The solution was an obscure setting in BIOS mode that I'm pretty unconfident I would have figured out myself, if it wasn't for the LLM I was troubleshooting with the whole time. But in the process, I had a blast poking around inside my computer's hardware and software, and gained a lot of confidence that this is something I could do more of myself in the future.
3.
Another thing that wasn't a necessity, but just a fun side project - I'm not much of a coder, and I don't really have the motivation to learn to be a good enough one to actually make useful things by coding. But I'm a musician and a huge music fan. I check out a lot of sources for music news, releases and concert updates, but have yet to find a really good system for aggregating all the news. Using a coding agent (just a free tier of Claude from a few months ago), I set some parameters for what I wanted, and guided it to create a Python script which would collect articles from a bunch of music websites, then send the results to an LLM via API which would select articles I'd be most interested in based on some detailed criteria I gave it, then use my Gmail API to email me a newsletter with the selected articles and summaries. It helped me set up a recurring task on my computer so it can do this automatically every morning. Now when I wake up every morning I have some fresh ideas for music to check out as I start my day. I learned a lot about Python in the process, since it explained every step to me, and I later made my own modifications to the code without using AI. But to build something like this from scratch would have taken months of study - at least - which I frankly wouldn't have been motivated to do.
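For what it's worth, the shape of that pipeline can be sketched in a few lines. This is just an illustrative skeleton, not the actual script: the real version fetched live sites and called an LLM API for the selection step, whereas here a crude keyword score stands in for the LLM, and all names, titles, and URLs are made up.

```python
# Illustrative sketch of a morning music-news digest pipeline.
# MY_INTERESTS and score_article are stand-ins for the LLM selection call.

MY_INTERESTS = {"jazz", "vinyl", "tour"}

def score_article(title, keywords=MY_INTERESTS):
    """Crude substitute for the LLM step: count interest keywords in the title."""
    words = set(title.lower().split())
    return len(words & keywords)

def build_digest(articles, top_n=2):
    """Rank articles by score, keep the top_n relevant ones, format an email body."""
    ranked = sorted(articles, key=lambda a: score_article(a["title"]), reverse=True)
    picks = [a for a in ranked[:top_n] if score_article(a["title"]) > 0]
    lines = [f"- {a['title']} ({a['url']})" for a in picks]
    return "Morning music digest:\n" + "\n".join(lines)

if __name__ == "__main__":
    # In the real script these came from scraping music-news sites each morning.
    sample = [
        {"title": "New jazz vinyl reissues announced", "url": "https://example.com/1"},
        {"title": "Pop star launches fragrance line", "url": "https://example.com/2"},
        {"title": "Band announces world tour dates", "url": "https://example.com/3"},
    ]
    print(build_digest(sample))
```

The delivery step (sending the digest via the Gmail API on a daily schedule) is omitted here, since it's just plumbing around this core select-and-format loop.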
These are just a couple examples, and again, very basic use cases using free tier versions of LLMs far below their potential capabilities. I have tons of other ideas in mind of useful future projects to do things I just didn't used to have the desire or knowledge to master on my own.
It's also been useful for trivial but still valuable everyday-life things too - suggesting recipes based on random ingredients in the fridge I need to use up, talking me through how to remove the aerator on my kitchen faucet after it got wedged in place due to calcium buildup, etc. All these things are possible to find some info or ideas on with a Google search too, but being able to have an actual conversation - give specific parameters, get feedback, troubleshoot, and come to a good solution - is something genuinely different. Over the past year and a half it's gone from an occasional speed-up of a Google search to a super-tool that I find multiple uses for every day.
Literally none of these things requires LLM algorithms to accomplish at all. This is like hunting deer with an RPG that explodes in your hand 1 in 5 times. You could read about any of these things nearly for free, and it doesn’t cost half the world’s energy and a trillion dollars.
2. You yourself understand that lack of confidence, not ability, was the limiting factor.
3. Useful for you!
Your use case basically boils down to questions that you'd have previously asked on Reddit. I mean, I'm not sure what to even say, because you acknowledge most of the conveniences afforded to you by AI were available before the GPT era! The big leap is going from "ooo that's cool/nice to have" to framing it as a necessarily transformative technology that will fundamentally upend the nature of productive work/existence. "What if it's super important" is a hypothetical we're asked to seriously consider instead of addressing the actively deleterious bullshit they continue to purvey (and depend on).
I was responding to someone who seemed genuinely unable to imagine a useful or creative case for AI use, so I gave some examples. Now, hopefully they can. Case 1 I mentioned is definitely not something I could have asked on reddit - it's just something I couldn't have done. You could argue I don't absolutely need to do it, but that's true of basically any new technology - it makes things possible that we didn't absolutely need to be done beforehand.
Case 2 and 3 of course could have been done before without AI. But, for the reasons explained, I wouldn't have, or they are processes that would have taken weeks or months - or paying someone else - rather than hours. To say I could have just asked reddit is kind of like explaining to someone how you traveled across the country on a steam locomotive in the 1800s and the response being "What's the big deal, you literally could have just walked!"
I didn't make an argument about its potential impact on the fundamental nature of existence. As I pointed out, I'm using AI at an extremely basic level - I'm not the person you should be listening to for that type of argument. I'll leave that to others who have some basis to know what they're talking about. That means I need to cut through excess hype or doom from some of those people, but that's just called reading critically, and it's the case on lots of subjects which I otherwise know nothing about - it's not a good reason to completely bury my head in the sand and ignore it.
Your last sentence is interesting. It seems to suggest - not sure if you're saying this, but in general I see this argument a lot - that we shouldn't use AI because of the various social and environmental harms committed by the companies that offer it. That's certainly a principled stance to take, but the weird thing is everyone I've seen take that stance seems to have no problem using countless other technologies which enact many of the same or even worse harms.
It's pretty difficult to talk about AI without taking hype into account - they feed each other. The thing is that "advanced" LLM use cases are like... orchestrating 20 agents to automate the creation of B2B SaaS. I'm pretty sure we would hear if anything of note was actually being made. lol. The most interesting work is being done by researchers trying to understand how they work (again, completely self-referential and necessarily linked to the relevance of the object studied).
Opening up YouTube to learn how to boot into BIOS does not take as much time as you think, I promise! Recipe websites exist too! You literally called them trivial conveniences; I don't even think YOU believe your transportation analogy.
My last sentence refers to issues with big tech. Yes, I think a big part of the calculus behind LLM development is rent-seeking/profit extraction from the open web/monopolization of information more generally. And yes, I believe it's unethical for a number of reasons, and yet I "live in a society," iPhone child labor hypocrite or whatever - but when I can minimize my participation in what I clearly see as having many harmful nth-order effects, I will. CSAM generator. Suicide whisperer. Misinfo proliferator. Automated scammer. Especially when its most visible issues continue to go unaddressed, and especially when I don't trust the CEOs behind them, marginal convenience just doesn't cut it.
You can tell there’s something to the tech when it’s causing people to completely divorce their minds from reality like this. You are so deeply, personally offended by the possibility that you don’t even care what exists and is happening in the real world. You’ve got your beliefs and that’s all you need!
It’s nice to have comments like this where people out themselves as proudly ignorant of the real world.
Hey did you know vaccines have MERCURY in them!?!?! There’s a whole group of people who think just the same way you do that I’m sure would love for you to join their community.
Interesting – it seems to me like Claude is a remarkable life-enhancing tool I use several times a day, but I'm learning from your comment that I've been fooled into thinking that by a "publicity blitz". Guess I'd better cancel my subscription!
It's like having 24-hour access to an expert in any topic I'm interested in, whether for reasons of immediate practical need (cooking, gardening, travel) or intellectual curiosity (science, history, economics). A lot of questions arise in life that you can't answer with a Google search because they're too complicated or specific – but now you can just have a conversation about them, and I've found that so enriching. Yes, AIs hallucinate, but the more you use them, the better you understand where they can find their way reliably and where you need to double-check. (And I haven't even mentioned the more banal, productivity-boosting side – though since Claude Code came out, any time I can't get my computer to do something tedious I need done, I just get Claude to write a program that takes care of it for me.)
None of this is a substitute for reading or writing or thinking or arguing – I do just as much of those as I always did – it's just a great complement to them. It's easy to imagine someone in 1995 saying "This internet thing is for losers – whatever happened to real life?????" But even the people who are very condescending at the start may eventually realise that these tools do have real uses.
These things are literally incapable of giving out a factual answer to anything. You might like toying around with these things but it doesn’t mean that they are capable of accomplishing anything critical or that needs to be relied on to work properly.
Or (in the case of the Internet) they may look around occasionally and think ‘Dear God, what a fng disaster’. Many people do, even young, rightist, intelligent ones.
I suppose it's true that, if your timing is good and the right person happens to see your post, you will very occasionally get an answer on Reddit that is as well-written and thorough as a baseline Sonnet 4.6 response. But by the time you get to the latter part of the conversation, where the LLM is following you with infinite patience and cooperation through some tangent, hypothetical, dumb beginner question, or incredibly narrow specific, with diligent reference to the two PDFs and three images you've uploaded plus everything it already knows from previous conversations about your research interests or personal projects, all of this of course happening instantly (though you can also pick back up seamlessly after a three month gap)... you are never going to get anything from Reddit and Wikipedia that even remotely resembles that experience. To me this has been an enormous gift intellectually, though of course to you it's just a "marginal convenience" from the "genocide automator", and I imagine that's not a gap we're ever going to bridge.
It’s nothing to do with finding answers that were impossible to find or unknowable. It’s being able to do in 15 minutes what used to take 30. Or 60. Even if that was all it ever did, as good as it ever got, that alone crosses the line of “genuinely useful.”
You can sit there if you like and say “well if it doesn’t discover groundbreaking new science every time someone asks who starred in Groundhog Day, then it’s stoooopid!”
But that’s a self-evidently idiotic goalpost. The people clinging to it are the absolute worst mascots for the “look how much smarter humans are” movement. You’re going to make it impossible for anyone that hasn’t already embraced your “reality be damned” quasi-religious beliefs to side with you, even if you occasionally make a good point.
You literally have people telling you that yes, it genuinely helps them in their work, and that’s such a brain-breaking concept for you that you either accuse them of being dumb or lying, or launch into an obviously bad-faith “oh yeah, like what???” so you can immediately dismiss it, knowing nothing about their field or work.
If your concepts of “what a job is” and “what AI can do” are just sending slightly better worded emails, that says a lot more about you than it does anyone else.
If you're looking for leftists who seriously consider AI, I would humbly suggest some of my own writing on the necessity of a positive socialist vision of AI.
So many comments like this from people who don’t know, don’t care to know, but demand in bad faith that people spoon-feed them things they will just dismiss out of hand anyway.
You understand that this is called “willful stupidity” yes?
Opposing it is one thing. My experience with the socialist left is straight up dismissal of AI. An almost smug belief that it’s a scam, a fad, and a useless technology that is no threat to them at all.
It's a threat, but it's still useless technology. 20th Century tech was helpful. 21st Century tech is pure greed. Coming up with solutions to problems that don't actually exist. Yeah, we're not ever coming around.
See, there it is. It is most definitely not a useless technology. If you said that to any person working in CS right now, they'd think you're unhinged.
I work in CS and these things are not good. They are good enough to convince mediocre people that they are. At the end of the day LLMs are nothing more than a fancy autocomplete that is incapable of reason or proper engineering practice. This is a fact, and anyone saying otherwise is either not good enough at their job to realize it or simply in on the scam.
Note how I was refuting the notion that it's a useless technology, and you don't seem to believe they are actually useless. I know people who are very good at CS, and they will say stuff like LLMs can't vibecode well enough, and I then ask them, "Do you still go on Stack Overflow to learn stuff and fix bugs?" and the answer of course is they don't, nearly as much as before. The standard way to deconfuse yourself is asking an LLM. Maybe you're so incredibly talented that you instantly divine the perfect answers to your questions and have no use for these mere tools. Us puny mortals are using them and benefiting greatly. They are only useless or a scam in the sense that they aren't machine gods that fully replace us, which is a very weird definition of those two terms. Of course I'm aware of the limitations of LLMs. (I'm also aware of how these limitations seem to get nudged further and further out with every new release.) But a lot of people look at these limitations and try to pull off the most egregious motte-and-bailey type shit I've ever seen.
Same here and I find the response is usually in bad faith, intellectually speaking, and therefore making us very vulnerable. It is straight-up stupid. I don’t think we can preach the moral high ground about something that we’re reducing to caricature that is in contradiction to evolving experiences. We have to be more creative about how to protect ourselves, and that starts with good faith understanding.
I can't wait to see that side of the internet vanish. The good artists will always be in demand, but there are so many who can't make a dime because it's a useless profession or even side hobby. It just isn't in demand, and it's created a giant culture of time-wasting, communicating with each other, honestly culminating in this anti-AI BS.
But seriously, I grew up in Silicon Valley. My dad was a physicist at Xerox PARC (look it up) for 30 years and when I had to make a decision about my career, did he try to get me to go into science or tech? No, he fully supported my career in the arts because he knew how important they are.
The Arts bring in $1.2 Trillion a year in the US. Big Tech wants that money. That's the long and short of it. There's never enough for these greedy creeps.
the creative community seem, not to put too fine a point on it, to think of themselves as the "good guys"
if we found out that chimpanzees really liked to create artwork for themselves, and this artwork was noticeably different because chimpanzees had different taste than humans, that would be really interesting and i would expect the creative community to be all for it. chimpanzees should get to create art too
if a bunch of companies then started breeding chimpanzees for making chimpslop artwork, and renting out chimpanzee art-generating labor at a few pennies per pound, i suspect that the creative community would be outraged, but i *doubt* it would take the shape of their current outrage against AI
I really wish I understood why it feels so different
I notice that at no point do you address companies like OpenAI *stealing artists' work* to train their models. You just pretended, falsely, that AI is akin to an actually intelligent animal.
i don't think that LLMs are like humans, but i think the way in which they *aren't* like humans is confusing and not easy to round off to things like "not actually intelligent". what they are is something inhuman and alien, but that's not the same thing as 'empty' the way you seem to be implying. in certain ways where humans expect robust vibrance, LLMs are empty, and in other ways where humans expect emptiness, they are robust and vibrant. but, as far as the actual question goes:
in particular, what i want is a principled and mechanistic way to understand the difference between "a human being sees the work of an artist, and adapts the artist's techniques for themself. then, they create artwork inspired by the original artist, and they sell it" versus "an LLM is trained on the work of an artist, and adapts the artist's technique for itself. then, the LLM creates artwork inspired by the original artist, and the AI company sells it"
I would *expect* the leftist critique here to be something like "the benefit goes to the AI company, not to the LLM". but instead it's... something else. something I don't understand.
Anthropic having sourced a fraction of its training data from legitimately purchased sources does not negate the fact that one can easily prove that the vast majority of commercially available generative AI models are trained on materials that were not purchased, but scraped from the internet or outright torrented.
Headlines like "Anthropic deletes source material to comply with digitization regulations" making the news cycle is the result of their PR department working overtime to legitimize their operations.
My principle, in response to your dichotomy, is that I believe in, relate to, and care about people. The human is the only difference in your example. Humans are alive in a way machines will never be. Mechanistically, the first scenario is preferable because the human is the input, synthesis, and output. With AI there is human input, in both user and dataset, but the synthesis and the output are performed only by the machine.
AI can only commodify. Art must come from a life form, and AI is not a life form (it is a machine); therefore, AI cannot make art.
i appreciate you spelling out your reasoning with this level of precision, thank you
for what it's worth... hm. i think you're mistaken about this. i think it's a sort of god-of-the-gaps style scenario, where AI will keep checking off more and more items on the list of what you consider "truly alive", and eventually the last box will be checked.
unless you believe in mystical theories of consciousness, i guess. but i'm pretty sure that the human mind, consciousness included, is turing-complete. i would be very surprised to learn for a fact that that weren't true.
and that's sorta the point, yeah? this is a really complicated question of neuroscience and philosophy of mind, and you are reasoning about it as if it were appropriate to be certain. as if obviously everybody ought to be certain.
being certain about this means we don't have to engage with a whole host of tricky philosophical and ethical questions, because they don't apply
but i'd sure like to know how consciousness actually *works*, before i say definitively that some real-life systems have it and others don't
“Life,” at its core, is defined as: “matter that has biological processes such as cell signaling and the ability to sustain itself.” (Wikipedia) LLMs are not biological and therefore not alive. Personally, I value biological life forms, especially humans, over non-biological machines. Furthermore, I'm asserting that only biological life forms have the capacity for creating art. We can debate the philosophical nature of that statement, but I will always prefer art made by a biological life form over a non-biological machine. When I connect with art (usually in the form of music) it does seem like magic! I hope you've experienced that too.
Also, I don't think the makers of AI are trying to create life. They are trying to create Artificial General Intelligence (AGI), which is different than life.
I don't love my refrigerator like I love a chimpanzee, even though my refrigerator is much more useful to me. That's because the chimpanzee has life. Art is about the love of life, which a machine cannot experience and therefore create.
For an article about how people are dismissing a nascent technology, it sure is sparse on the details of what's being missed out on.
I do agree that outright refusing to research AI and repeating convenient lies about its workings is unhelpful and unproductive, same as ignoring the reality that it will likely be commonplace and here to stay. But if you wanted to convince a skeptic that this needs to be part of the left's thinking you're going to have to add some actual substance about why it will be so revolutionary.
Ah the old “nothing can ever be known or proven or prepared for until we let it happen so we can have 1000000% certainty that it’s the only outcome physically allowed by the universe.”
Totally. Hey have you ever jumped out of an airplane without a parachute? Sure sure everyone else who has done it has died, but none of them were wearing a Garfield shirt!!! It’s literally impossible to predict what will happen until we try it for real!!!
If you want to convince someone that Product X will change everything 5 years from now, then you have to actually prove why before they believe you. This doesn't seem that complicated.
The biggest companies in the world are betting trillions on this technology and hype directly benefits them. Is it so crazy that people have reservations about their marketing claims that AI will change everything?
Oh, and for the record - none of this is a "left" view. Trying to frame it this way is an attempt by this publication and the general pro-AI chorus to associate AI skepticism with a particularly small niche in American political life. But AI skepticism is profoundly ideologically promiscuous. There's just no reason to call it "left" skepticism other than to try to leverage partisan politics to facilitate your views.
"But AI skepticism is profoundly ideologically promiscuous."
Do you have prominent examples of "AI skepticism" on the right or center? I'm aware of right-wing complaints about AI systems being left-biased or "woke," but I'm not aware of any substantial right-wing views dismissing LLM AI as mere toys or glorified autocomplete, as the original piece correctly notes you see a lot of on the left.
It also seems unhelpful to try to accuse the OP of labeling such "skepticism" as left as part of their own attempt to leverage partisan politics. One can be on the left, and concerned that much of the left is buying into this view of AI systems that you and others seem to take for granted rather than really grappling with the technology. That's a pretty easy position to take.
Well said. I don't think this is a left or right issue. It's simply an outgrowth of widespread ignorance among the public, not unlike ignorance about how a microwave or printer works: a lack of curiosity about the world and a willingness to accept the simplest explanation to avoid thinking about it.
As you very accurately point out, the temptation to dismiss what LLMs do by focusing only on the task (next-token prediction) while ignoring the goal (inducing internal representations that in aggregate constitute a world model) is just too strong for most people to resist. Most people don't have the knowledge to even begin questioning why the internal representations in LLMs serve the same purpose as our own internal representations, or why theories of the brain such as predictive coding and Bayesian models place prediction at their core. Why is it that prediction used to minimize error between the brain's model and reality is never thought of dismissively, while the same concept, implemented in the digital realm, is derided when it comes to LLMs?
It's no wonder you don't see people working at AI labs defending their creations from shallow criticism; they simply don't care, and they don't care because they're building the future.
Well said. And the experts aren't only in industrial AI labs. Academic computer science departments have plenty of faculty who are building, harnessing, improving, and evaluating large language models. They are mostly on the left politically, but they understand the technology quite well -- being among its inventors -- and generally don't agree with Bender at all. They are sensible about the opportunities (and the hype), the current defects, and the societal risks. Why not ask them about regulation?
You are a real twat. Comparing views on AI to views on Climate Change is insane. Climate change is real and a danger to every living being. AI is a toy. Sure it has scale and it’s super sophisticated, but nobody in this entire world needs it. You know what we all need? A world. Climate change is going to end the world. You know what is going to accelerate that? AI sucking up all the water to cool down its processing plants. AI is also rampant theft of artists and authors. It’s disgusting. This isn’t a left vs right issue, this is a humanity issue. AI is completely unnecessary. Society would be perfectly fine without it.
Your comment proves the point of the piece. The fact that you cite the "problem" of AI water use, which is flat-out misinformation (in reality you'd have to prompt ChatGPT 200,000 times to equal the water use of a single hamburger) shows that you are getting your ideas on the topic from people who don't really know anything about it – which is not going to prepare you very well for the future!
That sounds reassuring. I’m concerned about local impacts of sustainable energy developments, so I’d appreciate the chain of evidence for your claim if you can spare a moment. We need to concentrate on the burger chains and stop worrying about data centres over here, but I’ll need proof for the planning enquiry/ies, and we need it now, we have a May 5th deadline.
Well, the push to "AI supremacy" has seen the reactivation of coal plants, destabilization of power grids, and rising cost of consumer technology across the board (RAM is up like 15x in less than 6 months). In all honesty, there's not much to worry about in terms of a hypothetical future where it's as successful as its proponents fantasize: all you have to do is type in a box. The "skill" is typing in a box and we will all be in the same, undifferentiated position. The primary issue I have is that its wholesale "success" (whatever that even means?) depends on skirting a myriad of issues we can point to in the present. Even without "AGI" breakthroughs we've seen companies use it to undercut workers, use it as a pretext to pay less/work more, normalize quality degradation, automate genocide, etc.
Thank you. It's infuriating garbage. I can only assume that some people are getting really concerned about their portfolio. The techno-utopian rhetoric used in this "article" on America's premiere Nazi newsletter platform feels five years old. You can feel the desperation.
“You’ve offended my quasi-religious belief and threatened my devotion to ignorance! I’ll show you by doing the exact thing you called out in the article while not realizing it, to prove how much smarter I am than AI.”
the right ignores the problems of AI: they don't care, and they celebrate the use of Grok to generate child pornography *at scale* and to generate mass mis- and disinformation campaigns.
But ultimately the left isn't actually ignoring AI, we're the ones who are actually looking at the problems with it. You just *don't like hearing* those problems with the toy you like.
I mean, I don't deny that it's certainly impactful that the people behind LLMs like Grok don't seem to give a shit about generating child porn and revenge porn at scale.
It's a bad impact, and it should result in people in prison, but you're right, it's impactful. Sorry I care about people more than you do.
William, man, we are trying to have a civil discourse. It is wholly unproductive to be this caustic in Substack comments, saying I don't care about people.
As someone in tech, I can say LLMs have been incredibly impactful in changing my day-to-day workflow. People in other industries and fields report similar things. It is ostrich-head-in-the-sand behavior to reduce the impact of LLMs to the creation of lurid images and copyright infringement, and I suspect you actually do know this.
The problem you described is very real and a constant risk being debated by people at the top labs and in relevant circles concerned with AI safety. Image generation is a narrowly scoped subset of dangerous behaviors that LLMs are capable of and that we need to reckon with. Any open source LLM is capable of being 'jailbroken' programmatically; the legendary Pliny the Liberator (a guy on Twitter) has shown this. It's a very serious danger that many people are oblivious to, so it's good that you and others are on to it. I just don't think it's productive to be so dismissive of the power of the tool we now all have access to.
I'm saying you don't care about people because you don't. You blithely dismiss the mass creation and dissemination of literal child porn by Grok as "oh well it's just a downside of the technology," but you don't seem to give a shit about the women and *children* that it is harming. I'm willing to bet money you don't care about the people who've been driven to psychosis and fucking suicide by the sycophantic chatbots you love so much.
What has this style of discourse yielded for you in the past? You seem to have some idea that you are trying to do good in the world by caring about important causes. Has insulting people trying to have a good faith dialogue ever changed anyone's mind in your previous experience? Have you ever considered there are more optimal ways to communicate?
Unless your goal is to just spew bile, in which case you are a troll, but I don't really see it.
Substantively --
What part of what I said could possibly be construed as a blithe dismissal?
"The problem you described is very real and a constant risk"
"Image generation is a narrowly scoped subset of dangerous behaviors that LLMs are capable of and that we need to reckon with"
"It's a very serious danger that many people are oblivious to, so it's good that you and others are on to it"
This is an absolutely deranged understanding of the dynamics at play, and even within your article you cite a number of sober and accurate critiques from the left on AI, then describe that as plugging their ears? It is a definitionally hallucinatory predictive model that has no conception of correct or incorrect information being treated like an intelligent search engine. Studies consistently show that incorporation of LLMs into organizational workflows leads to more work for workers, while consistent reliance on LLMs leads to poor critical thinking and productivity. This is not to mention the environmental costs, nor the foundation of the tech being the nonconsensual and uncompensated monetization of the personal data and commercial labor of hundreds of millions, if not billions, of people. These are all critiques from the left. The right consists of financially incentivized and hoodwinked executives/investors insisting that generative algorithms are good for society, despite said benefits having never been demonstrated.
The argument of the piece is not that the left has failed to offer responses, it's that the left has failed to offer responses *that are relevant to a scenario where AI technology does grow capable enough to really change the world* – because of the complacent assumption that it's impossible for this to ever happen ("definitionally"). Your comment is a good example of what he's talking about – it's strange to criticise his account of the dynamics at play when you are embodying them in the very same breath!
The most groundbreaking developments in the capabilities of coding agents have been made in the past months and even weeks. Any large scale studies that have been done would be hopelessly outdated.
Moreover, it's just not logical to judge the capabilities of a technology based on mediocre people at mediocre organizations using a mediocre version of the technology for mediocre purposes. What are the smartest people doing with the frontier models? That's what you need to pay attention to.
But who’s assigning the marks here? You’re simply declaring that the arguments you like are from ‘smart’ people, and the ones you don’t like are from ‘mediocre’ people - and that’s the very fault you’re attacking. It used to be called ‘begging the question’ before mediocre people started misusing that phrase. [see what I did there?]
It doesn’t matter “who is assigning the marks,” whatever that even means.
If you want to understand space technology you should look to NASA and SpaceX, not to flat-earthers trying to build steam-powered rockets to see if the earth is really round. That’s entirely self evident and if you ARE pointing to the flat earthers as evidence that space technology is dumb and useless, that’s firmly a “you’re an idiot” problem rather than a stunning insight.
Assigning the marks? Simple enough - who decides, and how do they decide, who the ‘mediocre’ people are, in your argument. Your reductio argument about flatearther versus NASA whizz is too widely-meshed - there may be fools at NASA too.
“But I keep telling you, humans are just made of like, atoms and molecules and stuff! Neurons just fire stupidly in response to mindless chemical signals! What kind of idiot would confuse that for understanding or thinking??”
This is basically your viewpoint, and you’re doing precisely what the article says people like you are doing. Hiding behind rhetorical tricks as if they undo the things that are happening in real time.
Oh good, you’ve successfully said some trivial sounding words. It’s just a predictive machine “by definition!” Cool. I guess the real world is fake news then because you’ve already got your words.
Literally what the article is calling out. Maybe AI can help you understand it.
"It is a definitionally hallucinatory predictive model that has no conception of correct or incorrect information being treated like an intelligent search engine"
You are doing exactly what the essay talks explicitly about: deciding that it is "definitionally" something and that therefore any evidence otherwise can be dismissed. But a major part of the entire point of the essay is that this sort of view is getting repeated a lot on the left in ways which are just not adjusting to the evidence at all. So repeating that here as if it is somehow a useful argument doesn't do anything. Maybe try to reread the essay again more slowly and carefully?
Love your art :)
Was delighted to see you shitting on this dope in the replies after getting pissed off reading this moron's doodoo ass "article"
I appreciate much of what you have here, and I think you are absolutely correct that the left view of AI (on Bluesky in particular) is primarily influenced by academia. However, I don't think you are focused on the correct subset of academics here. I am a historian and can say firmly that among scholars in the humanities in particular, the concern is not with whether or not LLMs can do what they are purported to do, but rather the impact that they have had on education. The issue isn't that we are being told the calculator knows algebra; it is that the students are being given access to the calculator before they have learned to do simple addition. I do commonly hear the pessimist view that ChatGPT gets everything wrong, but I think that is just cope in response to the enormous problem presented by an unregulated shortcut machine unleashed on a workforce that was never trained to deal with it. The fact that a handful of private companies abolished the college essay is something that humanities profs will probably never get over. Many of them still haven't admitted that it happened and continue to assign the same tasks to increasingly AI-dependent students.
Smart people will use AI as a conduit for being smart. The less smart kids who don't want to read your mandatory terrible assigned reading will likely be better off not reading it, and learning to use AI to get a grade from you.
The truth is, the American education system is an absolute sham, and the kids don't care about what you want them to do. The amount you actually learn in school is so small after a certain period.
It seems like these professors are slow on the uptake but this has all happened so fast! When the history of this period is written I don't think the professors will be faulted for not noticing that their take-home essays were hackable. It just took a few years. It's that in-between time that we're living in, between the time that Speedy Gonzales darted away and Sylvester the Cat reacted. It seems long because we're living in it and we see undergrads exploiting the lag.
The essay itself is a historical artifact, not a truth of the educational process, and smart teachers will figure out the pedagogy.
Would you say this article is that different from an essay? Genuine question. I think there are good reasons to set sustained writing practice.
I do think it's telling that you posed the question "How can we use this to massively advance state capacity and massively expand the ability of the government to deliver public services to people" and the single answer offered that wasn't openly pitched as being exotic, far future speculation was "rich people are really into this so we can use the hype to make them invest in the grid". Which, yes, agreed... but that's not something that the technology itself is offering, it's basically admitting there's a bubble and suggesting we ride it for something good for once lol
Presumably the author thought readers would be smart enough to figure that out on their own rather than being intentionally stupid and pretending that not being spoon-fed an answer must mean no answer could exist.
Their mistake!
So far the only way "AI" has impacted my career has been landing me a bit of extra work cleaning up tasks that somebody else was hired to do and used "AI" to attempt, and fail, to achieve properly, so you'll have to forgive me if I'm having trouble visualizing how this is going to increase state capacity. "A machine that gives you information that's wrong an unknown but substantial percentage of the time" is not giving me big gamechanger vibes. That's why I tried reading this article to see what else might be on offer. Looks like... not much? But perhaps you have some ideas?
I agree with you that it's far from perfect. That being said, it's advancing rapidly; hallucination rates for frontier models have been drastically reduced from the early ChatGPT days. It's not a solved problem, so using it as the sole go/no-go decision maker in critical applications is clearly asking for trouble.
When I think about state capacity, I think about bureaucracy. Filling out forms. Reading applications and permit requests. Things that constitute a lot of busywork, a lot of which can be handled by the AI with human oversight, judgment, and decision-making.
Hallucination is more or less of a problem depending on how you use the AI, how you interpret the outputs, and how good your judgment is. Humans are far from perfect and we hallucinate far more, on average, than any frontier model. But we still trust humans orders of magnitude more to not decide "man, what a tricky situation, let's try a nuclear strike to see what happens." As we should, and should continue to do so.
But yeah, outside of things like that where I'm in full agreement with you, I'm thinking of things like how it takes 6 years for funded solar projects to go through the permitting process before they're allowed to connect to the grid. Intelligently allocating human resources to inspect infrastructure efficiently. Stuff like that. Of course humans can do it too, but it's not like we can just hire an additional 5 million civilian government contractors overnight even if we wanted to.
At the very least it's prudent to start preparing for this future, where models don't hallucinate anymore, or at least not a meaningful amount in most domains that aren't ethically/morally questionable. If we wait until things are perfect to get started, we'll end up far behind in the long run.
I mean for the amount of money being "invested" in "AI" we could easily hire millions more public servants. The problem there is that some of these roadblocks are just a consequence of democracy - if everyone gets a say, that takes time and dialogue. You can't speed that up without overruling some set of people. Which some people support, but that's a *political* discussion, not a technical one. This is why China builds a train station in a weekend - they don't consult anybody, they just do it. There's pros and cons there!
For the hallucination issue: hallucination is foundational to generative AI; that's what makes it generative. You cannot resolve it and retain the generative aspects; the best you can do is reduce it by bolting on additional measures. I would be skeptical of the claim that humans hallucinate more than AI models when I'm still seeing error rates of almost 50%, but even if we grant that, humans have systemic solutions to the issue (e.g. when you need medical advice, you can see a trained doctor and get an answer that is highly likely to be correct, rather than asking any random person or, worse, RFK Jr. lol).
Now, the hallucinations wouldn't be a big deal if these models weren't being sold as authoritative places to get trustworthy information. That's universally the marketing though - "AI is smarter than humans! AI hallucinates less than humans!" but that's simply not true and it is exposing people to major risk. As an example: I use a tracking app to monitor my newborn's feeding and sleep cycle. If you pay extra, you can add an "AI" chatbot to answer your questions about childcare. Of course it says "these answers may be incorrect so check your sources", for legal reasons. And I know they hallucinate so I didn't use it - I'm absolutely not risking my child's health to a machine I know gets things wrong. But other people do - my wife's friend uses the same app and texted her about frustrations because she was following its advice and it wasn't working, and she shared the advice and it was flat out fiction. Just pure nonsense. *This is an app for caring for infants*. The risk level is INSANE to put that in there. Yet here we are. Surely you can see why I'm deeply concerned about the way this is being rolled out, and the people in charge of doing so. There needs to be WAY more caution and regulation; this stuff is not ready for prime time, casual use by non-experts, at all. It's already killed some people; it's going to get worse. And for what? Are we seeing big productivity gains?
" it’s worth gesturing to a few, in order from most concrete to most exotic."
These two things are described in the article as separate ways AI can be good—the latter (grid) is not a response to the former (public services).
"The gap between what AI systems can do now and what previously hyped technologies ever delivered is already vast."
Provide extraordinary evidence for this extraordinary claim.
This is in a section about blockchain, crypto and metaverse – very obviously, AI can do a lot more than any of those things.
Anyone who whips out the extraordinary-evidence Sagan demand is telling you: "I'm not tall enough for the ride."
“Nuh uh!!!!”
- Freddie in 10,000 words.
On a scale of "mentally challenged" to "midwit", where do you rank yourself?
No self respecting person has said this since 2006 on New Atheist message boards:
> "Provide extraordinary evidence for this extraordinary claim."
It is clearly impossible to base all rational thought and scientific methodology on an aphorism whose meaning is entirely subjective.
Stop being stupid. If there were anything legitimate about multiple evidence standards, it is actually **you** who has the far more incredible claim: that it won't continue doing what we've already watched it do for 3 years with no sign of stopping.
Epic Reddit retort good sir 🥴
That describes the entirety of Freddie’s writing on AI, yes. Well spotted.
Freddie can always be counted on to grease the wheels on the goalposts.
AI can’t do that, and if it can it’s not impressive, and if it is then it must not be useful, and if it is then it’s too small to matter, and if it’s not then I don’t believe it anyway because it’s “definitionally impossible!”
Here's the thing: you can take the position of the skeptic screaming at a lamppost under a bridge if it makes you feel good. You can keep that going for the next 50 years if you like. That doesn’t mean your opinion has any relevance or connection to reality. You can position yourself as The Ultimate Skeptic Prove Me Wrong Bro til you’re blue in the face. Nobody cares; the world moves on without you. You’re not Feynman or Carl Sagan. Being intentionally stupid is not the same as “there’s LITERALLY NO EVIDENCE!!!”
“If AI is just ‘spicy autocomplete,’ then what’s responsible for the current frenzy of attention?”
The huge amount of corporate and personal wealth tied up in it being the future of everything, that’s what’s responsible. Many of the world’s biggest companies have tens if not hundreds of billions sunk into tech that cannot and will not return even a fraction of that investment, and they’re desperately hoping that a big enough publicity blitz will make the impossible possible.
No. The technology is genuinely engaging. Continuing to ignore this is an incredible error by the left.
The technology is “genuinely engaging” for people who’ve always been bad at writing emails, I guess. Or don’t care at all about the quality of their work output. Or don’t have a creative bone in their body, but always kind of wished they did. Or find interacting with other people a drag on their life rather than the point of being alive.
You need to talk to an actual person who uses AI daily and LISTEN to what they're saying (don't argue, don't force your opinions on them; LISTEN). Your assumptions only come from raw ignorance that is willful at this point considering the simple facts of usage of this technology.
I have a LOT of critiques of LLMs. A LOT. But this willful blindness about their utility and natural engagement does no one any good. Your comments prove every point in the article here.
I'm someone who uses AI at a very basic level, but I have found it to be helpful and additive in my daily life in many ways. It's also generally been a source of creativity, as it opens possibilities to do things I wouldn't have previously considered doing. I'll give you a few examples.
1.
I'm learning a language of the country I moved to (let's call it language A) - very important for my continued life here. I wanted to relearn language B, a language of my childhood I've mostly lost, but would love to pick back up and use to speak with relatives. But I don't want to risk losing progress on language A. So I decided to learn language B from language A. There are probably fewer than 500 people in the world who speak both, so there are no resources for learning between the two. I used AI to create sets of flashcards between the two languages for the most common verbs, nouns, adjectives, adverbs, etc. I communicate with it in language A to give me lessons and practice exercises in language B. And every day I have it send me a paragraph in language B at the A2 level (my current level in that language) followed by a paragraph in language A at the C1 level, so I can get some practice in for both at once. It's also immensely helpful in explaining the subtleties in meaning between 2 words in one language that translate as the same word in another language. (Sometimes during language classes in language A, my teacher wasn't able to explain some of these subtleties - but the LLM could - and once I read it to him, he could confirm that the explanation was right.)
2.
Recently the hard drive crashed on my laptop. Money is tight right now, so I wanted to avoid spending on a new laptop, or even paying a computer tech to repair it if I could. So I used AI to talk me through what new hard drive I should buy for my laptop, how I could try an Ubuntu live boot to salvage what I can from the old hard drive, what tools I need, how to take apart my laptop and swap in the new hard drive, and how to set it up. I could have found tutorials for all of these things pre-AI, but I probably wouldn't have had the confidence to do it myself, as I've never played around with electronics before, and I'd be worried about something going wrong. Sure enough - it did, a few times! The new hard drive wasn't recognized. The solution was an obscure setting in the BIOS that I doubt I would have figured out myself, if it weren't for the LLM I was troubleshooting with the whole time. But in the process, I had a blast poking around inside my computer's hardware and software, and gained a lot of confidence that this is something I could do more of myself in the future.
3.
Another thing that wasn't a necessity, but just a fun side project - I'm not much of a coder, and I don't really have the motivation to learn to be a good enough one to actually make useful things by coding. But I'm a musician and a huge music fan. I check out a lot of sources for music news, releases and concert updates, but have yet to find a really good system for aggregating all the news. Using a coding agent (just a free tier of Claude from a few months ago), I set some parameters for what I wanted, and guided it to create a Python script which would collect articles from a bunch of music websites, then send the results to an LLM via API which would select articles I'd be most interested in based on some detailed criteria I gave it, then use my Gmail API to email me a newsletter with the selected articles and summaries. It helped me set up a recurring task on my computer so it can do this automatically every morning. Now when I wake up every morning I have some fresh ideas for music to check out as I start my day. I learned a lot about Python in the process, since it explained every step to me, and I later made my own modifications to the code without using AI. But to build something like this from scratch would have taken months of study - at least - which I frankly wouldn't have been motivated to do.
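For anyone curious, the skeleton of that kind of script is simple to sketch. This is a rough outline, not my actual code: the feed URLs are placeholders, and the LLM-selection and Gmail steps are stubbed out, since those depend on whichever provider and account you use.

```python
# Rough sketch of the daily music-news digest pipeline described above.
# The feed URLs, the LLM call, and the email step are placeholders/assumptions.
import urllib.request

FEEDS = [
    "https://example-music-blog.com/feed",  # placeholder source
    "https://another-music-site.com/news",  # placeholder source
]

def fetch_pages(feeds):
    """Download the raw page/feed text from each source (needs network access)."""
    pages = []
    for url in feeds:
        with urllib.request.urlopen(url, timeout=10) as resp:
            pages.append(resp.read().decode("utf-8", errors="replace"))
    return pages

def select_articles(pages):
    """Stand-in for the LLM API call that picks articles matching your criteria.
    The real script would POST the scraped text plus your taste criteria to an
    LLM provider's API and parse the picks it returns."""
    raise NotImplementedError("call your LLM provider's API here")

def build_digest(selected):
    """Format the selected articles into a plain-text newsletter body."""
    lines = ["Your morning music digest:", ""]
    for art in selected:
        lines.append(f"- {art['title']}: {art['summary']}")
    return "\n".join(lines)

# Final step (not shown): send build_digest(...) via the Gmail API, and run the
# whole script each morning as a scheduled task (cron on Linux/macOS).
```

The scheduling piece is just a one-line cron entry pointing at the script, which is the part the agent set up for me.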
These are just a few examples, and again, very basic use cases using free-tier versions of LLMs far below their potential capabilities. I have tons of other ideas in mind for useful future projects to do things I just didn't used to have the desire or knowledge to master on my own.
It's also been useful for trivial but still valuable everyday life things too - suggesting recipes based on random ingredients in the fridge I need to use up, talking me through how to remove the aerator on my kitchen faucet after it got wedged in place due to calcium buildup, etc. All these things are possible to find some info or ideas on with a Google search too, but being able to have an actual conversation, give specific parameters, get feedback, troubleshoot, and come to a good solution is something genuinely different. Over the past year and a half it's gone from an occasional speed-up of a Google search to a super-tool that I find multiple uses for every day.
Literally none of these things require LLM algorithms to accomplish at all. This is like hunting deer with an RPG that explodes in your hand 1 in 5 times. You could read about any of these things nearly for free, and it doesn’t cost half the world’s energy and a trillion dollars.
1-That sounds useful for you!
2-You yourself understand that lack of confidence, not ability, was the limiting factor.
3-Useful for you!
Your use case basically boils down to questions that you'd have previously asked on reddit. I mean, I'm not sure what to even say, because you acknowledge most of the conveniences afforded to you by AI were available before the GPT era! The big leap is going from "ooo that's cool/nice to have" to framing it as a necessarily transformative technology that will fundamentally upend the nature of productive work/existence. "What if it's super important" is a hypothetical we're asked to seriously consider instead of addressing the actively deleterious bullshit they continue to purvey (and depend on).
I was responding to someone who seemed genuinely unable to imagine a useful or creative case for AI use, so I gave some examples. Now, hopefully they can. Case 1 I mentioned is definitely not something I could have asked on reddit - it's just something I couldn't have done. You could argue I don't absolutely need to do it, but that's true of basically any new technology - it makes things possible that we didn't absolutely need to be done beforehand.
Case 2 and 3 of course could have been done before without AI. But, for the reasons explained, I wouldn't have, or they are processes that would have taken weeks or months - or paying someone else - rather than hours. To say I could have just asked reddit is kind of like explaining to someone how you traveled across the country on a steam locomotive in the 1800s and the response being "What's the big deal, you literally could have just walked!"
I didn't make an argument about its potential impact on the fundamental nature of existence. As I pointed out, I'm using AI at an extremely basic level - I'm not the person you should be listening to for that type of argument. I'll leave that to others who have some basis to know what they're talking about. That means I need to cut through excess hype or doom from some of those people, but that's just called reading critically, and it's the case on lots of subjects which I otherwise know nothing about - it's not a good reason to completely bury my head in the sand and ignore it.
Your last sentence is interesting. It seems to suggest - not sure if you're saying this, but in general I see this argument a lot - that we shouldn't use AI because of the various social and environmental harms committed by the companies that offer it. That's certainly a principled stance to take, but the weird thing is everyone I've seen take that stance seems to have no problem using countless other technologies which enact many of the same or even worse harms.
It's pretty difficult to talk about AI without taking hype into context--they feed each other. The thing is that "advanced" LLM use cases are like...orchestrating 20 agents to automate the creation of B2B SaaS. I'm pretty sure we would hear if anything of note was actually being made. lol. The most interesting work is being done by researchers trying to understand how they work (again, completely self-referential and necessarily linked to the relevance of the object studied).
Opening up youtube to learn how to boot into BIOS does not take as much time as you think I promise! Recipe websites exist too! You literally called them trivial conveniences, I don't even think YOU believe your transportation analogy.
My last sentence refers to issues with big tech. Yes, I think a big part of the calculus behind LLM development is rent-seeking/profit extraction of the open web/monopolization of information more generally. And yes, I believe it's unethical for a number of reasons, and yet I "live in a society" iphone child labor hypocrite or whatever, but when I can minimize my participation in what I clearly see as having many harmful n-th order effects, I will. CSAM generator. Suicide whisperer. Misinfo proliferator. Automated scammer. Especially when its most visible issues continue to go unaddressed, and especially when I don't trust the CEOs behind them, marginal convenience just doesn't cut it.
You can tell there’s something to the tech when it’s causing people to completely divorce their minds from reality like this. You are so deeply, personally offended by the possibility that you don’t even care what exists and is happening in the real world. You’ve got your beliefs and that’s all you need!
Modern-day creationists.
No it isn't. No one agrees with you unless they have investments that compel them to, most likely just like you.
It’s nice to have comments like this where people out themselves as proudly ignorant of the real world.
Hey did you know vaccines have MERCURY in them!?!?! There’s a whole group of people who think just the same way you do that I’m sure would love for you to join their community.
Interesting – it seems to me like Claude is a remarkable life-enhancing tool I use several times a day, but I'm learning from your comment that I've been fooled into thinking that by a "publicity blitz". Guess I'd better cancel my subscription!
How is Claude enhancing your life?
It's like having 24-hour access to an expert in any topic I'm interested in, whether for reasons of immediate practical need (cooking, gardening, travel) or intellectual curiosity (science, history, economics). A lot of questions arise in life that you can't answer with a Google search because they're too complicated or specific – but now you can just have a conversation about them, and I've found that so enriching. Yes, AIs hallucinate, but the more you use them, the better you understand where they can find their way reliably and where you need to double-check. (And I haven't even mentioned the more banal, productivity-boosting side – though since Claude Code came out, any time I can't get my computer to do something tedious I need done, I just get Claude to write a program that takes care of it for me.)
None of this is a substitute for reading or writing or thinking or arguing – I do just as much of those as I always did – it's just a great complement to them. It's easy to imagine someone in 1995 saying "This internet thing is for losers – whatever happened to real life?????" But even the people who are very condescending at the start may eventually realise that these tools do have real uses.
These things are literally incapable of giving out a factual answer to anything. You might like toying around with these things but it doesn’t mean that they are capable of accomplishing anything critical or that needs to be relied on to work properly.
The great irony is that modern AI is far less often wrong than the overwhelming majority of people who still make comments like this.
If AI can’t think then that really doesn’t bode well for you.
Or(in the case of the Internet) they may look around occasionally and think ‘Dear God, what a fng disaster’. Many people do, even young, rightist, intelligent ones.
your use case: wikipedia and reddit/chatroom? V curious what questions you couldn't find an answer to prior to adopting LLMs
I suppose it's true that, if your timing is good and the right person happens to see your post, you will very occasionally get an answer on Reddit that is as well-written and thorough as a baseline Sonnet 4.6 response. But by the time you get to the latter part of the conversation, where the LLM is following you with infinite patience and cooperation through some tangent, hypothetical, dumb beginner question, or incredibly narrow specifics, with diligent reference to the two PDFs and three images you've uploaded plus everything it already knows from previous conversations about your research interests or personal projects, all of this of course happening instantly (though you can also pick back up seamlessly after a three-month gap)... you are never going to get anything from Reddit and Wikipedia that even remotely resembles that experience. To me this has been an enormous gift intellectually, though of course to you it's just a "marginal convenience" from the "genocide automator", and I imagine that's not a gap we're ever going to bridge.
‘..infinite patience and cooperation’ lol. Keep paying the sub, eh?
Anything specific lol?
Have you ever had a real job?
It’s nothing to do with finding answers that were impossible to find or unknowable. It’s being able to do in 15 minutes what used to take 30. Or 60. Even if that was all it ever did, as good as it ever got, that alone crosses the line of “genuinely useful.”
You can sit there if you like and say “well if it doesn’t discover groundbreaking new science every time someone asks who starred in Groundhog Day, then it’s stoooopid!”
But that’s a self-evidently idiotic goalpost. The people clinging to it are the absolute worst mascots for the “look how much smarter humans are” movement. You’re going to make it impossible for anyone that hasn’t already embraced your “reality be damned” quasi-religious beliefs to side with you, even if you occasionally make a good point.
You literally have people telling you that yes, it genuinely helps them in their work, and that’s such a brain-breaking concept for you that you either accuse them of being dumb or lying, or launch into an obviously bad-faith “oh yeah like what???” so you can immediately dismiss it, knowing nothing about their field or work.
If your concepts of “what a job is” and “what AI can do” are just sending slightly better worded emails, that says a lot more about you than it does anyone else.
I completely agree with your comments. I use Copilot and ChatGPT on a daily basis. I am left-leaning, but I see the value of LLMs in my work.
I'm not looking for groundbreaking new science. I just find it genuinely useful.
?? You made up a person to be mad at in your head
Enhanced healthcare
Boosted economic growth
Climate change mitigation
Advanced transportation
Customer service excellence
Scientific discovery
Enhanced financial services
Improved agriculture
Enhanced cybersecurity
If you're looking for leftists who seriously consider AI, I would humbly suggest some of my own writing on the necessity of a positive socialist vision of AI
(https://open.substack.com/pub/onethousandmeans/p/the-left-must-plan-for-ai?utm_campaign=post-expanded-share&utm_medium=web)
and how "left-intellectuals" like Bender or Timnit sell left-facing pseudoscience.
(https://open.substack.com/pub/onethousandmeans/p/the-ai-denial-industry-the-lefts?utm_campaign=post-expanded-share&utm_medium=web).
Would add my own work to this https://nicolasdvillarreal.substack.com/t/ai
And cosmonaut magazine in general has taken it seriously.
https://cosmonautmag.com/2023/05/artificial-intelligence-universal-machines-and-killing-bourgeois-dreams/
https://cosmonautmag.com/2024/04/artificial-intelligence/
Hey man when do you actually talk about what ai does?
So many comments like this from people who don’t know, don’t care to know, but demand in bad faith that people spoon-feed them things they will just dismiss out of hand anyway.
You understand that this is called “willful stupidity” yes?
Actually I know a shit ton more than this person and they refuse to engage with reality
Ed Zitron is a Financial Times reporter who has written a lot about AI and certainly seems to know a lot about it
Excellent piece, thank you for so cogently laying each piece of the problem out.
this was excellent thank you for writing it
I'm Left and I actively oppose AI. The creative community is very active against AI. The problem is the perennial problem: leadership. We have none.
Opposing it is one thing. My experience with the socialist left is straight up dismissal of AI. An almost smug belief that it’s a scam, a fad, and a useless technology that is no threat to them at all.
It's a threat, but it's still useless technology. 20th Century tech was helpful. 21st Century tech is pure greed. Coming up with solutions to problems that don't actually exist. Yeah, we're not ever coming around.
See, there it is. It is most definitely not a useless technology. If you said that to any person working in CS right now, they'd think you're unhinged.
I work in CS and these things are not good. They are good enough to convince mediocre people that they are. At the end of the day LLMs are nothing more than a fancy autocomplete that is incapable of reason or proper engineering practice. This is a fact, and anyone saying otherwise is either not good enough at their job to realize it or simply in on the scam.
Note how I was refuting the notion that it's a useless technology, and you don't seem to believe they are actually useless. I know people who are very good at CS, and they will say stuff like "LLMs can't vibecode well enough," and I then ask them, "Do you still go on Stack Overflow to learn stuff and fix bugs?" and the answer, of course, is that they don't nearly as much as before. The standard way to deconfuse yourself is asking an LLM.

Maybe you're so incredibly talented that you instantly divine the perfect answers to your questions and have no use for these mere tools. Us puny mortals are using them and benefiting greatly. They are only useless or a scam in the sense that they aren't machine gods that fully replace us. That is a very weird definition of those two terms.

Of course I am aware of the limitations of LLMs. (I'm also aware of how these limitations seem to get nudged further and further out with every new release.) But a lot of people look at these limitations and try to pull off the most egregious motte-and-bailey type shit I've ever seen.
lol nobody on the left doesn't see AI as a threat, which shows how seriously you should be taken.
Same here and I find the response is usually in bad faith, intellectually speaking, and therefore making us very vulnerable. It is straight-up stupid. I don’t think we can preach the moral high ground about something that we’re reducing to caricature that is in contradiction to evolving experiences. We have to be more creative about how to protect ourselves, and that starts with good faith understanding.
Cool good for you. It changes nothing.
I can't wait to see that side of the internet vanish. The good artists will always be in demand, but there are so many who can't make a dime because it's a useless profession or even side hobby. It just isn't in demand, and it's created a giant culture of time-wasting, communicating with each other, and honestly culminating in this anti-AI BS.
Opposing it only hurts
I'm sorry you were raised by wolves.
But seriously, I grew up in Silicon Valley. My dad was a physicist at Xerox PARC (look it up) for 30 years and when I had to make a decision about my career, did he try to get me to go into science or tech? No, he fully supported my career in the arts because he knew how important they are.
The Arts bring in $1.2 Trillion a year in the US. Big Tech wants that money. That's the long and short of it. There's never enough for these greedy creeps.
That’s nice, whatever helps you feel better about being the leftist equivalent of climate-change deniers and vaccine skeptics.
“I think it’s dumb so lalalalalala I’m not listening 🙉”
Only a rotted brain thinks this way
That is the most incorrect statement of the year.
i've always wondered about this
the creative community seem, not to put too fine a point on it, to think of themselves as the "good guys"
if we found out that chimpanzees really liked to create artwork for themselves, and this artwork was noticeably different because chimpanzees had different taste than humans, that would be really interesting and i would expect the creative community to be all for it. chimpanzees should get to create art too
if a bunch of companies then started breeding chimpanzees for making chimpslop artwork, and renting out chimpanzee art-generating labor at a few pennies per pound, i suspect that the creative community would be outraged, but i *doubt* it would take the shape of their current outrage against AI
I really wish I understood why it feels so different
I notice that at no point do you address companies like openAI *stealing artists' work* to train their models. You just pretended, falsely, that AI is akin to an actually intelligent animal.
well, i'm not pretending
i don't think that LLMs are like humans, but i think the way in which they *aren't* like humans is confusing and not easy to round off to things like "not actually intelligent". what they are is something inhuman and alien, but that's not the same thing as 'empty' the way you seem to be implying. in certain ways where humans expect robust vibrance, LLMs are empty, and in other ways where humans expect emptiness, they are robust and vibrant. but, as far as the actual question goes:
in particular, what i want is a principled and mechanistic way to understand the difference between "a human being sees the work of an artist, and adapts the artist's techniques for themself. then, they create artwork inspired by the original artist, and they sell it" versus "an LLM is trained on the work of an artist, and adapts the artist's technique for itself. then, the LLM creates artwork inspired by the original artist, and the AI company sells it"
I would *expect* the leftist critique here to be something like "the benefit goes to the AI company, not to the LLM". but instead it's... something else. something I don't understand.
quick question. did they buy the art they're training on? no?
yes they did
there is currently a whole damn series of mainstream news articles really pissed at anthropic for destroying a bunch of books
this happened because, in order to make a legal digital backup of a book you purchased, you have to destroy the original
they bought books on the market and read them to their AI, that's literally what happened
Anthropic having sourced a fraction of its training data from legitimately purchased sources does not negate the fact that one can easily prove that the vast majority of commercially available generative AI models are trained on materials that were not purchased, but scraped from the internet or outright torrented.
Headlines like "Anthropic deletes source material to comply with digitization regulations" making the news cycle are the result of their PR department working overtime to legitimize their operations.
My principle, in response to your dichotomy, is that I believe in, relate to, and care about people. The human is the only difference in your example. Humans are alive in a way machines will never be. Mechanistically, the first scenario is preferable because the human is the input, synthesis, and output. With AI there is human input, in both user and dataset, but the synthesis and the output are performed only by the machine.
AI can only commodify. Because art must come from a life form, and AI is not a life form (it is a machine), AI cannot make art.
i appreciate you spelling out your reasoning with this level of precision, thank you
for what it's worth... hm. i think you're mistaken about this. i think it's a sort of god-of-the-gaps style scenario, where AI will keep checking off more and more items on the list of what you consider "truly alive", and eventually the last box will be checked.
unless you believe in mystical theories of consciousness, i guess. but i'm pretty sure that the human mind, consciousness included, is turing-complete. i would be very surprised to learn for a fact that that weren't true.
and that's sorta the point, yeah? this is a really complicated question of neuroscience and philosophy of mind, and you are reasoning about it as if it were appropriate to be certain. as if obviously everybody ought to be certain.
being certain about this means we don't have to engage with a whole host of tricky philosophical and ethical questions, because they don't apply
but i'd sure like to know how consciousness actually *works*, before i say definitively that some real-life systems have it and others don't
“Life,” at its core, is defined as: “matter that has biological processes such as cell signaling and the ability to sustain itself.” (Wikipedia) LLMs are not biological and therefore not alive. Personally, I value biological life forms, especially humans, over non-biological machines. Furthermore, I'm asserting that only biological life forms have the capacity for creating art. We can debate the philosophical nature of that statement, but I will always prefer art made by a biological life form over a non-biological machine. When I connect with art (usually in the form of music) it does seem like magic! I hope you've experienced that too.
Also, I don't think the makers of AI are trying to create life. They are trying to create Artificial General Intelligence (AGI), which is different than life.
I don't love my refrigerator like I love a chimpanzee, even though my refrigerator is much more useful to me. That's because the chimpanzee has life. Art is about the love of life, which a machine cannot experience and therefore create.
maybe a better comparison would be, a company that trains a colony of bacteria to make art
For an article about how people are dismissing a nascent technology, it sure is sparse on details about what's being missed out on.
I do agree that outright refusing to research AI and repeating convenient lies about its workings is unhelpful and unproductive, same as ignoring the reality that it will likely be commonplace and here to stay. But if you wanted to convince a skeptic that this needs to be part of the left's thinking you're going to have to add some actual substance about why it will be so revolutionary.
Ah the old “nothing can ever be known or proven or prepared for until we let it happen so we can have 1000000% certainty that it’s the only outcome physically allowed by the universe.”
Totally. Hey have you ever jumped out of an airplane without a parachute? Sure sure everyone else who has done it has died, but none of them were wearing a Garfield shirt!!! It’s literally impossible to predict what will happen until we try it for real!!!
What a brilliant way for policymakers to think.
If you want to convince someone that Product X will change everything 5 years from now, then you have to actually prove why before they believe you. This doesn't seem that complicated.
The biggest companies in the world are betting trillions on this technology and hype directly benefits them. Is it so crazy that people have reservations about their marketing claims that AI will change everything?
Oh, and for the record - none of this is a "left" view. Trying to frame it this way is an attempt by this publication and the general pro-AI chorus to associate AI skepticism with a particularly small niche in American political life. But AI skepticism is profoundly ideologically promiscuous. There's just no reason to call it "left" skepticism other than to try to leverage partisan politics to facilitate your views.
Well to be fair, this piece is explicitly *about* sceptics *on the left*
"But AI skepticism is profoundly ideologically promiscuous."
Do you have prominent examples of "AI skepticism" on the right or center? I'm aware of right-wing complaints about AI systems being left-biased or "woke," but I'm not aware of any substantial right-wing views that dismiss LLM AI as mere toys or glorified autocomplete, which the original piece correctly notes you see a lot of on the left.
It also seems unhelpful to try to accuse the OP of labeling such "skepticism" as left as part of their own attempt to leverage partisan politics. One can be on the left, and concerned that much of the left is buying into this view of AI systems that you and others seem to take for granted rather than really grappling with the technology. That's a pretty easy position to take.
Well said. I don't think this is a left or right issue. It's simply an outgrowth of widespread ignorance among the public, not unlike ignorance about how a microwave or printer works: a lack of curiosity about the world and a willingness to accept the simplest explanation to avoid thinking about it.
As you very accurately point out, the temptation to dismiss what LLMs do by focusing only on the task (next-token prediction) while ignoring the goal (inducing internal representations that in aggregate constitute a world model) is just too strong for most people to resist. Most people don't have the knowledge to even begin questioning why the internal representations in LLMs serve the same purpose as our own internal representations, or why theories of the brain such as predictive coding and Bayesian models place prediction at their core. Why is it that prediction used to minimize error between the brain's model and reality is never thought of dismissively, while the same concept, implemented in the digital realm, is derided when it comes to LLMs?
It's no wonder you don't see people working at AI labs defending their creations from shallow criticism; they simply don't care, and they don't care because they're building the future.
Well said. And the experts aren't only in industrial AI labs. Academic computer science departments have plenty of faculty who are building, harnessing, improving, and evaluating large language models. They are mostly on the left politically, but they understand the technology quite well -- being among its inventors -- and generally don't agree with Bender at all. They are sensible about the opportunities (and the hype), the current defects, and the societal risks. Why not ask them about regulation?
You are a real twat. Comparing views on AI to views on Climate Change is insane. Climate change is real and a danger to every living being. AI is a toy. Sure, it has scale and it’s super sophisticated, but nobody in this entire world needs it. You know what we all need? A world. Climate change is going to end the world. You know what is going to accelerate that? AI sucking up all the water to cool down its processing plants. AI is also rampant theft from artists and authors. It’s disgusting. This isn’t a left vs right issue, this is a humanity issue. AI is completely unnecessary. Society would be perfectly fine without it.
Your comment proves the point of the piece. The fact that you cite the "problem" of AI water use, which is flat-out misinformation (in reality you'd have to prompt ChatGPT 200,000 times to equal the water use of a single hamburger) shows that you are getting your ideas on the topic from people who don't really know anything about it – which is not going to prepare you very well for the future!
That sounds reassuring. I’m concerned about local impacts of sustainable energy developments, so I’d appreciate the chain of evidence for your claim if you can spare a moment. We need to concentrate on the burger chains and stop worrying about data centres over here, but I’ll need proof for the planning enquiry/ies, and we need it now, we have a May 5th deadline.
Well the push to "AI supremacy" has seen the reactivation of coal plants, destabilization of power grids, and rising cost of consumer technology across the board (RAM is up like 15x in less than 6 months). In all honesty, there's not much to worry about in terms of a hypothetical future where it's as successful as its proponents fantasize: all you have to do is type in a box. The "skill" is typing in a box, and we will all be in the same, undifferentiated position. The primary issue I have is that its wholesale "success" (whatever that even means?) depends on skirting a myriad of issues we can point to in the present. Even without "AGI" breakthroughs, we've seen companies use it to undercut workers, use it as a pretext to pay less and demand more work, normalize quality degradation, automate genocide, etc.
Thank you. It's infuriating garbage. I can only assume that some people are getting really concerned about their portfolios. The techno-utopian rhetoric used in this "article" on America's premier Nazi newsletter platform feels five years old. You can feel the desperation.
“You’ve offended my quasi-religious belief and threatened my devotion to ignorance! I’ll show you by doing the exact thing you called out in the article while not realizing it, to prove how much smarter I am than AI.”
Not much to say, just a very well-written piece.
the right ignores the problems of AI because they simply don't care, and it celebrates the use of Grok to generate child pornography *at scale* and to generate mass mis- and disinformation campaigns.
But ultimately the left isn't actually ignoring AI, we're the ones who are actually looking at the problems with it. You just *don't like hearing* those problems with the toy you like.
And yet you still call it a toy -- the exact dismissive attitude described in the essay. Toys don't tend to be as impactful as LLMs, don't you think?
Other kinds of toys don’t get used as vehicles for passive income on a level akin to real estate, and to the extent that they do, so much the worse.
I mean, I don't deny that it's certainly impactful that the people behind LLMs like Grok don't seem to give a shit about generating child porn and revenge porn at scale.
It's a bad impact, and it should result in people in prison, but you're right, it's impactful. Sorry I care about people more than you do.
William, man, we are trying to have a civil discourse. It is wholly unproductive to be this caustic in Substack comments, saying I don't care about people.
As someone in tech, I've found LLMs incredibly impactful in changing my day-to-day workflow. People in other industries and fields report similar things. It is ostrich-head-in-the-sand behavior to reduce the impact of LLMs to the creation of lurid images and copyright infringement, and I suspect you actually do know this.
The problem you described is very real and a constant risk being debated by people at the top labs and in relevant circles concerned with AI safety. Image generation is a narrowly scoped subset of dangerous behaviors that LLMs are capable of and that we need to reckon with. Any open source LLM is capable of being 'jailbroken' programmatically; the legendary Pliny the Liberator (a guy on Twitter) has shown this. It's a very serious danger that many people are oblivious to, so it's good that you and others are on to it. I just don't think it's productive to be so dismissive of the power of the tool we now all have access to.
I'm saying you don't care about people because you don't. You blithely dismiss the mass creation and dissemination of literal child porn by Grok as "oh well, it's just a downside of the technology," but you don't seem to give a shit about the women and *children* it is harming. I'm willing to bet money you don't care about the people who've been driven to psychosis and fucking suicide by the sycophantic chatbots you love so much.
Sorry you failed at humanity.
You would lose your money.
What has this style of discourse yielded for you in the past? You seem to have some idea that you are trying to do good in the world by caring about important causes. Has insulting people who are trying to have a good-faith dialogue ever changed anyone's mind in your previous experience? Have you ever considered there are more optimal ways to communicate?
Unless your goal is to just spew bile, in which case you are a troll, but I don't really see it.
Substantively --
What part of what I said could possibly be construed as a blithe dismissal?
"The problem you described is very real and a constant risk"
"Image generation is a narrowly scoped subset of dangerous behaviors that LLMs are capable of with that we need to reckon with"
"It's a very serious danger that many people are oblivious to, so it's good that you and others are on to it"