`Own bug file — not malware.`
It seems to be obsessively checking whether it's being used for malware production.
In another situation, where I was working on a parser for an HTML document with JS, it refused because it believed I was bypassing security measures.
I believe AI has to be supportive of the work I'm doing. When it obsessively checks whether I'm doing anything wrong or abusing the system, I get the feeling it is controlling me. I understand that we need guardrails, and I also understand it's very important that people do not abuse this new tech for bad purposes.
I pay $200 per month for a Max subscription. They already know who I am. Claude knows I work in scraper tech, and it also knows that our clients are the companies we scrape.
Now, with Opus 4.7, I've had a situation where it refused to continue because I asked it to automate cookie creation with a Chrome extension.
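For context, the automation in question is mundane. A minimal sketch of the kind of thing I mean, assuming Manifest V3 and the "cookies" permission, with placeholder domain, cookie name, and helper (none of this is the actual project code):

```typescript
// background.ts -- Manifest V3 service worker sketch.
// Assumes the "cookies" permission plus a matching host_permissions
// entry in manifest.json. Domain, cookie name, and helper below are
// hypothetical placeholders.

// Hypothetical helper: in a real setup the value would come from our
// own backend, not be generated locally.
async function fetchFreshSessionValue(): Promise<string> {
  return crypto.randomUUID();
}

async function setSessionCookie(value: string): Promise<void> {
  await chrome.cookies.set({
    url: "https://example.com/", // placeholder target site
    name: "session_id",          // hypothetical cookie name
    value,
    secure: true,
    expirationDate: Math.floor(Date.now() / 1000) + 3600, // valid 1 hour
  });
}

// Refresh the cookie each time the browser starts the extension.
chrome.runtime.onStartup.addListener(() => {
  fetchFreshSessionValue().then(setSessionCookie);
});
```

That's the kind of glue code browser automation has needed for a decade; nothing about it is novel or inherently abusive.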
In a situation where someone is abusing the system, say creating malware or hacking tools with bad intentions, I can imagine there being some signal system or algorithm that can form an opinion about the intentions that someone has. But now that the AI is limiting me in my work, I feel a little bit disrupted. Who the hell does this system think it is to limit me?
Am I going to accept this in the future? That a system will tell me I cannot continue because I don't have sufficient rights, or because it believes I'm doing something wrong?
I can work fine with a local AI on my Blackwell GPU. But of course, I want to use the latest tech, the latest AI, and the best models available. Is this the beginning of a split, where good people and naughty people make different choices? Am I the bad guy now?
Last year I turned 40. I grew up reading and talking about Kevin Mitnick. I was a member of a local computer club, hacking stuff as a 14-year-old kid who had no intention of breaking anything, only of outsmarting systems. Is that era gone now? Is the newer generation going to accept that they have to please the AI?
In a few years, the cognitive decline will be obvious.
The only people who remain curious are the people who actively want to be, despite AI, and most of the time against it.
Our ability to keep digging into things is entirely tied to the will of the people controlling AI to let us do so. Knowledge used to be power; now knowledge is money and they won't let us have it for much longer.
Whenever I tried the same with Google in the past, more often than not I couldn't find what I was looking for, because I didn't know the correct keywords to search for in order to start getting relevant results. With ChatGPT & co. I can just pose the question in natural language, get results and continue exploring.
So I guess I'd say it's more about how you're using the tool and what kinds of problems you're looking to solve with it. A calculator can be dinged for giving effortless answers at every turn, or it can be praised for enabling a higher volume of solved math problems and more complex work for a broader set of people.
If you're just copy-pasting answers and you don't internalize what is being said, sure, you're not being curious or, more importantly, learning. This DOES NOT mean that every person who engages with an LLM is doing that, or doing it every time. Just like using a search engine or grabbing a book can lead you into interesting rabbit holes, so can an LLM; it's just a matter of how fast and to what end.
The real issue is hallucinations, which, for people unfamiliar with a topic, can lead them to believe that what they're being told is a fact when it's not. LLMs also like to leave URLs and sources out of their replies to save on tokens unless you remind them, which is also annoying.
This whole discussion is a bunch of anecdotal evidence, which is fair, so I'll give my own. I've found myself engaging more with obscure topics that interest me via LLMs than I did with a search engine, because the barrier is lower. I don't have to sift through horribly designed websites filled with fluff that doesn't interest me, many with dozens of JS scripts trying to run (uBO + NoScript, thumbs up), some demanding that certain JS run just for me to see some plain text, some slow to browse with topics hidden under sub-sub-menus. It's annoying, and that's just one of many barriers; language is another, etc.
You think things like "is the accordion a better user experience than the side tabs" instead of "why the f is the third accordion pane empty?"
Sure, the curiosity of figuring out where you made the mistake is gone, but that was never very valuable. It's just a detour that forces you to be curious about something else.
Do you never click through to the sources or experimentally test the information presented to you by the LLM? If not, who's stopping you? To me, this seems a bit like a tenured academic complaining about the abundance of research assistants working for them preventing them from properly understanding things anymore.
The real problem is that most people either don't see the value in indulging their curiosity or don't have the time to. Even the language we use: "indulgence" to describe scratching that itch. How funny. Because curiosity is a luxury.
It is indeed. Curiosity, for me, very often stems out of a particular kind of idleness and boredom, paired with a tricky question I can't find an immediate answer to.
And I can definitely still be bored that way even with LLMs.
I brought it up two years ago, and got downvoted when I brought it up again a couple of months ago.
There is a story on the front page right now about someone losing their child's family videos to a YouTube ban. We hear about this stuff all the time. I suspect we are gonna be in somewhat of an arms race with AI products as the bubble grows over the next 18-24 months. This makes me worried about how disadvantaged people are going to be if they lose access to the better platform (whichever that ends up being).
Do you think AI is going to be so important that we would benefit from legal protections for access?
Or do you think the models and technology will become so small we will be able to personalize / decentralize the tech and it still be useful / competitive?
https://news.ycombinator.com/item?id=40784126
These things can be a poisoned chalice, leading to weaker long-term performance, or they can be a force multiplier. It's up to you how you use them.
Only if you let yours be killed.
There will always be a demand for high-value signal, even though it might not be as easy to find anymore. But then again, has it ever been?
> Our ability to keep digging into things is entirely tied to the will of the people controlling AI to let us do so.
I have sympathy for that argument when it comes to locked bootloaders, closed-source software etc., but with AI? How? Is the existence of ChatGPT and Claude somehow preventing you personally from reading a book or looking at source code?
I do see big problems around motivation of the next generation of engineers to keep looking under the hood if avoiding it is becoming so easy, but you should, individually, arguably feel more enabled to do so than ever.
Microsoft owns CoPilot and controls GitHub, LinkedIn, etc
Google owns Gemini and controls search results for most of the web
Meta owns whatever their model name is now and controls person-to-person relationships on the web
etc
It's up to any of them to flip the switch and make AI the default entry point when they decide that their AI isn't gaining enough traction. And then you can just hide the source data as proprietary information. Is it cynical? Sure, but I don't think we can say it's unlikely.
This is what gets me every single time. I genuinely don’t think this is a hard realization to come to, and yet, the vast majority of arguments from both sides of the aisle, both proponents and antis, always assume that EITHER you do everything yourself, OR you have the AI do everything for you. If you use AI, you’re DOOMED to never think critically about anything anyone ever tells you ever again. If you don’t, you’re an idiot, because everyone else is using it, and skills and experience no longer matter because everyone can now do everything.
And this is on HN, too; supposedly, a site where experienced engineers, developers, and builders converge; the exact kind of demographic you’d expect to understand such a thing as nuance. And yet, your comment is one of very few. There’s someone RIGHT HERE, a few comments down, saying, verbatim, “it’s a solution engine not a curiosity engine. Getting effortless answers at every turn is the opposite of curiosity.” Treating curiosity as the end rather than the means, as if I stop being a curious person once I find an answer to a question I’ve been asking myself, or as if curiosity is some sort of “temporary status effect” that an answer/solution “consumes.”
And it seems to be worse than just “no one’s thought it through properly.” I’ve literally had someone show a fundamental incapability to understand the concept. I spent a non-trivial amount of effort writing out three comments with several paragraphs about how knowing your knowns and unknowns, and the fact that you have unknown unknowns, is the most important thing in any project, not just when it comes to AI. That these tools aren’t just doers, but also searchers. That they’re pretty much the best rubber ducky that’s ever been created, and that I’d argue a rubber ducky is exactly what you should be using them as in any context where you’re not having them automate trivial and testable work. The guy refused to read any of it and, after three walls of text, continued claiming I’m “advocating for the LLM to guide me.” There is some sort of deeply instinctive and intrinsically defensive reflex that a lot of people seem to immediately collapse into when the topic comes up, and it seems to seriously impair the ability to acknowledge nuance or concede a single fraction of an inch. It’s baffling.
They are even worse than Google, which at least doesn't ban your whole account if you search the wrong thing.
If some LLMs become too strict, they'll simply be impossible to use reliably, and will hopefully fail along with their providers. Claude (only the reasoning models, after 4) has repeatedly refused to perform translations of text that wasn't even lyrics or poems; it's very stupid.
Since there's no real solution, they'll implement some "trick" that as a side effect will randomly block other people's work.
The two failure modes are different. Task refusal is recoverable. What ivankra described (account termination for building Node and V8 to investigate crashes) isn’t. No diagnostic output, no visible appeal path. Standard debugging workflow but with permanent consequences.
This is a reliability characteristic you have to design around, not a policy question. Any workflow that touches the classifier’s surface features needs a fallback. Most people find out they need one after the fact.
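As a concrete illustration, the fallback can be as simple as wrapping the call. A minimal sketch, assuming hypothetical `primary` and `fallback` clients and a placeholder `refused` flag (real providers surface refusals differently, e.g. via finish reasons):

```typescript
// Fallback sketch: treat a refusal like any other transient failure.
// Both clients and the `refused` flag are hypothetical stand-ins for
// whatever your actual providers return.

type Completion = { text: string; refused: boolean };
type Client = (prompt: string) => Promise<Completion>;

async function completeWithFallback(
  prompt: string,
  primary: Client,
  fallback: Client,
): Promise<Completion> {
  try {
    const result = await primary(prompt);
    if (!result.refused) return result;
    console.warn("primary refused; falling back");
  } catch (err) {
    console.warn("primary errored; falling back", err);
  }
  // e.g. a local model, which never terminates your account.
  return fallback(prompt);
}
```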
Whether that's Linux on your personal desktop and Windows on your work machine...
Oh and you built that desktop yourself, didn't you? But you can't even open the one at work or it's a violation.
GrapheneOS on your personal phone, and iOS on your work phone...
When this AI bubble crashes, we'll all be flooded with graphics cards no one else will want and all kinds of cool things will be built (are being built).
If you can stick it out a little longer you'll be fine. The tech you want to tinker with will be there.
Well, obviously the narrative being pushed is to stop learning to code, don't become a doctor, stop pursuing careers in law, creative writing, and art.
Why?
AI will be doing all of these things.
What a dumb take! As if AI is the means to all ends. Hopefully the next generation will learn what AI is for: it is simply a tool to augment your work, not something you delegate 100% of your thinking to.
presumably you paid money to another person who lent you the ability to use their API for _their_ purposes (likely: making money)
in an environment where "money-seeking" is the default behavior, it is only natural they're stopping you from doing things that will make them less money
think back to your computer club; was it about money?
leave to Caesar what is Caesar's, or something
It told me it would not help me.
Past iterations of Claude have done this without blinking.
I don’t like that it’s telling me what I can and can’t do with technology.
That feels like it’s trying to make judgment calls like it’s a Terminator instead of just the exoskeleton I used to fight the Queen Alien.
We are all witnessing the start of an AI era that will not end soon. Guardrails are a part of this development. I do have questions about the people, or systems, that decide what's good and bad behavior. This tech is used in every country in the world. As long as someone is able to pay their subscription in dollars, they are able to use it. Is it up to a company to decide what's good or bad behavior? Is this a debate? Is this politics? Is this just the vision of one company? Will it shift over time? Will it be stricter for more hyper-intelligent models? Will it change as open-source models become better and better?