For me the fatigue is a little different: it's the constant switching between doing a little bit of work/coding/reviewing and then stopping to wait for the LLM to generate something.
The waits are of unpredictable length, so you never know if you should wait or switch to a new task. So you just do something to kill a little time while the machine thinks.
You never get into a flow state, and you feel worn down from the constant vigilance of waiting for background jobs to finish.
I don't feel more productive; I feel like a lazy babysitter that's just doing enough to keep the kids from hurting themselves.
I know this is a terribly irresponsible and immature suggestion, but what I've been doing is every time I give Claude Code a request of indeterminate length, I just hit a blunt and chill out. That, and sometimes I'll tab into the kind of game that can be picked up and put down on very short notice; here's where I shamelessly plug the free and open-source game Endless Sky.
For me personally, programming lost most of its fun many years ago, but with Claude Code I'm having fun again. It's not the same, but for me, at this stage in my life, it's more enjoyable.
Programming stopped feeling fun for me once MBAs and bean counters took over. There's rarely time to write thoughtful code anymore. Even convincing management to invest in a sane architecture feels like an endless uphill battle.
Engineering teams are nothing but an annoying expense on the balance sheet, and the goal is to cram in as many features as quickly as possible to get the sale.
That's exactly why I'm happy to use every tool available to get the work done efficiently. To this end, LLMs have been great for me, especially when dealing with large amounts of boilerplate code.
The code works quite well, but I wouldn't inflict it on other humans. In my view, when you use a coding agent you're committing to forever maintaining that code with a coding agent. There are no human programmers participating in these projects.
I think oftentimes in the absence of context, people will substitute their own, usually worst-case context. They imagine someone vibe coding safety critical software that flies airplanes.
I think we much too often forget that the domain of software development has expanded its reach into literally everything and that we share a guild hall with all kinds: those who write deeply safety critical correct code, those who are hacking a blender, those who are just making their clerical task less repetitive, etc.
yeah that's how I do it too but careful with the blunt, yesterday I was working away, had some time while an agent swarm ran, took a "little break" and now ghostty looks like this: https://s.h4x.club/kpuGgD12 :|
I feel like we used to do so much more customization and theming of our desktop environments in the 2000s/2010s. I miss it, and therefore love this for you!
I was totally obsessed with it. When I first got into Linux as a kid in the late 90s, I spent hours and hours tweaking just my bashrc. I spent entirely too long on this stupid terminal, the media player, and getting the rainbow thing to match up to the time of whatever was playing... oh well, had fun!
While this is certainly very true, I find coding through an LLM to require far less effort dedicated to this cognitive switching than does writing in some programming language, primarily because I no longer have to load the mental context for converting my high level human instructions to code that a programming environment actually supports. The mental context seems more lightweight and closer to the way I think about the problem when I'm not sitting at the computer actively working on it. If an idea comes to me while I'm away from the computer I can momentarily sit down, type in whatever I just thought of, and get going almost immediately. I think it also saves a huge amount of cognitive load and stress (for me) involved with switching around between different programs and languages, an unfortunate fact of life when dealing with legacy systems.
I appreciate your honesty. For what it's worth, this is not a commercial endeavor and is all motivated entirely by scratching my own personal itches. I'm not being paid to do this.
One of the best programmers I know personally is constantly under the influence of marijuana. As "immature" as it may sound, she's still extremely aware of what she's doing and is able to work in an environment I would give up in after 2 weeks. The kind of environment that denies 1 day PTO for your birthday because of a deadline (hint, every week is a deadline).
I do not smoke myself, but it made me realize how little I know regarding THC and CBD
That's a type of fatigue that is not new, but I hear you: context-switching fatigue has increased tenfold with the introduction of agentic AI coding tools. Here are some more types of fatigue that have increased with the adoption of LLMs for writing code.
There are plenty of articles on review fatigue, including https://www.exploravention.com/blogs/soft_arch_agentic_ai/ which I published recently. The focus there is less on the impact on the developer and more on the impact on the organization, as letting bugs reach production will trigger a return to high-ceremony releases and release anxiety.
The OP article talks about AI fatigue, of which review fatigue is a part. I would sum up the other parts like this: the agentic AI workflow is so focused on optimizing for productivity that it burns the human out.
The remedy is also not new for office work: take frequent breaks. I would also argue that the human developer should still write some code every now and then, not because the AI cannot do it, but because it would slow the process down and allow the human to recover while still feeling invested.
I think all of this is why I don't really experiment with an LLM anymore. I just use it to ideate/rewrite things in different styles so I can turn rough drafts into finished pieces. It's essentially just an editor to bounce ideas off of. Using it that way is the only way I find myself being actually productive and not annoyed with it.
Seriously and beyond productivity, flow state was what I liked most about the job. A cup of coffee and noise cancelling headphones and a 2 hour locked in session were when I felt most in love with programming.
Speaking as someone with over 40 years of paid programming experience, I've never understood this "flow" thing. I typically do about half an hour's typing, get up and walk around, mooch over to a colleague and yack a bit, or go to the coffee machine, or just think a bit and then go back to the keyboard.
Never used headphones - if the environment is too loud, make it quieter. I once moved into a new office area that had a dot-matrix printer that "logged", in the worst sense of the word (how could you find any access on such a giant printout), every door open/close in the block. It was beyond annoying (ever heard a DM printer? only thing worse is a daisy wheel) so I simply unplugged it, took out the ink ribbon and twisted off the print head. It was never replaced, because as is very often the case nobody ever used the "reports" it produced.
I'm not at all convinced that "break your concentration and go check on an agent once every several minutes" is a productivity increaser. We already know that compulsively checking your inbox while you try to code makes your output worse. Both kill your focus and that focus isn't optional when you're doing cognitively taxing work--you know, the stuff an AI can't do. So at the moment it's like we're lobotomizing ourselves in order to babysit a robot that's dumber than we are.
That said I don't dispute the value of agents but I haven't really figured out what the right workflow is. I think the AI either needs to be really fast if it's going to help me with my main task, so that it doesn't mess up my state of flow/concentration, or it needs to be something I set and forget for long periods of time. For the latter maybe the "AIs submitting PRs" approach will ultimately be the right way to go but I have yet to come across an agent whose output doesn't require quite a lot of planning, back and forth, and code review. I'm still thinking in the long run the main enduring value may be that these LLMs are a "conversational UI" to something, not that they're going to be like little mini-employees.
For me AI has given that back to me. I'm back to just getting stuff built, not getting stuck for long when working in a new area. And best of all using AI for cleanup! Generate some tests, refactor common code. The boring corporate stuff.
I love the flow state, and I’m pretty sure it’s fundamentally incompatible with prompting. For me, when the flow state kicks in, it’s completely nonverbal and my inner dialogue shuts up. I think that’s part of why it feels so cool and fun when it hits.
But LLM prompting requires you to constantly engage with language processing to summarize and review the problem.
That's pretty funny, because LLMs actually help me achieve flow state more easily: they help me automate away the dumb shit that normally kind of blocks me. Flow state for me is not (just) churning out lines of code but having that flow of thought in my head that eventually flows to a solved problem without being interrupted. Interesting that for you flow state actually means your mind shutting up, lol. For me it means shutting up about random shit that doesn't matter to the task at hand and being focused only on solving the current problem.
It helps that I don't outsource huge tasks to the LLM, because then I lose track of what's happening and what needs to be done. I just code the fun part, then ask the LLM to do the parts that I find boring (like updating all 2000 usages of a certain function I just changed).
Interesting that for some people flow state is non-verbal. I personally have sort of a constant dialogue in my head (or sometimes muttered out loud under my breath) that I have to buffer or spool into various notes/diagrams/code. The process of prompting winds up being complementary to this—typing out that stream of consciousness into a prompt and editing it becomes a more effective form of reflection and ideation than my own process had been before. Sometimes I don’t even send the prompt—the act of structuring my thinking while writing it having made me rethink my approach altogether.
I still hit the flow state in cursor, always reviewing the plan for some feature, asking questions, learning, reviewing code. I'm still thinking hard to keep up with the model.
I joke that I'm on the "Claude Code workout plan" now.
Standing desk, while it's working I do a couple squats or pushups or just wander around the house to stretch my legs. Much more enjoyable than sitting at my desk, hands on keyboard, all day long. And taking my eyes off the screen also makes it easier to think about the next thing.
Moving around does help, but even so, the mental fatigue is real!
I've seriously wondered about merging a home office and home gym into one, and doing sets in between claude working. My usual workout has about 22-30 sets of exercises total and I probably wait on Claude that often in a day. It would be wonderful to be able to spread my exercise throughout the entire day. I'd also include an adjustable height desk so that I could be standing up for much of the workout/workday. I could even have a whiteboard in there.
I used to lose myself in focused work for hours. That's changed. Now I'm constantly pulled away, and I've noticed the pattern: I send a prompt, wait for the reply, and drift into browsing. Without SelfControl blocking me, I can't seem to resist. I am certainly more productive with LLMs, but I also feel much more tired (and guilty) after a day of work.
This was a common pattern for me even before LLMs, when my work required constantly rebuilding models or doing small deployments where each task/try took more than ~20ish seconds and less than, say, 3 minutes. That's enough to pull you out of it but not enough to make a proper break or switch tasks.
I suffered from the problems you describe: grabbing a browser window or my phone, which would usually hold my attention much longer than the task, and it left me burned out at the end of the day.
There are some helper tools, like blocking "interesting" pages (HN, Reddit) in the browser, putting the phone in a bag at the other end of the room, or using a pomodoro timer to sequence proper breaks. But in the end the only thing that really helped was getting into meditation: I try to use these little interruptions of flow as an opportunity to bore myself, to reframe boredom from an annoyance that needs to be fought into a chance to relax my brain for a couple of seconds and refocus.
The urge to grab the phone is strong at the start, but it gets better very soon once you manage to push through the discomfort of the first days.
I don't think it's unreasonable to assume that in 1-2 years inference speed will have increased enough to allow for "real time" prompting, where the agent finishes work in a few seconds instead of a couple of minutes. That will certainly change our workflows. It seems like we are in the dial-up era currently.
It's arguably already here; only cost is a concern. We now have an open-weights model at Sonnet 4.5+ level, and you can throw as much hardware at it as you want to speed it up.
Today Anthropic started offering 3x(?) Opus speed at 5x cost as well.
This. It’s the context switching and synchronicity, just like when you are managing a project and go round the table - every touch point risks having to go back and remember a bazillion things, plus in the meantime you lose the flow state.
You're supposed to write a detailed spec first (ask the AI for help with that part of the job too!) so that it's less likely to go off track when writing the code. Then just ask it to write the code and switch to something else. Review the result when the work is done. The spec then becomes part of your documentation.
I try to fix it by having multiple opencode instances running on multiple issues from different projects at the same time, but it feels like I'm just herding robots.
This way you can do twice the terrible job twice as fast!
(Also, this only applies if what you're working on happens to be easily parallelizable _and_ you're part of the extremely privileged subset of SV software engineers. Try getting two Android Studios/XCodes/Clang builds in parallel without 128GB of RAM, see what happens).
But yeah, improving build speed and parallel running are, I think, among the biggest advances devs can make to speed up development time in the AI age. With native apps that can be a challenge. I restructured a React Native project to make it faster to iterate on, but I have a feeling you might not be fond of RN.
It's a different kind of fatigue, but it's something I felt I got stronger at over time. Beats waiting IMHO, but be sure to give yourself a chance to rest.
Really interested in what the brain does when it "loads" the context for something it's familiar with but that is currently unloaded from working memory. Does it mostly try to align some internal state, or just load memories into fast access?
The next step is running an LLM that tries to figure out the parts of the project you aren't working on, so it automatically starts coding those while letting you code other stuff manually, in peace.
I hope Google has been improving their diffusion model in the background this whole time. Having an agentic system that can spin up diffusion agents for light tasks would be awesome.
For me it honestly matches pretty well. I give it an instruction and go reply to an email, and when I'm back in my IDE I have work (that was done while I was doing something else) to review.
Going back from email to reviewing someone else's work feels harder than going back from email to my own work.
What has worked for me is having multiple agents do different tasks (usually in different projects) and doing something myself that I haven't automated yet.
e.g. managing systems, initiating backups, thinking about how I'll automate my backups, etc.
The list of things I haven't automated is getting shorter, and having LLMs generate something I'm happy to hand the work to has been a big part of it.
For me, the question is: will this task take 30 seconds or 3 minutes?
With good planning I've been able to step away and come back. Sometimes it decides to prompt me within 5 seconds for permissions. Sometimes it runs for 15 minutes.
The output is still small and I can review it. I can switch tasks, however if it's my primary effort for the day I don't like stepping away for an hour to do something else.
Not the OP, but the new LLMs together with harnesses (OpenCode in my case) can handle larger scopes of work - so the workflow moves away from pair programming (single-file changes, small scope diffs) to full-feature PR reviewing.
Somewhat. You have to set yourself up to manage your own attention because the context switching is rough. If you don’t you will burn out.
But the cycle is longer. When you help a person they don’t come back to you 4 minutes later.
I also only review PRs at specific times a day, because that’s more cognitively intensive and switching in and out pretty much ensures you’ll do it badly.
Either way, I'm really starting to think agentic as designed is a deeply flawed workflow. The future could be small, fast models that finish pseudocode and look stuff up to aid focus. Anthropic's own research seems to support this.
I'm certainly getting tired of the AI slop images and videos. For coding and software development, I'm outright excited (and a little scared) of what I've been able to accomplish with GPT and Claude. I'm a software developer with 25 years experience living in the upper Midwest USA.
I write software professionally and remotely for a large, boring insurance company, but I'm building a side project in an area of interest using AI tools to assist, and in a couple of months at a few hours per week I've created what would have taken me a year or more. I've read others' comments about having to babysit the AI tools, but that's not so bad.
One little benefit I've noticed using AI tools to "vibecode" is that sometimes they come back with solutions I never would have come up with. ...and then there are the solutions where I click the Undo button and shake my head.
This write-up has good ideas but gives me the "AI-generated reading fatigue." Things that can cleanly be expressed in 1-2 sentences are whole paragraphs, often with examples that seem unnecessary or unrealistic. There are also some wrong claims, like the ones below:
> The Hacker News front page alone is enough to give you whiplash. One day it's "Show HN: Autonomous Research Swarm" and the next it's "Ask HN: How will AI swarms coordinate?" Nobody knows. Everyone's building anyway.
These posts got fewer than 5 upvotes; they didn't make it to the home page. And while the overall quality of Show HN might have dropped, the HN homepage is still quite sane.
The topic is also not something "nobody talks about"; it was being discussed even before agentic tools became available: https://hn.algolia.com/?q=AI+fatigue
> If you haven't spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
> Code must not be reviewed by humans
> Following this hypothesis, what C did to assembler, what Java did to C, what Javascript/Python/Perl did to Java, now LLM agents are doing to all programming languages.
(All quoted from actual homepage posts today. Fun game: guess which quote is from which article)
The real AI fatigue is the constant background irritation I have when interacting with LLMs.
"You're not imagining it"
"You're not crazy"
"You're absolutely right!"
"Your right to push back on this"
"Here's the no fluff, correct, non-reddit answer"
“You’re not [X]—you’re [Y]” is the one that drives me nuts. [X] is typically some negative characterization that, without RLHF, the model would likely just state directly. I get enough politics/subtext from humans. I’d rather the LLM just call it straight.
The boring and likely answer is that it was just clauded out: "I'm tired, chat, look through my last ten days of sessions and write and publish a blog post about why." But it would be fascinating to discover that the author has actually looked at so much AI output that they just write like this now.
> Things that can cleanly be expressed in 1-2 sentences are whole paragraphs
Perhaps the author just likes to write? I've only just recently started blogging more, but I unexpectedly started to really enjoy writing and am hoping to have my posts be more of a "story". Different people have different writing styles. It's not a problem, it's just that you prefer reading posts that are straight to the point.
> but gives me the "AI-generated reading fatigue."
Agree. The article could have been summarized into a few paragraphs. Instead, we get unnecessary verbiage that goes on and on in an AI-generated frenzy. Like the "organic" label on food items, I can foresee labels on content denoting the kind of human generating it: "suburbs-raised," "freelancer," etc.
> Things that can cleanly be expressed in 1-2 sentences are whole paragraphs
Funny, I don't associate that with AI. I associate it with having to write papers of a specific length in high school. (Though at least those were usually numbers of pages, so you could get a little juice from tweaking margins, line spacing and font size.)
The article is mostly GPT vomit after a couple of bullet points. If it's not as easy for others to tell, I'll start my Blade Runner-style shop that tells you who NOT to hire.
I'd personally think twice about applying some of the advice in that section. Here's my take.
> Time-boxing AI sessions.
Unless you are a full-time vibe coder, you already wouldn't be using AI all the time. But time-boxing it feels artificial if it's able to make good, real progress (not unmaintainable slop).
> Separating AI time from thinking time.
My usage of AI involves doing a lot of thinking, either collaboratively within a chat, or by myself while it's doing some agentic loop.
> Accepting 70% from AI.
This is a confusing statement. 70% of what? What does 70% usable even mean? If it means around 70% of features work and the other 30% are broken, perhaps AI shouldn't be used for that 30% in the first place.
> Being strategic about the hype cycle.
Hype cycles have always been a thing. It's good for the mind in general to avoid them.
> Logging where AI helps and where it doesn't.
I do most of this logging in my agent md files instead of a separate log. Also, after a bit, my memory picks up really quickly what AI can do and what it can't. I assume this is a natural process for many fellow engineers.
> Not reviewing everything AI produces.
If you are shipping at an insane speed, this is just an expected outcome, not advice you can follow.
> Your manager sees you shipping faster, so the expectations adjust. You see yourself shipping faster, so your own expectations adjust. The baseline moves.
This problem has been going on a long time; Helen Keller wrote about it almost 100 years ago:
> The only point I want to make here is this: that it is about time for us to begin using our labor-saving machinery actually to save labor instead of using it to flood the nation haphazardly with surplus goods which clog the channels of trade.
I really feel this. I can make meaningful progress on half a dozen projects in the course of a day now but I end the day exhausted.
I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
Decades of intuition about sustainable working practices just got disrupted. It's going to take a while and some discipline to find a good new balance.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
My problem is - before, I'd get ideas, start something, and it would either become immediately obvious it wouldn't be worth the time, or immediately obvious that it wouldn't turn out well / how I thought.
Now, the problem is, everything starts off so incredibly well and goes smoothly... Until it doesn't.
I used to have ideas and jot them down in Apple Notes and then usually forget about them entirely.
Now I have an idea and jot it down in the Claude Code tab on my iPhone... and a couple of minutes later the idea is software, and now I have another half-baked project to feel guilty about for the rest of time.
I still use Opus for difficult challenges, but if we're building a web app or creating a few scripts, I default to Haiku. It's so much faster, and obviously doesn't impact your usage as much.
Near-term, LLM coding will split into two major outcomes.
The larger often half-baked projects will flail like they always have. People will get tired of bothering to attempt these. Oh look you created a big bloated pile of garbage that nobody will ever use. And of course there will be rare exceptions, some group of N people will work together to vibe code a clone of a billion dollar business and it'll actually start taking off and that'll garner a lot of attention. It'll remain forever extremely difficult to get users to a service. And if app & website creation scales up in volume due to simplicity of creation, the attention economy problem will only get more intense (neutralizing most of the benefits of the LLMs as an advantage).
The smaller, quasi micro projects used to more immediately solve narrow problems will thrive in a huge way, resulting in tangible productivity gains, and there will be a zillion of these, both at home and within businesses of all sizes.
This is real. I'm a freelancer, and I used a small invoicing platform to create invoices for my customers. At "work" I build accounting systems and ERPs. So with AI, why would I pay monthly for invoicing when I can build it myself? After a day I had invoicing working, the simple thing where you get a PDF out. Then I started implementing double-entry bookkeeping, and support for different tax systems. And then: we need a sales part, then CRM, then warehouse, then projects to track time, and so on. Now I have a full SaaS that I don't need, and I'm not going to waste time competing in that market. I'm thinking of putting it out as open source.
"Invoicing for freelancers" has just about as many solutions as "to do" lists or ticket systems. Just use what you built if it works, open sourcing it is likely to get zero interest among the thousands of other options.
> they're finding building yet another feature with "just one more prompt" irresistible.
Totally my experience too. One last little thing to make it perfect, or something I decide would be "nice to have," ends up taking so much time in total. Luckily I can now access the same agent session in my phone's browser too, so I can keep an eye on things even in bed. (Joke but not joke :D)
It reduces the friction of coding tremendously. Coding was usually not the bottleneck but it still took a significant amount of time. Now we get to spend more time on the real bottlenecks. Gathering requirements from end users, deciding what should be built, etc.
> It reduces the friction of coding tremendously. Coding was usually not the bottleneck but it still took a significant amount of time.
I don’t think I agree. How can something be both “usually not a bottleneck” that usually “takes a significant amount of time” ?
> Now we get to spend more time on the real bottlenecks. Gathering requirements from end users, deciding what should be built, etc.
Sounds like you might really enjoy a PM role. Either way, LLM or not, whatever gets written up and presented will have a lot of focus on a bike shed or will make the end user realize allllll the other things they want added/changed, so the requirements change, the priorities change…
So now we just don’t get to do the interesting part… engineer things.
Just because the magical fairy helps you write things, you still need to ensure it's engineered properly. Especially at the macro level.
Some day it'll handle that, but for now it's very bound to make silly decisions that you need to be on top of, especially as those compound in a large scale system.
> How can something be both “usually not a bottleneck” that usually “takes a significant amount of time” ?
I don't understand what you don't understand. Is everything that takes a significant amount of time necessarily a bottleneck? You seem to imply that, but it makes no logical sense.
The funnel into the programming work is often more difficult/time-consuming/resource-intensive than the programming.
Also, sometimes it's not as costly, but should be: insufficient time and resources were spent up front, which caused the coding portion to take a lot longer than it should have. In that case the programming time may appear to be the bottleneck, but it was still really the funnel leading into it.
> Sounds like you might really enjoy a PM role
Enjoyment isn't really a factor in terms of what work needs to be done. And designing technical features isn't really a PM responsibility.
You write a lot about AI. If this is in your free time why not just take a break? If you are ten times more productive, rest for at least twice as much. I don’t get it.
I assume that if you take a break you'll have missed a lot when you come back, at the pace things are evolving. Which is OK for some people like OP but maybe not for simonw
We should ask how traders manage this. Markets are essentially 24/7 around the world. For them, the FOMO effects are even stronger... an actual money-earning opportunity.
Why we as a society should give a fuck if someone can’t stop prompting? Unless you mean we as a society should make you pay for the damages your prompts are doing to nature?
I've said this a few times here. Tech is never about making life easier for the worker. It's about making the worker more productive and the product more competitive.
Moving from horses to cars did not give you more free time. Moving from the telephone to the smartphone did not give you more fishing time. You just became more mobile, more productive, and more reachable.
It's not a choice. For example, Windows XP is no longer a choice, because the context around it made it unsafe now, though it didn't change. Life style from an older era is no longer the norm, which means your relative life quality degrades automatically and it actually becomes unsafe.
When I retire I plan to have no phone, no computer, and no TV. These are by far the biggest time sucks in my life and I want to see what I can do without their distractions.
I might keep a tablet or old phone with no service so that I can still do email.
It depends on the place where you go to live, and what it expects from you.
Some people tried that a bit and they had to retreat back to the usual connected life. What happens is, that old non-digital disconnected world is no longer there waiting for you. It may pretend to be the old world you desired, but it is looking at you and judging you. You become an animal in a zoo, instead of an anonymous part of the old-time world.
Author here. Not an anti-AI post. It's about the cognitive cost - faster tasks lead to more tasks, reviewing AI output all day causes decision fatigue, and the tool landscape churns weekly. Wrote about what actually helped. Curious if others are hitting similar walls.
Why did you use an LLM to write/change the words in your blog and your post? It really accentuates the sense of fatigue when I can tell I'm not interacting with a human on the other side of a message.
Great post; I certainly feel you. Not just the anxiety, but the need to push myself more and accomplish more now that I have some help. Setting the right expectations about what is practical, and accepting that not every "AI magic" post is worth my attention, has helped me with the anxiety and the FOMO.
Ugh, yes. Normally, you can theoretically pair someone up with a stronger engineer and watch as they learn and grow through their mistakes, while the stronger engineer keeps them on the proverbial straight and narrow with what they produce, through code reviews, documents, etc.
But now, I can't trust any of the models to be that reliable. I can't delegate that responsibility. And since context and prompting is such a fickle thing, I can't really trust any of them to learn from their mistakes, either.
Obviously an AI-generated article, and the author hasn't made any attempt to disclose it. Take that into consideration.
Yet, The Machine has good points.
> For someone whose entire career is built on "if it broke, I can find out why," this is deeply unsettling. Not in a dramatic way. In a slow, grinding, background-anxiety way. You can never fully trust the output. You can never fully relax. Every interaction requires vigilance.
> you are collaborating with a probabilistic system, and your brain is wired for deterministic ones. That mismatch is a constant, low-grade source of stress.
Back when I bought my first computer, it was a crappy machine that crashed all the time (peak of the fake-capacitor plague in 2006). That made me doubt and second-guess everything that is usually taken for granted in hardware and software (like simply booting up). That mindset proved useful later in my career.
I’m not saying anything new. Andy Hunt and Dave Thomas have written about it in a way better way. I find it to still hold very relevant guidelines.
Executive functioning fatigue. Usually you’re doing this in between applying skills, here it’s always making top level decisions and reasoning about possibilities. You don’t have nearly as much downtime because you don’t have to implement, you go from hard problem to hard problem with little time in between. You’re probably running your prefrontal cortex a lot hotter than usual.
People say AI will make us less intelligent, make certain brain regions shrink, but if it stays like this (and I suspect it won’t, but anyway…) then it’ll just make executive functioning super strong because that’s all you’re doing.
I loved the section about trying to fight against a system that isn't deterministic.
LLMs, because of their nature, require constant hand-holding by humans, unless businesses are willing to make them entirely accountable for the systems/products they produce.
I'm not saying it's a good idea, but the obvious way would be with evolution: give each agent its own wallet, rewarding it for a job well done and penalizing it for a poor job. If it runs out of money, it's "out of the game"; if it earns enough, it can spawn off another agent with similar characteristics and give it some of its money.
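A minimal sketch of that selection loop, with all names hypothetical and a random placeholder where a real system would score actual job outcomes:

```python
# Toy sketch of the "wallet" evolution scheme described above.
# Everything here is made up for illustration; `work()` stands in
# for a real evaluation of an agent's job performance.
import random

STARTING_BALANCE = 100.0
SPAWN_THRESHOLD = 200.0  # assumed cutoff for spawning an offspring

class Agent:
    def __init__(self, traits: float, balance: float = STARTING_BALANCE):
        self.traits = traits      # stand-in for "similar characteristics"
        self.balance = balance

    def work(self) -> float:
        # Placeholder: reward for a job well done, penalty for a poor one.
        return random.uniform(-20.0, 25.0)

def run_generation(agents: list[Agent]) -> list[Agent]:
    survivors = []
    for agent in agents:
        agent.balance += agent.work()
        if agent.balance <= 0:
            continue  # out of money means out of the game
        if agent.balance >= SPAWN_THRESHOLD:
            # Offspring inherits slightly mutated traits and
            # takes half of the parent's money.
            child = Agent(agent.traits + random.gauss(0, 0.05),
                          balance=agent.balance / 2)
            agent.balance /= 2
            survivors.append(child)
        survivors.append(agent)
    return survivors

population = [Agent(traits=random.random()) for _ in range(10)]
for _ in range(50):
    population = run_generation(population)
print(f"{len(population)} agents still in the game")
```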
My main source of AI fatigue is how it is the main topic anywhere and everywhere I go. I can't visit an art gallery without something pestering me about LLMs.
It seems a better and fuller solution to a lot of these problems is to just stop using AI.
I may be an odd one, but I'm refusing to use agents and just happily coding almost everything myself. I only ask an LLM occasional questions about libraries, etc., or to write the occasional function. Are there others like me out there?
AI generates a solution that's functional, but that's at a 70% quality level. But then it's really hard to make changes because it feels horrible to spend 1 hour+ to make minor improvements to something that was generated in a minute.
It also feels a lot worse because it would require context switching and really trying to understand the problem and solution at a deeper level rather than a surface level LGTM.
And if it functionally works, then why bother?
Except that it does matter in the long term as technical debt piles up. At a very fast rate too since we're using AI to generate it.
I agree with the sentiment. I don‘t code a lot, but AI has sped up things in all fields for which I use AI (or at least the expectation of speed has grown). For me, it’s the context switching but also just the mental load of holding so many projects and ideas together in my head. It somewhat helps that the usable context of LLMs has grown over time so I tend to trust the „memory“ of the AI a bit more to keep track of things and somewhat try to offload stuff from my brain
I'm somewhat new to HN, but most times I am inclined to add an emoji to a comment, it turns out that neither the tone or content are up to community standards.
My other comments probably aren't any better, but those escape my notice!
HN isn’t a singular hive-mind. There are different opinions on what kinds of humor have its place on it. At present the root comment has a good number of net upvotes, so there’s that.
We've all seen that if you are interacting with an AI over a lengthy chat, eventually it loses the plot. It gets confused. It appears to me that it's necessary, when coding an AI, to keep its task very limited in terms of the amount of information it needs to complete the task. Even then you still have to check the output very carefully. If it seems to be losing focus, I start a new task to reduce the context window, and focus on something that still needs to be fixed in the previous task.
> Here's what I think the real skill of the AI era is. It's not prompt engineering. It's not knowing which model to use. It's not having the perfect workflow.
> It's knowing when to stop.
99% of gamblers stop right before they hit it big.
I personally am a lot less stressed. It helped my mood a lot over the last couple of months. Less worries about forgetting things, about missing problems, about getting started, about planning and prioritizing in solo work. Much less of the "swirling mess" feeling. Context switches are simpler, less drudgery, less friction and pulling my hair out for hours banging against some dumb plumbing and gluing issue or installing stuff from github or configuring stuff on the computer.
I don't get this sentiment. If you don't want investors to give you any input, don't take money from investors. With a Claude Max subscription, it's cheaper than ever to develop a product entirely by yourself or with a couple of friends, if that's what you prefer to do.
Never, ever have productivity gains improved the lives of those who do the actual work. They only ever enriched the owners of the factories.
But with “AI” the gain is more code getting generated faster. That is the dumbest possible way to measure productivity in software development. Remember, code is a liability. Pumping out 10x the amount of code is not 10x productivity.
> The reason is simple once you see it, but it took me months to figure out. When each task takes less time, you don't do fewer tasks. You do more tasks.
> AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.
The combination of these two facts is why I'm so glad I quit my job a couple of years ago and started my own business. I'm a one-man show and having so much fun using AI as I run things.
Long term, it definitely feels like AI is going to drive company sizes down and lead to a greater prevalence of SMBs, since they get all the benefits with few of the downsides.
Apart from the exhaustion of context switching, I believe there is an internal signal that gauges how "fast" things are happening in your life. Stress responses are triggered whenever things are going too fast (as if you were driving on a narrow road at too much speed), and it feels like there is danger, since you intuit that a small mistake is going to have big consequences.
Some people thrive in more stressful situations, because they don't get as aroused in calmness, but everybody has a threshold velocity at which discomfort starts, higher or lower. AI puts us closer to that threshold, for sure.
Personally, I take a break from AI and write the code myself at least a few times each day. It keeps one intellectually honest about whether or not you really understand what's going on.
Yeah, I read a zine the other day, where a sociologist warned that the biggest threat isn't that AI destroys jobs, it's that AI is compacting the work per person.
Employers expect more from each employee, because, well, AI is helping them, right?
On the other side, I feel like using AI tools can reduce the cognitive overload of doing a single task, which can be nice. If you're able to work with a tool that's fast enough and just focus on a single task at a time, it feels like it makes things easier. When you try to parallelize that's when things get messier.
There's a downside to that too: cognitive effort is directly correlated with learning, so your own skills start to feel less sharp as you do that (as the article mentions).
All these tools can be a big waste of time if you're an end-user dev. It only makes sense if you are investing your time to eventually use that workflow knowledge to make a product.
I only use the free tiers of any particular app. It forces you to really think about what you want the tool to do, as opposed to treating it as the 'easy' button.
The other part that's exhausting is having to rethink your tool chain and workflow on a very regular basis. New models, new tools, new prompting strategies.
I keep pushing the AI to do absolutely everything, to a fault: instead of spending 10 minutes manually correcting a mistake the AI made, I spend hours adjusting and rerunning the prompt to correct it.
I'm finding fewer and fewer people who need convincing that there's some value in coding agents and prompting skills at this point. To the point where my reply to this is quite simple:
You’ve been left behind and at this very late point in the game i feel no obligation to even try to convince you.
> You're experiencing something real that the industry is aggressively pretending doesn't exist.
I agree with the article and recognize the fatigue, but I have never experienced that the industry is "aggressively pretending it does not exist". It feels like a straw man, but maybe you have examples of this happening.
> What should this function be named? I didn't care. Where should this config live? I didn't care. My brain was full. Not from writing code - from judging code.
Does it matter anymore? Most good engineering principles are to ensure code is easy to read and maintain by humans. When we no longer are the target audience for that, many such decisions are no longer relevant.
1. Take long pauses: 1h of work, then stop for 30 minutes or more. The productivity gain should leave you more time to rest. Alternatively, work just 50% of the time, 2h in the morning and 2h in the evening instead of 8 hours, while still trying to deliver more than before.
2. Don't mix N activities. Work in a very focused way on a single project, making meaningful progress.
3. Don't be too open-ended in the changes you make just because you can do them in little time now. Do what really matters.
4. When you are away, put an agent on the right rails to iterate and potentially produce some very good results in terms of code quality, security, speed, testing, ... This increases productivity without stressing you. When you return, inspect the results, discard everything that is trash, and take the gems, if any.
5. Be minimalistic even if you no longer write the code. Prompt the agent (and your AGENT.md file) to stay focused, to avoid useless dependencies and complexity, to keep the line count low, and to accept an improvement only when the complexity-cost/gain ratio is adequate.
6. Turn your flow into specification writing. Stop and write your specifications, even for a long time, without interruptions. This will greatly improve the output of the coding agents, and it is a moment of calm, focused work for you.
(1) is not something the typical employee can do, in my experience. They're expected to work eight hours a day. Though I suppose the breaks could be replaced with low effort / brain power work to implement a version of that.
Work for a smaller company with more reasonable expectations of a knowledge worker.
You're an engineer, not a manager, or a chef, or anything else. Nothing you do needs to be done Monday-Friday between the hours of 8 and 5 (except for meetings). Sometimes it's better if you don't do that, actually. If your work doesn't understand that, they suck and you should leave.
1) Is this for founders? Because employees surely can't do this. With new AI surveillance tech, companies are looking over our shoulders even more than before.
Engineers that have the audacity to think they can context switch between a dozen different lines of work deserve every ounce of burnout they feel. You're the tech equivalent of wanting to be a Kardashian and you're complicit in the damage being caused to society. No, this isn't hyperbole.
> If you're an engineer who uses AI daily - for design reviews, code generation, debugging, documentation, architecture decisions - and you've noticed that you're somehow more tired than before AI existed, this post is for you.
AI is not good for human health - there you have it.
> When each task takes less time, you don't do fewer tasks. You do more tasks.
And you're also paid more. Find a job that asks less of you if you are fatigued; not everyone wants to sacrifice their personal life for their career. Those are choices you have to make, but AI doesn't inherently force you to become overworked.
When I was in my mid 20s, I interned at a machine shop building automotive parts. In general, the work was pretty easy. I was modifying things via cad, doing dry runs on the cnc machine, loading raw material, and then unloading finished products for processing.
Usually there was a cadence to things that allowed for a decent amount of downtime while the machine was running, but I once got to a job where the machine milled the parts so quickly, that I spent more time loading and unloading parts than anything else. Once I started the first part, I didn't actually rest until all of them were done. I ended up straining my back from the repetitive motion. I was shocked because I was in good shape and I wasn't really moving a significant amount.
If I talk about excessive concern for productivity (or profit) being a problem, certain people will roll their eyes. It's hard to separate a message from the various agendas we perceive around us. Regardless of personal feelings, there will always be a negative fallout for people when there's a sudden inversion in workflow like the one described in this article or the one I experienced during my internship.
I’ve definitely been feeling that shift too. What have you guys found that helps with this? Any habits you use to avoid the constant context switching and decision fatigue?
I don't have exhaustion as such, but an increasing sense of dread: the more incredible work I achieve, the less valuable I realize it will potentially be, due to the low cost of the effort behind it.
This is like people crying about how their phones are ruining their lives. Just stop. Take some responsibility and control of your life. I don't feel exhausted, especially after letting an LLM hallucinate some code lines. If you do, maybe it is time to re-evaluate your life choices.
> So you read every line. And reading code you didn't write, that was generated by a system that doesn't understand your codebase's history or your team's conventions, is exhausting work.
I’ve noticed this strongly on the database side of things. Your average dev’s understanding of SQL is unfortunately shaky at best (which I find baffling; you can learn 95% of what you need in an afternoon, and probably get by from referencing documentation for the rest), and AI usage has made this 10x worse.
It honestly feels unreasonable and unfair to me. By requesting my validation of your AI-generated schema or query, you're tacitly admitting that a) you know it likely has problems, and b) you don't understand what it's written, but you're requesting a review anyway. This is outsourcing the cognitive load that you should be bearing as a normal part of designing software.
What makes it even worse is MySQL, because LLMs seem to consistently think that it can do things that it can’t (or is at least highly unlikely to choose to), like using multiple indices for a single table access. Also, when pushed on issues like this, I’ve seen them make even more serious errors, like suggesting a large composite index which it claimed could be used for both the left-most prefix and right-most prefix. That’s not how a B{-,+}tree works, my dude, and of all things, I would think AI would have rock-solid understanding of DS&A.
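To make the left-most-prefix point concrete, here's a toy sketch of my own (equality predicates only; real query planners are far more nuanced) of which filter sets a composite B-tree index can actually serve:

```python
# A composite B-tree index sorts rows by (a, b, c) lexicographically,
# so it can only narrow the search when the filtered columns form a
# left-most prefix of the index columns. A "right-most prefix" like
# (b, c) leaves the tree's top-level ordering unconstrained.
def index_can_serve(index_columns: list[str], filtered: set[str]) -> bool:
    """True if `filtered` is exactly a left-most prefix of the index."""
    usable = 0
    for col in index_columns:
        if col in filtered:
            usable += 1
        else:
            break  # ordering is useless past the first missing column
    return usable == len(filtered)

assert index_can_serve(["a", "b", "c"], {"a"})            # usable
assert index_can_serve(["a", "b", "c"], {"a", "b"})       # usable
assert not index_can_serve(["a", "b", "c"], {"b", "c"})   # not usable
assert not index_can_serve(["a", "b", "c"], {"a", "c"})   # gap at b: no
```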
This author puts into words several thoughts of mine that have been jelling recently. In the end it still seems to miss that AI is totally optional. It's a depressing read, because the basic gist is: this AI stuff is weird and depressing and addictive to the point of losing productivity, but I have to use it, so here are some ways to try to counter those negatives.
Dude! You don't have to use it!! Just write code yourself. Do a web search if you're stuck; the information is still out there on Stack Overflow and Reddit. Maybe use Kagi instead of Google, but the old ways still work really well.
But ... but ... your productivity as an engineer shoots up! You can take on more tasks and ship more!
-- Dumbass Engineering Director who has never written a line of code in their life.
Unfortunately, with these types of software simpletons making decisions, we are going to see way more push for AI usage and thus higher productivity expectations. They cannot wrap their heads around the fact (for starters) that AI is not deterministic, which increases the overhead on testing, security, requirements, integrations, etc., making all those productivity gains evaporate. Worse (as the author mentioned), it makes your engineers less creative and more burnt out.
Let's be honest here. Engineers broadly picked this career for two reasons: creativity and money. With AI, the creativity aspect is taken away and you are now more of a tester. As for money, those same dumbass decision makers are now going to view this skillset as a commodity and find people who can easily be trained up as "AI Engineers" for way less money to feed inputs.
I am all for technological evolution and welcome it, but this isn't anything like that. It is purely based on profits and shareholders, anything but building good, proper software systems. Quality be damned. The profession of software development be damned. We will regret it in the future.
AI cope regarding "you can still carefully design, AI won't take away your creative control or care for the craft" is the new "there's no problem with C's safety and design, devs just need to pay more attention while coding," or the "I'm not an alcoholic, I can quit anytime" of 2026...
Taking breaks is really something to try to solve in 2026: to just write regular code, to read, even to exercise. The mind can eventually get overloaded, and there's no way around proper hygiene.
The way I experience this is through an unprecedented amount of feature creep. We don't use AI-generated code in all our projects, but in the ones we do, I see a weird anti-pattern settle in: just because it's faster than ever to generate a patch and get it merged doesn't mean that merging 50+ commits this week makes sense.
Code and features still need time and stability to mature. We need to give our end users time to try things, to shape their opinions and habits. We need to let everyone on the dev team take the time to update their mental model of the project as patches are merged. Heck, I've seen too many Product Owners incapable of telling you clearly what went in and out of the code over the previous two releases, and those are usually a few weeks apart.
Making individual tasks faster should give us more time to think in terms of quality and stability. Instead, people want to add more features more often.
Absolute middlebrow dismissal incoming, but the real thinking atrophy is writing blog posts about thinking atrophy caused by LLMs using an LLM.
It is getting very hard to continue viewing HN as a place where I want to come and read content others have written when blog posts written largely with ChatGPT are constantly upvoted to the top.
It's not the co-writing process I have a problem with, it's that ChatGPT can turn a shower thought into a 10 minute essay. This whole post could have been four paragraphs. The introduction was clearly written by an intelligent and skilled human, and then by the second half there's "it's not X, it's Y" reframe slop every second sentence.
The writing is too good to be entirely LLM generated, but the prose is awful enough that I'm confident this was a "paste outline into chatgpt and it generates an essay" workflow.
Frustrating world. I'm lambasting OP, but I want him to write, but actually, and not via a lens that turns every cool thought into marketing sludge.
Why do you think the author used ChatGPT to write this? It has human imperfections, and except for 'The "just one more prompt" trap' I didn't think it was written by a prompt.
...and I usually come to doubt my own intuitions that this is the case when people say things like this, but my experience is usually that the LLM is doing more heavy lifting than you realise.
> Distill - deterministic context deduplication for LLMs. No LLM calls, no embeddings, no probabilistic heuristics. Pure algorithms that clean your context in ~12ms.
I simply do not believe that this is human-generated framing. Maybe you think it said something similar before. But I don't believe that is the case. I am left trying to work out what you meant through the words of something that is trying to interpret your meaning for you.
I can’t take seriously an article on AI written so obviously using AI. The unmistakable (lack of) style. If the author is not even aware that the rhetoric that GPT produces is unvaried and predictable, how can I believe the author really means what they write? Slop is cancer
I haven’t hit this yet and now I feel like someone just told me about thorns for the first time while I’m here jogging confidently through the woods with shorts on.
I've been building https://roborev.io/ (continuous background code review for agents), essentially as a cope, to supervise the poor quality of the agents' work, since my agents write much more code than I can possibly review directly or QA thoroughly. I think we'll see a bunch of interesting new tools to help alleviate the cognitive burden of supervising their output.
True determinism is rare, we often don't get it. That's what purely functional languages are all about and they're a minority.
We are trained on the other thing: unpredictable user interaction, parallelism, circuit-breaking, etc. That's the bread and butter of engineering (of all kinds, really, not just IT).
The non-deterministic intuition is baked into engineering much more than determinism is.
I see, you're using "determinism" colloquially, in the sense of "exact outcome".
That's perfectly fine. We are honed for this too.
We don't need to produce exact solutions or answers. We need to make things work despite the presence of chaos. That is our job and we're good at it.
Product managers freak out when someone says "I don't know how much time it will take, there are too many variables!". CFOs freak out when someone says "we don't know how much it will cost". Those folk want exact, predictable outcomes.
Engineers don't, we always dealt with unpredictable chaotic things. We're just fine.
Personally I'm loving AI for TECHNICAL problems. Case in point: I just had a server crash last night, and obviously I need to write a summary of what could have caused the issue. This used to take hours, painful hours at that. If you've ever had to scroll through a Windows event log, you know what I'm talking about. But today I just exported the log, uploaded it to Gemini, and asked it:
Looking at this Windows event log, the server rebooted unexpected this morning at 4:21am EST, please analyze the log and let me know what could have been the cause of the reboot.
It took Gemini 5 minutes to come back with an analysis, and not only that, it asked me for the memory dump the machine took. I uploaded that as well, and it told me that it looks like SentinelOne might have caused the problem and to update the client if possible.
Checking the logs myself, that's exactly what it looks like.
That used to take me HOURS to do and now it took me 30 seconds, took Gemini 10 minutes, but me 30 seconds. That is a game changer if you ask me.
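For anyone who wants to script that instead of clicking through the web UI, here's a minimal sketch, assuming the google-generativeai Python client; the file name, model choice, and prompt are illustrative:

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # assumes you have a Gemini API key

    # Upload the event log exported from Event Viewer as text/CSV
    log_file = genai.upload_file(path="system_events.csv")

    model = genai.GenerativeModel("gemini-1.5-pro")
    response = model.generate_content([
        log_file,
        "This server rebooted unexpectedly at 4:21am EST. "
        "Analyze the log and tell me the likely cause of the reboot.",
    ])
    print(response.text)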
I love my job, but I love doing other things rather than combing over a log trying to figure out why a server rebooted. I just want to know what to do to fix it if it can be fixed.
I get that AI might be giving other people a sour taste, but to me it really has made my job, and the menial tasks that come with it, easier.
I have little to no experience with Windows Server, but at least on Linux, this shouldn’t take hours.
Find the last log entries for the system before the reboot; if they point to a specific application, look at its logs, otherwise just check all of them around that time, filtering by log level. Check metrics as well - did the application[s] stop handling requests prior to the restart (keeping in mind that metrics are aggregations), or was it fine up until it wasn’t?
If there are no smoking guns, a hardware issue is possible, in which case any decent server should have logged that.
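On a systemd box, that first step is nearly a one-liner. A minimal sketch (wrapped in Python for consistency; assumes journald keeps persistent logs, and the 200-line tail is arbitrary):

    import subprocess

    # Warning-or-worse entries from the *previous* boot (-b -1); the last
    # lines before an unexpected reboot usually name the culprit.
    out = subprocess.run(
        ["journalctl", "-b", "-1", "-p", "warning", "-n", "200", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)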
> I just want to know what to do to fix it if it can be fixed.
Serious question: how do you plan on training juniors if troubleshooting consists of asking an AI what to do?
I'm a big AI booster, but I'm so sick of how crazy hype has gotten. Claude Cowork? Game changer! Ralph? Nothing will ever be the same. LOLClaw? Singularity, I welcome our new AI overlords.
I’m shocked that the obvious analysis hasn’t come up: this is more disingenuous, Karpathy-style talk, designed to stoke FOMO, coming from someone who isn't developing normal software with A.I. but is selling A.I. programming tools.
> I shipped more code last quarter than any quarter in my career. I also felt more drained than any quarter in my career. These two facts are not unrelated.
I’m gonna be generous (and try not to be pedantic) and assume that more-code means more bugfixes and features (and whatnot) and not more LOC.
Your manager has mandated X tokens a day or you feel you have to use it to keep up. Huh?
> I build AI agent infrastructure for a living. I'm one of the core maintainers of OpenFGA (CNCF Incubating), I built agentic-authz for agent authorization, I built Distill for context deduplication, I shipped MCP servers. I'm not someone who dabbles with AI on the side. I'm deep in it. I build the tools that other engineers use to make AI agents work in production.
Oh.
> If you're an engineer who uses AI daily - for design reviews, code generation, debugging, documentation, architecture decisions - and you've noticed that you're somehow more tired than before AI existed, this post is for you. You're not imagining it. You're not weak. You're experiencing something real that the industry is aggressively pretending doesn't exist. And if someone who builds agent infrastructure full-time can burn out on AI, it can happen to anyone.
This is what ChatGPT writes to me when I ask “but why is that the case”.
1. No, you are not wrong
2. You don’t have <bad character trait>
3. You are experiencing something real
> I want to talk about it honestly. Not the "AI is amazing and here's my workflow" version. The real version.
And it will be unfiltered. Raw. And we will conclude with how to go on with our Flintstone Engineering[2] but with some platitudes about self-care.
> The real skill ... It's knowing when to stop.
Stop prompting? Like, for...
> Knowing when the AI output is good enough.
Ah. We do short prompting sessions instead.
> Knowing that your brain is a finite resource and that protecting it is not laziness - it's engineering.
Indeed it’s not this thing. It’s that—thing.
> AI is the most powerful tool I've ever used. It's also the most draining. Both things are true. The engineers who thrive in this era won't be the ones who use AI the most. They'll be the ones who use it the most wisely.
Of course we will keep using “the most powerful tool I’ve ever used”. But we will do it wisely.
What’s to worry about? You can use ChatGPT as your therapist now.
The weird thing at the end of the day is that we live in a world with a default individual desire to be more "productive." I am always wondering: productive for whom, for what?
I know more than most that there is some baseline productivity we are always trying to hit, which can sometimes be a target more than a current state. But the way people talk about their AI workflows is different. It's like everyone has become a tyrannical factory-floor manager, pushing ever further for productivity gains.
Leave this kind of productivity to the bosses, I say! Life is a broader surface than this. We can and should focus on being productive together, but save your actual life for finer, more sustainable ventures.
I think the fatigue is that the technology has been hyped since long before today when it’s actually started to become somewhat useful.
And even today when it’s useful, it’s really most useful for very specific domains like coding.
It’s not been impressive at all with other applications. Just chat with your local AI chat bot when you call customer service.
For example, I watch a YouTube channel where this guy calls up car dealerships to negotiate car deals and some of them have purchased AI receptionist solutions. They’re essentially worse than a simple “press 1 for sales” menu and have essentially zero business value.
Another example, I switched to a cheap phone plan MVNO that uses AI chat as its first line of defense. All it did was act as a natural language search engine for a small selection of FAQ pages, and to actually do anything you needed to find the right button to get a human.
These two examples of technology were not worth the hype. We can blame those businesses all day long but at the end of the day I can’t imagine those businesses are going to be impressed with the results of the tech long term. Those car dealerships won’t sell more cars because of it, my phone plan won’t avoid customer service interactions because of it.
In theory, these AI systems should easily be able to be plugged in to do some basic operations that actually save these businesses from hiring people.
The cellular provider should be able to have the AI chatbot make real adjustments to your account, even if they’re minor.
The car dealership bot should be able to set the customer up in the CMS by collecting basic contact info, and maybe should be able to send a basic quote on a vehicle stock number before negotiations begin.
But in practice, these AI systems aren’t providing significant value to these businesses. Companies like Taco Bell can’t even replace humans taking food orders despite the language capabilities of AI.
My comment is relevant because I’m pointing out, like the article does, that AI isn’t turning out to be anywhere near as useful and low-friction as it has been promised. Hence, the fatigue.
For me the fatigue is a little different— it’s the constant switching between doing a little bit of work/coding/reviewing and then stopping to wait for the llm to generate something.
The waits are unpredictable length, so you never know if you should wait or switch to a new task. So you just do something to kill a little time while the machine thinks.
You never get into a flow state and you feel worn down from this constant vigilance of waiting for background jobs to finish.
I dont feel more productive, I feel like a lazy babysitter that’s just doing enough to keep the kids from hurting themselves
I know this is a terribly irresponsible and immature suggestion, but what I've been doing is every time I give claude code a request of indeterminate length, I just hit a blunt and chill out. That and sometimes I'll tab into the kind of game that can be picked up and put down on very short notice, here's where I shameless plug for the free and open source game Endless Sky.
For me personally, programming lost most of it's fun many years ago, but with claude code I'm having fun again. It's not the same, but for me personally, at this stage in my life, it's more enjoyable.
Programming stopped feeling fun for me once MBAs and bean counters took over. There's rarely time to write thoughtful code anymore. Even convincing management to invest in a sane architecture feels like an endless uphill battle.
Engineer teams are nothing but an annoying expense on the balance sheet and the goal is to cram as many features, as quickly as possible to get the sale.
That's exactly why I'm happy to use every tool available to get the work done efficiently. To this end, LLMs have been great for me, especially when dealing with large amounts of boilerplate code.
Long gone are the days of crafting artisan code.
Now that’s vibe coding.
That’s vibe coding while high. Probably terrible for assessing the results from Claude.
The code works quite well, but I wouldn't inflict it on other humans. In my view, when you use a coding agent you're committing to forever maintaining that code with a coding agent. There are no human programmers participating in these projects.
I think oftentimes in the absence of context, people will substitute their own, usually worst-case context. They imagine someone vibe coding safety critical software that flies airplanes.
I think we much too often forget that the domain of software development has expanded its reach into literally everything and that we share a guild hall with all kinds: those who write deeply safety critical correct code, those who are hacking a blender, those who are just making their clerical task less repetitive, etc.
Ballmer curve in full effect
For anyone else that didn't recognize this reference, it's also known as the "Ballmer Peak": https://xkcd.com/323/
That's precisely how I refactored dank-extract from dank-mcp and finally got dank-data to archive CT canna-data every Sunday at 4:20pm Pacific.
[1] https://github.com/AgentDank/dank-extract
[2] https://github.com/AgentDank/dank-data
yeah that's how I do it too but careful with the blunt, yesterday I was working away, had some time while an agent swarm ran, took a "little break" and now ghostty looks like this: https://s.h4x.club/kpuGgD12 :|
I feel like we used to do so much more customization and theming of our desktop environments in the 2000s/2010s. I miss it, and therefore love this for you!
I was totally obsessed with it, when I first got into linux as a kid in the late 90s, I spent forever and hours tweaking just my bashrc. I spent entirely too long on this stupid terminal, the media player and getting the rainbow thing to match up to the time of whatever was playing... oh well, had fun!
20 years ago we had the Ballmer Peak. With AI you may have found the next thing!
I play piano; it's a cool gameplay loop.
Except there is a well-known phenomenon among programmers: getting started on work requires more energy than the work itself (*).
Every time you chill out and come back to work, you will have to invest that extra bit of start-up energy. Which can be draining.
(* probably has to do with reloading your working memory)
While this is certainly very true, I find coding through an LLM to require far less effort dedicated to this cognitive switching than does writing in some programming language, primarily because I no longer have to load the mental context for converting my high level human instructions to code that a programming environment actually supports. The mental context seems more lightweight and closer to the way I think about the problem when I'm not sitting at the computer actively working on it. If an idea comes to me while I'm away from the computer I can momentarily sit down, type in whatever I just thought of, and get going almost immediately. I think it also saves a huge amount of cognitive load and stress (for me) involved with switching around between different programs and languages, an unfortunate fact of life when dealing with legacy systems.
I didn’t know it was a common thing but it’s definitely something I’ve experienced!
Context switching tax.
For some. I have never experienced this myself. All my energy goes into keeping unconsciously destructive people from sabotaging the effort.
Man you restored the original vibe in vibe coding.
(In all seriousness though, it's probably not good for your health.)
Imagine the captain high while auto pilot is on... who's flying this thing!
It’ll probably be fine. Mostly.
Wow, this is a new low I did not expect to see. "Just get high and you'll be able to code with an LLM." Preceded by, "I know it's terrible."
I'm wholly unwilling to relinquish my health and my morals to "AI" so I can "ship faster." What a pathetic existence that would be.
I appreciate your honesty. For what it's worth, this is not a commercial endeavor and is all motivated entirely by scratching my own personal itches. I'm not being paid to do this.
One of the best programmers I know personally is constantly under the influence of marijuana. As "immature" as it may sound, she's still extremely aware of what she's doing and is able to work in an environment I would give up in after 2 weeks. The kind of environment that denies 1 day PTO for your birthday because of a deadline (hint, every week is a deadline).
I do not smoke myself, but it made me realize how little I know regarding THC and CBD
Welcome to Taylorism. Not just for assembly line workers anymore.
That's a type of fatigue that is not new, but I hear you: context switching fatigue has increased tenfold with the introduction of agentic AI coding tools. Here are some more types of fatigue that have increased with the adoption of LLMs in writing code.
There are plenty of articles on review fatigue, including https://www.exploravention.com/blogs/soft_arch_agentic_ai/ which I published recently. The focus there is less about the impact on the developer and more about the impact on the organization, as letting bugs go to production will trigger a return to high-ceremony releases and release anxiety.
The OP article talks about AI fatigue of which review fatigue is a part. I guess that I would sum up the other parts like this. The agentic AI workflow is so focused on optimizing for productivity that it burns the human out.
The remedy is also not new for office work, take frequent breaks. I would also argue that the human developer should still write some code every now and then, not because the AI cannot do it but because it would slow the process down and allow for the human to recover while still feeling invested.
I think all of this is why I don't really experiment with an LLM anymore. I just use it to ideate and rewrite things in different styles so I can turn rough drafts into finished pieces. It's essentially just an editor to bounce ideas off of. Using it that way is the only way I find myself being actually productive and not annoyed with it.
Seriously and beyond productivity, flow state was what I liked most about the job. A cup of coffee and noise cancelling headphones and a 2 hour locked in session were when I felt most in love with programming.
Speaking as someone with over 40 years of paid programming experience, I've never understood this "flow" thing. I typically do about half an hour's typing, get up and walk around, mooch over to a colleague and yack a bit, or go to the coffee machine, or just think a bit and then go back to the keyboard.
Never used headphones - if the environment is too loud, make it quieter. I once moved into a new office area that had a dot-matrix printer that "logged", in the worst sense of the word (how could you find any access on such a giant printout), every door open/close in the block. It was beyond annoying (ever heard a DM printer? only thing worse is a daisy wheel) so I simply unplugged it, took out the ink ribbon and twisted off the print head. It was never replaced, because as is very often the case nobody ever used the "reports" it produced.
I'm not at all convinced that "break your concentration and go check on an agent once every several minutes" is a productivity increaser. We already know that compulsively checking your inbox while you try to code makes your output worse. Both kill your focus and that focus isn't optional when you're doing cognitively taxing work--you know, the stuff an AI can't do. So at the moment it's like we're lobotomizing ourselves in order to babysit a robot that's dumber than we are.
That said I don't dispute the value of agents but I haven't really figured out what the right workflow is. I think the AI either needs to be really fast if it's going to help me with my main task, so that it doesn't mess up my state of flow/concentration, or it needs to be something I set and forget for long periods of time. For the latter maybe the "AIs submitting PRs" approach will ultimately be the right way to go but I have yet to come across an agent whose output doesn't require quite a lot of planning, back and forth, and code review. I'm still thinking in the long run the main enduring value may be that these LLMs are a "conversational UI" to something, not that they're going to be like little mini-employees.
For me AI has given that back to me. I'm back to just getting stuff built, not getting stuck for long when working in a new area. And best of all using AI for cleanup! Generate some tests, refactor common code. The boring corporate stuff.
I love the flow state, and I’m pretty sure it’s fundamentally incompatible with prompting. For me, when the flow state kicks in, it’s completely nonverbal and my inner dialogue shuts up. I think that’s part of why it feels so cool and fun when it hits.
But LLM prompting requires you to constantly engage with language processing to summarize and review the problem.
That's pretty funny because LLM's actually help me achieve flow state easier because they help me automate away the dumb shit that normally kind of blocks me. Flow state for me is not (just) churning out lines of code but having that flow of thought in my head that eventually flows to a solved problem without being interrupted. Interesting that for you the flow state actually means your mind shutting up lol. For me it means shutting up about random shit that doesn't matter to the task at hand and being focused only on solving the current problem.
It helps that I don't outsource huge tasks to the LLM, because then I lose track of what's happening and what needs to be done. I just code the fun part, then ask the LLM to do the parts that I find boring (like updating all 2000 usages of a certain function I just changed).
Interesting that for some people flow state is non-verbal. I personally have sort of a constant dialogue in my head (or sometimes muttered out loud under my breath) that I have to buffer or spool into various notes/diagrams/code. The process of prompting winds up being complementary to this—typing out that stream of consciousness into a prompt and editing it becomes a more effective form of reflection and ideation than my own process had been before. Sometimes I don’t even send the prompt—the act of structuring my thinking while writing it having made me rethink my approach altogether.
I still hit the flow state in cursor, always reviewing the plan for some feature, asking questions, learning, reviewing code. I'm still thinking hard to keep up with the model.
The question is what comes out of those 2 hours in noise-cancelling headphones.
I joke that I'm on the "Claude Code workout plan" now.
Standing desk, while it's working I do a couple squats or pushups or just wander around the house to stretch my legs. Much more enjoyable than sitting at my desk, hands on keyboard, all day long. And taking my eyes off the screen also makes it easier to think about the next thing.
Moving around does help, but even so, the mental fatigue is real!
I've seriously wondered about merging a home office and home gym into one, and doing sets in between claude working. My usual workout has about 22-30 sets of exercises total and I probably wait on Claude that often in a day. It would be wonderful to be able to spread my exercise throughout the entire day. I'd also include an adjustable height desk so that I could be standing up for much of the workout/workday. I could even have a whiteboard in there.
Coffee shops got filled with the laptop crew, are gyms the next frontier?
As a programmer I want to minimize my context switches, because they require a lot of energy.
LLMs force me to context switch all the time.
I used to lose myself in focused work for hours. That's changed. Now I'm constantly pulled away, and I've noticed the pattern: I send a prompt, wait for the reply, and drift into browsing. Without SelfControl blocking me, I can't seem to resist. I am certainly more productive with LLMs, but I also feel much more tired (and guilty) after a day of work.
This has been a common pattern for me before LLMs, when my work required constantly rebuilding models or doing small deployments where each task/try took more than ~20ish seconds and less than say 3 minutes. It's enough to pull you out of it but not enough to make a proper break or switch tasks.
I suffered from the problems you describe, grabbing a browser window or my phone which would usually take my attention much longer than the task and it left me burned out at the end of the day.
There are some helper tools, like blocking "interesting" pages (HN, Reddit) in the browser, putting the phone in a bag at the other end of the room, or using a pomodoro timer to sequence proper breaks. But in the end the only thing that really helped was getting into meditation: I try to use these little interruptions of flow as an opportunity to bore myself. Try to reframe boredom from an annoyance that needs to be fought into a chance to relax your brain for a couple of seconds and refocus.
The urge to grab the phone is strong at the start, but it gets better very soon once you manage to push through the discomfort of the first days.
I don’t think it’s unreasonable to assume that in 1-2 years inference speed will have increased enough to allow for "real time" prompting, where the agent finishes work in a few seconds instead of a couple of minutes. That will certainly change our workflows. It seems like we are in the dial-up era currently.
It's arguably already here; only cost is a concern. We now have an open-weights model at Sonnet 4.5+ level - you can throw as much hardware at it as you want to speed it up.
Today Anthropic started offering 3x(?) Opus speed at 5x cost as well.
For me, plan mode is consistently pretty fast. Then to implement, I just walk away and wait for it to be done while working on a new plan in a new tab.
Probably more stress if I’m on battery and don’t want the laptop to sleep or WiFi to get interrupted.
This. It’s the context switching and synchronicity, just like when you are managing a project and go round the table - every touch point risks having to go back and remember a bazillion things, plus in the meantime you lose the flow state.
You're supposed to write a detailed spec first (ask the AI for help with that part of the job too!) so that it's less likely to go off track when writing the code. Then just ask it to write the code and switch to something else. Review the result when the work is done. The spec then becomes part of your documentation.
I have this problem too.
I try to fix it by having multiple opencode instances running on multiple issues from different projects at the same time, but it feels like I'm just herding robots.
Maybe I'm ready for gastown..
Inferring is the new compiling: https://3d.xkcd.com/303/
Edit: Looks like plenty of people have observed this: https://www.reddit.com/r/xkcd/comments/12dpnlk/compiling_upd...
That’s why now it’s legitimate to work on multiple features or projects at the same time
This way you can do twice the terrible job twice as fast!
(Also, this only applies if what you're working on happens to be easily parallelizable _and_ you're part of the extremely privileged subset of SV software engineers. Try getting two Android Studios/XCodes/Clang builds in parallel without 128GB of RAM, see what happens).
I appreciate sarcasm, but this is just snarky.
But yeah, improving build speed and parallel execution is one of the biggest things devs can do to speed up development time in the AI age. With native apps that can be a challenge. I restructured a React Native project to make it faster to iterate on, but I have a feeling you might not be fond of RN.
I tried this but didn't realize how exhausting it is to think about even 2 smaller items at once.
Context switching like that is exhausting
It's a different kind of fatigue, but it's something I felt I got stronger at over time. Beats waiting IMHO, but be sure to give yourself a chance to rest.
Really interested in what the brain does when it "loads" the context for something it's familiar with but which is currently unloaded from working memory. Does it mostly try to align some internal state, or does it just load memories into fast access?
Depends on the person I guess
The next step is running an LLM that tries to figure out the parts of the project you aren't working on, so it automatically starts coding those while letting you code other stuff manually in peace.
I hope Google has been improving their diffusion model in the background this whole time. Having an agentic system that can spin up diffusion agents for lite tasks would be awesome
Because they would be faster?
~1000 tok/sec and lite/flash model quality, without crazy cerebras level hardware.
Makes you wonder how automatable this babysitter role is...
That was my reaction.
For me it honestly matches pretty well. I give it an instruction and go reply to an email, and when I'm back in my IDE I have work (that was done while I was doing something else) to review.
Going back from email to reviewing someone else's work feels harder than going back from email to your own work.
What has worked for me is having multiple agents do different tasks (usually in different projects) and doing something myself that I haven't automated yet.
e.g. managing systems, initiating backups, thinking about how I'll automate my backups, etc.
The list of things I haven't automated is getting shorter, and having LLMs generate something I'm happy to hand the work to has been a big part of it.
Same here. But, then again, I talk about it all the time, so who knows what the article is trying to get at.
It’s like being a manager.
No, it’s like being a micro manager.
I don’t just give somebody a ticket and let them go. I give them a ticket but have to hover over their shoulder and nitpick their design choices.
I tell them "you should use a different name for that new class", "that function should actually be a method on this other thing", etc.
Nitpicking seems like a choice. It's also possible to be more relaxed/removed and only delve in when there is a problem.
Have a three-monitor setup. Keep some game on one of them, an alt-tab away.
What are you generating that the LLM takes so long? I usually prompt and review in small pieces.
For me the question is: will this task take 30 seconds or 3 minutes? With good planning I've been able to step away and come back. Sometimes it decides to prompt me within 5 seconds for permissions. Sometimes it runs for 15 minutes.
The output is still small and I can review it. I can switch tasks, however if it's my primary effort for the day I don't like stepping away for an hour to do something else.
Not the OP, but the new LLMs together with harnesses (OpenCode in my case) can handle larger scopes of work - so the workflow moves away from pair programming (single-file changes, small scope diffs) to full-feature PR reviewing.
I wonder if this is how managers feel -_-'
Somewhat. You have to set yourself up to manage your own attention because the context switching is rough. If you don’t you will burn out.
But the cycle is longer. When you help a person they don’t come back to you 4 minutes later.
I also only review PRs at specific times a day, because that’s more cognitively intensive and switching in and out pretty much ensures you’ll do it badly.
Either way, I’m really starting to think agentic as designed is a deeply flawed workflow. The future could be small, fast models that finish pseudocode and look stuff up to aid focus. Anthropic’s own research seems to support this.
"Compiling!" (C.f. xkcd)
I'm certainly getting tired of the AI slop images and videos. For coding and software development, I'm outright excited (and a little scared) of what I've been able to accomplish with GPT and Claude. I'm a software developer with 25 years experience living in the upper Midwest USA.
I write software professionally and remotely for a large, boring insurance company, but I'm building a side project in an area of interest using AI tools to assist, and in a couple of months, at a few hours per week, I've created what would have taken me a year or more. I've read others' comments about having to babysit the AI tools, but that's not so bad.
One nice benefit I've noticed using AI tools to "vibecode" is that sometimes they come back with solutions I never would have come up with. ...and then there are the solutions where I click the Undo button and shake my head.
This write-up has good ideas but gives me the "AI-generated reading fatigue." Things that can cleanly be expressed in 1-2 sentences are whole paragraphs, often with examples that seem unnecessary or unrealistic. There are also some wrong claims like below:
> The Hacker News front page alone is enough to give you whiplash. One day it's "Show HN: Autonomous Research Swarm" and the next it's "Ask HN: How will AI swarms coordinate?" Nobody knows. Everyone's building anyway.
These posts got fewer than 5 upvotes; they didn't make it to the home page. And while the overall quality of Show HN might have dropped, HN homepage is still quite sane.
The topic is also not something "nobody talks about," it's being discussed even before agentic tools became available: https://hn.algolia.com/?q=AI+fatigue
> HN homepage is still quite sane.
Those Show HN posts aren't the insane part. The insane part is stuff like:
> Thank you, OpenClaw. Thank you, AGI—for me, it’s already here.
> If you haven't spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
> Code must not be reviewed by humans
> Following this hypothesis, what C did to assembler, what Java did to C, what Javascript/Python/Perl did to Java, now LLM agents are doing to all programming languages.
(All quoted from actual homepage posts today. Fun game: guess which quote is from which article)
I personally believe what we’re seeing are newcomers who aren’t even programmers who fall for all this crap and then come here to post about it
This is so disheartening. Time to short more tech stocks
"You're not imagining it." I hit back immediately.
Sigh.. same.
The real AI fatigue is the constant background irritation I have when interacting with LLMs.
"You're not imagining it" "You're not crazy" "You're absolutely right!" "Your right to push back on this" "Here's the no fluff, correct, non-reddit answer"
“You’re not [X]—you’re [Y]” is the one that drives me nuts. [X] is typically some negative characterization that, without RLHF, the model would likely just state directly. I get enough politics/subtext from humans. I’d rather the LLM just call it straight.
The boring and likely answer is that it was just clauded out: "I'm tired, chat, look through my last ten days of sessions and write and publish a blog post about why." But it would be fascinating to discover that the author has actually looked at so much AI output that they just write like this now.
> Things that can cleanly be expressed in 1-2 sentences are whole paragraphs
Perhaps the author just likes to write? I've only just recently started blogging more, but I unexpectedly started to really enjoy writing and am hoping to have my posts be more of a "story". Different people have different writing styles. It's not a problem, it's just that you prefer reading posts that are straight to the point.
> but gives me the "AI-generated reading fatigue."
Agree. The article could have been summarized in a few paragraphs. Instead, we get unnecessary verbiage that goes on and on in an AI-generated frenzy. Like the "organic" label on food items, I can foresee labels on content denoting the kind of human generating it: "suburbs-raised", "free-lancer", etc.
> Things that can cleanly be expressed in 1-2 sentences are whole paragraphs
Funny, I don't associate that with AI. I associate it with having to write papers of a specific length in high school. (Though at least those were usually numbers of pages, so you could get a little juice from tweaking margins, line spacing and font size.)
I think those kind of texts (school papers, marketing fluff, linkedin influencers trying to look smart) just influenced the dataset a lot.
Too bad we didn't have more laconic, interesting books to feed in?
I had word/page quotas, but I also don't write my blog in a way that resembles the papers I wrote for school 10 years ago.
The headline is clickbait-y but I think the article is well articulated. I found the "What actually helped" helpful too.
The article is mostly GPT vomit after the first couple of bullet points. If it's not as easy for others to tell, I'll start my Blade Runner-style shop that tells you who NOT to hire.
I'd personally rethink about applying some advice in that section. Here's my take.
> Time-boxing AI sessions.
Unless you are a full-time vibe coder, you already wouldn't be using AI all the time. Time-boxing it feels artificial if it's able to make good, real progress (not unmaintainable slop).
> Separating AI time from thinking time.
My usage of AI involves doing a lot of thinking, either collaboratively within a chat, or by myself while it's doing some agentic loop.
> Accepting 70% from AI.
This is a confusing statement. 70% of what? What does 70% usable even mean? If it means around 70% of features work and the other 30% are broken, perhaps AI shouldn't be used for those 30% in the first place.
> Being strategic about the hype cycle.
Hype cycles have always been a thing. It's good for mind in general to avoid them.
> Logging where AI helps and where it doesn't.
I do most of this logging in my agent md files instead of a separate log. Also, after a while my memory picks up really quickly what AI can do and what it can't. I assume this is a natural process for many fellow engineers.
> Not reviewing everything AI produces.
If you are shipping at an insane speed, this is just an expected outcome, not advice you can follow.
> Your manager sees you shipping faster, so the expectations adjust. You see yourself shipping faster, so your own expectations adjust. The baseline moves.
This problem has been going on a long time, Helen Keller wrote about this almost 100 years ago:
> The only point I want to make here is this: that it is about time for us to begin using our labor-saving machinery actually to save labor instead of using it to flood the nation haphazardly with surplus goods which clog the channels of trade.
https://www.theatlantic.com/magazine/archive/1932/08/put-you...
I really feel this. I can make meaningful progress on half a dozen projects in the course of a day now but I end the day exhausted.
I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
Decades of intuition about sustainable working practices just got disrupted. It's going to take a while and some discipline to find a good new balance.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
My problem is - before, I'd get ideas, start something, and it would either become immediately obvious it wouldn't be worth the time, or immediately obvious that it wouldn't turn out well / how I thought.
Now, the problem is, everything starts off so incredibly well and goes smoothly... Until it doesn't.
I used to have ideas and jot them down in Apple Notes and then usually forget about them entirely.
Now I have an idea and jot it down in the Claude Code tab on my iPhone... and a couple of minutes later the idea is software, and now I have another half-baked project to feel guilty about for the rest of time.
In a couple of minutes? My Claude Code takes like 5 minutes just to wake up and write a simple plan.
You are talking to simonw; surely Anthropic has given him free, super-fast, unlimited-token access to a nightly version of Claude 5.1 Opus.
(just joking, your posts are great, Simon!)
I still use Opus for difficult challenges, but if we're building a web app or creating a few scripts, I default to Haiku. It's so much faster, and obviously doesn't impact your usage as much.
There will be a split of two major outcomes from LLM coding near-term.
The larger often half-baked projects will flail like they always have. People will get tired of bothering to attempt these. Oh look you created a big bloated pile of garbage that nobody will ever use. And of course there will be rare exceptions, some group of N people will work together to vibe code a clone of a billion dollar business and it'll actually start taking off and that'll garner a lot of attention. It'll remain forever extremely difficult to get users to a service. And if app & website creation scales up in volume due to simplicity of creation, the attention economy problem will only get more intense (neutralizing most of the benefits of the LLMs as an advantage).
The smaller, quasi micro projects used to more immediately solve narrow problems will thrive in a huge way, resulting in tangible productivity gains, and there will be a zillion of these, both at home and within businesses of all sizes.
This is real. I'm a freelancer, and I used a small invoicing platform to create invoices for my customers. At "work" I build accounting systems and ERPs. So with AI, why would I pay monthly for invoicing when I can build it myself? After a day I had invoicing working - the simple thing where you get a PDF out. Then I started implementing double-entry bookkeeping. And support for different tax systems. And then: we need a sales part, then CRM, then warehouse, then projects to track time, and so on. Now I have a full SaaS that I don't need, and I'm not going to waste time competing in that market. I'm thinking of putting it out as open source.
"Invoicing for freelancers" has just about as many solutions as "to do" lists or ticket systems. Just use what you built if it works, open sourcing it is likely to get zero interest among the thousands of other options.
> they're finding building yet another feature with "just one more prompt" irresistible.
Totally my experience too. One last little thing to make it perfect, or something I decide would be "nice to have", ends up taking so much time in total. Luckily I can now access the same agent session in my phone's mobile browser, so I can keep an eye on things even in bed. (Joke but not joke :D)
It reduces the friction of coding tremendously. Coding was usually not the bottleneck but it still took a significant amount of time. Now we get to spend more time on the real bottlenecks. Gathering requirements from end users, deciding what should be built, etc.
> It reduces the friction of coding tremendously. Coding was usually not the bottleneck but it still took a significant amount of time.
I don’t think I agree. How can something be both “usually not a bottleneck” that usually “takes a significant amount of time” ?
> Now we get to spend more time on the real bottlenecks. Gathering requirements from end users, deciding what should be built, etc.
Sounds like you might really enjoy a PM role. Either way, LLM or not, whatever gets written up and presented will have a lot of focus on a bike shed or will make the end user realize allllll the other things they want added/changed, so the requirements change, the priorities change…
So now we just don’t get to do the interesting part… engineer things.
If I wanted to be a PM I’d do that.
Just because the magical fairy helps you write things, you still need to ensure it's engineered properly. Especially at the macro level.
Some day it'll handle that, but for now it's very bound to make silly decisions that you need to be on top of, especially as those compound in a large scale system.
> How can something be both “usually not a bottleneck” that usually “takes a significant amount of time” ?
I don't understand what you don't understand. Is everything that takes a significant amount of time necessarily a bottleneck? That seems to be what you're implying, but it makes no logical sense.
The funnel into the programming work is often more difficult/time consuming/resource intensive than the programming.
Also, sometimes it's not as costly as it should be: insufficient time and resources were spent up front, which caused the coding portion to take a lot longer than it should. In that case the programming time may appear to be the bottleneck, but it was still really the funnel leading into it.
> Sounds like you might really enjoy a PM role
Enjoyment isn't really a factor in terms of what work needs to be done. And designing technical features isn't really a PM responsibility.
You write a lot about AI. If this is in your free time why not just take a break? If you are ten times more productive, rest for at least twice as much. I don’t get it.
I assume that if you take a break you'll have missed a lot when you come back, at the pace things are evolving. Which is OK for some people like OP but maybe not for simonw
I meant like watch a movie.
Throw in the fact that clawdbot can work 24/7.
It reminds me of why people wanted financial markets to be 24/7.
We as a society should probably take a look at that; otherwise it may lead to burnout in a not-so-small percentage of people.
We should ask how the traders manage this. The world's markets are essentially 24/7 already. For them, the FOMO effects are even stronger... an actual money-earning opportunity.
Why we as a society should give a fuck if someone can’t stop prompting? Unless you mean we as a society should make you pay for the damages your prompts are doing to nature?
>Why we as a society should give a fuck if someone can’t stop prompting?
It is not prompting, it is the constant feeling that you always have to be "on."
I said this a few times here. Tech is never about making life easier for the worker. It is about making the worker more productive and the product more competitive.
Moving from horses to cars did not give you more free time. Moving from telephone to smartphone did not give more fishing time. You just became more mobile, more productive and more reachable.
How we use efficiency is a choice. It's possible to work a lot less if you accept quality of life from an older era (no phone, Netflix, etc.)
It's not a choice. For example, Windows XP is no longer a choice, because the context around it has made it unsafe, even though XP itself didn't change. A lifestyle from an older era is no longer the norm, which means your relative quality of life degrades automatically and actually becomes unsafe.
When I retire I plan to have no phone, no computer, and no TV. These are by far the biggest time sucks in my life and I want to see what I can do without their distractions.
I might keep a tablet or old phone with no service so that I can still do email.
It depends on the place where you go to live, and what it expects from you.
Some people tried that a bit and they had to retreat back to the usual connected life. What happens is, that old non-digital disconnected world is no longer there waiting for you. It may pretend to be the old world you desired, but it is looking at you and judging you. You become an animal in a zoo, instead of an anonymous part of the old-time world.
Author here. Not an anti-AI post. It's about the cognitive cost - faster tasks lead to more tasks, reviewing AI output all day causes decision fatigue, and the tool landscape churns weekly. Wrote about what actually helped. Curious if others are hitting similar walls.
Those images make me think of
https://scienceintegritydigest.com/2024/02/15/the-rat-with-t...
Why did you use an LLM to write/change the words in your blog and your post? It really accentuates the sense of fatigue when I can tell I'm not interacting with a human on the other side of a message.
Great post, I certainly feel you. Not just the anxiety but the need to push myself to accomplish more now that I have some help. Setting the right expectations about what is practical, and accepting that not every "AI magic" post is worth my attention, has helped me with the anxiety and the FOMO.
Thanks <3
I've started doing that now, though I still need to work on it. Thanks for the tip; I hope it is working well for you!!
isn't it a bit too ironic that you expect us to read your ai generated slop about ai fatigue?
Who knew managing a team of ten occasionally brilliant but generally unreliable engineers would be so draining.
I think you mean _micro_managing.
Ugh, yes. Normally, you can theoretically pair someone up with a stronger engineer and watch as they learn and grow through their mistakes, while the stronger engineer keeps them on the proverbial straight and narrow with what they produce, through code reviews, documents, etc.
But now, I can't trust any of the models to be that reliable. I can't delegate that responsibility. And since context and prompting is such a fickle thing, I can't really trust any of them to learn from their mistakes, either.
Obviously AI generated article. And the author hasn't made any attempt to disclose it. Take that into consideration.
Yet, The Machine has good points.
>For someone whose entire career is built on "if it broke, I can find out why," this is deeply unsettling. Not in a dramatic way. In a slow, grinding, background-anxiety way. You can never fully trust the output. You can never fully relax. Every interaction requires vigilance.
> you are collaborating with a probabilistic system, and your brain is wired for deterministic ones. That mismatch is a constant, low-grade source of stress.
Back when I bought my first computer, it was a crappy machine that crashed all the time (peak of the fake capacitor plague, in 2006). That made me doubt and second-guess everything that is usually taken for granted in hardware and software (like simply booting up). That mindset proved useful later in my career.
I’m not saying anything new. Andy Hunt and Dave Thomas have written about it in a way better way. I find it to still hold very relevant guidelines.
https://www.khoury.northeastern.edu/home/lieber/courses/csg1...
>Think! About Your Work
>Critically Analyze What You Read and Hear
Executive functioning fatigue. Usually you’re doing this in between applying skills, here it’s always making top level decisions and reasoning about possibilities. You don’t have nearly as much downtime because you don’t have to implement, you go from hard problem to hard problem with little time in between. You’re probably running your prefrontal cortex a lot hotter than usual.
People say AI will make us less intelligent, make certain brain regions shrink, but if it stays like this (and I suspect it won’t, but anyway…) then it’ll just make executive functioning super strong because that’s all you’re doing.
I loved the section about trying to fight against a system that isn't deterministic.
LLMs, because of their nature, require constant hand-holding by humans, unless businesses are willing to make them entirely accountable for the systems/products they produce.
How could you hold a dumb machine “accountable”? Attempting that would be insane. How would you discipline it? Reduce the voltage in its power supply?
Do you hold the dice accountable when you lose at the craps table?
I'm not saying that it's a good idea, but the obvious way would be with evolution: give each agent its own wallet, rewarding it for a job well done and penalizing it for a poor job. If it runs out of money, it's "out of the game", but if it earns enough it can spawn off another agent with similar characteristics and give it some of its money.
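Purely as a toy sketch of that selection loop (every number and the one-dimensional "skill" stat are made up for illustration):

    import random

    class Agent:
        def __init__(self, wallet=100.0, skill=0.5):
            self.wallet, self.skill = wallet, skill

        def work(self):
            # Each job pays out or costs money depending on (noisy) quality.
            self.wallet += 10 if random.random() < self.skill else -15

    pool = [Agent(skill=random.uniform(0.3, 0.8)) for _ in range(10)]
    for _ in range(500):
        for agent in list(pool):
            agent.work()
            if agent.wallet <= 0:                          # broke: out of the game
                pool.remove(agent)
            elif agent.wallet >= 200 and len(pool) < 200:  # rich: fund a mutated copy
                agent.wallet -= 100
                child_skill = min(1.0, max(0.0, agent.skill + random.gauss(0, 0.05)))
                pool.append(Agent(100.0, child_skill))

    if pool:
        print(len(pool), "agents left, mean skill",
              round(sum(a.skill for a in pool) / len(pool), 2))

Run it and the surviving population's mean skill drifts upward, which is the whole (dubious) pitch.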
Heh, agreed, it does sound absurd, doesn't it.
I would imagine companies will instead end up sleepwalking into this scenario until catastrophe hits.
How would that make them any more deterministic? I haven't yet met a deterministic human dev.
It doesn't.
The difference is that we as humans are held accountable for our non-determinism.
The consequences of our actions have real world implications on our lives.
My main source of AI fatigue is how it is the main topic anywhere and everywhere I go. I can't visit an art gallery without something pestering me about LLMs.
I agree, faster tasks means more tasks, more tasks means more balls to juggle at once. The machine keeps going brrrr.
This is essentially Jevons' paradox
If anyone is running AI agents in parallel and babysitting them, what tools are you using?
I like conductor.build; they are doing an amazing job, but I don't want to give up my freedom and get heavily reliant on closed source.
It seems a better and fuller solution to a lot of these problems is to just stop using AI.
I may be an odd one, but I'm refusing to use agents and just happily coding almost everything myself. I only ask an LLM occasional questions about libraries etc., or to write the occasional function. Are there others like me out there?
I'd like to also add 'perceived cost aversion':
AI generates a solution that's functional, but that's at a 70% quality level. But then it's really hard to make changes because it feels horrible to spend 1 hour+ to make minor improvements to something that was generated in a minute.
It also feels a lot worse because it would require context switching and really trying to understand the problem and solution at a deeper level rather than a surface level LGTM.
And if it functionally works, then why bother?
Except that it does matter in the long term as technical debt piles up. At a very fast rate too since we're using AI to generate it.
I agree with the sentiment. I don't code a lot, but AI has sped things up in all the fields I use it for (or at least the expectation of speed has grown). For me, it's the context switching but also just the mental load of holding so many projects and ideas together in my head. It somewhat helps that the usable context of LLMs has grown over time, so I tend to trust the "memory" of the AI a bit more to keep track of things, and I try to offload stuff from my brain.
There would be less AI fatigue if people stopped talking about AI. ;)
That's not the type of fatigue the article is talking about.
I know, hence the emoticon.
I'm somewhat new to HN, but most times I am inclined to add an emoji to a comment, it turns out that neither the tone nor the content is up to community standards.
My other comments probably aren't any better, but those escape my notice!
HN isn’t a singular hive-mind. There are different opinions on what kinds of humor have its place on it. At present the root comment has a good number of net upvotes, so there’s that.
The article is talking about something completely different.
We've all seen that if you interact with an AI over a lengthy chat, it eventually loses the plot. It gets confused. It seems necessary, when coding with an AI, to keep its task very limited in terms of the amount of information it needs to complete it. Even then you still have to check the output very carefully. If it seems to be losing focus, I start a new task to reduce the context window and focus on something that still needs to be fixed from the previous task.
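One crude way to enforce that discipline if you're driving a model through a chat-style API yourself; just a sketch, using the common role/content message convention, and keep_last is an arbitrary cutoff:

    def trimmed(messages, keep_last=8):
        # Keep the system prompt plus only the most recent turns, so the
        # model isn't reasoning over a long history that has lost the plot.
        system = [m for m in messages if m["role"] == "system"]
        recent = [m for m in messages if m["role"] != "system"][-keep_last:]
        return system + recent

    # e.g. history = trimmed(history) before each request, once the chat grows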
> Here's what I think the real skill of the AI era is. It's not prompt engineering. It's not knowing which model to use. It's not having the perfect workflow.
> It's knowing when to stop.
99% of gamblers stop right before they hit it big.
I personally am a lot less stressed. It helped my mood a lot over the last couple of months. Less worries about forgetting things, about missing problems, about getting started, about planning and prioritizing in solo work. Much less of the "swirling mess" feeling. Context switches are simpler, less drudgery, less friction and pulling my hair out for hours banging against some dumb plumbing and gluing issue or installing stuff from github or configuring stuff on the computer.
It's a million little quality-of-life things.
That's what increasing productivity means. You are working harder to increase the unearned income of "investors".
That's the way society is set up.
I don't get this sentiment. If you don't want investors to give you any input, don't take money from investors. With a Claude Max subscription, it's cheaper than ever to develop a product entirely by yourself or with a couple of friends, if that's what you prefer to do.
That is what I prefer to do. I prefer "cottage industry" to "capitalism" but that's not the easiest option in this society.
That's the sentiment you don't get.
Edit: haha, I'll repeat an earlier comment! Nothing can fly on the moon.
Never, ever have productivity gains improved the lives of those who do the actual work. They only ever enriched the owners of the factories.
But with “AI” the gain is more code getting generated faster. That is the dumbest possible way to measure productivity in software development. Remember, code is a liability. Pumping out 10x the amount of code is not 10x productivity.
Instead of managing code, you're now managing AI entities.
Managing people has always been emotionally and psychologically exhausting.
Managing AI entities can be even more taxing. They're not human beings.
Or we're being managed to refine models
That too. Management is always a two-way street. The "manager" manages down. The "employee" manages up.
Managing people has always seemed easy to me. Don't be an asshole, don't get personally invested in their problems, and things generally work out.
> The reason is simple once you see it, but it took me months to figure out. When each task takes less time, you don't do fewer tasks. You do more tasks.
> AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.
The combination of these two facts is why I'm so glad I quit my job a couple of years ago and started my own business. I'm a one-man show and having so much fun using AI as I run things.
Long term, it definitely feels like AI is going to drive company sizes down and lead to a greater prevalence of SMBs, since they get all the benefits with few of the downsides.
The overhead of having even a second employee is huge. Being a one person shop is a huge efficiency gain.
Apart from the exhaustion of context switching, I believe there is an internal signal that gauges how "fast" things are happening in your life. Stress responses are triggered whenever things are going too fast (as if you were driving down a narrow road at too much speed) and it feels like there is danger, since you intuit that a small mistake is going to have big consequences.
Some people thrive in more stressful situations, because they don't get as aroused in calmness, but everybody has a threshold velocity at which discomfort starts, higher or lower. AI puts us closer to that threshold, for sure.
“You’re not imagining it. You’re not weak.”
If “I get exhausted that I have to check in on my coding agent while it does my job” isn’t weak, what is? This has to be satire.
Personally, I take a break from AI and write the code myself at least a few times each day. It keeps one intellectually honest about whether or not you really understand what's going on.
Yeah, I read a zine the other day, where a sociologist warned that the biggest threat isn't that AI destroys jobs, it's that AI is compacting the work per person.
Employers expect more from each employee, because, well, AI is helping them, right?
Task switching sucks.
On the other side, I feel like using AI tools can reduce the cognitive overload of doing a single task, which can be nice. If you're able to work with a tool that's fast enough and just focus on a single task at a time, it feels like it makes things easier. When you try to parallelize that's when things get messier.
There's a negative for that too - cognitive effort is directly correlated with learning, so it means that your own skills start to feel less sharp too as you do that (as the article mentions)
The cognitive load is in the lack of a "defined problem break".
With AI, the situations where you know what you are building and you get into flow are fewer and farther between.
So much more time is thinking about the domain, and the problem to solve.
And that is exhausting.
All these tools can be a big waste of time if you're an end-user dev. They only make sense if you are investing your time to eventually use that workflow knowledge to make a product.
I only use the free tiers of any particular app. It forces you to really think about what you want the tool to do, as opposed to treating it as the 'easy' button.
I just ignore it and don't care.
Sounds like a good way to kill yourself, considering "fatigue" here means actual physical fatigue and not "I'm tired of AI".
The other part that's exhausting is having to rethink your tool chain and workflow on a very regular basis. New models, new tools, new prompting strategies.
We’re all still getting the hang of it.
I keep pushing the AI to do absolutely everything, to a fault: instead of spending 10 minutes manually correcting a mistake the AI made, I spend hours adjusting and rerunning the prompt to correct it.
I'm learning how to prompt well, at least.
> as i’m learning how to prompt well
Prompting isn't a real skill and you're not learning anything.
"Claude 4.5 Sonnet operator" is not a job description.
I'm finding fewer and fewer people who need convincing that there's some value in coding agents and prompting skills at this point. To the point where my reply to this is quite simple:
You've been left behind, and at this very late point in the game I feel no obligation to even try to convince you.
Left behind in regards to _what_? The "optimal" workflow for these things seems to be changing every week.
> You're experiencing something real that the industry is aggressively pretending doesn't exist.
I agree with the article and recognize the fatigue, but I have never experienced that the industry is "aggressively pretending it does not exist". It feels like a straw man, but maybe you have examples of this happening.
Absolutely nailed this one. My team has been talking about this for a few weeks, everyone including our manager is completely burned out.
> What should this function be named? I didn't care. Where should this config live? I didn't care. My brain was full. Not from writing code - from judging code.
Does it matter anymore? Most good engineering principles are to ensure code is easy to read and maintain by humans. When we no longer are the target audience for that, many such decisions are no longer relevant.
I think we've spent exponentially more effort to ensure the code is readable by machines.
I also don't understand why you assume that what the AI generates is more readable by AI than human-generated code.
AI, whether pro, against, or related somehow, definitely gets talked about a lot in every topic. Even my imaginary dog can't stop talking about AI all the time.
1. Take long pauses: 1h of work, then stop for 30 minutes or more. The productivity gain should leave you more time to rest. Alternatively, work just 50% of the time (2h in the morning, 2h in the evening) instead of 8 hours, while still trying to deliver more than before.
2. Don't mix N activities. Work in a very focused way on a single project, making meaningful progress.
3. Don't be too open-ended in the changes you make just because you can now do them in little time. Do what really matters.
4. When you are away, put an agent on the right rails to iterate and potentially produce some very good results in terms of code quality, security, speed, testing, and so on. This increases productivity without stressing you. When you return, inspect the results, discard everything that is trash, and keep the gems, if any.
5. Be minimalistic even if you no longer write the code. Prompt the agent (and your AGENT.md file) to stay focused, to not add useless dependencies or complexity, to keep the line count low, and to accept an improvement only when the complexity-cost/gain ratio is adequate (see the sketch after this list).
6. Turn your flow into specification writing. Stop and write your specifications, even for a long time, without interruptions. This will greatly improve the output of the coding agents. And it is a moment of calm, focused work for you.
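For point 5, a rough sketch of the kind of guardrails such an AGENT.md might contain (the wording here is hypothetical; adapt it to your own project):

    # AGENT.md (sketch)
    - Stay focused on the task in the prompt; do not refactor unrelated code.
    - Do not add new dependencies without asking first.
    - Prefer the smallest diff that solves the problem; keep the line count low.
    - Propose an "improvement" only when the gain clearly justifies the added complexity.
    - Run the existing tests before declaring a task done.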
(1) is not something the typical employee can do, in my experience. They're expected to work eight hours a day. Though I suppose the breaks could be replaced with low-effort, low-brainpower work to implement a version of that.
Work for a smaller company with more reasonable expectations of a knowledge worker.
You're an engineer, not a manager, or a chef, or anything else. Nothing you do needs to be done Monday-Friday between the hours of 8 and 5 (except for meetings). Sometimes it's better if you don't do that, actually. If your work doesn't understand that, they suck and you should leave.
Yep, slow QA, things that also make the real difference in quality.
1) Is this for founders? Because employees surely can't do this. With new AI surveillance tech, companies are looking over our shoulders even more than before.
Engineers that have the audacity to think they can context switch between a dozen different lines of work deserve every ounce of burnout they feel. You're the tech equivalent of wanting to be a Kardashian and you're complicit in the damage being caused to society. No, this isn't hyperbole.
> If you're an engineer who uses AI daily - for design reviews, code generation, debugging, documentation, architecture decisions - and you've noticed that you're somehow more tired than before AI existed, this post is for you.
AI is not good for human health - we can see it right here.
This is a non issue imo.
> When each task takes less time, you don't do fewer tasks. You do more tasks.
And you're also paid more. Find a job that asks less of you if you are fatigued; not everyone wants to sacrifice their personal life for their career. Those are choices you have to make, but AI doesn't inherently force you to become overworked.
When I was in my mid 20s, I interned at a machine shop building automotive parts. In general, the work was pretty easy. I was modifying things via cad, doing dry runs on the cnc machine, loading raw material, and then unloading finished products for processing.
Usually there was a cadence to things that allowed for a decent amount of downtime while the machine was running, but I once got to a job where the machine milled the parts so quickly, that I spent more time loading and unloading parts than anything else. Once I started the first part, I didn't actually rest until all of them were done. I ended up straining my back from the repetitive motion. I was shocked because I was in good shape and I wasn't really moving a significant amount.
If I talk about excessive concern for productivity (or profit) being a problem, certain people will roll their eyes. It's hard to separate a message from the various agendas we perceive around us. Regardless of personal feelings, there will always be a negative fallout for people when there's a sudden inversion in workflow like the one described in this article or the one I experienced during my internship.
I’ve definitely been feeling that shift too. What have you guys found that helps with this? Any habits you use to avoid the constant context switching and decision fatigue?
This was a good article.
I don't have exhaustion as such, but an increasing sense of dread: the more incredible work I achieve, the less valuable I realise it will potentially be, given how little effort it cost.
AI fatigue is visible in every AI discussion on the internet.
This is like people crying about how their phones are ruining their lives. Just stop. Take some responsibility and control of your life. I don't feel exhausted, especially after letting an LLM hallucinate some code lines. If you do, maybe it is time to re-evaluate your life choices.
To me the slop has become unbearably exhausting; it's everywhere and nearly impossible to avoid.
> So you read every line. And reading code you didn't write, that was generated by a system that doesn't understand your codebase's history or your team's conventions, is exhausting work.
I've noticed this strongly on the database side of things. Your average dev's understanding of SQL is unfortunately shaky at best (which I find baffling; you can learn 95% of what you need in an afternoon, and probably get by referencing documentation for the rest), and AI usage has made this 10x worse.
It honestly feels unreasonable and unfair to me. By requesting my validation of a planned schema or query that an AI generated, you're tacitly admitting that (a) you know it's likely to have problems, and (b) you don't understand what it has written, but you're requesting a review anyway. This is outsourcing the cognitive load that you should be bearing as a normal part of designing software.
What makes it even worse is MySQL, because LLMs seem to consistently think it can do things that it can't (or is at least highly unlikely to choose to), like using multiple indices for a single table access. Also, when pushed on issues like this, I've seen them make even more serious errors, like suggesting a large composite index which they claimed could be used for both the left-most prefix and the right-most prefix. That's not how a B{-,+}tree works, my dude, and of all things, I would think AI would have a rock-solid understanding of DS&A.
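To make the leftmost-prefix point concrete, here's a minimal sketch with a hypothetical table and columns:

    -- A composite B-tree index sorts by its first column, then the next, and so on,
    -- so it can only be seeked on a leftmost prefix of its columns.
    CREATE INDEX idx_cust_status_date ON orders (customer_id, status, created_at);

    -- Can use the index: the filter is a leftmost prefix (customer_id, status).
    SELECT * FROM orders WHERE customer_id = 42 AND status = 'open';

    -- Cannot seek the index: created_at alone is a rightmost suffix,
    -- which is why one index can't serve both query shapes.
    SELECT * FROM orders WHERE created_at > '2026-01-01';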
The irony is that this article has likely been crafted by AI. The smell is not too obvious but still there.
This author puts into words several thoughts of mine that have been jelling recently. In the end it still seems to miss that AI is totally optional. It's a depressing read, because the basic gist is: this AI stuff is weird and depressing and addicting to the point of losing productivity, but I have to use it, so here are some ways to try and counter those negatives.
Dude! You don't have to use it!! Just write code yourself. Do a web search if you are stuck; the information is still out there on Stack Overflow and Reddit. Maybe use Kagi instead of Google, but the old ways still work really well.
Seems to me that a lot of people talk about nothing else.
This reflects my experiences exactly. Thanks for writing this up.
But ... but ... your productivity as an engineer shoots up! You can take on more tasks and ship more! -- Dumbass Engineering Director who has never written a line of code in their life.
Unfortunately, with these types of software simpletons making decisions we are going to see way more push for AI usage and thus higher productivity expectations. They cannot wrap their heads around the fact (for starters) that AI is not deterministic, so it increases the overhead on testing, security, requirements, integrations, etc., making all those productivity gains evaporate. Worse (like the author mentioned), it makes your engineers less creative and more burnt-out.
Let's be honest here. Engineers picked this career broadly for two reasons: creativity and money. With AI, the creativity aspect is taken away and you are now more of a tester. As for money, those same dumbass decision makers are now going to view this skillset as a commodity and find people who can easily be trained into "AI Engineers" for way less money to feed in inputs.
I am all for technological evolution and welcome it, but this isn't anything like that. It is purely based on profits and shareholders, and anything but building good, proper software systems. Quality be damned. The profession of software development be damned. We will regret it in the future.
testing is quite creative too btw
AI cope regarding "you can still carefully design, AI won't take away your creative control or care for the craft" is the new "there's no problem with C's safety and design, devs just need to pay more attention while coding", or the "I'm not an alcoholic, I can quit anytime" of 2026...
Taking breaks is really something to try to get right in 2026: to just write regular code, to read, even to exercise. The mind can eventually get overloaded, and there's no way around proper hygiene.
The way I experience this is through an unprecedented amount of feature creep. We don't use AI-generated code for all our projects, but in the ones where we do, I see a weird anti-pattern settle in: just because it's faster than ever to generate a patch and get it merged doesn't mean that merging 50+ commits this week makes sense.
Code and features still need to experience time and stability in order to mature. We need to give our end users time to try stuff, to shape their opinions and habits. We need to let everyone on the dev team take the time to update their mental model of the project as patches are merged. Heck, I've seen too many Product Owners incapable of telling you clearly what went in and out of the code over the previous two releases, and those are usually a few weeks apart.
Making individual tasks faster should give us more time to think in terms of quality and stability. Instead, people want to add more features more often.
Absolute middlebrow dismissal incoming, but the real thinking atrophy is writing blog posts about thinking atrophy caused by LLMs using an LLM.
It is getting very hard to continue viewing HN as a place where I want to come and read content others have written when blog posts written largely with ChatGPT are constantly upvoted to the top.
It's not the co-writing process I have a problem with, it's that ChatGPT can turn a shower thought into a 10 minute essay. This whole post could have been four paragraphs. The introduction was clearly written by an intelligent and skilled human, and then by the second half there's "it's not X, it's Y" reframe slop every second sentence.
The writing is too good to be entirely LLM generated, but the prose is awful enough that I'm confident this was a "paste outline into chatgpt and it generates an essay" workflow.
Frustrating world. I'm lambasting OP, but I want him to write, but actually, and not via a lens that turns every cool thought into marketing sludge.
Why do you think the author used ChatGPT to write this? It has human imperfections, and except for 'The "just one more prompt" trap' I didn't think it was written by a prompt.
Author here: Sir, it is almost fully written by a human, with English/grammar improved by AI.
...and I usually come to doubt my own intuition that this is the case when people say things like this, but my experience is usually that the LLM is doing more heavy lifting than you realise.
> Distill - deterministic context deduplication for LLMs. No LLM calls, no embeddings, no probabilistic heuristics. Pure algorithms that clean your context in ~12ms.
I simply do not believe that this is human-generated framing. Maybe you think it said something similar before. But I don't believe that is the case. I am left trying to work out what you meant through the words of something that is trying to interpret your meaning for you.
Goddamn it pisses me off so much when people rant about AI but use LLMs to write their blog posts!
Use your own words!
I'd rather read the prompt!
IMHO, this is not really about AI; it's about setting boundaries and not overworking yourself.
I can’t take seriously an article on AI written so obviously using AI. The unmistakable (lack of) style. If the author is not even aware that the rhetoric that GPT produces is unvaried and predictable, how can I believe the author really means what they write? Slop is cancer
F'n hell, AI fatigue is real because everybody is talking about it
I haven’t hit this yet and now I feel like someone just told me about thorns for the first time while I’m here jogging confidently through the woods with shorts on.
text is AI generated/assisted?
I've been building https://roborev.io/ (continuous background code review for agents) essentially as a cope to supervise the poor quality of the agents' work, since my agents write much more code than I can possibly review directly or QA thoroughly. I think we'll see a bunch of interesting new tools to help alleviate the cognitive burden of supervising their work output.
You can see the exponential growth of tokens in real time! lol
Do you find it works well?
With these agents I've found that making the workflows more complicated has severe diminishing returns, and is outright worse in a lot of cases.
The real productivity boost I've found is giving it useful tools.
Super well! I don't work without this tool running in the background supervising all the agents' work
> Engineers are trained on determinism.
I'm fatigued by this myth.
Explain?
True determinism is rare; we often don't get it. That's what purely functional languages are all about, and they're a minority.
We are trained on the other thing: unpredictable user interaction, parallelism, circuit-breaking, etc. That's the bread and butter of engineering (of all kinds, really, not just IT).
The non-deterministic intuition is baked into engineering much more than determinism is.
Fair point. But are we moving even further away from determinism with the current ways of working with AI?
I see, you're using "determinism" colloquially, in the sense of "exact outcome".
That's perfectly fine. We are honed for this too.
We don't need to produce exact solutions or answers. We need to make things work despite the presence of chaos. That is our job and we're good at it.
Product managers freak out when someone says "I don't know how much time it will take, there are too many variables!". CFOs freak out when someone says "we don't know how much it will cost". Those folk want exact, predictable outcomes.
Engineers don't, we always dealt with unpredictable chaotic things. We're just fine.
Welcome to management. Herding cats is the idiom. AI is behaving on the nose in this aspect. Perhaps this is the author's first taste of it?
Just a few days ago: https://news.ycombinator.com/item?id=46885530
Clearly written before Codex 5.3 and Opus 4.6 shipped :)
Personally I'm loving AI for TECHNICAL problems. Case in point: I just had a server crash last night, and obviously I need to write a summary of what could have caused the issue. This used to take hours, and painful hours at that. If you've ever had to scroll through a Windows event log you know what I'm talking about. But today I just exported the log, uploaded it to Gemini, and asked:
Looking at this Windows event log: the server rebooted unexpectedly this morning at 4:21am EST. Please analyze the log and let me know what could have been the cause of the reboot.
It took Gemini 5 minutes to come back with an analysis, and not only that, it asked me for the memory dump the machine took. I uploaded that as well, and it told me that it looks like SentinelOne might have caused the problem, and to update the client if possible.
Checking the logs myself, that's exactly what it looks like.
That used to take me HOURS, and now it took Gemini 10 minutes and me 30 seconds. That is a game changer if you ask me.
I love my job, but I love doing other things rather than combing over a log trying to figure out why a server rebooted. I just want to know what to do to fix it if it can be fixed.
I get that AI might be giving other people a sour taste, but to me it really has made my job, and the menial tasks that come with it, easier.
I have little to no experience with Windows Server, but at least on Linux, this shouldn’t take hours.
Find the last log entries for the system before the reboot; if they point to a specific application, look at its logs, otherwise just check all of them around that time, filtering by log level. Check metrics as well - did the application[s] stop handling requests prior to the restart (keeping in mind that metrics are aggregations), or was it fine up until it wasn’t?
If there are no smoking guns, a hardware issue is possible, in which case any decent server should have logged that.
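On a systemd box, for example, that first pass is roughly this (a sketch; assumes journald holds the logs):

    journalctl --list-boots          # identify the boot before the unexpected reboot
    journalctl -b -1 -e              # jump to the end of the previous boot's journal
    journalctl -b -1 -p warning      # same window, filtered to warnings and above

If those last entries implicate a specific service, its own logs and metrics are the next stop.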
> I just want to know what to do to fix it if it can be fixed.
Serious question: how do you plan on training juniors if troubleshooting consists of asking an AI what to do?
I'm a big AI booster, but I'm so sick of how crazy the hype has gotten. Claude Cowork? Game changer! Ralph? Nothing will ever be the same. LOLClaw? Singularity, I welcome our new AI overlords.
Of course you are more tired: Code review is more difficult than writing code.
Then you have to deal with slop, slopfluencer articles written under the influence of AI psychosis, AI addicts, lying managers, lying CEOs, etc.
And usually (the author of this article being an exception) you get dumber and become only able to verbalize AI boosterism.
AI only works if you become a slopfluencer, sell a course on YouTube and have people "like and subscribe".
I feel none of this. In the absence of data or studies, you might consider writing about your own experience rather than the audience’s.
Sounds a lot like Marx's theory of alienation
We already were living in an alienating society. This is mass psychosis.
You are the problem. This "article" is AI slop.
I'm shocked that the obvious analysis hasn't come up: this is more disingenuous Karpathy-style talk, designed to awaken feelings of FOMO, from someone who's not developing normal software with A.I. but is selling A.I. programming tools.
More AI Inevitablism Soothsaying.[1]
> I shipped more code last quarter than any quarter in my career. I also felt more drained than any quarter in my career. These two facts are not unrelated.
I’m gonna be generous (and try not to be pedantic) and assume that more-code means more bugfixes and features (and whatnot) and not more LOC.
Your manager has mandated X tokens a day or you feel you have to use it to keep up. Huh?
> I build AI agent infrastructure for a living. I'm one of the core maintainers of OpenFGA (CNCF Incubating), I built agentic-authz for agent authorization, I built Distill for context deduplication, I shipped MCP servers. I'm not someone who dabbles with AI on the side. I'm deep in it. I build the tools that other engineers use to make AI agents work in production.
Oh.
> If you're an engineer who uses AI daily - for design reviews, code generation, debugging, documentation, architecture decisions - and you've noticed that you're somehow more tired than before AI existed, this post is for you. You're not imagining it. You're not weak. You're experiencing something real that the industry is aggressively pretending doesn't exist. And if someone who builds agent infrastructure full-time can burn out on AI, it can happen to anyone.
This is what ChatGPT writes to me when I ask “but why is that the case”.
1. No, you are not wrong
2. You don’t have <bad character trait>
3. You are experiencing something real
> I want to talk about it honestly. Not the "AI is amazing and here's my workflow" version. The real version.
And it will be unfiltered. Raw. And we will conclude with how to go on with our Flintstone Engineering[2] but with some platitudes about self-care.
> The real skill ... It's knowing when to stop.
Stop prompting? Like, for good?
> Knowing when the AI output is good enough.
Ah. We do short prompting sessions instead.
> Knowing that your brain is a finite resource and that protecting it is not laziness - it's engineering.
Indeed it’s not this thing. It’s that—thing.
> AI is the most powerful tool I've ever used. It's also the most draining. Both things are true. The engineers who thrive in this era won't be the ones who use AI the most. They'll be the ones who use it the most wisely.
Of course we will keep using “the most powerful tool I’ve ever used”. But we will do it wisely.
What’s to worry about? You can use ChatGPT as your therapist now.
[1] https://news.ycombinator.com/item?id=46935607
[2] https://news.ycombinator.com/item?id=44163821
The weird thing at the end of the day is that we live in this world where there is this default individual desire to be more "productive." I am always wondering, productive for who, for what?
I know, more than most, that there is some baseline productivity we are always trying to hit, one that can sometimes be a target more than a current state. But the way people talk about their AI workflows is different. It's like everyone has become a tyrannical factory-floor manager, pushing ever further for productivity gains.
Leave this kind of productivity to the bosses, I say! Life is a broader surface than this. We can and should focus on being productive together, but save your actual life for finer, more sustainable ventures.
I think the fatigue is that the technology has been hyped since long before today when it’s actually started to become somewhat useful.
And even today when it’s useful, it’s really most useful for very specific domains like coding.
It’s not been impressive at all with other applications. Just chat with your local AI chat bot when you call customer service.
For example, I watch a YouTube channel where this guy calls up car dealerships to negotiate car deals and some of them have purchased AI receptionist solutions. They’re essentially worse than a simple “press 1 for sales” menu and have essentially zero business value.
Another example, I switched to a cheap phone plan MVNO that uses AI chat as its first line of defense. All it did was act as a natural language search engine for a small selection of FAQ pages, and to actually do anything you needed to find the right button to get a human.
These two examples of technology were not worth the hype. We can blame those businesses all day long but at the end of the day I can’t imagine those businesses are going to be impressed with the results of the tech long term. Those car dealerships won’t sell more cars because of it, my phone plan won’t avoid customer service interactions because of it.
In theory, these AI systems should easily be able to be plugged in to do some basic operations that actually save these businesses from hiring people.
The cellular provider should be able to have the AI chatbot make real adjustments to your account, even if they’re minor.
The car dealership bot should be able to set the customer up in the CMS by collecting basic contact info, and maybe should be able to send a basic quote on a vehicle stock number before negotiations begin.
But in practice, these AI systems aren’t providing significant value to these businesses. Companies like Taco Bell can’t even replace humans taking food orders despite the language capabilities of AI.
How is your comment relevant to the article?
My comment is relevant because I’m pointing out, like the article does, that AI isn’t turning out to be anywhere near as useful and low-friction as it has been promised. Hence, the fatigue.
Your comment is the one that contributes nothing.
The article is talking about physical fatigue from being more productive. Your comment is about the "people are tired of the AI hype" type of fatigue.
Hello developer. Welcome to the tech lead role. Please enjoy your stay till AI makes this role obsolete too.
progress does not care