I think this is a specific instance of a general mistake, one that various bits of our infrastructure and architecture all but beg us to make, over and over, and which must be resisted. The principle to hold on to is: your development feedback loop must be as tight as possible.
Granted, if you are working on "Windows 12", you won't be building, installing, testing, and deploying that locally. I understand and acknowledge that "as tight as possible" will still sometimes push you into remote services or heavyweight processes that can't be pulled down to your local machine. This is an ideal to strive for, not one that can always be achieved.
However, I see people surrender the ability to work locally much sooner than they should, and implement massively heavyweight processes without stopping to ask whether they could have gotten 90% of the result with a bit more thought and kept the whole thing local and fast.
And even once a system passes the event horizon where it can't feasibly be built, tested, or whatever on anything but a CI system, I see people surrender the ability to at least run the part they're working on locally.
I know it's a bit more work to build sufficient mocks and stubs for expensive remote services so that you can feasibly run things locally, but the payoff for testing and development is just huge, really huge, the sort of huge you should not be ignoring.
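To make "stub" concrete, here's a minimal sketch in Python; the billing service and its interface are hypothetical names invented for illustration, not anything from a real system:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Invoice:
    customer_id: str
    amount_cents: int

class BillingService(Protocol):
    """The interface your code programs against, real or stubbed."""
    def charge(self, invoice: Invoice) -> str: ...

class StubBillingService:
    """In-memory stand-in for the expensive remote billing API."""
    def __init__(self) -> None:
        self.charged: list[Invoice] = []

    def charge(self, invoice: Invoice) -> str:
        # Record the call and hand back a fake receipt ID; no network,
        # no latency, no shared environment to fight over.
        self.charged.append(invoice)
        return f"receipt-{len(self.charged)}"
```

Tests and local development wire in StubBillingService; production wires in the real client. Everything behind that interface is now something you can run, inspect, and break at will on your own machine.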
"Locally" here does not mean "on your local machine" per se, though that is a pretty good case, but more like, in an environment that you have sole access to, where you're not constantly fighting with latency, and where you have full control. Where if you're debugging even a complex orchestration between internal microservices, you have enough power to crank them all up to "don't ever timeout" and attach debuggers to all of them simultaneously, if you want to. Where you can afford to log every message in the system, interrupt any process, run any test, and change any component in the system in any manner necessary for debugging or development without having to coordinate with anyone. The more only the CI system can do by basically mailing it a PR, and the harder it is to convince it to do just the thing you need right now rather than the other 45 minutes of testing it's going to run before running the 10 second test you actually need, the worse your development speed is going to be.
Fortunately, and I don't even know exactly what the ratio of sarcasm to seriousness is here (but I'm definitely non-zero serious), this is probably going to fix itself in the next decade or so... because while paying humans to sit there waiting for CI, getting sidetracked and distracted, is just Humans Doing Work, and after all what else are we paying them for, all of this stuff is going to be murder on AI-centric workflows, which need tight testing cycles to work at their best. You can't afford to have an AI waiting 30 minutes to find out that its PR is syntactically invalid, and you can't afford for the invalid syntax to come back with bad error messages that leave it baffled as to what the actual problem is. If we won't do it for the humans, we'll do it for the AIs. This is definitely not something AI fixes: even though they are far more patient than us and far less prone to distraction in the meantime, since from their "lived experience" they don't experience the time taken for things to build and test, they make it much worse and much more obvious that this is a real problem and not just humans being whiny and refusing to tough it through.
Storment33
Yes, exactly! I am using Nix/Make, and I can prompt Claude to use Nix/Make; it uses these local feedback loops and corrects itself.