From the post it's clear that the shop has a set schedule of services and prices that the bot is pulling from. All the things you're saying are true for a shop that needs to custom quote each job but do not apply to the situation as presented.
It's clear that the author interpreted the data that way, yes.
And perhaps the shop actually charges the same for brakes whether it's a Ford F-150 or a Toyota Corolla.
But that seems very unlikely to me. While they're both very common vehicles, they are also very different and the parts have substantially different costs associated with them.
As the resident Diagram Maker at my job I really appreciate any and all discourse on the topic. Knowing the purpose of your diagram is a hugely under-appreciated part of the process. Service flow chart or system architecture? High level system overview or actionable, followable flow-chart? The engineer in me always wants to put All The Things in the chart, to make it maximally "correct". It's never the right move. But how to make it clear what's included or not, and why?
I still struggle with finding the best approach each time; I'd love more discussion of this stuff.
Since you said you were interested in some opinions: one of the least appreciated aspects of any documentation (but especially diagrams) is defining who the stakeholders are at the start of the document. It’s the difference between frustrated users who can’t understand things and happy users who understand the limitations.
The corollary to this is that the best diagram boundaries are often along communication lines between teams. This is Conway’s law all the way down. And the reason is that most often people use diagrams to get a spatial sense of where ‘they’ fit into things. I have only anecdotal evidence for this, but the most helpful and lasting diagrams I’ve ever made are ones where 1) they define (and stick to) specific stakeholders, and 2) they are delineated by groups/teams.
Yep! That's almost always the correct solution. It can be a lot to figure out, tho: which perspectives are most valuable to present? Are the linkages clear? Does this kind of box belong on THIS chart or THAT chart?!
The issue is that in domains novel to the user, they do not know what is trivially false or a non sequitur, and the LLM will not help them filter these out.
If LLMs are to be valuable in novel areas then the LLM needs to be able to spot these issues and ask clarifying questions or otherwise provide the appropriate corrective to the user's mental model.
In my job the task of fully or appropriately specifying something is shared between PMs and the engineers. The engineers' job is to look carefully at what they received and highlight any areas that are ambiguous or under-specified.
LLMs AFAIK cannot do this for novel areas of interest. (i.e., if it's some domain where there's a ton of "10 things people usually miss about X" blog posts, they'll be able to regurgitate that info, but they're not likely to synthesize novel areas of ambiguity.)
They can, though. They just aren't always very good at it.
As an experiment, recently I've been using Codex CLI to configure some consumer networking gear in unusual ways to solve my unusual set of problems. Stuff that pros don't bother with (they don't have the same problems I face), and that consumers tend to shy away from futzing with. The hardware includes a cheap managed switch, an OpenWRT router, and a Mikrotik access point. It's definitely a rather niche area of interest.
And by "using," I mean: In this experiment, the bot gets right in there, plugging away with SSH directly.
It was awful with this at first, mostly consisting of a long-winded way to yet-again brick a device that lacks any OOB console port. It'd concoct these elaborate strings of shit and feed them in, and then I'd wander over and reset whatever box was borked again. Footgun city.
But after I tired of that, I had it define some rules for engaging with hardware, validation, constraints, and for order of execution, and commit those rules to AGENTS.md. It got pretty decent at following high-level instructions to get things done in the manner that I specified, and the footguns ceased.
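For anyone curious what those rules look like in practice, here's a hypothetical sketch of the kind of AGENTS.md entries I mean — the specifics below are illustrative, not my actual file:

```
## Hardware engagement rules (illustrative example)
- Before changing any config, dump the current running config and save a local copy.
- Apply changes one at a time; verify reachability (ping + SSH) after each change
  before proceeding to the next.
- Never modify the management VLAN or the interface you are currently connected
  through.
- Prefer reversible, non-persistent changes first; only commit to the startup
  config after validation passes.
- If a device stops responding, stop immediately and report; do not retry blindly.
```

The general idea is to force validation between steps so that no single bad command can brick a box unattended.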
I didn't save any time by doing this. But I also didn't have to think about it much: I never got bogged down in wildly-differing CLI syntax of the weirdo switch, the router (whose documentation is locked behind a bot firewall), and access point's bespoke userland. I didn't touch those bits myself at all.
My time was instead spent observing the fuckups and creating a rather generic framework that manages the bot, and just telling it what to do -- sometimes, with some questions. I did that using plain English.
Now that this is done, I get to re-use this framework for as many projects as I dare, revising it where that seems useful.
(That cheap switch, by the way? It's broken. It has bizarro-world hardware failure modes that are unrelated to software configuration or firmware rev. Today, a very different cheap switch showed up to replace it. When I get around to it, I'll have the bot sort that transition out. I expect that to involve a bit of Q&A, and I also expect it to go fine.)
I caught that too. The piece is otherwise good imo, but "the luddites were wrong" is wrong. In fact, later in the piece the author essentially agrees – the proposals for UBI and other policies that would support workers (or ex-workers) through any AI-driven transition are an acknowledgement that yes, the new machines will destroy people's livelihoods and that, yes, this is bad, and that yes, the industrialists, the government and the people should care. The luddites were making exactly that case.
> while it’s true that textile experts did suffer from the advent of mechanical weaving, their loss was far outweighed by the gains the rest of the human race received from being able to afford more than two shirts over the average lifespan
I hope the author has enough self awareness to recognize that "this is good for the long term of humanity" is cold comfort when you're begging on the street or the government has murdered you, and that he's closer to being part of the begging class than the "long term of humanity" class (by temporal logistics if not also by economic reality).
> We should hate/destroy this technology because it will cause significant short term harm, in exchange for great long term gains.
Rather
> We should acknowledge that this technology will cause significant short term harm if we don't act to mitigate it. How can we act to do that, while still obtaining the great long term gains from it?
The matte effect is a huge part of why these look bad. Marble does an amazing job of showing off the subtle variations in the carving and matte paint flattens everything out. A glossier finish and literally any variation of tones would vastly improve the effect.
If someone had this experience I’d encourage them to look into how police departments across the US consistently fight against any accountability for the cops who perpetrate those relatively few awful encounters. “Most interactions are harmless, therefore the negativity is overblown and cops are trustworthy” is one takeaway if you stop your research at the right point. “If you have a bad experience with a cop, the entire department will turn against you; they are not to be trusted” is a more accurate takeaway.
If we apply your logic, would you say it's fair to go around and say "all teachers are bastards", when referring to teacher unions that make it hard to fire incompetent teachers? Or maybe "all doctors are bastards" when referencing how the american medical association (the trade association for doctors) makes it hard for more doctors to be admitted?
Sure, but one key difference is that if either of those groups steps outside the law, you have recourse to the law to check them.
Since police are part of the law, when they don't hold their own accountable, there's no recourse. And that's a real problem. This is before one even starts unpacking the knapsack of how much law is designed to protect the police from consequences of performing their duties (leading to the unfortunate example "They can blow the side off your house if they have reason to believe it will help them catch a suspect and the recompense is that your insurance might cover that damage.")
>Since police are part of the law, when they don't hold their own accountable, there's no recourse. And that's a real problem.
I don't see how this is a relevant factor for the two cases I mentioned. Sure, it's bad that police are part of the justice system, and therefore you can't use the justice system to correct their misbehavior, but you're not going to involve the justice system for incompetent teachers, or for not enough doctors being admitted. For all intents and purposes the dynamic is the same.
I am not at all joking when I make the claim that police committing sex crimes is a problem that is frequently swept under the rug by both police internal affairs and the judicial system.
You are definitely going to start involving the justice system if teachers and doctors start physically abusing people, illegally detaining them, and killing them!