Yep, this topic is covered in the blog post; search the post for "Expectations of Question Driven Development".
The TL;DR: until you write a lot of code you'll have no idea what you're doing, so it's fully expected that you'll be writing, refactoring, and deleting code as you go. It's only after you've taken that much action that you discover the simpler, more efficient solutions.
Front-loading all the reading and research isn't going to get you there sooner. IMO the best time to read a book or take a course on a subject is months after you've built something real, because then you can apply and understand all of the efficient solutions and come away with a bunch of takeaways to improve what you've done, things that might take you 1-2 days to apply after reading. This is also covered near the end of the blog post, btw.
If you ever want to reconsider the topic of SWE not being Engineers, here's an example.
It's typically not feasible to "wing it".
Cost, time, and effort mean you need to be correct the first time. Mistakes do happen, but when I switched from design engineering to SWE, I found I'm allowed significantly more mistakes. These can be inefficiencies, incorrect outcomes, bugs/errors, etc. No big deal, because I can recompile. You can't do that with half a million dollars in steel molds and production on a set date.
Additionally, in engineering I'll have four layers of management review and sign-off. In programming it's subjective code reviews.
Inexpensive/inconsequential mistakes simply enable bigger mistakes to be made. Systems grow more complicated as long as they're allowed to.
You can recompile a syntax error, but even in software you can't backtrack on things like choice of language, tech stack, or architectural design decisions without wasting millions of dollars.
For traditional industries, there's (presumably) some best practice or de-facto standard for many things, since most things have already been invented in the past.
For software, you have contradictory best practices, with people arguing convincingly both ways. Everybody has their favorite tech stack, and some end up being fads. New technologies come out every couple of years, and if your project is successful enough and runs long enough, you either get stuck with old tech or spend millions of dollars figuring out an upgrade path. Sometimes the "upgrade" is actually another fad, but sometimes it's crucial to your future business.
Not sure whether these things fall under "engineering", but I don't think it's inherently less "hard" than what traditional engineers do. Sure, your junior dev isn't going to do this, but the people who make these decisions are often called software engineers ("senior", "lead", whatever).
This seems like gatekeeping. You’re describing the awesome power of software: it can be updated and infinitely replicated and is therefore more tolerant of mistakes. Just because the field has this boon doesn’t mean that those working in it aren’t “real engineers” or whatever.
Suppose technology was sufficiently advanced to make steel mold production nearly free and instant. Are the mold designers no longer engineers then? (Watch out for 3D printing, by the way...)
This is an interesting topic. "Engineer" is a word people respect; "developer" is a meh word. You'd rather be a Software Engineer than a Software Developer, even though some companies use one title and some the other while your responsibilities and everything else are exactly the same.
Laypeople tend to respect classical engineers more because they can see that they're building something tangible. And many definitions of "engineer" seem to exclude software, something like "a person who designs, builds, or maintains engines, machines, or structures".
Fun story: when I was doing a medical checkup for work, the old lady saw I was listed as a "software engineer", didn't like it, and asked me for an alternative.
I believe this confusion arises between the valid definition of the act of "engineering" and the equally valid definition of the profession of engineering, which, like most professions, requires some sort of formal degree.
You could also be winging it when you're inventing something or building something experimental, provided the thing isn't meant to last several years. E.g. when inventing the light bulb for the first time, you'd be iterating on it and "winging it".
Learning fundamentals doesn't mean you don't code as you learn; it means you code to learn rather than code to finish a project. Coding to learn is deliberate practice, and it's the best way to learn fundamentals. Only learning whatever you need to get a specific project done means you'll always have a lot of holes; you'd likely get more done in the medium to long run by practicing and learning things properly from the beginning.
The goal isn't to code to finish the project and ignore any questionable code you've written because "it works".
It's to write a lot of code, which is practice in the end. It's expected you'll be doing an ongoing combination of writing code and looking things up as you go, but you're not looking for the first working solution to copy-paste and move on from. The topics you end up researching will lead you to write good code if you put in the effort.
There have been plenty of examples of this where I spent hours going over a few functions because I wanted to make sure I was doing things nicely, and the journey from the original version to the end version led to learning a lot. That might have meant reading through 5 pages' worth of search results, skimming as many videos as I could find, and maybe even asking an open question somewhere that yielded code examples written by people who have been working with the language for years.
You can then use all of that as input to guide your code. Throughout the process I may have written a few versions and ultimately landed on one based on what feels good when using it, isn't too clever, is easy to test, easy to understand, runs quickly, etc. Basically all of the properties that make code good.
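As a toy sketch of what that iteration can look like (a hypothetical example, not one from the post): a first working version you'd write before researching the language, next to the version you might land on after reading docs and other people's code. Both behave the same; the second leans on the standard library and is easier to read and test.

```python
from collections import Counter

# First working version: gets the job done, written before
# researching the language's idioms.
def word_counts_v1(text):
    counts = {}
    for word in text.lower().split():
        if word in counts:
            counts[word] += 1
        else:
            counts[word] = 1
    return counts

# Version landed on after research: same behavior, but it uses
# collections.Counter from the standard library, so there's less
# hand-rolled bookkeeping to get wrong or to test.
def word_counts_v2(text):
    return Counter(text.lower().split())

print(word_counts_v2("the cat and the hat"))
```

Neither version is "the" answer; the point is that the second only becomes obvious after you've written the first and gone looking for better ways to express it.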