In my experience, the 'traps' left by the DM actually reflect the inherent complexity of the problem domain: complexity that took months or years to fully understand, made up of corner cases and strange usage patterns that were never anticipated and have since been forgotten (forgotten, but still used from time to time).
If you think this new trendy solution is going to be cleaner or less hacky, and still cover the same (or more) use cases, you're rather naive.
80% of the ugliness comes from getting the last 20% of the functionality (maybe even 90/10). It is trivially easy to re-engineer a system that is 80% less ugly/hacky and covers 80% of the functionality of its predecessor. Then everyone slaps themselves on the back because the refactor is going great!
It's also true that 90% of users only need 10% of the features of your software. If you decide to drop some features that introduce huge, non-obvious complexity, you might find out that users don't really need (anymore? ever?) that feature in that form, and you might be able to let them achieve the same goal through a different feature set that doesn't affect the architecture and code complexity as much.
>On the other hand, the replacement is needed because we are now hitting the limits of the original approach, be it the model, the technology, the market evolution or the understanding of them all. Releasing a new solution, way better than the original one, will prove that the DM did a bad job.
No it won't. It will just prove that the rewrite was justified.
>Good developers turn into Dungeon Masters because this is what the system demands.
They really don't. Good developers realize that there is an endless cycle of the old being replaced by the new, and if you can't deal with that, you are not suited to a career producing software.
The whole thing reads like the author encountered one or more people whose self-esteem is so fragile that they are unsuited to the commercial software industry. It sounds like those individuals had a lot of trouble keeping their own (fragile) egos out of technical and business decisions.
>The moment a different implementation shows that a lot money could have been saved, then the cost of keeping the old software for too long is exposed too. And this can be scary: imagine you discover a wrong design decision you made 10 years ago costed millions to your company… would you happily expose it?
Yes. Exposing it presumably then saves the company millions going forward, and any halfway competent boss will respect you for putting the organization first. Shelve your ego; it's called being a professional. Never mind being professional, it's also a matter of acting like an adult.
The article isn't so much about dungeon masters as it is about people who shouldn't be programmers.
Folks who shouldn't be N: folks who make declarations about who should or shouldn't be N. There's no gate and you, sir or ma'am, are certainly not the gatekeeper.
Your statement generates a declaration about who shouldn't be N for all N that have declarations about them, hence barring you from doing pretty much anything... including making statements about who shouldn't be N?
So, part of the work I'm doing at the moment (mainly devops for a pre-revenue startup) involves an ETL sub-project (scrape data from websites, ingest into a database) which I'm implementing with Django, Django Rest Framework, Scrapy, Celery, Haystack (with ElasticSearch) and Postgres, with the Django Admin configured for data curation/high-level job monitoring. It basically took me half a sprint (1 week plus a couple of days) to get a large part of the work done (the Scrapy spider work was mostly done by a previous developer), including unit tests for the API/DRF code base, CI integration (Codeship) and a local dev Vagrant setup.
The rest of our stack (the main backend API), which was poorly implemented by a non-proficient developer (whom I wouldn't consider a bad developer, merely inexperienced; he eventually moved jobs), is currently done in NodeJS/Express/Sequelize and IMHO should be scrapped and rewritten.
Recently, during a conversation with a newly hired frontend developer, he mentioned that the ETL sub-project should be done in NodeJS as well, for language consistency across the stack. I basically replied that if he was willing to rewrite it himself in NodeJS (with the same features I get from solid, highly featured, mature community libraries), he'd be free to do it :-)
Unfortunately, the CTO <rant>who IMHO is pretty hopeless at managing people/software projects and whose sole aspiration seems to be becoming the CTO of the next Trello</rant> keeps throwing out buzzwords like "Kafka-based event-driven application", "snappy websockets-driven UI" and "microservices", while in the last year he was hardly able to get a buggy, over-engineered MVP up and doesn't seem to have taken the cue, which means I'm looking for a new job. In the middle of all this NodeJS madness, I'm hoping I have not become a Dungeon Master myself :-)
I think the article touches on a more general problem that I have been thinking about recently. That is: in the absence of understanding (which costs time and effort), how do you decide whether some expert opinion on the matter is good or useful? (This problem crops up everywhere in human societies, often in politics.)
It seems that people use many different heuristics:
- We have an expert, surely his advice is good.
- We have an expert, surely he is bullshitting us for his own benefit.
- The subject matter is complicated, surely it must be useful, because the problem is complex.
- The subject matter is complicated, surely it's because people working on it over-engineered the solution.
- The thing is old, surely it has proven its practical use.
- The thing is old, surely it uses obsolete technology.
- There are a lot of features, surely the thing will solve my problem.
- There are a lot of features, surely it's not all needed.
As the above examples show, in the absence of information, decision making will be subjective. And that's where the article falls flat, IMHO: it wants to give advice, but it can't, because there is probably no rational advice to be given, aside from: have somebody else (whom you trust) understand the original system as much as possible and then tell you.
The article reminds me of a similar situation I have at work. I work on a large legacy application (but I am not a DM; I am not really invested in it yet), which should be slowly replaced by the company, because we have a costly dependency on some vendor. Unfortunately, the new team doesn't want to consult our team!
> The article reminds me of a similar situation I have at work. I work on a large legacy application (but I am not a DM; I am not really invested in it yet), which should be slowly replaced by the company, because we have a costly dependency on some vendor. Unfortunately, the new team doesn't want to consult our team!
Then I think you are a Dungeon Master anyway. Whether you like it or not. You'll be pushed into the role. Advice you think is essential will be ignored and it will be painful to watch a lot of the same mistakes being repeated.
I think you can end up in this anti-pattern in two ways - either the old maintainers don't want change or the new ones don't want to listen. The result is the same.
I am not sure I would call it an anti-pattern; that's probably my main quarrel with the article. The situation really is that someone has a unique understanding or knowledge of some system, and the issue is whether you trust them or not. But modern human society is built on this principle! The only thing that sounds anti-pattern-ish about it is a potentially low bus factor.
I am well aware that today's society prefers foxes (people who have shallow knowledge of many things) to hedgehogs (people who have deep knowledge of few things), and I think it's wrong and unfortunate (because it actually quite heavily depends on hedgehogs). The article is, IMHO, a good example of this line of thought.
I could not help but laugh when I read about Node.js.
Yes, the magical silver bullet that kills the dungeon master! Snake oil extracted from the great depths of brogramming. Acme Elbow Grease to help develop scalable™ MVPs with possibilities of future growth. That's it: 42 is outdated and irrelevant now; the answer is Node.js.
> And this can be scary: imagine you discover a wrong design decision you made 10 years ago costed millions to your company… would you happily expose it?
Yes. Obviously. That's my job. Are you saying you wouldn't?
The whole article has this really weird assumption that people will naturally be ashamed that work they did years ago with a more limited budget/team/understanding of the problem is now showing cracks. But that is the natural state of software development. It happens to everyone, everywhere. A developer who can't accept that has a serious ego problem, and a manager who can't accept that should not be managing developers. Either way, the problem is the individual, not the design pattern.
I couldn't help thinking of this Joel Spolsky article as I read this: http://www.joelonsoftware.com/articles/fog0000000069.html. Sometimes the dungeon master dragging his feet over a rewrite is right to do so. Ironically, though, I think microservices do offer a way to gradually update and scale an existing system that is creaking under its own weight.
The only new idea this article has is giving a cute name to something that's been going on for ages.
Assigning malice to the people who understand how to work within the technical debt on a project is age-old programming nonsense. It was - and continues to be - a stupid thing to do.
This article just makes it a bit more conspiratorial, and rewrites the old story to be less falsifiable. Rather than accuse someone of simply being counterproductive to protect their job security, the fault is found in their ego and subconscious actions.
It's comforting to find a way to frame a problem such that you can blame a person for holding out. But the barrier to entry on a codebase is just another form of technical debt. It's no fun to accept that codebases evolve in a way that accumulates debt, or that the development plan might even have accelerated that accumulation for other short-term goals. It's doubly no fun to think about scheduling developers to pay down that debt.
What's the name of the 'anti-pattern' where you oversimplify a complicated phenomenon and give it a 'snappy' name in order to write some useless drivel on Medium?
Whenever I write/see something in a way I think could be better, or improved, I drop a 'TODO' comment. It doesn't even mean I intend to do it (things change), it's just a way of sharing knowledge/ideas.
In general, I think this is the issue. Dev "experience" is acquired knowledge that is not shared/communicated.
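For instance, a hypothetical TODO of the kind I mean (the function and the suggested fix are made-up illustrations, not from any real codebase):

```python
def merge_feeds(a, b):
    # TODO: this nested-loop join is O(n*m). Fine for today's feed sizes,
    # but if feeds ever grow past a few thousand rows, switch to a
    # dict-based merge keyed on "id". Leaving this note so the next
    # maintainer knows the limitation was a deliberate trade-off.
    merged = []
    for item in a:
        for other in b:
            if item["id"] == other["id"]:
                merged.append({**item, **other})
    return merged
```

The point isn't the code; it's that the comment records knowledge that would otherwise live only in the author's head.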
I feel like I'm somewhat of a "Dungeon Master" right now, and this article comes across as somewhat condescending. I think this is because, right off the bat, it makes the assumption that
> The dark secret of the Dungeon Master is that he knows every trap in the existing legacy software, because he was the one to leave the traps around.
It could be that your Dungeon Master isn't responsible for the traps; perhaps they were handed to him from his predecessor, or they were the result of a bad tradeoff — in my experience, usually quality being sacrificed in favor of short term time and cost (and much to the detriment of long term development), time and again, without ever allowing the dev team to revisit the accumulated technical debt, until it becomes so back breaking that we throw our hands up and — against what seems to be the prevailing wisdom — cry for a rewrite. If you start from such an accusatory stance as the above, well, yes, you and your "Dungeon Master" are going to have an adversarial relationship.
I feel like when I've been in this position, it's not the code that I'm trying to protect: the code is awful, and I'll readily admit that. However, it meets a set of requirements, albeit badly; but more often than not I see the rewrite dropping crucial requirements in favor of "simplicity", or "we're going to start with a minimum viable product" — after which you'll repeat the history of the same set of mistakes that the original made by allowing your product to grow organically, all the while ignoring the existing implementation that is rife with knowledge of how well that won't go.
> The worst domain expert is the one whose expertise is built from the intricacies of the existing systems
I agree! And yet this article seems to seek to vilify this person. We shouldn't have domain experts whose domain is the cruft of the system, but as long as our software development processes are so broken that the problems with the current system can't be addressed, I'd much rather have someone who is aware of what the limitations of that system are, what corners can be cut and what the risks of cutting them are.
Perhaps you've run into someone who is really as counterproductive as the article makes a Dungeon Master out to be, but I've never so feared for my job security, which seems to be the core reason given for the Dungeon Master's behavior and position. The article comes across as incredibly dead set on an us-vs.-them mindset, to the point where I question whether the author should take a step back and ask if he isn't himself part of the problem.
> They are going to find themselves less productive than their younger peers, good in useless things, but embarrassingly slow in what matters today.
This struck me as incredibly ageist when I read it. He recants somewhat at the end,
> One thing that I didn’t intend was to raise a generational/age issue. Thanks to Ann Witbrock for pointing me to that. It’s not age: it’s attitude, systems and vicious circles.
> Perhaps you've run into someone who is really as counterproductive as the article makes a Dungeon Master out to be, but I've never so feared for my job security, which seems to be the core reason given for the Dungeon Master's behavior and position.
I have and it's very scary indeed. I've heard "You will be fired if you don't support technology X" simply because that's the technology the DM used originally and is more familiar with.
I'm not disagreeing with your other comments, but I think the author's main point boils down to this: when a DM's ego becomes tied to a system in need of replacing, the act of replacing said system can feel like a personal attack to that DM. If that developer is in, or has risen to, a position of power, it can feel even more intimidating to suggest other technologies.
That being said, I've learned to expect such behavior. It's part of the job. A great skill to have is negotiating and communicating why the system needs to be replaced in such a way that the DM collaborates and sees the value the new system can produce. I'm going to work with people every day who don't see or don't like how I'm trying to solve something, and their concerns need to be addressed for us to continue working together.
Like you I think I've inadvertently become the "Dungeon Master". I don’t like maintaining a legacy app but if you’re going to replace this software the replacement better be an improvement.
> The dark secret of the Dungeon Master is that he knows every trap in the existing legacy software, because he was the one to leave the traps around.
I take issue with this quote as well. In my experience, I know about the traps because I've seen what happens when they get triggered and have had to recover from them, often at great time and expense. Put another way, traps are edge cases, and as with any system, it's easy to account for 90% of the complexity; it's the remaining 10% of edge cases that cause the headaches.
When I hear a young developer say something like “We’ll just use a const and assume each day is 24 hours long because it simplifies the code immensely”, and I then point out that they’ve stumbled into a trap because their program will now fail catastrophically twice a year (daylight savings, when you have a 23- and a 25-hour day), I’m not doing it to be a curmudgeonly dungeon master trying to protect my turf. I’m pointing it out because I’ve dealt with that pain when I got bitten by DST in the app I maintain. I was young once too.
It’s easy for some junior to say “This error will rarely occur; it is not worth introducing complexity to handle it.”
They aren't the ones who will get called out to fix it when the 'unlikely error' inevitably crops up. In the case of DST, twice a year over the 15+ year life of the app adds up.
Edit: I'm sure someone will mention UTC - I agree it's better to use it if the underlying DB supports it.
I'm destined to be a "dungeon master" on a recent project, and would love to restart with an MVP. I had to fight tooth and nail to leave out features no one was going to use - some of which weren't cut. The project manager kept predicting we would be done in 1 month when we obviously had way more than 1 month's worth of work. Ultimately we ended up developing our current version as if it were just 1 month from being done (cut some corners so we can hit the target!) for the majority of its development time.
This is a fairly common scenario across the industry. But I'd venture a guess that usually the guy/gal who had a large role in writing the original system is long gone.
In other words, while the rewrite is common, having the original author around is uncommon.
This reads like a 26 year old wrote it after supporting a toy system for two years, and has a few things to school his elders on.
I'm sure projects fail for many, many reasons, and inertia plus ego from an original developer that's now powerful is certainly one of them.
On the other hand, the number of half-baked ORM replacements for SQL I've seen represents a staggering amount of wasted labor, labor that could have launched a number of great tech teams if they'd had the wisdom to learn from someone who was coding while they were in diapers.
I think I find the whole thing so offensive because Node plus microservices are put out in the essay as the thing that a hot new dev knows better than her elders.
So -- computing models are cyclical; they wax and wane for social and practical reasons, and admittedly, beyond all predictions javascript has been massaged into an amazingly fast thing given where it started.
But, really? Microservices as an approach that's just too hard for some old fogey? The same fogey who probably could concatenate piped unix commands to get real shit done in the '80s, and had a solid understanding of awk syntax?
Because...you know...SOA and N-tier systems didn't exist before Node. So what's the name of the anti-pattern where a recent grad stumbles into an enterprise codebase for the first time and has a panic attack because the system architecture doesn't rely on whatever's hot that month in Silicon Valley tech blogs?
Still, when something succeeds beyond my expectations, I encourage myself to ask why that has happened.
On the one hand, I strongly agree with the sarcasm of this: "Because...you know...SOA and N-tier systems didn't exist before Node. "
That is well said.
On the other hand, something must explain the continued rise of Node and Javascript.
Since 2010 I've been a huge fan and promoter of Clojure. I keep thinking it is going to sweep the startup world. And that keeps failing to happen. And I keep wondering why.
Last year I worked at a startup that was in the startup incubator run by NYU in New York. The incubator is an office at 137 Varick street, and there are something like 20 to 30 startups up in there (the number varies a lot as students of NYU are allowed to use the space for short periods).
One thing that surprised me was how completely Node had gained the support of the newer startups. It was definitely the hot new thing. Some of the older startups were using Rails, the rest were using Node. And Node very much has the fire that Rails had 10 years ago.
My startup used a mix of Clojure and Java but we were a rare thing (I wrote about this elsewhere). None of the startups were using Groovy or Java or Scala or Clojure -- the vast resources of the JVM were largely ignored. Nor do I recall any of the startups using C#.
Why is that? I am not sure, but I can make a few guesses.
You mention SOA. The culture that grew up around industry standards such as WSDL led James Lewis and Martin Fowler to complain about "a complexity that is, frankly, breathtaking":
"Certainly, many of the techniques in use in the microservice community have grown from the experiences of developers integrating services in large organisations. The Tolerant Reader pattern is an example of this. Efforts to use the web have contributed, using simple protocols is another approach derived from these experiences — a reaction away from central standards that have reached a complexity that is, frankly, breathtaking. (Any time you need an ontology to manage your ontologies you know you are in deep trouble.)"
Arguably, Node offers an easy approach to "SOA lite". Consider how forbidding the previous link is, and then consider how friendly an approach is offered by a company such as LSQ (which started off specializing in Node):
Especially since LSQ is about to roll out an automated process for microservices on a stack to discover one another (which app runs on which port), a relative novice can get much of the benefit of SOA, quick and easy.
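I don't know LSQ's actual API, but the idea (services announcing which host/port they run on, and clients looking them up by name instead of hardcoding ports) can be sketched as a toy in-process registry. Everything here is hypothetical; real discovery systems such as Consul or etcd add health checks, TTLs and distribution:

```python
class ServiceRegistry:
    """Toy name -> (host, port) registry; a stand-in for real service discovery."""

    def __init__(self):
        self._services = {}

    def register(self, name, host, port):
        # A service announces where it can be reached
        self._services[name] = (host, port)

    def lookup(self, name):
        # A client asks where a named service lives, instead of hardcoding ports
        return self._services[name]


registry = ServiceRegistry()
registry.register("orders", "127.0.0.1", 8081)
registry.register("billing", "127.0.0.1", 8082)
host, port = registry.lookup("orders")
```

The appeal for a novice is exactly this shape: register on startup, look up by name, and never think about which port is which.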
Also, of course, Javascript can be used on the frontend and backend, and nowadays there are a lot of programmers who start off as frontenders and then later learn the backend, and want to bring their favorite language with them.
I could counter-argue that Clojure has Clojurescript and Om-Next, both of which are very, very cool, but I must admit that the JVM has no on-ramp as easy as what Node offers (and programmers who use Clojurescript, but not Clojure, have been rare, at least till very recently).
Also, I've wrestled with enough overly complex build scripts that I now appreciate how simple "npm install" is.
So maybe some programmers who are still in the early phases of their career are overenthusiastic about Node/Javascript. We should all still admit that what Node achieved in terms of ease of use is fairly impressive.
I'm on the other side of that one, which is to say I don't like the JVM (I think of Maven and get a bad, bad feeling), and re: WSDL and SOAP, I prefer something much, much simpler than anything that came from the XML-RPC folks.
I guess if you think back, JSON has the same story as the Node story you tell: it was easy for developers to work with and didn't require a lot of theory or tooling. That makes sense to me as a great pattern for getting adoption.
The much larger point though is that we should be able to talk over pluses and minuses in different ecosystems without calling somebody outdated or too old or too slow.
Node/JavaScript cater to those who also know a bit of front end.
Node has its uses for prototyping and quick projects, at least for me.
I honestly spend about 1/3 of my time in the Java ecosystem -- it has its uses, I've done a lot of it (dev and operations), but I personally dislike the general weight of dealing with server side Java - app servers, etc.
Language and ecosystem, for me, are dictated by the needs of the particular project. I said 1/3 Java above, the other 2/3 for me (right now) are C and Node... Across 4 projects.
I think the difference between the Java (let's say EE) ecosystem and the NodeJS ecosystem is how you learn them. In a typical Java EE system, you have to learn a lot about the architecture, available libraries and practices before you can really start. With the NodeJS ecosystem, you can get something running in very short order; the education is incremental, but you will still end up learning a lot of new libraries to build anything substantial. When I built a PoC copy of an existing (and complicated) system, I ended up building most of the functionality of a Java EE application server out of NodeJS components.
As an aside, don't confuse Java EE with J2EE - the new enterprise Java system is a far simpler development environment than the old (I almost gave up on enterprise Java myself before this transition).
That is an amazing reply to what was intended to be simple sarcasm. Ease of use/adoption is king in my book, although it's interesting to note php doesn't get any points despite its well-documented low barrier to entry.
What's the name of the anti-pattern where some punk wanders in and, after a week or a month, decides they totally understand all of the business requirements and constraints involved in a system that took five years to get to its current state, and decides a rewrite should be launched immediately?
What is the name of the anti-pattern where the new person on the team can't do any work because the DM has the only working copy of the source code on their own machine. Where version control is keeping folders on the desktop labeled 'new', 'new2', 'new3'. Where you do fixes in the live database. Where there is no test database, or any tests. Where deploying is copying some files to a shared drive. Where functions have multiple uses and multiple effects based on what flags are set. Where the answer to every flow-control problem is more flags. Where "no, the users can't upgrade to Access 2015, this only works with Access 12". Where the project maintainer can't be bothered to learn anything new, because they've already forgotten more than you'll ever know. Where we BS the managers and users into believing it's supposed to work that way, and if it crashes, well then, don't do that.
Sorry, you struck a nerve. You didn't deserve that response. I've been in that situation before though, and gotten viewed as the "young punk" for merely suggesting that we improve the code and processes (which as you can tell were extensive enough changes that to the outside observer it would be called a "rewrite")