Sometimes it's not so much "NIH syndrome" as "everything else really, truly is crap". Scottschulthess mentions that all CMSes are terrible. Some of my colleagues, for example, needed to spin up hundreds of virtual machines very quickly, and as it turns out, OpenStack is absolute crap. So they took two weeks and wrote something a fraction of the size of OpenStack, which can have hundreds of virtual machines running in the time it takes OpenStack to boot up a tenth as many... and crash.
Yeah, that is very true. These are meant to be smells, just something to watch out for. I personally think it's impractical to remove all smells; just be wary of having too many.
""Not Invented Here" syndrome: Expressed most commonly in a desire for everything needed to be developed in house. E.g.: “Need a CMS? Let’s make our own from scratch!” Perhaps you work at a place where all your teammates do is constantly bring you bad ideas. Are they really all terrible? Or are only your ideas good enough for the organization? Not Invented Here can also apply to your own head, not just the organization."
The answer is yes, they (CMSes) are all terrible. :)
"No dedicated QA for externally-facing software: Someone who is experienced at breaking software should have a crack at it before it goes to users. Developers (including me) are too enamored with their own work to really take the time to break it, so someone with a sense of pride in finding problems needs to be given the task."
Having dedicated QA is not always good. In fact it is often counter-productive.
"Low Joel Test score: The Joel Test remains a great indicator of institutional ego. A team that scores low on the Joel Test does so because someone along the way decided that, "nah, we don't need that here, we are special", and almost certainly they are not. I have yet to hear of a team with a legitimate reason for a low Joel Test score."
The Joel Test is actually quite out of date. Specifically, the items about fixing bugs before writing features, hallway usability testing, testers, having a 'spec', and having a bug database are not all necessary, or even good, in every case.
> Having dedicated QA is not always good. In fact it is often counter-productive.
Maybe 'dedicated QA' is a bad idea, but a developer must at least be point man for testing, even if it isn't 100% of their workload. One could argue that having a horde of peons clicking randomly is counter-productive, but teams larger than about six need a specific dev responsible for automating tests. The domain is too specialized, and too intermittently demanding, to force all devs to split the duties.
I wouldn't say that a perfect score means you are good to go, just that it is a list of things that should either be addressed or deliberately discussed and weighed. You want to make sure that if you are leaving in known bugs, not using a bug tracker, or not writing requirements, it is for a very good reason. If the reason is "we are a one-man shop, and I am fine without it", OK, just realize that it will not scale very efficiently.
I'm curious about this. What do you do when you encounter bugs that have a longer fix period, say several days to a couple of weeks? Do you stop new development or are you dedicating part of your team or team's time to only working on bugs?
Often I don't need a "Content Management System", but I need a way to "Manage Content". The 'system' part is... pretty much in every situation I've seen, a straitjacket with little room to be extended, or something that requires a lot of time/effort to become proficient with. I've still not found a good middle ground.
Anything not at either end of that spectrum (I'd argue WordPress, for instance, is dangerously close to being an example) is a compromise that shouldn't exist: it incurs a heavier maintenance burden than the limited but simple solution, in exchange for flexibility that it might not be able to pay back as well as the fully flexible solution.
Assuming content management is the key goal, I tend to use either a static generator (limited, purpose-built, ultimately simple) or Drupal (approaching a framework that happens to ship with a CMS, with a learning curve from hell).
For someone who needs more than the simple use case, my suggestion is to delegate to a guy who has spent a long time clearing the hurdles to proficiency with one of the flexible-at-the-expense-of-complication CMSes. Though I'm one of said guys, so I'm biased.
Obviously, Drupal's not exactly pretty (and PHP is a lot of fun to hate) but I find that it comes with code I don't have to write to scratch enough of my customers' itches for me that it's hard not to love it anyway.
In order to "save development time" and implement "best practices", a company decides to use an "off the shelf" package from a big-name vendor (IBM, Oracle, Microsoft, etc.) and customize it to their needs. Some initial customization is done, and some shortcuts are taken to make the project go live.
Fast forward a couple of years. The custom code added by the company complicates upgrade paths and vendor support.
The business demands more and more functionality which the package does not support "out of the box".
The customization initially implemented turns out to have been technical debt in disguise.
They bypassed the recommended usage of the package or platform which resulted in performance problems.
Eventually they end up having to re-work everything "the right way" in order to get vendor support or fix the performance problems.
This is especially prevalent in corporate IT departments at large corporations.
Moral of the story is that "off the shelf" platforms are not a silver bullet.
If you don't understand the platform and the business requirements you will have problems later on.
I don't know how many times I've fixed performance problems in "off the shelf" packages simply by writing code that replaces the OOB code with a couple of SQL queries/stored procedures and does only what the business needs, bypassing features the customer does not currently need.
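To make the idea concrete, here is a minimal sketch of that kind of replacement, using Python's built-in sqlite3 and an invented `orders` table standing in for the vendor's schema. The point is that a single targeted query fetches only the fields the business needs, instead of routing through the package's generic object layer:

```python
import sqlite3

def open_orders_for_customer(conn, customer_id):
    """One targeted query in place of the vendor's generic OOB code path:
    fetch only the columns the business actually uses."""
    cur = conn.execute(
        "SELECT id, total FROM orders WHERE customer_id = ? AND status = 'open'",
        (customer_id,),
    )
    return cur.fetchall()

# Demo with an in-memory database standing in for the real system.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL, status TEXT)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [(1, 42, 9.99, "open"), (2, 42, 5.00, "closed"), (3, 7, 1.00, "open")],
)
print(open_orders_for_customer(conn, 42))  # [(1, 9.99)]
```

The table and function names are hypothetical; the real win in these situations usually comes from skipping the package's layers of indirection, not from the SQL itself being clever.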
There will always be problems and headaches with off the shelf software, but the thing is that even with all of this you may still save over building it yourself. I worked for years on a Siebel implementation, and even with these issues it was worlds better than the previous incarnation which we had built in house.
I used to apply the Joel test religiously, but I've gradually come to the opinion that dedicated QA is a net negative. It creates a natural opposition between two halves of what should be the same team, and removes the focus that comes from knowing that what you create will be what's deployed. It also adds friction that goes against the agile "deploy early, deploy often" approach.
This is incredibly specific to one way to do QA. Arguably I'd say it's a bad way to do it, much as you can find really bad engineering practice, in which case I agree with you: bad practice is bad practice. :) People complain about how hard it is to hire good developers, and this goes double or even triple for people in the test industry, frankly.
So much depends on what the project actually needs. If developers are doing a great job of handling QA-related issues themselves, great. But sometimes you need someone to help you clear out technical debt, rig up some test infrastructure, help refine a release process, etc. And sometimes the risk of regression is so high and the product is so big that you do need an army of testers.
Likewise, for any sufficiently large project, having a dedicated person who keeps an eye on the big picture -- a clean & accurate bug DB; tracking customer reports/issues; finding & narrowing bugs -- can be an asset. This goes double when the tester's job is to be a product expert, and when the tester is reasonably technical.
And yes testers should be more or less embedded with developers. "Throw it over the wall" is invariably a terrible way to handle releases. Testers should exist to enable deploy early, deploy often, not to inhibit it. And so on. But see above re: hiring good testers. It's hard.
Anecdotally, I can say that I've seen the issue with QA causing code deployments to take far longer than they should. For example, right now we have code (one small feature) that QA wants almost two full weeks to test. I was just telling one of our QAs that I hope we start getting much better automated test coverage so that manual QA time can be brought down to a much more reasonable day or two before releasing code to the public.
Some of it comes down to incentives/culture. If QA gets yelled at when bugs go out, they're going to take a conservative approach.
It's funny, though, to hear you "hope" you start getting better automated test coverage. :) It happens pretty often that QA is the sole owner of automated tests, which has its own set of potentially bad incentives.
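The automated coverage hoped for above can often start very small: turn one check a tester would exercise by hand into a test that runs on every commit. A minimal sketch, where `apply_discount` is an invented stand-in for whatever logic the release currently gates on:

```python
def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents.
    Hypothetical business logic a manual QA pass would otherwise verify."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

def test_typical_discount():
    assert apply_discount(100.0, 15) == 85.0

def test_boundary_cases():
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.0, 100) == 0.0

# Run with a test runner (e.g. pytest) in CI so the check happens on
# every commit, instead of once per release by hand.
```

Once checks like these accumulate, the manual QA window before a release can shrink to exploratory testing of what actually changed.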
> developers and managers repeatedly act as if established best practices do not apply to them
... and rightly so. A junior programmer might be better off always following "established best practices", but I expect any developer worth their salt to be able to know when to break the rules.
"Best Practices" are not always well defined or applicable. In some cases there are conflicts among different practices. Even worse, there are cases where one person believes something is a best practice and another believes it to be a bad practice.
The reality is that there isn't really such a thing as a 'best practice.' There are only practices that have been found to be 'less wrong' than other ways, and perhaps even 'useful.'
Whatever your development practices are, the important thing is that you have and maintain the flexibility to adapt to what the particular situation needs.
Hmm, just playing devil's advocate here but I'll bet that a huge list of successful people topped by the obvious like Gates, Zuckerberg, and even the humble Wozniak have used all of these elements some or most of the time in their development lives :)
Generational differences abound and manifest in management trends as people reach an age of influence. Born in the '50s or '60s, you're more likely to be a lone-wolf type. Born in the '80s or '90s, and you probably consider teamwork and connectedness to be instinctual. I'm not claiming either is correct, but certainly projects beyond a certain scale can only be completed by a structured team. Also, those who prefer to work alone are not automatically egomaniacs.
You are certainly correct that there is probably a huge list of successful people who have used these elements, and that is fine, but it doesn't mean I want to work for them! I have heard people say working at Apple is horrific, as that ego permeates everything they do. Sure, they (used to) turn out great work, but at the cost of a terrible work environment. Check out the book Good to Great http://en.wikipedia.org/wiki/Good_to_Great#Seven_characteris..., and notice that first thing? Humble leaders. Notice what is happening to Apple lately without that massive ego whipping them onward?
As to the generational differences, I have not noticed only the older devs wanting to "cowboy" while the younger devs "cooperate". Usually it has been a mix of both doing both. I will have to keep my eye on that, though.
I believe EDD affects everyone from the large corps right down to the one and two-person teams. Take a look at highest upvoted hacker ideas here: http://www.todaystopthing.com/hackerideas/top
No, not every company uses it, nor is it good for every company or circumstance. It is just a smell that the developers do not believe that working with someone else will help them increase their quality or speed. Some developers really are fast enough that pairing would slow them down, but their slower team members might benefit hugely from just a few hours a week of pairing. So it is a trade-off: do they go fast alone, or go pretty fast with a partner and increase the _total_ productivity over time?
Pair programming is a great way to get new developers up to speed with the code-base, but the team that is going to be using it long-term needs to apply it with discipline.
As for hardware setup, the best I have used is: machines with cloned monitors, two mice, and two keyboards, side by side so the developers can talk in low tones and not bother anyone else. I do this now at work, and it is a lot of fun, and on average we get more than twice as much done as we would working alone.
My experience: pair programming usually degenerates into having a human version of the annoying syntax checker prevalent in previous versions of Visual Basic, which required you to close the dialog and then fix the error before allowing you to move to another line. "Whoops, you forgot a semicolon there." "Are you going to close that open brace?" YES I'M GOING TO CLOSE IT I WAS GETTING TO THAT ASDFGHJKL;
That honestly does happen at first, until you get used to the other person a bit more, then the real speed happens.
Once you really are switching every few minutes, and you learn to trust the other guy to figure that crap out himself, then you can stop watching his semicolons and start watching ahead for what you are going to do when you get your turn to drive. I have worked with someone like that, and the effect was profound: we would sometimes get 4-5 days' worth of low-bug-count, high-quality work done each day. You have to learn to disengage just a bit and start actually thinking about how his work fits into the big picture, and what you are going to do next, not his spelling errata.
IMHO that's either the navigator not knowing any better, or wishing he was the driver, or just being a jerk. The navigator's job isn't to focus on the nitty-gritty, that's what the driver is doing. The navigator should instead, since he's free to think more and look around, keep his eyes on the bigger picture. If the navigator is more skilled or experienced than the driver, then the navigator should be that much more patient and respectful.