In the late 2010s, my startup built a messaging app with millions of daily users sending and receiving >1B messages a month (somewhere north of 300 messages/sec on average, IIRC), with a trailing 12-month average uptime of 99.995%.
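(Sanity check on that rate: 1B messages ÷ ~2.6M seconds in a month (30 × 24 × 3600) ≈ 385 messages/sec on average, so the numbers line up.)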
When we were going through due diligence as part of being acquired by a large tech company, I had to answer a lot of questions about our infrastructure and to "prove" that our boring tech stack could actually achieve this.
That "stack" was 3 pods (2 in one AWS region, 1 in another) comprised of 4 Linux boxes and 3 Windows boxes.
Within each pod, the Linux boxes were 2 monitoring servers and 2 app servers running an open-source software package, while the Windows boxes were 2 app servers running our .NET APIs and a SQL Server instance.
Failover was handled by SQL clustering at the DB level and an ELB at the app level.
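For anyone unfamiliar with the ELB side of that: the load balancer keeps an instance in rotation only while it passes a health check each app server exposes. A minimal sketch of such an endpoint (in Go for illustration only; the actual app servers above were .NET, and the handler path, driver, and DSN here are hypothetical):

```go
package main

import (
	"database/sql"
	"log"
	"net/http"

	_ "github.com/lib/pq" // Postgres driver, assumed for this sketch
)

// healthHandler returns 200 while this instance can reach its database,
// and 503 otherwise, so the load balancer pulls it out of rotation.
func healthHandler(db *sql.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if err := db.Ping(); err != nil {
			http.Error(w, "db unreachable", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
}

func main() {
	db, err := sql.Open("postgres", "postgres://app@db.internal/app") // hypothetical DSN
	if err != nil {
		log.Fatal(err)
	}
	http.HandleFunc("/healthz", healthHandler(db))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```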
While the stack and code were boring, a lot of work went into performance, reliability, and observability.
After we were acquired, my new infra budget was based on their own modeling for scale, resulting in a monthly AWS budget that was 10X my previous annual budget.
I wonder if it's possible for this to get resubmitted, or given another chance. We often see these "boring" tech write-ups, but very rarely applied to something like healthcare.gov.
With hindsight, I'm curious which of the other choices made would have changed the most with a multi-page architecture, and whether you've discovered a cleaner way of avoiding it? I'm struggling with something similar at the moment.
Talks about boring code, then immediately introduces a completely modern stack that most developers would love to use, including gRPC, gateways, RDS, S3, EC2, testing, separated services & databases, mocking frameworks...
There's nothing boring about this; it's unfortunate that they chose to call it "boring code".
Aside from that, the actual implementation details are quite interesting and worth a read. A better title would be "Lessons learned using a modern stack for healthcare.gov".
I think they're saying "boring" in that there were no surprising or unusual choices. Go. Test. Mock. Three Postgres databases, mostly immutable, partitioned by data patterns / access patterns. Crontabs and shell scripts for data loading. No microservices, no multiple repos, no strange/innovative technology. Pre-computing a bunch of stuff. A 250-line query-builder instead of using GraphQL.
`grpc` is a bit fancy (compared to `curl *.json`), which the author admits, but it's also fairly battle-tested / boring, and gets the extra benefits of a schema + performance. Maybe Swagger or something would have been more "boring", but that also veers a little away from "the default golang stack".
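To make the query-builder point concrete: the kind of thing being described is a couple hundred lines of plain Go that assembles parameterized SQL, rather than an ORM or a GraphQL layer. A minimal sketch of the idea (all names here are hypothetical, not the actual healthcare.gov code):

```go
package main

import (
	"fmt"
	"strings"
)

// Query accumulates columns and conditions, then renders
// parameterized SQL with $1-style Postgres placeholders.
type Query struct {
	table   string
	columns []string
	wheres  []string
	args    []interface{}
}

func NewQuery(table string) *Query { return &Query{table: table} }

func (q *Query) Select(cols ...string) *Query {
	q.columns = append(q.columns, cols...)
	return q
}

func (q *Query) Where(cond string, arg interface{}) *Query {
	q.wheres = append(q.wheres, cond)
	q.args = append(q.args, arg)
	return q
}

// SQL renders the statement and returns the bound arguments.
func (q *Query) SQL() (string, []interface{}) {
	var b strings.Builder
	b.WriteString("SELECT " + strings.Join(q.columns, ", "))
	b.WriteString(" FROM " + q.table)
	if len(q.wheres) > 0 {
		conds := make([]string, len(q.wheres))
		for i, c := range q.wheres {
			conds[i] = fmt.Sprintf("%s $%d", c, i+1)
		}
		b.WriteString(" WHERE " + strings.Join(conds, " AND "))
	}
	return b.String(), q.args
}

func main() {
	sql, args := NewQuery("plans").
		Select("id", "premium").
		Where("state =", "VA").
		Where("metal_level =", "silver").
		SQL()
	// Prints: SELECT id, premium FROM plans WHERE state = $1 AND metal_level = $2 [VA silver]
	fmt.Println(sql, args)
}
```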
I think it's boring in the sense that none of it is shiny and new. It's all well-known and battle-tested.
I agree that the article is very much worth a read.
I think that to figure out what boring code is, you need to know the context of the organization creating it.
At Ad Hoc in 2018, using Go/React/RDS/EC2 was a pretty standard stack that I could be sure my coworkers would have no trouble getting up to speed with.