I had never heard of ffmpeg until yesterday. In fact, I just Googled the term because I _still_ wasn't sure what it was beyond a dependency required to use a Hugging Face model I was testing. And now here we are...
Every media processing or delivery service you've interacted with in the last decade is a lib-ffmpeg wrapper: broadcast TV, streaming, anything remotely related to re-encoding video. I'm only being slightly hyperbolic; FFmpeg is ubiquitous.
I think we can just let this rest. These kinds of operations are not as ergonomic in Python. That's pretty clear. No example provided is even remotely close to the simplicity of the F# example. Acquiesce.
The fact is the language just works against you in this area if you have to jump through hoops to approximate a feature other languages just have. And I don't even mean extra syntax like F#'s pipe operators (although I do love them). Just swapping the arguments so you could chain the calls would look a lot better, if a little LISPy. It really is that bad.
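A minimal sketch in Python of the kind of helper I mean (the function names here are made up for illustration): with argument order flipped so the data comes first, a tiny `pipe` helper gets you most of the readability of F#'s `|>`.

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread a value through a sequence of unary functions,
    roughly approximating F#'s |> operator."""
    return reduce(lambda acc, f: f(acc), funcs, value)

# Hypothetical processing steps; any single-argument functions work.
def double(x):
    return x * 2

def increment(x):
    return x + 1

# Without a helper, the calls nest inside-out:
nested = increment(double(3))       # 7
# With the helper, they read left to right:
piped = pipe(3, double, increment)  # 7
```

It does read a little LISPy, but the data flows in the order you think about it.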
Totally agree. The world is the best it has ever been by nearly any metric.
My father had to _literally_ practice hiding under his desk at school with a Geiger counter in preparation for nuclear war. He was then _drafted_ into Vietnam... I don't think I have it so bad.
"The world is the best it has ever been by nearly any metric."
How about fertility of the soil, the area of land covered by desert vs. forest, the number of insects and the diversity of wildlife in general, the amount of fossil fuels burned every minute, opioid intake, antidepressant consumption, the number of days you have to wait to see a specialist, the inflation rate, ...
So good to hear that you are currently doing well. And we surely could be worse off, and many things certainly did improve. But given some more geopolitical escalation, you getting drafted as well in the near future remains a very real possibility.
I mean, if we look at history, climate change (even much smaller than what we are facing) tends to trigger wars. Suddenly land that was valuable isn't, and vice versa. Even in the most optimistic scenario it will probably destabilize the world order and result in conflicts.
Weather events, for example, are thought to be one of the likely factors in the Late Bronze Age collapse. There's evidence that unusual climate events coincided with the fall of Rome. Obviously a lot of other factors were at play; climate just pushed something brittle to its breaking point.
But you aren't building a generic product. You are building something specific.
In my experience the _vast_ majority of CSS never gets reused. Sure, if all you are building is marketing websites or landing pages, then I suppose some generic "cards" or whatever will get you pretty far, but for anything even moderately more complex... no chance.
There are many glaring issues with the ideas presented in that linked "take down" of DDD.
I can forgive the early strawman comparing "spreadsheets" to bespoke software solutions. As if they cover exactly equal problem/solution spaces. Fine. I'll play along...
I can even forgive failing to understand that DDD is a design methodology, not an architecture; that DDD in no way prescribes or enforces any particular code organization or deployment strategy. Easy to get wrong I suppose, and it doesn't necessarily invalidate what could become a coherent argument...
But the author then has to go on and give examples of just how little they understand the topic at hand!
The first points to database software as "an example of good architectural design where their purpose of storing data is not confused with the domain the database will be used for". Oh the irony! Of course that's the case! The domain of database software is... (drumroll) persisting data! What would you expect to see if you opened the source code for an RDBMS? Code for blasting out marketing emails? I could go a step further and opine that it isn't possible to know whether any piece of software follows DDD without actually seeing the design/code base... but I digress. This point is not forgivable. It's a clear and obvious misunderstanding of DDD and how it is applied to systems.
The next examples they give are of applications they worked on! In both cases the author completely misses the fact that the software they "fixed" by decoupling it from the "domain" are simply examples where they followed DDD to create a "better" system. That's right! If the goal of your software is to create a generic "data integration application" or "workflow engine", then yes, coupling your design to "healthcare" is a mistake. Both are examples where the author is confused about what domain their software is servicing, and how aligning the software design to the correct domain was a major improvement. Hmm... sounds like DDD to me :)
I think there are valid criticisms of DDD, but the article linked is quite poor at articulating them.
This confirms my suspicion that "domain" has no definition. If "storing data" is a domain, then anything can be a domain, and the word is meaningless.
There are logical groupings of nouns and functionality into services / modules / orthogonal parts of the software. Aggregate root is one useful term here, domain is not.
And for me it further reinforces the point of the article: design functional horizontal layers, not ones locked into your business "domains".
Come on now... You know perfectly well that "storing data" means something different to database software than some shitty LoB app. Let's not be so imprecise as to give off a disingenuous impression. The article makes almost no argument against DDD, and in fact could be a case study supporting the opposite conclusion! The author confuses architecture with design in every important way (similar to your "horizontal layers" comment).
DDD is not about how software is physically organized nor how it is deployed. You can have a traditional N-tier architecture AND follow DDD. The domain model is a logical model used to abstract the functional requirements of a system. It doesn't "lock" you in anymore than whatever else you have in place serving the same purpose. You cannot simply avoid your functional requirements. DDD is a methodology with the specific goal of drawing boundaries (answering "what" goes "where") in such a way to minimize the cost of change. If your resulting design is not doing so, you have simply failed to model your domain in a useful way.
In the author's case, maybe they really did need a generic "data integration" or "workflow engine" application. That's not an unreasonable assumption. And it follows that coupling those kinds of applications to the data/workflows contained therein would lead to all sorts of problems.
But surely he is not arguing that every application should be designed as a platform? It's considerably more difficult to design and maintain a "workflow engine" than "a single workflow", or a "data ingestion" application than to "just ingest the data". Most of the important bits in the above are already abstracted away from users.
Eh... I agree that the minimization of LoC is almost certainly not the most important vector on which to optimize, but I'm not convinced the example linked here is an improvement. The author is correct that their version is easier to debug and slightly easier to understand, but neither of these improvements, taken in isolation, makes the code better as a whole.
In terms of ease-of-debugging, sure, splattering local variables and extra control statements may allow you to break/inspect a certain class of bug in a certain way. But it also creates a lot of noise and makes the code a lot more "dense". It's hard to see given an example in isolation, but when all of your code looks like this it can make it significantly more "tiring" to understand. "Easy to debug", while important, is also something that must be balanced against other factors.
And in terms of easy-to-understand, again, I agree that the author's example has a slight edge (give the first one a shot though... it's not so bad). But what does it mean for a `Contact` to both be "inactive" and also "a family or friend"? They have forgotten to capture the single most important condition! Similar to my first point, it can be hard to see the issue when given an example in isolation, but imagine looking for whatever condition or rule the author is enforcing in a sea of other blocks that look similar.
A simple comment over the original version would suffice for me.
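Something like this hypothetical sketch (the `Contact` fields and the rule itself are made up, since the linked example isn't reproduced here): one comment stating the business rule over the compact condition, instead of splattering it across named locals.

```python
from dataclasses import dataclass

# Hypothetical Contact shape; field names are invented for illustration.
@dataclass
class Contact:
    last_seen_days: int
    relationship: str

def should_archive(contact: Contact) -> bool:
    # Archive contacts we haven't heard from in over a year,
    # unless they are family or friends.
    return (contact.last_seen_days > 365
            and contact.relationship not in ("family", "friend"))
```

The comment carries the intent; the condition stays in one place where you can see the whole rule at once.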
While providing a syntactically correct, optimal solution to a given problem is important (this was reiterated many times during my recent interview process at a FAANG), it is also important to be able to demonstrate the ability to synthesize an approach to solving a problem to which you don't already know the answer. That is, how to solve a problem.
These questions, while certainly aimed at identifying core competency areas, are also an opportunity for the candidate to demonstrate an aptitude for problem solving. Nobody really cares "how many piano tuners operate in Seattle". What they care about is, given that kind of question, how a candidate responds. You might be surprised by the number of people totally unable to even generate a reasonable estimate:
- How many people live in Seattle?
- What percentage have pianos?
- How often does a piano need tuning?
- How long does it take to tune a piano?
- How many pianos can be tuned per work day (given commuting time)?
- etc.. driving towards a number.
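The chain above is just arithmetic. A quick sketch with rough, entirely made-up numbers, purely to show the shape of the estimate:

```python
# All figures below are rough guesses for illustration, not real data.
population = 750_000                 # people in Seattle, order of magnitude
pianos = population * 0.02           # assume ~1 piano per 50 people
tunings_per_year = pianos * 1        # assume each piano is tuned ~once a year
tunings_per_day = 4                  # ~2h per tuning incl. travel, in an 8h day
workdays_per_year = 250

tuners = tunings_per_year / (tunings_per_day * workdays_per_year)
print(round(tuners))  # → 15, i.e. "a dozen or two" full-time tuners
```

Whether the answer is 10 or 40 doesn't matter; what matters is that the candidate can decompose the question and drive it to a number.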
Did the candidate attack the problem and even attempt to solve it? Did they use metrics? The list goes on... Most LC-style questions (save the "aha! moment" ones) have a similar impetus, but are also much more practical and efficient because they test for many things at once.
And if I'm being honest... most FAANGs aren't looking for engineers that require quiet and time to solve problems. They are looking for elite engineers who can write a solution nearly as fast as the problem is presented (I'm obviously exaggerating a little bit here). The questions presented to me (6 technical interviews) were what I consider semi-trivial. I was able to describe the solution-space within seconds of reading the prompt and code an optimal solution nearly as fast as I could type in all cases. It honestly made me wonder about how poorly the average programmer must perform such that these were the questions they needed to evaluate my potential.
Another commenter posted this article [0] about the interviewing philosophy back in the early '00s. I found it very enlightening, and reassuringly, while reading through it I knew the answer to nearly every question (and could expand upon the topics) just off the top of my head. I'm not trying to sound arrogant here but the idea that a FAANG, who pays top-dollar for engineers, would settle for "10 years experience" as a suitable replacement for "demonstrates ability" is what's absurd. A false positive is way more costly than a false negative.
I think there is a fundamental mismatch between the signal an interview can provide and the requirements necessary to be successful on the job.
You see, an interview is like a “test” or a “pop quiz”, where the day-to-day is more like “homework”. That is, you want to hire people who do their homework, but the only reasonable method to evaluate a candidate is to give them a test. And while a “high test score” may serve as a proxy for “completes homework”, as we all know, it's far from perfect.
The unfortunate reality is that the best process for determining the quality of a candidate benefits neither the candidate nor the company. That is some sort of paid-trial-period-type arrangement to evaluate fit.
So we are left with these gauntlets of an interview process because it kind of splits the baby…
I've seen this idea of trial periods floated around a bit.
My honest question is, how do you run this at big tech scale? Honestly, we get dozens if not hundreds of applicants per opening. Are we going to give all of them a trial? If not, how do we choose which ones? Again we are back at square 1.
I've probably written some version of this comment over a dozen times throughout the years when discussion about PHP rears its head (note that I'm fond of PHP):
PHP as a language is... okay. It's been modernized to include many of the faculties a developer would expect of a language in 2022. But honestly, I'm ready for the "improvements" to the language to stop. The closer you get to C# or Java, the more obvious it is that PHP will never catch up! It's not that it has to catch up; rather, few people actually choose PHP because the language is so great. If you were choosing your stack based on "which language offers us the most features or highest ability to express our problem", you wouldn't be choosing PHP!
Here's the secret: PHP is probably the best language for server side webdev because of the runtime! It's so dead simple to develop and deploy PHP. I mean the language itself is basically a web framework right? How many other languages automatically parse an HTTP request and hand it to you by default? Or offer built-in templating semantics for composing HTML? Or have request routing by default? I don't know of another one. That's why I, and many others, choose PHP!
You know what I'd like to see? The trajectory of PHP's development change to lean in to the above -- to focus development on improving the reasons why it is chosen in the first place! I don't need a C# with slightly different syntax (or named parameters). I'd like:
- Improved templating semantics
- A better module system
- A way to bake routing into each file (overwriting the default semantics)
- Adding a built-in service-locator pattern/DI
- A way to make PHP scripts more testable
What I'm saying is just make it a better framework by default. I know I know... "Laravel this, Symfony that, have you heard of Slim?". Yes... I get it. There are community-made solutions to the above problems. And you know what? They won't be as performant or integrated as if the runtime itself was upgraded to make them obsolete. It's not like routing and DI are that complicated (the two most obvious reasons frameworks are employed).
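To make the "not that complicated" claim concrete, here's a toy router and service locator, sketched in Python rather than PHP purely for illustration; the point is how little machinery the core ideas actually need.

```python
class Container:
    """Minimal service locator: register factories, resolve lazily, cache singletons."""
    def __init__(self):
        self._factories = {}
        self._instances = {}

    def register(self, name, factory):
        self._factories[name] = factory

    def get(self, name):
        # Instantiate on first request, then reuse the cached instance.
        if name not in self._instances:
            self._instances[name] = self._factories[name](self)
        return self._instances[name]

class Router:
    """Minimal router: map (method, path) pairs to handler callables."""
    def __init__(self):
        self._routes = {}

    def add(self, method, path, handler):
        self._routes[(method, path)] = handler

    def dispatch(self, method, path, container):
        handler = self._routes.get((method, path))
        if handler is None:
            return "404 Not Found"
        return handler(container)

# Usage: register a (hypothetical) service, wire a route to it.
container = Container()
container.register("greeter", lambda c: (lambda: "hello"))

router = Router()
router.add("GET", "/hello", lambda c: c.get("greeter")())
print(router.dispatch("GET", "/hello", container))  # hello
```

A real implementation needs path parameters, scoping, error handling, and so on, but the skeleton is a few dozen lines; exactly the kind of thing a runtime could ship natively.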
I like that I can write a simple API in PHP without pulling in a single dependency! It makes so many things easier (the right things too). Now let's just extend that to all of the basic facets a modern web framework offers!
The odds of those things being worked into php-src are very, very small. Most of the time there has been discussion on these topics on the internals mailing list, it was clear that the majority favored keeping those things in userland.
The problem with templating systems, routers, DI containers, testing frameworks, etc.. is that they are all very opinionated. There is no one way to do it, and each implementation has upsides and downsides.
On top of that, the PHP ecosystem is so mature that for each of the possible implementations of these features, there is a stable, production-ready and actively maintained library/framework. Why reinvent the wheel?
An important detail is that most of these frameworks have also built a business around their product, which means that people are being paid to work on them. How would the financial situation work if you moved all that into the core?
For me, the current situation is close to the ideal one. For example, the recent addition of fibers into the core in PHP 8.1 was a logical step to allow cooperative multi-threading. It will be used by great userland frameworks like Revolt, Amp, ReactPHP and more. The full event loop feature would be too heavy for the core.
Plus, Composer is so great that installing any of these packages is a breeze anyway.
I used to look down on PHP, but after being forced to use it for a while I begrudgingly came to admire it for all the reasons that you've pointed out. And I started using PHP by choice, generally for writing small scripts (no frameworks or libraries) that I could "deploy" simply by copying the .php file into /var/www. It's a very productive language when used as intended.
PHP scripts on shared webhosting was the original "serverless" and you didn't have to be too careful about object lifetimes and leaking database connections because as soon as the request was finished the entire process was nuked from orbit.
But "modern PHP" is the complete antithesis of everything that I grew to admire about PHP because it's just another Java or C# with awful syntax and weird equality rules and gigantic frameworks with thousands of classes and interfaces.
I would very much like to see a version of PHP that dropped all the nasty namespace and object-oriented misfeatures and just focused on cleaning up the language semantics and enhancing the built-in framework features...
> because of the runtime! It's so dead simple to develop and deploy PHP.
As someone who has had to deploy PHP applications: hell no. For developers it might be easy, but for admins, the runtime is nowhere near good. Sure, if you deploy your site using scp, have already discovered rsync for the job, or even use the cutting-edge approach of a git checkout on the server, you might think it's great. But that's how you take down sites or cause service interruptions.
The problem is not that it cannot be done right; the problem is that the runtime enables and even promotes such bad practices, and makes doing it properly harder than it should be. And that's before we even get into PHP extensions, performance tuning, profiling, scaling, ...
I deploy PHP applications all the time! I simply cannot share your sentiment though.
I cannot think of an easier deployment paradigm than what essentially amounts to a copy and paste in the simplest case (i.e. what we should strive for), and maybe one or two more steps in a more complex case (e.g. `composer install`).
An entire PHP installation can literally be copied and pasted onto another machine and "just work" (though you may need to fiddle with some paths in the .ini). Similarly, PHP is fast enough that it's unlikely to ever be the bottleneck in terms of performance/scaling for 99% of use-cases.
I suppose I can agree that PHP may lend itself well to certain kinds of "bad practices", but that's true for any system if not managed well.
I think we live in different worlds, and that's part of PHP's problem: many of its devs and users do.
From my POV, manually logging in on a server has been something for emergencies only for a long time now (10y+), and if it's frequently required or even essential to deploy and run an application, that is a sign of bad practices. So the last thing I want to read about deploying some software is "can literally be copied and pasted onto another machine". Maybe you can do that, but you never ever should even have to consider doing this. Where is your versioning, CI/CD (testing, quality gates, automated deploys)? How do you roll back in case of a problem? How do you track when new versions were deployed? How do you address patching, runtime differences, active PHP modules, and PHP-level config? What webserver do you use, and how does it need to be tuned for your workload? PHP-FPM? Do you expect something specific? Do you rely on .htaccess files? The list goes on and on.
These things are a major hassle at the scale I've had to deploy PHP services. The best option is to containerise them, and then you have a bunch of other challenges because the process-per-request runtime design impacts monitoring, metrics, autoscaling, ...
Sorry. I wasn't advocating for "copying and pasting", rather, I was using that as an example of how simply PHP can be managed. More generally, the simplest case is often a good proxy for what to expect when dealing with more complex scenarios. Of course we automate the above behind the proper pipelines!
Many of your other concerns are big "nothing burgers". It's not like versioning, patching, config, tuning, monitoring, etc. are any less of a pain in other runtimes. At worst you could argue that it's different.
The separation between PHP and the web server makes developing PHP really easy, because the web server handles the request for you.
But you pay the price when configuring your environment. Apache conf, php.ini, php modules, htaccess, etc.
To avoid too much craziness I tend to avoid custom htaccess files, unusual php modules, or weird php.ini settings to minimize the friction when deploying.
For large projects you are almost required to run containers, but even then you can't escape the custom bash script.
Farewell FFmpegKit. You will be missed.