electromech's comments | Hacker News

> My real worry is that this is going to make mid level technical tornadoes...

Yes! Especially in the consulting world, there's a perception that veterans aren't worth the money because younger engineers get things done faster.

I have been the younger engineer scoffing at the veterans, and I have been the veteran desperately trying to get non-technical program managers to understand the nuances of why the quick solution is inadequate.

Big tech will probably sort this stuff out faster, but much of the code that processes our financial and medical records gets written by cheap, warm bodies in 6 month contracts.

All that was a problem before LLMs. Thankfully I'm no longer at a consulting firm. That world must be hell for security-conscious engineers right now.


Hold my beer...

...

On second thought, grab me another beer.


Walking to the bathroom to pee out all the beer counts as moving right?


n = 404 p = 0.003

I'm too dumb to understand how that math works.


It doesn’t. Or at least, the way it’s presented is misleading at best.


Perhaps if you sit some more…


I love the game! I hate hate hate the timer though. Other than the timer I'd happily add this to my daily word game routine.


Which workload can't it do? I've had good success with jaq performance.


It bombs out on the jq program I use for the 2nd corpus that I mentioned. On further investigation, the show-stopping filter is strftime. In the jaq readme this is the only not-yet-checked box in the compatibility list, so perhaps some day soon.
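Curious whether your own jq scripts would hit the same wall? A quick probe like this (assuming jq is on your PATH) exercises the strftime builtin that jaq hasn't implemented yet:

```shell
# jq formats a broken-down time (from gmtime) with strftime:
jq -rn '0 | gmtime | strftime("%Y-%m-%dT%H:%M:%SZ")'
# prints: 1970-01-01T00:00:00Z
# The same one-liner currently fails under jaq, since strftime is the
# unimplemented builtin from its compatibility list.
```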


I'd be curious how the performance compares to this Rust jq clone:

cargo install --locked jaq

(you might also be able to add RUSTFLAGS="-C target-cpu=native" to enable optimizations for your specific CPU family)

"cargo install" is an underrated feature of Rust for exactly the kind of use case described in the article. Because it builds the tools from source, you can opt into platform-specific features/instructions that often aren't included in binaries built for compatibility with older CPUs. And no need to clone the repo or figure out how to build it; you get that for free.

jaq[1] and yq[2] are my go-to options anytime I'm using jq and need a quick and easy performance boost.

[1] https://github.com/01mf02/jaq

[2] https://github.com/mikefarah/yq


> I'd be curious how the performance compares to this Rust jq clone

Every once in a while I test jaq against jq and gojq with my jq solution to AoC 2022 day 13 https://gist.github.com/oguz-ismail/8d0957dfeecc4f816ffee79d...

It's still behind both as of today


As a bonus that people might not be aware of: in the cases where you do want to use the repo directly (either because there isn't a published package, or because you want the latest commit that hasn't been released), `cargo install` also has a `--git` flag that lets you specify the URL of a repo. I've used this a number of times, especially as an easy way to quickly install personal stuff that I throw together and push to a repo, without needing to put together any sort of release process or manually copy binaries around to personal machines and keep track of the exact commits I used to build them.


Thanks for the recommendation! I'm reading one of the chapters now. The examples are giving me ideas and helping me to see a bigger picture.


I'm genuinely intrigued by Dagger, but also super confused. For example, this feels like extra complexity around a simple shell command, and I'm trying to grok why the complexity is worth it: https://docs.dagger.io/quickstart/test/#inspect-the-dagger-f...

I'm a fanboy of Rust, Containerization, and everything-as-code, so on paper Dagger and your Rust SDK seems like it's made for me. But when I read the examples... I dunno, I just don't get it.


It is a perfectly valid criticism: Dagger is not a full build system that dictates what your artifacts look like, unlike, say, Bazel or Nix. I think of Dagger as a sort of interface that lets me split my build and CI into smaller logical bits that I can test, and lets me rely on the community for parts of it as well.

In the end you do still end up slinging `apt install` commands, for example, but you can test those parts in isolation: does my CI actually scan for this kind of vulnerability? Does it install the Postgres driver? When I build a Rust binary, is it musl-linked and does it work on scratch images?

In some sense Dagger feels a little bit like a programmatic wrapper on top of Docker, because that's actually quite close to what it is.

You can also use it for other things, because in my mind it is the easiest way of orchestrating containers: for example, running Renovate over a list of repositories, spawning ad-hoc LLM containers (pre-Ollama), etc. Lots of nice uses outside of CI as well, even if CI is the major selling point.


I'm merely an outsider to Dagger, but I believe the page you linked to would give one the impression "but why golang[1] around some shell literals?!" because to grok its value one must understand that m.BuildEnv(source) <https://docs.dagger.io/quickstart/env#inspect-the-dagger-fun...> is programmatically doing what https://docs.github.com/en/actions/writing-workflows/workflo... would do: define the docker image (if any), the env vars (if any), and other common step parameters

That conceptually allows one to have two different "libraries" in your CI: one in literal golang, as a function which takes in a source and sets up common step configurations, and the other as Dagger Functions via Modules (<https://docs.dagger.io/features/modules> or <https://docs.dagger.io/api/custom-functions#initialize-a-dag...>), which work much like GHA uses: blocks with their organization/repo@tag style setup, with the grave difference that Dagger Functions can run locally or in CI, which for damn sure is not true of GHA uses: blocks

The closest analogy I have is from GitLab CI (since AFAIK GHA does not allow yaml anchors nor "logical extension"):

  .common: &common # <-- use whichever style your team prefers
    image: node:21-slim
    cache: {} # ...
    variables: {} # ...
  my-job1:
    stage: test
    <<: *common
    script:
      - npm run test:unit
  my-job2:
    stage: something-else
    extends: .common
    script:
      - echo "do something else"
1: I'm aware that I'm using golang multiple times in this comment (and am for sure a static-typing fanboi), but as the docs show, they allow other languages too


Exactly. When you compare Dagger with a lot of the other CI formats, it starts to seem more logical. I've spent so much time debugging GitHub Actions, waiting 10 minutes to test a full pipeline after a change, over and over again. Dagger has a somewhat weird DSL embedded in the programming languages as well, but at least it's actual code that I can write for loops around, parameterize, and reuse. Beats a Groovy file for Jenkins ;)


I liked tslog last time I tried it.


No mention of Ferrocene other than a "further reading" bullet point at the end. Are they using it? Does that help with respect to getting a device safety certified?

From https://ferrocene.dev:

> ISO26262 (ASIL D), IEC 61508 (SIL 4) and IEC 62304 available targetting Linux, QNX Neutrino or your choice of RTOS.

The article also mentions one of those standards:

> Sonair is developing a safety-certified device (IEC 61508 and SIL2).


Disclaimer: I’m one of the founders of Ferrous Systems, the company behind Ferrocene.

One of the goals when certifying Ferrocene was that it serve as a drop-in replacement for rustc. So while we’re happy if you start out building your product with Ferrocene (and have made our pricing model compatible with that), going the route of “rustc first, then slot in Ferrocene” is entirely supported. There are also sometimes good reasons to pick that approach: Ferrocene is much more limited in terms of target support, and while we may have a timeline to deliver the target you need at the qualification level you need, we might not ship it yet (though we can usually enable support relatively quickly).

That said, I’m quite confident that using Ferrocene gives you a faster route to certification than trying on your own. I’d not be surprised if we hear from them.


Thank you for your work on Ferrocene! I particularly appreciate that you're open about the price, at least for a starter license.

We're using Rust for space applications, and while we don't have a need to certify our software yet, we'll keep Ferrocene in mind for the future.

EDIT: Oh, the Ferrocene compiler is fully open source. I didn't expect that!


We’d also be more open about the rest of the plans, but it’s a hard information design problem more than anything else - the website desperately needs an update. Too much to do and too little time. If I just didn’t spend so much time around here ;)

The compiler being open source is not the big thing - it’s mostly upstream rustc with very little modification. However, all of the safety manuals are open too, so you can see what you’ll get.

I see that you’re from Berlin - if you’re interested in a chat, ping us.


> Too much to do and too little time. If I just didn’t spend so much time around here ;)

Heh, very relatable :)

> I see that you’re from Berlin - if you’re interested in a chat, ping us.

I also just saw you're based in Berlin. Will definitely ping you when I'm back. Particularly interested in your "Rust Experts" offering.

One question about Rust safety certification in general:

How do you deal with dependency sprawl? For example, if you write a basic async program with Tokio and friends, you may end up depending on >200 crates. Would you or your clients certify them one by one? Are they much more picky about which dependencies they take on?


Dependencies. Hard topic. The question is less about the number of crates and more about the amount of code you pull in. In the end, every line needs to be certified. The team that wrote sudo-rs blogged about their approach here: https://www.memorysafety.org/blog/reducing-dependencies-in-s...

Essentially, expand your use for initial development and whittle down later as much as possible.

That said, Tokio is not going to be a good certification candidate - but that’s a topic for a longer conversation. (TL;DR: The Tokio project has aims and goals that are good for their use, but problematic when it comes to writing safety certified software)


That makes a lot of sense, thanks!


Really glad you all are out there doing what you do. In my opinion it's the most important thing for Rust's long-term success and longevity. No clue what the costs are like for running on Ferrocene, but maybe one day I'll have a project that'd benefit from it.


So the cost for the basic level (quality managed, one target architecture) is pretty low: about 240 EUR/human per year. CI runners are free. Certification material is billed separately to allow speculative and experimental usage; you only pay for it if and when you need it.

A lot of projects can benefit from that level of assurance, since we have a different support-tier policy than upstream Rust: we treat targets as Tier 1 that upstream doesn't. And you get signed installers for Windows, etc.

Also, with the upcoming CRA legislation, using a quality managed toolchain will make your life easier - one part you don’t have to manage.


That is incredibly reasonable pricing; thank you for being open about it. Hoping one day we talk from a business perspective. Cheers


We are already talking :)

