Where is the flaw in the following argument?
You can formally verify software/algorithms for correctness, but formal verification is difficult and takes a long time.
So why can't we do it piecemeal? Imagine something like StackOverflow, but instead of posting a question, you post a formal specification. The answers to your post are then formally verified pieces of software that satisfy it.
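To make the idea concrete, here is a toy sketch of what such a "posted specification" might look like, written in plain Lean 4. All names (`Sorted`, `SortSpec`) are illustrative, and the spec is deliberately simplified — a real posting would also demand that the output be a permutation of the input, not merely the same length:

```lean
-- A local, simplified sortedness predicate (not Mathlib's `List.Sorted`).
def Sorted : List Nat → Prop
  | [] => True
  | [_] => True
  | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

-- The "question" is this Prop. An "answer" must supply a concrete
-- `sort` function together with a machine-checked proof of `SortSpec sort`.
def SortSpec (sort : List Nat → List Nat) : Prop :=
  ∀ xs : List Nat, Sorted (sort xs) ∧ (sort xs).length = xs.length
```

An accepted answer would then be a pair `(sort, proof)` that the proof checker accepts, with no human judgment needed to decide whether it is correct.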
The advantage of this approach would be that the results would be useful forever. Right now we're always building on foundations of sand, because we can never really be sure that our algorithms are correct/optimal.
You could then go even further and connect the verified algorithms/components from this hypothetical StackOverflow. So instead of posting a new formally verified algorithm, you piece it together with parts that have been individually formally verified. That way you only have to verify the new additions.
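The "only verify the new additions" step can itself be stated as a (toy) theorem: if component `g` is verified to establish `Q` from precondition `P`, and component `f` is verified to establish `R` from `Q`, then the composition needs no new verification work beyond wiring the two proofs together. A minimal self-contained Lean 4 sketch, with all names hypothetical:

```lean
-- Glue lemma: composing two independently verified components.
-- `hg` and `hf` stand for the proofs shipped with the components;
-- the composed proof is one line.
theorem compose_specs {α β γ : Type}
    (P : α → Prop) (Q : β → Prop) (R : γ → Prop)
    (g : α → β) (f : β → γ)
    (hg : ∀ a, P a → Q (g a))
    (hf : ∀ b, Q b → R (f b)) :
    ∀ a, P a → R (f (g a)) :=
  fun a ha => hf (g a) (hg a ha)
```

The real cost shows up when `Q`, the guarantee of one component, doesn't exactly match the precondition the next component expects — that mismatch is where genuinely new proof work lives.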
This way we could build a database/repository of "eternal" software. Eternal in the sense that once a problem is solved, it stays solved forever, like a mathematical proof (as long as the specification captures what we actually wanted and the verification was performed correctly).
You could even add more things to the verification step, like performance analysis - that way we could immediately tell, for each of these algorithms/components, how fast it performs.
This of course wouldn't solve software development - instead it would shift the challenge to correctly formulating the problem in a formal specification.
Is this possible? If so, why isn't this being done?
2) it's a pretty rare skill set with specialist tooling, and it doesn't really scratch the itch that motivates most open source work
3) "correctly formulating the problem in a formal specification" is really hard, given the inability of most stack overflow posters to formulate questions rigorously
4) for almost all projects, we value mutability over eternity