The question posed is, “With agents, does it become practical to have large codebases that can be read like a narrative, whose prose is kept in sync with changes to the code by tireless machines?”
It's not practical to have codebases that can be read like a narrative, because that's not how we want to read them when we deal with source code. We jump to definitions, arriving at different pieces of code by different paths, for different reasons, and presuming there is one universal, linear, book-style way to read that code is frankly absurd from this perspective. A programming language should be expressive enough to make code read easily, and tools should make it easy to navigate.
I believe my opinion on this matters more than the opinion of an average admirer of LP. By their own admission, they still mostly write code in boring plain text files. I write programs in org-mode all the time. Literally (no pun intended) all my libraries, outside of those written for a day job, are written in Org. I think it's important to note that they are all Lisp libraries, as my workflow might not be as great for something like C. The documentation in my Org files is mostly reduced to examples — I do like docstrings but I appreciate an exhaustive (or at least a rich enough) set of examples more, and writing them is much easier: I write them naturally as tests while I'm implementing a function. The examples are written in Org blocks, and when I install a library or push an important commit, I run all tests, of which examples are but special cases. The effect is that this part of the documentation is always in sync with the code (of course, some tests fail, and they are marked as such when tests run). I know how to sync this with docstrings, too, if necessary; I haven't: it takes time to implement and I'm not sure the benefit will be that great.
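For illustration, a minimal sketch of what such an example-as-test block can look like in Org (the library and the `interleave` function are hypothetical, made up for this sketch):

```org
#+begin_src lisp :results value
;; A usage example that doubles as a test: when the test suite runs,
;; this form is evaluated and its value compared against the recorded one.
;; MY-LIB:INTERLEAVE is a hypothetical function, used only for illustration.
(my-lib:interleave '(1 2 3) '(a b c))
#+end_src

#+RESULTS:
: (1 A 2 B 3 C)
```

The recorded `#+RESULTS:` block is what makes the example checkable: if the implementation drifts, the test run flags the mismatch.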
My (limited, so far) experience with LLMs in this setting is nice: a set of pre-written examples provides a good entry point, and an LLM is often capable of producing a very satisfactory solution, immediately testable, of course. The general structure of my Org files with code is also quite strict.
I don't call this “literate programming”, however — I think LP is a mess of mostly wrong ideas — my approach is just a “notebook interface” to a program, inspired by Mathematica Notebooks, popularly (but not in a representative way) imitated by the now-famous Jupyter notebooks. The terminology doesn't matter much: what I'm describing is what the silly.business blogpost is largely about. The author of nbdev is in the comments here; we're basically implementing the same idea.
silly.business mentions tangling, which is a fundamental concept in LP and a good example of what I dislike about it: tangling, like several concepts behind LP, is only a thing due to limitations of the programming systems that Donald Knuth was using. When I write Common Lisp in Org, I do not need to tangle, because Common Lisp does not have many of the limitations that apparently influenced the concepts of LP. The “reading like a narrative” idea is similarly misguided, for the reasons I outlined in the beginning. Lisp is expressive enough to read like prose (or like anything else) to as large a degree as required, and, more generally, to have code organized as non-linearly as required. This argument, however, is irrelevant if we want LLMs, rather than ourselves, to read codebases like a book; but that's a different topic.
This is a good opinion. Maybe humans do not really know how to teach this skill of reading code. We have no good, exact protocol for it because people rely on their own heuristics.
There is a difference. I write a Lisp program, and it runs everywhere Emacs does, including my Android device. I wouldn't do that with Java, as that would be extremely impractical. (I might be biased: I probably wouldn't be programming at all if Lisp didn't exist.)
Lisp is the №1 enabler of Stallman's “Freedom 1”. Smalltalk might be close but I can't really say, and it doesn't look like I can install Pharo on my Android device. Nothing else even comes close.
pfff... Lisps exist for pretty much every platform and domain these days. You need to target the JVM - here's Clojure. Javascript - ClojureScript and nbb. You need system scripting - you can use babashka or Janet. OTP - there's LFE. You need iOS/Android development - there's ClojureDart and React Native. Low-level systems programming - try Carp or Ferret. Data science and numerical computing - Hy gives you Python interop, and there's clj-python/libpython-clj. Microcontrollers and embedded - uLisp covers it. Academic/research work - still can't beat Common Lisp or Scheme.
The OP is wrong about creative freedom being overblown. Lisp gives you creative freedom - if you choose to accept it.
> Bolting packages (module system) on top of symbols leads to some problems in practice.
Packages are namespaces, unrelated to modules. They are not used to organize dependencies but to remove naming ambiguity and to provide encapsulation.
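A minimal sketch of what that means in practice: two packages can each intern a symbol with the same name, and the package prefix removes the ambiguity (the package names here are made up for illustration).

```lisp
(defpackage :geometry (:use :cl) (:export :size))
(defpackage :clothing (:use :cl) (:export :size))

;; Same name, distinct symbols: no clash, and each package controls
;; which of its symbols are exported (that's the encapsulation part).
(eq 'geometry:size 'clothing:size) ; => NIL
```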
> The progress in CL development is severely hampered by the set-in-stone standard, small pool of users
The language is extensible, compilers do get new features added, sometimes in a coordinated manner. The standard being stable is not that significant an obstacle to moving forward; the pool of users being small is much more noticeable. Yet more noticeable is that the community is less healthy than the Emacs one. The comparison is valid because development in Emacs Lisp is attractive for the same reasons, unique to it and CL; the two also happen to have extremely similar syntax. Elisp is moving forward steadily, and in recent years quite rapidly. CL would progress similarly, or better, if it had something like Emacs to provide that synergetic effect.
An ordered pair is a useful data structure, and it's beneficial to have special short names for accessors to its elements. Humanity got very lucky with the names “car” and “cdr”. Naggum does point that out.
Any rule about any language could be labeled a barrier to its success, because any such rule adds cognitive load, making the language slightly harder to learn than it would be without that rule. What matters more is how much cognitive load remains after you learn the rules, and Common Lisp is very successful in that respect.
Note that the value returned by cons is guaranteed to have type cons, and it is guaranteed that a fresh cons is created. Neither is true for list*; note that its return type is t.
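Concretely, this is standard behaviour, easy to check at any REPL:

```lisp
(type-of (cons 1 2))  ; => CONS, and the cell is guaranteed to be freshly allocated
(list* 'a)            ; => A, the argument itself: a symbol, not a cons
(list* 1 2 '(3 4))    ; => (1 2 3 4), like nested CONS calls onto the last argument
```

The single-argument case is where the difference bites: `list*` of one argument just returns it unchanged, whatever its type, which is why its declared return type is `t`.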
More evidence that linear algebra concepts are terribly confusing when laid out without geometric background.
This has been criticised for decades. Competently and popularly presented criticism goes back at least as far as 1957 (Artin's Geometric Algebra; see the discussion of determinants near the beginning), but linear algebra is still often presented decoupled from geometry.
I wonder, though, if there's a purely algebraic approach to matrices that explains as much as (or more than) the geometric one. Maybe approaching the algebra of matrices consistently as an example of a category algebra could be illuminating.
I learned abstract linear algebra first, and didn't learn the geometrical interpretations until I taught myself many years later so I could write an asteroids clone in SVG.
I don't think it was pedagogically a problem, except that I couldn't bring myself to care about matrices when I was learning them... It was very easy to take my abstract knowledge and apply it, and for me it might have been harder the other way around.
In retrospect, a hybrid applied/theoretical topic (like Reed-Solomon encoding and recovery) might have perked me up. But I might also have been a strange case.
If by “abstract” linear algebra you mean “course that starts with the definition of vector space”, then it is geometric enough in the sense Artin talks about.
It is pedagogically a problem for people who ask questions like the one in the topic. But it can also be a problem when one encounters a bilinear form and a linear operator in practice and can't distinguish between the two. I can't think of a specific example, but I was once asked about some problem (in electrical engineering, iirc) where this was the source of confusion: some transformations of a (square) matrix were natural while others were not.
Some people feel strongly about the topic—mostly those with “pure math” inclinations.
For a non-technical text about a basic, broadly-appealing topic, this has a disappointingly high number of mentions of something called “FaceTime”.
I have not yet read it all, but after the second mention I already doubt it's worth reading further, if the author just expects all readers to be on Facebook, or to be familiar with a particular program for connecting to others, or with the experience of using one.
I'm sorry you're disappointed by the fact that I said "FaceTime" three times in a 6000 word essay about connection with oneself.
The essay is not predicated on you using Facebook, having an iPhone, using FaceTime, Zoom, or any specific program, but I think that'd be pretty clear to you if you read it before commenting.
FaceTime is not a complicated concept. It’s a phone call but with video. I’m sure you already know this, even if you’re pretending not to (to look “cool” I guess). If the author explained every single word used, he’d be writing a dictionary and not an article.
There may be different aspects to programs like this. They may provide calls, and they may provide other features that are significant for understanding their influence on users but non-obvious to non-users.
I never encountered FaceTime, and I have avoided Facebook since I first saw it. Also, curiously, I've recently observed a group of like-minded people divided, in a fairly confrontational manner, unable to listen to each other, with one side being, according to my observations, overrepresented on Facebook compared to the other, and the other overrepresented on Twitter.
This can be attributed to the “echo chamber” phenomenon, or to platform preference among opinion leaders. But my null hypothesis now is that Facebook, unlike Twitter, actually makes its users worse communicators, and contributes significantly to users' feelings of isolation. I'm sure the details of a means of communication matter.
FaceTime is not a Facebook product. It's also bad form to accuse others of being poor communicators while ignoring all obvious social cues to continue with a preformed rant against an entity unrelated to the topic at hand.
I think you might gain something from this article for its actual content. If it helps, replace Facebook/Twitter with Hacker News and FaceTime with whatever video or phone calls you use. Or engage in some suspension of disbelief to get to the real point. The author talks about ways of really connecting with people.
Your computer can help you look up the definitions of words you don't understand.
e.g. in Firefox, double-click to select the word, then right-click and there is "Search DuckDuckGo for 'word'" in the menu. Dictionary extensions are available for in-browser definitions, and searching DuckDuckGo or Google for "define:word" will usually bring up a definition in the search results page, or if not then in the top few result links.
So-called “anti-piracy” measures are futile. As long as something can be played back, it can be recorded, copied, re-played.
If watermarks become an issue, people will develop methods to mangle watermarks and make them unreliable in figuring out the source of the leak.
Any attempts to prevent copying by technical means are at odds with public ownership and use of general-purpose computers. The only reliable way to prevent people from making and sharing copies of data is to ban such computers or restrict their usage.
> even if he was openly anti-Putin and supported opposition in any way
This implies he does not support the opposition in any way. This is not true. He does support opposition rallies (when they are held to support his cause, true), but such is the political climate in Russia that publicly supporting those who vocally oppose state policy is the domain of the very few and quite brave.
When Libertarian Party of Russia was organizing a rally in 2017 to protest Telegram ban, Durov contacted the Party himself. (I'm a member of LP RU.)
I don't count on Telegram's security, but Durov's public image and actions are certainly noticeable and appreciated by the opposition.
> This implies he does not support opposition in any way. This is not true.
It implied just that I don't know for sure, and it doesn't change much. Even Khodorkovsky is not considered a threat worth eliminating - he serves the cause much better as a propaganda target. Durov has even more to offer the Russian gov't (digital technology is the new arms race), so he has an even smaller chance of coming to bodily harm.