Yeah, LLMs are a solution to the cold-start problem. They're also easy to integrate, and if you know what you're doing in terms of evals, post-processing and so on, you can get excellent performance out of them. Plus they can do semantic classification and reasoning that you won't get out of some bespoke traditional DS/ML model.
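To make the "semantic classification plus post-processing" point concrete, here's a minimal sketch. The `call_llm` function is a hypothetical stub (stand in your actual model API); the real point is the post-processing step that forces free-form model output into a fixed label set, which is part of what "knowing what you're doing" buys you.

```python
LABELS = ["billing", "bug_report", "feature_request", "other"]

def call_llm(prompt: str) -> str:
    # Hypothetical stub: a real system would call an actual LLM endpoint.
    # Models often answer with extra casing/punctuation, e.g. "Bug Report."
    return "Bug Report."

def normalize_label(raw: str, allowed: list[str]) -> str:
    """Post-process free-form model output into one of the allowed labels."""
    cleaned = raw.strip().lower().rstrip(".").replace(" ", "_")
    return cleaned if cleaned in allowed else "other"

def classify(ticket: str) -> str:
    prompt = (
        f"Classify the support ticket into exactly one of {LABELS}. "
        f"Answer with the label only.\n\nTicket: {ticket}"
    )
    return normalize_label(call_llm(prompt), LABELS)
```

This gives you a working zero-shot classifier on day one (no training data), and you can swap in a traditional model later once you've accumulated labels.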
I feel like a licensing process for software engineers would
A) test lots of skills that are common but not universal. I'm thinking JavaScript trivia here, where I don't write any JavaScript in my professional capacity as a software engineer; but there are many people who think Software Engineer == JavaScript Programmer
B) shine too much of a light on the fact that this industry is full of people who demand high salaries but can't program their way out of a paper bag
Also discussed on HN. Yeah, I can ignore them, but a lot of people watch those videos and fall for the grift (going by their views), and that's sad. It also personally annoys me when YouTube recommends them to me because it thinks I'm interested in software.
I make sure to hit "not interested" the second I see anything I very much don't want popping up in my feed. I don't want mine to drift towards the average feed of the lowest-effort, sensationalist garbage.
It seems to help. But it's just one factor. I also have a lot of subscriptions to help guide the algorithm. And it seems most heavily weighted on things you've recently watched, so if you ever leave youtube playing while you're not actually watching it, you might need to manually remove videos from your watch history that don't align with what you want to see suggested.
In case you start watching such a video (and maybe in general), it’s probably more effective to downvote it and remove it from your watch history. And when you use “not interested”, there are two “tell us why” follow-up options “already watched” and “don’t like”. Selecting the latter may be necessary for “not interested” to have a stronger effect.
I don’t know if YouTube Premium makes a difference, but I don’t see highly clickbaity thumbnails very often.
Also in Europe and can only agree. Granted I'm on the 20x plan, but I have yet to hit a limit once and I'm using Claude 12h+ per day on multiple projects.
There were no "dark ages"; that's the same common-wisdom blunder as "in the middle ages everybody was dressed in drab grey clothing, ate gruel and walked through mountains of poop everywhere". It was a time of transition away from the slave-powered empire to decentralized kingdoms and ultimately the Europe of today. It was by no means a time of standstill.
As far as I can tell, the dark ages were called the dark ages because there wasn't much evidence to be found: writing was less prominent during that time.
> It was a time of transition away from the slave powered empire to decentralized kingdoms and ultimately the Europe of today.
Yes, Europe did not have dark ages; it only had a period of population decline, lower emissions, less building, fewer inventions, fewer records, and severed trade networks.
The world is projected to hit population decline already sometime between 2060 and 2080, so I guess the younger ones of us will find out definitively whether it's a good or bad thing.
I am very sorry, but you are wrong. Between the fall of Rome (476 AD) and the Carolingian empire (~800 AD) there was a period of not only standstill, but regression, devolution and forgetfulness. Compared with what came before, it can be rightly called the dark ages.
It was the dark ages in Europe, but it was really a golden age in the Islamic empires, which far surpassed the Greek and Roman. The epitome was the Baitul Hikmah, or House of Wisdom, established at the time of the Abbasid Caliphate [1].
>Between the fall of Rome (476 AD) and the Carolingian empire (~800 AD)
During this time Al-Khwarizmi was born and the House of Wisdom was established. He made contributions in algebra (the book Al-Jabr) and many other fields, and the word "algorithm" literally originated from his name [2].
Many Greek and Indian books were translated into Arabic by Islamic scholars during this mythical, so-called dark age.
Many, many more new books were written that improved on and innovated beyond the prior knowledge. These books greatly expanded the state of the art, centered in both Baghdad, Iraq and Toledo, Spain.
The Almagest by Ptolemy that Galileo studied was an Arabic translation of Ptolemy's work. Of course, by Galileo's time, Islamic astronomers' knowledge and contributions had already far surpassed the Greek and the Indian.
The Islamic scholars were not only translating (which is itself progress) but also contributing to the body and foundations of knowledge, not only in astronomy but in many other fields: mathematics, the sciences (physics, chemistry, biology), medicine, engineering, geography, psychology, politics, economics, architecture, etc.
There are numerous other polymath scholars like Al-Khwarizmi, for example Al-Haytham, Ibn Sina, and Ibn Rushd, to name just a few. But the European community has been in denial for a long time and, in addition, has unfairly suppressed these Islamic scholars' contributions. They even literally changed the scholars' names, Latinizing them into lousy names like Alhazen, Avicenna, and Averroes to hide the fact that they were Islamic scholars. Imagine if people today changed a scholar's name like Newton to Nawab in the literature.
Heck, even Copernicus copied diagrams from earlier Islamic astronomers' books without proper citation, which in the modern era would be plagiarism [3]. If this happened today, a university president who plagiarized their book/thesis/paper would be asked to resign.
Open-weight models that run under your desk are not at frontier-model level, but they are getting closer. Improvements in agentic post-training and things like TurboQuant mean that even if all frontier labs pull the plug tomorrow, we will still have agents to work with.
TurboQuant is not a step change; it's more of a small incremental improvement to KV-cache quantization, and possibly (I'm unsure) to quantization more generally. I'm actually more positive about SSD weight offload, which opens up very large local models for slow inference (good enough for slow chat) on virtually any hardware or amount of RAM.
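For anyone unfamiliar with what quantization means here: the generic idea (not TurboQuant's actual algorithm, which is more sophisticated) is to store weights or KV-cache entries as small integers plus a scale factor instead of full-precision floats. A toy per-tensor symmetric int8 version:

```python
# Toy sketch of symmetric int8 quantization: the generic idea behind
# compressing weights or KV-cache entries, NOT TurboQuant's actual method.

def quantize_int8(xs: list[float]) -> tuple[list[int], float]:
    """Map floats to integers in [-127, 127] with one shared scale."""
    scale = max(abs(x) for x in xs) / 127 or 1.0  # avoid div-by-zero on all-zero input
    q = [max(-127, min(127, round(x / scale))) for x in xs]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate floats; per-element error is at most scale / 2."""
    return [v * scale for v in q]

vals = [0.03, -1.2, 0.57, 2.54, -0.9]
q, s = quantize_int8(vals)
approx = dequantize(q, s)
```

Each value now takes 1 byte instead of 2-4, at the cost of bounded rounding error; real schemes add per-block scales, asymmetric ranges, outlier handling, and so on to shrink that error further.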
Every AI/agentic thread on HN follows the same tension: builders want to build and solve problems. Code and task completion are implementation details to be handled on the path to the actual prize: solving the problem. And then there are the coders, who have honed their mechanical skill of implementation and derive their intellectual fulfillment from it. The latter crowd is having a rough time because much of that can be automated now; the former camp is happy because look at all the stuff that can now be built!