I reached the same conclusion. It also made me realise how most technologies have degraded our lives.
Before the TV, people would go to the theatre; it's becoming hard to find a theatre these days. Artificial light is convenient, but it made billions of people develop sleep disorders, and we can't see stars at night. Mass food production supposedly nourished more people: veggies today have 20% of the mineral content they had 70 years ago. The list goes on and on.
> Mass food production supposedly nourished more people: veggies today have 20% of the mineral content they had 70 years ago. The list goes on and on.
I suggest you have a look at malnutrition rates 100 years ago vs. now. Without mass food production, we would not be able to sustain even 50% of the current population.
I think it's fairer to say that every technology comes with tradeoffs. Consider the wheel: before the wheel, people were probably more physically fit, but they couldn't move loads as large. Well, except in the Andes, where they figured out how to move gigantic stones, well beyond the weight any wooden wheel could have carried anyway, and to cut and place them into configurations that were earthquake-resistant.
Technology and civilization are path-dependent, and I think it's silly to make blanket statements about the merit of technological progress overall. Every choice (including the choice to do nothing) has unintended consequences. I would never condemn anyone for inventing a new technological solution to a problem, but once the systemic effects are understood, we do need the collective ability to course-correct (e.g. social media, AI, etc.).
Everything has tradeoffs. My point is that technology rarely yields a net positive. I could call out the positives, but those are obvious. It's the subtle costs of the benefits that almost never get discussed.
It's odd to me that you live in a place where it's hard to find a theatre. Living in a cosmopolitan city, there are so many theatres, with anything from professional shows to amateur dramatics, all at very reasonable price points.
Sure, Edinburgh, London, and New York have plenty.
My point is that technology displaces or replaces activities.
In many cities there are no theatres. To be clear, I meant performance theatres.
We used to consume live performance: drama, dance, and whatnot. Comedy, for instance, is now for the masses, more or less controlled, costing pennies to distribute over the air or via streaming platforms. These compete with a more valuable but harder-to-afford medium, so they win.
Is it a net positive that we can converse in almost real time, for virtually no cost, with niche communities on the other side of the world? Yes. Anyone can still walk into a café or a park and engage in conversation with others. But overall, the compounding of all tech advancements and what they displace is, I think, a net negative.
Not because I'm anti-progress or losing my job to technology; quite the opposite. I sat down and wrote out the list of how technology enables versus affects me personally, and other personas, from the upper-class worker in New York to the cocoa bean farmer in Ivory Coast. Overall, it appears that technology isn't beneficial to humanity.
I then challenge those who disagree. Typically, they haven't taken the negatives seriously. The few who concede to do so eventually agree that the question is in fact complicated, and abandon the debate.
That doesn't mean I'm right. I read the detractors here; perhaps there is something I missed. So far, there isn't.
It's also Alcazarsec, a company we had never heard of, listing its own product as the best on the list, whilst naming it as if it weren't their own product.
We're sorry if you had the impression it wasn't our own product. The second recommendation is "Alcazar Dead Man’s Switch," and the page is titled "Alcazar · Blog." We thought it was clear.
We recommend our own product because we think it's one of the best options out there. We want people to hear about it, while we also share information about our competitors and when one would choose us versus them.
Here is the text that made me assume it wasn't your own product:
> The public product page also says you can send different information to different contacts
In any case, writing a product comparison that includes your own product is subjective, so it's better to abstain. Now I do wonder whether this post is advertising or informing us.
Fair enough. It is an advertisement, like when you read a "What is the best free email account?" blog article on the ProtonMail blog; you know it is an advertisement for their own product.
As a company that builds stuff, you need to get people's eyeballs on your products. One way you do that is by advertising your product. This is not in a nefarious way; it is just the law of nature.
Do you have better advice on how to get people to know that we exist and to try our product? I am honestly listening.
To be clear, I wasn’t calling your article spam or strict advertisement. But avoiding dark patterns is better in the long run. I’m being honest: reading that article left me feeling fooled.
You’ve clearly put time into writing a detailed post and replying to comments, so here’s what I’d suggest: focus on strictly value-add articles, and yes, distribute them widely.
No catch. No reader left with an "I see what you did there" pinch.
By strictly value-add content, I mean exactly the kind of blog post you shared, but written without any attempt to name-drop your own product. You don’t need to. Your blog already lives on a subdomain.
It’s fine to include a simple footer like “Discover Alcazar products.”
In my experience, adding real insight without selling has a far greater effect, at least on me. I’m much more inclined to visit a product page when I don’t feel sold to. But the way your article is currently written, that inclination isn’t there. So I didn’t visit the product page.
What works best is when readers feel they’ve solved a problem, or at least realized one existed, and learned something, without ever feeling like they’re watching an infomercial.
Sales is, in my view, primarily about building trust; all other aspects are less relevant. You're right that being seen is important, perhaps even more than trust, but it remains pointless on its own.
Edit: the blog post on the Pixel with GrapheneOS being the most secure phone is the perfect example of value-add. It made me want to check what Alcazarsec has to offer.
Interestingly, we are still experiencing the technological momentum inspired and created by what OpenAI used to be: AI for humanity.
Given the initiative started circa 2017, much of the good remains. It's a hijacking of the creative geniuses who got together, now turning into cash-cow tech.
OpenAI played the charity card, coupled with a powerful altruistic pitch.
It didn't say: we believe that, in this field, an effective for-profit business should start as a non-profit, because that would yield innovation we can then skim money off down the road. That would have been transparent.
Not saying that was the intention at the start, but they flipped the game at some point. "Let's play chess, it's a better game." "Oh, I've decided we're now playing checkers. Sorry, I won."
It's quite irrelevant what the factions think. This or that model may be superior for these or those use cases today, and things will flip next week.
Also, RLHF means that models produce output according to certain human preferences, so it depends on which set of humans provided the feedback and what mood they were in.
On the contrary, I very much care about what the other factions think because I want to know if things have already flipped and the easiest way to do so is just ask someone who's been using the tool. Of course the correct thing to do is to set up some simple evals, but there is a subjective aspect to these tools that I think hearing boots on the ground anecdata helps with.
I read in the article that what matters is the process that leads to the (typically useless) result: what people get out of it.
Once I realized that this white-on-black contrast was hurting my eyes, I decided to stop, as I didn't want to see stripes for too long when looking away.
Some activities have outcomes that aren't strictly in the results.
Yeah, it was saying that what matters is the process of training people to be good scientists, so they can produce other, more useful, results. That's literally what training is, everywhere.
This argument boils down to "don't use tools because you'll forget how to do things the hard way", which nobody would buy for any other tool, but with LLMs we seem to have forgotten that line of reasoning entirely.
But to even *know* what is more useful, it is crucial to have walked the walk. Otherwise we will all end up with a bunch of people trying to reinvent the wheel, over and over again, like JavaScript "developers" who keep reinventing frameworks every six months.
> which nobody would buy for any other tool
I don't know about you, but I wasn't allowed to use calculators in my calculus classes precisely to learn the concepts properly. "Calculators are for those who know how to do it by hand" was something I heard a lot from my professors.
> But to even know what is more useful, it is crucial to have walked the walk.
I feel like people tend to forget that among the many things LLMs can do these days, “using a search engine” is among them. In fact, they use them better than the majority of people do!
The conversation people think they’re having here and the conversation that actually needs to be had are two entirely different conversations.
> I don’t know about you, but I wasn’t allowed to use calculators in my calculus classes precisely to learn the concepts properly. “Calculators are for those who know how to do it by hand” was something I heard a lot from my professors.
Suppose I never learned how to derive a function. I don’t even know what a function is. I have no idea how to make one, write one, or what it even does. So I start gathering knowledge:
- A function is some math that allows you to draw a picture of how a number develops if you do that math on it.
- A derivative is a function that you feed a function and a number into, and then it tells you something about what that function is doing to that number at that number.
- “What it’s doing” specifically means not the result of the math for that particular number, but the results for the immediate other numbers behind and in front of it.
- This can tell us about how the function works.
Now I go tell ClaudeGPTimini “hey, can you derive f(x) at 5 so that we can figure out where it came from and where it goes from there?”, and it gives me a result.
I’ve now ostensibly understood what a derivative does and what it’s used for, yet I have zero idea how to mathematically do it. Does that make any results I gain from this intuitive understanding any less valuable?
What I’ll give you is this: if I knew exactly how the math worked, then it would be far easier for me to instantly spot any errors ClaudeGPTimini produced. And the understanding of functions and derivatives outlined above may be simplistic in some places (intentionally so), in ways that may break it in certain edge cases. But that only matters if I take its output at face value.

If I get a general understanding of something and run a test with it, I’ll generally have some sort of hypothesis of what kind of result I’m expecting, given that my understanding is correct. If I know that a lot of unknown unknowns exist around a thing I’m working with, then I also know that unexpected results, as well as expected ones, require more thorough verification.

Science is what happens when you expect something, test something, and get a result - expected OR unexpected - and then systematically rule out that anything other than the thing you’re testing has had an effect on that result.
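To make that kind of verification concrete, here is a minimal sketch in Python. It is purely illustrative (the `central_difference` helper and the claimed value are hypothetical, not from the thread): even someone who knows no symbolic differentiation rules can numerically sanity-check an assistant's derivative claim with a symmetric difference quotient.

```python
def central_difference(f, x, h=1e-6):
    """Estimate f'(x) numerically via the symmetric difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Suppose the assistant claims the derivative of f(x) = x**2 at x = 5 is 10.
claimed = 10.0
estimate = central_difference(lambda x: x * x, 5.0)
print(abs(estimate - claimed) < 1e-4)  # True: the claim survives the numeric check
```

A wrong claim (say, 25 instead of 10) would fail the same check, which is exactly the "test against a hypothesis" loop described above.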
This is not a problem with LLMs. It’s a thing we should’ve started teaching in schools decades ago: how to understand that there are things you don’t understand. In my view, the vast majority of problems plaguing us as a species lies in this fundamental thing that far too many people are just never taught the concept of.
> I’ve now ostensibly understood what a derivative does and what it’s used for, yet I have zero idea how to mathematically do it. Does that make any results I gain from this intuitive understanding any less valuable?
From a science standpoint, I'd say whatever "results" you got are completely worthless.
> I’ll generally have some sort of hypothesis of what kind of result I’m expecting, given that my understanding is correct
And how do you know if your understanding is correct, if you are only taking what the LLM gives to you and you are not able to verify independently?
> Science is what happens when you expect something, test something, and get a result.
Right, but has any LLM come up with any hypothesis on its own? Has any AI said "given all this literature that I read, I'd expect <insert something completely out of the training data space>?".
Asking all of these questions after (allegedly) reading my entire comment means either you didn't pay attention, in which case I'm not going to spend any more effort responding; or you've completely missed the point, in which case I can probably save myself the effort anyway. In any case, if you're genuinely interested in answers to your questions instead of merely posturing, I suggest you re-read carefully and then make a better-faith attempt at engaging with it.
I'll leave these direct quotes from the comment as a hint:
> But that only matters if I take its output at face value. […] If I know that a lot of unknown unknowns exist around a thing I’m working with, then I also know that unexpected results, as well as expected ones, require more thorough verification.
The problem I have with your logic is that you are hedging your arguments so much that the whole point becomes meaningless.
If you are trying to argue that young aspiring scientists will be able to use LLMs to learn new concepts instead of doing the hard work themselves, then you also need to explain how they will be able to develop the skills to analyze and "run more thorough verification" INDEPENDENTLY of LLMs.
> then you also need to explain how they will be able to develop the skills to analyze and “run more thorough verification” INDEPENDENTLY of LLMs
I’m sure the students will manage. This is the exact same discussion we all went through during the rise of Wikipedia, just wearing a new hat. The answer is “vet your sources, don’t trust unsourced claims.” The way they’ll develop the skills is the same way aspiring scientists and students have developed them throughout human history: by having good teachers teach them.
Here’s a very simple program I thought of off the top of my head in a minute or two. I’m sure people whose job it is to create educational content will be able to come up with something far better:
Design a small research project with as many LLM-tailored pitfalls as possible. It involves real measurements and real data, and the students may use their LLM to whichever extent they wish. Then we compare results against the reference data, find out all the myriad ways in which LLMs can taint the data and the conclusions drawn from it, and then explore ways to mitigate them.
Probably not perfect and nitpickable to oblivion, but also not the hardest mental exercise I’ve ever subjected myself to.
Science did fine in a world where information took years or decades to travel the globe, people thought diseases were spread by evil mojo and we had a grand total of four liquids circulating inside our bodies, and scientists saying the wrong things were actively hunted down and silenced. It got there. It’ll do fine in a world where you can semantically search every single written source model trainers could get their hands on _and_ ground the results with references to tangible sources using the same natural language query.
> The answer is “vet your sources, don’t trust unsourced claims.”
This was already a problem for Wikipedia (articles being written which, upon further investigation, were based on nothing but Wikipedia itself). With LLMs themselves facilitating AI slop and plagiarism, this problem reaches a scale at which it becomes impossible to control.
> I’m sure the students will manage.
The problem with your hubris is that you are not going to be the one solely facing the fallout when this blows up.
I have yet to see a single substantive argument from you that isn’t some sort of paraphrase of “this is definitely going to blow up because the article says so and I agree”. Three times now you’ve asked me to provide a detailed pitch deck containing every single solution for every single problem, while offering absolutely nothing yourself that couldn’t just as well have come from an LLM, for how much meaningful content it had.
I’ll get back to you as soon as you make an actual point that’s based in some sort of precedent, some sort of data.
Basically, stop indignantly demanding that I contribute and start doing it yourself. I’m tired of having to spell the same thing out for you in entire paragraphs over and over only to have you refuse to even make an attempt at comprehension, cherry pick two lines, and proceed to add absolutely nothing substantial.
All I'm saying is: "I do not know what the real upside is of leaving the current practices in academia and education in favor of 'let the LLM guide you'." If it were my ass on the line, I would apply the precautionary principle, and I wouldn't take any significant bets on my future around this.
You, on the other hand, are the one "being sure" that everything will be fine, and of course there is no way for you to bring actual evidence, because all you have is conjecture. So, given there is no way for you to back up your argument with evidence, the next best thing you can do is put some Skin In The Game: can you back up your beliefs with actions? Are you willing to take any substantial risk in case your bet doesn't pay off?
> “I do not know what the real upside is of leaving the current practices in academia and education in favor of ‘let the LLM guide you’”
The fact that you think “let the LLM guide you” is the argument I’ve been making tells me everything about how honestly you’ve engaged with it. I’m done here.
And the fact that you only shut up after being asked what you're willing to put on the line tells me how devoid of meaning your argument is, no matter what it is.
> This argument boils down to "don't use tools because you'll forget how to do things the hard way", which nobody would buy for any other tool,
This is false. There absolutely are people that fall back on older tools when fancy tools fail. You will find such people in the military, in emergency services, in agriculture, generally in areas where getting the job done matters.
Perhaps you're unfamiliar.
The other week I finished putting holes in fence posts with a bit and brace, as there was no fuel for the generator to run corded electric drills and the rechargeable batteries were dead.
Ukrainians, and others, need to fall back on strategies that don't rely on GPS, and have done so for a few years now.
In the '80s, the Americans thought the Russians were backwards for still using vacuum tubes in their military vehicles. Later they found out the tubes were used because they are more tolerant of EMP from a nuclear blast.
>This is false. There absolutely are people that fall back on older tools when fancy tools fail.
>The other week I finished putting holes in fence posts with a bit and brace, as there was no fuel for the generator to run corded electric drills and the rechargeable batteries were dead.
It depends on the task though. If you are in a similar scenario as with your fence posts and want to edit computer programs, you can't. (Not even with xkcd's magnetic needle and a steady hand). ;-)
As technology marches on it seems inevitable that we will get increasingly large and frequent knowledge gaps. Otherwise progress would stop - we need the giant shoulders to stand on.
How many people in the world can recreate an ASML lithography machine, versus how many people survive by doing something that requires that machine to exist?
>Solar panels charging old thinkpads suddenly doesn't work, or are we reliant upon software that requires cloud services to function in your scenario?
It's your scenario: you said you had no batteries and had to drill holes by hand. I'm saying that's fine for drilling holes, but if you have no batteries (or solar panels; those could charge your power tools as well, so they're not in the scenario!) you can't alter computer code by hand.
Leaving computers aside, take a piano player: he can play a piano but might not know how to build one.
A doctor can interpret X-rays but he can't build an X-ray machine.
And so on. We are living in the reality where many, many people are using tools they don't understand how to make already, and have done so for 100+ years.
>Some .. are you advocating the answer should be none, in your future?
I'm not advocating anything, I'm just saying that this is the reality already, it's not new with AI.
Ideally we'd forever keep a chain of key people who know how to make everything from atoms up, but realistically that chain will break. It's happened before; the medieval dark ages, for example.
There is an argument to be made that tools that speed up a process whilst keeping acuity intact are legitimate.
LLMs, the way they typically get used, are solely about saving time by handing over nearly the entire process. In that sense, acuity can't remain intact, let alone improve over time.
Your previous comment reads as if LLMs get some unjustified different treatment.
Do you agree the different treatment is justified? (Many do not.) Or are you asking: so what if acuity is diminished, so long as an LLM does the job equally well?
People say this in a very large number of other contexts. Mathematica has been able to do many integrals for decades and yet we still make students learn all the tricks to integrate by hand. This pattern is very common.
Yes. But to be fair to your specific point, symbolic solving of integrals used to be a huge skill in engineering education. Nowadays it is no longer a focus, because numerical solutions are either sufficiently accurate or, more importantly, the only feasible approach anyway.
Sorry, I should have quoted properly in my reply.
My first sentence ("Yes.") was in general agreement with you, the second sentence was specifically about
> Mathematica has been able to do many integrals for decades and yet we still make students learn all the tricks to integrate by hand
But maybe integrating by hand is still as big as ever in other parts of academia. Or were you thinking about high school? I'm fairly sure that symbolic solving of integrals is treated as less important in education these days than it was before digital computers, but I could be wrong. Mathematica's symbolic solver sure is very useful, but numeric solutions are what really make the art of finding integrals much less relevant.
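As a small illustration of that last point (a sketch in Python, not anything from the thread): exp(-x^2) has no elementary antiderivative, so no amount of by-hand symbolic trickery produces a closed form, yet a few lines of composite Simpson's rule approximate its integral on [0, 1] to many digits.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# exp(-x**2) has no elementary antiderivative, so symbolic tricks can't help,
# but numeric quadrature gives the value directly.
value = simpson(lambda x: math.exp(-x * x), 0.0, 1.0)
print(f"{value:.6f}")  # 0.746824
```

The result can be cross-checked against the error function, since the integral equals (sqrt(pi)/2)·erf(1), which is the kind of independent verification discussed upthread.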
I like to call it interest. What makes something interesting to some people and not others, I'm not sure.