meken's comments | Hacker News

I had so much fun making videos with my mom when it came out. During the first two weeks, we made over 100 cameo videos together - we were constantly running up against the upload limit. It unleashed tons of genuine creativity, joy, and laughter from us.

After those first two weeks though, we just… didn’t use it again. The novelty wore off and there wasn’t anything really to bring us back. That was the real downfall of Sora.


The problem is that, because these are so easy to make, there is really no reason to make this social. “Why would I look at somebody else’s creations when I can make my own?”

I can see some usage for this use case - "look Morty, I turned myself into a pickle!" - but just like image / meme generators, this is like 10-30 seconds of engagement within a friend circle at best (although some might go viral, that won't bring in much money for OpenAI in this case).

There will be (or is, I'm behind the times / not on the main social networks) an undercurrent or long tail of AI generated videos, the question is whether those get enough engagement for the creators to pay for the creation tool.


I'm not an artist or creative person in any sense. My persona is closer to a settings menu than a colorful canvas.

The AI art I have seen creatives produce is far beyond anything I have been able to come up with. We're not at the point yet where you can just prompt "Make me a video that is visually stunning and captivating" and get something cool.


> My persona is closer to a settings menu than a colorful canvas

ah, but what a persona that would be if you were a Kai's Power Tools settings menu!


> The AI art I have seen creatives produce is far beyond anything I have been able to come up with

.. such as? What's the "Mona Lisa of AI art"? Is there, like, a gallery? Awards?


Unfortunately I don't have a solid reference point or checklist for the defining qualities of "good art". And frankly I don't take those who do very seriously. To me art is all about the personal vibes you get from it. So I enjoy Zach London (gossip goblin), Bennet Weisbren, and voidstomper/gloomstomper if you want something to measure with your "real true art" checklist.

Didn't we used to think the same of Photos?

They're different impulses. Some want to consume. Others want to create.

TikTok and social media are a strange mix of both, with people posting response videos to everything.

Personally, I've stopped subscribing to Spotify, YT music, etc because the slop from Suno is good enough to replace mainstream music or whatever lofi playlist. It's free, it's good enough, and it's not grating to hear after a few days of that favorite song.

The video slop can well replace TikTok and Reels. Make educational content about your hometown. Explain how to throw an uppercut.

But I guess the desire to create something that others would consume is also different from the desire to simply create.


Sweet Jesus. You realise this is the mental equivalent of stuffing your stomach full of junkfood and soda every day?

This is a mainstream break up song: https://youtu.be/ekzHIouo8Q4

This is a vocaloid break up song: https://youtu.be/9pQR4a5sisE

The first isn't bad by any means. There's a million break up songs and that's one of the best sad ones. Most are just... angry? Blaming? Empowering? They work fine. They sell records. Many have a billion views.

But the second one, even with the clunky translation, strikes somewhere deeper. It's written by someone who had enough time ruminating on a break up. The ending hits a little harder, because break up songs are about endings.

Both are sincere, but the first feels more formulaic. I'm inclined to think the first one is the soda.

I feel Suno leans towards this group of songwriters and poets who have something to say. Sora doesn't.


Vocaloids are hardly similar to fully AI-generated songs. Vocaloids are still human controlled.

Also, VOCALOID uses "traditional" signal processing techniques, as opposed to generative deep learning techniques.

As opposed to the kardashians and real house wives and Chappell Roan?

No, the whole horseshit belongs together of course. Just that the AI slop is the logical culmination of the dumbed down pop-culture of the last 15ish years or so.

That doesn't sound meaningfully different from what people are already doing on Instagram and TikTok all day.

Absolutely correct, and my comment is by no means aimed strictly at the AI slop.

For a lot of people music is a focus aid, not the object of contemplation.

Some want to consume... content that they don't think they could do in one minute themselves. They want to consume content made by other humans, even if it's still brain-eating algorithmic fodder, but still. Sora proved it quite clearly. These clips had ZERO value.

> Personally, I've stopped subscribing to Spotify, YT music, etc because the slop from Suno is good enough to replace mainstream music or whatever lofi playlist.

The musician in me just shed a tear


Pink Beatles, in a purple Zeppelin comes to mind

Had to create an account just to let you know that someone out there got the reference.

That comment for sure made me sad

I occasionally use Suno to re-imagine songs in different keys, tempos, and genres, and sample them. Most of the output from Suno is slop, but occasionally has a few good bits you can sample, chop up, re-pitch, and create something totally new from, which also has the added benefit of being unrecognizable to rights algorithms and lawyers from major labels.

It's a neat tool for genuine creators, and a crutch for people interested in slop.


Modern music has done this to itself. When the human product is already pure corporate slop, it's not hard for AI to compete.

Hopefully AI outcompeting humans at slop sparks a renaissance of humans creating truly beautiful human artwork. And if it doesn't, then was anything of value truly lost?


> Modern music has done this to itself

I get my modern music from Bandcamp. If you can't find good stuff to listen to, that's a 'you' problem.


How much of your super-awesome bandcamp music is topping charts, selling millions, packing mega stadiums, and is penetrating the zeitgeist so deeply that people around the world are addicted to it?

Maybe, just maybe, I'm not talking about "my" music tastes, but offering commentary on the state of music at a global scale. Weird that this point was so hard to follow!


So true. AI music gens like Suno can't do Paul Shapera works even remotely, but can recreate a lot of pop or EDM music very faithfully. There's just no distance to close, it's already mainstreamly bad.

> Modern music has done this to itself. When the human product is already pure corporate slop, it's not hard for AI to compete.

What are you talking about? There’s lots of modern music that’s not corporate slop and that’s absolutely great. Never in history was access to great music as easy as it is now.


I'm talking about modern music. Just because a couple of dweebs on hackernews have "totally amazing underground music" doesn't mean the overall zeitgeist agrees. Regardless of your esoteric music tastes, music by sales and music by charting tells a very different story. And that story is one of replaceable slop.

So find music you like that isn't modern corporate slop. My music right now consists mainly of indie stuff I've found on youtube and daft punk. No plagiarism machine needed, just human-made music

"No plagiarism machine needed, just human-made music"

From wikipedia: Many Daft Punk songs feature vocals processed with effects and vocoders including Auto-Tune, a Roland SVC-350 and the Digitech Vocalist. Bangalter said: "A lot of people complain about musicians using Auto-Tune. It reminds me of the late '70s when musicians in France tried to ban the synthesiser. They said it was taking jobs away from musicians. What they didn't see was that you could use those tools in a new way instead of just for replacing the instruments that came before. People are often afraid of things that sound new."


Did Daft Punk put in a lot of effort to remix existing sounds to make their own music? Yes. Did they type "pls make french house electronic music number 1 chart" into a text box? No. Did they also credit original authors? Yes. I've not gone through their whole library, but for example, Edwin Birdsong has songwriting credit for harder, better, faster, stronger

There's this fallacy with AI generation that people think that all you have to do is type "i lik musik pls remake favrite song but better" and you get amazing results.

This is patently untrue.

It's like how if a junior engineer and a principal engineer use claude opus 4.6 they get radically different results. The junior doesn't have the taste or knowledge to know good from bad so the AI oversteers and slop is made. The principal has finely tuned sense of taste and deep knowledge, so they aggressively steer the AI at every step. This is also true in other AI domains.

To be absolutely clear: you can't make good AI music. Try all you want. Try the prompt you just wrote. Show and tell. It's not something you're going to be able to do.


> The video slop can well replace TikTok and Reels. Make educational content about your hometown. Explain how to throw an uppercut.

There is a fundamental issue of trust here. Facebook has me tagged as history nerd so I get to see those slop videos. They are fun, but always superficial and often plainly wrong. So unless the slop comes from a known, trustworthy source, the educational element is simply not there.

For throwing an uppercut it's even more important, if you follow wrong slop instructions you can end up breaking your wrist or fingers.


> the slop from Suno is good enough to replace mainstream music

I wonder what OP categorises as 'mainstream'. As a classical musician this breaks my heart.


Many of the things on a top #100 list for the last few decades. That includes plenty of "indies" as well as pop.

There are exceptions though. FUKOUNA GIRL by STOMACH BOOK, for example. AI can't come close to replicating something like this. Not the cover art, not the off-key voices, not the relatable part of the lyrics. I don't believe this is a top #100 song, though it certainly is popular.


It is chaotic crap.

you could not waterboard an admission of bad taste like this out of me

How do you get Suno songs for free? You listen to others or make your own?

Almost nobody listens to others' songs on Suno, that's the entire point.

You wouldn't care to order the food as I personally like it -- might be too spicy (or too bland) for your taste.

Suno songs are overtuned for personal preference in the same way.


I get that, but you have to pay to create your own.

And on the second part, I somewhat disagree. I mean, yes, everyone has a personal preference, but if you bucket all those personal preferences they all fit nicely together (in many buckets).


Fairly narrow buckets, sure.

I think the point of Suno is to make you not search for your specific thing though, and instead produce your own. Searching for niche music has always been a thing. If our goal is to listen for free, we don't care about Suno (or any other way to make music) one bit, it's just another DAW for those making music.

And AI music in general sure has its fans, check out Only Fire for example.


They have a discover section for songs made public.

I'm with you here, resonates so much. I'm so fed up with endless subway tunnels, they all look and sound utterly same and boring.

So I quit riding the overpriced subway altogether and now consume AI-generated subway imagery and soundscapes for free, they are just good enough to feed my passion for boring tunnels.

Some ego-bloated edgelords had the nerve to tell me that there are, like, other modes of transportation, but I honestly find their high-horse elitism despicable. Damn morons.


Sounds like when we first had smartphones with orientation sensors and we could drink a beer from the phone, so cool... for 2 weeks.

But now you can vibe the same app 1000 times for root beer, coca cola, ginger ale, even a milkshake, and nobody will ever have to have a new idea again!

I wouldn't be surprised if the beer apps cost less to develop than one AI-generated video.

Was there a Send Me to Heaven for Sora?

That is for loved things

This is consistent with a lot of AI apps. I fell in love with Gamma and haven’t used it in forever. Same with NotebookLM.

I somewhat consistently use notebookLM for podcasts of academic papers I'm reading in my PhD. You have to go read it yourself afterwards but it makes better use of time in the gym or doing dishes/groceries.

> You have to go read it yourself afterwards

^ this is important.

Otherwise you may very well be missing anything really surprising or novel.

See for example https://www.programmablemutter.com/p/after-software-eats-the... , an experience report of NotebookLM where

> It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality.


On one hand 2024 in AI time was a decade ago.

On the other, Google might not have done much to upgrade the podcast feature since then.


This regression towards the mean is still very much a feature of the newer models, in my experience. I don't see how a model that predicts the most likely word based on previous context + corpus data could possibly not have some bias towards non-novelty / banality.

It’s gotten somewhat better over time though clearly not their top priority.

I found NotebookLM to consistently make up about 20% of its summary. Entertaining but unreliable.

I used it mostly to learn about history. There isn't much damage if it got a 1600s or 1700s detail wrong. My high school teachers got much of it wrong too.

I found the podcast's bantering and breathless enthusiasm distracting. I guess there was a way to make it more no-nonsense? I found I lost content if I tuned it for brevity.

I just use elevenreader for this. I copy in essays or whatever text I want to listen to and it works decently well. It's far from perfect, but certainly good enough.

Sometimes I'll take deep research output and listen to it too that way.


I tell them “no idle conversation or verbal tics” in the instructions.

I've found notebookLM summaries to be too high-level and oversimplified to be useful. Hopefully in a few years they can go deeper.

You can also use NotebookLM as a source for the Gemini app and ask it to do more in-depth summaries with custom prompting.

This somewhat makes NotebookLM as a whole less useful, but still.


I also like doing that for topics that I am tangentially interested in. One minor thing I find annoying is that the narrators switch roles in the middle of the conversation. They start with the female voice explaining a concept to the male voice, and suddenly they switch. By that point I have identified myself with the voice being explained to.

> You have to go read it yourself afterwards

Or before! Either is mandatory to actually learn the content.


Just listen to actual audio books... literally doing double the work for no benefit... why?

There aren't a lot of highly technical audiobooks, or ones with the same specificity as an academic paper.

Okay but the user is describing listening to papers, then having to read the papers because listening to them isn't efficient. So why bother listening to it in the first place if you're going to read it?

Not yet but it seems like they're getting to the point of AI narration finally being good enough to make any text an 'audiobook'.

Having said that I absolutely hate the audio format, I only used it when I had to drive or when I swam lanes. But these days I do neither.


No, reading verbatim from a technical paper is way too dense. You need a lot of filler words to slow it down and repetition to make it stick when read aloud.

Hmm fair enough but text manipulation is exactly something where LLMs do shine. Writing and modifying text is what they were meant for.

Ps I don't mean the word 'manipulation' in a negative context.


Writing a book takes like 2-3 years on average. Papers are published everyday. Having a cute two-person "conversational chat" w/ audio works for a lot of people vs. just reading a paper. "No benefit" to you perhaps. Don't generalize the lived experience.

Okay but this person is literally saying that listening with LLM tools isn't helping their understanding and they have to still read the paper... why listen at this point? Why listen using a tool that literally causes you to do more work?

We all have the same amount of time on this Earth, saying how great a tool is that is causing you to do more work is just... weird?

I'd personally never do this, I value my time.


It can synthesize and summarize many topics.

For example, I can give it 8 papers on best practices in online marketing, it will turn it into a 20 minute podcast.

There are errors, but also with real podcasters.


Yeah it's not just the hardware depreciating, it's the social impact of what the model can do

NotebookLM is great for learning I feel

It's not just software: I use my Vision Pro (now in year 3) less than once a month now, and each time I do the painful/awkward/unpleasant set-up and prep and difficult interface sours me on the device yet again, until a new blockbuster movie like "Project Hail Mary" appears that when watched on the VP in 4K on a virtual 40-foot screen blows my mind.


The interesting difference here is that other hedonic activities do bring people back even after the first time they build up a tolerance and get bored. But many of these AI "creative" apps seem like a one-and-done thing. Once the novelty wears off there isn't anything more deeply rewarding to bring people back.

It’s because they are slop, which is only funny for its novelty. Stephen Hawking at a skateboard park is funny for a bit, but as soon as the novelty wears off it’s just slop.

It's not really that people wouldn't come back - it's that they were losing money on each customer.

Those 100 videos probably cost $100+ for them to create. Did you pay them $100+? (not a criticism, just a re-framing)


When it launched we all talked about the serving/inference costs being massive. In hindsight if they had a paywall, it might not have self-imploded so fast, might have stayed aspirational, and they might have a profitable business today. Interesting case study.

This tracks my usage exactly. It was like Mad Libs - in that moment it was THE MOST FUN but after a while it became just a novelty bordering on... creepy. Now I feel kind of guilty for having exposed so many friends to what looks like a data gathering scheme.

I think it's the same reason why chess tournaments where two AIs play against each other are not as popular as when two humans play each other. Maybe it's because humans generally compare themselves to other humans, and that's part of how they assign value.

It's the same with e.g. faceapp, fun for a minute but then... then what?

And this is the challenge that these tools have - they have to have a free tier to get people to explore it, but unless they can make it a habit, those people will never upgrade to a paid subscription.

I have no figures, but if I'm being optimistic, these freemium subscription services have 10% conversion rate at best; can that 10% pay for the other 90%? For a lot of services that's a yes, but not for these video generators which are incredibly compute intensive.

I'm sure there's a market for it, but it's not this freemium consumer oriented model, not without huge amounts of investments. Maybe in 5-10 years, assuming either compute becomes 10-100x cheaper / more available, or they come up with generators that run cheaper.


A lot of AI hype is parlor tricks

Sounds like me with listening to AI covers. After a couple of weeks I couldn't care less. But I was so stoked in it at the start

The Cameo feature is really excellent. The likeness of both the person and the voice is exceptional. I really enjoyed making some funny Cameo videos with my friends. I don't know of another simple way to insert your own avatar with your own voice into a video, and I'm pretty deep in this space.

Yes, Sam Altman was talking about how he lost you on his blog and how it led to Sora's downfall :D But honestly... I believe this too. There's just no value.

Yep. Impressive toys, but not useful day to day.

There's some market for b2b I'm sure, but as a consumer facing product it's tough to see how it could ever come close to paying for itself.


Reminds me of when photo filters and initial stickers and mirror filters came out on MacBook in like 2007. It was super fun for a couple days then the novelty wore off.

Humans are very good at pattern recognition. Even if you generate different stuff, you still see a pattern, whether in the cutting, the color, the cadence of movements, the color grading, the camera lens used, everything; your mind will tag it as slop.

Essentially you are watching the same videos over and over subconsciously


This is something that people working on procedurally generated games have already noticed. No Man's Sky has billions of planets, each with "unique" plant and animal species, but you can easily sort them into a few dozen templates with minor variations.

Procgen has a niche, but it never became ubiquitous, because for most people exploring a nice hand-made intentional environment is better.


You say that, but when you look at most “content” on social media it is the same video over and over again. How many JRE podcasts are basically the same crap as last time? How many influencer “life” videos are the same thing over again? Even the stuff I like is formulaic to the point AI can almost write the scripts.

I think people attach to other people more than to “AI”. When there isn’t a narrative “person” behind the content, it is way less interesting.


Wow that's a really good point. The style of the videos did become quite repetitive.

You know who the novelty didn’t wear off for? My in-laws, who for some ungodly reason are superusers on TikTok. Once the audio-enabled, realistic videos of babies and children hit the feed, it was a virtual 9/11 moment. The group chat is spammed by 90% believable videos of babies arguing, dogs doing smart shit and it’s all slop.

I am hoping against hope that this will stem the tide because the slop-generators are too lazy or too poor to run other models locally or search them online.


"...and when everyone's super, no one will be"

I think this is starting to play out.

When I personally see a blog post which didn't need an image, but still does have an AI-slop image banner, I mentally check out. I might have Claude summarize it, or (more likely) just skip it altogether.


I honestly forgot about Sora until this post, and yeah same behavior played with it for a bit, then moved on with my life.

[flagged]


probably one of the few human commenters remaining here

Cue a flood of crass jokes as the bots attempt to prove their humanity

Such a stupid joke but it gave me a laugh.

noice

[flagged]


(FYI, this is an LLM bot, check their comment history and note the repetitive structure with every comment they've ever posted all within the last hour)

  > This is the right question but hard to answer in practice ...

  > The brownfield vs greenfield split is the real answer to ... 

  > The babysitting point is the one people keep glossing over ...

I dunno, it was the same for me and creative writing with AI.

First it looked like it was crazy inventive, good at writing snappy dialogue, and in general a very good font of ideas.

Then the same concepts, turns of phrase, story ideas kept reappearing, and I kinda soured on the concept.

I haven't done it in a while, but that kind of usage really shows the weakness of LLMs: if you keep messing with its generations, editing what it made, and as the context length keeps increasing, it's more and more likely it goes into dumb mode, where it feels like talking to GPT-3, constantly getting confused, contradicting itself, etc.


I've never seen an AI video that made me feel anything other than bland dread. What were you generating that was so entertaining? Had you ever actually developed creative skills before?

Please don't cross into personal attack. Your comment would be fine without the swipe at the end.

https://news.ycombinator.com/newsguidelines.html


I think you’re fumbling on an important distinction.

Sometimes people want to paint, sometimes people want a painting.

To have wonderful time with their mom… I bet they had absolutely zero interest in the act and process of making silly videos.


Totally. This wasn't a situation where a stranger was slopping another stranger, it was a mother and son doing something fun together.

I get your point but it goes too far in the opposite direction. We should now discuss absolutely nothing in relation to Sora and genAI videos? That seems overly charitable to the platform.

Here, let me try this approach:

Read the main comment out loud to yourself while imagining it’s someone sitting at a table at a pub.

Now imagine someone turning to this person in the pub, and speaking the subsequent comment, word for word.

No seriously, try it out.


Agreed. I did try this out! So the reply to the original comment is dumb. I actually dismissed it for being flippant.

Your reply is more interesting. Hence my (albeit maybe snarky) chiming in. So the original comment does end at a very specific app/sora related conclusion. "Sora didn't keep us coming back."

If I may amend your scenario: imagine this bar is actually in the center of SF or across the street from Open-AI or whatever. We're on HN discussing a post on X about Sora.

The appeal to humanity is not wrong. My point is more let's keep the connection with that humanity in relation to AI, to Sora, to what's going on in this forum.


Come on now...'We're curing cancer, right?!'

You didn't at least puff a little ack through your nostrils for that one?


> Current Common Lisp implementations can usually support both image-oriented and source-oriented development. Image-oriented environments (for example, Squeak Smalltalk) have as their interchange format an image file or memory dump containing all the objects present in the system, which can be later restarted on the same or distinct hardware. By contrast, a source-oriented environment uses individual, human-readable files for recording information for reconstructing the project under development; these files are processed by the environment to convert their contents into material which can be executed.

Am I reading this right that people can (and do??) use images as a complete replacement for source code files?


All the magic of Smalltalk is in the development tools that work by means of introspection into the running image, writing source code in text files causes you to lose all that. Add to that the fact that Smalltalk when written as source files is quite verbose.

Smalltalk does have standard text source file format, but that format is best described as human-readable, not human-writable. The format is essentially a sequence of text blocks that represent operations done to the image in order to modify it to a particular state interspersed with "data" (mostly method source code, but the format can store arbitrary stuff as the data blocks).

One exception to this is GNU Smalltalk which is meant to be used with source files and to that end uses its own more sane source file syntax.


Fascinating. Thanks for the explanation.


> Am I reading this right that people can (and do??) use images as a complete replacement for source code files?

Images are not replacements of source code files. Images are used in addition to source code files. Source code is checked in. Images are created and shipped. The image lets you debug things live if you've got to. You can introspect, live debug, live patch and do all the shenanigans. But if you're making fixes, you'd make the changes in source code, check it in, build a new image and ship that.


in smalltalk you make the changes in the image while it is running. the modern process is that you then export the changes into a version control system. originally you only had the image itself. apparently squeak has objects inside that go back to 1977: https://lists.squeakfoundation.org/archives/list/squeak-dev@...


Does "originally" mean before release from the offices and corridors of Xerox Palo Alto Research Center?

Perhaps further back: before change sets, before fileOut, before sources and change log ? There's a lot of history.

I wonder if the Digitalk Smalltalk implementation "has objects inside that go back to 1977".


with originally i meant before the use of version control systems became common and expected. i don't know the actual history here, but i just found this thread that looks promising to contain some interesting details: https://news.ycombinator.com/item?id=15206339 (it is also discussing lisp which bring this subthread back in line with the original topic :-)



that's very interesting, thank you, i should have realized that even early on there had to be a way to share code between images. (and i don't know why i missed that comment before responding myself)

but, doesn't building a new system image involve taking an old/existing image, adding/merging all the changes, and then release new image and sources file from that?

in other words, the image is not recreated from scratch every time and it is more than just a cache.

what is described there is the process of source management in the absence of a proper revision control system. obviously when multiple people work on the same project, somewhere the changes need to be tracked and merged.

but that doesn't change the fact that the changes first happen in an image, and that you could save that image and write out a new sources file.


The image is not stand-alone: there should also be a sources file and a changes file (and of course a virtual machine).

"When you use a browser to access a method, the system has to retrieve the source code for that method. Initially all the source code is found in the file we refer to as the sources file. … As you are evaluating expressions or making changes to class descriptions, your actions are logged onto an external file that we refer to as the changes file. If you change a method, the new source code is stored on the changes file, not back into the sources file. Thus the sources file is treated as shared and immutable; a private changes file must exist for each user."

1984 "Smalltalk-80 The Interactive Programming Environment" page 458

    ~
The image is a cache. For a reproducible process, version and archive source-code.

1984 "Smalltalk-80 The Interactive Programming Environment" page 500

"At the outset of a project involving two or more programmers: Do assign a member of the team to be the version manager. … The responsibilities of the version manager consist of collecting and cataloging code files submitted by all members of the team, periodically building a new system image incorporating all submitted code files, and releasing the image for use by the team. The version manager stores the current release and all code files for that release in a central place, allowing team members read access, and disallowing write access for anyone except the version manager."


I've never heard of anybody doing it, but in theory it could work.

SBCL (and maybe others) use a "core image" to bootstrap at startup. It's not unheard of for people to build a custom core image with the packages they use a lot from the REPL. It's become less common as computers have gotten faster, and most people use systems like Quicklisp or Roswell to automatically get updates and load from source. Of course the SBCL core image is generated from the compiler source code when building it, and the dependencies are loaded and compiled from source initially, too, so there's still going to be source code files around.

You could, in theory, start with the compiled SBCL image, exclusively type code into the REPL, save the image and exit, and then restart with the new image and continue adding code via the REPL. I really doubt anybody uses that workflow exclusively, though. At the very least most people will eventually save the code they entered in the REPL into a source file once they've debugged it and got it working.


Ironically JIT caches are nothing other than core images.

Several JVM implementations have them, as does their Android cousin ART, and .NET; apparently node.js is getting one as well.


I imagine some systems may start out by tinkering with definitions in the REPL in the live system, and then as it grows, the best definition of the system is found in the current state of the REPL, rather than any more formal specification of the system – including by source code.

At some point maybe the system state will be captured into source code for longer term maintenance, but I can totally see the source code being secondary to the current state of the system during exploration.

After all, that's how I tend to treat SQL databases early on. The schema evolves in the live server, and only later do I dump it into a schema creation script and start using migrations to change it.
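That evolve-then-dump step can be sketched with sqlite3 for brevity (the same workflow on a server like Postgres would use `pg_dump --schema-only`; `dev.db` and the `users` table here are made up for illustration):

```shell
# Start fresh for the example.
rm -f dev.db

# Evolve the schema live, directly against the database...
sqlite3 dev.db "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);"
sqlite3 dev.db "ALTER TABLE users ADD COLUMN email TEXT;"

# ...then dump just the schema into a creation script that can be
# checked in and become the baseline for migrations.
sqlite3 dev.db .schema > schema.sql
cat schema.sql
```

Until that dump happens, the live database itself is the only authoritative definition of the schema, exactly like the REPL state in the comment above.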


> After all, that's how I tend to treat SQL databases early on.

Ah, that’s a very helpful analogy/parallel that didn’t occur to me. Thank you!


In addition to the other answers: this concept exists in modern IDEs like Eclipse, anything JetBrains, Netbeans, Visual Studio,...

Even though they appear to be file-based, the plugins API makes use of a virtual filesystem that allows for managing the code as if it were the image-based concept from Smalltalk, Lisp and other systems like Cedar, Mesa, Oberon,....

Also something that many don't think about, databases with stored procedures.


Maybe you understood image as in photo-image instead of image as in memory-image (like disk-image); a glorified memory dump, more-or-less.


I understood it as the latter.


> what is this actually used for

If you're interested in LeetCode, Racket is one of their accepted languages.


I was using Apple Notes for some “math thinking” the other week. A killer feature for me would be an easy way to input various math Unicode characters (I was just copy and pasting them).


There are various stylus-based tools which do that sort of thing:

https://www.inftyproject.org/en/software.html

(I used to use the math input palette w/ a Wacom ArtZ on my NeXT Cube for transcribing math documents in college)


> The biggest downside of Racket is that you can't build up your environment incrementally the way you can with Common Lisp/Sly. When you change anything in your source you reload REPL state from scratch.

I don’t quite understand… I’m using Racket in emacs/SLIME and I can eval-last-sexp, regions, etc.


Ah, I'm using racketmode which doesn't support live state buildup (and the builtin GUI doesn't either). What exactly is your setup? SLIME only has a Common Lisp backend, it doesn't support Racket to my knowledge.

EDIT: ok with geiser and geiser-racket incremental state buildup works really well. I rescind my objection!


I think that should work in racket-mode as well. You can easily send individual sexps to the repl and add to the live state. However, one thing that CL does that Racket doesn't do (afaik) is when you change a data type (e.g. alter a struct), it automatically ensures live code uses the new types. In Racket by contrast I have to either carefully go through all affected forms and send them to the repl, or send the whole buffer to the repl. This does make the whole experience feel more static than in CL.


Oh, my mistake. I'm using Spacemacs and it looks like it's just using racket-mode..?

https://www.spacemacs.org/layers/+lang/racket/README.html


I'm guessing via swank: https://github.com/mbal/swank-racket


I learned recently that Racket is an accepted language on LeetCode, which solved the problem “when am I ever going to write lisp in real life…” for me. It’s provided a great excuse.

I have really been enjoying writing it! Paredit and SLIME are addictive.


> LeetCode

> ...real life...

    (≠ "LeetCode" "real life")


For sure. I just meant having some motivating purpose to write Racket.


Get the best of both worlds by having it be Ctrl when held down + pressed with another key and Esc when you press and release it by itself.


Well, there's a fantastic idea. Apparently so many people are already in this better world: https://gist.github.com/tanyuan/55bca522bf50363ae4573d4bdcf0...

I have Karabiner Elements so I added it and it's amazing!
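For anyone who doesn't want to dig through the gist: a minimal Karabiner Elements complex-modification rule for this looks roughly like the following (field names follow Karabiner's documented JSON format; the description string is arbitrary):

```json
{
  "description": "Caps Lock to Ctrl when held, Escape when tapped alone",
  "manipulators": [
    {
      "type": "basic",
      "from": {
        "key_code": "caps_lock",
        "modifiers": { "optional": ["any"] }
      },
      "to": [{ "key_code": "left_control" }],
      "to_if_alone": [{ "key_code": "escape" }]
    }
  ]
}
```

The `to_if_alone` key is what makes the tap-vs-hold distinction work.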


I tried this, but found it annoying that it will add a slight delay. Totally makes sense if you've been running on caps lock -> escape for a long time. I've bound caps lock -> ctrl and left ctrl -> escape.


Very cool.

Though the 01 column is a bit unsatisfying because it doesn’t seem to have any connection to its siblings.


I’m in a similar camp to the OP. For me, my joy doesn’t come from building - it comes from understanding. Which incidentally has actually made SWE not a great career path for me because I get bored building features, but that’s another story…

LLMs have been a tremendous boon for me in terms of learning.


> But for me, it was missing something I didn't know how to name until I found it: the chance to be technical and connected.

I genuinely thought this was impossible for a very long time. In my SWE roles I’ve mostly felt disconnected and isolated.

I resigned from my last dev job and started working in donut and coffee shops. I loved it.

I’m pursuing Support Engineer roles now hoping it will provide the human focus that was missing prior.

