
> With alexa i can program if/then statements, like basically when i say X then do Y. If something like chatgpt requires the same thing then i don't see the advantage.

Yes, I was thinking of even something as simple as if/then rules, which could be configured in the UI and surface to GPT-4 as the usual function-call machinery.

The advantage here would be twofold:

1. GPT-4 won't need you to speak a weird command language; it's quite good at understanding regular talk and turning it into structured data. It will have no problem understanding things like "oh flip the lights in the living room and run some music, idk, maybe some Beatles", followed by "nah, too bright, tone it down a little", and reliably converting them into data you could feed into your if/then logic.

2. ChatGPT (the app) has a voice recognition model that, unlike Google Assistant, Siri and Alexa, does not suck. It's the first model I've experienced that can convert my casual speech into text with 95%+ accuracy even with lots of ambient noise.
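To make point 1 concrete, here's a rough sketch of what that natural-language-to-structured-data step could look like. Everything below is hypothetical illustration: the function name, schema, and parsed outputs are invented, though the shape follows the JSON-schema style GPT-4 function calling uses.

```python
# Hypothetical function schema you might register with GPT-4's function-calling
# API so the model can emit structured smart-home commands.
# The name "set_lights" and its fields are made up for illustration.
SET_LIGHTS_SCHEMA = {
    "name": "set_lights",
    "description": "Turn lights on/off or adjust brightness in a room",
    "parameters": {
        "type": "object",
        "properties": {
            "room": {"type": "string"},
            "state": {"type": "string", "enum": ["on", "off"]},
            "brightness": {"type": "integer", "minimum": 0, "maximum": 100},
        },
        "required": ["room", "state"],
    },
}

# What the model could plausibly return for:
#   "oh flip the lights in the living room and run some music,
#    idk, maybe some Beatles"
parsed_calls = [
    {"name": "set_lights", "arguments": {"room": "living room", "state": "on"}},
    {"name": "play_music", "arguments": {"artist": "The Beatles"}},
]

# A follow-up like "nah, too bright, tone it down a little" would resolve
# against conversation context into an updated call, e.g.:
followup_call = {
    "name": "set_lights",
    "arguments": {"room": "living room", "state": "on", "brightness": 40},
}
```

The point is that your if/then logic never has to parse free-form speech; it only sees dicts like these, and the model does the messy disambiguation (which room, which artist, what "tone it down" refers to).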

Those are the features the ChatGPT app offers today. Right now, if they added a basic bidirectional Tasker integration (user-configurable "function calls" emitting structured data for Tasker, plus the ability for Tasker to inject messages into the chat), anyone could quickly DIY something 20x better than Google Assistant.
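A minimal sketch of what that bidirectional glue could look like, assuming a hypothetical bridge that hands structured commands to Tasker. Tasker exposes no such Python API; the command-string format, function names, and event wrapper here are all invented for illustration:

```python
def dispatch(call: dict) -> str:
    """Route a structured function call from the model into if/then logic,
    returning a command string a Tasker profile could match on.
    The "name:arg:arg" command format is invented for this sketch."""
    name = call["name"]
    args = call["arguments"]
    if name == "set_lights":
        # Default brightness: full when turning on, zero when turning off.
        brightness = args.get("brightness", 100 if args["state"] == "on" else 0)
        return f"lights:{args['room']}:{args['state']}:{brightness}"
    elif name == "play_music":
        return f"music:play:{args.get('artist', 'any')}"
    else:
        raise ValueError(f"no handler for {name!r}")


def tasker_to_chat(event: str) -> dict:
    """The reverse direction: wrap a Tasker-side event as a message to
    append to the conversation, so the model can react to it."""
    return {"role": "system", "content": f"[home event] {event}"}
```

For example, `dispatch({"name": "set_lights", "arguments": {"room": "living room", "state": "on"}})` yields `"lights:living room:on:100"`, which a Tasker profile could pattern-match on; and a door sensor firing could call `tasker_to_chat("front door opened")` to push context back into the chat.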


