Hacker News
gtirloni on July 16, 2024 | on: Exo: Run your own AI cluster at home with everyday...
> eventually your edge hardware is going to be able to infer a lot faster than the 50ms+ per call to the cloud.
This is interesting. Is that based on any upcoming technology improvement already in the works?
a_t48 on July 16, 2024
GP is likely referring to network latency here. There's a tradeoff between smaller GPUs/etc at home, which add no network latency to use, and beefier hardware in the cloud, which has a minimum round-trip latency on every call.
yjftsjthsd-h on July 16, 2024
Sure, but if the model takes multiple seconds to execute, then even 100 milliseconds of network latency seems more or less irrelevant.
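The arithmetic behind this comment can be sketched as follows; all of the timings are hypothetical placeholder numbers, not measurements from any real system:

```python
# Sketch of the latency tradeoff: when inference time dominates,
# network round-trip latency barely changes the total response time.
# All numbers below are illustrative assumptions, not benchmarks.

def total_time_ms(inference_ms: float, network_rtt_ms: float) -> float:
    """Perceived latency = inference time + network round trip."""
    return inference_ms + network_rtt_ms

# Local edge GPU: assume slower inference, zero network latency.
local = total_time_ms(inference_ms=3000, network_rtt_ms=0)

# Cloud GPU: assume faster inference, plus ~100 ms round trip.
cloud = total_time_ms(inference_ms=2000, network_rtt_ms=100)

print(f"local: {local:.0f} ms, cloud: {cloud:.0f} ms")
# With multi-second inference, the network's share of the total is small:
print(f"network share of cloud total: {100 / cloud:.1%}")
```

Under these assumed numbers the 100 ms round trip is under 5% of the cloud total, which is the point being made: the tradeoff only favors local hardware once inference itself gets fast enough that network latency becomes a comparable fraction.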
datameta on July 16, 2024
Comms is also the greatest battery drain for a remote edge system. Local inference can allow for longer operation, or operation with no network infra.