Hacker News | swapinvidya's comments

I’ve been exploring whether advanced ML models used in computational biology can run outside centralized cloud infrastructure.

In a recent study, I evaluated running graph neural networks (GNNs) for protein–protein interaction analysis on GPU-enabled single-board computers (edge devices), instead of cloud GPUs. The goal was to understand feasibility, latency, and practical constraints rather than chasing benchmark scores.
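For concreteness, here’s a minimal sketch of the kind of graph-convolution forward pass such a model runs per inference. This is plain NumPy on a toy 4-protein graph, purely illustrative; the actual model and code are in the linked write-up:

```python
import numpy as np

def normalize_adjacency(A):
    """A_hat = D^{-1/2} (A + I) D^{-1/2}: adjacency with self-loops,
    symmetrically normalized (standard GCN preprocessing)."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy protein-interaction graph: 4 proteins, undirected edges.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))      # per-protein input features
W1 = rng.normal(size=(8, 16))    # layer-1 weights (random stand-ins)
W2 = rng.normal(size=(16, 2))    # layer-2 weights

A_hat = normalize_adjacency(A)
out = gcn_layer(A_hat, gcn_layer(A_hat, H, W1), W2)
print(out.shape)  # (4, 2): one embedding per protein
```

At inference time this is just a handful of small matrix multiplies per layer, which is part of why modest edge GPUs can keep up.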

What I observed:

Stable inference on edge hardware

Inference latency on the order of milliseconds

No dependency on cloud GPUs during execution
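For anyone wanting to reproduce the latency measurement methodology, a simple harness like the following is enough (the forward pass here is a dense-matmul stand-in, not the study’s model; warm-up runs are discarded and the median is reported to damp scheduler noise):

```python
import time
import numpy as np

def time_inference(forward, n_warmup=10, n_runs=100):
    """Median wall-clock latency (ms) of a forward pass.
    Warm-up iterations are excluded to amortize cache effects."""
    for _ in range(n_warmup):
        forward()
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        forward()
        samples.append((time.perf_counter() - t0) * 1e3)
    return float(np.median(samples))

# Stand-in forward pass sized like a small GNN layer.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 64))
W = rng.normal(size=(64, 64))
latency_ms = time_inference(lambda: X @ W)
print(f"median latency: {latency_ms:.3f} ms")
```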

This raises some interesting questions:

Are edge devices underutilized for graph ML workloads?

Where does edge inference make sense vs. cloud execution for biological or scientific ML?

What trade-offs (graph size, memory, model depth) matter most in real deployments?
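On that last question, a back-of-envelope activation-memory estimate makes the graph-size/depth trade-off concrete. The numbers below are illustrative assumptions, not figures from the study: a message-passing GNN materializes roughly one (nodes x hidden_dim) feature matrix per layer, which is what squeezes edge devices first as graphs grow:

```python
def gnn_activation_bytes(num_nodes, hidden_dim, num_layers, bytes_per_float=4):
    """Rough peak-activation footprint for a message-passing GNN:
    one (num_nodes x hidden_dim) float matrix retained per layer."""
    return num_nodes * hidden_dim * num_layers * bytes_per_float

# Example: hypothetical 20k-node PPI graph, 128-dim features, 3 layers, fp32.
mb = gnn_activation_bytes(20_000, 128, 3) / 2**20
print(f"~{mb:.1f} MiB of activations")  # ~29.3 MiB
```

Edge weights/adjacency add to this, but even this crude estimate shows why graph size and hidden width dominate the memory budget well before model depth does.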

For context, here’s a longer write-up with code and system design notes: https://dev.to/your-article-link (replace with your Dev.to link)

And the research paper (preprint): https://doi.org/10.21203/rs.3.rs-8645211/v1

Curious to hear thoughts from folks working on ML systems, edge computing, or scientific ML.


