Hacker News | yann_eu's comments

Great to hear it! Let us know how it goes :)


There are similarities; one key difference is that we're running on top of high-performance bare metal servers with high-end CPUs.


Agree, data location is indeed a central challenge when building globally distributed apps.

We picked the largest peering points in Europe and the US for the first two locations: Washington, D.C. (US-East) and Frankfurt in Europe. For the following 4 locations, which we announced last week in early access [1], we tried to pick the next best-interconnected locations on the world map: SFO / the valley, Singapore, Paris, and Tokyo.

We definitely need to do a better job in the docs [2]: we can provide a mapping matrix, and we'll be working on latency measurements / speedtest / iperf servers.
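To make the latency-measurement idea concrete, here's a minimal sketch that times a TCP handshake against each region's endpoint. The hostnames are made-up placeholders, not real Koyeb endpoints, and a real benchmark would average multiple samples:

```python
# Time a TCP connect (rough proxy for network RTT) to each region.
# Hostnames below are hypothetical placeholders, not real endpoints.
import socket
import time

REGIONS = {
    "was": "was.example.net",  # Washington / US-East (placeholder)
    "fra": "fra.example.net",  # Frankfurt (placeholder)
}

def tcp_connect_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Return the TCP connect time to host:port in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    for region, host in REGIONS.items():
        try:
            print(f"{region}: {tcp_connect_ms(host):.1f} ms")
        except OSError as exc:
            print(f"{region}: unreachable ({exc})")
```

Tools like `iperf3` would then cover throughput, which a single handshake can't show.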

In this direction, did you look at PolyScale [3]? They do the job of database caching at the edge.

What do you have in mind regarding lower-level access to compute? We're looking at providing block storage and direct TCP/IP support if that's what you have in mind.

[1] https://community.koyeb.com/t/changelog-25-san-francisco-sin...

[2] https://www.koyeb.com/docs/reference/regions

[3] https://www.koyeb.com/docs/integrations/databases/polyscale


For now, the vCPUs are shared for all types of MicroVMs, with a constant ratio for each GB of RAM.

We're planning to release instance types with dedicated CPUs for applications that need them.
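The fixed-ratio model above can be sketched in a few lines. The 0.25 vCPU/GB figure is a made-up placeholder for illustration, not Koyeb's actual ratio:

```python
# Illustrative only: a constant vCPU-per-GB-of-RAM ratio, as described above.
# The 0.25 figure is a hypothetical placeholder, not Koyeb's real ratio.
VCPU_PER_GB = 0.25

def shared_vcpus(ram_gb: float) -> float:
    """Shared vCPU allocation implied by a fixed ratio to RAM."""
    return ram_gb * VCPU_PER_GB
```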


If I wanted to create a MicroVM with 4 GB RAM and 2 CPUs, or 4 GB RAM and 8 CPUs, would that work? Or is it currently only possible to create fixed-ratio VMs?

Also wanted to note that I came to this question by reading the website explanation of the benefits of MicroVMs. To me it seems to heavily favor the operator over the customer by touting the oversubscription capabilities. This feels like something that should not be presented front and center to prospective customers?

With the lack of nested virtualization and compatibility with QEMU cloud images, I also wonder how much benefit there really is to MicroVMs. The fast boot time feels like the only one, but how much faster than QEMU is it once you add API overhead? Offering both hypervisors and letting the customer choose could be interesting.

Congrats on the launch and thanks for sharing!


The starter plan is actually a pay-per-use plan, so you're starting at $0 and spending depending on your usage. You don't need to move directly to $79. Does that make sense?

It seems we need to do some work on the pricing page.


I see. Thanks


Thanks for all the questions!

We're headquartered in Europe; you'll find the legal details in our terms :) https://www.koyeb.com/docs/legal/terms

There is a difference between edge and non-edge locations (we call them core): edge locations terminate the TLS connection, do caching, and route traffic to the nearest core location. We explained how this works in this post [1] and this talk [2]. The TL;DR is: if that core location is set up to run an instance of your service, the request is sent to the right machine in that location. Otherwise, it's routed to a core location where an instance is running.
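A toy model of that routing decision, with all names and structure purely illustrative (not Koyeb's actual implementation):

```python
# Toy model of edge -> core routing: the edge terminates TLS and forwards
# to the nearest core; if that core runs no instance of the service, the
# request hops to a core that does. Illustrative placeholders throughout.

def route(service_instances: dict[str, set[str]],
          nearest_core: str, service: str) -> str:
    """Return the core location that will serve the request."""
    cores_running = service_instances.get(service, set())
    if nearest_core in cores_running:
        return nearest_core  # served by the nearest core
    if not cores_running:
        raise LookupError(f"no instance of {service!r} anywhere")
    # A real system would pick the remote core by latency;
    # we pick deterministically for the sketch.
    return min(cores_running)

instances = {"api": {"fra"}, "web": {"fra", "was"}}
assert route(instances, "was", "web") == "was"  # local instance
assert route(instances, "was", "api") == "fra"  # forwarded to remote core
```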

Data storage can be tied to a location since you decide where your application runs: if you ask us to run an application in Frankfurt, Germany, we're not going to move it to the US or any other location.

The build engine is tied to GitHub, but you can deploy a pre-built Docker container. GitLab support has been highly requested [3], so this is definitely on the list of things we're considering implementing.

Databases should land on the platform in September in early access [4], we're actively working on it.

[1] https://www.koyeb.com/blog/building-a-multi-region-service-m...

[2] https://www.youtube.com/watch?v=IB93WCoroL8

[3] https://feedback.koyeb.com/feature-requests/p/git-driven-dep...

[4] https://feedback.koyeb.com/feature-requests/p/managed-postgr...


Thanks, it sounds great!

> The build engine is tied to GitHub but you can deploy a pre-built Docker container.

That's fine!


I’d say we don’t use it in exactly the same way: we don’t have a single global Nomad cluster, which is a critical difference.

We have one Nomad cluster per region, which we “federated” ourselves using our own orchestrator. This reduces latencies between agents and their cluster, shrinks the failure domains, and avoids encoding all the constraints in one single Nomad job definition.
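A rough sketch of that federation idea, with everything here an illustrative placeholder rather than Koyeb's real orchestrator:

```python
# Toy sketch of "one cluster per region, federated by our own orchestrator":
# the orchestrator splits a multi-region deployment into one job per region,
# so no single Nomad job has to encode all placement constraints.
# All names and structures are hypothetical placeholders.

def federate(deployment: dict) -> dict[str, dict]:
    """Split a global deployment spec into one job per regional cluster."""
    return {
        region: {
            "job": deployment["name"],
            "count": count,
            "region": region,  # each job targets a single-region cluster
        }
        for region, count in deployment["regions"].items()
    }

jobs = federate({"name": "api", "regions": {"fra": 3, "was": 2}})
assert set(jobs) == {"fra", "was"}
assert jobs["fra"]["count"] == 3
```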

I'm not so much worried about scaling with our setup, but the performance of the autoscaler might be a concern in the future.


Thanks! Hope you’ll like it :)

We have similarities with fly.io (Firecracker MicroVMs on top of bare metal) and also some key differences:

- we directly integrate with GitHub to automatically build your application on push. We support building native code with Buildpacks or from a Dockerfile, in addition to pre-built containers.

- we put a CDN in front of all your services to provide caching and edge TLS termination

- technically, our internal network is a service mesh built with Kuma and Envoy

- overall, we aim to be a bit higher in the stack: instead of providing low-level virtual machines, we want to focus on productivity features like preview environments

We actually aimed for zero infrastructure configuration. At this stage, there is some basic setup to do for a multi-service app: you need to configure the HTTP routes. We aim to add as much automatic discovery of the codebase as possible.
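To make the "configure the HTTP routes" step concrete, here's an illustrative route table for a multi-service app, using longest-prefix matching. The format and service names are made up for illustration and are not Koyeb's actual configuration:

```python
# Hypothetical route table: the longest route prefix matching the request
# path picks the backing service. Illustrative only, not a real config format.
ROUTES = {
    "/api": "backend",
    "/static": "assets",
    "/": "frontend",
}

def match(path: str) -> str:
    """Pick the service whose route prefix is the longest match for path."""
    best = max((p for p in ROUTES if path.startswith(p)), key=len)
    return ROUTES[best]

assert match("/api/users") == "backend"
assert match("/about") == "frontend"
```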

Thanks for the feedback on the pricing. $0 is actually the price of the plan, and we provide $5.5 of free credit with it. It seems the “Up to” was somehow dropped from the “16GB & 16 vCPU per service”; that is indeed confusing.


Do you have any plans to get into managed databases and other storage / state management solutions? I feel like 90% of what I want from a cloud provider is managed Postgres (with strong durability and availability guarantees) and some way to run compute (I don't even really care what or how that works for the most part). Everything else is a nice to have.


We are currently working on our managed Postgres offering. It should be available in technical preview in September. Other services like object storage are planned too, but I don't have any ETA to share for now :)


Awesome! Object storage is also useful but less crucial, because integrating with external object storage services is much less painful :)


We focused on estimating the minimum/entry-level cost of Kubernetes here.

If you have a data-intensive service, it surely would add up, but that's not specific to Kubernetes. If you go with VMs or a serverless deployment, you'll have to pay for it too.

If you're speaking about the storage and data transfers related to the Kubernetes control plane itself, I don't believe it represents a significant cost, even with a large cluster.


You're right, having only one person on call for a managed cluster doesn't make a lot of sense. We should probably have planned for at least 2 people for a managed cluster as well, to cover 24/7/365 operations.

I think our thought process here was that developers are also involved in on-call support for service availability, and the k8s cluster's availability is mostly handled by the provider. But the cluster can still fail even if the control plane is managed.


A self-managed cluster needs networking and some kind of persistent volume storage, and the nodes themselves need to be somewhat maintained.

I think you could get one person to be on-call for all those things, personally. But then I think that person should not be on-call for application support (i.e., not the things running inside k8s; they would be the person the on-call application developer/administrator would call if they couldn't debug issues with networking, for instance).

