Facilities management is the worst part of "cloud" services. Getting this outsourced for low/no cost is brilliant. Check out the OnAPP CDN implementation.
Please provide a standardish xen/kvm/vmware/vagrant client image. Being win32-exe-only is probably costing you a lot of resource providers (like me). Clients running xen/kvm/vmware are also more likely to be providing higher-value resources, like a colo'd host.
Move your API to a different (sub)domain ASAP. At some point you'll need to change your DNS architecture, and having your API tied to your zone apex is going to cause no end of grief. If it's in a (sub)domain you can easily delegate control to another authoritative name server. You might want to use a directional DNS product, point a CNAME at another resource, add another API endpoint, etc.
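To make the apex problem concrete: the root of a zone can't be a CNAME, but a subdomain can be delegated or aliased freely. A sketch of what that delegation looks like in a zone file (all names here are placeholders, not the product's real zone):

```
; Delegate api.example.com to a separate authoritative server,
; so the API's DNS can evolve independently of the apex.
$ORIGIN example.com.
api     IN  NS     ns1.dns-provider.example.
api     IN  NS     ns2.dns-provider.example.
; Alternatively (instead of delegation), the subdomain can simply
; alias a managed endpoint -- impossible at the zone apex:
; api   IN  CNAME  api-lb.dns-provider.example.
```

Either option is a one-line zone change later; none of this is available if clients are already hardcoding the apex.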
Provide some sort of initialization hook. Every node I launch should be able to auto configure my stack without a "push" action, ssh or otherwise. I much prefer when each node can bootstrap and poll my instance/config management stack. For example EC2 provides this functionality through the UserData param of RunInstances.
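The pull model described above can be sketched in a few lines. The EC2 metadata address is the real well-known one; the JSON shape and the config-poll endpoint are invented for illustration:

```python
# Sketch of a pull-based bootstrap: on first boot the node fetches its
# user data and configures itself -- no inbound ssh "push" required.
# The metadata URL is EC2's well-known address; the user-data schema
# (role, poll_url) is a made-up placeholder.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/user-data"

def parse_user_data(raw: str) -> dict:
    """Assume user data is a small JSON blob naming the
    config-management endpoint this node should poll."""
    cfg = json.loads(raw)
    return {
        "role": cfg.get("role", "worker"),
        "poll_url": cfg["poll_url"],
    }

def bootstrap() -> dict:
    # Runs once at boot; afterwards the node polls poll_url for config.
    with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
        return parse_user_data(resp.read().decode())
```

The point is that the provider only has to pass an opaque blob at launch; everything else stays on the customer's side.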
"Sign in" doesn't have an option or link to "Sign up". From the docs I had to go back to the home page to find the sign-up.
Support payment methods other than credit card. PayPal etc. is nice in that I can create a balance without extending as much trust to an unknown party.
Don't get suckered into providing 1:1 IPv4 with your proxy model. If the product's successful you'll quickly discover that even a /16 is expensive, if obtainable at all. A 1:1 mapping with IPv6 is operationally plausible, and as a bonus you'll get PR for a "shiny" feature.
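Why IPv6 makes the 1:1 mapping plausible: a single /48 has 2^80 host addresses, so every instance can get a deterministic address of its own. A sketch, using a documentation prefix rather than a real allocation:

```python
# Sketch: deterministic 1:1 mapping from an instance id to an address
# inside a provider-owned IPv6 /48. 2001:db8::/32 is the reserved
# documentation range -- a real deployment would use its own block.
import hashlib
import ipaddress

PREFIX = ipaddress.ip_network("2001:db8:1::/48")

def instance_addr(instance_id: str) -> ipaddress.IPv6Address:
    # Hash the id into the 80 host bits below the /48 boundary.
    host_bits = int.from_bytes(
        hashlib.sha256(instance_id.encode()).digest()[:10], "big")
    return PREFIX.network_address + host_bits
```

Contrast with IPv4, where a /16 caps you at 65k instances total.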
Stay away from domU egress through the dom0's IPv4 for now. I guarantee you'll attract bad actors. A Chinese gambling/porn site hosted on those domUs is going to get DDoS'd all the livelong day, and if that takes out the dom0's internet connectivity you'll have an upset revenue-generating customer.
Terminate long-running domU instances. You can use this to shape customer expectations about the ephemeral nature of your product. Expose this to the instance owner the same way as a dom0 going offline. Try something like max(mean dom0 availability, 12 hours) + weighted random to get each domU's lifetime.
Provide a way to request & inspect network locality or latency. Three use cases here, I think. 1) Launch instances near $foo: decreased latency to a centralized endpoint, like a scheduler. 2) What is the location of $instance: I can determine the nearest S3 region for faster GETs/PUTs. 3) Launch instances within $n ms of each other: if instances share state or exchange messages during the compute phase, this can increase throughput.
They'd have to download your custom image, probably hundreds of megabytes or gigabytes, each time a new instance starts. That makes no sense over a WAN. It's much better for both contributors (less to download) and clients (less time wasted waiting) to download only your application stack.
It's not an EC2 alternative; this is a different kind of service.
I think you missed my point. Currently (AFAIK) the provider/dom0 must download and run a Windows executable. Many people have no Windows hosts but do have existing xen/kvm dom0s running. If grid spot provided their service as a xen/kvm client image, I would be able to host it.
There's no requirement for the host to download the client image multiple times. I'd imagine you'd have something like an ephemeral domU disk, and use kmods to provide a control plane and network tunnel.
> Many people have no windows hosts, but do have existing xen/kvm dom0s running. If grid spot provided their service as a xen/kvm client image I would be able to host it.
They should provide their image instead of the binary. Theirs, rather than their clients'. Got you now!
But I don't think their audience will have much use for it. It's unlikely to be a priority, anyway.
> Don't get suckered in to providing 1:1 IPv4 with your proxy model.
BTW, he doesn't. It's all proxied through a single IP on different ports. E.g. this is what I got: