3Blades offers a cloud-based service (Software as a Service, also known as SaaS) and an on-premises product. The cloud-based service lets customers delegate all infrastructure and application management to 3Blades. On-premises customers are those who prefer to manage their data and infrastructure in-house.
On-demand compute time is measured in GB per hour. Server sizes would be catalogued similarly to AWS EC2 instance types. For now, we can set upper limits on bandwidth and storage; GPU cores would be added later.
Enterprise support plans are available for both the Cloud and On-Premises versions. This support level includes:
2-hour response time SLA, 6 AM to 7 PM EST
Frequently Asked Questions:
How do you calculate pricing for server runtime?
We have a catalog of server sizes and each has its own pricing. Since our service is hosted on Amazon Web Services, our instance sizes closely reflect their instance types. We meter usage in GB per hour. For example, using a 1 GB server for 10 hours costs the same as using a 2 GB server for 5 hours.
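The GB-per-hour metering above can be sketched as a simple calculation. Note that the rate used here is a made-up placeholder for illustration, not an actual 3Blades price:

```python
# Hypothetical flat rate per GB-hour; actual prices vary by instance size.
RATE_PER_GB_HOUR = 0.01  # assumed example rate in USD

def runtime_cost(memory_gb, hours, rate=RATE_PER_GB_HOUR):
    """Cost is proportional to server memory size times runtime."""
    return memory_gb * hours * rate

# A 1 GB server for 10 hours costs the same as a 2 GB server for 5 hours:
assert runtime_cost(1, 10) == runtime_cost(2, 5)
```

This is why the two usage patterns in the example are billed identically: both consume 10 GB-hours.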
Are all features included with the Trial version?
Yes. However, server usage is limited to the stated trial period or $5 in server runtime credits, whichever comes first. For example, if the trial period is 14 days but you use $5 in server runtime within 7 days, the trial expires after 7 days.
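The "whichever comes first" rule can be expressed as a small check. The 14-day and $5 limits come from the example above; the function name is ours:

```python
def trial_expired(days_elapsed, credits_used, trial_days=14, credit_limit=5.00):
    """A trial ends at the stated period or at the runtime-credit cap,
    whichever limit is reached first."""
    return days_elapsed >= trial_days or credits_used >= credit_limit

# $5 consumed in 7 days ends a 14-day trial early:
assert trial_expired(7, 5.00)
# Moderate usage keeps the trial running until day 14:
assert not trial_expired(7, 3.50)
```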
Do you delete my data after the Trial period has ended?
No. We do not delete your data unless you deactivate your account. Once the Trial period has ended, you will be able to access your files but not start server instances to manage your data.
Under what circumstances do you cull idle servers?
Idle servers are those that have not shown any activity for a period of time. If your server shows no activity for 15 minutes, our system automatically stops it. We also remove servers that remain in the stopped state for more than 30 days.
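The lifecycle policy above can be sketched as follows. This is an illustrative model, not the actual culling implementation; state names and the `last_activity` convention (for stopped servers, the time they were stopped) are our assumptions:

```python
from datetime import datetime, timedelta

IDLE_TIMEOUT = timedelta(minutes=15)      # running servers with no activity
STOPPED_RETENTION = timedelta(days=30)    # stopped servers before removal

def next_action(state, last_activity, now):
    """Return the lifecycle action for a server under the stated policy.

    For running servers, last_activity is the last observed activity;
    for stopped servers, it is when the server entered the stopped state.
    """
    if state == "running" and now - last_activity >= IDLE_TIMEOUT:
        return "stop"
    if state == "stopped" and now - last_activity >= STOPPED_RETENTION:
        return "remove"
    return "keep"
```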
What GPU version do you support?
We currently support Nvidia Tesla K80 GPUs. If the framework you are using automatically detects Nvidia GPUs with CUDA drivers, then the framework will use the GPU resources available by default.
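One way to verify from your own code whether a CUDA-capable GPU is visible, before relying on a framework's automatic detection, is to probe the `nvidia-smi` utility that ships with Nvidia drivers. This is a best-effort sketch, not part of the 3Blades service:

```python
import shutil
import subprocess

def cuda_gpu_available():
    """Best-effort check for an Nvidia GPU by probing nvidia-smi.

    Returns False when the Nvidia driver tools are not installed
    or when nvidia-smi reports an error (e.g. no GPU present).
    """
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        result = subprocess.run(["nvidia-smi"], capture_output=True)
        return result.returncode == 0
    except OSError:
        return False
```

Frameworks with CUDA support typically perform an equivalent check internally and fall back to CPU when no GPU is found.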