Used by Data Scientists from 50+ countries
We offer virtual servers with various GPUs for Machine Learning.
NVIDIA Quadro, AMD Radeon Vega and Radeon RX instances are available.
Starting at $0.0992 per Hour
Reasons to try AMD GPUs
for Machine Learning research.
NVIDIA, which dominates the machine learning market, distributes its drivers under a proprietary license, which allows it to change the terms and conditions at will. In fact, NVIDIA changed the EULA covering GeForce/Titan cards to restrict data center deployment and commercial hosting.
On the other hand, AMD's driver stack, ROCm, which contains the kernel driver and runtime, and its machine learning library, MIOpen, are released under open source licenses. It is therefore unlikely that commercial use will be restricted, even for consumer products.
MIOpen, which runs on ROCm, supports two programming models: OpenCL and HIP. HIP, a component of ROCm, lets developers convert CUDA code to portable C++, so the same source can be compiled to run on either NVIDIA or AMD GPUs.
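As a minimal sketch of this portability, the kernel below is written once against the HIP API; compiled with hipcc it targets AMD GPUs under ROCm, and the same source also builds for NVIDIA GPUs through HIP's CUDA backend. It requires the ROCm/HIP toolchain and a GPU, so it is illustrative rather than something to run as-is.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Element-wise square kernel: identical syntax to a CUDA __global__ kernel,
// which is why hipify can translate CUDA sources almost mechanically.
__global__ void square(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}

int main() {
    const int n = 256;
    float host_in[n], host_out[n];
    for (int i = 0; i < n; ++i) host_in[i] = (float)i;

    // hipMalloc/hipMemcpy mirror cudaMalloc/cudaMemcpy one-to-one.
    float *dev_in, *dev_out;
    hipMalloc(&dev_in, n * sizeof(float));
    hipMalloc(&dev_out, n * sizeof(float));
    hipMemcpy(dev_in, host_in, n * sizeof(float), hipMemcpyHostToDevice);

    // Launch with 1 block of n threads; 0 shared memory, default stream.
    hipLaunchKernelGGL(square, dim3(1), dim3(n), 0, 0, dev_in, dev_out, n);

    hipMemcpy(host_out, dev_out, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("square(3) = %f\n", host_out[3]);
    hipFree(dev_in);
    hipFree(dev_out);
    return 0;
}
```

An existing CUDA file can be translated to this form with ROCm's hipify tools, after which one codebase covers both vendors.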
Also, we have verified that the major Deep Learning models run on AMD GPUs with TensorFlow 1.3, including Dense (fully connected) networks, CNN, RNN, LSTM, AlexNet, VGG, GoogLeNet, ResNet, YOLOv2, SSD, PSPNet, FCN, and ICNet.
Reasons to try NVIDIA Quadro GPUs
for Deep Learning Inference.
Currently, major cloud providers such as Amazon Web Services (AWS) offer instances based on NVIDIA Tesla GPUs, and using such services for Deep Learning Inference requires running them continuously, without stoppage. In the case of AWS, this costs approximately $658.80 per month (Linux on p2.xlarge), which can be prohibitive for small research facilities, universities, and research and development staff at smaller enterprises.
Our research, however, has shown that in Deep Learning Inference tasks such as Image Classification and Object Recognition, the CPU and the web API implementation are far more often the bottleneck than the GPU. We have therefore concluded that, in many cases, spreading the load across multiple machines that combine moderate CPUs with lower-specification GPUs is more cost-effective than relying on a single machine with expensive CPUs and GPUs.
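The multi-machine approach above can be sketched as a simple client-side balancer that rotates inference requests across several modest GPU nodes. The endpoint names and port are hypothetical, for illustration only; any real deployment would substitute its own serving addresses.

```python
import itertools


class RoundRobinBalancer:
    """Rotate inference requests across several modest GPU machines,
    instead of sending everything to one expensive machine.
    The endpoints here are assumed names, not a real deployment."""

    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def next_endpoint(self):
        # Return the next machine in round-robin order.
        return next(self._cycle)


balancer = RoundRobinBalancer([
    "http://gpu-node-1:8500",  # moderate CPU + lower-spec GPU (hypothetical)
    "http://gpu-node-2:8500",
    "http://gpu-node-3:8500",
])

# Four consecutive requests wrap around to the first node again.
print([balancer.next_endpoint() for _ in range(4)])
```

A production setup would typically put the same rotation behind a reverse proxy or load balancer, but the cost argument is identical: the per-request GPU work is cheap, so adding nodes scales the CPU/API bottleneck away.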