Accelerated computing instances come in three series:
P, G, and F series
Accelerated computing families use hardware accelerators, or co-processors, to perform functions such as floating-point calculations, graphics processing, or data pattern matching more efficiently than is possible in software running on CPUs.
* F1 Instances:
- F1 instances offer customisable hardware acceleration with field-programmable gate arrays (FPGAs).
- Each FPGA contains 2.5 million logic elements and 6,800 DSP engines.
- Designed to accelerate computationally intensive algorithms, such as data-flow or highly parallel operations.
- F1 provides local NVMe SSD storage.
vCPU - 8 to 64
RAM - 122 to 976 GB
Storage - NVMe SSD
Used in: genomics research, financial analysis, real-time video processing, and big data search.
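As a concrete illustration, here is a minimal sketch of launching the smallest F1 size with boto3. The AMI ID, key pair name, and region are placeholders for illustration only; in practice you would pick an FPGA developer AMI from the AWS Marketplace.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: substitute an FPGA Developer AMI ID
    InstanceType="f1.2xlarge",        # smallest F1 size: 8 vCPUs, 122 GB RAM, 1 FPGA
    KeyName="my-key-pair",            # placeholder: an existing EC2 key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```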
* P2 and P3 Instances:
- They use NVIDIA Tesla GPUs (K80 on P2, V100 on P3).
- They provide high-bandwidth networking.
- Up to 32 GB of memory per GPU, which makes them ideal for deep learning and computational fluid dynamics.
P2 Instances:
  vCPU - 4 to 64
  GPU - 1 to 16
  RAM - 61 to 732 GB
P3 Instances:
  vCPU - 8 to 96
  GPU - 1 to 8
  RAM - 61 to 768 GB
GPU RAM - 12 to 192 GB
Network bandwidth - up to 25 Gbps
Storage - SSD and EBS
Used in: machine learning, databases, seismic analysis, genomics, molecular modeling, AI, and deep learning.
Note: Both P2 and P3 support CUDA 9 and the OpenCL API.
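For a quick way to compare these sizes programmatically, the sketch below queries the EC2 DescribeInstanceTypes API with boto3. The instance type names, region, and response field names are assumptions based on the current public AWS API, not values from these notes.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

resp = ec2.describe_instance_types(
    InstanceTypes=["p2.xlarge", "p2.16xlarge", "p3.2xlarge", "p3.16xlarge"]
)
for it in resp["InstanceTypes"]:
    gpu = it["GpuInfo"]["Gpus"][0]
    print(
        it["InstanceType"],
        it["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
        it["MemoryInfo"]["SizeInMiB"] // 1024, "GiB RAM,",
        gpu["Count"], "x", gpu["Manufacturer"], gpu["Name"],
    )
```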
* G2 and G3 Instances:
- Optimised for graphics-intensive applications.
- Well suited for applications like 3D visualisation.
- G3 instances use NVIDIA Tesla M60 GPUs and provide a cost-effective, high-performance platform for graphics applications.
vCPU - 4 to 64
RAM - 30.5 to 488 GB
GPU - 1 to 4
Storage - NVMe SSD
Network performance - up to 25 Gbps
Used in: video creation services, 3D visualisation, and streaming graphics-intensive applications.
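GPU instance types are not offered in every Availability Zone, so before launching a G3 graphics workstation it can help to check where the sizes are available. The sketch below does this with boto3's DescribeInstanceTypeOfferings call; the region and the specific G3 size names are assumptions for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

resp = ec2.describe_instance_type_offerings(
    LocationType="availability-zone",
    Filters=[{"Name": "instance-type", "Values": ["g3s.xlarge", "g3.4xlarge"]}],
)
for offering in resp["InstanceTypeOfferings"]:
    print(offering["InstanceType"], "is available in", offering["Location"])
```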