
High Performance Computing on AWS

Project Overview

Project Detail

  1. Users deploy HPC cases with one of the AWS SDKs or the AWS Command Line Interface (AWS CLI), and can interface directly with the cluster through NICE DCV.

  2. Data is staged both to and from AWS with Amazon Simple Storage Service (Amazon S3). Amazon S3 offers low-cost, reliable storage and integrates directly with Amazon FSx for Lustre to provide fully managed, high-performance file storage.

  3. Serverless services manage case workflow. AWS Step Functions provides workflow management and orchestrates other services, such as serverless compute with AWS Lambda. AWS Systems Manager can be used for operational management of compute clusters.

  4. AWS ParallelCluster, AWS Batch, and custom-built clusters lie at the core of the HPC infrastructure, each with access to high-performance Amazon Elastic Compute Cloud (Amazon EC2) instances connected by a high-performance network with an optional Elastic Fabric Adapter. Cost optimization with Amazon EC2 is achieved through payment-model choice and environment right-sizing.

  5. Manage applications with a consistent, versioned, and repeatable framework. AWS Developer Tools accelerate software development. Installed software can be stored in containers or snapshots, depending on the compute cluster.
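Step 1's deployment path can be sketched as follows: a minimal Python sketch that composes the AWS ParallelCluster v3 CLI commands a user might run. The cluster name and config path are hypothetical placeholders.

```python
import shlex

# Hypothetical cluster name and config path -- substitute your own.
cluster = "hpc-demo"
config = "cluster-config.yaml"

# AWS ParallelCluster v3 CLI: create the cluster from a YAML config.
create_cmd = shlex.join(
    ["pcluster", "create-cluster",
     "--cluster-name", cluster,
     "--cluster-configuration", config])

# Open an interactive NICE DCV session on the cluster's head node.
dcv_cmd = shlex.join(["pcluster", "dcv-connect", "--cluster-name", cluster])

print(create_cmd)
print(dcv_cmd)
```

The same operations are also available through the AWS SDKs for teams that prefer programmatic deployment over the CLI.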
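For step 2's data staging, large case files are typically uploaded to Amazon S3 as multipart uploads, which are bounded by a 5 MiB minimum part size and a 10,000-part maximum. A quick sketch of choosing a part size (the 2 TiB file size is illustrative):

```python
import math

MIB = 1024 * 1024
MIN_PART = 5 * MIB       # S3 minimum multipart part size
MAX_PARTS = 10_000       # S3 maximum number of parts per upload

def choose_part_size(object_size: int) -> int:
    """Smallest valid part size that keeps the upload within 10,000 parts."""
    return max(MIN_PART, math.ceil(object_size / MAX_PARTS))

size = 2 * 1024**4  # a hypothetical 2 TiB simulation case file
part_size = choose_part_size(size)
parts = math.ceil(size / part_size)
print(f"part size: {part_size} bytes, parts: {parts}")
```

In practice the AWS CLI and SDK transfer managers pick a part size automatically; the arithmetic above just shows the constraint they work within.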
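Step 3's orchestration is expressed in Amazon States Language. A minimal sketch of a two-state machine that stages data and then submits the solver job; the Lambda ARNs are placeholders, not real resources:

```python
import json

# Hypothetical Lambda ARNs -- placeholders for illustration.
STAGE_ARN = "arn:aws:lambda:us-east-1:123456789012:function:stage-data"
SOLVE_ARN = "arn:aws:lambda:us-east-1:123456789012:function:submit-solver"

# Amazon States Language definition: stage input data, then run the solver.
state_machine = {
    "Comment": "Stage input data, then submit the solver job.",
    "StartAt": "StageData",
    "States": {
        "StageData": {"Type": "Task", "Resource": STAGE_ARN, "Next": "RunSolver"},
        "RunSolver": {"Type": "Task", "Resource": SOLVE_ARN, "End": True},
    },
}

definition = json.dumps(state_machine, indent=2)
print(definition)
```

This JSON is what would be passed to Step Functions when creating the state machine; error handling and retries would be added per state in a production workflow.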
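Step 4's cost levers (payment-model choice and right-sizing) amount to simple arithmetic. The hourly prices below are made-up placeholders, not actual EC2 pricing:

```python
# Hypothetical hourly prices (USD) -- placeholders, not actual EC2 pricing.
ON_DEMAND = 2.00
SPOT = 0.60

nodes, hours = 16, 48   # a hypothetical 16-node, 48-hour campaign

on_demand_cost = nodes * hours * ON_DEMAND
spot_cost = nodes * hours * SPOT
savings = 1 - spot_cost / on_demand_cost

print(f"on-demand: ${on_demand_cost:.2f}, spot: ${spot_cost:.2f}, "
      f"savings: {savings:.0%}")
```

Spot capacity can be reclaimed, so this trade-off suits fault-tolerant or checkpointed workloads; tightly coupled jobs often stay on On-Demand or Reserved capacity.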
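Step 5's consistent, versioned, repeatable packaging can be sketched by deriving an immutable container-image tag from the application version and source revision. The registry URI, application name, and commit hash are hypothetical:

```python
# Hypothetical registry, app, and git revision -- placeholders for illustration.
REGISTRY = "123456789012.dkr.ecr.us-east-1.amazonaws.com"
APP = "cfd-solver"
VERSION = "1.4.2"
GIT_SHA = "9fceb02d0ae598e95dc970b74767f19372d61af8"

def image_uri(registry: str, app: str, version: str, sha: str) -> str:
    """Immutable tag: semantic version plus short commit hash."""
    return f"{registry}/{app}:{version}-{sha[:7]}"

uri = image_uri(REGISTRY, APP, VERSION, GIT_SHA)
print(uri)
```

Tagging every build this way means any cluster, container-based or snapshot-based, can be rebuilt from a known artifact rather than from hand-installed software.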

https://docs.aws.amazon.com/architecture-diagrams/latest/high-performance-computing-on-aws/high-performance-computing-on-aws.html?did=wp_card&trk=wp_card

To know more about this project, connect with us.
