Watch the Intel ‘AI Everywhere’ launch event.

Cloud AI Solutions: Powered by Intel


Organizations are looking to mature their cloud strategies across private, public, and hybrid clouds plus the intelligent edge. Intel® architecture is a trusted foundation that provides what you need to build, scale, and transform. Our open source approach and broad ecosystem support help ensure Intel® technology works with what you already have for your constantly evolving AI needs.

Google + Intel – AI Solutions

Google Cloud offers customizable solutions based on the latest Intel technologies, designed to address security, compute, and memory requirements for the most demanding enterprise workloads and applications. Boost application performance and efficiency with Intel® Xeon® Scalable processors, available in the C3, C2, and N2 VMs.

Additional Intel Optimizations in Google Cloud:

Intel-based instances, including the C3, C2, and N2, have built-in Intel AI performance accelerators, including Intel AMX (C3 virtual machines only), Intel Deep Learning Boost, and Intel AVX-512. Please reference this guide on how to take advantage of Intel AI performance in Google Cloud.
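
As a quick way to confirm which of these instruction sets a given VM exposes, here is a minimal sketch that reads the CPU flags reported by the Linux kernel (flag names are as they appear in /proc/cpuinfo):

```python
# Minimal sketch: check which Intel AI instruction sets this VM exposes.
# Flag names are as reported by the Linux kernel in /proc/cpuinfo.
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("Intel AVX-512:         ", "avx512f" in flags)
print("Intel DL Boost (VNNI): ", "avx512_vnni" in flags)
print("Intel AMX (tile, bf16):", {"amx_tile", "amx_bf16"} <= flags)
```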

Google Cloud prebuilt Docker containers take advantage of Intel optimizations for TensorFlow, PyTorch, and XGBoost.

Vertex AI offers Intel Xeon-based instances that accelerate online and batch prediction and training, allowing you to build, tune, and deploy foundation models faster.
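
As a rough sketch of selecting an Intel Xeon-based machine type for Vertex AI custom training with the google-cloud-aiplatform SDK; the project, region, container image, and machine type shown are placeholder assumptions:

```python
from google.cloud import aiplatform

# Hypothetical project, region, and container image; the SDK calls are the
# standard google-cloud-aiplatform custom-training API.
aiplatform.init(project="my-project", location="us-central1")

job = aiplatform.CustomContainerTrainingJob(
    display_name="xeon-training-job",
    container_uri="gcr.io/my-project/trainer:latest",  # hypothetical image
)

# Select an Intel Xeon-based machine type for the training replicas.
job.run(machine_type="c2-standard-16", replica_count=1)
```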

Dataproc (Google Cloud’s managed Spark and Hadoop service) will soon support the ability to choose and configure 4th Gen Intel Xeon Scalable-based C3 instances with Intel AMX.
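
Once that support lands, choosing C3 machine types should follow the usual Dataproc cluster-configuration path. A minimal sketch with the google-cloud-dataproc Python client, where the project, region, and c3-standard-8 machine type are assumptions:

```python
from google.cloud import dataproc_v1

# Hypothetical project and region; machine_type_uri is where a C3 shape
# would be selected once Dataproc support is available.
region = "us-central1"
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": "my-project",
    "cluster_name": "amx-cluster",
    "config": {
        "master_config": {"num_instances": 1, "machine_type_uri": "c3-standard-8"},
        "worker_config": {"num_instances": 2, "machine_type_uri": "c3-standard-8"},
    },
}

operation = client.create_cluster(
    request={"project_id": "my-project", "region": region, "cluster": cluster}
)
operation.result()  # block until the cluster is provisioned
```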

C3 is the First GCP Instance with Intel’s Most Sustainable Data Center CPU.

Confidential Computing keeps data encrypted in memory, and elsewhere outside the CPU, while it is being processed. Since its first offering in 2018, Google has been a pioneer in making the technology widely available through its cloud.

Based on Intel’s 4th generation Xeon platform, H3 VM instances deliver higher performance and lower costs to HPC users. These improvements can enable HPC users to accelerate R&D while reducing costs.

AWS + Intel – AI Solutions

Intel and Amazon Web Services (AWS) are pleased to announce the general availability of Amazon Elastic Compute Cloud (Amazon EC2) M7i-flex and M7i instances. Both instance families are powered by custom 4th Gen Intel® Xeon® Scalable processors that bring Intel® Accelerator Engines to AWS’s expansive global footprint. The accelerator engines deliver increased performance, cost savings, and sustainability advantages for the biggest and fastest-growing workloads.

Additional Intel Optimizations in AWS:

Amazon Personalize and Amazon SageMaker use the latest Intel® Xeon® Scalable processors and AI optimizations through Intel AVX-512 and VNNI, along with Intel RL Coach.
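
For illustration, a minimal sketch of deploying a model onto an Intel Xeon Scalable-based SageMaker instance type with the SageMaker Python SDK; the S3 artifact, IAM role, and entry-point script are hypothetical:

```python
from sagemaker.pytorch import PyTorchModel

# Hypothetical S3 artifact, IAM role, and handler script; the SDK calls are
# the standard SageMaker Python SDK model-deployment API.
model = PyTorchModel(
    model_data="s3://my-bucket/model.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    entry_point="inference.py",
    framework_version="2.0",
    py_version="py310",
)

# Deploy onto an Intel Xeon Scalable-based instance type.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```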

AWS utilizes Intel hardware for its AWS DeepLens and AWS DeepRacer computer vision developer kits.

Deep learning services (including DeepComposer) run on Intel-based instances in SageMaker.

C6i, M5, M5n, T3, C5, C5n, R5n, and Z1d instances support acceleration via Intel AVX-512.

M5n, C5, and R5n instances support acceleration with VNNI.

Deep learning AMIs are pre-configured with Intel’s Math Kernel Library (MKL).
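
A quick way to confirm the MKL linkage in such an environment, assuming the AMI’s bundled NumPy build:

```python
import numpy as np

# On the Deep Learning AMIs the bundled NumPy is typically linked against
# Intel MKL; show_config() reports the BLAS/LAPACK backend in use.
np.show_config()
```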

C6i instances support oneDNN-optimized frameworks, such as TensorFlow.
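
A minimal sketch of making the oneDNN optimizations explicit in TensorFlow (they are already on by default in recent x86 Linux builds):

```python
import os

# oneDNN optimizations are on by default in recent x86 TensorFlow builds;
# setting the flag before import makes the choice explicit.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

# With oneDNN active, TensorFlow logs "oneDNN custom operations are on"
# at startup; ops such as matmul and conv then use oneDNN kernels.
print(tf.__version__)
```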

M7i instances are preferred for the majority of inferencing workloads and are also capable of serving medium-sized LLMs at sub-100 ms latency, as well as fine-tuning LLMs. The built-in Intel AMX accelerator is dedicated to speeding up these workloads.
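
As an illustrative sketch (a toy module rather than a full LLM pipeline), CPU bfloat16 autocast is the usual path by which PyTorch’s oneDNN backend can dispatch matmuls to AMX on 4th Gen Xeon:

```python
import torch

# Toy stand-in for an LLM block; on a 4th Gen Xeon instance such as M7i,
# bfloat16 matmuls can be dispatched by oneDNN to Intel AMX tiles.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 4096),
).eval()

x = torch.randn(8, 4096)
with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)
print(y.dtype)  # torch.bfloat16
```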

Azure + Intel – AI Solutions

Accelerate the end-to-end machine learning lifecycle with an enterprise-grade service that simplifies building, training, and deploying machine learning models. Build, train, and deploy models quickly and cost-effectively with Azure Machine Learning and powerful Intel CPU and FPGA compute resources. Create and accelerate model inferencing across Intel hardware using Azure Machine Learning and ONNX Runtime, an open-source project created by Microsoft and supported by Intel.

Additional Intel Optimizations in Azure:

Azure Machine Learning Services are powered by Intel® FPGAs.

Azure Stack Edge runs on Intel Xeon processors, Intel FPGAs, Movidius (Mini R), Intel SSDs, and the Intel® Distribution of OpenVINO™ toolkit.
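
For context, a minimal sketch of loading and compiling a model with the OpenVINO Runtime Python API; the model files are hypothetical:

```python
from openvino.runtime import Core

# Hypothetical IR model files; Core, read_model, and compile_model are the
# standard OpenVINO Runtime Python API.
core = Core()
model = core.read_model("model.xml")         # expects model.bin alongside
compiled = core.compile_model(model, "CPU")  # target an Intel CPU device
request = compiled.create_infer_request()
```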

Azure Confidential Computing utilizes Intel® Trust Domain Extensions and Intel® Software Guard Extensions for multi-party machine learning.

ONNX Runtime takes advantage of the Intel® Distribution of OpenVINO™ toolkit.
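
A minimal sketch of requesting the OpenVINO execution provider in ONNX Runtime (this requires the onnxruntime-openvino package; the model path is hypothetical):

```python
import onnxruntime as ort

# Requires the onnxruntime-openvino package; ONNX Runtime falls back to the
# default CPU provider if OpenVINO is unavailable. Model path is hypothetical.
session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())
```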

The Data Science Virtual Machine (DSVM) is optimized for Intel AVX-512 and includes a deep learning reference stack with the Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) and Intel-optimized TensorFlow and MXNet.
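
One way to check that AVX-512-optimized oneDNN kernels are actually in use, assuming a TensorFlow build with oneDNN enabled: oneDNN’s verbose mode logs each primitive execution along with the ISA it was compiled for.

```python
import os

os.environ["DNNL_VERBOSE"] = "1"  # must be set before oneDNN is loaded

import tensorflow as tf

# Each oneDNN primitive execution is logged to stdout, including the ISA
# used (e.g. avx512_core), confirming AVX-512-optimized kernels.
x = tf.random.normal([1, 32, 32, 3])
_ = tf.keras.layers.Conv2D(8, 3)(x)
```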

Intel® Developer Cloud

Intel® Developer Cloud is a service platform for developing and running AI workloads in Intel®-optimized deployment environments with the latest Intel® processors and performance-optimized software stacks.