High Performance Computing in CST STUDIO SUITE

Felix Wolfheimer
GPU Computing Performance
GPU computing performance has been improved in CST STUDIO SUITE 2014, as CPU and GPU resources are now used in parallel.
[Figure: Speedup of the solver loop vs. number of GPUs (Tesla K40, 0 to 4), CPU and GPU, for CST STUDIO SUITE 2013 and 2014.]
Promo offer for EUC participants: 25% discount on K40 cards.
Benchmark performed on a system equipped with dual Xeon E5-2630 v2 (Ivy Bridge EP) processors and four Tesla K40 cards. The model has 80 million mesh cells.
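How the solver overlaps CPU and GPU work internally is not shown in this material; purely as a generic illustration of the principle, the CuPy/NumPy sketch below (with made-up array sizes standing in for solver sub-tasks) launches a GPU computation asynchronously and lets the CPU work on its own share of the load before synchronizing.

```python
# Generic illustration of CPU/GPU overlap (not CST's internal implementation).
# Assumes CuPy, NumPy, and a CUDA-capable GPU are available.
import numpy as np
import cupy as cp

# Hypothetical workloads standing in for solver sub-tasks.
a_gpu = cp.random.random((4096, 4096), dtype=cp.float32)
b_gpu = cp.random.random((4096, 4096), dtype=cp.float32)
a_cpu = np.random.random((2048, 2048)).astype(np.float32)

# GPU kernel launches are asynchronous: control returns to the host immediately.
c_gpu = a_gpu @ b_gpu

# The CPU can process its own share of the workload while the GPU is busy.
c_cpu = a_cpu @ a_cpu.T

# Wait for the GPU result before using it further.
cp.cuda.Stream.null.synchronize()
print("GPU part:", c_gpu.shape, "CPU part:", c_cpu.shape)
```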
Typical GPU System Configurations
Entry level: workstation with 1 GPU card
 Available "off the shelf"
 Good acceleration for smaller models
 Limited model size (depends on available GPU memory and features used)

Professional level: workstation/server with multiple internal or external GPU cards
 Many configurations available
 Good acceleration for medium-size and large models
 Limited model size (depends on available GPU memory and features used)

Enterprise level: cluster system with high-speed interconnect
 High flexibility: can handle extremely large models using MPI Computing and also many parallel simulation tasks using Distributed Computing (DC)
 Administrative overhead
 Higher price
CST engineers are available to discuss with you which configuration makes sense for your applications and usage scenario.
MPI Computing — Area of Application
MPI Computing is a way to handle very large models efficiently
Some application examples for MPI Computing:
Electrically very large structures
(e.g. RCS calculation, lightning strike)
Extremely complex structures
(e.g. SI simulation for a full package)
MPI Computing — Working Principle
[Figure: The CST STUDIO SUITE® frontend connects to the MPI client nodes, which are linked by a high-speed/low-latency interconnection network (optional); the domain decomposition (subdomain boundaries) is shown in the mesh view.]
 Based on a domain decomposition of the simulation domain.
 Each cluster computer works on its part of the domain.
 Automatic load balancing ensures an equal distribution of the workload.
 It works cross-platform on Windows and Linux systems.
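CST STUDIO SUITE performs the decomposition and load balancing automatically; the mpi4py sketch below is only a conceptual illustration of the working principle, splitting a hypothetical 1D field array into one slab per MPI rank and exchanging the subdomain boundary (ghost) cells between neighbouring ranks.

```python
# Conceptual sketch of domain decomposition with ghost-cell exchange (mpi4py).
# This is not CST's implementation; it only illustrates the working principle.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N_TOTAL = 1_000_000                 # hypothetical total number of cells
n_local = N_TOTAL // size           # each rank owns one slab of the domain

# Local slab plus one ghost cell on each side for neighbour data.
field = np.zeros(n_local + 2, dtype=np.float64)
field[1:-1] = rank                  # dummy field values

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange subdomain boundary values with the neighbouring ranks.
comm.Sendrecv(field[1:2], dest=left, recvbuf=field[-1:], source=right)
comm.Sendrecv(field[-2:-1], dest=right, recvbuf=field[0:1], source=left)

# Each rank now updates its own slab (the solver loop would go here).
if rank == 0:
    print(f"{size} ranks, {n_local} cells per rank")
```

Run, for example, with mpirun -n 4 python decomposition_sketch.py (file name chosen here for illustration).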
MPI Matrix Computation
The performance of the matrix computation step has been improved significantly for the
new version of CST STUDIO SUITE.
Performance results (for two cluster nodes):*

Model      | Matrix Comp. Time/s (2013) | Matrix Comp. Time/s (2014) | Speedup (Matrix Comp.)** | Speedup (Total Sim.)**
340M cells | 10,301                     | 1,217                      | 8.46                     | 2.63
47M cells  | 12,921                     | 4,018                      | 3.22                     | 1.85
Matrix computation is single-threaded in case of MPI up to version 2013. Version 2014 uses all available cores on all cluster nodes.
* System configuration: compute nodes are equipped with dual eight-core Xeon E5-2650 processors, 4x K20 GPUs, and an InfiniBand FDR interconnect.
** Speedup between versions 2013 and 2014 of CST STUDIO SUITE.
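CST's matrix setup code is proprietary; purely as a generic sketch of the idea behind the 2014 improvement, the example below distributes an independent per-cell computation (a hypothetical compute_coefficient stand-in) over all cores of one node instead of running it in a single thread. In an MPI run, each cluster node would do the same for its own subdomain.

```python
# Illustration of parallelizing an independent per-cell computation over all
# cores of a node (hypothetical workload, not CST's matrix setup code).
from concurrent.futures import ProcessPoolExecutor
import math
import os

def compute_coefficient(cell_index: int) -> float:
    """Stand-in for the per-cell matrix coefficient computation."""
    return math.sin(cell_index) ** 2 + math.cos(cell_index) ** 2

def matrix_setup_serial(n_cells: int):
    # Pre-2014 behaviour in MPI runs: one thread per node.
    return [compute_coefficient(i) for i in range(n_cells)]

def matrix_setup_parallel(n_cells: int):
    # 2014 behaviour: use every available core on the node.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(compute_coefficient, range(n_cells), chunksize=10_000))

if __name__ == "__main__":
    coeffs = matrix_setup_parallel(1_000_000)
    print(len(coeffs), "coefficients computed")
```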
MPI Calculation Example
2 GHz blade antenna positioned on an aircraft
 Frequency: 2 GHz
 Aircraft dimensions: 17.4 x 4.5 x 16.2 m (116 x 30 x 108 λ, 375,840 λ³)
 660 million mesh cells
 4-node MPI cluster, 4 Tesla K20 GPUs on each node
 16 GPUs in total with 6 GB RAM each, at about 60% memory utilization
 Total memory: < 100 GB
Broadband calculation time ~ 4h
Sub-Volume Monitors
Sub-volume monitors allow field data to be recorded only in a region of interest, reducing the amount of stored data. This is especially important for large models with hundreds of millions of mesh cells.
[Figure: Field data is stored only in the sub-volume defined by the box.]
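The sub-volume itself is defined via the monitor settings in CST STUDIO SUITE; the NumPy sketch below (with made-up grid dimensions) only illustrates the storage saving obtained by keeping a box-shaped region of interest instead of the full domain.

```python
# Illustration of the data reduction achieved by a sub-volume monitor
# (hypothetical field array, not CST's storage format).
import numpy as np

# Full simulation domain: 3 field components on a 600 x 400 x 500 cell grid.
full_field = np.zeros((3, 600, 400, 500), dtype=np.float32)

# Box-shaped region of interest (cell index ranges along x, y, z).
box = (slice(None), slice(100, 220), slice(50, 150), slice(200, 320))
sub_field = full_field[box].copy()      # only this block would be stored

ratio = sub_field.nbytes / full_field.nbytes
print(f"Stored data: {sub_field.nbytes / 1e6:.1f} MB "
      f"({ratio:.1%} of a full-volume monitor)")
```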
Distributed Computing
The CST STUDIO SUITE® frontend submits independent "jobs", which are distributed to the DC solver servers. Jobs can be:
 port excitations*
 frequency points*
 parameter variations
 optimization iterations
* 2 in parallel included with the standard license
The DC Main Controller connects to the DC Solver Servers. Example:
 The model has 16 ports.
 Only 8 ports need to be computed if symmetry conditions are defined.
 The 8 simulation runs are distributed to different solver servers with GPU acceleration.
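The actual scheduling is handled by the DC Main Controller and the solver servers; as a conceptual sketch only, the example below distributes eight independent port-excitation runs over a small pool of workers, with a hypothetical run_port_simulation function standing in for one GPU-accelerated solver run.

```python
# Conceptual sketch of distributed computing over independent excitations.
# The real job distribution is done by the DC Main Controller / Solver Servers;
# run_port_simulation is a hypothetical stand-in for one solver run.
from concurrent.futures import ProcessPoolExecutor

def run_port_simulation(port: int) -> str:
    # In reality this would be a full transient solver run for one port,
    # typically executed on a GPU-accelerated DC solver server.
    return f"S-parameters for excitation at port {port}"

if __name__ == "__main__":
    ports = range(1, 9)  # 8 of the 16 ports, thanks to symmetry conditions
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(run_port_simulation, ports):
            print(result)
```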
DC Simulation Time Improvement
[Figure: Speedup of the total simulation time vs. number of DC solver servers (1, 2, 4, 8), for CPU-only runs and for runs with 1 GPU (Tesla 20) per solver server.]
Benchmark system: dual Intel Xeon X5675 CPUs (3.06 GHz), fastest memory configuration, 1 Tesla 20 GPU per node, 1 Gbit/s Ethernet interconnect, 40 million mesh cells.
DC Main Controller
The DC Main Controller gives you a complete overview of what is happening on your cluster: it shows the status of all jobs and of all machines. Essential resources (RAM usage and disk space) are also monitored in the 2014 version.
GPU Assignment
Users who have smaller jobs can start multiple solver servers and assign each GPU to a separate server. This allows for a more efficient use of multi-GPU hardware.
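The GPU-to-server assignment is configured in the DC solver server settings; as a generic illustration of the underlying idea, the sketch below starts one worker process per GPU and restricts each process to a single device via the standard CUDA_VISIBLE_DEVICES environment variable (the solver command shown is a placeholder).

```python
# Generic illustration of dedicating one GPU to each of several worker
# processes via CUDA_VISIBLE_DEVICES (the solver command is hypothetical).
import os
import subprocess

NUM_GPUS = 4
SOLVER_CMD = ["./run_small_job.sh"]   # placeholder for one small solver job

processes = []
for gpu_id in range(NUM_GPUS):
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)   # this worker sees only one GPU
    processes.append(subprocess.Popen(SOLVER_CMD, env=env))

for p in processes:
    p.wait()   # wait until all small jobs have finished
```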
Supported Acceleration Methods
Acceleration methods supported by the solvers of CST STUDIO SUITE.
[Table: per-solver support for Multithreading, GPU Computing (on one GPU card where noted), Distributed Computing, and MPI Computing.]
Most other solvers support Multithreading and Distributed Computing for parameter sweeps and
optimization.
Choose the Right Acceleration Method
Solver                       | Model Size                            | Number of Simulations | Acceleration Technique
Transient                    | below memory limit of GPU hardware    | low                   | GPU Computing
Transient                    | below memory limit of GPU hardware    | medium/high           | GPU Computing on a DC Cluster (Distributed Excitations)
Transient                    | above memory limit of GPU hardware    | -                     | MPI or combined MPI+GPU Computing
Frequency Domain             | can be handled by a single machine    | medium/high           | Distributed Computing (Distributed Frequency Points)
Integral Equation            | can't be handled by a single machine  | -                     | MPI Computing
Integral Equation            | can be handled by a single machine    | medium/high           | Distributed Computing (Distributed Frequency Points)
Parameter Sweep/Optimization | n/a                                   | medium/high           | Distributed Computing
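The table can be read as a simple decision rule; the sketch below encodes it as a plain lookup (keys and strings are chosen here for illustration and are not a CST API).

```python
# The selection table above, encoded directly as a lookup
# (dictionary keys and strings are illustrative, not a CST API).
RECOMMENDATION = {
    ("Transient", "below GPU memory limit", "low"):
        "GPU Computing",
    ("Transient", "below GPU memory limit", "medium/high"):
        "GPU Computing on a DC Cluster (Distributed Excitations)",
    ("Transient", "above GPU memory limit", "any"):
        "MPI or combined MPI+GPU Computing",
    ("Frequency Domain", "fits on a single machine", "medium/high"):
        "Distributed Computing (Distributed Frequency Points)",
    ("Integral Equation", "too large for a single machine", "any"):
        "MPI Computing",
    ("Integral Equation", "fits on a single machine", "medium/high"):
        "Distributed Computing (Distributed Frequency Points)",
    ("Parameter Sweep/Optimization", "n/a", "medium/high"):
        "Distributed Computing",
}

print(RECOMMENDATION[("Transient", "above GPU memory limit", "any")])
```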
HPC in the Cloud
CST is working together with HPC hardware and service providers to enable easy
access to large computing power for challenging simulations which can't be run
on in-house hardware.
Users rent a CST license for the resources they need and pay the HPC provider
for the required hardware.
[Logos: currently supported HPC system providers hosting CST STUDIO SUITE.]
More information can be found in the HPC section of our website:
https://www.cst.com/Products/HPC/Cloud-Computing
HPC Hardware Design Process
A general hardware recommendation is available on our website which helps you to
configure standard systems (e.g. workstations) for CST STUDIO SUITE.
For HPC systems (multi-GPU systems, clusters) our hardware experts are available to guide
you through the whole process of system design and benchmarking to ensure that your new
system is compatible with CST STUDIO SUITE and delivers the expected performance.
HPC System Design Process
1. Personal contact with CST engineers to design the solution.
2. Benchmarking of the designed computing solution in the hardware test center of the preferred vendor.
3. Buy the machine if it fulfills your expectations.