
GRANIT (Global Russian Advanced Networking Initiative) Testbed
AGENDA
1. Challenges & Requirements
2. Software Defined Infrastructure
3. Building GRANIT Testbed
4. GRANIT Use Cases
Challenges & Requirements
Modern scientific research

Challenges:
• Infrastructure heterogeneity
• Data processing
• Distributed science
• …

Requirements:
• Deeply reconfigurable
• Economical
• Easy formation of collaborations
• “Friction-free” movement of data
• Connecting experimental devices
HPC (GRID)
[Figure: three-level GRID model. Application level: agents, brokers, “deals”. Middleware level. Resource level: compute nodes, storage nodes, data nodes, network. Open questions: network configuration? SLA guarantees?]
DC interaction: traditional model
[Figure: two data centers (Data center 1, Data center 2), each with VMs (VM 1-2, VM 3-4) on hypervisors behind virtual and physical switches; the data centers are linked by a tunnel. Open question: which tunneling technology to use?]

DC interaction: SDN controller
[Figure: the same two data centers, now with SDN switches programmed by a central SDN controller, plus a service manager per data center. Open question: how to handle slice intersection?]

DC interaction: net virtualization
[Figure: a network virtualization controller coordinates the service managers of both data centers; traffic between the virtual switches still crosses the inter-DC tunnel.]
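The net-virtualization picture above can be sketched as a toy controller that installs per-VM flow rules: traffic to a local VM goes out its local port, traffic to a VM in the other data center is steered into the tunnel port. All class, port, and MAC names here are illustrative assumptions, not a real SDN controller API.

```python
# Toy model of the "net virtualization" slide: a central controller maps
# destination VM MACs to local ports or to the inter-DC tunnel port.
# All names are illustrative, not a real SDN controller API.

TUNNEL_PORT = 99  # assumed port on each virtual switch leading to the other DC

class VirtualSwitch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # dst MAC -> output port

    def install_flow(self, dst_mac, out_port):
        self.flow_table[dst_mac] = out_port

    def forward(self, dst_mac):
        # unknown destinations fall back to flooding, as a real switch would
        return self.flow_table.get(dst_mac, "flood")

class NetVirtController:
    """Central controller: knows which data center hosts which VM."""
    def __init__(self):
        self.location = {}  # MAC -> (dc_name, local_port)

    def register_vm(self, mac, dc, port):
        self.location[mac] = (dc, port)

    def program(self, switches):
        # switches: dc_name -> VirtualSwitch
        for mac, (dc, port) in self.location.items():
            for sw_dc, sw in switches.items():
                # local VM: deliver on its own port; remote VM: send to tunnel
                sw.install_flow(mac, port if sw_dc == dc else TUNNEL_PORT)

if __name__ == "__main__":
    ctrl = NetVirtController()
    sw = {"dc1": VirtualSwitch("dc1"), "dc2": VirtualSwitch("dc2")}
    ctrl.register_vm("aa:aa", dc="dc1", port=1)   # e.g. VM 1
    ctrl.register_vm("bb:bb", dc="dc2", port=2)   # e.g. VM 3
    ctrl.program(sw)
    print(sw["dc1"].forward("bb:bb"))  # remote VM -> tunnel port
```

The point of the sketch is the design choice the slides argue for: one controller with global VM-location knowledge programs every switch, instead of each data center managing its own tunnels.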
SDI
Software Defined Infrastructure = Software Defined Networks + Federation
• Consistent policies among federates
• Virtualization & scalability
GRANIT federation
[Figure: GRANIT federating with GEANT, GENI, and FIRE.]

GRANIT sites
[Map: Moscow, Saint Petersburg, Nizhny Novgorod, Yaroslavl, Rostov-on-Don, Orenburg, Tomsk.]
GRANIT Workflow
[Figure: a text description of an experiment goes through the GRANIT GUI into the GRANIT Experiment Description Format; ORCA and OpenStack provision VMs, VM post-scripts (NEuca/ORCA) configure them, and monitoring tools collect the outputs (publications, graphics, documents, tables, numbers) that feed back into research and testing.]
Stage 1: Define experiment
• VM instances
• Network capacity
• Preconfigured apps list
• Topology
• Apps post-script preparation

Stage 2: Provision the resources
• Check VM template status
• Create VM instances
• Program Data switch

Stage 3: Launch experiment
• Launch VMs
• Control Data switch via SDN controller
• Execute apps post-scripts
• Give user access to all VMs

Stage 4: Analyze the results
• Get data from VMs
• Save data for future use
• Visualize data
• Analyze data
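The four stages above can be sketched as one driver object. `Testbed` and its methods are hypothetical stand-ins for the GRANIT GUI, ORCA, and OpenStack services, not their real APIs.

```python
# Sketch of the four-stage GRANIT experiment workflow as plain methods.
# "Testbed" is an assumed facade over the real provisioning services.

class Testbed:
    def __init__(self):
        self.vms, self.flows, self.results = [], [], []

    # Stage 1: define experiment (VM instances, network capacity, apps, topology)
    def define(self, n_vms, capacity_mbps, apps):
        return {"vms": n_vms, "capacity": capacity_mbps, "apps": apps}

    # Stage 2: provision (check templates, create VMs, program the Data switch)
    def provision(self, exp):
        self.vms = [f"vm{i}" for i in range(exp["vms"])]
        self.flows.append(("data-switch", exp["capacity"]))

    # Stage 3: launch (boot VMs, run app post-scripts on each one)
    def launch(self, exp):
        for vm in self.vms:
            self.results.append((vm, [f"ran {a}" for a in exp["apps"]]))

    # Stage 4: analyze (pull the per-VM data back out)
    def analyze(self):
        return {vm: out for vm, out in self.results}

if __name__ == "__main__":
    tb = Testbed()
    exp = tb.define(n_vms=2, capacity_mbps=1000, apps=["iperf"])
    tb.provision(exp)
    tb.launch(exp)
    print(tb.analyze())
```

Each stage consumes only what the previous one produced, which is why the slides can present them as a strict pipeline.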
GRANIT Rack
• Management Switch: allows access to the local network provider
• Head Node: runs ORCA, OpenStack, and xCAT to provision VMs
• Worker Nodes 1…n: provide OpenStack virtualized instances
• Storage Node: provides iSCSI NAS
• Data Switch: carries experimental traffic via Layer 2
GRANIT IBM Rack (hardware)
• Management Switch: 48 × 1 GbE RJ45 ports and 4 × 10 GbE SFP+ ports
• Head Node and Worker Nodes 1…n (each): Intel Xeon E5-2640 v2 (8C, 2.0 GHz, 20 MB cache, 1600 MHz) × 2 (16 cores); 8 GB ECC DDR3 1600 MHz × 8; Intel x520 dual-port 10 GbE SFP+ adapter × 2; 128 GB SATA 2.5" × 2; 300 GB 10K 6 Gbps SAS 2.5" SFF G2HS HDD
• Storage Node: 2 TB 7.2K 6 Gbps NL SATA 3.5" × 6
• Data Switch: 48 × 10 GbE SFP+ ports and 4 × 40 GbE QSFP+ ports in 1U
ORCA: Federated Orchestration, Authorization and Provisioning
• Open substrate
• Dynamic contracts
• Off-the-shelf back-ends
• Federated coordination
• Provider autonomy
• Resource visibility
• Multiple deployment scenarios
[Figure: ticket flow between User, Controller, ORCA Slice manager, ORCA Broker, and ORCA Aggregate manager: 1. delegate tickets; 2. request slice; 3-4. request tickets/resources; 5. provide tickets; 6-7. redeem tickets; 8. return leases.]
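A minimal simulation of the ticket/lease handshake in the figure, assuming the numbered arrows mean: aggregates delegate tickets to a broker; a slice manager obtains tickets for a user's slice request and redeems them at the aggregate, which returns leases. All class names are illustrative, not the real ORCA codebase.

```python
# Toy simulation of the ORCA ticket/lease handshake sketched in the figure.
# Names and message shapes are assumptions for illustration only.

class Broker:
    def __init__(self):
        self.tickets = []           # tickets delegated by aggregates

    def delegate(self, ticket):     # step 1: aggregate delegates tickets
        self.tickets.append(ticket)

    def provide(self, n):           # steps 3-5: request/provide tickets
        granted, self.tickets = self.tickets[:n], self.tickets[n:]
        return granted

class AggregateManager:
    def __init__(self, capacity):
        self.capacity = capacity

    def advertise(self, broker):    # hand the broker one ticket per unit
        for i in range(self.capacity):
            broker.delegate(("ticket", i))

    def redeem(self, tickets):      # steps 6-8: redeem tickets, return leases
        return [("lease",) + t[1:] for t in tickets]

class SliceManager:
    def request_slice(self, broker, aggregate, n):  # step 2: user's request
        tickets = broker.provide(n)
        return aggregate.redeem(tickets)

if __name__ == "__main__":
    broker, am = Broker(), AggregateManager(capacity=4)
    am.advertise(broker)
    leases = SliceManager().request_slice(broker, am, n=2)
    print(leases)
```

The split matters for provider autonomy: the broker only hands out promises (tickets), while the aggregate keeps the final say by converting tickets into leases.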
GRANIT Stack
• Hardware level: Management Switch, Head Node, Worker Nodes 1…n, Storage Node, Data Switch
• System level: OpenStack
• Software-Defined level: ORCA, NIaaS, SOC, RUNOS Controller
• Interface level: EC2 API, GENI API, OCCI API, SOC API
GRANIT FUTURE
[Figure: federation of ORCA/SOC deployments in Russia, the USA, and the rest of the world. The GRANIT rack (Management Switch, Head Node, Worker Nodes 1…n, Storage Node, Data Switch) connects via ORCA interfaces to local computational resources, specialized devices, and High Performance Computational (HPC) resources such as the Lomonosov and Chebyshev supercomputers.]
ORCA: Open Resource Control Architecture
GRANIT Resources
• Bare-Metal Resources
• …?
[Figure: GRANIT running over the RUNNET and RTK networks.]
Q&A
THANK YOU!