Abstract

Data centers face significant performance challenges when parallelizing network I/O processing in virtualized environments, particularly for latency-critical (LC) workloads that must satisfy strict Service Level Objectives (SLOs). While previous studies have addressed performance challenges in network I/O virtualization, they overlook the impact of excessive parallelism on the performance of Virtual Machines (VMs). We observe that excessive parallelization of VMs and network I/O processing can lead to core oversubscription, resulting in severe resource contention, frequent preemptions, and task migrations. Based on these observations, we propose vSPACE, a dynamic core management scheme designed to efficiently support parallel network I/O processing in virtualized environments. To reduce scheduling contention, vSPACE creates distinct core allocation groups for VMs and network I/O processing and assigns dedicated cores to each. It then dynamically adjusts the number of allocated cores to enforce an appropriate degree of parallelism for VMs and network I/O processing under varying demands. vSPACE employs continuous monitoring and a heuristic algorithm to periodically determine appropriate core allocations, mitigating excessive contention and improving energy and resource efficiency. vSPACE operates in three modes: performance improvement, energy efficiency, and resource efficiency. Our evaluations demonstrate that vSPACE enhances throughput by up to 4.2× compared to existing core allocation approaches and improves energy and resource efficiency by up to 16.5% and 30.5%, respectively.
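The core-partitioning idea summarized above can be illustrated with a minimal sketch. The snippet below is not vSPACE's actual algorithm; it only shows, under assumed placeholder thread IDs and a toy load-proportional heuristic, how VM vCPU threads and network I/O (e.g., vhost) threads could be pinned to disjoint core groups on Linux and how the split could be re-evaluated periodically.

```python
import os
import time

# Placeholder thread IDs; in practice these would be discovered from the
# hypervisor (e.g., QEMU vCPU threads) and the kernel (e.g., vhost threads).
VM_TIDS = [1234, 1235]   # hypothetical vCPU thread IDs
IO_TIDS = [2345]         # hypothetical network I/O (vhost) thread IDs

def pin(tids, cores):
    """Pin each thread to the given core set; skip threads that do not exist."""
    for tid in tids:
        try:
            os.sched_setaffinity(tid, cores)
        except (ProcessLookupError, PermissionError):
            print(f"could not pin thread {tid} to cores {sorted(cores)}")

def rebalance(total_cores, io_load):
    """Toy heuristic: size the I/O core group in proportion to I/O load,
    keeping the VM and I/O groups disjoint to avoid scheduling contention."""
    io_count = max(1, min(total_cores - 1, round(io_load * total_cores)))
    io_cores = set(range(total_cores - io_count, total_cores))
    vm_cores = set(range(total_cores)) - io_cores
    pin(VM_TIDS, vm_cores)
    pin(IO_TIDS, io_cores)

if __name__ == "__main__":
    while True:
        io_load = 0.25       # placeholder; a real system would measure this
        rebalance(total_cores=os.cpu_count(), io_load=io_load)
        time.sleep(1)        # periodically re-evaluate the allocation
```

A production system would more likely manage such groups through cgroup cpusets and drive the rebalancing decision from monitored SLO and utilization metrics rather than a fixed placeholder load value.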

Keywords

Dynamic core management, Parallel network packet processing, Virtualized environments, Latency-critical workloads, Service Level Objectives, Energy efficiency, Data center performance.

Related Research Topics

Cloud Computing & Applications Optimizations

Power/Resource Management for Energy Efficiency of Data-center Servers