In modern infrastructure, storage is as critical as compute and networking. Applications, whether traditional VMs or cloud-native containers, depend on reliable, performant, and consistent storage.
I’ll focus on three common approaches to storage in mixed Kubernetes + hypervisor environments:
- Hypervisor-based storage (VM-centric, provided by the virtualization platform).
- Kubernetes-managed storage (container-centric, via CSI plugins).
- Integrated/hybrid storage models (shared storage pools accessible to both VMs and containers).
Note: These are general architectural patterns, not tied to any single hypervisor or storage vendor. Real-world implementations may vary.
## 1. Hypervisor-Based Storage
In this model, storage is managed at the hypervisor layer.
- VMs consume storage as virtual disks.
- Kubernetes nodes (running in VMs) see these as block devices.
- Persistent storage for pods depends on attaching volumes through the VM layer.
Diagram:
```mermaid
flowchart TB
    classDef storage fill:#FFD580,stroke:#333,stroke-width:1px;
    classDef vm fill:#ADD8E6,stroke:#333,stroke-width:2px;
    classDef pod fill:#90EE90,stroke:#333,stroke-width:1px;
    A["Physical Server"] --> B["Hypervisor"]
    B --> C["VM: K8s Node"]
    C --> P1["Pod: App"]
    B --> S["Hypervisor Storage"]
    S -.-> C
    C -.-> P1
    class S storage;
    class C vm;
    class P1 pod;
```
Figure 1: Hypervisor provides primary storage. Pods consume volumes indirectly via VM disks.
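To make the indirect path concrete, here is a minimal sketch of publishing a hypervisor-provided virtual disk to Kubernetes as a node-local PersistentVolume, using the official `kubernetes` Python client. All names, paths, and sizes (`vm-disk-pv`, `/mnt/disks/vm-disk-1`, `k8s-node-1`, `vm-disks`) are illustrative assumptions, not values from any specific platform.

```python
# Sketch: publish a hypervisor-provided virtual disk (formatted and mounted
# inside the VM at /mnt/disks/vm-disk-1) as a node-local PersistentVolume.
# All names, paths, and sizes below are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "vm-disk-pv"},
    "spec": {
        "capacity": {"storage": "50Gi"},
        "accessModes": ["ReadWriteOnce"],
        "persistentVolumeReclaimPolicy": "Retain",
        "storageClassName": "vm-disks",  # hypothetical class name
        "local": {"path": "/mnt/disks/vm-disk-1"},  # the VM's virtual disk
        # Local volumes must be pinned to the node (VM) that owns the disk.
        "nodeAffinity": {
            "required": {
                "nodeSelectorTerms": [{
                    "matchExpressions": [{
                        "key": "kubernetes.io/hostname",
                        "operator": "In",
                        "values": ["k8s-node-1"],  # hypothetical node name
                    }]
                }]
            }
        },
    },
}

client.CoreV1Api().create_persistent_volume(body=pv)
```

The dict mirrors the YAML manifest you would otherwise `kubectl apply`; the node affinity is what ties the volume to the one VM whose virtual disk backs it.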
Pros:
- Mature, enterprise-grade features (snapshots, replication, HA).
- Existing investments in hypervisor storage can be reused.
Cons:
- Indirect path for containers (pod → VM → hypervisor → storage).
- Limited Kubernetes-native flexibility.
## 2. Kubernetes-Managed Storage
Here, Kubernetes manages storage directly using the Container Storage Interface (CSI).
- Pods request storage via PersistentVolumeClaims (PVCs).
- Backed by CSI drivers for different storage systems.
- Kubernetes abstracts the underlying system, making it container-first.
- See the CSI overview for more details.
Diagram:
```mermaid
flowchart TB
    classDef storage fill:#FFD580,stroke:#333,stroke-width:1px;
    classDef node fill:#ADD8E6,stroke:#333,stroke-width:2px;
    classDef pod fill:#90EE90,stroke:#333,stroke-width:1px;
    X["K8s Node"] --> P["Pod: App"]
    X --> CSI["CSI Driver"]
    CSI --> DS["Storage System"]
    class X node;
    class P pod;
    class DS storage;
```
Figure 2: Kubernetes manages storage directly using CSI plugins. Pods request PersistentVolumes via PVCs.
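As a sketch of this workflow, the snippet below creates a PVC with the official Python client and lets the CSI driver behind the named StorageClass dynamically provision the backing volume. The class name `csi-fast` and claim name `app-data` are placeholders, not real driver values.

```python
# Sketch: request storage the Kubernetes-native way via a PVC.
# The CSI driver registered behind the StorageClass provisions the volume
# on demand; "csi-fast" and "app-data" are placeholder names.
from kubernetes import client, config

config.load_kube_config()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "csi-fast",  # placeholder StorageClass
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
# A pod then mounts the claim by name via
# spec.volumes[].persistentVolumeClaim.claimName = "app-data".
```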
Pros:
- Kubernetes-native workflows (PVCs, dynamic provisioning).
- Portable across environments.
- Scales with the cluster.
Cons:
- Requires CSI integration.
- Features (e.g., snapshots, encryption) depend on the storage backend; see the snapshot sketch below.
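As an example of that backend dependency, volume snapshots only work when the CSI driver implements snapshot support and the external-snapshotter CRDs are installed in the cluster. A minimal sketch, assuming a VolumeSnapshotClass named `csi-snapclass` exists and the `app-data` PVC from above is in place:

```python
# Sketch: snapshot a PVC via the VolumeSnapshot CRD (snapshot.storage.k8s.io).
# Works only if the CSI driver supports snapshots and the external-snapshotter
# CRDs are installed; "csi-snapclass" and "app-data" are assumptions.
from kubernetes import client, config

config.load_kube_config()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "app-data-snap"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",
        "source": {"persistentVolumeClaimName": "app-data"},
    },
}

# VolumeSnapshot is a CRD, so it goes through the custom objects API.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```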
## 3. Integrated / Hybrid Storage Models
Some environments expose shared storage pools to both VMs and Kubernetes.
- Hypervisor workloads and container workloads use the same underlying storage system.
- Example: hypervisor storage made available to Kubernetes through a CSI driver.
Diagram:
```mermaid
flowchart TB
    classDef storage fill:#FFD580,stroke:#333,stroke-width:1px;
    classDef vm fill:#ADD8E6,stroke:#333,stroke-width:2px;
    classDef pod fill:#90EE90,stroke:#333,stroke-width:1px;
    A["Physical Server"] --> H["Hypervisor"]
    H --> VM["VM: App"]
    H --> K["K8s Node"]
    K --> P["Pod: App"]
    H --> S["Shared Storage Pool"]
    S -.-> VM
    S -.-> K
    K -.-> P
    class S storage;
    class VM,K vm;
    class P pod;
```
Figure 3: Shared storage pool used by both VMs and Kubernetes pods.
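A hedged sketch of the CSI-driver example above: register a StorageClass that points at the shared pool, then let pods claim from the same storage the hypervisor uses for VM disks. The provisioner name `pool.csi.example.com` and its parameters are hypothetical; substitute whatever your vendor's CSI driver documents.

```python
# Sketch: make a shared hypervisor storage pool consumable from Kubernetes
# through a CSI driver. The provisioner name and parameters are hypothetical.
from kubernetes import client, config

config.load_kube_config()

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "shared-pool"},
    "provisioner": "pool.csi.example.com",   # hypothetical CSI driver
    "parameters": {"pool": "datastore-01"},  # hypothetical pool selector
    "reclaimPolicy": "Delete",
    # Delay binding until a pod is scheduled, so the volume lands on
    # storage reachable from that pod's node.
    "volumeBindingMode": "WaitForFirstConsumer",
}
client.StorageV1Api().create_storage_class(body=storage_class)

# Pods then claim from the same pool the hypervisor uses for VM disks.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "shared-app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "shared-pool",
        "resources": {"requests": {"storage": "20Gi"}},
    },
}
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```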
Pros:
- Unified storage strategy for VMs and containers.
- Simplifies operations.
- Easier data mobility across environments.
Cons:
- More complex integration.
- May require specific vendor solutions.
## Storage Comparison Diagram
To summarize, here’s a visual comparison of how storage flows in each model:
```mermaid
flowchart LR
    classDef storage fill:#FFD580,stroke:#333,stroke-width:1px;
    classDef vm fill:#ADD8E6,stroke:#333,stroke-width:2px;
    classDef pod fill:#90EE90,stroke:#333,stroke-width:1px;
    O1H["Hypervisor"] --> O1VM["VM: K8s Node"]
    O1VM --> O1P["Pod: App"]
    O1H --> O1S["Hypervisor Storage"]
    O1S -.-> O1VM
    O1VM -.-> O1P
    O2N["K8s Node"] --> O2P["Pod: App"]
    O2N --> O2C["CSI Driver"]
    O2C --> O2S["Storage System"]
    O3H["Hypervisor"] --> O3VM["VM: App"]
    O3H --> O3K["K8s Node"]
    O3K --> O3P["Pod: App"]
    O3H --> O3S["Shared Storage Pool"]
    O3S -.-> O3VM
    O3S -.-> O3K
    O3K -.-> O3P
    class O1S,O2S,O3S storage;
    class O1VM,O3VM,O2N,O3K vm;
    class O1P,O2P,O3P pod;
```
Figure 4: Comparison of storage models across hypervisor-based, Kubernetes-managed, and hybrid environments.
## Performance Comparison
| Option | Latency | Throughput | Scalability |
|---|---|---|---|
| 1. Hypervisor-based | Higher (extra VM layer) | Good for VM-centric workloads | Scales with hypervisor cluster |
| 2. Kubernetes-managed (CSI) | Lower (more direct path to storage) | Good; no VM disk layer in the I/O path | Highly scalable with CSI drivers |
| 3. Integrated/Hybrid | Medium (shared layer adds some overhead) | Balanced across VMs and pods | Scales with both hypervisor and Kubernetes |
## Choosing the Right Storage Model
| Option | Description | Pros | Cons |
|---|---|---|---|
| 1. Hypervisor-based | VMs own storage; pods consume via VM disks | Mature features, reuse existing storage | Indirect for pods, less flexible |
| 2. Kubernetes-managed (CSI) | Kubernetes directly provisions storage | Native integration, portable | Depends on CSI backend, may lack enterprise features |
| 3. Integrated/Hybrid | Shared pool for VMs and pods | Unified strategy, easier mobility | Complex integration, vendor lock-in |
## Workload Considerations
- VM-heavy workloads: Hypervisor-based or hybrid models may be more efficient.
- Container-first workloads: Kubernetes-managed CSI storage is often best.
- Mixed workloads: Hybrid approaches provide the most flexibility.
## Summary
Storage design in Kubernetes + hypervisor environments is about balancing maturity, flexibility, and integration:
- Hypervisor storage is stable and feature-rich but less Kubernetes-native.
- Kubernetes CSI offers portability and container-first workflows (CSI overview).
- Hybrid models unify the storage plane for both worlds but can add complexity.
As with compute and networking, the right storage model depends on your workload priorities and operational requirements.

