vHost

The Linux SCSI Target Wiki


Revision as of 04:50, 10 June 2013

LIO Linux SCSI Target
vHost fabric module
Original author(s): Nicholas Bellinger, Stefan Hajnoczi
Developer(s): Datera, Inc.
Preview release: 4.2.0-rc5 (July 30, 2012)
Development status: Production
Written in: C
Operating system: Linux
Type: Fabric module
License: GNU General Public License
Website: datera.io
See Target for a complete overview of all fabric modules.

vHost provides a very high performance local SCSI target for Linux KVM guests.

RTS vHost architecture and data processing.
RTS OS self-hosted with a CentOS guest.

Overview

The RTS vHost fabric module implements I/O processing based on the Linux virtio mechanism, providing virtually bare-metal local storage performance for KVM guests. Currently only Linux guest VMs are supported; Windows support is under development via a virtual LSI MegaRAID SAS driver.
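As background on the virtio mechanism mentioned above, guest and host exchange I/O through shared descriptor rings. A minimal sketch of the virtio ring descriptor layout (field layout and flag values per the virtio specification; the addresses and helper name here are illustrative only):

```python
import struct

# virtio ring descriptor (per the virtio spec): 64-bit guest-physical
# buffer address, 32-bit length, 16-bit flags, 16-bit next-descriptor index.
VRING_DESC = struct.Struct("<QIHH")

VRING_DESC_F_NEXT = 1    # buffer continues in the descriptor at 'next'
VRING_DESC_F_WRITE = 2   # buffer is write-only for the device (the target)

def pack_desc(addr, length, flags=0, nxt=0):
    """Pack one descriptor as it would appear in the shared ring."""
    return VRING_DESC.pack(addr, length, flags, nxt)

# A two-descriptor chain: a read-only SCSI CDB, then a writable data buffer.
cdb = pack_desc(0x10000, 32, VRING_DESC_F_NEXT, nxt=1)
data = pack_desc(0x20000, 4096, VRING_DESC_F_WRITE)

assert VRING_DESC.size == 16   # each descriptor occupies 16 bytes in the ring
```

Because the rings live in memory shared between guest and host, moving an I/O costs only descriptor updates plus a doorbell ("kick"), not a full device emulation round trip.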

Guests

RTS vHost allows running very high-performance applications, such as database engines, in KVM guests on local storage.

Linux performance

The Linux performance numbers were measured on a dual Intel Xeon E5-2687W 3.10 GHz Romley-EP system with 32 hardware threads and 32 GB of DDR3-1600 SDRAM.[1]

The results for a single KVM VM and a varying number of LUNs on RTS OS with vHost on RAM are as follows.

IOPS:

KVM
Workload          Jobs   25%/75% Read/Write   75%/25% Read/Write
1x rd_mcp LUN        8   ~155k IOPS           ~145k IOPS
16x rd_mcp LUN      16   ~315k IOPS[2]        ~305k IOPS[2]
32x rd_mcp LUN      16   ~425k IOPS[2]        ~410k IOPS[2]

Native
Workload          Jobs   25%/75% Read/Write   75%/25% Read/Write
1x rd_mcp LUN       32   ~160k IOPS           ~150k IOPS
16x rd_mcp LUN      32   ~1125k IOPS          ~1100k IOPS
32x rd_mcp LUN      32   ~1185k IOPS          ~1175k IOPS

Throughput:

KVM
Workload          Jobs   25%/75% Read/Write   75%/25% Read/Write
1x rd_mcp LUN        8   ~1800 MB/s           ~1800 MB/s
16x rd_mcp LUN      16   TBD                  TBD
32x rd_mcp LUN      16   ~18500 MB/s          ~18500 MB/s

Native
Workload          Jobs   25%/75% Read/Write   75%/25% Read/Write
1x rd_mcp LUN       32   ~2048 MB/s           ~2048 MB/s
16x rd_mcp LUN      32   ~17500 MB/s          ~17500 MB/s
32x rd_mcp LUN      32   ~20480 MB/s          ~20480 MB/s

The benchmarks were run with fio, using the following job file:

[randrw]
rw=rw
rwmixwrite=25
rwmixread=75
ioengine=libaio
direct=1
size=100G
iodepth=64
iodepth_batch=4
iodepth_batch_complete=32
numjobs=32
blocksize=1M

filename=/dev/sdb
filename=/dev/sdX....
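When reading the tables above, note that throughput and IOPS are related through the block size (the job file above uses blocksize=1M for the throughput runs; the block size of the IOPS runs is not stated on this page, so the 4 KiB figure below is an assumption for illustration, and the helper name is ours):

```python
def iops_to_mb_s(iops, block_bytes):
    """Convert an IOPS figure at a given block size to MB/s (10^6 bytes)."""
    return iops * block_bytes / 1e6

# At the 1 MB blocks used in the throughput runs, MB/s and IOPS coincide:
print(iops_to_mb_s(18500, 1_000_000))   # 18500.0

# Small-block random I/O is a different regime; e.g. assuming 4 KiB blocks,
# the ~425k IOPS KVM result would correspond to:
print(iops_to_mb_s(425_000, 4096))      # 1740.8
```

This is why the IOPS and throughput tables stress different bottlenecks: descriptor-handling overhead per I/O dominates the former, raw memory bandwidth the latter.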

Windows

RTS will enable Windows guests to achieve the same level of performance with a virtual LSI MegaRAID SAS driver.

I/O processing

vHost processes I/Os from Linux guests to Linux SCSI Target backstores as follows:

  1. The KVM guest enqueues the SCSI I/O descriptor(s) to its virtio ring;
  2. The KVM guest kicks the Linux SCSI Target to wake up;
  3. The Linux SCSI Target wakes up, dequeues the I/O descriptor(s) off the virtio ring and processes them;
  4. The Linux SCSI Target dispatches the I/O to the backend storage device (HDDs, SSDs, flash, RAM, etc.);
  5. The Linux SCSI Target backend storage device completes the I/O;
  6. The Linux SCSI Target enqueues the resulting I/O descriptor(s) to the KVM guest virtio ring;
  7. The Linux SCSI Target kicks the KVM guest to wake up;
  8. The KVM guest wakes up and dequeues the I/O descriptor(s) off the virtio ring.
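The eight steps above can be sketched as a toy producer/consumer model of the two virtio rings. This is purely illustrative: the class and function names are ours, and a real "kick" is an ioeventfd/irqfd signal, not a boolean.

```python
from collections import deque

class VirtioRing:
    """Toy model of a shared virtio ring: a queue plus a 'kick' doorbell."""
    def __init__(self):
        self.ring = deque()
        self.kicked = False

    def enqueue(self, desc):
        self.ring.append(desc)

    def kick(self):
        self.kicked = True              # real hardware: eventfd/doorbell write

    def drain(self):
        self.kicked = False
        while self.ring:
            yield self.ring.popleft()

def handle_io(ring_to_target, ring_to_guest, backstore):
    # Steps 3-7: the target wakes, dequeues descriptors, dispatches each I/O
    # to the backend, enqueues the completions, and kicks the guest.
    for desc in ring_to_target.drain():
        result = backstore(desc)        # steps 4-5: backend completes the I/O
        ring_to_guest.enqueue(result)   # step 6
    ring_to_guest.kick()                # step 7

# Steps 1-2: the guest enqueues a SCSI descriptor and kicks the target.
to_target, to_guest = VirtioRing(), VirtioRing()
to_target.enqueue({"op": "read", "lba": 0, "len": 8})
to_target.kick()

handle_io(to_target, to_guest, backstore=lambda d: {**d, "status": "GOOD"})

# Step 8: the guest wakes up and dequeues the completion.
completions = list(to_guest.drain())
print(completions[0]["status"])   # GOOD
```

The key property the sketch preserves is that guest and target only ever touch the shared rings and doorbells, never each other's internals.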

targetcli

targetcli from Datera, Inc. is used to configure vHost targets. targetcli aggregates service modules via a core library, and exports them through an API to the Linux SCSI Target, to provide a unified single-node SAN configuration shell, independently of the underlying fabric(s).

Spec file

RTS spec files define the fabric-dependent feature set, capabilities and available target ports of the specific underlying fabric.

In particular, the vHost spec file /var/target/fabric/vhost.spec is loaded via RTSlib.

# WARNING: This is a draft specfile supplied for demo purposes only.

# The vHost fabric module feature set
features = nexus

# Use naa WWNs.
wwn_type = naa

# Non-standard module naming scheme
kernel_module = tcm_vhost

# The configfs group name is default
configfs_group = vhost
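The spec file follows a simple key = value format with # comments. A minimal parser sketch for that format (illustrative only; RTSlib's actual spec parser also handles value lists and other details not shown here):

```python
def parse_spec(text):
    """Parse 'key = value' lines, skipping blank lines and # comments."""
    spec = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line:
            continue
        key, _, value = line.partition("=")
        spec[key.strip()] = value.strip()
    return spec

vhost_spec = """
# The vHost fabric module feature set
features = nexus
wwn_type = naa
kernel_module = tcm_vhost
configfs_group = vhost
"""

print(parse_spec(vhost_spec)["kernel_module"])   # tcm_vhost
```

This is how a single shell like targetcli can stay fabric-agnostic: each fabric declares its feature set, WWN type, kernel module, and configfs group declaratively rather than in code.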

See also

  * RTS OS, targetcli
  * Linux SCSI Target
  * Fibre Channel, FCoE, iSCSI, iSER, SRP, tcm_loop
  * ConfigFS

Notes

  1. Nicholas A. Bellinger (2012-08-07). "SCSI small block random I/O performance on 3.6-rc0 using SCSI loopback ports". http://permalink.gmane.org. 
  2. IOPS results are currently limited by the vHost interrupt processing implementation. The underlying bottleneck is being addressed, and future RTS vHost releases will scale IOPS much better with more LUNs and vCPUs per VM.

[Timeline of the LinuxIO: LIO 4.x version and feature history (LIO Core, loopback, FCoE, iSCSI, SRP, vHost, iSER, VAAI, DIF, TCMU, NVMe-OF) across Linux kernel releases from 2.6.38 onward.]