vHost

The Linux SCSI Target Wiki


LinuxIO
vHost fabric module

Original author(s)    Nicholas Bellinger, Stefan Hajnoczi
Developer(s)          Datera, Inc.
Preview release       4.2.0-rc5 / July 30, 2012
Development status    Production
Written in            C
Operating system      Linux
Type                  Fabric module
License               GNU General Public License, version 2 (GPLv2)
Website               datera.io

See LIO for a complete overview of all fabric modules.

vHost provides a very high-performance local LinuxIO SCSI target for KVM guests.

LIO vHost architecture and data processing.
LIO self-hosted with a CentOS guest.

Overview

The LinuxIO vHost fabric module implements I/O processing based on the Linux virtio mechanism. It provides virtually bare-metal local storage performance for KVM guests. Currently, only Linux guest VMs are supported; Windows support is under development via a virtual LSI MegaRAID SAS driver.

As of late 2016, vhost-scsi has been integrated with libvirt by IBM (see https://wiki.libvirt.org/page/Vhost-scsi_target).
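
For illustration, a vhost-scsi target can be exposed to a guest through libvirt's hostdev element, roughly as sketched below. The WWPN here is a made-up placeholder; it must name a vhost target previously created with targetcli, and the libvirt wiki page above is the authoritative reference for the syntax.

<!-- Hypothetical libvirt domain XML fragment: attaches the vhost-scsi
     target identified by this (illustrative) naa. WWPN to the guest. -->
<hostdev mode='subsystem' type='scsi_host'>
  <source protocol='vhost' wwpn='naa.5001405df3e54061'/>
</hostdev>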

Guests

LIO vHost makes it possible to run very high-performance applications, such as database engines, in KVM guests on local storage.

Linux performance

The Linux performance numbers were measured on a dual Intel Xeon E5-2687W 3.10 GHz (Romley-EP) system with 32 threads and 32 GB DDR3-1600 SDRAM.[1]

The results for a single KVM VM and a varying number of LUNs are as follows.

IOPS:

KVM
  Workload         Jobs   25%/75% Read/Write   75%/25% Read/Write
  1x rd_mcp LUN       8   ~155k IOPS           ~145k IOPS
  16x rd_mcp LUN     16   ~315k IOPS [2]       ~305k IOPS [2]
  32x rd_mcp LUN     16   ~425k IOPS [2]       ~410k IOPS [2]

Native
  Workload         Jobs   25%/75% Read/Write   75%/25% Read/Write
  1x rd_mcp LUN      32   ~160k IOPS           ~150k IOPS
  16x rd_mcp LUN     32   ~1125k IOPS          ~1100k IOPS
  32x rd_mcp LUN     32   ~1185k IOPS          ~1175k IOPS

Throughput:

KVM
  Workload         Jobs   25%/75% Read/Write   75%/25% Read/Write
  1x rd_mcp LUN       8   ~1800 MB/s           ~1800 MB/s
  16x rd_mcp LUN     16   TBD                  TBD
  32x rd_mcp LUN     16   ~18500 MB/s          ~18500 MB/s

Native
  Workload         Jobs   25%/75% Read/Write   75%/25% Read/Write
  1x rd_mcp LUN      32   ~2048 MB/s           ~2048 MB/s
  16x rd_mcp LUN     32   ~17500 MB/s          ~17500 MB/s
  32x rd_mcp LUN     32   ~20480 MB/s          ~20480 MB/s

The benchmarks were run with fio, using the following job file:

[randrw]
rw=rw
rwmixwrite=25
rwmixread=75
ioengine=libaio
direct=1
size=100G
iodepth=64
iodepth_batch=4
iodepth_batch_complete=32
numjobs=32
blocksize=1M

# device(s) under test; /dev/sdX.... stands for the remaining LUNs
filename=/dev/sdb
filename=/dev/sdX....
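
Assuming the job file above is saved as randrw.fio (a name chosen here purely for illustration), the run inside the guest is simply:

# direct=1 I/O against raw block devices requires root
fio randrw.fio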

Windows

Datera will enable Windows guests to achieve the same level of performance with a virtual LSI MegaRAID SAS driver.

I/O processing

vHost processes I/Os from Linux guests to LIO backstores as follows:

  1. The KVM guest enqueues the SCSI I/O descriptor(s) to its virtio ring;
  2. The KVM guest kicks LIO to wake up;
  3. LIO wakes up, dequeues the I/O descriptor(s) off the virtio ring and processes them;
  4. LIO dispatches the I/O to the backend storage device (HDDs, SSDs, flash, RAM, etc.);
  5. The LIO backend storage device completes the I/O;
  6. LIO enqueues the resulting I/O descriptor(s) to the KVM guest virtio ring;
  7. LIO kicks the KVM guest to wake up;
  8. The KVM guest wakes up and dequeues the I/O descriptor(s) off the virtio ring.
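
To make the sequence concrete, the sketch below starts a QEMU/KVM guest with a vhost-scsi device attached. The WWPN is an illustrative placeholder for a target created beforehand with targetcli, the disk image name is made up, and exact device options may vary by QEMU version:

# Hypothetical invocation; wwpn must name an existing vhost target.
qemu-system-x86_64 -machine accel=kvm -m 2048 \
    -drive file=guest.img,if=virtio \
    -device vhost-scsi-pci,wwpn=naa.5001405df3e54061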

targetcli

targetcli from Datera, Inc. is used to configure vHost targets. targetcli aggregates LIO service modules via a core library and exports them through an API to provide a unified single-node SAN configuration shell, independent of the underlying fabric(s).
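
As a minimal sketch, exporting a RAM-backed LUN over vhost might look like the session below. The generated WWN is illustrative, and the backstore name and arguments vary with the targetcli version (rd_mcp here; ramdisk in newer releases):

/> backstores/rd_mcp create name=rd0 size=1G
/> vhost/ create
Created target naa.5001405df3e54061.
/> vhost/naa.5001405df3e54061/tpg1/luns create /backstores/rd_mcp/rd0
/> saveconfig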

Spec file

Datera spec files define the fabric-dependent feature set, capabilities and available target ports of the specific underlying fabric.

In particular, the vHost spec file /var/target/fabric/vhost.spec is included via RTSlib.

# WARNING: This is a draft specfile supplied for demo purposes only.

# The vHost fabric module feature set
features = nexus

# Use naa WWNs.
wwn_type = naa

# Non-standard module naming scheme
kernel_module = vhost_scsi

# The configfs group name is default
configfs_group = vhost
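
Because the spec file is consumed through RTSlib, the same fabric can also be driven programmatically. A rough Python sketch under the RTSlib API follows; the class names match the RTSlib Reference Guide, but exact signatures differ across rtslib versions (the module is named rtslib_fb in the -fb fork), so treat the details as assumptions:

# Rough RTSlib sketch: instantiating the vhost FabricModule parses
# /var/target/fabric/vhost.spec behind the scenes.
from rtslib import FabricModule, Target, TPG

fabric = FabricModule('vhost')  # fabric defined by vhost.spec
target = Target(fabric)         # WWN auto-generated (wwn_type = naa)
tpg = TPG(target, 1)            # portal group; LUNs are created under it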

See also

  * LinuxIO, targetcli
  * FCoE, Fibre Channel, iSCSI, iSER, SRP, tcm_loop
  * ConfigFS

Notes

  1. Nicholas A. Bellinger (2012-08-07). "SCSI small block random I/O performance on 3.6-rc0 using SCSI loopback ports". http://permalink.gmane.org/gmane.linux.kernel/1338608.
  2. IOPS results are currently impacted by limitations in the vHost interrupt processing implementation. The underlying bottleneck is being addressed, and future vHost releases will scale up IOPS much better with more LUNs and vCPUs per VM.

External links

  * LIO Admin Manual
  * RTSlib Reference Guide (HTML, PDF)
  * Multiqueue KVM: http://www.linux-kvm.org/page/Multiqueue
  * QEMU roadmap: http://wiki.qemu.org/BlockRoadmap#tcm_vhost_.5BZhi_Yong.5D
