vHost

The Linux SCSI Target Wiki


LinuxIO vHost fabric module

Original author(s):   Nicholas Bellinger, Stefan Hajnoczi
Developer(s):         Datera, Inc.
Preview release:      4.2.0-rc5 / July 30, 2012
Development status:   Production
Written in:           C
Operating system:     Linux
Type:                 Fabric module
License:              GNU General Public License, version 2 (GPLv2)
Website:              datera.io
See LIO for a complete overview of all fabric modules.

vHost provides a very high-performance local LinuxIO SCSI target for KVM guests.

[Figure: LIO vHost architecture and data processing.]
[Figure: LIO self-hosted with a CentOS guest.]


Overview

The LinuxIO vHost fabric module implements I/O processing based on the Linux virtio mechanism. It provides virtually bare-metal local storage performance for KVM guests. Currently, only Linux guest VMs are supported, with Windows support under development with a virtual LSI MegaRAID SAS driver.

As of late 2016, vhost-scsi has been integrated with libvirt by IBM (see https://wiki.libvirt.org/page/Vhost-scsi_target).
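
With that integration, a vhost-scsi target created on the host can be handed to a libvirt-managed guest as a SCSI host device. A minimal sketch of the domain XML fragment follows; the WWPN is an example value, not taken from this article, and must match the WWPN of the vhost target configured on the host:

<hostdev mode='subsystem' type='scsi_host'>
  <source protocol='vhost' wwpn='naa.5001405df3e54061'/>
</hostdev>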

Guests

LIO vHost makes it practical to run very high-performance applications, such as database engines, in KVM guests on local storage.

Linux performance

The Linux performance numbers were measured on a dual Intel Xeon E5-2687W 3.10 GHz (Romley-EP) system with 32 threads and 32 GB DDR3-1600 SDRAM.[1]

The results for a single KVM VM and a varying number of LUNs are as follows.

IOPS:

KVM guest:

  Workload          Jobs   25% read / 75% write   75% read / 25% write
  1x rd_mcp LUN     8      ~155k IOPs             ~145k IOPs
  16x rd_mcp LUN    16     ~315k IOPs[2]          ~305k IOPs[2]
  32x rd_mcp LUN    16     ~425k IOPs[2]          ~410k IOPs[2]

Native:

  Workload          Jobs   25% read / 75% write   75% read / 25% write
  1x rd_mcp LUN     32     ~160k IOPs             ~150k IOPs
  16x rd_mcp LUN    32     ~1125k IOPs            ~1100k IOPs
  32x rd_mcp LUN    32     ~1185k IOPs            ~1175k IOPs

Throughput:

KVM guest:

  Workload          Jobs   25% read / 75% write   75% read / 25% write
  1x rd_mcp LUN     8      ~1800 MB/s             ~1800 MB/s
  16x rd_mcp LUN    16     TBD                    TBD
  32x rd_mcp LUN    16     ~18500 MB/s            ~18500 MB/s

Native:

  Workload          Jobs   25% read / 75% write   75% read / 25% write
  1x rd_mcp LUN     32     ~2048 MB/s             ~2048 MB/s
  16x rd_mcp LUN    32     ~17500 MB/s            ~17500 MB/s
  32x rd_mcp LUN    32     ~20480 MB/s            ~20480 MB/s

The benchmarks were done with FIO with the following parameters:

[randrw]
rw=rw
rwmixwrite=25
rwmixread=75
ioengine=libaio
direct=1
size=100G
iodepth=64
iodepth_batch=4
iodepth_batch_complete=32
numjobs=32
blocksize=1M

filename=/dev/sdb
filename=/dev/sdX....
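
Assuming the job description above is saved to a file, for example randrw.fio (an illustrative name), the benchmark is started inside the guest against its SCSI LUNs with:

fio randrw.fio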

Windows

Datera will enable Windows guests to achieve the same level of performance with a virtual LSI MegaRAID SAS driver.

I/O processing

vHost processes I/Os from Linux guests to LIO backstores as follows:

  1. The KVM guest enqueues the SCSI I/O descriptor(s) to its virtio ring;
  2. The KVM guest kicks LIO to wake up;
  3. LIO wakes up, dequeues the I/O descriptor(s) off the virtio ring and processes them;
  4. LIO dispatches the I/O to the backend storage device (HDDs, SSDs, flash, RAM, etc.);
  5. The LIO backend storage device completes the I/O;
  6. LIO enqueues the resulting I/O descriptor(s) to the KVM guest virtio ring;
  7. LIO kicks the KVM guest to wake up;
  8. The KVM guest wakes up and dequeues the I/O descriptor(s) off the virtio ring.
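
For a concrete picture of how a guest ends up attached to such a target, the vhost-scsi device can be passed to QEMU directly. The following is a minimal sketch with a recent QEMU; the WWPN, memory size and disk image name are example values, not taken from this article, and the WWPN must match that of the configured vhost target:

qemu-system-x86_64 -machine accel=kvm -cpu host -m 4096 \
    -drive file=guest.img,if=virtio \
    -device vhost-scsi-pci,wwpn=naa.6001405c3c1797e2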

targetcli

targetcli from Datera, Inc. is used to configure vHost targets. targetcli aggregates LIO service modules via a core library and exports them through an API, providing a unified single-node SAN configuration shell, independent of the underlying fabric(s).
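
As a rough sketch (command paths and defaults differ between targetcli versions, and all names below are illustrative), a vhost target backed by a ramdisk LUN can be created along these lines:

/> /backstores/rd_mcp create name=rd0 size=1GB      # ramdisk backstore, as used in the benchmarks above
/> /vhost create                                    # create a vhost target with an auto-generated WWPN
/> /vhost/naa.<generated_wwpn>/tpgt1/luns create /backstores/rd_mcp/rd0
/> saveconfig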

Spec file

Datera spec files define the fabric-dependent feature set, capabilities and available target ports of the specific underlying fabric.

In particular, the vHost spec file /var/target/fabric/vhost.spec is included via RTSlib.

# WARNING: This is a draft specfile supplied for demo purposes only.

# The vHost fabric module feature set
features = nexus

# Use naa WWNs.
wwn_type = naa

# Non-standard module naming scheme
kernel_module = vhost_scsi

# The configfs group name is default
configfs_group = vhost
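
Since the spec file names the kernel module vhost_scsi and the configfs group vhost, the presence of the fabric module can be checked directly from a shell. A minimal sketch, assuming configfs is mounted at /sys/kernel/config:

modprobe vhost_scsi
ls /sys/kernel/config/target/vhost/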

See also

  * RTS OS
  * LinuxIO, targetcli
  * FCoE, Fibre Channel, iSCSI, iSER, SRP, tcm_loop
  * ConfigFS

Notes

  1. Nicholas A. Bellinger (2012-08-07). "SCSI small block random I/O performance on 3.6-rc0 using SCSI loopback ports". http://permalink.gmane.org/gmane.linux.kernel/1338608.
  2. IOPS results are currently impacted by limitations in the vHost interrupt processing implementation. The underlying bottleneck is being addressed, and future RTS vHost releases will scale up IOPs much better with more LUNs and vCPUs per VM.

External links

  * LIO Admin Manual
  * RTSlib Reference Guide: http://www.risingtidesystems.com/doc/rtslib-gpl/html/ (HTML), http://www.risingtidesystems.com/doc/rtslib-gpl/pdf/rtslib-API-reference.pdf (PDF)
  * Multiqueue KVM: http://www.linux-kvm.org/page/Multiqueue
  * QEMU roadmap: http://wiki.qemu.org/BlockRoadmap#tcm_vhost_.5BZhi_Yong.5D
  * QEMU/KVM vhost-scsi driver in the qemu-kvm.git repository: http://git.kernel.org/?p=virt/kvm/nab/qemu-kvm.git;a=shortlog;h=refs/heads/vhost-scsi
  * Hajnoczi et al. (2011). "Virtio SCSI: an alternative virtualized storage stack for KVM". KVM Forum, Vancouver, Canada. YouTube. http://www.youtube.com/watch?v=7-rnyxuY-xc

[Timeline of the LinuxIO: release and feature history (LIO Core, loopback, FCoE, iSCSI, SRP, vHost, iSER, VAAI, DIF, TCMU, virtio 1.0, NVMe-OF) across Linux kernel releases from 2.6.38 onward.]