Each ESXi host in a vSphere cluster hosting the RP4VM vRPAs (virtual appliances running on the ESXi hosts) requires an iSCSI channel between the host's kernel-embedded I/O splitter and the vRPAs, provided by a software iSCSI adapter running on the host. A software iSCSI adapter and its associated VMkernel ports must therefore be configured on every ESXi host that runs the RP4VM vRPAs.

This post provides an example of the iSCSI configuration required for RP4VM when using a VMware vSphere Distributed Switch (VDS), performed via the vSphere Web Client.

The steps below show how to create additional port groups on a VDS, create the VMkernel adapters, add the software iSCSI adapter, and bind the VMkernel port groups to the ESXi iSCSI adapter, along with associated configuration such as MTU and uplink failover order.

Note: An upcoming release of RP4VM 5.x introduces a new TCP/IP-based communication path for the splitter, which will eliminate the need to configure the iSCSI software initiator (more on this later).


Create Distributed Port Groups on VDS

The following steps detail how to create two distributed port groups on a VDS to support RP4VM iSCSI communications.

  1. Right-click the distributed switch and choose ‘Distributed Port Group’ -> ‘New Distributed Port Group’.

  2. Enter a name for the first iSCSI port group and click Next.

  3. Enter your iSCSI VLAN details and complete the port group creation.

Create a second distributed port group (‘RP_iSCSI_2’ in this example) following the same steps.

Configure Port Group Failover

The following steps detail the iSCSI Port Groups failover order configuration:

  1. Select ‘RP_iSCSI_1’ and select ‘Edit Settings’.
  2. Under ‘Teaming and failover’, configure the failover order. Ensure that any additional network adapters connected to the VMkernel port are designated as unused adapters: iSCSI port binding is allowed only if the VMkernel port is attached to a single active adapter in the port group while all other adapters are configured as unused.

Repeat for the second port group, ‘RP_iSCSI_2’.

Configure the VDS MTU Setting

  1. From the vSphere Web Client, click ‘Networking’.

  2. Right-click the distributed switch and choose ‘Settings’ -> ‘Edit Settings’.

  3. Under the ‘Advanced’ option, enter an MTU value of 9000.

  4. Click OK.
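The MTU change can also be checked from any attached host's CLI. A quick sketch; the interface name vmk2 and peer IP 10.0.0.12 are example values, not from the original configuration:

```shell
# List distributed switches visible to this host, including the configured MTU
esxcli network vswitch dvs vmware list

# Confirm jumbo frames pass end-to-end: 8972 bytes of payload plus 28 bytes
# of ICMP/IP headers = 9000, with -d setting the "don't fragment" bit
vmkping -I vmk2 -d -s 8972 10.0.0.12
```

If the jumbo-frame vmkping fails while a standard-size ping succeeds, check that jumbo frames are enabled on every physical switch port in the path.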

Create VMkernel Adapters on ESXi Hosts

Two VMkernel adapters will need to be created on each ESXi host in an RP4VM solution as follows:

  1. Select an ESXi host, click ‘Manage’ -> ‘Networking’ -> ‘VMkernel adapters’, and click ‘Add host networking’.

  2. Select ‘VMkernel Network Adapter’ and click Next.

  3. Click Browse and select the first port group (‘RP_iSCSI_1’ in this example) to bind the VMkernel port to.


  4. Leave the port properties at their defaults and click Next.

  5. Select IPv4 and enter the assigned VMkernel IP address and subnet mask.

  6. Review and click Finish. (Repeat steps 1-6 to create the second VMkernel port.)

  7. Click the pencil icon to edit the VMkernel adapter and change the MTU value to 9000.
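The same settings can be confirmed, or the MTU set, from the ESXi shell. A minimal sketch; vmk2 is an assumed interface name for the first iSCSI VMkernel port:

```shell
# List all VMkernel interfaces with their MTU and port group bindings
esxcli network ip interface list

# Set the MTU on the new VMkernel adapter, if not already done in the UI
esxcli network ip interface set -i vmk2 -m 9000

# Show the assigned IPv4 address and subnet mask
esxcli network ip interface ipv4 get -i vmk2
```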

Create an iSCSI Software Storage Adapter on ESXi Hosts

  1. From the vSphere Web Client, select the ESXi host and choose the ‘Manage’ tab.

  2. Under ‘Storage Adapters’, click the + icon and select ‘Software iSCSI adapter’.

  3. Click OK to acknowledge the warning.
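The same operation can be scripted from the ESXi shell, which is handy when preparing many hosts. A minimal sketch:

```shell
# Enable the software iSCSI initiator on this host
esxcli iscsi software set --enabled=true

# Verify it is enabled
esxcli iscsi software get

# List iSCSI adapters to find the new vmhba name (e.g. vmhba32)
esxcli iscsi adapter list
```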

Bind the VMkernel Port Groups to the ESXi iSCSI Adapter

  1. From the vSphere Web Client, select the ESXi host and choose the ‘Manage’ tab.

  2. Under ‘Storage Adapters’, select the newly created software iSCSI adapter.


  3. Select the ‘Network Port Binding’ tab and click the + icon to add the required port groups.


  4. Select the two port groups created earlier.

  5. If prompted with a warning to rescan the adapter, click OK. The two port groups ‘RP_iSCSI_1’ and ‘RP_iSCSI_2’ are now bound.

Repeat this procedure on each VMware ESXi host in the RP4VM solution.
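The port binding can also be performed or verified from the ESXi shell. A sketch, assuming the software iSCSI adapter came up as vmhba33 and the two iSCSI VMkernel ports are vmk2 and vmk3 (adjust to your host):

```shell
# Bind each VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3

# Confirm both bindings and their compliance status
esxcli iscsi networkportal list --adapter vmhba33
```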

This completes the iSCSI configuration; the next step is to deploy the vRPA OVAs.

Once the vRPAs have been deployed, you will see them appear as targets on the iSCSI adapter.
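The discovered targets can also be listed from the CLI. A sketch; vmhba33 is an assumed adapter name:

```shell
# Rescan the adapter so newly deployed vRPA targets are picked up
esxcli storage core adapter rescan --adapter vmhba33

# List target portals discovered on the iSCSI adapter
esxcli iscsi adapter target portal list --adapter vmhba33
```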


Useful ESXi commands to verify the RP4VM iSCSI configuration:

  • esxcli iscsi networkportal list --adapter vmhba32
  • esxcfg-vmknic -l
  • esxcfg-mpath -l

[root@DRM-ESXi078:~] esxcli iscsi networkportal list --adapter vmhba32
Adapter: vmhba32
Vmknic: vmk2
MAC Address: 00:25:b5:05:01:6e
MAC Address Valid: true
IPv4 Subnet Mask:
IPv6: fe80::250:56ff:fe62:2292/64
MTU: 9000
Vlan Supported: true
Vlan ID: 2061
Reserved Ports: 63488~65536
TOE: false
TSO: true
TCP Checksum: false
Link Up: true
Current Speed: 10000
Rx Packets: 438419565
Tx Packets: 2653060951
NIC Driver: enic
NIC Driver Version:
NIC Firmware Version: 2.2(3b)
Compliant Status: compliant
NonCompliant Message:
NonCompliant Remedy:
Vswitch: dvsMgmt
PortGroup: DvsPortset-0
VswitchUuid: 09 47 09 50 39 53 1b ec-a7 ba 85 89 6f 46 de 09
PortGroupKey: dvportgroup-322
PortKey: 90
Path Status: active

Once the vRPAs have been deployed, you will also notice KASHYA ‘Devices’ on the iSCSI adapter; these are used for communication between the ESXi splitter and the vRPAs (esxcfg-mpath -l):


