Data Center Hyper-Converged Infrastructure (HCI)

iSCSI Storage on Cisco HyperFlex

The next release of the Cisco HyperFlex Data Platform will introduce iSCSI block storage to complement the existing NFS file protocol that customers have grown to love. Block storage will enable customers to run clustered workloads with shared disks that currently cannot be hosted on HyperFlex. We tested a beta release of the software, 4.5(1a), at the ATC for functionality and usability, and we want to give you an early look at enabling iSCSI and creating storage objects. Going into the beta installation, we expected the setup of iSCSI to be simple and straightforward. Since iSCSI is configured after a cluster is already operational, setup is streamlined and can be performed entirely from HyperFlex Connect. We take you through our configuration and testing step by step; if you would like to see the details, click on the ATC Insight section. Note that UI and command-line options may change in the final release (currently scheduled for late December 2020).


ATC Insight


Before we dive in and enable iSCSI, let's review the existing HyperFlex networks that each HyperFlex cluster has out of the box.

  • Management Hypervisor - management network for the ESXi host, used for management/SSH.
  • Management Storage Controller - management network for the controller virtual machine, used for SSH and HyperFlex Connect.
  • Data Network Hypervisor - NFS network for ESXi, used for the NFS VMkernel interface.
  • Data Network Storage Controller - NFS network for the controller virtual machines.

HyperFlex iSCSI introduces another network for the controller virtual machines. Like the rest of your HyperFlex networks, it must be created on your core switches and trunked down to the Fabric Interconnects (FIs) before any of the configuration below can begin.

HyperFlex iSCSI volumes can be accessed by both Layer 2 (non-routed) and Layer 3 (routed) clients. In our setup, we chose Layer 2 for ease of setup. The additional configuration required for Layer 3 clients is addressed later in the Advanced Configuration section.

Enabling iSCSI

Log into HyperFlex Connect


On the left navigation bar, there's a new iSCSI option under Manage. Click iSCSI and then Configure Network at the top of the screen.


On the next screen, we'll enter the network information for the iSCSI network. Enter the Subnet, Gateway, and IP Range for the storage controller VMs, and click Add IP Range. Note that multiple ranges can be added if the available addresses aren't contiguous, and a range can be deleted by clicking the trash can icon. Next, enter the iSCSI Storage IP. This is the primary address that guests connect to, and it can move between any of the controller virtual machines. The default MTU is 1500, but it can be changed to 9000 if jumbo frames are desired; jumbo frames would need to be pre-configured end to end in the environment. Last, a VLAN needs to be assigned to this network. Either a new VLAN can be created or an existing VLAN in UCS can be selected.


We chose Create a new VLAN, entered the VLAN ID, VLAN Name, UCS Manager information, and clicked Configure.


The network configuration task starts and may take a few minutes to complete.


After the task completes, the iSCSI page will be updated.


There is a notice that Initiator Groups should be configured before creating Targets. In talking with Cisco, we learned that this is simply a recommendation and doesn't affect functionality. Click Initiator Groups and then Create.


Enter a Name for the Initiator Group, enter the Initiator IQN from a server, and click Add Initiators. Multiple IQNs can be added at this time, or the group can be edited later to add more. Initiator Groups can be edited, but individual IQNs cannot; if an IQN is entered incorrectly, it must be deleted (via the trash can icon next to it) and added again. Click Create Initiator Group.
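Since individual IQNs can't be edited after the fact, a quick format check before pasting one into a group can save a delete-and-re-add cycle. Below is a minimal sketch against the basic RFC 3720 "iqn." naming pattern; the example IQN is illustrative only (it follows the default Microsoft initiator naming style), so substitute your own.

```shell
# Sanity-check an initiator IQN against the basic RFC 3720 "iqn." pattern:
# iqn.<year>-<month>.<reversed-domain>[:<optional-identifier>]
iqn="iqn.1991-05.com.microsoft:atc-win01"   # example value, substitute your own
if echo "$iqn" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:[^ ]+)?$'; then
  echo "IQN format looks valid"
else
  echo "IQN format looks invalid"
fi
```

This only catches structural typos; HyperFlex will still accept any string you enter, so it's worth checking before you click Add Initiators.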


The Initiator Group ATC was created successfully. Click Targets.


Enter a Target Name and click Create Target. Note that HyperFlex supports CHAP authentication, which can be enabled or disabled as needed.


With the ATC Target selected, click Create LUN.


In the Create LUN Dialog, enter a Name and Size and click Create LUN.


With the LUN created, click Linked Initiator Groups and then Link.


Select the Initiator Group and click Link Initiator Group(s).


Connecting to iSCSI Target

iSCSI is now configured inside HyperFlex, and it's time to test it. We'll be using a Windows VM that's on the same network as HyperFlex. In Windows, open iSCSI Initiator and enter the HyperFlex iSCSI Storage IP. Click Quick Connect.


Windows discovers the ATC target that was previously created. Click Done and then OK.


We can confirm that the disk is connected in Disk Management.
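We tested from Windows, but Linux clients can perform the equivalent of the Quick Connect flow with open-iscsi. A sketch, where 10.65.91.10 is a placeholder for your HyperFlex iSCSI Storage IP:

```shell
# Discover targets advertised by the HyperFlex iSCSI Storage IP
iscsiadm -m discovery -t sendtargets -p 10.65.91.10

# Log in to the discovered target(s) at that portal
iscsiadm -m node -p 10.65.91.10 --login

# The new LUN should now appear as a block device
lsblk
```

As with Windows, the client's IQN (found in /etc/iscsi/initiatorname.iscsi on most distributions) must already be in a linked Initiator Group for the LUN to be presented.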

Advanced Configuration

Many of these tasks can also be performed from the command line, either by connecting to the HyperFlex management IP address via SSH or by using the Web CLI in HyperFlex Connect. The Web CLI does not support interactive commands that require user input, so we prefer SSH for consistency. Of the configurations below, IP whitelisting can be performed via the Web CLI, but deleting the iSCSI network cannot.

Whitelist IP Addresses

Client machines that need to connect to iSCSI over Layer 3 routed networks must have their IP addresses whitelisted in HyperFlex for target discovery to succeed. The command is hxcli iscsi allowlist add -p IPADDRESS.


IP Addresses can be removed from the whitelist with hxcli iscsi allowlist remove -p IPADDRESS.


Delete iSCSI Network

If there's a misconfiguration with the iSCSI network or it needs to be moved to another subnet, the network can be deleted and the configuration can be created again. The command is hxcli iscsi network delete. The output will refresh numerous times until the iSCSI network deletion is complete.

HyperFlex REST API

Any operation that can be performed from HX Connect or the command line can also be performed from the HyperFlex REST API. The REST API Explorer can be accessed by appending /apiexplorer to the cluster URL. 
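As a sketch of how a scripted session typically starts: the endpoint and payload below are our assumption based on the API Explorer on our beta cluster, so confirm them in your own /apiexplorer before use. The hostname and credentials are placeholders.

```shell
# SKETCH ONLY -- verify the exact endpoint and payload in your cluster's
# API Explorer (/apiexplorer); they may differ in the final release.
# hx-cluster.example.com and the credentials below are placeholders.
curl -k -X POST "https://hx-cluster.example.com/aaa/v1/auth?grant_type=password" \
  -H "Content-Type: application/json" \
  -d '{"username": "admin", "password": "REPLACE_ME"}'
# The returned access token is then sent as a Bearer token on subsequent requests.
```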


While testing, we ran into a couple of issues worth mentioning.

#1 - iSCSI didn't work with jumbo frames. We attempted to get it working with MTU 9000 but were unable to, which kept us from comparing performance between iSCSI and NFS. Cisco has identified this as an issue.
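If you want to verify end-to-end jumbo frame support yourself before enabling MTU 9000, a simple check from a Linux client is a don't-fragment ping sized for a 9000-byte MTU. The address below is a placeholder for the HyperFlex iSCSI Storage IP.

```shell
# 8972 bytes of payload + 8 bytes ICMP header + 20 bytes IP header = 9000-byte MTU.
# -M do sets the don't-fragment bit, so the ping fails if any hop has a smaller MTU.
ping -M do -s 8972 -c 3 10.65.91.10
```

If these pings fail while ordinary pings succeed, some device in the path is not passing jumbo frames.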

#2 - An iSCSI failure caused our configuration to disappear in HX Connect. We wanted to see how an unexpected failure of the Controller VM holding the iSCSI Storage IP would affect client access to storage, so we reset that Controller VM's host via UCS Manager. We found that the clients remained connected, but our configuration (targets, initiator groups, and LUNs) was no longer visible in HX Connect. Cisco determined that deleting the iSCSI network and recreating it would bring the configuration back.


Overall, HyperFlex iSCSI is simple to set up and gives customers the additional functionality needed to move clustered workloads to HyperFlex.

As you anxiously await the latest release of HyperFlex, take a moment to get hands-on experience with our HyperFlex Lab and view other ATC Insights!  We look forward to demonstrating HyperFlex iSCSI on the portal when it is released.

Check out or Launch our On-Demand Lab around Cisco HyperFlex 

Cisco HyperFlex Lab

Take a look at additional ATC Insights

ATC Insights


Technologies Under Test

Cisco HyperFlex Data Platform, Release 4.5(1a)



Cisco HyperFlex Data Platform Administration Guide, Release 4.0 - Chapter: Managing iSCSI