
HOW TO: Reset admin password on EMC ESRS VE

The root and admin passwords for the EMC ESRS VE virtual appliance are configured during VA deployment; there is no ‘default’ password.

EMC ESRS VE admin password failed change

If you have EMC ESRS VE 3.04 or below installed and have lost the admin password, you have no option other than to re-deploy the ESRS VA.

In 3.06+ you can log in to the VA console or via SSH as root and run the following commands to reset the admin password:

login as: root

emcsrs001:~ # cd /opt/esrs/webuimgmt-util/

emcsrs001:/opt/esrs/webuimgmt-util # ls
passwordAdmin passwordAdmin.sh

emcsrs001:/opt/esrs/webuimgmt-util # ./passwordAdmin.sh
************************************************************************************************
******************************************Password Reset Util***********************************
************************************************************************************************
------------------------------------------Password Specifications-------------------------------
1. Be 8 or more characters in length, with a maximum of 16 characters
2. Contain at least one numeric character
3. Contain at least one uppercase and one lowercase character
4. Contain at least one special character such as ` ~ ! @ # $ % ^ & * ( ) - _ = + [ ] { } ; < >
5. Do not use special characters / ? : , . |  ' and " as part of the password
------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
************************************************************************************************
************************************************************************************************
Provide the password to be set for the user admin:Margarin001!
Confirm the new password to be set for the user admin:Margarin001!
Password has been successfully reset for the user admin
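Before running the utility, you can sanity-check a candidate password against the rules printed above with a small POSIX shell function. This is my own sketch, not part of the EMC tool, and it only implements the rules exactly as listed:

```shell
# Rough validator for the ESRS VE admin password rules -- a sketch, not EMC's code.
check_pw() {
  pw=$1
  [ ${#pw} -ge 8 ] && [ ${#pw} -le 16 ]        || return 1  # rule 1: 8-16 characters
  printf '%s' "$pw" | grep -q '[0-9]'          || return 1  # rule 2: at least one digit
  printf '%s' "$pw" | grep -q '[A-Z]'          || return 1  # rule 3: an uppercase letter
  printf '%s' "$pw" | grep -q '[a-z]'          || return 1  # rule 3: a lowercase letter
  printf '%s' "$pw" | grep -q '[^A-Za-z0-9]'   || return 1  # rule 4: a special character
  printf '%s' "$pw" | grep -q "[/?:,.|'\"]"    && return 1  # rule 5: forbidden characters
  return 0
}

check_pw 'Margarin001!' && echo 'password OK'   # → password OK
```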

I hope this will help.

Configuring Syslog on VMware vSphere ESXi host fails with 'Got no data from process' error

I installed VMware vRealize Log Insight and configured all ESXi hosts to send logs to it. Automatic configuration failed, and Log Insight suggested configuring the Syslog service manually.

Here are a couple of VMware Knowledge Base articles that will help you:

OK, let’s check the syslog configuration:

~ # esxcli system syslog config get
Default Network Retry Timeout: 180
Local Log Output: /scratch/log
Local Log Output Is Configured: false
Local Log Output Is Persistent: true
Local Logging Default Rotation Size: 10240
Local Logging Default Rotations: 20
Log To Unique Subdirectory: false
Remote Host: 10.100.20.1

Remote Host configuration is incorrect.

Configure Syslog service to send logs to a remote host:

~ # esxcli system syslog config set --loghost='udp://10.100.20.1:514,udp://10.100.150.100:514'
Got no data from process
/usr/lib/vmware/vmsyslog/bin/esxcfg-syslog --plugin=esxcli --loghost='udp://10.100.20.1:514,udp://10.100.150.100:514'

Configuration command failed with “Got no data from process” error message.

Let’s check if the syslog service is running:

~ # ps | grep -i syslog
38072906 38072906 vmsyslogd  /bin/python
38072907 38072906 vmsyslogd  /bin/python
38072908 38072906 vmsyslogd  /bin/python
38072909 38072906 vmsyslogd  /bin/python

..and check the syslog service log:

~ # tail /var/log/.vmsyslogd.err
2016-01-04T17:22:02.499Z vmsyslog.loggers.file : ERROR ] Gzip logfile /scratch/log/vmkernel.0.gz failed <type 'exceptions.MemoryError'>
2016-01-04T17:22:04.451Z vmsyslog.loggers.file : ERROR ] Gzip logfile /scratch/log/vmkernel.0.gz failed <type 'exceptions.MemoryError'>
2016-01-04T17:22:06.364Z vmsyslog.loggers.file : ERROR ] Gzip logfile /scratch/log/vmkernel.0.gz failed <type 'exceptions.MemoryError'>
2016-01-04T17:22:08.312Z vmsyslog.loggers.file : ERROR ] Gzip logfile /scratch/log/vmkernel.0.gz failed <type 'exceptions.MemoryError'>

There are error messages in the log.

OK, let’s kill syslog processes:

~ # kill -9 `ps -Cuv | grep syslog | awk '{print $1}'`

…and reconfigure syslog again:

~ #  esxcli system syslog config set --loghost='udp://10.100.20.1:514,udp://10.100.150.100:514'

No error message this time.

Check the syslog configuration again:

~ #  esxcli system syslog config get
Default Network Retry Timeout: 180
Local Log Output: /scratch/log
Local Log Output Is Configured: false
Local Log Output Is Persistent: true
Local Logging Default Rotation Size: 10240
Local Logging Default Rotations: 20
Log To Unique Subdirectory: false
Remote Host: udp://10.100.20.1:514,udp://10.100.150.100:514

All good.
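If the configuration succeeds but log messages still do not reach the collector, the ESXi firewall is a common culprit: the outgoing syslog ruleset must be enabled (see VMware KB 2003322). A small sketch, guarded so the commands only run where esxcli actually exists:

```shell
# Enable the outgoing syslog firewall ruleset and reload the syslog daemon.
# Guarded with command -v so the snippet is a no-op outside an ESXi shell.
if command -v esxcli >/dev/null 2>&1; then
  esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
  esxcli system syslog reload
fi
```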

Hope this will help.

VMware PowerCLI script to get VM's virtual and RDM disk information

I was tasked with migrating several VMs with RDM disks between storage arrays / datastores. The data LUNs had already been migrated/copied, and all that was left was the migration of the VM configuration files and RDM pointers. To make matters worse, the VMs in question were Oracle RAC clustered VMs, so it was imperative to migrate the disks and SCSI IDs exactly as they were before.

Let me start with a script that lists each VM’s virtual and RDM disks, SCSI IDs, device names, and disk file names:

$DiskInfo = @()
foreach ($VMview in Get-VM node001, node002 | Get-View) {
    foreach ($VirtualSCSIController in ($VMview.Config.Hardware.Device | where {$_.DeviceInfo.Label -match "SCSI Controller"})) {
        foreach ($VirtualDiskDevice in ($VMview.Config.Hardware.Device | where {$_.ControllerKey -eq $VirtualSCSIController.Key})) {
            $VirtualDisk = "" | Select VMname, SCSIController, DiskName, SCSI_ID, DeviceName, DiskFile, DiskSize
            $VirtualDisk.VMname         = $VMview.Name
            $VirtualDisk.SCSIController = $VirtualSCSIController.DeviceInfo.Label
            $VirtualDisk.DiskName       = $VirtualDiskDevice.DeviceInfo.Label
            $VirtualDisk.SCSI_ID        = "$($VirtualSCSIController.BusNumber) : $($VirtualDiskDevice.UnitNumber)"
            $VirtualDisk.DeviceName     = $VirtualDiskDevice.Backing.DeviceName
            $VirtualDisk.DiskFile       = $VirtualDiskDevice.Backing.FileName
            $VirtualDisk.DiskSize       = $VirtualDiskDevice.CapacityInKB * 1KB / 1GB
            $DiskInfo += $VirtualDisk
        }
    }
}
$DiskInfo | sort VMname, DiskName | Export-Csv -Path 'd:\DiskInfo.csv'
# You can also pipe to FT -AutoSize or Out-GridView if it helps

The script output shows everything you need to complete the migration successfully:

VMname SCSIController DiskName SCSI_ID DeviceName DiskFile DiskSize
Node001 SCSI controller 0 Hard disk 1 00:00 [My_Datastore_1] Node001/Node001.vmdk 70
Node001 SCSI controller 1 Hard disk 10 01:08 vml.0200240000514f0c5ff200006e587472656d41 [My_Datastore_1] Node001/Node001_11.vmdk 2
Node001 SCSI controller 1 Hard disk 11 01:09 vml.0200470000514f0c5ff200006f587472656d41 [My_Datastore_1] Node001/Node001_12.vmdk 10
Node001 SCSI controller 1 Hard disk 12 01:10 vml.0200480000514f0c5ff2000070587472656d41 [My_Datastore_1] Node001/Node001_13.vmdk 2
Node001 SCSI controller 0 Hard disk 2 00:01 vml.02000d0000514f0c5ff2000047587472656d41 [My_Datastore_1] Node001/Node001_1.vmdk 30
Node001 SCSI controller 0 Hard disk 3 00:02 vml.0200290000514f0c5ff2000048587472656d41 [My_Datastore_1] Node001/Node001_2.vmdk 30
Node001 SCSI controller 1 Hard disk 4 01:00 vml.02002c0000514f0c5ff200004b587472656d41 [My_Datastore_1] Node001/Node001_3.vmdk 5
Node001 SCSI controller 1 Hard disk 5 01:06 vml.0200450000514f0c5ff2000064587472656d41 [My_Datastore_1] Node001/Node001_9.vmdk 100
Node001 SCSI controller 1 Hard disk 6 01:04 vml.0200440000514f0c5ff2000063587472656d41 [My_Datastore_1] Node001/Node001_7.vmdk 5
Node001 SCSI controller 1 Hard disk 7 01:03 vml.0200430000514f0c5ff2000062587472656d41 [My_Datastore_1] Node001/Node001_6.vmdk 125
Node001 SCSI controller 1 Hard disk 8 01:05 vml.0200420000514f0c5ff2000061587472656d41 [My_Datastore_1] Node001/Node001_8.vmdk 5
Node001 SCSI controller 0 Hard disk 9 00:03 [My_Datastore_1] Node001/Node001_10.vmdk 21
Node002 SCSI controller 0 Hard disk 1 00:00 [My_Datastore_2] Node002/Node002.vmdk 70
Node002 SCSI controller 1 Hard disk 10 01:08 vml.0200240000514f0c5ff200006e587472656d41 [My_Datastore_1] Node001/Node001_11.vmdk 2
Node002 SCSI controller 1 Hard disk 11 01:09 vml.0200470000514f0c5ff200006f587472656d41 [My_Datastore_1] Node001/Node001_12.vmdk 10
Node002 SCSI controller 1 Hard disk 12 01:10 vml.0200480000514f0c5ff2000070587472656d41 [My_Datastore_1] Node001/Node001_13.vmdk 2
Node002 SCSI controller 0 Hard disk 2 00:01 vml.02002a0000514f0c5ff2000049587472656d41 [My_Datastore_2] Node002/Node002_1.vmdk 30
Node002 SCSI controller 0 Hard disk 3 00:02 vml.02002b0000514f0c5ff200004a587472656d41 [My_Datastore_2] Node002/Node002_2.vmdk 30
Node002 SCSI controller 1 Hard disk 4 01:00 vml.02002c0000514f0c5ff200004b587472656d41 [My_Datastore_1] Node001/Node001_3.vmdk 5
Node002 SCSI controller 1 Hard disk 5 01:06 vml.0200450000514f0c5ff2000064587472656d41 [My_Datastore_1] Node001/Node001_9.vmdk 100
Node002 SCSI controller 1 Hard disk 6 01:04 vml.0200440000514f0c5ff2000063587472656d41 [My_Datastore_1] Node001/Node001_7.vmdk 5
Node002 SCSI controller 1 Hard disk 7 01:03 vml.0200430000514f0c5ff2000062587472656d41 [My_Datastore_1] Node001/Node001_6.vmdk 125
Node002 SCSI controller 1 Hard disk 8 01:05 vml.0200420000514f0c5ff2000061587472656d41 [My_Datastore_1] Node001/Node001_8.vmdk 5
Node002 SCSI controller 0 Hard disk 9 00:03 [My_Datastore_2] Node002/Node002_3.vmdk 21

To migrate both VMs to another datastore, you just need to do the following (BTW, please refer to the ‘Migrating virtual machines with Raw Device Mappings (RDMs)’ (1005241) VMware KB article for more info):

  1. Using the vSphere Web Client, migrate Node001 to the new datastore (this moves the RDM pointers as well);
  2. Remove the shared RDM disks from Node002;
  3. Migrate Node002 to the new datastore;
  4. Re-add the cluster RDM disks to Node002 using the information provided by the script.

Job done.

You may also play with the where filter:

# Find a VM with a specific RDM disk name or LUN ID:

Get-VM | Get-View | where {$_.Config.Hardware.Device.Backing.DeviceName -match "514f0c511580009f"}

# Find a VM with an RDM disk on a specific storage array – c5cc2 is the LUN ID prefix:

Get-VM | Get-View | where {$_.Config.Hardware.Device.Backing.DeviceName -match "c5cc2"}

I hope this will help.

EMC VNXe 3200 upgrade completed with error message "The DPE has faulted"

We recently upgraded an EMC VNXe 3200 storage array from 3.1.1.5395470 to 3.1.1.5803064 (VCE RCM 5.0.10 or 6.0.3). The upgrade completed successfully (the NFS services failed over and back between Storage Processors without any issues, and the hosts did not lose connectivity to the storage), but at the very end we received the following error messages:

The DPE has faulted
It is unsafe to remove SP B now
It is unsafe to remove SP A now
System VNXe has experienced one or more problems that have had a major impact

EMC VNXe 3200 upgrade The DPE has faulted

This is a known issue and a fix is being developed. A permanent fix will be available in the MR1 SP2 code (3.1.2). Since this does not impact production and the hardware is not actually faulted, these alert messages can be safely ignored.

Cause: the Baseboard Management Controller (BMC) is an onboard device that queries all hardware components periodically. Occasionally, some components take a long time to process this request. The delay registers as a ‘timeout’ with the BMC, which concludes the components are bad. However, the next device query cycle may work fine and produce an ‘operating normally’ message. This software bug has been identified, and the timeout value has been increased to accommodate such delays.

I hope this will help.

Cisco UCS B200 M4: FlexFlash FFCH_ERROR_OLD_FIRMWARE_RUNNING error

After upgrading Cisco UCS B200 M4 blade firmware from 2.2(3e) to 2.2(5c) the following minor fault appeared:

Cisco UCS B200 M4 - FlexFlash FFCH_ERROR_OLD_FIRMWARE_RUNNING error - 1

Apparently, it is a known Cisco bug.

Cisco Bug: CSCut10525: FlexFlash FFCH_ERROR_OLD_FIRMWARE_RUNNING error on B200 M4

Quickview: https://tools.cisco.com/quickview/bug/CSCut10525
Details: https://tools.cisco.com/bugsearch/bug/CSCut10525/?referring_site=bugquickviewclick

Symptom:
After updating B200 M4 server firmware using MR3 build 169 bundle B, FFCH_ERROR_OLD_FIRMWARE_RUNNING is displayed on fault summary. Please see the attached screenshot in Enclosures.

Conditions:
B200 M4 Server after upgrade to 2.2.4b code

Workaround:
Reset FlexFlash Controller manually to make that error disappear

Here is how you do it:

  1. Open the properties of the blade that shows this error. Navigate to Inventory / Storage / Controller.
    Under FlexFlash Controller 1, click Reset FlexFlash Controller:
  2. Click Yes.
    Cisco UCS B200 M4 - FlexFlash FFCH_ERROR_OLD_FIRMWARE_RUNNING error - 3
  3. Click OK.
    Cisco UCS B200 M4 - FlexFlash FFCH_ERROR_OLD_FIRMWARE_RUNNING error - 4
  4. The error message should go away.

Hope this will help.

VMware Host zoning for multi X-Brick EMC XtremIO storage array

VMware vSphere ESXi 5.5 and 6.0 have a limit of 256 LUNs per host:

https://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf
https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf

In an environment where a host with two HBAs (a VMware best practice) is connected to two fabrics and a storage array with two Storage Controllers (EMC VNX, for example), the host will have four paths to a LUN: 2 Controllers x 2 HBAs = 4 paths.

When we apply the same approach to EMC XtremIO clusters (two or more X-Bricks, eight maximum), we should also consider another limit, the ‘Number of total paths on a server’, which is 1024. With two X-Bricks you have four Storage Controllers; multiply by two HBAs and you have eight paths per LUN. Therefore the maximum number of LUNs will be 128 (1024 / 8 = 128).

The following diagram displays the logical connection topology for 8 paths.
Host zoning EMC XtremIO dual X-Brick configuration - 8 paths

If you go to the extreme and configure your XtremIO with eight X-Bricks, you have 16 Storage Controllers. Again, with two HBAs per host, the maximum number of LUNs you can attach to an ESXi host will be 32… I understand that different OSes may have different limits than VMware, and then this logic will not be applicable.

If you have hit the limit of 1024 paths per host (1024 / 4 controllers / 2 HBAs = 128 LUNs) and need to provision more LUNs, the best option is to re-zone the host to limit the number of X-Bricks / Storage Controllers each host HBA can connect to.
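The arithmetic above can be summarized in a few lines of shell (plain POSIX arithmetic, nothing XtremIO-specific):

```shell
# Total path limit per ESXi host is 1024; each X-Brick has two Storage
# Controllers and each host has two HBAs, per the text above.
hbas=2
for xbricks in 1 2 8; do
  controllers=$((xbricks * 2))
  paths_per_lun=$((controllers * hbas))
  max_luns=$((1024 / paths_per_lun))
  echo "$xbricks X-Brick(s): $paths_per_lun paths per LUN, max $max_luns LUNs"
done
# Prints: 1 X-Brick(s): 4 paths per LUN, max 256 LUNs
#         2 X-Brick(s): 8 paths per LUN, max 128 LUNs
#         8 X-Brick(s): 32 paths per LUN, max 32 LUNs
```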

The following diagram displays the logical connection topology for 4 paths.
Host zoning EMC XtremIO dual X-Brick configuration - 4 paths

OK, let’s see how to reconfigure the zoning. Please refer to the ‘HOW TO: Configure smart zoning on Cisco MDS 9148s’ blog post to see how it was configured in the first place.

  1. To begin with, let’s check the current zoning configuration:
    • Fabric1:
      zone name Cluster01_hosts_XIO vsan 10
        fcalias name Cluster01_hosts vsan 10
          pwwn 20:00:00:25:b5:03:a0:04 [Cluster01n01_vhba0]  init
          pwwn 20:00:00:25:b5:03:a0:05 [Cluster01n02_vhba0]  init
      
        fcalias name Cluster01_XIO vsan 10
          pwwn 21:00:00:24:ff:5e:f7:4a [X1_SC1_FC1]  target
          pwwn 21:00:00:24:ff:5f:0b:90 [X1_SC2_FC1]  target
          pwwn 21:00:00:24:ff:8c:9b:78 [X2_SC1_FC1]  target
          pwwn 21:00:00:24:ff:3d:5c:32 [X2_SC2_FC1]  target
            
    • Fabric2:
       zone name Cluster01_hosts_XIO vsan 11
          fcalias name Cluster01_hosts vsan 11
            pwwn 20:00:00:25:b5:03:b1:04 [Cluster01n01_vhba1]  init
            pwwn 20:00:00:25:b5:03:b1:05 [Cluster01n02_vhba1]  init
      
          fcalias name Cluster01_XIO vsan 11
            pwwn 21:00:00:24:ff:5e:f7:4b [X1_SC1_FC2]  target
            pwwn 21:00:00:24:ff:5f:0b:91 [X1_SC2_FC2]  target
            pwwn 21:00:00:24:ff:8c:9b:79 [X2_SC1_FC2]  target
            pwwn 21:00:00:24:ff:3d:5c:33 [X2_SC2_FC2]  target
            
  2. The idea is to configure one-HBA-to-one-X-Brick zones, so I will create an fcalias for X-Brick1 and X-Brick2 (X-BrickN, if you have more…).
    • Fabric1:
      fcalias name XIO_X1 vsan 10
      member device-alias X1_SC1_FC1 target
      member device-alias X1_SC2_FC1 target
      
      fcalias name XIO_X2 vsan 10
      member device-alias X2_SC1_FC1 target
      member device-alias X2_SC2_FC1 target
         
    • Fabric2:
      fcalias name XIO_X1 vsan 11
      member device-alias X1_SC1_FC2 target
      member device-alias X1_SC2_FC2 target
      
      fcalias name XIO_X2 vsan 11
      member device-alias X2_SC1_FC2 target
      member device-alias X2_SC2_FC2 target
         
    • N.B. These aliases can also be used to zone hosts to all X-Bricks in the normal fashion if the LUN/path limit is not going to be an issue:
       zone name Cluster01_X1_X2 vsan 11
      member fcalias Cluster01_hosts
      member fcalias XIO_X1
      member fcalias XIO_X2
          
  3. Let’s configure the zones. There is only one HBA per zone, so instead of creating an fcalias for each initiator I will use device aliases directly:
    • Fabric1:
      zone name Cluster01N01_X1 vsan 10
      member device-alias Cluster01n01_vhba0 initiator
      member fcalias XIO_X1
      
      zone name Cluster01N02_X2 vsan 10
      member device-alias Cluster01n02_vhba0 initiator
      member fcalias XIO_X2
         
    • Fabric2:
      zone name Cluster01N01_X1 vsan 11
      member device-alias Cluster01n01_vhba1 initiator
      member fcalias XIO_X1
      
      zone name Cluster01N02_X2 vsan 11
      member device-alias Cluster01n02_vhba1 initiator
      member fcalias XIO_X2
         
  4. Add the zones to the zoneset:
    • Fabric1:
      zoneset name zs_vsan10 vsan 10
      member Cluster01N01_X1
      member Cluster01N02_X2
         
    • Fabric2:
      zoneset name zs_vsan11 vsan 11
      member Cluster01N01_X1
      member Cluster01N02_X2
         
  5. Activate the zoneset and commit the zones:
    • Fabric1:
      zoneset activate name zs_vsan10 vsan 10
      zone commit vsan 10
    • Fabric2:
      zoneset activate name zs_vsan11 vsan 11
      zone commit vsan 11
  6. Remove the old zones and fcaliases:
    1. Fabric1:
      no zone name Cluster01_hosts_XIO vsan 10
      no fcalias name Cluster01_hosts vsan 10
      no fcalias name Cluster01_XIO vsan 10
    2. Fabric2
      no zone name Cluster01_hosts_XIO vsan 11
      no fcalias name Cluster01_hosts vsan 11
      no fcalias name Cluster01_XIO vsan 11
      
  7. Commit the zones again.
  8. Rescan the HBAs and confirm the number of paths has changed.

I hope this will help.

Michael Dell spells out his plans for VMware:…

Michael Dell spells out his plans for @VMware: “Crown jewel of the EMC federation”

“We believe it is very important to maintain VMware’s successful business model supporting an open and independent ecosystem,” Dell said. “We do not plan to do anything proprietary with VMware as regards Dell or EMC, nor place any limitations on VMware’s ability to partner with any other company.”



HOW TO: Upgrade EMC Virtual Storage Integrator (VSI) for VMware vSphere Web Client

The EMC Virtual Storage Integrator (VSI) for VMware vSphere Web Client is a plug-in for VMware vCenter. It enables administrators to view, manage, and optimize storage for VMware ESX/ESXi hosts and then map that storage to the hosts.

VSI consists of a graphical user interface and the EMC Solutions Integration Service (SIS), which provides communication and access to the storage systems.

Depending on the platform, tasks that you can perform with VSI include:

  • Storage provisioning
  • Setting multipathing policies
  • Cloning
  • Block deduplication
  • Compression
  • Storage mapping
  • Capacity monitoring
  • Virtual desktop infrastructure (VDI) integration
  • Data protection using EMC AppSync or EMC RecoverPoint

Using the Solutions Integration Service, a storage administrator can enable virtual machine administrators to perform management tasks on a set of storage pools.

Some light reading and the OVA package:

OK, let’s discuss the upgrade procedure! Well, it is not actually an in-place upgrade, but rather a database backup, deployment of the new version, and then a DB restore (migration).

I am going to upgrade VSI for VMware vSphere Web Client from 6.4.1.1 to 6.6.3.

Note: For migration from VSI 6.1 only: A known limitation causes the migration of VMAX storage systems from 6.1 to 6.6 to fail. Before creating a backup of the existing database, the storage administrator must delete all VMAX storage systems and then re-register them and the VMAX users after the upgrade.

  1. Log in to your current version of the Solutions Integration Service (https://<SIS IP Address>:8443/vsi_usm/), click Database, select Take a Backup from the Task drop-down, and click Submit to create a backup file of the existing database. Save the backup to a secure location.
    EMC VSI SIS - Database backup
  2. Deploy the new Solutions Integration Service.
    I will actually power off the existing VA and rename it, as the new VA will have the same VM name and run in the same cluster.

    1. Deploy EMC Solutions Integration Service OVA file;
    2. Power On virtual machine with EMC Solutions Integration Service and wait for the deployment to finish;
    3. Log in to the new Solutions Integration Service as admin/ChangeMe (see Default Password for details) and change the default password;
      https://<SIS IP Address>:8443/vsi_usm/
      There is no need to configure the EMC SIS; the configuration will be restored from the DB backup.
    4. Just to be safe, log in to the new Solutions Integration Service, click Database and select Take a Backup to create a backup of the new Solutions Integration Service database, and save the backup to a secure location.

    5. Log in to the new Solutions Integration Service, click Database and select Data Migration from the Task drop-down menu. Click Choose File and locate the database backup of the previous version. Click Submit.
      It should not matter, but in my environment the Data Migration did not work in Mozilla Firefox 41.0.2 but worked in Internet Explorer 11.
      EMC VSI SIS - Database migration
    6. If the migration is successful, all Solutions Integration Service data from the previous version is moved to the new version, including users, storage systems, and access control information.
      If you changed the VM name (host name), deactivate the previous version of the Solutions Integration Service.
      Follow Step 8.
    7. If the migration fails:
      a. Restore the new Solution Integration Service database from the backup file you created in step 4.
      b. Manually provision all required elements.
    8. Register the VSI plug-in with vCenter:
      1. Log in to the new Solutions Integration Service: https://<SIS IP Address>:8443/vsi_usm/admin
      2. Click VSI Setup;
      3. Enter the values for the following parameters and click Register.
        • vCenter IP/Hostname: The IP address of the vCenter to which you are registering the VSI plug-in. If you are using the vCenter hostname, ensure that DNS is configured.
        • vCenter Username: The username that has administrative privileges.
        • vCenter Password: The administrator’s password.
        • Admin Email (Optional): The email address to which notifications should be sent.
      4. If the registration was successful, the following will be displayed in the Status window:
        10/30/2015 17:23:12 sending your request ...
        10/30/2015 17:23:21 receiving your response ...
        
        The operation was successful.
        
        Registered VSI Plugin:
        Key: com.emc.vsi.plugin
        Version: 6.6.3.39
        Name: EMC VSI Plugin
        Description: Integrated management support for EMC storage systems
        Admin Email: none
      5. Browse to the vSphere Web Client address.
        After you log in, the VSI plug-in is downloaded and deployed.
        Note: The download takes several minutes.
        If you have installed previous versions of the VSI plug-in, clear your browser cache to ensure that you use the newest version of VSI.

Hope this will help.

EMC PowerPath Virtual Appliance Version 2.0 SP1: New features and changes

EMC PowerPath Virtual Appliance Version 2.0 SP1 brings the following enhancements to the PowerPath Virtual Appliance web console.

A new tab named PowerPath Monitor has been added. It includes:

  • PowerPath Monitor > Group/Host view lists all the PowerPath hosts under corresponding groups. Physical PowerPath hosts are organized under their corresponding user-defined host groups (created from the inventory tab) and ESXi hosts are grouped under the corresponding vCenter to which they belong.
  • PowerPath Monitor > Group/Host > Summary view provides host monitoring capabilities, including device/path monitoring. The monitor displays PowerPath volume, path, and bus details.
    Healthy state:
    EMC PowerPath Virtual Appliance Version 2.0 SP1 - ESX host summary - healthy

    Degraded:
    EMC PowerPath Virtual Appliance Version 2.0 SP1 - ESX host summary - degraded

  • PowerPath Monitor > Group/Host > LUN view is similar to PowerPath Viewer, but adds a Queued IO value under the LUN view (not available with PowerPath Viewer).
    EMC PowerPath Virtual Appliance Version 2.0 SP1 - LUN view
  • PowerPath Monitor > Group/Host > BUS view shows the association between HBA Ports > Array > Array Ports along with the IOs queued for the bus.
    EMC PowerPath Virtual Appliance Version 2.0 SP1 - BUS view
  • EMC PowerPath Virtual Appliance capabilities available as a plug-in for VMware vCenter: PowerPath Virtual Appliance supports script plugins on vCenter 5.5 and above (only the vCenter Web Client is supported). The vCenter plugin provides a minimal view of the PowerPath Virtual Appliance GUI. The script plugin is registered and enabled on the vCenter server, and can be accessed via the shortcut “EMC PowerPath/VE License Manager” in the vCenter web client. Only the details of ESXi hosts belonging to the current instance of vCenter are displayed.
    EMC PowerPath Virtual Appliance Version 2.0 SP1 - vCenter Plugin
    Registering EMC PowerPath vCenter Plugin:

    • Click on Register vCenter Plugin;
    • Accept vCenter Server certificate if required;
    • By default, script-based plugins are disabled in the vCenter Web Client. To enable the script plugin on the vCenter server, follow these steps:
      • Edit the webclient.properties file and append the following line at the end of the file (if not already present):
        scriptPlugin.enabled = true
        You can find the webclient.properties file in the following locations on the vCenter server:

        • On vCenter server 5.x:
          • VMware vCenter Server Appliance: /var/lib/vmware/vsphere-client
          • Windows 2003: %ALLUSERSPROFILE%\Application Data\VMware\vSphere Web Client
          • Windows 2008/2012: %ALLUSERSPROFILE%\VMware\vSphere Web Client
        • On vCenter server 6.0.x:
          • VMware vCenter Server Appliance: /etc/vmware/vsphere-client/
          • Windows 2008/2012: %ALLUSERSPROFILE%\VMware\vCenterServer\cfg\vsphere-client
      • Restart the vSphere Web client service on the vCenter server.
        • On Windows-based vCenter servers, restart the service named vSphere Web Client
        • On Linux-based vCenter appliances, run /etc/init.d/vsphere-client restart
    • Access the plugin after registering and enabling it. Use the shortcut, EMC PowerPath/VE License Manager, located in the vCenter Web client.
      EMC PowerPath Virtual Appliance Version 2.0 SP1 - vCenter Plugin - icon
      Note
      If you use the default self-signed certificate of the Virtual Appliance, open the PowerPath Virtual Appliance GUI in a new tab in order to accept the certificate before accessing the plugin via the vCenter web client.
  • Minimum configurable polling interval for PowerPath/VE changed to 10 minutes.
    Navigate to System > Settings, adjust polling interval if required
    EMC PowerPath Virtual Appliance Version 2.0 SP1 - ESXi poll interval
  • Added polling capability for PowerPath/VE 6.0 and later hosts – based on events.
  • Direct upgrade support from EMC PowerPath Virtual Appliance 2.0.0.
    See EMC PowerPath Virtual Appliance Upgrade from 1.2.x -> 2.0.0 and 2.0.0 to 2.0.1 for details.
  • Added a user interface enhancement: a context menu option in the PowerPath Monitor tab for inventory management
  • Enhanced back-up and restore capability of EMC PowerPath Virtual Appliance inventory.
    The EMC PowerPath Virtual Appliance Installation and Configuration Guide provides more information.
  • Added new fields in the REST API response for hosts, including:
    • osVersion: Version of the OS running on the host
    • hostStateTimestamp: Time when the host state was updated in PowerPath Virtual Appliance
    • deadPathCount: Total number of dead paths on the host
    • totalPathCount: Total number of paths on the host
    • totalVolumeCount: Total number of volumes on the host
    • degradedVolumeCount: Total number of degraded volumes on the host
  • Support for RTOOLS 6.0 SP1
  • Queued IO count has been added to the response of the Path REST API
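Returning to the script-plugin step above: the webclient.properties edit can be made idempotent with a couple of shell lines. This is a sketch working on a local demo file; substitute the platform-specific path from the list earlier:

```shell
# Append scriptPlugin.enabled = true only if the key is not already present.
# PROPS points at a demo file here; on a VCSA 6.0 it would be
# /etc/vmware/vsphere-client/webclient.properties (see the list above).
PROPS=./webclient.properties
touch "$PROPS"
grep -q '^scriptPlugin.enabled' "$PROPS" \
  || printf 'scriptPlugin.enabled = true\n' >> "$PROPS"
cat "$PROPS"   # → scriptPlugin.enabled = true
```

Running the snippet a second time leaves the file unchanged, so it is safe to include in a configuration script; remember to restart the vSphere Web Client service afterwards.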

Documentation:

Enjoy!

HOW TO: Upgrade EMC PowerPath Virtual Appliance

This article covers the EMC® PowerPath® Virtual Appliance upgrade from 2.0.0 to 2.0.1. Please refer to the “HOW TO: Upgrade EMC PowerPath Virtual Appliance” article for the upgrade from EMC® PowerPath® Virtual Appliance 1.2.x.

  1. Download EMC® PowerPath® Virtual Appliance upgrade package from EMC Online Support portal:
  2. Shut down the PowerPath Virtual Appliance and:
    1. Take a snapshot;
    2. Add a new 10 GB disk to the PowerPath Virtual Appliance (VM). It is recommended to increase the size of the root file system when upgrading from PowerPath Virtual Appliance 2.0.0.
  3. Power on PowerPath Virtual Appliance;
  4. SSH to the EMC PowerPath Virtual Appliance, log in as root, and follow this procedure to extend the root file system:
    1. epp001:~ # df
      Filesystem                  1K-blocks    Used Available Use% Mounted on
      rootfs                        3853936 1652176   2000852  46% /
      udev                          2028592     108   2028484   1% /dev
      tmpfs                         2028592       0   2028592   0% /dev/shm
      /dev/mapper/systemVG-LVRoot   3853936 1652176   2000852  46% /
      /dev/mapper/systemVG-LVswap   6094400  143484   5636344   3% /swap
      /dev/sda1                      165602   24037    133015  16% /boot
    2. epp001:~ # cat /proc/partitions
      major minor  #blocks  name
      8        0   10485760 sda
      8        1     171012 sda1
      8        2   10311680 sda2
      8       16   10485760 sdb
      253        0    6291456 dm-0
      253        1    4018176 dm-1
    3. epp001:~ # ls -l /sys/class/scsi_host
      total 0
      lrwxrwxrwx 1 root root 0 Oct 29 12:18 host0 -> ../../devices/pci0000:00/0000:00:10.0/host0/scsi_host/host0
      lrwxrwxrwx 1 root root 0 Oct 29 12:18 host1 -> ../../devices/pci0000:00/0000:00:07.1/ata2/host1/scsi_host/host1
      lrwxrwxrwx 1 root root 0 Oct 29 12:18 host2 -> ../../devices/pci0000:00/0000:00:07.1/ata3/host2/scsi_host/host2
    4. epp001:~ # echo "- - -" > /sys/class/scsi_host/host0/scan

      This command rescans the SCSI host: echoing the wildcard value “- - -” (channel, target and LUN) makes the operating system rescan that host bus for new devices.

    5. epp001:~ # cat /proc/partitions
      major minor  #blocks  name
      8        0   10485760 sda
      8        1     171012 sda1
      8        2   10311680 sda2
      8       16   10485760 sdb
      253        0    6291456 dm-0
      253        1    4018176 dm-1
    6. epp001:~ # fdisk -l /dev/sdb
      
      Disk /dev/sdb: 10.7 GB, 10737418240 bytes
      255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      
      Disk /dev/sdb doesn't contain a valid partition table
    7. epp001:~ # vgdisplay
        --- Volume group ---
        VG Name               systemVG
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  4
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                2
        Open LV               2
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               9.83 GiB
        PE Size               4.00 MiB
        Total PE              2517
        Alloc PE / Size       2517 / 9.83 GiB
        Free  PE / Size       0 / 0
        VG UUID               RHiVZ9-cqSo-srmH-NGZs-zm4W-rAAm-fLeC15
    8. epp001:~ # vgextend systemVG /dev/sdb
      No physical volume label read from /dev/sdb
      Physical volume "/dev/sdb" successfully created
      Volume group "systemVG" successfully extended
    9. epp001:~ # vgdisplay
      --- Volume group ---
      VG Name               systemVG
      System ID
      Format                lvm2
      Metadata Areas        2
      Metadata Sequence No  5
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                2
      Open LV               2
      Max PV                0
      Cur PV                2
      Act PV                2
      VG Size               19.83 GiB
      PE Size               4.00 MiB
      Total PE              5076
      Alloc PE / Size       2517 / 9.83 GiB
      Free  PE / Size       2559 / 10.00 GiB
      VG UUID               RHiVZ9-cqSo-srmH-NGZs-zm4W-rAAm-fLeC15
    10. epp001:~ # lvextend -L +5GB /dev/systemVG/LVRoot
      Extending logical volume LVRoot to 8.83 GiB
      Logical volume LVRoot successfully resized
    11. epp001:~ # resize2fs /dev/systemVG/LVRoot
      resize2fs 1.41.9 (22-Aug-2009)
      Filesystem at /dev/systemVG/LVRoot is mounted on /; on-line resizing required
      old desc_blocks = 1, new_desc_blocks = 1
      Performing an on-line resize of /dev/systemVG/LVRoot to 2315264 (4k) blocks.
      The filesystem on /dev/systemVG/LVRoot is now 2315264 blocks long.
    12. epp001:~ # df
      Filesystem                  1K-blocks    Used Available Use% Mounted on
      rootfs                        8884968 1653164   6771068  20% /
      udev                          2028592     108   2028484   1% /dev
      tmpfs                         2028592       0   2028592   0% /dev/shm
      /dev/mapper/systemVG-LVRoot   8884968 1653164   6771068  20% /
      /dev/mapper/systemVG-LVswap   6094400  143484   5636344   3% /swap
      /dev/sda1                      165602   24037    133015  16% /boot
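The LVM part of the procedure above can be condensed into one helper. The sketch below is a dry-run wrapper, not the vendor's tooling: setting RUN=echo prints the commands instead of executing them, and the device name and +5G size are taken from this example, so adjust them to your VM.

```shell
# Grow the root LV onto a newly added disk (sketch of the steps above).
# With RUN=echo the commands are only printed; unset RUN and run as root
# on the appliance to actually apply them.
grow_root_lv() {
    disk="$1"                                        # e.g. /dev/sdb
    ${RUN:-} vgextend systemVG "$disk"               # add the disk to the VG
    ${RUN:-} lvextend -L +5G /dev/systemVG/LVRoot    # grow the root LV
    ${RUN:-} resize2fs /dev/systemVG/LVRoot          # grow the filesystem online
}
```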
  5. Upgrade EMC PowerPath Virtual Appliance:
    1. Upload ‘applianceUpdate.zip‘ to a directory on the Virtual Appliance VM, /tmp for example;
    2. SSH into the PowerPath Virtual Appliance VM, login as root;
    3. Unzip the upgrade package:
      epp001:~ # cd /tmp
      epp001:/tmp # unzip applianceUpdate.zip
      Archive: applianceUpdate.zip
        creating: applianceUpdate/
       inflating: applianceUpdate/preUpdate.sh
       inflating: applianceUpdate/postUpdate.sh
      {skipped}
    4. Change to the extracted update directory:
      epp001:/tmp # cd applianceUpdate
      epp001:/tmp/applianceUpdate #
    5. Run the update script:
      epp001:/tmp/applianceUpdate # /bin/bash applianceUpdate
      14:19:23 [INFO]: * Starting the appliance update process *
      14:19:23 [INFO]: Updating PowerPath: 2.0.0.0.86 -> 2.0.1.0.206
      14:19:23 [INFO]: Logs can be found here:
      14:19:23 [INFO]: /opt/ADG/update/logs/update-2.0.0.0.86-2.0.1.0.206-2015_10_29-14_19_23.log
      14:19:23 [INFO]: * Validating update *
      14:19:24 [INFO]: Checking installed product version ...
      14:19:24 [INFO]: Product version check is successful.
      14:19:24 [INFO]: Adding update repo ...
      14:19:24 [INFO]: Update repo added successfully.
      14:19:24 [INFO]: Checking OS version ...
      14:19:24 [INFO]: OS version check is successful.
      14:19:24 [INFO]: Update validation successful.
      14:19:24 [INFO]: * Starting Update *
      14:19:24 [INFO]: * Running pre-update script *
      14:19:56 [INFO]: Updating the existing packages. This may take some time. Please wait ...
      14:21:15 [INFO]: Update of existing packages completed.
      14:21:15 [INFO]: Installing new packages. This may take some time. Please wait ...
      14:21:17 [INFO]: Installation of new packages completed.
      14:21:17 [INFO]: Updating the product version to 2.0.1.0.206 ...
      14:21:17 [INFO]: * Running post-update script *
      14:21:19 [INFO]: * COMPLETE: Appliance update completed successfully *
    6. Reboot the PowerPath Virtual Appliance:
      epp001:/tmp/applianceUpdate # reboot
      Broadcast message from root (pts/0) (Thu Oct 29 14:22:27 2015):
      The system is going down for reboot NOW!
  6. Log in to the EMC PowerPath Virtual Appliance web UI, navigate to System / Health, and verify that all services are running.
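As a final sanity check from the shell, root file-system usage (the reason for expanding the disk in the first place) can be verified with a small helper. A sketch; the default threshold is an arbitrary assumption, not an EMC recommendation:

```shell
# Fail if the root filesystem is more than LIMIT percent full.
check_root_usage() {
    limit="${1:-80}"    # assumed threshold in percent, adjust to taste
    used=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$used" -gt "$limit" ]; then
        echo "WARN: / is ${used}% full (limit ${limit}%)"
        return 1
    fi
    echo "OK: / is ${used}% full"
}
```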

I hope this will help.