From Notes_Wiki
Latest revision as of 08:23, 12 April 2023

Home > VIOS or AIX > Removing disk from VIOS server

Removing disk from VIOS server

To remove a disk from a VIOS server, use the steps below:

  1. Login to the VIOS server (padmin)
  2. Find the hdisk name of the disk to be removed using:
    lspv
    Example output
    $ lspv
    NAME PVID VG STATUS
    hdisk120 00c2eae02d3989c8 None
    hdisk121 00c2eae095ba1018 None
    hdisk122 none None
    hdisk123 none None
  3. Note that if the disk has not been initialized at the VIOS level using:
    bootinfo -s hdisk124
    chdev -l hdisk124 -a algorithm=shortest_queue -a reserve_policy=no_reserve
    chdev -l hdisk124 -a pv=yes
    then the PVID column may show none instead of a hexadecimal value. We cannot assume a particular hdisk is unused based only on the PVID column.
  4. Thus, we should first check the output of
    lsmap -all | more
    to see which physical volumes (lspv hdisks) are mapped to which partition. Against each partition, each PV has a VTD value, which is a unique ID identifying the mapping of that PV (hdisk) to the LPAR / partition.
  5. If we want to remove this mapping between the hdisk and the partition, we can use the following as the root user (r o or oem_setup_env):
    rmdev -dev <VTD of mapping between hdisk and partition>
  6. Additionally, if we want to match a particular hdisk to a LUN at the storage level, we can find the volume serial (UID) and other details for the hdisk using:
    lsmpio -ql <device-name>
    For example:
    lsmpio -ql hdisk4
    This helps validate that we have the correct hdisk against the storage LUN IDs.
  7. After validating that the disk is not mapped to any partition, and that we are removing the correct disk as seen at the storage end, perform the actual removal:
    1. Switch to elevated (root) mode
      r o
    2. Remove the disk which is not mapped to any partition via
      rmdev -Rdl <device-name>
      For example:
      rmdev -Rdl hdisk4
  8. Then finally remove the disk from storage:
    1. Login to storage and go to Volumes
    2. Unmap the volume
      1. Select the volume by validating its name and UID
      2. Right click on the volume and select "Unmap All Hosts"
      3. Enter the number of mappings in "Verify the number of mappings that this operation affects", and click on "Unmap"
    3. Delete the volume
      1. Right click on the volume again, select "Delete"
      2. Enter the number of volumes in "Verify the number of volumes that you are deleting", then click on "Delete"
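The mapping check in step 4 can also be scripted. Below is a minimal sketch (a hypothetical helper, not part of VIOS) that scans `lsmap -all` output for VTDs whose backing device is a given hdisk; it assumes the stanza layout where each vhost section lists `VTD` and `Backing device` lines, as in IBM's lsmap documentation:

```python
def vtds_for_hdisk(lsmap_output: str, hdisk: str):
    """Return (svsa, vtd) pairs whose backing device is the given hdisk.

    Assumes the stanza format printed by `lsmap -all`, where each SVSA
    (vhost) section may contain VTD blocks with a 'Backing device' line.
    """
    pairs = []
    svsa = vtd = None
    for line in lsmap_output.splitlines():
        if line.startswith("vhost"):
            # New SVSA stanza begins with the vhost adapter name
            svsa = line.split()[0]
        elif line.startswith("VTD"):
            # Remember the most recent VTD name in this stanza
            vtd = line.split(None, 1)[1].strip()
        elif line.startswith("Backing device"):
            backing = line.split()[-1]
            if backing == hdisk and vtd and vtd != "NO VIRTUAL TARGET DEVICE FOUND":
                pairs.append((svsa, vtd))
    return pairs

# Sample output in the documented format (hostnames/IDs are illustrative)
sample = """\
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U9223.22H.782EAE0-V1-C11                     0x00000003

VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk4
"""

print(vtds_for_hdisk(sample, "hdisk4"))   # [('vhost0', 'vtscsi0')]
```

Any VTD returned for the hdisk is a candidate for `rmdev -dev <VTD>` in step 5 before the disk itself is removed.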

Refer:

  * https://www.ibm.com/docs/en/aix/7.2?topic=r-rmdev-command
  * https://www.ibm.com/docs/en/power7?topic=commands-lsmap-command
  * https://www.ibm.com/support/pages/vio-server-and-lpar-client-relationship-and-storage-mapping

Vhost without any mapping

If 'lsmap -all | more' shows a vhost with no target, such as:


SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost42         U9223.22H.782EAE0-V1-C16                     0x00000006

VTD                   NO VIRTUAL TARGET DEVICE FOUND

SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost43         U9223.22H.782EAE0-V1-C17                     0x00000006

VTD                   NO VIRTUAL TARGET DEVICE FOUND

and the vhost devices also appear in

lsdev

output, then in one case neither of the commands below deleted the vhost device:

rmdev -Rdl vhost42
rmdev -dev vhost42

In another case, we were able to delete the device via 'rmdev -dev <vhost-device-name>' without any problem. Having such devices turned on the attention LED on the Power server. After deleting the vhost device, the LED was cleared using any one of the options below:
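To spot such empty vhost adapters before they trigger an attention LED, the `lsmap -all` output can be scanned for stanzas reporting no virtual target device. A minimal sketch (a hypothetical helper, assuming the exact layout shown above):

```python
def empty_vhosts(lsmap_output: str):
    """Return vhost names whose stanza reports no virtual target device.

    Assumes the `lsmap -all` layout shown above, where an unmapped vhost
    prints 'VTD   NO VIRTUAL TARGET DEVICE FOUND'.
    """
    empty = []
    current = None
    for line in lsmap_output.splitlines():
        if line.startswith("vhost"):
            # Each SVSA stanza starts with the vhost adapter name
            current = line.split()[0]
        elif line.startswith("VTD") and "NO VIRTUAL TARGET DEVICE FOUND" in line:
            if current:
                empty.append(current)
    return empty

# The example output from the section above
sample = """\
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost42         U9223.22H.782EAE0-V1-C16                     0x00000006

VTD                   NO VIRTUAL TARGET DEVICE FOUND

SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost43         U9223.22H.782EAE0-V1-C17                     0x00000006

VTD                   NO VIRTUAL TARGET DEVICE FOUND
"""

print(empty_vhosts(sample))   # ['vhost42', 'vhost43']
```

Each name returned is a candidate for removal with 'rmdev -dev <vhost-device-name>' as described above.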

Using diag menu

  1. diag
  2. Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.)
  3. Identify and Attention Indicators
  4. Set System Attention Indicator to NORMAL
  5. Press Enter, then press F7 to commit

Using command

/usr/lpp/diagnostics/bin/usysfault -s normal

From ASMI

  1. Login to ASMI
  2. Service indicators
  3. System Attention Indicator
  4. Turn off the system attention indicators.

From HMC

  1. Login to HMC
  2. Systems Management
  3. Servers
  4. Select the problematic server.
  5. In the pop-up menu, click on Operations > LED status > Deactivate attention LED.


Home > VIOS or AIX > Removing disk from VIOS server