<yambe:breadcrumb self="File /boot/vmlinux not found, load kernel first boot issue">CentOS_7.x_troubleshooting|Troubleshooting</yambe:breadcrumb>

=CentOS 7.x File /boot/vmlinux not found, load kernel first boot issue=

Tried "yum reinstall kernel" as per https://www.centos.org/forums/viewtopic.php?t=66555 but it did not work

Further, running '<tt>grub2-mkconfig</tt>' showed:
<pre>
    /usr/sbin/grub2-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
</pre>
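For reference, the warning above comes from regenerating the grub configuration. A minimal sketch, assuming a UEFI CentOS 7 install (the transcript below shows an EFI boot array; on BIOS systems the output path would be /boot/grub2/grub.cfg instead):
<pre>
#Regenerate the grub configuration; grub2-probe warnings about a missing
#physical volume point at degraded md/LVM devices underneath /boot
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
</pre>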

Then, based on https://serverfault.com/questions/617552/grub-some-modules-may-be-missing-from-core-image-warning, checked '<tt>cat /proc/mdstat</tt>' and found that the RAID arrays were indeed degraded: /dev/sda had dropped out of all of them.

Related terminal I/O:
<pre>
[root@rekallcm1 ~]# cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] 
md125 : active raid1 sdc2[2] sdb2[1]
      1049536 blocks super 1.0 [3/2] [_UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active raid1 sdc3[2] sdb3[1]
      52419584 blocks super 1.2 [3/2] [_UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active raid5 sdb1[1] sdc1[3]
      3799744512 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      bitmap: 11/15 pages [44KB], 65536KB chunk

unused devices: <none>
[root@rekallcm1 ~]#   mdadm --assemble --scan
mdadm: Found some drive for an array that is already active: /dev/md/boot_efi
mdadm: giving up.
mdadm: Found some drive for an array that is already active: /dev/md/pv00
mdadm: giving up.
mdadm: Found some drive for an array that is already active: /dev/md/root
mdadm: giving up.
[root@rekallcm1 ~]# more /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] 
md125 : active raid1 sdc2[2] sdb2[1]
      1049536 blocks super 1.0 [3/2] [_UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active raid1 sdc3[2] sdb3[1]
      52419584 blocks super 1.2 [3/2] [_UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active raid5 sdb1[1] sdc1[3]
      3799744512 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      bitmap: 11/15 pages [44KB], 65536KB chunk

unused devices: <none>
[root@rekallcm1 ~]# mdadm --detail /dev/md125
/dev/md125:
           Version : 1.0
     Creation Time : Wed Aug  7 11:14:33 2019
        Raid Level : raid1
        Array Size : 1049536 (1024.94 MiB 1074.72 MB)
     Used Dev Size : 1049536 (1024.94 MiB 1074.72 MB)
      Raid Devices : 3
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Dec  7 17:05:49 2019
             State : clean, degraded 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : localhost:boot_efi
              UUID : 34cceb88:1f9e97b2:0c130d06:1a07a551
            Events : 485

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
[root@rekallcm1 ~]# mdadm --examine /dev/sda2 | grep 'Event'
         Events : 332
[root@rekallcm1 ~]# mdadm --examine /dev/sdb2 | grep 'Event'
         Events : 485
[root@rekallcm1 ~]# mdadm /dev/md125 --add /dev/sda2
mdadm: re-added /dev/sda2
[root@rekallcm1 ~]# more /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] 
md125 : active raid1 sda2[0] sdc2[2] sdb2[1]
      1049536 blocks super 1.0 [3/3] [UUU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active raid1 sdc3[2] sdb3[1]
      52419584 blocks super 1.2 [3/2] [_UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active raid5 sdb1[1] sdc1[3]
      3799744512 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      bitmap: 11/15 pages [44KB], 65536KB chunk

unused devices: <none>
[root@rekallcm1 ~]# mdadm /dev/md126 --add /dev/sda3
mdadm: re-added /dev/sda3
[root@rekallcm1 ~]# mdadm /dev/md127 --add /dev/sda1
mdadm: re-added /dev/sda1
[root@rekallcm1 ~]#
</pre>
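The '<tt>mdadm --examine ... | grep Event</tt>' check above is what justifies simply re-adding the /dev/sda partitions: their event counter lags behind the in-sync members, so mdadm can catch them up from the write-intent bitmap instead of doing a full rebuild from scratch. A minimal sketch of the same check and re-add for all three arrays (device-to-array mapping taken from the mdstat output above; adjust for other layouts):
<pre>
#Compare event counters of the stale disk against an in-sync member;
#a lower count on /dev/sdaN means that partition dropped out and is behind
for n in 1 2 3; do
    mdadm --examine /dev/sda$n | grep 'Events'
    mdadm --examine /dev/sdb$n | grep 'Events'
done

#Re-add the stale partitions to their arrays
mdadm /dev/md125 --add /dev/sda2
mdadm /dev/md126 --add /dev/sda3
mdadm /dev/md127 --add /dev/sda1
</pre>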
# Then wait for the raid arrays to rebuild (a monitoring sketch follows this list).
# Then '<tt>yum reinstall &lt;latest-kernel&gt;</tt>'.
# Then reboot and validate.
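A minimal sketch of these final steps, assuming the same array names as above (the kernel version string is only an example; pick the newest one reported by rpm):
<pre>
#Watch rebuild progress until all arrays report [UUU] again
watch -n 10 cat /proc/mdstat
mdadm --detail /dev/md127 | grep -E 'State|Rebuild'

#Find the newest installed kernel and reinstall it so /boot and the
#grub entries are written to the now-healthy arrays
rpm -q kernel --last
yum reinstall kernel-3.10.0-1062.el7.x86_64    #example version string

#Reboot and confirm the expected kernel came up
reboot
uname -r
</pre>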


