Linux Software RAID: A RAID Experiment with mdadm

While experimenting with RAID using mdadm I hit a problem that I could not solve for quite a while and could not find anything about online. I planned to ask the training instructor, but after waiting half a day he never showed up, so I went back to searching on Google and finally found the cause on a foreign BBS.
 
The problem was as follows; it needs little explanation, I am sure everyone can see what is going on:
[root@localhost ~]# mdadm -C /dev/md5  -l 5 -n 3 -x 1 /dev/sdb[5-8]
mdadm: Cannot open /dev/sdb5: Device or resource busy
mdadm: Cannot open /dev/sdb6: Device or resource busy
mdadm: Cannot open /dev/sdb7: Device or resource busy
mdadm: Cannot open /dev/sdb8: Device or resource busy
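
For reference, here is what each option in the create command above means (the annotations are mine, not mdadm output):

mdadm -C /dev/md5 \   # -C: create a new array at /dev/md5
      -l 5 \          # -l: RAID level 5
      -n 3 \          # -n: 3 active member devices
      -x 1 \          # -x: 1 hot spare
      /dev/sdb[5-8]   # shell glob, expands to /dev/sdb5 .. /dev/sdb8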
 
This was on my first virtual machine. I later repeated the experiment on a clean machine with three separate disks added, and hit essentially the same problem, just with three whole disks instead of three partitions.
 
While hunting for the cause, some people online claimed the motherboard did not support it, which seemed unlikely to me; others blamed the kernel. In the end the real problem was neither: the /dev/md* device was simply still in use. Stopping it with mdadm --stop is all it takes, and the problem is solved. The process follows.
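
Before stopping anything, you can confirm that an md array really is holding the devices by looking at /proc/mdstat, which lists every active array and its members (the sample output in the comment is illustrative):

cat /proc/mdstat   # active arrays show up here, e.g. "md5 : active raid5 sdb8[3] sdb7[2] ..."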
 
[root@localhost ~]# mdadm --stop /dev/md5
mdadm: stopped /dev/md5
[root@localhost ~]# mdadm -C /dev/md5 -l 5 -n 3 /dev/sd[bcd]
mdadm: /dev/sdb appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Dec  9 03:37:11 2009
mdadm: /dev/sdc appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Dec  9 03:37:11 2009
mdadm: /dev/sdd appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Dec  9 03:37:11 2009
Continue creating array? y
mdadm: array /dev/md5 started.
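
As the warnings show, the disks still carried md superblocks from my earlier attempt, which is why mdadm said each device "appears to be part of a raid array". Had I wanted to start from genuinely clean disks instead of answering y, I could have wiped the stale metadata first; a sketch (this destroys the old superblocks, and the array must not be running):

mdadm --stop /dev/md5                  # make sure no array is using the disks
mdadm --zero-superblock /dev/sd[bcd]   # erase the leftover md metadata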
[root@localhost ~]# mdadm --detail /dev/md5
/dev/md5:
        Version : 00.90.03
  Creation Time : Wed Dec  9 03:46:26 2009
     Raid Level : raid5
     Array Size : 2097024 (2048.22 MiB 2147.35 MB)
  Used Dev Size : 1048512 (1024.11 MiB 1073.68 MB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 5
    Persistence : Superblock is persistent
    Update Time : Wed Dec  9 03:46:26 2009
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
 Rebuild Status : 55% complete
           UUID : 297ef6fe:26379be0:84c578f1:21ebe5cf
         Events : 0.1
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       3       8       48        2      spare rebuilding   /dev/sdd
[root@localhost ~]#
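
While the spare is rebuilding you can also follow the resync from /proc/mdstat, which prints a progress bar and an ETA; for example:

watch -n 1 cat /proc/mdstat   # refresh every second until the recovery line disappears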
[root@localhost ~]# mdadm --detail /dev/md5
/dev/md5:
        Version : 00.90.03
  Creation Time : Wed Dec  9 03:46:26 2009
     Raid Level : raid5
     Array Size : 2097024 (2048.22 MiB 2147.35 MB)
  Used Dev Size : 1048512 (1024.11 MiB 1073.68 MB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 5
    Persistence : Superblock is persistent
    Update Time : Wed Dec  9 03:47:34 2009
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 297ef6fe:26379be0:84c578f1:21ebe5cf
         Events : 0.2
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
[root@localhost ~]# mkfs.ext3 /dev/md5
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262144 inodes, 524256 blocks
26212 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912
Writing inode tables: done                           
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
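
As the mke2fs message says, those periodic checks can be tuned away with tune2fs; for instance, to disable both the mount-count and the time-based checks (my example, not part of the original session):

tune2fs -c 0 -i 0 /dev/md5   # -c 0: never check by mount count; -i 0: never by time interval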
[root@localhost ~]# mount /dev/md5 /mnt
[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             3.8G  1.9G  1.8G  53% /
/dev/sda3             4.3G  137M  3.9G   4% /home
/dev/sda1              46M   11M   33M  24% /boot
tmpfs                 339M     0  339M   0% /dev/shm
/dev/md5              2.0G   36M  1.9G   2% /mnt
[root@localhost ~]#
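
One step the session above does not show: without a config entry the array may not be assembled automatically at boot. A common way to persist it is sketched below; on a RHEL-era system like this one the file is typically /etc/mdadm.conf (on Debian it is /etc/mdadm/mdadm.conf), and the fstab line is my assumption about the desired mount point:

mdadm --detail --scan >> /etc/mdadm.conf                    # record the ARRAY line with its UUID
echo '/dev/md5  /mnt  ext3  defaults  0 0' >> /etc/fstab    # mount it at boot (example)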
