Intro.

This is not for the fainthearted, but since you're here, you're probably looking for what my title says...

This procedure is especially important when one of your hard disk drives breaks and you urgently need to swap it with another (and unfortunately you are not able to buy another new 80GB hdd, but have to cope with adding new 1TB hdds instead).

So, basically, you could simply add a new 1TB hdd to your existing 80GB RAID1 Array and live with it, but considering the cost of new hard drives, what's the point?

It's a lot better if you just replace BOTH the old AND the broken original 80GB hard disk drives with fresh new 1TB hdds.

This way you:

  1. Refresh your existing RAID 1 array.
  2. Avoid a problem that is bound to happen sooner or later on your old but still-working 80GB hdd.

This article is split into 2 parts for organizational purposes. Part 1 covers the "information gathering" side; Part 2 will deal with the hands-on operations.

Preliminary checks: Check IF there is a Bitmap on your /dev/mdX array.

mdadm --examine-bitmap /dev/sdXn
cat /proc/mdstat | grep bitmap

The first command examines the bitmap superblock on a member device of the array (e.g. /dev/sda1) or on an external bitmap file; the second checks whether /proc/mdstat shows a bitmap line for your arrays.

Example n.1: Array with a bitmap.

cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md_d0 : active raid5 sde1[0] sdf1[4] sdb1[5] sdd1[2] sdc1[1]
      1250241792 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/10 pages [0KB], 16384KB chunk

unused devices: <none>

Example n.2: Normal array example (without bitmap line):

cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      264960 blocks [2/2] [UU]

md2 : active raid1 sdb4[1] sda4[0]
      65312192 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      10482304 blocks [2/2] [UU]

unused devices: <none>

If your array has a bitmap, an extra line in the output describes the bitmap state (taken from Example 1):

     bitmap: 0/10 pages [0KB], 16384KB chunk

What's a bitmap? It's a portion of data which keeps a "picture" of the data on disk. With a write-intent bitmap, every time some data is about to be written, the corresponding region of the RAID array is marked as dirty. Then, say after a power failure, all that needs to be done to make sure that all disks in the array have matching data is to check the regions marked as dirty (instead of resynchronizing the entire disk surface).

This way, instead of waiting for an hour or more, only a few seconds of work are required. You can read more about write-intent bitmaps:
here: http://etbe.coker.com.au/2008/01/28/write-intent-bitmaps/
here: http://www.linuxfoundation.org/collaborate/workgroups/linux-raid/mdstat#bitmap_line
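If you want to see the dirty-region idea in practice, you can dump the bitmap superblock directly. This is just a quick illustration, assuming an internal bitmap and /dev/sda1 as a member device; the exact fields printed depend on your mdadm version:

# Examine the write-intent bitmap stored on a member device (not on /dev/mdX itself).
# The "Bitmap" line reports how many chunks are currently dirty, i.e. what md would
# resync after an unclean shutdown.
mdadm --examine-bitmap /dev/sda1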

Bitmaps are of two types: internal (stored inside the array) & external (stored outside the array).

How to add an internal bitmap?

mdadm -G /dev/md0 --bitmap=internal
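Once that is done, it is worth double-checking that the bitmap really showed up (a minimal check, assuming /dev/md0 as in the command above):

# A bitmap line should now appear in /proc/mdstat...
cat /proc/mdstat | grep bitmap
# ...and, on reasonably recent mdadm versions, in the array details too.
mdadm --detail /dev/md0 | grep -i bitmap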

(Quoted from the linked sites): "The down-side to this feature is that it will slightly reduce performance: when I enabled internal bitmaps, the performance of a simple copy operation dropped to HALF! So if you really can't afford a slowdown during a RAID rebuild, try to use an external bitmap if at all possible. For example, use fast SSDs, USB flash drives or CF cards".

External bitmap example:

mdadm --grow --bitmap=/boot/md2-raid5-bitmap /dev/md2
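As far as I know, the external bitmap file must live on a filesystem that is not hosted on the array it serves, and you have to point mdadm at it again if you ever assemble the array by hand. A rough sketch, with placeholder member devices:

# Hypothetical manual re-assembly of an array that uses the external bitmap file above.
# Replace /dev/sdXn and /dev/sdYn with your real member devices.
mdadm --assemble /dev/md2 --bitmap=/boot/md2-raid5-bitmap /dev/sdXn /dev/sdYn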

(Quoted from the above linked sites):"If the array has a write-intent bitmap, it is strongly recommended that you remove the bitmap before increasing the size of the array. Failure to observe this precaution can lead to the destruction of the array if the existing bitmap is insufficiently large, especially if the increased array size necessitates a change to the bitmap's chunksize".

How to remove a bitmap?

mdadm --grow /dev/mdX --bitmap=none
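A quick way to confirm the bitmap is really gone:

# No output here means no active bitmap on any array.
cat /proc/mdstat | grep bitmap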

Before following my procedure, I assume you don't have a bitmap (or if you do, you disabled it).

Preliminary checks: gathering info on the current status of your hdds.

My hdds were partitioned this way:

md0 = /boot   (sda1 + sdb1)
md1 = /       (sda2 + sdb2)
swap          (sda3, sdb3)
md2 = /var    (sda4 + sdb4)
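The listing below looks like the table sfdisk prints in cylinder units; assuming /dev/sda, you should be able to get an equivalent listing with the command below (on newer systems fdisk -l /dev/sda shows much the same information):

sfdisk -l /dev/sda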

Device      Boot    Start     End   #cyls     #blocks   Id  System
/dev/sda1   Active      0+     32      33-     265041   fd  Linux raid autodetect
/dev/sda2             33    1337    1305    10482412+   fd  Linux raid autodetect
/dev/sda3           1338    1598     261     2096482+   82  Linux swap / Solaris
/dev/sda4           1599    9729    8131    65312257+   fd  Linux raid autodetect

Take notes on how your hdds are partitioned (i.e.: how swap was defined, etc.).
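A simple way to keep those notes is to dump everything to plain files you can read back later (the paths here are just examples; keep them somewhere that is not on the disks you are about to replace):

# Save the exact partition table so you can refer to it when partitioning the new disks.
sfdisk -d /dev/sda > /root/sda-partition-table.txt
# Record swap devices, array membership and filesystem UUIDs for later reference.
cat /proc/swaps > /root/swaps.txt
mdadm --detail --scan > /root/mdadm-arrays.txt
blkid > /root/blkid.txt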
