Problems With Multiple MetaDB Partitions

UPDATE TO "Solaris Disk Partition Layout & Mirroring Scripts"

Several months ago, I tried to use my old mirroring scripts on a new Solaris 9 install. I found that the kernel would panic upon reboot because it was unable to mount /. I tried many things, including opening a support call with Sun. They reviewed my scripts and said that they should work, but despite repeated tries, they did not.

In the end, I created only one metadb partition instead of two, and found that the system would boot. I attributed this to a problem with the mirror disk, until it happened to me again this week. For some reason, the implementation of Disk Suite on Solaris 9 does not accept multiple metadb partitions.

Previously, in Solaris 8, I always created a total of four metadb partitions: two on each drive.

#!/bin/sh
#Mirrorme.sh
#Copy the partition table from the boot disk to the mirror disk
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
#Create three metadb replicas on each of the two metadb slices per disk
metadb -a -f -c3 /dev/dsk/c1t0d0s3 /dev/dsk/c1t1d0s3
metadb -a -f -c3 /dev/dsk/c1t0d0s4 /dev/dsk/c1t1d0s4

Currently, with Solaris 9, that method does not work, and results in a kernel panic. To resolve this issue, you must create only one metadb partition on each disk. I’ve been using s3 for this, although you could use any slice you wish.

#!/bin/sh
#Mirrorme.sh
#Copy the partition table from the boot disk to the mirror disk
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
#Solaris 9: put all the metadb replicas on a single slice per disk
metadb -a -f -c3 /dev/dsk/c1t0d0s3 /dev/dsk/c1t1d0s3

Aside from this change, the mirroring scripts continue to work. Please let me know if you find any other problems not mentioned here.
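Before rebooting, it's worth confirming that the replicas were created where you expect. A quick sanity check ("metadb -i" prints the replica list along with a legend explaining the status flags):

# metadb -i

You should see three replicas on c1t0d0s3 and three on c1t1d0s3, with no uppercase error flags in the status field.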

Copy a Solaris Boot Drive to a New Disk

If you've ever gone to mirror a system drive with Solstice Disk Suite, you know how frustrating it can be when you either don't have any more slices to use for your meta database partitions, or all the space on the disk has already been allocated to existing partitions. While Disk Suite only requires that one partition be reserved for its meta database information on boot drives, two are really suggested for redundancy purposes. In the example below, I found myself needing to mirror a system disk that had only one remaining partition, and no space left that could be used for the meta database.

While I could have taken a small amount of space from the swap partition and re-allocated it to a new meta database partition on slice 7, that solution would not have been elegant, and I would still have had only one meta database partition. As it stood, the system had the following filesystems on the following disk slices:

c1t0d0

Part  Tag
0     root
1     swap
2     backup
3     usr
4     usr/local
5     opt
6     var
7     unused

In order to bring the system into line with my standards and prepare it for proper mirroring, I would have to carve up another disk, and migrate the data to it.

Here is what the partition table on the new disk looked like:

c1t2d0
Current partition table (original):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 0 1088 1.50GB (1089/0/0) 3146121
1 swap wu 1089 6896 8.00GB (5808/0/0) 16779312
2 backup wu 0 24619 33.92GB (24620/0/0) 71127180
3 wm 6897 6967 100.16MB (71/0/0) 205119
4 wm 6968 7038 100.16MB (71/0/0) 205119
5 opt wm 7039 8853 2.50GB (1815/0/0) 5243535
6 usr wm 8854 12483 5.00GB (3630/0/0) 10487070
7 var wm 12484 24619 16.72GB (12136/0/0) 35060904

Now that everything is laid out, we can start moving all the data from c1t0d0 to c1t2d0, keeping in mind that we will be merging /usr/local onto /usr on the new system disk. Here we go:

Make a new filesystem for /:

# newfs /dev/rdsk/c1t2d0s0
newfs: /dev/rdsk/c1t2d0s0 last mounted as /
newfs: construct a new file system /dev/rdsk/c1t2d0s0: (y/n)? Y

Mount the new / filesystem as /mnt:
# mount -F ufs -o rw /dev/dsk/c1t2d0s0 /mnt
Move the data from c1t0d0s0 to c1t2d0s0:
# ufsdump 0f - / | ( cd /mnt ; ufsrestore xvf - )
Add links
Set directory mode, owner, and times.
set owner/mode for ‘.’? [yn] y
Directories already exist, set modes anyway? [yn] y
DUMP: 405886 blocks (198.19MB) on 1 volume at 406 KB/sec
DUMP: DUMP IS DONE

Unmount /mnt
# umount /mnt

That's the general idea. Now we just have to do the same thing for the other partitions, leaving out swap, backup, and our two meta database slices. Those slices (1, 2, 3, and 4) should be left alone for the time being, as they are never mounted as filesystems.

# newfs /dev/rdsk/c1t2d0s5
# mount -F ufs -o rw /dev/dsk/c1t2d0s5 /mnt
# ufsdump 0f - /opt | ( cd /mnt ; ufsrestore xvf - )
# umount /mnt
# newfs /dev/rdsk/c1t2d0s6
# mount -F ufs -o rw /dev/dsk/c1t2d0s6 /mnt
# ufsdump 0f - /usr | ( cd /mnt ; ufsrestore xvf - )
# umount /mnt
# newfs /dev/rdsk/c1t2d0s7
# mount -F ufs -o rw /dev/dsk/c1t2d0s7 /mnt
# ufsdump 0f - /var | ( cd /mnt ; ufsrestore xvf - )
# umount /mnt
Finally, the /usr/local partition, which gets merged into /usr on the new disk (note that we mount s6 again and restore into /mnt/local):
# mount -F ufs -o rw /dev/dsk/c1t2d0s6 /mnt
# ufsdump 0f - /usr/local | ( cd /mnt/local ; ufsrestore xvf - )
# umount /mnt
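If you have more slices than patience, the same newfs/mount/ufsdump/umount pattern can be driven by a small loop. This is just a sketch based on the layout above (the script name and the slice:filesystem pairs are mine, and the /usr/local merge still has to be done by hand as shown):

#!/bin/sh
#Copyslices.sh - repeat the copy pattern for each data slice.
#Each pair is new-disk-slice:old-filesystem, per the layout above.
for pair in s5:/opt s6:/usr s7:/var
do
    slice=`echo $pair | cut -d: -f1`
    fs=`echo $pair | cut -d: -f2`
    #Answer newfs's confirmation prompt, if it asks
    echo y | newfs /dev/rdsk/c1t2d0$slice
    mount -F ufs -o rw /dev/dsk/c1t2d0$slice /mnt
    ufsdump 0f - $fs | ( cd /mnt ; ufsrestore xvf - )
    umount /mnt
done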

Now that we have all the data moved, we still don't have a bootable disk. Since the whole idea here is to end up with a new bootable system disk, we have to install bootblocks onto it. This is done with the installboot command:

# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0

Now that we have the bootblocks needed to boot the Solaris kernel, the last thing we have to do is make sure our new vfstab file points to all the right partitions.

Mount the new / partition:
# mount -F ufs -o rw /dev/dsk/c1t2d0s0 /mnt
Edit the new vfstab file:
# vi /mnt/etc/vfstab
For the information given in this example, this file will contain the following entries:

#device         device          mount   FS      fsck    mount   mount
#to mount       to fsck         point   type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/dsk/c1t2d0s1       -       -       swap    -       no      -
/dev/dsk/c1t2d0s0       /dev/rdsk/c1t2d0s0      /       ufs     1       no      -
/dev/dsk/c1t2d0s6       /dev/rdsk/c1t2d0s6      /usr    ufs     1       no      -
/dev/dsk/c1t2d0s7       /dev/rdsk/c1t2d0s7      /var    ufs     1       no      -
/dev/dsk/c1t2d0s5       /dev/rdsk/c1t2d0s5      /opt    ufs     2       yes     -
swap    -       /tmp    tmpfs   -       yes     -

Notice that the target number in these entries is still 2; it will not change to 0 on its own when we swap the disks and boot from the new one. To resolve this, it is strongly suggested that you rebuild the Solaris device tree with a reconfiguration boot and change the vfstab file to reflect the new disk position.

That is everything! We now shut down the system, swap the positions of c1t0d0 and c1t2d0, and reboot off our new system disk. We are then ready to move on to the mirroring process.
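In commands, that final shuffle looks roughly like this (a sketch; the reconfiguration boot performs the device tree rebuild suggested above):

# init 0
(power off, physically swap c1t0d0 and c1t2d0, power back on)
ok boot -r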

Migrating Veritas Volume Manager disk groups between servers

Never having been to Veritas Volume Manager training, I was feeling quite a bit of stress when my manager asked me to do a box upgrade on our most critical server. I remember wondering how in the world I was going to figure out the details of moving our Volume Manager configuration over to the new server. What was more, we had been taking care of ALL RAID, including the boot drives, with Volume Manager, and I wanted to start using Disk Suite for all of our non-fiber storage. Well, I figured it out, and it really is not all that hard. The most important thing is that you keep your external storage all in one disk group. Read more for details on how I did it.

The first thing to know is that you have to have at least one disk in the root disk group (rootdg). Most of our servers have four internal drives, so I mirror the first two with Disk Suite, and let Volume Manager take care of the other two. At any rate, you have to run vxinstall on the new server and add at least one drive to the root disk group; this also sets up Volume Manager in general. Once you've gone through all the hoops of vxinstall, reboot the new server.

Now, on the old server, you have to "deport" the disk group you want to attach to the new server. We'll pretend that /u01 and /u02 are the mount points of the volumes in your disk group. Here is how to "deport" them:

First, umount the filesystems.
umount /u01
umount /u02

Next, display the disks and disk groups just to make sure you are working with the disks you think you are, and that they are in the right disk group. Both "vxprint -ht" and "vxdisk list" will do this, but I like vxdisk more because its output is simpler to read.
vxdisk list

So long as everything is as it should be, you can now go about “deporting” the disk group.
vxdg deport diskgroup

Now unplug the fiber and plug it into your new server. Do a "reboot -- -r" (or a "boot -r" at the "ok" prompt), and check to make sure the external storage can be seen in "format".
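A quick, non-interactive way to get that disk list is to feed format an EOF, so it prints the available disks and exits instead of sitting at its prompt (just a convenience trick):

# format < /dev/null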

So long as you can see it in format, it's just a simple matter of importing the old disk group into the new server's Volume Manager configuration.

vxdg import diskgroup

Now all you have to do is reboot your system, and the volumes should be ready to go.
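If you would rather not reboot, you can start the volumes by hand and mount them directly. A minimal sketch, assuming a disk group named datadg with volumes u01 and u02 on VxFS (your names and filesystem type will differ; use ufs if that's what the volumes were built with):

# vxvol -g datadg startall
# fsck -F vxfs /dev/vx/rdsk/datadg/u01
# fsck -F vxfs /dev/vx/rdsk/datadg/u02
# mount -F vxfs /dev/vx/dsk/datadg/u01 /u01
# mount -F vxfs /dev/vx/dsk/datadg/u02 /u02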

Solaris Disk Partition Layout & Mirroring Scripts

UPDATED "Problems With Multiple MetaDB Partitions"

For some reason, it always seems that no two servers in the world have the same disk partition layout. For the longest time, I’d get a server in, lay out the partitions in a way that seemed to make sense to me at the time, and move on with my life.

Anyhow, I finally decided to come up with a standard layout based on the spiralbound story “Mirroring a boot drive using Solstice Disk Suite”, which has been nice since I can now script the mirroring process. Read more to see the layout, and the scripts I use to mirror the disks. Hopefully it will make sense in your machine room, but if not, change it and tell me why. Please do not run these scripts unless you know EXACTLY what you are doing, and EXACTLY what each and every command in them is doing. You could very easily overwrite data on your drives, and ruin your whole day.

Partition  Mount Point  Size in MB  Notes
0          /            1024
1          swap         12288
2          backup
3                       100         Meta DB
4                       100         Meta DB
5          /usr         2048
6          /var         10240
7          /opt         Remainder

As you can see, I like to leave partitions 3 and 4 (but see the update in "Problems With Multiple MetaDB Partitions") reserved for the meta databases required by Solstice Disk Suite. I also like to get the swap partition way up near the front of the disk so that I can, at least in theory, get a little more speed out of my virtual memory.

I like to mirror my system disks, so I wrote a script to do it for me. You may have to change things around a little depending on which disks are on which controller, but that's usually a lot easier than typing everything in by hand and keeping it all lined up in your head. Edit as needed.

————-SNIP————
#!/bin/sh
#Mirrorme.sh
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
metadb -a -f -c3 /dev/dsk/c1t0d0s3 /dev/dsk/c1t1d0s3
metadb -a -f -c3 /dev/dsk/c1t0d0s4 /dev/dsk/c1t1d0s4 #Omit this line on Solaris 9 (UPDATED "Problems With Multiple MetaDB Partitions")

# / Filesystem
metainit -f d10 1 1 c1t0d0s0
metainit d20 1 1 c1t1d0s0
metainit d30 -m d10
metaroot d30

# Swap Filesystem
metainit -f d11 1 1 c1t0d0s1
metainit d21 1 1 c1t1d0s1
metainit d31 -m d11

# /usr filesystem:
metainit -f d12 1 1 c1t0d0s5
metainit d22 1 1 c1t1d0s5
metainit d32 -m d12

# /var filesystem:
metainit -f d13 1 1 c1t0d0s6
metainit d23 1 1 c1t1d0s6
metainit d33 -m d13

# /opt filesystem:
metainit -f d14 1 1 c1t0d0s7
metainit d24 1 1 c1t1d0s7
metainit d34 -m d14
metainit hsp001

echo "Do a lockfs -fa, and then an init 6"
————-SNIP————
If you are unsure of what these commands are doing, please read the man pages on them. Change the drive and controller numbers based on your server, run this script, and reboot the system.
After the system has come back up, it’s time to attach the mirror partitions to the metadevices. This next script will take care of that for you.

————-SNIP————
#!/bin/sh
#Mirrormemore.sh
metattach d30 d20
metattach d31 d21
metattach d32 d22
metattach d33 d23
metattach d34 d24

echo "Don't forget to make the new mirror bootable"
echo " # installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t1d0s0"
echo "Make darn sure this is pointed towards the mirror drive!"
————-SNIP————

After running this script, the drive lights will go crazy for a while as everything gets all synced up. While the system is mirroring the drives, you can go in and edit your /etc/vfstab file. How to do this is well covered in my story about mirroring Solaris boot drives.
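For reference, the non-root entries in /etc/vfstab end up pointing at the metadevices instead of the raw slices. Based on the metadevice names in the scripts above (metaroot already rewrites the / line for you), the edited lines would look like this:

/dev/md/dsk/d31 -                 -     swap  -  no  -
/dev/md/dsk/d30 /dev/md/rdsk/d30  /     ufs   1  no  -
/dev/md/dsk/d32 /dev/md/rdsk/d32  /usr  ufs   1  no  -
/dev/md/dsk/d33 /dev/md/rdsk/d33  /var  ufs   1  no  -
/dev/md/dsk/d34 /dev/md/rdsk/d34  /opt  ufs   2  yes -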