RHEL System Configuration Changes for Oracle 10G

Below is a list of RHEL system configuration changes that Oracle 10G requires before it is installed.

First, check the following kernel parameters using the commands below:

/sbin/sysctl -a | grep kernel.shmall
/sbin/sysctl -a | grep kernel.shmmax
/sbin/sysctl -a | grep kernel.shmmni
/sbin/sysctl -a | grep kernel.sem
/sbin/sysctl -a | grep fs.file-max
/sbin/sysctl -a | grep net.ipv4.ip_local_port_range
/sbin/sysctl -a | grep net.core.rmem_default
/sbin/sysctl -a | grep net.core.rmem_max
/sbin/sysctl -a | grep net.core.wmem_default
/sbin/sysctl -a | grep net.core.wmem_max
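
As a shortcut, sysctl will also accept all of the parameter names in a single call, which saves running grep ten times:

/sbin/sysctl kernel.shmall kernel.shmmax kernel.shmmni kernel.sem fs.file-max net.ipv4.ip_local_port_range net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max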

If any parameters are lower than the examples below, you will have to increase them by editing the “/etc/sysctl.conf” file and adding the appropriate lines as shown below. If the current value is higher, leave it as is.

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
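
Once the file is saved, you can load the new values immediately, without waiting for a reboot, by running:

/sbin/sysctl -p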

Next, edit your “/etc/security/limits.conf” file, adding the following lines:

oracle          soft    nproc           2047
oracle          hard    nproc           16384
oracle          soft    nofile          1024
oracle          hard    nofile          65536
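
These limits only take effect at the oracle user's next login, and only once the pam_limits step below is also in place. Assuming the oracle account uses a bash shell, a quick sanity check looks like this:

su - oracle
ulimit -Sn    # soft nofile, should report 1024
ulimit -Hn    # hard nofile, should report 65536
ulimit -Su    # soft nproc, should report 2047
ulimit -Hu    # hard nproc, should report 16384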

If your current “/etc/pam.d/login” file does not already contain the following line, add it:

session    required     pam_limits.so
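
A quick grep will tell you whether it is already there:

grep pam_limits /etc/pam.d/login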

Finally, add the following lines to your "/etc/profile" file:

#Tweaks for Oracle
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

These are just the basic steps I take. See the “Oracle Database Installation Guide” for more complete instructions.

Making RHEL 3 See Multiple LUNs

For some reason RHEL 3 comes out of the box configured to see only the first LUN on a SCSI channel. This is usually not a problem, since the first LUN is often all you need, but in some cases you will have to configure the SCSI module to see multiple LUNs.

In this case we are using an Adaptec DuraStor 6200S, which is set up to present the RAID controller as LUN 00 and the actual RAID array as LUN 01. Without any modifications to the system, we plug it in and, after a reboot, check /proc/scsi/scsi. We can see the RAID controller, but since we can only see the first LUN on the channel, we never get to the array:

Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: Adaptec Model: DuraStor 6200S Rev: V100
Type: Processor ANSI SCSI revision: 03

The actual array would show up as “Channel: 00 Id: 00 Lun: 01”, but it’s not there. To resolve this, we first have to edit “/etc/modules.conf” and add the following line:

options scsi_mod max_scsi_luns=128 scsi_allow_ghost_devices=1

In our case, modules.conf looks like this after the modification:

alias eth0 e1000
alias eth1 e1000
alias scsi_hostadapter megaraid2
alias usb-controller usb-uhci
alias usb-controller1 ehci-hcd
alias scsi_hostadapter1 aic7xxx
options scsi_mod max_scsi_luns=128 scsi_allow_ghost_devices=1

Next we have to build a new initrd image. This is done with the “mkinitrd” command.

WARNING: MAKE DARN SURE you build this against the right kernel (the kernel you actually want to use). If you are going to replace your current initrd image with the new one, make a back-up copy first. The -f option forces mkinitrd to overwrite an existing initrd image file.

cp /boot/initrd-2.4.21-47.ELsmp.img /boot/initrd-2.4.21-47.ELsmp.img.bak
mkinitrd -f -v /boot/initrd-2.4.21-47.ELsmp.img 2.4.21-47.ELsmp
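
If you are building against the kernel you are currently running, one way to avoid a typo in the version string is to let uname -r fill it in for you:

cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)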

Once this is done, you can reboot your machine and check “/proc/scsi/scsi” to confirm that it sees the second LUN. You should see something like this:

Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: Adaptec Model: DuraStor 6200S Rev: V100
Type: Processor ANSI SCSI revision: 03

Host: scsi2 Channel: 00 Id: 00 Lun: 01
Vendor: Adaptec Model: DuraStor 6200S Rev: V100
Type: Direct-Access ANSI SCSI revision: 03
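
As a side note, the 2.4 kernel also lets you probe a single device on the fly by writing to /proc/scsi/scsi, which can be handy for testing before you commit to the initrd change. The four numbers are host, channel, id, and lun; the values below match this example, so adjust them for your hardware:

echo "scsi add-single-device 2 0 0 1" > /proc/scsi/scsi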

Hat Tip: Alan Baker for help figuring this out.
UPDATE: RHEL 4 does not have this problem.

Increase The Max Number of Processes Per UID in Solaris

If you are running Solaris 8 or 9 and getting strange errors like:

“VFork failed”

when you try to run commands, or

“Mar 31 10:40:32 sauron genunix: [ID 748887 kern.notice] NOTICE: out of per-user processes for uid 1234”

from the dmesg command output, the most likely cause is that you have hit the maximum number of processes a single user is allowed to run on your server.

To be sure, you can run:

ps -ef | grep <uid> | wc -l

and compare that number against the v.v_maxup value from the command:

sysdef -i

There will be a lot of output from “sysdef -i”, but you are looking for “v.v_proc” and “v.v_maxup”. “v.v_proc” is the maximum total number of processes on the system, and “v.v_maxup” is the maximum number of processes per user, which is “v.v_proc” minus the number reserved for root.

You should see something like this:

4058 maximum number of processes (v.v_proc)
99 maximum global priority in sys class (MAXCLSYSPRI)
4053 maximum processes per user id (v.v_maxup)
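
To pull just those lines out of all that output, a grep along these lines works:

sysdef -i | grep 'v.v_'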

Since v.v_maxup in this example is 4053, if that is close to what you are seeing in the output of:

ps -ef | grep <uid> | wc -l

then you will need to increase the values of v.v_proc and v.v_maxup. The good news is that it’s easy to do. The bad news is that the setting you need to change is NOT v.v_proc or v.v_maxup; it is, instead, the “maxusers” value. The other bad news is that you can’t change it dynamically.

Here is what you will most likely want to add to your /etc/system file. Tweak the setting as needed:

set maxusers = <This should be something like the amount of available physical memory in MB>
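
If you are not sure how much physical memory the machine has, prtconf will tell you (the grep just trims the output down to the memory line):

prtconf | grep Memory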

Then reboot the system, and your maximum number of processes per user should now be set to something much more reasonable than the Solaris defaults.

As you can see, setting maxusers controls both max_nprocs and maxuprc. The formulas are:
max_nprocs = 10 + (16 x maxusers)
maxuprc = max_nprocs - reserved_procs (default is 5)
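
For example, on a 2GB machine you might set maxusers to 2048 (which, as far as I know, is also the ceiling Solaris 8/9 will accept), and the formulas work out like this (the * lines are /etc/system comments):

set maxusers = 2048
* max_nprocs = 10 + (16 x 2048) = 32778
* maxuprc    = 32778 - 5        = 32773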

As a result, usually only maxusers is tuned.

NetBackup, Solaris 9, and LTO2 drives

If you are using Veritas NetBackup on Solaris 9 with LTO Ultrium-2 tape drives, you will be constantly annoyed by slow tape write performance unless you use blocks of at least 256KB.

To resolve this, the first thing you’ll want to do is increase both the number of buffers and the buffer size on the media manager host:

Create the file /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS and put 262144 on the first line.
Create the file /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS and put 16 on the first line.

These numbers should be the only thing in these two files.
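
A quick way to create both files in one shot, using the paths given above:

echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
echo 16 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS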

Next, since Solaris still has insanely low limits on its default shared memory subsystem, we have to raise them so that the larger NetBackup buffers do not exhaust them. We do this by editing the /etc/system file and adding the following lines:

set msgsys:msginfo_msgmni=1024
set msgsys:msginfo_msgtql=1024
set semsys:seminfo_semmni=2048
set semsys:seminfo_semmns=2048
set semsys:seminfo_semopm=128
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmni=512

It is now necessary to reboot the system for the kernel parameters to become active.
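
After the reboot, you can spot-check that the new values took effect; sysdef prints the IPC tunables near the end of its output, so something like this should show your new shared memory limits:

sysdef | grep -i shm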

You should now notice a dramatic increase in tape write speed during your backups.