Useful MySQL Commands

There are a lot of useful MySQL commands that I either don’t run often enough to remember or just end up using some GUI to accomplish. Nothing against Navicat, but what kind of sysadmin would I be if I couldn’t do it at the command line as well? Anyhow, I had to do a bunch of this kind of work lately, so I thought I would write the more common commands down here so I would have them to reference.

It is generally a good idea to firewall off your MySQL port (3306) unless you are on a secure network. The downside is that this prevents you from using GUI tools like Navicat to connect to your database server. No problem: just use this command to tunnel the connection over SSH. After doing this, you can connect to your database on localhost port 8888:

ssh -fNg -L 8888:127.0.0.1:3306 login@mysql-server
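Once the tunnel is up, and before pointing Navicat at it, it can be handy to confirm that the local port is actually listening. This sketch uses bash’s built-in /dev/tcp redirection, so it needs no netcat; the function name is my own invention:

```shell
# Check whether anything is listening on host:port, using bash's /dev/tcp
# feature (bash-only; not plain sh). Prints "open" or "closed".
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo open || echo closed
}

# The local end of the tunnel from the command above:
port_open 127.0.0.1 8888
```

If this prints closed, the tunnel never came up and there is no point launching the GUI.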

Once you have installed a new instance of MySQL, you have to change the root password:

# mysql -u root
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('newpwd');
mysql> SET PASSWORD FOR 'root'@'yourhostname' = PASSWORD('newpwd');
mysql> exit

Depending on the way you handle your MySQL installation, you will most likely have to run the mysql_upgrade script after upgrades to use all the new features:

mysql_upgrade -u root --password=yourpassword

Do a thorough check to see if your tables are corrupted:

mysqlcheck -u root -p --check --extended --all-databases

Repair any corruption if it exists:

mysqlcheck -u root -p --repair --all-databases

Look at all your tables and optimize them if necessary:

mysqlcheck -u root -p --optimize --all-databases
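The three mysqlcheck passes above can be chained into a single cron-able loop. Here is a sketch, shown as a dry run that only prints each command; drop the leading echo to execute for real (and consider keeping the password in ~/.my.cnf rather than typing -p interactively from cron):

```shell
# Run check, repair, then optimize, in that order. Dry run: the "echo"
# prints each command instead of executing it.
for op in check repair optimize; do
  echo mysqlcheck -u root -p "--$op" --all-databases
done
```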

MySQL Connection Over an SSH Tunnel

Rather than leave MySQL open to the world, consider firewalling it off and tunneling a connection over ssh.

This command will allow you to connect to your MySQL server on localhost:7777.

ssh -L 7777:127.0.0.1:3306 login@mysql-server

Horde / IMP on RHEL 4 From RPM HOWTO

Whenever you install applications and services on registered RHEL servers, it’s always nice to use the RPMs, because up2date will keep everything current for you. Managing upgrades gets a whole lot easier when you can bring your system up to date with one simple command. Because of this, I decided I would use as many RPMs as I could when I set up our latest Horde / IMP installation.

Unfortunately, RedHat does not supply RPMs for the Horde applications, but luckily CentOS does, so you should be able to download them from the CentOS mirrors. Get the latest versions, which at the time of this writing were horde-3.1.3-1 and imp-h3-4.1.3-1.

Don’t install them yet, though, because Horde and IMP have always had a lot of dependencies that must be installed and enabled first. Installing the following RPMs should take care of them.

  • mysql-4.1.20-1.RHEL4.1.i386.rpm
  • mysqlclient10-3.23.58-4.RHEL4.1.i386.rpm
  • mysqlclient10-devel-3.23.58-4.RHEL4.1.i386.rpm
  • mysql-devel-4.1.20-1.RHEL4.1.i386.rpm
  • mysql-server-4.1.20-1.RHEL4.1.i386.rpm
  • perl-DBD-MySQL-2.9004-3.1.i386.rpm
  • php-4.3.9-3.15.i386.rpm
  • php-devel-4.3.9-3.15.i386.rpm
  • php-domxml-4.3.9-3.15.i386.rpm
  • php-imap-4.3.9-3.15.i386.rpm
  • php-ldap-4.3.9-3.15.i386.rpm
  • php-mysql-4.3.9-3.15.i386.rpm
  • php-pear-4.3.9-3.15.i386.rpm

Assuming you will want up2date to handle upgrades of these packages, it is very important that you either use up2date to install them or download them from the correct channel at the RedHat website. You could also simply get them from the CD distribution that you used to install the system itself.

Once PEAR is installed, you will have to upgrade it and install the PEAR::Log module:

[root@server]# pear upgrade -a PEAR-1.3.6
[root@server]# pear upgrade PEAR
[root@server]# pear install Log

Ok, now let’s make sure the web server is configured to start when the system comes up:

[root@server /]# /sbin/chkconfig --list httpd

You should see this:

httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

But if you see 5:off, simply run:

[root@server /]# /sbin/chkconfig httpd on

Now we enable and start up our new MySQL database server:

[root@server /]# /sbin/chkconfig mysqld on
[root@server /]# /sbin/service mysqld start

And we’re ready to install Horde and IMP. Install the following RPMs, which will put everything in /usr/share/horde and create a file called horde.conf in /etc/httpd/conf.d/:

  • horde-3.1.3-1.c4.noarch.rpm
  • imp-h3-4.1.3-1.c4.noarch.rpm

This will install the Horde framework in /usr/share/horde, with IMP beneath it.

Finally, we start or restart apache:

[root@server /]# /sbin/service httpd start

Grab a browser and go to your server’s Horde URL to proceed with the Horde and IMP configuration.

Why Modern RAID 5 is Ideal for Oracle Databases

There is a school of thought among Oracle DBAs that databases should never be installed on disks configured into a RAID 5 array. The argument goes that since Oracle accesses and writes to random points within relatively large files, the overhead of constantly calculating block-level parity on these files is substantial, resulting in serious performance degradation. They suggest that RAID 1 (mirroring) is the ideal disk configuration, since no parity needs to be calculated, and Oracle is more than happy to divide up its database over many smaller mount points.
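For reference, the parity in question is just a bytewise XOR across the data blocks in each stripe, which is also why a single failed disk can be rebuilt from the survivors. A toy single-byte illustration (the values are arbitrary):

```shell
# RAID 5 parity in miniature: parity = XOR of the data blocks, and any one
# lost block equals the XOR of everything that remains.
a=$((0xA5)); b=$((0x3C)); c=$((0x0F))   # three "data blocks"
parity=$((a ^ b ^ c))                   # what the controller stores
rebuilt=$((a ^ c ^ parity))             # recover b after "losing" its disk
printf 'parity=%02X rebuilt=%02X\n' "$parity" "$rebuilt"
# prints: parity=96 rebuilt=3C
```

It is doing this XOR for every write, on every stripe, that the dedicated controller offloads from the host CPU.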

This way of thinking has largely been correct over the years because most systems have traditionally used software RAID. This meant that the CPU of the server itself had the job of doing all those parity calculations, and it really did slow down both the server and the disk when RAID 5 configurations were used. Oracle, in particular, had a hard time with these configurations for exactly the reasons the DBAs point to.

In many cases, software RAID is still used, and to be sure, it is wholly inappropriate to deploy RAID 5 in these environments. However, it is increasingly common to find IT departments using a SAN-type architecture where the RAID type and configuration are invisible to the host operating system. In these environments, the disk array has a dedicated controller that is singly tasked with handling all read, write, and parity operations. The RAID controller is no longer software running on a generic CPU, but rather firmware that is optimized to handle parity calculations. The result is a system where parity is calculated so quickly by the dedicated controller that differences in speed between RAID 1 and RAID 5 should be virtually nonexistent.

To prove this, I carved up our new InfoTrend EonStor A12F-G2221 into three arrays – a RAID 5, a RAID 1, and a RAID 10. I then set out to run some benchmarks on these different arrays to see what, if any, the differences would be.

The hardware used was as follows:

  • The RAID 5 LUN consisted of 4 drives
  • The RAID 1 LUN consisted of 2 drives
  • The RAID 10 LUN consisted of 4 drives

I then identified the iozone tests that most accurately simulated Oracle disk activity. What I really wanted to do was to simulate select and update queries on various sized files and see how the different RAID types held up under the load. To do this, I ran iozone, a well-respected benchmark utility, with the following arguments:

/opt/iozone/bin/iozone -Ra -g 2G -b /home/sysop/new/raid5-2G-1.wks

This put the disks through their paces, as it ran the iozone tests in automatic mode on a 2 GB file, but in the end I was interested in analyzing the following tests, because they were the ones our DBA team suggested would most closely represent database activity.

Random Read (select queries)

This test measures the performance of reading a file with accesses made to random locations within the file. Performance under this type of activity can be affected by several factors, such as the size of the operating system’s cache, the number of disks, and seek latencies.

Random Write (update queries)

This test measures the performance of writing a file with accesses made to random locations within the file. Again, performance under this type of activity can be affected by several factors, such as the size of the operating system’s cache, the number of disks, and seek latencies.

Strided Read (more complex select queries)

This test measures the performance of reading a file with a strided access pattern. For example: read 4 Kbytes at offset zero, seek 200 Kbytes, read another 4 Kbytes, seek 200 Kbytes, and so on, repeating for the length of the file.

I ran several instances of the same tests using the same command line to ensure that there were no anomalies, and the machine was doing nothing else during the tests besides running the host OS. The results were pretty much as I expected: I found little to no variation between the RAID types on this disk subsystem.

Random Read Tests:

In this test, there seems to be the slightest advantage to the mirror-type RAID arrays when it comes to very small files. This, I suspect, can be attributed to actual drive-head latency, as in RAID 5 volumes the correct block needs to be found on a larger number of disks. This advantage quickly falls off, however, as the file size grows, meaning that this slight advantage would not be seen in an Oracle database.

Random Write Tests:

In this test, both RAID 5 and RAID 10 seem to hold a slight advantage over the direct mirror. This, I would imagine, can be attributed to the fact that the writes are happening over a larger number of spindles. It also indicates that the controller is calculating parity faster than the 2 Gb connection to the disk subsystem can move data. Again, the variation is incredibly small, so there is no arguable performance advantage to using one type of RAID over another when using a hardware controller.

Stride Read Tests:

Here again, we see no real advantage to one RAID type over any other. It could be said that the RAID 10 volume held up ever so slightly better on this test, but any edge is so slight that it is hard to imagine how it could translate into a noticeable performance gain in an Oracle database.

In the end, these tests confirmed my suspicion that hardware RAID controllers have become so efficient and fast that it no longer makes any real difference what type of RAID you use for your Oracle database. Largely gone are the days when your disk space and RAID volumes were inexorably tied to the server itself. So long as you are using hardware RAID, and the LUNs are abstracted from your operating system, you can feel free to make the most of your storage dollar by using RAID 5 in your production database environments.

Picking a Multiuser Blogging System

I’m a blogger. I’m also a systems administrator at a University. Thus, when the University decided to charter a project to offer each member of the institution a blog, I was selected to sit on the committee. We boiled all of the software choices down to Drupal, Movable Type, and WordPress MU.

In my evaluation of these solutions, the software was ranked on a five-point scale against the following requirements, which we had decided were important to us. I’ve also included my personal rankings. In many cases the software received a lower ranking on a feature because it was not customizable by the individual user.

Requirement                      Movable Type    WordPress MU
Supports XMLRPC                        5               5
Comment SPAM blocking                  4               5
Image support                          5               5
RSS feed support                       4               5
URL rewriting                          5               5
Hosted here                            5               5
Single installation                    5               5
Single database                        5               5
Single table structure                 4               0
MySQL or Oracle                        5               5
Integrates with IDM (ex. CAS)          0               0
Quota on file uploads                  0               5
Prefer PHP                             1               5
Ease of upgrade                        4               5

Drupal: 88
Movable Type: 90
WordPress MU: 100

Drupal came in last not because it is unable to meet the requirements we listed, but because it is not as intuitive as the other solutions, and there are many areas where the user cannot be granted any granular control over the blog. There is, for example, no way to allow the individual user to manage comment spam settings; the administrator chooses settings that are then applied globally to every user on the system.

Movable Type evaluated quite well. The user interface is very nice, and the entire software package is very feature-rich from a user-experience standpoint. It meets most of our predetermined criteria, but there are no built-in file upload quotas, and the application is written in Perl. Also of possible concern is the fact that, while content lives in the database, it is delivered via static pages. This means that a site (all static pages) must be rebuilt whenever significant changes are made, such as changing a theme. It is anyone’s guess how this static / dynamic model will scale to a very large number of users. Also of note is the fact that MT is a pay-for product. Like any proprietary software solution, this can be both a benefit and a drawback.

WordPress MU meets or exceeds all of our predetermined criteria with one major exception: the database does not have a unified table structure, meaning each blog requires its own set of database tables. It is largely unknown how this model will scale to a large number of users, though the software has already been rolled out successfully to a large and growing user base elsewhere. Should we decide on WPMU, it may be worth discussing the implementation with the WPMU team. The user interface is both elegant and intuitive, and the user retains maximum control over most of the unique blog settings. It is, however, worth mentioning that WPMU has adopted the Web 2.0 habit of not versioning its releases. Upgrades are downloaded via a nightly-builds directory, which means releases may not be as well tested as with a more traditional release strategy.
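To put that scaling concern in concrete terms: WPMU creates a separate set of tables for every blog (wp_&lt;id&gt;_posts, wp_&lt;id&gt;_options, and so on). The eight-tables-per-blog figure below is my own rough estimate of WPMU’s per-blog schema, not a number from the evaluation:

```shell
# Back-of-the-envelope table count for a campus-wide rollout. The
# tables-per-blog figure is an assumed estimate, not a measured value.
blogs=30000
tables_per_blog=8
echo "$((blogs * tables_per_blog)) tables"
# prints: 240000 tables
```

A quarter of a million tables in one database is exactly the kind of number worth raising with the WPMU team.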

For my part, even though I scored it slightly higher, I have no real preference for WordPress MU over Movable Type. Both are great blogging solutions, and I feel that we could find a way around MT’s lack of file upload quotas. I do, however, prefer either of these two solutions over Drupal because of issues surrounding usability and customization.

In the end, the committee agreed that WordPress MU was the winner. We will now go on to write the CAS authentication module and see if we can get around the requirement that usernames be longer than three characters. We are also hoping to get in touch with the WPMU team to discuss how the non-unified table structure will scale to 30,000+ users.

WordPress Backup Script

I wrote this little script the other day to back up my WordPress install. Because I use Navicat, I had always been pretty good about backing up the database, but I didn’t back up the install base nearly as often as I should have. I’m sure it won’t be useful for everyone, because it requires access to the command line and mysqldump, but it’s nice to know that my blog is getting backed up.

It’s really just a simple shell script that is executed nightly by cron. You can set up the backup directories in any way you like, but if you store your database backups in a subdirectory of the WordPress install, make darn sure that the directory is not readable by the web server.

The example below assumes the following:

  • WordPress (your web root) is: /webserver/wordpress
  • Your dumps directory is: /home/backups/wp-backups
  • The name of your WordPress database is wpdata
  • Your WordPress database user is: dbuser
  • Your WordPress database password is: dbpassword

Remember to change these variables for your install.

#!/bin/bash
Date=`date "+%Y-%m-%d"`
echo "Creating Database Backup"
mysqldump -u dbuser -pdbpassword wpdata | gzip > /home/backups/wp-backups/wordpress-$Date.sql.gz
echo "Done"
echo "Creating Filesystem Backup"
cd /webserver/
/usr/bin/tar -czf /home/backups/wp-backups/wordpress-$Date.tgz wordpress
echo "Done"
echo "Backup Complete"

Each time the script is run, it will create two timestamped files in /home/backups/wp-backups. The first one: wordpress-TIMESTAMP.sql.gz is a compressed export of your database. The second file: wordpress-TIMESTAMP.tgz is a compressed tar archive of your WordPress install.
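One thing the script does not do is expire old backups, so the dumps directory will grow without bound. Here is a possible addition to the end of the same cron script; the 14-day retention window is an arbitrary choice of mine:

```shell
# Delete timestamped backup files older than a retention window (in days).
# The 14-day default is an assumption; adjust to taste.
prune_backups() {
  local dir="$1" days="${2:-14}"
  [ -d "$dir" ] || return 0
  find "$dir" -name 'wordpress-*' -type f -mtime +"$days" -delete
}

prune_backups /home/backups/wp-backups 14
```

Because both the database dump and the tar archive match the wordpress-* pattern, one call prunes both.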

What’s up with the new CMS?

There are quite a few reasons I decided to move away from PhPNuke, but the biggest of them all was the fact that it is simply FULL of security holes. Day after day I read about SQL injection vulnerabilities that would allow a user to gain administrative access to the site and go at it with a hatchet. I was simply tired of always worrying that someone would hack my site and leave me with a big headache.

I was also getting increasingly frustrated by the fact that PhPNuke never supported pingbacks, trackbacks, or xmlrpc. I read something months ago about someone developing trackbacks for PhPNuke, but it would seem he went the way of Jimmy Hoffa, and nobody ever heard from him again. As time went on, it became more and more clear that PhPNuke needed to be abandoned for a better product, and the only question that remained was which product to choose.

I’ve always liked Mambo, but while it is much more secure than PhPNuke, it still lacks pingbacks, trackbacks and xmlrpc. Drupal is a very nice product that does everything I wanted it to do and more. There is even a script that will migrate all the users and data from PhPNuke into Drupal, but in the end, I was just not happy with the look and feel of the site under Drupal.

Casey had recently moved his site from pMachine to WordPress, so once I confirmed that it had Gallery2 integration, I decided to take a look and see what he liked so much about it. He had written a really cool statistics plugin called bstats, and the CMS presented things very cleanly. Overall, I’m pretty happy with the software. It does everything I want it to, and it’s quite nice to have the option of managing my stories through xmlrpc with Ecto.

Casey Bisson’s bstats plugin
WordPress Gallery2 integration