Tip: Storing MD5 Values (and other string/binary representations)

A common occurrence I have noticed in MySQL apps is that MD5 values are stored as 32-byte values rather than 16. Just to ‘rehash’: an MD5 digest is a 16-byte value, typically used as a unique fixed-length signature of a string, useful for identifying unique strings or one-way hashing of passwords. The binary representation takes 16 bytes, while the human-readable hexadecimal version takes twice as many.

The same goes for any of the other hashing techniques. They tend to output a friendly hex format, which is useful in a number of cases, such as in JavaScript or within a delimited format like CSV or TSV (raw binary bytes would mess up the delimiting of the data). When you’re storing these values, though, it usually makes sense to keep them in their shorter binary representation.
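A minimal sketch of the idea (the table and column names here are illustrative): store the raw digest and convert at the edges with MySQL’s UNHEX() and HEX().

    -- store the 16-byte digest, not the 32-character hex string
    CREATE TABLE users (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        pass_md5 BINARY(16) NOT NULL
    );
    INSERT INTO users (pass_md5) VALUES (UNHEX(MD5('secret')));
    -- convert back to the friendly hex form when needed
    SELECT id, HEX(pass_md5) FROM users;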

Another common example is IP addresses: I often see VARCHAR(16) used for IPv4 addresses. Perhaps when IPv6 is more commonplace we will see VARCHAR(64) instead. IPv4 addresses are 32-bit values and can be stored as an UNSIGNED INT (4 bytes), while IPv6 addresses are 128-bit. There isn’t a native 16-byte integer type in MySQL, so a BINARY(16) or two UNSIGNED BIGINT fields would do, though perhaps software will address this as IPv6 gains adoption.
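For IPv4, MySQL even ships the conversion functions, so something like this works out of the box (table name illustrative):

    -- 4 bytes per address instead of up to 15 characters
    CREATE TABLE visits (ip INT UNSIGNED NOT NULL);
    INSERT INTO visits VALUES (INET_ATON('192.168.0.1'));
    SELECT INET_NTOA(ip) FROM visits;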

When doing lookups on these kinds of fields, you want them as small as possible so that they can fit neatly into indexes and less processing time is spent evaluating them.

The following is a simple test comparing the lookup speed of a CHAR(32) MD5 column against a BINARY(16) one.
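A sketch of such a test (the table names and the doubling-insert population method are illustrative choices):

    -- one table per representation
    CREATE TABLE md5_char (hash CHAR(32)   NOT NULL, PRIMARY KEY (hash));
    CREATE TABLE md5_bin  (hash BINARY(16) NOT NULL, PRIMARY KEY (hash));

    -- seed one zero-padded value, then repeat the doubling insert below
    -- about 20 times to reach ~2^20 rows
    INSERT INTO md5_char VALUES (LPAD(SUBSTRING(MD5(RAND()), 1, 12), 32, '0'));
    INSERT IGNORE INTO md5_char
        SELECT LPAD(SUBSTRING(MD5(RAND()), 1, 12), 32, '0') FROM md5_char;

    -- mirror the data into the binary table
    INSERT INTO md5_bin SELECT UNHEX(hash) FROM md5_char;

    -- one indexed lookup per row; the mysql client reports elapsed time
    SELECT COUNT(*) FROM md5_char a JOIN md5_char b USING (hash);
    SELECT COUNT(*) FROM md5_bin  a JOIN md5_bin  b USING (hash);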

The MD5 values that are inserted are deliberately left-padded with 0s to emphasise that field lengths do make a difference when searching on a field, regardless of whether the field is indexed or not. This is because we’re only populating the table with ~2^20 rows, whereas random MD5s have 2^128 possible values. If we just used random MD5s, MySQL would only have to examine the first byte or two due to our small dataset, and there would be a negligible difference in our small sample. Over millions of runs, or a larger dataset… the difference grows.

Exact timings will vary by hardware, but the BINARY(16) lookups should come out measurably faster than the CHAR(32) ones.

A Quick and Efficient URL Shortener Using PHP and MySQL

URL shorteners have proliferated in the past few years, mainly due to the message-length confines that mobile and social networks like Twitter apply. The following code example shows how to make a simple and efficient URL shortener, with plenty of scope for improvement.

Although in this example I am going to use localhost as the serving domain, in production you’d be looking to use as short a domain name as possible. You have a fairly good chance of securing a 5-character domain, which would result in 7-character short URLs to begin with.

This example assumes you’re on a 64-bit system, with a web server capable of URL rewriting, and PHP/MySQL installed. I use the hashing function murmurhash() as it’s known to be very quick and effective with short strings, though it isn’t native to standard PHP installations. You can follow these instructions to install murmurhash. If you do change the hashing method, just ensure you also update the table schema in MySQL (one field is used for the hash values).

Also, I’ve used MySQL partitioning, which makes lookups more efficient, but it isn’t necessary for the script to work.

The HTML layout is extremely simple; in fact, you’ll only see a form and the display of newly created short URLs.

Other than displaying HTML contents, the concept is quite simple:

— Allow a user to submit a long URL and convert it into a short one.
— Deal with requests for short URLs and redirect them to long ones.

This, essentially, is what a URL shortener is. Some services will give you nice statistics and graphs about the people who visit a short URL. The code provided here is simple and extensible enough for you to do that should you wish.

There are three locations where data is stored in this script: two MySQL tables and one flatfile. One table is for inserts, one is for selects, and the flatfile contains the long URLs in the system.

Insert Table

When new URLs are added, a quickish method is needed to see whether the URL already exists in the database or not. This is done by creating a 64-bit hash from the contents of the URL, though you can use whichever hashing method and size of data you wish. 8 bytes is a fairly good size for avoiding collisions while not being too large a key.

insert_table is partitioned (if you choose to) and contains three other fields…

— fileoffset – A pointer to a position in the flatfile that contains the URL in question. Since hashes can collide, all matching hash values are checked until the corresponding URL is found (or not found).
— urllength – Also part of the primary key, this is used to further reduce the potential result set in the case of collisions. More than one result will only appear for URLs that match both the hash and the urllength.
— id – The unique incremental ID of the URL, which is converted into short URLs. When someone submits a URL that already exists in the database, this is the value that gets reused.

After a long URL is submitted, the short version is returned to the end user.

Select Table

select_table simply holds the unique incremental ID of the URL and a fileoffset for where to find the long URL. It is used when someone loads up a short URL and needs redirecting to the long one.

URL file

The URL file is simply a raw list of the long URLs entered into the system.

The Code

Without further ado, here’s the code in order to try it out.

First off, we want to redirect all requests that may be shortened URLs to our single script. Something like this in .htaccess does the trick.
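A sketch of such a rule (the short parameter name is my choice here, matching the PHP sketch later on):

    RewriteEngine On
    # anything that isn't shortener.php and contains no slash is
    # treated as a short URL
    RewriteCond %{REQUEST_URI} !^/shortener\.php
    RewriteRule ^([^/]+)$ shortener.php?short=$1 [L,QSA]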

This rule basically means “if the URL is not shortener.php and does not contain a forward slash, redirect to the URL shortener”. This will allow you to create extra pages on the domain that won’t be redirected, but they’ll need to have a forward slash included in the URL.

Now, add this SQL schema to a database of your choice.
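A sketch along the lines described above; the exact types and key order are guesses, but the fields match the descriptions given earlier:

    CREATE TABLE insert_table (
        hash       BIGINT UNSIGNED   NOT NULL,  -- 64-bit hash of the long URL
        urllength  SMALLINT UNSIGNED NOT NULL,  -- length of the long URL
        fileoffset BIGINT UNSIGNED   NOT NULL,  -- byte offset into the URL file
        id         BIGINT UNSIGNED   NOT NULL,  -- unique incremental URL id
        PRIMARY KEY (hash, urllength, fileoffset)
    );

    CREATE TABLE select_table (
        id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        fileoffset BIGINT UNSIGNED NOT NULL    -- byte offset into the URL file
    );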

Not required but recommended: also apply the following SQL. Partitions can be created during table creation, but I’ve separated the two concepts here for clarity.
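For example (the partition count is arbitrary; tune it to your dataset):

    ALTER TABLE insert_table PARTITION BY KEY (hash) PARTITIONS 64;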

Note that partitions are usually stored as separate files and mostly treated as separate tables. You can have up to 8192 partitions in more recent versions of MySQL; more partitions take longer to create but certainly should help scaling. You may run into operating-system or per-user open-file limits at higher partition counts, though these are trivial to raise after a quick Google.

In more recent versions of MySQL you can also reference particular partitions directly, which can help the query optimiser pick the correct/minimal partitions, particularly when using JOINs.
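For example, on MySQL 5.6 and later (the partition name is illustrative):

    -- examine only the named partition rather than all of them
    SELECT id FROM insert_table PARTITION (p3)
    WHERE hash = 1234567890 AND urllength = 42;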

Here’s the PHP driving it all…
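What follows is a condensed sketch of that flow rather than a drop-in script: the base-62 encoding, flatfile path and parameter names are assumptions, murmurhash() comes from the extension mentioned earlier, and a production version would need escaping and error handling.

    <?php
    // Condensed sketch of the shortener; table names match the schema
    // sketch above, the flatfile path and base-62 alphabet are assumed.
    $db     = new mysqli('localhost', 'user', 'pass', 'shortener');
    $file   = '/var/data/urls.txt';   // flatfile of long URLs, one per line
    $domain = 'http://localhost/';

    // Convert an incremental id to a short code and back (base 62 assumed).
    function encode_id(int $id): string {
        $chars = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
        $code = '';
        do {
            $code = $chars[$id % 62] . $code;
            $id = intdiv($id, 62);
        } while ($id > 0);
        return $code;
    }
    function decode_id(string $code): int {
        $chars = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
        $id = 0;
        foreach (str_split($code) as $c) {
            $id = $id * 62 + (int) strpos($chars, $c);
        }
        return $id;
    }

    // Read the long URL stored at a given byte offset in the flatfile.
    function url_at(string $file, int $offset): string {
        $fh = fopen($file, 'rb');
        fseek($fh, $offset);
        $url = rtrim((string) fgets($fh));
        fclose($fh);
        return $url;
    }

    if (isset($_GET['short'])) {
        // A short URL was requested: look up its file offset and redirect.
        $id  = decode_id($_GET['short']);
        $res = $db->query("SELECT fileoffset FROM select_table WHERE id = $id");
        if ($row = $res->fetch_row()) {
            header('Location: ' . url_at($file, (int) $row[0]));
        } else {
            http_response_code(404);
        }
        exit;
    }

    if (!empty($_POST['url'])) {
        $url  = $_POST['url'];
        $hash = murmurhash($url);   // 64-bit hash from the extension
        $len  = strlen($url);

        // Candidates share both the hash and the URL length; resolve any
        // collisions by reading the actual URLs from the flatfile.
        $id  = null;
        $res = $db->query("SELECT id, fileoffset FROM insert_table
                            WHERE hash = $hash AND urllength = $len");
        while ($row = $res->fetch_assoc()) {
            if (url_at($file, (int) $row['fileoffset']) === $url) {
                $id = (int) $row['id'];   // already known: reuse its id
                break;
            }
        }

        if ($id === null) {
            // New URL: append it to the flatfile, then record both tables.
            $fh = fopen($file, 'c+b');
            fseek($fh, 0, SEEK_END);
            $offset = ftell($fh);
            fwrite($fh, $url . "\n");
            fclose($fh);

            $db->query("INSERT INTO select_table (fileoffset) VALUES ($offset)");
            $id = $db->insert_id;
            $db->query("INSERT INTO insert_table (hash, urllength, fileoffset, id)
                        VALUES ($hash, $len, $offset, $id)");
        }
        echo 'Short URL: ' . $domain . encode_id($id);
    }
    ?>
    <form method="post" action="shortener.php">
        <input type="text" name="url" size="60">
        <input type="submit" value="Shorten">
    </form>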

Some Notes and Possible Improvements

— As is, the shortener is fine for personal use or use within a trusted network.

— You may want to log some details about users’ submitted URLs, in case particular users become problematic by submitting garbage, filling up your database or linking to material you believe should not be linked to.

— Along the same lines, you may want to rate limit new URL submissions per IP or per session cookie.

— Some basic statistics about the number of visitors to each URL may be useful.

— Along the same lines, you may want to build a memory cache based on popular URLs.

— Occasionally you will want to rebuild the partitions in the insert_table. This is because the hash values are inserted in a random order and gradually fragment the table. Having partitions helps you do this process incrementally and continue to be able to serve requests (you would need some kind of indication that a particular partition is ‘busy’ and copy the contents of the partition somewhere temporary in order to continue serving requests for data within it, until the partition is fully rebuilt).

— You could sacrifice some processing for more optimal storage of URLs by converting common components of a URL into an integer flag. For example, most if not all URLs will be either HTTP or HTTPS, and it only requires 1 bit of information to distinguish them from each other. “www.” in hostnames is another common component. TLDs are another, as are file extensions.

— Pre-filling the URL file with a large amount of space would avoid fragmentation of the file due to the small incremental writes on it. You’d want to mark somewhere how much data is actually in the file, as the script currently just seeks to the end of it to write new data. (The same idea could be applied to the MySQL insert_table.)

— Having multiple disks (preferably SSDs) would obviously help with IO contention. Also, a more logical ordering of the URLs (by length, for instance) could rid you of fileoffsets in the database altogether, because you’d only need to know the unique ID and the length of a short URL in order to find its longer counterpart. I deliberately kept this example code simple in order to not have too many open file handles.

Storing InnoDB Tables on Multiple Directories and Disks

InnoDB by default stores all data in one large file, typically referred to as the InnoDB tablespace. Without customisation, a file named ibdata1 at the root of your MySQL data directory will contain all your data and indexes.

One noted problem with this setup is that disk space cannot be reclaimed when you delete data from your tables, so in the long run the data file can grow to an awkward (and redundant) size.

However, there is an option to have each InnoDB table as a separate file. For this you must enable the innodb_file_per_table option in your my.cnf file.
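For example:

    [mysqld]
    innodb_file_per_table

Note that the setting only applies to tables created (or rebuilt with ALTER TABLE) after it is enabled; existing tables remain in the shared tablespace.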

MyISAM tables have three files per table: table.frm for the table format, table.MYD for data, and table.MYI for indexes. InnoDB also has the .frm file but stores both data and indexes in table.ibd. This setup has the slight advantage of having fewer files open to access your tables, which can become important for partitioned tables or setups with a low open_files_limit setting.

By using separate files per table, the opportunity arises to split your data up across different directories and disks. This is easily done with MyISAM tables by specifying a DATA DIRECTORY and INDEX DIRECTORY in your CREATE TABLE syntax, allowing you to spread your tables across directories and disks with ease. The problem is that InnoDB ignores these specifications…
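For reference, the MyISAM version looks like this (table and paths are illustrative):

    CREATE TABLE logs (
        id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        line TEXT
    ) ENGINE=MyISAM
      DATA DIRECTORY  = '/home/mysql/data'
      INDEX DIRECTORY = '/home/mysql/indexes';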

I encountered this problem whilst trying to take advantage of a 2xHDD and 2xSSD (solid state disk) setup, with the intention of putting regularly accessed tables on the SSDs to speed things up. InnoDB’s default behaviour seemed to prevent me from doing this, but there is a workaround. The solution is to create a separate database and symbolically link the folder containing your new database, which is either an elegant or a messy solution depending on whether you have an existing or new project.

Consider the following example where your default MySQL data directory is /var/lib/mysql and you have another folder/partition/disk you wish to use at /home/mysql.

Log in to MySQL to create the following databases (just for testing):
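    -- db1 stays in the default data directory; db2 is the one we will
    -- move to the other disk (names as used in this example)
    CREATE DATABASE db1;
    CREATE DATABASE db2;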

Exit MySQL to the command line to create the symbolic link. The following commands are for Linux systems:
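    # Move the (empty) db2 directory onto the other disk, then link it
    # back into the MySQL data directory. Ensure the mysql user can
    # read and write /home/mysql.
    mv /var/lib/mysql/db2 /home/mysql/db2
    ln -s /home/mysql/db2 /var/lib/mysql/db2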

…and back into MySQL:
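    -- create a throwaway table in the symlinked database to confirm it
    -- works (the table definition is illustrative)
    CREATE TABLE db2.test (id INT UNSIGNED NOT NULL PRIMARY KEY) ENGINE=InnoDB;
    SHOW DATABASES;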

Now you should have two databases, and if you check /var/lib/mysql, the db1 folder should be there with its data, alongside a symbolic link to db2, which now resides in /home/mysql.

And that is how it is done! The drawback is that if you have existing scripts referencing your database, you’re going to have to update your queries to reference db2 for all the tables that now reside in your symlinked database. This is a hassle, but I’m sure some release of MySQL in the near future will harness the DATA and INDEX DIRECTORY syntax that works so well for MyISAM tables.

A word of caution: to save having to rewrite queries to reference db2, you could of course use symbolic links for all the tables in a pre-existing database you wish to have in a separate directory. ALTER TABLE queries will break this setup, however, so beware if you decide to go down that route.

All in all, it is a bit of an unusual solution to what shouldn’t really be a problem, given the versatility of MySQL, but in the meantime it is a handy workaround.

When InnoDB is Slow in phpMyAdmin

You may have recently switched over from MyISAM tables to InnoDB, or indeed have used InnoDB for a long time. phpMyAdmin has been a mainstay tool for quickly viewing and editing databases, but unfortunately it can seem to grind to a halt after clicking to view a particular database.

This is mainly due to the way SELECT COUNT(*) on a table is calculated. MyISAM keeps that kind of metadata at hand, so it can return the value instantly; InnoDB does not. phpMyAdmin doesn’t account for this major difference, which means that if you have big InnoDB tables, or a number of medium-sized ones, loading up a database’s details can take seconds, even minutes, sucking up resources while it tries to calculate statistics about your tables.

If you are not too interested in the general stats of a database, and more interested in viewing and manipulating the tables, there is a small hack you can make to one of phpMyAdmin’s PHP files, namely libraries/database_interface.lib.php, that will load tables up instantly.

At around line 290 a variable $sql is declared (it is declared a number of times in the script, but we’re interested in this instance). Edit the $sql query to the following:
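The exact code differs between phpMyAdmin versions, so treat this as a sketch of the idea rather than the verbatim patch: fetch only the table names, instead of the full status information whose row counts are so expensive for InnoDB ($db is assumed to be the variable holding the current database name at that point in the file).

    // list table names only, skipping the SHOW TABLE STATUS style
    // metadata that triggers slow InnoDB row counts
    $sql = 'SELECT TABLE_NAME AS Name
              FROM information_schema.TABLES
             WHERE TABLE_SCHEMA = \'' . addslashes($db) . '\'';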

The small downside is that you can’t see row counts and some other general metadata, but that’s a small price to pay to continue using phpMyAdmin as a quick GUI reference to your database. I can verify that a database with 600M rows (10 tables) originally took about 6 seconds to load, but loads instantly after this fix. A smaller 150M-row database (40 tables, lots of table partitions) would take up to 30 seconds to load and hang my browser… it now takes only a second to spark up.

Credit goes to Richard Dale in the SourceForge forum, who created this workaround. I thought I would dedicate a post to it as the issue does not seem to be too prominent. With MySQL soon to use InnoDB as its default storage engine, no doubt this issue will come to the fore, and phpMyAdmin will implement a more permanent fix.