Numbering Google Search Results

Sometimes it’s convenient, when looking at Google search results, to know the position in the ranking order of a particular URL or domain.

You can use this link as a bookmarklet: simply drag it into your bookmarks toolbar to save it, then click it while viewing Google search results to see numbered rankings.
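The bookmarklet itself isn’t reproduced here, but a minimal sketch looks like this (the `#search h3` selector is an assumption about Google’s result markup, which changes often):

```javascript
javascript:(function () {
    /* Prefix each result heading with its rank; the selector will need
       updating whenever Google changes its result markup. */
    var i = 0;
    document.querySelectorAll('#search h3').forEach(function (h) {
        h.textContent = (++i) + '. ' + h.textContent;
    });
})();
```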

I’ve found myself counting from number 1 downwards once too often and this little bit of Javascript has saved me the time and hassle, especially so when the one you’re looking for is further down the ranks!

NGINX Configuration for Sendy

Having recently migrated from Apache 2 to NGINX (with PHP-FPM) for a site that included Sendy, I thought it would be worth posting the configuration required for it to work.

Sendy ships with an .htaccess file containing some simple rules that it needs to work correctly. I came across some other configurations via Google, but most seemed overly complicated.

You will need to tweak:

  • The domain name used for your configuration
  • The base directory used
  • The PHP-FPM UNIX socket path, or replace it with your TCP/IP settings
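A server block along the following lines covers Sendy’s needs. Treat it as a sketch reconstructed from Sendy’s stock .htaccess rewrite rules rather than a canonical configuration; the domain, root, and socket path are placeholders to tweak as per the list above:

```nginx
server {
    listen 80;
    server_name sendy.example.com;            # your domain
    root /var/www/sendy;                      # your Sendy directory
    index index.php;

    # Sendy's clean URLs, translated from its stock .htaccess rules
    rewrite ^/l/([a-zA-Z0-9/]+)$ /l.php?i=$1 last;
    rewrite ^/t/([a-zA-Z0-9/]+)$ /t.php?i=$1 last;
    rewrite ^/w/([a-zA-Z0-9/]+)$ /w.php?i=$1 last;
    rewrite ^/unsubscribe/(.*)$ /unsubscribe.php?i=$1 last;
    rewrite ^/subscribe/(.*)$ /subscribe.php?i=$1 last;

    location / {
        # Map extensionless requests to their .php counterparts
        try_files $uri $uri/ $uri.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;  # or e.g. 127.0.0.1:9000
    }
}
```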


Firefox Browser Automation with mozrepl – A PHP Class

Browser automation can be an effective method where command-line clients like cURL will fail. There are a number of better-known browser automation tools, like Selenium and AutoIt, that can achieve a lot in themselves. The prevailing use for browser automation is to imitate a real-world user, and it should only be used where necessary: a command-line client is far more efficient and uses far fewer computer resources.

A lesser known plugin for Firefox called mozrepl is an excellent browser automation tool which gives complete control of your browser from top to bottom. Think of everything your browser does, its extensions and more…

mozrepl gets its name from REPL (a read-eval-print loop), and I assume the “moz” refers to Mozilla (Firefox). mozrepl is a Firefox plugin that allows you to telnet into your browser, whether it is local or on a remote machine. You communicate with mozrepl over telnet, giving it Javascript commands which it processes synchronously, returning the resulting data back to you, if any.

I could go on and on about the various ways you can utilise it; instead, I’ll post some real-world examples and code that can inspire you. Meanwhile, the remainder of this article describes how to get mozrepl running, along with a generic PHP class that’ll communicate with mozrepl via telnet.

How to Install mozrepl

– Install the add-on
– Restart your browser if required
– Go to Tools > MozRepl and ensure Activate on Startup is checked; also click Start if mozrepl has not already started
– There is also an option to Allow outside connections should you wish to control the browser over a network. For now, testing on your local machine should be fine.

A basic PHP class to use mozrepl via telnet
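The original class isn’t reproduced here; the following is a minimal sketch of such a telnet client. It assumes mozrepl’s default port 4242 and its default repl> prompt (the prompt name can differ, e.g. repl2> for a second connection):

```php
<?php
// A minimal sketch of a telnet client for mozrepl.
class MozRepl
{
    private $fp;
    private $prompt = 'repl>'; // mozrepl's default prompt; may differ

    public function __construct($host = '127.0.0.1', $port = 4242)
    {
        $this->fp = fsockopen($host, $port, $errno, $errstr, 5);
        if (!$this->fp) {
            throw new Exception("Could not connect to mozrepl: $errstr ($errno)");
        }
        $this->read(); // consume the welcome banner up to the first prompt
    }

    // Send a line of Javascript and return whatever mozrepl prints back.
    public function send($js)
    {
        fwrite($this->fp, $js . "\n");
        return $this->read();
    }

    private function read()
    {
        $data = '';
        while (($line = fgets($this->fp, 4096)) !== false) {
            $data .= $line;
            if (strpos($data, $this->prompt) !== false) {
                break; // the prompt signals the end of the response
            }
        }
        return trim(str_replace($this->prompt, '', $data));
    }

    public function __destruct()
    {
        if ($this->fp) {
            fclose($this->fp);
        }
    }
}
```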

Basic Example

After including the above, you can test that it’s working with some sample Javascript.

Run this from the command line; you can run it in your browser if you like, but it makes more sense from the command line.
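For example, assuming the sketch class above is saved as MozRepl.php (`content` is mozrepl’s global for the active tab’s window):

```php
<?php
require 'MozRepl.php'; // the sketch class above

$repl = new MozRepl('127.0.0.1', 4242);

// Read the title of the page in the active tab...
echo $repl->send('content.document.title'), "\n";

// ...and point the active tab somewhere else.
$repl->send('content.location.href = "http://www.google.com"');
```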

I’d recommend spending an hour reading through the mozrepl guide to get an idea of the general scope of the add-on. Keep checking back for some real-world examples.

A Simple Generic PHP cURL Class

cURL is a most excellent library that enables you to communicate across a network with a number of protocols. PHP integrated the cURL library early in its development and nowadays its use is widespread across countless applications. To read more about it you can visit the home page of its author, Daniel Stenberg.

I use a very short generic class that can deal with the majority of my network requests, which is available to you below.
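The original class isn’t reproduced here; the following sketch captures the same idea, wrapping curl_setopt_array() with a few overridable defaults:

```php
<?php
// A minimal sketch of a generic cURL wrapper.
class Curl
{
    public static function request($url, array $options = array())
    {
        $ch = curl_init($url);

        // Sensible defaults; anything passed in $options overrides them.
        $defaults = array(
            CURLOPT_RETURNTRANSFER => true,  // return the body, don't print it
            CURLOPT_FOLLOWLOCATION => true,  // follow redirects
            CURLOPT_CONNECTTIMEOUT => 10,
            CURLOPT_TIMEOUT        => 30,
        );

        // Array union: keys in $options win over the defaults.
        curl_setopt_array($ch, $options + $defaults);

        $body = curl_exec($ch);
        if ($body === false) {
            $error = curl_error($ch);
            curl_close($ch);
            throw new Exception("cURL error: $error");
        }
        curl_close($ch);
        return $body;
    }
}
```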

An example request and output is shown below.
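A request then looks like this (the output is simply whatever the server returns; the URL and user agent here are arbitrary):

```php
<?php
$html = Curl::request('http://www.example.com/', array(
    CURLOPT_USERAGENT => 'Mozilla/5.0 (compatible; ExampleBot/1.0)',
));
echo substr($html, 0, 200), "\n"; // first 200 bytes of the page
```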

Much of the magic in this generic class happens in the curl_setopt_array() function call, which sets the options described on this page. cURL pretty much covers all angles, hence the huge list of options you can pass to your request.

There is also the option of performing multiple requests should you require, which is slightly more efficient.
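A sketch of that multi-request approach using PHP’s curl_multi_* functions, which run the transfers concurrently:

```php
<?php
// Fetch several URLs in parallel; returns an array of bodies keyed like $urls.
function multi_request(array $urls)
{
    $mh = curl_multi_init();
    $handles = array();

    foreach ($urls as $key => $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $handles[$key] = $ch;
    }

    // Drive all handles until every transfer has finished.
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh); // wait for activity instead of busy-looping
    } while ($running > 0);

    $results = array();
    foreach ($handles as $key => $ch) {
        $results[$key] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
    return $results;
}
```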

The nice thing about using cURL for network requests is that there is a huge amount of documentation, and the author continues to provide assistance on various forums and sites to this day.

Murmurhash2 in PHP without the extension

Murmurhash is a nice and speedy hashing algorithm that is handy for creating hash values based on strings. I use it often as benchmarks suggest it is one of the speedier implementations out there. Murmurhash can create 32-bit or 128-bit outputs.

In PHP, if you are able to install extensions, then you can simply install the murmurhash extension * (see bottom of page for instructions) and be done with it. If you’re on shared hosting, here is an extensionless alternative to produce 32-bit outputs based on the 2nd version of the murmurhash algorithm.
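The following is a sketch of that alternative: 32-bit MurmurHash2 in plain PHP, assuming 64-bit PHP so the intermediate products fit the native integer type:

```php
<?php
// 32-bit MurmurHash2 in plain PHP (assumes 64-bit PHP integers).
function murmurhash2($key, $seed = 0)
{
    // 32-bit multiply without overflowing PHP's 64-bit integers.
    $mul32 = function ($a, $b) {
        $aLo = $a & 0xffff; $aHi = ($a >> 16) & 0xffff;
        $bLo = $b & 0xffff; $bHi = ($b >> 16) & 0xffff;
        $mid = (($aHi * $bLo + $aLo * $bHi) & 0xffff) << 16;
        return ($aLo * $bLo + $mid) & 0xffffffff;
    };

    $m   = 0x5bd1e995;
    $len = strlen($key);
    $h   = ($seed ^ $len) & 0xffffffff;
    $i   = 0;

    // Body: mix four bytes at a time (little-endian).
    while ($len >= 4) {
        $k = ord($key[$i]) | (ord($key[$i + 1]) << 8)
           | (ord($key[$i + 2]) << 16) | (ord($key[$i + 3]) << 24);
        $k = $mul32($k, $m);
        $k ^= $k >> 24;
        $k = $mul32($k, $m);

        $h = $mul32($h, $m);
        $h ^= $k;

        $i += 4;
        $len -= 4;
    }

    // Tail: mix the remaining 1-3 bytes (deliberate fall-through).
    switch ($len) {
        case 3: $h ^= ord($key[$i + 2]) << 16;
        case 2: $h ^= ord($key[$i + 1]) << 8;
        case 1: $h ^= ord($key[$i]);
                $h = $mul32($h, $m);
    }

    // Final avalanche.
    $h ^= $h >> 13;
    $h = $mul32($h, $m);
    $h ^= $h >> 15;

    return $h; // e.g. echo murmurhash2('hello');
}
```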

Do note, it is many times slower than the extension implementation, simply because it’s a user-created function. The code itself is relatively efficient and mostly bitshifting anyway. I had to knock this together because I needed murmurhash in a shared hosting environment where installing extensions is not an option.

* More recently, that particular link with instructions on how to install the murmurhash extension is no longer available. Here is the general gist of how to install the extension:
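The steps below are the generic PECL-style recipe for building a PHP extension from source; since the original link is gone, the source directory here is hypothetical:

```bash
# Generic recipe for building a PHP extension from source; adjust the
# path to wherever you obtain the murmurhash extension's code.
cd murmurhash-extension-source/   # hypothetical source directory
phpize                            # prepare the build for your PHP version
./configure
make
sudo make install                 # installs murmurhash.so

# Then enable it in php.ini (or a conf.d file) and restart PHP:
#   extension=murmurhash.so
```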

Tip: Storing MD5 Values (and other string/binary representations)

A common occurrence I have noticed in MySQL apps is that MD5 values are stored as 32-byte values rather than 16. Just to ‘rehash’: an MD5 value is a 128-bit (16 byte) value, typically used as a unique fixed-length signature of a string, useful for identifying unique strings or for one-way hashing of passwords. The binary representation takes 16 bytes, while the human-readable hexadecimal version takes twice as many.

The same goes for any of the other hashing techniques. They tend to output a friendly hex format, which is useful in a number of cases, like in Javascript or within a format such as CSV or TSV (raw binary bytes would mess up the delimiting of the data). When you’re storing these values, though, most of the time it makes sense to use the shorter binary representation.
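In PHP, for instance, the raw form is one argument away (a quick illustration):

```php
<?php
$hex = md5('hello');        // "5d41402abc4b2a76b9719d911017c592"
$bin = md5('hello', true);  // the same digest as 16 raw bytes

var_dump(strlen($hex));     // int(32)
var_dump(strlen($bin));     // int(16)

// In MySQL the equivalents are MD5(str) and UNHEX(MD5(str)),
// with HEX() converting back for display.
```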

Another common example is IP addresses: I often see VARCHAR(16) used for IPv4 addresses. Perhaps when IPv6 is more commonplace we will see VARCHAR(64) instead. IPv4 addresses are 32-bit values and can be stored as an UNSIGNED INT (4 bytes), while IPv6 addresses are 128-bit. There isn’t a native 16-byte integer type in MySQL, so a BINARY(16) or two UNSIGNED BIGINT fields would do, though perhaps software will address this as IPv6 gains adoption.
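PHP’s native helpers cover both families (a quick sketch; note that ip2long() can return negative values on 32-bit builds):

```php
<?php
// IPv4 <-> 32-bit integer, suitable for UNSIGNED INT columns
$n = ip2long('192.168.0.1');    // 3232235521 on 64-bit PHP
$s = long2ip($n);               // "192.168.0.1"

// IPv4/IPv6 <-> packed binary, suitable for BINARY(4)/BINARY(16) columns
$v4 = inet_pton('192.168.0.1'); // 4 bytes
$v6 = inet_pton('2001:db8::1'); // 16 bytes
echo inet_ntop($v6), "\n";      // "2001:db8::1"
```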

When doing lookups on these kinds of fields, you want them as small as possible so that they can fit neatly into indexes and less processing time is spent evaluating them.

The following is a simple test to compare the speed of a CHAR(32) MD5 column versus a BINARY(16) one.
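The original test script isn’t shown here, but the shape of it is simple (a sketch; the table and column names are assumptions):

```sql
CREATE TABLE md5_char (h CHAR(32)   NOT NULL, KEY (h));
CREATE TABLE md5_bin  (h BINARY(16) NOT NULL, KEY (h));

-- After populating ~2^20 left-zero-padded MD5s into each, compare:
SELECT COUNT(*) FROM md5_char WHERE h = '00000000000000000000000000abcdef';
SELECT COUNT(*) FROM md5_bin  WHERE h = UNHEX('00000000000000000000000000abcdef');
```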

The MD5 values that are inserted are deliberately left-padded with 0’s to emphasise the fact that field lengths do make a difference when searching on a field, regardless of whether the field is indexed or not. This is because we’re only populating the table with ~2^20 rows, whereas random MD5s have 2^128 possible values. If we just used random MD5s, MySQL would only have to examine the first byte or two, given our small dataset, and there would be negligible difference in our small sample. Over millions of runs, or a larger dataset, the difference grows.

Output will vary by machine, but the BINARY(16) lookups should come out measurably faster.

A Quick and Efficient URL Shortener Using PHP and MySQL

URL shorteners have proliferated in the past few years, mainly due to the length constraints that mobile and social networks like Twitter apply. The following code example shows how to make a simple and efficient URL shortener, with plenty of scope for improvement.

Although in this example I am going to use localhost as the serving domain, you’d be looking to use as short a domain name as possible in production. You have a fairly good chance of securing a 5 character domain that’d result in 7 character short URLs to begin with.

This example assumes you’re on a 64-bit system with a web server capable of URL rewriting, plus PHP and MySQL. I use the hashing function murmurhash() as it’s known to be very quick and effective with short strings, though it’s not native to standard PHP installations. You can follow these instructions to install murmurhash. If you do change the hashing method, just ensure you also update the table schema in MySQL (one field is used for the hash values).

Also, I’ve used MySQL partitioning which makes lookups more efficient, but it’s not necessary for the working of the script.

The HTML layout is extremely simple: you’ll only see a form and the display of newly created short URLs.

Other than displaying HTML contents, the concept is quite simple:

— Allow a user to submit a long URL and convert it into a short one
— Deal with requests for short URLs and redirect them to long ones.

This, essentially, is what a URL shortener is. Some services will give you nice statistics and graphs about the people who visit a short URL. The code provided here is simple and extensible enough for you to do that should you wish.

There are 3 locations where data is stored in this script, 2 MySQL tables and one flatfile. One table is for inserts, one for selects and the flatfile contains the long URLs in the system.

Insert Table

When new URLs are added, a quickish method is needed to see whether the URL already exists in the database or not. This is done by creating a 64-bit hash from the contents of the URL, though you can use whichever hashing method and size of data you wish. 8 bytes is a fairly good size for avoiding collisions while not being too large a key.

insert_table is partitioned (if you choose to partition it) and contains three other fields…

— fileoffset – A pointer to a position in the flatfile that contains the URL in question. Since hashes can collide, all matching hash values are checked until the corresponding URL is found (or not found).
— urllength – Also part of the primary key, this is used to further reduce the potential result set in the case of collisions. More than one result will appear only for URLs that match both the hash and the urllength.
— id – The unique incremental ID of the URL, this is converted into short URLs. In cases where someone submits a URL to be shortened that already exists in the database, this datapoint is used.

After a long URL is submitted, the short version is returned to the end user.

Select Table

select_table simply holds the unique incremental ID of the URL and a fileoffset for where to find the long URL. It is used when someone loads a short URL and needs to be redirected to the long one.

URL file

The URL file is simply a raw list of long URLs entered into the system.

The Code

Without further ado, here’s the code in order to try it out.

First off, we want to redirect all requests that may be shortened URLs to our single script. Something like this in .htaccess does the trick.
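A sketch of such a rule (the `id` parameter name is an assumption, matched by the PHP sketch further down):

```apache
RewriteEngine On
# Anything that isn't shortener.php and contains no further slash
# is treated as a short URL and handed to the script.
RewriteCond %{REQUEST_URI} !^/shortener\.php
RewriteRule ^([^/]+)$ /shortener.php?id=$1 [L,QSA]
```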

This rule basically means “if the URL is not shortener.php and does not contain a forward slash, redirect to the URL shortener”. This will allow you to create extra pages on the domain that won’t be redirected, but they’ll need to have a forward slash included in the URL.

Now, add this SQL schema to a database of your choice.
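The original schema isn’t reproduced here; the following is a reconstruction from the field descriptions above (the exact types and key layout are assumptions):

```sql
CREATE TABLE insert_table (
    hash       BIGINT UNSIGNED   NOT NULL,  -- 64-bit hash of the long URL
    urllength  SMALLINT UNSIGNED NOT NULL,  -- length of the long URL
    id         INT UNSIGNED      NOT NULL,  -- unique incremental ID
    fileoffset BIGINT UNSIGNED   NOT NULL,  -- byte offset into the flatfile
    PRIMARY KEY (hash, urllength, id)
) ENGINE=InnoDB;

CREATE TABLE select_table (
    id         INT UNSIGNED      NOT NULL AUTO_INCREMENT,
    fileoffset BIGINT UNSIGNED   NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB;
```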

Not required but recommended, also apply the following SQL. Partitions can be created during table creation but I’ve separated the two concepts here for clarity.
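Something along these lines (64 partitions is an arbitrary example; tune the count to your dataset):

```sql
ALTER TABLE insert_table PARTITION BY KEY (hash) PARTITIONS 64;
ALTER TABLE select_table PARTITION BY KEY (id)   PARTITIONS 64;
```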

Note that partitions are usually stored as separate files and mostly treated as separate tables. You can have up to 8192 partitions on more recent versions of MySQL; they take longer to create but certainly help with scaling. You may run into operating system/user open file limits if you specify a higher number, though that is trivial to change after a quick Google.

In more recent versions of MySQL you can also reference particular partitions directly, which can help the query optimiser pick the correct/minimal partitions, particularly when using JOINs.
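For example, explicit partition selection (MySQL 5.6+; partition names follow MySQL’s default p0, p1, … scheme, and the values are illustrative):

```sql
SELECT fileoffset FROM select_table PARTITION (p12) WHERE id = 345678;
```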

Here’s the PHP driving it all…
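The original script isn’t reproduced here; below is a condensed sketch of the same flow (form handling, collision checking against the flatfile, and redirects). The murmurhash() call, the `id` query parameter, and the base-62 encoding are assumptions consistent with the .htaccess rule above:

```php
<?php
// shortener.php — a condensed sketch, not the original script. Assumes the
// murmurhash() extension function, the insert_table/select_table schema
// shown earlier, and a writable flatfile of newline-terminated long URLs.

$db    = new mysqli('localhost', 'user', 'pass', 'shortener'); // your credentials
$file  = __DIR__ . '/urls.dat';
$chars = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';

function encode_id($id) {              // incremental ID -> short base-62 token
    global $chars; $out = '';
    do { $out = $chars[$id % 62] . $out; $id = (int)($id / 62); } while ($id > 0);
    return $out;
}

function decode_id($token) {           // short token -> incremental ID
    global $chars; $id = 0;
    for ($i = 0, $n = strlen($token); $i < $n; $i++) {
        $id = $id * 62 + (int)strpos($chars, $token[$i]);
    }
    return $id;
}

function url_at_offset($file, $offset) { // read one long URL from the flatfile
    $fp = fopen($file, 'rb');
    fseek($fp, $offset);
    $url = rtrim(fgets($fp));
    fclose($fp);
    return $url;
}

if (isset($_GET['id'])) {              // a short URL: redirect to the long one
    $id  = decode_id($_GET['id']);
    $res = $db->query("SELECT fileoffset FROM select_table WHERE id = $id");
    if ($res && ($row = $res->fetch_assoc())) {
        header('Location: ' . url_at_offset($file, (int)$row['fileoffset']));
    } else {
        header('HTTP/1.0 404 Not Found');
    }
    exit;
}

$short = null;
if (!empty($_POST['url'])) {           // a new submission: shorten it
    $url  = trim($_POST['url']);
    $hash = murmurhash($url);          // 64-bit hash via the extension (assumed)
    $len  = strlen($url);

    // Check collision candidates (same hash and length) against the flatfile.
    $id  = null;
    $res = $db->query("SELECT id, fileoffset FROM insert_table
                        WHERE hash = $hash AND urllength = $len");
    while ($res && ($row = $res->fetch_assoc())) {
        if (url_at_offset($file, (int)$row['fileoffset']) === $url) {
            $id = (int)$row['id'];     // already known: reuse its ID
            break;
        }
    }

    if ($id === null) {                // genuinely new: append and record it
        clearstatcache();
        $offset = file_exists($file) ? filesize($file) : 0;
        file_put_contents($file, $url . "\n", FILE_APPEND);

        $db->query("INSERT INTO select_table (fileoffset) VALUES ($offset)");
        $id = $db->insert_id;
        $db->query("INSERT INTO insert_table (hash, urllength, fileoffset, id)
                    VALUES ($hash, $len, $offset, $id)");
    }
    $short = 'http://localhost/' . encode_id($id);
}
?>
<form method="post">
  <input type="text" name="url" size="60">
  <input type="submit" value="Shorten">
</form>
<?php if ($short): ?>
  <p>Short URL: <a href="<?php echo htmlspecialchars($short); ?>"><?php echo htmlspecialchars($short); ?></a></p>
<?php endif; ?>
```

Note that the long URL itself never enters SQL in this sketch; only its hash, length, and file offsets do, which conveniently sidesteps escaping concerns on the lookup path.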

Some Notes and Possible Improvements

— As is, the shortener is fine for personal use or use within a trusted network.

— You may want to log some details about users’ submitted URLs, in case particular users become problematic by submitting garbage, filling up your database, or linking to material you believe should not be linked to.

— Along the same lines, you may want to rate limit new URL submissions per IP or per session cookie.

— Some basic statistics about the number of visitors to each URL may be useful.

— Along the same lines, you may want to build a memory cache based on popular URLs.

— Occasionally you will want to rebuild the partitions in the insert_table, because the hash values are inserted in a random order and gradually fragment the table. Having partitions lets you do this incrementally while continuing to serve requests (you would need some way of marking a partition as ‘busy’ and copying its contents somewhere temporary so requests for its data can still be served until the rebuild completes); see the statement after this list.

— You could sacrifice some processing for more optimal storage of URLs by converting common components of a URL into an integer flag. For example, most if not all URLs will be either HTTP or HTTPS, and only 1 bit of information is needed to distinguish them. “www.” in hostnames is another common component. TLDs are another, as are file extensions.

— Pre-filling the URL file with a large amount of space would avoid fragmentation of the file due to the small incremental writes on it. You’d want to record somewhere how much data is actually in the file, as the script currently just seeks to the end of it to write new data. (The same idea could be applied to the MySQL insert_table.)

— Having multiple disks (preferably SSDs) would obviously help with IO contention. Also, a more logical ordering of the URLs (by length, for instance) could rid you of fileoffsets in the database altogether, because you’d only need the unique ID and the length of a short URL in order to find its longer counterpart. I deliberately kept this example code simple in order not to have too many open file handles.
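For the partition-rebuild note above, the statement itself is a one-liner per partition (partition names assume MySQL’s default p0, p1, … naming):

```sql
ALTER TABLE insert_table REBUILD PARTITION p0;  -- then p1, p2, ... in turn
```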

Storing InnoDB Tables on Multiple Directories and Disks

InnoDB by default stores all data in one large file, typically referred to as the InnoDB tablespace. Without customisation, a file named ibdata1 at the root of your MySQL data directory will contain all your data and indexes.

One noted problem with this setup is that disk space cannot be reclaimed when you delete data from your tables, so in the long run the data file can grow to an awkward (and redundant) size.

However, there is the option to have each InnoDB table as a separate file. For this you must use the innodb_file_per_table option in your my.cnf file.
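That is one line under the [mysqld] section; note that only tables created (or rebuilt) after the change get their own .ibd files, while existing ones stay in the shared tablespace:

```ini
[mysqld]
innodb_file_per_table = 1
```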

MyISAM tables have three files per table: table.FRM for the table format, table.MYD for data, and table.MYI for indexes. InnoDB also has the .frm file but stores both data and indexes in table.ibd. This setup has the slight advantage of needing fewer open files to access your tables, which can become important for partitioned tables or setups with a low open_files_limit setting.

By using separate files per table, the opportunity arises to split your data up across different directories and disks. This is easily done with MyISAM tables by specifying a DATA DIRECTORY and INDEX DIRECTORY in your CREATE TABLE syntax, allowing you to spread your tables across directories and disks with ease. The problem is that InnoDB ignores these specifications…

I encountered this problem whilst trying to take advantage of a 2xHDD and 2xSSD (solid state drive) setup, with the intention of putting regularly accessed tables on the SSDs to speed things up. InnoDB’s default behaviour seemed to prevent me from doing this, but there is a workaround: create a separate database and symlink the folder containing the new database, which is an elegant or messy solution depending on whether you have an existing or new project.

Consider the following example where your default MySQL data directory is /var/lib/mysql and you have another folder/partition/disk you wish to use at /home/mysql.

Login to MySQL to create the following database (just for testing)
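For example, using db1 as the test database (matching the result described below):

```sql
CREATE DATABASE db1;
```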

Exit MySQL into the command line to create the symbolic link. The following command is for Linux systems:
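Something along these lines, using the paths from the example above (ownership details may vary by distribution):

```bash
mkdir /home/mysql/db2                      # the database's new home
chown mysql:mysql /home/mysql/db2          # MySQL must own the directory
ln -s /home/mysql/db2 /var/lib/mysql/db2   # symlink it into the data directory
```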

…and back into MySQL
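MySQL treats any directory under its data directory as a database, so the symlinked folder should now appear; creating a test table confirms that the data lands on the other disk (a sketch):

```sql
SHOW DATABASES;                               -- db2 should be listed
CREATE TABLE db2.test (id INT) ENGINE=InnoDB; -- its .ibd lands on /home/mysql
```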

Now you should have two databases, and if you check /var/lib/mysql, the db1 folder should be there with its data, alongside a symbolic link to db2, which now resides on /home/mysql.

And that is how it is done! The drawback is that if you have existing scripts referencing your database, you’re going to have to update your queries to reference db2 for all the tables that now reside in your symlinked database. This is a hassle, but I’m sure some release of MySQL in the near future will harness the DATA and INDEX DIRECTORY syntax that works so well for MyISAM tables.

A word of caution: to save you having to rewrite queries to reference db2, you could of course use symbolic links for all the tables in a pre-existing database you wish to have in a separate directory. ALTER TABLE queries will break this setup, so beware if you decide to go down that route.

All in all, it is a bit of an unusual solution to what shouldn’t really be a problem, given the versatility of MySQL, but for the meantime it is a handy workaround.

When InnoDB is Slow in phpMyAdmin

You may have recently switched over from MyISAM tables to InnoDB, or in fact used InnoDB for a long time. phpMyAdmin has been a mainstay tool for quick viewing and editing of databases, but unfortunately seems to grind to a halt after clicking to view a particular database.

This is mainly due to the way that SELECT COUNT(*) FROM TABLE is calculated. MyISAM keeps that kind of metadata at hand so it can produce the value instantly, while InnoDB does not. phpMyAdmin doesn’t recognise this major difference, which means that if you have big InnoDB tables, or a number of medium-sized ones, loading up a database’s details can take seconds, even minutes. It also sucks up resources while it intensely tries to calculate statistics about your tables.

If you are not too interested in the general stats of a database, and more interested in viewing and manipulating the tables, there is a small hack you can make to one of phpMyAdmin’s PHP files that will load tables up instantly, namely libraries/database_interface.lib.php.

At around line 290 there is a variable $sql declared (it is declared a number of times in the script but we’re interested in this instance), edit the $sql command to this:

The small downside is that you can’t see row counts and some other general metadata, but it’s a small price to pay to continue using phpMyAdmin as a quick GUI reference to your database. I can verify that a 600M row database (10 tables) originally took about 6 seconds to load, but loads instantly after this fix. A smaller 150M row database (40 tables, lots of table partitions) would take up to 30 seconds to load and hang my browser… now only takes a second to spark up.

Credit goes to Richard Dale in the SourceForge forum, who created this workaround. I thought I would dedicate a post to this as the issue does not seem to be too prominent. With MySQL soon to use InnoDB as its default storage engine, no doubt this issue will come to the fore, and phpMyAdmin will implement a more permanent workaround.

Creating CSS Sprites with PHP

When you have a number of small images that appear on a page, or across a number of pages that you know one particular client is going to visit, it makes sense to use CSS sprites to speed up the rendering of your web pages.

This has two advantages, namely:

  • It reduces the number of HTTP requests a client has to make to download the images on your page
  • It speeds up the rendering of the images and the page, since browsers limit the number of concurrent HTTP connections between a server and a client.

The slight downside is that you may make a CSS sprite containing images that don’t appear on a particular page, and that the client therefore doesn’t need. Some consideration has to be put into these situations to weigh the pros and cons of using a sprite.

A fairly comprehensive analysis of using sprites can be found at

Using PHP to Create Sprites

It’s very easy to create a sprite using PHP, which can generate a single image from a collection of images, and also generate the CSS for you. This is a huge timesaver if you intend to make a number of sprites.

The following simple class does the whole job upon instantiation:
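What follows is a sketch of such a class (the original isn’t reproduced here). It assumes the GD extension and packs same-sized PNGs into one horizontal strip:

```php
<?php
// A sketch of the sprite builder described below (GD extension assumed).
class SpriteMaker
{
    public function __construct($folder, $output, $x, $y)
    {
        // Collect PNGs of exactly the requested dimensions; ignore the rest.
        $images = array();
        foreach (glob(rtrim($folder, '/') . '/*.png') as $path) {
            list($w, $h) = getimagesize($path);
            if ($w == $x && $h == $y) {
                $images[] = $path;
            }
        }
        if (!$images) {
            return;
        }

        // One horizontal strip: width = count * $x, height = $y.
        $sprite = imagecreatetruecolor(count($images) * $x, $y);
        imagealphablending($sprite, false);
        imagesavealpha($sprite, true);
        imagefill($sprite, 0, 0, imagecolorallocatealpha($sprite, 0, 0, 0, 127));

        $css = '';
        foreach ($images as $i => $path) {
            // Append each image at its slot and log the position in the CSS.
            $img = imagecreatefrompng($path);
            imagecopy($sprite, $img, $i * $x, 0, 0, 0, $x, $y);
            imagedestroy($img);

            $css .= sprintf(
                ".sprite%d { background: url('%s.png') -%dpx 0; width: %dpx; height: %dpx; }\n",
                $i, $output, $i * $x, $x, $y
            );
        }

        imagepng($sprite, $output . '.png');
        imagedestroy($sprite);
        file_put_contents($output . '.css', $css);
    }
}

// Usage: pack the 16x16 icons in ./icons into sprite.png / sprite.css
// new SpriteMaker('icons', 'sprite', 16, 16);
```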

A summary of what’s going on…

  • Accept 4 arguments to perform the sprite and CSS creation:
    1. $folder, the folder to read images from
    2. $output, the filename given to the output, $output.css and $output.png
    3. $x,$y, the dimensions of the images you want to consider, all other images are ignored. If you wish to have images of variable size you will also want to do some mathematical optimization, to fit the images into the smallest sprite dimension possible.
  • The $folder you provide will then be scanned for matching files
  • The sprite image is then created, with a size according to the number of images that will be put into it. A CSS file is also created.
  • For each image in the folder, the image is appended to the sprite image in its relevant position, and the position is logged in the CSS file. A simple counter is used to differentiate the classes declared in the CSS file.