From Laggy Listings to Lightning Speed: A Database Optimization Case Study

Problem: Filtered listings on the web frontend were taking a long time (22-25 seconds) to load.

Analysis: Identified inefficient database queries as the culprit. Specifically, tables lacked proper indexing, causing the database to scan through a large amount of data unnecessarily when filtering was applied. Joins with filtering conditions were especially slow because the primary key alone wasn’t enough to optimize the search.

Solution:

  • Added required indexes: Created indexes on the columns used for filtering. Indexes act like an organized filing system for the database, allowing it to quickly locate relevant data based on the filter criteria. This significantly reduces the amount of data the database needs to scan.
  • Combined partitioning with the filter field and primary key: Partitioning is a technique that divides large tables into smaller, more manageable chunks. By combining partitioning with the filter field and the primary key, we further optimized the search process. When a filter is applied, the database can quickly identify the relevant partition(s) containing the filtered data, reducing the search scope even further.
  • A few code changes: On closer analysis, this particular scenario only needed the records belonging to the currently logged-in user. The original developers had joined against user_master with a condition on the user name, but the userid (an int) was already in the session. So the query was tweaked to drop the join and filter the single table directly with a userid = 'xx' condition, as sketched below.
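
A minimal sketch of the three changes, using hypothetical table and column names (the real schema is not shown in the post):

-- 1. Index the column the listing filter actually uses.
ALTER TABLE listing ADD INDEX idx_listing_userid (userid);

-- 2. Combine partitioning with the filter field and the primary key. MySQL
--    requires every unique key to include the partitioning column, so the
--    primary key is widened to (id, userid) before partitioning on userid.
ALTER TABLE listing DROP PRIMARY KEY, ADD PRIMARY KEY (id, userid);
ALTER TABLE listing PARTITION BY HASH (userid) PARTITIONS 16;

-- 3. Drop the join on user_master by user name and filter the single table
--    with the userid already held in the session.
-- Before:
SELECT l.* FROM listing l
  JOIN user_master u ON u.id = l.userid
 WHERE u.user_name = 'some.user';
-- After (userid taken straight from the session):
SELECT l.* FROM listing l WHERE l.userid = 42;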

Result: These optimizations led to a significant improvement in performance. The filtered pages now load and render in just 4-5 seconds, which is a massive improvement from the original 22-25 seconds.

Percentage decrease in loading time:

  • The average of the original loading time range: (22 seconds + 25 seconds) / 2 = 23.5 seconds
  • The difference between the original and optimized averages: 23.5 seconds – 4.5 seconds (the midpoint of the new 4-5 second range) = 19 seconds
  • Divide this difference by the original average time and multiply by 100% to arrive at a percentage: (19 seconds / 23.5 seconds) * 100% = 80.85% (approximately 81%) decrease in loading time.

Percentage increase in loading speed:

  • Dividing the original average time by the optimized time gives the speed-up factor: 23.5 seconds / 4.5 seconds ≈ 5.22, meaning pages now load at roughly 522% of the original speed, an increase of about 422%. However you state it, that is a mind-blowing jump.

Anonymized captures from the Firefox developer tools network performance analysis are included below.

[Screenshots of the analysis]

SQL – Always use aggregate functions if you can

One might wonder what the title means. Well, it was a thunderbolt for me when I was trying to optimize some headless scripts. At Saturn we make heavy use of headless scripts written in PHP; some use MySQL, some use XML, and some others use memcache, in fact pretty much all of them use memcache. But that is not the point here.

As part of optimizations done a year back, in an attempt to multi-thread a cron job, we had switched it from sequential processing to single-row processing. This required a method getNextRow, which was passed a single parameter: the primary key of the table. The cron starts with a value of 0 (zero), and after each row is processed, its ID is supplied as the last processed one. The getNextRow had:


SELECT ID FROM [table] WHERE ID > [LAST_ID] ORDER BY ID ASC LIMIT 1;
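
The rest of the fix is behind the "Continue reading" link, but going by the title the rewrite presumably leans on an aggregate, along these lines (same placeholders as above):

-- MIN() over the indexed primary key with a range condition can be answered
-- straight from the index, returning the next ID to process.
SELECT MIN(ID) AS next_id FROM [table] WHERE ID > [LAST_ID];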

Continue reading “SQL – Always use aggregate functions if you can”

How I got MySQL 5.5 running on Ubuntu 10.04 Server

After referring to all sorts of documents online about getting the source deb and compiling, which always created one headache or another for me, we went with the Oracle-recommended method: downloading the binary from the official Oracle downloads. In the actual process I deviated here and there to suit my preferences. I untarred the binary to /opt/mysql-5.5.15-linux2.6-x86_64 and soft-linked it from /usr/local/mysql.

After creating /etc/my.cnf with my desired settings, I proceeded with mysql_install_db, which promptly failed complaining about libaio. Again a bit of googling found the answer: 'apt-get install libaio1'. After that the install db went straight through. Next, the startup script: I copied [mysql-base-dir]/support-files/mysql.server to /etc/init.d/mysql and issued update-rc.d mysql defaults. All was well and the server started smoothly. But Ubuntu had no path entry for finding the binaries, so I took the easy way out with ln -s /usr/local/mysql/bin/* /usr/bin/. Since we had been using the stock Ubuntu mysql-server and client for a long time, most of our shell scripts also expect the binaries at that path.
Continue reading “How I got MySQL 5.5 running on Ubuntu 10.04 Server”

MySQL: Optimize all tables; Run using cron

I was looking for an optimize-table script and finally stumbled upon some code that could be salvaged into a shell script. The full code is reproduced here for future reference.


#!/bin/bash
 
#-- I usually add the root username and password to the ~/.my.cnf
MYSQL_CMD='/usr/bin/mysql '
 
#-- Ask information_schema for every table outside the system schemas that
#-- reports reclaimable space, and write one OPTIMIZE TABLE line per table.
$MYSQL_CMD -e "SELECT concat('OPTIMIZE TABLE ',TABLE_SCHEMA,'.',TABLE_NAME,';') as cmd FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('information_schema','mysql') AND Data_free > 10 AND NOT ENGINE='MEMORY' INTO OUTFILE '/tmp/cmd.sql'"
 
#-- Run the generated statements, then remove the temp file so the next
#-- run's INTO OUTFILE does not fail on an existing file.
$MYSQL_CMD < /tmp/cmd.sql
rm -f /tmp/cmd.sql

Using Bash to take incremental backups on MySQL database

The backup plan was to have a full backup of data and structure every Sunday, then the difference of the database since that Sunday on every other day, and to rotate the backups every fourth week.

At Saturn, in the development labs, we restructured the development MySQL setup by logically splitting out or combining projects and spreading them across four MySQL instances running on a dedicated server. Later we felt that weekly backups of the MySQL data folder were not sufficient for some of our projects, and for those we decided on a rotational backup following the plan above. The structure backup was to be taken with mysqldump and the options --routines --triggers --no-data --compact, whereas the data backup was to be taken as tab-separated values using SELECT ... INTO OUTFILE. We had specific reasons for choosing these methods, as well as for taking data and structure separately.
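
The post does not show the actual statements, but the data side of such a differential dump could look roughly like this; the table, column, path, and date below are all assumptions for illustration:

-- Rows touched since the last full backup on Sunday, dumped as
-- tab-separated values (INTO OUTFILE's default format, spelled out here).
SELECT *
  FROM orders
 WHERE updated_at >= '2011-08-14 00:00:00'  -- the last full-backup Sunday
  INTO OUTFILE '/backups/week3/orders.tsv'
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n';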

Continue reading “Using Bash to take incremental backups on MySQL database”

Getting datetime type into JavaScript as Date object

When selecting a datetime to be displayed in a JavaScript UI library, select UNIX_TIMESTAMP(column) * 1000 in the SQL

This is nothing new and is probably discussed in plenty of other places, but like everything else I come across, I just wanted to make a record of it.

When selecting a datetime to be displayed in a JavaScript UI library, select UNIX_TIMESTAMP(column) * 1000 in the SQL, since JavaScript's Date constructor works in milliseconds.
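
A quick illustration with made-up table and column names:

-- UNIX_TIMESTAMP() returns seconds since the epoch; the * 1000 converts
-- that to the milliseconds the Date constructor expects.
SELECT id,
       UNIX_TIMESTAMP(created_at) * 1000 AS created_at_ms
  FROM events;

On the frontend it is then simply new Date(row.created_at_ms).
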
Continue reading “Getting datetime type into JavaScript as Date object”

Session TimeZones and DateTime fields

I was pondering how to coax MySQL (and MSSQL) into automagically selecting local time if a session variable was set with the required time zone

Today I was pondering how to coax MySQL (and MSSQL) into automagically selecting local time when a session variable holds the required time zone. We arrived at the conclusion that we could not do that on DATETIME fields in MySQL. Worse, the equivalent field in MSSQL was a varchar(19), the outcome of an import from MySQL using a connection string and a stored procedure. Dismayed, we admitted defeat. Later, over lunch, it dawned on me that the built-in functions could do the conversions methodically.
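
The built-in function in question would presumably be CONVERT_TZ(); a small sketch with invented names, assuming the DATETIME values are stored in UTC:

-- SET time_zone changes how TIMESTAMP columns are rendered, but DATETIME
-- values come back exactly as stored, hence the explicit conversion.
-- Named zones need the mysql time zone tables to be loaded.
SET @session_tz = 'Asia/Kolkata';  -- e.g. kept per login session
SELECT order_id,
       CONVERT_TZ(created_on, 'UTC', @session_tz) AS created_local
  FROM orders;
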
Continue reading “Session TimeZones and DateTime fields”

Migration from MySQL to MSSQL – Quick Tip

With the help of some colleagues, I could export all the indexes and primary keys we were using on a MySQL table as MSSQL-compatible TSQL, using a union query on the MySQL information_schema.

I did the migration to MSSQL using some tips found online and by calling a stored procedure after connecting directly to the MySQL server. I have actually forgotten how it was done; anyway it does not matter for this post, and I will add it as a comment later on. The important thing is that the method did not import any of the indexes or primary keys. We manually created those that were absolutely necessary to roll out the project. Now that we are about to release a service patch, one that includes far more demanding search methods, the indexes are absolutely necessary.
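
The exact union query is not reproduced in the post, but the secondary-index half of it would look something like this (the schema name is an example); primary keys would need a similar branch emitting ALTER TABLE ... ADD CONSTRAINT ... PRIMARY KEY:

-- Generate one T-SQL CREATE INDEX statement per secondary index, with the
-- columns in index order, from MySQL's information_schema.
SELECT CONCAT('CREATE ',
              IF(NON_UNIQUE = 0, 'UNIQUE ', ''),
              'INDEX [', INDEX_NAME, '] ON [', TABLE_NAME, '] (',
              GROUP_CONCAT(CONCAT('[', COLUMN_NAME, ']')
                           ORDER BY SEQ_IN_INDEX SEPARATOR ', '),
              ');') AS tsql
  FROM information_schema.STATISTICS
 WHERE TABLE_SCHEMA = 'mydb'
   AND INDEX_NAME <> 'PRIMARY'
 GROUP BY TABLE_NAME, INDEX_NAME, NON_UNIQUE;
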
Continue reading “Migration from MySQL to MSSQL – Quick Tip”

MySQL vs MsSQL – comparisons never end

Search for the title and you will land on several sites, each telling you the views of its author; in fact there are a lot of people who will tell you that one outperforms the other. But what does that really mean? If one knows how to optimize the database server, the SQL, and the code behind it, the performance of either can be tuned to the point where both are at par. Granted, that may be a far-fetched view. But just think of the bare-minimum hardware needed to handle 300 tables, fewer than 50 of them with a million rows and the rest holding master data: the MSSQL server needs at minimum a dual core with 8G of memory, whereas MySQL would run on a 2G virtual machine and even outperform it. Then think of SELECT ... LIMIT X,Y and GROUP_CONCAT, and the advantages they bring for server-side pagination as well as for selections that output JSON directly for AJAX frontends.
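
To make those two conveniences concrete (schema invented for the example):

-- Server-side pagination: page 3 at 20 rows per page.
SELECT id, name
  FROM products
 ORDER BY id
 LIMIT 40, 20;

-- GROUP_CONCAT handing a ready-made JSON array to the AJAX frontend
-- (real code would also escape any quotes inside name).
SELECT CONCAT('[',
              GROUP_CONCAT(CONCAT('{"id":', id, ',"name":"', name, '"}')
                           ORDER BY id SEPARATOR ','),
              ']') AS products_json
  FROM products
 WHERE category_id = 7;
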
Continue reading “MySQL vs MsSQL – comparisons never end”