At work, I rolled out a crucial fix for the Postgres database: the Invoicing program was originally never meant to work in a real multi-user environment. This shows particularly in a couple of statements used to generate a unique number for tables that have a primary index on that number. One of them looked like this:
SELECT * FROM atable
ORDER BY uniquenumber DESC
The program then extracted the ‘uniquenumber’ value from the first row, incremented it by one, and passed it to an INSERT statement that added a new row to that same table.
Think about it. The table is slowly growing, and the larger it gets, the longer it takes to fetch that next unique number. Secondly, retrieving the complete table (ASTERISK, dammit) and pumping it into memory doesn’t help either. Now imagine two users trying to add a new row at roughly the same time: who gets which number, and is it really unique?
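The race plays out something like this (table and column names are illustrative, taken from the example above):

```sql
-- Session A: SELECT * FROM atable ORDER BY uniquenumber DESC;  -- sees max = 41
-- Session B: SELECT * FROM atable ORDER BY uniquenumber DESC;  -- also sees max = 41
-- Session A: INSERT INTO atable (uniquenumber) VALUES (42);    -- succeeds
-- Session B: INSERT INTO atable (uniquenumber) VALUES (42);    -- duplicate key error
```

Neither session did anything wrong in isolation; the read-then-write gap is the whole problem.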
So we ended up with ‘primary index errors’ (duplicate key violations), and recently they appeared frequently. I took the problem seriously and started rolling out Postgres ‘sequences’. This brought two advantages: sequences are multi-user safe, because the database takes care of it all; and I got rid of the complete SELECTs, which made a couple of processes ridiculously fast.
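A minimal sketch of the switch, assuming the same illustrative table and column names as above:

```sql
-- Create a sequence and seed it past the highest existing number:
CREATE SEQUENCE atable_uniquenumber_seq;
SELECT setval('atable_uniquenumber_seq',
              (SELECT max(uniquenumber) FROM atable));

-- Each caller now gets its own number atomically; no full-table scan, no race:
INSERT INTO atable (uniquenumber) VALUES (nextval('atable_uniquenumber_seq'));

-- Or make it the column default, so inserts need not mention the number at all:
ALTER TABLE atable
  ALTER COLUMN uniquenumber SET DEFAULT nextval('atable_uniquenumber_seq');
```

nextval() never hands the same value to two sessions, even concurrent ones, which is exactly the guarantee the old SELECT-and-increment pattern lacked.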
On a related note