“97% of the time, premature optimization is the root of all evil”
I think I first read this quote, attributed to Tony Hoare, in McConnell's “Code Complete,” and I have thought about it many times when doing code reviews, during my own programming, and when talking with others about programming problems they’re trying to solve.
In a nutshell, Hoare meant that in the early stages of a programming project, you shouldn't worry about micro-optimizing for performance, but rather focus on overall system design.
Quite often in FileMaker, there are multiple ways to do the same thing. While one might be faster than the other, the speed difference is often measured in milliseconds, so worrying about which one is faster can be a waste of time or budget.
However, with more and more of our applications being accessed over the internet and the server being ‘in the cloud’, performance optimization has become a task we’ve returned to more often. In this post, I recount a few areas where we’ve gained significant performance improvements during code review sessions. In a follow-up blog post, I will discuss Perform Script on Server (PSOS), a capability introduced in FileMaker 13 that can be an important tool for performance optimization. (See a section of a recent article from Kate on things to watch out for with PSOS.)
We were recently trying to improve performance in two solutions that were originally written and used over local area networks, but have gradually become almost completely accessed via wide area networks. They are also both solutions that have been around a while, in one case well over 10 years, so there is ample scripting written in older versions of FileMaker.
These examples are what I would call ‘low-hanging fruit’ (with the possible exception of using SQL). They don’t require a degree in computer science or advanced mastery of FileMaker to understand and use.
Triggers are quite powerful, and I was delighted when they were introduced in FileMaker 10, but one downside is that they can significantly slow down your solution. In the old days, it was fairly safe to run a script that looped through a set of records without thinking a great deal about the layout you were on. With script triggers, that can be a performance killer. If you have a script trigger that takes 15 milliseconds to execute, no one will ever notice it's there, but if you loop through 700 records on a layout with an OnRecordLoad script trigger attached to it, you’d introduce a delay of over 10 seconds. Not so bad really, since you would notice that delay and fix the problem, but what if the introduced delay were only 2 or 3 seconds? Your solution would get a bit more sluggish, and perhaps no one would realize the slowdown was completely unnecessary.
We typically create an ‘internal’ layout for each base table, with no fields on it, that can be used in any situation where you want to be sure script triggers aren’t running. We have a strict rule that no fields or triggers are ever added to this layout. If you need fields on the layout for some reason, a separate ‘utility’ layout is created instead.
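In script form, the pattern is simple. A minimal sketch (the layout and field names here are hypothetical, and note that Set Field works even when the field isn’t on the current layout):

```
# Batch processing on a trigger-free ‘internal’ layout
Go to Layout [ “CUS_internal” (Customers) ]   # no fields, no script triggers
Go to Record/Request/Page [ First ]
Loop
    Set Field [ Customers::Status ; "Processed" ]   # works without the field on the layout
    Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
Go to Layout [ original layout ]
```

Because the layout has no fields and no triggers, each Go to Record step does no unnecessary work beyond moving the pointer.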
An additional helpful technique is to have every script trigger check a global on/off variable so that you can control trigger execution at will. This is often useful, but it is still a bit slower than using a layout with no script triggers at all.
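One way to sketch that convention (the variable name $$TRIGGERS_OFF is our own, not a FileMaker built-in) is to have every trigger script bail out immediately when the flag is set:

```
# At the top of every script called by a trigger:
If [ $$TRIGGERS_OFF ]
    Exit Script [ Text Result: True ]   # True lets the triggering event proceed
End If
# ...normal trigger logic...

# In a batch script that needs triggers suppressed:
Set Variable [ $$TRIGGERS_OFF ; Value: True ]
# ...loop through records...
Set Variable [ $$TRIGGERS_OFF ; Value: "" ]   # clear the flag when done
```

The trigger still fires on every record, which is why this approach is slower than a trigger-free layout, but the script exits after a single If test.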
I’ve been known to occasionally throw a Refresh Window script step into a script as a last-ditch effort to get data to display the way I want when it inexplicably refuses to do so, sometimes with the Flush cached join results option checked and sometimes not. Generally, I remember to remove the Refresh Window step if I don’t need it, but not always. An unnecessary refresh, especially with the Flush cached join results option checked, can slow things down. When refactoring, hopefully you won’t find a looping script that’s using a layout with a script trigger and also unnecessarily running a Refresh Window script step. It can be a little embarrassing. Third-party tools like Inspector, BaseElements, FM Assistant, etc. can really help here by giving you a list of all Refresh Window uses so they can be reviewed.
When working in FileMaker, I’m afraid I still don’t instinctively think to use SQL, and as a consequence I don’t always make a considered decision about whether an old-fashioned ‘Find’ is the best choice or whether ‘Execute SQL’ might be better.
There are many cases where SQL can help you avoid complicating your relationship graph or changing context, or can perform queries that would be awkward as a traditional Find, or that would require multiple Finds to gather the same data.
When using SQL, don’t forget to abstract the table and field names in your SQL code. Hard-coding table and field names is a good way to make your database more difficult to maintain over time.
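As a sketch of that abstraction (the Invoices and Customers names here are hypothetical), GetFieldName returns the field’s current name, so the query keeps working even if a table or field is renamed. Data values go in as ? parameters rather than being concatenated into the query:

```
// Count a customer’s open invoices without adding a relationship
Let ( [
    ~fq    = GetFieldName ( Invoices::CustomerID ) ;        // e.g. "Invoices::CustomerID"
    ~table = GetValue ( Substitute ( ~fq ; "::" ; ¶ ) ; 1 ) ;
    ~field = GetValue ( Substitute ( ~fq ; "::" ; ¶ ) ; 2 )
] ;
    ExecuteSQL (
        "SELECT COUNT(*) FROM \"" & ~table & "\" WHERE \"" & ~field & "\" = ?" ;
        "" ; "" ; Customers::ID
    )
)
```

Wrapping this in a custom function keeps the boilerplate out of individual calculations.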
FileMaker gets better with each new version, we (hopefully) become better developers over time, and business rules change. Refactoring is therefore an important, ongoing part of maintaining an application so that it lasts a long time and performs optimally.
We were recently working on speeding up an app that was moving to the cloud and found that we were able to remove several script triggers, and entirely delete the associated scripts, simply because FileMaker 15 introduced the Top Navigation layout part. In an earlier version, we had been emulating that behavior as best we could without this feature, and the performance improvement from switching to the Top Navigation part was quite significant in some situations.
The recently released FileMaker 16 introduces a number of ways to simplify your code that you will likely use when refactoring in the future. New features like native JSON functions and the new list functions (UniqueValues and SortValues) cry out to be used to replace old script parameter passing methods and old custom functions for working with lists.
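For example (the script and field names below are hypothetical), a JSON script parameter can replace home-grown name/value parsing, and the new list functions replace custom functions many of us have carried around for years:

```
# Calling script — pass named parameters as JSON (FileMaker 16+):
Perform Script [ “Process Order” ;
    Parameter: JSONSetElement ( "{}" ;
        [ "orderID" ; Orders::ID ; JSONString ] ;
        [ "mode" ; "rush" ; JSONString ]
    ) ]

# Called script — read a parameter back by name:
Set Variable [ $orderID ; Value: JSONGetElement ( Get ( ScriptParameter ) ; "orderID" ) ]

# New list functions — de-duplicate and sort in one calculation:
Set Variable [ $categories ; Value: SortValues ( UniqueValues ( List ( Items::Category ) ) ) ]
```

Named parameters also mean a called script no longer breaks when the order of its parameters changes.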
This is by no means an exhaustive study of FileMaker performance optimization. Some obvious topics have been left out entirely (field indexing, unstored calculations, poorly designed schema, summary fields, etc.), but it might give you some ideas for going back and tuning up your solution.