Recently, we had a client request to export their FileMaker solution into CSV files with field names. The solution had a sizable number of tables; while the export was doable manually, it inspired me to look for a more automated method. Much to my dismay, no existing script or module fit our needs, but I learned quite a bit about FileMaker exporting in the process.
At first, I wondered how hard it would be to write a script myself to automatically loop through all the tables in a file and export all records. I quickly ran into a number of roadblocks to this, such as:
Where do I get a list of all table names in the file?
Exporting requires a layout with context for the table; how do I export tables that have no layouts?
How do I assign fields to export without the export dialog?
How do I add field names to the file?
The task looked more and more hopeless the more I searched for solutions: exporting to an Excel file adds field names, but the Export Records script step cannot export to Excel, and the Save Records as Excel script step only exports the fields on the current layout. While there seems to be no good way to use the export script steps in a modular script, I did find out that Merge files are a good alternative to CSV that do include field names.
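For reference, a Merge file is essentially a comma-separated text file whose first row contains the field names, saved with a .mer extension. A small made-up example (field and record values invented for illustration) might look like:

```
"FirstName","LastName","City"
"Ada","Lovelace","London"
"Grace","Hopper","Arlington"
```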
Thanks to FileMaker 18's new data file script steps, a simple solution presented itself: we can create and write as many data files as we need, as long as we can gather the data to put in them. I then set out to find solutions to the other roadblocks, starting with a list of all tables in the file. What about the TableNames function? It seems plausible, but it returns a list of all table occurrences in our file; exporting from each occurrence would work, but it would result in redundant files from occurrences based on the same base table. Some searching led me to a wonderful solution in the form of a custom function found here. It makes use of ExecuteSQL and queries one of two tables I wasn't aware could be queried, "FileMaker_Tables" and "FileMaker_Fields".
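To illustrate, here are the kinds of queries these system tables support. The column names (BaseTableName, TableName, FieldName) come from FileMaker's documented system tables; the "Customers" table name is just a placeholder:

```
// Unique base tables — DISTINCT collapses multiple occurrences of the same base table
ExecuteSQL ( "SELECT DISTINCT BaseTableName FROM FileMaker_Tables" ; "" ; "" )

// All field names in one table ("Customers" is a placeholder table name)
ExecuteSQL ( "SELECT FieldName FROM FileMaker_Fields WHERE TableName = ?" ; "" ; "" ; "Customers" )
```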
This post on Databuzz gives a more in-depth look at this, but in short, these tables store all of the table and field schema data of your file. With this we now have two additional pieces of our script: a list of all the tables we need to iterate through, and the ability to get a list of all fields in any given table. Finally, since ExecuteSQL can query any table regardless of context, we now have all the data required to get our script running. There is one last issue to address, however: commas and returns in our data. If we pull the data as is, any commas or returns in any given field will skew the alignment of our CSV file. Adding quotation marks around our delimiters, as well as at the start and end of the whole dataset, clears this problem but introduces a new one: quotes in your data. I solved this by performing a Substitute on my dataset to remove all quotes from the data; depending on the data you are working with, this might not be an ideal solution.
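Putting the pieces together, the heart of such a script can be sketched roughly as follows. This is my own reconstruction rather than the exact demo script: the variable names and output path are placeholders, and it assumes no field data contains tab characters (the temporary delimiter used here):

```
Set Variable [ $tables ; Value: ExecuteSQL ( "SELECT DISTINCT BaseTableName FROM FileMaker_Tables" ; "" ; "" ) ]
Set Variable [ $i ; Value: 0 ]
Loop
  Set Variable [ $i ; Value: $i + 1 ]
  Exit Loop If [ $i > ValueCount ( $tables ) ]
  Set Variable [ $table ; Value: GetValue ( $tables ; $i ) ]

  # Header row: field names, quoted and comma-separated
  # (one column per result row, so the ROW separator joins the names)
  Set Variable [ $header ; Value: "\"" & ExecuteSQL ( "SELECT FieldName FROM FileMaker_Fields WHERE TableName = ?" ; "" ; "\",\"" ; $table ) & "\"" ]

  # Record data: strip quotes from the raw values, then add quotes around the delimiters
  Set Variable [ $rows ; Value: ExecuteSQL ( "SELECT * FROM \"" & $table & "\"" ; Char ( 9 ) ; ¶ ) ]
  Set Variable [ $rows ; Value: Substitute ( $rows ; [ "\"" ; "" ] ; [ Char ( 9 ) ; "\",\"" ] ; [ ¶ ; "\"¶\"" ] ) ]
  Set Variable [ $csv ; Value: $header & ¶ & "\"" & $rows & "\"" ]

  # FileMaker 18 data file script steps
  Set Variable [ $path ; Value: Get ( DocumentsPath ) & $table & ".csv" ]
  Create Data File [ "$path" ; Create folders: Off ]
  Open Data File [ "$path" ; Target: $fileID ]
  Write to Data File [ File ID: $fileID ; Data source: $csv ; Write as: UTF-8 ]
  Close Data File [ File ID: $fileID ]
End Loop
```

Note that standard CSV escapes embedded quotes by doubling them rather than removing them; swapping the first Substitute pair for [ "\"" ; "\"\"" ] would preserve the data if your consuming application supports that convention.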
You can freely explore and use the script by downloading the demo file below. The script can be run from the scripts menu or by adding it to a button. It can be run immediately after importing into another file without any changes.
If you have any questions about this topic or have issues implementing the functions in this post, leave a comment below or contact us.