There is a new kid on the block: Excel 2013. So it's time to see how it shapes up for VBA performance against its predecessors. Along the way I will try to answer these questions:
How does Excel 2013 VBA performance compare with previous versions?
Is 64-bit VBA faster or slower than 32-bit VBA?
Is there an optimum block size (number of cells) for getting cell data with VBA or C++?
How much faster is C++ than VBA when reading data from cells?
Excel 2013 VBA Benchmark
Frequently the bottleneck in VBA processing time is reading and writing to Excel cells, so my Variant_Benchmark times this for a variety of block sizes.
It turns out the relationship between time taken and the number of cells you get in a single block is pretty much linear over small ranges:
Time = overhead + number_of_cells * Time_per_cell
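A minimal sketch of this kind of timing harness (my own illustration, not the actual Variant_Benchmark code) shows where the numbers come from. It uses VBA's built-in Timer, which has coarse resolution, so the read is repeated to get a measurable interval:

```vba
' Sketch of a block-read timing loop (illustration only, not the real
' Variant_Benchmark). One Range.Value2 call fetches all 512 cells at once.
Sub TimeBlockRead()
    Const REPS As Long = 1000
    Dim vArr As Variant
    Dim t As Double
    Dim j As Long

    t = Timer                       ' coarse resolution, hence the repeats
    For j = 1 To REPS
        vArr = Worksheets("Sheet1").Range("A1:A512").Value2
    Next j
    Debug.Print "512-cell block read: " & _
                Format$((Timer - t) / REPS * 1000, "0.000") & " ms"
End Sub
```

Subtracting the time for a one-cell read from the time for an n-cell read gives the per-cell slope in the linear model above, with the remainder being the fixed call overhead.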
Running this test on my Dell Studio XPS desktop (2.9 GHz) for all the versions of Excel I currently have installed gives this table (slightly over-coloured by the Excel 2013 Quick Analysis tool – but Quick Analysis is great for charting and formatting idiots like me).
The first two columns give times in milliseconds for reading and writing a single block of 512 cells, and the last two columns give times in milliseconds for reading and writing a single cell 512 times.
Two things are immediately obvious:
- Reading and writing cell-by-cell is at least an order of magnitude slower than reading and writing a large block of cells in a single call.
- Writing data back to Excel is much slower than reading data from Excel.
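Both effects can be seen with a sketch like the following (again my own illustration): the cell-by-cell loop pays the per-call overhead 512 times, while the block write pays it once.

```vba
' Illustration: writing 512 values cell-by-cell versus in one block.
Sub CompareWrites()
    Dim vArr(1 To 512, 1 To 1) As Variant
    Dim i As Long
    Dim t As Double

    For i = 1 To 512
        vArr(i, 1) = i
    Next i

    ' Slow: one COM call per cell - 512 separate transfers
    t = Timer
    For i = 1 To 512
        Worksheets("Sheet1").Cells(i, 1).Value2 = vArr(i, 1)
    Next i
    Debug.Print "Cell-by-cell write: " & Format$((Timer - t) * 1000, "0") & " ms"

    ' Fast: one COM call transfers the whole 512-cell array
    t = Timer
    Worksheets("Sheet1").Range("B1:B512").Value2 = vArr
    Debug.Print "Block write: " & Format$((Timer - t) * 1000, "0") & " ms"
End Sub
```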
Looking at the variations by Excel version, it is interesting to see that Excel 97 is the fastest version cell-by-cell but the slowest block-by-block.
Read times for a 512-cell block have been decreasing since XL 2003, and the current champion reader is Excel 2013 64-bit.
But write times for a 512-cell block have been increasing since XL 2000 – which remains the champion writer.
Looking at cell-by-cell you get this bar chart:
Reading and writing cell-by-cell just goes from bad (Excel 97) to worse (Excel 2010 32-bit).
But it's interesting to note that the 64-bit versions are faster than the 32-bit versions for 512-cell blocks, but slower cell-by-cell.
Is there an optimum Block size for VBA data transfer?
Many Excel versions ago (Excel 97?) I did an analysis which showed that if you increased the number of cells being read into the variant array too far, the time taken started to increase disproportionately. So I thought I would look at this again with the latest Excel versions.
So far I have not detected any decrease in efficiency: you can use as large a block of cells as you like (subject to memory limits of course).
What about C++ XLLs?
The XLL equivalent of transferring data from a range of Excel cells to a variant array is the Coerce function (xlCoerce). This has the additional benefit of letting you know if the attempt failed because of uncalculated cells.
And XLL Coerce is faster than VBA, by a factor of 2 or more.
Coerce does seem to have an optimum block size: beyond about 1000 cells the time taken suddenly jumps by 25 to 30% in the 32-bit versions. This does not happen in XL 2013 64-bit.
Conclusions
- Read/write using large Ranges is much more efficient than cell-by-cell
- Excel 2013 VBA read/write speed is comparable to Excel 2010
- The 64-bit versions are faster for large ranges than the 32-bit versions
- VBA does not currently appear to have an optimum block size: the larger the range, the better.
- XLL Coerce is a lot faster than VBA
- XLL Coerce 32-bit has an optimum block size just under 1000 cells
I have not tested .NET Interop with Excel 2013, but I expect it's still the same depressing performance story. If you know anything different please let me know!