Archive for the ‘XLL’ Category

SpeedTools now Live – but more Feedback needed!

April 9, 2013

SpeedTools Beta 3 is now complete and SpeedTools is live. You can download the 30-day trial from here, or purchase a license from here.

If you have not already submitted your feedback on SpeedTools it's not too late to win a SpeedTools coffee mug and a free license!

SpeedTools AVLOOKUP2 & MEMLOOKUP versus VLOOKUP – Performance, Power and Ease-of-Use Shootout Part 2

February 27, 2013

In part 1 I looked at how FastExcel SpeedTools MEMLOOKUP and AVLOOKUP2 compared to VLOOKUP and INDEX/MATCH for ease of use and power.
This post will benchmark the performance of the SpeedTools lookups against the standard Excel functions.

You can download a free 30-day trial of SpeedTools from the Decision Models website.

The test workbooks are VLOOKUP2.xlsx and MEMLOOKUP2.xlsx in the downloadable file MemLookup2.zip.

The LOOKUP Dependency Problem

A problem with all Excel LOOKUP formulas is that if even one of the values in the lookup table changes, every single LOOKUP formula that refers to that table gets recalculated, even though most of them will return a completely unchanged answer. When you have large amounts of data (tens or hundreds of thousands of rows) this can be very slow.

Exact Match with Sorted Data

SpeedTools MEMLOOKUP and AVLOOKUP2 both use a variation of the superfast binary search algorithm that can do exact match searches on sorted data. You can make Excel’s VLOOKUP do a similar thing by using two VLOOKUPs and an IF (see Why 2 VLOOKUPs are better than 1 VLOOKUP).

If you sort the data in the test workbook and use the sorted data option it takes about 0.14 seconds to do 20000 MEMLOOKUPs on 70000 rows on my system. This compares with about 4.25 seconds to do the same thing with VLOOKUP using the VLOOKUP exact match option. (The 2 VLOOKUPS trick is faster than MEMLOOKUP but more complicated!).
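The sorted-data fast path can be sketched in Python (my own illustration, not the SpeedTools code): a binary search locates the candidate row, and a single equality check turns the approximate match into an exact one, which is the same idea as the two-VLOOKUP trick.

```python
from bisect import bisect_left

def exact_match_sorted(lookup_value, sorted_keys, result_column):
    """Exact match on sorted data via binary search.

    Mirrors the two-VLOOKUP trick: an O(log n) approximate search
    followed by one equality check to reject near-misses.
    """
    i = bisect_left(sorted_keys, lookup_value)      # binary search descent
    if i < len(sorted_keys) and sorted_keys[i] == lookup_value:
        return result_column[i]                     # genuine exact match
    return None                                     # value not in the table

keys = ["Alice", "Ben", "Carol", "Dan"]             # sorted lookup column
vals = [10, 20, 30, 40]                             # answer column
exact_match_sorted("Carol", keys, vals)             # → 30
exact_match_sorted("Zoe", keys, vals)               # → None (no match)
```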

Exact Match with Unsorted Data

But if your data is not sorted you are stuck with doing a slow linear search from the start until a match is found. The VLOOKUP2.xlsx file has 20000 VLOOKUPs on a lookup table with 70000 rows.
This calculates in 4.25 seconds on my desktop system (Intel i7 quad core 870 2.93GHz with 4 GB RAM, using Excel 2013 32-bit and Windows 7). This is actually quite fast when you consider that Excel has to make about 1100 million comparisons (so that's 258 MXIPS – Million eXcel Instructions Per Second).

But if you do exactly the same thing (see test workbook MEMLOOKUP2.xlsx) using SpeedTools MEMLOOKUP it only takes 0.12 seconds! That's about 35 times faster.

So how does it work?

The MEMLOOKUP and AVLOOKUP family of functions are implemented using a multi-threaded C++ XLL. This is the fastest available technology for extending Excel’s function library, and allows the functions to support all the Excel versions from Excel 2013 64-bit to Excel 2000.

Using Lookup Memory with MEMLOOKUP and AVLOOKUP2

MEMLOOKUP and AVLOOKUP2 store in memory the index of the lookup result for each row.
So suppose for the MEMLOOKUP on row 3 the result was found in the 47th row of the lookup table. Then MEMLOOKUP stores in memory 47 for row 3.
At the next recalculation of that formula MEMLOOKUP first looks in the memory, finds 47 and checks if the lookup column row 47 still gives the correct result.
If it does then MEMLOOKUP returns the result from the answer column of row 47 in the lookup table.
If row 47 no longer gives the correct result (because the data in the lookup column on that row in the lookup table has changed) then MEMLOOKUP does a full lookup.

This is a fail-safe and very efficient process.
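The memory mechanism described above can be sketched like this (a Python illustration of the idea; the names and the linear-search fallback are mine, not the actual XLL internals):

```python
def linear_search(value, column):
    """Full lookup: scan the lookup column from the top."""
    for i, v in enumerate(column):
        if v == value:
            return i
    return None

memory = {}  # formula row -> remembered index into the lookup table

def mem_lookup(row, value, lookup_col, answer_col):
    """Try the remembered index first; fall back to a full lookup."""
    i = memory.get(row)
    if i is not None and lookup_col[i] == value:    # memory still valid?
        return answer_col[i]                        # fast path, no search
    i = linear_search(value, lookup_col)            # data changed: full lookup
    if i is None:
        return None                                 # no match at all
    memory[row] = i                                 # remember for next recalc
    return answer_col[i]

table_keys = ["ant", "bee", "cat"]
table_vals = [10, 20, 30]
mem_lookup(1, "cat", table_keys, table_vals)        # full search, remembers index 2
mem_lookup(1, "cat", table_keys, table_vals)        # → 30, straight from memory
```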

Built-in Optimisation

If (as often happens) you have more than one lookup on the same row returning data from different columns then the lookup memory can be reused for the subsequent lookups. This built-in optimisation is similar to creating an extra MATCH helper column with several INDEX formulas referring to it, but is much simpler and more automatic.

Memory is Stored with the Workbook

The lookup memory is automatically stored and retrieved with the workbook so that when you reopen a workbook your MEMLOOKUP and AVLOOKUP2 formulas will reuse the lookup memory from the previous calculation.

Memory Type Options

SpeedTools has options for 4 different kinds of lookup memory:

• Book-Sheet-Row memory (default option): This option stores the index separately for each Workbook, Worksheet and row. This works well unless you are using lookups on multiple tables within the same row on a worksheet.
• Named Memory: this option stores the index separately for each combination of Name, Workbook and row. Usually you would use the same name for the memory as the lookup table. This allows for optimising the re-use of the lookup memory across all the worksheets in a workbook for each lookup table, and for multiple lookups on different tables within a single formula.
• Global memory for rows or columns: This option stores the index globally for each row or column so that it can be re-used across all open workbooks and worksheets. This is the most efficient option for a single lookup table.
• Book-Sheet-Cell memory: this option provides the most tightly scoped memory.

Summary

The SpeedTools MEMLOOKUP and AVLOOKUP family of Lookup functions provide significant performance advantages compared to the standard Excel lookup functions, together with enhanced ease of use and extended capability.

Please try them out and let me know what you think.

SpeedTools AVLOOKUP2 & MEMLOOKUP versus VLOOKUP – Performance, Power and Ease-of-Use Shootout Part 1

February 27, 2013

It's time for some performance tests to see how the new functions in SpeedTools stack up against the standard Excel functions. First up is MEMLOOKUP and AVLOOKUP2 vs VLOOKUP!

SpeedTools Lookups are easier to use, more powerful and less error prone than VLOOKUP or INDEX/MATCH

Having the right Default Parameters helps Ease-of-Use

Most people want their LOOKUPs to tell them when the thing they are looking for does not exist in the lookup table. And most of the time people are working with unsorted data.
Unfortunately VLOOKUP’s default settings don’t do that: it defaults to trying to give you an approximate match on sorted data.

So if you use the VLOOKUP defaults you will probably get the wrong answer!
MEMLOOKUP always does an exact match, even with sorted data (but it will still use fast binary search if you have sorted data and tell MEMLOOKUP about it).
AVLOOKUP2 also defaults to unsorted data and exact match, with an option for approximate match on sorted data if you are sure that's what you want.

Here is an example of VLOOKUP getting the wrong answers when using its defaults:

And here is the same example showing MEMLOOKUP getting the correct results.

Use both Exact Match and Approximate Match with Sorted Data

AVLOOKUP2 has separate parameters for sorted data and exact match, and can use the superfast binary search algorithm on sorted data for both exact match and approximate match.

Both MEMLOOKUP and AVLOOKUP2 allow you to use column labels from a header row instead of column numbers. This is easier to use and understand, and also makes your LOOKUP formulas more resistant to changes such as rearrangement of the data or extra columns appearing. And you can also use this to do two-dimensional lookups.
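Resolving a header label to a column index is straightforward to sketch (a hypothetical Python helper of my own, not the SpeedTools implementation):

```python
def resolve_column(col, header):
    """Accept either a 1-based column number or a header label.

    Labels survive column rearrangement; numbers do not.
    """
    if isinstance(col, int):
        return col - 1                 # Excel-style 1-based -> 0-based index
    return header.index(col)           # raises ValueError if the label is missing

header = ["ID", "Name", "Salary"]
resolve_column("Salary", header)       # → 2
resolve_column(2, header)              # → 1 (the "Name" column)
```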

Built-in Error Handling

AVLOOKUP2 allows you to specify what you want returned if no exact match can be found, avoiding the need for wrapping the LOOKUP inside an IFERROR function.

The lookup column does not have to be first

You can tell AVLOOKUP2 which column to use for the lookup using a column label or a column number.

Use multiple lookup columns without requiring slow, complicated concatenation or array formulas

AVLOOKUP2 makes it simple to use multiple lookup columns (you can use a constant array {“Jess”,”Ben”} or a range of cells).

Find the first, last, Nth or all Lookup matches

AVLOOKUPNTH extends AVLOOKUP2 with an extra parameter so that you can find the first, last or Nth match when you have duplicates, for text, numbers and dates etc.
AVLOOKUPS2 returns ALL the records that match the lookup criteria. You can use AVLOOKUPS2 either as a multi-cell array formula or embedded inside an aggregating function like MAX, SUM, MEDIAN etc.

Also MATCH, Case-Sensitive and Regular Expression Lookups

The SpeedTools Lookup family also includes variations for MATCH as opposed to LOOKUP, Case-Sensitive lookups and lookups using Regular Expressions.

Try it out yourself!

You can download a free 30-day trial of SpeedTools from the Decision Models website.

And you can also download a workbook VLOOKUP1.xlsx in MemLookup2.zip that contains all the examples used above.

February 5, 2013

FastExcel SpeedTools Beta 3

FastExcel SpeedTools Beta 3 is a state-of-the-art set of tools to help you speed up calculation of slow Excel workbooks.

Win one of 20 exclusive FastExcel SpeedTools coffee mugs plus a free SpeedTools License for the best 20 Beta Test reports submitted before the end of March 2013

The mug will look something like this!

Send your Beta Test reports to Charles@DecisionModels.com

Good beta test reports include the following:

• Windows Version used
• Excel Version used
• SpeedTools functions tested
• Bugs found (include enough information to enable duplication)
• Positive and negative comments (likes & dislikes)
• Documentation & Help file problems

You can also post in the SpeedTools Beta Test Google Groups Forum

Supercharge Excel’s calculation engine with SpeedTools

With FastExcel V3 SpeedTools you can calculate what you need, when you need, faster:

• 90 Powerful faster-calculating functions to unblock your calculation bottlenecks.
• New Calculation methods and modes give you greater control of calculation.
• FastExcel V3 high-resolution timers so that you can accurately compare and contrast the calculation performance of your formulae, UDFs, worksheets and workbooks.

Choose just the speed up components you need with 4 separate SpeedTools products.


Supports Excel 2013 and previous versions

• Fast Multi-threaded calculation with Excel Versions 2013 64 and 32 bit, 2010 64 and 32 bit, 2007
• Excel 2003, 2002, 2000 also supported
• Fast calculating functions written in C++ using the XLL interface
• Windows 8, Windows 7, Vista and XP

The SpeedTools FILTER.IFS Function Design Part 3: Excel Data Types – Strange COUNTIF behaviour

January 18, 2013

The previous post discussed Excel's data types and how FILTER.IFS was designed to handle them.

Colin Legg suggested that a good starting point for the design choices could be what COUNTIF/SUMIF do. So here is an example of some of the problems with COUNTIF, and what the equivalent SpeedTools function ACOUNTIFS does. (ACOUNTIFS uses the same filtering engine as FILTER.IFS).

Using COUNTIF with Number Strings

Suppose you have a list of zero-prefixed numbers, headed DATA, and you want to count how many of each of the numbers there are:
Each zero-prefixed number is unique apart from 0012345, which appears twice, in the second and third rows (rows 54 and 55).
So I created a COUNTIF formula to count the number of occurrences in the list for each number, using each of the different criteria operators.

If COUNTIF works correctly in this situation the answer should be {1;2;2;1;1}, but as you can see below it gets it wrong!

Each row in the table tries to count how many occurrences of the corresponding cell can be found using each relational operator.

So COUNTIF always gives 5 when using =, so I think it must be converting ALL the text in both the data and the criteria to numbers.
And < and > always give zero because COUNTIF thinks all the data cells contain the same thing (the number 12345).

But COUNTIF($A$50:$A$54,"<>" & $A50) also gives 5! This looks like a BUG to me.

Here is what the SpeedTools ACOUNTIFS function gives:

ACOUNTIFS treats the text numbers as text numbers and so gives what looks to me like a more “correct” answer for all the relational operators.
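My reading of the behaviour above can be sketched in Python: one counter coerces text numbers to floats (which reproduces COUNTIF's always-5 result for =), the other compares values type-strictly, in the spirit of ACOUNTIFS. This is a reconstruction from the observed results, not Microsoft's actual algorithm.

```python
def coerced_count(data, criterion):
    """Mimic COUNTIF's apparent behaviour: text numbers become floats."""
    def coerce(x):
        try:
            return float(x)            # "0012345" -> 12345.0
        except ValueError:
            return x                   # genuine text stays text
    c = coerce(criterion)
    return sum(1 for x in data if coerce(x) == c)

def strict_count(data, criterion):
    """Type-aware count in the spirit of ACOUNTIFS: text stays text."""
    return sum(1 for x in data if x == criterion)

data = ["0012345", "012345", "12345"]  # three distinct text numbers
coerced_count(data, "0012345")         # → 3: all coerce to 12345.0
strict_count(data, "0012345")          # → 1: only the exact text matches
```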

Conclusion

Using COUNTIF/SUMIF/COUNTIFS/SUMIFS with mixed data types looks very unwise to me!

But maybe you can figure out a way to make them work sensibly?

January 14, 2013

Excel Data Types

Excel has only 4 or 5 native data types:

• Numbers (which can be formatted as Dates, Times, Currency, Integers, Doubles etc, but are all held internally as floating point doubles)
• Strings (Text including zero length strings like “”)
• Booleans (True or False)
• Errors (#N/A, #DIV/0! etc)
• Empty (which annoyingly is only partly supported – for instance you can’t return it from a function or a formula)

You can format all these data types in lots of different ways so that they look different, but a Cell’s underlying value is always going to be one of these types.

And unlike most Database systems Excel allows the cells in a column to contain multiple data types.
This can lead to problems: the most frequent one being a column of numbers, some of which have been entered as text strings and some as real numbers. Usually you can spot them visually because the numbers stored as text are left-aligned in the cell and the real numbers are right-aligned.

Numbers as text can arrive in Excel in various ways:

• Typing a ‘ followed by the number
• Format the cell as text
• Data imported from external sources

Sorting Columns containing multiple data types

When Excel sorts data containing different data types it uses this relationship between types:

Numbers<Strings<Booleans<Errors

Empty cells are always sorted last, both in Ascending and Descending sorts!
When you sort data containing numbers stored as text strings Excel asks you if you want to sort text numbers as text or as numbers. Usually it's better to sort text numbers as text rather than risk confusing any subsequent operation that relies on things being properly sorted.
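The ascending sort order can be sketched as a Python key function (my own illustration; None stands in for an empty cell and strings starting with # stand in for error values):

```python
def excel_sort_key(value):
    """Sort key mimicking Excel's ascending type precedence:
    Numbers < Strings < Booleans < Errors, with empty cells last.
    (Excel puts empty cells last in descending sorts too, which a
    plain key function like this cannot express.)
    """
    if value is None:                              # empty cell: always last
        return (4, 0)
    if isinstance(value, bool):                    # must test before numbers:
        return (2, value)                          # bool is numeric in Python
    if isinstance(value, (int, float)):
        return (0, value)
    if isinstance(value, str) and value.startswith("#"):
        return (3, value)                          # crude error-value marker
    return (1, value.lower())                      # text, case-insensitive

cells = ["#N/A", "ABC", 3, True, None, 1]
sorted(cells, key=excel_sort_key)   # → [1, 3, 'ABC', True, '#N/A', None]
```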

Comparing Data Types

If you use a simple formula (=A6<A5) to compare data types you get this:

You can see that XYZ is >= the empty cell above it, but ABCD is <XYZ.
Numeric 1234 in A9 is less than text string 1234 in A8
A12 entered as a ‘ is a zero-length string and is > the number in A11.
The errors in A16:A18 propagate so you can’t see how Excel compares them.

So if you get rid of the error cells, sort the data and change the formula so it's looking for A6>A5 you get this:

So the formula comparison precedence rules are the same as the Excel sorting rules, except for empty cells!

FILTER.IFS Data Type Comparison Operators

It's useful to be able to filter by data type (although the standard Excel Filter command does not have this option), so I added some type filtering operators:

• #ERR – filters all the error cells
• #TXT – filters all the string/text cells
• #N – filters all the number cells
• #BOOL – filters all the True/False cells
• #EMPTY – filters all the empty cells
• #ZLS – filters all the cells containing a zero-length string
• #TYPE – filters all the cells that have the same data type as the first cell in the filtered range
• #BLANK – filters all the cells that contain one or more blanks or spaces

You can prefix these operators with ¬ to filter out everything that does NOT match the data type.

And you can have a list of multiple filtering operators: {“¬”,”#EMPTY”,”#ZLS”,”#BLANK”,”#ERR”} would exclude empty cells, cells with zero length strings or blanks, and cells with errors.

Here is an example:

The FILTER.IFS formula is =FILTER.IFS(0,$A$6:$B$17,2,1,D$5) entered as an array formula (Control-Shift-Enter) and copied across.

• The 0 says there are no sorted Criterion columns (because the type filters don’t care if the data is sorted or not).
• $A$6:$B$17 gives the range to be filtered
• 2 gives the column within the data range to be returned as the answer
• 1 gives the column within the data range to be filtered using the criterion
• D$5 gives the cell containing the criterion itself

Handling Data Types with the Relational Operators <,<=,>,>=,¬=

Suppose you create a FILTER.IFS formula like this: =FILTER.IFS(1,$A$6:$B$17,2,1,"<1235")

The criterion says less than 1235, but which 1235 – the numeric one or the string one or both?

I don’t think there is necessarily a “correct” answer to this, so I invented a rule!

If the Criterion value can be converted into more than one data type (in this case a string and a number) choose the same data type as the first cell in the column.

In this case the first cell is a number, so FILTER.IFS chose to use numeric 1235, which results in a single result, the 1 from row 6.

Because the data is sorted the binary search routine has to use a single datatype, so looking for both the string 1235 and the numeric 1235 is not an option.

But if the data is NOT sorted a linear search can find both: so you can tell FILTER.IFS to compare using ALL the available datatypes by using an & prefix.

=FILTER.IFS(0,$A$6:$B$17,2,1,"&<1235")

you get ALL the data which is less than numeric 1235 AND all the data that is less than string 1235.
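The first-cell rule and the & prefix can be sketched in Python (my reconstruction of the rule described above, not the FILTER.IFS source):

```python
def convertible_forms(criterion_text):
    """All the data types the criterion string could stand for."""
    forms = [criterion_text]                   # the string form
    try:
        forms.append(float(criterion_text))    # the numeric form, if any
    except ValueError:
        pass
    return forms

def filter_less_than(column, criterion_text, match_all_types=False):
    """Without the & prefix, keep only the form whose type matches the
    first cell in the column; with it, compare using every form."""
    forms = convertible_forms(criterion_text)
    if not match_all_types:
        first_type = type(column[0])
        forms = [f for f in forms if isinstance(f, first_type)]
    return [x for x in column
            for f in forms if type(x) is type(f) and x < f]

col = [1234.0, "1234", 1299.0, "1299"]         # first cell is a number
filter_less_than(col, "1235")                  # → [1234.0]
filter_less_than(col, "1235", match_all_types=True)  # → [1234.0, '1234']
```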

Of course, if you don’t use any of the Criterion operators, FILTER.IFS finds only the matching data type: =FILTER.IFS(1,$A$6:$B$17,2,1,"1235") or =FILTER.IFS(1,$A$6:$B$17,2,1,1235).

Conclusion

Using mixed data types with relational operators can be tricky – sometimes it's difficult to work out what Excel is doing.
A drawback of following the same kind of syntax as SUMIFS (a string containing both the relational operator and the value) is that there is no clear datatype choice.

But I was not sure that the previous FILTER.IFS design, which could give different results for sorted and unsorted data, made sense, so I changed it so that sorted and unsorted data give the same results and added the & prefix.

What do you think?

The SpeedTools FILTERIFS function: Design and Implementation Part 1

December 21, 2012

Excel users have been using SUMPRODUCT and array formulas to create multiple-condition formulas for many years. This is a powerful technique, but can be painfully slow with large amounts of data. Pivot Tables and Excel 2013’s PowerPivot can provide good solutions in some instances, and the introduction of SUMIFS in Excel 2007 gave a fast alternative for some scenarios.

But there is still a need for a powerful, dynamic function that can perform better than SUMPRODUCT/array formulas, so let me introduce my attempt at creating one: FILTERIFS.

FILTERIFS Design Objectives

• Speed of calculation – multi-threaded, exploit sorted data and clustered data.
• Extended criterion types to include AND/OR, Lists, Wild Card Patterns, Regular Expressions, Calculated Columns and Arrays etc.
• Dynamic calculation in the same way as other Excel functions.
• Extend multiple conditions to many more functions by outputting an array to other functions such as SUM, MEDIAN, LISTDISTINCTS, VSORT etc, or directly as a multi-cell array formula.

The original implementation was done using a VB6 automation addin, but lack of multi-threading and 64-bit support in VB6 led me to re-implement it as a C++ XLL.

So how do you make it fast?

The idea is to process each criterion in turn using only the rows that meet all the criteria processed so far, thus avoiding the SUMPRODUCT/array formula approach of evaluating all the criteria for all the rows.
Criteria operating on sorted columns are processed first using a fast High-Low binary search modified for relational operators.
Non-sorted columns and criterion types like Regex are then processed using linear search in a sequence designed to minimise data transfer/coercion time.
And using a C++ XLL allows multi-threading and fast execution.
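The narrowing strategy can be sketched in Python (an illustration of the approach only: it ignores the sorted-column binary search and the row-pair storage described later):

```python
def filter_rows(data, criteria):
    """Process criteria one at a time over the surviving rows only.

    This is the opposite of the SUMPRODUCT/array-formula approach,
    which evaluates every criterion for every row.
    """
    surviving = range(len(data))
    for col, predicate in criteria:            # each criterion narrows the set
        surviving = [r for r in surviving if predicate(data[r][col])]
    return surviving                           # row indexes meeting all criteria

data = [("FL", 10), ("NY", 25), ("FL", 30), ("TX", 5)]
filter_rows(data, [(0, lambda s: s == "FL"),   # first criterion: state = FL
                   (1, lambda n: n > 20)])     # second runs on rows 0 and 2 only
# → [2]
```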

FILTERIFS Syntax

The syntax uses a similar approach to SUMIFS to pass the criterion as a string concatenation of a relational operator and a value. Because the value is passed as a string FILTERIFS has to do some datatype conversions of the value to match the datatype of the criterion column (and hopefully avoid some of the SUMIFS bugs in this area).

FILTERIFS( nSortedCols, InputRange, ReturnCol, CriteriaColumn1, Criteria1,
CriteriaColumn2, Criteria2, … , ["#OR#", nSortedCols,] CriteriaColumnx, Criteriax, …)

nSortedCols gives the number of columns which are sorted in the InputRange

InputRange is a range reference to the data containing the sorted columns and return column.
The data can contain a header row of names for the columns.

ReturnCol is the header name or number of the column within InputRange to return results from.

CriteriaColumn gives either the name/number of a column in InputRange, or a range reference to an independent column, or an array or an expression returning a column of data to be used as the criterion column.

Criteria is the expression used to filter the criterion column.
This can be a relational operator (=, >=, <=, >, <, ¬=, ~, ¬~, ~~) ( ¬ means NOT, ~ means LIKE, and ~~ means Regex) and value.
It can also be a LIST of alternatives to look for, given either as an array ({“FL”,”NY”,”TX”}), or with a relational operator ({“~ABC*”,”~DEF*”}), or as a range reference.

#OR# allows you to have multiple alternative sets of criteria.
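Parsing a criterion string into an operator and a value might look like this (a hypothetical Python sketch of my own; the operator set is the one listed above):

```python
# Multi-character operators must be tried before their single-character
# prefixes ("¬~" before "¬", ">=" before ">", and so on).
OPERATORS = ["¬~", "~~", ">=", "<=", "¬=", ">", "<", "=", "~"]

def parse_criterion(text):
    """Split a SUMIFS-style criterion string into (operator, value).

    A string with no leading operator defaults to equality.
    """
    for op in OPERATORS:
        if text.startswith(op):
            return op, text[len(op):]
    return "=", text                       # bare value: exact match

parse_criterion(">=100")   # → ('>=', '100')
parse_criterion("¬~ABC*")  # → ('¬~', 'ABC*')  NOT LIKE "ABC*"
parse_criterion("FL")      # → ('=', 'FL')
```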

FILTERIFS Components

To deliver this fairly complex set of capabilities the function is broken down into a number of component blocks. These are the major ones:

• Handle any header row column names & translate column names and numbers to column indexes.
• Parse and analyse the criteria, storing the result in an array of Criterion structures
• Data Type detection and type-casting of the criteria values
• Find the optimum sequence to process the criteria
• Row-Pairs class to store first-row last-row pairs for the rows that meet the criteria. Methods for this class include Append, Merge, Condense, CountPairs, CountRows etc.
• High-Low binary search for the sorted criteria
• Translate High-Low to rowpairs using the relational operators
• Determine optimum data-transfer/coercion strategy and sequencing for the non-sorted criteria
• Linear Search on row-pairs for non-sorted criteria
• Comparison functions for the various Criterion operators.
• Conversion of row-pairs to results

These components currently result in just under 5000 lines of code.

FILTERIFS Status

As of December 2012 the function is coded and the first phase of testing has been completed. It has taken considerably longer than planned, mainly because the VB6 version made extensive use of EVALUATE, which turned out not to be allowed to multi-thread in C++, so I had to redesign most of the approach for non-sorted columns.

There is still some performance testing, refactoring and rework to be done but the target is to start Beta3 in early January 2013.

(Assuming that the Mayan calendar is wrong in predicting the end of the world today, Friday 21 December 2012).

Writing Efficient UDFs Part 11 – Full-Column References in UDFs: Used Range is Slow

December 2, 2012

Excel users often find it convenient to use full-column references in formulas to avoid having to adjust the formulas every time new data is added. So when you write a User Defined Function (UDF) you can expect that sooner or later someone will try to use it with a full-column reference:

`=MyUDF(A:A,42)`

When Excel 2007 introduced the “Big Grid” with just over 1 million rows it became even more important to handle these full-column references efficiently. The standard way to handle this in a VBA UDF is to get the INTERSECT of the full-column reference and the used-range so that the UDF only has to process the part of the full-column that has actually been used. The example VBA code below does this intersection and then returns the smaller of the number of rows in the input range and the number of rows in the used range.

```
Public Function GetUsedRows(theRng As Range)
    Dim oRng As Range
    Set oRng = Intersect(theRng, theRng.Parent.UsedRange)
    GetUsedRows = oRng.Rows.Count
End Function
```

The parent of theRng is the worksheet that contains it, so `theRng.Parent.UsedRange` gets the used range of the worksheet you want.

Two problems with this technique are:

• Getting the Used Range can be slow.
• The XLL interface does not have a direct way to access the Used Range, so you have to get it via a single-thread-locked COM call. (More on this later).

So just how slow is it to get the used range?

I created a very simple UDF and timed the calculation of 1000 calls to this UDF for filled used ranges of between 10K rows and 640K rows.

```
Public Function CountUsedRows()
    CountUsedRows = ActiveSheet.UsedRange.Rows.Count
End Function
```

It turns out that the time taken to execute this UDF is a linear function of the number of used rows in the used range.

And it's quite slow: 1000 calls to this UDF with 640K rows of data takes 33 seconds!

When the used range is small you won’t notice the time taken, but for large used ranges with the big grid you certainly will. And the problem is that your UDF will do this check on every range that is passed to the UDF, even if it's not really needed.

Colin points out that what affects the time is actually the number of cells containing data or formatting (or that previously contained data or formatting) rather than the last cell in the used range.

Speeding up finding the used range

So you could start by only doing the used-range check when theRng parameter has a large number of rows:

```
Public Function GetUsedRows2(theRng As Range)
    Dim oRng As Range
    If theRng.Rows.Count > 500000 Then
        Set oRng = Intersect(theRng, theRng.Parent.UsedRange)
        GetUsedRows2 = oRng.Rows.Count
    Else
        GetUsedRows2 = theRng.Rows.Count
    End If
End Function
```

This example only does the check if the user gives the UDF a range referring to more than half a million rows.

Another, more complicated, way of minimising the time is to store the number of rows in the used range in a cache somewhere and retrieve it from the cache when needed. The tricky part of this is to make sure that the used-range row cache always is either empty (in which case go and get the number) or contains an up-to-date number.

One way of doing this would be to use the Application AfterCalculate event (which was introduced in Excel 2007) to empty the cache. Then only the first UDF that requested the used range for each worksheet would use time to find the used range, and (assuming that the calculation itself did nothing to alter the used range) the correct number would always be retrieved.

The equivalent for Excel versions before Excel 2007 would be to use the Application SheetCalculate event to empty the cache for that particular worksheet. This technique would be less efficient since a worksheet may well be calculated several times in each calculation cycle.
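The cache-plus-invalidation idea can be sketched in Python (an illustration of the design, with a stub standing in for the slow UsedRange query; the real version would hang the clearing off the AfterCalculate event):

```python
class UsedRangeCache:
    """Per-sheet cache of the used-range row count, emptied after each
    calculation cycle so a stale count is never served."""

    def __init__(self, slow_lookup):
        self._slow_lookup = slow_lookup        # the expensive UsedRange query
        self._cache = {}

    def used_rows(self, sheet):
        if sheet not in self._cache:           # first UDF call pays the cost
            self._cache[sheet] = self._slow_lookup(sheet)
        return self._cache[sheet]              # later calls are free

    def after_calculate(self):
        self._cache.clear()                    # invalidate once per cycle

calls = []                                     # record each slow query
cache = UsedRangeCache(lambda sheet: calls.append(sheet) or 640_000)
cache.used_rows("Sheet1")
cache.used_rows("Sheet1")
len(calls)                                     # → 1: slow query ran once
cache.after_calculate()
cache.used_rows("Sheet1")
len(calls)                                     # → 2: fetched again next cycle
```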

As Colin points out, if you want to find the last row containing data it is faster to use Range.Find when you have many cells containing data.
Note that you can only use Range.Find in UDFs in Excel 2002 and later, and you cannot use the Find method at all from an XLL except in a command macro or via COM.

```
Public Function CountUsedRows2()
    CountUsedRows2 = ActiveSheet.Cells.Find(What:="*", LookIn:=xlFormulas, _
        SearchOrder:=xlByRows, SearchDirection:=xlPrevious).Row
End Function
```

So have you got any better ideas on how to process full-column references efficiently?

Getting Cell data with VBA and C++: Benchmarking Excel 2013

November 9, 2012

There is a new kid on the block: Excel 2013. So it's time to see how it shapes up for VBA performance against its predecessors. Along the way I will try to answer these questions:

How does Excel 2013 VBA performance compare with previous versions?

Is 64-bit VBA faster or slower than 32-bit VBA?

Is there an optimum block size (number of cells) for getting cell data with VBA or C++?

How much faster is C++ than VBA when reading data from Cells?

Excel 2013 VBA Benchmark

Frequently the bottleneck in VBA processing time is reading and writing to Excel cells. So my Variant_Benchmark times this for a variety of cell sizes.

It turns out the relationship between time taken and the number of cells you get in a single block is pretty much linear over small ranges:
Time = overhead + number_of_cells * Time_per_cell
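Given that linear model, two block measurements are enough to recover both constants. The timings below are made up for illustration; only the algebra is from the post.

```python
def fit_linear(n1, t1, n2, t2):
    """Recover overhead and per-cell time from two block measurements,
    assuming time = overhead + cells * per_cell."""
    per_cell = (t2 - t1) / (n2 - n1)   # slope of the line
    overhead = t1 - n1 * per_cell      # intercept: fixed call cost
    return overhead, per_cell

# Hypothetical timings in milliseconds for 512-cell and 1024-cell reads
overhead, per_cell = fit_linear(512, 0.30, 1024, 0.45)
# overhead ≈ 0.15 ms per call, per_cell ≈ 0.0003 ms per cell
```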

Running this test on my Dell Studio XPS desktop (2.93 GHz) for all the versions of Excel I currently have installed gives this table (slightly overcoloured by the Excel 2013 Quick Analysis tool – but Quick Analysis is great for charting and formatting idiots like me).

The first 2 columns give times in milliseconds for reading and writing a single block of 512 cells, and the second 2 columns give times in milliseconds for reading and writing a single cell 512 times.

Two things are immediately obvious:

• Reading and writing cell-by-cell is at least an order of magnitude slower than reading and writing a large block of cells in a single call.
• Writing data back to Excel is much slower than reading data from Excel.

Looking at the variations by Excel versions it is interesting to see that Excel 97 is the fastest version for cell-by-cell but the slowest version for block by block.

Here is a bar chart (again produced by Excel 2013 Quick Analysis) for reading and writing a block of 512 cells:

Read times for a 512-cell block have been decreasing since XL 2003: the current champion reader is Excel 2013 64-bit.
But Write times for a 512-cell block have been increasing since XL 2000 – the current champion writer.

Looking at cell-by-cell you get this bar chart:

Reading and writing cell-by-cell just goes from bad (Excel 97) to worse (Excel 2010 32).
But it's interesting to note that the 64-bit versions are faster than the 32-bit versions for 512-cell blocks, but slower for cell-by-cell.

Is there an optimum Block size for VBA data transfer?

Many Excel versions ago (Excel 97?) I did an analysis which showed that if you increased the number of cells being read into the variant array too much the time started to increase. So I thought I would look at this again with the latest Excel versions.

So far I have not detected any decrease in efficiency: you can use as large a block of cells as you like (subject to memory limits of course).

The XLL equivalent to transferring data from a range of Excel cells to a variant array is the COERCE function. This has the additional benefit of letting you know if the attempt failed because of uncalculated cells.

And XLL Coerce is faster than VBA, by a factor of 2 or more.

Coerce does seem to have an optimum block size. Beyond about 1000 cells the time taken suddenly jumps by 25 to 30% in the 32-bit versions, except for XL 2013 64-bit where this does not happen.

Conclusions

• Read/write using large Ranges is much more efficient than cell-by-cell
• Excel 2013 VBA read/write speed is comparable to Excel 2010
• The 64-bit versions are faster for large ranges than the 32-bit versions
• VBA does not currently appear to have an optimum block size: the larger the range the better.
• XLL Coerce is a lot faster than VBA
• XLL Coerce 32-bit has an optimum block size just under 1000 cells

I have not tested .NET Interop with Excel 2013, but I expect it's still the same depressing performance story. If you know anything different please let me know!

Multi-threading XLL functions – Evaluate fails

November 1, 2012

I have just about finished converting the first part of the FILTER.IFS function from VB6 to C++.
This first part uses modified binary search routines to handle multiple kinds of criteria (EQ, GT, LT, GE, LE, NE with AND and OR etc) on sorted columns.

The second part handles unsorted columns and additional criteria types such as Regex and Like. Some of this is done in the VB6 version of the function using EVALUATE to get the results of array formulas on subsets of the data. So the C++ XLL version uses the same technique but using the XLL equivalent xlfEvaluate.

The explanation here http://msdn.microsoft.com/en-us/library/office/bb687899.aspx seems to say that using xlfEvaluate is threadsafe as long as the expression being evaluated does not contain any non-threadsafe components.

But in practice using xlfEvaluate on even a simple formula string like =2+3 fails with a return code of 128 if the UDF function is flagged as multi-threaded, but works OK if the function is flagged as single-threaded.

At the moment this leaves me with a choice of either making FILTER.IFS single-threaded or using some alternative to xlfEvaluate. Both of these choices look bad.