Selecting random records from a table

Sometimes you get into a situation where you have a table and need to return only a few random records from it, and you start wondering how to achieve this on SQL Server.

You might think of incorporating the RAND() function, but as I mentioned in my previous post Random numbers in query, RAND() is a constant expression evaluated once per query, so it does not do the job here.

One possible solution is the TABLESAMPLE clause. All the samples here use the AdventureWorks2008R2 sample database.

SELECT
    *
FROM Sales.SalesOrderDetail
TABLESAMPLE (100 ROWS);

Although this can work for some tables, it will probably not give you the results you expect, because the current implementation of the TABLESAMPLE clause was built for retrieving statistics and samples whole pages rather than individual rows.
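For completeness, TABLESAMPLE also accepts a percentage and an optional REPEATABLE seed. This is only a sketch for illustration – the sampling still works at the page level, so both forms return an approximate sample rather than an exact one.

SELECT
    *
FROM Sales.SalesOrderDetail
TABLESAMPLE SYSTEM (10 PERCENT)
REPEATABLE (205); --the same seed returns the same sample until the data changes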

Selecting random rows using NEWID()

As a working alternative for selecting random records we can again use the NEWID() function mentioned in my previous post. Each call to the function produces a new unique identifier. There are several ways to return random rows using it:

  1. Select an approximate percentage sample of random rows
  2. Select exact number of random rows
  3. Select an exact percentage sample of random rows
  4. Select random number of random rows

Selecting approximate percentage sample of random rows

To select an approximate percentage sample of random rows from a table you can use the query below, which returns around 10 % of the records in the table.

SELECT
    *
FROM Sales.SalesOrderDetail
WHERE 0.1 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
    / CAST (0x7fffffff AS int)

Each execution of the above query returns a different number of records, but it will be close to 10 percent of the total number of rows in the table.

To select a different percentage, simply change the 0.1 to the appropriate fraction. The SalesOrderID field is included in the CHECKSUM calculation to force a new NEWID() value for each row; otherwise the expression could be evaluated only once as a constant for the whole query. The value 0x7fffffff is the maximum positive integer, so using it in a bitwise AND limits the checksum to a non-negative value in that range.

The above query produces following plan:

Approximate percentage sample of random rows query plan

And trace from the profiler:

Approximate percentage sample of random rows profiler trace

The table Sales.SalesOrderDetail has 121,317 rows, and from the profiler results we can see that the query returned almost exactly 10 % of the records (within two rows of the exact 10 %). With multiple runs the row count will differ, but it stays very close to 10 %.
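If you prefer not to hard-code the fraction in the WHERE clause, the threshold can be kept in a variable. A minimal sketch (the variable name is mine):

DECLARE @SamplePct float = 0.1; -- 10 %

SELECT
    *
FROM Sales.SalesOrderDetail
WHERE @SamplePct >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
    / CAST(0x7fffffff AS int);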

Selecting exact number of random rows

Selecting an exact number of random rows is very easy: it is only necessary to order the query by the NEWID() result and limit the number of returned rows. To return 100 random rows we can use the query below:

SELECT TOP 100
    *
FROM Sales.SalesOrderDetail
ORDER BY NEWID()

In this example no random number generation or checksum is necessary, because NEWID() appears in the ORDER BY clause and is therefore evaluated once for each row, so SQL Server can sort the records on its value.

Here is a plan for the query:

Exact number of random rows Query Plan

And trace from profiler:

Exact number of random rows Profiler Trace

As we can see, even though the query looks much simpler than the previous one for the approximate percentage, it produces a much more complicated parallel plan, and the profiler shows that even though it returns fewer rows, it produces more reads and has a much higher CPU cost. This is caused by the ORDER BY, as the database engine has to sort all the records before returning the TOP 100 rows. The duration of the first query is longer, but only because it returns roughly 12 times more rows to the client.

On small tables this doesn't matter, but on large ones it can have a high impact. To solve the issue, we can combine the approximate sampling query with TOP and limit the records it returns; we only need to choose an appropriate threshold in the WHERE clause. If we know the table has more than 120,000 records, we can first limit the results to about 0.15 %, so the base query almost certainly returns more than 100 rows, and then simply limit them to 100.

SELECT TOP 100
    *
FROM Sales.SalesOrderDetail
WHERE 0.0015 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
    / CAST (0x7fffffff AS int)

This query produces below plan and trace from profiler.

Query Plan for Exact number of random rows with random generator in where condition

Profiler Trace for Exact number of random rows with random generator in where condition

As we can see, the query plan is again much simpler, the read count is less than half of the previous one, and the CPU cost is much lower. The duration is a little lower but roughly equal, because both queries spend most of their time returning rows to the client.
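The 0.0015 threshold above was chosen manually from the known row count. If you prefer to derive it, a sketch like the one below (names are mine) computes the fraction with a safety margin; note that with random sampling there is still a small chance that fewer than the requested rows pass the filter.

DECLARE @WantedRows int = 100;
DECLARE @TotalRows bigint = (SELECT COUNT(*) FROM Sales.SalesOrderDetail);
DECLARE @Threshold float = 1.5 * @WantedRows / @TotalRows; -- 50 % safety margin

SELECT TOP (@WantedRows)
    *
FROM Sales.SalesOrderDetail
WHERE @Threshold >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
    / CAST(0x7fffffff AS int);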

Selecting exact percentage sample of random rows

Selecting an exact percentage sample of random rows is very simple and consists of changing the TOP clause to a percentage value. To select 10 percent of random records we can use the query below.

SELECT TOP 10 PERCENT
    *
FROM Sales.SalesOrderDetail
ORDER BY NEWID()

The query produces the plan below, which is again more complex than the solution with the WHERE clause. It is even more complicated than the plan for TOP without PERCENT, as a Table Spool operator is added.

Query Plan for exact percentage sample of random rows

Profiler Trace for exact percentage sample of random rows

From the profiler we again see that it produces an enormous number of reads compared to the approximate percentage sample, even though the number of returned records is similar. The duration and CPU cost are incomparably higher, and writes are produced as well because of the Table Spool.

To optimize this, we cannot simply add the approximate percentage filter and keep TOP 10 PERCENT, because the TOP 10 PERCENT clause would then return only 10 percent of the records already limited by the WHERE clause.

We can optimize the reads, execution time and resource cost with the query below.

SELECT TOP ((SELECT COUNT(*) FROM Sales.SalesOrderDetail) / 10)
    *
FROM Sales.SalesOrderDetail
WHERE 0.11 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
    / CAST (0x7fffffff AS int)

The (SELECT COUNT(*) FROM Sales.SalesOrderDetail) / 10 expression in the TOP clause calculates 10 percent of the records, and the whole query produces the plan and profiler trace below.

Query Plan for exact percentage sample of random rows with random generator in WHERE clause

Profiler Trace for exact percentage sample of random rows with random generator in WHERE clause

Even though the plan may look more complex than the previous one, the profiler trace shows that for the same row count this plan produces about 256 times fewer reads, no writes, roughly 3 times lower duration and about 6.5 times less CPU cost.
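If the percentage itself should be configurable, the same pattern can be written with a variable. This is just a sketch (the variable name is mine); for @Pct = 10 it matches the query above, keeping the WHERE threshold slightly above the TOP fraction.

DECLARE @Pct int = 10;

SELECT TOP ((SELECT COUNT(*) FROM Sales.SalesOrderDetail) * @Pct / 100)
    *
FROM Sales.SalesOrderDetail
WHERE (@Pct + 1) / 100.0 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
    / CAST(0x7fffffff AS int);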

Selecting random number of random records

To select a random number of random records we can again use the queries above and slightly modify the TOP clause by adding a random generator to it. To select a random number of records (or a random percentage) within a particular interval – in this example 10–20 records or 10–20 percent – the queries below can be used:

--Select 10 - 20 random records
SELECT TOP (ABS(CHECKSUM(NEWID())) % 10 + 10)
    *
FROM Sales.SalesOrderDetail
ORDER BY NEWID()

--Select 10 - 20 random records
SELECT TOP (ABS(CHECKSUM(NEWID())) % 10 + 10)
    *
FROM Sales.SalesOrderDetail
WHERE 0.0022 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
    / CAST (0x7fffffff AS int)

Profiler Trace for Random number of random records

The plans for these queries are the same as the plans shown above, and from the profiler trace we can again see the incomparably lower cost of the second query compared to the first one.

--Select 10 - 20 % of random records
SELECT TOP (ABS(CHECKSUM(NEWID())) % 10 + 10) PERCENT
    *
FROM Sales.SalesOrderDetail
ORDER BY NEWID()

--Select 10 - 20 % of random records
SELECT TOP ((SELECT COUNT(*) FROM Sales.SalesOrderDetail) * (ABS(CHECKSUM(NEWID())) % 10 + 10) / 100)
    *
FROM Sales.SalesOrderDetail
WHERE 0.21 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
    / CAST (0x7fffffff AS int)

Profiler Trace for Random percentage of random records

Again the plans are the same as the plans for the corresponding versions shown above, and the profiler trace again shows that the performance of the second query is incomparably better than that of the first one.
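The expression ABS(CHECKSUM(NEWID())) % 10 + 10 used above is a special case of picking a random integer from an interval. Note that % 10 produces values 0–9, so the queries above actually return between 10 and 19 rows (or percent); the general form below (a sketch, @min and @max are mine) includes the upper bound.

DECLARE @min int = 10, @max int = 20;

--random integer between @min and @max (inclusive)
SELECT ABS(CHECKSUM(NEWID())) % (@max - @min + 1) + @min AS RandomCount;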

Complete comparison

To make a complete performance comparison of all the queries mentioned in this article we can use the query batch below. All the queries are written as subqueries of SELECT COUNT(*) to avoid skewing the duration measurement by transferring the records from the server to the client.

--Approximate 10 percent =======================
SELECT COUNT(*) FROM (
    SELECT
        *
    FROM Sales.SalesOrderDetail
    WHERE 0.1 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
        / CAST (0x7fffffff AS int)
) q
GO
--TOP 100, ORDER BY =============================
SELECT COUNT(*) FROM (
    SELECT TOP 100
        *
    FROM Sales.SalesOrderDetail
    ORDER BY NEWID()
) q
GO
--TOP 100, Approximate =============================
SELECT COUNT(*) FROM (
    SELECT TOP 100
        *
    FROM Sales.SalesOrderDetail
    WHERE 0.0015 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
        / CAST (0x7fffffff AS int)
) q
GO
--TOP 10 PERCENT, ORDER BY =============================
SELECT COUNT(*) FROM (
    SELECT TOP 10 PERCENT
        *
    FROM Sales.SalesOrderDetail
    ORDER BY NEWID()
) q
GO
--TOP 10 PERCENT, Approximate =============================
SELECT COUNT(*) FROM (
    SELECT TOP ((SELECT COUNT(*) FROM Sales.SalesOrderDetail) / 10)
        *
    FROM Sales.SalesOrderDetail
    WHERE 0.11 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
        / CAST (0x7fffffff AS int)
) q
GO
--Random 10 - 20 Records, ORDER BY  =============================
SELECT COUNT(*) FROM (
    SELECT TOP (ABS(CHECKSUM(NEWID())) % 10 + 10)
        *
    FROM Sales.SalesOrderDetail
    ORDER BY NEWID()
) q
GO
--Random 10 - 20 Records, Approximate =============================
SELECT COUNT(*) FROM (
    SELECT TOP (ABS(CHECKSUM(NEWID())) % 10 + 10)
        *
    FROM Sales.SalesOrderDetail
    WHERE 0.0022 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
        / CAST (0x7fffffff AS int)
) q
GO
--Random 10 - 20 PERCENT, ORDER BY =============================
SELECT COUNT(*) FROM (
    SELECT TOP (ABS(CHECKSUM(NEWID())) % 10 + 10) PERCENT
        *
    FROM Sales.SalesOrderDetail
    ORDER BY NEWID()
) q
GO
--Random 10 - 20 PERCENT, Approximate =============================
SELECT COUNT(*) FROM (
    SELECT TOP ((SELECT COUNT(*) FROM Sales.SalesOrderDetail) * (ABS(CHECKSUM(NEWID())) % 10 + 10) / 100)
        *
    FROM Sales.SalesOrderDetail
    WHERE 0.21 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
        / CAST (0x7fffffff AS int)
) q
GO

And profiler output for above queries.

Queries comparison profiler trace output

Conclusion

As we can see from the queries, their plans and the profiler traces, we have several possibilities for selecting random records from a table. Some of them use a very simple query but produce a more complex plan with a higher cost, so they are suitable only for relatively small tables.

Other versions producing the same results are more complex to write, but the query produces a much simpler plan and has much better performance than the simple-looking one. Such queries are therefore suitable even for very large tables, producing results in acceptable time with low performance impact on the server.


Random numbers in query

Sometimes you may want to select rows from a table and have a random number generated for each row. You might try the RAND() function; however, although it does generate a random number, all the returned rows will have the same value, which is usually not what you intended.

SELECT TOP 10
    RAND()
FROM sys.all_columns

Running the above query you will receive something like:

----------------------
0,173895240378828
0,173895240378828
0,173895240378828
0,173895240378828
0,173895240378828
0,173895240378828
0,173895240378828
0,173895240378828
0,173895240378828
0,173895240378828

(10 row(s) affected)

This is caused by the fact that RAND() is a constant expression, which is executed once at query start; the resulting value is then reused everywhere in the query. There are more functions in SQL Server that behave as constant expressions and are executed once per query, e.g. the GETDATE() function.

Whether a function is executed as a constant expression or once per row can be determined from the XML plan of the query. If we take a look at the XML plan of the above query we will see:

<?xml version="1.0" encoding="utf-16"?>
<ShowPlanXML xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" Version="1.1" Build="10.50.1600.1" xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan">
  <BatchSequence>
    <Batch>
      <Statements>
        <StmtSimple StatementCompId="1" StatementEstRows="10" StatementId="1" StatementOptmLevel="FULL" StatementOptmEarlyAbortReason="GoodEnoughPlanFound" StatementSubTreeCost="0.00346253" StatementText="SELECT TOP 10
    RAND()
FROM sys.all_columns" StatementType="SELECT" QueryHash="0x479AE6BF06344217" QueryPlanHash="0x1960EA8D4F47E514">
          <StatementSetOptions ANSI_NULLS="true" ANSI_PADDING="true" ANSI_WARNINGS="true" ARITHABORT="true" CONCAT_NULL_YIELDS_NULL="true" NUMERIC_ROUNDABORT="false" QUOTED_IDENTIFIER="true" />
          <QueryPlan CachedPlanSize="32" CompileTime="3" CompileCPU="3" CompileMemory="544">
            <RelOp AvgRowSize="15" EstimateCPU="1E-06" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="10" LogicalOp="Compute Scalar" NodeId="0" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="0.00346253">
              <OutputList>
                <ColumnReference Column="Expr1063" />
              </OutputList>
              <ComputeScalar>
                <DefinedValues>
                  <DefinedValue>
                    <ColumnReference Column="Expr1063" />
                    <ScalarOperator ScalarString="rand()">
                      <Identifier>
                        <ColumnReference Column="ConstExpr1065">
                          <ScalarOperator>
                            <Intrinsic FunctionName="rand">
                              <ScalarOperator>
                                <Const ConstValue="" />
                              </ScalarOperator>
                            </Intrinsic>
                          </ScalarOperator>
                        </ColumnReference>
                      </Identifier>
                    </ScalarOperator>
                  </DefinedValue>
                </DefinedValues>
                <RelOp AvgRowSize="9" EstimateCPU="1E-06" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="10" LogicalOp="Top" NodeId="1" Parallel="false" PhysicalOp="Top" EstimatedTotalSubtreeCost="0.00346153">
                  <OutputList />
                  <Top RowCount="false" IsPercent="false" WithTies="false">
                    <TopExpression>
                      <ScalarOperator ScalarString="(10)">
                        <Const ConstValue="(10)" />
                      </ScalarOperator>
                    </TopExpression>
                    <RelOp AvgRowSize="9" EstimateCPU="0.000515516" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="10" LogicalOp="Concatenation" NodeId="2" Parallel="false" PhysicalOp="Concatenation" EstimatedTotalSubtreeCost="0.00346053">
                      <OutputList />
                      <Concat>
                        <DefinedValues />
                        <RelOp AvgRowSize="9" EstimateCPU="0.00083584" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="10" LogicalOp="Filter" NodeId="3" Parallel="false" PhysicalOp="Filter" EstimatedTotalSubtreeCost="0.00337579">
                          <OutputList />
                          <Filter StartupExpression="false">
                            <RelOp AvgRowSize="13" EstimateCPU="0.0008753" EstimateIO="0.0068287" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="10" LogicalOp="Index Scan" NodeId="4" Parallel="false" PhysicalOp="Index Scan" EstimatedTotalSubtreeCost="0.00335999" TableCardinality="653">
                              <OutputList>
                                <ColumnReference Database="[TestDB]" Schema="[sys]" Table="[syscolpars]" Column="id" />
                              </OutputList>
                              <IndexScan Ordered="false" ForcedIndex="false" ForceSeek="false" NoExpandHint="false">
                                <DefinedValues>
                                  <DefinedValue>
                                    <ColumnReference Database="[TestDB]" Schema="[sys]" Table="[syscolpars]" Column="id" />
                                  </DefinedValue>
                                </DefinedValues>
                                <Object Database="[TestDB]" Schema="[sys]" Table="[syscolpars]" Index="[nc]" IndexKind="NonClustered" />
                                <Predicate>
                                  <ScalarOperator ScalarString="[TestDB].[sys].[syscolpars].[number]=(0)">
                                    <Compare CompareOp="EQ">
                                      <ScalarOperator>
                                        <Identifier>
                                          <ColumnReference Database="[TestDB]" Schema="[sys]" Table="[syscolpars]" Column="number" />
                                        </Identifier>
                                      </ScalarOperator>
                                      <ScalarOperator>
                                        <Const ConstValue="(0)" />
                                      </ScalarOperator>
                                    </Compare>
                                  </ScalarOperator>
                                </Predicate>
                              </IndexScan>
                            </RelOp>
                            <Predicate>
                              <ScalarOperator ScalarString="has_access('CO',[TestDB].[sys].[syscolpars].[id])=(1)">
                                <Compare CompareOp="EQ">
                                  <ScalarOperator>
                                    <Intrinsic FunctionName="has_access">
                                      <ScalarOperator>
                                        <Const ConstValue="'CO'" />
                                      </ScalarOperator>
                                      <ScalarOperator>
                                        <Identifier>
                                          <ColumnReference Database="[TestDB]" Schema="[sys]" Table="[syscolpars]" Column="id" />
                                        </Identifier>
                                      </ScalarOperator>
                                      <ScalarOperator>
                                        <Const ConstValue="" />
                                      </ScalarOperator>
                                      <ScalarOperator>
                                        <Const ConstValue="" />
                                      </ScalarOperator>
                                    </Intrinsic>
                                  </ScalarOperator>
                                  <ScalarOperator>
                                    <Const ConstValue="(1)" />
                                  </ScalarOperator>
                                </Compare>
                              </ScalarOperator>
                            </Predicate>
                          </Filter>
                        </RelOp>
                        <RelOp AvgRowSize="9" EstimateCPU="0.0132712" EstimateIO="0.0749769" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1" LogicalOp="Index Scan" NodeId="6" Parallel="false" PhysicalOp="Index Scan" EstimatedTotalSubtreeCost="0.00329434" TableCardinality="11922">
                          <OutputList />
                          <IndexScan Ordered="false" ForcedIndex="false" ForceSeek="false" NoExpandHint="false">
                            <DefinedValues />
                            <Object Database="[mssqlsystemresource]" Schema="[sys]" Table="[syscolrdb]" Index="[ncl]" IndexKind="NonClustered" />
                            <Predicate>
                              <ScalarOperator ScalarString="[mssqlsystemresource].[sys].[syscolrdb].[number]=(0)">
                                <Compare CompareOp="EQ">
                                  <ScalarOperator>
                                    <Identifier>
                                      <ColumnReference Database="[mssqlsystemresource]" Schema="[sys]" Table="[syscolrdb]" Column="number" />
                                    </Identifier>
                                  </ScalarOperator>
                                  <ScalarOperator>
                                    <Const ConstValue="(0)" />
                                  </ScalarOperator>
                                </Compare>
                              </ScalarOperator>
                            </Predicate>
                          </IndexScan>
                        </RelOp>
                      </Concat>
                    </RelOp>
                  </Top>
                </RelOp>
              </ComputeScalar>
            </RelOp>
          </QueryPlan>
        </StmtSimple>
      </Statements>
    </Batch>
  </BatchSequence>
</ShowPlanXML>

On line 19 of the plan you can see ConstExpr1065 under the <ScalarOperator ScalarString="rand()"> node. Each function that the XML plan expresses in a ColumnReference as ConstExpr is executed as a constant expression once per query.

Workaround for RAND function

As a workaround for generating random numbers in a query, it is possible to use the NEWID() function. Although its return value is a uniqueidentifier, every call to the function returns a new unique value, over which we can apply the CHECKSUM() function, which calculates an integer checksum of an expression. Because that value can be negative or positive, it is necessary to apply ABS() to it to receive consistent non-negative values.

So finally we can write the following query:

SELECT TOP 10
    ABS(CHECKSUM(NEWID())) AS Random,
    ABS(CHECKSUM(NEWID())) % 100 AS Random_0_100,
    CAST(CHECKSUM(NEWID()) & 0x7fffffff AS float) / CAST(0x7fffffff as int)  AS RandomFloatValue
FROM sys.all_objects

And the result will be similar to the one below, where each value is a unique random number.

Random      Random_0_100 RandomFloatValue
----------- ------------ ----------------------
1060745673  73           0,290147927724825
1053300815  99           0,835112006792385
1233927720  13           0,551734537608798
1059863691  70           0,316186188401741
495435569   98           0,770220997636309
1364620416  82           0,404617617095177
783618590   58           0,161057420615599
1176654027  15           0,522251577825403
1871862272  0            0,295878297321442
1870125345  10           0,0176546154625968

(10 row(s) affected)
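A related variant some people use is to seed RAND() with a per-row value, which forces a separate evaluation for each row. This is only a sketch; the statistical properties of seeded RAND() differ slightly from the CHECKSUM approach above, but for most purposes it behaves similarly.

SELECT TOP 10
    RAND(CHECKSUM(NEWID())) AS RandomFloatValue
FROM sys.all_objects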

SQL Script Manager by RedGate

On December 16, 2010 RedGate released a free tool for DBAs for managing and running T-SQL or IronPython scripts. You can download the script manager directly from the RedGate SQL Script Manager site.

SQL Script Manager

The script manager allows you to manage collections of scripts and simplify their execution using Presets. Scripts are organized by Tags and Authors.

The script manager comes pre-bundled with 25 administration scripts, and other scripts can be downloaded from the SqlCentral.com scripts repository.

Users can even create their own scripts. The scripts are encapsulated inside .rgtool files, which are XML files whose structure is described on the RedGate support pages for the SQL Script Manager.

Although it is an interesting tool for collecting and managing scripts, I think it brings nothing special that cannot be handled by the Templates and the Template Manager integrated in Management Studio (when talking about T-SQL scripts). What's more, SQL Server Management Studio Denali brings Code Snippets, known from C# and VB.NET development in Visual Studio, which add yet more power to SSMS, and from my point of view writing an .rgtool is more complicated than writing a T-SQL template. On the other hand, it could be a useful tool for beginning DBAs.

Fuzzy strings matching using Levenshtein algorithm on SQL Server (T-SQL vs CLR)

The Levenshtein algorithm is one of the possible fuzzy string matching algorithms. It calculates the Levenshtein distance, which is a metric for measuring the difference between two strings. The Levenshtein distance is also called the edit distance, and it is defined as the minimum number of single-character edits (inserts/updates/deletes) needed to transform one string into another. Details on the algorithm itself can be found on Wikipedia.
When you need to use it in queries, functions or stored procedures, you have two possibilities – a T-SQL implementation and a CLR implementation.
I will show both solutions here and also compare their speed.

T-SQL implementation of Levenshtein algorithm

For T-SQL I will take the function I found on the SqlTeam.com forums, which was originally developed by Joseph Gama, as mentioned in the SqlTeam post.

CREATE FUNCTION [dbo].[edit_distance](
  @s1 nvarchar(3999),
  @s2 nvarchar(3999))
RETURNS int
AS
BEGIN
  DECLARE @s1_len int, @s2_len int
  DECLARE @i int, @j int, @s1_char nchar, @c int, @c_temp int
  DECLARE @cv0 varbinary(8000), @cv1 varbinary(8000)
  SELECT
    @s1_len = LEN(@s1),
    @s2_len = LEN(@s2),
    @cv1 = 0x0000,
    @j = 1, @i = 1, @c = 0

  WHILE @j <= @s2_len
    SELECT @cv1 = @cv1 + CAST(@j AS binary(2)), @j = @j + 1
  WHILE @i <= @s1_len
  BEGIN
    SELECT
      @s1_char = SUBSTRING(@s1, @i, 1),
      @c = @i,
      @cv0 = CAST(@i AS binary(2)),
      @j = 1

    WHILE @j <= @s2_len
    BEGIN
      SET @c = @c + 1
      SET @c_temp = CAST(SUBSTRING(@cv1, @j+@j-1, 2) AS int) +
        CASE WHEN @s1_char = SUBSTRING(@s2, @j, 1) THEN 0 ELSE 1 END
      IF @c > @c_temp SET @c = @c_temp
      SET @c_temp = CAST(SUBSTRING(@cv1, @j+@j+1, 2) AS int)+1
      IF @c > @c_temp SET @c = @c_temp
      SELECT @cv0 = @cv0 + CAST(@c AS binary(2)), @j = @j + 1
    END
    SELECT
      @cv1 = @cv0,
      @i = @i + 1
  END

  RETURN @c
END

CLR implementation of Levenshtein algorithm

using System;
using System.Data.SqlTypes;

public class FuzzyStrings
{
  /// <summary>
  /// Calculates the Levenshtein Distance between two strings.
  /// It is the minimum number of single character insert/delete/update operations needed to transform
  /// the first string into the second string.
  /// </summary>
  /// <param name="firstString">First string to calculate the distance</param>
  /// <param name="secondString">Second string to calculate the distance</param>
  /// <param name="ignoreCase">Specifies whether to ignore case in comparison</param>
  /// <returns>int representing the Levenshtein Distance</returns>
  public static int LevenshteinDistance(SqlString firstString, SqlString secondString, SqlBoolean ignoreCase)
  {
    string strF = ignoreCase ? firstString.Value.ToLower() : firstString.Value;
    string strS = ignoreCase ? secondString.Value.ToLower() : secondString.Value;
    int lenF = strF.Length;
    int lenS = strS.Length;
    int[,] d = new int[lenF + 1, lenS + 1];

    for (int i = 0; i <= lenF; i++)
      d[i, 0] = i;
    for (int j = 0; j <= lenS; j++)
      d[0, j] = j;

    for (int j = 1; j <= lenS; j++)
    {
      for (int i = 1; i <= lenF; i++)
      {
        if (strF[i - 1] == strS[j - 1])
          d[i, j] = d[i - 1, j - 1];
        else
          d[i, j] = Math.Min(Math.Min(
            d[i - 1, j] + 1,        // a deletion
            d[i, j - 1] + 1),       //an Insertion
            d[i - 1, j - 1] + 1);   // a substitution
      }
    }

    return d[lenF, lenS];
  }
}

You need to compile the code (using e.g. Visual C# Express) into an assembly and create the assembly in the database. In my case the assembly is named [CLRSQL], as referenced in the CREATE FUNCTION below.

CREATE FUNCTION [dbo].[fn_LevenshteinDistance](
  @firstString [nvarchar](4000),
  @secondString [nvarchar](4000),
  @ignoreCase [bit] = 1
)
RETURNS [int]
WITH EXECUTE AS CALLER
AS EXTERNAL NAME [CLRSQL].[FuzzyStrings].[LevenshteinDistance]
GO

Testing of the functions

Once we have properly created both functions we can start testing them. Here is a script that tests both functions on several strings.

SELECT
  dbo.edit_distance('Sunday', 'Monday') AS TSQLDistance,
  ClrSafe.fn_LevenshteinDistance('Sunday', 'Monday', 1) AS CLRDistance

UNION ALL 

SELECT
  dbo.edit_distance('Sunday', 'Sunday') AS TSQLDistance,
  ClrSafe.fn_LevenshteinDistance('Sunday', 'Sunday', 0) AS CLRDistance

UNION ALL

SELECT
  dbo.edit_distance('Sunday', 'sunday') AS TSQLDistance,
  ClrSafe.fn_LevenshteinDistance('Sunday', 'sunday', 0) AS CLRDistance

UNION ALL

SELECT
  dbo.edit_distance('Saturday', 'Monday') AS TSQLDistance,
  ClrSafe.fn_LevenshteinDistance('Saturday', 'Monday', 1) AS CLRDistance

UNION ALL

SELECT
  dbo.edit_distance('This is a first string to Compare', 'This is a second string to Compare') AS TSQLDistance,
  ClrSafe.fn_LevenshteinDistance('This is a first string to Compare', 'This is a second string to Compare', 1) AS CLRDistance

And here are the results of the tests:

TSQLDistance CLRDistance
------------ -----------
2            2
0            0
0            1
5            5
6            6

As we can see, the functions return the same results except for the third SELECT, where we chose not to ignore case in the CLR version. For T-SQL the strings are considered identical, because the function uses the default collation, which in my case is case insensitive. If my default collation were case sensitive, the results would be the same. It would also be possible to modify the T-SQL function to accept a parameter for case sensitive/insensitive comparison and use different collations accordingly, but that is not the goal here.
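If you really needed the T-SQL function to compare case sensitively, one option (not part of the original function, just a sketch) is to force an explicit case-sensitive collation on the character comparison. The demonstration below shows the effect; the same COLLATE clause could be applied to the @s1_char = SUBSTRING(@s2, @j, 1) comparison inside dbo.edit_distance.

SELECT
    CASE WHEN 'S' = 's' COLLATE Latin1_General_CS_AS THEN 0 ELSE 1 END AS CostCaseSensitive,
    CASE WHEN 'S' = 's' THEN 0 ELSE 1 END AS CostDefaultCollation --result depends on the database collation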

Speed comparison

As we saw in the previous paragraphs, both the T-SQL and the CLR version of the algorithm work correctly. Now let's take a look at the speed of the Levenshtein distance calculation in both versions.

For the test we can use a simple script which calculates the Levenshtein distance in a loop 10,000 times.

DECLARE
   @TSQLStartTime datetime,
  @TSQLEndTime datetime,
  @CLRStartTime datetime,
  @CLREndTime datetime,
  @distance int,
  @i int

SELECT
  @i = 0,
  @TSQLStartTime = GETDATE();

WHILE (@i < 10000)
BEGIN
  SELECT
    @distance = dbo.edit_distance('This is a first string to Compare', 'This is a second string to compare'),
    @i = @i + 1
END

SELECT
  @TSQLEndTime = GETDATE(),
  @i = 0,
  @CLRStartTime = GETDATE()

WHILE (@i < 10000)
BEGIN
  SELECT
    @distance = [ClrSafe].fn_LevenshteinDistance('This is a first string to Compare', 'This is a second string to compare', 1),
    @i = @i + 1
END

SELECT @CLREndTime = GETDATE()

SELECT
  DATEDIFF(millisecond, @TSQLStartTime, @TSQLEndTime) AS TSQLDuration,
  DATEDIFF(millisecond, @CLRStartTime, @CLREndTime) AS CLRDuration

Here are the results

TSQLDuration CLRDuration
------------ -----------
22993        763

As we can see, the 10,000 calculations using the T-SQL version took 22,993 milliseconds, which is on average about 2.3 milliseconds per distance calculation for the strings in our test query.

On the other side, the 10,000 calculations using CLR took only 763 milliseconds, which is on average about 0.08 milliseconds per calculation for the same strings as in the T-SQL version.

Conclusion

From the results we can see that the CLR version is about 30 times faster than the T-SQL version of the same algorithm on the same machine. Even though the T-SQL version takes only about 2.3 milliseconds per calculation on the sample texts, which is good enough for normal usage, using CLR can enormously decrease the processing time when processing higher volumes of records.

It is another example where CLR beats the T-SQL solution and where it makes sense to use CLR instead of pure T-SQL.
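When the raw distance is used for fuzzy matching, it is often convenient to turn it into a relative similarity by dividing by the length of the longer string. A sketch using the CLR function defined above (the schema prefix should match wherever you created the function):

DECLARE @s1 nvarchar(4000) = 'Saturday', @s2 nvarchar(4000) = 'Monday';

SELECT 1.0 - ClrSafe.fn_LevenshteinDistance(@s1, @s2, 1)
    / CAST(CASE WHEN LEN(@s1) > LEN(@s2) THEN LEN(@s1) ELSE LEN(@s2) END AS float) AS Similarity
--1.0 means identical strings, 0.0 means completely different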

Fastest CSV strings splitting using CLR (T-SQL vs. CLR revisited)

In one of my previous blog posts, SQL Server – String splitting (T-SQL vs. CLR), I compared the speed of a T-SQL solution for string splitting with a CLR RegEx solution. Although the CLR RegEx is fast, it isn't the fastest solution for simple CSV string splitting, so I will compare it to the T-SQL solution once more.

In the mentioned post the T-SQL solution wasn't usable for a larger number of CSV values, but after some investigation Jeff Moden found, in the post http://www.sqlservercentral.com/Forums/FindPost997236.aspx, a mistake we both made in the T-SQL testing, therefore I am also posting an updated comparison to the T-SQL version here.

Fastest CLR version

Probably the fastest CLR version for splitting a string is a CLR table-valued function which processes the whole string character by character and emits a result row at each delimiter.

Here is one possible solution.

using System;
using System.Collections;
using System.Data.SqlTypes;
using System.Text;
using Microsoft.SqlServer.Server;

public class StringSplit
{
    private struct StrRow
    {
        public StrRow(int rowId, SqlChars value)
        {
            RowId = rowId;
            Value = value;
        }

        public int RowId;
        public SqlChars Value;

    }

    [SqlFunction(FillRowMethodName = "FillSplitString3")]
    public static IEnumerable SplitString3(SqlString sourceString, string delimiter, int maxLen)
    {
        char[] buffer = new char[maxLen];
        char delim = delimiter[0];
        int rowNumber = 0;
        int chars = 0;
        char[] finalString;

        foreach (char chr in sourceString.Value)
        {
            if (chr == delim)
            {
                finalString = new char[chars];
                Array.Copy(buffer, finalString, chars);
                yield return new StrRow(++rowNumber, new SqlChars(finalString));
                chars = 0;
            }
            else
            {
                buffer[chars++] = chr;
            }
        }
        if (chars > 0)
        {
            finalString = new char[chars];
            Array.Copy(buffer, finalString, chars);
            yield return new StrRow(++rowNumber, new SqlChars(finalString));
        }

    }

    [SqlFunction(FillRowMethodName = "FillSplitString3")]
    public static IEnumerable SplitString4(SqlString sourceString, string delimiter)
    {
        StringBuilder sb = new StringBuilder();
        char delim = delimiter[0];
        int rowNumber = 0;
        foreach (char chr in sourceString.Value)
        {
            if (chr == delim)
            {
                yield return new StrRow(++rowNumber, new SqlChars(sb.ToString()));
                sb = new StringBuilder(sb.Capacity);
            }
            else
            {
                sb.Append(chr);
            }
        }
        if (sb.Length > 0)
        {
            yield return new StrRow(++rowNumber, new SqlChars(sb.ToString()));
        }

    }

    public static void FillSplitString3(object obj, out int rowId, out SqlChars value)
    {
        StrRow r = (StrRow)obj;
        rowId = r.RowId;
        value = r.Value;
    }
}

And here is the T-SQL function definition for the SplitString3 method:

CREATE FUNCTION dbo.fn_SplitString3(
  @sourceString nvarchar(max),
  @delimiter nchar(1),
  @maxLen int
)
RETURNS  TABLE (
    RowID int NULL,
    Value nvarchar(10) NULL
) WITH EXECUTE AS CALLER
AS
EXTERNAL NAME SQLRegEx.StringSplit.SplitString3
GO

This function takes three parameters: the source string to be split, the delimiter, and maxLen, which is the maximum length of an item in the CSV list and is used to allocate the buffer. For integer values it will be 10, as a positive integer has at most 10 digits. It is possible to write this function without this parameter, but I've added it for speed, because it avoids buffer reallocations.
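A quick sanity check of the function on a short literal might look like this (a sketch using the function created above):

SELECT RowID, Value
FROM dbo.fn_SplitString3(N'10,200,3000', N',', 10)
--should return three rows: (1, '10'), (2, '200'), (3, '3000')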

I will compare the speed also to the CLR RegEx version. I will use the function mentioned in my previous post.

For the CLR RegEx we will use the simple regular expression '\d+', as it is enough for integer values delimited by commas.

As T-SQL candidate for speed comparison I will use the latest optimized version of Tally table splitting by Jeff Moden.

CREATE FUNCTION dbo.Split8KTallyM (
    @Parameter VARCHAR(8000),
    @Delimiter VARCHAR(1)
)
RETURNS @Result TABLE (ItemNumber INT, ItemValue INT) AS
  BEGIN
 INSERT INTO @Result
        (ItemNumber, ItemValue)
 SELECT CAST(ROW_NUMBER() OVER (ORDER BY N) AS INT) AS ItemNumber,
        SUBSTRING(@Parameter,N,CHARINDEX(@Delimiter,@Parameter+@Delimiter,N)-N) AS ItemValue
   FROM dbo.Tally
  WHERE N BETWEEN 1 AND LEN(@Parameter)+1
    AND SUBSTRING(@Delimiter+@Parameter,N,1) = @Delimiter; --Notice how we find the comma
 RETURN
    END;
GO

Test data preparation

As test data I will use the same tables as in the previous tests: tables with 10,000 rows, each row with a different length of CSV string (16 items, 100 items and 1333 items). The table definition is only modified so that the string is not stored as nvarchar(max) but as varchar(max). The nvarchar in the previous test totally degraded the T-SQL solution, so it was not usable for 1333 items in the CSV string.

SELECT TOP 11000
    IDENTITY(INT, 1, 1) AS N
INTO dbo.Tally
FROM sys.all_objects o1, sys.all_objects
GO

--Add Clustered Index on Tally table
ALTER TABLE dbo.Tally
    ADD CONSTRAINT PK_Tally PRIMARY KEY CLUSTERED (N) WITH FILLFACTOR = 100
GO

--Create and populate CsvTest table (doesn't matter whether the table has Clustered index or it is simply heap)
SELECT TOP (10000) --Controls the number of rows in the test table
    ISNULL(ROW_NUMBER() OVER (ORDER BY(SELECT NULL)),0) AS RowNum,
    (
        SELECT CAST(STUFF( --=== STUFF get's rid of the leading comma
                ( --=== This builds CSV row with a leading comma
                SELECT TOP (16) --Controls the number of CSV elements in each row
                    ','+CAST(ABS(CHECKSUM(NEWID()))%100000 AS VARCHAR(10))
                FROM dbo.Tally t3      --Classic cross join pseudo-cursor
                CROSS JOIN dbo.Tally t4 --can produce row sets up 121 million.
                WHERE t1.N <> t3.N --Without this line, all rows would be the same
                FOR XML PATH('')
                )
                ,1,1,'') AS VARCHAR(8000))
                ) AS CsvParameter
INTO CsvTest
FROM dbo.Tally t1        --Classic cross join pseudo-cursor
CROSS JOIN dbo.Tally t2;  --can produce row sets up 121 million.
GO

SELECT TOP (10000) --Controls the number of rows in the test table
    ISNULL(ROW_NUMBER() OVER (ORDER BY(SELECT NULL)),0) AS RowNum,
    (
        SELECT CAST(STUFF( --=== STUFF get's rid of the leading comma
                ( --=== This builds CSV row with a leading comma
                SELECT TOP (100) --Controls the number of CSV elements in each row
                    ','+CAST(ABS(CHECKSUM(NEWID()))%100000 AS VARCHAR(10))
                FROM dbo.Tally t3      --Classic cross join pseudo-cursor
                CROSS JOIN dbo.Tally t4 --can produce row sets up 121 million.
                WHERE t1.N <> t3.N --Without this line, all rows would be the same
                FOR XML PATH('')
                )
                ,1,1,'') AS VARCHAR(8000))
                ) AS CsvParameter
INTO CsvTest2
FROM dbo.Tally t1        --Classic cross join pseudo-cursor
CROSS JOIN dbo.Tally t2;  --can produce row sets up 121 million.
GO

SELECT TOP (10000) --Controls the number of rows in the test table
    ISNULL(ROW_NUMBER() OVER (ORDER BY(SELECT NULL)),0) AS RowNum,
    (
        SELECT CAST(STUFF( --=== STUFF get's rid of the leading comma
                ( --=== This builds CSV row with a leading comma
                SELECT TOP (1333) --Controls the number of CSV elements in each row
                    ','+CAST(ABS(CHECKSUM(NEWID()))%100000 AS VARCHAR(10))
                FROM dbo.Tally t3      --Classic cross join pseudo-cursor
                CROSS JOIN dbo.Tally t4 --can produce row sets up 121 million.
                WHERE t1.N <> t3.N --Without this line, all rows would be the same
                FOR XML PATH('')
                )
                ,1,1,'') AS VARCHAR(8000))
                ) AS CsvParameter
INTO CsvTest3
FROM dbo.Tally t1        --Classic cross join pseudo-cursor
CROSS JOIN dbo.Tally t2;  --can produce row sets up 121 million.
GO

Speed comparison

Here is a script I will use to compare the speed:

--================= 16 items ==========
GO
--CLR fn_SplitString3
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.RowID,
    @ItemValue = V.Value
FROM dbo.CsvTest D
CROSS APPLY dbo.fn_SplitString3(D.CsvParameter, ',', 10) V
GO
--CLR RegEx
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.RowID,
    @ItemValue = V.Value
FROM dbo.CsvTest D
CROSS APPLY dbo.fn_RegExMatches2(D.CsvParameter, '\d+') V
GO
--T-SQL Split8KTallyM
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.ItemNumber,
    @ItemValue = V.ItemValue
FROM dbo.CsvTest D
CROSS APPLY dbo.Split8KTallyM(D.CsvParameter, ',') V
GO
--================= 100 items ==========
GO
--CLR fn_SplitString3
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.RowID,
    @ItemValue = V.Value
FROM dbo.CsvTest2 D
CROSS APPLY dbo.fn_SplitString3(D.CsvParameter, ',', 10) V
GO
--CLR RegEx
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.RowID,
    @ItemValue = V.Value
FROM dbo.CsvTest2 D
CROSS APPLY dbo.fn_RegExMatches2(D.CsvParameter, '\d+') V
GO
--T-SQL Split8KTallyM
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.ItemNumber,
    @ItemValue = V.ItemValue
FROM dbo.CsvTest2 D
CROSS APPLY dbo.Split8KTallyM(D.CsvParameter, ',') V
GO
--================= 1333 items ==========
GO
--CLR fn_SplitString3
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.RowID,
    @ItemValue = V.Value
FROM dbo.CsvTest3 D
CROSS APPLY dbo.fn_SplitString3(D.CsvParameter, ',', 10) V
GO
--CLR RegEx
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.RowID,
    @ItemValue = V.Value
FROM dbo.CsvTest3 D
CROSS APPLY dbo.fn_RegExMatches2(D.CsvParameter, '\d+') V
GO
--T-SQL Split8KTallyM
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.ItemNumber,
    @ItemValue = V.ItemValue
FROM dbo.CsvTest3 D
CROSS APPLY dbo.Split8KTallyM(D.CsvParameter, ',') V
GO

And here are the results from profiler:

Profiler results

Results of comparison and conclusion

As we can see in the profiler output, the new fn_SplitString3 function is unbeatable in all scenarios. While the T-SQL took 3.5 seconds for 16 items, the new CLR split function takes only 253 milliseconds. As mentioned in the previous post, the CLR RegEx starts to pay off at item counts over 100, where it beats the T-SQL solution. The new fn_SplitString3 took only 8.2 seconds even for the 1333-item strings.

Calculating running total for last X rows – UPDATED

This is another blog post about running totals, and again I will use CLR to demonstrate its power in such situations. This post is inspired by an article on SQL Server Central – Calculate the Running Total for the last five Transactions by Divya Agrawal.

This is the kind of running total where we want to sum only a specific number of the most recent rows.

The CLR function is nothing more than a slight modification of the running totals function from my previous blog post: SQL Server and fastest running totals using CLR.

I’ve updated this blog post based on the update of mentioned previous post, after communication with Paul White and added the security check into the CLR function to detect processing of rows out of expected order. Please read the previous post for more details.

Test data preparation

I will use the sample data from the article, so the solutions can be compared and we can see the difference.

CREATE TABLE Accounts
(
    ID int IDENTITY(1,1),
    TransactionDate datetime,
    Balance float
)
GO

INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/1/2000',100)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/2/2000',101)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/3/2000',102)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/4/2000',103)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/5/2000',104)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/6/2000',105)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/7/2000',106)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/8/2000',107)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/9/2000',108)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/10/2000',109)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/11/2000',200)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/12/2000',201)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/13/2000',202)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/14/2000',203)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/15/2000',204)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/16/2000',205)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/17/2000',206)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/18/2000',207)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/19/2000',208)
INSERT INTO Accounts(TransactionDate,Balance) VALUES ('1/20/2000',209)
GO

CLR Solution

As mentioned above, the CLR solution is only a modification of the function from the previous blog post: we simply add a queue of the last calculated running totals.

Calculation of Running Total in Excel

The image above shows the situation when we want a running total over the last 3 rows. Our solution calculates the running total continuously, as in column B, and once the requested count of 3 rows is reached, it subtracts the running total from 3 rows back, as can be seen in the image. For this we need a queue of the last X running total values inside the function (3 in this example). The class also implements the safety check introduced in my previously updated article on running totals: the caller provides the row numbers in the sequence in which the records should be processed, and if the records are processed out of the expected order, an exception is thrown.
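To make the idea concrete, here is a small worked illustration of a last-3-rows running total (the numbers are mine, not the article's data):

--values:             10   20   30    40    50
--cumulative total:   10   30   60   100   150
--last-3-rows total:  10   30   60    90   120
--row 4: 100 - 10 = 90 (the cumulative total queued 3 rows earlier is subtracted)
--row 5: 150 - 30 = 120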

using System;
using Microsoft.SqlServer.Server;
using System.Runtime.Remoting.Messaging;
using System.Data.SqlTypes;
using System.Collections.Generic;

/// <summary>
/// Class contains CLR scalar functions for calculation of running totals
/// </summary>
public class RunningTotalsQueue
{
    /// <summary>
    /// Storage Structure for holding actual Total and row number for security check.
    /// </summary>
    /// <typeparam name="T">Totals Data Type</typeparam>
    private class RtStorage<T> where T : struct
    {
        public T Total;
        public int RowNo;
        public Queue<T> Queue;
        public RtStorage(int queueLength)
        {
            Queue = new Queue<T>(queueLength);
        }
    }

    /// <summary>
    /// Calculates a running totals on Int (Int32) data type
    /// </summary>
    /// <param name="val">Value of current row</param>
    /// <param name="id">ID of the function in single query</param>
    /// <param name="queueLength">Length of the queue. 0 for classical running totals</param>
    /// <param name="rowNo">Specifies expected rowNo. It is for security check to ensure correctness of running totals</param>
    /// <param name="nullValue">Value to be used for NULL values</param>
    /// <param name="nullForLessRows">Specifies whether return NULL if less values than queue are summed</param>
    /// <returns>SqlInt64 representing running total</returns>
    [SqlFunction(IsDeterministic = true)]
    public static SqlInt64 RunningTotalBigIntQueue(SqlInt64 val, SqlByte id, int queueLength, int rowNo, SqlInt64 nullValue, bool nullForLessRows)
    {
        string dataName = string.Format("MultiSqlRtQueue_{0}", id.IsNull ? 0 : id.Value);

        object lastSum = CallContext.GetData(dataName);

        RtStorage<SqlInt64> storage;
        if (lastSum != null)
            storage = (RtStorage<SqlInt64>)lastSum;
        else
            CallContext.SetData(dataName, storage = new RtStorage<SqlInt64>(queueLength));

        storage.RowNo++;

        if (storage.RowNo != rowNo)
            throw new System.InvalidOperationException(string.Format("Rows were processed out of expected order. Expected RowNo: {0}, received RowNo: {1}", storage.RowNo, rowNo));

        if (!val.IsNull)
            storage.Total = storage.Total.IsNull ? val : storage.Total + val;
        else
            storage.Total = storage.Total.IsNull ? nullValue : (nullValue.IsNull ? storage.Total : storage.Total + nullValue);

        var currentTotal = storage.Total;
        if (queueLength > 0 && storage.Queue.Count == queueLength)
        {
            var lastQueue = storage.Queue.Dequeue();
            currentTotal -= lastQueue.IsNull ? 0 : lastQueue;
        }
        else if (storage.Queue.Count < queueLength && nullForLessRows)
            currentTotal = SqlInt64.Null;

        if (queueLength > 0)
            storage.Queue.Enqueue(storage.Total);

        return currentTotal;
    }

}

On the first call the function allocates a queue of the requested size inside a private storage class and stores it using the CallContext for subsequent calls. The function can also return NULL when fewer rows have been processed than the number of rows requested in the running total, and it can calculate a classical (un-queued) running total when the queue length is 0.

CREATE FUNCTION [dbo].[fn_RunningTotalBigIntQueue](
	@val [bigint],                  --value to be added to the running total (a field in the query)
	@id [tinyint],                  --id of the running total within a single query
	@queueLength [int],             --length of the queue
	@rowNo [int],                   --row number of the processed record; compared to the expected row number, and an exception is thrown if they get out of sync
	@nullValue [bigint] = NULL,     --representation of the NULL value when adding to the running total
	@nullForLessRows [bit] = 0      --specifies whether to return NULL if fewer records than queueLength have been processed
)
RETURNS [bigint]
WITH EXECUTE AS CALLER
AS
EXTERNAL NAME [SqlClrTotals].[RunningTotalsQueue].[RunningTotalBigIntQueue]
GO

Tests

Once we have compiled the sources, created assembly and function in DB, we can make a tests. First we will test the T-SQL solution from the article mentioned at the beginning of this blog post and then the presented CLR solution.

T-SQL solution

SET STATISTICS IO ON
GO
SELECT
    Acc.ID,CONVERT(varchar(10),TransactionDate,101) AS TransactionDate ,
    Balance,
    isnull(RunningTotal,'') AS RunningTotal
FROM Accounts Acc
LEFT OUTER JOIN (SELECT ID,sum(Balance) AS RunningTotal
        FROM (SELECT A.ID AS ID,B.ID AS BID, B.Balance
            FROM Accounts A
                cross JOIN Accounts B
            WHERE B.ID BETWEEN A.ID-4
            AND A.ID AND A.ID>4
            ) T
        GROUP BY ID ) Bal
    ON Acc.ID=Bal.ID
GO
SET STATISTICS IO OFF
GO

Results:

ID          TransactionDate Balance                RunningTotal
----------- --------------- ---------------------- ----------------------
1           01/01/2000      100                    0
2           01/02/2000      101                    0
3           01/03/2000      102                    0
4           01/04/2000      103                    0
5           01/05/2000      104                    510
6           01/06/2000      105                    515
7           01/07/2000      106                    520
8           01/08/2000      107                    525
9           01/09/2000      108                    530
10          01/10/2000      109                    535
11          01/11/2000      200                    630
12          01/12/2000      201                    725
13          01/13/2000      202                    820
14          01/14/2000      203                    915
15          01/15/2000      204                    1010
16          01/16/2000      205                    1015
17          01/17/2000      206                    1020
18          01/18/2000      207                    1025
19          01/19/2000      208                    1030
20          01/20/2000      209                    1035 

(20 row(s) affected) 

Table 'Accounts'. Scan count 37, logical reads 37, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

We can see that even for only a few records the T-SQL solution produced 37 logical reads, the query plan is quite complex, and there are a lot of table scans.

If we add a clustered primary key on the ID

ALTER TABLE dbo.Accounts ADD CONSTRAINT PK_Accounts PRIMARY KEY CLUSTERED(ID)

Then the plan is better, as the three table scans in the query plan are replaced by one clustered index scan and two clustered index seeks. But even then the plan is quite complex.

CLR Solution test

SET STATISTICS IO ON
GO
SELECT
    Acc.ID
    ,CONVERT(varchar(10),TransactionDate,101) AS TransactionDate
    ,Balance
    ,ISNULL(dbo.fn_RunningTotalBigIntQueue(Balance, 0, 5, ROW_NUMBER() OVER(ORDER BY ID), NULL, 1), 0) AS RunningTotal
 FROM Accounts Acc
 ORDER BY ID
 OPTION(MAXDOP 1)
GO
SET STATISTICS IO OFF
GO

Here we have results:

ID          TransactionDate Balance                RunningTotal
----------- --------------- ---------------------- ----------------------
1           01/01/2000      100                    0
2           01/02/2000      101                    0
3           01/03/2000      102                    0
4           01/04/2000      103                    0
5           01/05/2000      104                    510
6           01/06/2000      105                    515
7           01/07/2000      106                    520
8           01/08/2000      107                    525
9           01/09/2000      108                    530
10          01/10/2000      109                    535
11          01/11/2000      200                    630
12          01/12/2000      201                    725
13          01/13/2000      202                    820
14          01/14/2000      203                    915
15          01/15/2000      204                    1010
16          01/16/2000      205                    1015
17          01/17/2000      206                    1020
18          01/18/2000      207                    1025
19          01/19/2000      208                    1030
20          01/20/2000      209                    1035 

(20 row(s) affected) 

Table 'Accounts'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Here we can see that the CLR solution produced exactly the same output as the T-SQL solution. What's more, we have only 1 logical read, as the table is small, and we also get a very simple execution plan with a single Table Scan and Sort. The Segment and Sequence Project (Compute Scalar) operators implement the ROW_NUMBER() function for the safety check, and the Compute Scalar is our function computing the running totals.

With the clustered primary key in place the plan contains one Clustered Index Scan without a Sort, as the clustered key already provides the expected order.

Conclusion

As we can see from this simple test, the CLR solution is unbeatable for such purposes. Even if we used another T-SQL solution such as the "quirky update", the CLR version would still have fewer reads and should be quicker.

When the number of records and the size of the last-X window increase, the T-SQL solution will kill even high-end servers, while you will see nearly no difference in speed with the CLR solution, as it only needs to allocate one queue of the last X elements. Even if we wanted to sum the last 1,000,000 rows, it would only need to allocate such a queue, which for bigint is about 8 MB plus some overhead of the queue.

Of course the CLR solution has issues with processing rows in the right order, but for that purpose the security check is implemented, so if rows are processed in a different order than expected an exception is thrown. In most cases the scenario is very simple, calculating the total from a single table, and in such cases this solution works without bigger problems. In case of complex queries with JOINs or other windowing functions, the records can be processed into a temp table first and the running totals then calculated on the temporary table, as the sketch below shows.
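A rough sketch of that approach, reusing the same function call as in the test above (the temporary table name is just an example):

-- Sketch only: materialize the complex query first, then calculate the running totals on the temp table
SELECT
    ID
    ,Balance
INTO #AccountData
FROM Accounts;              -- here would normally be the complex query with JOINs, windowing functions etc.

SELECT
    ID
    ,Balance
    ,ISNULL(dbo.fn_RunningTotalBigIntQueue(Balance, 0, 5, ROW_NUMBER() OVER(ORDER BY ID), NULL, 1), 0) AS RunningTotal
FROM #AccountData
ORDER BY ID
OPTION(MAXDOP 1);

DROP TABLE #AccountData;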

SQL Server – String splitting (T-SQL vs. CLR)

This blog post is inspired by a forum thread on SQL Server Central: Convert CSV values in three columns to rows.

On many sites there are comparisons of string splitting functions using pure T-SQL, but not many of them also mention the CLR way of doing it.

On small strings with a few elements the T-SQL solution is unbeatable in speed, but when the number of elements in the string increases, the situation is completely different.

In this comparison I will use one of the quickest T-SQL solutions vs. a CLR solution using regular expressions.

For the fastest CLR solution, check my blog post: Fastest CSV strings splitting using CLR (T-SQL vs. CLR revisited).

Probably the quickest known T-SQL solution is the function posted by Jeff Moden in the forum thread http://www.sqlservercentral.com/Forums/FindPost944589.aspx.

CREATE FUNCTION [dbo].[DelimitedSplit8K]
--/***************************************************************************************************
-- Purpose:
-- Split a given string at a given delimiter and return a list of the split elements (items).

-- Returns:
-- iTVF containing the following:
-- ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
-- Item       = Element value as a VARCHAR(8000)

-- CROSS APPLY Usage Example:
-----------------------------------------------------------------------------------------------------
----===== Conditionally drop the test tables to make reruns easier for testing.
--     -- (this is NOT a part of the solution)
--     IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL
--        DROP TABLE #JBMTest
--;
----===== Create and populate a test table on the fly (this is NOT a part of the solution).
-- SELECT *
--   INTO #JBMTest
--   FROM (
--         SELECT 1,'1,10,100,1000,10000,100000,1000000' UNION ALL
--         SELECT 2,'2000000,200000,20000,2000,200,20,2' UNION ALL
--         SELECT 3, 'This,is,a,test'                    UNION ALL
--         SELECT 4, 'and so is this'                    UNION ALL
--         SELECT 5, 'This, too (no pun intended)'
--        ) d (SomeID,SomeValue)
--;
--GO
----===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
-- SELECT test.SomeID, split.ItemNumber, split.Item
--   FROM #JBMTest test
--  CROSS APPLY
--        (
--         SELECT ItemNumber, Item
--           FROM dbo.DelimitedSplit8k(test.SomeValue,',')
--        ) split
--;
-----------------------------------------------------------------------------------------------------
-- Notes:
-- 1. Optimized for VARCHAR(7999) or less.  No testing or error reporting for truncation at 7999
--    characters is done.
-- 2. Optimized for single character delimiter.  Multi-character delimiters should be resolved
--    externally from this function.
-- 3. Optimized for use with CROSS APPLY.
-- 4. Does not "trim" elements just in case leading or trailing blanks are intended.
-- 5. If you don't know how a Tally table can be used to replace loops, please see the following...

--http://www.sqlservercentral.com/articles/T-SQL/62867/

-- 6. Changing this function to use VARCHAR(MAX) will cause it to run twice as slow.  It's just the
--    nature of VARCHAR(MAX) whether it fits in-row or not.
-- 7. Multi-machine testing for the method of using UNPIVOT instead of 10 SELECT/UNION ALLs shows
--    that the UNPIVOT method is quite machine dependent and can slow things down quite a bit.
-- 8. Performance testing shows using "TOP" for the limiting criteria of "N" is actually
--    slower and slightly more CPU intensive than the traditional WHERE N < LEN(@pString) + 2.
-- 9. Performance testing shows using ORDER BY (SELECT x) where "x" is anything is actually
--    slower and slightly more CPU intensive than the traditional ORDER BY (SELECT N).

-- Credits:
-- This code is the product of many people's efforts including but not limited to the following:
-- cteTally concept originally by Iztek Ben Gan and "decimalized" by Lynn Pettis (and others) for a
-- bit of extra speed and finally redacted by Jeff Moden for a different slant on readability and
-- compactness. Hat's off to Paul White for his simple explanations of CROSS APPLY. Finally,
-- special thanks to Erland Sommarskog for his tireless efforts to help people understand
-- what you can actually do with T-SQL. I also thank whoever wrote the first article I ever saw
-- on "numbers tables" which is located at the following URL ...

--http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html

-- Revision History:
-- Rev 00 - 20 Jan 2010 - Concept: Lynn Pettis and others.
--                        Redaction/Implementation: Jeff Moden
--        - Base 10 redaction and reduction for CTE.  (Total rewrite)

-- Rev 01 - 13 Mar 2010 - Jeff Moden
--        - Removed one additional concatenation and one subtraction from the SUBSTRING in the
--          SELECT List for that tiny bit of extra speed.

-- Rev 02 - 14 Apr 2010 - Jeff Moden
--        - No code changes.  Added CROSS APPLY usage example to the header, some additional credits,
--          and extra documentation.

-- Rev 03 - 18 Apr 2010 - Jeff Moden
--        - No code changes.  Added notes 7, 8, and 9 about certain "optimizations" that dont'
--          actually work for this type of function.

-- Rev 04 - 29 Jun 2010 - Jeff Moden
--        - Added WITH SCHEMABINDING thanks to a note by Paul White.  This prevents an unnecessary
--          "Table Spool" when the function is used in an UPDATE statement even though the function
--          makes no external references.
--***************************************************************************************************/
--===== Define I/O parameters
        (
        @pString    VARCHAR(7999),
        @pDelimiter CHAR(1)
        )
RETURNS TABLE
   WITH SCHEMABINDING
AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values up to
     -- 10,000... enough to cover VARCHAR(8000)
WITH
      E1(N) AS ( --=== Create Ten 1's
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 UNION ALL
                 SELECT 1 UNION ALL SELECT 1 --10
               ),
      E2(N) AS (SELECT 1 FROM E1 a, E1 b),   --100
      E4(N) AS (SELECT 1 FROM E2 a, E2 b),   --10,000
cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT N)) FROM E4)
--===== Do the split
 SELECT ROW_NUMBER() OVER (ORDER BY N) AS ItemNumber,
        SUBSTRING(@pString, N, CHARINDEX(@pDelimiter, @pString + @pDelimiter, N) - N) AS Item
   FROM cteTally
  WHERE N < LEN(@pString) + 2
    AND SUBSTRING(@pDelimiter + @pString, N, 1) = @pDelimiter
;
GO
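
A quick standalone call, just to illustrate the shape of the output:

SELECT ItemNumber, Item
FROM dbo.DelimitedSplit8K('10,20,30', ',');

-- ItemNumber  Item
-- ----------- ----
-- 1           10
-- 2           20
-- 3           30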

For the CLR solution we will use the RegEx function I mentioned in one of my previous blog posts: SQL Server 2005 and SQL Server 2008 Regular Expressions and pattern matching.
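
The function definition is in that post; for this comparison what matters is the shape of its output as used below: a matchId, a groupId and a value column, where group 0 holds the whole match and group 1 holds the captured value. A small illustrative call (assuming the function behaves as in the test queries below):

SELECT matchId, groupId, value
FROM dbo.fn_RegExMatches('10,20,30', '(\d+?)(?:,|$)');
-- For the first match, groupId = 0 returns the whole match '10,' and groupId = 1 returns the captured value '10', and so on.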

To make the comparison more interesting I will use the same table structure as Jeff Moden used in his post http://www.sqlservercentral.com/Forums/FindPost988659.aspx. You can also take a look at the comparison of different T-SQL solutions made by Jeff in the mentioned thread.

Test data preparation

We will build 3 test data tables: one for strings with 16 elements (CsvTest), one for 100 elements (CsvTest3) and one for 1333 elements (CsvTest2). The 1333 elements are close to the varchar(8000) limit for which the T-SQL version of DelimitedSplit8K is optimized. (A quick sanity check of the element counts follows the three scripts below.)

WITH TallyBase AS (
    SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL
    SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N
),
Tally AS (
    SELECT
        CAST(ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS INT) as N
    FROM TallyBase t1, TallyBase t2, TallyBase T3, TallyBase T4
)
SELECT TOP (10000) --Controls the number of rows in the test table
    ISNULL(ROW_NUMBER() OVER (ORDER BY(SELECT NULL)),0) AS RowNum,
    (
        SELECT CAST(STUFF( --=== STUFF gets rid of the leading comma
                ( --=== This builds CSV row with a leading comma
                SELECT TOP (16) --Controls the number of CSV elements in each row
                    ',' + CAST(ABS(CHECKSUM(NEWID()))%100000 AS VARCHAR(10))
                FROM Tally t3      --Classic cross join pseudo-cursor
                CROSS JOIN Tally t4 --can produce row sets up 121 million.
                WHERE t1.N <> t3.N --Without this line, all rows would be the same
                FOR XML PATH('')
                )
                ,1,1,'') AS NVARCHAR(MAX))
                ) AS CsvParameter
INTO CsvTest
FROM Tally t1        --Classic cross join pseudo-cursor
CROSS JOIN Tally t2;  --can produce row sets up 121 million.
GO

ALTER TABLE dbo.CsvTest ADD PRIMARY KEY (RowNum) WITH FILLFACTOR=100
GO

WITH TallyBase AS (
    SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL
    SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N
),
Tally AS (
    SELECT
        CAST(ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS INT) as N
    FROM TallyBase t1, TallyBase t2, TallyBase T3, TallyBase T4
)
SELECT TOP (10000) --Controls the number of rows in the test table
    ISNULL(ROW_NUMBER() OVER (ORDER BY(SELECT NULL)),0) AS RowNum,
    (
        SELECT CAST(STUFF( --=== STUFF gets rid of the leading comma
                ( --=== This builds CSV row with a leading comma
                SELECT TOP (100) --Controls the number of CSV elements in each row
                    ','+CAST(ABS(CHECKSUM(NEWID()))%100000 AS VARCHAR(10))
                FROM Tally t3      --Classic cross join pseudo-cursor
                CROSS JOIN Tally t4 --can produce row sets up 121 million.
                WHERE t1.N <> t3.N --Without this line, all rows would be the same
                FOR XML PATH('')
                )
                ,1,1,'') AS NVARCHAR(MAX))
                ) AS CsvParameter
INTO CsvTest3
FROM Tally t1        --Classic cross join pseudo-cursor
CROSS JOIN Tally t2;  --can produce row sets up 121 million.
GO

ALTER TABLE dbo.CsvTest3 ADD PRIMARY KEY (RowNum) WITH FILLFACTOR=100
GO

WITH TallyBase AS (
    SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL
    SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N
),
Tally AS (
    SELECT
        CAST(ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS INT) as N
    FROM TallyBase t1, TallyBase t2, TallyBase T3, TallyBase T4
)
SELECT TOP (10000) --Controls the number of rows in the test table
    ISNULL(ROW_NUMBER() OVER (ORDER BY(SELECT NULL)),0) AS RowNum,
    (
        SELECT CAST(STUFF( --=== STUFF gets rid of the leading comma
                ( --=== This builds CSV row with a leading comma
                SELECT TOP (1333) --Controls the number of CSV elements in each row
                    ','+CAST(ABS(CHECKSUM(NEWID()))%100000 AS VARCHAR(10))
                FROM Tally t3      --Classic cross join pseudo-cursor
                CROSS JOIN Tally t4 --can produce row sets up 121 million.
                WHERE t1.N <> t3.N --Without this line, all rows would be the same
                FOR XML PATH('')
                )
                ,1,1,'') AS NVARCHAR(MAX))
                ) AS CsvParameter
INTO CsvTest2
FROM Tally t1        --Classic cross join pseudo-cursor
CROSS JOIN Tally t2;  --can produce row sets up 121 million.
GO

ALTER TABLE dbo.CsvTest2 ADD PRIMARY KEY (RowNum) WITH FILLFACTOR=100
GO
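
A quick sanity check (not part of the test itself) to confirm the number of elements per row, simply by counting the commas:

-- Sanity check only: each CsvParameter should contain 16, 100 and 1333 elements respectively
SELECT 'CsvTest' AS TableName, MIN(LEN(CsvParameter) - LEN(REPLACE(CsvParameter, ',', '')) + 1) AS Elements FROM dbo.CsvTest
UNION ALL
SELECT 'CsvTest3', MIN(LEN(CsvParameter) - LEN(REPLACE(CsvParameter, ',', '')) + 1) FROM dbo.CsvTest3
UNION ALL
SELECT 'CsvTest2', MIN(LEN(CsvParameter) - LEN(REPLACE(CsvParameter, ',', '')) + 1) FROM dbo.CsvTest2;
GO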

Speed testing and comparison

The speed comparison tests will be done using the queries below.

--T-SQL version
DBCC DROPCLEANBUFFERS
GO
DBCC FREEPROCCACHE
GO

SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
--===  T-SQL Version  ===
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.ItemNumber,
    @ItemValue = V.Item
FROM dbo.CsvTest D
CROSS APPLY dbo.DelimitedSplit8K(D.CsvParameter, ',') V
GO

SET STATISTICS TIME OFF
SET STATISTICS IO OFF
GO

--CLR Version
DBCC DROPCLEANBUFFERS
GO
DBCC FREEPROCCACHE
GO

SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
--===  CLR-RegEx Version  ===
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.matchId,
    @ItemValue = CAST(V.value as int)
FROM dbo.CsvTest D
CROSS APPLY dbo.fn_RegExMatches(D.CsvParameter, '(\d+?)(?:,|$)') V
WHERE V.groupId = 1 --Return only GroupID=1 as this group contains the value; GroupID=0 contains the whole RegEx match
GO

SET STATISTICS TIME OFF
SET STATISTICS IO OFF
GO

The CLR RegEx version uses the following regular expression: '(\d+?)(?:,|$)'. It would also be possible to use simply '\d+' in this situation.
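
An illustrative variant of the test query with the simpler pattern: since '\d+' has no capturing group, the value would be read from group 0 (the whole match) instead, assuming the function exposes group 0 the same way as in the queries above.

DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;
SELECT
    @RowNum = D.RowNum,
    @ItemNumber = V.matchId,
    @ItemValue = CAST(V.value as int)
FROM dbo.CsvTest D
CROSS APPLY dbo.fn_RegExMatches(D.CsvParameter, '\d+') V
WHERE V.groupId = 0 --With no capturing group, GroupID=0 (the whole match) is the value itself
GO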

Results for 16 elements in string

--===  T-SQL Version  ===
SQL Server parse and compile time:
   CPU time = 15 ms, elapsed time = 26 ms.
Table 'CsvTest'. Scan count 1, logical reads 262, physical reads 2, read-ahead reads 281, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 1841 ms,  elapsed time = 1922 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 1 ms.

--===  CLR-RegEx Version  ===
SQL Server parse and compile time:
   CPU time = 16 ms, elapsed time = 22 ms.
Table 'CsvTest'. Scan count 5, logical reads 781, physical reads 5, read-ahead reads 281, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 77173 ms,  elapsed time = 25480 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

and SQL Profiler output:

16 elements string splitting profiler output

Here we can see that the T-SQL version is unbeatable: it took about 2 seconds, while the CLR-RegEx took more than 25 seconds. We can also notice that the T-SQL version had nearly 3 times fewer reads.

Results for 100 elements in string

--=== T-SQL Version ===
SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 27 ms.
Table 'CsvTest3'. Scan count 1, logical reads 1675, physical reads 4, read-ahead reads 1677, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 

SQL Server Execution Times:
    CPU time = 19890 ms, elapsed time = 20019 ms.
SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms. 

--=== CLR-RegEx Version ===
SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 18 ms.
Table 'CsvTest3'. Scan count 5, logical reads 1836, physical reads 5, read-ahead reads 1677, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 

SQL Server Execution Times:
    CPU time = 79639 ms, elapsed time = 26602 ms.
SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 1 ms.

and SQL Profiler output:

100 elements string splitting profiler output

Here we can see that the duration of the T-SQL version rapidly increased to 20 seconds, while the duration of the CLR-RegEx version is nearly the same as for 16 elements per string, not quite 26 and a half seconds. The reads are also nearly the same.

Results for 1333 elements in string

--=== T-SQL Version ===
SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 25 ms.
Table 'CsvTest2'. Scan count 1, logical reads 74, physical reads 2, read-ahead reads 72, lob logical reads 367392280, lob physical reads 2500, lob read-ahead reads 0. 

SQL Server Execution Times:
    CPU time = 2745696 ms, elapsed time = 2758753 ms.
SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms. 

--=== CLR-RegEx Version ===
SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 43 ms.
Table 'CsvTest2'. Scan count 5, logical reads 217, physical reads 5, read-ahead reads 72, lob logical reads 320860, lob physical reads 2494, lob read-ahead reads 170000.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 

SQL Server Execution Times:
    CPU time = 170228 ms, elapsed time = 59624 ms.
SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

And SQL Profiler output:

1333 elements string splitting profiler output

From these results we can see that once the length of the string grows and the number of elements increases to 1333, the T-SQL version is nearly unusable and cannot compete with the CLR-RegEx. Here the T-SQL solution took nearly 46 minutes while the CLR-RegEx took only 1 minute. The T-SQL solution also now has 1 144 times more reads.

CLR-RegEx testing of very long strings with high element counts

Here are some tests of the CLR-RegEx solution on very large strings. Each test splits a single very long string using the CLR-RegEx function; the TOP clause in the testing query below controls the number of elements in the CSV. We will use it for several scenarios.

DECLARE @csv varchar(max);
DECLARE @RowNum INT, @ItemNumber INT, @ItemValue INT;

WITH TallyBase AS (
    SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL
    SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N UNION ALL SELECT 1 N
),
Tally AS (
    SELECT
        CAST(ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS INT) as N
    FROM TallyBase t1, TallyBase t2, TallyBase T3, TallyBase T4
)
SELECT @csv = CAST(STUFF( --=== STUFF gets rid of the leading comma
        ( --=== This builds CSV row with a leading comma
        SELECT TOP (100000) --Controls the number of CSV elements in each row
            ',' + CAST(ABS(CHECKSUM(NEWID()))%100000 AS VARCHAR(10))
        FROM Tally t3      --Classic cross join pseudo-cursor
        CROSS JOIN Tally t4 --can produce row sets up 121 million.
        FOR XML PATH('')
        )
        ,1,1,'') AS NVARCHAR(MAX))

SET STATISTICS TIME ON

SELECT
    @ItemNumber = V.matchId,
    @ItemValue = CAST(V.value as int)
FROM dbo.fn_RegExMatches(@csv, '(\d+?)(?:,|$)') V
WHERE V.groupId = 1 --Return only GroupID=1 as this group contains the value; GroupID=0 contains the whole RegEx match
GO

SET STATISTICS TIME OFF

100 000 elements

 SQL Server Execution Times:
   CPU time = 484 ms,  elapsed time = 496 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

100 000 elements string splitting profiler output

1 000 000 elements

 SQL Server Execution Times:
   CPU time = 4352 ms,  elapsed time = 20713 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

1 000 000 elements string splitting profiler output

10 000 000 elements

 SQL Server Execution Times:
   CPU time = 44320 ms,  elapsed time = 88735 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

10 000 000 elements string splitting profiler output

Conclusion

From the tests and measurements done in this blog post we can see that when we need to split relatively small strings with a small element count (<100 elements), the T-SQL solution is the quickest possible and unbeatable.

When we have a moderate count of elements in the string (about 100), it does not matter much whether we use the T-SQL or the CLR-RegEx solution.

Once the count of elements in the string increases (more than 100), the CLR-RegEx starts being unbeatable, and with high element counts (more than 1000, or even millions of elements) the T-SQL solution is nearly unusable. We can see this in the results for 10 000 rows with 1333 elements each: the T-SQL version took 46 minutes, while the CLR-RegEx took only 1 minute.

Also, during all the tests there was no occurrence of an AppDomain restart (unload/load) when manipulating large strings using the CLR-RegEx solution. On some blogs I have found reports of AppDomain unloading and loading when using CLR in combination with the String.Split method; the RegEx solution does not have such problems.