Generating a Partition Function – sp_tblCreatePartitionFunction available on GitHub

If you are working with large databases, you have probably come to a situation where you need to partition large tables.

To be able to partition a table, you need to create a Partition Function. Creating a partition function is quite easy; however, if you need to create a larger number of partitions in the initial phase, it can be pretty annoying, as you have to define all the ranges for each partition. It is also easy to make a typo in the list of boundary values and, as a result, create a partition function with wrong partition alignment.

To simplify that process, I'm presenting here a stored procedure which takes care of this and generates the partition function automatically based on input parameters.

sp_tblCreatePartitionFunction

As mentioned above, sp_tblCreatePartitionFunction takes care of generating the partition function based on input parameters.

You can find the complete source code in my SQL-Scripts project on GitHub under the TablesManagement\Partitioning folder.

The stored procedure supports generating ranges based on the smallint, int, bigint, date, datetime and datetime2 data types.

The stored procedure generates a partition function with boundary values between the @rangeStart and @rangeEnd parameters.

The stored procedure is marked as a system stored procedure in the script, which means it can operate in the context of the current user database.
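
For reference, marking a procedure in the master database as a system object is typically done with the undocumented sp_MS_marksystemobject procedure. Whether the script uses exactly this call is my assumption; a minimal sketch of the idea looks like this:

USE master
GO
-- undocumented call: flags the procedure as a system object so it executes
-- in the context of the calling (current user) database
EXEC sys.sp_MS_marksystemobject N'sp_tblCreatePartitionFunction'
GO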

Parameters

The input parameters define the ranges and the way the partition function is generated.

The available input parameters are @pfName, @rangeStart, @rangeEnd, @boundaryType, @incrementValue, @incrementUnit, @useIntegerDates, @integerFormatType and @printScriptOnly.

@pfName

Defines the partition function name. The parameter data type is nvarchar(128), which corresponds to sysname.

@rangeStart

Defines the starting range for generation of the partition function. The parameter is sql_variant; you have to pass a value of a data type supported for the range. As mentioned above, the supported data types are smallint, int, bigint, date, datetime and datetime2.

To simplify specification of the range, you can pass the parameter as a string. If the value passed as a string represents a date/time, it is automatically converted to the datetime data type.

You can also pass a value with a data type specifier. The data type specifier must be the first character of the string and the value must follow. The supported data type specifiers are 'D' - date, 'T' - datetime, 'B' - bigint, 'I' - int and 'S' - smallint.

Below are some examples of supported strings passed as @rangeStart (an example call using a specifier follows the table):

Sample value    Result
'2016-01-01'    Converted to datetime 2016-01-01
'20160101'      Converted to datetime 2016-01-01
'D2016-01-01'   Converted to date 2016-01-01
'T20160101'     Converted to datetime 2016-01-01
'B1010'         Converted to bigint 1010
'I1010'         Converted to int 1010
'S1010'         Converted to smallint 1010
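
For illustration, a call using the 'D' specifier to force a date (rather than datetime) range could look like the sketch below; the parameter values are only an example:

EXEC sp_tblCreatePartitionFunction
     @pfName = 'myDatePf'
    ,@rangeStart = 'D2016-01-01'
    ,@rangeEnd = 'D2016-12-31'
    ,@incrementValue = 1
    ,@incrementUnit = 'MONTH'
    ,@useIntegerDates = 0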

A @rangeStart value representing a date can be altered during the partition function generation based on the @incrementUnit parameter; the @rangeStart is shifted to the beginning of the corresponding unit. See the @incrementUnit parameter for details.

The @rangeStart is included in the generated ranges.

@rangeEnd

Represents the end of the range which should be generated for the partition function. The supported data types are the same as for @rangeStart. See @rangeStart for details.

The @rangeEnd is inclusive in the range, but may not be included in the final generated partition function, depending on the @incrementValue and @incrementUnit parameters. See those parameters for details.

@boundaryType

Specifies the boundary type of the boundary values for the partition function. It can be either LEFT or RIGHT and defines whether each boundary value is included in the partition to its left or to its right. The default value is RIGHT.
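
As a quick illustration of the difference, the built-in $PARTITION function shows which partition a boundary value falls into (pf_demo is just a throwaway function created for this example):

-- with RANGE RIGHT, each boundary value belongs to the partition on its right
CREATE PARTITION FUNCTION pf_demo (int) AS RANGE RIGHT FOR VALUES (20170101, 20180101)
GO
SELECT $PARTITION.pf_demo(20170101) AS partition_number  -- returns 2
GO
DROP PARTITION FUNCTION pf_demo
GO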

@incrementValue

Defines the increment of the boundary values. This means that the @rangeStart value is repeatedly incremented by @incrementValue until @rangeEnd is reached.

@incrementUnit

Specifies the unit of the increment. It is used only for range values representing dates. The allowed units are YEAR, MONTH, WEEK, ISO_WEEK and DAY.

In each iteration step, the @rangeStart value is incremented by @incrementValue units of @incrementUnit.

The @incrementUnit may also have an impact on the @rangeStart value. If @rangeStart does not point to the first day of the respective unit, it is automatically shifted to the beginning of the corresponding unit. A sketch of how such a shift can be computed follows the table below.

Samples of @rangeStart shifting:

@rangeStart    @incrementUnit   @rangeStart shifted to
'2016-03-17'   YEAR             '2016-01-01'
'2016-03-17'   MONTH            '2016-03-01'
'2016-03-17'   WEEK             '2016-03-14'
'2016-01-01'   ISO_WEEK         '2015-12-28'
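
A sketch of how such shifts can be computed in T-SQL (only an illustration of the idea on SQL Server 2012+, not necessarily the procedure's internal code):

DECLARE @rs date = '2016-03-17';
SELECT
     DATEFROMPARTS(YEAR(@rs), 1, 1)                                      AS shifted_to_year   -- 2016-01-01
    ,DATEFROMPARTS(YEAR(@rs), MONTH(@rs), 1)                             AS shifted_to_month  -- 2016-03-01
    ,DATEADD(DAY, -(DATEPART(WEEKDAY, @rs) + @@DATEFIRST - 2) % 7, @rs)  AS shifted_to_week   -- 2016-03-14 (Monday), independent of DATEFIRST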

@useIntegerDates

Specifies whether boundary values representing dates in the partition function should be represented as the int data type or as the corresponding date, datetime or datetime2 data type. In case of @useIntegerDates = 0, the boundary value is always the first day of the corresponding @incrementUnit.

In case of @useIntegerDates = 1, the boundary value is expressed as an integer and the value representing the range depends on the @integerFormatType parameter. See @integerFormatType for details.

It is used only for date ranges. The default value is 1.

@integerFormatType

Specifies the integer format of the date range value. When the partition function is being generated and date ranges are used, the range value is internally always represented by the first day of the corresponding range. If @useIntegerDates = 1, the final boundary value is generated based on the format type.

Supported format types are:

FormatType   Boundary value formatting
1            yyyyMMdd. For example 20160101.
2            yyyyxx(x), where xx(x) corresponds to the appropriate month, week or day within the particular year. For example 2016053 represents day 53 of year 2016, and 201643 represents week or ISO week 43 of year 2016.

Default value is 2.
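
A sketch of how the two formats can be derived from a boundary date (again only an illustration; the procedure's own formatting logic may differ):

DECLARE @boundary date = '2016-10-01';
SELECT
     CONVERT(int, CONVERT(char(8), @boundary, 112))            AS format_type_1           -- 20161001 (yyyyMMdd)
    ,YEAR(@boundary) * 1000 + DATEPART(DAYOFYEAR, @boundary)   AS format_type_2_day       -- 2016275 (day 275 of 2016)
    ,YEAR(@boundary) * 100  + DATEPART(ISO_WEEK, @boundary)    AS format_type_2_iso_week  -- 201639 (ISO week 39 of 2016)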

@printScriptOnly

Specifies whether only the partition function CREATE script is printed, or the partition function is also automatically created.

The default value is 1, which means that only the script is printed.

Samples

Below are some sample usages of the stored procedure.

Sample 1

sp_tblCreatePartitionFunction
    @pfName = 'myPf'
   ,@rangeStart = 1
   ,@rangeEnd = 1001
   ,@boundaryType = 'RIGHT'
   ,@incrementValue = 100

CREATE PARTITION FUNCTION [myPf](int) AS RANGE RIGHT FOR VALUES (
    1
   ,101
   ,201
   ,301
   ,401
   ,501
   ,601
   ,701
   ,801
   ,901
   ,1001
)

Sample 2

sp_tblCreatePartitionFunction
    @pfName = 'myPf'
   ,@rangeStart = '2016-01-01'
   ,@rangeEnd = '2020-12-31'
   ,@boundaryType = 'RIGHT'
   ,@incrementValue = 1
   ,@incrementUnit = 'YEAR'
   ,@useIntegerDates = 1
   ,@integerFormatType = 2

CREATE PARTITION FUNCTION [myPf](int) AS RANGE RIGHT FOR VALUES (
    2016
   ,2017
   ,2018
   ,2019
   ,2020
)

Sample 3

sp_tblCreatePartitionFunction
    @pfName = 'myPf'
   ,@rangeStart = '2016-01-01'
   ,@rangeEnd = '2016-12-31'
   ,@boundaryType = 'LEFT'
   ,@incrementValue = 1
   ,@incrementUnit = 'MONTH'
   ,@useIntegerDates = 1
   ,@integerFormatType = 2

CREATE PARTITION FUNCTION [myPf](int) AS RANGE LEFT FOR VALUES (
    20160101
   ,20160201
   ,20160301
   ,20160401
   ,20160501
   ,20160601
   ,20160701
   ,20160801
   ,20160901
   ,20161001
   ,20161101
   ,20161201
)

Sample 4

sp_tblCreatePartitionFunction
    @pfName = 'myPf'
   ,@rangeStart = '2016-01-01'
   ,@rangeEnd = '2016-12-31'
   ,@boundaryType = 'LEFT'
   ,@incrementValue = 1
   ,@incrementUnit = 'MONTH'
   ,@useIntegerDates = 0

CREATE PARTITION FUNCTION [myPf](datetime) AS RANGE LEFT FOR VALUES (
    '20160101'
   ,'20160201'
   ,'20160301'
   ,'20160401'
   ,'20160501'
   ,'20160601'
   ,'20160701'
   ,'20160801'
   ,'20160901'
   ,'20161001'
   ,'20161101'
   ,'20161201'
)
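
When @printScriptOnly = 0 and the partition function is actually created, the generated boundaries can be checked against the catalog views:

SELECT pf.name, pf.boundary_value_on_right, prv.boundary_id, prv.value
FROM sys.partition_functions pf
INNER JOIN sys.partition_range_values prv ON prv.function_id = pf.function_id
WHERE pf.name = 'myPf'
ORDER BY prv.boundary_id;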

SQL Server is wasting space when altering a fixed-length column

Recently I was going through the archives of posts on Kalen Delaney's blog and came across a very interesting post: Did you know? — Altering the length of a fixed-length column. This article explains how SQL Server wastes space when you alter a fixed-length column and increase its length.

It is interesting, so I wanted to take a closer look at the physical database pages to see what happens and how the data are stored after the alter is done.

Test data preparation and initial view of the data

So first, let's prepare a test table with some test data.

CREATE TABLE dbo.AlterTest (
    ID int NOT NULL IDENTITY(1,1),
    col1 char(2000),
    col2 char(1000),
    col3 int
)
GO
INSERT INTO dbo.AlterTest (col1, col2, col3)
VALUES('aaa', 'bbb', ABS(BINARY_CHECKSUM(NEWID())))
GO 4  -- execute the batch 4 times to insert 4 rows

Now if we take a look at the column information in the metadata tables, using Kalen's query against sys.system_internals_partition_columns, we can see the offsets of the table columns.

SELECT  
    c.name AS column_name, 
    column_id, 
    max_inrow_length,
    pc.system_type_id, 
    leaf_offset 
FROM sys.system_internals_partition_columns pc
INNER JOIN sys.partitions p ON p.partition_id = pc.partition_id 
INNER JOIN sys.columns c ON column_id = partition_column_id AND c.object_id = p.object_id
WHERE p.object_id=object_id('AlterTest');
column_name   column_id   max_inrow_length system_type_id leaf_offset
------------- ----------- ---------------- -------------- -----------
ID            1           4                56             4
col1          2           2000             175            8
col2          3           1000             175            2008
col3          4           4                56             3008

As we can see, the physical order matches the order in which the columns were defined in the CREATE TABLE statement.

Once we have the data in the table, let's take a look at how the data are stored. The query below works only on SQL Server 2008+ and uses the undocumented virtual column %%physloc%%, which provides information about a row's physical location in the database, and the undocumented function sys.fn_PhysLocCracker, which cracks the physical location into a human-readable FileID, PageID and SlotID.

SELECT
    DB_ID() AS DBID
    ,pl.*
    ,t.*
FROM dbo.AlterTest t
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) pl
DBID   file_id     page_id     slot_id     ID          col1    col2    col3
------ ----------- ----------- ----------- ----------- ------- ------ -----------
16     1           168         0           1           aaa     bbb    587800818
16     1           168         1           2           aaa     bbb    1396332306
16     1           171         0           3           aaa     bbb    844570652
16     1           171         1           4           aaa     bbb    1332601405

Let's take a look at the first physical page, 168.

dbcc traceon (3604,-1)  -- send DBCC output to the client instead of the error log
GO
dbcc page(16,1,168,3)   -- DBCC PAGE(dbid, file_id, page_id, print_option); 3 = include per-row interpretation
GO

The partial results are below

PAGE: (1:168)

BUFFER:

BUF @0x00000005011003C0

bpage = 0x000000048C308000          bhash = 0x0000000000000000          bpageno = (1:168)
bdbid = 16                          breferences = 0                     bcputicks = 0
bsampleCount = 0                    bUse1 = 44191                       bstat = 0xb
blog = 0xab21cccc                   bnext = 0x0000000000000000          

PAGE HEADER:

Page @0x000000048C308000

m_pageId = (1:168)                  m_headerVersion = 1                 m_type = 1
m_typeFlagBits = 0x0                m_level = 0                         m_flagBits = 0x8000
m_objId (AllocUnitId.idObj) = 98    m_indexId (AllocUnitId.idInd) = 256 
Metadata: AllocUnitId = 72057594044350464                                
Metadata: PartitionId = 72057594039959552                                Metadata: IndexId = 0
Metadata: ObjectId = 549576996      m_prevPage = (0:0)                  m_nextPage = (0:0)
pminlen = 3012                      m_slotCnt = 2                       m_freeCnt = 2062
m_freeData = 6126                   m_reservedCnt = 0                   m_lsn = (37:227:3)
m_xactReserved = 0                  m_xdesId = (0:0)                    m_ghostRecCnt = 0
m_tornBits = 0                      DB Frag ID = 1                      

Allocation Status

GAM (1:2) = ALLOCATED               SGAM (1:3) = ALLOCATED              
PFS (1:1) = 0x62 MIXED_EXT ALLOCATED  80_PCT_FULL                        DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED           

Slot 0 Offset 0x60 Length 3015

Record Type = PRIMARY_RECORD        Record Attributes =  NULL_BITMAP    Record Size = 3015

Memory Dump @0x000000002054A060

0000000000000000:   1000c40b 01000000 61616120 20202020 20202020  ..Ä.....aaa         
0000000000000014:   20202020 20202020 20202020 20202020 20202020                      
0000000000000028:   20202020 20202020 20202020 20202020 20202020                      
...
...
00000000000007BC:   20202020 20202020 20202020 20202020 20202020                      
00000000000007D0:   20202020 20202020 62626220 20202020 20202020          bbb         
00000000000007E4:   20202020 20202020 20202020 20202020 20202020                      
...
...
0000000000000BA4:   20202020 20202020 20202020 20202020 20202020                      
0000000000000BB8:   20202020 20202020 f2200923 040000                     ò     #...

Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4

ID = 1                              

Slot 0 Column 2 Offset 0x8 Length 2000 Length (physical) 2000

col1 = aaa                                                                                                       

Slot 0 Column 3 Offset 0x7d8 Length 1000 Length (physical) 1000

col2 = bbb                                                                                                       

Slot 0 Column 4 Offset 0xbc0 Length 4 Length (physical) 4

col3 = 587800818                    

Slot 1 Offset 0xc27 Length 3015

We can see that the physical storage corresponds to the metadata stored in the system view sys.system_internals_partition_columns.

Altering the column length and analyzing impacts

Now let's alter the table, increase the length of Col1 to 3000 characters, and take a look at what happens.

ALTER TABLE dbo.AlterTest
    ALTER COLUMN Col1 char(3000)
GO

If we take a look at sys.system_internals_partition_columns, we will see the following:

column_name   column_id   max_inrow_length system_type_id leaf_offset
------------- ----------- ---------------- -------------- -----------
ID            1           4                56             4
col2          3           1000             175            2008
col3          4           4                56             3008
col1          2           3000             175            3012

We can see that the offset of Col1 has changed and the column was moved to the end of the record. From here we can see that the original 2000 bytes were wasted. If we take a look at the physical page 168 as above, we will see that there is no change in the physical page, as this change to the column is a metadata-only change.

Impact on new records

So let's take a look at what impact this has on new records added to the table.

INSERT INTO dbo.AlterTest (col1, col2, col3)
VALUES('ccc', 'ddd', ABS(BINARY_CHECKSUM(NEWID())))
GO 4
SELECT
    DB_ID() AS DBID
    ,pl.*
    ,t.*
FROM dbo.AlterTest t
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) pl
DBID   file_id     page_id     slot_id     ID          col1   col2  col3
------ ----------- ----------- ----------- ----------- ------ ----- -----------
16     1           168         0           1           aaa    bbb   587800818
16     1           168         1           2           aaa    bbb   1396332306
16     1           171         0           3           aaa    bbb   844570652
16     1           171         1           4           aaa    bbb   1332601405
16     1           175         0           5           ccc    ddd   2059368981
16     1           177         0           6           ccc    ddd   1449062892
16     1           178         0           7           ccc    ddd   267569086
16     1           179         0           8           ccc    ddd   1325350591

From the results we can see that the first 4 originally inserted records occupy only two pages, as two records were stored per database page. After the ALTER, each newly inserted record occupies its own page. This is because the record length has increased not only by the 1000 characters by which Col1 was extended, but also by the original 2000 bytes that are now wasted; therefore the fixed data length in the row increased from 3008 bytes (4 + 2000 + 1000 + 4) to 6008 bytes (4 + 2000 dropped + 1000 + 4 + 3000).

Now let's take a closer look at a physical page, for example the first page occupied by the newly inserted data (page 175).

dbcc page(16,1,175,3)
GO

Partial results of the DBCC command are here:

PAGE: (1:175)

BUFFER:

BUF @0x0000000503366540

bpage = 0x00000004D7F84000          bhash = 0x000000067753ED81          bpageno = (1:175)
bdbid = 16                          breferences = 0                     bcputicks = 0
bsampleCount = 0                    bUse1 = 45327                       bstat = 0x10b
blog = 0x212121cc                   bnext = 0x0000000000000000          

PAGE HEADER:

Page @0x00000004D7F84000

m_pageId = (1:175)                  m_headerVersion = 1                 m_type = 1
m_typeFlagBits = 0x0                m_level = 0                         m_flagBits = 0x8000
m_objId (AllocUnitId.idObj) = 98    m_indexId (AllocUnitId.idInd) = 256 
Metadata: AllocUnitId = 72057594044350464                                
Metadata: PartitionId = 72057594039959552                                Metadata: IndexId = 0
Metadata: ObjectId = 549576996      m_prevPage = (0:0)                  m_nextPage = (0:0)
pminlen = 6012                      m_slotCnt = 1                       m_freeCnt = 2079
m_freeData = 6111                   m_reservedCnt = 0                   m_lsn = (37:390:11)
m_xactReserved = 0                  m_xdesId = (0:0)                    m_ghostRecCnt = 0
m_tornBits = 0                      DB Frag ID = 1                      

Allocation Status

GAM (1:2) = ALLOCATED               SGAM (1:3) = NOT ALLOCATED          
PFS (1:1) = 0x62 MIXED_EXT ALLOCATED  80_PCT_FULL                        DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED           

Slot 0 Offset 0x60 Length 6015

Record Type = PRIMARY_RECORD        Record Attributes =  NULL_BITMAP    Record Size = 6015

Memory Dump @0x000000002267A060

0000000000000000:   10007c17 05000000 df5af8d7 04000000 ffffffff  ..|.....ßZø×....ÿÿÿÿ
0000000000000014:   ffffff7f 9860f8d7 04000000 0040f8d7 04000000  ÿÿÿ..`ø×.....@ø×....
0000000000000028:   01000000 00000000 63000000 00000000 dd7618f4  ........c.......Ýv.ô
0000000000000794:   00000000 00000000 00000000 00000000 01000000  ....................
00000000000007A8:   000071d8 fe070000 c072210e 00000000 00000000  ..qØþ...Àr!.........
00000000000007BC:   00000000 00000000 00000000 00000000 00000000  ....................
00000000000007D0:   00000000 00000000 64646420 20202020 20202020  ........ddd         
00000000000007E4:   20202020 20202020 20202020 20202020 20202020                      
0000000000000BA4:   20202020 20202020 20202020 20202020 20202020                      
0000000000000BB8:   20202020 20202020 157abf7a 63636320 20202020          .z¿zccc     
0000000000000BCC:   20202020 20202020 20202020 20202020 20202020                      
0000000000000BE0:   20202020 20202020 20202020 20202020 20202020                      
000000000000175C:   20202020 20202020 20202020 20202020 20202020                      
0000000000001770:   20202020 20202020 20202020 050000                         ...

Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4

ID = 5                              

Slot 0 Column 67108865 Offset 0x8 Length 0 Length (physical) 2000

DROPPED = NULL                      

Slot 0 Column 3 Offset 0x7d8 Length 1000 Length (physical) 1000

col2 = ddd                                                                                                       

Slot 0 Column 4 Offset 0xbc0 Length 4 Length (physical) 4

col3 = 2059368981                   

Slot 0 Column 2 Offset 0xbc4 Length 3000 Length (physical) 3000

col1 = ccc

From the output above we can see that on the physical page there is now a Column 67108865 (DROPPED) following Column 1, and that Column 2 (col1) has physically moved to the end of the record at offset 0xbc4 (3012). Also, from the page dump we can see that the space from offset 0x8 to 0x7d7 contains leftover garbage and that those 2000 bytes are wasted.

As we can see, the original query against sys.system_internals_partition_columns doesn't show the column with ID 67108865 (DROPPED). That's because it joins to sys.columns, and the DROPPED column is no longer part of the table, but is still part of the partition. If we use the query without the join to sys.columns, it will also be shown in the output.

SELECT 
    partition_column_id, 
    max_inrow_length,
    pc.system_type_id, 
    leaf_offset,
    is_dropped 
FROM sys.system_internals_partition_columns pc
INNER JOIN sys.partitions p ON p.partition_id = pc.partition_id 
WHERE p.object_id=object_id('AlterTest');
partition_column_id max_inrow_length system_type_id leaf_offset is_dropped
------------------- ---------------- -------------- ----------- ---------
1                   4                56             4           0
67108865            2000             175            8           1
3                   1000             175            2008        0
4                   4                56             3008        0
2                   3000             175            3012        0

It seems that the dropped columns get IDs starting from 67108865, and the numbers increase as more columns are dropped. The DROPPED columns also have the flag is_dropped = 1.

Multiple updates of the column size

In the previous examples we looked at altering a single column, and in Kalen Delaney's post you can see the result of altering multiple columns: the space allocated for all the original columns is dropped and new space is allocated. But what happens if we alter the same column multiple times? Let's make a simple test.

CREATE TABLE AlterTest2 (
    id int not null identity(1,1),
    Col1 char(1000),
    Col2 char(1000),
    Col3 int
)
GO
ALTER TABLE AlterTest2
ALTER COLUMN Col1 char(2000)
GO
ALTER TABLE AlterTest2
ALTER COLUMN Col1 char(2500)
GO
SELECT 
    partition_column_id, 
    max_inrow_length,
    pc.system_type_id, 
    leaf_offset,
    is_dropped 
FROM sys.system_internals_partition_columns pc
INNER JOIN sys.partitions p ON p.partition_id = pc.partition_id 
WHERE p.object_id=object_id('AlterTest2');
partition_column_id max_inrow_length system_type_id leaf_offset is_dropped
------------------- ---------------- -------------- ----------- ----------
1                   4                56             4           0
67108865            1000             175            8           1
3                   1000             175            1008        0
4                   4                56             2008        0
67108866            2000             175            2012        1
2                   2500             175            4012        0

From the example we can see that each change which increases the fixed column length causes the original column space to be dropped and additional space to be allocated.

Recovering the wasted space

It is great that extending the width of a fixed-length column is a metadata-only operation, as it is very quick and avoids blocking, especially on large tables. But on the other hand, as we can see, it can waste a significant amount of space.

If there will be fewer new inserts into the table than the current amount of rows, we do not need to care much about the wasted space (from the point of view of wasted storage), as the metadata change didn't affect the existing records; those keep their original, shorter layout even though the column was extended, and only the newly inserted records waste the space.

On the other hand, if we know there will be a lot of inserts and reads of the newly inserted records, then it's a good idea to recover the wasted space, as the data will consume more space and further reads will have to read more unnecessary data.

In the comments to the original post there are some suggestions on how to resolve the problem.

From my perspective, if the table already has a clustered index, we do not need to drop and recreate it; it is enough to REBUILD the index. The rebuild operation will rewrite the original data with the new record length and free up the wasted space.
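
A minimal sketch of that approach (the test tables in this post are heaps, so the index name below is hypothetical):

-- rebuilding the clustered index rewrites every row with the current column layout
-- and reclaims the space of the dropped columns
ALTER INDEX IX_AlterTest_Clustered ON dbo.AlterTest REBUILD;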

In the case of a heap, creating and dropping a clustered index would be quite a costly operation which moves the data twice. On small tables this doesn't matter, but with larger amounts of data it is better to do a simple SELECT * INTO newTable FROM aTable and then drop the original table and rename the new one to the original name (a sketch of the swap follows the comparison below).

-- note: dbo.AlterTest2 from the previous example must be dropped first (or a different
-- target name used), otherwise SELECT ... INTO fails because the table already exists
SELECT *
INTO dbo.AlterTest2
FROM dbo.AlterTest
GO
SELECT
    DB_ID() AS DBID
    ,pl.*
    ,t.*
FROM dbo.AlterTest2 t
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) pl
GO
DBID   file_id     page_id     slot_id     ID          col1  col2  col3
------ ----------- ----------- ----------- ----------- ----- ----- -----------
16     1           411         0           1           aaa   bbb   587800818
16     1           411         1           2           aaa   bbb   1396332306
16     1           412         0           3           aaa   bbb   844570652
16     1           412         1           4           aaa   bbb   1332601405
16     1           413         0           5           ccc   ddd   2059368981
16     1           413         1           6           ccc   ddd   1449062892
16     1           414         0           7           ccc   ddd   267569086
16     1           414         1           8           ccc   ddd   1325350591

We can see that the new table has no wasted space and each page again holds two records.

Of course, if there are foreign keys and indexes, those will have to be recreated. Anyway, this will cost much less I/O and processing power than creating and dropping a clustered index.
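
A minimal sketch of the final swap, assuming the copy in dbo.AlterTest2 is complete and no other sessions are using the table:

DROP TABLE dbo.AlterTest
GO
-- rename the copy to the original table name
EXEC sp_rename 'dbo.AlterTest2', 'AlterTest'
GO
-- recreate any foreign keys, indexes and permissions on dbo.AlterTest as needed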

Conclusion

As we can see from the examples above, increasing the length of a fixed-length column, even though it is a metadata-only operation, causes the originally allocated space to be dropped and new space to be allocated in the row for all newly inserted or updated rows in the table. So be careful when altering fixed-length columns, especially when you are doing multiple alters to a single column, as significant space can be wasted. Knowing that the dropped columns have the flag is_dropped = 1 in sys.system_internals_partition_columns, we can use the query below to list all the tables containing DROPPED columns with wasted space.

SELECT distinct
    p.partition_id,
    p.object_id,
    o.name as table_name
FROM sys.system_internals_partitions p
INNER JOIN sys.objects o on p.object_id = o.object_id
INNER JOIN sys.system_internals_partition_columns pc ON p.partition_id = pc.partition_id
WHERE pc.is_dropped = 1
partition_id         object_id   table_name
-------------------- ----------- -----------
72057594040025088    1253579504  AlterTest2
72057594040090624    1269579561  AlterTest
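
Building on that, a rough per-table estimate of the bytes wasted in each newly written row can be taken from the same views (a sketch; it only sums the fixed in-row lengths of the dropped columns and counts each partition separately):

SELECT
    o.name AS table_name,
    p.partition_id,
    SUM(pc.max_inrow_length) AS wasted_bytes_per_new_row
FROM sys.system_internals_partition_columns pc
INNER JOIN sys.system_internals_partitions p ON p.partition_id = pc.partition_id
INNER JOIN sys.objects o ON o.object_id = p.object_id
WHERE pc.is_dropped = 1
GROUP BY o.name, p.partition_id;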