I don't know about you guys, but I have seen this situation so many times: no matter how many times you run your script, at the end you will see that fragmentation has not been reduced. One obvious cause is that the table is a HEAP, meaning there is no clustered index on that table.
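To check for this case, a quick sketch of a query against sys.indexes can list heaps (a heap has index_id = 0); this is just an illustrative query, not part of any particular maintenance script:

```sql
-- Sketch: list user tables that are heaps (no clustered index).
-- In sys.indexes, index_id = 0 identifies the heap itself.
SELECT OBJECT_NAME(i.object_id) AS table_name
FROM sys.indexes AS i
WHERE i.index_id = 0
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1;
```

Any table returned here will keep showing fragmentation that ALTER INDEX ... REORGANIZE/REBUILD cannot address, because there is no clustered index to rebuild.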
The second reason could be that your index is very small. In general, fragmentation on small tables often cannot be controlled, because the pages of small indexes are stored on mixed extents. Since a mixed extent can be shared by up to eight objects, the fragmentation in a small index might not be reduced even after reorganizing or rebuilding the index.
I have copied the lines below from a Brent Ozar blog post, which explains this case well:
In the case of our index fragmentation numbers, one of the source data fields is Page Count – the number of pages that an object has. Size matters with fragmentation: small objects with only a handful of pages may appear to have very high fragmentation numbers, but they can’t actually be de-fragmented. There’s only so much de-fragmentation you can do when a table only has three pages. I’ve actually been on support escalation calls where customers demand to know why a defrag job doesn’t reduce all types of fragmentation to absolute zero, even for tables with just one page.
Microsoft’s best practices on SQL Server 2000 index defragmentation notes that:
“Generally, you should not be concerned with fragmentation levels of indexes with less than 1,000 pages. In the tests, indexes containing more than 10,000 pages realized performance gains, with the biggest gains on indexes with significantly more pages (greater than 50,000 pages).”
With that in mind, I added a Page Count Group column and calculated it with a formula:
=IF(Table1[[#This Row],[page_count]]>50000,"Large",(IF(Table1[[#This Row],[page_count]]<10000,"Small","Medium")))
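The same grouping can be sketched directly in T-SQL against the sys.dm_db_index_physical_stats DMV, using the thresholds from the quote above (the column alias page_count_group is my own naming, and the query assumes you run it in the database you want to inspect):

```sql
-- Sketch: classify indexes as Small/Medium/Large by page count,
-- mirroring the Excel formula above (>50,000 = Large, <10,000 = Small).
SELECT OBJECT_NAME(ips.object_id)          AS table_name,
       i.name                              AS index_name,
       ips.page_count,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.page_count > 50000 THEN 'Large'
           WHEN ips.page_count < 10000 THEN 'Small'
           ELSE 'Medium'
       END                                 AS page_count_group
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;
```

Filtering this result to the 'Small' group is a quick way to see which "still fragmented" indexes you can safely stop worrying about.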
Thanks!!