I'd say: learn the internals, for example from Microsoft SQL Server 2012 Internals: http://www.amazon.com/Microsoft-SQL-Server-2012-Internals/dp/0735658560
From there you have solid knowledge of query tuning, how the optimizer works, and how SQL Server works in general. Something every DBA should know.
And the books of Itzik Ben-Gan:
http://www.amazon.com/Inside-Microsoft%C2%AE-SQL-Server%C2%AE-2008/dp/0735626030/ref=sr_1_sc_2?s=books&ie=UTF8&qid=1393572731&sr=1-2-spell&keywords=iztik+ben+gan
This series is now called Microsoft SQL Server Internals [1]. You can get the latest version, Microsoft SQL Server 2012 Internals (Developer Reference), from Amazon [2].
[1] https://www.sqlserverinternals.com/publications/
[2] https://www.amazon.com/gp/product/0735658560/ref=as_li_tl?ie...
We have a large multi-use application suffering from OBJECT and PAGE
blocks on a number of tables.
...
The disk i/o is already strained.
Will additional, individual locks require more i/o than the automatic
table locks?
You misunderstand lock escalation; this is clear from the parts of your question that I made bold.
Lock escalation goes from rows to table or from pages to table (I've excluded partitions since that's not your case), so if you currently have page locks, that is NOT lock escalation.
Lock granularity is chosen by the server unless you use hints (ROWLOCK, PAGLOCK), and if it chooses page locks there is no escalation. If it later removes all the page locks and substitutes a single table lock for them, then lock escalation has occurred.
The second thing that is wrong is your assumption that locks have anything to do with I/O. This is not true. Locks are held in memory and have nothing to do with reads. You can check this article to see how CPU usage and query duration increase when locks are more granular: Why ROWLOCK Hints Can Make Queries Slower and Blocking Worse in SQL Server.
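If you want to verify which granularity the server chose, you can look at sys.dm_tran_locks from a separate session while the statement is holding its locks. A minimal sketch; before escalation you will see many KEY or PAGE entries, after escalation a single OBJECT lock:
-- What locks are currently held in this database, per session and granularity
SELECT request_session_id,
       resource_type,      -- OBJECT, PAGE, KEY, RID, ...
       request_mode,       -- S, U, X, IX, ...
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
GROUP BY request_session_id, resource_type, request_mode
ORDER BY request_session_id, lock_count DESC;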
You should understand what causes your lock escalation.
Lock Escalation Thresholds
Lock escalation is triggered when lock escalation is not disabled on
the table by using the ALTER TABLE SET LOCK_ESCALATION option, and
when either of the following conditions exists:
A single Transact-SQL statement acquires at least 5,000 locks on a single nonpartitioned table or index.
A single Transact-SQL statement acquires at least 5,000 locks on a single partition of a partitioned table and the
ALTER TABLE SET LOCK_ESCALATION option is set to AUTO.
The number of locks in an instance of the Database Engine exceeds memory or configuration thresholds.
Lock Escalation (Database Engine)
So if you reach the 5,000-locks-per-statement threshold, you should split your operations into smaller batches.
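For example, a minimal batching sketch (dbo.BigTable and the filter are hypothetical; the point is keeping each statement well under the 5,000-lock threshold):
DECLARE @BatchSize int = 1000;   -- comfortably below the 5,000-lock escalation threshold

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable
    WHERE IsProcessed = 1;       -- hypothetical predicate

    IF @@ROWCOUNT = 0
        BREAK;                   -- nothing left to delete
END;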
And if you are under memory pressure, disabling lock escalation will make your situation even worse.
UPDATE
I've found this description of locks in the book Microsoft SQL Server 2012 Internals (Developer Reference) by Kalen Delaney, Bob Beauchemin, Conor Cunningham, Jonathan Kehayias, Paul S. Randal, and Benjamin Nevarez:
Locks aren’t on-disk structures. You won’t find a lock field directly
on a data page or a table header, and the metadata that keeps track of
locks is never written to disk. Locks are internal memory structures:
They consume part of the memory used for SQL Server. A lock is
identified by lock resource, which is a description of the resource
that’s locked (a row, index key, page, or table). To keep track of the
database, the type of lock, and the information describing the locked
resource, each lock requires 64 bytes of memory on a 32-bit system and
128 bytes of memory on a 64-bit system. This 64-byte or 128-byte
structure is called a lock block.
... The lock manager maintains a lock hash table. Lock resources,
contained within a lock block, are hashed to determine a target hash
slot in the hash table. All lock blocks that hash to the same slot are
chained together from one entry in the hash table. Each lock block
contains a 15-byte field that describes the locked resource. The lock
block also contains pointers to lists of lock owner blocks. Each of
the three states has a separate list for lock owners.
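Since lock blocks are plain memory structures, you can get a rough idea of how much memory the lock manager is currently consuming from its memory clerk. This query is my own illustration, not from the book; pages_kb is the SQL Server 2012+ column name:
-- Memory currently used by the lock manager (lock blocks, owner blocks, hash table)
SELECT mc.type,
       SUM(mc.pages_kb) AS lock_manager_memory_kb
FROM sys.dm_os_memory_clerks AS mc
WHERE mc.type = N'OBJECTSTORE_LOCK_MANAGER'
GROUP BY mc.type;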
The 72K of reserved space includes a 64K extent (8 pages of 8K each) plus the 8K IAM page overhead. Of this 72K, only the IAM page and a single data page are actually used. sp_spaceused reports the IAM page as part of index_size, even though it is not technically an index. You can see these details with the undocumented sys.dm_db_database_page_allocations TVF (use only on a test system):
SELECT extent_file_id, extent_page_id, page_type_desc
FROM sys.dm_db_database_page_allocations(DB_ID(), OBJECT_ID(N'dbo.MyTable'), 0, 1, 'DETAILED');
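If you want to reproduce the numbers from scratch before running the query above, here is a minimal sketch; dbo.MyTable is just a stand-in, and with mixed page allocation OFF sp_spaceused should report roughly 72 KB reserved for this single-row heap:
-- Hypothetical one-row heap to reproduce the space numbers
CREATE TABLE dbo.MyTable (id int NOT NULL, filler char(100) NOT NULL);
INSERT INTO dbo.MyTable (id, filler) VALUES (1, 'x');

-- reserved = one 64 KB uniform extent + the 8 KB IAM page
EXEC sp_spaceused N'dbo.MyTable';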
This database apparently has the MIXED_PAGE_ALLOCATION database option set to OFF, so a full 64K extent is allocated initially. If the option were ON, the single data page would be allocated from a mixed extent instead of a 64K extent dedicated to the table. The space allocated in that case would be 16K: a single 8K data page plus the 8K IAM page.
Although mixed extents do reduce space requirements for small tables (under 64K), they have more overhead and can cause allocation contention in a high-concurrency workload, so mixed page allocation is off by default from SQL Server 2016 onwards. In older versions, mixed extent allocation was on by default and can be turned off at the server level with trace flag 1118.
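On those older versions you can check and enable the trace flag globally like this (use with care; it affects the whole instance):
DBCC TRACESTATUS (1118, -1);  -- is the flag already on globally?
DBCC TRACEON (1118, -1);      -- allocate dedicated (uniform) extents even for small tables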
You can see the mixed extent setting in sys.databases:
SELECT name, is_mixed_page_allocation_on
FROM sys.databases;
To toggle the setting:
ALTER DATABASE Test
SET MIXED_PAGE_ALLOCATION ON;
EDIT 1:
Space within a data page includes overhead for the page itself as well as for the records within the page. This overhead, plus the space needed for user data, determines how many rows can fit on a page and the number of data pages required to store a given number of rows. See Paul Randal's anatomy of a page and anatomy of a record articles for details of that overhead.
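To see the net effect of that per-page and per-record overhead on an actual table, the average record size and resulting page count are exposed by sys.dm_db_index_physical_stats (reusing the dbo.MyTable stand-in from above):
-- Average record size (including row overhead) and resulting page count
SELECT index_type_desc,
       record_count,
       avg_record_size_in_bytes,  -- per-row size including record overhead
       page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'DETAILED');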
EDIT 2:
From your follow-up comment:
7998 bytes, so there are more 194 bytes to go for the next allocation.
What am I missing?
I almost never use heaps, but as you can see in the page dump, the associated PFS (page free space) allocation status for this page is 100 percent full. According to Kalen Delaney's Microsoft SQL Server 2012 Internals book, the PFS status is actually a 3-bit mask of these ranges: empty, 1-50% full, 51-80% full, 81-95% full, and 96-100% full.
So it looks like once heap page fullness crossed the 96% threshold it was considered 100% full and a new page was allocated. Note this does not happen on a table with a clustered index, because the page for a new row is first determined by the clustered index key, and a new page is allocated only if the row can't fit on that page at all. Yet another reason to avoid heaps.
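You can watch that coarse PFS fullness bucket for each heap page with the same undocumented TVF used earlier; if I recall correctly it exposes a page_free_space_percent column in DETAILED mode:
-- PFS fullness bucket for each data page of the heap (DETAILED mode required)
SELECT allocated_page_page_id,
       page_type_desc,
       page_free_space_percent   -- reported in the coarse PFS buckets, not exact bytes
FROM sys.dm_db_database_page_allocations(DB_ID(), OBJECT_ID(N'dbo.MyTable'), 0, 1, 'DETAILED')
WHERE page_type_desc = N'DATA_PAGE';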
I think a good overview is this article:
https://www.simple-talk.com/sql/database-administration/sql-server-storage-internals-101/
See the part on Indexes. There it is shown that nodes such as 7 or 16 also have their leaves. Also, I highly recommend this book:
SQL Server 2012 Internals by Kalen Delaney
https://www.amazon.com/Microsoft-Server-Internals-Developer-Reference/dp/0735658560