
Deleting large amounts of data

  • Adam Thurgar
  • Nov 20, 2019
  • 1 min read

I had a client call me about wanting to delete a large amount of data from their database. They knew which tables they wanted to delete from, and they also knew that deleting large amounts of data causes the transaction log to grow.

It doesn’t matter which recovery model you are using: the transaction log will grow.

So if they went down the traditional delete path, they would be looking at lots of batched deletes over a long period of time, along the lines of the sketch below.
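A minimal sketch of that batched-delete loop, assuming a hypothetical CreatedDate column that identifies the rows to remove (the table name and cutoff date are placeholders):

-- Delete in small batches so each transaction, and therefore each
-- chunk of transaction log, stays small.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM dbo.Manifest
    WHERE CreatedDate < '20150101';   -- rows to remove

    SET @rows = @@ROWCOUNT;
END;

-- Under the full or bulk-logged recovery model, take log backups between
-- batches so log space can be reused rather than grown.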

An alternative could be to (a T-SQL sketch of these steps follows the list):

  1. Copy the data you want to keep to a staging table, e.g. Manifest_Stage.

  2. Truncate the Manifest table. TRUNCATE TABLE is minimally logged, so it grows the transaction log far less than an equivalent DELETE.

  3. Import the data from Manifest_Stage back into Manifest.

  4. Drop the staging table.
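
A minimal T-SQL sketch of those four steps, again assuming a hypothetical CreatedDate column that identifies the rows to keep:

-- Step 1: copy the rows to keep into a staging table.
SELECT *
INTO dbo.Manifest_Stage
FROM dbo.Manifest
WHERE CreatedDate >= '20150101';   -- rows to keep

-- Step 2: empty the original table. TRUNCATE TABLE is minimally logged.
TRUNCATE TABLE dbo.Manifest;

-- Step 3: import the kept rows back. TABLOCK allows a minimally logged
-- insert under the simple or bulk-logged recovery model. If Manifest has
-- an IDENTITY column you will also need SET IDENTITY_INSERT and an
-- explicit column list.
INSERT INTO dbo.Manifest WITH (TABLOCK)
SELECT * FROM dbo.Manifest_Stage;

-- Step 4: drop the staging table.
DROP TABLE dbo.Manifest_Stage;

Note that TRUNCATE TABLE fails outright if Manifest is referenced by a foreign key, which is one reason the caveats at the end of this post matter.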

You could also just do the first step, then drop Manifest and rename Manifest_Stage to Manifest, as in the sketch below.
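As a sketch, assuming the staging table from step 1 already exists (sp_rename is the standard system procedure for renaming a table):

BEGIN TRANSACTION;

-- Drop the original and promote the staging copy in its place.
DROP TABLE dbo.Manifest;
EXEC sp_rename 'dbo.Manifest_Stage', 'Manifest';

COMMIT TRANSACTION;

Because SELECT ... INTO creates a bare heap, the renamed table will have none of the original's indexes, constraints or triggers, so they would all need to be recreated.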

This is of course a simplified overview: you would also need to manage any primary key and foreign key relationships, make sure the indexing is correct, and recreate any other table constraints, triggers and so on.
