SQL Server Backup Under the SIMPLE Recovery Model
I tend to forget how to do certain database administration tasks if I don’t do them often enough. No matter how *SIMPLE* they might be. :)
Recently at work I saw a production database server with a database in the SIMPLE recovery model that was scheduled for weekly FULL backups and daily DIFFERENTIAL backups. I was a little confused when I first saw this. I thought, “Wait, can you even do FULL backups in the SIMPLE recovery model?” So I checked Books Online.
To my surprise, you CAN run a FULL/DIFF backup on a database in the SIMPLE recovery model. What you cannot do is run transaction log backups.
Look at what happens if you try to take a transaction log backup of a database in the SIMPLE recovery model (see snapshot below).
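If you want to reproduce it yourself, here’s a minimal sketch (the database name myDB and the backup path are placeholders; the error text in the comment is the message I’d expect from SQL Server):

```sql
-- Put a test database in the SIMPLE recovery model
ALTER DATABASE myDB SET RECOVERY SIMPLE;

-- This BACKUP LOG attempt fails, with an error along the lines of:
-- Msg 4208: "The statement BACKUP LOG is not allowed while the
-- recovery model is SIMPLE. Use BACKUP DATABASE or change the
-- recovery model using ALTER DATABASE."
BACKUP LOG myDB TO DISK = 'C:\backups\myDB_Log.trn';
```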
Going back to the production database server I mentioned above: if that server crashes right before the daily DIFF backup, up to nearly 24 hours’ worth of data could be lost.
So the key difference between the FULL and SIMPLE recovery models is:
FULL Recovery Model – Allows you to take transaction log backups. With log backups you can perform point-in-time restore, and minimize data loss.
SIMPLE Recovery Model – You can only take FULL and DIFFERENTIAL backups. You cannot take log backups, which means no point-in-time restore capability. If you need point-in-time restore, switch your database to the FULL recovery model.
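Switching is a one-line ALTER DATABASE; just keep in mind that the log backup chain doesn’t start until you take a FULL backup afterward. A minimal sketch (database name and paths are placeholders):

```sql
-- Switch the database to the FULL recovery model
ALTER DATABASE myDB SET RECOVERY FULL;

-- The log chain starts with the next FULL backup...
BACKUP DATABASE myDB TO DISK = 'C:\backups\myDB_Full.bak';

-- ...after which transaction log backups (and point-in-time
-- restores) become possible
BACKUP LOG myDB TO DISK = 'C:\backups\myDB_Log.trn';
```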
Moral of the story…
So what’s the moral of the story? You, as the DBA, need to know what your organization wants. Don’t ever assume. A crucial question to ask in order to find out exactly what they want is, “In case of a crash, how much data are you okay with losing?” If the answer is, “NONE!”, then stick with the FULL recovery model and schedule transaction log backups as frequently as the organization’s tolerance for data loss demands. If the answer is, “Ah, a day or half a day’s worth of data,” then you can stick with the SIMPLE recovery model and schedule FULL backups with intermittent DIFFERENTIAL backups.
Hope that helps!
A very quick and easy way to see how fast your hard disk is, is to run:
BACKUP DATABASE testDB TO DISK = 'NUL:'
Don’t mistake ‘NUL’ for NULL. They are two separate things. ‘NUL:’ is considered a device, and anything written to it is discarded.
I downloaded and installed the StackOverflow database on my laptop (if you want to install the StackOverflow database on your machine to play around, check out Brent Ozar’s instructions on how to do it here). My laptop has a Crucial MX200 1TB SSD. My throughput was 490.491 MB/sec (see snapshot below).
SQL Server treats a ‘NUL:’ LOG backup statement as a valid backup. Meaning, if you take a regular LOG backup and then run a ‘NUL:’ LOG backup, you will break your LOG backup chain: when you later try to restore the next LOG backup in the chain, the restore fails with a Msg 4305, Level 16, State 1, Line 1 error as well as a Msg 3013, Level 16, State 1, Line 1 error (see screenshot below).
You can get around this by using the COPY_ONLY option (see screenshot below).
That way it doesn’t affect the backup chain and you still get the write speed for your disk! I have included the code snippet below if you want to play around with it!
--take FIRST FULL backup of AdventureWorks2012
BACKUP DATABASE AdventureWorks2012 TO DISK = 'C:\backups\AW_FULLBackup_1.bak'

--take FIRST LOG backup of AdventureWorks2012
BACKUP LOG AdventureWorks2012 TO DISK = 'C:\backups\AW_LOG_Backup_1.trn'

--take NUL: LOG backup of AdventureWorks2012 <---THIS WILL BREAK THE LOG CHAIN
BACKUP LOG AdventureWorks2012 TO DISK = 'NUL:'

--take 3rd LOG backup of AdventureWorks2012
BACKUP LOG AdventureWorks2012 TO DISK = 'C:\backups\AW_LOG_Backup_2.trn'

--DROP AW database
DROP DATABASE AdventureWorks2012

--RESTORE from FIRST FULL (WITH NORECOVERY so LOG backups can be applied)
RESTORE DATABASE AdventureWorks2012
FROM DISK = 'C:\backups\AW_FULLBackup_1.bak'
WITH NORECOVERY

--RESTORE FIRST LOG
RESTORE LOG AdventureWorks2012
FROM DISK = 'C:\backups\AW_LOG_Backup_1.trn'
WITH NORECOVERY

--RESTORE SECOND LOG (fails: the NUL: backup broke the log chain)
RESTORE LOG AdventureWorks2012
FROM DISK = 'C:\backups\AW_LOG_Backup_2.trn'
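For completeness, the non-chain-breaking version of the speed test mentioned above would look something like this (same database assumed):

```sql
--COPY_ONLY LOG backup to the NUL: device: measures disk write speed
--without affecting the log backup chain
BACKUP LOG AdventureWorks2012 TO DISK = 'NUL:' WITH COPY_ONLY
```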
Here’s a quick and easy formula to help show upper management how much your backup compression strategy is saving the company.
Let’s assume your original, non-compressed SQL backup file is 100MB. After you enable compression, your compressed backup size is 5MB. Plug in the numbers below and you get:
Percent decrease = (OriginalSizeofDB – CompressedSizeofDB) ÷ OriginalSizeofDB.
= (100 – 5) ÷ 100
= .95 = 95%
So in the above scenario by simply enabling database backup compression, you saved 95% of disk space per backup file. Neat!
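If you want real numbers from your own server rather than hypothetical ones, msdb records both sizes for every backup. A sketch using msdb.dbo.backupset (the compressed_backup_size column assumes SQL Server 2008 or later):

```sql
--Percent saved per backup, straight from the backup history in msdb
SELECT database_name,
       backup_finish_date,
       backup_size / 1048576.0            AS original_mb,
       compressed_backup_size / 1048576.0 AS compressed_mb,
       (backup_size - compressed_backup_size) * 100.0 / backup_size
                                          AS percent_saved
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;
```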
I was fortunate enough to attend Paul Randal’s and Kimberly Tripp’s IEPTO1 this past Spring. During the week-long training I met Tim Radney (he’s a SQL Server consultant at SQLskills). I approached him, introduced myself, and as we were talking, the subject of SQL Server backups came up. I explained my work’s current backup strategy and how I’d like to make it more efficient, both in speed and disk space. Tim suggested I enable the instance-wide backup compression option in SQL Server Management Studio (see image below).
Since then, I have checked that option on all my database servers. In some cases it has compressed the backup file size by 80%. How neat is that!
How to Enable SQL Instance Backup Compression in SQL Server Management Studio
Right-click the instance, click Properties, click “Database Settings” on the left, and make sure there’s a check mark in the “Compress backup” check box. Done.
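If you prefer T-SQL over clicking through SSMS, the same instance-wide setting can be flipped with sp_configure:

```sql
--Enable instance-wide backup compression
--(equivalent to the "Compress backup" checkbox in SSMS)
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```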