It’s wise to set SQL Server file autogrowth to a fixed size rather than the default percentage. With the percent setting, the growth increment gets larger as the file grows, so the file can balloon unexpectedly. It’s better to set autogrowth to a fixed size and keep an eye on it as part of regular maintenance.
Start by right-clicking the database in SSMS and choosing Properties. Then in the “Database Properties” window, choose Files. Click the ellipses under the “Autogrowth” column (blue arrow below).
The “Change Autogrowth for …” window will pop up. Under File Growth, click the “In Megabytes” radio button and set a fixed amount.
It’s as easy as that.
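If you prefer T-SQL over clicking through SSMS, the same change can be sketched with ALTER DATABASE. The database name, logical file name, and 256 MB increment below are all placeholders, not values from this post; look up the real logical name in sys.database_files first:

```sql
-- Find the logical file name(s) for the database
SELECT name, type_desc FROM testDB.sys.database_files;

-- Grow the data file in fixed 256 MB increments instead of a percentage
ALTER DATABASE testDB
MODIFY FILE (NAME = N'testDB', FILEGROWTH = 256MB);
```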
Recently a developer approached me at work and said, “I can’t start SQL Server Browser.”
So, I logged into SQL Server Configuration Manager and saw that SQL Server Browser was stopped / off. When I right-clicked the SQL Server Browser to turn it on, I got this:
No Start, no Stop…nothing. I clicked on Properties, clicked on the Service tab, and chose Automatic. (See screenshot below)
Click Apply and OK. Now when I right click the SQL Server Browser I get all options (see screenshot below).
Hope that helped!
For the longest time I had a hard time remembering the difference between SQL Server DDL and DML statements, and which statement fell under which category. Were INSERT, UPDATE, and DELETE DML commands or DDL? What about CREATE and ALTER? It was all confusing to me.
I finally figured out a way to remember the difference. Before I tell you my “secret”, here’s a quick explanation of each.
DDL, or Data Definition Language, consists of the following commands:
CREATE, ALTER and DROP
Below is an image taken straight from MSDN that includes all the DDL statements.
DML, or Data Manipulation Language, consists of the following commands:
INSERT, UPDATE, DELETE, SELECT…
Below is an image taken straight from MSDN.
How did I memorize the difference? Well, for DML the ‘M’ is for ‘manipulation’, so I automatically associate “INSERTING” or “UPDATING” data with “manipulating” data. Once I understood that, DDL became easy to remember: I associate it with CREATE, ALTER, and DROP. Memorize one and you automatically know the other. :)
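To see the two categories side by side, here is a small illustrative script (the table and column names are made up for the example):

```sql
-- DDL: defines or changes the structure of objects
CREATE TABLE dbo.Employees (EmployeeID int, LastName varchar(50));
ALTER TABLE dbo.Employees ADD FirstName varchar(50);

-- DML: manipulates the data inside those objects
INSERT INTO dbo.Employees (EmployeeID, LastName, FirstName)
VALUES (1, 'Smith', 'Jane');
UPDATE dbo.Employees SET FirstName = 'Janet' WHERE EmployeeID = 1;
SELECT * FROM dbo.Employees;
DELETE FROM dbo.Employees WHERE EmployeeID = 1;

-- DDL again: removes the object itself
DROP TABLE dbo.Employees;
```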
It’s usually something simple that’s overlooked that ends up causing the biggest troubleshooting headache. Let me explain. I was creating a test database called “testDB” with a test table called “testTable” (yes I know, I put a lot of thought in the naming of these objects) and when I tried to insert data into this new testTable I got the following error:
Msg 208, Level 16, State 1, Line 1 Invalid object name ‘testTable’.
What the heck? How can it be an invalid object? I just created it!
The answer is extremely simple and kind of embarrassing to admit.
When I created the testDB I was in the Master database. Then I forgot to “use testDB;” and proceeded to create the testTable. After creating the testTable, I did a “USE testDB;” and ran the INSERT INTO command. That’s when it failed.
Now if you’re confused as I was, this is why it failed:
After creating testDB, and before creating testTable, I was still in the Master database. So what actually happened was I created the testTable inside of the Master database (look at snapshot below). Doh!
I dropped the testTable inside the Master database, switched over to testDB, created the testTable, ran the insert command and everything worked fine.
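Here is the whole sequence reduced to a script. The column definitions are made up for illustration; the point is that the USE has to come before the CREATE TABLE:

```sql
CREATE DATABASE testDB;
GO
USE testDB;   -- the step I skipped: without this, testTable lands in master
GO
CREATE TABLE testTable (id int, name varchar(50));
GO
INSERT INTO testTable (id, name) VALUES (1, 'test row');
```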
Don’t overlook the small things. Pay attention to what database you’re in. :)
During a recent “interview” I was asked, “What two isolation levels in SQL Server will prevent phantom reads?”
I had never heard of “phantom reads” before but thought the person meant, “dirty reads.” So I replied, “READ COMMITTED and SNAPSHOT isolation levels.”
I was wrong…sort of. The interviewer said, “It’s actually Serializable and Snapshot.” As soon as the “interview” was over, I had to read up on phantom reads.
This is what I learned:
Phantom Reads: Occurs when, within a single transaction, another transaction inserts or deletes rows that match the first transaction’s search criteria. Running the same query twice inside the transaction then returns a different set of rows (the “phantoms”).
Dirty Reads: Reading uncommitted modifications. When a transaction is allowed to read data from a row that has been modified by another running transaction that has not yet committed.
Below is a table that shows each isolation level and whether they allow dirty or phantom reads. (Photo taken from MSDN)
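If you want to try this yourself, the isolation level is set per session. Below is a minimal sketch using a hypothetical dbo.Orders table (not from this post); SERIALIZABLE is one of the two levels that prevents phantoms:

```sql
-- Session 1: read a range under SERIALIZABLE
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
SELECT COUNT(*) FROM dbo.Orders WHERE CustomerID = 42;

-- Session 2 (separate query window): this INSERT would block until
-- session 1 commits, so no phantom row can appear mid-transaction:
-- INSERT INTO dbo.Orders (CustomerID) VALUES (42);

-- Session 1: the same query returns the same count, then release locks
SELECT COUNT(*) FROM dbo.Orders WHERE CustomerID = 42;
COMMIT;
```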
Recently I had to find all the tables, columns, data types, etc. from a database. Below is a thorough script that brings back all the tables, attributes, data types, whether the column allows NULLS, whether it’s a Primary Key, or a Foreign Key (and if so, the referencing table). It’s extremely useful and easy to run.
Here is the script. Just make sure to un-comment the first line and replace yourDB with your database’s name!
--USE yourDB;
SELECT obj.name [Table],
       col.name [Column],
       typ.name [Data Type],
       col.isnullable [Allow Nulls?],
       CASE WHEN d.name IS NULL THEN 'No' ELSE 'Yes'
       END [Primary Key?],
       CASE WHEN e.parent_object_id IS NULL THEN 'No' ELSE 'Yes'
       END [Foreign Key?],
       CASE WHEN e.parent_object_id IS NULL THEN '-' ELSE g.name
       END [Ref Table],
       CASE WHEN h.value IS NULL THEN '-' ELSE CONVERT(varchar(500), h.value)
       END [Description]
FROM sysobjects AS obj
JOIN syscolumns AS col ON obj.id = col.id
JOIN systypes AS typ ON col.xtype = typ.xtype
LEFT JOIN (SELECT so.id, sc.colid, sc.name
           FROM syscolumns sc
           JOIN sysobjects so ON so.id = sc.id
           JOIN sysindexkeys si ON so.id = si.id
                               AND sc.colid = si.colid
           WHERE si.indid = 1) d ON obj.id = d.id AND col.colid = d.colid
LEFT JOIN sys.foreign_key_columns AS e
       ON obj.id = e.parent_object_id AND col.colid = e.parent_column_id
LEFT JOIN sys.objects AS g
       ON e.referenced_object_id = g.object_id
LEFT JOIN sys.extended_properties AS h
       ON obj.id = h.major_id AND col.colid = h.minor_id
WHERE obj.type = 'U'
ORDER BY obj.name;
There are many options to find the last login date for a SQL Server login. Even though there are awesome scripts like Adam Machanic’s “Who is Active” (download link here), sometimes you might find yourself without internet access, or perhaps at a client site that doesn’t have “Who is Active” installed and you forgot your thumb drive at home. :)
You can easily query the sys.dm_exec_sessions dmv to get the last login time of SQL Server logins. Per MSDN, the sys.dm_exec_sessions DMV,
“Returns one row per authenticated session on SQL Server….it’s a server-scope view that shows information about all active user connections and internal tasks”
Here’s a little script to help you out!
SELECT MAX(login_time) AS [Last Login Time], login_name AS [Login]
FROM sys.dm_exec_sessions
GROUP BY login_name;
One of the developers approached me today asking why their simple SELECT SQL query was taking forever. I walked over to their desk and noticed their SQL code had a BEGIN TRAN but no COMMIT or ROLLBACK. I ran a:
…but that didn’t bring back anything. So then I ran:
…and it returned an open transaction with its associated SPID.
I used the KILL command to kill SPID 57 (Kill 57) and the developer’s query returned instantly.
And just in case you were wondering, the cause of the rogue transaction was a BEGIN TRAN that the developer ran without a COMMIT or ROLLBACK; the developer then tried to access that same table in another session window.
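The exact commands I ran that day aren’t shown above, but here is a sketch of the kind of checks involved: DBCC OPENTRAN reports the oldest active transaction in the current database, and sys.dm_tran_session_transactions lists every session with an open transaction.

```sql
-- Oldest active transaction in the current database (if any)
DBCC OPENTRAN;

-- Every session that currently has an open transaction
SELECT s.session_id, s.login_name, s.host_name, t.transaction_id
FROM sys.dm_tran_session_transactions AS t
JOIN sys.dm_exec_sessions AS s
  ON t.session_id = s.session_id;

-- Last resort: kill the offending session
-- (57 happens to be the SPID from that day; yours will differ)
-- KILL 57;
```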
I had an application go kaput on me all of a sudden and that wasn’t good. I had gotten back from lunch (always happens when I get back from lunch) and was immediately approached by the Sys Admin saying that a certain web application couldn’t connect to the SQL database. He wanted me to check out why and get back to him ASAP.
So I quickly logged into the database server and saw that SQL Server was running fine, the SQL account that the web application uses was not “locked”, and password expiration was not checked (meaning that technically the password for that SQL account should never expire). I looked at the disk drives by clicking on “My Computer” in Windows Explorer and quickly saw…
“E: Drive 0 GB free of 99 GB”
We have SQL Server configured to store Audit files on the “E” drive. I went into the folder and removed the oldest month worth of audit files. That freed up about 20 GB of space.
Roughly 5 minutes later the same Sys Admin ran into my office saying, “Hey! I reset the SQL account password and it works now.”
I know. What a coincidence right? :)
I believe the reason why the application couldn’t connect to the database is because SQL Server couldn’t write any more audit files to the disk drive. As a result, SQL Server refused outside connections.
I wrote a script that checks whether a specific disk drive space goes below a given threshold. If so, it will send the DBAs an email alert.
Here is the script. Feel free to use/modify it however you see fit.
--create temp table for results
create table #freespace
(drive char(1), mb_free int)

--insert drive data into temp table
insert into #freespace exec sys.xp_fixeddrives

declare @subject varchar(100)
declare @profile_name varchar(25)
declare @body varchar(200)
declare @gb_free int
declare @recipients varchar(50)

--you can specify whatever drive letter
select @gb_free = (mb_free / 1024) from #freespace where drive = 'E'

--you can specify whatever threshold you want. I put 20 GB.
if (@gb_free < 20)
begin
    SET @profile_name = 'Profile Name goes here'
    SET @recipients = 'email@example.com'
    SET @subject = 'ALERT!! E: drive is BELOW 20 GB'
    SET @body = 'Please check the E: drive! It has fallen below 20 GB of free space. There is currently ' + CONVERT(varchar, @gb_free) + ' GB of free space left on the E drive.'

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = @profile_name,
        @body = @body,
        @subject = @subject,
        @recipients = @recipients
end

--drop temp table
drop table #freespace;
I was fortunate enough to attend Paul Randal’s and Kimberly Tripp’s IEPTO1 this past Spring. During the week-long training I met Tim Radney (he’s a SQL Consultant at SQLskills). I approached him, introduced myself, and as we were talking, the subject of SQL Server backups came up. I explained my work’s current backup strategy and how I’d like to make it more efficient, both in speed and disk space. Tim suggested I enable the instance-wide backup compression option in SQL Server Management Studio (see image below).
Since then, I have checked that option on all my database servers. In some cases it has compressed the backup file size by 80%. How neat is that!
How to Enable SQL Instance Backup Compression in SQL Server Management Studio
Right click the Instance, click Properties, click “Database Settings” on the left, and make sure there’s a check mark in “Compress backup” check box. Done.
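The same instance-wide option can also be set in T-SQL with sp_configure, and individual backups can override the default either way with WITH COMPRESSION or WITH NO_COMPRESSION. The database name and backup path below are placeholders:

```sql
-- Turn on backup compression for the whole instance
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Or force compression on a single backup regardless of the instance default
BACKUP DATABASE testDB
TO DISK = N'E:\Backups\testDB.bak'
WITH COMPRESSION;
```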