Auditing Users in a Failover Partner Database

I read a question on a discussion list today asking whether it is possible to audit SELECT queries on a log shipping secondary in stand-by mode. The questioner's attempts to set it up had failed because the database is read-only, so you cannot create a database audit specification in it (or anything else, for that matter). Wanting to modify a secondary database is actually a pretty common request with a simple solution, though I had never seen it used in conjunction with a SQL audit.
The most common context where this comes up is needing to give a user access to the stand-by database but not to the live database. You simply create the database user in the live database and create the server login only on the secondary server. Having the user in the live database does no good if the user cannot log into that server. You do have to be careful with this solution: if the user also has access to the primary server via a security group, then the user can exercise those database permissions there as well. The reason this approach works is that the database user is mapped to the server login via its security identifier (SID), so no updates to the (read-only) database are required to link them up.
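As a sketch of the pattern (names and password are hypothetical, and this is not a hardened script): the login is usable only on the secondary. One concrete way to get matching SIDs is to create the login on both instances with the same SID and then deny connect on the primary (or simply drop the primary login after creating the user):

```sql
-- On the SECONDARY instance: create the login that will query the stand-by database.
CREATE LOGIN StandbyReader WITH PASSWORD = 'Str0ng!Passw0rd';

-- Grab its SID so the primary can create a login with a matching SID.
SELECT sid FROM sys.server_principals WHERE name = 'StandbyReader';

-- On the PRIMARY instance: create a login with the SAME SID, create the database
-- user, then deny the login the right to connect so it only works on the secondary.
CREATE LOGIN StandbyReader WITH PASSWORD = 'Str0ng!Passw0rd', SID = 0x...;  -- SID copied from above
USE MyDatabase;
CREATE USER StandbyReader FOR LOGIN StandbyReader;
GRANT SELECT TO StandbyReader;
DENY CONNECT SQL TO StandbyReader;  -- the user still flows to the secondary via log shipping
```

Because the user rides along in the restored database and matches the secondary's login by SID, no change to the read-only database is ever needed.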
I felt pretty confident that the same approach would work for linking a database audit to a server audit, but I had not heard of anyone doing it (though admittedly, I did not search to see if anyone had already blogged this solution). Time for a quick test ... well, maybe a moderately quick test.
The Question Email
Subject: SQL 2008 R2 Logged Shipped Database is ReadOnly Disallows Auditing at DB Level
A SQL 2008 R2 Logged Shipped Database is ReadOnly and it disallows auditing at the DB Level
So this command fails:
CREATE DATABASE AUDIT SPECIFICATION MySecurityAudit_MyDatabase
FOR SERVER AUDIT MySecurityAudit
ADD (SELECT ON DATABASE::MyDatabase BY public )
WITH (STATE = ON) ;
How can I audit who Selects data from my read-only database?
Verifying The Solution
My reply was:
You can do it by creating the audits (server and database) on the primary server. Then script out the server audit and create the exact same audit on the secondary server (for my test, both instances are on my laptop, so I actually used different file paths). Make sure you enable the database audit on the primary and the server audit on the secondary. The two audits will connect up nicely and it will audit users on the secondary server. Best part is that you don't even need to enable the server audit on the primary, so you can audit ONLY the secondary if you so choose.
Works for a snapshot of a mirror database as well.
Testing the solution:
First I created a database for my test. I then created a table and populated it with some sample data. Nothing fancy.
Use master;

-- Create a database
Create Database TestLS;
Go

Use TestLS;

-- Create and populate a table
Select * Into dbo.TestFiles from sys.master_files
Then I created a server audit on the primary server and enabled it. Enabling it on the primary is optional, but I wanted to make sure it was working on the primary before I checked whether it was working on the secondary.
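Creating and enabling a server audit on the primary looks roughly like this (the file path here is hypothetical; the exact script I used, including the AUDIT_GUID, is shown further down):

```sql
-- On the primary: create the server audit writing to a local file target
CREATE SERVER AUDIT TestLS
TO FILE ( FILEPATH = N'C:\AuditLogs\Primary\' );

-- Enable it (optional on the primary, as noted above)
ALTER SERVER AUDIT TestLS WITH (STATE = ON);
```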
Next I created the database audit specification mapped to the server audit. The audit is for the SELECT audit action type on the object dbo.TestFiles and the principal public. Every user maps to the public role, so this lets me audit everyone in the database.
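The database audit specification itself looked roughly like this (the specification name is hypothetical; the server audit, table, and principal match the ones described above):

```sql
USE TestLS;

-- Audit SELECTs on dbo.TestFiles by everyone (all users map to public)
CREATE DATABASE AUDIT SPECIFICATION TestLS_SelectAudit
FOR SERVER AUDIT TestLS
ADD (SELECT ON OBJECT::dbo.TestFiles BY public)
WITH (STATE = ON);
```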
I started setting up log shipping at this point and performed the backup and restore steps to initialize the database. Once the backups were restored on the secondary server, I scripted out the server audit from the primary and created the same server audit on the secondary. As I stated previously, I am testing this on my laptop, so I used a different file path for the server audit on the secondary to keep them separate. The important part of this step is keeping the same AUDIT_GUID for the server audit, as this is how the database audit is mapped to the server audit. After creating the server audit, I enabled it.
CREATE SERVER AUDIT TestLS
TO FILE (
    FILEPATH = N'c:bakMastertest2'
    ,MAXSIZE = 0 MB
    ,MAX_ROLLOVER_FILES = 2147483647
    ,RESERVE_DISK_SPACE = OFF
)
WITH (
    QUEUE_DELAY = 1000
    ,ON_FAILURE = CONTINUE
    ,AUDIT_GUID = '86d6e3e5-75c0-4e47-98e4-79050ff0b85e'
)
All that was left was to configure log shipping in stand-by mode and test the audit. Once the secondary was in stand-by mode, I ran a SELECT * against the table and checked the audit log file to ensure my query was logged. Indeed it was. The pic below is actually from after several re-tests of the solution, which is why it has so many entries.
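To check the log, you can read the audit file on the secondary with sys.fn_get_audit_file (the path pattern here is hypothetical; point it at wherever the secondary's server audit writes its files):

```sql
-- Read the secondary's audit records (file path hypothetical)
SELECT event_time, server_principal_name, database_name, statement
FROM sys.fn_get_audit_file(N'C:\AuditLogs\Secondary\*.sqlaudit', DEFAULT, DEFAULT)
ORDER BY event_time;
```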
One reason there are so many entries is that I also tested this with database mirroring. To test it on the mirror, I had to create a snapshot of the mirror database. I was worried that it might not work there because the snapshot is considered a different database, but it did in fact work. This solution works with both mirroring and log shipping.
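Creating the snapshot on the mirror uses the standard snapshot syntax, roughly like this (the logical file name and snapshot path are hypothetical, so adjust them to match sys.database_files on your system):

```sql
-- On the mirror server: snapshot the mirrored database so it can be queried
CREATE DATABASE TestLS_Snapshot
ON ( NAME = TestLS, FILENAME = N'C:\Snapshots\TestLS_Snapshot.ss' )
AS SNAPSHOT OF TestLS;
```

Queries against the snapshot are then picked up by the audit, even though the snapshot is technically a separate database.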
Even though the solution isn't new, it is being applied in a new way. It's also a nice look at how server audits are mapped to database audits and how the design enables us to map them automagically. This results in something that has never happened before ... me saying, "I'm glad they used GUIDs when they designed it."