Measuring Transactional Replication Latency Without Tracer Tokens

SQL Server 2005 introduced tracer tokens, a new mechanism for programmatically measuring replication latency in transactional replication. To measure latency with a tracer token, you simply insert a token at the publisher. The replication process traces the token as it moves through the steps of the process and reports back how long it took for the token to reach the distributor and the subscriber. Sounds great,
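The built-in approach the paragraph describes can be sketched with the tracer token stored procedures; the publication name `MyPub` is a placeholder, not from the original post:

```sql
-- Run at the publisher, in the publication database.
-- Post a tracer token into the replication stream (publication name is hypothetical).
DECLARE @token_id int;
EXEC sys.sp_posttracertoken
    @publication     = N'MyPub',
    @tracer_token_id = @token_id OUTPUT;

-- Later, view the latency recorded for that token as it moved
-- from publisher to distributor to subscriber.
EXEC sys.sp_helptracertokenhistory
    @publication = N'MyPub',
    @tracer_id   = @token_id;
```

`sp_helptracertokenhistory` reports the publisher-to-distributor and distributor-to-subscriber latency for the token, which is the measurement the post is discussing alternatives to.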
T-SQL Tuesday #004: IO — Where Are My TempDB Objects?

This blog entry is participating in T-SQL Tuesday #004, hosted this month by Mike Walsh. You are invited to visit his blog to join the party and read more blogs participating in this month’s theme: IO. The question was raised recently in a discussion group about how to tell whether your temporary tables and table variables are being maintained in memory or on disk. Here is my attempt to solve
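The post's actual solution isn't shown in this excerpt, but one way to approach the question is to watch tempdb page allocations for your own session; the temp table `#Demo` is purely illustrative:

```sql
-- Create a temp table and put a row in it (hypothetical demo table)
CREATE TABLE #Demo (ID int, Filler char(500));
INSERT INTO #Demo VALUES (1, 'x');

-- Pages allocated in tempdb by the current session:
-- user_objects_* covers temp tables and table variables,
-- internal_objects_* covers work tables, spools, sorts, etc.
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
WHERE session_id = @@SPID;

DROP TABLE #Demo;
```

A nonzero `user_objects_alloc_page_count` shows that the object has had pages allocated in the tempdb data files, i.e., it is not a purely in-memory structure.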
What is tempDB contention?

From the outside looking in, tempdb contention can look like any other blocking. There are two types of contention that tend to plague tempdb, especially when it is not configured according to best practices (multiple, equally sized data files located on a dedicated, high-speed drive, etc.). For the purposes of this blog, I want to focus on latch contention on the allocation pages. What are allocation pages? Allocation pages are special pages in the data files
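A quick way to see whether sessions are currently latch-waiting on tempdb allocation pages is to check the waiting tasks DMV; this is a generic diagnostic sketch, not necessarily the query from the post:

```sql
-- Sessions currently waiting on page latches in tempdb (database_id = 2).
-- resource_description is db:file:page; page 1 = PFS, 2 = GAM, 3 = SGAM,
-- with PFS repeating every 8,088 pages and GAM/SGAM every 511,232 pages.
SELECT owt.session_id,
       owt.wait_type,
       owt.wait_duration_ms,
       owt.resource_description
FROM sys.dm_os_waiting_tasks AS owt
WHERE owt.wait_type LIKE 'PAGELATCH%'
  AND owt.resource_description LIKE '2:%';
```

Waits concentrated on pages 2:N:1, 2:N:2, and 2:N:3 (and their repeats) are the allocation-page contention the post focuses on, as opposed to ordinary data-page blocking.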
SQLSaturday 26 Session Files: 10/3/2009 in Redmond, WA

Thanks to everyone who attended my sessions at SQLSaturday 26 in Redmond, WA on 10/3! This was my first SQLSaturday event. I was granted the opportunity to be a last-minute replacement speaker and gave two presentations. It was also my first time speaking in front of a large audience at an event, and it was a thoroughly enjoyable experience; I hope to speak at future events as well. As promised in my
The following was sent to me by my friend and colleague Dave Miller:

Dave’s Email: Wanted to pass along something I hadn’t used before and found useful for easily getting rid of duplicates in a set of data. The functionality has existed in the SQL standard and has been supported since SQL Server 2005. It uses Common Table Expressions (CTEs) and the ROW_NUMBER() function. The PARTITION BY portion of the statement specifies when to reset the row number; in my example
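Dave's example itself isn't included in this excerpt, but the CTE-plus-ROW_NUMBER() technique he describes generally takes this shape; the table `dbo.Customers` and its columns are stand-ins:

```sql
-- Number the rows within each duplicate group, then delete all but one.
-- PARTITION BY resets the row number for each new (FirstName, LastName) pair.
WITH Dupes AS
(
    SELECT FirstName,
           LastName,
           ROW_NUMBER() OVER (PARTITION BY FirstName, LastName
                              ORDER BY FirstName) AS RowNum
    FROM dbo.Customers
)
DELETE FROM Dupes
WHERE RowNum > 1;   -- keep exactly one row per duplicate group
```

Deleting through the CTE removes the underlying duplicate rows while leaving row number 1 of each group in place.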
I’ve got a date with an Error Log — Error Logs Part II

In my first post on SQL Server error logs, I briefly mentioned using xp_enumerrorlogs to list the archived error logs. Here I want to demonstrate how to use the procedure to find and output all error logs since a specific date.

xp_enumerrorlogs

This procedure returns three columns: Archive #, Date, and Log File Size (Bytes). Archive numbering is 0-based, with 0 being the currently active log
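The three columns described above can be captured into a temp table and filtered by date; this is a sketch of the approach rather than the post's exact code, and the cutoff date is an example value:

```sql
-- Capture the archive list, then keep only logs since a given date.
DECLARE @Since datetime;
SET @Since = '2010-01-01';

CREATE TABLE #ErrorLogs
(
    ArchiveNo int,       -- Archive #  (0 = currently active log)
    LogDate   datetime,  -- Date
    LogSize   int        -- Log File Size (Bytes)
);

INSERT INTO #ErrorLogs
EXEC master.dbo.xp_enumerrorlogs;

SELECT ArchiveNo, LogDate, LogSize
FROM #ErrorLogs
WHERE LogDate >= @Since
ORDER BY ArchiveNo;
```

Each qualifying `ArchiveNo` can then be passed to xp_readerrorlog to output the contents of that log.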
“Select *” is bad. Everyone knows it, but everyone still uses it. I use it. Most of the time it is fairly innocuous. No harm, no foul, right? But what about those precious milliseconds lost sending data across the network to client applications? That’s where you start to notice the effect of a Select *. This effect is amplified when we deal with tables with large data types such as XML and the new max data types.
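The difference the paragraph describes comes down to how much data crosses the wire; the table and columns below are hypothetical:

```sql
-- Pulls every column, including any wide XML or varchar(max) data
-- the application never looks at.
SELECT * FROM dbo.Orders;

-- Asks only for what the application actually needs,
-- so wide columns never leave the server.
SELECT OrderID, OrderDate, CustomerID
FROM dbo.Orders;
```

On narrow tables the two perform similarly, which is why `SELECT *` usually feels harmless; the gap shows up once large-value columns are in the row.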
How do you know what procedures are cached in SQL Server? Simple, just ask, and SQL Server will tell you. You can query the SQL Server dynamic management views to get a list of procedures in cache. In this example, I query sys.dm_exec_cached_plans and sys.dm_exec_sql_text:
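The original query isn't reproduced in this excerpt, but joining those two DMVs typically looks something like this sketch:

```sql
-- List cached procedure plans along with their source text and reuse counts.
SELECT cp.usecounts,
       cp.objtype,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.objtype = 'Proc'
ORDER BY cp.usecounts DESC;
```

`sys.dm_exec_sql_text` is a function that takes the `plan_handle` from each cached plan, which is why it is applied with CROSS APPLY rather than a join.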
It’s nice to be able to package a process into a single, tidy, elegant query, but it isn’t always possible to do so. And even when it is possible, it may not be the best approach. Often we can get better performance out of large or complex queries by breaking them up into smaller pieces. I encountered a great example of this today. A developer asked me about a query that was taking a very long time to process in the test environment. The step that was having issues was building a long string by concatenating short strings, which would then be passed to a remote server for processing. It was a simple, recursive string-building query.
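The developer's query isn't shown here, but "recursive string building" in T-SQL often means the variable-concatenation pattern below; the table `dbo.Products` is a stand-in for the real source:

```sql
-- Build one long comma-delimited string from many short values.
-- The variable is appended to once per row as the query runs.
DECLARE @List varchar(max);
SET @List = '';

SELECT @List = @List + CAST(ProductID AS varchar(10)) + ','
FROM dbo.Products;

SELECT @List AS DelimitedList;
```

This pattern is compact but can degrade badly as the string grows, which is one reason breaking such a process into smaller pieces can pay off.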