Comments (35)

  1. Robert,
    Will this be recorded and be available afterwards?

    Thanks,

    1. It will be recorded and available for viewing by anyone who registered.

  2. Great, thanks

  3. Robert,

    I appreciate you taking the time to deliver this webcast. I was in attendance, but my one question was not addressed, so I’m hoping you have time to address it here.

    Q: At the beginning of the presentation, you answered the question of whether to colocate the data files and log file for tempdb with the standard “It Depends” answer. Can you provide some criteria or metrics that would make you lean one way or the other?

    Thanks in advance for your time.

    -Mike Eastland

    1. It would depend on the activity hitting tempDB. Are your objects being maintained in memory or on disk? What is the amount of activity hitting the files? Are you seeing a lot of pageiolatch waits that are not on allocation pages? Are you seeing IO_completion waits, etc.?

      If your disk subsystem is getting overwhelmed, you will definitely see the difference in wait times. This can tell you if tempDB is relying heavily on the disk subsystem and if it needs improvement.

      You will then have to dig into your disk counters to determine where your bottleneck is. If the bottleneck is the disks themselves, then you will be able to see some definite benefits by separating the disk files to separate dedicated LUNs. If the bottleneck is throughput, then separating the files to separate LUNs on the same SAN may not give you a big performance boost. If the SAN itself is overloaded, separating them out to different LUNs will yield no benefit because the SAN will still be overloaded.

      If the problem with throughput is related to the SAN connections themselves, then it depends on how you set up the new connections. If you can add new, dedicated LUNs with the same number of paths as the original LUN, this could alleviate a lot of the load on throughput. You could also get a lot of benefit in this case simply by increasing the number of paths to the SAN on the existing LUN.

      There are a lot of ways to improve your disk subsystem to improve tempDB throughput. Testing is the best way to know for sure which configuration will work best for you. This is when it is important to have a good relationship with the SAN engineers.
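      The cumulative wait stats mentioned above can be checked with a quick query. This is a sketch, assuming SQL 2005 or later (the sys.dm_os_wait_stats DMV); the numbers accumulate from the last service restart, so compare snapshots over time rather than reading them in isolation:

      ```sql
      -- Sketch: IO-related waits accumulated since the last restart (SQL 2005+).
      -- High average waits for PAGEIOLATCH_* or IO_COMPLETION suggest the disk
      -- subsystem is a bottleneck; dig into the disk counters from there.
      Select wait_type,
          waiting_tasks_count,
          wait_time_ms,
          avg_wait_ms = wait_time_ms / Nullif(waiting_tasks_count, 0)
      From sys.dm_os_wait_stats
      Where wait_type Like 'PAGEIOLATCH%'
      Or wait_type = 'IO_COMPLETION'
      Order By wait_time_ms Desc;
      ```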

  4. Thank you very much! This was hot Stuff!

    1. Thanks Thomas! I’m glad you enjoyed the presentation.

  5. Robert,

    I do not see page contention (GAM, SGAM pages) with tempdb in my environment. Do I need to create multiple tempdb data files?

    Thanks in advance for your time.

    Regards,
    Tuan

    1. The answer is it depends. If you are not seeing tempDB contention with your maximum expected workload, then you probably don’t need multiple files. However, having multiple data files has very minimal overhead with fewer than 8 data files, so you should ask yourself whether you want to wait for tempDB contention to occur. My recommendation is to start with at least 1 data file for every 2 to 4 logical CPUs and adjust as needed. If you are not actively monitoring for tempDB contention, or are not comfortable with identifying and resolving it in a timely fashion, I recommend going with 1 data file per logical CPU.
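      Adding files is straightforward. Here is a minimal sketch, assuming a box with 8 logical CPUs (so 4 data files at 1 per 2 CPUs) and a hypothetical T:\TempDB folder; adjust the names, sizes, and paths for your environment, and keep all of the data files the same size so the proportional fill algorithm spreads allocations evenly:

      ```sql
      -- Sketch: add tempdb data files (the T:\TempDB folder is a hypothetical path).
      -- Equal sizes matter: proportional fill favors the files with the most free space.
      Alter Database tempdb Add File
          (Name = N'tempdev2', Filename = N'T:\TempDB\tempdev2.ndf', Size = 4GB, Filegrowth = 512MB);
      Alter Database tempdb Add File
          (Name = N'tempdev3', Filename = N'T:\TempDB\tempdev3.ndf', Size = 4GB, Filegrowth = 512MB);
      Alter Database tempdb Add File
          (Name = N'tempdev4', Filename = N'T:\TempDB\tempdev4.ndf', Size = 4GB, Filegrowth = 512MB);
      ```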

  6. Robert,
    Where would I be able to get the recording and the presentation materials?

    Thanks,
    Abi

    1. If you registered for the event, you will get an email on Friday with a direct link to the recording. You can access the recording now from the Idera website by filling out the registration form: http://www.idera.com/Events/RegisterWC.aspx?EventID=208

      The session files are available at http://www.sqlsoldier.com/tempdb

  7. I am looking to grow the tempdb log file from 2GB to 18GB. Can you show me T-SQL scripts to accomplish this? I don’t want to end up with 16 large VLFs. How do I increment the log file size in chunks of 4GB or 8GB?

    1. Here’s a script I just threw together to resize the log file in a loop, 4000 MB at a time:

      Declare @LogSizeKB bigint,
          @TargetSizeKB bigint,
          @AlterSizeKB bigint,
          @LogName sysname,
          @SQL nvarchar(max);

      Set @TargetSizeKB = 18 * 1024 * 1024; -- target size: 18 GB

      Select @LogSizeKB = size * 8,
          @LogName = name
      From sys.master_files
      Where database_id = 2
      And type = 1; -- log file

      While @LogSizeKB < @TargetSizeKB
      Begin
          Set @AlterSizeKB = @LogSizeKB + 4096000; -- grow by 4000 MB

          If @AlterSizeKB > @TargetSizeKB
          Begin
              Set @AlterSizeKB = @TargetSizeKB;
          End

          Set @SQL = N'Alter Database tempdb Modify File
              (Name = N''' + @LogName + ''',
              Size = ' + Cast(@AlterSizeKB As nvarchar(20)) + N'KB);';

          Print @SQL;

          Exec sp_executesql @SQL;

          Select @LogSizeKB = size * 8
          From sys.master_files
          Where database_id = 2
          And type = 1 -- log file
          And name = @LogName;
      End

      Select LogName = name,
          LogSizeKB = size * 8
      From sys.master_files
      Where database_id = 2
      And type = 1 -- log file
      And name = @LogName;

  8. Robert,

    Thank you very much! The presentation was very good. I appreciate it.

    Best Regards,
    Tony

    1. Thanks Tony! Glad that people liked it and found it educational.

  9. I attended the webinar on 7/13, but joined a few minutes late. You mentioned a script or scripts for monitoring stress on tempdb, at least one of which was using sys.dm_os_waiting_tasks. Have you posted those scripts anywhere? I did find your “Where are my temp db objects” script, but I could really benefit from the others as well.

  10. (Please ignore my previous post; I see the file is now posted on the tempdb page.
    Thank you!)

    1. Thanks for attending Christopher! I’m glad you found what you were looking for.

  11. I loved the presentation. Is there a link to the script that locates the PFS, GAM, and SGAM pages and determines whether there is contention for them? I’d really love that script; it’s the one you showed in the presentation.

    1. Thanks Bob! All of the presentation files can be found at http://www.sqlsoldier.com/tempdb

  12. Great presentation! It was very helpful.

    1. Thanks Jean! I’m glad you enjoyed it!

  13. Placement of tempdb log files. Say you have 3 drives: D, L, and T.
    Assume your production database is on D (for data files (mdf)) and L (for log files (ldf)). How should the tempdb data and log files be placed? (Just assume it is 1 CPU.) Should the tempdb data and log files both be placed together on the T drive, or should the tempdb data files go on the T drive and the tempdb log file on the L drive (meaning it would share a disk with the production databases’ log files)?

    1. Hi Erica. I would put the tempdb data and log file on the same drive, T. It’s more important to have the tempdb files separated from the user databases than from each other.
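      For reference, moving the tempdb files is only a metadata change plus a service restart. A sketch, assuming a hypothetical T:\TempDB folder and the default logical file names:

      ```sql
      -- Sketch: repoint tempdb's files at the dedicated T: drive (hypothetical path).
      -- The change is metadata only; tempdb is recreated at the new location the
      -- next time the SQL Server service starts.
      Alter Database tempdb Modify File
          (Name = N'tempdev', Filename = N'T:\TempDB\tempdb.mdf');
      Alter Database tempdb Modify File
          (Name = N'templog', Filename = N'T:\TempDB\templog.ldf');
      ```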

      1. We have server builds with C: (mirrored: OS, pagefile, binaries), D: (RAID 10 for log files), and E: (RAID 10 for data files). Kind of a common setup, perhaps.
        So in order to separate the tempdb files from the user DB files, should they go out on C:? If so, shouldn’t the OS pagefile not exist on the same volume as the tempdb files?
        Or would the contention be... oops, it depends. But what then would be your recommendation?
        Thanks! Rick

        1. My recommendation would be to get more drives.

          This is the minimum drive configuration we use for our systems:

          C: system file (local drive)
          D: SQL install drive, system DBs (local drive)
          E: Backup drive (RAID 10 preferred, RAID 6 alt.)
          H: User DB data files (RAID 10 preferred, RAID 6 alt.)
          O: User DB log files (RAID 10 preferred, RAID 6 alt.)
          T: tempdb DB files (RAID 10/1 preferred, RAID 6 alt.)

          I list RAID 6 as an alternate on all of our SAN drives because all of our shared SANs only offer RAID 6. If you can get considerably more usable space by using a lower level of RAID like 5 or 6, you may get better performance by segregating the files onto more drives than by combining them onto RAID 10.

          This is when it really helps to have a good SAN admin.

  14. I found the webcast very well presented and very informative. For me it was very well organized, as well as compact enough to make me realize there is a whole bunch to consider beyond what you get with the classic OOB installs!
    I’d like to see more on this topic in the future (possibly including estimation/planning for the initial deployment of tempdb: sizing based upon logical CPUs along with the types/quantities of user DBs, etc. More on monitoring and interpretation of possible problems, metrics... geez, a book!).
    Any way we can get a session script of your presentation that would go with your PPT stack?
    Thank you, sincerely,
    Rick Willemain

    1. Thanks for the feedback Rick! Unfortunately, I was quite time constrained. I’m hoping to find the time to write a whitepaper on tempdb, so I may be able to get more info out there.

      There was no script. I don’t script my presentations. Everything I want to say is in my head, and I just start talking and don’t stop until I run out of time. 🙂

      The direct link to the recording will be emailed out tomorrow to everyone who registered, or you can access it today from the Idera website (additional registration required): http://www.idera.com/Events/RegisterWC.aspx?EventID=208

  15. Hi, I really liked your webcast. Very informative. We are having some issues on a SQL 2000 SE server. Do you have any scripts which will work on SQL 2000 for getting tempdb contention and tempdb objects?

    1. Thanks Sandeep. Here is a version of the tempDB contention script that will work on SQL 2000:

      Select spid,
          lastwaittype,
          waittime,
          blocked,
          waitresource,
          ResourceType = Case
              When (Cast(Right(waitresource, Len(waitresource)
                  - Charindex(':', waitresource, 3)) As Int) - 1) % 8088 = 0 Then 'Is PFS Page'
              When (Cast(Right(waitresource, Len(waitresource)
                  - Charindex(':', waitresource, 3)) As Int) - 2) % 511232 = 0 Then 'Is GAM Page'
              When (Cast(Right(waitresource, Len(waitresource)
                  - Charindex(':', waitresource, 3)) As Int) - 3) % 511232 = 0 Then 'Is SGAM Page'
              Else 'Is Not PFS, GAM, or SGAM page'
          End
      From sysprocesses
      Where lastwaittype Like 'PAGE%LATCH_%'
      And waitresource Like '2:%'
      And status = 'suspended';
      
  16. The presentation was very useful. Tracking the allocation of the tempdb objects that were in cache and on disk was an interesting topic. A couple of observations:

    The script doesn’t identify the requesting process allocating the tempdb resources. I support a large third-party vendor K-12 education product that consumes resources. I find the following script helps me ID the requesting spid that owns the tempdb objects:

    SELECT R3.session_id, R3.plan_handle, R3.sql_handle, R4.text,
        R3.request_internal_objects_alloc_page_count, R3.request_internal_objects_dealloc_page_count
    FROM (SELECT R1.session_id, R1.request_id,
            R1.request_internal_objects_alloc_page_count, R1.request_internal_objects_dealloc_page_count,
            R2.sql_handle, R2.statement_start_offset, R2.statement_end_offset, R2.plan_handle
        FROM (SELECT session_id, request_id,
                SUM(internal_objects_alloc_page_count) AS request_internal_objects_alloc_page_count,
                SUM(internal_objects_dealloc_page_count) AS request_internal_objects_dealloc_page_count
            FROM sys.dm_db_task_space_usage
            WHERE session_id > 50 -- SQL Server uses spid 50 and less for internal processes
            AND session_id <> @@spid
            GROUP BY session_id, request_id
            ) R1
        INNER JOIN sys.dm_exec_requests R2 ON R1.session_id = R2.session_id AND R1.request_id = R2.request_id) AS R3
    OUTER APPLY (SELECT text FROM sys.dm_exec_sql_text(R3.sql_handle)) AS R4;

    Plus it returns the text of the requesting spid (granted, I could return the executing statement text, say, of the stored procedure or batch), but I have other scripts that are used for that type of monitoring.

    Second, the tempdb cache allocation script can take a long time to execute on an extremely active production system, or even encounter a resource deadlock. Some of the processes allocate/deallocate 400,000 to 1,000,000 pages in tempdb due to both temp objects and hash spills. Even with some 70K+ pages, execution times can exceed 3 minutes for your script.

    Granted, I find the breakdown by temporary object so interesting that I would like to pursue it further.

    Thank you.

    Regards,

    Joe

    1. Thanks Joe! I look forward to trying the script out! Do you have a blog where you post stuff like this? Or have you considered blogging stuff like this?

      1. No blog (other than the one my wife recently started on Florida theme parks: http://www.floridafuntimes.com).

        I put together a little script that pulls the space allocation and row counts for tempdb objects. It runs nicely even as the tempdb page allocation approaches 3/4 of a million pages.

        select sao.name, sao.type_desc, sao.create_date,
            au.type, total_pages, used_pages, data_pages, part.rows, part.index_id
        FROM sys.all_objects sao (nolock)
        inner join sys.partitions part (nolock)
            on part.object_id = sao.object_id
        inner join sys.allocation_units au (nolock)
            on au.container_id = part.hobt_id
        where sao.type_desc = 'USER_TABLE'
        order by sao.name desc;

        I believe the issue is the number of objects being allocated in tempdb. I ran the 2 scripts while monitoring a large state extract. Although the one process was only halfway done, we had created some 50 tempdb tables with several million rows. Tempdb had approx 8300 additional objects, though most were 2 or fewer pages. The original base script seems to be blocked on sys.sysallocunits, which would explain the long duration under extreme loads. I wasn’t able to capture the deadlock chain for the original script.

        I tend to share my scripts on my vendors community board and with their in-house development team and implementation consultants.

        I had to cobble these and other scripts together to smooth out the implementation at our site since we are one of their largest customers in terms of student populations.

        Much Regards,

        Joe
        It was a good presentation!!!

        1. Thanks again Joe! Might check out Floridafuntimes.com too. Going there for a SQL Saturday soon.

            1. For an excellent dinner, head to Disney’s Boardwalk Resort for dinner at Kouzzina by Cat Cora (Iron Chef). If you love seafood, try the Fisherman’s Stew: Scallops, seasonal Fish, Shellfish, Fennel, Grilled Bread, and Ouzo Butter. Start the meal with Cat’s Ouzo-tini (Absolut Vodka, Metaxa Ouzo, Pineapple Juice, fresh Lime Juice, and Pomegranate) or a Pear Martini (a combination of Glenfiddich 12 year, Hendrick’s Gin, Kern’s Pear Nectar, agave nectar, and fresh lime juice. A personal favorite).

              For a nice lunch, try the Kona Cafe at the Polynesian Resort. Duck Pot Stickers, then an entrée of Sustainable Fish with Tostones and veggies. Enjoy an Island Sunset cocktail (a smoothie with Seven Tiki Spiced Rum, Parrot Bay Coconut Rum, Melon, and Peach combined with their Orange Guava-Passion Fruit Juice) or an Organic Agave Nectar Margarita in the Tambu Lounge and enjoy the scenery.

            Yes we love Disney and know how to enjoy ourselves there.

            Regards,

            Joe
