Extending The SharePoint 2010 Health & Usage – Part 1: Feature and Capability Overview

This is the first article in a four-part series where I discuss the Health & Usage services built into SharePoint 2010 and how they can be extended to build some very interesting solutions. My hope is that after reading the series you find really cool ways to use the Usage services, and I hope to hear back about what you all have built.

The four-part series breaks down as follows, with all parts published in quick succession so I won't keep you waiting for the ending.

  1. Feature and Capability Overview (this article)
  2. Writing a Custom Usage Provider
  3. Writing Custom Reports
  4. Writing a Custom Usage Receiver

Code Download: Microsoft.SP.Usage.zip (160 kb)


In this first article we need to lay some groundwork before digging into code. The Usage and Health Data Collection component is new with SharePoint 2010 and ships as part of the Foundation SKU, so everything we discuss throughout the series is equally applicable to any SharePoint SKU. The service consists of a single Service Application and database, Health Providers, Usage Providers, and a number of SharePoint timer jobs.

There are two distinct types of data which are collected and reported on with this service: usage data and health data. The usage data can be used to gain insight into what your users, services, and other components within SharePoint are accessing, or what actions they may be performing. For example, one of the most useful bits of usage information collected is the Request Usage, which catalogs each request SharePoint receives from users. This data is a bit like the traditional IIS logs in that much of the same data is collected; however, unlike IIS logs, additional SharePoint-specific information is included with each request, which provides better context about the SharePoint intrinsic objects involved in the request. For example, the Web Application ID, Site ID, Web ID, and correlation ID are collected for each request, allowing the administrator to gain a deeper understanding of usage for each Site and Web, as well as to correlate error conditions, which typically display or log correlation IDs, back to the request which produced the error.

In addition to the usage information, you may choose to have health data collected, which can also be a valuable tool when troubleshooting error conditions or performance issues. An administrator may choose to have SharePoint ULS logs, Windows Event Logs, performance counters, and SQL DMV information collected. All of this information is collected through one of two provider interfaces: Usage Providers and Health Providers. The way to distinguish between these is to think of Usage Provider data as any information collected as a result of some usage activity, such as a user request or execution of a service; the more users generating requests, the more usage data will be collected. Health Provider data, by contrast, is collected regardless of any usage and is outside the scope of any user activity. For example, performance monitor information can be continually collected even when there are no requests being made of the system.
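On a live farm you can enumerate both kinds of providers from PowerShell. A quick sketch using the SP2010 administration cmdlets (the exact property names in the output may vary slightly by build):

```powershell
# List the registered Usage Providers (usage definitions) and their settings
Get-SPUsageDefinition | Select-Object Name, Enabled, DaysRetained

# List the Health (diagnostics) providers that feed the same database
Get-SPDiagnosticsProvider | Select-Object Name, Enabled
```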

Usage and Health Data Collection Service Application

It's important to note the Usage and Health Data Collection Service Application is not a "normal" Service Application in that it does not appear in the UI where we might see other Service Applications. This service application can only be installed once per farm and cannot be consumed across farms. In my previous post, where I provide a simple PowerShell script to get started deploying Service Applications, you may notice I create the Usage and Health Data Collection Service Application as the first Service Application. This is not by accident: some Service Applications, such as the SharePoint Search Service Application, require an instance of the Usage and Health Data Collection Service Application to exist, so when provisioning the SharePoint Search Service Application, if one does not exist it will be created with default settings which may not be what you want. No worries if this has happened to you already, or if you leaned on the Farm Configuration Wizard for a bit of assistance – a little additional PowerShell can re-create your Usage Database with the name, and on the SQL instance, you desire.
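If you would rather provision the service application deliberately instead of letting Search create one with defaults, something along these lines works; the application name, SQL server, and database name below are placeholders for your own values:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Create the Usage and Health Data Collection Service Application with an
# explicit database name on the SQL instance you want
New-SPUsageApplication -Name "Usage and Health Data Collection" `
    -DatabaseServer "SQL01" `
    -DatabaseName "SP2010_Usage"

# The proxy is created in a Stopped state; provision it so data is logged
$proxy = Get-SPServiceApplicationProxy |
    Where-Object { $_.TypeName -like "Usage and Health*" }
$proxy.Provision()
```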

Usage and Health Database

The Usage and Health Data Collection Service Application's main data store is the Usage Database. As mentioned previously, we only get one Usage and Health Data Collection Service Application per SharePoint farm, and it can contain a single reference to a SQL database, so if you plan to make heavy use of this database it may make sense to host it on a dedicated LUN with plenty of space, and potentially even a dedicated SQL instance. The DB itself is write-optimized and stores usage information such as page requests, search query usage, timer job usage, etc. It also stores health information such as Event Logs, ULS logs, performance counter data, etc. In most cases it does not make sense to attempt to mirror or configure this DB for log shipping, as this data can generally be considered throwaway – it's usage and health data; your users will create more!

By default the retention period for the Usage Database data is 14 days; however, an administrator can change this to up to 31 days if desired. There is one gotcha you need to be aware of: anytime the retention period is modified, any usage data which exists at the time of the change will be lost.
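Retention is set per usage definition rather than globally. A sketch, where "Page Requests" stands in for whichever definition you care about (run Get-SPUsageDefinition first to see the exact names in your farm):

```powershell
# Inspect current retention for every usage definition
Get-SPUsageDefinition | Select-Object Name, DaysRetained

# Raise one definition to the 31-day maximum -- remember this
# drops any data already collected for that definition
Set-SPUsageDefinition -Identity "Page Requests" -DaysRetained 31
```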

As you would expect, the data is stored within the database in tables. Each Usage and Health provider has its own set of tables with which to store its collected data. The tables are partitioned, not using SQL partitioning, but rather by creating 32 tables for each provider. The tables use a naming scheme of dbo.{provider-name}_Partition{0-31}, where provider-name is replaced by the name of the provider and the number designator of 0-31 distinguishes between each of the tables. Provider data is written into each table based upon UTC time, and for each day a new table is used. So while each table holds a day's worth of collected data, unless your servers are in the GMT time zone you will have to look across 2 tables to get a view of a single day's worth of information. No worries, however, because included with each provider is a SQL view named dbo.{provider-name} which joins the 32 partitioned tables into a single view of all your data.
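As an illustration, here is how you might pull the busiest sites out of the dbo.RequestUsage view from PowerShell. The server, database, and column names are assumptions based on a default SP2010 install, so verify them against your own schema first:

```powershell
# Query the dbo.RequestUsage view, which unions the 32 partition tables
$conn = New-Object System.Data.SqlClient.SqlConnection(
    "Server=SQL01;Database=SP2010_Usage;Integrated Security=SSPI")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = @"
SELECT TOP 10 SiteId, COUNT(*) AS RequestCount
FROM dbo.RequestUsage
GROUP BY SiteId
ORDER BY COUNT(*) DESC
"@
$reader = $cmd.ExecuteReader()
while ($reader.Read()) {
    "{0}  {1}" -f $reader["SiteId"], $reader["RequestCount"]
}
$conn.Close()
```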

Since I have been discussing the database schema and providing information on where you can find all this great diagnostic information, it's probably a good time to discuss the database access policy as it relates to the Usage and Health DB. As you may know, the SharePoint support team (and the product team, for that matter) have a very strict policy when it comes to modifying and querying databases which are owned by SharePoint. There are obviously very reasonable reasons why these rules exist and you are best to abide by them. The Usage DB is a bit different: you can query and extend the schema of this DB without getting into trouble with support, as long as you add your own objects and don't try to modify the OOB objects. It's still advisable, however, that if you plan to do some very heavy querying you move this data off into another DB, and optionally onto another SQL instance; otherwise you could impact the ability to capture the very information you are after. An example of where you may want to extend the schema with custom stored procedures or UDFs is provided in part 3 of this series, where we build custom reports.

Data Movement

The Health providers capture data and log directly into the Usage and Health Database; however, usage information takes a bit of a different path. Remember, usage data collection is associated with system requests, and if we were to attempt to log into SQL each time one of these events was raised, it would not be long before we would surely see performance issues. To ensure performance is not an issue, the SPUsageManager uses ETW tracing to write the collected usage information into binary "*.usage" files on the local machine. The SPUsageManager wraps the ETW tracing and uses binary serialization to push an instance of an SPUsageEntry object into the usage files (I will discuss how you create your own SPUsageEntry classes in the next article) for each usage request which is logged.

Two timer jobs are employed to move and process usage data. The first timer job is the Microsoft SharePoint Foundation Usage Data Import; this job runs every 30 minutes and picks up any data which was written into the *.usage files and moves it into the Usage database. It does this by reading the *.usage files and calling the Usage Provider, which writes the information into the Usage DB. After each Usage Provider has had the opportunity to process its data, any registered Usage Receivers will be called, which optionally may do additional processing. Think of Usage Receivers much like SPList event receivers, where the usage information collected for a provider is passed to the receiver each time the Microsoft SharePoint Foundation Usage Data Import job runs – assuming there is data for the Usage Provider for which the Usage Receiver is registered. In part 4 of this series I will cover creating a custom Usage Receiver in more detail.
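When testing a provider or receiver it helps not to wait the 30 minutes; the import job can be kicked off on demand. The internal job name below, job-usage-log-file-import, is what I see on SP2010 farms, but verify it with the first command:

```powershell
# List the usage-related timer jobs and their schedules
Get-SPTimerJob | Where-Object { $_.Name -like "*usage*" } |
    Select-Object Name, Schedule

# Trigger an immediate usage log import instead of waiting 30 minutes
$import = Get-SPTimerJob |
    Where-Object { $_.Name -eq "job-usage-log-file-import" }
$import.RunNow()
```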


Usage Providers don’t have to write their data into the Usage DB; however, all of the OOB Usage Providers do. In the figure above you will note the ability for a custom Usage Provider to write either to the Usage DB or to another data store. You may also notice that custom Usage Receivers can listen for usage import events not only from custom Usage Providers but also from the OOB Usage Providers. In fact, this is how the SharePoint Web Analytics Service Application receives its data: it registers a custom Usage Receiver against the OOB SPRequestUsageDefinition, aka the Request Usage provider, and uses this receiver to move data into its staging database for further processing for the analytics reporting.

The second and final timer job is the Microsoft SharePoint Foundation Usage Data Processing, which runs daily and calls into each Usage Provider, allowing the provider to do any daily aggregation processing (ProcessData()) and truncating of data (TruncateData()). This processing is optional and not all OOB providers leverage it. In fact, the only OOB provider that does take advantage of this interface uses the ProcessData() method to process and update usage information for each Site Collection, e.g. SPSite.UsageInfo.


Like so many components within SharePoint, the Health and Monitoring functionality can be extended in a number of ways, for example:

  • Health Rules – Useful for periodic checks against rules
  • Usage Providers & Usage Entries (Part 2) – Collect usage related data and import/process into data store
  • Usage Reports (Part 3) – add additional usage reports
  • Usage Receivers (Part 4) – Receive notifications of OOB or custom usage definition data imports for additional processing and/or reporting.
  • Diagnostics Providers – Collect non-usage data and import/process into data store.


The Scenario

For the remaining posts in this series I will walk you through building a solution to track file downloads from SharePoint Document Libraries and report on the results. It's key that this solution be scalable, so we will create a custom Usage Provider, add a few custom usage reports into Central Administration, and create a Usage Receiver to help us show download information for each SPSite within our environment.


10 thoughts on “Extending The SharePoint 2010 Health & Usage – Part 1: Feature and Capability Overview”

  1. How does SP collect usage data? Does it mine it from IIS logs? Does it rely on the SP ISAPI? Does the Web Analytics Service Application act as its own TCP listener? I can’t seem to find much info on where/how the data is collected.

  2. Mark,
    The usage data is collected by SharePoint independent of the IIS logs, so there is no dependency on IIS logging even being enabled. No part of SharePoint 2010 uses ISAPI; instead it uses ASP.NET HttpModules, as we now support the fully integrated pipeline offered by IIS 7.0 and 7.5. Web Analytics gets all of its data from the Usage Logging SA through a Usage Receiver, which I discuss in a later blog post.

  3. Great article!
    I have a question with regards to a comment where you say
    “you can query and extend the schema of this DB without getting into trouble…”
    I need a Microsoft article where it states this. Do you know of such a reference?

    1. I don’t know of a reference, but I used to work for Microsoft and have never heard of or run into any issues querying this DB. It’s a logging DB and its data is not critical to SharePoint.


  5. My usage database keeps growing although the “Microsoft SharePoint Foundation Usage Data Processing” timer job has run successfully. I have also changed the retention periods. Any other ideas to make the database smaller?
