“QUOTED_IDENTIFIER” causes Unexpected Query Plan for Persisted Computed Column query

Author: Shaun Tinline-Jones

Technical Reviewers: Thomas Kejser, Steve Howard, Jaime Alva Bravo, Kun Cheng, Jimmy May

Note: Validation for this post was performed in the SQL CAT Customer Lab on an HP Proliant DL580 G7, Intel Xeon Nehalem E7-4870 2.40 GHz, 4 sockets with 10 physical cores (20 logical cores) each, for a total of 40 physical cores and 80 logical cores; 1 TB RAM. SQL Server 2008 R2 was installed on a Fusion-io ioDrive Duo 1.28TB MCL using driver version 2.3.1.

We recently engaged a Tier 1 Global ISV in our labs, where the objective was to achieve the highest Business Transactions for their application.

During the testing and optimizing we encountered an interesting behavior where we kept incurring a clustered index scan, irrespective of applying known techniques to generate an index seek. In this blog, I will share the reproducible steps that reveal the following observation:

“If the table has a persisted computed column*, the query optimizer will choose a clustered index scan over a clustered index seek.”

*This applies to the most common case, where the computed expression does not produce a constant.

Here is the code to create the table & SPs that we’ll refer to for describing this behavior:

IF OBJECT_ID('dbo.testComputedCol') IS NOT NULL
DROP TABLE dbo.TestComputedCol
GO

IF OBJECT_ID('dbo.GetResults01') IS NOT NULL
DROP PROC dbo.GetResults01
GO

IF OBJECT_ID('dbo.GetResults02') IS NOT NULL
DROP PROC dbo.GetResults02
GO

CREATE TABLE dbo.TestComputedCol (Col1 int PRIMARY KEY CLUSTERED, Col2 int)
GO

CREATE NONCLUSTERED INDEX NDX_TestComputedCol_Col2 ON dbo.TestComputedCol (Col2)
GO

SET NOCOUNT ON

DECLARE @Count int = 1;

WHILE @Count < 1000
BEGIN
INSERT INTO dbo.TestComputedCol (Col1, Col2) VALUES (@Count, @Count*10)

SET @Count += 1
END
GO

SET QUOTED_IDENTIFIER OFF
GO

CREATE PROC dbo.GetResults01
AS
SELECT * FROM dbo.TestComputedCol WHERE Col1 = 10
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE PROC dbo.GetResults02
AS
SELECT * FROM dbo.TestComputedCol WHERE Col1 = 10
GO

To baseline the scenario we are describing, we run both SPs and confirm that we experience the index seek operation.

EXEC dbo.GetResults01

[Query plan screenshot: clustered index seek]

EXEC dbo.GetResults02

[Query plan screenshot: clustered index seek]

In our scenario we recognized the need to partition the data to avoid the last page insert contention issue, and therefore implemented a computed column similar to the statement below. As documented, QUOTED_IDENTIFIER must be ON in order to create the computed column.

Side Note: For resolving the Last Page Insert Contention challenge, we were left with two choices: troubleshoot the reason for the differing behavior, or change the insert behavior to include setting a value for the partition bucket. We chose to troubleshoot the issue. This blog is not striving to optimize the partition hash; however, it's important to recognize that resolving Last Page Insert Contention through a computed column is not optimal.

In its literal sense, QUOTED_IDENTIFIER controls whether double-quoted strings are parsed as identifiers. A recommended practice is to delimit identifiers with square brackets ([]) instead. It is therefore not intuitive that the QUOTED_IDENTIFIER setting influences queries against computed columns.
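
To make the literal behavior concrete, here is a minimal sketch (using the table created above) of how the setting changes the way double quotes are parsed:

-- With the setting ON, double quotes delimit identifiers
SET QUOTED_IDENTIFIER ON
GO
SELECT "Col1" FROM dbo.TestComputedCol WHERE Col1 = 10   -- "Col1" is parsed as the column identifier
GO

-- With the setting OFF, double quotes delimit string literals
SET QUOTED_IDENTIFIER OFF
GO
SELECT "Col1" FROM dbo.TestComputedCol WHERE Col1 = 10   -- "Col1" is now the literal string 'Col1'
SELECT [Col1] FROM dbo.TestComputedCol WHERE Col1 = 10   -- square brackets work regardless of the setting
GO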

NOTE: When a table is created, the QUOTED_IDENTIFIER option is always stored as ON in the metadata for the table, irrespective of the current setting. However when creating a computed column, the setting must be set to ON.

SET QUOTED_IDENTIFIER ON
GO

ALTER TABLE dbo.TestComputedCol
ADD ComputedCol AS CAST(Col1%10 AS tinyint) PERSISTED NOT NULL

 
In our solution, prior to adding the computed column, a frequently called SP generated a query plan very similar to what you see below:

EXEC dbo.GetResults01

[Query plan screenshot: clustered index seek]
 

However after the computed column was added, executing the same SP generated the following query plan.

[Query plan screenshot: clustered index scan]

At first this may seem unexpected, and frustrating. However, some may infer that this is expected behavior, as described in the article Creating Indexes on Computed Columns. Take note that this is only an inference, as we did not add an index to the computed column. Does this happen if the SELECT and WHERE clauses do not reference the computed column? We modified the SP to follow the recommended practice of explicitly specifying only the required columns:

SELECT Col1, Col2 FROM dbo.TestComputedCol WHERE Col1 = 10

Unfortunately, you cannot shake the scan this way!

The only way to resolve the issue is to ensure that QUOTED_IDENTIFIER is ON during the execution of the batch. The challenge is finding the right place to ensure the effective setting at runtime.

Side Note: In this scenario a computed column will be the partitioning column, which requires a PERSISTED property. The above issue doesn’t occur if the computed column is not persisted.
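
For illustration, the non-persisted variant would look roughly like this (the column name is hypothetical):

-- Hypothetical non-persisted variant of the computed column (NOT NULL cannot be
-- specified without PERSISTED); per the side note above, this does not trigger the scan.
ALTER TABLE dbo.TestComputedCol
ADD ComputedColNP AS CAST(Col1%10 AS tinyint)
GO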

The default value for QUOTED_IDENTIFIER is ON; however, it can be changed at a variety of points. Therefore you may not encounter the above issue, yet when it does occur it may not be intuitive to check the QUOTED_IDENTIFIER setting, or even to know at which point along the path it has been changed.

The setting can be applied at any of the following points (a sketch of each follows the list):

  1. As a connection attribute
  2. As a database option
  3. As an object attribute
  4. As a runtime SET option
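
Here is a hedged sketch of what each of those points looks like; the database name and the extra procedure are placeholders for illustration:

-- 1. As a connection attribute: determined by the client driver/tool defaults (see the table further down)

-- 2. As a database option (placeholder database name)
ALTER DATABASE [YourDatabase] SET QUOTED_IDENTIFIER ON
GO

-- 3. As an object attribute: captured from the session setting at CREATE time
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC dbo.GetResults03   -- hypothetical procedure, for illustration only
AS
SELECT Col1, Col2 FROM dbo.TestComputedCol WHERE Col1 = 10
GO

-- 4. As a runtime SET option for the current session
SET QUOTED_IDENTIFIER ON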

In our case, we unexpectedly started incurring the index scan. We identified these index scans when the table partitioning strategy didn't produce the benefits we sought. Troubleshooting revealed that when we ran the code (from the SP) as an isolated batch in an SSMS query window, we got the seek behavior. If we ran the SP in the same query window but in a separate batch, we got the scan behavior. When we ran the code using the osql utility, we got the scan behavior for both batch executions. If we changed the computed column to a constant value, then we got the seek behavior.

The clue to finding a resolution is in the differences between the osql.exe and SSMS outcomes. These utilities have different defaults for connection attributes. The resolution lies in creating the stored procedure with QUOTED_IDENTIFIER set to ON. An interesting observation is that it's redundant to set quoted identifiers within the stored procedure body; the setting captured at creation time takes precedence.
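
As an illustration, recreating dbo.GetResults01 with the setting ON at creation time looks like this (a sketch using the objects defined earlier):

IF OBJECT_ID('dbo.GetResults01') IS NOT NULL
DROP PROC dbo.GetResults01
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE PROC dbo.GetResults01
AS
SELECT Col1, Col2 FROM dbo.TestComputedCol WHERE Col1 = 10
GO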

Let me share with you how the ALM (Application Lifecycle Management) practices of the ISV allowed this issue to slither its way into the solution.

NOTE: Keep in mind that there are many variations of what I’m about to describe

A developer creates a module, such as an SP, using a tool such as SSMS, VSTS, osql, sqlcmd, or others, but most often SSMS. When the developer is done unit testing, a script file is checked into the source control system; below is an example of such a file.

CREATE PROC dbo.GetResults01
AS
SELECT * FROM dbo.TestComputedCol WHERE Col1 = 10
GO

The Build Process then generates a deployment script. There are a variety of ways that deployment scripts can be generated and run against the target SQL Server and database instance. The point in the above scenario is that the QUOTED_IDENTIFIER setting for this object is dependent on the current connection attribute of the deployment script generator.

If the deployment script does not specify the QUOTED_IDENTIFIER setting for the objects, then the setting will depend on the connection settings in effect when the script is run. The build process for the ISV leverages the osql.exe utility, which has a default setting of OFF. It's not uncommon for build processes to use sqlcmd.exe, which has the same default and would cause the same behavior.

NOTE: If the connection does not set this attribute, then the database setting will take precedence. By default, ODBC and OLEDB set the option to ON. For DB-Library the default is OFF. Therefore it is very unlikely that the database default is ever enforced.

Tool              Default Quoted Identifier Setting
----------------  ---------------------------------
osql.exe          OFF
sqlcmd.exe        OFF
SQLCMD Scripts    ON
SSMS              ON

Below are T-SQL queries to identify current QUOTED_IDENTIFIER settings:

SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsQuotedIdentifiersEnabled')
SELECT * FROM sys.databases WHERE is_quoted_identifier_on = 0
SELECT quoted_identifier, * FROM sys.dm_exec_sessions WHERE session_id = @@SPID

They each have cases where they are useful, though I particularly like:

SELECT * FROM sys.objects
WHERE OBJECTPROPERTYEX(object_id, 'IsQuotedIdentOn') = 0 AND TYPE <> 'S'

This last query reveals those objects that deviate from the recommended standard of having QUOTED_IDENTIFIER set to ON.

In conclusion… there are connection settings that can influence the behavior of queries and their output. Therefore it's important to standardize on a setting; the more aligned it is with the industry defaults, the easier it is to adhere to the standard.

Historically, prior to computed columns, quoted identifiers were used merely to distinguish identifiers from string literals. The recommendation is to delimit identifiers with square brackets and string literals with single quotes, and to keep the QUOTED_IDENTIFIER setting ON in all cases, even if that means explicitly adding the SET statement to script files before objects are created.

For our scenario, we conclude that it is not intuitive that the reason behind the index scan is the QUOTED_IDENTIFIER setting. We therefore standardized on keeping the setting ON, and we run the queries above to validate that all appropriate objects have it set to ON as part of validating a build deployment.


Creating a Sample Application that uses an RBS Provider

Summary

This is a simple sample application that leverages the RBS provider. In this case it uses a Filestream provider, but none of the code itself is aware of what type of provider is in use.

This iteration does not attempt to use the more advanced, optimal writing and reading methods. These will come soon enough; for now, a reader of this blog should be able to gain insight into creating a sample application that demonstrates RBS capability.

Objective/Context

I want to share the experiences of creating a sample application. My intent was simply to understand how a value can be written to the database, and then read from the database when the data must be accessed through RBS. Historically, even when I implemented a Filestream solution, I was able to meet all my programmatic needs through T-SQL. This allowed me to focus on the database design, and infrastructural requirements.

RBS is forcing me to recall very rusty memories of C# coding and to go beyond simply reading code and deriving the intent. As I've said, my first iteration is to get a record into the database and read it again. I will follow this up with the recommended pattern for optimal performance, which I believe will require a deeper understanding of how buffering plays an important role in the streaming of blob data. All this is to ultimately get to a point where we can scale the solution to a 200 MB/sec insert throughput and a 100 MB/sec read throughput. I'm expecting numerous concurrent connections, so it must handle those too. The solution must be able to maintain these throughputs even when there are hundreds of millions of records.

Ok, there is more to be said about what I want the solution to achieve, but for now let's just remain focused on the sample application.

Planning

I highly recommend that you read the whitepaper “Remote BLOB Storage”. It is 56 pages so you may be interested in first reading this 9-page review on it “Review & musings of whitepaper “Remote BLOB Storage””. The whitepaper is very informative, and reasonably recent enough to be trustworthy.

I referred to the following sample code “How To: Write a BLOB (Remote BLOB Store)” and “How To: Read a BLOB (Remote BLOB Store)”.

I chose a Windows Form project. This helps me visually understand concepts, and I can easily provide input parameters.

Nothing too intellectually challenging

On the first iteration, the writing and reading of the blobs uses basic logic and constructs. As mentioned in the objective/context section, I will enhance the solution, so you can expect additional fields to appear on the form as the application gets more advanced, such as:

    • Ability to read and/or write multiple files
    • Ability to have multiple connections simultaneously work through the multiple files
    • Ability to evaluate various file sizes, for instance should we be writing 4MB files
    • Ability to evaluate various buffer sizes when writing files, for instance should we be writing 4MB files in 1MB buffer sizes
      • I must remember to investigate the value of implementing Elastic Buffering

NOTE: RBS is managed code; therefore, if you are writing unmanaged C++ code, you will have to access the RBS API using managed C++.

Writing the code

At the beginning of this article, I referred you to another blog on this topic. There you will have learned that you have to create the application's database and the BLOB store database prior to running RBS.msi. Before you can run this code, you have to do the following (a hedged T-SQL sketch follows the list):

  1. Enable the database to use RBS (mssqlrbs.rbs_sp_enable_rbs)
  2. Create the table that holds the BlobId column
    1. The column must be varbinary(64)
    2. You must index this column
    3. I suggest that you add other indexes that align with the type of queries you would run to get the BlobID. For instance, if you collect Blobs based on date then index that column.
  3. Register the column of this table (mssqlrbs.rbs_sp_register_column)
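
Here is a rough T-SQL sketch of steps 1 and 3. The stored procedure names come from the RBS install, but the parameters shown are assumptions from memory, so verify them against the RBS documentation:

-- Step 1: enable RBS in the application's database (run in that database)
EXEC mssqlrbs.rbs_sp_enable_rbs
GO

-- Step 3: after creating the table below, register the BlobId column
-- (the parameter list here is an assumption; check the RBS documentation for the exact signature)
EXEC mssqlrbs.rbs_sp_register_column 'BlobUserTable', 'BlobId'
GO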

These are the database objects I created. They will do for now, but I caution that more attention should be paid to them in a real solution.

CREATE TABLE [dbo].[BlobUserTable]
(
    [id]        uniqueidentifier NOT NULL,
    [FileName]  varchar(50)      NOT NULL,
    [BlobId]    varbinary(64)    NOT NULL
)

CREATE UNIQUE NONCLUSTERED INDEX [IX_BlobUserTable_BlobId]
    ON [dbo].[BlobUserTable] ([BlobId] ASC)

In order to use the sample you will require the following references in the C# project:

using Microsoft.Data.SqlRemoteBlobs;
using System.Transactions;
using System.Globalization;
using System.Data.SqlClient;
using System.IO;
using System.Text;    // needed for Encoding in the read sample below

There will be several cases where I need to connect to the database:

private string GetSQLConnString()
 {
     string sSqlServer = txtSQLInst.Text;
     string sDatabase = txtDBName.Text;

     string sConnection = string.Format(CultureInfo.InvariantCulture,
      "Data Source={0};Initial Catalog={1};Integrated Security=SSPI",
      sSqlServer, sDatabase);

     return sConnection;
 }

For the list box I made a call to the database to return all the values from the “filename” column. The design could be better, specifically in that the list should include a Primary Key value (hidden) which can be used when selecting which Blob to read.

private void PopulateListBox()
{
    //Get Connection String
    string sConnection = GetSQLConnString().ToString();

    //Prepare the query string
    string sQuery = "SELECT [FileName] AS [BlobName] FROM dbo.BlobUserTable";

    //Call SP to return list of Filenames and fill data set
    using (SqlConnection connection = new SqlConnection(sConnection))
    {
     SqlCommand SQLCmd = new SqlCommand(sQuery, connection);

     connection.Open();

     SqlDataReader reader = SQLCmd.ExecuteReader();

     //Empty the list box
     lbxBlobInDB.Items.Clear();

     //Add them to the list
     while (reader.Read())
     {
         lbxBlobInDB.Items.Add(String.Format("{0}", reader[0]));
     }

     //Close when done reading
     reader.Close();

     connection.Close();
    }
}

This is called as soon as the form launches

public Form1()
{
    InitializeComponent();

    PopulateListBox();
}

You can type the path to the location of the file, and I added the ability to browse using the OpenFileDialog object to select a file. I've seen it fail with permission issues when I try to select from certain directories; therefore, use files from an accessible location. I'm not running down that rat-hole for this sample.

private void lblBrowse_Click(object sender, EventArgs e)
{
    OpenFileDialog openFileDialog1 = new OpenFileDialog();

    openFileDialog1.InitialDirectory = "c:\\";
    openFileDialog1.Filter = "txt files (*.txt)|*.txt|All files (*.*)|*.*";
    openFileDialog1.FilterIndex = 2;
    openFileDialog1.RestoreDirectory = true;

    if (openFileDialog1.ShowDialog() == DialogResult.OK)
    {
     try
     {
         txtFilePath.Text = openFileDialog1.FileName.ToString();
         txtFileNameInDB.Text = openFileDialog1.SafeFileName.ToString();
     }
     catch (Exception ex)
     {
         MessageBox.Show("Error: Could not read file from disk. Original error: "
                         + ex.Message);
     }
    }
}

There is an opportunity for the user to create a more identifiable filename for when the Blob is stored in the database. The default is derived from the file name.

When you click WriteBlob, it will fetch the file that was selected and write it to the SQL Server database, and of course send it to the RBS Filestream data store. For now I simply confirmed that there were no errors and that the application's database had a new record. Later on, I want to confirm that the small files are stored in the tables and the large files in the filesystem. There should be no reason that this is not happening.
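
A quick query like the following (a minimal sketch) is enough to confirm the new record and that a BlobId value was written:

-- Confirm the write produced a row and a populated BlobId
SELECT TOP (10) id, [FileName], DATALENGTH(BlobId) AS BlobIdBytes
FROM dbo.BlobUserTable
ORDER BY [FileName]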

private void btnWriteBLOB_Click(object sender, EventArgs e)
{
    string sFilePath = txtFilePath.Text;
    string sFileNameInDB = txtFileNameInDB.Text;
    string sConnection = GetSQLConnString().ToString();
    byte[] blobId = WriteBlobPush(sFilePath, sFileNameInDB, sConnection);
}

private byte[] WriteBlobPush(string sFilePath, string sFileNameInDB, string sConn)
{
    byte[] blobId = null;

    //Open the connection to the database, Create and write the BLOB.
   using(TransactionScope ts = new TransactionScope(TransactionScopeOption.Required
        , TimeSpan.FromMinutes(10)))
   {
    using (SqlConnection conn = new SqlConnection(sConn))
    {
        conn.Open();

        SqlRemoteBlobContext blobContext = new SqlRemoteBlobContext(conn);
        using (SqlRemoteBlob blob = blobContext.CreateNewBlob())
        {
            using (FileStream fileToRead = new FileStream(sFilePath, FileMode.Open))
            {
                blob.WriteFromStream(fileToRead);
            }
            ts.Complete();

            //Update the application user table with the
            //returned BLOB id and the filename.
         using (SqlCommand cmd = new SqlCommand(string.Format(CultureInfo.InvariantCulture,
           "INSERT INTO BlobUserTable(id, FileName, BlobId) Values('{0}', '{1}', @blobId)",
                Guid.NewGuid(), sFileNameInDB)))
            {
                //Capture the BLOB id so it is also returned to the caller
                blobId = blob.GetBlobId();
                cmd.Parameters.Add(new SqlParameter("@blobId", blobId));
                cmd.Connection = conn;
                cmd.ExecuteNonQuery();
            }
        }

        conn.Close();
    }

    lbxBlobInDB.Items.Add(sFileNameInDB);
  }
    return blobId;
}

You should see the listbox get one more record.

You can select an item from the listbox and then click "Read Blob"; you should see the content of your file appear in the box. Be cautious, because the code fixes the read length at 256 bytes, which will be handled in my future enhancements. Additionally, pay careful attention to the following scenario (I'm still determining whether it actually happens): if you store the filename in the database such that it contains the path to the file, and that path doesn't exist when the function refers to it, an error will be generated.

private void ReadBlobPull(string sFileNameInDB, string sConn)
 {
     using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required
            , TimeSpan.FromMinutes(10)))
     {
         using (SqlConnection connection = new SqlConnection(sConn))
         {
           connection.Open();

           byte[] blobId;

           using (SqlCommand cmd = new SqlCommand(string.Format(CultureInfo.InvariantCulture,
               "SELECT BlobId FROM BlobUserTable WHERE FileName = '{0}'", sFileNameInDB)))
           {
               cmd.Connection = connection;
               blobId = (byte[])cmd.ExecuteScalar();
           }

           //Read the BLOB directly from the store. The FileStream below is a placeholder
           //for streaming the BLOB to disk in a later iteration; nothing is written to it yet.
           SqlRemoteBlobContext blobContext = new SqlRemoteBlobContext(connection);
           using (FileStream readStream = new FileStream(sFileNameInDB, FileMode.Create))
           {
               using (SqlRemoteBlob readBlob = blobContext.OpenBlob(blobId))
               {
                   byte[] content = new byte[256];
                   readBlob.Read(content, 0, content.Length);
                   txtCurrentBlob.Text = Encoding.Default.GetString(content);
               }
               lblCurrentBlob.Text = sFileNameInDB;
           }
       }
   }
 }

In summary, the experience ensured that I could focus on the fundamentals, learn a few nuances, and prepare an environment where the next step allows more attention to be spent on optimizing RBS-specific access patterns. Most importantly this was fun, and I can't wait to start working on the buffer logic.

Review & musings of whitepaper “Remote BLOB Storage”

This is a personal reflection on the 56-page whitepaper “Remote BLOB Storage”.

Right off the bat, I should let you know that when it comes to RBS I’m always looking for content that is helpful for implementations that don’t relate to Sharepoint.

This paper promised to appeal to people like me. I believe it did a good job in meeting that objective.

The leading section, “The Case for RBS”, is OK, and one could nitpick sentences like “If a given SQL Server database would grow to 500GB without RBS enabled, then RBS would be a beneficial option”, but that's not worth getting upset about; we simply assume they mean that most of the data might qualify as RBS candidates. You'll often hear about the “Backup & Restore” benefits that RBS offers, as the whitepaper “SQL Server RBS Performance with SharePoint Server 2010 and StorSimple Storage Solution” states; however, it's very important to understand that no matter where you move the data to, it will inevitably need to get backed up. So statements like “Backup is now under 1 min, whereas previously it was 30 min” are misleading (not stated here, but to some degree inferred).

I really enjoy the section “When Storage Tiers Need to be Implemented”, as I believe it can be useful. What is not said is that if data qualifies to be on different storage tiers based on age, then at some point it needs to be moved. Unfortunately I have not seen any BLOB storage feature (filestream, RBS, or FileTable) that can leverage set-based mass data movement; they inevitably all require row-by-row mechanisms. Nonetheless, RBS definitely makes tiering data storage far easier than the other BLOB storage options.

Now we get to the primary reason I was reading (this time round) the whitepaper, namely to get advanced knowledge of the installation steps, and write my first sample.

I initially didn't pay attention to the key word “LOCAL”. This will impact your understanding, and interestingly it made me realize a potential design. It's possible to install the RBS Filestream provider in the same database where the application's data resides, essentially just creating a filegroup. WOW! That means one could deploy an RBS Filestream provider solution without requiring additional databases… interesting.

The document then describes the different components that get installed. In short, you will have to install some components at the application servers, others at the meta-data server, and others at the BLOB store server.

This is how I think it is aligned:

  • RBS Library (Client) and Filestream Provider (Client) –> Application Server
  • RBS Server Component –> Database Server hosting the Application’s data
  • Filestream Provider (Server) –> Database Server hosting the BLOB store
  • Maintainer –> I could be wrong, but I think it’s best to find this on the Database Server hosting the application’s data

NOTE: I installed everything on a single laptop, and used the REMOTE configuration to have the BLOBs stored in a dedicated database. I plan to use multiple servers in the new environment, and will likely update this review then (or create a separate blog).

Throughout the document there are references made that I thought could create confusion, so here’s my take on the synonymous references:

  • Application’s database = Content database; Meta-data Database; RBS Enabled database
  • BLOB Store = Location where the BLOB data can be found, in our filestream case it’s the filestream enabled database.

I found the detail about the maintainer, as it pertains to the pool sizes and such, presented a little too early in the document. A pool refers to the number of records the maintainer will select per orphan cleanup iteration. I was perplexed, and still don't know why a data table is created per pool; however, I know that you must not confuse this with the concept that manages how many objects are stored in an NTFS directory.

NOTE: NTFS will allow billions of documents to be stored on a volume, and within a directory. Customers and product teams have recognized that too many files reduce the performance of traversing them, and a general guideline of ~300,000 objects (files) per directory is offered up. I'll refer to this later in this blog.

The document goes on to describe HA options, which are fine but keep in mind that Denali offers Database Mirroring support and integration with the AlwaysON feature. Very Cool!!!

The process flow catches me every time. In simple terms I would conclude that an application makes a call to the RBS API to retrieve the BLOB, and several activities occur behind the scenes to return the BLOB. I’m focusing on the installation concepts, so I browse over this for now.

NOTE: I later learned quite a bit about this step, so expect a blog on this at a later date.

What’s important to note is that the application will make a call to the Application’s Database to first retrieve the BlobID, and then when reading or writing a call is directed to the Blob Store database.

BLOB deleting is interesting, though it doesn’t impact installation considerations. It will impact data administration.

Aaahhhh…..now we find those “Do before you run RBS setup” steps

  • Enable Filestream for the SQL Server Instance
  • Create the databases… yes you must create the RBS Blob Store database too
  • Create filegroups and files
    • You can define the max size limit for the filestream directories
    • Don’t get confused about the filestream references found in the DB Options. These are related to FileTable
  • Create a Master Key for each of the databases (a minimal sketch follows this list)
    • I haven't quite worked out when and if I will be using a certificate in a trusted domain where all SQL Server instances will be connecting through trusted connections. Additionally, there doesn't appear to be any reason for cross-database T-SQL queries. Nonetheless, RBS requires a master key, otherwise it will not install
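
A minimal sketch of the master key step; the password is a placeholder, and it should be run in both the application database and the BLOB store database:

-- Create a database master key if one does not already exist (placeholder password)
IF NOT EXISTS (SELECT 1 FROM sys.symmetric_keys WHERE name = '##MS_DatabaseMasterKey##')
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPasswordHere!1>'
GO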

NOTE: I imagine that at some point, a naming convention and more thought is required about the configuration, but I’m going to leave the crossing of that bridge for another day.

Ok, that's it. Now where do we get the RBS setup file? Historically it was found in a separate download, namely the feature pack. Now you can find the rbs.msi file in the install directory; for the English edition it is:

%Installdirectory%\SQLFULL_x64_ENU\1033_ENU_LP\x64\redist\RemoteBlobStore

The document makes a few assumptions, so while I click “Next” most of the time, I’m very thoughtful about filegroup destinations. Default will imply “Primary” for the internal tables, and your first filestream filegroup for the BLOB data.

For the initial, “I simply want to install” approach, the document covers more than I need, namely all the command-line material. With that said, it is insightful to work through each option, as it is easy to relate to the GUI and certainly improves understanding. At some point you realize that you can have multiple provider instances pointed at the same database… WOW!!… that would make for a complex design. I would probably avoid it unless there was a very compelling reason to combine multiple filestream providers into a single database.

The advanced configuration settings are great, and one of the reasons that make RBS a viable alternative to a standard filestream implementation. I mentioned earlier that the general guideline is to not have more than ~300,000 files in a single directory.

NOTE: I was looking for a Windows Server 2008 NTFS filesystem limit on the number of files per directory, but couldn't find any. So the guideline is from a Windows Server 2003 article, “How NTFS Works”.

The “Pool Capacity” setting is where RBS manages this in the background; the default is 200,000. What the document fails to reveal is that this setting will not directly correlate to the number of files found in the directory. In some cases there could be many more files, because update and delete modifications create a scenario where there are more files than the tables would indicate: an update in filestream creates a new file, and a delete is a deferred operation. After backups and garbage collection have taken place, the number of files in the directory will come down.

It's great to see the RBS service handle the in-band and out-of-band concepts. There is an optimal size at which objects should be written using the Win32 API, and this occurs around 1.2 MB. Below that size, the TDS protocol is more efficient for reading and writing BLOBs. The objects are still stored in the filesystem, but access to them is handled differently. This is configurable and occurs transparently to the application.

Another fantastic configuration option is the inline BLOB store. This makes sure that those small files that debilitate filesystem performance are stored directly in the database, without the application needing to worry about where they are being stored. The paper doesn't expand on where they are stored other than “in the mdf file”, so we can only assume they go in the row-data filegroup defined during setup. If these files account for large amounts of data, then backup and restore may become unwieldy; of course, at the recommended value of 61 KB you would need more than a billion small files before worrying about that.

The write allocation size is an interesting concept, which the document fails to expand upon; this is an unfortunate gap. I would like to know more about its value proposition. Maybe there isn't one, because any update creates a new file, so pre-allocating has no gain and may in fact cause degradation. But is there a performance gain when creating a file of 1 MB and then streaming the bytes in? I don't know, and hope to find out going forward.

I didn't find the maintainer, XML configuration files, multiple-instance deployments, and administrative views particularly useful at this stage, but took a mental note that this is a place to come back to in the future. Backup and restore is discussed; it's really pretty simple, but dangerous. You stop the maintainer. You back up the application's database first, and then you back up the blob store. When you restore, the maintainer must not be running until everything is restored. The diagrams say to restore the blob store first, and that you can then start the restore of the content database, but that simply amuses me, as the content database will (typically) complete within minutes. So I am currently of the mind that you can restore the content database whenever you want, but you will not have access to the blobs until the blob store is restored.

It's my bet that the following is the best restore approach (a rough T-SQL sketch follows the list), though don't be surprised if I get shot down by those more knowledgeable than I am:

    1. Restore Primary Filegroup of Blob store as a Piecemeal Restore
    2. Restore RowData Filegroups of Blob store as a Piecemeal Restore
    3. Make the blob store database available to be accessed
    4. Restore the Content (Application’s) database
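
A rough T-SQL sketch of that sequence, with placeholder database names, backup paths, and filegroup names:

-- Steps 1 & 2: piecemeal restore of the blob store, primary filegroup first
RESTORE DATABASE [BlobStoreDB] FILEGROUP = 'PRIMARY'
    FROM DISK = 'X:\Backups\BlobStoreDB.bak'
    WITH PARTIAL, NORECOVERY

RESTORE DATABASE [BlobStoreDB] FILEGROUP = 'RowDataFG'
    FROM DISK = 'X:\Backups\BlobStoreDB.bak'
    WITH NORECOVERY

-- Step 3: bring the blob store online
RESTORE DATABASE [BlobStoreDB] WITH RECOVERY

-- Step 4: restore the content (application's) database
RESTORE DATABASE [ApplicationDB]
    FROM DISK = 'X:\Backups\ApplicationDB.bak'
    WITH RECOVERY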

I would also be interested to evaluate whether the blob store database can have its filegroups turned read-only. Right now I don't think we can make new records go to a new filegroup, but I do know that we can make new records go to a new blob store. As soon as a blob store is no longer being populated or modified, maybe we can make the database read-only (that would be awesome) or at least make the filestream filegroup read-only.

The next section in the document is all about SharePoint. It's worth a read as we think about what the custom application will be capable of doing, but for the most part I find the SharePoint application imposes interesting restrictions.

So you think you can use RBS now… unfortunately not.

Before you can use RBS, you have to do additional steps. These are described in “How To: Install (Remote BLOB Store)”:

  1. Enable the Application’s database for RBS by running SP mssqlrbs.rbs_sp_enable_rbs
  2. Create a user table and enable the column running SP rbs_sp_register_column
    1. Remember to index the column that is being registered

NOTE: Unfortunately the document doesn't provide recommendations or guidelines as they pertain to indexes or other considerations. Therefore I know that I must investigate what the experience is when you have billions of rows.

The document misses the above, yet it goes into detail about the GAC and configuration files. I struggled to get the references working in my custom application sample, and I can't really comment on how much of the GAC and configuration-file recommendations actually assisted in resolving the issue. In the end I had to add the Microsoft.Data.SqlRemoteBlobs reference by browsing for the DLL.

I appreciated the insights into the various approaches to reading and writing the blobs. I would've liked the content to include transaction concepts and guidelines. There is an example that demonstrates writing blobs using a buffer; however, there are no insights about buffer sizes or what one should consider when developing a solution and trying to establish the appropriate buffer size.

I definitely don’t want to enlist a Distributed transaction, and therefore need to know why the transaction scope is required.

I've created a basic sample based on the articles “How To: Write a BLOB (Remote BLOB Store)” and “How To: Read a BLOB (Remote BLOB Store)”. I can't wait to create samples using the other techniques to write and read blob data.

The document did a short blurb on creating a provider, not really an interest area for me right now, so it didn’t get much attention.

In summary, it was a good paper, with opportunities for more information and some areas that may qualify as redundant. Most importantly, and what I would consider the reason for a HIGH VALUE rating from me, is that it refrained from being SharePoint-oriented, which is what has made other documents mostly useless to me.


Optimize Recursive CTE Query

(originally posted on blogs.msdn.com/b/sqlcat – http://bit.ly/1ut2tlu)


Reviewers: Lubor Kollar; Conor Cunningham; Steve Howard; Kun Cheng; James Podgorski; Jimmy May; Joseph Sack; Welly Lee; Neeraj Joshi; Keith Bauer

Introduction

We recently assisted a global ISV to address a performance issue related to a poorly performing recursive CTE (Common Table Expression). The ISV wanted the query, which was running in excess of 3 minutes, to run in less than 15 seconds on their servers. The end result of our efforts was a 3,600% performance improvement.

Note: Validation for this post was performed in the SQL CAT Customer Lab on an HP Proliant DL380 G7, Westmere-EP X5680 3.33 GHz, 2 sockets with 6 physical cores (12 logical cores) each, for a total of 12 physical cores and 24 logical cores; 144 GB RAM.

A CTE (http://msdn.microsoft.com/en-us/library/ms186243.aspx), amongst other things, is a very effective single statement for handling hierarchical data, and more specifically, parent-child relationships. A CTE is a reasonable choice when the intention is to iterate through the data, particularly in cases where the schema cannot be changed but recursive queries are required. If you can design a different schema layout, there are often much faster ways to execute queries over hierarchies. CTEs are also a natural choice when converting a CONNECT BY statement from alternative RDBMS platforms (http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/queries003.htm).

There is a scenario where using a CTE construct is significantly less efficient than the traditional approach to traversing the Parent-Child construct. This blog will describe how and when a WHILE loop performs better than the CTE. The characteristics that can contribute to CTEs being a less than optimal choice are:

  • Non-Unique Parent/Child Key
  • Complex Predicates

In our scenario the ISV was attempting to extract all the items that adhered to a certain set of characteristics. An item could exist on its own or be a component (child) within a larger item (parent) and there were no constraints to the number of parents a child could have. For this scenario, the query was recursively traversing the hierarchy in a non-unique fashion.

Below is the original CTE code:

WITH CTE(ItemID, StartPt, StartID, EndPt, EndID, level)
AS
(    -- Anchor member definition
SELECT anc.ItemID, anc.StartPt, anc.StartID, anc.EndPt, anc.EndID,0
FROM dbo.CTEIssue AS anc
WHERE anc.StartPt = -1512034855
AND anc.StartID = 1809069910
AND anc.Category IN(-379491138,-379518300,-379520626,472917509,-379494005,-379506005,-2134852210,-379515627,-379515134,-379484415)
UNION ALL
-- Recursive member definition
SELECT chi.ItemID, chi.StartPt, chi.StartID, chi.EndPt, chi.EndID, par.level+1
FROM
dbo.CTEIssue AS chi
INNER JOIN
CTE AS par ON(par.EndPt = chi.StartPt AND par.EndID = chi.StartID)
WHERE
chi.Category IN(-379491138,-379518300,-379520626,472917509,-379494005,-379506005,-2134852210,-379515627,-379515134,-379484415)

)
SELECT DISTINCT cti.*
FROM
dbo.CTEIssue AS cti
INNER JOIN
CTE AS cte ON(cti.ItemID = cte.ItemID)
WHERE (cti.EndPt <> -1512034855 OR cti.Category = -379484415)
GO

Identifying the issue

Firstly, we understand that there are cases where an inefficient query is acceptable, provided that it meets the intended objectives. It only becomes an issue when the intended objectives are not attained. In this case, the inefficient CTE met the expected response times for small data sets or simple predicates, and only became an issue once the data set became large and the throughput increased. Another concept to appreciate is that there are cases where an operation that is slow for small data sets is significantly faster when dealing with larger data sets. For example, nested loop joins are great for small sets of data but are typically no match for hash joins when the data is large.

It's relatively easy to recognize that using CTEs is not the most efficient approach when the statement immediately following the CTE expression relies on the DISTINCT of all the columns. Essentially this implies that the recursive logic doesn't have a primary key in the anchor, thereby allowing the recursive members to produce non-unique rows. This creates a scenario where a huge number of duplicate rows is generated.

Another way to recognize that a CTE is inefficient is to compare the estimated query plan statistics to the actual statistics. The below table highlights the difference between the query plan Estimates versus Actuals.
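
The estimate-versus-actual numbers shown in the table below can be captured with the showplan statistics output; a minimal sketch:

-- Returns one row per plan operator with Rows, Executes, EstimateRows and EstimateExecutions
SET STATISTICS PROFILE ON
GO

-- ...run the recursive CTE query shown above here...

GO
SET STATISTICS PROFILE OFF
GO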

Estimate Rows Estimate Executions Rows Executes TruncatedOperatorText (First 60 characters)
183.53 NULL 3 1 WITH CTE(ItemID, StartPt, StartID, EndPt, EndID, level) AS
35.22541 1 3 1 |–Nested Loops(Inner Join, OUTER REFERENCES:([Recr1014]
35.22541 1 299197 1 |–Hash Match(Aggregate, HASH:([Recr1014]), RESIDUAL:
1240.829 1 4129317 1 | |–Index Spool(WITH STACK)
1240.829 1 4129317 1 | |–Concatenation
1 1241.829 0 0 | |–Compute Scalar(DEFINE:([Expr1021]=(
1237.773 1 0 0 | | |–Compute Scalar(DEFINE:([Expr10
1237.773 1 1723 1 | | |–Nested Loops(Inner Join,
1237.773 1 1723 1 | | |–Index Seek(OBJECT:([
1 1237.773 1723 1723 | | |–Clustered Index Seek
1.002469 1241.829 4127594 1 | |–Assert(WHERE:(CASE WHEN [Expr1023]>
1.002469 1241.829 4127594 1 | |–Nested Loops(Inner Join, OUTER
1 1241.829 0 0 | |–Compute Scalar(DEFINE:([E
1 1241.829 4129317 1 | | |–Table Spool(WITH STA
3.055948 1240.829 0 0 | |–Compute Scalar(DEFINE:([E
3.055948 1240.829 4127594 4129317 | |–Nested Loops(Inner J
3.055948 1240.829 4127594 4129317 | |–Index Seek(OBJE
1 3791.91 4127594 4127594 | |–Clustered Index
1 35.22541 3 299197 |–Clustered Index Seek(OBJECT:([MCC2].[dbo].[CTEIssu

As you can see, in some cases the estimated rows versus actual rows were significantly skewed.

Improving the performance

The objective of that query was to identify unique rows and remove them from the recursive iteration. If this can be done in the CTE, that would be ideal; otherwise, a more performant alternative is to convert the query into a WHILE loop. The CTE is far more efficient than a WHILE loop if there is a unique parent/child key. However, let me emphasize that when you have non-unique parent/child keys, the estimations used to create the query plan are likely to be significantly skewed.

Below is the code that meets the objective of being more restrictive during each iteration. It is followed by an explanation of the reasoning behind the construction of the query, and why it performs better:

SET NOCOUNT ON

DECLARE @StartPt int = -1512034855;
DECLARE @StartID int = 1809069910;
DECLARE @Counter tinyint = 0;
DECLARE @MaxRecursion tinyint = 50;

DECLARE @CategoryList table(Category int NOT NULL)

DECLARE @PtIDPair table
(
    Pt        int        NOT NULL 
    ,ID        int        NOT NULL
    ,lvl    tinyint    NOT NULL
    ,PRIMARY KEY CLUSTERED (lvl, Pt, ID)
)

CREATE TABLE #Item
    (
      ItemID     int NOT NULL
      ,Category     int NOT NULL
      ,StartPt int NOT NULL
      ,StartID     int NOT NULL
      ,EndPt     int NOT NULL
      ,EndID     int NOT NULL
      ,lvl         tinyint NOT NULL
    )

CREATE CLUSTERED INDEX CDX_#Item_Lvl ON #Item ([lvl])
CREATE NONCLUSTERED INDEX NDX_#Item_ItemID ON #Item (ItemID)

INSERT INTO @CategoryList 
VALUES
    (-379491138)
    ,(-379518300)
    ,(-379520626)
    ,(472917509)
    ,(-379494005)
    ,(-379506005)
    ,(-2134852210)
    ,(-379515627)
    ,(-379515134)
    ,(-379484415);

--Gather the anchors
-- There are a set of ItemID values, which happens to contain a common
-- Pt/Id combination.  These are level 0 (Anchors)
INSERT INTO #Item
    (ItemID, Category, StartPt, StartID, EndPt, EndID, lvl)
SELECT
    ItemID, cti.Category, StartPt, StartID, EndPt, EndID, @Counter
FROM    
    dbo.CTEIssue AS cti
    INNER JOIN
    @CategoryList AS lst ON(lst.Category = cti.Category)
WHERE
    StartPt = @StartPt
    AND StartID = @StartID

-- We want to remove the duplicates, from the final result set, so we do our filtering to create a unique set of anchor pairs
INSERT INTO @PtIDPair(Pt, ID, lvl) 
SELECT EndPt, EndID, @Counter
FROM #Item
WHERE lvl = @Counter
GROUP BY EndPt, EndID

WHILE @@ROWCOUNT > 0
 BEGIN
    SET @Counter += 1
    
    IF @Counter = @MaxRecursion
     BEGIN
        RAISERROR('Max Recursion reached', 16, 1)
        BREAK
     END
    
    INSERT INTO #Item(ItemID, Category, StartPt, StartID, EndPt, EndID, lvl)
    SELECT    cti.ItemID, cti.Category, cti.StartPt, cti.StartID, cti.EndPt, cti.EndID, @Counter
    FROM    
        dbo.CTEIssue AS cti
        INNER JOIN
        @CategoryList AS lst ON(lst.Category = cti.Category)
        INNER JOIN
        @PtIDPair AS pr ON(cti.StartPt = pr.Pt AND cti.StartID = pr.ID)
        LEFT OUTER JOIN
        #Item AS itm ON(itm.ItemID = cti.ItemID)
    WHERE
        itm.ItemID IS NULL
        AND pr.lvl = @Counter - 1

    INSERT INTO @PtIDPair(Pt, ID, lvl) 
    SELECT itm.EndPt, itm.EndID, @Counter
    FROM 
        #Item AS itm
        LEFT OUTER JOIN
        @PtIDPair AS pr ON(itm.EndPt = pr.Pt AND itm.EndID = pr.ID)
    WHERE
        pr.ID IS NULL
        AND itm.lvl = @Counter
    GROUP BY
        itm.EndPt, itm.EndID
 END

SELECT
    cti.*
FROM 
    dbo.CTEIssue AS cti
    INNER JOIN
    #Item AS itm ON(cti.ItemID = itm.ItemID)
WHERE 
    cti.EndPt <> -1512034855
    OR cti.Category = -379484415
GO

Side Note: Our investigations revealed that in this case table variables significantly outperformed local temp tables.

The first step was to declare variables and create a temporary table to store the equivalent of the CTE result set. A table variable was used to store the unique JOIN-clause rows generated during the iterations. Just before entering the WHILE…END loop, we created the first set of records (a.k.a. the anchor member). In the WHILE…END loop we found all the children based on the JOIN criteria, further filtered by the predicates, and inserted them into the list of unique records. The next iteration would not collect the same children again. This was the most important distinction.

Improving the CTE Query Plan

Compared to the CTE, the WHILE loop is a fair amount of coding. Prior to re-writing the CTE as a WHILE loop, we attempted to optimize the query plan itself, within the constraint of not being able to generate unique values in the join. We modified the CTE to use a table variable instead of an IN clause, created indexes, and updated the statistics, and achieved a 50% performance improvement; unfortunately, and more importantly, this missed the objective of improving by at least a factor of 12.

Below are the STATISTICS IO and TIME outputs demonstrating the improvement from adding the table variable and indexes:

Original

(3 row(s) affected)

Table ‘CTEIssue’. Scan count 41,592,377, logical reads 137,303,190, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table ‘Worktable’. Scan count 2, logical reads 24,576,797, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 209197ms, elapsed time = 217344ms.

With table variable

(3 row(s) affected)

Table ‘CTEIssue’. Scan count 4,428,524, logical reads 41,930,102, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table ‘Worktable’. Scan count 2, logical reads 24,575,641, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table ‘#6EF57B66’. Scan count 1, logical reads 19,007,602, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 143646ms, elapsed time = 151268ms.

Side Note: Table variables do not maintain statistics and are generally not recommended when there are cost-based plan choices that can result in significant performance variability; one would typically leverage a local temporary table in that case. We used a table variable here because the ISV application was creating an IN clause that could reach hundreds of values, and a table variable was more robust, and in some cases more efficient.

When we compared the query plans of the CTE and the WHILE loop, we realized that each iteration of the WHILE loop has the luxury of a different query plan, unlike the CTE, which executed against a single query plan.

A significant contributor to the long-running nature of the CTE query is the portion of the query plan that estimates fewer than 10 rows on the outer and inner inputs, but ends up iterating through millions of rows on both sides. For a table that only has a few hundred thousand rows, this is a function of the lack of unique join criteria.

[Query plan excerpt: estimated versus actual row counts]

In Conclusion…

The Common Table Expression (CTE) is a powerful syntax in its ability to address the historically challenging objective of recursive queries in T-SQL. It is a natural and often appropriate choice when migrating from the CONNECT BY syntax. Recursive CTE queries do, however, rely on unique parent/child keys in order to get the best performance. If this is not possible to achieve, then a WHILE loop is potentially a much more efficient approach to handling the recursive query.

Our final result brought this frequently run query from 144 seconds down to 4 seconds, a 3,600% performance improvement!!!


FILESTREAM Design and Implementation Considerations

Summary: Much of the data used today is unstructured, including text documents, images, and videos. This unstructured data is typically stored outside a relational database and separate from structured data. This separation can make data management much more complex, and if the data is associated with structured storage, the separation can limit performance and file streaming capabilities.

Microsoft SQL Server 2008 includes an enhancement to data storage called FILESTREAM, which lets you store unstructured binary large object (BLOB) data directly in the file system. With FILESTREAM you can take advantage of the Win32 rich set of streaming application programming interfaces (APIs) for better streaming performance. FILESTREAM also provides transactional consistency so structured and unstructured data are always in synch; additionally, you can use Transact-SQL statements to insert, update, query, search, and back up FILESTREAM data.
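
As a minimal illustration (not taken from the paper), a FILESTREAM-enabled table looks roughly like this; it assumes FILESTREAM is enabled on the instance and the database already has a FILESTREAM filegroup:

-- Hypothetical table with a FILESTREAM column; requires a ROWGUIDCOL uniqueidentifier column
CREATE TABLE dbo.Documents
(
    DocId    uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    DocName  nvarchar(260)    NOT NULL,
    DocData  varbinary(max)   FILESTREAM NULL
)
GO

-- Regular T-SQL works against the FILESTREAM column like any other varbinary(max) column
INSERT INTO dbo.Documents (DocName, DocData)
VALUES (N'readme.txt', CAST('Hello FILESTREAM' AS varbinary(max)))
GO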

This white paper is a companion to the information about FILESTREAM found on TechNet. This paper delves deeply into selected topics that should be considered when implementing a solution that uses FILESTREAM, including design considerations, maintenance, and management of a FILESTREAM environment.

To continue reading, please download the whitepaper: Download the paper


Retailer Improves Data Management to Reduce Costs and Safeguard Customer Goodwill

(originally posted at microsoft.com/canada/casestudies – http://bit.ly/11hVm5c)

London Drugs operates a chain of retail locations in Canada. One of the company’s most important lines of business is the PhotoLab, an online service for storing, sharing, and ordering prints of digital photographs. To reduce costs and improve PhotoLab users’ experience, London Drugs deployed a data management solution based on the Microsoft Application Platform. With the solution, the company has drastically reduced backup time and associated costs, simplified management, and improved the functioning of an important, customer-facing system.

Mission-Critical Systems

The volume of data in the London Drugs PhotoLab system, already about 30 terabytes (TB), grows by as much as 30 gigabytes (GB) a day. To comply with a goal of never deleting customer photos, London Drugs needs a highly available and reliable data solution that is simple to administer, back up, and restore.

“Relying on the Microsoft Application Platform gives us confidence that our most important systems will perform exactly as we need them to, when we need them to.” – Samantha Kinoshita, IT Manager for Infrastructure and Operations, London Drugs

Solution Overview

Several years ago, the London Drugs PhotoLab system was becoming difficult to manage and backing up the databases was problematic. Increases in both the popularity of PhotoLab and the size of the digital photographs that customers were uploading resulted in such a massive volume of data that backups took more than 24 hours and disk storage was growing prohibitively expensive. One solution would have been to begin enforcing an advertised time limit for image storage, but the company found this option unacceptable because its customers had grown to expect no such deletions. “In addition to revenue, PhotoLab generates a great deal of customer goodwill and loyalty toward London Drugs,” says Samantha Kinoshita, the company’s IT Manager for Infrastructure and Operations. “To start deleting files would hurt that relationship, so we decided to seek better data management tools instead.”

In 2008, London Drugs began working with Microsoft Services to address the problems. Deployment of Microsoft SQL Server 2005 data management software gave London Drugs access to improved table partitioning and file groups, which reduced the difficulty of the backup and restore process. But backups were still frustratingly large because it was impractical to store the photographs except as unstructured image data in the system’s operational database. Working with a Microsoft Services consultant, London Drugs learned of the FILESTREAM storage attribute available in a beta version of Microsoft SQL Server 2008 and recognized its value proposition for further improving storage and management of the PhotoLab data. “With the FILESTREAM attribute, we can define columns in SQL Server tables specifying that the image data is actually stored in the file system,” explains Shaun Tinline-Jones, the Microsoft Services consultant. “Because most digital photographs these days are at least 1 megabyte and are predicted to grow to 50 megabytes in the future, they are much more efficiently handled by the NTFS file system than by the database.” The solution includes the following components:

 

London Drugs relies extensively on the Microsoft Application Platform throughout its IT environment:

  • Microsoft SQL Server 2008 Enterprise with Service Pack 1. Manages data for mission-critical systems such as PhotoLab and the company’s supply chain.
  • Microsoft BizTalk Server 2004, Microsoft Message Queuing, and Terminal Services in the Windows Server 2003 operating system. Orchestrates data transfers between supply chain systems and provides employees with access to the data.
  • Windows Server 2003 R2 Enterprise x64 Edition with Service Pack 2. Powers the company’s applications, networks, and web services.
  • IBM System x3850 enterprise server computers with dual quad-core processors and 30 GB of memory in a two-node failover cluster.
  • IBM System Storage DS5300 storage area network.

Top Reasons Why London Drugs Chose Microsoft

London Drugs built its mission-critical PhotoLab offering on the Microsoft Application Platform because:

  1. It delivers the reliability, flexibility, and scalability required for such an important line of business.
  2. Microsoft frequently updates and improves its products, so the solution can more easily evolve to include new technological advances and fit changing business needs.
  3. Integrated tools and products reduce administration and development costs.
  4. Familiarity with the Microsoft environment is widespread in the work force, so costs for training and support are reduced.

Business Pain Points Addressed

London Drugs considers the PhotoLab offering not only a vital source of revenue but also a means of cultivating customers’ loyalty and willingness to transact non-photo-related business with the company. Because it would be relatively easy for an unsatisfied customer to move on to one of the many other free, online photo-sharing services, downtime or unexpected deletions in the PhotoLab system are unacceptable. As a result, London Drugs wants the technologies that underlie PhotoLab to be highly available and reliable. By relying on the Microsoft Application Platform for its PhotoLab offering and throughout its IT environment, London Drugs has been able to:

  • Improve the functioning of an important, customer-facing system.
  • Match the reliability and ease of use of competitors’ offerings.
  • Help safeguard customer data from deletion or loss.
  • Reduce the workloads of IT staff members and make them available for more productive, value-added work.

Technical Pain Points Addressed

For the PhotoLab offering to function properly, London Drugs needs data management tools that can handle large objects, back up 20–30 GB daily and 1.5–2.5 terabytes monthly, and quickly restore from backups. Customers expect their interactions with PhotoLab to be simple, fast, and glitch-free—even during peak times like holidays, when many customers use PhotoLab to order custom greeting cards. With the Microsoft Application Platform, London Drugs has been able to:

  • Process approximately 30,000 photo uploads (40–100 GB) a day.
  • Manage more than 10 million stored images, a data volume of about 30 terabytes.
  • Reduce monthly backup time from more than 24 hours to 5 hours or less.
  • Reduce backup restore time from 24 hours to 30 minutes.
  • Achieve 99.99 percent availability with no unplanned downtime.
  • Make more efficient use of disk space, reducing storage costs and extending the life of the current storage configuration.
  • Have the option of switching from a storage area network to a less expensive network-attached storage configuration.

Conclusion

By relying on the Microsoft Application Platform for its PhotoLab offering and other important IT systems, London Drugs has improved the storage and management of its data and safeguarded the company’s relationship with its customers. London Drugs now has cost-effective data management capabilities that can expand with its business and make it easy to take advantage of future technological advances.


Planning for Consolidation with Microsoft SQL Server 2000

(My role was contributor as opposed to author.  It was previously posted on technet.microsoft.com/library – http://bit.ly/1F19rmm)

SQL Server Technical Article

Author: Allan Hirt

Contributors: Tom Davidson and Shaun Tinline-Jones

Technical Reviewers: Prem Mehra, Will Sweeny, JC Armand, Shaun Tinline-Jones, Johnson Noel, Cathan Cook

Applies To: All versions of Microsoft® SQL Server™ 2000

Summary: This white paper is the first in a series of papers focused on server consolidation with Microsoft® SQL Server™ 2000. It is designed as a prescriptive planning guide for both decision-makers and technical audiences alike. This is not an implementation or administration/operations guide for consolidation efforts; those topics are the focus of the other papers in this series.

Read more at http://bit.ly/1F19rmm
