Wednesday 28 November 2012

Using the 2012 MDS Web application with Chrome

Problem

MDS menu items get hidden behind the Silverlight control on the page when trying to view entities

Issue

This happens because the z-index of the Silverlight control is not layered correctly relative to the menu items, so the control is drawn on top of them

Resolution

  1. Browse to C:\Program Files\Microsoft SQL Server\110\Master Data Services\WebApplication\Explorer\ or equivalent
  2. Update AttributeSL.aspx and ExplorerHierarchySL.aspx to have this parameter
    <param name="Windowless" value="true" />
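For context, the param goes alongside the other parameters inside the Silverlight object element in each page. The surrounding markup below is illustrative only (the source param value and element attributes will differ in the actual pages); the key line is the Windowless param, which makes the plugin render in the browser's compositing pipeline so z-index applies:

```html
<object data="data:application/x-silverlight-2," type="application/x-silverlight-2"
        width="100%" height="100%">
    <!-- existing params such as "source" stay as they are (value here is illustrative) -->
    <param name="source" value="ClientBin/ExampleApp.xap" />
    <!-- add this so the control renders windowless and respects z-index -->
    <param name="Windowless" value="true" />
</object>
```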

Wednesday 5 September 2012

Expanding date out from a specified date range

Overview

The following script returns one row for every date in a range, with extra information attached to each date.

This is an alternative way of building a date table in SQL. You could also use it in a view with the date range hard-coded.
It is also a very efficient way of expanding out the dates: the CTEs act like macros rather than temp tables, so it doesn't actually materialize millions of rows for every possible date.

Script

DROP FUNCTION [dbo].[udfExplodeDates]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:  Jeremy
-- Create date: Sep 2012
-- Description: Explodes out each day in a date range to have one row per day, with some extra info
--    Use CTEs to join onto itself to create millions of rows and then uses the row_number
--    to figure out how many days to go from the start date
-- =============================================
CREATE FUNCTION [dbo].[udfExplodeDates](@startdate datetime, @enddate datetime)
returns table as
return (
with 
 R0 as (SELECT 1 as n UNION ALL SELECT 1) -- 2 rows
,R1 as (SELECT 1 as n FROM R0 t1, R0 t2) -- 4 rows
,R2 as (SELECT 1 as n FROM R1 t1, R1 t2) -- 16 rows
,R3 as (SELECT 1 as n FROM R2 t1, R2 t2) -- 256 rows
,R4 as (SELECT 1 as n FROM R3 t1, R3 t2) -- 65536 rows
,R5 as (SELECT 1 as n FROM R4 t1, R4 t2) -- 65536*65536 rows = 4,294,967,296 rows
,nums as (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 1)) as num FROM R5)

SELECT DATEADD(day,num-1,@startdate) as the_date, 
    DATENAME(dw,DATEADD(day,num-1,@startdate)) as day_of_week,
    day(DATEADD(day,num-1,@startdate)) as day_of_month,
    EOMONTH(DATEADD(day,num-1,@startdate)) as end_day_of_month --only available in SQL2012 onward
FROM nums
WHERE num <= DATEDIFF(day,@startdate,@enddate) + 1

);

GO
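The same row-per-day logic can be sketched outside SQL. This illustrative Python version mirrors what each row of dbo.udfExplodeDates contains (EOMONTH omitted):

```python
from datetime import date, timedelta

def explode_dates(start, end):
    """One dict per day from start to end inclusive, mirroring the
    columns returned by dbo.udfExplodeDates (EOMONTH omitted)."""
    days = (end - start).days + 1          # same as DATEDIFF(day, @startdate, @enddate) + 1
    for n in range(days):
        d = start + timedelta(days=n)      # same as DATEADD(day, num - 1, @startdate)
        yield {
            "the_date": d,
            "day_of_week": d.strftime("%A"),
            "day_of_month": d.day,
        }

rows = list(explode_dates(date(2012, 9, 1), date(2012, 9, 3)))
```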

Example Output



Monday 3 September 2012

Keeping static rows in a query table matched to the correct row from the database query in Excel

The Problem

If you create a query in Excel, which creates a query table object, and then add columns to that table that are not part of the query, then when you refresh the query the static information stored in the extra columns can become mismatched with the rows returned by the query.

For example, if I have this table:

order_id | product    | comment (static column)
---------|------------|------------------------
1        | apple      |
3        | strawberry | goes well with cream

And then refresh the query...

order_id | product    | comment (static column)
---------|------------|------------------------
1        | apple      |
2        | banana     | goes well with cream
3        | strawberry |

An extra row appears in the middle, so the table now says that bananas go well with cream (when clearly that is crazy)

Workaround

To get around this problem you can use the before and after refresh events on the QueryTable object in Excel.

Things to do

  1. Set up QueryTable events for before and after refresh of your query
  2. Create a hidden sheet in your workbook where you can keep a copy of the table

Setting up the QueryTable events

Process Flow

  1. In the beforerefresh event, copy the current table to the hidden sheet
  2. Allow the data to refresh from the query
  3. Clear out the static columns in the newly refreshed data
  4. Loop through the hidden sheet table and match on a unique key to find which rows to populate your static data
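The matching in step 4 can be sketched as follows. The real implementation would be VBA against the QueryTable events and the hidden sheet, but the row-matching idea is the same; column names here come from the example table above:

```python
def reapply_static(saved_rows, refreshed_rows, key="order_id", static_cols=("comment",)):
    """After a refresh, re-match static column values to the correct rows
    by unique key instead of trusting row position."""
    saved = {row[key]: row for row in saved_rows}
    for row in refreshed_rows:
        match = saved.get(row[key])
        for col in static_cols:
            row[col] = match[col] if match else ""
    return refreshed_rows
```

With the banana example, row 2 (banana) ends up with an empty comment and row 3 (strawberry) gets its comment back, regardless of where the rows landed after the refresh.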




Monday 21 May 2012

Query all database files that have potential disk bottlenecks


Overview

Microsoft generally recommends that disk I/O latency for SQL Server should average under 20 ms for data files and under 15 ms for log files.

This query shows all database files on your SQL Server instance that exceed the recommended thresholds.

According to SQL Server Central [http://www.sqlservercentral.com/articles/ssc+clinic/74097/]

  • Less than 10 milliseconds (ms) = very good
  • Between 10-20 ms = okay
  • Between 20-50 ms = slow
  • Greater than 50-100 ms = potentially serious IO bottleneck
  • Greater than 100 ms = definite IO bottleneck
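The arithmetic the query performs, and the buckets above, can be captured in a small illustrative helper (cumulative stall time divided by operation count, as exposed by sys.dm_io_virtual_file_stats since the last restart):

```python
def avg_latency_ms(io_stall_ms, num_of_ops):
    """Average latency per operation, guarding against division by zero,
    matching io_stall_read_ms / NULLIF(num_of_reads, 0) in the query."""
    return io_stall_ms / num_of_ops if num_of_ops else 0

def classify(latency_ms):
    """Bucket a latency figure using the SQL Server Central guidance above."""
    if latency_ms < 10:
        return "very good"
    if latency_ms <= 20:
        return "okay"
    if latency_ms <= 50:
        return "slow"
    if latency_ms <= 100:
        return "potentially serious IO bottleneck"
    return "definite IO bottleneck"
```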

T-SQL Script


/*
 For SQL Versions older than 2012.. replace
sys.dm_io_virtual_file_stats(NULL, NULL)
with
sys.dm_io_virtual_file_stats(-1, -1)

*/
IF OBJECT_ID('tempdb..##dbFiles') IS NOT NULL 
    DROP TABLE ##dbFiles
GO

/* 
 table to hold all database files 
*/
CREATE TABLE ##dbFiles (
      DBName SYSNAME ,
      name VARCHAR(200) ,
      physical_name VARCHAR(2000) ,
      type_desc VARCHAR(200) ,
      state_desc VARCHAR(200) ,
      FILE_ID INT ) 
GO

/* 
 get all the database files for the sql instance
 into a table
*/
EXEC sp_msforeachdb 'use [?]
      insert into ##dbFiles
      select ''?'' as [DBName],name , physical_name,type_desc, state_desc,file_id
      from sys.database_files'

/*
 FIND FILES THAT HAVE UNACCEPTABLE DISK LATENCY
 AND JOIN TO DATABASE_FILES TABLE TO GIVE FRIENDLY NAMES
 THE VIRTUAL FILE STATS ARE FROM WHEN SQL SERVER WAS LAST RESTARTED

  > 20ms data and > 15ms logs
*/
DECLARE @DataReadLatency_ms INT,@DataWriteLatency_ms INT
DECLARE @LogReadLatency_ms INT,@LogWriteLatency_ms INT

SET @DataReadLatency_ms = 20
SET @DataWriteLatency_ms = 20
SET @LogReadLatency_ms = 15
SET @LogWriteLatency_ms = 15

SELECT  DB_NAME(database_id) AS 'db' ,
        df.file_id ,
        io_stall_read_ms / NULLIF(num_of_reads,0) AS 'AVG READ LATENCY (ms)' ,
        io_stall_write_ms / NULLIF(num_of_writes,0) AS 'AVG WRITE LATENCY (ms)' ,
        size_on_disk_bytes ,
        df.name ,
        df.physical_name ,
        df.type_desc
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) stat
        JOIN ##dbFiles df ON stat.file_id = df.FILE_ID
                             AND df.DBName = DB_NAME(stat.database_id)
WHERE   num_of_reads > 0
        AND num_of_writes > 0
  -- ONLY INCLUDE UNACCEPTABLE LATENCY AND NOT DIV/0
        AND (( 
     ISNULL((io_stall_read_ms / NULLIF(num_of_reads,0)),0) > @DataReadLatency_ms AND type_desc = 'ROWS'
    OR 
              ISNULL((io_stall_write_ms / NULLIF(num_of_writes,0)),0) > @DataWriteLatency_ms AND type_desc = 'ROWS'
            )
   OR
   ( 
     ISNULL((io_stall_read_ms / NULLIF(num_of_reads,0)),0) > @LogReadLatency_ms AND type_desc = 'LOG'
    OR 
              ISNULL((io_stall_write_ms / NULLIF(num_of_writes,0)),0) > @LogWriteLatency_ms AND type_desc = 'LOG'
            ))
             
-- clean up
DROP TABLE ##dbFiles


Example of results returned by the script



Friday 18 May 2012

Printer Setup Pop-up When Opening Excel File



Overview

I had a problem where opening a specific Excel file would prompt the user with a Printer Setup window.
This was a problem because we were automating the Excel workbook to refresh overnight and the prompt was stopping the automation from working.

This prompt also came up regardless of setting Application.DisplayAlerts = False in the controlling code.
Here is an example of what the prompt looks like:

Solution




The issue for me was that one of the sheets in my workbook had been saved with the "Page Break Preview" workbook view. This seems to be saved on a per-worksheet basis rather than per workbook, so make sure you change it to Normal view for every worksheet in the workbook.


 
Once I changed this back to Normal view (as below) and saved the workbook, I was no longer prompted with the Printer Setup screen.



Thursday 17 May 2012

T-SQL View all database files on a SQL Instance


Overview

Sometimes you want to see where all your database files are with a single query. The sys.database_files catalog view tells you at an individual database level, but if you want to see it for all databases then the following script can help.

T-SQL Script



CREATE TABLE ##dbFiles
    (
      DBName SYSNAME ,
      name VARCHAR(200) ,
      physical_name VARCHAR(2000) ,
      type_desc VARCHAR(200) ,
      state_desc VARCHAR(200)
    )
go

-- executes a command on every database on the instance. The ? denotes the database name each pass through the loop
EXEC sp_msforeachdb '
use [?]
insert into ##dbFiles
select ''?'' as [DBName],name , physical_name,type_desc, state_desc
from sys.database_files
'

SELECT  *
FROM    ##dbFiles

-- clean up
DROP TABLE ##dbFiles


Example of results returned by the script

You can see the physical_name column returned by the query indicates where the database file is stored on the server.


Monday 14 May 2012

Dropping active connections before restoring a database

When restoring test environment databases, sometimes you just want to ignore all the current connections to the database and restore it anyway.
You will normally see the message:
Msg 3702, Level 16, State 3, Line 1
Cannot drop database “AdventureWorks” because it is currently in use.
Some people use the approach of running sp_who2 to see the active connections to that database and then running the KILL <SPID> command to drop the connections. This can be a bit tedious, especially if there are a lot of connections, as you need to kill each SPID individually.

To get around this you can drop the existing connections to your database using the command 
ALTER DATABASE [<YourDatabaseNameHere>] 
SET SINGLE_USER WITH ROLLBACK IMMEDIATE

T-SQL - AdventureWorks Example

The following is an example of dropping the AdventureWorks connections and then restoring the database

-- this command will drop the existing connections to the database
USE [master]
GO
ALTER DATABASE [AdventureWorks] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

-- normal restore command
RESTORE DATABASE [AdventureWorks] 
FROM  DISK = N'D:\Backups\AdventureWorks.bak' 
WITH  FILE = 1,  NOUNLOAD,  STATS = 10
GO


Tuesday 27 March 2012

Analysis Services - Shrinkable vs. non-shrinkable memory

Shrinkable vs. non-shrinkable memory

Analysis Services divides memory into two primary categories: shrinkable memory and non-shrinkable memory.

An Example with the Intraday processing

The example below uses a staging cube to do the processing; we then synchronise the changes to the cube that users access.

This example shows the end of the sync from the staging cube to the live cube, which clears out the data cache of the live cube.
You can see in the diagram below that the data cache drops to nothing at 2:52 PM.

Once someone runs another query on the server after a sync, Analysis Services needs to go to disk to get the information.
You can see from the diagram below that at 2:54 it needed to read from disk, and at the same time the data cache in the diagram above starts building up again.

Each of these explained (from the SSAS 2005 performance guide)

When the cleaner is activated, it begins evicting elements of shrinkable memory, based on a cost/benefit algorithm that takes into account a variety of factors, including how frequently the entry is used, the amount of resources required to resolve the entries, and how much space is consumed by related entries. Shrinkable memory elements include the following:

Cached Results: Cached results include the Storage Engine data cache and Query Execution Engine calculation cache. As stated earlier in this document, the Storage Engine data cache contains measure group data and the Query Execution Engine calculation cache contains calculation results. While both caches can help improve query response times, the data cache provides the most benefit to query performance by storing data that has been cached from disk. In situations of memory pressure, the cleaner shrinks the memory used for cached results. With this in mind, it is a good practice to monitor the usage of memory so that you can minimize the scenarios where elevated levels of memory pressure force the removal of cached results. For more information on how to monitor memory pressure, see Monitoring memory management.
Paged-in Dimension Data: Dimension data is paged in from the dimension stores as needed. The paged-in data is kept in memory until the cleaner is under memory pressure to remove it. Note that this is different behavior than previous versions of Analysis Services where all dimension data was resident in memory.
Expired Sessions: Idle client sessions that have exceeded a longevity threshold are removed by the cleaner based on the level of memory pressure. Several server properties work together to manage the longevity of idle sessions. For more information on how to evaluate these properties, see Monitoring the timeout of idle sessions.
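The cleaner's cost/benefit decision can be illustrated with a toy sketch. This is not the actual Analysis Services algorithm; it just shows the general idea that frequently used, expensive-to-rebuild, small entries are worth keeping, while rarely used, large entries go first:

```python
def eviction_order(entries):
    """Rank cache entries for eviction, lowest benefit-per-byte first.
    'entries' maps name -> (hits, build_cost_ms, size_bytes); the scoring
    formula is a made-up illustration, not the real SSAS cleaner."""
    def score(item):
        hits, build_cost_ms, size_bytes = item[1]
        return (hits * build_cost_ms) / size_bytes  # benefit per byte kept
    return [name for name, _ in sorted(entries.items(), key=score)]

cache = {
    "rarely_used_big":  (1,   10,  1_000_000),
    "hot_small":        (500, 5,   10_000),
    "expensive_medium": (50,  200, 100_000),
}
order = eviction_order(cache)  # first element is evicted first
```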

Non-shrinkable memory elements are not impacted by the Analysis Services cleaner.
Non-shrinkable memory includes the following components:

Metadata: For each Analysis Services database, metadata is initialized and loaded into memory on demand. Metadata includes the definition of all objects in the database (not the data elements). The more objects in your database (including cubes, measure groups, partitions, and dimensions) and the more databases that you have on a given server, the larger the metadata overhead in memory. Note that this overhead is generally not large for most implementations. However, you can experience significant overhead if your Analysis Services server contains hundreds of databases with tens or hundreds of objects per database, such as in hosted solutions. For more information on how to monitor metadata overhead, see Minimizing metadata overhead.
Active Sessions: For each active session, calculated members, named sets, connections, and other associated session information is retained as non-shrinkable memory.
Query Memory and Process Memory: Analysis Services reserves specific areas of memory for temporary use during querying and processing. During the execution of a query, for example, memory may be used to materialize data sets such as during the cross joining of data. During processing, memory is used to temporarily store, index, and aggregate data before it is written to disk. These memory elements are non-shrinkable because they are only needed to complete a specific server operation. As soon as the operation is over, these elements are removed from memory.

Links

http://download.microsoft.com/download/8/5/e/85eea4fa-b3bb-4426-97d0-7f7151b2011c/ssas2005perfguide.doc
http://download.microsoft.com/download/6/5/6/6567C845-FC8D-4D62-920F-C027A349C889/SSASPerfGuide2008R2.docx

Wednesday 11 January 2012

Finding code in Object Definitions

INFORMATION_SCHEMA Views

Today I was trying to find a quick way of finding all stored procs that delete information from a particular table.

Realising that the INFORMATION_SCHEMA.ROUTINES view holds the definition of each stored proc, you can query that definition to find the code you are looking for. (Note that ROUTINE_DEFINITION only holds the first 4000 characters of the definition, so matches deep inside very long procedures can be missed.)


SELECT *
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_DEFINITION LIKE '%DELETE FROM ExampleTable%'


OBJECT_DEFINITION

Perhaps the best way is to use the SQL function OBJECT_DEFINITION. This takes a SQL Server object ID and returns the definition for that object.

Example:



-- OBJECT_DEFINITION function
SELECT OBJECT_DEFINITION(OBJECT_ID('trigger_ExampleTable'))


I have found this useful for finding code that is in triggers:

SELECT NAME,OBJECT_DEFINITION(object_id) 
FROM sys.objects
WHERE TYPE = 'TR' -- triggers