Wednesday 30 October 2013

1.15 - Linked Server Alias

I take no credit for this; it is taken from http://alexpinsker.blogspot.co.uk/2007/08/how-to-give-alias-to-sql-linked-server.html, but it is very useful.

There is no problem adding a SQL linked server that points to a specific host. But what if you want to give it an alias, rather than use the hostname as the linked server name?
Here is how to do it:
Step 1:
• In SQL Server Management Studio, open Linked Servers and then 'New Linked Server'.
• In the wizard that appears, select the General tab.
• Specify the alias name in the "Linked server" field.
• Select SQL Native Client as the provider.
• Enter sql_server in the "Product Name" field (that's the magic).
• In "Data Source", specify the name of the host to be used as the linked server.
Step 2:
• In the Security tab, specify the proper security options (e.g. security context).
Step 3:
• In the Server Options tab, set "Data Access", "RPC", "Rpc Out" and "Use Remote Collation" to true.
Step 4:
• Enjoy.
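The same steps can also be scripted rather than clicked through; a minimal sketch (the alias MyAlias and host REALHOST01 are made-up examples):

```sql
-- Create a linked server whose name (the alias) differs from the real host.
-- @srvproduct = 'sql_server' is the same trick as the "Product Name" field above.
EXEC master.dbo.sp_addlinkedserver
    @server     = N'MyAlias',     -- hypothetical alias name
    @srvproduct = N'sql_server',
    @provider   = N'SQLNCLI',
    @datasrc    = N'REALHOST01';  -- hypothetical real host from "Data Source"

-- The server options from Step 3:
EXEC master.dbo.sp_serveroption @server = N'MyAlias', @optname = N'data access', @optvalue = N'true';
EXEC master.dbo.sp_serveroption @server = N'MyAlias', @optname = N'rpc',         @optvalue = N'true';
EXEC master.dbo.sp_serveroption @server = N'MyAlias', @optname = N'rpc out',     @optvalue = N'true';
```

Security mappings (Step 2) would still need sp_addlinkedsrvlogin, which is easiest to configure through the GUI as described above.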

Thursday 24 October 2013

1.14 - Product Function In SQL

As some of you may be aware, there is no PRODUCT aggregate function in SQL Server, so we have to find a solution ourselves. I searched for a few hours and found various approaches using LOG, but none worked fully because they did not account for negative numbers. I then tried a recursive CTE and failed, before remembering that you can accumulate into a variable during a SELECT, and hey presto! It worked; details below:

DECLARE @a DECIMAL(37, 9) = 1.00; -- As 1 * x will always start us off

WITH cte AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS [row]
    FROM sys.tables
    UNION ALL SELECT -5          -- Add random data for test
    UNION ALL SELECT 20          -- Add random data for test
    UNION ALL SELECT 5.26566666  -- Add random data for test
    UNION ALL SELECT -3          -- Add random data for test
)
SELECT @a = @a * [row] -- Accumulate the product in the variable
FROM cte;

SELECT CONVERT(VARCHAR, CONVERT(MONEY, @a), 1); -- Select in a nice format
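For completeness, the LOG approach mentioned above can be made to handle negatives (and zeros) if you track the sign and zero count separately from the magnitude; a sketch against a hypothetical #nums table:

```sql
-- PRODUCT via EXP(SUM(LOG(ABS(val)))), with sign and zeros handled separately.
CREATE TABLE #nums (val DECIMAL(18, 6));
INSERT INTO #nums VALUES (-5), (20), (5.265666), (-3);

SELECT
    CASE
        WHEN SUM(CASE WHEN val = 0 THEN 1 ELSE 0 END) > 0 THEN 0  -- any zero => product is 0
        ELSE EXP(SUM(LOG(ABS(val))))                              -- magnitude of the product
             * CASE WHEN SUM(CASE WHEN val < 0 THEN 1 ELSE 0 END) % 2 = 1
                    THEN -1 ELSE 1 END                            -- negative if odd count of negatives
    END AS product_of_vals
FROM #nums;

DROP TABLE #nums;
```

Note that EXP/LOG work in float, so expect small rounding differences compared with the variable-accumulation method.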

Wednesday 2 October 2013

1.12 - Get SQL Service Accounts Via T-SQL

/*SQL Service Account*/
DECLARE @ServiceAccountName VARCHAR(250);
EXECUTE master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE', N'SYSTEM\CurrentControlSet\Services\MSSQLSERVER',
    N'ObjectName', @ServiceAccountName OUTPUT, N'no_output';
SELECT @ServiceAccountName;
GO

/*SQL Agent Service Account*/
DECLARE @ServiceAccountName VARCHAR(250);
EXECUTE master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE', N'SYSTEM\CurrentControlSet\Services\SQLServerAgent',
    N'ObjectName', @ServiceAccountName OUTPUT, N'no_output';
SELECT @ServiceAccountName;

Friday 23 August 2013

1.10 - Performance Of The IN Operator

I recently had to look at the performance of a query that was running with high CPU and taking 40+ seconds to run.
The scenario -
A stored proc used a view that joined to a table containing a list of numbers. That list was generated from a string (roughly 10 comma-separated values) passed into the stored proc; the string was split into a table via a UDF (table-valued function), and the result was then used in an IN operator to filter the rows (the filter had to be dynamic). All indexes possible were in place, within the efficient running of the system.

If I changed the IN operator to a != or =, the query time dropped from 40+ seconds to < 200ms, but that was not fit for purpose in this instance as the filter had to be dynamic. I then moved the IN statement out of the WHERE clause and expressed it as an inner join on the predicates that the IN operator was using... and, wow! The number of logical reads on the table slumped from 4+ million to 4, and the time came down to 6 secs!! Not too bad, as it was an offline but crucial process.

So the long and the short of it: if you can join to it, do so, as you may pay the price for utilisation of IN statements.
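A minimal sketch of the rewrite, assuming a hypothetical split function dbo.fn_SplitCsv returning a [Value] column and a hypothetical dbo.Orders table:

```sql
DECLARE @Ids VARCHAR(200) = '3,7,19';

-- Before: filtering with IN against the table-valued function (slow in this case)
SELECT o.*
FROM dbo.Orders o
WHERE o.CustomerId IN (SELECT [Value] FROM dbo.fn_SplitCsv(@Ids));

-- After: the same filter expressed as an inner join (far fewer logical reads here)
SELECT o.*
FROM dbo.Orders o
INNER JOIN dbo.fn_SplitCsv(@Ids) s
        ON s.[Value] = o.CustomerId;
```

One caveat: the join form can return duplicate rows if the split list contains duplicates, so de-duplicate the list (or use DISTINCT) where that matters.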

Friday 16 August 2013

1.09 - API CURSORS and linked servers - Grrrrr!!

Like I always say: every day is a school day.
A normally well-performing system suddenly started running at 99% utilisation constantly. After taking a look, I noticed some strange requests on the server, all relating to FETCH_APICURSORXXXXXXX. I could only see what these were by querying the sys.dm_exec_cursors DMV. I then looked at Profiler, saw the millions of sp_cursorfetch transactions and realised there was a big issue: SQL was using an API cursor over the OLE DB connection to the linked server, resulting in millions of rows being returned individually for a sizeable dataset.

After a bit of research, OPENQUERY resolves this issue, but it doesn't accept parameters, so you need to use dynamic SQL or call sp_executesql on the target server.
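A sketch of the dynamic-SQL workaround, with a hypothetical linked server LNK01 and remote table RemoteDb.dbo.Orders (OPENQUERY only accepts a literal string, so the whole statement has to be built first; note the doubled quotes inside it):

```sql
DECLARE @CustomerId INT = 42;
DECLARE @sql NVARCHAR(MAX);

-- Build the full OPENQUERY statement as a string. The inner query runs
-- entirely on the remote server, so no API cursor is created locally.
SET @sql = N'SELECT * FROM OPENQUERY(LNK01, '
         + N'''SELECT * FROM RemoteDb.dbo.Orders WHERE CustomerId = '
         + CAST(@CustomerId AS NVARCHAR(12))
         + N''')';

EXEC sp_executesql @sql;
```

Since the parameter is concatenated into the string, keep this to trusted or strictly-typed values (an INT here) to avoid injection.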

I hope this helps someone and saves them the hours that it took me to sort out.

A good reference : http://www.sql-server-performance.com/2006/api-server-cursors/#comment-3847

Wednesday 7 August 2013

1.08 - Intellisense Refresh

To force a suggestion = CTRL+J   and to force an intellisense refresh = CTRL+SHIFT+R
- Who knew?

1.07 - SQL SERVER – How to Refresh SSMS Intellisense Cache to Update Schema Changes

Have you ever faced a situation where you have just created or modified an object but SSMS still shows an error? I quite often find that my SSMS Intellisense cache has not been refreshed or updated. This is particularly frustrating when you are presenting on stage, as the red underline means "error" in many people's minds, and it is hard for them to believe it when the underlined code runs successfully.

Here is an image of a recent situation, where I had just dropped an index but SSMS Intellisense was still showing that the index existed.
When I face this kind of situation, without wasting time I immediately press CTRL + SHIFT + R. This clears the Intellisense cache and removes the underline from the word. You can also refresh the Intellisense cache via Edit -> Intellisense -> Refresh Local Cache.

Friday 2 August 2013

1.06 - Old And Forgotten Database Recovery Options

I recently stumbled across some old commands to remove a database from 'suspect' or 'restoring' status, although I would always recommend trying RESTORE DATABASE ... WITH RECOVERY first to resolve the recovery issue.
These are:

  1. sp_resetstatus
  2. DBCC DBRECOVER (database) - removed in SQL 2012
These should not be your first option, but when all else fails...
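A minimal sketch of the recommended first attempt and the fallback (MyDb is a made-up database name):

```sql
-- First attempt: bring a database stuck in 'restoring' online
RESTORE DATABASE MyDb WITH RECOVERY;

-- Fallback: clear the suspect flag, then restart the SQL Server service
EXEC sp_resetstatus 'MyDb';
```

sp_resetstatus only turns the flag off; whatever corrupted the database in the first place still needs investigating (DBCC CHECKDB and the error log are the usual next stops).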


Wednesday 31 July 2013

1.05 - Bulk Insert Row Terminators Gotcha

I recently stumbled across this issue when a dev asked me to have a look at a file that he was attempting to import. Basically, the bulk import or bcp statement was not recognising the row terminator and was appending all the data beyond the last column into that very same last column. Annoying, to say the least.
After some researching, I found that this is quite a common problem and most people suggest dynamically building the statement and that seems to work.
e.g.
DECLARE @Cmd NVARCHAR(MAX) = 'BULK INSERT [TableName] FROM ''[InputFile]'' WITH (FIELDTERMINATOR = ''|'', ROWTERMINATOR = ''' + CHAR(10) + ''')';

EXEC sp_executesql @Cmd;

However, I am not a fan of dynamic SQL and tried to find out if it can be done without it, and... woohoo! It can be. You just have to use the hex value for the row terminator, and then the world falls back into a sensible orbit and life makes sense again.
e.g.
BULK INSERT [TableName] FROM '[InputFile]' WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '0xa')

I hope this helps someone out there as it saved me a lot of time after I sussed it. 

Wednesday 26 June 2013

Redgate - SQL In The City

Attended Redgate's SQL in the City last week in London for the second year running. Always worth a visit although it was more sales focused this year with a distinct lack of other vendors. A sign of the times I expect.
Quite a few of the sales pitches, ahem... I mean informed lectures, discussed what amounts to a nice GUI for what already exists, if you know where to look or have taken the time to find out. It is always good, though, to listen to the guys (and girls) from SQLServerCentral (Steve Jones, Grant Fritchey, Gail Shaw, etc.), as there are always a few golden nuggets of information that I had not heard or had missed, and they do take time to speak to you if you are so inclined.
Well worth a visit for anyone who works with databases or is in charge of delivering solutions, if they are coming to a town near you.

Thursday 16 May 2013

1.03 - Changing The SQL Server Name After a Move

Well worth knowing. I have seen this a lot with VMs or with other sysprep'd machines.

http://www.sqlservercentral.com/articles/Administration/98346/
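The gist of the fix covered in the article, sketched here (OLDNAME/NEWNAME are made-up examples; a SQL Server service restart is needed afterwards):

```sql
-- Check what SQL Server thinks it is called vs the actual machine name
SELECT @@SERVERNAME AS CurrentName, SERVERPROPERTY('MachineName') AS MachineName;

-- Drop the stale registered name and add the new one as the local server
EXEC sp_dropserver 'OLDNAME';
EXEC sp_addserver 'NEWNAME', 'local';
-- Restart the SQL Server service for @@SERVERNAME to reflect the change
```

Until the rename is done, anything keyed off @@SERVERNAME (maintenance plans, replication, some monitoring tools) can behave oddly on a sysprep'd or cloned machine.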

1.02 - Slow Deletes

The devs sauntered over and asked if I could take a look at a delete that was timing out at the front end whilst only removing 410 records. After a bit of investigation (traces, execution plans, head scratching, etc.) I concluded that one particular part of the script was indeed taking a very long time. I ran it through the estimated execution plan and it chirped up and requested an index or two (which I knew could also add to the problem), so I addressed those. Unfortunately, no impact at all. I then thought that perhaps the additional indexes and triggers on the tables were causing the issue, so I dropped the unused(ish) indexes and disabled the delete triggers. Still no effect. I then wondered whether the cached plan was not the optimal one and dropped it from the cache. Still no effect.
I ran the sproc manually in a rollback with the TIME, IO and XML statistics on to get a feel for what was going on, and it appeared to be doing a massive amount of logical reads on a child table whilst taking 90+ seconds. Ah hah! There was a massive number of reads (4 million+) for what was a small delete. I created a non-clustered index on the foreign key that related to my parent table and hey presto! The logical reads dropped to 1,232 and the total sproc now took 183ms.
There are days when things don't always go right and days when they fall into place. This took longer than it should have done but the feeling of getting it sorted in the end is always a great high.
I hope this may help others on the same quest.
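The fix, sketched with hypothetical Parent/Child tables: a delete on the parent forces SQL Server to check the child table's foreign key column, and without an index on that column each check is a scan.

```sql
-- Hypothetical schema: Child.ParentId has a foreign key to Parent.Id.
-- Without an index on Child.ParentId, every parent delete scans Child
-- to verify no referencing rows remain.
CREATE NONCLUSTERED INDEX IX_Child_ParentId
    ON dbo.Child (ParentId);
```

Indexing foreign key columns is a good habit generally; the engine creates the constraint but not the supporting index.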

Thursday 2 May 2013

1.01 - Map Postcodes in SQL Server Reporting Services

Another fantastic link on a subject that I have needed to look at this week. Sometimes things all fall into place at the right time.
This link is to a well-written and fantastically informative piece of work on postcode mapping in SSRS. It is the first time I have seen it explained in such easy, down-to-earth steps. Well done, the Purple Frog! Take a look; it has lots of possibilities.
Map postcodes in ssrs reporting services

Wednesday 24 April 2013