Wednesday, August 22, 2018

ERROR NT AUTHORITY\ANONYMOUS LOGON when you authenticate against SQL server

I have seen this error so many times, so once and for all I will write down how to get rid of it. Surprisingly often, developers and technicians do not know that you need to configure Kerberos and related settings to get this working.

Some background
When you set up a SQL Server instance and use a regular domain account as the service account, you need to create an SPN for that account. If you do not, the instance will only authenticate with NTLM. (Whether you should use a domain account at all these days is a different story.) Very often this step is missed. If you run other services such as Reporting Services or Analysis Services, you often design the environment so they run on separate servers. This is a common design and something you should do unless your environment is quite small. But if you follow this best-practice design, you will definitely run into authentication problems if you have not thought about Kerberos.

Example

Let's say you run a browser on a client against a report on a Reporting Services server, where a data source is configured with Windows authentication (as the user viewing the report). This data source connects to a second SQL Server where the database resides. You now have an authentication chain that does a so-called double hop. This is not possible with NTLM authentication, so Kerberos is required.

Configuration
So how do we configure the authentication to work? First, the service accounts involved for SQL Server and Reporting Services need their respective SPNs set. This is done as follows, assuming the server names are SQLServer and SQLRSServer and the service accounts are SQLService and SQLRSService. The domain name and domain are of course whatever you have in your environment.

SETSPN -s MSSQLSvc/SQLServer.domainname.com:1433 domain\SQLService
SETSPN -s MSSQLSvc/SQLServer.domainname.com domain\SQLService
SETSPN -s http/SQLRSServer.domainname.com domain\SQLRSService
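
If you are not sure which accounts the services actually run under, you can check it from T-SQL before registering the SPNs (sys.dm_server_services is available from SQL Server 2008 R2 SP1 and later):

-- Shows the account each SQL Server related service runs under
SELECT servicename, service_account, status_desc
FROM sys.dm_server_services;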

Now the services are ready to use Kerberos authentication. Restart the services and then, if you like, verify the connections with this query; you should see KERBEROS as auth_scheme:

SELECT c.auth_scheme, c.client_net_address, s.program_name
FROM sys.dm_exec_connections c
INNER JOIN sys.dm_exec_sessions s
    ON c.session_id = s.session_id
WHERE c.session_id > 50

But we also need to configure delegation on the Reporting Services account, since this account delegates authentication to the SQL Server service account. This is done in the Active Directory management tool. You can choose the easy option, "Trust this user for delegation to any service (Kerberos only)", or if you want tighter security you can point out exactly which services to trust. Just browse for the SQL Server service account (SQLService) and you will see the SPNs registered on that object, as in the first picture below.

[Screenshots: the SPNs registered on the SQLService account and the delegation settings for the Reporting Services account]

More information from Microsoft about how to do this with Reporting Services can be found at the link below, although strangely it does not mention trusting the account for delegation, which is needed.
https://docs.microsoft.com/en-us/sql/reporting-services/report-server/register-a-service-principal-name-spn-for-a-report-server?view=sql-server-2017

Monday, January 16, 2017

Take advantage of parallel inserts in SQL 2016

In SQL Server 2016 it is possible to insert in parallel. I was asked to do some testing of this and had no idea how good it was, but after these tests I am very excited about how fast it is.

I had a fairly large table with 220 million rows, about 75 GB in size. It had one clustered index, and the database recovery model was set to simple. I first tested a plain INSERT, then SELECT INTO and INSERT WITH (TABLOCK). The last two take advantage of the parallel insert.
The hardware was an HP ProLiant BL460 G9 with 256 GB RAM, 24 cores and Windows Server 2016. SQL Server 2016 ran with max degree of parallelism set to four and 128 GB as max server memory.
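
As a rough sketch of the kind of statements compared (table and column names here are just placeholders for illustration):

-- Plain insert: the insert operator itself runs serially
INSERT INTO dbo.TargetTable (Col1, Col2)
SELECT Col1, Col2
FROM dbo.SourceTable;

-- With TABLOCK on the target, SQL Server 2016 can run the insert in parallel
INSERT INTO dbo.TargetTable WITH (TABLOCK) (Col1, Col2)
SELECT Col1, Col2
FROM dbo.SourceTable;

-- SELECT INTO creates a new table and is also eligible for a parallel insert
SELECT Col1, Col2
INTO dbo.NewTable
FROM dbo.SourceTable;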


A query plan comparison shows the difference. As you can see, there is a parallel insert in the second query plan.

[Screenshot: query plan comparison - the second plan contains a parallel insert]

The difference in time was amazing: 4.2 minutes compared to 18.2 minutes, a reduction of roughly 77%. That's cool. So when possible, remember to evaluate whether you can use TABLOCK; I think it is possible in many ETL processes. Also, it is quite common to do SELECT INTO a temporary table, and that runs a parallel insert anyway.

Thursday, December 15, 2016

Use Azure with SQL server on premises

In SQL Server 2016 you can integrate with Microsoft Azure in many ways. I decided to test some scenarios that can be useful in many customer environments.


Store part of an on-premises database in Azure Blob storage

Specific filegroups in Azure

This is typically aimed at data that is used less often, for example archived or historical data.

In this scenario, we have a huge table with transactions from several years, but we are mostly interested in querying the current year. We can then partition the table so that older data resides on a partition whose file is stored in Azure. In our example FG_ARCHIVE is in Azure (see how the URL points to the file), and the current year is split over four partitions, one for each quarter.

Set up the filegroups and the partition function and scheme.
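
One prerequisite that is easy to miss: before SQL Server can create a data file directly in Blob storage, it needs a credential whose name matches the container URL and that holds a shared access signature. A minimal sketch, where the SAS token is a placeholder you generate in the Azure portal:

-- The credential name must be the URL of the blob container that will hold the file
CREATE CREDENTIAL [https://ifazurebackuptest.blob.core.windows.net/ifdbdatafiles]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<your SAS token>';
GO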


ALTER DATABASE [DatabaseName] ADD FILEGROUP [FG_1]
GO
ALTER DATABASE [DatabaseName] ADD FILEGROUP [FG_2]
GO
ALTER DATABASE [DatabaseName] ADD FILEGROUP [FG_3]
GO
ALTER DATABASE [DatabaseName] ADD FILEGROUP [FG_4]
GO
ALTER DATABASE [DatabaseName] ADD FILEGROUP [FG_ARCHIVE]
GO
ALTER DATABASE [DatabaseName] ADD FILE ( NAME = N'DatabaseName_data_FG_1', FILENAME = N'F:\Data1\MSSQL\Data\DatabaseName_data_FG_1.ndf' , SIZE = 8192KB , FILEGROWTH = 65536KB ) TO FILEGROUP [FG_1]
GO
ALTER DATABASE [DatabaseName] ADD FILE ( NAME = N'DatabaseName_data_FG_2', FILENAME = N'F:\Data1\MSSQL\Data\DatabaseName_data_FG_2.ndf' , SIZE = 8192KB , FILEGROWTH = 65536KB ) TO FILEGROUP [FG_2]
GO
ALTER DATABASE [DatabaseName] ADD FILE ( NAME = N'DatabaseName_data_FG_3', FILENAME = N'F:\Data1\MSSQL\Data\DatabaseName_data_FG_3.ndf' , SIZE = 8192KB , FILEGROWTH = 65536KB ) TO FILEGROUP [FG_3]
GO
ALTER DATABASE [DatabaseName] ADD FILE ( NAME = N'DatabaseName_data_FG_4', FILENAME = N'F:\Data1\MSSQL\Data\DatabaseName_data_FG_4.ndf' , SIZE = 8192KB , FILEGROWTH = 65536KB ) TO FILEGROUP [FG_4]
GO
ALTER DATABASE [DatabaseName] ADD FILE ( NAME = N'DatabaseName_data_fg_archive', FILENAME = 'https://ifazurebackuptest.blob.core.windows.net/ifdbdatafiles/aisfhub_financialhub_analysis_data_fg_archive.ndf' , SIZE = 8192KB , FILEGROWTH = 65536KB ) TO FILEGROUP [FG_ARCHIVE]
GO

CREATE PARTITION FUNCTION [DatePartitionFunctionNormal](date) AS RANGE RIGHT FOR VALUES (N'2016-01-01T00:00:00.000'
, N'2016-04-01T00:00:00.000'
, N'2016-07-01T00:00:00.000'
, N'2016-10-01T00:00:00.000')
GO

CREATE PARTITION SCHEME [myDateRange1] AS PARTITION [DatePartitionFunctionNormal] TO (FG_ARCHIVE, FG_1, FG_2, FG_3, FG_4, [Primary])
GO
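
The post does not show the table itself; to place a table on the scheme, you create (or rebuild) it on the partition scheme. A minimal sketch with hypothetical table and column names:

CREATE TABLE dbo.TransactionHistory
(
    TransactionID   bigint        NOT NULL,
    TransactionDate date          NOT NULL,
    Amount          decimal(18,2) NOT NULL,
    CONSTRAINT PK_TransactionHistory
        PRIMARY KEY CLUSTERED (TransactionDate, TransactionID)
)
ON [myDateRange1](TransactionDate);
GO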

In the Azure portal you can see the files, both before and after we have distributed data to them.

[Screenshots: the database files in the Azure portal, before and after data was distributed to them]

Splitting and merging partitions that reside in Azure works exactly like on any other SQL Server.
Before changing the partitions, the structure in my example looked like this:

Object          P#   Filegroup    Rows        Comparison   Date
dbo.Tablename   1    FG_ARCHIVE   497617      less than    2016-01-01
dbo.Tablename   2    FG_1         148045      less than    2016-04-01
dbo.Tablename   3    FG_2         45273344    less than    2016-07-01
dbo.Tablename   4    FG_3         100942096   less than    2016-10-01
dbo.Tablename   5    FG_4         72833665    less than    NULL
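
An overview like the one above can be produced with a metadata query along these lines (a sketch; dbo.Tablename is a placeholder for the partitioned table):

-- Rows per partition, the filegroup each partition lives on,
-- and the upper boundary value (RANGE RIGHT, so values are "less than" the boundary)
SELECT OBJECT_NAME(p.object_id)  AS [Object],
       p.partition_number        AS [P#],
       fg.name                   AS [Filegroup],
       p.rows                    AS [Rows],
       prv.value                 AS [Less than]
FROM sys.partitions p
INNER JOIN sys.indexes i
    ON p.object_id = i.object_id AND p.index_id = i.index_id
INNER JOIN sys.partition_schemes ps
    ON i.data_space_id = ps.data_space_id
INNER JOIN sys.destination_data_spaces dds
    ON ps.data_space_id = dds.partition_scheme_id
   AND dds.destination_id = p.partition_number
INNER JOIN sys.filegroups fg
    ON dds.data_space_id = fg.data_space_id
LEFT JOIN sys.partition_range_values prv
    ON ps.function_id = prv.function_id
   AND prv.boundary_id = p.partition_number
WHERE p.object_id = OBJECT_ID('dbo.Tablename')
  AND p.index_id IN (0, 1);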


We now first split the last partition (partition 4) and add a new partition 5. Then we merge partition 2 into partition 1, so that the data is moved from filegroup FG_1 to the archive filegroup FG_ARCHIVE.

First add a new partition for new data with split date 2017-01-01.

ALTER DATABASE [SomeDatabaseName] ADD FILEGROUP [FG_5]
GO

ALTER DATABASE [SomeDatabaseName] ADD FILE
            ( NAME = N'Logical_filename_FG_5',
                     FILENAME = N'F:\xxxx_Data1\MSSQL\Data\XXXXXXXX_FG_5.ndf' ,
                     SIZE = 102400KB , FILEGROWTH = 102400KB ) TO FILEGROUP [FG_5]
GO

ALTER PARTITION SCHEME myDateRange1 NEXT USED [FG_5];
GO
ALTER PARTITION FUNCTION DatePartitionFunctionNormal() SPLIT RANGE (N'2017-01-01T00:00:00.000');
GO

Then merge the data from FG_1 into FG_ARCHIVE. The merge boundary is 2016-01-01, and the new boundary for partition 1 is now 2016-04-01.

ALTER PARTITION SCHEME myDateRange1 NEXT USED [FG_ARCHIVE];
GO
ALTER PARTITION FUNCTION DatePartitionFunctionNormal() MERGE RANGE (N'2016-01-01T00:00:00.000');
GO

The rows from FG_1 are now in FG_ARCHIVE: 497617 + 148045 = 645662.

Object          P#   Filegroup    Rows        Comparison   Date
dbo.Tablename   1    FG_ARCHIVE   645662      less than    2016-04-01
dbo.Tablename   2    FG_2         45273344    less than    2016-07-01
dbo.Tablename   3    FG_3         100942096   less than    2016-10-01
dbo.Tablename   4    FG_4         72833665    less than    2017-01-01
dbo.Tablename   5    FG_5         0           less than    NULL


While the data was being moved from one partition to the other, we could see the activity in the Azure portal.

[Screenshot: activity on the Azure data file in the portal while the partition data was moved]

Test between Azure and on premises

Test number one is just a simple Clustered Index Scan on this large table, hitting only the data stored in filegroup FG_ARCHIVE. The query was forced to physically read the data from disk on each run. The difference is huge: about 200 ms on SSD compared to somewhere between 5 and 8 seconds from Azure.
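
The post does not show how the cold reads were forced; a common way to do it on a test system (never in production) is to flush the buffer pool between runs, for example:

-- Write dirty pages to disk, then empty the buffer pool so the next run reads from disk
CHECKPOINT;
DBCC DROPCLEANBUFFERS;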

The second test was a customer-defined query that only returns data stored on filegroup FG_ARCHIVE. In this case we did not see the same difference as in the first test; the times were more or less the same. This was probably because the query scanned the whole clustered index, including the other partitions.

Monday, April 25, 2016

Common problem for a DBA - Parameter Sniffing

As a DBA, one common problem I often see at our customers is something called parameter sniffing. Queries or "applications" suddenly start to run very slowly. Since the users experience slowness, they of course ask the DBA what is wrong. As I see it, it is maybe not the DBA's role to solve it, but we must be aware of it and come up with advice on how to handle it, so in that way it ends up on our table anyway. In this article I will explain basic parameter sniffing, how you can find it, and the possible workarounds.


Explanation

From my experience the problem often occurs in SQL statements where a column in the WHERE clause has a very skewed data distribution. Below is an example of a simple table called dbo.customer, where there is a big difference in the number of records between the city values.


City        Count
Halmstad    1000000
Kiruna      1
London      100
Paris       10

To get the data, the SQL engine chooses different approaches depending on whether the selectivity is high or low. For the city Kiruna it uses a Key Lookup, but for Halmstad a Clustered Index Scan is the best choice.

SELECT * FROM dbo.customer WHERE City = 'Kiruna'
SELECT * FROM dbo.customer WHERE City = 'Halmstad'
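
These plans assume a clustered index on the table and a nonclustered index on the City column; for reference, a minimal sketch of such a setup (hypothetical column names):

CREATE TABLE dbo.customer
(
    CustomerID   int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    CustomerName varchar(100)      NOT NULL,
    City         varchar(50)       NOT NULL
);
GO
-- Nonclustered index on City; SELECT * then needs a Key Lookup for the remaining columns
CREATE NONCLUSTERED INDEX IX_customer_City ON dbo.customer (City);
GO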

[Screenshots: execution plans - a Key Lookup for 'Kiruna' and a Clustered Index Scan for 'Halmstad']

If we create a stored procedure for this SELECT statement instead, we will probably see issues with parameter sniffing.

CREATE PROCEDURE USP_GetCustomerByCity @City VARCHAR(50)
AS
SELECT * FROM dbo.customer WHERE City = @City

If we now run this stored procedure with the high-selectivity value Kiruna, it is compiled with the query plan for that particular SELECT statement, which uses a Key Lookup. If we then run the procedure with the low-selectivity value Halmstad, it still uses the plan compiled for the first value, even though a Clustered Index Scan would perform better. You can also see that SQL Server assumes the value returns one row; see "Estimated Number of Rows".
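
For example:

-- First call compiles and caches a plan for the highly selective value
EXEC dbo.USP_GetCustomerByCity @City = 'Kiruna';

-- Second call reuses that cached plan, even though a scan would be cheaper here
EXEC dbo.USP_GetCustomerByCity @City = 'Halmstad';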

[Screenshot: execution plan for the procedure call with 'Halmstad' - the plan compiled for 'Kiruna' is reused and Estimated Number of Rows is 1]

If SQL Server had used a Clustered Index Scan instead of a Key Lookup, which it would have done with a correct compilation, we would have seen a dramatic difference in reads, as you can see in the Statistics IO output below: first with a Clustered Index Scan and then with a Key Lookup.

Table 'customer'. Scan count 1, logical reads 7316
Table 'customer'. Scan count 1, logical reads 3002730
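
For reference, output like the above is produced by enabling Statistics IO before running the statement:

SET STATISTICS IO ON;
SELECT * FROM dbo.customer WHERE City = 'Halmstad';
SET STATISTICS IO OFF;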

Find the problematic stored procedures

So how can you troubleshoot this and find the stored procedures that perform badly? Unfortunately, there is no single place in SQL Server where you can easily check this.
One approach to finding procedures that suffer, or could suffer, from parameter sniffing is to look for queries with a big difference in logical reads. In the DMV sys.dm_exec_query_stats, SQL Server collects information about all the queries in the plan cache: how many times they have executed, the number of rows returned, and so on. In this case we can make use of the MIN and MAX logical reads recorded for our query. Let us run the statement below.


SELECT qs.execution_count,
       SUBSTRING(qt.text, qs.statement_start_offset/2 + 1,
                 (CASE WHEN qs.statement_end_offset = -1
                       THEN LEN(CONVERT(nvarchar(max), qt.text)) * 2
                       ELSE qs.statement_end_offset
                  END - qs.statement_start_offset
                 )/2
                ) AS query_text,
       qs.last_logical_reads,
       qs.max_logical_reads,
       qs.min_logical_reads,
       qs.total_logical_reads
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.text LIKE '%USP_GetCustomerByCity%'
ORDER BY qs.execution_count DESC;


This returns the information about this example's procedure: it has been executed seven times, and max_logical_reads is 3002734 compared to min_logical_reads of 6. That is quite a big difference, and it reflects the Statistics IO numbers quite well from when I ran the SELECT statements outside the procedure.

So from this perspective we can tentatively conclude that this statement suffers from a parameter sniffing problem. It is not a guarantee, but it is a good place to start if you have no knowledge of the procedures in your system.


Workarounds


One way to solve the problem, as I often find myself doing, is to run EXEC sp_updatestats. Probably not the best way to solve it, but it often works since it also triggers a recompile of the stored procedures. Often you are in a hurry and not sure whether the statistics are simply stale, which is a common problem as well. However, a correct solution would of course be to code the procedure properly instead. There are a few ways to do that, described briefly here.

RECOMPILE
Create the procedure with the option “WITH RECOMPILE”

CREATE PROCEDURE [dbo].[USP_GetCustomerByCity] @City VARCHAR(50)
WITH RECOMPILE
AS
SELECT * FROM dbo.customer WHERE City = @City;

OPTIMIZE FOR
Create the procedure with the option “OPTIMIZE FOR” as in the example below. I think this is the best approach in newer versions of SQL Server.

CREATE PROCEDURE [dbo].[USP_GetCustomerByCity] @City VARCHAR(50)
AS
SELECT * FROM dbo.customer WHERE City = @City
OPTION (OPTIMIZE FOR (@City = 'Halmstad'))

Plan Guide
This option can be useful when you cannot change the procedure. By creating a plan guide for the procedure you can set the options there instead of changing the procedure code.
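
A minimal sketch of such a plan guide, assuming the procedure above (the @stmt text must match the statement inside the procedure exactly):

EXEC sp_create_plan_guide
    @name = N'PG_USP_GetCustomerByCity',
    @stmt = N'SELECT * FROM dbo.customer WHERE City = @City',
    @type = N'OBJECT',
    @module_or_batch = N'dbo.USP_GetCustomerByCity',
    @params = NULL,
    @hints = N'OPTION (OPTIMIZE FOR (@City = ''Halmstad''))';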

Local variable
If the parameter values are known at compile time, the optimizer uses the statistics histogram and generates the best plan for those parameters. When we instead assign the parameter to a local variable and use that in the query, SQL Server cannot use the actual parameter value to find the optimal plan. In this scenario the optimizer uses the density vector of the statistics, and it will generate the same execution plan for all input parameters.

CREATE PROCEDURE [dbo].[USP_GetCustomerByCity] @City VARCHAR(50)
AS
DECLARE @LocalVariable VARCHAR(50) = @City
SELECT * FROM dbo.customer WHERE City = @LocalVariable

Conclusion

Even though this is a quite common problem, I see that many developers are not aware of it or of how SQL Server works here. To find the problem procedures you have to dig into the plan cache, and it is quite difficult to pinpoint the culprit. I think and hope the Query Store in SQL 2016 will be useful for troubleshooting this kind of problem; I will try it out when it is more available.