Archive for the ‘SQL Analysis Services’ Category

News On SSAS Data Source Support In Power BI

November 7, 2014 Leave a comment

Chris Webb's BI Blog

Yesterday we heard (again) that SSAS will be supported as a data source for cloud-based reports in Power BI. Today, in a session, two new important details on this emerged:

  • It will work with both Tabular and Multidimensional
  • It will connect as the user running the query, so SSAS security (e.g. dimension security) will work just the same as it does on-premises. No special setup will be needed; there were no details beyond the fact that it will work using the EffectiveUserName connection string property.
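For reference, EffectiveUserName is an existing Analysis Services connection string property; a connection string using it would look something like the fragment below (the server, database and user names are placeholders):

```
Provider=MSOLAP;Data Source=MySSASServer;Initial Catalog=MySSASDatabase;EffectiveUserName=DOMAIN\ReportUser
```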

I’m sure a lot of people will be interested to hear this…


Write your SQL or MDX code on your WordPress blog

March 13, 2013 Leave a comment

If you need to embed SQL or MDX source code, use the template below:
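On WordPress.com, the sourcecode shortcode does the embedding; a minimal sketch for SQL (the language attribute accepts other values too, and the query is just a placeholder):

```
[sourcecode language="sql"]
SELECT TOP 10 * FROM dbo.MyTable
[/sourcecode]
```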


Categories: BI, SQL Analysis Services Tags: ,

Lazy-loading with MDX (MDX Pagination)

March 13, 2013 3 comments

In a conventional T-SQL query, pagination can be applied to retrieve a subset of the total result set. In web development, this technique can be used to achieve lazy loading. This post discusses how to do pagination in MDX to achieve lazy loading.

The key is to use the Subset function in MDX. MSDN documents the function here.

But before you use the Subset function, you have to know the total size of the result set, so that you can calculate the total number of pages. The template below illustrates how to get the row count:
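As a sketch, the row count can be obtained by defining your row set as a named set and counting it (the set and cube names in angle brackets are placeholders):

```
WITH
 SET myset AS
 <Your SET Definition>
 MEMBER [Measures].[cnt] AS
 Count(myset)
SELECT
 [Measures].[cnt] ON COLUMNS
FROM <Your Cube>
```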


You can simply wrap the above MDX in an OpenQuery and assign the result to a variable.

Once you have the row count, you can decide how to control the pagination.
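A sketch of the paged query itself: Subset takes the set, a zero-based start index and a count, so page n with a page size of 10 starts at index (n - 1) * 10 (all bracketed names are placeholders):

```
SELECT
 <Your Measures> ON COLUMNS,
 Subset(<Your SET Definition>, 10, 10) ON ROWS // second page: start index 10, 10 rows
FROM <Your Cube>
```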


In a web page, the initial load is not so fast, since the row count query is an extra task. But once you go to page 2, 3, etc., you'll find the performance is really fast.

Categories: BI, SQL Analysis Services Tags: ,

OpenQuery and OLE DB behaviour – MDX runs twice

March 13, 2013 4 comments

Issue: the combination of a linked server and the OLE DB provider for SSAS causes the MDX to be executed TWICE, which doubles the execution time.

Resolution: unfortunately, there is no way around it; the limitation lies in linked servers with the OLE DB provider. According to my findings, “openquery” theoretically performs better than “exec @query At <linked server>”, but on the other hand, the latter supports strings longer than varchar(8000) or nvarchar(4000). Hence, we could use openquery when the string length is <= 4000 (nvarchar) and use “exec @query At <linked server>” when the string is longer.
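That rule of thumb could be sketched as below (assuming @mdx and @CubeLocation are populated by the caller; the REPLACE doubles any embedded single quotes so the MDX survives being nested inside dynamic SQL):

```sql
DECLARE @mdx NVARCHAR(MAX) = N'<Your MDX Query>';
DECLARE @CubeLocation SYSNAME = N'<Your Linked Server>';
DECLARE @SQL NVARCHAR(MAX);

IF LEN(@mdx) <= 4000
    -- Short query: OPENQUERY only pays the double execution on the first run
    SET @SQL = N'SELECT * FROM OPENQUERY(' + @CubeLocation + N', '''
             + REPLACE(@mdx, N'''', N'''''') + N''')';
ELSE
    -- Long query: EXEC ... AT has no 4000-character limit but always runs the MDX twice
    SET @SQL = N'EXEC (''' + REPLACE(@mdx, N'''', N'''''') + N''') AT ' + @CubeLocation;

EXEC sp_executesql @SQL;
```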

Now I’ll walk through my findings:

Query using openquery:

DECLARE @mdx NVARCHAR(4000) =
      'with member measures.x as 100 select {measures.x} on 0 from <Your Cube>'
DECLARE @CubeLocation SYSNAME = '<Your Linked Server>'
DECLARE @SQL NVARCHAR(4000) =
      'SELECT * FROM OPENQUERY(' + @CubeLocation + ', ''' + @mdx + ''')'
EXEC sp_executesql @SQL

Running the above query for the first time, I get the following:


Running the above query a second time, I get the following:


Conclusion: any new query is executed TWICE; subsequent executions run it once. However, if you change the query string, it is executed TWICE again: for example, adding a space before “with”, after “member”, or anywhere else in the query counts as a change.

Query using “exec @query At <linked server>”:

DECLARE @MDX NVARCHAR(4000) =
      'with member measures.x as 1 select {measures.x} on 0 from <Your Cube>'
DECLARE @CubeLocation SYSNAME = '<Your Linked Server>'
DECLARE @SQL AS NVARCHAR(4000) = 'Exec (''' + @MDX + ''') At ' + @CubeLocation
EXEC sp_executesql @SQL

Running the above query for the first time, I get the following:


Running the query a second time gives the same behaviour: it is executed TWICE.

Conclusion: regardless of how many times the same query runs, it is always executed TWICE.

For a tiny MDX query, you probably won’t notice the gap between the two executions, as each completes within milliseconds. But for a long-running MDX query, you will notice a non-overlapping gap between the execution durations. I’ve tested an MDX query that runs for about 53 seconds natively on SSAS. If I run it through openquery, you’ll see the below:


As you can see, the query is executed TWICE and the total duration is 54143ms + 44697ms = 1.647 minutes. The result set is about 3298 rows and 700+ columns, which took about 3 minutes to render in SQL Server Management Studio, and I can verify that even after SQL Profiler reported the “query end” event, SSMS was still busy rendering the result set in the data grid. I also verified that client tools which support direct SSAS connectivity, e.g. the Excel native PivotTable or the ADO.NET provider, execute the query ONCE only.

Overcome varchar(8000) or nvarchar(4000) limit in OpenQuery to SSAS

March 13, 2013 5 comments

OpenQuery limits the query string length to varchar(8000) or nvarchar(4000).

To overcome this limit you can use the below query:

DECLARE @mdx NVARCHAR(MAX)
SET @mdx = '<Your MDX longer than varchar(8000)>'
EXEC (@mdx) AT <Your Linked Server>;

To use dynamic SQL, you can use the template below:

	DECLARE @SQL NVARCHAR(MAX)
	DECLARE @ErrorMessage NVARCHAR(2048)
	BEGIN TRY
		SET @SQL = 'Exec (''' + @MDX + ''') At ' + @CubeLocation
		EXEC sp_executesql @SQL
	END TRY
	BEGIN CATCH
		SET @ErrorMessage = 'Query execution failed!'
		RAISERROR(@ErrorMessage, 16, 1) WITH NOWAIT
	END CATCH

However, although the above bypasses the varchar(8000) limit, it brings a side issue: if your MDX returns an empty set, the SQL Server connection is dropped. Even worse, this error cannot be captured by a TRY...CATCH block. Hence, if you need to implement this logic in a SQL stored procedure, you need to use the MDX Count function to avoid returning no result set: count the result set first, and if the count is 0, there is no need to run the full query. Here is the code sample for a query that returns the row count of your MDX result set:

WITH
 SET myset AS
 <Your SET Definition>
 MEMBER [Measures].[cnt] AS
 Count(myset)
SELECT
 [Measures].[cnt] ON COLUMNS
FROM <Your Cube>
 <Your Slicers>

You can then wrap the above query in dynamic SQL which uses OpenQuery and linked server.
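As a sketch, the count can be read into a T-SQL variable via sp_executesql’s OUTPUT parameter before deciding whether to run the full query (all bracketed names are placeholders, and the double-quoted column name relies on QUOTED_IDENTIFIER being ON):

```sql
DECLARE @cnt INT;
DECLARE @SQL NVARCHAR(MAX) =
    N'SELECT @cnt = CONVERT(INT, "[Measures].[cnt]")
      FROM OPENQUERY(<Your Linked Server>,
      ''WITH SET myset AS <Your SET Definition>
        MEMBER [Measures].[cnt] AS Count(myset)
        SELECT [Measures].[cnt] ON COLUMNS FROM <Your Cube>'')';

-- Run the short count query through OPENQUERY, capturing the single cell
EXEC sp_executesql @SQL, N'@cnt INT OUTPUT', @cnt = @cnt OUTPUT;

-- Only run the long MDX when there is something to return
IF @cnt > 0
    EXEC (N'<Your MDX longer than varchar(8000)>') AT <Your Linked Server>;
```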

Top N records with OpenQuery against SSAS data sources

March 13, 2013 Leave a comment

OpenQuery + linked server is commonly used if you want to query an SSAS cube through T-SQL. However, you cannot use “top N” to specify how many records you want to retrieve. Try SELECT top 1 1 AS id, * FROM OPENQUERY(<Linked Server>,<Your MDX Query>) yourself and you’ll find all records are returned, which means the top 1 is ignored.

However, if you try the following:

Select *
from (	Select top 1000 ROW_NUMBER() over(order by id) as RowNum, t.*
		from (	SELECT top 1 1 AS id, * 
				FROM OPENQUERY(<Linked Server>,<Your MDX Query>)) as t
      ) as ds

You’ll get top 1000 rows if the whole result set contains more.

Now, to achieve fake pagination (fake meaning it is not true pagination on the SSAS side), you can do something like:

DECLARE @start AS INT
		,@end AS INT
set @start = 1
set @end = 20

Select *
from (	Select top 1000
			ROW_NUMBER() over(order by id) as RowNum, t.*
		from (	SELECT top 1 1 AS id, * 
				FROM OPENQUERY(<Linked Server>,<Your MDX Query>)) as t
     ) as ds
where ds.RowNum between @start and @end
Categories: BI, SQL Analysis Services Tags: ,

Crescent takes the “slicers” to the next level

Today I watched the demo of Crescent (the video can be found here) and it is amazing.

I’ve already used slicers in Excel 2010 and love them very much. But now I feel that Crescent is going to take slicers to the next level. The video shows that you can build multiple data grids and charts and make them interact with each other, just like multiple charts in PowerPivot. However, the enhancement in Crescent is that even the legend and the data labels in the chart plot area are clickable. You can click one item in the chart legend and the data in all charts on the page is updated according to the selection. That is absolutely awesome! The other great feature is the “trace” of a data point: if you have a data point (e.g. a bubble) on the chart and place time onto the “play axis”, you can play an animation that shows how the data point moves over the time period.

Predixion Insight – Data Mining

January 22, 2011 1 comment

As I’m based in Sydney, I missed the webinar due to the time difference between the US and AU. But luckily, I received a replay link and watched the demo. Although I’ve played with Predixion software before, it was still exciting to watch the demo.

Basically, in addition to data mining functionality similar to the DM add-in for Excel, Predixion Insight is able to talk to PowerPivot. It can apply a trained mining model to the data set in PowerPivot and tag each record with the mining result. Then, using DAX to further format the dataset in PowerPivot, you can easily get a nicely analyzed predictive report. Even more, with PMML support, you can import SAS and SPSS models into Predixion Insight and apply them to the dataset in Excel/PowerPivot, as well as use the validation tools built into Predixion Insight to validate the model. Also, with Excel Services in SharePoint 2010, the final report can be uploaded and shared with everyone.

Before, I had some concerns about the data volume supported by Excel as well as Internet speed. Now I am much more confident after watching the demo. PowerPivot overcomes the Excel data volume limitation. Regarding transferring data across the Internet, the case given by Predixion Software is that one of their clients sent a PowerPivot workbook with 10 million records to the cloud server, and it only took around 4 minutes to get the mining results back from the remote server. I believe that, in most cases, this would satisfy most small to medium organisations.

Now the only concern I have at the moment is operational BI support. Imagine I have a model in the cloud used for identifying a customer’s category (e.g. good/bad). If a new customer is set up in the OLTP system, it would be fantastic if the system could query the model and tag the customer. In this case, Excel is not used at all; instead, a web service or APIs are required so that OLTP systems can interact with the cloud mining services.

FW: Days Since Last Event: TSQL vs. MDX vs. DAX

January 19, 2011 Leave a comment

There is a nice blog post discussing “Days Since Last Event”; you can find it here.

It compares the same logic implemented in TSQL, MDX and DAX. In my view, the TSQL is the easiest to understand, the MDX (the 2nd solution) is elegant, and the DAX one is the hardest to understand. (Tip: don’t read the DAX code in the text; read it from the screenshot, where the code is formatted better.)

Talking about Excel 2010 on SSAS 2008 R2 Named Set Creation

Categories: SQL Analysis Services