Archive for the ‘PowerPivot’ Category

Displaying Top N, Bottom N, and “All Others” – Dynamically

November 12, 2014

I just read Rob Collie's post Displaying Top N, Bottom N, and "All Others". It's a great post with lots of details.

I've come up with another approach that uses Excel OLAP sets to achieve a similar outcome, but dynamically. The final solution looks like this:

[Image: Dynamic TopN and AllOthers]

The base data looks like this:

[Image: base data]

Both tables are added to the model as linked tables.

I created the following calculated measures in the Power Pivot model:

SelectedN:=MAX([N])

Total Sales:=SUM(Sales[Sales])

Rank:=RANKX(All(Sales),[Total Sales])

Top N Total Sales:=CALCULATE([Total Sales],FILTER(Sales,[Rank]<=[SelectedN]))

All Other Total Sales:=CALCULATE([Total Sales],FILTER(Sales,[Rank]>[SelectedN]))
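
As a side note, the Top N amount could probably also be written with TOPN instead of filtering on the [Rank] measure. This is a sketch only, assuming the same Sales linked table and measures as above (I have not used it in the workbook):

Top N Total Sales (TOPN) :=
SUMX (
    TOPN ( [SelectedN], ALL ( Sales[CustomerID] ), [Total Sales] ),   // top N customers by sales
    [Total Sales]                                                     // context transition filters each row to one customer
)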

In addition to the DAX formulas, I created two Excel OLAP sets using MDX:

Set Name: TopN

Set Definition: TOPCOUNT([Sales].[CustomerID].Children, [Measures].[SelectedN], [Measures].[Sum of Sales])

Set Name: AllOthers

Set Definition: [Sales].[CustomerID].Children - [TopN]

* Both of the above sets have the checkbox "Recalculate set with every update" checked, so that they are re-evaluated whenever the slicer changes [SelectedN].

You can download the workbook (Excel 2013 version) at: https://onedrive.live.com/?cid=8BA997FE18F7FDAE&id=8BA997FE18F7FDAE%212373

Categories: BI, BISM Tabular, DAX, Excel, PowerPivot

FWD: "I Modified an Existing Table in Power Query and Now it Won't Refresh" – A Fix

Custom Rollup in BISM Tabular (PowerPivot)

As we know, BISM Tabular does not support custom rollup. However, I recently read the article here and found it is possible to achieve custom aggregation via DAX. It is not exactly the same as a custom rollup with unary operators in MDX, but it can at least perform a different calculation at each hierarchy level. Basically, you need nested SUMMARIZE statements to specify the table (calculated and returned by DAX) to iterate over; see the sketch below. The drawback is that you have to maintain the DAX.
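
As a minimal sketch of my own (not the article's code, and assuming a Sales[Amount] column plus a related 'DimDate' table with Year and Month columns), here is a measure that shows the plain monthly total at the month level but the average of those monthly totals at the year level:

Avg Monthly Sales :=
AVERAGEX (
    SUMMARIZE (
        Sales,
        'DimDate'[Year],
        'DimDate'[Month],
        "MonthlySales", SUM ( Sales[Amount] )   // month-level logic: plain sum per month
    ),
    [MonthlySales]                              // above month level: average the monthly sums
)

Deeper hierarchies would nest further SUMMARIZE calls in the same way, which is exactly the maintenance burden mentioned above.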

I wonder whether it is possible to add a few columns to the date dimension to specify the calculation logic for each level, for example 'DimDate'[YearCalculationUnaryOperator], 'DimDate'[QtrCalculationUnaryOperator], 'DimDate'[MonthCalculationUnaryOperator], and 'DimDate'[DayCalculationUnaryOperator], and then in the measure use DAX that looks like this:

IF (
    COUNTROWS ( VALUES ( 'DimDate'[Date] ) ) = 1,
    SWITCH (
        VALUES ( 'DimDate'[DayCalculationUnaryOperator] ),
        "+", <calculation expression for +>,
        "-", <calculation expression for ->,
        "/", <calculation expression for />,
        <default calculation expression>
    ),
    IF (
        COUNTROWS ( VALUES ( 'DimDate'[Month] ) ) = 1,
        SWITCH (
            VALUES ( 'DimDate'[MonthCalculationUnaryOperator] ),
            "+", <calculation expression for +>,
            "-", <calculation expression for ->,
            "/", <calculation expression for />,
            <default calculation expression>
        ),
        <default calculation expression for other hierarchy levels>
    )
)
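
To make the placeholders concrete, here is one hypothetical filled-in instance for the day level only (the Sales[Amount] column and the operator semantics are my assumptions, not from the article):

Sales With Day Operator :=
IF (
    COUNTROWS ( VALUES ( 'DimDate'[Date] ) ) = 1,          // exactly one day in context
    SWITCH (
        VALUES ( 'DimDate'[DayCalculationUnaryOperator] ),
        "+", SUM ( Sales[Amount] ),                        // day contributes positively
        "-", - SUM ( Sales[Amount] ),                      // day contributes negatively
        SUM ( Sales[Amount] )                              // default
    ),
    SUM ( Sales[Amount] )                                  // other levels: plain sum
)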

Well, time to dig into BISM.

Install PowerPivot (version 10.50.2500) after applying SQL 2008 R2 SP1

August 30, 2011

After installing SQL Server 2008 R2 SP1 on my PC, PowerPivot for Excel 2010 no longer worked: it crashed while opening files and was unable to import data.

Reinstalling the old version of PowerPivot does not solve the problem. You need to download the latest version of PowerPivot (version 10.50.2500.0) from the Microsoft website and install it.

I'm surprised that Microsoft didn't add any note about this in the SP1 release notes.

Categories: PowerPivot

The Great Precedence Project (Filter Rules for DAX) – Evaluation Context in PowerPivot

Here and here, PowerPivotPro.com published an excellent pair of posts about this DAX behaviour.

In my opinion, [Measure](ALL(Dim), VALUES(Dim[col])) is a short form of CALCULATE([Measure], ALL(Dim), VALUES(Dim[col])), and the more generic form would be CALCULATE([Measure], exp1, exp2, …, expN).

The fact is that to calculate [Measure], all expressions need to be satisfied, which means exp1 AND exp2 AND … AND expN. In addition, any evaluation context from slicers (as long as they are connected), report filters, and the rows/columns of the pivot table is considered as well, which means (exp1 AND exp2 AND … AND expN) AND (existing evaluation context).
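
To illustrate with concrete (hypothetical) names, assume a Product dimension related to the fact table behind a [Total Sales] measure. The classic ALL-plus-VALUES pattern clears every filter on Product and then reapplies only the Color filter from the existing evaluation context, so the effective filter is ALL(Product) AND VALUES(Product[Color]) AND whatever context the pivot supplies on other tables:

Pct of Color Sales :=
[Total Sales]
    / CALCULATE (
        [Total Sales],
        ALL ( Product ),             // exp1: remove all filters on Product
        VALUES ( Product[Color] )    // exp2: re-apply the colours visible in the current context
    )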

I believe this is the underlying rule that explains all of the above behaviour, but it is fantastic to see someone list actual examples. It is hard to visualise the theory in your head, and it is always good to have a quick sample to validate your thoughts. Thank you, Rob. I'm looking forward to seeing Part 3, about ALLEXCEPT().

Categories: PowerPivot

FW: Cleaning Azure Marketplace Data with PowerPivot and Predixion Insight

January 26, 2011

Jamie MacLennan has a great post talking about "Cleaning Azure Marketplace Data with PowerPivot and Predixion Insight".

Predixion Insight – Data Mining

January 22, 2011

As I'm based in Sydney, I missed the webinar due to the time difference between the US and Australia. Luckily, I received a replay link and watched the demo. Although I've played with the Predixion software before, it was still exciting to watch the demo.

Basically, in addition to data mining functionality similar to the DM add-in for Excel, Predixion Insight is able to talk to PowerPivot. It can apply a trained mining model to a data set in PowerPivot and tag each record with the mining result. Then, using DAX to further shape the dataset in PowerPivot, you can easily produce a nicely formatted predictive analytics report. Even better, with PMML support you can import SAS and SPSS models into Predixion Insight, apply them to the dataset in Excel/PowerPivot, and use the validation tools built into Predixion Insight to validate the models. Also, with Excel Services in SharePoint 2010, the final report can be uploaded and shared with everyone.

Previously I had some concerns about the data volume supported by Excel as well as Internet speed, but I am much more confident after watching the demo. PowerPivot overcomes Excel's data volume limitation. Regarding transferring data across the Internet, the case given by Predixion Software is that one of their clients sent a PowerPivot workbook with 10 million records to the cloud server, and it took only around four minutes to get the mining results back from the remote server. I believe this would satisfy most small to medium organisations.

The only concern I have at the moment is operational BI support. Imagine I have a model in the cloud used for identifying a customer category (e.g. good/bad). If a new customer is set up in the OLTP system, it would be fantastic if the system could query the model and tag the customer. In this case, Excel is not used at all; instead, a web service or API is required so that OLTP systems can interact with the cloud mining service.