.NET Tutorials, Forums, Interview Questions And Answers
How does SSAS generate SQL queries when processing dimensions and partitions?

Posted Date: August 28, 2010    Points: 0    Category: Sql Server
Hi all, how does SSAS 2008 generate the SQL queries it uses to read data from the source when processing dimensions and partitions? Is there any reading material that clearly explains how the SQL queries are constructed according to the properties of the DSV, dimensions, measures and partitions? Thanks in advance. Andrew Chen, interested in BI related technologies
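As a rough illustration (the exact statements depend on the DSV, the attribute bindings and the storage mode), during MOLAP processing SSAS typically issues one SELECT DISTINCT over the bound key/name columns per dimension attribute, and one plain SELECT of the granularity and measure columns per partition. The table and column names below are hypothetical:

```sql
-- Dimension processing: one DISTINCT query per attribute,
-- over that attribute's key and name columns (names are illustrative)
SELECT DISTINCT [dbo_DimCustomer].[CustomerKey], [dbo_DimCustomer].[CustomerName]
FROM [dbo].[DimCustomer] AS [dbo_DimCustomer];

-- Partition processing: the partition's source table or named query,
-- restricted to the bound granularity and measure columns
SELECT [dbo_FactSales].[CustomerKey], [dbo_FactSales].[OrderDateKey],
       [dbo_FactSales].[SalesAmount]
FROM [dbo].[FactSales] AS [dbo_FactSales];
```

SQL Server Profiler against the relational source during a Process operation shows the actual statements for a given cube.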


More Related Resource Links

SSAS cubes with shared dimensions

Hi, I'm working on a fairly complicated SSAS project at the moment:

  • SSAS provides information to 6 teams, which I have modelled as 6 separate cubes. Each team should not be able to access the other teams' cubes.
  • Each cube has 4-5 measure groups, with no cross-over with other teams.
  • Each cube has approx. 8 dimensions: 5 are conformed dimensions, common across all cubes, and 3 depend on the specific measure groups.
  • Each cube has a couple of perspectives set up to simplify the presentation.
  • The underlying data comes from a single data warehouse database.
  • It's possible that another team may shortly want to develop their own cube. The data would be completely separate, however they would need to use the same conformed dimensions as above.

I'm at the starting stage, so I have set up one cube along these lines and am about to work on a second. My question is: is this the best structure for the project, both in terms of working with it in VS (the data source view is already horribly messy, but I don't think it can be split up?) and in terms of avoiding processing the conformed dimensions multiple times? If not, what is the best approach to follow to avoid duplication? Using linked dimensions sounded like the right approach, until I read about how they were then co

Re-processing Dimensions with a ROLAP cube


Hi all.

I have set up a ROLAP cube because of the advantage that you do not need to re-process the cube: AS will always go to the underlying data source for the most up-to-date data. Unlike MOLAP, you do not need to re-process your cube to refresh the data.

But I find this is only true for my FACT data (i.e. for my measures). Say my dimension changes - I add the member "Australia" to the country dimension. This new data is NOT filtering down to AS automatically; I need to re-process my dimensions for the cube to be valid again.

So my question is this: if your dimensions change with each data warehouse load, what is the advantage of ROLAP? Is there a way for ROLAP to figure out that a dimension has changed and re-process it automatically?
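For reference, the dimension refresh itself can be automated by submitting a ProcessUpdate command as XMLA after each warehouse load; ProcessUpdate picks up new and changed members without fully reprocessing dependent objects. The database and dimension IDs below are placeholders:

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <DimensionID>Country</DimensionID>
    </Object>
    <Type>ProcessUpdate</Type>
  </Process>
</Batch>
```

The script can be run from a SQL Server Agent job step or an SSIS Analysis Services Execute DDL Task at the end of the load.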



New DSV named query, or copy and reuse an existing cube dimension (is the latter valid)? - SSAS 2008


I have a [DateAsAt] table in the DSV. This is linked to 3 fact tables to analyse and slice them from an as-at-date perspective (let's call them FactA, FactB and FactC). But I want to reuse this dimension (almost like an alias), and although I have tried this and it seems to work, I just want to follow some due diligence to make sure what I am doing is valid.

I have not created a new Named Calculation for the new date dimension that will be used in a transactional way (not an as-at way); instead I simply copied the [DateAsAt] dimension and pasted it back into the dimensions section of the cube (the default name it was given was [DateAsAt 1], but I changed it to [DateTransaction]).

Although the [DateTransaction] dimension's source in the DSV (i.e. [DateAsAt]) is linked to FactA, FactB and FactC, I set a new dimension relationship between [DateTransaction] and FactTransactionD in the Dimension Usage area of the cube.

Is this valid? Or MUST I create a separate source for [DateTransaction] in the DSV and set its relationships there?
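If you do end up creating a separate DSV source, a named query that simply re-selects the same underlying table is usually all that's needed - it gives [DateTransaction] its own DSV object that can carry its own relationships. Column names here are illustrative:

```sql
-- DSV named query [DateTransaction]: same rows as [DateAsAt],
-- but a distinct DSV object with its own relationships
SELECT [DateKey], [CalendarDate], [CalendarQuarter], [CalendarYear]
FROM [dbo].[DateAsAt];
```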

Dimension's incremental process generates index processing on all partitions in the cube


Hey eb

Has anyone noticed this behavior?

An incremental process (ProcessUpdate) of a dimension generates reprocessing of indexes on all the cube's partitions, even when no change has occurred in that dimension.

I am using SSAS 2005 sp2.

This is disturbing because my cube holds some 700 daily partitions, so processing indexes on all of them - on an hourly basis - is very time-consuming.

Also, it flushes the cube's cache!


I did notice that setting the hierarchy's MemberKeysUnique property to True, plus setting MemberNamesUnique to True on the topmost attribute in the hierarchy, solves this problem.

Is this then a must, to prevent recalculating partition indexes on every ProcessUpdate of a dimension?



How to ensure aggregations and partitions are effectively being used by MDX queries?


We have added partitions and aggregations on all the big measure groups, but we want to ensure they are really being used by the MDX query engine. I know we can use Profiler for this, but I want to know how to interpret those trace details.


Processing SSAS 2008 cube using third party scheduler

What are the different options for processing SSAS 2008 cubes automatically using a third-party scheduler? What is the most recommended approach?
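Whatever the scheduler, the usual pattern is to have it submit an XMLA processing script to the server - for example via the ascmd.exe sample utility, a SQL Server Agent "SQL Server Analysis Services Command" job step, or an SSIS Analysis Services Execute DDL Task. A minimal script (the database and cube IDs are placeholders):

```xml
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDb</DatabaseID>
    <CubeID>Sales</CubeID>
  </Object>
  <Type>ProcessFull</Type>
</Process>
```

A third-party scheduler then only needs to invoke whichever client tool submits this file and check its exit code.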

How to generate pivot tables from SSAS cube programmatically?


I am able to produce about 20 pivot tables from the SSAS cube one by one. Is it possible to produce these tables programmatically, with header and footer information?


Please note that the end users like to use Excel.





SSAS - MDX - need to generate a count for the life of a record



I'm struggling trying to create a calculation which will show what I want.  My requirement is to show a count of 1 while a record is active, and 0 when it is not.  For example, in my data warehouse I have an enrollment record active from 1/1/2010 through 12/31/2010 (12 months).  If the user runs an Excel cube report for 2010 by months, I want to show a 1 for all 12 months (if the report were run for quarters, then a 1 for each quarter etc.).  If the user runs a report from 2008 through 2010, the record would show 0 for 2008 and 2009.


Now I could haul off and create a daily record for that enrollment, and have 365 daily records in my DB.  I really don't want to do that. Conceptually here's what I want to accomplish:


IIF([Coverage Start Date].[Date].CurrentMember.FirstChild>=[Coverage Start Date].[Calendar Date] AND [Coverage Start Date].[Date].CurrentMember.FirstChild <= [Coverage End Date].[Calendar Date],1,0)


So if the current member in the example above is Q3 2010, then [Coverage Start Date].[Date].CurrentMember.FirstChild would be '7/1/2010' (first day of Q3).  Since 7/1 is >= 1/1 ([Coverage Start Date].[Calendar Date]) and <= 12/31 ([Coverage End Date].[Calendar Date]), this should evaluate to 1.


It doesn't. I ge
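For what it's worth, one common pitfall here is that CurrentMember.FirstChild returns a member, not a date, so it can't be compared directly to a date column. One way to express the intended "active during the period" test is to compare the first and last dates of the current report period against the coverage interval. All names below are assumptions: a [Report Date] role-playing dimension whose Date-level members carry the date in MemberValue, and the coverage range exposed as date-valued measures:

```mdx
-- Sketch only: dimension, hierarchy and measure names are assumptions.
-- A period is "active" when it overlaps the coverage interval:
--   period_first_date <= coverage_end  AND  period_last_date >= coverage_start
WITH MEMBER [Measures].[Is Active] AS
  IIF(
      Head(Descendants([Report Date].[Calendar].CurrentMember,
                       [Report Date].[Calendar].[Date]), 1)
        .Item(0).MemberValue <= [Measures].[Coverage End Date]
    AND
      Tail(Descendants([Report Date].[Calendar].CurrentMember,
                       [Report Date].[Calendar].[Date]), 1)
        .Item(0).MemberValue >= [Measures].[Coverage Start Date],
    1, 0)
SELECT [Measures].[Is Active] ON COLUMNS,
       [Report Date].[Calendar].[Quarter].Members ON ROWS
FROM [Enrollment]
```

This works at any level of the report hierarchy (month, quarter, year) because Descendants always resolves down to the Date level before comparing.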

influence processing order of partitions of measure group



  • a Measure Group which needs full processing
  • SQL Server 2008 R2
  • the question is regarding performance

The measure group is partitioned by quarter - therefore I have some 30 partitions at the moment. I issue a ProcessFull of the measure group and let Analysis Services decide the parallelism. Typically it ends up processing between 8 and 12 partitions in parallel (on the 12-core machine).

The point is, my partitions grow over time - so older partitions are smaller than newer ones. Analysis Services usually ends up starting the newest partition last - which is the biggest one - and therefore the processing phases out on a single process, not utilising the server efficiently, because the newest partition needs the most time while everything else has already finished.

Is there a way to

  • let the server decide the degree of parallelism (like now)
  • but specify the order (let's say start with the newest partitions and end with the oldest)?

I have not found anything in the documentation:

  • maybe something about ProcessingPriority
  • or just the order of the Process statements in the parallel processing batch - if I were to specify them
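One workaround is to build the parallel batch yourself and list the Process commands largest-first; inside a Parallel element the server still chooses the degree of parallelism, but in practice it tends to pick up the commands in the order they appear. All IDs below are placeholders:

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Listing the newest/biggest partitions first lets them start earliest,
       so the long-running ones are not left running alone at the end -->
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>MyOlapDb</DatabaseID>
        <CubeID>Sales</CubeID>
        <MeasureGroupID>FactSales</MeasureGroupID>
        <PartitionID>FactSales_2010Q4</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
    <Process>
      <Object>
        <DatabaseID>MyOlapDb</DatabaseID>
        <CubeID>Sales</CubeID>
        <MeasureGroupID>FactSales</MeasureGroupID>
        <PartitionID>FactSales_2010Q3</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
    <!-- ... remaining partitions, oldest last -->
  </Parallel>
</Batch>
```

Such a batch could be generated from the partition list (e.g. via AMO) so it stays correct as new quarterly partitions are added.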




After adding a new attribute to a dimension and saving, SSAS unprocessed partitions. Can anyone help



After adding a new attribute to a dimension and saving, SSAS marked the partitions as unprocessed. Can anyone help me understand why this happened, and point me to some reading material with the details? (can't find any...)


How to show the All level of SSAS dimensions in Excel?

The All level of my dimensions doesn't show up in the PivotTable Field List. I have reports where I want to compare one member of a dimension to the total of the dimension (and not the total of the members shown), but I can't select the All level. Is there any way to do this?

Deadlock for a cube processing - SSAS 2005



I have a scheduled job with a step that calls cube processing. This job runs daily. Sometimes the processing stops with a deadlock. I have seen that when the processing exceeds 60-70 minutes, a deadlock occurs. Is it possible that any settings of the Analysis Server instance affect the deadlock occurrence?

Many thanks for your help.

SSAS dimensional processing memory usage


I have a dimension with 100 million rows.  This is a basic account number dimension that is needed so end users can drill down to an account number.   There are a few other attributes in this dimension, like Customer Name.   Basically, these are the two attributes that cannot be normalized any further in our star schema.   These are typically not attributes that you would ever pivot on, but they are attributes you would need to drill down to.

We have 48 GB of RAM on our server.  The problem is that when this dimension processes, it throws memory alerts.   The dimension processes very quickly, there are no processing errors, and the drilldown works perfectly.  The problem is that the processing task uses 99+% of available memory and pages these memory-usage alerts to operational people in the middle of the night.      Are there any other ways of processing this dimension that would use less memory?  Or, given the facts in this thread, is there any other way of architecting my cube so these memory alerts go away?    How much memory is recommended on a server for an SSAS solution that has a dimension of this size?   My cubes are large - 150+ dimensions in some of them, with 50 measures and hundreds of millions of rows.

My solution is working

SSAS 2005 - Issue with reference dimensions


Hi guys,


Has anyone seen such an error:


Errors in the metadata manager. The '<attribute ID goes here>' intermediate granularity attribute of the '<intermediate dimension name goes here>' measure group dimension does not have an attribute hierarchy enabled.


I'm trying to create a sample cube to recreate a problem that I have with a real SSAS database. I'm trying to set up a reference dimension through an intermediate dimension using a non-key attribute (that's what I have to check), and that is so far working with no problems. However, when I try to deploy I get the error above. I tried deleting the existing OLAP db, restarting the SSAS service, and doing a full deploy of the project - still the same result.


I checked - AttributeHierarchyEnabled for this dimension's attribute is set to True, and neither the cube nor the dimension source XML files contained a reference to AttributeHierarchyEnabled, so this sh

SSAS cubes processing slow



Can I use the Process Data option in the SSIS Analysis Services Processing Task for scheduling SSAS cube processing?

Right now I'm using the Process Full option, and it is taking 1 hour to process 44 lakh (4.4 million) records.

If I use the Process Data option, will it reduce the processing time?

Or please tell me other ways to reduce the processing time when scheduling.

Are there any disadvantages to using Process Data instead of Process Full?

Please guide me.



Warning messages while processing SSAS 2005 cubes


I am processing SSAS 2005 cubes from SSIS 2008, so I have developed an SSIS 2008 package which first processes all the dimensions and then the cubes, using the Analysis Services Processing Task. The cubes get processed successfully from the SSIS package, but I also get a warning message for some of the cubes. The warning message is as below:

"Warning: 0x411C0002 at Process CubeA, Analysis Services Execute DDL Task: Server: Operation completed with 1 problems logged."

I am not sure whether I should just ignore the above warning message or look at where the problem is being logged - I have no idea where that is. Can anyone tell me what I need to look for with such a warning message? Has anyone seen it before?


Thanks a lot.

SSAS Deployment and Scheduled Processing Question

So, I originally asked a question about how to deploy a cube to production, and I'm finally at the point where it is deployed, and everything seems to work. However, my SSIS packages are now feeding data into my relational DB (the source of my cube) on a daily basis. How does the cube get refreshed with the new data? I assume I'm not going to deploy every night, and I'm not making any structural changes. So how do I schedule the cube to be refreshed with new data?
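Assuming no structural changes, one common pattern is a SQL Server Agent job scheduled right after the nightly SSIS load, with a "SQL Server Analysis Services Command" step that submits an XMLA processing script such as the following (the database ID is a placeholder):

```xml
<!-- Reprocesses the whole SSAS database from the refreshed relational source -->
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDb</DatabaseID>
  </Object>
  <Type>ProcessFull</Type>
</Process>
```

Alternatively, the same command can be issued as a final Analysis Services Processing Task (or Execute DDL Task) step inside the SSIS package itself, so the cube refresh always follows the load in one job.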