
Daily complete cube rebuild of four dimensions and the fact table, including remapping of all surrogate keys

Posted Date: September 08, 2010    Category: Sql Server
Hi SSIS engineers: Forgive me if this is a multi-forum question. Our primary activity in the next week is to automate the cube processing in SSIS. I led the team in creating complete processing flows for Full and Add processing, in the order Dimension, Measure Group, Partition, Cube, Database, and these work.

The problem occurs on a complete refresh of the ERP database, which forced manual effort inside SSAS that I plan to find a way to automate in SSIS. I performed a complete refresh of our cube from the ERP source from a time perspective, and we are automating this process in SSIS. In SSAS, I had to manually delete the four dimensions from the UDM view via Solution Explorer. Because the complete refresh regenerated (and increased) the surrogate keys in the dimensions while the names stayed the same, I could not just drop the partition and reprocess the dimensions: in effect, the new fact rows had to be mapped to the new keys, but SSAS held on to the old keys even with Full processing of the dimensions first and then the cube. Until I deleted the dimension tables from Solution Explorer and the UDM and later re-added the dimensions with the new surrogate keys (the full refresh produced added, updated, and deleted dimension attribute changes) via the Add Dimension wizard, the cube kept the old surrogate keys and failed in measure group, fact, database, and partition processing.
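A note on the underlying cause, since it drives the automation: the relational rebuild handed out new surrogate keys for the same dimension members. The sketch below is illustrative only, with hypothetical DimCustomer / staging.Customer names; it shows why a truncate-and-reload changes the keys, and one hedged alternative (update-else-insert on the business key) that keeps them stable so a ProcessFull is sufficient and no drop/re-add of the UDM dimensions is needed.

-- Truncate-and-reload of an identity-keyed dimension: the same members come back with
-- brand-new surrogate keys, so anything that cached the old keys (the SSAS dimension,
-- already-loaded fact rows) no longer lines up.
TRUNCATE TABLE dbo.DimCustomer;                 -- also resets the IDENTITY seed
INSERT INTO dbo.DimCustomer (CustomerCode, CustomerName)
SELECT CustomerCode, CustomerName
FROM staging.Customer;

-- An update-else-insert keyed on the business key keeps existing surrogate keys stable
-- across refreshes; new members are simply appended with new keys.
UPDATE d
SET    d.CustomerName = s.CustomerName
FROM   dbo.DimCustomer AS d
JOIN   staging.Customer AS s ON s.CustomerCode = d.CustomerCode;

INSERT INTO dbo.DimCustomer (CustomerCode, CustomerName)
SELECT s.CustomerCode, s.CustomerName
FROM   staging.Customer AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.DimCustomer AS d WHERE d.CustomerCode = s.CustomerCode);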


More Related Resource Links

Copy table including triggers and foreign keys

Hi All,

I need some help. I deleted some of the products from the product table, taking it from 32 million down to 22 million products, and then added 1.2 million new products to those 22 million. Now I want that 32 million row table back.

I have that table on another server, and I copied it over to the server I need it on, but I could only copy the data; I couldn't bring across the indexes, statistics, constraints, and so on. Now I need to get the 32 million records back into the product table.

Any suggestions? I have never done this before, have already spent plenty of time on it, and keep running into trouble. Thanks in advance.
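One way to get the full table back, sketched below with hypothetical object names: script the complete table definition (indexes, constraints, triggers) from the source server with SSMS's Generate Scripts wizard, create it on the target, and then pull the rows across a linked server, keeping identity values intact. For 32 million rows it is worth loading in batches (or using BCP/SSIS) to keep the transaction log manageable.

-- Assumes the empty table has already been created here from the scripted definition,
-- and that a linked server named [SourceServer] (hypothetical) points at the server
-- that still holds all 32 million rows.
SET IDENTITY_INSERT dbo.Product ON;        -- only needed if Product has an identity column

INSERT INTO dbo.Product (ProductID, ProductName, ListPrice)    -- hypothetical column list
SELECT p.ProductID, p.ProductName, p.ListPrice
FROM [SourceServer].SourceDB.dbo.Product AS p;

SET IDENTITY_INSERT dbo.Product OFF;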

Aggregations based on dimensions dragged, for a calculated measure from different fact tables, using MDX



I am new to MDX and I am working in the education domain. I have a requirement to calculate student counts based on different fact views.

In particular, for the state, district, and school levels we created three different fact views by applying conditions to the original fact table.

Now I am using only one calculated measure, named "Student Count", in my cube. My requirement is this: when I drag the district dimension, the student count based on the district fact view should appear in the measure; when I drag both districts and schools together, the student count from the school fact view should appear; and if I drag neither of the two dimensions, the state-level count should be aggregated by default.

For this, I thought I would write MDX based on which dimension is being dragged, so I started with a SCOPE statement to pick a different fact view for each case:

Scope ([Districts].[Districts], [Measures].[Student Count]);

    this = ([Districts].[Districts], [Measures].[District Student Count]);

End Scope;

and likewise for the school dimension, and so on. But I am getting the same value in both cases.

Could anyone help me find a solution for this? Is there any other way, other than creating a new cube for each level?

Thanks in advance. Any suggestions are appreciated.


Lakshman A N, S

Surrogate vs Natural Primary Keys - Data Modeling Mistake 2 of 10

In case you're new to the series: I've compiled a list of ten data modeling mistakes that I see over and over, and I'm tackling them one by one. I'll be speaking about these topics at the upcoming IASA conference in October, so I'm hoping to generate some discussion, at least to confirm that I have well-founded arguments.

The last post in this series, Referential Integrity, was probably less controversial than this one. After all, who can argue against enforcing referential integrity? But as obvious as surrogate keys may be to some, there is a good deal of diversity of opinion, as evidenced by the fact that people continue not to use them.

Fact table in DSV vs. partitions pointing to a different table

I am seeing an issue in my cube with a partition that is based on a different table than the fact table in the DSV. I have 8 partitions, all from different physical tables. In the DSV I used one of those 8 partition tables as the "source" of the DSV so I could model the relationships between the fact and the dimensions. One of the 8 partitions loads over 1 million rows into the cube, but when I use the browser to show the count in that particular partition, it shows exactly the same number of records as the table that was used in the DSV. The strange thing is that all the other partitions work fine except this one. I have deleted the partition and added it back multiple times and can't get it to work right. Has anyone seen this problem before?

I have run into this a couple of times; one way of fixing it was to recreate the entire project as a new project, copy all the objects from the old project, and rebuild. I can't seem to figure out another way of fixing this.

Craig

SSAS Cube & Pivot table

Hello, I am connecting to my SSAS cube through an Excel pivot table. I have the following questions:

1) I have a filter for Company. However, instead of selecting multiple companies from a drop-down list, I would like to specify a range, e.g. Company between 100 and 200. We have the "Filter expression and operator" option in SSAS, but I don't see it in Excel.

2) I have 90 users who will be using my cube via pivot tables. How do I have the data in Excel refresh automatically, so that when a user comes in in the morning and opens the pivot table the data is already refreshed, instead of refreshing manually and waiting for a long-running query?

Thanks in advance for all your help and suggestions! KJ

Could not complete cursor operation because the table schema changed

Microsoft SQL Server 2000 - 8.00.2039. I got this error: "Could not complete cursor operation because the table schema changed after the cursor was declared." SQLCode: 16943, SQLState: HY000. Is this a known issue? I suspect the application logic may cause this error. Please advise. Thanks a lot!

Building Fact and Dim table

How should I build the fact and dimension tables for the following requirement? Fact records (approx. 800k) have to be analyzed every day with respect to age groups. Every record in the fact table will have a DOBSID, and the age/age group should be calculated every day based on GETDATE().
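One possible shape for this, sketched below with hypothetical table and column names: keep the date of birth on the dimension row keyed by DOBSID and derive the age band in a view (or a named query in the DSV) at query time, so the grouping is always current as of GETDATE() without rewriting the 800k fact rows every day.

-- A minimal sketch; dbo.FactRecords, dbo.DimDOB, and the band boundaries are assumptions.
CREATE VIEW dbo.vwFactByAgeGroup
AS
SELECT  f.FactID,
        f.DOBSID,
        CASE
            WHEN DATEDIFF(YEAR, d.DateOfBirth, GETDATE()) < 18 THEN '0-17'
            WHEN DATEDIFF(YEAR, d.DateOfBirth, GETDATE()) < 40 THEN '18-39'
            WHEN DATEDIFF(YEAR, d.DateOfBirth, GETDATE()) < 65 THEN '40-64'
            ELSE '65+'
        END AS AgeGroup        -- DATEDIFF(YEAR, ...) counts calendar-year boundaries,
                               -- so refine the expression if exact birthdays matter
FROM dbo.FactRecords AS f
JOIN dbo.DimDOB      AS d ON d.DOBSID = f.DOBSID;

An alternative is a daily job that rewrites an age-group key on the fact rows, but the view keeps the fact table static and pushes the (cheap) calculation to query or processing time.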

Cube refresh when data is updated in the table

Hi All, I have a cube which is MOLAP, and I have created an SSIS package to process the cube. Now I want to start that job as soon as the data has been uploaded into the relational database. Please guide me on this, as I am new to SSAS. Regards, Shraddha
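One common pattern, sketched below with a hypothetical job name: schedule the cube-processing SSIS package as a SQL Server Agent job and start that job from the last step of the load that populates the relational database, so processing begins as soon as the upload finishes.

-- Run as the final step of the relational load. 'Process MOLAP Cube' is a hypothetical
-- SQL Server Agent job name wrapping the SSIS package that processes the cube.
EXEC msdb.dbo.sp_start_job @job_name = N'Process MOLAP Cube';

sp_start_job returns as soon as the job is started, so the cube processing runs asynchronously; check the job history (or have the job send a notification) to confirm it succeeded.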

How to change connection string of a pivot table pointing to SSAS 2005 cube using excel 2003?

Hi All, I am not sure if I should have posted this query to the Excel 2003 forum, but I am posting it here as it applies to SSAS 2005 as well.

Let me give the background before I describe the actual problem. We have users on the ABC domain, and the SSAS server is also on the ABC domain. Users on this domain can access the Excel pivots by connecting to the cube to browse the data. They leave the user ID and password fields blank when they set up the connection string, and it works fine thanks to Windows authentication, which uses the credentials of the logged-in user. Say I have two users, A and B, who log in to the ABC domain with their own Windows IDs. When user A creates an Excel file containing a cube pivot and sends the file to user B, user B can refresh and modify the same Excel file (he can select new measures for the pivot, new hierarchies in filters, and so on).

Now say I have another user, C. He has Excel 2003 installed on his PC and cannot migrate to Excel 2007. He is on a different domain, XYZ, but has a valid Windows user ID on domain ABC. Domains ABC and XYZ cannot be set up with a trust relationship. When user A sends the same Excel file to user C, and user C opens the file and tries to refresh it, or tries to modify the pivot by selecting or deselecting any elements, he gets the error prompt "An error was encountered in the transport layer." and "Errors in the

Getting counts by 2nd Date Dimension Attribute with Snapshot Style Fact Table

I have an MDX question I am finding hard to solve. I have a snapshot fact table containing a snapshot of the records in the source system for each batch date. All records in the fact table are assigned the batch date via the batch date key. There are many records for each day, and each batch date holds an entire copy of the source records, so the grain of the fact table is one record per source record per batch date. These fact rows also carry another date: when the record was entered. It differs from the batch date in that the batch date is the day the batch was processed, while the entered date is when the record was entered. If a record was entered many days earlier, its batch date will be today but its entered date will be several days ago. Therefore each day holds a copy of all the records entered through the previous batch date plus all the records added on today's batch date.

Fact table:
FactSnapshotKey (surrogate, for easier administration)
BatchDateKey (link to the batch date dimension, a date dimension; listed first so it is used for semi-additive measures)
EnteredDateKey (link to the entered date dimension, a date dimension)
Facts Count (measure for the fact table; default measure from the Analysis Services cube)
2 Dim

Update an accumulating snapshot fact table

This is my first time implementing an accumulating snapshot fact table and I need some guidance. An accumulating snapshot fact table shows the status of an item at any given moment; it is useful for tracking items with a defined lifetime, for example the status of order lines. Every time there is a new piece of information about a particular purchase, we update the fact table record; we only insert a new record into the fact table when there is a new purchase requisition. What I really need to know is how best to handle the updates. This feels very similar to managing SCD Type 1 changes in dimension processing! Anyone able to advise? Thanks in advance.

Here is a perfect example we can use: http://blog.oaktonsoftware.com/2007/03/accumulating-snapshot-use-accumulating.html

Figure 1, below, shows an accumulating snapshot for the mortgage application process. The grain of this fact table is an application; each application is represented by a single row in the fact table. The major milestones are represented by multiple foreign key references to the Day dimension: the date of submission, the date approved by a mortgage officer, the date all supporting documentation was complete, the date approved by an underwriter, and the date of closing.
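One way to handle the updates, sketched below with hypothetical table and column names (MERGE needs SQL Server 2008 or later; on 2005 the same logic is an UPDATE followed by an INSERT of the new requisitions): match on the requisition's natural key, fill in milestone date keys as they become known, and insert a row only for a brand-new requisition.

-- Hypothetical names throughout; a minimal accumulating-snapshot load step.
MERGE dbo.FactPurchaseSnapshot AS tgt
USING staging.PurchaseMilestones AS src
    ON tgt.RequisitionID = src.RequisitionID
WHEN MATCHED THEN
    UPDATE SET
        tgt.ApprovedDateKey = COALESCE(src.ApprovedDateKey, tgt.ApprovedDateKey),
        tgt.ReceivedDateKey = COALESCE(src.ReceivedDateKey, tgt.ReceivedDateKey),
        tgt.InvoicedDateKey = COALESCE(src.InvoicedDateKey, tgt.InvoicedDateKey)
WHEN NOT MATCHED BY TARGET THEN
    INSERT (RequisitionID, RequestedDateKey, ApprovedDateKey, ReceivedDateKey, InvoicedDateKey)
    VALUES (src.RequisitionID, src.RequestedDateKey, src.ApprovedDateKey, src.ReceivedDateKey, src.InvoicedDateKey);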

SSRS2005: Is it safe to manually remove instances from the Keys table in the report server database?

I have a problem with redundant Reporting Services instances still hanging around in the initialization pane of the RS configuration tool, and I can't get rid of them. The problem arose when we got new web farms: before I had removed all references to the old ones from the initialization screen, the old web farms were switched off, removed from the network, and dismantled. So when I try to de-initialize them I get an error stating that Reporting Services cannot connect to them, and when I run reports a fair few try to run on the old web farms, which no longer exist, and disappear into the void. There is a table in the report server database called Keys which contains details of these instances, and I'm wondering whether it is safe to remove these rows via SQL commands without breaking Reporting Services on the other machines.

As a side note: all of this stems from the fact that when you uninstall Reporting Services, not all of the files and references are removed. So, as a general word of warning, if you uninstall RS 2005, make sure everything is gone before re-installing or throwing away old machines. This includes folders in IIS and the old instances mentioned above.
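For what it's worth, the supported route for this is the rskeymgmt utility (rskeymgmt -l lists the registered instances with their installation IDs, rskeymgmt -r <installation ID> removes one), which maintains that same Keys table for you. If you do go at the table directly, back up the ReportServer database and the encryption key first; the sketch below only inspects what is registered (column names as in SSRS 2005, so verify them against your install).

-- Inspect the registered instances before deleting anything; take a backup first.
USE ReportServer;
SELECT * FROM dbo.Keys;   -- one row per registered report server instance
-- DELETE FROM dbo.Keys WHERE MachineName = N'OLDWEBFARM01';  -- hypothetical old server name; run only after verifying the row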

How to join tables based on multiple keys?

I have a table INC with two columns/fields, YR and CL, set as a composite primary key (by selecting both columns and choosing the primary key option). How do I set up a foreign key so that the other table, INC_DTL, is restricted to valid combinations of INC's two fields? Thanks in advance.
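For the restriction to work, INC_DTL needs both columns of INC's composite key, since a foreign key must reference the whole primary key. A minimal sketch, assuming the columns in INC_DTL are also named YR and CL:

-- Composite foreign key: every (YR, CL) pair in INC_DTL must exist as a row in INC.
ALTER TABLE dbo.INC_DTL
    ADD CONSTRAINT FK_INC_DTL_INC
    FOREIGN KEY (YR, CL) REFERENCES dbo.INC (YR, CL);

Joins between the two tables should then also use both columns: ON INC.YR = INC_DTL.YR AND INC.CL = INC_DTL.CL.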

Need to Automate in SSIS Cube Drop and Recreate of Dimensions: Had to Do It Manually in Solution Explorer

Hi SSAS engineers: This question is in the context of a complete refresh of the SSAS cube. I'm OK on the SSIS approach thanks to TMcD and past successes (also thanks to him). The question is: why did I have to manually drop/delete the dimensions in BIDS SSAS and then re-add them with the Cube Wizard in order to pull in dimensions with the same names but new surrogate key relationships and more surrogate keys in total? We need to automate the drop and re-create of dimensions in SSIS.

I originally had to perform this step manually in Solution Explorer in the SSAS UDM because, although the names are the same, the structures and the data change for the dimensions on the refresh (and the fact table subsequently grows). Herein lies the problem. We got a KeyNotFound error during cube processing because the old keys were being kept. I had run Full processing on the dimensions, but since the dimensions have the same names (no dropping of old names and adding of new ones), the old surrogate keys remained. The fact table had the old row count as well, because the surrogate key pipeline that looks up the surrogate keys for the new dimensions only had the old dimensions, even though I had truncated the fact table. Yes, I had already rebuilt the dimensions in SQL Server, and the row counts were the new refresh row counts. The dimension counts were right according to the refresh numbers after

Dimension order in an Excel pivot table

The list of dimensions in an Excel pivot table appears in alphabetical order. Is it possible to reorder the dimensions?

Excel pivot table report filter selections from a cube

There are two filters added to the report filter area, Date (8/31/2010) and Product (Product A), from a cube I have. Everything works just fine. But once the cube is refreshed each day to include the new data, for some reason only the Date selection (8/31/2010) stays, while the Product selection reverts to All Products. Both dimensions are fully processed, and the surrogate keys for existing records do not change. I checked the MDX captured in Profiler: in the WHERE clause the date selection is passed from the pivot table, but the product selection is lost and All Products is passed instead. Any thoughts? TIA.

Avoiding a SELECT distinct query generated by SSAS when using dimension derived from fact table

Hi, I am using a dimension derived from a fact table, and the fact table's primary key is the dimension key. The issue is that there are a large number of rows and many attributes. SSAS issues a DISTINCT query, and it takes a long time: without the DISTINCT, the query takes only 3 minutes for 4 million rows; with the DISTINCT it takes 20 minutes. Because the fact primary key is the dimension key, there is no need for a DISTINCT. I know there is an option on the dimension ("By Table" processing) to avoid this, but unfortunately I then breach the 4 GB limit for strings. Any suggestions for optimization? Thanks, Sambath
