We currently have a large cube with quite a lot of data: the last 2 complete years plus the current year. The current partitioning strategy is one partition per year for the first 2 years, one for the current year excluding the last 21 days, and one for the last 21 days. This last partition accumulates the current data and is processed daily. Once a week we do a full cube rebuild, and the last partition resets to just 21 days again.
We don't have aggregations, as we either use Measure Expressions (Exchange Rates) or have Distinct Count measure groups.
This is working OK, but it is obviously not optimal. I was going to start by splitting these partitions down smaller (say, by quarter) and, for the distinct count measure groups, using the SQLCat recommended optimizations. Then I got wondering: would it be better to split the non-DC measure groups by the DC optimizations too? My thinking is that because we do not have aggregations, we would probably be putting more strain on the storage engine due to repeated fe…
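For context, the SQLCat distinct count guidance referenced above amounts to slicing each DC measure group into non-overlapping, contiguous ranges of the distinct count key, so each partition's values never overlap another's. A minimal sketch of computing such boundaries (the function name and the equal-height strategy are assumptions for illustration, not part of the guidance's wording):

```python
def dc_range_boundaries(keys, n_partitions):
    """Split the distinct-count key space into non-overlapping,
    roughly equal-height (min, max) ranges, one per partition."""
    ks = sorted(set(keys))
    per = -(-len(ks) // n_partitions)  # ceiling division
    return [(chunk[0], chunk[-1])
            for chunk in (ks[i:i + per] for i in range(0, len(ks), per))]

# Example: 100 distinct customer keys split across 4 partitions
print(dc_range_boundaries(range(1, 101), 4))
```

Each (min, max) pair would become the WHERE clause of one partition's source query, keeping every distinct key confined to a single partition.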