.NET Tutorials, Forums, Interview Questions And Answers

Cube Performance Question

Posted By: Rok    Posted Date: May 22, 2011
 

We have a dedicated server (16 GB RAM, 16 cores) that hosts our Analysis Services instance (2008); the database engine is not installed on this server. We have five Analysis Services databases, ranging from small to big. The small cubes have 30-50 million rows, whereas one of the big cubes has 700 million rows.

I'm having performance issues only with the big one. Everything is in place to optimize the cube (aggregations, partitioning by month, attribute relationships).

Every time I restart Analysis Services, most of the queries run decently, but after a few days they start getting slower and slower.

I checked Perfmon and nothing is pegged, neither the CPU nor memory. I cleared the cache on every database, but nothing helps until I restart the service. Can anyone tell me what I should be doing instead of restarting the Analysis Services service?
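Clearing the cache can be scripted as an XMLA ClearCache command executed from SSMS; a minimal sketch, where the database and cube IDs are placeholders for your own:

```xml
<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyBigDatabase</DatabaseID>
    <CubeID>MyBigCube</CubeID>
  </Object>
</ClearCache>
```

If queries still degrade over days even with a cleared cache, it may also be worth checking the instance memory limits (LowMemoryLimit / TotalMemoryLimit in msmdsrv.ini), since SSAS 2008 trims its caches aggressively under memory pressure.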

 

thanks,

Rok

 

 

 






More Related Resource Links

Performance problems with a calculated member in a cube

  
Hi, I'm having a performance issue with a calculated member (running total) which is created on one dimension in the cube. The cube has three dimensions: Product Details (Product Model, Product Type), Time (Year & Month), and Qty Type (In Qty & Out Qty). See below for details.

DimTime: Year (values 2000 to 2020), Month
DimProd: Product No, Product Type, Product Group
DimQty: Type (In Qty, Out Qty)
Fact table: Product No, Year, Month, Qty, Type of Qty (In Qty or Out Qty)

My MDX for the calculated member is:

CREATE MEMBER [Qty Type].[Qty Type].[All].[Total] AS
    SUM(NULL:[TimeDim].[Hierarchy].CurrentMember,
        ([Qty Type].[Qty Type].[In Qty], Measures.[Qty])
      - ([Qty Type].[Qty Type].[Out Qty], Measures.[Qty]))

The formula is: Total = previous period total qty + In Qty - Out Qty.

This works perfectly when I browse at a higher level (Qty Type on rows and Time on columns), but when I browse the cube adding the Product Details dimension (Product Group or Product Type) to drill through the In Qty and Out Qty, the query keeps running and even after one hour it is not finished. Can anyone help me solve this or propose a new structure to resolve this issue?
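One commonly suggested alternative, sketched here with hypothetical measure names, is to move the running total off the Qty Type dimension and onto the Measures dimension, which tends to keep the drill-down by product attributes tractable:

```mdx
CREATE MEMBER CURRENTCUBE.Measures.[Net Qty] AS
    ([Qty Type].[Qty Type].[In Qty], Measures.[Qty])
  - ([Qty Type].[Qty Type].[Out Qty], Measures.[Qty]);

CREATE MEMBER CURRENTCUBE.Measures.[Running Total] AS
    SUM(NULL:[TimeDim].[Hierarchy].CurrentMember, Measures.[Net Qty]);
```

Whether this helps depends on the attribute relationships on the Time dimension; the open-ended NULL: range forces a scan from the very first period, so bounding the range (for example, starting at the first year that actually has data) may also be worth trying.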

MDX query using linked server - performance question

  
Hi, I'm seeing some strange performance behavior. When I run an MDX query in Management Studio, it runs for 10 minutes (it's mainly calculated members on a large cube). When I run the same query via a linked server and OPENQUERY, it runs for 20 minutes. Does SQL Server for some reason parse or run the query twice? The server is SQL 2008 UP1 on 64-bit, and "Allow InProcess" is checked (it's the default in SQL 2008). Thanks in advance, Yoav
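For reference, a linked-server MDX call typically looks like the sketch below (the linked server name, cube, and members are placeholders). Comparing a Profiler trace on the SSAS instance for both paths would show whether the MDX is actually submitted twice:

```sql
SELECT *
FROM OPENQUERY(SSAS_LINKED, '
    SELECT [Measures].[Qty] ON COLUMNS,
           [DimProd].[Product Type].Members ON ROWS
    FROM [MyCube]');
```

Note that OPENQUERY flattens the cellset into a rowset, and the extra marshalling through the MSOLAP provider can itself add noticeable overhead on large results, independent of query parsing.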

Improving performance of UPDATE CUBE by executing preceding CREATE CACHE for the same cell / subcube

  
Hi all, recently I came across a suggestion from a Microsoft Advisory consultant regarding optimizing the data write-back pattern for SSAS cubes. It was recommended to issue a CREATE CACHE command followed by an UPDATE CUBE for the same cell/subcube definition:

CREATE CACHE FOR [MYCUBE] AS ("cross join of all the dimension members defining the subcube to be updated")
UPDATE CUBE [MYCUBE] SET ("Cell") = 100, ("Another Cell") = 200, ... and so on for every cell.

This seems quite weird, but based on our benchmark the entire process (CREATE CACHE plus UPDATE CUBE) takes less time than UPDATE CUBE alone. Folks, can someone advise on the internals of the process? Should this approach be suggested for an enterprise-level planning application? Regards, -h
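For what it's worth, a concrete version of the recommended pattern might look like this (the cube, dimension, and cell references are illustrative only, not from the original post):

```mdx
CREATE CACHE FOR [MYCUBE] AS
    ([Account].[Account].[Expenses], [Time].[Month].[2011].Children);

UPDATE CUBE [MYCUBE]
    SET ([Account].[Account].[Expenses], [Time].[Month].[2011].[Jan]) = 100,
        ([Account].[Account].[Expenses], [Time].[Month].[2011].[Feb]) = 200;
```

A plausible explanation for the speed-up is that CREATE CACHE warms the whole subcube in one storage-engine pass, so each subsequent cell write resolves its current value from cache instead of issuing its own subcube query; that is an inference, though, not documented behavior.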

Pipeline Data Cube Design Question

  

 

The analysis team wants to analyze sales pipeline data. Our CRM system (the source system) sends the data warehouse team a transaction as the product moves through the different stages.

To mention a few: Creation, CallMade, LeadEngaged, CreditCheck, PapersReceived, Closed. There are more than 100 tracking statuses like the ones mentioned above.

 

Sample transactions fact table:

ID LeadID

cube template question

  

Hi guys,

I'm new to SSAS (coming from an SAP background), and I was wondering if SSAS has any template cubes that one can use instead of having to create one from scratch each time. If so, can anyone point me in the right direction on how to view the template cubes?


Calculated field sorting now working, but have a performance question

  

 Thanks to mitja.GTI.

My sorting is working, although I had to take the table that has my calculated column, make a view (so I could filter), and then bind the GridView to the view.

My question is this:

1.) In order to get my calculated field I had to load my data into a table (as suggested).

2.) Add a column to the table and perform the calculations.

3.) Make a DataView from the tables, because I could not get Gridview.DataSource = ds.Tables("loads").Select("Deadhead <= " & sngRadius) to work.

4.) The largest of the three tables my query is based on contains about 7,000 records. The way I have the logic and programming now, every time a new truck location is put in TextBox1 and the "find loads" button (Button1) is clicked, it calculates the deadhead miles for all 7,000 records before filtering. I am filtering based on the deadhead miles.

Does anyone have any recommendation on how to speed this up a bit?

Also, in order to get paging to work I had to rebind/filter the data again, which means looping through all 7,000 records each time paging is invoked. This seems terribly inefficient, as I am already forced to use Access instead of SQL Server.

Wou
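Since the deadhead calculation currently runs for all 7,000 rows before filtering, one option (a sketch; the table, column, and parameter names are made up, and Access/Jet SQL syntax applies) is to pre-filter with a cheap bounding box in the query itself, so the expensive per-row distance math only runs on nearby candidates:

```sql
SELECT LoadID, OriginLat, OriginLon
FROM Loads
WHERE OriginLat BETWEEN [pLat] - [pDeg] AND [pLat] + [pDeg]
  AND OriginLon BETWEEN [pLon] - [pDeg] AND [pLon] + [pDeg];
```

The exact deadhead miles can then be computed and filtered in code on the much smaller result set, and indexes on OriginLat/OriginLon let Access avoid scanning the full table on every click and on every paging rebind.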

partitioned view performance tuning question

  

There are 20 tables with the same schema, and a view:

create view Everything
as 
select 1 as area, pk_col1, pk_col2, col1, col2, ... from T1
union all 
select 2 as area, pk_col1, pk_col2, col1, col2, ... from T2
union all 
select 3 as area, pk_col1, pk_col2, col1, col2, ... from T3
union all 
select 4 as area, pk_col1, pk_col2, col1, col2, ... from T4
union all 
select 5 as area, pk_col1, pk_col2, col1, col2, ... from T5
......

However, the foll
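If the goal is a classic SQL Server partitioned view, one standard pattern (sketched here with assumed names) is to store the partitioning column in each member table with a trusted CHECK constraint, so the optimizer can eliminate branches that cannot match:

```sql
ALTER TABLE T1 ADD area int NOT NULL DEFAULT 1;
ALTER TABLE T1 ADD CONSTRAINT ck_T1_area CHECK (area = 1);
-- ...repeat for T2..T20 with area = 2..20...

-- With the constraints in place, this should touch only T3's branch:
SELECT * FROM Everything WHERE area = 3;
```

With the literal `1 as area` form used in the view above, branch elimination instead relies on startup filters in the runtime plan, which is worth verifying in the actual execution plan.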

Query Performance Question

  

Hi,

I have a view; it gets its data from almost 20 joins, a couple of CASE statements, and some subqueries, and it is not materialized.

I am loading this view's data into a table with SELECT INTO, and it takes 80 minutes to load.

There are almost 100 million rows in that view.

Here is the view; it would be great if I could take the 80 minutes down to 20-30 minutes.

select
  a,b,c,d,e,f,g,h,i,j,k
 , l =
  case when cd.a is not null then (select name from bbb where dd = caz.dd)
    when cd.a is not null then (select name from cn where d_id = caud.d_id)
    when cd.a is not null then (select name from cn where di_id = cap.d_id)
    else null
  end
 ,m= case when cd.a is not null then (select name from bbb where dd = caz.dd)
    when cd.a is not null then (select name from cn where d_id = caud.d_id)
    when cd.a is not null then (select name from cn where di_id = cap.d_id)
    else null
  end
from
  cd inner join  cpcs on
  cd.c = cpcs.c
 left outer ........
 left outer ........
 
 and cdid = (select id from c where class = 'bbbbe' and
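With the names anonymized it is hard to be precise, but as a general pattern each correlated `(select name from ...)` subquery inside a CASE can often be rewritten as a LEFT JOIN, letting the optimizer hash- or merge-join once instead of probing per output row. A sketch using the aliases above:

```sql
select
  a, b, c, d, e, f, g, h, i, j, k,
  l = case when cd.a is not null then b1.name else null end
from cd
  inner join cpcs on cd.c = cpcs.c
  left join bbb b1 on b1.dd = caz.dd
  -- one left join per lookup table, instead of a subquery per row
```

Since the CASE logic for l and m is identical, computing it once (for example via CROSS APPLY) would also halve the lookup work. For a 100-million-row SELECT INTO, it is additionally worth checking that the target database is not autogrowing in small increments and that the recovery model allows minimal logging.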

Cube Browsing in Performance Point 2010 as in SSAS and BIDS cube browser

  

Hi All,

Can we get the same cube-browsing experience in SharePoint 2010, using PerformancePoint or Excel Services, as we get in the SSAS and BIDS cube browsers?

I am most interested in drag-and-drop features for dimensions and measures.

More or less the same browsing features were available in the BIP viewer web part, but it is no longer available.

So, is there any way we can drag and drop dimension members and measures in PerformancePoint dashboards?

Thanks




Question on Creating Aggregations based on Time data in a Cube

  

Hi

I have a fact table that lists projects and the site visit dates related to those projects. For each site visit date there are a number of associated KPIs.

However, to make life a little more complicated, these KPIs are prorated by year, so for each site visit there may be several rows in the fact table, one for each year, i.e. 2005, 2006, etc.

The issue I am having is that I have been given a requirement to view each KPI at the site-visit level. My first idea was to create an aggregate table for each site visit with totals for each year, i.e. 2005 sales total, 2006 sales total, etc. The problem is that the totals can be for any year between 2005 and 2014, and clearly creating all these hard-coded totals means a huge number of KPIs with built-in time logic at the site-visit level; for each KPI I would need a separate measure for each possible year.

Any other alternatives? The underlying fact table will of course show the end user a list of site visits and the KPIs relating to each year, but the end user wants to be able to view the data in a repivoted format, so that each site visit returns one row with the prorated year values appearing across as columns. For the time being I cannot see that this is possible without having a huge agg
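On the repivot requirement: rather than hard-coding one measure per year, the usual cube approach is to keep a single prorated measure and let the Year attribute of the Time dimension supply the columns at query time; a sketch with made-up dimension, cube, and measure names:

```mdx
SELECT [DimTime].[Year].[Year].Members ON COLUMNS,
       [Site Visit].[Visit Id].Members ON ROWS
FROM [ProjectCube]
WHERE Measures.[Prorated Sales]
```

Each site visit then comes back as one row with the years spread across, and new years (2014 and beyond) appear automatically without any new measures being defined.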

Training Question - Connection Refused Error in Browsing A Cube In SSAS

  

I am an instructor for a data warehouse design and implementation course at a local college, and I have run into a problem that I have not been able to fix. Hopefully someone out there can help.

This is the first time the course is being taught in a new lab where we are running SQLExpress 2008 Developer Edition, with a local server instance installed on each workstation, using Windows authentication; this configuration was done by our local LAN/network manager. We have created a source database and a warehouse database in Management Studio, using a star-schema design. We have successfully created SSIS packages to extract from the source tables and populate the warehouse tables, and we have been able to create a DSV in SSAS.

When we attempt to process and browse the cube, it fails with a "connection refused" error from the server. Configuration Manager tells me the SSAS server process is running, and the data connection works when tested. We get the same error regardless of which impersonation option is selected.

What have I missed? Can the configuration be adjusted to get around this? I have spent several hours trying everything I could think of with no success; it is getting frustrating.

Please advise.  Thanks in advance.


ASP.NET Best Practices for High Performance Applications

  
This article lists the techniques that you can use to maximize the performance of your ASP.NET applications. It discusses common issues, design guidelines, and coding tips for building optimal and robust solutions.

Top .NET Performance Problems and how to avoid them

  
Every time I work with one of our .NET customers to help them manage their application performance, I come across the same problems seen with other clients before: lots of ADO.NET queries, many hidden exceptions in core or third-party .NET libraries, slow third-party components, and inefficient custom code.

High-Performance .NET Application Development & Architecture

  
It has always been a goal of project architects to plan an effective strategy from the ground up for a new application. All relevant factors are taken into consideration, from the application's design and layout to a functional website infrastructure. Pre-.NET strategies and design guidelines that are still effective now were developed with Microsoft's DNA (Distributed interNet Application) platform. This model successfully served the purpose of architecting N-tier (any number of levels) applications. In its basic sense, as in most robust, distributed applications, you'll architect three main layers or tiers: presentation, business rules, and data access.

How to improve the performance of ASP.NET MVC web applications

  
It is an in-depth analysis of a Digg-like site and how it went from serving (on a test machine) 6 req/sec to 390 req/sec.

The biggest gain, from 74 req/sec to 390 req/sec, came when he introduced data caching; another 25 req/sec were gained when he introduced compiled LINQ queries.

High Performance ASP.NET - Speeding Up ASP.NET Pages

  
I have been writing a series of blog posts, which I have named High Performance ASP.NET Websites Made Easy! There is no rhyme or reason to the order of these posts, and they can certainly be read in any order:

Developing High Performance and Scalable ASP.NET Websites
Avoid Chatty Interfaces Between the Tiers in Your ASP.NET Web Application
ASP.NET Page Profiling - Page Tracing - High Performance and Scalable ASP.NET Websites Made Easy
Web Applications: N-Tier vs. N-Layer - Benefits and Trade-Offs
Query Analyzer - Sql Server Database Indexes and Execution Plans