Slow Performance with a single table

Posted Date: October 19, 2010    Category: Sql Server
 

Hi,

We have a SQL Server 2005 instance with a 400 GB SAP database in our test environment (the production machine is DB2 on iSeries). We have a very strange problem with just a single table. The table has about 56 million rows, with 11 GB of data and 25 GB spent on indexes. We have similar tables in the database (even bigger than this one) that perform correctly.

Bulk deletes and inserts on this single table are extremely slow. For instance, deleting 25 million rows can take many hours (more than 20). CHECKDB and other maintenance plans have already been run without any issues.
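For reference, very large deletes are often broken into smaller batches so that the transaction log and lock escalation stay manageable. A minimal sketch of that pattern for SQL Server 2005 follows; the table name and filter are placeholders, not the actual SAP table:

DECLARE @BatchSize int;
SET @BatchSize = 10000;

WHILE 1 = 1
BEGIN
    -- delete a limited slice per iteration; placeholder table and filter
    DELETE TOP (@BatchSize) FROM dbo.BigTable
    WHERE LoadDate < '20100101';

    -- stop once nothing is left to delete
    IF @@ROWCOUNT = 0 BREAK;
END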

Any ideas?

 






More Related Resource Links

Under the Table: How Data Access Code Affects Database Performance

  

In this article, the author delves into some commonly used ways of writing data access code and looks at the effect they can have on performance.

Bob Beauchemin

MSDN Magazine August 2009


Very Slow Performance on 64 bit SQL System

  
Hello, we have 64-bit Windows Server 2008 with 64-bit SQL Server 2008. The server works well, and with a quad-core CPU and 8 GB of RAM the general speed is fine. But when we run a SQL script that works with XML (we build an XML packet for replication), it is very slow: the script needs 40-60 minutes to finish. The same database and the same script on 32-bit Windows Server 2008 with 32-bit SQL Server 2008 on the same hardware need 22 seconds. We have all updates installed. Does anyone have an idea? Thanks.

Slow performance when searching in "any part of field" of a SQL Server database using a Microsoft Access front end

  
We have just migrated our back end database from Access to MS SQL Server 2008 R2. We have noticed better performance on our searches - unless we select to search in "any part of field" in Access (with Access' built-in search function - we use a mix of Access 2003 and 2007). This takes nearly 20 seconds to find the result, whereas before the migration it was taking 5-7 seconds (compared to the instant results we now experience when searching for whole fields). The main fields we search are not large - usually just two words.

Obviously it is better to search for a whole field or the start of a field, but this is not always possible. The contractor that assisted us in this project has told us that this "is just the way SQL works with Access". Is this really true? I find it hard to believe two MS products would have such a big issue between them. I presumed using SQL Server with an Access front end would be a common setup, which is why I thought this must be a problem with our setup. Is there any way to fix this speed issue?
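For reference, an Access search in "any part of field" is typically sent to SQL Server as a LIKE with a leading wildcard, which cannot use an index seek and falls back to scanning. A minimal illustration (table and column names are hypothetical):

-- can seek an index on LastName: the pattern has no leading wildcard
SELECT CustomerID
FROM dbo.Customers
WHERE LastName LIKE 'Smi%';

-- a leading wildcard forces every row's value to be examined
SELECT CustomerID
FROM dbo.Customers
WHERE LastName LIKE '%mit%';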

Very slow performance with a query involving JOINS

  
I am experiencing extremely slow performance on a particular SQL query using SQL Server Compact. This same query completes in less than one second on my regular SQL Server. With SQL Compact, I am actually not even sure if the query ever completes. As I type this I am testing it from SQL Server Management Studio, and it's up to 17 minutes and 40 seconds still executing so far.

Since I am not experiencing this on most of my other queries, I have little doubt the slowness is due to the volume of data in these tables, coupled with the fact that I am doing the joins. I made sure that foreign key constraints are in place for the columns that the joins are based on. From what I've read, this also ensures that appropriate indexes are created, although I am starting to wonder about that.

This is the query:

SELECT TOP (200) t.BusinessEntityID, t.BusinessName, t.IsTowTruck, t.IsHospital, t.TowTruckLastUsed, t.BusinessLocked,
    cn.Address1, cn.City, cn.State, cn.PostalCode, cnp.PhoneNumber as TopPhoneNo,
    GetDate() as LastUpdatedDate, GetDate() as CreatedDate
FROM tblBusinessEntity t
LEFT JOIN cnBusinessEntityLocation cn ON cn.cnID in
    (SELECT TOP (1) cnID FROM cnBusinessEntityLocation
     where cnBusinessEntityLocation.Active = 1
     AND cnBusinessEntityLocation.BusinessEntityID = t.BusinessEntityID
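One common first check on SQL Server Compact is whether the columns used by the correlated subquery are actually indexed, since the foreign key constraint by itself may not produce an index that covers this predicate. A hedged sketch only, with a made-up index name:

CREATE INDEX IX_cnBEL_BusinessEntityID_Active
ON cnBusinessEntityLocation (BusinessEntityID, Active, cnID);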

MDX query performance is slow for complex logic implementation in Calculated Members.

  
We have implemented a calculated member that involves multiple calculated members called in a nested way, e.g. Cal1 calls Cal2, Cal2 calls Cal3, and Cal3 calls Cal4. When browsing Cal1 through Excel, it takes around 30 minutes to get a result from a partition with a row count of around 5 million. Following is the information on the scenario:

We have partitioning in the cube; one partition contains around 4-6 million records.
Processors on the query server: 4 quad-core.
RAM on the query server: 64 GB.

Calculated members with simple calculation logic return results in no time. The calculated member in question calls 6 intermediate calculated members in a nested way, and with a small data set of 5-6 k rows it runs fine.

Any help will be much appreciated.

Regards, Sandeep

Slow Replication Performance

  
I have a fairly large table and I have updated 1.2 million rows on the primary. Replication performance is very slow. There is a text column on this table. I am using Profiler to see the performance on the target end; it shows CPU time of about 200 ms and duration of about 210 ms. The number of locks acquired on this table is very high. There is not much else going on at the target end. I cloned this table and performed an update on one row, and it took just 12 ms. I am not sure what is going on. Any pointers to improve performance will help!

How to calculate SQL Server performance of a query based upon table schema, table size, and available hardware

  
Hi, what is the best way to calculate (without actual access to a SQL Server) the processing speed of a query (a self inner join) based upon table schema, table size, and hardware (CPU, RAM, drives)? Thanks, Jeff in Seattle

CAST in delete query-Performance is so slow

  
Hi all, I have the join below in a delete statement, but the query takes very long to run; the performance is really bad because of the CAST conversion. Is there a better way, especially performance-wise, to speed up this join? A.PHONE is of nvarchar data type and DNC.PHONE_NUMBER is of bigint data type.

A.PHONE = CAST(DNC.PHONE_NUMBER as varchar(15))

Thank you.
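One technique sometimes used for this kind of mismatched-type join is to materialize the converted value once as a persisted computed column and index it, so the conversion is not repeated for every row comparison. A sketch only, assuming DNC is a regular table that can be altered; the column and index names are illustrative:

-- store the phone number once in the same type as A.PHONE (nvarchar)
ALTER TABLE DNC ADD PHONE_NUMBER_TEXT AS CAST(PHONE_NUMBER AS nvarchar(15)) PERSISTED;
CREATE INDEX IX_DNC_PHONE_NUMBER_TEXT ON DNC (PHONE_NUMBER_TEXT);

-- the delete can then join on A.PHONE = DNC.PHONE_NUMBER_TEXT without a per-row CAST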

Performance Drop when switching from a #temp table to a @temp table variable

  

I am trying to convert a stored procedure to a table-valued function, and the performance has taken a HUGE hit; I was wondering if there is anything that can be done about it. Since a table-valued function cannot use #temp tables, it must be converted to a @temp table variable.

Here are some steps I have already taken...

The original stored proc starts off by populating a #temp table via "Select x Into #temp ..."

Leaving it as a stored proc for now, I explicitly created the #temp table and did an "Insert Into ... Select From" to more closely model how it must work when using a @temp table variable. There was no discernible performance difference.

Still leaving it as a stored proc, I then swapped out the #temp table for the @temp table variable and now, all of a sudden, the performance drops from sub-second to over a minute!

The temp table only has one field, defined as an int, and its values are distinct, so I tried making the field the primary key to see if that would help; it did not.

The temp table is created by scanning a table with around 11,000 rows and the temp table itself has about 4400 rows in it (if it makes a difference to anyone).
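For what it is worth, the usual explanation for this kind of drop is that a @table variable carries no statistics, so the optimizer assumes it holds very few rows. While the code is still a stored proc, one workaround that is often tried is adding OPTION (RECOMPILE) to the statements that join the table variable, so the plan is compiled when the actual row count is known. A minimal sketch with placeholder table and column names:

DECLARE @Ids TABLE (Id int PRIMARY KEY);

-- placeholder source; stands in for the 11,000-row table being scanned
INSERT INTO @Ids (Id)
SELECT DISTINCT SomeId FROM dbo.SourceTable;

SELECT s.*
FROM dbo.SomeOtherTable s
JOIN @Ids i ON i.Id = s.SomeId
OPTION (RECOMPILE);   -- recompile with the real row count of @Ids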

Does anyone have any suggestions (or hope) for me?

Thanks,
Jim


Slow performance of WCF service calling singleton (which calls COM+!)

  

Scenario:
WCF Service call routing to COM+ application.
netTCPBinding, throttling set pretty high.

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode=ConcurrencyMode.Multiple)] 

The service has two methods: one simply returns the string that was input, and the other calls the COM+ component and, FOR TESTING, returns the string "success" for every call. Therefore, both methods return a simple string.

Client starts 20 threads, each calling the WCF Service twice. (NOTE: A "first call" is made to prime the singleton service and also the COM+ component)

Using log4net and a Stopwatch to log the timings of the client

Report performance slow

  

The query below takes about 30 seconds to execute in Management Studio. I looked at the estimated execution plan and everything is a clustered index scan. The fact_dailycost table has 384,962 rows and all the dimension tables have fewer than 1,000 rows.

Please let me know how I should start tuning this query. I am creating a view, and in the SSRS report I have a Contract Number parameter that is applied on top of this view. For the report to display the results, it takes approximately 2 1/2 minutes.

SELECT *
FROM dbo.Dim_Employee
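As a first tuning step, clustered index scans on the fact table are often addressed by indexing the column the report parameter filters on. A hedged sketch only; the contract number column name is assumed, not taken from the post:

CREATE NONCLUSTERED INDEX IX_fact_dailycost_ContractNumber
ON dbo.fact_dailycost (ContractNumber);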

SQL insert very slow on a large table

  

Hi,

We are using SQL Server 2005 x86 Enterprise Edition. We have one table which has 13 crore (130 million) records. When we insert a record into that table, it takes around 30 seconds. Can you please help us optimize it?

The table currently has 1 primary key (clustered index) and 4 non-clustered indexes.
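A common first check when single-row inserts on a heavily indexed table are this slow is index fragmentation and page splits. A hedged sketch for SQL Server 2005; the database and table names are placeholders:

SELECT ips.index_id, ips.avg_fragmentation_in_percent, ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID('YourDatabase'),            -- placeholder database name
         OBJECT_ID('dbo.YourBigTable'),    -- placeholder table name
         NULL, NULL, 'LIMITED') AS ips;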

Please help.

Thanks,

news4naeem


Slow performance with SQL Server 2005 executing parameterized queries with LIKE and aggregate comparisons

  

We have a slow-performing query and, after some analysis, have narrowed the cause down to using parameterized queries in conjunction with LIKE and aggregate comparisons in the WHERE clause. If we use a non-parameterized query (but keep the WHERE clause the same), the query performs much faster and, in fact, executes a different plan. Using OPTIMIZE FOR UNKNOWN is not an option for us as we are using SQL Server 2005.

Obviously, we'd like to use parameterized queries to prevent SQL injection, but the slow performance is unacceptable. We need the LIKE to support wildcard scenarios. If we need to, we'll use dynamic SQL (scrubbed as much as we can) in lieu of parameterized queries. We're wondering, however: is there another option that would give us the performance without sacrificing safety? I can provide a sample database, query, plans, etc. if necessary. I'm assuming SQL Server generates a less efficient plan with the parameterized query since it can't make any assumptions about the parameter values.
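For SQL Server 2005, one hedged alternative to dynamic SQL is a statement-level recompile hint, which lets the optimizer build the plan with the actual parameter value at execution time (at the cost of compiling on every run). A minimal sketch; the table, column, and parameter names are made up:

DECLARE @Pattern nvarchar(100);
SET @Pattern = N'%smith%';

SELECT OrderID
FROM dbo.Orders
WHERE CustomerName LIKE @Pattern
OPTION (RECOMPILE);   -- available in SQL Server 2005, unlike OPTIMIZE FOR UNKNOWN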

Mike Bouck


Which table design is good for performance?

  

hi All,

I have a situation where I need to create a table with a datetime column for an OLAP database (mostly SELECT statements), with the following conditions:

1. Only the date needs to be stored (no time portion).

2. Only SQL Server 2005 or earlier can be used.

3. The table must hold more than 10 million records.

----------------------------------------------------------------------------

use tempdb
go
drop table T1;
drop table T2;
go
-- T1 stores the date as separate day / month / year columns
create table T1(col1 int, colD tinyint, colM tinyint, colY smallint);
-- T2 stores the date in a single datetime column
create table T2(col1 int, colDate datetime);
go
insert in
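Since SQL Server 2005 has no date-only type, one common approach is to keep a single datetime column (as in T2 above) and strip the time portion on insert, so range filters stay simple and a single index covers the column. A minimal sketch reusing T2 from the script:

-- DATEDIFF counts whole days since day 0; DATEADD adds them back, leaving midnight
insert into T2 (col1, colDate)
values (1, DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0));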

Viewing videos in SharePoint 2010 - will it slow down performance for the end users?

  

I understand how to upload files to the media Web Part and create an asset library; however, I'm trying to figure out where these files are stored and what the performance impact on users will be. I read the digital asset management library information on TechNet and couldn't find the exact answer I'm looking for.

So, here's my question: if I want to upload 10 videos to the asset library, each around 50 MB or less in size, and we want 200-plus users to access them, will this affect performance when those users hit the site? We want to display the videos without having to buy a media server. The main questions that will be asked are: who will host the content, and where does it reside? My guess is that it will be hosted in SharePoint 2010.

I'm trying to build a new intranet portal site for my company that will have a video playing on the home page of the intranet site, with links on the left and right sides of the page and the video Web Part in the center. Will using the BLOB cache and bit rate throttling be sufficient for this? I think it will be; I just want to make sure before we roll this out. Any help will be greatly appreciated. Thanks.

 

 

