The indexing service is currently beating up our SQL Server pretty badly...and we have a very robust box. I've identified the root problem...just not sure how to address it. The MSSCrawlURL table in the SSP database currently has 27,964,673
rows and is 30 GB in size. Every 30 minutes, when our indexer runs an incremental update, this entire table gets scanned (yes, scanned...all of it) into memory, which purges absolutely everything else out of the buffer cache.
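For anyone who wants to verify the same thing on their own box, something along these lines should show the row count / size and how often the crawl tables are getting scanned. This is just a sketch using sp_spaceused and the standard index-usage DMV; run it against your SSP search database and adjust the names to your environment:

USE [YourSSPSearchDB];  -- placeholder, substitute your SSP search database name

-- Row count and reserved/data size for the big crawl table
EXEC sp_spaceused 'dbo.MSSCrawlURL';

-- Seek vs. scan counts per index since the last SQL Server restart;
-- a high user_scans count on MSSCrawlURL is what points at full-table scans
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks,
       s.user_scans,
       s.last_user_scan
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND OBJECT_NAME(s.object_id) IN ('MSSCrawlURL', 'MSSCrawlQueue', 'MSSBatchHistory');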
MSSCrawlURL isn't the only offender. MSSCrawlQueue and MSSBatchHistory are pretty darn big too. From the reading I've done on the Internet, the only SSP databases I've seen with this kind of record count all had problems. So, my questions are:
1) Is there a way to calculate what a reasonable record count should be for these tables? Maybe there's an underlying problem and 28 million rows is just ridiculous.
2) Is there something we should be doing to maintain the SSP database? Delete/purge old data, etc.?
3) Have there been any updates (service packs, hotfixes, etc.) that change the table structures, stored procedures, or ad hoc SQL so that only a small subset of the table needs to be processed (instead of always scanning the ENTIRE TABLE...EVERY TIME)? I've pasted a query below that I think shows which statements are doing the scans.
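In case it helps anyone dig into #3, here's a rough diagnostic sketch (just my guess, using the plan-cache DMVs) for finding which cached statements are doing the heavy reads against MSSCrawlURL:

SELECT TOP (20)
       qs.execution_count,
       qs.total_logical_reads,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       -- pull out just the offending statement from the cached batch/proc text
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%MSSCrawlURL%'
ORDER BY qs.total_logical_reads DESC;

That should at least show whether it's one stored procedure or ad hoc SQL from the incremental crawl that's responsible for the scans.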
You guys get the idea...
Thanks in advance.