.NET Tutorials, Forums, Interview Questions And Answers

Cannot set Host Distribution Rules with two Crawl databases, crawl component cannot be dismounted

Posted By:      Posted Date: September 02, 2010    Points: 0   Category: SharePoint
When I try to set Host Distribution Rules I get the following error: "Redistribution status: Failed - Crawl component GUID-crawl-5 on SERVERNAME cannot be dismounted. Check that the server is available."

Farm topology:
- 2 front-end servers
- 2 query servers with a partitioned index
- 2 index servers with 2 crawl components and 2 crawl databases
- 2 application servers

The 2 crawl components then stay in the status "Initializing"; when I retry (the only option), I get the same error again. I tried the following steps:
- Delete the crawl component that cannot be dismounted --> stops with an error, the server cannot be contacted
- Move the crawl component to crawl database 1 --> error
- Reboot the crawl server
- Take the crawl server out of the farm and rejoin it, then delete the crawl component --> this worked, and I could rebuild the search topology
- Then try to set the Host Distribution Rules again --> the same error as in the beginning... grr

Any ideas?


More Related Resource Links

Crawl Component Resilience Requires Sharepoint Server Search Administration Component?

I have a single crawl database and two crawl components pointing to that crawl database. One of the crawl components also runs the SharePoint Server Search Administration component. When I shut down the server running the Search Administration component, crawling stops.

I removed the crawl component from the server running the Search Administration component and reindexed, and this worked, so the additional crawl component works fine. I checked that an incremental crawl would run in 5 minutes, downed the Search Administration component server, and uploaded some new content. Crawling did not run. Can anyone confirm that this is expected behaviour?

I am now trying to transfer the SharePoint Server Search Administration component to the other server as part of disaster recovery preparation, and this is failing as per another recently opened problem.

SharePoint search server 2010 crawl rules

My client wants to create a number of scopes by crawling specific subsites of a CMS 2.0 site. The CMS site is crawled as a website and security is ignored (i.e. results are not security trimmed).

As an example, they want to create a scope called "Audit". This scope will use a content source which crawls all content starting at http://server/services/audit and http://server/wssservices/audit. The first is the CMS 2.0 site; the second is a WSS 3.0 site that contains documents for the CMS site.

I set up the content source with start addresses http://server/services/audit and http://server/wssservices/audit, with the crawl settings set to "Only crawl within the server of each start address". Additionally, I have created a rule with the path http://server/services/audit* and set the configuration to "Include all items in this path", with "Crawl SharePoint content as http pages" also selected. I have created a rule with the path http://server/wssservices/audit* with the same configuration settings, except that "Crawl SharePoint content as http pages" is not selected. I have also performed a full crawl after creating the content source and crawl rules.

What I would expect to happen is that only results from http://server/services/audit or documents linked from http://server/wssservices/audit would show i

Crawl Rules


I'm having difficulty understanding the crawl rules. In what type of situations would you add an include rule (unless you were including a part of a website before excluding the rest)? Does it even make sense to have include rules after exclude rules at all?

Let's say I want to crawl Google.com and have set it up as a content source. Is there any point in adding http://www.google.com/ as an include crawl rule?
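For what it's worth, my understanding is that crawl rules are evaluated in order and the first matching rule wins, while URLs reachable from a content source start address are crawled by default when no rule matches. That is why a lone include rule covering the whole start address adds nothing, and why an include rule placed after an exclude rule can never rescue a URL the exclude already matched; includes generally only matter when they come first, carving an exception out ahead of a broader exclude. A minimal Python sketch of that first-match-wins evaluation (the URLs and patterns are made up for illustration):

```python
from fnmatch import fnmatch

# Each rule is (pattern, include_flag); order matters because the crawler
# applies the FIRST rule whose pattern matches the URL (first match wins).
RULES = [
    ("http://www.example.com/private/*", False),  # exclude a sub-path first
    ("http://www.example.com/*", True),           # then include the rest
]

def is_crawled(url, rules, default=True):
    """Return True if the URL would be crawled under the given rules.

    `default` models what happens when no rule matches: a URL reachable
    from a content source start address is crawled anyway, which is why
    an include rule for the start address itself is redundant.
    """
    for pattern, include in rules:
        if fnmatch(url, pattern):
            return include
    return default

print(is_crawled("http://www.example.com/private/secret.html", RULES))  # False
print(is_crawled("http://www.example.com/index.html", RULES))           # True
```

Swapping the two rules makes the include match first and the exclude dead code, which is the practical answer to whether include-after-exclude makes sense.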

Content crawls fail after additional crawl component is added.


I have 2 VM SharePoint servers in my farm connected to a FAST box on dedicated hardware, and I noticed that content crawls have been kind of sluggish. I added another crawl component to my web front end and did the whole FAST cert import, but when I try to run a crawl it fails and gives the top-level error: "Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled."

I made sure that the search services are running under the same service account and that the service account has full read access to my web apps. I haven't been able to find much documentation regarding multiple crawl components, so I figured I'd post out here.

Search Topology:
1 Admin Component
2 Crawl Components (APP Server / WFE Server)
1 Admin DB
1 Crawl DB

Perhaps I'm going about improving my crawl performance the wrong way; if that's the case, any suggestions would be greatly appreciated.

Spider in .NET: Crawl Web Sites and Catalog Info to Any Data Store with ADO.NET and Visual Basic .NET


Visual Basic .NET comes loaded with features not available in previous versions, including a new threading model, custom class creation, and data streaming. Learn how to take advantage of these features with an application that is designed to extract information from Web pages for indexing purposes. This article also discusses basic database access, file I/O, extending classes for objects, and the use of opacity and transparency in forms.
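The article itself works in Visual Basic .NET; as a language-neutral illustration of the core spider loop it describes (fetch a page, extract its links, queue them for indexing), here is a hedged Python sketch of the link-extraction step. A static HTML string stands in for a live HTTP fetch so the sketch has no network dependency, and all URLs are invented:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute link targets from <a href> tags on one page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL,
                    # so they can be queued for the next crawl pass.
                    self.links.append(urljoin(self.base_url, value))

# In a real spider this HTML would come from an HTTP request.
page = '<html><body><a href="/docs/a.html">A</a> <a href="b.html">B</a></body></html>'
parser = LinkExtractor("http://example.com/index.html")
parser.feed(page)
print(parser.links)
# ['http://example.com/docs/a.html', 'http://example.com/b.html']
```

A full crawler would loop over the collected links with a visited-set to avoid revisiting pages, which is essentially what the article builds on top of its threading and data-access layers.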

Mark Gerlach

MSDN Magazine October 2002

Can't start crawl because of index move operation


Hi there,

I can't start a crawl task. The log says "Deleted by the gatherer (The start address or content source that contained this item was deleted and hence this item was deleted.)", but I did not change the path of any content sources. When I try to start the crawl job it says "Crawling might be paused because a backup or an index move operation is in progress. Are you sure you want to resume this crawl?"

What is an index move operation? What should I do? I'd greatly appreciate a solution. Thanks in advance.



"Content for this URL is excluded by the server because a no-index attribute." in crawl logs


Hi All,

I am getting following error message in Crawl Logs

" Content for this URL is excluded by the server because a no-index attribute. "

Any help in this regard will be greatly appreciated.



crawl stuck on "stopping"


Hi all

I have a problem with MOSS 2007 search crawl. It was working fine, and suddenly it didn't show new content. I tried to troubleshoot and saw that a crawl had been running for more than 2000 hours. I stopped the crawl, and now it's stuck on "stopping".

I have googled and seen that a lot of people have had this problem, and that it might be caused by a maintenance job on the SQL Server (2005) producing duplicated index values in the search database, or by not having SP2 for SQL. I checked, and we didn't have either problem.

Has anybody here been through this problem and fixed it? :)


Only crawl one site collection

Hi,

We have an intranet with about 100 site collections. How can I set up one of those in a separate content source that can be crawled more often? Do I need to make two content sources, one containing the other 99 site collections with the setting "Crawl only the SharePoint Site of each start address", and the other containing my prioritized site collection with the same setting?

I also would like to ask whether the crawl rules have any effect on the order in which content is crawled. If I set a certain site to be included with order 1, will that site always be crawled first?

//Niclas
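On the first question: as far as I know a start address can live in only one content source, so the usual pattern is exactly the split the question describes, one content source for the prioritized site collection on a frequent incremental schedule and a second for the remaining 99. A small Python sketch just to model that split (the site URLs and schedule strings are invented for the example):

```python
# Illustrative model: split 100 site collections into two content sources,
# one for the high-priority site (crawled often) and one for the rest.
all_sites = [f"http://intranet/sites/site{i:03d}" for i in range(100)]
priority_site = "http://intranet/sites/site007"  # hypothetical choice

content_sources = {
    "Priority": {
        "start_addresses": [priority_site],
        "incremental_schedule": "every 15 minutes",
    },
    "Everything else": {
        "start_addresses": [s for s in all_sites if s != priority_site],
        "incremental_schedule": "daily",
    },
}

# Both sources would use "Crawl only the SharePoint Site of each start
# address", so no site collection is covered by more than one source.
assert len(content_sources["Everything else"]["start_addresses"]) == 99
print(sum(len(cs["start_addresses"]) for cs in content_sources.values()))  # 100
```

The sketch only models the bookkeeping; the actual schedules are set per content source in Search Administration.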

SharePoint crawl errors on files which are not present

All, I'm noticing 2 errors in my crawl logs. Neither of the files exists anywhere on our site. The URLs are http://.../forms/repair.aspx and http://.../forms/combine.aspx, and the error message is "Error in the Microsoft Windows SharePoint Services Protocol Handler".

Our crawl normally takes about 3 and a half hours; recently it's been taking 5-6 hours. These 2 errors are logged at the end of the crawl. While the crawl is running, I see the success count growing, and at about 3 and a half hours into the process the success count stops growing. I'm not sure what the crawl is doing for the next 2 or so hours, but it finally logs the 2 errors mentioned earlier at the end of the crawl, then completes.

I have tried resetting the crawled content and changing the index location of the SSP, but neither has worked. I have also tried excluding the path to these two files with crawl rules, but that hasn't worked either. I am on SharePoint 2007 SP2. Any ideas? Thanks

Cannot crawl SharePoint site and My Site after database attach upgrade from SharePoint 2007 to 2010.

After a database attach upgrade of our site and My Site from SharePoint 2007 to 2010, I ran a full crawl and get "The crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly. If the repository was temporarily unavailable, an incremental crawl will fix this error. ( Error from SharePoint site: HttpStatusCode ServiceUnavailable The request failed with HTTP status 503: Service Unavailable. )" for My Site, and "Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled." for the SharePoint site.

The content access account for search is "db_owner" of both the site and My Site databases. How do I solve this problem?

FAST Search crawl queue

I am attempting to do a full crawl (FAST Search) of a handful of Word, Excel, and PowerPoint documents. It has been running for ~3 hours. The Crawl Queue report in the Administrative Report Library shows 150 transactions queued. In Crawl Processing per Activity, the last entry on the graph is 300 seconds for Initializing Document Filtering, for what looks like the entire 2 hours. The SQL Server, SharePoint Server, and FAST Search servers all appear to have low utilisation (CPU, memory, disk). There are only 2 warnings in the FAST Search crawl log (don't crawl the search site and the search cache directory): Success = 0, Error = 0, Warning = 2.

Before I set up FAST Search, SharePoint search took approx 6 minutes to crawl a similar list of documents. How do I troubleshoot the issue?

Crawl keeps failing

I am having a problem. I had a working search that crawled my external data source and returned results. I shouldn't have messed with it, but I was trying something and it caused the results to be returned twice, so I decided to start over. I deleted the content source, reset the index, and then re-created the content source. However, now it doesn't return any items and I can't search my content source anymore. I get these errors:

Top Level Error:
Item ID: 1
URL: bdc3://cohorts_cohorts/default/00000000%252d00000\%252d0000000000000/cohorts/cohorts&s_ce... and so forth

An unrecognized HTTP response was received when attempting to crawl this item

I have just done a database attach upgrade of our servers, and so far everything has come up very nicely, except for the Search service. I cannot get Search to crawl our 4 web applications; the crawl finishes with errors every time. I get the following error: "An unrecognized HTTP response was received when attempting to crawl this item. Verify whether the item can be accessed using your browser. ( Error from SharePoint site: HttpStatusCode GatewayTimeout The remote server returned an error: (504) Gateway Timeout. )" Does anybody recognize this?

Server configuration:
- WFE: Windows Server 2008 R2, SharePoint 2010 Enterprise, SSL (wildcard certificate)
- DB: Windows Server 2008 R2, SQL Server 2008 R2

Things I've already tried:
- Used another browser
- Set DisableLoopbackCheck to 1
- iisreset
- Reset the index
- Modified the hosts file
- Verified the DB account
- Extended timeout settings
- Turned off "warn on SSL errors"

Any help would be greatly appreciated. We need to go live in a couple of days. Cheers

Crawl Error in Sharepoint 2010 RTM

Hi,

I am getting the following error when doing a crawl (both full & incremental): "The SharePoint item being crawled returned an error when requesting data from the web service. ( Error from SharePoint site: *** Index was outside the bounds of the array. )" My search service account is a farm administrator and has full control over the entire farm.

Thanks,
Vinod Kumar Bhasyam

URGENT: Remote Host DB Returns Login Failure on All Databases

Hi, I have a mixture of sites hosted in various places, powered by multiple databases located in the same place. Today the sites that had been finished a while ago, operating just fine on autopilot with no recent work done, started returning login errors; I also believe I saw an error about the server refusing remote connections the first time I looked. I opened a ticket with my host, and it has been listed as "In Progress" for the past three hours with no improvement. I am hoping someone here might have a better idea of how to handle this than they do.

I tried adding new users and changing my main login password in their Parallels H-Sphere control panel, but every time I click submit I get an unknown error that the change could not be made. Every dynamic page on all of my sites whose databases are hosted there looks like this:

Server Error in '/' Application. Login failed for user 'MyUserName'.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'MyUserName'.
Source Error: An unhandled exception was gener

.NET Framework 4.0 Slowed XP SP3 Boot Time To A Crawl!

Ok guys, .NET Framework 4.0 showed up in my updates, so I installed it and rebooted. My machine took almost 4 minutes to reboot. I am running XP Media Center SP3, 4 GB DDR2 RAM, Intel Pentium D processor @ 2.80 GHz. Everything was working perfectly and fast until this; before, it only took 45 to 60 seconds or so to boot. Now the wallpaper shows up, then it's forever before anything else appears besides the volume control.

I went back and uninstalled .NET Framework 4.0, and I'm back up to speed with no problems; it boots like normal. No one on any of the forums I have visited has had an honest answer. I have seen numerous posts about this problem with boot times, and most have gotten a runaround from Microsoft techies, saying it did not install right and to try reinstalling it from here or there, all to no avail. It is so hard to get Microsoft to admit they have a problem with a new release of a product. Can anyone tell me the truth here? Are you working on a fix or patch?

Thanks, Dale