.NET Tutorials, Forums, Interview Questions And Answers
Content crawls fail after additional crawl component is added.

Posted Date: October 13, 2010 | Points: 0 | Category: SharePoint

I have two VM SharePoint servers in my farm connected to a FAST box on dedicated hardware, and I noticed that content crawls have been somewhat sluggish. I added another crawl component to my web front end and did the whole FAST certificate import, but when I try to run a crawl it fails with the top-level error: "Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled."

I made sure that the search services are running under the same service account and that the service account has Full Read access to my web applications. I haven't been able to find much documentation regarding multiple crawl components, so I figured I'd post here.

Search Topology:
1 Admin Component
2 Crawl Components (APP Server / WFE Server)
1 Admin DB
1 Crawl DB

Perhaps I'm going about improving my crawl performance the wrong way; if that's the case, any suggestions would be greatly appreciated.
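As background on why a second crawl component can raise throughput at all: crawl work for the hosts in a content source can be spread across the available components. The sketch below is a purely illustrative round-robin distribution in plain Python, not SP2010's actual crawl scheduler; the host and component names are made up.

```python
from itertools import cycle

def distribute_hosts(hosts, crawl_components):
    """Spread hosts across crawl components round-robin -- an illustration of
    why adding a component can help, NOT SharePoint's real assignment logic."""
    assignment = {c: [] for c in crawl_components}
    for host, comp in zip(hosts, cycle(crawl_components)):
        assignment[comp].append(host)
    return assignment

hosts = ["portal", "mysites", "teams", "search-center"]
distribute_hosts(hosts, ["APP-crawl", "WFE-crawl"])
# {'APP-crawl': ['portal', 'teams'], 'WFE-crawl': ['mysites', 'search-center']}
```

With one component all four hosts queue on a single server; with two, the load halves, which is the effect the poster is after.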


More Related Resource Links

"Content for this URL is excluded by the server because a no-index attribute." in crawl logs


Hi All,

I am getting the following error message in the crawl logs:

" Content for this URL is excluded by the server because a no-index attribute. "

Any help in this regard will be greatly appreciated.



Cannot set Host Distribution Rules with two Crawl databases, crawl component cannot be dismounted

When I try to set Host Distribution Rules I get the following error: "Redistribution status: Failed - Crawl component GUID-crawl-5 on SERVERNAME cannot be dismounted. Check that the server is available."

Farm topology:
2 front-end servers
2 query servers with a partitioned index
2 index servers with 2 crawl components and 2 crawl databases
2 application servers

The two crawl components then stay in status Initializing; when retrying (the only option), I get the same error again. I tried the following steps:
- Delete the crawl component that cannot be dismounted --> stops with an error, the server cannot be contacted
- Move the crawl component to crawl database 1 --> error
- Reboot the crawl server
- Take the crawl server out of the farm and rejoin it, then delete the crawl component --> this worked, and I could rebuild the search topology
- Then try to set the Host Distribution Rules again --> same error as in the beginning --> grr

Any ideas?

Crawl Component Resilience Requires SharePoint Server Search Administration Component?

I have a single crawl database and two crawl components pointing to it. One of the crawl components also runs the SharePoint Server Search Administration component.

When I shut down the server running the Search Administration component, crawling stops. I removed the crawl component from that server and re-indexed, and this worked, so the additional crawl component on its own works fine. I then confirmed that an incremental crawl would run within 5 minutes, shut down the Search Administration component's server, and uploaded some new content. The crawl did not run. Can anyone confirm that this is expected behaviour?

I am now trying to transfer the Search Administration component to the other server as part of disaster-recovery preparation, and this is failing, as per another recently opened problem.

Cannot crawl complex URLs without setting a site-wide rule to 'crawl as http content'. Help!

I have pages within a site that use a query string to provide dynamic data to the user (http://<site>/pages/example.aspx?id=1). I can get the content source to index these dynamic pages only if I create a rule that sets the root site (http://<site>/*) to 'include complex urls' and 'crawl sharepoint content as http content'. This is NOT acceptable: changing the crawl protocol from SharePoint's to HTTP prevents any metadata from being collected on the indexed items, and the managed metadata feature is a critical component of our SharePoint applications.

To dispel any suspicion that this is simply a configuration error on my part, see http://social.technet.microsoft.com/Forums/en-US/sharepointsearch/thread/4ff26b26-84ab-4f5f-a14a-48ab7ec121d5 . The issue mentioned there is my exact problem, but the solution is unusable, as I mentioned before.

Keep in mind this is for an external publishing site, and my search scope is trimmed using content classes to include only documents and pages (STS_List_850 and STS_ListItem_DocumentLibrary). Creating a new web site content source and adding it to my scope presents two problems: duplicate content in the scope, and no content class defining it that I know of. What options do I have?
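For anyone reasoning about why the rule is needed at all: a crawl rule like http://<site>/* matches the page path, but query-string ("complex") URLs are skipped unless the rule opts in. A minimal sketch of that gating logic follows; it is illustrative Python, not the actual SharePoint/FAST crawler, and the function name and flag are my own.

```python
from fnmatch import fnmatch

def crawl_allowed(url, rule_pattern, include_complex_urls):
    """Illustrative only: the rule's wildcard is tested against the path,
    and URLs with a query string are dropped unless the rule opts in."""
    base = url.split('?', 1)[0]          # pattern applies to the path portion
    if not fnmatch(base, rule_pattern):
        return False
    has_query = '?' in url
    return include_complex_urls or not has_query

# The dynamic page is only picked up when the rule includes complex URLs:
crawl_allowed("http://site/pages/example.aspx?id=1", "http://site/*", True)   # True
crawl_allowed("http://site/pages/example.aspx?id=1", "http://site/*", False)  # False
```

This also shows why the rule has to sit at the site root: the wildcard must match the page's path before the complex-URL flag is even considered.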

Newly added content type does not exist in SPWeb.ContentTypes


I use a C# application (it invokes stsadm.exe) to deploy a content type. In the same program, I try to get the content type via SPWeb.ContentTypes[name] right after it is deployed, and it returns null. If I exit the C# application and re-run it, I can get the added content type. I don't know why I cannot get the content type the first time after it is deployed. Any thoughts?
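The symptom (a re-run sees the type, the original process does not) is consistent with a stale object rather than a failed deployment: an SPWeb opened before the out-of-band stsadm deployment may be holding a snapshot of its collections. The toy sketch below illustrates that pattern in plain Python; the class names are invented stand-ins, not the SharePoint object model.

```python
class Store:
    """Stands in for the content database."""
    def __init__(self):
        self.content_types = {}

class Web:
    """Stands in for an SPWeb: snapshots the collection when opened."""
    def __init__(self, store):
        self._snapshot = dict(store.content_types)  # cached at open time
    def get_content_type(self, name):
        return self._snapshot.get(name)             # never re-reads the store

store = Store()
web = Web(store)                          # opened before deployment
store.content_types["MyCT"] = object()    # stsadm deploys out of band
web.get_content_type("MyCT")              # None: stale snapshot
Web(store).get_content_type("MyCT")       # found: a freshly opened Web sees it
```

If this is what is happening, re-opening the SPSite/SPWeb after the deployment (rather than reusing the pre-deployment instance) would be the thing to try.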




Unable to crawl content from any of the sites inside a web application


I created two new web applications; each contains one site collection with sites inside it. In one web application search is working, and in the other it is not: when I try to crawl content from the website it shows zero items, while the other site collection shows crawled items. I'm confused about why this is happening. Why am I unable to crawl the content from these sites?

FAST Search Connector won't crawl my Content Sources


Can anyone help me figure out why I am not able to crawl the Content Sources for the FAST Search Connector?

The error from the ULS viewer is:

Failed to connect to 1sv-sp2010.wirestone.internal:13391 Failed to initialize session with document engine: Unable to resolve Contentdistributor

I followed the install steps found at http://technet.microsoft.com/en-us/library/ff381267.aspx, including the post install validation.  FAST seems to be working in every way except the crawl.

The port number 13391 was found in the Install_Info document: "Content Distributors (for GUI SSA creation): 1sv-sp2010.wirestone.internal:13391"



External Content Types + Search Service: Cannot crawl my external content type



I created an external content type by creating a new Visual Studio SharePoint project with a content type (the default Entity1 content type). I created a profile page for it and everything, but when I drilled into the content type in Central Administration - BCS, I saw it wasn't marked as crawlable.

I saw this similar post: http://social.msdn.microsoft.com/forums/en-us/sharepoint2010general/thread/281BCEFD-59EC-41CC-B948-458A4BDA9E49

So I then created an external content type through SPD, leveraging the same code, and creating an external list and profile page. This time, when I drilled into the external content type in the BCS administration, it showed "Crawlable: Yes".

I figured at that point I was good to go, but when I went to my search service application -> Content Sources -> New Content Source and selected Line of Business Data, and selected BDC, it still says "No external data sources to choose from."

I verified also that the account for crawling has permissions for the external content type.

Are there any other things I should be looking for? From everything I read this should "just work" now :)



Content Type or List Lookup Column can't show additional fields that are Choice or Lookup columns


I am trying to create some content types, but this also happens when creating new columns in a list.

Ok, here is my setup (which is the most basic way to replicate the issue):

List 1: Department List
Columns: Department Name (Single Line of Text)

List 2: Document Owners
Columns: Document Owner (single line of text), Department (Choice or Lookup from Department List), Email (single line of text), Lookup Field (Calculated).

Content Type: Quality Document
Columns: File Name, Title, Document Owners (Lookup from Document Owners List), Email (pulled in with Document Owner). I also want to pull in Department from the Document Owners list.

That is where my problem is. When I select the Document Owners list as the list to get the information from, there isn't an option under "Add a column to show each of these additional fields" to pull in the Department column. This occurs whether I use a Choice in the Document Owners list or a lookup from the Department list.

My preferred method of implementing this would be a multi-value lookup field for Document Owners in the Department list, so that the owners are attached to the department, and if the owners change for the department they also change for any document belonging to that department. But I run into the same issue that I c

Content Query Web Part additional filters



I've been working on a SharePoint intranet implementation and have been trying to use the CQWP to aggregate a number of lists throughout our site on the homepage. I've been using the additional filters to try to filter out certain lists that I don't want to appear; however, after making the changes the web part displays:

This query has returned no items. To configure the query for this Web Part, open the tool pane.
I've tried filtering by both the name and the List ID. I use no custom fields in any lists, and an example of the query I've been using is below:
List ID > Contains > {40F0922F-A058-4950-B769-0711932FB071} I have also tried the Is not equal to modifier.
If anyone could give me any help on structuring an additional filter and making it work it would be appreciated.
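One hypothetical culprit worth ruling out before restructuring the filter: a literal string comparison. If the filter value is compared verbatim against the stored List ID, differences in braces or letter case alone would make "Contains" fail. The sketch below is plain Python illustrating that failure mode, not the CQWP query engine; the operator names and the normalization step are assumptions.

```python
def passes_filter(item_list_id, op, value):
    """Naive literal comparison: GUIDs stored lower-case and brace-free
    will not 'Contain' an upper-case braced value verbatim."""
    if op == "Contains":
        return value in item_list_id
    if op == "IsNotEqualTo":
        return item_list_id != value
    raise ValueError(op)

stored = "40f0922f-a058-4950-b769-0711932fb071"     # how an ID may be stored
entered = "{40F0922F-A058-4950-B769-0711932FB071}"  # what was typed in the filter

passes_filter(stored, "Contains", entered)                      # False: braces/case differ
passes_filter(stored, "Contains", entered.strip("{}").lower())  # True after normalizing
```

If the real CQWP comparison behaves anything like this, trying the List ID without braces (and in the stored casing) would be a cheap experiment.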

Access denied when Search Service Application tries to crawl SharePoint content


I have just set up a new SP2010 environment (3 servers: WFE, App, SQL).

When I try to get my Search Service Application to crawl my main SP site and my MySite location, I get the following error in the crawl log:

"Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled."

Things I have checked:
- I ensured that the default content access account has "Full Read" on the web application
- I set up crawl rules for both sources specifying a service account that has admin access to the content on those sites
- I logged in to the SharePoint sites using the service account that the Search Service Application uses to crawl
- I even created a brand new Search Service Application from scratch and got the exact same results

The only difference between this environment and my test environment, where search works just fine, is that this is the production environment, so it uses an FQDN with a host header: http://portal.company.org.


Add Additional Filters to Content Query Web Part


I created a Content Query Web Part to display articles from the Pages library. I added two more columns to the Pages library called "Start Date" and "End Date".

The issue I have is that those two fields do not show up in the Additional Filters drop-down list box.

Does anyone know what I am missing?

thanks in advance,




Cannot Edit Multiple Publishing Html Fields (Additional Page Content Fields)



I've added new columns of type HTML, added these columns to my content type, and added the following to my page layout:

<div id="featureArea">

<PublishingWebControls:RichHtmlField ID="rhfFeatureArea" FieldName="FeatureArea"

Recreated SSP "Content sources and crawl schedules" Error?


This is a follow-up question to

Okay, I created a new SSP with a different name, and it ran fine. However, when I clicked "Content sources and crawl schedules", I got this error:

Could not find stored procedure 'dbo.proc_MSS_GetCrawlHistory'.   at Microsoft.SharePoint.Portal.Search.Admin.Pages.SearchAdminPageBase.ErrorHandler(Object sender, EventArgs e)
   at Microsoft.SharePoint.Portal.Search.Admin.Pages.SearchSSPAdminPageBase.OnError(EventArgs e)
   at System.Web.UI.Page.HandleError(Exception e)
   at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
   at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
   at System.Web.UI.Page.ProcessRequest()
   at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context)
   at System.Web.UI.Page.ProcessRequest(HttpContext context)

Using Additional Fields from External Content Type in InfoPath 2010 External Item Picker


Hello, we've created an external content type for our store locations. Each location in the content type has a name, id, manager, region and so on. When we use this BCS content type as a field in a list, we are given the option to "Add a column to show each of these additional fields"

What would the equivalent be in an InfoPath form? When we add an External Item Picker to a form a number of fields are included in the form's data source, including a repeating group called BDCEntity with the following fields:

  • EntityDisplayName
  • EntityInstanceReference
  • EntityId1
  • EntityId2
  • EntityId3
  • EntityId4
  • EntityId5

Entity display name appears to be what we configure as the Display Field Name in the picker properties, and the EntityId1 is the location ID, which is the identifier on the external content type. EntityId2-5 are all blank.

We use region and other location attributes in workflows so how can we access all these other fields?

Thanks for your time.

Peter Newhook SharePoint posts on my blog

Crawl seems to be done, but content source says still crawling

I have a full crawl that took 8 hours to complete. The crawl log shows that it is no longer indexing content, but the content source status still says "Full Crawling" when it should say Idle. Any ideas why this may be happening?