
"Content for this URL is excluded by the server because a no-index attribute." in crawl logs

Posted By:      Posted Date: August 26, 2010    Points: 0    Category: SharePoint

Hi All,

I am getting following error message in Crawl Logs

" Content for this URL is excluded by the server because a no-index attribute. "

Any help in this regard will be greatly appreciated.
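For anyone who finds this later: the message generally means the crawler reached content that is flagged as non-indexable, most often a web whose "Allow this web to appear in search results" option (Site Settings > Search Visibility) has been switched off, or a page emitting a robots noindex meta tag. A minimal C# sketch, assuming the MOSS object model and that the SPWeb.NoCrawl property backs that setting, to list webs excluded this way:

// Hedged sketch: enumerate webs whose search visibility is off, assuming
// SPWeb.NoCrawl reflects "Allow this web to appear in search results".
using System;
using Microsoft.SharePoint;

class NoCrawlAudit
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://intranet"))  // hypothetical URL
        {
            foreach (SPWeb web in site.AllWebs)
            {
                try
                {
                    if (web.NoCrawl)
                        Console.WriteLine("Excluded from crawl: " + web.Url);
                }
                finally
                {
                    web.Dispose();  // webs from AllWebs must be disposed explicitly
                }
            }
        }
    }
}

Setting NoCrawl back to false (and running a full crawl) should clear the error for webs excluded this way.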




More Related Resource Links

Why SharePoint incremental crawl doesn't use Audit Logs


Hi All,

I am just wondering why the SharePoint (2007) search incremental crawl uses the change log. Why does it not use the audit logs if they give more information?

Can someone help me understand this.
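A plausible explanation: the change log is a compact, token-ordered feed that the content database maintains specifically for synchronization, so the crawler can ask "give me everything since change token X" in one cheap call. Audit logs are optional (off by default), record one row per user action for compliance purposes, and would be far heavier to scan on every crawl. A minimal sketch of reading the change log, assuming the WSS 3.0 object model and a hypothetical site URL:

// Minimal sketch: enumerate recent changes from the change log that the
// incremental crawler relies on.
using System;
using Microsoft.SharePoint;

class ChangeLogDemo
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://intranet"))  // hypothetical URL
        {
            // true, true = all object types, all change types
            SPChangeQuery query = new SPChangeQuery(true, true);
            foreach (SPChange change in site.GetChanges(query))
                Console.WriteLine("{0} at {1}", change.ChangeType, change.Time);
        }
    }
}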




Toolbox: Database Audit Logs, Joel on Software, Code Handouts, and More


This month the Toolbox column takes a look at database logging, Joel Spolsky's blog, printing code projects, and ASP.NET reading.

Scott Mitchell

MSDN Magazine May 2008

Toolbox: Generate Office Documents, Monitor Event Logs, and More


Most data-driven Web sites are used as interfaces to collect, process, and summarize information. Reports that summarize the data can be presented to the user in a variety of formats; the most common way is to display the report directly in a Web page.

Scott Mitchell

MSDN Magazine June 2006

Office: Relive the Moment by Searching Your IM Logs with Custom Research Services


Often, IM conversations contain important information you'd like to keep and reuse. Fortunately, MSN Messenger 6.2 has a feature to keep a conversation history permanently in XML format. This article shows you how to leverage that conversation history by consolidating IM exchanges so they are indexed, searchable, and ultimately reusable using the Microsoft Office 2003 Research and Reference task pane.

John R. Durant

MSDN Magazine February 2005
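As a rough illustration of what working against that XML history looks like in C# (the element names "Message" and "Text" below are recalled from the Messenger 6.x format and should be treated as assumptions):

// Illustrative sketch only: element names ("Message", "Text") are assumed.
using System;
using System.Xml;

class HistoryDump
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(@"C:\MessageHistory\alice.xml");  // hypothetical history file
        foreach (XmlNode msg in doc.SelectNodes("//Message"))
        {
            XmlNode text = msg.SelectSingleNode("Text");
            if (text != null)
                Console.WriteLine(text.InnerText);
        }
    }
}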

Spider in .NET: Crawl Web Sites and Catalog Info to Any Data Store with ADO.NET and Visual Basic .NET


Visual Basic .NET comes loaded with features not available in previous versions, including a new threading model, custom class creation, and data streaming. Learn how to take advantage of these features with an application that is designed to extract information from Web pages for indexing purposes. This article also discusses basic database access, file I/O, extending classes for objects, and the use of opacity and transparency in forms.

Mark Gerlach

MSDN Magazine October 2002
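The core of any such spider is a fetch-and-extract loop. The article works in Visual Basic .NET; an equivalent C# sketch (with a hypothetical start URL and a deliberately naive link extractor) looks like this:

// Minimal fetch-and-extract loop; a real spider would normalize URLs,
// de-duplicate, respect robots.txt, and queue links for further crawling.
using System;
using System.Net;
using System.Text.RegularExpressions;

class MiniSpider
{
    static void Main()
    {
        using (WebClient client = new WebClient())
        {
            string html = client.DownloadString("http://example.com/");  // hypothetical start URL
            foreach (Match m in Regex.Matches(html, "href=\"([^\"]+)\"", RegexOptions.IgnoreCase))
                Console.WriteLine(m.Groups[1].Value);
        }
    }
}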

Can't start crawl because of index move operation


Hi there,

I can't start the crawl task. The log says "Deleted by the gatherer (The start address or content source that contained this item was deleted and hence this item was deleted.)" But I did not change the path of the content sources, and when I try to start the crawl job it says "Crawling might be paused because a backup or an index move operation is in progress. Are you sure you want to resume this crawl?"

What is an index move operation? What should I do? Any solution will be greatly appreciated. Thanks in advance.
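For later readers: some backup and index-relocation operations move or lock the index files, and crawling is paused while they run. A sketch for checking whether a content source really is paused, and resuming it, assuming the MOSS 2007 search administration object model and the default SSP:

// Hedged sketch: report crawl status per content source and resume any
// paused crawl once the backup/index move has finished.
using System;
using Microsoft.Office.Server;
using Microsoft.Office.Server.Search.Administration;

class CrawlStatusCheck
{
    static void Main()
    {
        SearchContext context = SearchContext.GetContext(ServerContext.Default);
        Content content = new Content(context);
        foreach (ContentSource cs in content.ContentSources)
        {
            Console.WriteLine("{0}: {1}", cs.Name, cs.CrawlStatus);
            if (cs.CrawlStatus == CrawlStatus.Paused)
                cs.ResumeCrawl();
        }
    }
}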



'There is no Web named' in logs


I was wondering if anybody could help me. When I look at the logs in C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\LOGS, I see a lot of errors saying 'There is no Web named'.

For example:-
09/08/2008 09:29:37.17  w3wp.exe (0x1BA0)                        0x296C Windows SharePoint Services    General                        8kh7 High     There is no Web named "/quality/Audit Reports/Forms/AllItems.aspx". 

There are no errors in the server's application event log. I was hoping to find out what this means and whether I should worry about these errors.


Chris Rees.

crawl stuck on "stopping"


Hi all

I have a problem with the MOSS 2007 search crawl. It was working fine, and then it suddenly stopped showing new content. While troubleshooting I saw that a crawl had been running for more than 2,000 hours. I stopped the crawl, and now it's stuck on "stopping".

I have googled and seen that a lot of people have had this problem. It can supposedly be caused by a maintenance job on the SQL Server (2005) producing duplicate index values in the search database, or by not having SP2 for SQL Server. I checked, and neither applies to us.

Has anybody here run into this problem and fixed it? :)
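One remedy that comes up repeatedly for this (use with care on production, and treat it as a workaround rather than an official fix): restart the search Windows service, typically named OSearch on MOSS 2007, after which the stuck crawl usually resets to Idle. A C# sketch:

// Hedged sketch: restart the MOSS search service ("OSearch" is the usual
// service name; verify it on your server before running).
using System.ServiceProcess;  // reference System.ServiceProcess.dll

class RestartSearch
{
    static void Main()
    {
        using (ServiceController sc = new ServiceController("OSearch"))
        {
            sc.Stop();
            sc.WaitForStatus(ServiceControllerStatus.Stopped);
            sc.Start();
            sc.WaitForStatus(ServiceControllerStatus.Running);
        }
    }
}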


Sharing folders under 12\logs


Hi Admins, 

I have a general question. The developer team does not have direct access to the server, but they frequently need to dig through the logs for errors. (Yes, we have a test server, but after switching to the production environment, programs sometimes throw errors that have to be investigated in the production logs.)

Is it OK to "share" the program files\common files\microsoft shared\web server extensions\12\logs folder so that they can map the Web front end's logs folder to their computers?

I don't really like the idea of creating a share like this on a production server; it is a sort of customization. I need your advice: is there a significant disadvantage that makes this kind of operation a clear "don't"?


Thanks in advance..
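A middle ground, instead of exposing a share on the WFE: have a scheduled task on the server extract only the interesting lines and drop them somewhere the developers can reach. The ULS logs in 12\LOGS are tab-delimited text, so a filter is a few lines of C#; the path and the severity labels below are assumptions to adapt:

// Minimal sketch: print High/Critical ULS entries from the 12\LOGS folder.
// FileShare.ReadWrite lets us read the log file SharePoint is still writing.
using System;
using System.IO;

class UlsErrorGrep
{
    const string LogDir = @"C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\LOGS";

    static void Main()
    {
        foreach (string file in Directory.GetFiles(LogDir, "*.log"))
        {
            using (FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            using (StreamReader reader = new StreamReader(fs))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    if (line.Contains("\tHigh\t") || line.Contains("\tCritical\t"))
                        Console.WriteLine(line);
            }
        }
    }
}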


Re-run WSS 3.0 Usage Analysis Logs


WSS 3.0 SP2, 64-bit.

The service that runs and collects the log files for the Usage Analysis report was disabled for some reason. We have re-activated it, but the logs from when it was down were never processed. How do we re-process those older logs?

Previously asked at  http://www.microsoft.com/communities/newsgroups/en-us/default.aspx?&lang=&cr=&guid=&sloc=en-us&dg=microsoft.public.sharepoint.windowsservices&p=1&tid=f676c989-a369-4761-94e7-ef9b1c28fde7 and referred to here.

Only crawl one site collection

Hi,

We have an intranet with about 100 site collections. How can I set up one of them in a separate content source that can be crawled more often? Do I need to make two content sources, one containing the other 99 site collections with the setting "Crawl only the SharePoint site of each start address", and the other containing my prioritized site collection with the same setting?

I would also like to ask whether crawl rules have any effect on the order in which content is crawled. If I include a certain site with order 1, will that site always be crawled first? //Niclas
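To make the setup concrete: a dedicated content source for the prioritized site collection, set to crawl only the site collection of each start address, can then be given its own more frequent schedule. A sketch assuming the MOSS 2007 administration object model, the default SSP, and a hypothetical site URL:

// Hedged sketch: create a separate SharePoint content source that crawls
// only the site collection of its start address.
using System;
using Microsoft.Office.Server;
using Microsoft.Office.Server.Search.Administration;

class PrioritySource
{
    static void Main()
    {
        SearchContext context = SearchContext.GetContext(ServerContext.Default);
        Content content = new Content(context);
        SharePointContentSource cs = (SharePointContentSource)
            content.ContentSources.Create(typeof(SharePointContentSource), "Priority site");
        cs.StartAddresses.Add(new Uri("http://intranet/sites/priority"));  // hypothetical
        cs.SharePointCrawlBehavior = SharePointCrawlBehavior.CrawlSites;
        cs.Update();
    }
}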

SQL Server Logs too big under Management

Hi,

The SQL Server logs ("Current - date", "Archive - date", ...) are too big to open easily (especially in a crisis). Any idea how I can restrict these logs to a manageable size so that they can be read immediately? Is there any way to archive daily logs separately?

Thanks, Chander
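On keeping the files small: SQL Server starts a fresh error log whenever the instance restarts or when the documented system procedure sp_cycle_errorlog runs, and the number of retained logs is configurable (right-click SQL Server Logs > Configure in Management Studio). Scheduling a nightly cycle therefore gives you small, roughly daily files. A minimal C# sketch with a hypothetical connection string:

// Minimal sketch: cycle the SQL Server error log so a new file is started.
using System.Data.SqlClient;

class CycleErrorLog
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection("Server=.;Integrated Security=true"))  // hypothetical
        using (SqlCommand cmd = new SqlCommand("EXEC sp_cycle_errorlog;", conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}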

Sharepoint and IIS/Windows Event Logs in a single view

Hi - I'm considering adding SharePoint log-reading capabilities to an application I've built for reading Windows and IIS logs (for more info see http://logenvy.com). I know there are quite a few applications capable of reading SharePoint logs (a number are listed here: http://stackoverflow.com/questions/781179/sharepoint-2007-log-viewer). Does anyone think the ability to analyze Windows and IIS logs alongside SharePoint logs would be useful, or have any logging requirements not well met by the current set of tools? I'm especially keen to explore useful/interesting data visualizations on top of log data.

SharePoint crawl errors on files which are not present

All, I'm noticing two errors in my crawl logs. Neither of the files exists anywhere on our site. The URLs are http://.../forms/repair.aspx and http://.../forms/combine.aspx, and the error message is 'Error in the Microsoft Windows SharePoint Services Protocol Handler'.

Our crawl normally takes about three and a half hours. Recently it has been taking 5-6 hours. These two errors are logged at the end of the crawl. While the crawl is running, I see the success count growing; about three and a half hours into the process, the success count stops growing. I'm not sure what the crawl is doing for the next two or so hours, but it finally logs the two errors mentioned earlier, then completes.

I have tried resetting the crawled content and changing the index location of the SSP, but neither has worked. I have also tried excluding the path to these two files with crawl rules, but that hasn't worked either. I am on SharePoint 2007 SP2. Any ideas? Thanks

Cannot crawl SharePoint site and My Site after database attach upgrade from SharePoint 2007 to 2010

After a database attach upgrade of our site and My Site from SharePoint 2007 to 2010, I ran a full crawl. For My Site I get: "The crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly. If the repository was temporarily unavailable, an incremental crawl will fix this error. ( Error from SharePoint site: HttpStatusCode ServiceUnavailable The request failed with HTTP status 503: Service Unavailable. )"

For the SharePoint site I get: "Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled."

The content access account for search is "db_owner" of both the site and My Site databases. How do I solve this problem?

FAST Search crawl queue

I am attempting to do a full crawl (FAST Search) of a handful of Word, Excel, and PowerPoint documents. It has been running for ~3 hours. The Crawl Queue report in the Administrative Report Library shows 150 transactions queued. In Crawl Processing per Activity, the last entry on the graph is 300 seconds for Initializing Document Filtering, for what looks like the entire 2 hours. The SQL Server, SharePoint Server, and FAST Search servers all appear to have low utilization (CPU, memory, disk). There are only 2 warnings in the FAST Search crawl log (don't crawl the search site and the search cache directory): Success = 0, Error = 0, Warning = 2. Before I set up FAST Search, SharePoint Search took approximately 6 minutes to crawl a similar list of documents. How do I troubleshoot this issue?

Crawl keeps failing

I am having a problem. I had a working search that crawled my external data source and returned results. I shouldn't have messed with it, but I was trying something, and it caused the results to be returned twice, so I decided to start over. I deleted the content source, reset the index, and then re-created the content source. However, now it doesn't return any items and I can't search my content source anymore. I get errors like this at the top level: Item ID 1, URL bdc3://cohorts_cohorts/default/00000000%252d00000\%252d0000000000000/cohorts/cohorts&s_ce... and so forth.
