I need to crawl updates of newly created or updated documents within document libraries. My understanding is that I need to create a new content source and associate the crawls with it, and newly created and updated documents would then be crawled in the next scheduled incremental
crawl. But my question is how I can create a dedicated content source for document libraries, as there is no such content source type.
Any suggestion or workaround would be greatly appreciated.
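One commonly suggested workaround, since there is no library-level content source type, is to keep the site-level content source and add a crawl rule whose wildcard URL pattern covers only the library in question. The sketch below only illustrates how such a wildcard pattern would scope the crawl; the server name, site path, and library name are hypothetical, and this is not the SharePoint administration API itself:

```python
from fnmatch import fnmatch

# Hypothetical crawl-rule pattern that covers only one document library.
# SharePoint crawl rules accept wildcard URL patterns of this general shape.
INCLUDE_RULE = "http://portal/sites/projects/Shared Documents/*"

def rule_matches(url: str, pattern: str = INCLUDE_RULE) -> bool:
    """Return True if the URL would fall under the include rule."""
    return fnmatch(url, pattern)

print(rule_matches("http://portal/sites/projects/Shared Documents/spec.docx"))  # True
print(rule_matches("http://portal/sites/projects/Pages/home.aspx"))             # False
```

With an include rule like this (and a matching exclude rule for the rest of the site, if needed), the incremental crawl would still pick up new and changed documents, but only from that library.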
Hello, I have a problem with managed metadata on two different servers and would like to ask you to try reproducing it.
I assume you have the managed metadata service running.
1. Open a document library in some site.
2. Create a column of managed metadata type, select some term set, and set the column as required.
3. Upload a document and fill in the metadata when you are asked.
4. Then edit the properties and check that the metadata are there.
5. Save this library as a template.
6. Deploy a document library from this template and try steps 3 and 4 again.
My problem is that you can select metadata, but they are not saved; i.e., when you check the document properties, the managed metadata are not there.
Can you confirm this behavior?
Thank you very much.
P.S.: one more weird thing. If you fill in the metadata using an Office 2010 compatible application, the metadata remain there, but they are not visible in the document library properties.
Anyone have any thoughts on how I would populate data into document Quick Parts?
The answer seems pretty simple with content types and such, but the complication is that I want to pull in contact information from a list local to the site in which the library resides. For example:
A project site is created. The company's contact information is copied down to the site via the site creation process. A "Letter Standard" content type is managed throughout the site, pulls the contact information from the list, and automatically populates the appropriate pieces within the document.
Anyone done something similar to this?
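The core of the scenario above is a lookup from the site-local contacts list into the document's named placeholders. As a rough sketch of that mapping step only (the list rows, field names, and Quick Part names are all made up for illustration; this is not the Office or SharePoint API):

```python
# Hypothetical site-local contacts list, one dict per list item.
contacts_list = [
    {"Company": "Contoso", "Phone": "555-0100", "Email": "info@contoso.com"},
]

# Hypothetical Quick Part names in a "Letter Standard" document,
# mapped to the list columns they should be filled from.
letter_fields = {"CompanyName": "Company", "CompanyPhone": "Phone"}

def populate_quick_parts(row: dict, field_map: dict) -> dict:
    """Map list columns onto document Quick Part names."""
    return {part: row[column] for part, column in field_map.items()}

filled = populate_quick_parts(contacts_list[0], letter_fields)
print(filled)  # {'CompanyName': 'Contoso', 'CompanyPhone': '555-0100'}
```

In a real solution this mapping would typically be driven by content type columns bound to the document's Quick Parts, with the list lookup done at site or document creation time.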
I currently have a document library that contains over 1,800 records. Each document has a metadata column called "Archived" associated with it. This property can be either True or False. Currently, every document in the document library gets indexed by the Search Engine Crawling service. I would like to filter out the documents that have the Archived property set to True, so that they don't appear in the search results. Is there a way to create a crawl rule that would be specific to a particular document library, and that could filter documents out based on metadata?
My alternative is to create an event handler that would move the document into another document library that won't get indexed whenever the property gets set to True. However, I would prefer to use a cleaner out-of-the-box way to achieve this.
Anybody know if it is feasible?
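Short of a crawl rule, another angle is to exclude archived items at query time rather than crawl time, assuming the Archived column has been mapped to a queryable managed property in the search administration. A minimal sketch of building such a query string (the property name and the `-Property:"Value"` exclusion syntax are assumptions about the environment, not verified against this farm):

```python
def build_search_query(terms: str, exclude_archived: bool = True) -> str:
    """Append a property restriction that filters out archived documents.

    Assumes 'Archived' is exposed as a queryable managed property and that
    the search engine supports '-Property:"Value"' style exclusions.
    """
    query = terms
    if exclude_archived:
        query += ' -Archived:"True"'
    return query

print(build_search_query("budget report"))
# budget report -Archived:"True"
```

The same restriction could be baked into a search scope or the results page query so end users never have to type it.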
We introduce you to "Oslo" and demonstrate how MSchema and MGraph enable you to build metadata-driven apps. We'll define types and values in "M" and deploy them to the repository.
MSDN Magazine February 2009
Here the author uses Document Information Panels in the Microsoft 2007 Office system to manipulate metadata from Office docs for better discovery and management.
MSDN Magazine April 2008
When my users run a search against a site with a bunch of document libraries, they will often search for terms that are in the title of the doc; if they know the title, they will put that in. What I would like to do is have the results ordered such that any hits on the document metadata (e.g. Title) are presented first, and any results from hits on the content of the documents are presented later.
So, if they get the title spot on in their search query, that document will appear first; if they don't get it exactly right, the likelihood is that the title is something like what they entered, so similarly titled docs are presented first, and then those with matching content.
How can I achieve this?
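The ordering described above amounts to a two-tier sort: results whose title matches the query terms first, content-only matches after, preserving relevance order within each tier. A minimal sketch of that re-ranking step (the result shape and field names are invented for illustration; in SharePoint this would more typically be done by boosting the Title managed property's ranking weight):

```python
def rank_results(results: list[dict], query: str) -> list[dict]:
    """Order results so documents whose Title contains the query terms
    come before documents that matched only on body content."""
    terms = query.lower().split()

    def title_score(result: dict) -> int:
        title = result["title"].lower()
        return sum(term in title for term in terms)

    # Higher title score first; Python's sort is stable, so ties keep
    # their original (relevance) order.
    return sorted(results, key=title_score, reverse=True)

hits = [
    {"title": "Budget 2011", "match": "content"},
    {"title": "Quarterly report", "match": "title"},
]
print([h["title"] for h in rank_results(hits, "quarterly report")])
# ['Quarterly report', 'Budget 2011']
```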
We have a website that has 3 libraries. The first library is used only to edit documents and is called "my documents". When a user finishes editing their documents here, they start a workflow that checks fields in the document and moves the document
to the second library (using a SharePoint Designer workflow, without custom actions).
In the second library we have a big workflow. This WF has some steps to approve, collect feedback, generate sub-tasks, and other things related to the document. This WF uses many custom actions (in SharePoint Designer).
When this WF reaches the end, the document is moved to the third library, which is the official repository of documents in the enterprise.
The entire cycle works fine. But when someone creates an internal copy of the document in the first library to begin the flow again, we receive an Invalid URL error. It occurs when any update operation is executed on the document item; the value is preserved
but the error is displayed. After that, we cannot execute any other workflows on the item.
If we save the document to local disk, delete the library item, and then upload it again, the problem is solved.
Does anyone have an idea about why this happens and how to solve it? We are using WSS 3.0.
The following is an error registered in the log file when you d