Database setup can be a tough and time-consuming process that sometimes falls victim to human error. Microsoft Installer or InstallShield can help, as can your own custom installer. In this article, the author tackles one approach to writing database installers and demonstrates the process with a working code sample.
MSDN Magazine September 2004
I have some logging that has to be done: a few database updates or inserts.
However, this logging is of no importance to the user, so I only want to process the data/logs after the page has been completed for the user. As it stands, if I process the data while the page is loading, I go from 1.4 to 2.0 (server-side processing time), which is quite a lot considering it is of no use to the user.
So I want this code executed only after the page is complete.
I've been checking out Ajax, async pages, etc., but the problem is the work still gets executed before the page is rendered to the client, which means the client has to wait longer even though the result is never shown.
I've been checking out ThreadPool.QueueBackgroundWorker, but if I understand it correctly, that work would still be executed before the page is shown to the user, as it has to complete before PreRender.
Then I thought about creating a new thread and doing the processing there, which would keep my page from waiting on the data/logging. However, when I have 100 or 200 users loading pages at the same time, that would mean creating 200 threads on those loads, which I doubt would be good for performance. So to solve this, I would have to create my own thread pool and assign it a maximum of, for example, 40 threads, which can process the work.
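One option, sketched below under the assumption that the logging call is self-contained, is to queue the insert on the CLR thread pool after the response work is done. The pool already caps how many threads run concurrently, so there is no need to hand-build a 40-thread pool. `WriteLogEntry` is a hypothetical stand-in for the actual database insert.

```csharp
using System;
using System.Threading;

public static class DeferredLogger
{
    // Hypothetical stand-in for the real database update/insert.
    static void WriteLogEntry(object state)
    {
        try
        {
            string message = (string)state;
            // ... open a connection and INSERT the log row here ...
        }
        catch (Exception)
        {
            // Swallow (or log to disk): an unhandled exception on a
            // pool thread would tear down the ASP.NET worker process.
        }
    }

    // Call this at the end of the request (e.g. from Page_Unload).
    // The work runs on a pool thread, so the page does not wait for
    // it, and the pool limits how many items run at once.
    public static void QueueLog(string message)
    {
        ThreadPool.QueueUserWorkItem(WriteLogEntry, message);
    }
}
```

Note the trade-off: queued work is lost if the worker process recycles before it runs, which may be acceptable for non-critical logging.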
I guess I have posted my question at the wrong place before:
So hopefully somebody in this forum can advise me on how to make my code work properly. Here's what I did:
I created a Windows service that opens a named pipe to receive data from an external application. The latter connects and transfers up to 250 MB of data via the named pipe after serializing a List of objects.
My Windows service deserializes the data back into a List of objects, processes them, and writes the entire list to a file.
My issues are:
1. The data transfer is extremely slow: 200 MB takes about 7 minutes via the named pipe versus 20 seconds if a file is used.
QUESTION: is it possible to speed up the data transfer somehow?
2. After my Windows-service-side processing is done, I dispose the named pipe, clear the list, and even call GC.Collect(), but the service still does not release the memory.
3. I also continue to see very high CPU load (98%) on all cores (2x quad-core) even though the service is not doing anything.
It would be great if somebody could advise me on some of the problems above.
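On the transfer speed (point 1): pipe throughput that is hundreds of times slower than file I/O is usually caused by many small reads/writes rather than by the pipe itself. A sketch of the server side, assuming byte-mode transfer, larger pipe buffers (64 KB is an assumed starting value worth tuning), and chunked reads; the pipe name and output path are placeholders:

```csharp
using System;
using System.IO;
using System.IO.Pipes;

class PipeServer
{
    static void Main()
    {
        // Last two arguments are the in/out buffer sizes (64 KB here);
        // the default small buffers force many kernel round-trips.
        using (var pipe = new NamedPipeServerStream(
            "dataPipe", PipeDirection.In, 1,
            PipeTransmissionMode.Byte, PipeOptions.None,
            64 * 1024, 64 * 1024))
        {
            pipe.WaitForConnection();

            // Read in large chunks instead of byte-by-byte or
            // object-by-object.
            var buffer = new byte[64 * 1024];
            using (var output = File.Create("received.bin"))
            {
                int read;
                while ((read = pipe.Read(buffer, 0, buffer.Length)) > 0)
                    output.Write(buffer, 0, read);
            }
        }
    }
}
```

Deserializing from the received buffer afterwards, rather than deserializing directly off the pipe stream, also makes it easier to see whether the pipe or the serializer is the bottleneck.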
I've created a data source in BIDS using the provider OraOLEDB.Oracle.1. The Oracle database is 10gR2. The client installed on my computer is 10gR2. The connection is OK when I use Test Connection in BIDS.
In the data source view I created a very simple query: select * from user_objects. All is fine, the fields are correctly defined.
I've created a dimension based on this query with object_id as the key.
I've deployed the solution to the server. All is working fine for the moment.
But when I process the dimension I get a dozen error messages like the following (translated from French):
"OLE DB error: OLE DB or ODBC error: Syntax error near 'OraOLEDB.Oracle.1' on line 8; 42000."
Errors in the OLAP storage engine: An error occurred while processing the dimension with ID 'Test' and name 'Test'.
Errors in the OLAP storage engine: An error occurred while processing the 'SUBOBJECT NAME' attribute of the 'Test' dimension in the 'CTRL_DOMAINES' database.
Server: The operation was cancelled.
OLE DB error: OLE DB or ODBC error: Syntax error near 'OraOLEDB.O
Has anyone noticed this behavior?
An incremental process (ProcessUpdate) of a dimension triggers reprocessing of the indexes on all of the cube's partitions, even when no change has occurred in that dimension.
I am using SSAS 2005 sp2.
This is disturbing because my cube holds some 700 daily partitions, so processing indexes on all of them, on an hourly basis, is very time consuming.
It also flushes the cube's cache!
I did notice that setting the hierarchy's MemberKeysUnique property to True, plus setting MemberNamesUnique to True on the topmost attribute in that hierarchy, solves this problem.
So is this required to prevent recalculating indexes on the partitions on every ProcessUpdate of the dimension?
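For reference, the ProcessUpdate in question can be issued against just the dimension via XMLA, roughly as in the fragment below (the database ID is a placeholder; the dimension ID matches the 'Test' dimension mentioned above). Whether the server then rebuilds the indexes on every partition is exactly the behavior being asked about.

```xml
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Type>ProcessUpdate</Type>
  <Object>
    <DatabaseID>MyOlapDb</DatabaseID>   <!-- placeholder database ID -->
    <DimensionID>Test</DimensionID>     <!-- dimension from the post -->
  </Object>
</Process>
```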
I have a deadlock problem that occurs every 5 minutes on a SQL update. I get the following error at random, meaning the query fails about once every 200 calls. I'm using TransactionScope to manage the transaction.
using (TransactionScope scope = new TransactionScope())
{
    SELECT ... FROM TABLE1
    UPDATE TABLE1 SET [F1] = @F1, [F2] = @F2, [F3] = @F3, [F4] = @F4, [F5] = @F5 WHERE UId = @UId
}
System.Data.SqlClient.SqlException: Transaction (Process ID 51) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction
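A SELECT followed by an UPDATE of the same row inside one transaction is the classic shared-to-exclusive lock-conversion deadlock. One common fix, sketched here under the assumption that the table and columns are as in the post (the connection string and helper shape are hypothetical), is to take an UPDLOCK on the row at read time so two transactions cannot both hold shared locks on it:

```csharp
using System;
using System.Data.SqlClient;
using System.Transactions;

class Updater
{
    // Take an update lock on first read so a competing transaction
    // blocks at the SELECT instead of deadlocking at the UPDATE.
    public static void UpdateRow(string connectionString, int uid, string f1)
    {
        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            using (var select = new SqlCommand(
                "SELECT [F1] FROM TABLE1 WITH (UPDLOCK, ROWLOCK) " +
                "WHERE UId = @UId", conn))
            {
                select.Parameters.AddWithValue("@UId", uid);
                select.ExecuteScalar();
            }

            using (var update = new SqlCommand(
                "UPDATE TABLE1 SET [F1] = @F1 WHERE UId = @UId", conn))
            {
                update.Parameters.AddWithValue("@F1", f1);
                update.Parameters.AddWithValue("@UId", uid);
                update.ExecuteNonQuery();
            }

            scope.Complete();
        }
    }
}
```

Adding a retry loop around the whole transaction for error 1205 (deadlock victim) is a complementary mitigation, since deadlocks can rarely be ruled out entirely.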
Out of 1000 records, 10 are not added to the cube when I incrementally process the partition. What could be the possible reasons for this exclusion?
The query I have bound to the incremental processing returns 1000 rows, but the difference in the measure group count (before and after partition processing) is not 1000; it is short by 10 records.
I am doing an incremental process to append records modified since the last processing date, and it is bound to a query. The records the query returns are not all appended to the cube partition. For example, in another run the query used for the incremental partition processing returned 1049 records on the relational DB, yet after cube processing the difference in the measure group count (before and after) was short by 6 records. Why is it excluding them? What could be the possible reasons for this exclusion?
Thanks in advance
We want to enumerate all .NET Framework 4.0 processes on a particular machine, and then enumerate the application domains in those processes.
After enumerating the application domains, we want to fetch data about each application domain using the following APIs.
We tried this using the ICorPublishAppDomain interface as well as the following code:
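Since the original code is not shown, here is a rough sketch of the first half of the task only: finding processes that host the v4 CLR by checking for clr.dll (the .NET 4.0 runtime module) in each process's module list. This is an assumption-laden heuristic, not a substitute for ICorPublish, which is still needed for the per-AppDomain data:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class ClrProcessFinder
{
    static void Main()
    {
        foreach (var p in Process.GetProcesses())
        {
            try
            {
                // clr.dll is loaded by processes running the v4 runtime
                // (v2 loads mscorwks.dll instead).
                bool hostsV4Clr = p.Modules
                    .Cast<ProcessModule>()
                    .Any(m => m.ModuleName.Equals("clr.dll",
                        StringComparison.OrdinalIgnoreCase));
                if (hostsV4Clr)
                    Console.WriteLine("{0} ({1})", p.ProcessName, p.Id);
            }
            catch (Exception)
            {
                // Access denied or bitness mismatch for this process;
                // skip it.
            }
        }
    }
}
```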
I'm building what will be either a desktop or a web app that has to run 4 SQL queries in sequence. I have worked out the first one, which takes 3 parameters, and I am getting it to return results the way I expect. However, each of the next three builds on the one before it. Can I pass the DataSet from one query to the next, and how do I reference it properly in the subsequent queries so I can get them all to run correctly?
Curiosity here because I keep thinking I'm going to crack this soon.
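One pattern that fits the question: fill each query's result into a named DataTable inside one DataSet, then read the value the next query needs from the previous table. The sketch below shows the first two steps; table, column, and query names are hypothetical stand-ins for the real schema:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ChainedQueries
{
    public static DataSet RunFirstTwo(string connectionString, int customerId)
    {
        var results = new DataSet();
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Query 1: fill into a named table.
            var first = new SqlDataAdapter(
                "SELECT OrderId FROM Orders WHERE CustomerId = @p1", conn);
            first.SelectCommand.Parameters.AddWithValue("@p1", customerId);
            first.Fill(results, "Orders");

            // Reference the previous result by table and column name,
            // and feed it to query 2 as a parameter.
            int orderId = (int)results.Tables["Orders"].Rows[0]["OrderId"];

            var second = new SqlDataAdapter(
                "SELECT * FROM OrderLines WHERE OrderId = @oid", conn);
            second.SelectCommand.Parameters.AddWithValue("@oid", orderId);
            second.Fill(results, "OrderLines");
        }
        return results;
    }
}
```

Queries 3 and 4 would repeat the same read-then-parameterize step against the tables already in the DataSet.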
I am facing a strange problem while serializing data from a hooked process back to the injector (which injected the spy DLL into some other process). I am successfully landing the spy DLL in the other managed process. But when I then want to serialize custom objects from that spy DLL (which has landed inside the hooked managed process), the problem occurs. I want to serialize the data along with Lists and .NET dictionaries, e.g. Dictionary<...>. Sometimes it throws an exception such as "cannot serialize Dictionary<...>....."; sometimes it says to mark all classes and types inside the hooked process as serializable. I also tried JSON, but somehow failed to use it properly.
Does anybody have experience getting data from a spy DLL back to the injector successfully, so it can then be transmitted to the module that uses the injector to get data from remote processes via hooking and injection? Serialization works perfectly fine in simple console-based applications, but the way I want to get the data is totally different, as there's one MessageHookProc function, which
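Since the exact payload types aren't shown, here is a hedged sketch using DataContractSerializer, which handles Dictionary<,> and List<> without requiring [Serializable] on every type in the hooked process, as long as the custom types are data contracts (or registered as known types). `SpyRecord` is a hypothetical payload type; the resulting byte array is what would be sent back to the injector over whatever channel the hook uses:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;

[DataContract]
public class SpyRecord              // hypothetical payload type
{
    [DataMember] public string Name { get; set; }
    [DataMember] public int Value { get; set; }
}

public static class SpySerializer
{
    public static byte[] Serialize(Dictionary<string, List<SpyRecord>> data)
    {
        var serializer = new DataContractSerializer(
            typeof(Dictionary<string, List<SpyRecord>>));
        using (var ms = new MemoryStream())
        {
            serializer.WriteObject(ms, data);
            return ms.ToArray();   // bytes to ship back to the injector
        }
    }

    public static Dictionary<string, List<SpyRecord>> Deserialize(byte[] bytes)
    {
        var serializer = new DataContractSerializer(
            typeof(Dictionary<string, List<SpyRecord>>));
        using (var ms = new MemoryStream(bytes))
            return (Dictionary<string, List<SpyRecord>>)
                serializer.ReadObject(ms);
    }
}
```

Serializing to bytes inside the hooked process and deserializing in the injector also keeps the hook procedure itself free of type-loading work.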