
What makes a time series so slow to deploy and process?

Posted Date: May 22, 2011    Points: 0

I'm trying to use a SQL Server 2008 time series model.

I have 2,241 Customers x 27 Attributes x 365 days worth of data:

ForDate, CustID, Score, A1 ... A27

I want to predict Score.  ForDate and CustID are my keys.  A1 - A27 are my inputs and Score is set to PredictOnly.  Can somebody give me some insight into exactly what the algorithm is doing?

From looking at a status message, it appears this model is on track to run for about 8 weeks before I can see anything. Am I interpreting this correctly? This was the status message after four hours...

Status: Learning Time Series tree for 'Score (3404)' in main model (200 of 62749)...

Is there anything I can do to speed this along?   For example Score is a double ...
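(A rough reading of that status message, based on my own arithmetic rather than anything authoritative: with CustID as a series key, the algorithm appears to build a separate time series tree for every customer/column combination, i.e. about 2,241 customers x 28 columns (Score plus A1-A27) = 62,748 series, which matches the 62,749 in the message. At 200 trees in four hours, the full run would take roughly 62,749 / 200 x 4 hours, about 1,255 hours, or a little over 7 weeks, so the 8-week interpretation looks about right.)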


More Related Resource Links

Slow page load for a list query once a day


We have a monitoring tool set to check every 10 minutes whether the home pages for our 3 web apps load in under 60 seconds. All 3 web apps load in under 3 seconds on every 10-minute check, with one exception: once a day, one web app's home page takes longer than 60 seconds to load. This happens at 11:45 PM, when there is very little user activity on the environment.

The characteristics of the page are as follows: the only thing on it is a list view web part, which was added through the browser. The page has not been customized with SPD or code at all. The list it pulls is a simple links list with 281 items; the view pulls all 281 items and displays them in sets of 100.

I cannot find any associated event in the server event logs or the SharePoint ULS logs, nor are there any daily SharePoint timer jobs running at that time. Our full index happens at midnight, with incrementals happening hourly. Our environment is 2 WFEs, 1 app/indexer, and a separate SQL cluster backend.

Could someone possibly point me in the direction I should take next in my troubleshooting?

Why does .NET Framework 4 client profile slow down boot time network creation?

I've been running a Windows XP SP3 KVM virtual machine for a long time now, and some recent update made the initial network startup go from a few seconds at boot time to around 90 seconds. Using Add/Remove Programs to uninstall recent updates points the finger at the .NET Framework 4 client profile.

With it installed, if I right-click on the network and ask for properties right after booting, there is a 90-second delay before the network dialog with the list of network interfaces finally appears. In addition, the network drive I have automounted at login is not accessible for the same 90 seconds. As an experiment, I tried going into the device manager and deleting the network interface, and if I reboot after that it takes 90 seconds for the new hardware wizard to appear. All these 90-second delays vanish if I remove this .NET update; the network comes up as soon as the system boots.

Slow load time of custom assembly on x64 compared to x86.

I am developing an application that automatically generates assemblies using the CSharpCodeProvider (.NET Framework 3.5). These assemblies contain a single class with a very large number of local variables and methods. After compilation I create an instance of this class (using Activator.CreateInstance) to be used elsewhere in the application.

When the application is deployed on a 32-bit system (tested on Win 7 Prof 32-bit and on Win XP), the CreateInstance method returns after 1 or 2 seconds, i.e. it first JITs the assembly before creating the instance. However, on a 64-bit system (tested on Win 7 Home 64-bit, Win Server 2003 64-bit and Win Server 2008 64-bit), CreateInstance takes up to 5 minutes to return. Is there any reason why it would be that much slower on a 64-bit machine?

I have tried many different options when compiling, such as setting the platform to x64 and using the /optimize flag, but none of these have made a difference. I also tried calling PrepareMethod on all the methods in the class. When I do this in the 32-bit environment, it takes approximately 10 seconds; in the 64-bit environment it takes more than 3 minutes. As a last resort I tried ngen before constructing the object, but this made no difference in the execution time.
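For reference, a minimal sketch of the pattern being described, compiling generated C# in memory and instantiating the result (the class name, member and source text here are illustrative stand-ins, not the poster's generated code):

    using System;
    using System.CodeDom.Compiler;
    using Microsoft.CSharp;

    class GeneratedAssemblyDemo
    {
        static void Main()
        {
            // Illustrative stand-in for the generated source; the real classes are much larger.
            const string source =
                "public class Generated { public double Evaluate() { return 42.0; } }";

            var provider = new CSharpCodeProvider();
            var options = new CompilerParameters
            {
                GenerateInMemory = true,       // keep the compiled assembly off disk
                CompilerOptions = "/optimize"  // one of the switches the poster tried
            };

            CompilerResults results = provider.CompileAssemblyFromSource(options, source);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException("Generated code failed to compile.");

            // CreateInstance is the call where the poster reports the 1-2 second (x86)
            // versus multi-minute (x64) delay.
            Type type = results.CompiledAssembly.GetType("Generated");
            object instance = Activator.CreateInstance(type);
            Console.WriteLine(type.GetMethod("Evaluate").Invoke(instance, null));
        }
    }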

Time series algorithm parameters

Hello everybody! I have the following questions for you. I have a mining model with data collected by working hours (every day from 08AM to 05PM). What PERIODICITY_HINT do I have to use: {24} or {9}? The second problem is that on the chart I see historical data with the axis formatted as 08AM-05PM, but in the prediction part the periods are 00-24. That is not correct! I want to see the same 08AM-05PM periods there too. Help me please! Thank you!
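(A note on the arithmetic behind the hint, my reading rather than an authoritative answer: PERIODICITY_HINT counts how many stored observations make up one seasonal cycle. If each working day contributes nine hourly rows (08AM-05PM), then a daily cycle is nine observations and {9} is the closer fit; {24} would only make sense if all 24 hours of the day were actually present in the data. A weekly pattern on the same data would be {45}, i.e. 9 x 5 working days.)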

What's the best practice for exploiting the time series prediction history?

Hi all,

We are predicting our revenue at two frequencies. The first is monthly; the second is weekly, retrained daily with the same parameters. The first is aimed at giving a big picture of our performance during the month; the second is more operational and is used to drive operational action. So every day we train the second structure with all the data, and MTS predicts a different value each day based on the new data used in training.

Example:

Day of process: 05/07/2010
Data used to train: 01/01/2002 ----> 05/07/2010
Values predicted between: 06/07/2010 ----> 31/07/2010
The value for 18/07/2010: 35,000 $

Day of process: 10/07/2010
Data used to train: 01/01/2002 ----> 10/07/2010
Values predicted between: 11/07/2010 ----> 07/08/2010
The value for 18/07/2010: 42,000 $

Certainly the second method (predicting daily with new training data) will be more accurate, but it can't be used to drive strategy because it changes every day. Any thoughts about this? Should we save all the data (every prediction for every series, every day, for the 30 coming steps)? Or update the coming values with the refreshed prediction (the manager will be confused)?

Advice on collecting data in a time series

Hi everyone, I just wanted to get some ideas on what you think would be the best way to collect data/numbers that are part of a time series. Let's say I'm collecting monthly data from the users, related to some product, and I'd like to provide them a simple and efficient way of entering these numbers based on some month-end period. So, for instance, on 6/30/2009 they could enter some numbers for a set of data points that pertain to that product. Would one of the data controls (such as GridView or DetailsView) be sufficient to do this? I know the GridView isn't really able to save data, but I believe the DetailsView has some functionality for that. In the end, I'd really like to provide a seamless way to show this, plus the ability to enter and save the data. Any ideas would be appreciated. Thanks

SSIS 2K5 - Deploy package with C# - Really Slow



For a data warehouse project, I built a WiX install manager to create the data warehouse, deploy the SSIS packages, and deploy an OLAP cube.

When it comes to deploying (loading) the dtsx packages, it takes about 40-60 seconds to load each package (I have around 50 packages, so it's really too long).

This is the code I'm using:

        DirectoryInfo di = new DirectoryInfo(session["INSTALLLOCATION"] + @"Sources");
        FileInfo[] rgFiles = di.GetFiles("*.dtsx");
        Microsoft.SqlServer.Dts.Runtime.Application app = new Microsoft.SqlServer.Dts.Runtime.Application();
        foreach (FileInfo fi in rgFiles)
        {
            string pkg = session["INSTALLLOCATION"] + @"Sources\" + fi.Name;
            Package p = app.LoadPackage(pkg, null); // this call takes 40-60 seconds per package
        }


Slow BinaryReader Position Change Time


Hello, I have a question about a laggy position-change time when reading from a large file (actually, in this case, a 20 GB or so disk drive). Basically, I'm writing my own class that uses the CreateFile function to read raw data from a local disk drive, plus an extract-file class that allows me to seek to the blocks (the file system is FAT) that a file occupies, read the data, piece it together, etc. I noticed, though, that when extracting an 800 KB file it took about 10 seconds to do the whole process, and the main cause was that changing my reader's position took anywhere from 250 milliseconds to one complete second.

The short version: When extracting a file using my own classes, it took way longer than it should have, and the cause is changing the IO's position.


The code is exactly as you'd probably imagine:

// each iteration repositions the stream at the next block's offset (the block read itself is omitted in the post)
for (int i = 0; i < f.BlocksOccupied.Length - 1; i++)
    br.BaseStream.Position = m.GetBlockOffset(f.BlocksOccupied[i], f.PartInfo);
So what I am asking is whether there is any way to speed up the process of moving my position.
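One thing that sometimes helps in this kind of extractor, sketched under the assumption that many of the occupied blocks are physically contiguous (f, m, br and blockSize here are stand-ins for the poster's own members, not a complete implementation): coalesce runs of adjacent blocks so each run costs one seek and one large read instead of one reposition per block.

    // Sketch: group contiguous blocks into runs; one seek + one read per run.
    // f, m, br and blockSize are assumed stand-ins for the poster's own types/members.
    int i = 0;
    while (i < f.BlocksOccupied.Length)
    {
        int runStart = i;
        // extend the run while the next block number is adjacent to the current one
        while (i + 1 < f.BlocksOccupied.Length &&
               f.BlocksOccupied[i + 1] == f.BlocksOccupied[i] + 1)
        {
            i++;
        }

        br.BaseStream.Position = m.GetBlockOffset(f.BlocksOccupied[runStart], f.PartInfo); // one seek per run
        byte[] run = br.ReadBytes((i - runStart + 1) * blockSize);                          // one read per run
        // ... append 'run' to the extracted file ...

        i++;
    }

Whether this pays off depends on how fragmented the files are; if every block is scattered, reducing the number of repositions (or reading through a single buffered stream) is the only real lever.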

Optimizing Data Mining processing time for a Time Series model in SQL Server 2008 R2


Does anyone have any references on optimizing the initial processing time for a data mining model? Books, blogs, etc. I have the Wiley “Data Mining with SQL Server 2008” book, and while I’ve learned a lot from it, it doesn’t seem to cover much around troubleshooting things like processing time. I also have a few other books that have a chapter or two on data mining, but again just the basic “here’s what it is and how to set it up”, nothing that quite covers troubleshooting or optimization. I’ve also checked out the various data mining blogs/sites.


I’ve got a Microsoft Time Series model based on a cube. Very simple: I'm trying to forecast sales. I have one dimension which is the list of products (about 1,600), a second which is the time dimension, and finally the measure is daily sales figures for each, about 3.2 million rows in total. On a brand new server with two quad-core processors and 16 GB of RAM it took 40 hours to process. Seems rather high?
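(For scale, my own back-of-the-envelope arithmetic rather than anything from the book: 3.2 million rows over roughly 1,600 products is about 2,000 daily observations per series, and with the default MIXED forecast method each series is fitted twice, once by ARTXP and once by ARIMA, so processing is effectively building on the order of 3,200 per-series models over fairly long histories. That goes some way toward explaining why the initial processing is so expensive.)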

Still about slow initialization time in WCF



A few days ago I was having problems with client-side WCF initialization time. I thought I had solved the problem by removing the automatic proxy detection option, but it seems there are still some computers where the first call takes way too much time (and in those cases, the proxy configuration options are fine). So I've profiled the application and noticed that it spends most of the time in the RealProxy.PrivateInvoke method:


42,92% Authenticate - 8113 ms - 1 call - Mercados.WinForms.Security.MercadosUserManager.Authenticate(String, String)   (from Mercados.WinForms.Security.IUserManager)
  18,39% PrivateInvoke - 3477 ms - 1 call - System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData &, Int32)
  13,80% ChannelFactory<TChannel>..ctor - 2608 ms - 1 call - System.ServiceModel.ChannelFactory<TChannel>..ctor(String)
  10,19% CreateChannel - 1926 ms - 1 call - System.ServiceModel.ChannelFactory<TChannel>.CreateChannel()
  0,39% Dispose - 74 ms - 1 call - System.ServiceModel.ChannelFactory.Dispose()   (from System.IDisposable)
  0,00% ObtemRoles - 0 ms - 1 call - Mercados.WinForms.Security.MercadosUserManager.ObtemRoles(TipoUtilizador)
  0,00% get_Credentials - 0 ms - 2 calls - System.ServiceModel.ChannelFactory.get_Credentials()
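Judging by that profile, the ChannelFactory<TChannel> constructor and the first CreateChannel are where most of the time goes, so one common mitigation is to construct the factory once and reuse it for every call. A minimal sketch, assuming a contract and endpoint configuration name purely for illustration (IUserManager and "UserManagerEndpoint" are not taken from the profiled application):

    using System.ServiceModel;

    // Illustrative contract; the real one lives in the profiled application.
    [ServiceContract]
    public interface IUserManager
    {
        [OperationContract]
        bool Authenticate(string user, string password);
    }

    public static class UserManagerProxy
    {
        // ChannelFactory construction (reading config, building the channel stack)
        // is the slow step seen in the profile, so pay it once per process.
        private static readonly ChannelFactory<IUserManager> Factory =
            new ChannelFactory<IUserManager>("UserManagerEndpoint");

        public static IUserManager CreateChannel()
        {
            // CreateChannel on an already-constructed factory is comparatively cheap.
            return Factory.CreateChannel();
        }
    }

This doesn't remove the first-call cost entirely, but it keeps it to one occurrence per process instead of one per proxy.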

WSP files taking a lot of time to deploy

In order to integrate SSRS 2008 with MOSS 2007, we added a server as a WFE on the farm. This WFE has SSRS 2008 configured, but since then the WSP files, though not giving any error, are taking more than 30 minutes to deploy and retract. The configuration of the farm is simple too: 2 front ends, 1 search and 1 database server. I have tried restarting the timer service and turning the timer service off on WFE2 (where SSRS 2008 is also present). Any ideas?
Adnan Shamim

Unusual results using Time Series Algorithm


I am trying to forecast Q4 2010 activity. The dataset is continuous with no gaps from 200401 to 200909. I want to compare my model with the actual Q4 2009 figures to verify its accuracy. The dataset has two columns (month - Key Time, and Sales - Predict Only). I set PERIODICITY_HINT to {12} since our data is very seasonal. The problem is that in the dataset every December shows a sales increase over the previous year, but the prediction fails to recognize this trend. Is the problem that for 2009 I only provide 9 months? Please help. A snapshot of the dataset is below:



200401   866
200402   709
200403   748
200404  1053
200405   829
200406   957
200407   872
200408   917
200409
...
200907  3909
200908  3829
200909





Cube, Build, Deploy, Process

Dear all,

I'd like to get a simple and clear explanation of the cube in data mining, and of 3 notions we encounter a lot: Build, Deploy, and Process.

(1) What is the cube that is created when we deploy a mining solution/project?
  I wonder what type of cube it is, because although the deploy/process dialog
  shows that cube, after a successful deployment we still don't see it in the
  Cubes folder of the project.

(2) Why did SQL Server create that cube, even though we process only one table
  and use only a case table (without a nested table)?

(3) Can someone explain these 3 concepts with CLEAR differences between them?
  (A) Build
  (B) Deploy
  (C) Process

As far as I know, the stages go like this: build, then deploy, then process.  Also, it seems
to me that those operations do not create objects inside the relational database, but create
objects (binary and text, with the text files usually in XMLA) in the related project's
folders and subfolders.  Any good explanation is appreciated.


Cannot assign multiple keys in the Time Series Algorithm


We have a small problem which we really need to know how to solve as soon as possible. As I told you in our last session, we are running out of time, and this issue is stopping us from completing our predictions. I will try to describe our exact business case and explain it in as much detail as I can so you can help us as soon as possible.


                                Kindly check the following scenario:


                                Table name: X

                                Columns: ID [Identity column], DATE_TIME [its granularity is hour], NODE_NAME, REASON, REQUESTS
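
For what it's worth, the Microsoft Time Series algorithm does allow a second key next to the KEY TIME column: the extra key acts as the series key, giving one series per NODE_NAME. A minimal sketch, sent as DMX through ADOMD.NET; the model name, connection string and PERIODICITY_HINT value here are illustrative assumptions, not taken from the post:

    using Microsoft.AnalysisServices.AdomdClient;

    class CreateTimeSeriesModel
    {
        static void Main()
        {
            // Sketch: a time series model keyed on both the time column (KEY TIME)
            // and the series column (KEY). Names and parameter values are illustrative.
            using (var conn = new AdomdConnection("Data Source=localhost;Catalog=MyMiningDb"))
            {
                conn.Open();
                AdomdCommand cmd = conn.CreateCommand();
                // DATE_TIME is the time key (hourly granularity), NODE_NAME is the
                // series key, REQUESTS is the continuous value being forecast.
                cmd.CommandText = @"
                    CREATE MINING MODEL RequestsForecast
                    (
                        [DATE_TIME] DATE KEY TIME,
                        [NODE_NAME] TEXT KEY,
                        [REQUESTS]  LONG CONTINUOUS PREDICT
                    )
                    USING Microsoft_Time_Series (PERIODICITY_HINT = '{24}')";
                cmd.ExecuteNonQuery();
            }
        }
    }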


Slow data retrieval on the first query, every time


Hi all,

I have developed a POS system which connects to a local database (MS SQL Server Express 2005).

I'm facing an issue whereby, after the cashier keys in a barcode, the description and price per unit are displayed very slowly the FIRST time. The subsequent items are very fast.

This happens when the system has been idle for some time; with continuous use, the issue does not occur.

Is there any advice or setting I can apply to improve this performance issue?




ActiveX control slow to load the first time



I have an ActiveX control which, when first called or loaded in my ASP.NET application, is really slow to load. However, after the first load, it is really quick!

My question is this: "How do I make my ActiveX control load much faster when first called? Is there a way to preload the ActiveX on the page so that when it is used or called, it doesn't take so long to load?"

I have checked whether my ActiveX control is being called correctly by my JavaScript code, and it is. All my ActiveX control does is make a call to Outlook and set some user properties. Not much.

Please help; this has been doing my head in for days.

Converting daily snapshots into a format for time series analysis


I wasn't sure if this was an app question or a database question, so I'll start here first ...

Desired End State:
I'd like a process/app/database/whatever that converts daily inventory snapshots into one or two tables for easy analysis over time. Ideally, I'd like the solution to be efficient, flexible, and resource-sensitive.

What is the best way to go about this?

Current State/Background:
I'm working with a vendor who sends me daily Excel sheets with assets under management. The reports are based on a snapshot of master records from the inventory system. For discussion, let's say a sample looks like this:

AssetID, Serial#, Status
123456, ABC123, Deployed
365494, D2-F39, Retired
B63489, 123GR2, Pending

The inventory is mostly static. Of the 40K rows in the spreadsheet, only 20 or so change from day-to-day. Further, once an asset is retired, its record never changes again.

Because they're complete snapshots of the master data, the spreadsheets are massive (over 40K rows, currently around 40 MB each). This makes them awkward to work with.

I have two main business requirements:
1) From today's spreadsheet, I need an easy way to identify the 20 rows that changed yesterday.
2) For a specific asset, I need to track its changes over time.
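
One way to attack requirement (1), sketched under the assumption that each day's spreadsheet is first loaded into memory as rows keyed by AssetID (the Asset class and the loading step are illustrative, not part of the vendor feed):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Sketch: diff two daily snapshots keyed on AssetID and return the new/changed rows.
    // How the rows get loaded (CSV export, OLE DB over the workbook, a staging table, ...)
    // is left out; the Asset class below is an illustrative stand-in.
    class Asset
    {
        public string AssetId;
        public string SerialNumber;
        public string Status;
    }

    static class SnapshotDiff
    {
        public static IEnumerable<Asset> ChangedSince(IEnumerable<Asset> yesterday,
                                                      IEnumerable<Asset> today)
        {
            var previous = yesterday.ToDictionary(a => a.AssetId);
            foreach (var asset in today)
            {
                Asset old;
                // a row counts as changed if it is new, or if any tracked field differs
                if (!previous.TryGetValue(asset.AssetId, out old) ||
                    old.SerialNumber != asset.SerialNumber ||
                    old.Status != asset.Status)
                {
                    yield return asset;
                }
            }
        }
    }

Appending only those changed rows, stamped with a snapshot date, to a small history table would also cover requirement (2), since an asset's full timeline is then just its rows in that table ordered by date.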
