Average Column in fact table - how to aggregate

Posted Date: October 13, 2010    Points: 0    Category: Sql Server


I have a column in my fact table called Average Handling Time, where I keep an average value - this is at the leaf level.

What should I set the aggregation function to in my cube? For example, when I aggregate this to team level I don't want the sum but the average.

I did try AverageOfChildren but didn't get the right value - see below.



User    Average Handling Time
a       400
b       1011
c       602
d       298
e       393
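
A pre-averaged measure is non-additive, so none of the built-in aggregation functions (Sum, AverageOfChildren, etc.) will roll it up correctly from user to team level. The usual workaround is to store the additive pieces - total handling time and the number of calls handled - and derive the average at whatever level is queried (in a cube this becomes two Sum measures plus a calculated member that divides them). A minimal T-SQL sketch of the idea, assuming hypothetical FactCalls and DimUser tables:

-- Hypothetical schema: FactCalls(UserKey, TotalHandlingTime, CallCount), DimUser(UserKey, TeamName)
-- The average is derived from additive components, so it is correct at any grain.
SELECT
    d.TeamName,
    SUM(f.TotalHandlingTime) AS TotalHandlingTime,   -- additive
    SUM(f.CallCount)         AS CallCount,           -- additive
    SUM(f.TotalHandlingTime) * 1.0 / NULLIF(SUM(f.CallCount), 0) AS AvgHandlingTime
FROM dbo.FactCalls AS f
JOIN dbo.DimUser   AS d ON d.UserKey = f.UserKey
GROUP BY d.TeamName;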


More Related Resource Links

Display column from child table. Possible ?



I'm using dynamic entities with EF4. On the list page of a table, I would like to display a column containing information from a child table of the current element. Sample:

Order List :

Order Data | Required Date | Shipped Date | etc... | ... | Customer Name (foreign key with tostring() method override) | Customer Postal Code (Column that I want to add) |

I don't know how to do that. Is it possible? Maybe I have to create my own metadata proxy that will dynamically add a column to the MetaColumn list of the table.

Does someone have an easier or better idea ?

Thank you for any help.

DataGrid: Tailor Your DataGrid Apps Using Table Style and Custom Column Style Objects


One of the most enduring challenges in writing user interfaces is figuring out how to display large amounts of data efficiently and intuitively without bewildering the user. The problem becomes particularly thorny when the interface must reflect hierarchical relationships within the data that the user needs to modify. The Windows Forms DataGrid control gives developers a powerful and flexible tool to meet this challenge. This article explains its basic operations and shows how to extend the DataGrid to display columns of data in an application-appropriate manner.

Kristy Saunders

MSDN Magazine August 2003

SQLCE table column DefaultValues don't show up in XSD Dataset designer

Using Visual Studio 2008 with SQL CE 3.5, I notice that default values in the creation scripts for the database tables are not reflected in the dataset designer XSD file. For example, the following SQL script creates the non-nullable table column named "Content" with a default value of 'Image':

"Content" nchar(20) NOT NULL DEFAULT 'Image',

But in the column properties of the dataset designer (XSD) panel, this column correctly shows up as non-nullable, but with a DBNull default value as follows:

Name: Content
Allow DBNull: False
DefaultValue: <DBNull>

Am I missing something somewhere, or is this a VS bug? Also, how do I get the XSD file to regenerate after schema changes in the database? SqlMetal doesn't do it.

Thanks,
-BGood

Fetch Identity column just after inserting a row in table

Hi, please help me with this question. I have a table and I insert a row into it. How can I select the latest inserted row? Can the 'inserted' table be used outside a trigger? (I mean, can we use it in the above scenario? If yes, how?) Thanks in advance, Tiya
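
Outside a trigger the same idea is available through the OUTPUT clause, and SCOPE_IDENTITY() returns the identity value just generated in the current scope. A minimal sketch, assuming a hypothetical Orders table with an identity key:

-- Hypothetical table with an IDENTITY primary key
CREATE TABLE dbo.Orders
(
    OrderId  INT IDENTITY(1,1) PRIMARY KEY,
    Customer NVARCHAR(50) NOT NULL
);

-- Option 1: OUTPUT returns the inserted row directly (the inserted pseudo-table is usable here, outside a trigger)
INSERT INTO dbo.Orders (Customer)
OUTPUT INSERTED.OrderId, INSERTED.Customer
VALUES (N'Contoso');

-- Option 2: capture the new identity value and re-select the row
DECLARE @NewId INT;
INSERT INTO dbo.Orders (Customer) VALUES (N'Fabrikam');
SET @NewId = SCOPE_IDENTITY();

SELECT * FROM dbo.Orders WHERE OrderId = @NewId;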

Can an alias in a select be used for selecting another column in that table?

Hi all, I want to use an alias defined in the select clause to build another column in the same select:

select top 1
    (case when CreatedByName <> '' then 'yy' else 'xx' end) as filName,
    (filName + 'xx')
from Order

But it throws an error like "Invalid column name 'filName'." Could you please help me out?
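
SQL Server doesn't allow a column alias to be referenced by other expressions in the same select list. Two common workarounds, sketched against the same hypothetical Order table with a CreatedByName column:

-- Workaround 1: compute the expression once in CROSS APPLY, then reuse the alias
SELECT TOP 1
    x.filName,
    x.filName + 'xx' AS filNameSuffixed
FROM [Order] AS o
CROSS APPLY (SELECT CASE WHEN o.CreatedByName <> '' THEN 'yy' ELSE 'xx' END AS filName) AS x;

-- Workaround 2: wrap the aliased expression in a derived table (a CTE works the same way)
SELECT TOP 1
    t.filName,
    t.filName + 'xx' AS filNameSuffixed
FROM (
    SELECT CASE WHEN CreatedByName <> '' THEN 'yy' ELSE 'xx' END AS filName
    FROM [Order]
) AS t;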

How to get a Table name from the column Value in Sql Server?

Hi all, I have a number of tables in the database, and I have a column value "abc" coming from one of those tables. Now I need to find the name of the table this column value is coming from.
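
When only the value is known, one option is to generate a search over every character column from the catalog views. A rough sketch (slow on large databases, meant only for ad-hoc investigation; the value 'abc' is taken from the question):

DECLARE @SearchValue NVARCHAR(100) = N'abc';
DECLARE @sql NVARCHAR(MAX) = N'';

-- Build one SELECT per character column; each emits the table name when the value exists in that column
SELECT @sql = @sql +
    N'SELECT ''' + TABLE_SCHEMA + N'.' + TABLE_NAME + N''' AS TableName, ''' + COLUMN_NAME + N''' AS ColumnName ' +
    N'WHERE EXISTS (SELECT 1 FROM ' + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME) +
    N' WHERE ' + QUOTENAME(COLUMN_NAME) + N' = @val);'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('char', 'nchar', 'varchar', 'nvarchar');

EXEC sp_executesql @sql, N'@val NVARCHAR(100)', @val = @SearchValue;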

Fact table in DSV vs partitions pointed to a different table

I am seeing an issue in my cube for a partition that is based on a separate table from the fact table in the DSV. I have 8 partitions, all from different physical tables. In the DSV I used 1 of those 8 partition tables as the "source" of the DSV so I could model the relationships between the fact and the dimensions. On 1 of the 8 it loads over 1 million rows from the partition into the cube, but when I use the browser to show the count in that particular partition it shows the exact same number of records that are in the table that was used in the DSV. The strange thing is all the other partitions work fine except this 1. I have deleted the partition and added it back multiple times and can't get it to work right. Has someone seen this problem before?

I have run into this a couple of times; one way of fixing it was to recreate the entire project as a new project, copy all objects from the old project, and rebuild. I can't seem to figure out another way of fixing this. Craig

Building Fact and Dim table

How do I build fact and dimension tables for the requirement below? Fact records (approx. 800k) have to be analyzed every day with respect to age groups. Every record in the fact will have a DOBSID, and the age/age group should be calculated every day based on GETDATE().
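
One option is to keep the date of birth on the dimension and derive the age band at query time instead of rewriting the 800k fact rows daily. A minimal sketch, assuming hypothetical DimPerson and FactActivity tables (the band boundaries are placeholders):

-- Hypothetical tables: DimPerson(DOBSID, DateOfBirth), FactActivity(DOBSID, ...)
-- Age is computed from GETDATE() whenever the view is queried, so it is always current.
CREATE VIEW dbo.vFactActivityWithAgeGroup
AS
SELECT
    f.*,
    ag.Age,
    CASE
        WHEN ag.Age < 18 THEN '0-17'
        WHEN ag.Age < 35 THEN '18-34'
        WHEN ag.Age < 50 THEN '35-49'
        WHEN ag.Age < 65 THEN '50-64'
        ELSE '65+'
    END AS AgeGroup
FROM dbo.FactActivity AS f
JOIN dbo.DimPerson    AS p ON p.DOBSID = f.DOBSID
CROSS APPLY (SELECT DATEDIFF(YEAR, p.DateOfBirth, GETDATE())
                  - CASE WHEN DATEADD(YEAR, DATEDIFF(YEAR, p.DateOfBirth, GETDATE()), p.DateOfBirth) > GETDATE()
                         THEN 1 ELSE 0 END AS Age) AS ag;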

Is it suggested to have extra column on the SSISConfigurations SQL Configuration table

Hello, I have all my configurations saved in a SQL Server table. So I just wanted to know if it is advisable to have an extra column on that table? Thanks, Prabhat

How to recover from Derived Column Transformation Editor corrupting table metadata?

Hi there,

In attempting to use the DCTE to replace the value in a field with a trimmed version (SSN = trim(SSN)), it seems that my metadata has become corrupted by the Derived Column drop-down list. As a result:

#1: I no longer see my incoming SSN field in the "columns" tree, and any reference to it is deemed "invalid", even though its value does make it out of the Data Transformation process. How can I get back the reference to this field without redoing this entire task?

#2: In the process, DCTE created new columns with "SSN" prefixed by the task name, such as "trim character fields.SSN". How can I delete these?

It seems that one slip of the mouse in this form can lead to irreversible corruption of the metadata, which the "debugger" then references and uses to "invalidate" subsequent work. I have tried everything I can think of to refresh this, including using the "Advanced Editor" and reloading the entire package. Any ideas?

Thanks, Karl Kaiser

wrapping a table into a multi column page

I have a table populated with some data fields that spans 5 pages. I am trying to set up the layout as a multi-column report (just two columns) in which the table wraps at the end to populate the right section of the page. How can this be done? Javier Guillen

Getting counts by 2nd Date Dimension Attribute with Snapshot Style Fact Table

I have an MDX question I am finding hard to solve. I have a snapshot fact table with a snapshot of the records in the source system for each batch date. All records in the fact table are assigned the batch date with the batch date key. There are many records for each day, and each batch date is an entire copy of the source records. So, the grain of the fact table is one record for each batch date that exists in the source system. These fact rows have another date in them for when the record was entered. This date is different from the batch date in that the batch date is based on the day the batch was processed and the entered date is based on when the record was entered. If a record was entered many days before, its batch date will be today but its entered date will be several days ago. Therefore each day a copy of all the records entered the previous batch date and all the records added on today's batch date are present.

Fact Table:
FactSnapshotKey (surrogate for easier administration)
BatchDateKey (link to batch date dimension - date dimension, first in the dimension list so it is used for semi-aggregate measures)
EnteredDateKey (link to entered date dimension - date dimension)
Count - measure for fact table - default measure from Analysis Services cube
2 Dim

Update an accumulating snapshot fact table

This is my first time implementing an accumulating snapshot fact table and I require some guidance. Accumulating snapshot fact tables show the status at any given moment. They are useful for tracking items with a certain lifetime, for example the status of order lines: every time there is a new piece of information about a particular purchase, we update the fact table record. We only insert a new record in the fact table when there is a new purchase requisition. What I really need to know is how best to handle the updates. This feels very similar to managing SCD-1s in dimension processing! Anyone able to advise? Thanks in advance.

Here is a perfect example we can use: http://blog.oaktonsoftware.com/2007/03/accumulating-snapshot-use-accumulating.html

Figure 1, below, shows an accumulating snapshot for the mortgage application process. The grain of this fact table is an application. Each application will be represented by a single row in the fact table. The major milestones are represented by multiple foreign key references to the Day dimension - the date of submission, the date approved by a mortgage officer, the date all supporting documentation was complete, the date approved by an underwriter, and the date of closing.
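
For the updates themselves, the usual pattern is a keyed UPDATE (or MERGE) against the single row per application, filling in whichever milestone date key has just become known. A rough T-SQL sketch, assuming a hypothetical FactMortgageApplication accumulating snapshot and a staging table of milestone events:

-- Hypothetical accumulating snapshot: one row per application, milestone date keys filled in over time
MERGE dbo.FactMortgageApplication AS tgt
USING dbo.StgApplicationMilestones AS src
    ON tgt.ApplicationKey = src.ApplicationKey
WHEN MATCHED THEN
    UPDATE SET
        -- only overwrite a milestone when the staging row actually carries a value for it
        tgt.ApprovedDateKey     = COALESCE(src.ApprovedDateKey,     tgt.ApprovedDateKey),
        tgt.DocsCompleteDateKey = COALESCE(src.DocsCompleteDateKey, tgt.DocsCompleteDateKey),
        tgt.ClosedDateKey       = COALESCE(src.ClosedDateKey,       tgt.ClosedDateKey)
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ApplicationKey, SubmittedDateKey, ApprovedDateKey, DocsCompleteDateKey, ClosedDateKey)
    VALUES (src.ApplicationKey, src.SubmittedDateKey, src.ApprovedDateKey, src.DocsCompleteDateKey, src.ClosedDateKey);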

Daily complete cube rebuild of four dimensions and fact table, including remapping of all surrogate keys

Hi SSIS engineers: forgive me if this is a multi-forum question. Our primary activity in the next week is to automate the processing in SSIS, where I led the team to create complete processing flows for Full and Add in the order of Dimension, Measure Group, Partition, Cube, Database. These work.

The problem occurs in a complete refresh of the ERP database, which caused me manual effort inside SSAS that I plan to find a way to automate in SSIS. I performed a complete refresh of our cube from the ERP source from a time perspective. We are automating this process in SSIS. In SSAS, I had to manually delete the four dimensions from the UDM view via the Solution Explorer. Since the complete refresh increased the surrogate keys in the dimensions and since the names were the same, I couldn't just drop the partition and reprocess the dimensions, since, in effect, new fact rows would have to be mapped to the new keys. SSAS held on to the old keys even with Full processing of the dimensions first, then the cube. Until I dropped (deleted) the dimensional tables from the Solution Explorer and the UDM and later re-added the dimensions with the new surrogate keys (add, update and delete dimensional attribute changes in the full refresh) via the Add Dimension wizard, the cube kept the old surrogate keys and failed in measure group, fact, database and partition processing.

How to get column value difference of rows in same table

Dear all, I need a T-SQL statement to find the difference of values of two rows in the same table, taking some conditions into consideration. I have the following rows in a table named table1:

dbname   sqlinst   size1   ddate
sqldb1   inst1     200     1/1/2009
sqldb1   inst1     250     1/1/2010
sqldb1   inst1     170     1/1/2008
sqldb2   inst2     300     1/1/2009
sqldb2   inst2     340     1/1/2010

I need to find the difference between size1 values for rows where dbname and sqlinst are the same. I also need to specify in T-SQL that the ddate of the row I subtract from is 1/1/2010 and that the ddate of the row I subtract is 1/1/2009 (e.g. in the above example: for sqldb1 inst1, I need to perform 250-200 and ignore 170, and for sqldb2 inst2, 340-300). Please let me know if you have a solution for this. A million thanks!
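
A self-join on the key columns, with each side pinned to one of the two dates, gives the difference directly. A minimal sketch against the table1 layout above (dates hard-coded as in the question):

-- Growth from 1/1/2009 to 1/1/2010 per (dbname, sqlinst); other ddates are ignored
SELECT
    cur.dbname,
    cur.sqlinst,
    cur.size1 - prev.size1 AS SizeDifference
FROM dbo.table1 AS cur
JOIN dbo.table1 AS prev
    ON  prev.dbname  = cur.dbname
    AND prev.sqlinst = cur.sqlinst
WHERE cur.ddate  = '20100101'
  AND prev.ddate = '20090101';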

Weird case of: Msg 213, Column name or number of supplied values does not match table definition

Hi, I'm working on several triggers (that fire after insert or update) in order to log the changes in a different table. They all follow a similar syntax and are working fine, except for this one... I've reduced the code below to the minimum that gives an error, so we can safely assume the other parts of the trigger are working fine.

INSERT INTO [Adt].[WardUnitStayLog]
SELECT t.*
FROM [Adt].[WardUnitStay] t
INNER JOIN inserted i ON i.[Id] = t.[Id];

I've used this same syntax (but on different tables) in other triggers, and those are working perfectly fine. The above query produces this error: Column name or number of supplied values does not match table definition.

I've checked both tables for differences in the columns, but to no avail (I've checked them manually and by outer-joining information_schema.columns). I've also checked the order in which these columns are defined; they match across the two tables. These are the creation scripts for the tables:

CREATE TABLE [Adt].[WardUnitStay] (
    [Id] [dbo].[Id] IDENTITY(1,1) NOT NULL,
    [UnifiedUnitStayId] [dbo].[Id] NOT NULL,
    [WardCd] [dbo].[Cd] NOT NULL,
    [_FirstAtTm] [dbo].[Dtm] NOT NULL,
    [_IsReservation] BIT NOT NULL,
    [_LastAtTm] [dbo].[Dtm] NOT NULL,
    [_LastBedCd] [dbo].[Cd] NULL,
    [_LastPhysicianUid]
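
Msg 213 here usually means the INSERT target and the SELECT list don't line up in column count - for example, the log table has its own identity column or extra audit columns - so relying on SELECT t.* is fragile. The usual fix is to name the columns explicitly on both sides. A sketch using the column names from the WardUnitStay script above (the log table's actual column list is an assumption, since its script isn't shown):

-- Explicit column lists keep the trigger working even when the two tables differ in column count or order
INSERT INTO [Adt].[WardUnitStayLog]
    ([Id], [UnifiedUnitStayId], [WardCd], [_FirstAtTm], [_IsReservation], [_LastAtTm], [_LastBedCd])
SELECT
    t.[Id], t.[UnifiedUnitStayId], t.[WardCd], t.[_FirstAtTm], t.[_IsReservation], t.[_LastAtTm], t.[_LastBedCd]
FROM [Adt].[WardUnitStay] AS t
INNER JOIN inserted AS i
    ON i.[Id] = t.[Id];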

What is stored in the guidlocal column of the MSmerge_genhistory table (MS SQL Server 2000)?

In the MSmerge_genhistory table there is a column called guidlocal. I noticed that for a large subset of the records on both the publisher and the subscriber, the value in this column is '00000000-0000-0000-0000-000000000000'. What does it mean? BOL says that guidlocal is the "Local identifier of the changes identified by generation at the Subscriber", but I still can't make out what '00000000-0000-0000-0000-000000000000' means.