
Archive for the ‘How To’ Category

As explained in How To Export A timeXtender Project, it is possible to export a project definition (the metadata) into an XML file. This would not be very useful unless it could be imported again :-)

The following explains how to import a project definition from a file.

How to import a project

  1. Click the timeXtender ribbon menu icon in the upper left corner
  2. Select the Import/Export menu option
  3. Select the Import Project… menu option
  4. In the dialog that opens, click the ellipsis button
  5. Select a file and click the Open button
  6. Click the OK button to start the import

Project Import Dialog

 

A note on version control

The project that is persisted in a file contains a single version of the project.

Every project is assigned a unique id when it is created (a globally unique identifier, or GUID for short). During import, this id is compared to the existing projects in the repository, and if a project with the same id exists, you will see the following prompt:
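The id comparison can be sketched in a few lines of Python. This is a toy model only: the `<Project Id="...">` layout and the function names are hypothetical stand-ins, not the actual timeXtender export format.

```python
import uuid
import xml.etree.ElementTree as ET

def read_project_id(xml_text):
    """Return the project id stored in an exported project definition.

    The <Project Id="..."> layout is a hypothetical sketch; the actual
    element and attribute names in a timeXtender export may differ.
    """
    return uuid.UUID(ET.fromstring(xml_text).attrib["Id"])

def import_action(imported_id, repository_ids):
    """Decide what the import should do when the id already exists."""
    if imported_id in repository_ids:
        return "prompt: save as latest version or save as a new project"
    return "import as a new project"

# Example: the imported project's id already exists in the repository.
export = '<Project Id="6f1c2d3e-0000-0000-0000-000000000001" Name="Sales DW" />'
repo = {uuid.UUID("6f1c2d3e-0000-0000-0000-000000000001")}
print(import_action(read_project_id(export), repo))
# prompt: save as latest version or save as a new project
```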

Project Import Dialog II

Save project as latest version

This will save the imported project definition into the repository, essentially “overwriting” the existing project as the latest version of this project.

 

Save project as a new project

Consider this like a save as function that allows you to save a copy of the imported project, with a new name and id, side by side with the original project in the same repository.

Next, you will see the following prompt:

Project Import Dialog III

Then you will be asked to specify a new name for the imported project

Project Import Dialog IV

 

The connection manager

Following a successful import, a prompt will ask if you want to run the connection manager. This provides a wizard-style interface for setting up all connections in the imported project.

Project Import Dialog V


In timeXtender there is a feature called multiple environments, which automates the process of deploying project definitions between environments, such as from development to test or from test to production.

If you ever need to manually copy or move a project definition between environments, the import and export features can be used.

The following is a very short guide that shows how to export a project.

How to export a project

  1. Click the timeXtender ribbon menu icon in the upper left corner
  2. Select the Import/Export menu option
  3. Select the Export Project… menu option
  4. In the dialog that opens, specify a location and name for the file
  5. Click the OK button

The export file is created in XML format, thus it is recommended to use a .xml file extension.
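The export step can be sketched in Python. This is an illustrative toy model, assuming a minimal `<Project>` element; the real timeXtender export format is richer, and only the general idea (metadata serialized as XML, with a recommended .xml extension) comes from the post.

```python
import os
import tempfile
import xml.etree.ElementTree as ET
from pathlib import Path

def export_project(name, project_id, path):
    """Write a minimal, hypothetical project definition to an XML file."""
    path = Path(path)
    if path.suffix.lower() != ".xml":      # apply the recommended extension
        path = path.with_suffix(".xml")
    root = ET.Element("Project", Id=project_id, Name=name)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
    return path

# Usage: no extension given, so the file is written as sales_dw.xml
out = export_project("Sales DW", "6f1c2d3e-0000-0000-0000-000000000001",
                     os.path.join(tempfile.gettempdir(), "sales_dw"))
```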

Project Export Dialog

 

A note on version control

The export function will persist the project “as is” in memory to the file system, thus the file will not contain any history about previous versions of the project.


If you follow the Kimball dimensional modeling approach, most dimensions will be star dimensions, and some might be snowflake. Sometimes a business process will not fit this model, in which each fact is associated with a single member of each dimension. This is when many to many dimensional modeling can be applied.

The example for this post will be the joint bank account scenario (accounts with multiple owners). I know this has been used over and over again, but hey, it gives you the chance to make a direct comparison of the timeXtender implementation with BIDS.

The joint accounts example

I will look at Jane and John, a couple with both personal and joint accounts

  • Jane has a personal checking account (balance 1000)
  • John has a personal checking account (balance 1000)
  • Jane and John have a joint savings account (balance 1000)

In this example I want the balance of the joint savings account to appear under both Jane and John. The main purpose here is to show how the many to many model affects the aggregation over the dimensions (the total is not equal to simply the sum of all members).
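This aggregation behavior can be sketched in plain Python. The fact and bridge tables below are a toy model of the scenario (the account names are made up for this example); the point is that each customer sums the accounts they are linked to, while the grand total sums each account only once.

```python
# Fact rows: one balance per account.
fact_balance = {"JaneChk": 1000, "JohnChk": 1000, "JointSav": 1000}

# Factless bridge: which customers are linked to which accounts (many to many).
bridge = [
    ("Jane", "JaneChk"),
    ("John", "JohnChk"),
    ("Jane", "JointSav"),
    ("John", "JointSav"),
]

def balance_for(customer):
    """Sum the balances of every account linked to the customer."""
    return sum(fact_balance[acct] for cust, acct in bridge if cust == customer)

def grand_total():
    """Total over distinct accounts -- NOT the sum of the per-customer rows."""
    return sum(fact_balance.values())

print(balance_for("Jane"), balance_for("John"), grand_total())  # 2000 2000 3000
```

Jane and John each see 2000, yet the total is 3000, exactly as in the Excel pivot.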

The model could be further refined by, for example, adding an ownership calculation to split the balance, but that will not be the focus of this post.

The following Excel view of the finished cube shows that Jane and John have a balance of 1000 each on their personal checking accounts, and both have access to the 1000 on the joint savings account. At the same time, the total balance of all accounts is still only 3000.

 

M2M Excel Pivot

 

The data model

The data model for the joint accounts can be seen below. In a real life scenario I would use surrogate keys, but to keep the focus on solving the main problem here, I will be using natural keys.

Examining the structure, you will notice that there is no direct relationship between the fact table and the customer dimension (fact Account Balance and dim Customers). This relationship can only be established through the many to many join table (factLess CustomerAccount).

 

M2M DW Tables

The OLAP model

I trust that you already know the basics of how to create dimensions and cubes in timeXtender, thus I will only cover the specifics needed to add the many to many relationship.

The OLAP model will consist of the following

  • Account dimension
  • Customer dimension
  • Account Balance cube with fact tables (measure groups)
    • fact Account Balance (for balance measure)
    • factLess CustomerAccount (intermediate fact w/dummy measure)

Prior to adding the many to many relationship, the model will look like this in timeXtender


 

Please notice that the Customer dimension is not related to the fact Account Balance fact table

M2M Dimension Relations

 

Adding the many to many relationship

The steps to add the many to many relationship are

  1. Right-click the Customer dimension under cube dimensions. On the context menu, select the Add Many To Many Relation option
  2. In the dialog that appears, select the fact and factless tables as shown below

M2M Many 2 Many Join

 

The many to many relationship now appears under the cube dimension

 

M2M OLAP Model Complete


At the BI Day conference in Belgium last week, one of our partners mentioned that he had just been with one of his consultants at a customer, and had seen a cool timeXtender feature that he did not know existed. As he is most likely not the only one, let me tell you about the timeXtender remote control.

As we are speaking about the BI-Day conference, I cannot help drawing your attention to the fact that two out of four vision awards for business impact were won by timeXtender powered solutions. Cisco won in the private sector category and Fleet Support won the EMEA category.

What is the Remote Control?

The remote control is a small system tray application that enables remote execution of a timeXtender project without the need to install the full client. It is also known as “The CEO button” because of its simple interface, which allows execution with a single click.

Prerequisites

The remote control uses the multiple environments feature of timeXtender for remote execution, thus you need to be on either the Corporate or Enterprise edition of timeXtender and have a server environment running.

System tray icon state

The remote control appears as an LED light icon in the system tray (the right side of the Windows taskbar). The icon will appear in one of the following colors:

  • Green (led_green): the application is ready; the last execution was successful
  • Yellow (led_yellow): the project is being executed
  • Red (led_red): an error occurred during the last execution
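The color logic above amounts to a tiny state mapping, which can be sketched like this (an illustrative model only; the class and function names are made up and do not reflect the remote control's actual code):

```python
from enum import Enum

class Led(Enum):
    GREEN = "ready / last execution was successful"
    YELLOW = "project is being executed"
    RED = "an error occurred during the last execution"

def led_color(is_executing, last_run_ok):
    """Map the remote control's execution state to the tray icon color."""
    if is_executing:
        return Led.YELLOW                 # always yellow while running
    return Led.GREEN if last_run_ok else Led.RED

print(led_color(False, True).name)        # GREEN
```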

Step 1 – Configuration

Right-click the icon and choose the Settings menu option. In the settings dialog, choose a server connection. The server settings must match those of an environment configured and running in the timeXtender server service.

Remote Control Settings Entered

Step 2 – Select project and package

Use the Projects and Packages menu option to display the list of projects and execution packages available on the server. Select a package and use the Set as Default button.

Remote Control Projects and Packages

Step 3 – Execute the project

To execute the execution package selected above, use the Start update menu option. This will connect to the server, load the project and execute the execution package. While the package is executing, the LED will change to yellow, and subsequently to green or red, depending on whether the execution succeeded or failed.


In this post I will show you how to use prototyping to demonstrate business requirements and get rapid and actionable feedback. How you got the business requirements to demonstrate is not important for the purpose of this post, but assuming that you have read some of my earlier blog posts, it should be clear that I recommend taking an agile approach.

The goals for the prototype

  • Demonstrate business requirements
  • Deliver fast, using minimum resources
  • Minimize rework

I am going to build a prototype using the dimensional model approach as per best practice. Although not strictly necessary for the purpose of a prototype, I am even using surrogate keys, as I will most likely reuse the prototype for the permanent implementation after one or more very short iterations.

The four easy steps

  • Build the data warehouse star schema
  • Enter sample data in the model
  • Build the dimensional model in OLAP
  • Build a test front end in Excel using Pivot Tables

Step 1 – Build the data warehouse star schema

In a dimensional model, the business requirements can be illustrated as this star schema (also known as a star join schema). For those unfamiliar with dimensional modeling, the yellow table is called a fact table. It is the center of a star schema; it represents the numeric measurements of an event and has keys (links) to the dimensions that describe the event. The blue tables are called dimension tables. In this example we can describe an event, a sale, in the context of the customer, the product sold and the point in time at which the event took place.

Remembering that the purpose is to demonstrate the business requirements as fast as possible and with a minimum of rework, I will implement the dimensional model above with manually entered sample data in the data warehouse database. When I build the dimensional model in the timeXtender meta layer it will look like this. Please note the red fields at the bottom of the fact table. These fields are lookup fields that are used to translate the natural key joins between my dimension and fact tables into surrogate based joins.
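The idea behind those lookup fields, translating natural key joins into surrogate key joins, can be sketched in Python. This is a simplified illustration with made-up field names (Customer_SK, CustomerId), not the actual code timeXtender generates:

```python
# Dimension rows carry both a surrogate key (SK) and the natural key (NK).
dim_customer = [
    {"Customer_SK": 1, "CustomerId": "C100", "Name": "Jane"},
    {"Customer_SK": 2, "CustomerId": "C200", "Name": "John"},
]

def surrogate_lookup(dim_rows, nk_field, sk_field):
    """Build a natural-key -> surrogate-key map from a dimension table."""
    return {row[nk_field]: row[sk_field] for row in dim_rows}

def resolve_facts(fact_rows, lookup, nk_field, sk_field):
    """Replace the natural key on each fact row with the surrogate key."""
    return [
        {**{k: v for k, v in row.items() if k != nk_field},
         sk_field: lookup[row[nk_field]]}
        for row in fact_rows
    ]

facts = [{"CustomerId": "C100", "Amount": 250}]
lk = surrogate_lookup(dim_customer, "CustomerId", "Customer_SK")
print(resolve_facts(facts, lk, "CustomerId", "Customer_SK"))
# [{'Amount': 250, 'Customer_SK': 1}]
```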

Step 2 – Enter sample data in the model

Using the custom data feature in timeXtender, I will enter just enough data to be able to demonstrate the model. As a side note, you would be amazed at the difference it makes to your business stakeholders, if you use recognizable data compared to just more or less random values.

At this point in time, the model, including the manually entered demo data, only exists in the timeXtender meta model in memory. If I use the deploy and execute features at this point, timeXtender will generate all the necessary SQL code in the physical data warehouse database, load it with demo data and save the project definition to the timeXtender project repository database. The processed fact table will look like this.

  • If you do not need a multidimensional representation for your data (an OLAP cube), the data are at this point already accessible from your front end of choice.

Step 3 – Build the dimensional model in OLAP (Analysis Services)

I will use a drag and drop approach to create a very simple OLAP representation of the star schema. This will include standard dimensions, standard measures and will look like this in timeXtender

Step 4 – Build a test front end in Excel

I will use the most common of all BI front end tools to build a view on the data that I have just built. A Pivot Table is an excellent way to get a feel of the dimensions and measures of a cube.

What is next?

The Excel workbook I just created can now be delivered to my business stakeholders. They can then use this to verify the business requirements and provide instant feedback. The business stakeholders might come back with extensions or modifications to the dimensional model after seeing the first result. This might lead me down one or more additional iterations of the above steps, adjusting the model according to more detailed requirements. This is all fine and exactly why prototyping is such an efficient way to interact between data warehouse developers and stakeholders. The time I spend at this early point in the process will save me from late and costly rework.

Once the model has been verified I can continue to identify data sources needed to load actual data into the dimensional model just built. I will most likely need to make minor changes to the dimensional model on the data warehouse, such as different data types for non system fields. Except for these minor changes, the entire dimensional model including the cube can be reused and put into production after proper testing with real life data.

 

Video link

My colleague Thomas Lørup Duun has made a video on this topic available on YouTube. The video is actually only 15 minutes in real time, so the header in this post is a bit conservative :-)


I will be the first to admit it, documentation is not fun. Talking to the business, understanding their requirements, building the dimensional model, developing the physical data warehouse and delivering the results to the business. That is both fun and valuable. Documentation is simply something I have to do.

I do it for the internal and external auditors. I do it for the next developer on the project. I do it for myself, because I cannot remember every little detail of everything I did, and unlike what some developers might say – not all code is self explaining. I do documentation because I have to, and because it is invaluable to the business and the continued development, expansion and enhancement of the data warehouse.

Since there is no doubt about the importance of documentation, and since it is in no way fun, it is fortunate that I can document an entire project in five minutes.

As you are most likely well aware by now, timeXtender is a metadata driven tool that generates code for SQL, SSIS and SSAS. Or to put it even more strongly: timeXtender holds all the metadata, and thus just needs to put it in a document.

The elements of documentation

The automated PDF/XPS documentation that can be generated in timeXtender contains the following elements

  • Automatic meta data. Every field, table, data source, dimension, cube, measure etc. is automatically included in the documentation.
  • Custom documentation. Virtually every element within a timeXtender project can be described further using custom documentation in Rich Text format.

Add a custom description to an element

If you want to add what we at timeXtender call inline documentation, you simply

  1. Right-click an element such as a table, field or measure
  2. Select the Description option in the context menu
  3. Add the description (which can be formatted using the simple editor or you can even paste in images from the clipboard)

Inline documentation is powerful, fast and easy to add, simply because you document an element on the element itself. There are no other tools involved. When working with a project, you can also instantly see if documentation has been added to an element.

Custom Documentation

Generating the document

Creating the actual document is as easy as this

  1. Right-click the project
  2. Select the Documentation option in the context menu
  3. Specify the document format, path and description settings
  4. Specify any optional settings such as colors and font
  5. Hit the OK button to generate the document

The documentation setup

 

The document will open once it has been created. The screenshot below shows a PDF-formatted document. Please notice the attachment icon to the left: any custom descriptions will be attached inline to the document (depending on the setting in the dialog above).

 

Documentation PDF


With the constantly increasing data volumes that need to be handled in data warehouses, SQL Server data compression can be a good way to save on storage – and perhaps even get a performance gain in throughput capacity.

As with most of my other posts, the purpose is not to cover the how to details of a manual implementation in SQL Server, just as I will not cover the technical differences between row and page compression. The purpose of this post is to make you aware of the existence of data compression, what it does and how it is implemented in timeXtender.

What is data compression?

At the risk of being too simplistic: if you think of data compression as being to the contents of a database what zip is to the file system, you are pretty close. It does exactly what the name indicates, it compresses the data, and hence saves storage just like zipping a file.

There are two types of data compression, row compression and page compression. Row compression saves storage by using what is called a variable length storage format even for fixed length data types such as CHAR and by not storing zero and NULL values. Page compression uses a more complex technique, including considering redundancy in data.
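A toy model in Python can show where row compression's savings come from. This is only an illustration of the two ideas just mentioned (variable-length storage of fixed-width types, and not storing zero or NULL); real SQL Server row compression is far more involved.

```python
def fixed_width_size(values, width=20):
    """Naive CHAR(20)-style storage: every value occupies the full width."""
    return len(values) * width

def row_compressed_size(values):
    """Toy row compression: store only the used bytes, and skip
    NULL (None) and zero values entirely."""
    return sum(len(str(v)) for v in values if v not in (None, 0))

col = ["DK", "US", None, 0, "DE"]
print(fixed_width_size(col), row_compressed_size(col))  # 100 6
```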

Using data compression

Considering the nature of staging and data warehouse databases, page compression is well suited for data warehouse databases. Row compression is better suited for staging databases, due to the many changes (updates) normally performed in the data cleansing process.

For ease of use, there are only two parameters in timeXtender that control the use of data compression:

  • Table classification: Any table classified as a large dimension table or a large fact table uses data compression, if supported by the target
  • Deployment target: The deployment target for the staging or data warehouse database needs to be an Enterprise Edition of SQL Server 2008 or later

And just to clarify: a staging database table will always use row compression, whereas a data warehouse table will always use page compression.
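The two parameters and the staging/data warehouse rule can be summarized as a small decision function. This is a sketch of the rules as described above, with classifications and roles as plain strings; the real settings are of course chosen in timeXtender's dialogs, not in code.

```python
def compression_type(table_classification, database_role, target_edition):
    """Return the compression setting implied by the rules above."""
    if "Enterprise" not in target_edition:
        return "NONE"                     # compression requires Enterprise Edition
    if table_classification not in ("large dimension table", "large fact table"):
        return "NONE"                     # only large tables are compressed
    return "ROW" if database_role == "staging" else "PAGE"

print(compression_type("large fact table", "data warehouse",
                       "SQL Server 2008 Enterprise"))   # PAGE
```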

Data Compression

Deployment Target

Storage savings

Our field experience shows the following average storage savings

Compression type         Storage savings
Row (staging)            40-60 %
Page (data warehouse)    50-90 %

In addition, we have seen performance gains that can be explained by the reduced disk IO. If the bottleneck in general is IO and there is available memory on the server, you can expect a performance gain.

A word of caution

When you first use data compression in a database, SQL Server changes the internal storage of the database. This will prevent you from restoring a backup of a compression-enabled database onto another SQL Server instance that does not support data compression.

Software prerequisites

Data compression is a feature that requires SQL Server Enterprise Edition, thus also requires an Enterprise Edition license of timeXtender. It is supported from SQL Server 2008 and timeXtender version 4.5.

References

If you are interested in the behind-the-scenes details of data compression, SQL Server Central is a good source. Search for data compression (and make sure not to confuse it with backup compression).


