Kategorien
Allgemeines Qlik Script

Data Modelling: Qlik Concatenate, Power BI Combine, SQL Union

The last 6 months have been quite full of new things to learn – after working more than 10 years with Qlik and SQL, a new player named “Power BI” has been added to my skills. As I get more and more used to it, I feel I should help other Qlik developers understand faster how both worlds solve the same issues ;)

The issue is simple, and I face it very often: appending two fact tables to each other. I keep this solution small, so I’m not showing you how to create a “manual” demo table…

Solution in SQL

Select
 eventname
 ,timestamp
FROM table1
UNION ALL
Select
 eventname
 ,timestamp
FROM table2

So this looks quite simple 😉

Solution in Qlikscript

Table1:
LOAD
 eventname
 ,timestamp
FROM table1.csv;

Concatenate(Table1)
LOAD
 eventname
 ,timestamp
FROM table2.csv;

Looks simple too!;)

Solution in PowerQuery

...
 table1 = Source,
 table2 = SourceTwo,
 ResultTable = Table.Combine({table1,table2}),
...

This seems to be the shortest code sample. Anyway, you should not forget that the data sources are different: example 1 can only be run on a SQL Server, the Qlik script already includes the data source (CSV files) in the script, and for Power Query I left out the connection to the data sources, which would have blown up the sample!
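For completeness, here is a minimal sketch of what the full Power Query expression could look like with CSV sources – the file paths, delimiter and encoding are assumptions, adjust them to your files:

```m
let
    // Hypothetical CSV sources - adjust paths and options to your files.
    Source    = Csv.Document(File.Contents("C:\data\table1.csv"), [Delimiter = ",", Encoding = 65001]),
    SourceTwo = Csv.Document(File.Contents("C:\data\table2.csv"), [Delimiter = ",", Encoding = 65001]),
    table1 = Table.PromoteHeaders(Source),
    table2 = Table.PromoteHeaders(SourceTwo),
    // Table.Combine appends the tables, matching columns by name.
    ResultTable = Table.Combine({table1, table2})
in
    ResultTable
```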

Kategorien
VB.NET

Execute MSSQL Task with VB.NET Console Application

Download the latest Visual Studio Community Edition (about 4 GB). Then start a new VB.NET console project and use the following code to execute a stored procedure on your MS SQL Server:

Imports System.Data.SqlClient

Module Module1

    Sub Main()

        Dim jobConnection As SqlConnection
        Dim jobCommand As SqlCommand
        Dim jobReturnValue As SqlParameter
        Dim jobResult As Integer

        jobConnection = New SqlConnection("Data Source=localhost;Initial Catalog=LN_DataReplication;Integrated Security=SSPI")
        jobCommand = New SqlCommand("ReplicateTables", jobConnection)
        jobCommand.CommandType = CommandType.StoredProcedure
        jobCommand.CommandTimeout = 0

        jobReturnValue = New SqlParameter("@RETURN_VALUE", SqlDbType.Int)
        jobReturnValue.Direction = ParameterDirection.ReturnValue
        jobCommand.Parameters.Add(jobReturnValue)

        jobConnection.Open()
        jobCommand.ExecuteNonQuery()
        jobResult = DirectCast(jobCommand.Parameters("@RETURN_VALUE").Value, Integer)
        jobConnection.Close()

        Select Case jobResult
            Case 0
                Console.WriteLine("Stored procedure ReplicateTables executed successfully.")
            Case Else
                Console.WriteLine("Stored procedure ReplicateTables failed to execute.")
        End Select
        Console.Read()

    End Sub

End Module
Kategorien
Allgemeines

Qlik Sense Set Analysis and Power BI DAX Comparison

For my new job I have to move my skills from Qlik to Microsoft Power BI, which – at the beginning – is quite challenging. To keep a bridge in my mind and to speed up my learning curve, I document here samples of how things work in Qlik Sense/View and how they work in Power BI. Maybe some of the examples need more explanation? Feel free to ask; otherwise it’s as short as possible, so I can keep an overview.

Total Sales for all years

QS: SUM({$<Calendar.Year=>} TotalSales)
PowerBI: CALCULATE(SUM(TotalSales);ALL('Calendar'))
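One more paired example in the same spirit – total sales for the current calendar year only. This is a hedged sketch assuming the same Calendar table (with a Year field) and the same TotalSales field as above:

```text
QS: SUM({$<Calendar.Year={$(=Year(Today()))}>} TotalSales)
PowerBI: CALCULATE(SUM(TotalSales);FILTER(ALL('Calendar'[Year]);'Calendar'[Year]=YEAR(TODAY())))
```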

Kategorien
Allgemeines

Excel macro loop by sheet

Collect data from multiple excel tabs and put them together at a new collector sheet:

Sub WorksheetLoop()

Dim WS_Count As Integer
Dim I As Integer
Dim MA_NAME As String
Dim vJanuar, vFebruar, vMarch    ' Variant by default
Dim vzJanuar, vzFebruar, vzMarch ' Variant by default

' Set WS_Count equal to the number of worksheets in the active
' workbook.
WS_Count = ActiveWorkbook.Worksheets.Count

' Begin the loop.
For I = 1 To WS_Count

' Insert your code here.
' The following line shows how to reference a sheet within
' the loop by displaying the worksheet name in a dialog box.
MA_NAME = ActiveWorkbook.Worksheets(I).Name

vJanuar = ActiveWorkbook.Worksheets(I).Cells(8, 3)
vFebruar = ActiveWorkbook.Worksheets(I).Cells(8, 4)
vMarch = ActiveWorkbook.Worksheets(I).Cells(8, 5)

vzJanuar = ActiveWorkbook.Worksheets(I).Cells(9, 3)
vzFebruar = ActiveWorkbook.Worksheets(I).Cells(9, 4)
vzMarch = ActiveWorkbook.Worksheets(I).Cells(9, 5)

' Worksheets(239) is the collector sheet in this workbook.
ActiveWorkbook.Worksheets(239).Cells(I, 1) = MA_NAME

ActiveWorkbook.Worksheets(239).Cells(I, 3) = vJanuar
ActiveWorkbook.Worksheets(239).Cells(I, 4) = vFebruar
ActiveWorkbook.Worksheets(239).Cells(I, 5) = vMarch

ActiveWorkbook.Worksheets(239).Cells(I, 7) = vzJanuar
ActiveWorkbook.Worksheets(239).Cells(I, 8) = vzFebruar
ActiveWorkbook.Worksheets(239).Cells(I, 9) = vzMarch

Next I

End Sub
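A small robustness tweak, sketched under the assumption that the collector sheet is named “Collector” (a hypothetical name): referencing it by name instead of the hard-coded index 239 keeps the macro working when sheets are added or reordered.

```vb
Sub WorksheetLoopByName()
    ' Variant of the loop above that writes to a collector sheet referenced
    ' by name ("Collector" is a hypothetical name) instead of index 239.
    Dim Collector As Worksheet
    Dim WS As Worksheet
    Dim Row As Long

    Set Collector = ActiveWorkbook.Worksheets("Collector")
    Row = 1

    For Each WS In ActiveWorkbook.Worksheets
        If Not WS Is Collector Then
            Collector.Cells(Row, 1).Value = WS.Name
            Collector.Cells(Row, 3).Value = WS.Cells(8, 3).Value ' January
            Collector.Cells(Row, 4).Value = WS.Cells(8, 4).Value ' February
            Collector.Cells(Row, 5).Value = WS.Cells(8, 5).Value ' March
            Collector.Cells(Row, 7).Value = WS.Cells(9, 3).Value
            Collector.Cells(Row, 8).Value = WS.Cells(9, 4).Value
            Collector.Cells(Row, 9).Value = WS.Cells(9, 5).Value
            Row = Row + 1
        End If
    Next WS
End Sub
```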
Kategorien
Allgemeines

MS SQL PIVOT with text as field type

WITH PivotData AS
(
    SELECT
        ItemCodeBase.ItemCode AS ItemCode
        ,CAST(ITEMTEXT.[FieldAlias] AS VARCHAR(MAX)) AS FieldAlias
        ,CAST(MUTLITEXT.Trans AS VARCHAR(MAX)) AS Trans
    FROM DWHLN_TEST.dbo.ItemsTextCheckCreh AS ItemCodeBase
        INNER JOIN SAP002.SAP_Master.dbo.OMLT AS ITEMTEXT ON ITEMTEXT.PK = ItemCodeBase.ItemCode COLLATE DATABASE_DEFAULT
        INNER JOIN SAP002.SAP_Master.dbo.MLT1 AS MUTLITEXT ON ITEMTEXT.TranEntry = MUTLITEXT.TranEntry
    WHERE
        1=1
        AND MUTLITEXT.LangCode = '9' AND FieldAlias IN ('U_COR_BU_TXTS','U_COR_BU_TXTQ')
)

SELECT 
	ItemCode
	,[U_COR_BU_TXTS]
	,[U_COR_BU_TXTQ]
FROM
	PivotData
	PIVOT(
		MAX (Trans)
		FOR FieldAlias in ([U_COR_BU_TXTS],[U_COR_BU_TXTQ])) AS P;

The trick is that you cast the text/string column to VARCHAR(MAX); then you can use MAX as the aggregation type to return the readable value.
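The same pattern, stripped down to a self-contained demo you can run without the source tables (item code and texts are made up):

```sql
WITH Demo AS
(
    SELECT ItemCode, FieldAlias, CAST(Trans AS VARCHAR(MAX)) AS Trans
    FROM (VALUES
        ('A100', 'U_COR_BU_TXTS', 'Sales text for A100'),
        ('A100', 'U_COR_BU_TXTQ', 'Quality text for A100')
    ) AS t (ItemCode, FieldAlias, Trans)
)
SELECT ItemCode, [U_COR_BU_TXTS], [U_COR_BU_TXTQ]
FROM Demo
PIVOT (MAX(Trans) FOR FieldAlias IN ([U_COR_BU_TXTS],[U_COR_BU_TXTQ])) AS P;
```

This returns one row per ItemCode with both texts as columns.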

Kategorien
Allgemeines

Power BI and MS CRM Entity Tables

Temporary entity tables built with a DAX query, using the CRM tables:

  • StringMapBase
  • MetadataSchema.Entity
Entity and Translation
LeadStatus =
CALCULATETABLE(
    StringMapBase;
    FILTER('MetadataSchema Entity'; 'MetadataSchema Entity'[Name] = "Lead");
    FILTER(StringMapBase; StringMapBase[LangId] = 1033 && StringMapBase[AttributeName] = "StateCode")
)
Kategorien
Allgemeines

Power BI Calendar DAX

Generating a calendar was a very important task within Qlik Sense/View – so even though the date functions within Power Query and DAX are awesome, I still prefer to have an “inbuilt” calendar. To achieve this, switch to the “Modeling” view -> add a “Table” and then just copy and paste the statement below:

Date = 
ADDCOLUMNS (
CALENDAR (DATE(2000;1;1); DATE(2025;12;31));
"%DateAsInteger"; FORMAT ( [Date]; "YYYYMMDD" );
"Year"; YEAR ( [Date] );
"Monthnumber"; FORMAT ( [Date]; "MM" );
"YearMonthnumber"; FORMAT ( [Date]; "YYYY/MM" );
"YearMonthShort"; FORMAT ( [Date]; "YYYY/mmm" );
"MonthNameShort"; FORMAT ( [Date]; "mmm" );
"MonthNameLong"; FORMAT ( [Date]; "mmmm" );
"DayOfWeekNumber"; WEEKDAY ( [Date] );
"DayOfWeek"; FORMAT ( [Date]; "dddd" );
"DayOfWeekShort"; FORMAT ( [Date]; "ddd" );
"BusinessDayOfWeek"; WEEKDAY ( [Date] ) & " " & FORMAT([Date];"ddd");
"DayInMonth"; DAY([Date]);
"BusinessDayInMonth"; DAY([Date]);
"Quarter"; "Q" & FORMAT ( [Date]; "Q" );
"YearQuarter"; FORMAT ( [Date]; "YYYY" ) & "/Q" & FORMAT ( [Date]; "Q" );
"CYTD"; if([Date] <= TODAY() && [Date] >= DATE(YEAR(TODAY());1;1);1;0);
"PYTD"; if([Date] <= TODAY()-365 && [Date] >= DATE(YEAR(TODAY()-365);1;1);1;0);
"CYLastWeek"; IF(YEAR([Date]) = YEAR(Today()) && WEEKNUM([Date]) = WEEKNUM(TODAY()) - 1;1;0);
"CYThisMonth"; IF(YEAR([Date]) = YEAR(Today()) && MONTH([Date]) = MONTH(TODAY());1;0);
"CYLastMonth"; IF(YEAR([Date]) = YEAR(Today()) && MONTH([Date]) = MONTH(TODAY()) - 1;1;0)
)

The funny part is that Power BI provides an “auto calendar” function, which goes through the whole dataset and searches for the lowest and the latest date value – this can produce a huge amount of data if you have e.g. 31.12.9999 as a max future date…

Kategorien
Allgemeines

T-SQL read tables and columns from a linked server

SELECT lst.name AS Table_Name, 
       lsc.name AS Column_Name, 
       lsc.max_length, 
       sch.name AS Schema_Name

FROM [SERVER].[DB].[sys].[tables] lst
    INNER JOIN [SERVER].[DB].[sys].[columns] lsc
        ON lst.object_id = lsc.object_id
    INNER JOIN [SERVER].[DB].[sys].[schemas] sch
        ON lst.schema_id = sch.schema_id

ORDER BY sch.name, lst.name, lsc.name
Kategorien
Allgemeines

T-SQL execute procedure for each row of a master table

From time to time you need to loop over the rows of a master table – e.g. when the different companies within your organisation are stored in different databases. To avoid creating long SQL scripts where you just change the DB information, you can simplify it like this:

DECLARE @oneid int -- or the appropriate type
DECLARE @CityId int = 1 -- example value; the snippet assumes this is set beforehand

DECLARE the_cursor CURSOR FAST_FORWARD
FOR SELECT spro.Id  
    FROM SomeTable as spro 
        INNER JOIN [Address] addr ON addr.Id = spro.Id  
        INNER JOIN City cty ON cty.CityId = addr.CityId 
    WHERE cty.CityId = @CityId

OPEN the_cursor
FETCH NEXT FROM the_cursor INTO @oneid

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC UpdateComputedFullText @oneid

    FETCH NEXT FROM the_cursor INTO @oneid
END

CLOSE the_cursor
DEALLOCATE the_cursor
Kategorien
Jedox

Jedox Percentage on Percentage on Percentage Split with “instant” preview

Usecase

Imagine you have a customer who wants to split planned revenue across multiple dimensions, not based on total figures but on cascading percentage splits. E.g. you plan 10k and split it into 3 segments by

  • 20% Seg A
  • 40% Seg B
  • 40% Seg C

And after this by ProductGroup and Product which are in one Dimension

  • 20% ProdGroup I
    • 50% Prod A
    • 50% Prod B
  • 80% ProdGroup II

Design

Important: I struggled trying to solve this with a “one cube solution”, as too many “unknown” levels exist to identify where the percentage value is stored. So to get a stable and performant solution, you need to stick to the following steps:

  1. Identify the Split Points, in our example for the segments (which are the first split level) the percentages will be stored on the base elements for Seg A, B, C
  2. Then we need to create a helper cube which stores the percentages for this first level; as we will have different percentages by Year, Version and Szenario, these dimensions need to be added to separate the values later

Cube Layout within Integrator Constant Table Extract:

Definition of split cube step 1

From this extract we go straight to the cube load:

Cube load – will create the cube on the first execution

So in your modeler a new cube should have appeared:

The “storage” cube for the factors on level 1 – segments

Within the worksheet (report designer), to keep it manageable, I add a named range to store the cube name:

Named Range within worksheet

So for entering the factors we create a DynaRange for the Segments:

And we allow the user to enter the values on the base elements, with a conditional format based on the ELEVEL information.

Check for Baseelements

=PALO.ELEVEL(RNG_CONNECTION,"Segments",$D42)=0

The calculation of the “temp result” will be done in the Split 1 cube by using the formula

Rule within result cube

['_Sales_Budget_Top_Down':'_split_faktor_Segments'] = B:['_Sales_Budget_Top_Down':'_faktor_Segments'] * PALO.DATA("","SALES_BUDGET_LIC_REV_TOP_DOWN_REDUX",!'Years',"Total_Year",!'Versions',!'Szenario',"Reporting_Currency","Total_Segments","Sales Hierachy","Total_TD_Account_Business_Type","Total_TD_Products","_input") / 100

The next step is tricky: as we have the product group and the product in one dimension, we aren’t able to store the percentage value on a base element as needed. To solve this issue we create a helper dimension for the product, which allows us to store the percentage on a base element. To do this, we first extract all values from the dimension that we want to “split” – to add even more complexity, here we have a structure of different depths…

For our first helper, we actually focus on the values of level 2. To do so, we need to get to a TreeFH transformation step, which is only possible with an “in-between” step using a TableView:


And after this the treeFH where we only use the first 2 levels:

Improvements

As the TreeFH will hold non-distinct values, you can improve this by adding an aggregation in between; as we don’t have that many rows, I will go ahead and create the dimension directly.
The big picture for the product level 1 helper dimension

As we did for the segments, we will create a helper cube to store the factors; this time we need to take care that the factors are also related to the segments:

From here we proceed as above and the cube should look like this:

Then we need to add the following rule to this cube which looks like:

Faktor Segment * Faktor ProductGroup / 100 / 100

['_Sales_Budget_Top_Down':'_split_faktor_TD_Products'] = B:['_Sales_Budget_Top_Down':'_faktor_TD_Products'] *
PALO.DATA("","SPLIT_BY_STEP1_SEGMENT",!'Years',!'Versions',!'Szenario',!'Segments',"_faktor_Segments") *
PALO.DATA("","SALES_BUDGET_LIC_REV_TOP_DOWN_REDUX",!'Years',"Total_Year",!'Versions',!'Szenario',"Reporting_Currency","Total_Segments","Sales Hierachy","Total_TD_Account_Business_Type","Total_TD_Products","_input") / 10000

After this you can again adapt the values in the worksheet to show the correct figures from within the 2nd helper cube.

For the products themselves it becomes easy, as we can use the product dimension “as-is”. This means we create a cube design as follows:

And here the cube:

Split By Step3 Product Level 2 (Base Elements)

Important: this time (as we stay in the same dimension) we only exchange the helper dimension of the products with the full product dimension and let it rock!

And please add the following rule:

Faktor Segment * Faktor ProductGroup * Faktor Product / 100 / 100 / 100

['_Sales_Budget_Top_Down':'_split_faktor_TD_Products'] = B:['_Sales_Budget_Top_Down':'_faktor_TD_Products'] * PALO.DATA("","SPLIT_BY_STEP1_SEGMENT",!'Years',!'Versions',!'Szenario',!'Segments',"_faktor_Segments") * PALO.DATA("","SPLIT_BY_STEP2_PRODUCT_L1",!'Years',!'Versions',!'Szenario',!'Segments',PALO.EPARENT("","TD_Products",!'TD_Products',1),"_faktor_TD_Products") * PALO.DATA("","SALES_BUDGET_LIC_REV_TOP_DOWN_REDUX",!'Years',"Total_Year",!'Versions',!'Szenario',"Reporting_Currency","Total_Segments","Sales Hierachy","Total_TD_Account_Business_Type","Total_TD_Products","_input_ABR") / 1000000
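To sanity-check the division by 1,000,000 (= 100 × 100 × 100, one division per split level), here is the factor chain from the use case worked through in plain Python – purely illustrative, with the numbers from the example above:

```python
# 10k planned revenue, 20% Seg A, within it 20% ProdGroup I, within it 50% Prod A.
planned = 10_000
seg_a = 20          # percentages stored as 0-100, as in the helper cubes
prodgroup_i = 20
prod_a = 50

# The rule divides once by 100 per split level, i.e. by 1,000,000 for three levels.
result = planned * seg_a * prodgroup_i * prod_a / 1_000_000
print(result)  # 200.0 -> Prod A receives 200 of the planned 10k
```

So step by step: Seg A gets 10,000 × 20% = 2,000; ProdGroup I within Seg A gets 2,000 × 20% = 400; Prod A gets 400 × 50% = 200 – the same value the single rule produces.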

Again we add the named range:

Like this you will be able to build a report that shows the values of each “split”, written back to all base elements via the helper cubes.