Friday, February 10, 2012

Smart Event Triggers using Command Manager

If you are in a reasonably large environment, it's likely you have Events that send out reports (via subscriptions) to your users.  Hopefully these Events are fired by triggers, which in turn are fired by whatever enterprise scheduling tool you are using.  I won't get into why time-based Events are bad practice today, as I want to talk about something else.

I had an issue where one of our events had so many subscriptions attached to it (firing so many different jobs) that if it was triggered it would crash the I-Server.  Yes, this is a 32 bit environment, but it's crashes like this that will have Microstrategy telling you it's time to upgrade.  The issue is the I-Server can only hold so many jobs (either executing or waiting) at one time; I think we established it was around 500 jobs before it crashed.  However this event would invoke around 1500 jobs.  So something had to be done.

Initially, what I would do is extract all the subscriptions for the event using Command Manager.  If you have never seen the list of out of the box outlines before, make sure you do, as there is plenty there.

I was interested in this one


Then I took this line

LIST [ALL] SUBSCRIPTIONS FOR SCHEDULE "schedule_name" FOR PROJECT "project_name";

altered it to be

LIST ALL SUBSCRIPTIONS FOR SCHEDULE "MySchedule" FOR PROJECT "MyProject";

If you run that it returns a list that you can export to Excel if you like.  Once there, using a little bit of filtering and a pivot table, I created another script using this as a base:

TRIGGER [ALL] SUBSCRIPTIONS [FOR SCHEDULE "schedule_name"] FOR OWNER "login_name" FOR PROJECT "project_name";

I altered it to be like this:

TRIGGER ALL SUBSCRIPTIONS FOR SCHEDULE "MySchedule" FOR OWNER "TheOwner" FOR PROJECT "MyProject";

What I had was around 50 lines where the only thing that changed each time was TheOwner.  I then put that back into Command Manager and ran each one, waiting for the job monitor to clear before starting the next one.  It allowed me to break up the one event into a series of smaller events.  But it was very manual and I knew there was a better way.

It wasn't until I found this post at Bryan's Microstrategy Blog that I knew what I should do.  In Bryan's post he accesses the subscription list then uses it in a loop statement to change the owner of each subscription, which is very handy if a user leaves and you need to claim all their subscriptions.  Whereas Bryan alters the subscription, I knew that I could extract the GUID from each subscription and then trigger it, again in a loop.  The trick would be having a pause between each trigger, or even better, checking the number of jobs in the I-Server before triggering the next subscription.

I also wanted to record what was being sent in case of a failure of some sort.  At least then I could take the log and compare it to the full set and work out what still needs to be sent.

So here is what the end result looks like (note this is a Procedure and not a script - but it uses syntax from the script world):

This script must be run via Command Manager as a Procedure.  To run it, simply go to File -> New Procedure and paste it in.  You'll need to set the variables at the top, and under the Test Information tab on the right hand side of Command Manager you'll need to connect to your I-Server.

// You will need to set these
String sProject = "MyProject";
String sSchedule = "MySchedule";

// File logging properties - again, change these as per your requirements
String sPCID = "pcName";
String USER_NAME = "UserName";
String FILE_PATH = "\\\\" + sPCID + "\\C$\\Users\\" + USER_NAME + "\\Desktop";
String FILE_NAME = "trigger_log.txt";
File f = new File(FILE_PATH, FILE_NAME);

BufferedWriter logWriter = null;

try {
    logWriter = new BufferedWriter(new FileWriter(f, true));
} catch (Exception e) {
    printOut("Error creating log file writer: " + e.toString());
}

// Gets all the subscriptions for your schedule / project
ResultSet oSubs = executeCapture("LIST ALL SUBSCRIPTIONS FOR SCHEDULE \'" + sSchedule + "\' FOR PROJECT \'" + sProject + "\';");

oSubs.moveFirst();

try {
    while (!oSubs.isEof()) {
        // Finds the GUID
        String sGUID = oSubs.getFieldValueString(DisplayPropertyEnum.GUID);

        // Seed value so the loop below runs at least once
        int jobcheck = 100;

        // 50 is the max number of jobs allowed to be running before we trigger the next subscription
        while (jobcheck > 50) {
            // Counts the actual number of active jobs
            ResultSet oJobs = executeCapture("LIST ACTIVE JOBS;");
            int count = 0;
            oJobs.moveFirst();
            while (!oJobs.isEof()) {
                count = count + 1;
                oJobs.moveNext();
            }
            jobcheck = count;
            Thread.sleep(1000);
        }

        // Send subscription - we only get to here once jobcheck <= 50
        String s = "TRIGGER SUBSCRIPTION GUID " + sGUID + " FOR PROJECT \'" + sProject + "\';";
        execute(s);

        // Log output - take the timestamp now, not once at the start of the procedure
        String log_message = Calendar.getInstance().getTime() + ": Project = '" + sProject + "' GUID = '" + sGUID + "'\n";

        try {
            if (!f.exists()) {
                f.createNewFile();
            }
            if (logWriter != null) {
                logWriter.write(log_message);
                logWriter.flush();
            }
        } catch (Exception e) {
            printOut("Error writing to log: " + e.getMessage());
        }

        printOut(log_message);

        // Just wait a bit before trying again
        Thread.sleep(5000);

        oSubs.moveNext();
    }
} finally {
    try {
        if (logWriter != null) {
            logWriter.close();
        }
    } catch (Exception e) {
        // Nothing useful to do here
    }
}



I would suggest you trial this in your Development environment: set up some subscriptions to yourself, adjust the Max Jobs number (currently 50, and it could move to a variable if you wish) and try it out.

Taking it further, you could alter this to accept run-time parameters and call it from a scheduling tool, meaning all your Events would work like this.  However I might save that for another post.



Sunday, February 5, 2012

Currency Rates in Microstrategy

There are a number of methods to implement exchange rates in Microstrategy.  Query the knowledge base and you are given a few suggestions (TN Key: 8112):

Method 1 - Use of an Exchange Rate fact table
Method 2 - Fixed conversion within a metric
Method 3 - Store Sales Data in both Local Currency and a Common Currency
Method 4 - Prompt the User for the Exchange Rate


All of which will work (in most cases), but none of which really deals with the problem efficiently.  They do offer an idea as to what the solution might be.  Holding the actual rates in a table (Method 1) sounds like a good thing to do, as exchange rates have a tendency to change.  However, where this method falls down is the suggested use of a compound metric and/or level metric.

Compound metrics give you issues later on with transformations or filters (as each will need to be added to the base metric), and level metrics always just look like a workaround for a poor schema to me.

When dealing with exchange rates it's important to know how many currencies you need to report in, not how many currencies your actual data is in.  Generally you will be required to have at least 2 reporting currencies:

1. Your Local Currency of where the company is based
2. US Dollars (for now at least it's still the world's currency)

There will also be what I call "Original currency", which will be specific to the country the sales were made in.  Original currency therefore covers the rest: you may have business in 40 currencies, and people in those areas need to understand how much they sold in local currency, which they can always do via "Original currency".

The key to this design is how your underlying Information Mart is structured.  As always you should have two layers to this area: one for all the tables and one for the views.  There will be at least as many views as tables (usually more), and only those views will be exposed to Microstrategy in the warehouse catalogue.

Your base fact tables will all be in Original currency; the ETL team need to do nothing for you at this stage, just populate a table that will look like this:
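Based on the view SQL later in the post, the base table would look something like this (illustrative DDL only; the column types are assumptions, the column names come from the views below):

```sql
-- Illustrative sketch of the base fact table, held in Original currency
CREATE TABLE DATA_SALES
(
    MONTH_ID  INTEGER,        -- e.g. 201202
    CURRENCY  CHAR(3),        -- the Original currency the sale occurred in
    STATE     VARCHAR(50),
    ACCOUNT   VARCHAR(50),
    SALES_AMT DECIMAL(18,2)
);
```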

You will also need a currency rate table that looks like this:


Here you need to know what the rate column really says.  It represents what you multiply anything in CURRENCY_FROM by to get a value in CURRENCY_TO.  So from above, the Aussie dollar is worth more than both the New Zealand and US Dollars for this month.  The key thing in this table is that the full list of possible currencies sales can occur in is present in the CURRENCY_FROM column, and only the ones you are required to report on are in the CURRENCY_TO column.
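As an illustration, the rate table could be built and populated like this.  The rates here are invented (roughly in line with early-2012 values) and simply match the description above:

```sql
-- RATE converts an amount in CURRENCY_FROM into CURRENCY_TO
CREATE TABLE DIM_CURRENCY_RATE
(
    MONTH_ID      INTEGER,
    CURRENCY_FROM CHAR(3),
    CURRENCY_TO   CHAR(3),
    RATE          DECIMAL(18,6)
);

-- Every transactable currency appears in CURRENCY_FROM,
-- only the reporting currencies appear in CURRENCY_TO
INSERT INTO DIM_CURRENCY_RATE VALUES (201202, 'AUD', 'USD', 1.07);
INSERT INTO DIM_CURRENCY_RATE VALUES (201202, 'NZD', 'USD', 0.83);
INSERT INTO DIM_CURRENCY_RATE VALUES (201202, 'USD', 'USD', 1.00);
INSERT INTO DIM_CURRENCY_RATE VALUES (201202, 'AUD', 'AUD', 1.00);
INSERT INTO DIM_CURRENCY_RATE VALUES (201202, 'NZD', 'AUD', 0.78);
INSERT INTO DIM_CURRENCY_RATE VALUES (201202, 'USD', 'AUD', 0.93);
```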

This is really a second generation solution, as I did use a different method previously (it actually required the same tables, but a lot of the work was done in Microstrategy) which failed to deal with situations where fact data was at a different level to the given set of exchange rates.  We started with monthly fact data which matched the rates we had, but as time progressed we created daily fact tables and found the problem pretty quickly.  So just bear with me, as this method will cater for it all.

The first generation solution actually provided half the solution for this one.  I won't go into too much detail, but every metric had a formula that looked like:

Sum(Sales * Conversion@Rate)

It wasn't a compound metric, but relied on a couple of attributes (Conversion and Currency) with some interesting attribute forms and a key set of filters that would drive the "convert to" process.  I won't explain it in full as it's defective.  It does however provide the basis for the SQL that is required in the views you present to Microstrategy.

So moving on to the next part of the solution.  What you will need to do is set up a view structure for the warehouse catalogue that looks like this:

Here is the example SQL you will need for each view.  Even if you have 5 or 6 reporting currencies (which would be rare), once you get one done, a quick find and replace (creating one file per currency) will mean the views are done in no time.  (You need one view per reporting currency - in this case I have 3.)

V_DATA_SALES_USD would be defined as:


SELECT S.MONTH_ID           ,
       'USD' AS CURRENCY_USD,
       S.STATE              ,
       S.ACCOUNT            ,
       S.SALES_AMT * R.RATE AS SALES_AMT
FROM   DATA_SALES S
       INNER JOIN DIM_CURRENCY_RATE R
       ON     S.MONTH_ID    = R.MONTH_ID
       AND    S.CURRENCY    = R.CURRENCY_FROM
       AND    R.CURRENCY_TO = 'USD';


V_DATA_SALES_AUD would be defined as:


SELECT S.MONTH_ID           ,
       'AUD' AS CURRENCY_AUD,
       S.STATE              ,
       S.ACCOUNT            ,
       S.SALES_AMT * R.RATE AS SALES_AMT
FROM   DATA_SALES S
       INNER JOIN DIM_CURRENCY_RATE R
       ON     S.MONTH_ID    = R.MONTH_ID
       AND    S.CURRENCY    = R.CURRENCY_FROM
       AND    R.CURRENCY_TO = 'AUD';



V_DATA_SALES_ORG would be defined as:


SELECT S.MONTH_ID           ,
       'ORG' AS CURRENCY_ORG,
       S.STATE              ,
       S.ACCOUNT            ,
       S.SALES_AMT
FROM   DATA_SALES S;

There are a few things to note about these views.

1.  The fact names are all called the same thing (SALES_AMT), this means just one fact in Microstrategy.
2.  There is a hard coded column in each view (CURRENCY_USD) which contains the currency the view is in ('USD') - This one I will explain soon.

If you had currency rates at the month level and fact data required at the day level then all you do is join your currency table to your day level data lookup table using month, which then allows you to join to the fact on the day.  This means every day in the month is given the same rate.  If you get more detailed rates in the future just change the view and it will all still work.
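A sketch of that day-level variant, assuming a daily fact table (here called DATA_SALES_DAY with a DAY_ID column) and a date lookup table (here called LU_DAY) mapping each day to its month - both names are illustrative, not from the original post:

```sql
-- Monthly rates applied to daily facts: join rates via the day lookup's month
SELECT S.DAY_ID             ,
       'USD' AS CURRENCY_USD,
       S.STATE              ,
       S.ACCOUNT            ,
       S.SALES_AMT * R.RATE AS SALES_AMT
FROM   DATA_SALES_DAY S
       INNER JOIN LU_DAY D
       ON     S.DAY_ID      = D.DAY_ID
       INNER JOIN DIM_CURRENCY_RATE R
       ON     D.MONTH_ID    = R.MONTH_ID
       AND    S.CURRENCY    = R.CURRENCY_FROM
       AND    R.CURRENCY_TO = 'USD';
```

Every day within a month picks up that month's rate; if daily rates arrive later, only the view changes.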

As with most of my work, the hard work is done before you even open Desktop.  However you are not done yet.  Right now you have three views which would return data that looks like this:


Everything is represented in the reporting currency, so all that remains is a method to get Microstrategy to pick the right table based on user input.

The key to understanding this is all to do with fact entry levels.  These are the lowest possible levels any fact can be calculated at.  They are worked out when you update the schema, and depend on what attributes and facts you have attached to each table.  You may have come across a situation where you use an aggregated view of data, e.g. from above we could have a COUNTRY level of data that aggregates (sums up) all of the State data.  You then have a Country attribute which will only be attached at this level, but we could of course work out a country total by using the State level and summing the states via a hierarchy.  So even though SALES_AMT is held at both a State and Country level, Microstrategy knows that it enters at the State level (for the Geography hierarchy at least).  So State will form part of the fact entry level.

Knowing this allows us to get Microstrategy to do our bidding.  What we do is create 3 Attributes:

Currency AUD
Currency USD
Currency ORG

To do this, create one-row lookup tables (views) for them (use some sort of system dummy row in your database, e.g. SELECT 'AUD' FROM sysdummy).  The value in here needs to match whatever value you hard code in your views.  In this example, for Aussie Dollars it will be 'AUD'.  The column name will match the name given in your views, e.g. CURRENCY_AUD.  Import this into Microstrategy.
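For example, on DB2 the AUD lookup view might be defined like this (the view name V_LU_CURRENCY_AUD is my own invention; SYSIBM.SYSDUMMY1 is DB2's one-row dummy table, Oracle users would use DUAL instead):

```sql
-- One-row lookup view backing the Currency AUD attribute;
-- the value and column name must match the hard-coded column in the fact views
CREATE VIEW V_LU_CURRENCY_AUD AS
SELECT 'AUD' AS CURRENCY_AUD
FROM   SYSIBM.SYSDUMMY1;
```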

So when you create this attribute, set the Lookup to be the one-row view above, then use Automatic column mapping to catch all the AUD views you have created.  You are likely to have a more complex example than just the one fact table I use above.  Create all the other required facts etc., then update the schema.  (Make sure your facts are defined on multiple views - for my example it would be in 3 places; because the column names are the same, Homogeneous column mapping takes care of this for you.)

Create a series of filters that select the one row from each of your currency views, e.g. Filter AUD picks 'AUD', Filter USD picks 'USD'.

Create a report, add one of the Currency filters to it, add a metric based on the fact and run it.  What you will see is that Microstrategy has picked the correct view.  It does this because by defining a filter that has Currency AUD = 'AUD' and including the fact SALES_AMT, you force it to pick the only view that meets that definition.  If you look at your facts you will see that all the Currency attributes are defined in the fact entry level, which means the fact can start at any of those places.  Of course, the way it's built, it will always pick at least one of these.
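For a report showing sales by State with the AUD filter applied, the SQL Engine would produce something along these lines (a sketch only; the exact SQL and alias names depend on your schema):

```sql
-- The filter Currency AUD = 'AUD' can only be satisfied by the AUD view,
-- so the engine selects V_DATA_SALES_AUD
SELECT   a11.STATE,
         SUM(a11.SALES_AMT) AS SALES_AMT
FROM     V_DATA_SALES_AUD a11
WHERE    a11.CURRENCY_AUD = 'AUD'
GROUP BY a11.STATE;
```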

This does mean that every report that uses facts / metrics built on views like this needs a currency filter.  Ideally you create an object prompt containing all the Currency filters (set a default) and just ensure all your developers put that on reports before they do anything else.  If they forget then it won't "break" as such; Microstrategy will choose a currency view to use based on the "table size" it has measured during the latest schema update.  You could cheat here and set the size of each table such that your default currency will always be picked if a currency filter is missing.