December 18, 2015

Moving Maps

Financial Data Quality Management (FDQM; formerly FDM, before that Upstream, and now succeeded by FDMEE) is our primary tool for moving data from our general ledger systems into HFM for consolidation and reporting.  Part of the power of FDQM is that it can translate identifiers for accounts, business units, and other dimensions from the source systems so they match your HFM metadata, for example turning a raw GL account number into the corresponding HFM account member.  This is accomplished through mapping tables.  FDQM organizes locations into a logical hierarchy and associates a map and import format with each location (internally locations are called partitions).  If each location has a different GL then you could potentially have a unique map per location, but in most instances locations that use the same GL will share the same map.

http://www.xkcd.com/1500/

What if you need to migrate the mapping tables from one system to another, say from production to development?  With the FDQM user console you can export the maps to Excel and then upload them into the target environment.  But you have to do this one dimension at a time for each location.  If you have dozens of locations and many custom dimensions this is tedious and time-consuming.

From the FDQM WorkBench you can export everything about your FDQM application to an .xml file.  This can include custom scripts and reports as well as standard items like locations and import formats.  When you select Locations it will select everything associated with the locations, such as users and privileges.  You must click the Options tab and select Export maps with locations to include the mapping tables in the export.


Exporting locations is mostly an all-or-nothing affair.  You can choose not to export User Privileges, but if you deselect any other item then Locations is automatically deselected, which prevents you from exporting the maps.  From the FDQM perspective this makes sense, since a partial export might give an incomplete description of the location and cause errors on the target system.  But it also means you can't export just the maps.

Problem overview

Our situation is a company that spun off some business units to form a new company.  The HFM metadata was adjusted to remove the now-unused entities and accounts.  The locations in FDQM were also updated and reorganized to reflect the new structure of the company without the business units that were spun off.

A test environment was created to validate these changes and support user acceptance testing.  To accomplish this we needed to ensure the current maps from production were available to the testers.  The problem was that if we imported the locations using the WorkBench export from production, it would overwrite the updated structure, leaving us with the original structure and the discarded locations.

So we needed a way to move just the maps from production to the test environment.

Can we use SQL?

FDQM uses a SQL database on the back end to hold all of the structural information and some logging.  The table structure isn't well documented and isn't entirely obvious.  The maps are held in one of the tDataMapSeg tables.


There are 50 tDataMapSeg tables and no documentation to define which one to use.  Notice that the tDataMapSeg table includes the fields DataKey, PartitionKey, and CatKey.  These reference the primary keys of other tables.  So not only would we need complex queries to copy the data between environments with SQL, we could also introduce unexpected and unsupportable errors by tinkering with key fields.
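
To make the point concrete, here is a sketch of what even a simple read-only lookup might involve.  This is illustrative only and not something we did: the tPOVPartition table and its columns are assumptions inferred from the key fields above, the server and database names are placeholders, and tDataMapSeg1 is an arbitrary pick from the 50 candidates.

# Illustrative only -- reading one location's map rows would mean guessing
#   the right segment table and resolving the key fields against other
#   (assumed, undocumented) lookup tables
Invoke-Sqlcmd -ServerInstance 'FDMSQL' -Database 'FDMAPP' -Query @"
SELECT p.PartName, m.*
FROM   tDataMapSeg1 AS m
       JOIN tPOVPartition AS p ON p.PartitionKey = m.PartitionKey
WHERE  p.PartName = 'SampleLocation'
"@

And that is just reading; copying rows between environments would mean translating every key field on the way over.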

Can we use the XML?

The export produced by WorkBench is a well-structured .xml file.  There is a section for each item, and the members of that item are enumerated within the XML tree along with their properties and values.


In this example the userInfo element defines the users and their attributes, and the userPrivileges element defines which partitions (locations) each user can access.

The problem with the maps is that they are defined as base64-encoded strings.  While this allows the binary map table to be exported as text, there isn't an easy way to parse the maps.
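
You can see this for yourself with a few lines of PowerShell.  A minimal sketch, assuming an export shaped the way the script below expects, that decodes the first Maps blob of the first location:

# Decode one map blob just to see what's inside
[xml]$export = Get-Content 'Source.xml'
$firstPart = $export.weblinkConfiguration.Locations.povPartitions.povPartition | `
      Select-Object -First 1
$bytes = [System.Convert]::FromBase64String(([array]$firstPart.Maps)[0].innerText)
echo ("Decoded " + $bytes.Length + " bytes of binary map data")

What comes back is FDQM's internal binary representation of the map table, so the practical approach is to treat each blob as opaque and copy it whole.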


Can we copy the encoded strings?

PowerShell version 3 and later includes powerful tools for reading and manipulating XML files.  We can easily traverse the XML tree, access specific attributes and values, and adjust those attributes and values.  (You can actually do a whole lot more, but for this situation that is all we need to do.)
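
As a quick taste, here is a minimal, generic sketch of those three operations (the file name and elements are made up for the example):

# Parse a snippet of XML, read an attribute, change a value, save it out
[xml]$doc = '<config><item id="a">old</item></config>'
echo $doc.config.item.id                 # read an attribute
$doc.config.item.InnerText = 'new'       # change a value
$doc.Save("$PWD\config.xml")             # write the result back out

Casting a string (or the output of get-content) to [xml] gives you an XmlDocument whose elements you can walk with simple dot notation.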

If you are unfamiliar with PowerShell there are many excellent resources available on the interwebs for every level of expertise.  My other blog, 2scriptornot2.blogspot.com, has lots of sample code and links to some good resources.

So our strategy is:
  1. Export the locations from the production and test environments to .xml
  2. Iterate through the locations in the test .xml
  3. Find the matching location in production .xml
  4. Copy the encoded string for the mapping table from production to test
  5. Save the test .xml and import it with WorkBench to update the maps
One wrinkle I ran into is that the map for a location may be broken into several sections, and the number of sections didn't always match between the environments.  I added a test for that in the code that skips those locations but notes in the output that the location was skipped.  For those locations we exported the dimensions manually from the FDQM console and copied them over.  This only happened for a couple of locations, so it was a reasonable workaround.

The commented code is:

# PowerShell script to copy just the mapping tables from Source to Target
#    for only the locations that are in Target
#

# Read the .xml files
#
echo 'Reading Source'
[xml]$Source = get-content 'Source.xml'
echo 'Reading Target'
[xml]$Target  = get-content 'Target.xml'

# Process only the Target locations
#
foreach ($L in $Target.weblinkConfiguration.Locations.povPartitions.povPartition) {
    echo $L.PartName

    # Get the matching povPartition from the Source tree
    #
    $SourcePart = $Source.weblinkConfiguration.Locations.povPartitions.povPartition | `
          where {$_.partName -eq $L.partName}

    # There may be more than one map per partition.
    #  Force these to be arrays so we can loop through them
    #
    [array]$SourceMaps = $SourcePart.Maps
    [array]$TargetMaps  = $L.Maps

    # Make sure the counts match.  If not we will migrate those mapping tables manually
    #
    if ($SourceMaps.Count -eq $TargetMaps.Count) {
        for ($i=0; $i -lt $TargetMaps.count; $i++) {

            # The actual mapping table is a base64 blob in the innerText of
            #   each Maps node.  Copy that from Source to Target
            #
            $TargetMaps[$i].innerText = $SourceMaps[$i].innerText
        }
    } else {
        echo ("*** " + $L.PartName + "Map counts don't match : Source="+ `
              $SourceMaps.count + " , Target=" + $TargetMaps.count)
    }
}

# Save the results
#
echo 'Saving update'
$Target.Save('NewTarget.xml')

If you use this code, replace Source.xml, Target.xml, and NewTarget.xml with the full paths to your export files.  (The Save method resolves relative paths against the process's current directory rather than your PowerShell location, so full paths are safer.)

Some things to note are that I can reference a specific element by listing each limb of the tree separated by periods.  So I can use one line to drill down to the povPartition element and access the Maps element.  (I include that tree path in the previous screenshot for a visual reference.)  I also use PowerShell typecasting to force the maps into an array even if there is just one element, which allows for simpler coding.
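
Here is a tiny, self-contained illustration of why the typecast matters (the element names are just for the example):

# One Maps child comes back as a single object, two come back as an array.
#   Casting with [array] makes both cases safe to count and index
$one  = [xml]'<povPartition><Maps>x</Maps></povPartition>'
$many = [xml]'<povPartition><Maps>x</Maps><Maps>y</Maps></povPartition>'

echo ([array]$one.povPartition.Maps).Count      # 1
echo ([array]$many.povPartition.Maps).Count     # 2

In PowerShell 2 a lone element has no Count property at all, and even in version 3 and later the explicit cast makes the loop logic obvious at a glance.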

By using a script we can repeat the process consistently and reliably over the course of several rounds of testing.  And the whole process, from exporting to copying to importing, can be completed in a few minutes.
