October 5, 2017

Library Access

Once upon a time several people were working on several different applications in our development environment.  We were adjusting shared dimensions, building and deleting HFM applications, and generally kicking sand around the playground.


Then one day one of the developers couldn't get into the Application Library.  They got an error dialog with the title Requested Service not found and the helpful text:

----
Requested Service not found
----
Code: com.hyperion.awb.web.common.DimensionServiceException

Description: An error occurred processing the result from the server.

Actor: none


We also noticed we couldn't get to the Dimension Library, and our scheduled taskflows had stopped working.  Since we could still log in and open our HFM and Essbase applications, we could rule out issues with foundation services, HFM, and Essbase.  Everything pointed to a problem with EPMA.

Analysis


While there were errors in the EPMA logs there wasn't an obvious smoking gun.  We engaged both Oracle support and our integration partner.  The odd thing about Oracle support is that there is no option for the EPMA product.  All of the disciplines are cross-trained on EPMA, but you have to really hunt around for someone who is an EPMA expert.

Between the three of us we noticed this set of errors in the DimensionServer.log:

[2017-08-26T23:57:18.764-04:00] [EPMADIM] [NOTIFICATION:32] [EPMADIM-1] [EPMADIM.Hyperion.DimensionServer.LibraryManager] [tid: EPMA Server Startup] [ecid: disabled,0] Starting 11.1.2.2.00 FixInvalidDynamicPropertyReferences task
[2017-08-26T23:57:19.263-04:00] [EPMADIM] [INTERNAL_ERROR:32] [EPMADIM-1] [EPMADIM.Hyperion.DimensionServer.LibraryManager] [tid: EPMA Server Startup] [ecid: disabled,0] Failed 11.1.2.2.00 FixInvalidDynamicPropertyReferences task    at System.String.InternalSubStringWithChecks(Int32 startIndex, Int32 length, Boolean fAlwaysCopy)
   at Hyperion.DimensionServer.DAO.LibraryManagerDAO.FixInvalidDynamicPropReferencesForDimension(Int32 libraryID, Int32 applicationID, Int32 dimensionID, String dynamicProps, Int32 propertiesDimID, Int32 dynamicPropsPropID)
   at Hyperion.DimensionServer.DAO.LibraryManagerDAO.<>c__DisplayClass64.<FixInvalidDynamicPropReferencesForLib>b__63(DALDatasetCommand command)
   at Hyperion.DataAccessLayerCore.DataAccessLayer.ExecLoadSQL(String SQL, Action`1 onSetParams, Action`1 onRead, DatabaseContext context)
   at Hyperion.DataAccessLayerCore.DataAccessLayer.ExecLoadSQL(String SQL, Action`1 onSetParams, Action`1 onRead)
   at Hyperion.DimensionServer.DAO.LibraryManagerDAO.FixInvalidDynamicPropReferencesForLib(Int32 libraryID)
   at Hyperion.DimensionServer.DAO.LibraryManagerDAO.FixInvalidDynamicPropReferences()
   at Hyperion.DimensionServer.LibraryManager.FixInvalidDynamicPropertyReferences()
[2017-08-26T23:57:19.294-04:00] [EPMADIM] [ERROR:32] [EPMADIM-1] [EPMADIM.Hyperion.DimensionServer.Global] [tid: EPMA Server Startup] [ecid: disabled,0] An error occurred during initialization of the Dimension Server Engine:  startIndex cannot be larger than length of string.
Parameter name: startIndex.    at Hyperion.DimensionServer.LibraryManager.FixInvalidDynamicPropertyReferences()
   at Hyperion.DimensionServer.Global.Initialize(ISessionManager sessionMgr, Guid systemSessionID, String sqlConnectionString)
[2017-08-26T23:57:19.294-04:00] [EPMADIM] [NOTIFICATION:32] [EPMADIM-1] [EPMADIM.Hyperion.DimensionServer.Utility.ChannelUtility] [tid: EPMA Server Startup] [ecid: disabled,0] Listening using IPv4

(I'm copying everything in the block above so it gets found by search engines, but the key entry is the failed FixInvalidDynamicPropertyReferences task that clued us in to the solution.)  These errors were thrown each time the EPMA services restarted, and they started around the time the taskflows started failing.  Earlier in the log we just see the FixInvalidDynamicPropertyReferences task start and finish.  This looked like the culprit.

Solution


If you bingoogle “DimensionServer FixInvalidDynamicPropertyReferences” you find a couple of articles that describe the error above.


Both articles attribute the error to a specific Oracle patch.  It is an older patch that had been superseded in our environments.  Since there were no recent patches applied, we initially glossed over this.

But the fix is to run a SQL UPDATE statement to replace blank strings with NULL values:

UPDATE DS_Property_Dimension
   SET c_property_value = null
   FROM DS_Property_Dimension pd
      JOIN DS_Library lib
         ON lib.i_library_id = pd.i_library_id
      JOIN DS_Member prop
         ON prop.i_library_id = pd.i_library_id
            AND prop.i_dimension_id = pd.i_prop_def_dimension_id
            AND prop.i_member_id = pd.i_prop_def_member_id
      JOIN DS_Dimension d
         ON d.i_library_id = pd.i_library_id
            AND d.i_dimension_id = pd.i_dimension_id
  WHERE
      prop.c_member_name = 'DynamicProperties' AND
      pd.c_property_value IS NOT NULL AND pd.c_property_value = '';

Since we had exhausted all other avenues, we checked whether the fix applied to us by replacing the UPDATE statement with a SELECT statement:

SELECT c_property_value
   FROM DS_Property_Dimension pd
      JOIN DS_Library lib
         ON lib.i_library_id = pd.i_library_id
      JOIN DS_Member prop
         ON prop.i_library_id = pd.i_library_id
            AND prop.i_dimension_id = pd.i_prop_def_dimension_id
            AND prop.i_member_id = pd.i_prop_def_member_id
      JOIN DS_Dimension d
         ON d.i_library_id = pd.i_library_id
            AND d.i_dimension_id = pd.i_dimension_id
  WHERE
      prop.c_member_name = 'DynamicProperties' AND
      pd.c_property_value IS NOT NULL AND pd.c_property_value = '';

The query returned a handful of records where c_property_value was blank instead of NULL.  We ran the UPDATE statement, and after restarting the services the libraries, taskflows, and other EPMA features were available again.

Conclusion


Prior to this we had issues with dimensions in one of the applications under development.  Part of fixing that required deleting and importing dimensions and adjusting attributes.  Our suspicion is that somewhere along the line blanks got imported instead of NULLs.

The first lesson is that EPMA issues can be tricky to resolve.  It seems odd that Oracle wouldn't have support staff focused on EPMA, since it binds all of the other EPM features together along with foundation services.

The other lesson is that if you have an issue and find something that looks like it might help, don't automatically discard it just because your situation is different.  Many times there is a way to check whether the solution might apply to you.

June 11, 2017

HFM Data Audit (part 2)

In the previous post I detailed the mechanics of enabling and exporting data audits in HFM.  But what can you do with the data once it's exported?

One option is to keep those .csv files for an arbitrary period of time.  If an issue arises you could search all those files for the data of interest.  But you need a utility that can search many files for the text of interest, and you could run into space issues that would require deleting old data.
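If you do keep the raw files around, PowerShell's Select-String is one way to search a whole folder tree of exports.  The folder name and search text below are just placeholders for this sketch:

# Search every retained data audit export for a string of interest
# (the archive folder and the account name are placeholders)
Get-ChildItem D:\LogArchives -Recurse -Filter 'Data*.csv' |
    Select-String -Pattern 'SalesRevenue' |
    Select-Object Path, LineNumber, Line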

I prefer to import the data into a SQL database.  The data can be kept as long as needed and is easier to search.

With SQL Server we can use the bcp utility to import the .csv files into a database table.  The table should match the fields we get in the .csv file.  The following is a sample CREATE TABLE statement to match the .csv file format.

CREATE TABLE [dbo].[AuditRecords](
[UserName]   [nvarchar](32) NULL,
[Activity]   [nvarchar](16) NULL,
[Timestamp]  [datetime] NULL,
[ServerName] [nvarchar](8) NULL,
[Scenario]   [nvarchar](16) NULL,
[Year]       [smallint] NULL,
[Period]     [nvarchar](3)  NULL,
[Entity]     [nvarchar](32) NULL,
[Value]      [nvarchar](16) NULL,
[Account]    [nvarchar](40) NULL,
[ICP]        [nvarchar](24) NULL,
[Custom1]    [nvarchar](32) NULL,
[Custom2]    [nvarchar](24) NULL,
[Custom3]    [nvarchar](8)  NULL,
[Custom4]    [nvarchar](32) NULL,
[DataValue]  [nvarchar](24) NULL
)

Now that we have a table to hold our data we need to import it.  The bcp utility uses an import format file to specify how to parse the fields in the file being imported.  The format file specification is kind of clunky (it's covered in Microsoft's bcp documentation), but this is the format file I use:

10.0
16
1   SQLCHAR   0   64      ";"      1     UserName           SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   32      ";"      2     Activity           SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   24      ";"      3     Timestamp          ""
4   SQLCHAR   0   16      ";"      4     ServerName         SQL_Latin1_General_CP1_CI_AS
5   SQLCHAR   0   32      ";"      5     Scenario           SQL_Latin1_General_CP1_CI_AS
6   SQLCHAR   0   7       ";"      6     Year               ""
7   SQLCHAR   0   6       ";"      7     Period             SQL_Latin1_General_CP1_CI_AS
8   SQLCHAR   0   64      ";"      8     Entity             SQL_Latin1_General_CP1_CI_AS
9   SQLCHAR   0   32      ";"      9     Value              SQL_Latin1_General_CP1_CI_AS
10  SQLCHAR   0   80      ";"      10    Account            SQL_Latin1_General_CP1_CI_AS
11  SQLCHAR   0   48      ";"      11    ICP                SQL_Latin1_General_CP1_CI_AS
12  SQLCHAR   0   64      ";"      12    Custom1            SQL_Latin1_General_CP1_CI_AS
13  SQLCHAR   0   48      ";"      13    Custom2            SQL_Latin1_General_CP1_CI_AS
14  SQLCHAR   0   16      ";"      14    Custom3            SQL_Latin1_General_CP1_CI_AS
15  SQLCHAR   0   64      ";"      15    Custom4            SQL_Latin1_General_CP1_CI_AS
16  SQLCHAR   0   48      "\r\n"   16    DataValue          SQL_Latin1_General_CP1_CI_AS

Now we have a place to store our audit data and a way to import it.  The last step is to automate the whole process.  I run a PowerShell script daily that uses HFMAuditExtractCmdLine_x64.exe to export the audit data through yesterday to a .csv and then import it into the database.  I also retain the .csv files in case something goes haywire with the bcp import.  I organize these retention folders by month and year in a parent folder named D:\LogArchives and periodically purge old retention folders.

The files created by the audit extract utility have a maximum size of 17 MB, so a single export may produce more than one file.  The script needs to be able to import multiple files in a single batch.  The full script will export the data to a holding area, create a folder for retention if needed, then loop through the files in the holding area, import each with bcp into the database, and move each file to the retention folder.

One issue I ran into is that the exported data is in Unicode format, but bcp with the format file above expects ASCII.  Also, in our environment bcp runs on the SQL server.  I resolved this by reading the .csv file with Get-Content and then writing it with Out-File using the -Encoding ASCII switch.  Since I have to run bcp on the SQL server, the Out-File target is a share on the SQL server.  That share also has the bcp format file and a batch file to run the import.  This resolves the file format issue and keeps everything tidy on the SQL server.

The batch file on the SQL server is:

bcp DataAuditArchive.dbo.AuditRecords in T:\DataAudit\TempDataAudit_COMMA.csv -U DataAudit_update -P Password123 -F 2 -f T:\DataAudit\DataAudit.fmt -e T:\DataAudit\DataAudit.err

Here my database is named DataAuditArchive and the table is AuditRecords.  The batch file, bcp format file, error file, and the temporary copy of the .csv being imported are all in the T:\DataAudit folder.  This one-line batch file gets called from a PowerShell script that runs on the HFM server.

I also want to save all of the output in case something goes wrong.  I do this by piping all of the results to a log file that gets saved in the monthly archive folder.  This gives an audit trail of the script's file processing to allow troubleshooting.  If I find that one or more of the .csv files did not import, I can import those manually from the retention folder to ensure a complete audit record.

In the PowerShell script below the $Folder variable holds the name of the current month's retention folder, which will house all of the .csv files for the month as well as the log file.  The $Working folder houses the initial export from the HFMAuditExtract utility.  The $BcpWorking folder is the share on the SQL server that has the batch file, the bcp import format file, and the ASCII version of the .csv file currently being processed.

# Powershell script to archive and truncate data audit logs ending yesterday.
#

$Utility = 'D:\Oracle\Middleware\EPMSystem11R1\products\FinancialManagement\Utilities\HFMAuditExtractCmdLine_x64.exe'
$AppName = 'COMMA' # Name of the HFM Application
$Parms = ' -u D:\HFM.udl -a '+$AppName+' '  # UDL file and application name
$Data = ' -r '                              # flag to extract and truncate the data audit logs
#$Data = ' -x '                             # alternate switch (kept here for reference)
$Logs = 'D:\LogArchives\'                   # Parent folder for the log files
$TempFolder = 'Daily' # Store the dump from the HFMAuditExtract utility here
$BcpWorking = '\\SqlSvr\DataAudit'          # share on the SQL server (T:\DataAudit locally) used for the bcp import
$Start = ' -s 1/1/2013'                     # Arbitrary start date to make sure we get all previous data

$D = get-date

# New folder for each month that is named with the year and month
#
$Folder = $D.ToString('yyyy')+"_"+$D.ToString('MM')+$D.ToString('MMM')

# Capture everything through yesterday
#
$D = $D.AddDays(-1)
$End = ' -e '+$D.ToString('MM')+'/'+$D.ToString('dd')+'/'+$D.ToString('yyyy')

$Target = $Logs+$Folder                              # Destination for the logs
if (-not (test-path $Target)) { MkDir $Target }      # Create the destination folder if needed
$Working = $Logs+$TempFolder
$Output = ' -d '+$Working                            # Add the -d so this can be used as a parameter for the utility.

# Extract and truncate the data audit log
#   NOTE: the truncate task for data also does an extract
#
$CMD=$Utility+$Output+$Start+$End+$Parms+$Data       # Create the string to execute.  Path of utility and all parameters
echo $CMD                                            # Echo so we know what we did
invoke-expression $CMD                               # Execute the AuditExtract command line

# Process the file(s) that were created in the TempFolder
#
$TempFile = $BCPWorking + "\TempDataAudit_$AppName.csv"
$ScriptLog = $Target + '\BcpImport.log'
$BatchName = "T:\DataAudit\Import$AppName.cmd"

$AuditLogs = get-childitem $Working -filter "Data*$AppName*.csv"
foreach ($Log in $AuditLogs) {
    echo $Log.FullName

# Re-write the file with ASCII encoding because bcp has trouble with format files and Unicode data files
#
    get-content $Log.FullName | out-file $TempFile -encoding ASCII

#   Run bcp to import the data
#
  $BcpResults = invoke-command -computername SqlSvr -scriptblock { Param($Bat) & cmd.exe /C $Bat } -ArgumentList $BatchName

# Move the file to the dated archive folder
#
  $MoveResults = Move-Item $Log.FullName $Target -passthru

# Delete the ASCII file we just imported
#
  if (test-path $TempFile) { del $TempFile }

# And save the results to the script log
#
  (' * * * ') | out-file $ScriptLog -append
  $Log.FullName | out-file $ScriptLog -append
  (' ') | out-file $ScriptLog -append
  $BcpResults | out-file $ScriptLog -append
  (' ') | out-file $ScriptLog -append
  $MoveResults | out-file $ScriptLog -append
}

This script is scheduled to run daily, which keeps the DataAudit table in the HFM database lean and efficient.  The archive table becomes our source of truth, unless the event happened today, in which case we use the Data Audit tab in Workspace.
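Scheduling the script is a one-liner with schtasks on the HFM server.  This is a sketch: the task name, script path, and service account are placeholders, and the account needs rights to run the extract utility, reach the SQL server share, and remote into the SQL server (the /RP * switch prompts for the account password when the task is created):

schtasks /Create /TN "HFM Data Audit Archive" /TR "powershell.exe -ExecutionPolicy Bypass -File D:\Scripts\ArchiveDataAudit.ps1" /SC DAILY /ST 02:00 /RU MYDOMAIN\svc_hfm /RP *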

May 8, 2017

HFM Data Audit (part 1)

Don't you hate it when you're watching a TV show and it builds to a crescendo only to leave you with a "To be continued..." screen at the end?  You invest all that time just to get teased into watching next week.  I'm letting you know up front that I'm not covering everything I intend to show about HFM data auditing.  This post will deal with the mechanics of configuring and exporting the data audits.  The next post will address what we can do with the exports to make them useful for reference.


We used to get reports from users in the field that somebody moved their cheese.  Fingers were pointed, accusations made, email storms erupted, tsunamis demolished fragile shore lines, meteors collided in the stratosphere, all cascading into a cacophony of dissonance that gradually decrescendoed until the next crisis, wherein the entire process repeated.  The decision was made to enable data auditing in HFM to prevent such catastrophes.

The good thing about data auditing in HFM is that it records every transaction including data entry, data loads, data clears and data copies.  The bad thing about data auditing in HFM is that it records every transaction, so unless you are surgical about what gets audited you can generate an enormous log.


Data auditing is configured at the intersection of Scenario and Account.  For Scenarios, the Enable Data Audit field has to be set to Y (yes) or O (override) instead of the default of N.  Y indicates that everything in that scenario gets audited; O indicates that auditing for the scenario defers to the configuration of the Account dimension.  Unless you have a small Account dimension you should configure the audited scenarios with O, and only audit scenarios where users make changes.


For the Account dimension, change the Enable Data Audit field to Y for the accounts to be audited in scenarios configured with O.  Again, only audit accounts where users make changes; there is no value in auditing accounts that are calculated.  FDQM has its own audit trail, so if you use it to load data you should try to omit accounts that FDQM loads and just audit the accounts where users manually submit or clear data.  In our case we knew there were specific areas that generated problems, so we focused on accounts in those areas.


The audit data can be viewed in workspace by selecting Navigate > Administer > Consolidation Administration > Audit > Data Audit.  The resulting data can be filtered by any of the POV fields.  The data grid will show who did it, what they did, when they did it, where they did it, the POV of the intersection touched, and the value assigned.

The data audit details are in the table <AppName>_DataAudit.  The HFM administrator's guide advises keeping the number of records in the table below 500,000 to avoid degrading performance.  For a large company, half a million records can be added in an afternoon of data loads during month end close.
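A quick way to keep an eye on that threshold is to count the rows directly.  The table name below follows the pattern above for an application named Comma; adjust it to match your application and HFM schema:

-- Rough check against the 500,000 record guideline (table name is an assumption)
SELECT COUNT(*) AS AuditRecordCount
  FROM COMMA_DataAudit;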

The audit data can be exported and truncated from the Consolidation Administration tab.  But a better solution is to find an automated way of exporting and truncating the table.  In the EPMSystem11R1\products\FinancialManagement\Utilities folder is the command line utility HFMAuditExtractCmdLine.exe.  This utility has switches to truncate and/or extract the data and task audit logs for a consolidation application.  Data can be extracted or truncated within a specified date range and extracted to any available location.  Note that the utility requires a Universal Data Link (.udl) file, which must be configured with the connection string to the HFM database along with credentials that have at least db_datawriter access to the database.
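A .udl file is just a text file holding an OLE DB connection string; the easiest way to build one is to create an empty .udl file, double-click it, and fill in the Data Link Properties dialog.  Opened in a text editor it looks something like this, with the provider, server, database, and account being whatever your environment uses (all of the values below are placeholders):

[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Persist Security Info=True;User ID=hfm_audit;Password=Password123;Initial Catalog=HFM;Data Source=SqlSvr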


The command line to extract the data is pretty straightforward.  Assign the destination folder and the full path to the .udl file, specify the application, the delimiter, the start and end dates, and the switches for which operations to perform.  One quirk I found, at least with the 64-bit version of the utility for 11.1.2.2.500, is that the -k and -r switches both export and truncate the data instead of just truncating.

HFMAuditExtractCmdLine_x64 -d c:\Extracts -u C:\MyHFM.udl -a Comma -s 2017/01/01 -e 2017/02/28 -r

This example will extract the data to the C:\Extracts folder, use the C:\MyHFM.udl file to connect to the HFM database and the Comma application, then extract and truncate all of the data audit records recorded in January and February of 2017.  Note that in the example I'm using the default delimiter, which is the semicolon.  If you specify a delimiter it must be a printable character, since there is a known issue with the command line utility where it can't process [Tab] as a delimiter.

The command line utility is available up to version 11.1.2.3, but is not bundled with version 11.1.2.4 and above.  However, the format of the table has not changed between versions, so the utility will still work.  Oracle patch 9976978 can be downloaded to get an older version of the utility.  You can also get the latest PSU for a previous version, which will have a copy of the utility in the files\products\FinancialManagement\Utilities folder.

The utility can be configured in a batch file or run directly from the Task Scheduler, but we still need to adjust the end date.  In the next post I will detail sample code that exports the data to a .csv file, adjusts for the date, and imports that .csv file into a database.  I use this as a nightly scheduled process to keep the HFM database lean and preserve the audit data for forensic examination.

February 14, 2017

More Trouble with Temp Files

The title of this post kind of gives away the plot before I even start the story.  But it does give me a chance to use this picture in my blog.


We update our HFM application metadata monthly.  We first deploy the changes in our development environment and test any required changes to member lists, rules, etc.  After the metadata is loaded into EPMA the HFM application has to be redeployed.  In this case the redeploy failed.  The message displayed was almost as useless as a tribble:

The custom error module does not recognize this error.
This is like something you would find on the DailyWTF, an error that tells you nothing more than it's an error.

In the Consolidation Administration tab was an error message that was a little more descriptive, but equally unenlightening.

Could not determine wsdl ports.
Searching the Oracle knowledge base and bingoogling turned up nothing useful.  In cases dealing with Workspace we sometimes resolve unusual issues by clearing the browser cache and reopening the browser, but that didn't help.

The deployment failed at 6%, which normally indicates a problem with EPMA.  Everything on the EPMA server looked normal and there were no obvious errors in the logs.  We tried restarting just the EPMA services, which did not help.  We then restarted all the EPM services to close all connections, flush Java caches, and try to clear up whatever was causing the error, but the redeploy still failed at the same step.

Then I checked the HFM server, which in our case is a different host than the EPMA server.  The C: drive was almost full.  Because this is a dev environment I don't have any alerts configured, so I didn't have advance warning of the problem.

Analysis

A handy tool for finding disk hogs is the Sysinternals du utility.  It runs from the command line, but the syntax is easy and you can use keyboard shortcuts to quickly drill down to find the problem folders.

While du can scan a whole drive and find the hogging subfolders, this can take a bit of time.  My strategy is to check just one level using the -l 1 parameter, then drill down from there one level at a time.  This is usually a quicker way of finding the offending folder.

Here I look in the C:\Users folder because I suspect the problem is one of the profiles.  The clear offender is a profile taking over 5 GB, under an account that starts with r and ends with v.

Using DOS shortcuts I hit the up arrow key to repeat the last command, add the backslash, type an [r] to start the username, use the [Tab] key to auto-fill the rest of the name, and press [Enter] to see results for the next level down.  Repeat this technique until you find the problem folder.
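The drill-down ends up looking like this, with the profile name being whatever you found at the previous level (du.exe just needs to be on the path or run from the folder where the Sysinternals tools were unzipped):

du -l 1 C:\Users
du -l 1 C:\Users\<suspect profile>
du -l 1 C:\Users\<suspect profile>\AppData\Local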

In this case it was the user profile of the service account that runs the HFM and other EPM services.  In the AppData\Local\Temp folder there were a bunch of temp files, many of them tens of megabytes in size.  After deleting all the temp files from previous years we freed up 5 GB of drive space and the deployment succeeded.


Even though this processing is handled by EPMA, there is still data being written to the HFM server.  Presumably this is so HFM has something to process once its turn in the deployment comes.

Conclusion

This is another episode where temp files don't get cleaned up after execution.  Also note that the .tmp files use a naming convention with four hex digits as part of the name.  Had we not run out of drive space, it is conceivable we would have run into a name collision like we did with the Permanent Temporary Files.

This has become another task during the monthly maintenance, when logs and other items get purged or truncated to keep them manageable and keep drive space clear.
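The cleanup itself is easy to script.  This is a rough sketch, assuming the service account's profile path and a 30-day retention window; adjust both to match your environment:

# Purge old temp files left behind in the EPM service account's profile
# (the profile path and the 30-day window are assumptions for this sketch)
$TempPath = 'C:\Users\svc_epm\AppData\Local\Temp'
Get-ChildItem $TempPath -Filter *.tmp |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
    Remove-Item -Force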

What is curious in all of this is why these temp files aren't cleaned up as a matter of course.  While it makes sense to keep temp files around for troubleshooting if a process fails, surely whatever process generates these things could have a final step of cleaning up its droppings after it receives a success notice.  Even children know enough to clean up after their dog.


January 25, 2017

FDQM scripting errors

Financial Data Quality Management (FDQM) is used to move data into the EPM financial applications HFM and Essbase.  It has powerful and flexible facilities for parsing flat files for import.  It can also use scripts to import data, define conditional mappings, or run when certain events are triggered.  The scripts have a .uss extension but use vbScript as the language.  Scripts are stored in the \Data\Scripts folder of the application and can be edited directly or with the FDM Workbench.

Because we have the full power of vbScript there is a lot of sophisticated processing available.  We can also leverage ADO database connections to pull data from any accessible relational data source.

This post focuses on import scripts.  In a recent project I ran into two issues that produced error messages, one of which was not particularly useful and had an undocumented solution.

The situation is a working script that pulls data from a SQL datamart for a single entity.  The script generates a SQL command that includes a WHERE clause which uses the FDQM POV to define the location.  The customer wanted another copy of the script to pull data for all entities, then use that new script as the import format for the parent location.
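A rough sketch of what that kind of fragment looks like inside an import script is below.  Every server, database, table, and column name here is made up, and strLoc stands in for the location information FDQM passes to the script:

' Illustrative only - the real datamart, table, and columns will differ
Dim cnSQL, rsData, strSQL
Set cnSQL = CreateObject("ADODB.Connection")
cnSQL.Open "Provider=SQLOLEDB;Data Source=SqlSvr;Initial Catalog=Datamart;Integrated Security=SSPI;"
' Single-entity version: filter on the current location; the _ALL copy drops this WHERE clause
strSQL = "SELECT Entity, Account, Amount FROM dbo.GLBalances WHERE Entity = '" & strLoc & "'"
Set rsData = cnSQL.Execute(strSQL)
' ... loop through rsData and hand each record back to FDQM ...
rsData.Close
cnSQL.Close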

Easy-peasy.  We copy the script, rename it, update our WHERE clause to pull for all entities, create another import format that uses the new script, and assign that format to our parent location.  In this case the initial script was called HFMSQL and our edited copy is called HFMSQL_ALL.  (I won't detail the mechanics of managing the import formats and locations since that is well documented.)

Error 1

The first error is reasonably descriptive.


Error: An error occurred importing the file.
Detail: Script filename [HFMSQL_ALL.uss] is different from
the procedure name entered in the script file

The first line of an import script defines a function.  That function has to have the same name as the import script file.

The error was caused by keeping the original function name HFMSQL in the new script HFMSQL_ALL.

Making the function name match the script name resolves the error.
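So the first lines of the two scripts should look something like this, assuming the usual integration-script signature from the FDM sample scripts:

' HFMSQL.uss - the original script
Function HFMSQL(strLoc, lngCatKey, dblPerKey, strWorkTableName)

' HFMSQL_ALL.uss - the copy: the function must be renamed to match the new file name
Function HFMSQL_ALL(strLoc, lngCatKey, dblPerKey, strWorkTableName)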

Error 2

This is the error that gave me fits.


Error: Import failed. Invalid data or Empty content.

This looks like we aren't getting any data out of our edited query.  A handy troubleshooting tool is to use the vbScript FileSystemObject to dump information to a text file.

Create a FileSystemObject and then use the CreateTextFile method to make the debug file.  The parameter True indicates that we will overwrite an existing file.

Later in the script I write the SQL query string, which is generated by the script, to the debug file.

The WriteLine method writes the SQL string to the debug file.  The next step sends the SQL string to the SQL connection and pulls it into the record set that gets processed later in the script.
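Pieced together, the debugging additions look roughly like this.  The debug file path is arbitrary, and strSQL, cnSQL, and rsData stand in for whatever your script calls its query string, connection, and recordset:

' Debug helper: dump the generated SQL to a text file so it can be tested directly
Dim objFSO, objDebug
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objDebug = objFSO.CreateTextFile("D:\FDM\ImportDebug.txt", True)   ' True = overwrite an existing file

' ... the script builds strSQL ...

objDebug.WriteLine strSQL                ' write the generated SQL to the debug file
objDebug.Close
Set rsData = cnSQL.Execute(strSQL)       ' then send the SQL to the connection as before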

I re-ran the import, then opened the debug file, copied the SQL query, and ran it directly on the SQL server.  The query returned the records we expected.  So I don't have empty content, but how could the data be invalid?

Just to be sure I added similar debugging statements to the original working script and adjusted the new script to pull from just the location where the working script was attached.  The SQL statements that got generated were identical and running them directly on the SQL server returned the same results set.

Sometimes it helps to bring in a fresh pair of eyes, so I worked with an associate who double-checked my findings.  We then created another copy of the working script and ran into the issue from Error 1.  While fixing that error he suggested searching for the original script name, and at the bottom of the script we found the problem.


This is standard practice in most sample scripts you find.  After the load completes you set the ActionType and ActionValue, which gives the success message on completion.  But note the last line, which sets the function name to True to signal a successful completion to FDQM.  This variable name has to match the function name, which has to match the script name.
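In the copied script that tail end looked something like this (the ActionType and ActionValue lines follow the standard sample scripts; the bug is the old name on the last line):

' Give the user a success message, then signal success back to FDQM
RES.PlngActionType = 2
RES.PstrActionValue = "SQL Import successful!"

HFMSQL = True        ' <-- still the old function name; it must be HFMSQL_ALL = True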

So this is another easy fix but it took a while to resolve, partly because we weren't paying attention when we copied the scripts, but mostly because the error message does not indicate the real source of the problem.  Hopefully this post can save someone trouble next time they run into this error.