Use Flow services to encapsulate a sequence of services within a single service and manage the flow of data among them, and create complex, advanced integration scenarios involving multiple application endpoints.
Flow services provide you with rich data mapping capabilities, a familiar debugging mechanism, a large collection of built-in services, and more.
Overview
IBM webMethods Integration offers various features that enable you to automate tasks based on specific requirements. However, in some scenarios you want to create complex integrations that require advanced data transformation and custom logic. Earlier, you could do this by switching to the Flow Editor using the App Switcher and creating the required integrations in the Flow Editor.
We have simplified this process by providing you with flow services directly in your IBM webMethods Integration project, thus eliminating the need to access the Flow Editor through the App Switcher. With flow services, you can encapsulate a sequence of services within a single service and manage the flow of data among them, and create complex, advanced integration scenarios involving multiple application endpoints.
In the Flow Editor, you used a graphical drag and drop tool to create integrations. You set up your integrations by plugging blocks together in the workspace and coding concepts were represented as interlocking blocks.
In flow services, you can easily build a flow service by adding steps and selecting the constructs including Connectors, Controls, flow services, and Services from within the steps. The editor is visually enhanced to offer ease of use and is more interactive.
A flow service step is a basic unit of work that IBM webMethods Integration interprets and runs at run time. The steps are the building blocks of a flow service and are shown as rectangular blocks prefixed with step numbers. Steps may be compared to an actual implementation of a service in a programming language, where the business logic or the algorithm is written. IBM webMethods Integration lists steps sequentially from top to bottom and also runs them in that order.
You can simultaneously open five flow services in different tabs in a browser.
Comparing Workflows and Flow services
Workflows and flow services enable you to automate and optimize monotonous tasks based on a set of predefined rules and business logic. These features give you the power to connect apps, devices, and on-premises systems with only clicks and zero code.
Although workflows and flow services help you to accomplish the same goal, there are significant differences between the two features.
The following comparison showcases the differences between workflows and flow services.
- Audience: Workflows are designed for citizen developers and require no code. Flow services are designed for integration specialists and are low code.
- Interface: Workflows offer a visual, drag-and-drop interface to create business use cases. Flow services offer steps, and constructs from within the steps, to create business use cases.
- Building blocks: Workflows offer in-built triggers to automatically trigger workflows when relevant events occur, and actions to perform specific tasks. Flow services offer in-built applications and services to perform specific tasks.
- Nesting: Workflows support executing other flow services and workflows from within a workflow. Flow services support executing other flow services from within a flow service.
- Scope: Workflows are ideal for scenarios where basic data transformation on application data is required, so you can execute simple business use cases with workflows. Flow services are ideal for scenarios where complex integrations and advanced data transformation are required, so you can execute simple as well as complex business use cases with flow services.
Migrating Flow Editor Integrations to Flow services
If you are an existing customer and have created integrations in the Flow Editor, you can migrate those integrations to Flow services in IBM webMethods Integration using the Migrate Integrations functionality. You can migrate integrations from a Flow Editor project to the same project in flow services.
How it works
1. In IBM webMethods Integration, ensure that you have the Developer and Admin roles assigned from the Settings > Roles page.
2. Select the project where you want to migrate the Flow Editor integrations and click the Flow services tab. IBM webMethods Integration displays the number of integrations available for migration.
3. Click Migrate Integrations. A dialog box appears displaying the list of integrations that will be migrated.
4. Click OK to continue. A dialog box appears showing the migration results.
5. Click OK.
If the migration is successful, all integrations available in the Flow Editor project are migrated and available in Flow services. If any errors occur, it is recommended that you recreate those integrations in Flow services.
Note
After you migrate the integrations, you will not be able to open or edit the same integrations in the Flow Editor.
If a Flow Editor integration refers to another integration, both the integrations will be migrated.
After migration, if the pipeline variables used in if or else if blocks are dropped, the else block does not work properly. In such cases, update the migrated flow service from the user interface and save it.
Core elements, constructs, and components of Flow services
Let us see the core elements, constructs, and components that are used to create and run a flow service.
On the new flow service page, click on the rectangular box as shown below.
By default, the left panel lists the recently used Connectors, Controls, Project services, and Built-in services.
You can type a keyword in the rectangular box and search for the available elements. IBM webMethods Integration filters the data based on what you type in the search box.
Click All to view the categories available on the right panel, which you can use to build the Flow service.
Categories
Displays the following categories:
Connectors
Controls
Project Services
Built-in Services
Connectors
Displays the connectors available to create the Flow service.
Connectors are grouped in the following categories on the flow services panel.
Predefined Connectors
Predefined and configurable connectors that allow you to connect to SaaS providers.
Note
A few connectors are deprecated in this release. A deprecated connector displays the Deprecated label just below the connector wherever it appears in the user interface. Deprecated connectors continue to work as before and are fully supported by IBM; if you are using deprecated connectors in your existing workflows or flow services, they will work as expected. However, no new feature enhancements will be made for deprecated connectors, so if you are creating new workflows or flow services, it is recommended that you use the provided alternative connectors instead. The deprecation applies only to Actions, not to Triggers; Triggers are supported for both deprecated and alternative connectors. For the list of triggers, see the documentation for the alternative connectors.
REST Connectors
You can define REST Resources and Methods and create custom REST connectors. You can invoke a REST API in a flow service by using a REST connector.
SOAP Connectors
Displays custom SOAP connectors. Custom SOAP connectors enable you to access third party web services hosted in the cloud or on-premises environment. You can also invoke a SOAP API in a flow service by using a SOAP connector.
On-Premises Connectors
On-Premises applications uploaded from on-premises systems.
Flat File Connectors
Displays the Flat File connectors created either manually or from a sample file.
Controls
Controls are the programming constructs in a flow service. They allow you to run a specified sequence based on a field value, try a set of steps, and catch and handle failures. The panel displays conditional expressions, looping structures, and Transform Pipeline. Conditional expressions perform different computations or actions depending on whether a specified Boolean condition evaluates to true or false.
Sequence
Use the Sequence step to build a set of steps that you want to treat as a group. Steps in a group are run in order, one after another.
Conditional Controls
If: Use the If step to evaluate a Boolean condition; if the condition is true, the steps inside the If step are run.
If Else: Use If together with Else to run one set of steps when the condition is true and another set when it is false. The Else step runs only if the preceding test condition evaluates to false.
Else If: Use Else If to chain multiple conditions after an If step. The conditions are evaluated from top to bottom, and the first condition that evaluates to true runs; an optional final Else step runs when every condition is false. You can nest one If or Else If statement inside another.
Nested Condition: Nested conditions comprise condition statements contained within other condition statements. Conditions consisting of multiple statements are connected using the logical AND and OR operators.
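As a rough analogy, the conditional controls above behave like an if / elif / else chain in a general-purpose language. The following Python sketch uses a hypothetical discount example (not part of the product) to illustrate the top-to-bottom, first-true-wins evaluation:

```python
def route_discount(order_total):
    """Hypothetical example of an If / Else If / Else chain.

    Mirrors the flow service behavior: conditions are evaluated top to
    bottom, the first true condition runs, and the rest are skipped.
    """
    if order_total >= 1000:        # If
        return "gold-discount"
    elif order_total >= 500:       # Else If: checked only if the If was false
        return "silver-discount"
    else:                          # Else: runs when every condition above is false
        return "no-discount"
```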
Loops
Loops run a set of steps multiple times based on the block you have chosen. It repeats a sequence of child steps once for each element in an array that you specify. For example, if your pipeline contains an array of purchase-order line items, you could use a Loop to process each line item in the array. Loop requires you to specify an input array that contains the individual elements that are used as input to one or more steps in the Loop. At run time, the Loop runs one pass of the loop for each member in the specified array. For example, if you want to run a Loop for each line item stored in a purchase order, you would use the document list in which the order’s line items are stored as the Loop’s input array.
A Loop takes as input an array field that is in the pipeline. It loops over the members of an input array, executing its child steps each time through the loop. For example, if you have a flow service that takes a string as input and a string list in the pipeline, use Loops to invoke the flow service one time for each string in the string list. You identify a single array field to use as input when you set the properties for the Loop. You can also designate a single field for the output. Loop collects an output value each time it runs through the loop and creates an output array that contains the collected output values.
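The Loop behavior described above, an input array driving one pass per element and an output array collecting the results, can be sketched in Python; the line-item field names below are hypothetical:

```python
def run_loop(input_array, child_step):
    """Sketch of Loop semantics: run the child steps once per element of the
    input array and collect each pass's result into an output array."""
    output_array = []
    for element in input_array:    # one pass of the loop per array member
        output_array.append(child_step(element))
    return output_array

# e.g. computing a total once per purchase-order line item (hypothetical fields)
line_items = [{"qty": 2, "price": 5.0}, {"qty": 1, "price": 3.5}]
totals = run_loop(line_items, lambda item: item["qty"] * item["price"])
```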
Repeat
Repeat: The repeat step iterates only on an array and the number of executions is equal to the size of the array.
Repeat for Count: Repeat for Count specifies the maximum number of times IBM webMethods Integration re-runs the child steps in the Repeat step. Children of a Repeat step always run at least once; the count specifies the maximum number of times the children can be re-run. If the count is set to 0, the Repeat step does not re-run the child steps. If the count is set to any value greater than 0, the Repeat step re-runs the child steps up to that number of times. The concept is the same as Repeat; the only difference is that you iterate for a specific count, which can be specified directly on the step or through a variable (I/O or pipeline).
Repeat for Input Output: The concept is the same as Repeat; the only difference is that you can iterate over two different arrays concurrently.
While: A While loop is used to iterate a part of the program several times. If the number of iterations is not fixed, it is recommended to use the While loop.
Do Until: Do Until loops are similar to While loops, except that they repeat their bodies until some condition is false.
Break: Use the Break step only within a loop. This allows you to break out of the containing loop, that is, it allows you to break the program execution out of the loop it is placed in. In case of nested loops, it breaks the inner loop first and then proceeds to outer loops. You cannot attach child steps to the Break step.
Note
In case of an infinite loop, there is a default timeout configured in IBM webMethods Integration. If the time taken for execution exceeds this limit, the flow service execution is terminated. Contact your administrator to customize the default timeout.
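The Repeat for Count and Do Until semantics above can be sketched in Python, assuming the documented behavior: children always run at least once, the count caps the number of re-runs, and a Do Until body runs before its condition is tested.

```python
def repeat_for_count(count, child_step):
    """Repeat for Count sketch: children always run at least once, and the
    count caps the number of re-runs (count=0 means exactly one run)."""
    executions = 1
    child_step()                       # children of a Repeat always run once
    for _ in range(max(count, 0)):     # count = maximum number of re-runs
        child_step()
        executions += 1
    return executions

def do_until(child_step, should_stop):
    """Do Until sketch: the body runs before the condition is tested, so it
    always executes at least once."""
    executions = 0
    while True:
        child_step()
        executions += 1
        if should_stop():
            break
    return executions
```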
Error Handling
Try, Catch, Finally: You can use these steps to try a sequence of flow service steps, catch and handle any failures that occur, and then perform any cleanup work. When adding the Try, Catch, and Finally steps to a flow service, decide which usage pattern you want to use: Try-Catch, Try-Finally, or Try-Catch-Finally. Identify the logic for which you want to provide exception handling; the steps for this logic belong in the Try step. Then decide whether you want to handle exceptions and, if so, which ones. Catch steps can handle all exceptions or specific exceptions. If the service handles exceptions, decide what recovery logic or logging needs to be added. Lastly, identify any cleanup logic that you want the service to perform, that is, decide whether you need to include a Finally step.
The general usage pattern for try-catch-finally is a single Try step followed by zero or more Catch steps, followed by zero or one Finally step. Catch steps can be configured to handle specific failures. The Finally step runs a set of steps after a Try step completes successfully or fails. If a Catch step runs, IBM webMethods Integration runs the Finally step after the Catch step completes.
The Try-Catch usage pattern consists of a single Try step followed by one or more Catch steps. The Try step contains any number of child steps to be run. The Catch step contains any number of child steps to be run if the Try step fails.
The Try-Finally usage pattern consists of a single Try step followed by a Finally step. The Try step contains any number of child steps to be run. The Finally step contains any number of child steps to run regardless of the outcome of the Try step. The Try-Finally usage pattern does not handle any failures that occur in the Try step. As a result, any failure from the Try step remains pending. After the Finally step completes, IBM webMethods Integration propagates the failure to the parent flow step of the Try step if the Finally step completes normally.
The Try-Catch-Finally usage pattern is a combination of Try-Catch and Try-Finally. A Try-Catch-Finally consists of a Try step that contains logic to be attempted, followed by one or more Catch steps to handle any failure that occurs and run recovery logic. This is followed by a single Finally step to perform any clean up.
The Finally step contains logic that is run regardless of whether the Try step succeeds or fails. Often, the Finally step contains clean up logic that needs to run regardless of the outcome of the preceding Try or Catch steps.
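The three usage patterns map naturally onto try / except / finally in a general-purpose language. A minimal Python sketch of the Try-Catch-Finally pattern (the trace list is only there to make the execution order visible):

```python
def guarded_call(risky_step, recover, cleanup):
    """Try-Catch-Finally sketch: attempt the Try logic, run recovery logic
    only on failure, and always run the Finally (cleanup) logic."""
    trace = []
    try:
        trace.append(risky_step())     # Try: the logic to attempt
    except Exception as err:
        trace.append(recover(err))     # Catch: recovery logic on failure
    finally:
        trace.append(cleanup())        # Finally: runs on success or failure
    return trace
```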
Throw Error: You can attach the Throw Error step inside any step except the catch section of a Try Catch step. It explicitly throws an exception with a custom error message. If you use this step inside the try section of a Try Catch step, the error is caught in the catch section. If you use the value of a pipeline variable for this custom error message, type the variable name between % symbols, for example, %mymessage%. The variable you specify must be a String. You cannot attach child steps to the Throw Error step. If you add a Throw Error step inside a Try Catch step, any changes made to the pipeline variables inside the try step are reset to the values previously existing in the pipeline.
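The %variable% substitution described above can be sketched as a simple template resolution against the pipeline before the error is thrown; the function names here are illustrative, not product APIs:

```python
import re

def resolve_message(template, pipeline):
    """Resolve %variable% placeholders against pipeline values; unknown
    placeholders are left untouched. Illustrative only."""
    return re.sub(r"%(\w+)%",
                  lambda m: str(pipeline.get(m.group(1), m.group(0))),
                  template)

def throw_error(template, pipeline):
    """Throw an exception carrying the custom, pipeline-resolved message."""
    raise RuntimeError(resolve_message(template, pipeline))
```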
Exit
This step exits the entire flow service and signals success or failure as part of the exit.
The Exit flow service signaling success step allows you to successfully terminate and exit from the currently running flow service. You cannot attach child steps to the Exit flow service signaling success step.
The Exit flow service signaling failure step abnormally terminates the currently running flow service with a failure message. You can specify the text of the failure message that is to be displayed. If you want to use the value of a pipeline variable for this failure message, type the variable name between % symbols, for example, %mymessage%. The variable you specify must be a String. You cannot attach child steps to the Exit flow service signaling failure step.
Switch, Case
Switch allows a variable to be tested for equality against a list of values. Each value is called a case, and the variable being switched on is checked for each case, that is, Switch evaluates a variable and skips to the value that matches the case. For example, if the Switch variable evaluates as “A”, then case “A” is run.
A switch statement can have an optional default case, which must appear at the end of the switch. The default case can be used for performing a task when none of the cases are true. You cannot insert multiple default statements.
You can include case steps that match null or empty switch values. A switch value is considered to be null if the variable does not exist in the pipeline or is explicitly set to null. A switch value is considered to be an empty string if the variable exists in the pipeline but its value is a zero length string. Switch runs the first case that matches the value, and exits the block.
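A Python sketch of the Switch matching rules above, including the $null and empty-string cases; the helper name run_switch is hypothetical:

```python
def run_switch(pipeline, var, cases, default=None):
    """Switch sketch: run the first case whose label matches the switch
    value, then exit. "$null" matches a missing (or null) variable and a
    zero-length label matches an empty string."""
    value = pipeline.get(var)          # a missing variable counts as null
    for label, case_step in cases:
        if (label == "$null" and value is None) or label == value:
            return case_step()
    return default() if default else None
```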
Branch
Branch is a conditional execution of steps and constitutes a group of expressions. Within a Branch, IBM webMethods Integration runs the first expression that evaluates to true. Expressions are the conditions on the pipeline variables.
At run time, Branch evaluates the conditions provided and runs the first expression whose condition is evaluated to true. If none of the expressions are true, the default expression if included, is run.
The following figure is the default template of a Branch construct.
If you want to perform different actions on different values of one or more pipeline variables, use the Branch construct. In the following example, action is a pipeline variable created using the Define input and output fields dialog box.
The Branch contains two conditional expressions and a default expression.
Scenario 1
If the value of action starts with mult, IBM webMethods Integration evaluates the first expression (action = /^mult/) and runs step 3, that is, performs the multiplication operation.
Scenario 2
If the value of action is addition (action == “addition”), then the Branch starts its execution from step 2. As step 2 is evaluated to false, the execution moves to the next expression, that is, step 4. Step 4 is evaluated to true, hence step 5 is run, that is, the addition operation is performed. Remaining expressions in the Branch, if any, are ignored and the execution falls through to the next step after the Branch in the flow service.
Scenario 3
Let us assume that the value of action is subtraction. The Branch then starts its execution from step 2. As IBM webMethods Integration evaluates step 2 to false, the execution moves to the next expression, that is, step 4. IBM webMethods Integration evaluates Step 4 to false, hence evaluates step 6 ($default), that is, runs step 7, which makes the execution exit the flow service.
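The three scenarios can be sketched in Python, with the same expressions evaluated top to bottom and the first true one winning; the return values stand in for the steps that would run:

```python
import re

def run_branch(action):
    """Sketch of the Branch scenarios above: expressions are evaluated in
    order and the first true one runs; $default handles unmatched values."""
    if re.match(r"^mult", action):     # expression: action = /^mult/
        return "multiplication"
    if action == "addition":           # expression: action == "addition"
        return "addition"
    return "exit"                      # $default: exit the flow service
```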
Note
If you are specifying a field in a document or in a document reference, format it as document/documentVariable. For example, if you want to specify a field name, from the document employeeProfile, then format it as employeeProfile/name.
If you specify a literal value in an expression, the value you specify must exactly match the run-time value of the pipeline variable. The match is case-sensitive.
IBM webMethods Integration runs only the first target step whose expression evaluates to true. If none of the expressions evaluate to true, none of the child steps are invoked, and the execution falls through to the next step in the flow service, if there is no default expression.
If you want to prevent the execution from falling through a Branch step when an unmatched value occurs at run time, include a default target step to handle unmatched cases. Branch can have zero to many default expressions. IBM webMethods Integration runs the first sequentially encountered default expression.
The default step does not need to be the last step of a Branch but IBM webMethods Integration always evaluates the default step at the end.
Any step other than an expression cannot be a direct child of a Branch step. Further, you cannot add the expression step anywhere outside a Branch. If you are branching on expressions, ensure that the expressions you assign to the target steps are mutually exclusive. In addition, do not use null or empty values when branching on expressions. IBM webMethods Integration ignores such expressions and does not display any errors. You can provide multiple conditions for each expression and can also use regular expressions, for example, /^mult/. The expressions you create can also specify a range of values for the variables.
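The document/documentVariable path format mentioned in the note can be sketched as a simple nested lookup; this helper is illustrative, not a product API:

```python
def get_field(pipeline, path):
    """Look up a field by its document/field path, for example
    "employeeProfile/name". Illustrative helper only."""
    value = pipeline
    for part in path.split("/"):       # walk one document level per segment
        value = value[part]
    return value
```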
Specify the value of the variables in the expressions as follows:
- To match that exact string, specify a string.
- To match the string representation of the object's value (for example, true for a Boolean object, or 123 for an Integer object), specify a constrained object value.
- To match any string matching the criteria specified by the regular expression (for example, /^REL/), specify a regular expression.
- To match an empty string, specify a blank field.
- To match a null value, specify $null.
- To match any unmatched value (that is, to run the step if the value does not match any other label), specify $default.
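These matching rules can be sketched as follows; the helper is illustrative and assumes string comparison of values, /pattern/ syntax for regular expressions, and the special $null and $default labels:

```python
import re

def matches(label, value):
    """Sketch of the expression-matching rules: /pattern/ is a regular
    expression, "$null" matches a null value, a blank label matches the
    empty string, and "$default" matches anything."""
    if label == "$default":
        return True
    if label == "$null":
        return value is None
    if len(label) >= 2 and label.startswith("/") and label.endswith("/"):
        return value is not None and re.search(label[1:-1], value) is not None
    return value is not None and label == str(value)
```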
Transform Pipeline
As systems rarely produce data in the exact format that other systems need, at times you need to transform the format, structure, or values of data. Using the Transform Pipeline control in a flow service, you can perform transformations on data in the pipeline.
You can insert multiple transformers in a single Transform Pipeline step, to perform multiple data transformations. When multiple transformers are added, the Transform Pipeline step behavior is as follows:
All transformers are contained within a Transform Pipeline step and do not appear as a separate flow service step in the editor.
All transformers are independent of each other and do not execute in a specific order.
Consequently, the output of one transformer cannot be used as the input to another transformer. These characteristics make the Transform Pipeline step different from a normal step in a flow service.
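Because transformers are independent and have no execution order, each one effectively reads from the same input snapshot. A Python sketch of that behavior, with hypothetical field names:

```python
def transform_pipeline(pipeline, transformers):
    """Sketch of Transform Pipeline behavior: every transformer reads the
    same input snapshot, so no transformer can see another's output."""
    snapshot = dict(pipeline)          # inputs are fixed before any transformer runs
    for output_field, transformer in transformers.items():
        pipeline[output_field] = transformer(snapshot)
    return pipeline
```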
Inserting a Transform Pipeline Step in a Flow service
1. Create a flow service. A flow service step is created initially without any data.
2. Select Transform Pipeline from the step drop-down list. The Transform Pipeline step is added.
Adding Transformers
1. Create a flow service. A flow service step is created initially without any data.
2. Select Transform Pipeline from the step drop-down list.
3. Click the flow service step. The Pipeline panel appears.
4. Click the Transformers column header. The Select Transformer step is added under the Transformers column.
5. Select the function from Select Transformer. The input field values are modified based on the selected transformer.
Mapping Pipeline Fields to Transformer Fields
The pipeline fields can be mapped to the service fields of the transformers added.
1. Go to the Transformer step (for which the transformers have been added) in the Pipeline panel.
2. Click in the Transformer step. The fields are displayed.
3. Link the fields as per your requirements.
Note
Only one transformer can be expanded at a time. When you collapse it, all the existing transformers are visible.
You can delete a transformer using the delete icon against each transformer. If a transformer is deleted, the mappings made to its service fields will also be deleted.
Mapping Fields Directly
The Pipeline Input fields can be directly mapped to any of the Pipeline Output fields. This is known as Direct Mapping.
Example
Let’s see how transformers work with the help of an example. In the example below, we will transform the case of a string field to uppercase.
1. Provide a name and description of the new flow service and click the I/O icon.
2. Define an input field Currency Code and click Done.
3. Select Transform Pipeline.
4. Click the pipeline mapping icon and open the pipeline mapping window.
5. On the pipeline mapping window, click the Add New Transformer option. The Select Transformer panel appears.
6. Click ALL on the transformer panel.
Note
You must specify the pipeline input and output variables in the corresponding flow when invoking a flow service in the transform step to ensure visibility of the signature.
You cannot use the Controls category within a flow service transform step.
7. Select the toupper service.
8. Click the Expand icon.
9. Map Currency Code to inString and value to Currency Code.
10. Save the flow service and run it. In the Input values dialog box, type usd as the Currency Code.
11. Click Run on the Input values dialog box. The transformer converts the string field usd from lowercase to uppercase.
Note
You can delete a transformer by clicking the delete icon against each transformer. If a transformer is deleted, the mappings made to the service variables are also deleted.
Project Services
Displays the flow services available in the selected project. This enables you to invoke other flow services from the current flow service. The input and output of the referred flow service must be mapped accordingly for a successful execution.
Built-in Services
Displays the service categories. An extensive library of built-in services is available for performing common tasks such as transforming data values, performing simple mathematical operations, and so on.
Service input and output parameters are the names and types of fields that the service requires as input and generates as output. These parameters are collectively referred to as the service signature.
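A signature can be pictured as a mapping of field names to types that is checked before the service runs. An illustrative Python sketch, not the product's actual validation logic:

```python
def validate_input(signature, values):
    """Check supplied values against a declared input signature of field
    names and types. Illustrative only."""
    for name, expected_type in signature.items():
        if name not in values:
            raise KeyError("missing input: " + name)
        if not isinstance(values[name], expected_type):
            raise TypeError(name + " has the wrong type")
    return True
```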
Note
Related services are grouped in categories. See the built-in services reference for information on the service categories and the input parameters, output parameters, and usage notes, if any, for each service.
Tasks associated with Flow Services
You can perform the following tasks for flow services by clicking the icons available on the flow services panel.
Provide a flow service name and description. The asterisk denotes that the flow service is modified.
Search for text in the flow service, inside the editor, and within the steps.
Debug a flow service and inspect the data flow during the debugging session. For more information, see Debug Flow services.
Run the flow service after you save it. This enables you to test the flow service execution in real time and view the test results.
View the runtime log entries for a selected runtime directly within the Flow Editor.
This option allows you to log business data at the flow service level. First click the I/O icon and define the input and output fields. Then click the Log business data icon and select the defined fields in the Log business data dialog box. You can choose to log business data only when errors occur (On Failure) or always (Always); the default setting is On Failure. When selecting fields for logging, you can specify the same display name for more than one field, but this is not recommended, because it might make monitoring the fields at run time difficult. The business data log appears in the Execution Details page under Business Data. For more information, see Log Business Data.
Click the Navigator icon to view a summary of the flow service steps, which appear on the right panel. Move the slider available on the navigator panel to move through the flow service. If you move through the main flow service page, the navigator view will automatically move. You can click on a navigator step to go to that step in the flow service.
Click the Default mode option to view a more spacious, visual representation of the integration flow. Each step or action is displayed with ample space around it, making it easier to visualize and interact with each element.
Click the Compact mode option to display actions and steps in a more condensed format. This mode minimizes the space between elements, allowing more steps to be visible on the screen without scrolling.
Press Ctrl and select a step or multiple steps in the flow service. Then click the delete step(s) icon to delete the selected step or multiple steps in the flow service. You can also delete steps by right-clicking on a step on the flow service editor and selecting the delete option.
Press Ctrl and select a step or multiple steps in the flow service. Then click the move step(s) icon and select the move steps up, down, right, or left options. You can also move steps by right-clicking on a step on the flow service editor and selecting the available options.
Cut, Copy, Paste, Duplicate, Enable, and Disable steps in the flow service. You can also right-click a step in the flow service editor to add a step or a child step for the selected step, cut, copy, and paste steps, duplicate the selected step(s) by clicking Duplicate (which pastes a copy after the selection), delete a step, and move steps up, down, right, or left.
Note
You can perform cut, copy, and paste actions within the same flow service or another flow service in the same project. However, if you perform these actions from one flow service to another in a different project, you might encounter deployment errors.
Undo a step action.
Redo a step action.
Whenever you save a flow service, a newer version is added to the Version History with the default commit message. You can also provide a custom commit message by clicking the drop-down arrow beside the Save option and selecting Save with message.
The following options are available:
- Save with message: Provide a custom commit message by clicking the drop-down arrow beside the Source Control option and selecting Save with message.
- Version History: View the version commit history of the flow service.
Access Help topics or take a tour.
The following options are available:
- Schedule: Define a schedule for the flow service execution. Select Once if you want to schedule the flow service to run just once immediately or run once at a specified date and time.
- Select Recurring if you want to define a recurrence pattern. You can define a recurrence pattern daily, weekly, monthly, and in hours. Select the frequency (Hourly, Daily, Weekly, Monthly) with which the pattern recurs, and then select the options for the frequency. Click the + icon to repeat the execution for daily, weekly, and monthly schedules. Click the delete icon to delete the selected execution time for daily, weekly, and monthly schedules.
- Select the Prevent concurrent executions option to skip the next scheduled execution if the previous scheduled execution is still running, except when the previous scheduled execution has been running for more than 60 minutes (default). In that case, the next scheduled execution starts even if the previous scheduled execution is still running. Contact IBM support if you want to change the 60 minutes default value. If you do not select the Prevent concurrent executions option, the next scheduled execution starts even if the previous scheduled execution has not yet completed.
- In the Input Value Set panel, provide inputs to the flow service based on the defined input signature.
- Click Delete if you want to permanently remove the current recurrence schedule.
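The Prevent concurrent executions rule can be sketched as a small decision function, assuming the documented 60-minute default limit:

```python
def should_start(prev_running, prev_runtime_minutes, prevent_concurrent,
                 limit_minutes=60):
    """Decide whether the next scheduled run starts, per the Prevent
    concurrent executions rule (60-minute default limit)."""
    if not prev_running or not prevent_concurrent:
        return True
    # skip only while the previous run is still within the limit
    return prev_runtime_minutes > limit_minutes
```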
Note
Any time stamp displayed in IBM webMethods Integration is based on the time zone you have specified in IBM webMethods iPaaS. Not all time zones available in IBM webMethods iPaaS are currently supported in IBM webMethods Integration. If a time zone in IBM webMethods iPaaS is not supported, the time stamp in IBM webMethods Integration defaults to the Pacific Standard Time (PST) time zone. You can pause a scheduled flow service execution by clicking the Pause option on the Overview page of the flow service.
- Key Shortcuts: View keyboard shortcut keys.
Note
Occasionally, there may be instances where keyboard shortcuts in the Flow Editor coincide with those of the browser. If this occurs, it is advisable to avoid using the shortcuts and instead follow the regular methods to access the editor.
- Execution History: View successful and failed flow service executions, operations, and business data logs. - Version History: View the version commit history of the flow service.
An account is linked and configured. Note: For a migrated or imported flow service, the step-level validation might show that the account is not configured. This means that the steps are linked to an account alias but the account does not exist. In such a case, you can create the account inline and refresh the page, or you can go to the Connectors page and create the account with the same name as the alias.
An account is not linked.
A red circle on a step may appear for many error scenarios, for example, if an account is not configured, or the associated action is not selected, or for any invalid constructs.
Map or edit existing pipeline mappings.
Click the ellipsis icon and perform the following tasks:
- Comment: Add a note for the selected step.
- Disable: Disable or deactivate a step. You can enable the step by clicking the enable step icon. You can cut, copy, or delete a step after you disable it.
- Log business data: Log business data at the step level. Note: At the step level, the Log business data option is enabled only for connectors. In the Log business data dialog box, choose On Failure to log business data only when errors occur, or choose Always to always log business data. The default setting is On Failure. Expand the Input Fields and Output Fields trees to display the fields available in the signature, and select the check boxes next to the fields you want to log. If you want to define a display name for a field, type the display name beside the field. The display name defaults to the name of the selected field, but you can modify it. You can give the same display name to more than one field, but this is not recommended because it might make monitoring the fields at run time difficult. The business data log appears in the Execution Details page under Operations > Business Data. See the Log Business Data section for more information.
- Delete: Delete the step.
If a flow service references another flow service, click this icon to open the referenced flow service.
Indicates the count of only the immediate child steps.
Note
Flow services can accommodate a maximum of 300 steps. Exceeding this maximum step limit might lead to performance issues.
The Define I/O feature allows you to define the input and output fields for a flow service. You can define the input and output fields from the Define input and output fields screen. You can access the screen by clicking the Define I/O icon () from the flow services action bar.
The Define input and output fields screen has two tabs:
Input Fields - Allows you to define the fields that the flow service requires as input.
Output Fields - Allows you to define the fields that the flow service returns to the client or calling program.
You can declare the fields of a flow service in one of the following ways:
Add a new set - Specify the input and output fields manually. For more information, see Declaring Fields Manually.
Document Reference - Specify the input and output fields through a document type. When you assign a document type, you cannot add, modify, or delete the fields. For more information, see Declaring Fields using Document Reference.
Note
The declared input and output fields are automatically displayed in the Pipeline Input and Pipeline Output columns of the Pipeline panel.
The field names are case sensitive. For more information about the data types supported for the input and output fields, see Default Pipeline Mapping Rules and Behavior.
Guidelines for Defining Input and Output Fields
Although declaring input and output fields for a flow service is optional, it is strongly recommended that you make it a practice to declare the fields for every flow service that you create due to the following reasons:
Declaring fields makes the flow service’s input and output fields available in the pipeline.
Declaring fields allows you to:
link data to and from the flow service.
assign default input values to the flow service.
run the flow service and enter initial input values.
Declaring fields makes the input and output requirements of your flow service known to other developers who might want to call your flow service from their programs.
Input Fields
Specify all inputs that a calling program must supply to this flow service. For example, if a flow service invokes two other flow services, one that takes a field called AcctNum and another that takes OrderNum, you must define both AcctNum and OrderNum as input fields for the flow service.
Note
The purpose of declaring input fields is to define the inputs that a calling program or client must provide when it invokes this flow service. You do not need to declare inputs that are obtained from within the flow service itself. For example, if the input for one flow service is derived from the output of another flow service, you do not need to declare that field as an input field.
When possible, use field names that match the names used by the constituent flow services. Fields with the same name are automatically linked to one another in the pipeline. If you use the same field names as the flow service’s constituent services, you reduce the amount of manual data mapping that needs to be done. When you specify names that do not match the ones used by the constituent flow services, you must manually link them to one another.
Avoid using multiple inputs that have the same name. Although the application permits you to define multiple input fields with the same name, during execution, one field might overwrite the other. The fields are not processed correctly within the flow services or by other flow services that invoke this flow service.
Ensure that the fields match the data types of the fields they represent in the flow service. For example, if a flow service expects an integer called AcctNo, define that input field as an integer.
Output Fields
Specify all output fields that you want this flow service to return to the calling program or client.
Ensure that the names of output fields match the names used by the flow services that produce them. Like input fields, if you do not specify names that match the ones produced by the flow service’s constituent services, you must manually link them to one another.
Avoid using multiple outputs that have the same name. Although the application interface permits you to define multiple output fields with the same name, the fields may not be processed correctly within the flow service or by other flow services that invoke this flow service.
Ensure that the fields match the data types of the fields they represent in the flow service. For example, if a flow service produces a string called AuthorizationCode, ensure that you define that variable as a string.
Declaring Fields Manually
You can declare fields manually in one of the following ways:
- Using Add a new set
- Loading through XML or JSON
Using Add a new set
Here, the instructions are explained for input fields. You can follow the same instructions to add output fields from the Output Fields tab.
Go to the Define input and output fields screen.
Select Add a new set in the Input Fields tab.
Click Add in the Data Fields section. The fields are listed under Data fields.
Enter the details as per your requirements.
Click Done. The input fields are declared.
Note
When you select the Type as String, in the Display Type field, select:

Text Field if you want the input entered in a text field.

Password if you want the input entered as a password.

Large Editor if you want the input entered in a large text area instead of a text field. This is useful if you expect a large amount of text as input for the field, or if you need new line characters in the input.

In the Pick List field, define the values that appear as choices when IBM webMethods Integration prompts for input at run time.

In the Content Type field, you can define constraints for the field, such as the minimum value of a field or the maximum length of a string.

Do not use the characters “/” and “&” when defining field names. Using these characters in a field name generates an error. Use field names consisting only of alphanumeric characters and underscores.
Loading through XML or JSON
Here, the instructions are explained for input fields. You can follow the same instructions to add output fields from the Output Fields tab.
Go to the Define input and output fields screen.
Select Add a new set in the Input Fields tab.
Perform one of the following actions:
Load XML - Click Load XML in the Data Fields section to define the content in XML. The Type or paste XML content text box appears.
Load JSON - Click Load JSON in the Data Fields section to define content in JSON. The Type or paste JSON content text box appears.
Type the details as per your requirements. Alternatively, you can paste the details from an XML or JSON file.
Perform one of the following actions:
Load XML - Click Load XML if you have defined the fields in the XML format.
Load JSON - Click Load JSON if you have defined the fields in the JSON format.
The input fields are defined.
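The content you paste into the Type or paste JSON content box can be sketched as below. The sample payload is hypothetical; the point is that each top-level key becomes an input field, and a nested object becomes a document field.

```python
import json

# Hypothetical JSON content for the "Type or paste JSON content" box;
# field names (AcctNum, OrderNum, Customer) are illustrative only.
sample = """
{
  "AcctNum": "12345",
  "OrderNum": "98765",
  "Customer": {
    "Name": "Jane Doe",
    "Email": "jane@example.com"
  }
}
"""
fields = json.loads(sample)
# The top-level keys are the fields the editor would create
print(list(fields))  # ['AcctNum', 'OrderNum', 'Customer']
```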
Declaring Fields using Document Reference
You can use a document type to define the input or output fields for a flow service. For example, if there are multiple flow services with identical input fields but different output fields, you can use a document type to define the input fields rather than manually specifying individual input fields for each flow service. When you assign a Document Type to the Input or Output side, you cannot add, modify, or delete the fields on that part of the tab.
You can select Document Type from the Document reference drop-down list. A Document Type can be created using the Document Types option under Project > Configurations > Flow service.
You can create pipeline fields as document references, create document types comprising document references, and also define the signature of flow services comprising document references.
Go to the Define input and output fields screen.
Select Document Reference in the Input Fields tab. The Choose Document Reference drop-down list appears listing all documents available in the project.
Select the document from the Choose Document Reference drop-down list.
Click Done. The input fields are declared.
Input and Output Field Validations
The Input and Output Field Validations feature enables you to validate the input and output fields at run time. While declaring the input and output fields, you can provide constraints such as the minimum value a field can accept, the maximum length of a string, and so on. At run time, IBM webMethods Integration validates the values, and if the constraints are not satisfied, the flow service execution fails.
You can enable this feature using the options Validate input and Validate output in the Input Fields and Output Fields tabs of the Define input and output fields screen.
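The run-time check that Validate input performs can be sketched as follows. This is a simplified model, not the platform implementation; the constraint names (min_value, max_length) and field names are assumptions for illustration.

```python
# Sketch of input validation: check declared constraints against the
# supplied values; any violation would fail the flow service execution.
def validate_input(values, constraints):
    errors = []
    for name, rules in constraints.items():
        value = values.get(name)
        if "min_value" in rules and value < rules["min_value"]:
            errors.append(f"{name}: below minimum {rules['min_value']}")
        if "max_length" in rules and len(value) > rules["max_length"]:
            errors.append(f"{name}: longer than {rules['max_length']} chars")
    return errors  # non-empty list -> execution fails

constraints = {"AcctNum": {"max_length": 5}, "Quantity": {"min_value": 1}}
# Both constraints are violated here, so two errors are reported
print(validate_input({"AcctNum": "123456", "Quantity": 0}, constraints))
```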
Value Set
When running a service using IBM webMethods Integration, you frequently need to submit input values. A value set allows you to save these input values. If you need to execute the same flow service again in the future with the same input values, you can use the previously saved value set instead of manually inputting the input values.
Example
In this example, you will learn how to provide input values to a flow service and define a value set for it. We will create a flow service slacktest that posts a user-defined message on a specific Slack channel.
Basic Flow
1. Create the flow service.
2. Click the Define IO () icon and define the input fields channel and message.
3. Click the mapping icon () and define the pipeline mapping as shown in the following illustration.
4. Click the Run icon (). The Input values dialog box appears.
Note

If you do not define any input fields in a flow service, then the Input values dialog box does not appear and the flow service executes directly.

5. Specify the input values as general and Hello Team !!! and save the value set as slack_valueset_1.

You can also run the flow service without saving the value set. A value set is a placeholder to store run-time input values. You can use the same value set if you want to run the same flow service with the same input values. This is useful when you have multiple input fields. Value sets are stored in your browser’s local storage and are retained until the local storage is cleared.
Include empty values for string types: If you do not provide an input value for a string field, then at run time, the value will be null by default. If you select this option, then at run time, the value will be an empty string.
6. Click Run.
7. View the result. The values for the input fields channel and message are displayed.
Upload Input Values
The Upload Input Values feature allows you to upload a JSON file containing a value set that needs to be provided as input to a service.
When uploading a JSON file, if there is a value set with the same name, the following warning appears:
To overwrite an existing value set, select Overwrite values in {valueSetName}.
To rename the existing value set, select Rename to save as new value set.
Note
The JSON file can be a maximum of 200 KB in size. Exceeding this limit may impact the performance of the user interface.
Download Input Values
The Download input values feature allows you to download the input values used in a service as a JSON file. When you click Download input values, the input values are downloaded as a JSON file in the {flowServiceName}-{valueSetName}.json format and saved in the default download folder on your local system.
This downloaded file can be exported or used later based on your requirements.
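The naming convention for the downloaded file can be sketched as below. The value-set content shown is an assumption based on the slacktest example above, not the platform's exact file schema.

```python
import json
from pathlib import Path

# Hypothetical illustration of the {flowServiceName}-{valueSetName}.json
# naming convention, using the slacktest example from this section.
flow_service, value_set = "slacktest", "slack_valueset_1"
file_name = f"{flow_service}-{value_set}.json"
payload = {"channel": "general", "message": "Hello Team !!!"}

Path(file_name).write_text(json.dumps(payload, indent=2))
print(file_name)  # slacktest-slack_valueset_1.json
```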
The Pipeline feature offers a graphical representation of all of your data and allows you to map the data between the flow service’s user input and connectors or services. Pipeline is the general term used to refer to the data structure in which the input and output values are maintained for each step of a flow service. The pipeline starts with the user-defined input to the flow service and carries the inputs and outputs of each step forward to the next step. When the flow service executes, each step has access to all pipeline data up to and including the previous step’s output.
Pipeline Panel
The Pipeline panel enables you to map the input and output fields. You can access the Pipeline panel by clicking the mapping icon () on the flow service step. The Pipeline panel appears only when you select an operation of the connector or service; for a connector, an account must also be configured. The flow service generates the fields based on the selected operation and the service or connector. You can define the input or output fields in the Define IO section and map them to the service or connector fields.
The Pipeline panel has the following columns:
Pipeline Input - Displays the results from the previous step. For the first step, the input fields defined in the Define IO section are listed. The fields listed in the Pipeline Input column are sent as inputs to the Service Input fields.
Service Input - Displays input fields of the service or connector selected in the step.
Service Output - Displays output fields of the service or connector selected in the step.
Pipeline Output - Displays the results produced by the step. These results are sent as inputs to the next step’s Pipeline Input. The last step’s results indicate the output of the pipeline.
You can map the fields in the following order:
Pipeline Input fields to Service Input fields
Service Output fields to Pipeline Output fields
Note
For easy identification purpose, services are indicated with fn text in the column header and the connector icon is displayed for connectors.
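The two mapping hops above (Pipeline Input to Service Input, then Service Output to Pipeline Output) can be sketched as follows. This is an illustrative model, not platform code; the step, service, and field names are hypothetical.

```python
# Sketch of how data moves through the pipeline for one step:
# pipeline input -> service input, service output -> pipeline output,
# which then becomes the next step's pipeline input.
def run_step(pipeline, input_map, service, output_map):
    service_in = {dst: pipeline[src] for src, dst in input_map.items()}
    service_out = service(service_in)
    pipeline.update({dst: service_out[src] for src, dst in output_map.items()})
    return pipeline

# Hypothetical step: a service that adds two numbers
add_ints = lambda p: {"value": p["num1"] + p["num2"]}

pipeline = {"a": 2, "b": 3}                      # flow service input
pipeline = run_step(pipeline,
                    {"a": "num1", "b": "num2"},  # Pipeline Input -> Service Input
                    add_ints,
                    {"value": "sum"})            # Service Output -> Pipeline Output
print(pipeline)  # {'a': 2, 'b': 3, 'sum': 5}
```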
You can perform the following actions while mapping the input and output fields in a flow service. The taskbar is available at the top-right corner of the Pipeline panel.
Name and Icon
Description
Recommended Mappings ( )
Recommends mappings based on the pattern of previous mappings. The recommended mappings are represented with dotted lines. For more information, see Smart Mapping.
Show only Mapped ()
Displays only the mapped fields.
Copy ()
Copies a field from the Pipeline Input and Pipeline Output sections. This icon is enabled only when a field is selected. For more information, see Copy and Paste Fields.
Clear ()
Clears all mappings and values set for fields in the pipeline. This icon is enabled always. For more information, see Clear Values.
Delete ()
Deletes the selected mapping. You can also delete the values set for input and output fields, delete the map between fields, and delete the drop value. This icon is enabled if a particular mapping is selected by clicking on the mapping line. For more information, see Delete Mappings.
Move ()
Moves the selected field left, right, up, or down in the pipeline. The left and right icons are enabled only when the selected field can move to the immediate parent hierarchy (if any). The up and down icons are enabled only when the selected field can move one level up or down among its sibling fields.
Map ()
Connects service and pipeline fields in the pipeline with a line called a map (link). For more information, see Map Pipeline Fields.
Set Value ()
Sets the values for input and output fields. For more information, see Set Values.
Drop ()
Drops a field from the Pipeline Input or Pipeline Output sections. For more information, see Drop Fields.
Map by Condition ()
Sets a condition on a pipeline map. The target field is assigned a value only when the condition is satisfied. For more information, see Map by Condition.
Expand ()

Allows you to efficiently manage and inspect pipeline data. Provides a flexible way to control screen space.
Close ()
Closes the mapping panel and returns to the flow service. This icon is always enabled. Whatever the state of mappings or data manipulations done till that point are saved.
Paste ()
Pastes the field in the pipeline after a copy field action. For more information, see Copy and Paste Fields.
Add ()
Adds a field to the pipeline. You can add fields that were not declared as input or output fields (through Define IO) of the flow service. For more information, see Add Fields.
Note
You can also access these actions by right-clicking a field in the pipeline as shown:
Set Values
The Set Values feature allows you to set values for the fields by clicking on the field in the Pipeline panel.
The behavior of the Set Value feature is as follows:
For String type fields
Accepts any non null value such as alphabets, numbers, and symbols.
Sets the field with an empty (blank) value if you open the Set Value screen and do not specify any value.
For PickList
PickList is a drop-down list where you can choose one of the values from a predefined list. Based on the operation selected, you can either provide custom values or select a value from the predefined list. This operation is decided by the Service for the respective fields.
| When User is Allowed to Enter Custom Values | On Default Set Value | On Clear All |
| --- | --- | --- |
| True | "" (empty) | "" |
| False | options[0] | options[0] |
Note
When running a set value operation on the root document node, if the document node contains more than 60 child nodes, the set value operation may experience slight performance issues.
For example, let us see the Adding Integers (AddInts) functionality using the Math service.
Create a flow service. For example, AddingIntegers. A flow service step is created initially without any data.
Select the AddInts operation in the flow service step.
Click on the flow service step. The Pipeline panel appears. Two service input fields num1 and num2 are listed in the Service Input column.
Click any of the fields, for example, num1. The Set Value dialog box appears.
Enter the value in the num1 field.
Note
When you paste data equal to or exceeding 0.5 MB in the text box, the system disables editing due to performance limitations. However, you can still use the respective buttons to copy and paste the data and perform editing in an external tool.
Click Save. The value is set and the corresponding field is represented with the SetValue icon. If you export this step, the set value of this field is exported.
Repeat the above steps to set values for the required fields.
Pipeline Variable Substitution
The Perform pipeline variable substitution check box indicates whether you want the application to perform pipeline variable substitution at runtime. If selected, the application replaces the pipeline variable with the runtime value of the variable while running the flow service.
Note
You can assign values of string or string[] fields to other string or string[] fields in the pipeline.
Clear Values
The Clear Values feature allows you to reset the field values. If the field is a document or document reference type, Clear All resets the nested fields as well. The default values of different types on Set Value (without giving any input, just opening the Set Value screen and save) and Clear All are as follows:
| Field Type | On Default Set Value | On Clear All |
| --- | --- | --- |
| String | "" (empty) | "" |
| Boolean | False | False |
| Array | [] | [] |
| Float, Int, Double, Long, Short | nothing set | "" |
| Document, Document Reference | An object with all String fields at all levels set to ""; nothing is set for other types. | An object with all String fields at all levels set to ""; nothing is set for other types. |
Map Pipeline Fields
The Map Pipeline Fields feature allows you to connect service and pipeline fields in the pipeline with a line called a map (link). You can select two input fields and click the Map icon to create a map between them. Creating a map between fields copies the value from one field to another at run time. There are two types of mapping:
Implicit Mapping
Explicit Mapping
Note
When migrating integrations from Flow Editor to flow services, the existing mappings with references to fields whose definition has changed become invalid. For example, if a mapping references a field test which was originally an integer, but later became a float, then the existing mapping to the field test as integer is invalid. Following migration, you are prompted to remove these invalid references. You have to either remove those references or reassign correctly in order to proceed.
Implicit Mapping
Within a flow service, IBM webMethods Integration implicitly maps fields whose names are the same and whose data types are compatible. The flow service editor connects implicitly mapped fields with a dotted line.
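The implicit-mapping rule can be sketched as follows. The type check here is a simplification, and the field names and types are hypothetical.

```python
# Sketch of implicit mapping: a pipeline field is linked automatically
# to a service field when the names match and the types are compatible
# (modeled here as type equality, a simplification).
def implicit_maps(pipeline_fields, service_fields):
    return [name for name, ftype in pipeline_fields.items()
            if service_fields.get(name) == ftype]

pipeline_in = {"AcctNum": "string", "OrderNum": "string", "Qty": "integer"}
service_in = {"AcctNum": "string", "Qty": "string", "Region": "string"}
# Only AcctNum matches by both name and type
print(implicit_maps(pipeline_in, service_in))  # ['AcctNum']
```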
Explicit Mapping
You can map input fields from the Pipeline Input column to Service Input fields, and you can map the output from the Service Output column to a different field in the Pipeline Output column. Explicit mapping can be achieved in the following ways:
Drag and Drop - Drag the mouse pointer from the source field and drop on the target field.
Note
In order to map fields, you must drag the mouse pointer from the source field on to the text of the target field. Otherwise, the mapping does not happen.
Map Icon - Select a source field and a target field by clicking them, and then click the Map icon in the toolbar. The fields are mapped.
Delete Mappings
The Delete Mappings feature allows you to remove mappings. You can delete each map individually or all mappings at once.

Remove each mapping - You can do this in one of the following ways:

Using Toolbar - Select a map by hovering over the map line (arrow) and click the Delete icon in the toolbar.

Keyboard - Select a map by hovering over the map line (arrow) and press the Delete key.

Removing all Mappings - Click the Clear icon in the toolbar to delete all maps at once.
Search Fields
The Search feature allows you to search fields. When you type field names in the Search text box, IBM webMethods Integration performs a dynamic search across all fields (including nested fields), and displays the resultant fields in the respective panels. Additionally, each column has its own search bar that allows you to search in the respective column.
Expand Mapping Panel
The Expand feature allows you to resize the Pipeline panel. On clicking the Expand icon, the Pipeline panel size increases and you can resize the column widths as per your requirements to view the mappings clearly.
Tip
The Expand Mapping Panel option is useful in scenarios when there are many mappings and the current default column width is not sufficient for clear view of the mappings. You can click Expand to increase the Pipeline panel size and resize the column for viewing the mappings with more clarity.
Note
For any nested structures, you can easily expand or collapse all nodes by clicking the Expand All and Collapse All buttons. The Expand All and Collapse All buttons appear only when there are nested structures in a Transform step.
When any parent node is in expanded mode, the button name changes to Collapse All until all nodes are collapsed.
Copy and Paste Fields
The Copy and Paste feature allows you to copy any field and paste in the pipeline. Depending on the context, you can either paste the field or the field path. For example, if you copy a field and paste the field in the Set Value dialog box, the field path is pasted. Alternatively, you can use keyboard shortcuts Ctrl+C to copy and Ctrl+V to paste fields.
You can perform the copy and paste actions in the following ways:
Copy fields within the Pipeline panel
Copy and paste fields between the Define IO screen and Pipeline panel
Copy across flow services
Note
To copy any field name in the pipeline panel, you right-click the field and select Copy field name from the displayed menu. The complete path of the field is copied. You could use this option if you would like to extract field names only.
Drop Fields
The Drop feature allows you to remove fields from a pipeline that are no longer used by the subsequent steps. You can drop fields only from the Pipeline Input and Pipeline Output columns.

By dropping unwanted fields, you reduce the size of the pipeline at run time. The length and complexity of the Pipeline Input and Pipeline Output columns are also reduced, making the Pipeline panel much easier to use when you are working with a complex flow service. If an output field is dropped, that field is not sent as Pipeline Input to the next step. This way you can restrict the flow of fields from one step to another.

For easy identification, the Drop icon appears after a field is dropped.
You cannot drop a field in the following scenarios:
If the field has a value set.
If the field belongs to another flow service that is a parent of the existing flow service.
Add Fields
The Add Fields feature allows you to add fields to a pipeline.
Map by Condition
The Map by Condition feature allows you to define conditions for the maps (links) drawn between fields in a pipeline. A condition consists of one or more expressions that allows you to:
check for the existence of a field in the pipeline
check for the value of a field
compare a field to another field
For example, map the fields BuyersTotal and OrderTotal only if BuyersTotal has a value, that is, Not Null.
During run time, the application runs all conditional mappings and assigns a value to the target field only when the condition is true. Otherwise, the application ignores the mapping. In scenarios where the fields are mandatory and the conditions are not satisfied, you might observe flow service run-time failures.
You can map multiple source fields to the same target field if the mappings to the target have conditions. In some scenarios, where you have mapped multiple source fields to a single target field, at most, only one of the conditions you define can be true at run time. If more than one condition to the same target field evaluates to true, then the result is not definite because the order in which the conditions are run is not guaranteed.
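The run-time behavior described above can be sketched as follows. This is an illustrative model, not platform code; the field names reuse the BuyersTotal/OrderTotal example, and the second source field is hypothetical.

```python
# Sketch of conditional mapping: each mapping to a target carries a
# condition; a mapping assigns the target only when its condition is
# true. Per the guideline above, conditions on the same target should
# be mutually exclusive, so at most one assignment happens.
def apply_conditional_maps(pipeline, target, mappings):
    for source, condition in mappings:
        if condition(pipeline):
            pipeline[target] = pipeline[source]
            return pipeline
    return pipeline  # no condition true: target stays unset

pipeline = {"BuyersTotal": 150.0, "ListTotal": 120.0}
apply_conditional_maps(
    pipeline, "OrderTotal",
    [("BuyersTotal", lambda p: p.get("BuyersTotal") is not None),  # Not Null
     ("ListTotal",   lambda p: p.get("BuyersTotal") is None)])     # fallback
print(pipeline["OrderTotal"])  # 150.0
```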
You can define conditions for a map using the Condition Editor screen. The Condition Editor screen can be accessed by selecting the map and clicking the Map by Condition () icon on the Pipeline panel or by double-clicking the map.
The Condition Editor screen functionality is similar to that of the Expression Editor screen. For more information about the Condition Editor screen description, see Expression Editor.
Points to Consider when Defining Conditions for Maps
Conditions cannot be defined for the implicitly mapped fields.
Conditions assigned to each map must be mutually exclusive when mapping multiple source fields to one target field.
The condition expression given for the map behaves similarly to a flow service containing a Branch condition. For more information on Branch steps, see Branch.
Adding Conditions to a Map
Example
Assume that there is a flow service CentimetertoMeterConversion that converts the values provided by you from centimeters to meters. If the meter value is zero, then a custom message is logged. The flow service includes the following:
Service: Math multiplyFloats
Service Input - num1 and num2
num1 takes the input values provided by you. num2 is a constant conversion unit, 0.01
Conditional Step: If meters value is equal to zero, then display a custom message
Let us modify the logic of the flow service such that it converts only specific values instead of all values based on the following conditions:
num1 = minValue > 1 and maxValue < 9999
default = minValue < 1
Before you Begin
Log in to your tenant.
Ensure that you have created the flow service as described in the example.
Basic Flow
Go to the CentimetertoMeterConversion flow service.
Select the centimeters to num1 map and click from the pipeline tool bar. The Condition Editor screen appears.
By default, the first field is selected and displayed in the editor.
Ensure that the Enable Condition during execution check box is selected. Otherwise, the condition is not considered at run time.
Define the conditions. For more information on how to define conditions and sub conditions, see Creating Complex Expressions.
As per the example, the condition minValue > 1 and maxValue < 9999 is defined for the selected mapping as shown in the following illustration:
Click Save. The condition is added to the map, and an icon is added to indicate that the map has a condition defined.
Map the default pipeline Input field to num1.
Define condition for the Default pipeline Input field.
Note
To remove a condition from a map:
1. Select the map and click the Map By Condition icon. The Condition Editor appears.
2. Click Remove Condition.
Click Run. The results are displayed based on the values you provide.
Default Pipeline Mapping Rules and Behavior
Before you start mapping fields, it is recommended to go through the mapping guidelines to avoid runtime issues due to incorrect mappings.
If a field has a set value, you cannot map or drop that field, and vice versa.
If a field defined in the pipeline is neither mapped nor assigned a value, the field is not available in subsequent steps.
Set Value cannot be configured in the Pipeline Input and Service Output columns.
If a parent field is mapped, the child fields of that parent cannot be mapped.
If a child field is mapped, the parent of that child field cannot be mapped.
A target field can be mapped only once, except in the case of array indexes and map by condition.
Fields with different Object constraints cannot be mapped. If you map fields with different Object constraints and the Validate Input or Validate Output options are selected, the runtime result is undetermined.
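To illustrate, two of these rules (a target can be mapped only once, and a parent and its child cannot both be mapped) can be sketched as a hypothetical validation check. The slash-separated path convention and the function itself are illustrative assumptions, not the product API:

```python
def can_map(existing_targets, new_target, indexed=False, conditional=False):
    """Check whether new_target may receive another mapping."""
    # A target field can be mapped only once, except for array-index
    # maps and maps that have conditions.
    if new_target in existing_targets and not (indexed or conditional):
        return False
    # A parent and its child field cannot both be mapping targets.
    for t in existing_targets:
        if t.startswith(new_target + "/") or new_target.startswith(t + "/"):
            return False
    return True
```

For example, a second unconditional map to the same target is rejected, while a conditional map to it is allowed.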
The data type mappings that are allowed between pipeline fields are as follows. Each entry lists the source field type, the target field types, and the runtime behavior:

Char → Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object, Char, Date: Mapping occurs. The target field data type is typecast to Character. For example, if there is a mapping between Character A and Integer X, then Integer X is typecast as Character and the value of A is mapped to X.
Char → String: Mapping does not occur at run time.
Char → Char[], Object[]: Mapping occurs. The target field is typecast to the Character data type and, by default, the zeroth index (Index[0]) is mapped with the Character value.
Date → Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object, Char, Date: Mapping occurs. The target field data type is typecast to Date. For example, if there is a mapping between Date A and Integer X, then Integer X is typecast as Date and the value of A is mapped to X.
Date → String: Mapping does not occur at run time.
Date → Date[], Object[]: Mapping occurs. The target field is typecast to the Date data type and, by default, the zeroth index (Index[0]) is mapped with the Date value.
String → String, Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object, Char, Date: Mapping occurs. The target field data type is typecast to String. For example, if there is a mapping between String ABC and Integer X, then Integer X is typecast as String and the value of ABC is mapped to X.
String → String[], Object[]: Mapping occurs. The target field is typecast to the String data type and, by default, the zeroth index (Index[0]) is mapped with the String value.
Float → Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object, Char, Date: Mapping occurs. The target field data type is typecast to Float. For example, if there is a mapping between Float F = 55.25 and Integer X, then Integer X is typecast as Float and the value 55.25 is mapped to X.
Float → Float[], Object[]: Mapping occurs. The target field data type is typecast to Float and, by default, the zeroth index (Index[0]) is mapped with the Float value.
Integer → Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object, Char, Date: Mapping occurs. The target field data type is typecast to Integer. For example, if there is a mapping between Integer I = 55 and Float F, then Float F is typecast as Integer and the value 55 is mapped to F.
Integer → Integer[], Object[]: Mapping occurs. The target field data type is typecast to Integer and, by default, the zeroth index (Index[0]) is mapped with the Integer value.
Integer → String[], Short[], Long[], Float[], Double[], Boolean[], BigDecimal[], BigInteger[], Char[], Date[], String[][]: Mapping does not occur at run time.
Short → Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object, Char, Date: Mapping occurs. The target field data type is typecast to Short. For example, if there is a mapping between Short 325 and Integer X, then Integer X is typecast as Short and the value 325 is mapped to X.
Short → Short[], Object[]: Mapping occurs. The target field data type is typecast to Short and, by default, the zeroth index (Index[0]) is mapped with the Short value.
Long → Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object, Char, Date: Mapping occurs. The target field data type is typecast to Long. For example, if there is a mapping between Long L and Integer X, then Integer X is typecast as Long and the value of L is mapped to X.
Long → Long[], Object[]: Mapping occurs. The target field data type is typecast to Long and, by default, the zeroth index (Index[0]) is mapped with the Long value.
Double → Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object, Char, Date: Mapping occurs. The target field data type is typecast to Double. For example, if there is a mapping between Double D1 = 4.673854 and Float F, then Float F is typecast as Double and the value 4.673854 is mapped to F.
Double → Double[], Object[]: Mapping occurs. The target field data type is typecast to Double and, by default, the zeroth index (Index[0]) is mapped with the Double value.
Boolean → Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object, Char, Date: Mapping occurs. The target field data type is typecast to Boolean. For example, if there is a mapping between Boolean B = True and Double D, then Double D is typecast as Boolean and the value True is mapped to D.
Boolean → Boolean[], Object[]: Mapping occurs. The target field data type is typecast to Boolean and, by default, the zeroth index (Index[0]) is mapped with the Boolean value.
Boolean → String[], Short[], Long[], Integer[], Float[], Double[], Big Decimal[], Big Integer[], Char[], Date[], StringTable[][]: Mapping does not occur at run time.
Big Decimal → Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object, Char, Date: Mapping occurs. The target field data type is typecast to Big Decimal. For example, if there is a mapping between Big Decimal D1 = 4.673854 and Float F, then Float F is typecast as Big Decimal and the value 4.673854 is mapped to F.
Big Decimal → BigDecimal[], Object[], BigInteger[]: Mapping occurs. The target field data type is typecast to Big Decimal and, by default, the zeroth index (Index[0]) is mapped with the Big Decimal value.
Big Integer → Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object, Char, Date: Mapping occurs. The target field data type is typecast to Big Integer. For example, if there is a mapping between Big Integer B1 = 4673854 and Float F, then Float F is typecast as Big Integer and the value 4673854 is mapped to F.
Big Integer → BigInteger[], Object[], BigDecimal[]: Mapping occurs. The target field data type is typecast to Big Integer and, by default, the zeroth index (Index[0]) is mapped with the Big Integer value.
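As a rough Python analogy for these rules (a sketch of their shape, not product behavior): the target takes on the source's data type and receives its value, and a scalar mapped to a compatible array lands at the zeroth index:

```python
def map_fields(source_value, target_is_array=False):
    # Scalar-to-array case: by default the value is placed at Index[0]
    if target_is_array:
        return [source_value]
    # Scalar-to-scalar case: the target is typecast to the source's type,
    # so the source value is carried over unchanged
    return source_value
```

For example, mapping Float 55.25 to a Float[] target yields an array whose zeroth element is 55.25.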
When you map a single-dimensional data type (source) to a multidimensional data type (target), the value of the source field is assigned to the specified element of the target field. For rules on whether the mapping occurs at run time, see the tables in the Default Pipeline Mapping Rules and Behavior section.
Single Dimension Data Types - String, Short, Long, Integer, Float, Double, Boolean, Byte Array (b[]), Big Decimal, Big Integer, Object
Multi Dimension Data Types - String [], Short [], Long [], Integer [],Float [], Double [], Boolean [], Big Decimal [], Big Integer [], Object[]
Document, Document References can be mapped only with Document, Document References And arrays of these (Document [], Document Reference[]).
The Indexed Mapping feature allows you to map array fields and specify which element in the array you want to map. A few examples:
For String Lists and Object Lists, you can specify the index for the list element you want to link. For example, you can map the third element in a String List to a String.
For Document Lists, you can specify the index for the Document that you want to link. For example, you can map the second Document in a Document List to a Document field.
For a field in a Document List, you can specify the index of the Document that contains the value that you want to link. For example, if the Document List POItems contains the String ItemNumber, you can link the ItemNumber value from the second POItems Document to a String.
The mapping between array indexes can be performed using the Indexed Mapping dialog box. The Indexed Mapping dialog box automatically appears when you map array fields in the Pipeline panel.
Note
Index mapping for the Document and Document Reference data types are not supported.
The Indexed Mapping dialog box has the following options:
Map on array level - Allows you to map arrays without providing indexes. This is the default option.
Map specific elements in an array - Allows you to specify the mapping between arrays at an index level. On selecting this option, the indexes are listed for you to specify the mapping. For example, you can link the second element in a String List to a String, or link the third Document in a Document List to a Document field. You can map either all indexes or only a few of them. If you do not map all indexes, only the mapping between the indexed items is displayed, and the mapping for the main array is not shown in the Pipeline panel.
Note
If you have not mapped:
any indexes while the Map specific elements in an array option is selected, the parent fields are automatically mapped.
a particular target index to a source index, that index element is mapped to the parent.
If you have mapped multiple source indexes to a single target, then the result is not definite because the order in which the mappings are run is not guaranteed.
If you want to add or delete a row index, you can use the Add and Delete buttons in the Indexed Mapping dialog box. To delete a single index entry, use the Delete button adjacent to the index entry row. You cannot modify any existing mappings that are indexed.
Guidelines for Mapping Array Fields
To map array elements, you need to know the index for the element’s position in the array. Array index numbering begins at 0 (the first element in the array has an index of 0, the second element has an index of 1, and so on).
If you map to an array and specify an index that does not exist, IBM webMethods Integration increases the length of the array to include the specified array index. For example, suppose a String List has length 3. You can map to the String List and specify an index of 4; that is, you can map to the fifth position in the String List. At run time, IBM webMethods Integration increases the length of the String List from 3 to 5.
Each element in an array can be the source or target of a map; that is, each element in the array can be the start or end of a map. For example, if a source String List variable contains three elements, you can map each of the three elements to a target field.
If the source and target fields are arrays, you can specify an index for each field. For example, you can map the third element in a source String List to the fifth element in the target String List.
If you do not specify an array index for an element when mapping to or from arrays, the default behavior of the Pipeline panel is used. For information about the default behavior of the Pipeline, see Default Pipeline Rules for Mapping Array Fields.
At run time, the map (copy) fails if the source array index contains a null value or if you specify an invalid source or target index (such as a letter or non-numeric character). IBM webMethods Integration generates journal log messages (at debug level 6 or higher) when maps to or from array fields fail.
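The array-growing rule above (mapping to an index beyond the current length extends the target array to include that index) can be sketched as follows, assuming a simple Python list model of the target array:

```python
def map_to_index(target, index, value):
    # If the index does not exist yet, grow the array so that it does,
    # padding the new positions with empty (None) elements
    if index >= len(target):
        target.extend([None] * (index + 1 - len(target)))
    target[index] = value
    return target
```

This mirrors the example in the text: mapping to index 4 of a String List of length 3 grows the list to length 5.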
Mapping Document, Document List, Document Reference, or Document Reference List fields - When working with Document types in the pipeline, you can link a source field to the Document or to the children of the Document field:
A Document and its children cannot be targets at the same time. That is, if you have mapped a Document as the target, the Document’s child fields cannot be mapped as target. The same is applicable for Document List.
If you have mapped to a child field of a Document, then the child’s parent Document cannot be mapped as target at the same time. The same is applicable for Document List.
If you map two Document fields and specify an index that does not exist, IBM webMethods Integration overwrites the structure of the target Document with the structure of the source Document. For example, suppose there are two Document fields D1 and D2 and both have length 3. If you map D1 with D2 and specify an index of 4; that is, map to the fifth position in the D2, then at run time, IBM webMethods Integration increases the length of D2 from 3 to 5.
You cannot map a nested Document List to a target Document List when the Document Lists have different sizes. A nested Document List is one that is contained within a parent Document List. Document Lists are considered to have different sizes when they have a different number of entries within the lists. If you need to move values from the source Document List to the target, write logic that uses a LOOP flow step to assign values from the source to the target one by one.
When a Document Reference or Document Reference List refers to an Integration Server (IS) document type that contains identically named fields that are of the same data type, and both identically named fields are assigned a value or are linked to another field, the application might not maintain the order of the document contents in the pipeline when the service runs. For example, the application might group all the identical fields at the end of the document. To prevent the change in the order of document contents, set default values for the identically named fields: insert a MAP step in the service before the step in which you want to map or assign a value to the fields, and in the MAP step, under Pipeline Out, select the Document Reference field and assign default values to the identically named fields.
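The LOOP-based workaround for differently sized Document Lists can be sketched as an element-by-element copy. This is a hedged Python analogy; the "items" field name and the dictionary model of a Document are hypothetical:

```python
def copy_nested_items(source_docs, target_docs):
    # Instead of a direct list-to-list map (which fails when sizes differ),
    # loop over the source documents and copy the nested list one entry
    # at a time, growing the target as needed.
    for i, src in enumerate(source_docs):
        if i >= len(target_docs):
            target_docs.append({})  # grow the target to match the source
        target_docs[i]["items"] = list(src.get("items", []))
    return target_docs
```

This is the same shape as a LOOP flow step over the source Document List with a MAP inside the loop body.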
Default Pipeline Rules for Mapping Array Fields
When you create maps between scalar and array fields, you can specify which element of the array field you want to map to or from. Scalar fields are those that hold a single value, such as String and Object. Array fields are those that hold multiple values, such as String List and Object List. For example, you can map a String to the second element of a String List.
If you do not specify which element in the array that you want to map to or from, the application uses the default rules in pipeline to determine the value of the target field. The following table lists the default pipeline rules considered for mapping to and from array fields:
Field 1 - Field 2
Behavior
A scalar field - An array field that is empty (the field does not have a defined length)
The map defines the length of the array field; that is, it contains one element and has length of one. The first (and only) element in the array is assigned the value of the scalar field.
A scalar field - An array field with a defined length
The length of the array is preserved and each element of the array is assigned the value of the scalar field.
An array field - A scalar field
The scalar field is assigned the first element in the array.
An array field - An array field that does not have a defined length
The map defines the length of the target array field; that is, it will be the same length as the source array field. The elements in the target array field are assigned the values of the corresponding elements in the source array field.
An array field - An array field that has a defined length
The length of the source array field must equal the length of the target array field. If the lengths do not match, the map does not occur. If the lengths are equal, the elements in the target array field are assigned the values of the corresponding elements in the source array field.
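The default rules in this table can be summarized as a hedged Python sketch, with None standing in for an array that has no defined length (this is an illustration of the rules, not the product's implementation):

```python
def default_array_map(source, target):
    src_is_list = isinstance(source, list)
    tgt_is_list = isinstance(target, list) or target is None  # None = unsized array

    if not src_is_list and target is None:
        return [source]                  # scalar -> empty array: array gets length 1
    if not src_is_list and tgt_is_list:
        return [source] * len(target)    # scalar -> sized array: every element assigned
    if src_is_list and not tgt_is_list:
        return source[0]                 # array -> scalar: first element assigned
    if src_is_list and target is None:
        return list(source)              # array -> unsized array: lengths match, copy
    if len(source) != len(target):
        return target                    # sized arrays of different length: no map
    return list(source)                  # equal lengths: element-wise copy
```

Each branch corresponds to one row of the table above, in the same order.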
IBM leverages the collective intelligence of the IBM webMethods Integration community to suggest which fields should be mapped in a data map by learning from the mappings you create.
Smart mapping provides recommendations while you map pipeline data. A Machine Learning (ML) algorithm determines the likely accuracy of each suggestion, enabling you to toggle between high, medium, and low levels of mapping confidence. The ML algorithm learns from the mappings you create and automatically suggests mappings for similar fields. The algorithm benefits from having more data from a larger number of users.
Smart mapping is anonymous and does not store any customer-specific information or actual data of any kind and is available to only those tenants who provide their consent to share their mapping information. Individual fields contained within the records are indexed. Specifically, the field names and their hierarchy are indexed, as well as the mapping relationships between them.
After providing consent, if you have the relevant role permissions, you can choose to opt out of smart mapping. If you opt out, you will no longer see or use mapping suggestions and your maps will no longer be indexed. However, because the index is anonymous, any maps and profiles indexed during the opt-in period remain in the database. Further, the collected data is confined to the data center where the tenant resides.
Mapping recommendations are not tenant specific, so mapping inputs from one tenant may be used to make recommendations for another tenant. When you create a mapping, the collected data does not immediately appear as a recommendation for another user because the information is recorded and processed in our database only at specified intervals.
Note
For trial tenants, the mapping data is collected by default. Further, for trial tenants and for the Free Forever Edition, smart mapping is always enabled and cannot be changed. For paid tenants, only an Administrator has the permission to enable or disable smart mapping.
The following information is not indexed:
Business Data log
Static values that you set in the pipeline map
Accounts
Reference Data
Pipeline Data
Certificates
User Details
Static values in any application operations
How it works
To provide your consent to share the mapping information, from the IBM webMethods Integration navigation bar, click on the profile icon located at the top-right corner of the home page, and select Settings > Preferences.
On the Configure Tenant Preferences page, select the Publish integration mappings to recommendations engine option and click Save to enable smart mapping. This provides you mapping recommendations whenever you do mapping. By enabling this option, you are also providing us your consent to collect your mapping information. For trial tenants and for the Free Forever Edition, this option is always enabled by default and cannot be changed.
Select a project and then the flow service for which you want to do smart mapping.
On the pipeline mapping screen, select Recommend Mappings to see the recommended mappings. If you select the Show only mapped option, the Recommend Mappings option is automatically cleared. When the screen initially loads, only high-confidence recommendations appear by default.
Drag the recommendation accuracy slider to view the mapping recommendations and filter the recommendations based on their recommendation accuracy. The recommendation accuracy or confidence number appears when you hover the pointer over a mapping and is the level of mapping confidence of that specific recommendation.
Based on the level of mapping confidence, the recommendations are grouped into High, Medium, and Low categories:
High - Mappings have the highest probability
Medium - Mappings have a medium probability
Low - Mappings have the least probability
You can select more than one category by moving the slider. In the following example, mappings that have medium and high probability are displayed.
Select a mapping and click Accept to hard map it. You can select one or more recommendations and accept only those recommendations (Selected (x)), or you can accept all the recommendations that appear on the screen (All shown (x)). The accepted recommendation behaves like an actual mapping.
To unmap a hard mapping, select the hard mapping and then click the delete icon available on the pipeline action bar. Once the hard mapping is deleted, the recommendations appear again. To hard map the recommendations again, click Accept and select the desired option.
Note
If you are an Administrator of a paid tenant, and have cleared the Publish Integration Mappings to Recommendations Engine option, IBM webMethods Integration displays a message to inform you whether you want to enable the smart mapping feature. If you select Yes, the Predict Mappings option appears in the pipeline mapping screen.
The Expression Editor is an interface that allows you to define complex expressions (conditions) using the conditional controls such as If, Else If, While, and Do Until. A complex expression is formed by combining simple conditions (rules) with logical operators AND and OR or negating these conditions with logical negation operator NOT.
You can access the Expression Editor by clicking the Expression icon on the flow service step. The Expression icon appears only when you select a conditional construct control in the flow service step.
Components of Expression Editor
Expression Editor consists of two sections:
Expression View
Expression Builder
Expression View
The Expression View section displays the conditions defined. Additionally, when you hover the mouse pointer over the operator of a condition, the condition's scope is highlighted for easy reference, as shown in the following illustration:
Expression Builder
The Expression Builder section allows you to create complex conditions by defining rules. By default, a rule step is displayed to start with the condition definition. A rule step consists of Left Operand, Operator, and Right Operand to define a condition. The operands list the fields available in the Pipeline. You can group multiple rules to define sub conditions.
Expression Builder contains the following controls that aid you in defining the complex conditions:
Group-level Controls
Operator - lists the set of logical operators used to join the rules.
Add Group - groups a set of rules to define sub conditions.
Add Rule - displays the fields to define a rule.
Remove Group - deletes a group (sub condition) from the complex condition.
Negate Group - reverses a sub condition’s meaning. This control is a toggle button control. When you negate a group, the Negate Group button is renamed as Remove Negation and allows you to remove the negation for that group.
Rule-level Controls
Add Rule - displays the fields for you to define a rule.
Negate Rule - reverses a rule's meaning.
Delete Rule - deletes a rule.
At any time you can either modify or delete a part or whole of the complex condition using these controls.
When you hover on the operands, the complete path of the operand appears.
Creating Complex Expressions
Example
Let us create a complex expression for checking the vaccination slot availability with the following conditions:
Location is Washington
Zipcode is 20010
Vaccine can be either Pfizer, Moderna, or Sputnik
Age is less than or equal to 45 and slot is available
Venue is Howard University Hospital
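The combined condition above is equivalent to this boolean expression; the field names and function are illustrative, mirroring the rules and groups defined in the Expression Builder:

```python
def slot_matches(location, zipcode, vaccine, age, slot_available, venue):
    return (
        location == "Washington"
        and zipcode == "20010"
        # Group 1: vaccine can be either Pfizer, Moderna, or Sputnik
        and vaccine in ("Pfizer", "Moderna", "Sputnik")
        # Group 2: age is less than or equal to 45 and a slot is available
        and (age <= 45 and slot_available)
        and venue == "Howard University Hospital"
    )
```

The two parenthesized groups correspond to the sub conditions you build with Add Group, joined to the individual rules with the And operator.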
Before you Begin
Log in to your tenant.
Ensure that the fields are defined.
Basic Flow
Go to Flow services.
Select the project where you want to create the new flow service. You can also create a new project.
Click to create a flow service. The Start Creating the Flow service screen appears.
Provide a name for the flow service. For example, GetVaccinationSlotStatus and an optional description for the new flow service.
Select a conditional control in the flow service step. For example, If.
Click the Expression icon on the conditional flow service step. The Expression Editor screen appears.
Perform the following steps to define a rule:
Select a left operand.
Select an operator.
Select a right operand.
The rule is defined. As per the example, the rule 1 - Location is Washington is defined as follows:
[Optional] Click the Add Rule icon or the Add Rule button. A new rule step is added.
Repeat the above steps to define more rules as per your requirement.
Select the operator (group-level) from the And drop-down list to conjunct the previous and new rules. As per the example, rule 2 - Zipcode is 20010 is defined and both are combined with an And operator as follows:
Perform the following steps to define a group:
Click Add Group. A new rule step appears and is grouped under a separate block.
Select the operands and operator to define the rule.
[Optional] Click the Add Rule icon or Add Rule (group-level). A new rule step is added to the group.
[Optional] Select the operator from the And drop-down list (group-level) to conjunct the previous and new rules in the group.
Repeat the above steps to define all rules in the group. The sub condition is defined.
As per the example, group 1 - Vaccine can be either Pfizer, Moderna, or Sputnik, group 2 - Age is less than or equal to 45 and slot exists are defined as follows. The groups and other rules are combined with an And operator.
Define the rule 3 - Venue is Howard University Hospital as per the example. The expression created is as follows:
[Optional] Perform the following steps to remove a rule or rule group:
Rule - Click the Delete Rule icon adjacent to the rule that must be deleted. The rule is deleted from the complex condition.
Rule Group - Click Remove Group in the group that must be deleted. The rule group is deleted from the complex condition.
Tip
You can use the keys Ctrl+Z and Ctrl+Y to undo and redo the delete actions.
[Optional] Perform the following steps to negate a rule or rule group:
Rule - Click the Negate Rule icon adjacent to the rule that must be negated. The rule's meaning is reversed.
Rule Group - Click Negate Group in the group that must be negated. The group's meaning is reversed. The button is renamed as Remove Negation and allows you to remove the negation for that group.
Click Save on the Expression Editor screen. The condition is added to the flow service step. You can click the down and up arrows in the flow service step to view and hide the complete expression.
The complex condition is created and you can proceed with the other flow service steps.
Creating Flow services
No subtopics in this section
See the following examples to learn how to create flow services. Click here for information on how to manage roles and project permissions.
Get leads from Salesforce CRM and create corresponding customer leads in Microsoft Dynamics 365 CRM.
Before you begin
Log in to your tenant.
Check if you have the Developer and Admin roles assigned from the Settings > Roles page.
Note
Click here for information on how to manage roles and project permissions.
Obtain the credentials to log in to Salesforce CRM and Microsoft Dynamics 365 CRM back end accounts.
In IBM webMethods Integration, create Salesforce CRM and Microsoft Dynamics 365 CRM accounts.
Basic Flow
Select the project where you want to create the new Flow service. You can also create a new project.
Click the Flow services tab and on the Flow services page, click the icon.
Provide a name, for example, SalesforceToMSDynamics365CRM, and an optional description for the new flow service.
Type Salesforce in the search box and select Salesforce CRM.
Tip
In a flow service step, you can perform search in one of the following ways:
- Alias-based search: Type a keyword and the matching results are displayed. The supported keywords for alias-based search are as follows:
- map: Transform pipeline
- iteration/loop: While, Do Until, Repeat, Repeat for Count, Repeat-Input-Output
- selection/condition: If, Else, If Else, Else If, Switch, Branch, Case
- jump: Break, Exit
- exception: Try, Catch, Finally, Try Catch, Try Finally, Try Catch Finally
- Restricted scope search: Type a keyword followed by a pipe symbol, space, or tab to search within a specific scope. For example, to search within flow services, type fs|. The supported keywords for restricted scope search are as follows:
- ctrl: Controls
- con: Connectors
- fs: Flow services
- svc: Services
Select the queryleads operation.
Select the SalesforceCRM_1 account. You can also create or configure an account inline.
Click the icon to add a new step.
Type repeat, select Repeat to create a repeat step, and then select the /Lead option.
Select Microsoft Dynamics 365 CRM and then select the associated createLead action.
Select the msdynamics_1 account. You can also create or configure an account inline.
Click the mapping icon to map the input and output fields.
Click the Pipeline Input fields (FirstName and LastName) and drag them to the service inputs (firstname and lastname). The service output is automatically mapped to the Pipeline Output by a dotted line.
Save the flow service and run it by clicking the Run icon.
View the flow service execution results for a successful run. The lead ID generated is acf4f843-2970-ea11-a811-000d3a4da920.
Note
If a value is null for a field or if the datatype is unknown, then the obj (object) icon is displayed.
Download the results.
You can also view the previous flow service execution results.
Note
By default, all flow service execution results are retained for 30 days. You can optionally specify the number of days (up to 30 days) for which you would like to retain the flow service execution logs by clicking the Modify Retention Period link available at Monitor > Execution Results > Flow service Execution. Once the retention period is over, the flow service execution logs are deleted automatically.
Get attendees from Concur and create contacts in Salesforce CRM
Get attendees from Concur and create contacts in Salesforce CRM.
Before you begin
Log in to your tenant.
Check if you have the Developer and Admin roles assigned from the Settings > Roles page.
Note
Click here for information on how to manage roles and project permissions.
Obtain the credentials to log in to Concur and Salesforce CRM back end accounts.
In IBM webMethods Integration, create Concur (Concur_1) and Salesforce CRM (SalesforceCRM_1) accounts.
Basic Flow
Select the project where you want to create the new flow service. You can also create a new project.
Click the Flow services tab and on the Flow services page, click the icon.
Provide a name, for example, ConcurAttendeeToSalesforce, and an optional description for the new flow service.
Type Concur in the search box, select Concur, and then in the Type to choose action field, select Add Custom Operation.
Do the steps as shown in the following images to create the GetConcurAttendees custom operation.
Select GetConcurAttendees, click the icon, and select the Concur_1 account. You can also create this account inline.
Click the icon to add a new step.
Type repeat, select Repeat to create a repeat step, and then select the /item option.
Select Salesforce CRM and then select the associated createcontact action.
Select the SalesforceCRM_1 account. You can also create or configure this account inline.
Click the mapping icon to map the input and output fields.
Map the input and output fields as shown below. Double-click the OtherCity and OtherCountry fields and set the required input values. You can also select the field and click the Set Value icon. The service output is automatically mapped to the Pipeline Output by a dotted line.
Save the flow service and click Run.
View the flow service execution results.
Retrieve files stored in Amazon Simple Storage Service (S3) bucket and log the content of the files
Before you begin
Log in to your tenant.
Check if you have the Developer and Admin roles assigned from the Settings > Roles page.
Note
Click here for information on how to manage roles and project permissions.
Obtain the credentials to log in to Amazon S3 back end account.
In IBM webMethods Integration, create an Amazon S3 account, for example, AmazonSimpleStorageServiceS3_1.
Amazon S3 Bucket name to retrieve the files.
Basic Flow
1. Select the project where you want to create the new flow service. You can also create a new project.
2. Click the Flow services tab and on the Flow services page, click the icon.
3. Provide a name, for example, AmazonS3, and an optional description for the new flow service.
4. Type Amazon Simple Storage Service (S3) in the search box, select it, and then select the action or operation, getBucket.
5. Select the AmazonSimpleStorageServiceS3_1 account. You can also create or configure an account inline.
6. Click the mapping icon to set a value for getBucketInput.
7. Expand getBucketInput and click bucketName.
8. Type softwareagbucketaws as the value for bucketName.
9. Click the icon to add a new step. Then type repeat to select the Repeat step and select the /content option.
10. Select Amazon Simple Storage Service (S3) and choose the getObject action to retrieve the object from the specified bucket.
11. Click the mapping icon to map the input and output fields.
12. As shown below, click the pipeline input fields (name and key) and drag them to the service input fields (bucketName and objectName). The service output (getObjectOutput) is automatically mapped to the pipeline output by a dotted line.
13. Click the + icon to add a new step as shown below.
14. Select the Flow function and then select the logCustomMessage service. You can also type logCustomMessage and select it. This logs a message, which you can view in the flow service execution results screen.
15. Map the input (stream) to message. This step logs the contents of the files in the bucket.
16. Save the flow service and click Run.
17. You can also view the execution result on the Monitor > Flow service Execution page. Click the execution result name link to view the execution details.
Creating Custom Operations in Flow services
No subtopics in this section
IBM webMethods Integration provides predefined connectors, which contain SaaS provider-specific information that enables you to connect to a particular SaaS provider. Further, each connector uses an account to connect to the provider’s back end and perform operations. Each connector comes with a predefined set of operations. You can also create your own custom operations while creating a flow service.
Let us see how to create a custom operation while creating a flow service and then use that custom operation to create a Salesforce CRM back end account.
1. After you log in, select a project or create a new project where you want to create the flow service.
2. Click the Flow services tab and on the Flow services page, click the icon.
3. Provide a name, SalesforceCreateAccountCustom, and a description for the new flow service.
4. Type Salesforce in the search box, select Salesforce CRM, and then select Add Custom Operation.
5. On the Connect to account page, select the supported Authentication type and the Salesforce CRM account created from the drop-down list. Provide a name and description for the custom operation.
6. Select the create operation.
For REST-based connectors, after selecting the operation, you can click Headers to add input headers, if required. IBM webMethods Integration displays the default HTTP transport headers for the operation, along with their default values. At run time, while processing the headers, IBM webMethods Integration substitutes the values as necessary. To customize the headers, do the following:
a. Click + Add to add a custom Header.
b. Click the icon to specify the header name and an optional default value for the header variable. If the variable is null in the input pipeline, this value will be used at run time. If the variable already has an existing default value defined, this value will overwrite the existing value at run time.
c. If headers appear in the signature, select Active to activate the headers in the signature.
d. To delete a custom header that you have added, click Delete.
Note
You cannot delete the required headers.
You can also customize the parameters for REST-based connectors after selecting the operation by clicking the Parameter option. Review the operation parameter details. IBM webMethods Integration displays the parameter Name and Description, the Data Type used to represent the kind of information the parameter can hold, the parameterization Type of the request, and the default value needed to access the operation. To specify a default value for the parameter, click the icon and then type or paste the default value. The default value is used at run time. You cannot add or delete request parameters; you can only modify the default value of a parameter. All parameters appear in the signature.
Now let us go back to our example.
7. After you select the create operation, click Next, and then select the Business Object Account. Business Objects appear only for certain connectors and operations.
8. Select the data fields and confirm the action to create the custom operation. Data fields appear only for certain connectors and operations.
9. On the Flow service editor, click the icon to define the input and output fields.
10. Create two input fields, AccountName and CityName.
11. Click to edit the flow service mapping. Only the fields selected earlier are shown in the input panel.
12. Save the flow service and click Run. Provide the custom field values in the AccountName and CityName fields, for example, Software.
As the results show, the Software account is created.
IBM webMethods Integration supports the multipart/form-data media type, which lets you embed binary data such as files in the request body. Although application/x-www-form-urlencoded is a more common way of encoding, it is inefficient for binary data or text containing non-ASCII characters. The multipart/form-data media type is the preferred media type for request payloads that contain files, non-ASCII text, and binary data.
For example, if you want to create a user as well as upload a photo, the request has to be a multipart request where one part is an image file and the other part is a JSON request body. Further, uploading individual CRM contact records to Salesforce is time consuming, so using the MIME/multipart attachments capability, you can upload a CSV/JSON file containing data for multiple contacts to Salesforce in a single run.
For some connectors and operations, for example, Salesforce(R) Bulk v2 Data Loader, DocuWare, Google Drive, Amazon S3, and Freshservice, IBM webMethods Integration supports a multipart request body.
Example of a multipart request
A multipart/form-data request body contains a series of parts separated by a boundary delimiter, constructed using a Carriage Return Line Feed (CRLF), two hyphens ("--"), and the value of the boundary parameter. The boundary delimiter must not appear inside any of the encapsulated parts.
Each part has the Name, Type, Content-Type, and Part ID fields.
While adding a custom action, for example, for the Freshservice Create Child Ticket With Attachments operation, you can click Attachments to view the list of all the configured parts to be sent to the service provider. You can send a multipart/form-data payload that contains a file, text, or a document type.
The parts to be sent to the service provider appear in the input signature. All options including the Add option to add a custom part are disabled if the resource is not of type multipart/form-data. Currently, multipart/form-data payload is supported only at the request level, that is, only in the input signature.
Name - Name of the file part as documented in the SaaS provider API documentation.
Content Type - Content type of the part’s content (if the Type is TEXT), or the file you are uploading (if the Type is FILE) or the serialization type to which the document will be serialized and become the part content (if the Type is DOCUMENT).
Type - IBM webMethods Integration defines the following three part types:
TEXT - Represents a simple text part of a multipart/form-data payload where the content of the part is of type text/plain. You can use this part to send raw text data as the content of a part, in a multipart request. From a file upload perspective, this part is generally used to convey extra information about the upload behavior, like target folder paths/folder names where the file has to be uploaded.
DOCUMENT - Some back ends expect application/json or application/xml content in the part body. This represents a part where the content of the part is of type application/json or application/xml.
FILE - Represents a binary part of a multipart/form-data payload where the content of the part is binary or the content of the file itself. To upload a file, you will normally use this part. For file upload kind of use cases, this part is the main or mandatory part required by the service provider.
Active/Inactive - Represents whether this part will be included as part of the service signature.
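The three part types above can be sketched as a plain multipart/form-data body. The following standalone Python sketch is illustrative only: the part names, boundary value, and file content are hypothetical, not the webMethods service signature; the wire format follows the multipart/form-data convention described above.

```python
# Illustrative sketch: assembling a multipart/form-data body from
# TEXT, DOCUMENT, and FILE parts. Names and boundary are hypothetical.
import json

CRLF = "\r\n"
BOUNDARY = "example-boundary-1234"  # must not occur inside any part

def build_part(name, content, content_type, filename=None):
    """Render one part: headers, a blank line, then the part content."""
    disposition = f'Content-Disposition: form-data; name="{name}"'
    if filename:
        disposition += f'; filename="{filename}"'
    return CRLF.join([disposition, f"Content-Type: {content_type}", "", content])

parts = [
    # TEXT part: plain-text metadata, e.g. a target folder for the upload
    build_part("folder", "/uploads/2024", "text/plain"),
    # DOCUMENT part: a document serialized to application/json
    build_part("meta", json.dumps({"subject": "create ticket"}), "application/json"),
    # FILE part: the file content itself (binary in real uploads)
    build_part("attachments[]", "hello world", "text/plain", filename="hello.txt"),
]

# Parts are preceded by "--" + boundary; the body ends with "--" + boundary + "--"
body = CRLF.join(f"--{BOUNDARY}{CRLF}{part}" for part in parts)
body += f"{CRLF}--{BOUNDARY}--{CRLF}"
print(body)
```

Note how the closing delimiter repeats the boundary with trailing hyphens; receivers use these delimiters, not part lengths, to split the payload.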
Example to create a ticket with attachments (multipart data) in Freshservice
Before you begin
Log in to your tenant.
Check if you have the Developer and Admin roles assigned from the User Management > Roles page.
Obtain the credentials to log in to the Freshservice back-end account.
In IBM webMethods Integration, create a File Transfer Protocol (FTP/FTPS) account FTPS_4 and a Freshservice account Freshservice_1.
Basic Flow
Select the project where you want to create the new flow service. You can also create a new project.
Click the Flow services tab and on the Flow services page, click the icon.
Provide a name for the flow service, for example, CreateTicketWithAttachment, and an optional description for the new flow service.
Upload a file from the FTP server to Freshservice. Select the File Transfer Protocol (FTP/FTPS) connector, the getFile operation, and the FTPS_4 account.
Let us upload the hello.txt file available on the FTP server to Freshservice. Click Set Value for remoteFile and select the same file in the Set Value - remoteFile dialog box. Click Save.
Select the Freshservice connector. As you are creating a ticket with an attachment, select the createTicketWithAttachment operation and the Freshservice_1 account.
Click the Edit Mapping icon. Under Pipeline Input, map contentStream to value.
Set the following values on the mapping screen:
Set the value for filename as hello.txt.
Set the value for email as abc@xxxx.com.
Set the value for subject as create ticket.
Set the value for description as create ticket with attachment.
Set the value for priority as 1.
Set the value for status as 2.
Set the value for requester_id as 27000450816. This is the agent ID in Freshservice.
Set the value for phone as 123456789.
Set the value for source as test.
Save and run the flow service.
Go to Freshservice, click Tickets, and check that the ticket is generated.
Create bulk accounts in Salesforce using a multipart predefined operation
For some connectors and operations, for example, the Salesforce(R) Bulk v2 Data Loader connector and the createAndUploadDataUsingMultipart predefined operation, IBM webMethods Integration supports a multipart request body.
Before you begin
Log in to your tenant.
Check if you have the Developer and Admin roles assigned from the User Management > Roles page.
Obtain the credentials to log in to the Salesforce back end account.
In IBM webMethods Integration, create the Salesforce R Bulk v2 Data Loader account, SFBulkV2_02Apr.
Basic Flow
1. Select the project where you want to create the new flow service. You can also create a new project.
2. Click the Flow services tab and on the Flow services page, click the icon.
3. Provide a name and an optional description for the new flow service.
4. Type IO in the search box, select IO, and then select the stringToStream service.
5. Click the mapping icon to map the input and output fields.
6. Set the below value for string:
7. Set the value for encoding as UTF-8.
8. Click the icon to add a new step.
9. Select the Salesforce(R) Bulk v2 Data Loader connector and the createAndUploadDataUsingMultipart predefined operation. Select the SFBulkV2_02Apr account.
10. Click the mapping icon to map the input and output fields.
11. Map inputStream to value. The service output is automatically mapped to the Pipeline Output by a dotted line.
12. Set values for fileRoot as shown below:
13. For fileRoot1, set values for name as content, contentType as text/csv, type as FILE, and filename as content.
14. Save the flow service and run it by clicking the Run icon.
15. View the flow service execution results for a successful run.
IBM webMethods Integration has an extensive library of services for performing common integration tasks such as transforming data values, performing simple mathematical operations, and so on.
Services are invoked at run time. While creating a flow service, you can sequence services and manage the flow of data among them.
Related services are grouped in categories. The input and output parameters are the names and types of fields that a service requires as input and generates as output; these parameters are collectively referred to as the signature.
Use the Alert services to notify users when predefined conditions, such as password expiry, license key updates, rule violations, or service errors, are met, ensuring timely awareness and response to critical system events.
Use Datetime services to build or increment a date/time. The services in datetime provide more explicit timezone processing than similar services in the Date category.
Use IO services to convert data between byte[ ], characters, and InputStream representations. These services are used for reading and writing bytes, characters, and streamed data to the file system. These services behave like the corresponding methods in the java.io.InputStream class. These services can be invoked only by other services. Streams cannot be passed between clients and the server, so these services will not execute if they are invoked from a client.
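The IO conversions described above (string to stream, stream to bytes, bytes to string) can be illustrated with a small Python analog. The function-like names in the comments echo the service intent but are not the webMethods service signatures.

```python
# Illustrative Python analog of the IO conversions between string,
# InputStream-style stream, and byte[] representations.
import io

text = "id,name\n1,Alice\n"

# stringToStream-like: encode a string and wrap it in an input stream
stream = io.BytesIO(text.encode("utf-8"))

# streamToBytes-like: read the stream back into a byte[] representation
data = stream.read()

# bytesToString-like: decode the bytes using the same character encoding
round_tripped = data.decode("utf-8")
print(round_tripped == text)  # True
```

As with java.io.InputStream, a stream is a one-way cursor over the data: once read, it must be reset or recreated before it can be read again.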
Use List services to retrieve, replace, or add elements in an Object List, Document List, or String List, including converting String Lists to Document Lists.
Use Math services to perform mathematical operations on string-based numeric values. Services that operate on integer values use Java’s long data type (64-bit, two’s complement). Services that operate on float values use Java’s double data type (64-bit IEEE 754). If extremely precise calculations are critical to your application, you should write your own Java services to perform math functions.
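The precision caveat above is easy to demonstrate. Python floats are also IEEE 754 doubles, so this standalone sketch shows the same rounding behavior that motivates writing custom services for exact arithmetic.

```python
# IEEE 754 doubles approximate decimal fractions, so exact-looking sums drift.
a = 0.1 + 0.2
print(a)            # 0.30000000000000004, not 0.3

# 64-bit two's-complement longs have a fixed upper bound
max_long = 2**63 - 1
print(max_long)     # 9223372036854775807

# For exact decimal arithmetic, use a decimal type instead of doubles
from decimal import Decimal
exact = Decimal("0.1") + Decimal("0.2")
print(exact)        # 0.3
```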
Use Transaction services only in conjunction with the Database Application operations. These services are applicable when the Database Application account is of type Transactional.
Use Reference Data services to upload reference data that defines the set of permissible values to be used by other data fields.
Note
When deploying a flow service in a multi-instance setup, the service is synchronized across both instances but operates on only one instance during execution. For example, if the flow service includes an alert.emit service, it exists in both instances but functions only within one instance at a time.
Alert services
Use the Alert services to notify users when predefined conditions, such as password expiry, license key updates, rule violations, or service errors, are met, ensuring timely awareness and response to critical system events.
The following Alert services are available:
Service
Description
emit
Generates an alert when a pre-configured condition is met.
channels
Gets all alert channels in the system. Channels help classify alerts and let you subscribe to a specific type of alert.
countAll
Gets the count of all alerts in the system, read and unread, or a subset of alerts that meet the filter criteria.
fetchAll
Fetches all alerts in the system, including read and unread, or a subset based on filter criteria.
getSettings
Gets the settings of the system notifier.
setSettings
Updates the settings of the system notifier.
deregisterChannel
Deregisters a registered custom channel.
getChannel
Gets the details of a specific channel.
registerChannel
Registers a custom channel.
updateChannel
Updates the settings of a channel.
severities
Lists the pre-defined severities that you can associate with an alert channel. The severity associated with an alert lets you assess the relative impact of a business event on the system.
emit
Generates an alert when a pre-configured condition is met. For example, User or Master Password Expiry, License Key Update, Enterprise Gateway Rule Violation, Service Error, and so on. This service is invoked by IBM webMethods Integration components to raise alerts.
Input Parameters
channelID: Integer. Identifier of the channel associated with the alert.
severity: String. Severity associated with the alert channel.
subject: String. Subject of the alert message.
contentType: String. Type of information in the alert. For example, text/plain.
content: String. Content of an alert. For example, Alert {0} Message Size Limit filter failed.\nClient IP Address: {1}\n Request URL : {2}\n Date & Time : {3}
Output Parameters
alertId: Long. A unique identifier assigned to each alert emitted.
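The content input shown above uses indexed placeholders ({0}, {1}, ...) that are substituted when the alert is raised. A simple stand-in for that substitution step (the alert values here are invented for illustration):

```python
# Illustrative: filling the indexed placeholders of an alert content template.
template = ("Alert {0} Message Size Limit filter failed.\n"
            "Client IP Address: {1}\n Request URL : {2}\n Date & Time : {3}")

# Hypothetical values an emitting component might supply
content = template.format("A-101", "10.0.0.12", "/invoke/orders", "2024-05-01 10:15")
print(content.splitlines()[0])  # Alert A-101 Message Size Limit filter failed.
```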
channels
Gets all alert channels in the system. Channels help classify alerts and let you subscribe to a specific type of alert.
Input Parameters
None.
Output Parameters
Channels: Document List. The list of alert channels in IBM webMethods Integration. By default, there are 13 channels.
All channels have the following keys:
id: Integer. Identifier of a channel. This is a system assigned value and cannot be modified.
Note
Channel IDs for custom channels start with 1001.
displayName: String. Name of the alert channel.
description: String. Description for the channel.
emissionEnabled: Boolean. Indicates whether an alert is generated for the channel and stored in the database or not. A value of:
true indicates that alerts are emitted for the channel and stored in the database.
false indicates that alerts are disabled for the channel.
systemNotificationEnabled: Boolean. Indicates whether an alert generated for the channel is displayed as a notification or not.
true indicates that notifications are generated for the channel.
false indicates that notifications are disabled for the channel.
Usage Notes
The default channels, events when alerts are raised for a channel, and the corresponding severity are described in the following table:
Channel ID
Channel Name
Events
Severity
1
Service Error
Service execution errors.
Note
By default, service error notifications are disabled.
Error
2
Server Error
Server errors such as HTTP 404.
Note
By default, server error notifications are disabled.
Error
3
Password Management
A user password expires. A user password is about to expire. Default administrator password is not changed.
Critical Warning
4
License Management
IBM webMethods Integration license expires. IBM webMethods Integration license is about to expire.
Critical Warning
5
Certificate Management
A security certificate is about to expire.
Warning
6
Account Locking
Multiple login attempt failures that result in a locked account. A locked account is unlocked.
countAll
Gets the count of all alerts in the system, read and unread, or a subset of alerts that meet the filter criteria.
Input Parameters
filter: Document. Optional. To get the count of a subset of alerts, specify the criteria using the following keys:
startTime: Long. Optional. Start of the time interval in which alerts are emitted. Specify the value in UNIX epoch format in milliseconds.
endTime: Long. Optional. End of the time interval in which alerts are emitted. Specify the value in UNIX epoch format in milliseconds.
severities: String[]. Optional. List of severities.
channelIds: Object[]. Optional. List of channel identifiers.
user: String. Optional. The IBM webMethods Integration user who triggered the alerts or Administrator for alerts emitted by the server. For example, specify Administrator for Password Management alerts.
keyword: String. Optional. A keyword that identifies the alerts.
Output Parameters
count: Long. The count of all alerts in the system, read and unread, or a subset of alerts, if the filter criteria are passed as inputs.
Usage Notes
To get the count of all alerts in the system, read and unread, do not specify the filter criteria as inputs.
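The filter keys above expect UNIX epoch time in milliseconds. The following sketch builds such a filter document; the key names come from the parameter list, while the date range and the idea of a dict-shaped document are illustrative, not the service invocation itself.

```python
# Sketch: building the optional filter document for countAll.
from datetime import datetime, timezone

def to_epoch_ms(dt):
    """UNIX epoch format in milliseconds, as startTime/endTime expect."""
    return int(dt.replace(tzinfo=timezone.utc).timestamp() * 1000)

filter_doc = {
    "startTime": to_epoch_ms(datetime(2024, 5, 1)),
    "endTime": to_epoch_ms(datetime(2024, 5, 31)),
    "severities": ["Error", "Critical"],
    "channelIds": [1, 2],          # e.g. Service Error, Server Error
    "user": "Administrator",       # for alerts emitted by the server
}
print(filter_doc["startTime"])     # 1714521600000
```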
fetchAll
Fetches all alerts in the system, including read and unread, or a subset based on filter criteria.
Input Parameters
filter: Document. Optional. To fetch a subset of alerts, specify the criteria using the following keys:
startTime: Long. Optional. Start of the time interval in which alerts are emitted. Specify the value in UNIX epoch format in milliseconds.
endTime: Long. Optional. End of the time interval in which alerts are emitted. Specify the value in UNIX epoch format in milliseconds.
severities: String[]. Optional. List of severities.
channels: Object[]. Optional. List of channel identifiers.
user: String. Optional. The IBM webMethods Integration user who triggered the alerts or Administrator for alerts emitted by the server. For example, specify Administrator for Password Management alerts.
keyword: String. Optional. A keyword that identifies the alerts.
sortOrder: String. Optional. The order in which the fetched alerts are sorted. Select ASC or DESC. The default is DESC.
sortBy: String. Optional. The criteria to sort the fetched alerts. Select one of the following options:
channel
severity
subject
timestamp
The default is timestamp.
pageSize: Integer. The number of notifications on a page.
pageNumber: Integer. In a paginated list of notifications, the page where IBM webMethods Integration loads the notifications.
Note
Page number starts with 0.
Output Parameters
alerts: Document List. A list of the alerts retrieved by the service. Each alert has the following details:
id: Long. Alert identifier.
ts: Long. Date and time when the alert is generated, in the UNIX epoch time format.
serverId: String. IP address of the host where IBM webMethods Integration is running.
memberName: String. Member name of the machine that hosts IBM webMethods Integration in a cluster.
user: String. IBM webMethods Integration user who triggered the alert.
channelId: Integer. Channel identifier for the alert.
channel: String. Channel for the alert.
severityId: Integer. Severity identifier of the alert.
severity: String. Severity of the alert.
subject: String. Subject of the alert message.
contentType: String. Text that identifies the type of content in the alert, such as text/plain and text/html.
content: String. Content of the alert.
Usage Notes
To fetch all read and unread alerts in the system, do not specify the filter criteria as inputs. The number of notifications on a page is identified by pageSize and the current page is identified by pageNumber.
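The pageSize/pageNumber behavior described above (page numbers start at 0) amounts to a standard pagination loop. In this sketch, fetch_all is a local stand-in for the service invocation, not the real API, and the sample alerts are invented.

```python
# Sketch: paging through fetchAll results with pageSize and pageNumber.
SAMPLE_ALERTS = [{"id": i, "severity": "Warning"} for i in range(7)]

def fetch_all(page_size, page_number):
    """Stand-in for the service call: return one page of alerts."""
    start = page_number * page_size
    return SAMPLE_ALERTS[start:start + page_size]

collected = []
page = 0                      # page numbering starts with 0
while True:
    batch = fetch_all(page_size=3, page_number=page)
    if not batch:             # an empty page means no more alerts
        break
    collected.extend(batch)
    page += 1
print(len(collected))         # 7, gathered across 3 pages
```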
getSettings
Gets the settings of the system notifier.
Input Parameters
None.
Output Parameters
enabledEmissionChannels: Object[]. List of channels for which alerts can be generated and stored in the database.
enabledSystemNotifierChannels: Object[]. List of channels for which alerts are generated and displayed as notifications.
systemNotifierSeverityCutoff: String. Notifications are generated only for alerts with severity equal to, or more than this value.
retentionPeriodInDays: Integer. Determines how long the alerts are stored in the database.
certificateExpiryWarningInDays: Integer. Number of days before the certificate expiry date, when the alert should be generated.
setSettings
Updates the settings of the system notifier.
Input Parameters
enabledEmissionChannels: Object[]. List of channels for which alerts can be generated and stored in the database.
enabledSystemNotifierChannels: Object[]. List of channels for which alerts are generated and displayed as notifications.
systemNotifierSeverityCutoff: String. Notifications are generated only for alerts with severity equal to, or more than this value.
retentionPeriodInDays: Integer. Determines how long the alerts are stored in the database.
certificateExpiryWarningInDays: Integer. Number of days before the certificate expiry date when the alert should be generated.
Output Parameters
message: String. Indicates whether the system notifier settings are updated or not.
Usage Notes
Based on the value of the retentionPeriodInDays parameter, a purge job clears the alerts stored in the database.
deregisterChannel
Deregisters a registered custom channel.
Input Parameters
channelId: Integer. Identifier of the channel that you want to deregister.
Output Parameters
deregistered: Boolean. Indicates whether the channel is deregistered or not. A value of:
true indicates that the channel is deregistered.
false indicates that the channel is not deregistered.
Usage Notes
Channel IDs for custom channels are generated by the system and cannot be modified. The first custom channel gets 1001 as the channel ID and subsequent custom channels have consecutive numbers. Once a custom channel is deregistered, the channel ID is no longer available for a new custom channel until IBM webMethods Integration is restarted.
Deregistering a channel also deletes all the alerts for that channel.
getChannel
Gets the details of a specific channel.
Input Parameters
channelId: Integer. Identifier of the channel that you want to retrieve.
Output Parameters
displayName: String. Name of the channel.
description: String. Description for the channel.
emissionEnabled: Boolean. Indicates whether an alert is generated for the channel and stored in the database or not. A value of:
true indicates that alerts are emitted for the channel and stored in the database.
false indicates that alerts are disabled for the channel.
systemNotificationEnabled: Boolean. Indicates whether an alert generated for the channel is displayed as a notification or not.
true indicates that notifications are generated for the channel.
false indicates that notifications are disabled for the channel.
registerChannel
Registers a custom channel.
Input Parameters
displayName: String. Name of the channel that you want to register.
description: String. Optional. Description of the channel.
emissionEnabled: Boolean. Optional. Indicates whether an alert is generated for the channel and stored in the database or not. Set to:
true to emit alerts for the channel and store them in the database.
false to disable alerts for the channel.
The default value is true for custom channels as well as system channels other than Service Error and Server Error.
systemNotificationEnabled: Boolean. Optional. Indicates whether the alert generated for the channel is displayed as a notification or not. Set to:
true to generate notifications for the channel.
false to disable notifications for the channel.
The default value is true for custom channels as well as system channels other than Service Error and Server Error.
Output Parameters
channelId: Integer. Identifier of the registered channel.
Note
Channel IDs for custom channels start with 1001.
Usage Notes
For custom channels, channel IDs are generated by the system and cannot be modified. The first custom channel gets 1001 as the channel ID and subsequent custom channels have consecutive numbers. Once a custom channel is deregistered, the channel ID is no longer available for a new custom channel until IBM webMethods Integration is restarted.
updateChannel
Updates the settings of a channel.
Input Parameters
channelId: Integer. Identifier of the alert channel that you want to update.
Note
Channel IDs for custom channels start with 1001.
description: String. Optional. Description of the channel.
emissionEnabled: Boolean. Optional. Indicates whether an alert is generated for the channel and stored in the database or not. Set to:
true to emit alerts for the channel and store them in the database.
false to disable alerts for the channel.
The default value is true for custom channels as well as system channels other than Service Error and Server Error.
systemNotificationEnabled: Boolean. Optional. Indicates whether the alert generated for the channel is displayed as a notification or not. Set to:
true to generate notifications for the channel.
false to disable notifications for the channel.
The default value is true for custom channels as well as system channels other than Service Error and Server Error.
Output Parameters
message: String. Indicates whether the channel is successfully updated or not.
Usage Notes
Invoke the alert:updateChannel service to ignore a notification of a specific type or restore an ignored notification from IBM webMethods Integration Administrator.
severities
Lists the pre-defined severities that you can associate with an alert channel. The severity associated with an alert lets you assess the relative impact of a business event on the system.
Input Parameters
None.
Output Parameters
Severities: Document List. The list of default severities available in the system. By default, 4 severities are configured.
id: Integer. Identifier for a severity. The default severities and their IDs are as follows:
0: Critical, 1: Error, 2: Warning, 3: Information
displayName: String. The severity name.
B2B Utils services
generateResponse
Generates a response for the processing rule Call an integration action.
Note
generateResponse does not support large documents.
Input Parameters
inputContent: Object. Content data sent as part of integration response after encoding.
readContentAs: String. Data type in which inputContent field value is passed.
bytes. inputContent is read as bytes.
string. inputContent is read as a string.
contentType: String. Content type corresponding to the inputContent field value. For example, application/EDI, application/x12, text/plain, and so on.
attachments: Document List. (optional) List of attachments sent as part of integration response, if any.
name: String. Name of the attachment.
inputContent: Object. Content of the attachment.
readContentAs: String. Data type in which inputContent field value is passed.
bytes. inputContent is read as bytes.
string. inputContent is read as String.
contentType: String. Content type of the attachment. For example, application/zip if the attachment is a ZIP file.
encoding: String. (Optional) Encoding of the attachment. Default value is UTF-8.
otherHeaders: Document. (Optional) Add key and value strings of the header to the attachment.
headers: Document. (optional) Headers sent as part of the integration response, if any.
errorCode: String. Value of error code passed as part of integration response.
errorMessage: String. Value of error message passed as part of integration response.
encoding: String. Type of character set used for encoding the inputContent value. This can be any IANA registered character set. The default is UTF-8.
Output Parameters
response: Document. Response is a composite object containing the parameters:
content: String. Encoded in Base64. Value is specified in encoding.
type: String. Value specified in contentType.
encoding: String. Type of encoding used to encode the inputContent. Value is specified in encoding.
attachments: Document. List of attachments received from the input parameter attachments, if any.
headers: Document. Headers received from the input parameter headers.
error: Document. Error is a composite object containing the fields:
code: String. Error code received from the input. Value is specified in errorCode.
message: String. Error message associated with the input. Value is specified in errorMessage.
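The content value in the response is Base64-encoded text in the character set named by encoding. A minimal Java sketch of how a consumer might decode it (the class and method names here are illustrative, not part of the product API):

```java
import java.nio.charset.Charset;
import java.util.Base64;

public class DecodeResponseContent {
    // Decode a Base64 "content" value using the charset named in "encoding".
    static String decode(String content, String encoding) {
        byte[] raw = Base64.getDecoder().decode(content);
        return new String(raw, Charset.forName(encoding));
    }

    public static void main(String[] args) {
        // "SGVsbG8=" is the Base64 form of the UTF-8 bytes of "Hello".
        System.out.println(decode("SGVsbG8=", "UTF-8")); // prints "Hello"
    }
}
```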
constructSubmitInput
Prepares the necessary input used for submitting business documents to the IBM webMethods B2B product instance.
Input Parameters
type: String. The document type for which the input is generated: AS2, AS4, RNIF20, RNIF11, or cXML.
DoctypeName: String. The name of the document type. This is case-sensitive.
For AS4 Push, use ebMS 3.0 UserMessage.
For AS4 Pull, use ebMS 3.0 PullRequest Signal.
For RNIF, it is the XML document type based on the PIP.
For cXML, it is the XML document type.
SenderID: String. Required for AS4, and cXML. Sender Identity value from the partner profile. For example, 123456789.
SenderIDType: String. Required for AS4, and cXML. Sender Identity type from the partner profile. For example, DUNS, DUNS+4.
ReceiverID: String. Required for AS4, and cXML. Receiver Identity value from the partner profile. For example, 123456789.
ReceiverIDType: String. Required for AS4, and cXML. Receiver Identity type from the partner profile. For example, DUNS, DUNS+4.
ConversationID: String. Required for RNIF when acting as a responder. The set of related messages that create a conversation between two partners.
In an AS4 message, when the value is not specified, the system generates a unique ID. The value maps to Messaging/UserMessage/CollaborationInfo/ConversationId.
otherParams: Document. (Optional) Other keys that you need to pass as Params in the submit service.
as2: Document. Required for AS2 type document. Configure the following AS2 parameters:
body: Document. (Optional) AS2 body document. For more information on configuring inputContent, readContentAs, contentType, and encoding, see the input parameters of Submit.
attachments []: Document List. (Optional) For more information on configuring AS2 attachments (name, inputContent, readContentAs, contentType, encoding, and otherHeaders), see the attachments of Submit.
headers: Document. (Optional) Add key and value strings of the header to the payload.
params: Document. A document that provides parameters on how IBM webMethods B2B recognizes and processes an AS2 document.
as2SenderID: String. Sends the EDIINT AS2 identity as AS2-From value when using AS2 out channel in IBM webMethods B2B.
as2ReceiverID: String. Sends the EDIINT AS2 identity as AS2-To value when using AS2 out channel in IBM webMethods B2B.
as4: Document. Required for AS4 type document. Configure the AS4 body, attachments, and params as described in Configuring an AS4 body, Configuring AS4 attachments, and Configuring AS4 params.
rnif: Document. Required for RNIF type document. Configure the following RNIF parameters:
body: Document. RNIF body document. See Configuring an RNIF body.
attachments []: (Optional) Document List. List of attachments to be used in the RNIF message. See Configuring RNIF attachments.
params: Document. A document that provides parameters on how IBM webMethods B2B recognizes and processes an RNIF document. See Configuring RNIF params.
cXML: Document. Required for cXML type document. Configure the following cXML parameters:
body: Document. cXML body document. For more information on configuring inputContent, readContentAs, contentType, and encoding, see the input parameters of Submit.
attachments []: Document List. (Optional) For more information on configuring cXML attachments (name, inputContent, readContentAs, contentType, encoding, and otherHeaders), see the attachments of Submit.
params: Document. A document that provides parameters on how IBM webMethods B2B recognizes and processes a cXML document. Configure the following cXML params:
payloadID: String. A unique number of the document.
userAgent: String. Optional. The software or application that processes the cXML data. For example, Ariba Network, Procure Software 3.3.
sharedSecret: String. The password the sender shares with the receiver for security or authentication purpose.
sync: String. Indicates if the message is synchronous or asynchronous. Valid values are:
deploymentMode: String. Optional. Indicates whether the request is a test request or a production request. The valid values are test or production.
sender []: Document. Optional. Sender of the cXML document. Sends the sender identity as cXML-From value when using cXML out channel in IBM webMethods B2B. Add the idType and idValue to recognize and verify the identity of the sender who has initiated the HTTP connection for processing the cXML message.
idType: String. Identity type from the partner profile. For example, DUNS, DUNS+4.
idValue: String. Identity value from the partner profile. For example, 123456789.
receiver []: Document. Optional. Receiver of the cXML document. Sends the receiver identity as cXML-To value when using cXML out channel in IBM webMethods B2B. Add the idType and idValue to recognize and verify the identity of the receiver of the cXML message.
idType: String. Identity type from the partner profile. For example, DUNS, DUNS+4.
idValue: String. Identity value from the partner profile. For example, 123456789.
Configuring an AS4 body
Document. Configures an AS4 body document. This is optional.
name: String. (Optional when only body is sent) Name of the payload content.
xmlInputContent: Object. XML content to submit to the IBM webMethods B2B product instance for processing. This is sent as SOAP BODY.
readContentAs: String. Data type in which xmlInputContent field value is accepted.
bytes. xmlInputContent is read as bytes.
string. xmlInputContent is read as string.
stream. xmlInputContent is read as a stream. IBM recommends using stream as the readContentAs value when the payload is larger than 5 MB.
encoding: String. (Optional) Type of character set used for encoding xmlInputContent value. This can be any IANA registered character set. The default value is UTF-8.
partInfo: Document. (Optional) AS4 payload part information.
schemaLocation: String. (Optional) URI of the schema. The value of this parameter maps to the location attribute of the Messaging/UserMessage/PayloadInfo/PartInfo/Schema element.
schemaVersion: String. (Optional) Version of the schema. The value of this parameter maps to the version attribute of the Messaging/UserMessage/PayloadInfo/PartInfo/Schema element.
schemaNamespace: String. (Optional) Target namespace of the schema.
The value of this parameter maps to the namespace attribute of the Messaging/UserMessage/PayloadInfo/PartInfo/Schema element.
description: String. (Optional) Description of the content. The value of this parameter maps to the Messaging/UserMessage/PayloadInfo/PartInfo/Description element.
properties[]: Document List. (Optional) List of name and value pairs that are sent along with the message. The value of this parameter maps to the Messaging/UserMessage/PayloadInfo/PartInfo/PartProperties element.
Note
If the final AS4 payload must have the MIME type application/octet-stream, add a property with the name PayloadMimeType and the value application/octet-stream.
name: String. Name of the property.
The value of this parameter maps to the name attribute of the Messaging/UserMessage/PayloadInfo/PartInfo/PartProperties/Property element.
value: String. Value of the property.
The value of this parameter maps to the Messaging/UserMessage/PayloadInfo/PartInfo/PartProperties/Property element.
Configuring AS4 attachments [ ]
Document List. Configures AS4 attachments. This is optional.
name: String. Name of the attachment content.
inputContent: Object. Content of the attachment. It is sent as SOAP attachments.
readContentAs: String. Data type in which inputContent field value is passed.
bytes. inputContent is read as bytes.
string. inputContent is read as string.
stream. inputContent is read as a stream. IBM recommends using stream as the readContentAs value when the payload is larger than 5 MB.
contentType: String. Content type of the attachment.
For example, application/zip if the attachment is .zip file.
encoding: String. (Optional) Encoding of the attachment. The default value is UTF-8.
partInfo: Document. (Optional) AS4 attachment part information.
schemaLocation: String. (Optional) URI of the schema.
The value of this parameter maps to the location attribute of the Messaging/UserMessage/PayloadInfo/PartInfo/Schema element.
schemaVersion: String. (Optional) Version of the schema.
The value of this parameter maps to the version attribute of the Messaging/UserMessage/PayloadInfo/PartInfo/Schema element.
schemaNamespace: String. (Optional) Target namespace of the schema.
The value of this parameter maps to the namespace attribute of the Messaging/UserMessage/PayloadInfo/PartInfo/Schema element.
description: String. (Optional) Description of the content.
The value of this parameter maps to the Messaging/UserMessage/PayloadInfo/PartInfo/Description element.
properties[]: Document List. (Optional) List of name value pairs that are sent along with the message. The value of this parameter maps to the Messaging/UserMessage/PayloadInfo/PartInfo/PartProperties element.
Note
If the final AS4 payload must have the MIME type application/octet-stream, add a property with the name PayloadMimeType and the value application/octet-stream.
name: String. Name of the property. The value of this parameter maps to the name attribute of the Messaging/UserMessage/PayloadInfo/PartInfo/PartProperties/Property element.
value: String. Value of the property. The value of this parameter maps to the Messaging/UserMessage/PayloadInfo/PartInfo/PartProperties/Property element.
Configuring AS4 params
Document. A document that provides parameters on how IBM webMethods B2B recognizes and processes an AS4 document. Configure the following AS4 params:
messageId: String. (Optional) Unique identifier of the message. When a value is not specified, the system generates a unique messageId.
When the messageID is generated, the value of this parameter maps to Messaging/UserMessage/MessageInfo/MessageId.
soapAction: String. (Optional) Value of the action parameter in the MIME type.
pmodeId: String. (Optional for Push request) TPA agreement ID.
push: Document. (Required when DoctypeName is ebMS 3.0 UserMessage) AS4 push document.
sync: Boolean. (Optional) Synchronous or Asynchronous push request. Default value is false. Valid values are:
true. Synchronous push reply (Used in MEP binding Two-Way/Sync for replying).
false. Asynchronous push request (Used in MEP bindings for requesting).
refToMessageId: String. (Required when sync is true). Identifier used in two-way MEPs where the responding MSH user message refers to the initiating MSH user message.
toPartyIdRole: String. (Required when pmodeId or agreementRef is not specified) Initiator or responder role of the party in the message exchange.
fromPartyIdRole: String. (Required when pmodeId or agreementRef is not specified) Initiator or responder role of the party in the message exchange.
agreementRef: String. (Required when pmodeId is not specified) The value to use for the submit request.
When the pmodeId and agreementRef values are empty, a tuple of From/PartyId, From/Role, To/PartyId, To/Role, Service, and Action is used to identify the TPA agreement.
To omit the agreementRef element from the final payload, leave the agreementRef and pmodeId values in the service input and the TPA agreement field empty.
service: String. (Required when pmodeId or agreementRef is not specified) Name of the service used to identify the RequestUM leg for the current transaction.
IBM webMethods B2B uses the service and action parameters to determine which RequestUM leg to use for the current transaction. IBM webMethods B2B compares the value of this parameter to the value of leg/businessInfo/service that is configured in all RequestUM legs of the TPA. The RequestUM leg that contains a match for both the service and action parameters is the leg that will be used for the current transaction.
When there is only one RequestUM leg configured in the TPA, the service parameter does not have to be defined.
action: String. (Required when pmodeId or agreementRef is not specified) Name of the action used to identify the RequestUM leg for the current transaction.
IBM webMethods B2B uses the service and action parameters to determine which RequestUM leg to use for the current transaction. IBM webMethods B2B compares the value of this parameter to the value of leg/businessInfo/action that is configured in all RequestUM legs of the TPA. The RequestUM leg that contains a match for both service and action is the leg that will be used for the current transaction.
When there is only one RequestUM leg configured in the TPA, the action parameter does not have to be defined.
serviceType: String. (Optional) Name of the service type that indicates how the sender and receiver interpret the service element.
When no value is entered for this parameter, the service parameter must be a URI.
messageProperties: Document List. (Optional) List of name and value pairs that are sent along with the message. These pairs map to the zero or more Property child elements within Messaging/UserMessage/MessageProperties.
name: String. The name of property.
value: String. The value of property.
type: String. (Optional) The type of property.
endpointUrl: String. Endpoint address of the recipient. It is retrieved from the retrievePeppolParticipantDetails operation.
Note
When exchanging documents over the Peppol network, configuring the endpointUrl is mandatory, as it is used to send the documents in real time.
pull: Document. (Required when DoctypeName is ebMS 3.0 PullRequest Signal) AS4 pull document.
mpc: String. (Optional) Indicates MPC (Message partitioning channel) from where to pull the queued message. Default MPC is used when not specified.
simpleSelectionItems[]: Document List. (Optional) Select a list of simple elements.
element: String. The name of the element that has to be retrieved. This parameter refers to refToMessageId, conversationId, agreementRef, service, and action.
elementvalue: String. The value of the element.
attributeList[]: Document List. (Optional) A list of attributes.
element: String. The name of the attribute.
elementvalue: String. The value of the attribute.
complexSelectionItems[]: Document List. (Optional) Select a list of complex elements.
element: String. The name of the element that has to be retrieved. This parameter refers to From, To, and MessageProperties.
elementvalue: String. The value of the element.
attributeList: Document List. (Optional) A list of attributes.
element: String. The name of the attribute.
elementvalue: String. The value of the attribute.
childItems[]: Document List. (Optional) A list of child items.
element: String. The name of the child item.
elementvalue: String. The value of the child item.
attributeList[]: Document List. (Optional) List of attributes.
element: String. The name of the attribute.
elementvalue: String. The value of the attribute.
Configuring an RNIF body
Document. Configures an RNIF body document.
xmlInputContent: Object. XML content to submit to the IBM webMethods B2B product instance for processing.
readContentAs: String. Data type in which the xmlInputContent field value is passed.
bytes. xmlInputContent is read as bytes.
string. xmlInputContent is read as string.
stream. xmlInputContent is read as a stream. IBM recommends using stream as the readContentAs value when the payload is larger than 5 MB.
encoding: (Optional) String. Encoding of the content body.
Configuring RNIF attachments [ ]
Document List. Configures a list of attachments to be used in the RNIF message. This is optional.
name: String. Name of the attachment content.
inputContent: Object. Content of the attachment.
readContentAs: String. Data type in which inputContent field value is passed.
bytes. inputContent is read as bytes.
string. inputContent is read as string.
stream. inputContent is read as a stream. IBM recommends using stream as the readContentAs value when the payload is larger than 5 MB.
contentType: String. Content type of the attachment.
encoding: (Optional) String. Encoding of the attachment.
Configuring RNIF params
Document. A document that provides parameters on how IBM webMethods B2B recognizes and processes an RNIF document. Configure the following RNIF params:
rnif20: Document. Configure the following parameters:
pipInfo: (Optional) Document. Document that represents RNIF 2.0 Partner Interface Processes Information (PIP).
senderFocalRole: String. Validator of GlobalPartnerRoleClassificationCode tag describing fromRole in the service header. For valid values, see the fromService and fromRole details in the PIP specification corresponding to the executing PIP, activity, or action.
receiverFocalRole: String. Validator of GlobalPartnerRoleClassificationCode tag describing toRole in the service header. For valid values, see the toService and toRole details in the PIP specification corresponding to the executing PIP, activity, or action.
processCode: String. Code or name of the PIP. For example, 3A4.
processVersion: String. Version of the PIP process. Select either 1.1 or 1.4.
transactionCode: String. Transaction associated with each RosettaNet PIP document.
actionCode: String. Query action associated with each RosettaNet PIP document.
enableSyncResponse: (Optional) Boolean. The flag to indicate that the transaction is expecting a sync response. Valid values are:
true: The transaction is expecting a sync response.
false: The transaction is not expecting a sync response. This is the default.
response: (Optional) Document. Response is a composite object containing the parameters.
messageTrackingID: String. Unique instance identifier to identify the message.
inReplyToGlobalBusinessActionCode: (Optional) String. Action associated with request RosettaNet PIP document, for which IBM webMethods B2B sends the response.
initiatingPartnerLocationID: String. Location ID of the partner. The initiating partner location ID: //ServiceHeader/ProcessControl/KnownInitiatingPartner/PartnerIdentification/locationID/Value for which IBM webMethods B2B sends the response.
rnif11: Document. Configure the following parameters:
pipInfo: (Optional) Document. Document that represents RNIF 1.1 Partner Interface Processes Information (PIP).
senderFocalRole: String. Validator of the GlobalPartnerRoleClassificationCode tag describing fromRole in the service header. For valid values, see the fromService and fromRole details in the PIP specification corresponding to the executing PIP, activity, or action.
receiverFocalRole: String. Validator of the GlobalPartnerRoleClassificationCode tag describing toRole in the service header. For valid values, see the toService and toRole details in the PIP specification corresponding to the executing PIP, activity, or action.
processCode: String. Code or name of the PIP. For example, 3A4.
processVersion: String. Version of the PIP process. Select either 1.1 or 1.4.
transactionCode: String. Transaction associated with each RosettaNet PIP document.
actionCode: String. Query action associated with each RosettaNet PIP document.
response: (Optional) Document. Response is a composite object containing the parameters.
actionIdentityInstanceIdentifier: String. Unique instance identifier to identify the message.
inReplyToGlobalBusinessActionCode: (Optional) String. Action associated with request RosettaNet PIP document, for which IBM webMethods B2B sends the response.
Output Parameters
Output of constructSubmitInput operation is the input parameters for submit operation. See Submit.
parseContent
Parses the request content, passed as a bytes or string data type by the processing rule's Call an integration action. For more information, see Call an integration.
Note
parseContent does not support large documents.
Input Parameters
inputContent: String. Content passed in the integration request.
loadContentAs: String. Data type in which outputContent field value is passed.
bytes. outputContent is generated as bytes.
string. outputContent is generated as a string.
encoding: String. Type of character set used for encoding the inputContent value. This can be any IANA registered character set. The default is UTF-8.
Output Parameters
outputContent: Object. Content data corresponding to the data type option set for the loadContentAs field.
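A minimal sketch of what the loadContentAs choice amounts to, assuming the content arrives as a string; the ContentLoader helper below is illustrative, not the service's actual implementation:

```java
import java.nio.charset.Charset;

public class ContentLoader {
    // Return the content either as raw bytes or unchanged as a string,
    // converting with the given IANA charset name (UTF-8 when omitted).
    static Object load(String inputContent, String loadContentAs, String encoding) {
        if ("bytes".equals(loadContentAs)) {
            Charset cs = Charset.forName(encoding == null ? "UTF-8" : encoding);
            return inputContent.getBytes(cs);
        }
        return inputContent; // "string": pass the text through as-is
    }
}
```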
Compress Services
Use Compress services to compress the data before sending the HTTP request and decompress it after receiving the HTTP response.
The following Compress services are available:
Service
Description
compressData
Performs compression of data.
decompressData
Performs decompression of data.
compressData
Compresses the data before sending the HTTP request using any of the specified compression schemes.
Input Parameters
data: Document - Data that you want the compressData service to compress. Specify data using one of the following keys. When you use more than one key, string is appended first, bytes is appended second, and stream is appended last. IBM webMethods Integration uses the first key that it encounters.
string: string Optional - Text that you want the compressData service to compress.
bytes: byte[] Optional - Data that you want the compressData service to compress.
stream: java.io.InputStream Optional - Data that you want the compressData service to compress.
encoding: String. Optional - Name of a registered, IANA character set that specifies the encoding to use when converting the String to an array of bytes (for example: ISO-8859-1).
compressionScheme: String - The compression method you want the compressData service to apply to compress the data. The supported compression schemes are gzip and deflate.
loadAs: string - Form in which you want the compressData service to store the output data. Set to:
bytes to store the data as a byte[].
stream to store the data as a java.io.InputStream.
Output Parameters
compressedData: Document - Compressed data after applying the compression scheme.
bytes: byte[] Conditional - Compressed data represented as a byte[ ]. bytes is returned only when the loadAs input parameter is set to bytes.
stream: java.io.InputStream Conditional - Compressed data represented as an InputStream. stream is returned only when the loadAs input parameter is set to stream.
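Both supported schemes map onto the standard java.util.zip classes. A sketch of the compression step (CompressSketch is an illustrative name, not the service itself):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.GZIPOutputStream;

public class CompressSketch {
    // Compress bytes with either the "gzip" or "deflate" scheme.
    static byte[] compress(byte[] data, String compressionScheme) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        OutputStream out = "gzip".equals(compressionScheme)
                ? new GZIPOutputStream(buffer)
                : new DeflaterOutputStream(buffer);
        out.write(data);
        out.close(); // finishes the compressed stream and flushes trailing bytes
        return buffer.toByteArray();
    }
}
```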
decompressData
Decompresses the data based on the response header of the HTTP response.
Input Parameters
data: Document. Data that you want the decompressData service to decompress. Specify data using one of the following keys.
bytes: byte[] Optional - Data that you want the decompressData service to decompress.
stream: java.io.InputStream Optional - Data that you want the decompressData service to decompress.
compressionScheme: String - The compression scheme you want the decompressData service to apply to decompress the data. The supported compression schemes are gzip and deflate.
loadAs: string - Form in which you want the decompressData service to store the returned document. Set to:
bytes to return the data as a byte[].
stream to return the data as a java.io.InputStream.
Output Parameters
decompressedData: Document. Decompressed data after applying the compression algorithm.
bytes: byte[] Conditional - Decompressed data represented as a byte[]. bytes is returned only when the loadAs input parameter is set to bytes.
stream: java.io.InputStream Conditional - The decompressed data represented as an InputStream. stream is returned only when the loadAs input parameter is set to stream.
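A complementary sketch of the decompression step, again built on the standard java.util.zip classes (the class name is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.InflaterInputStream;

public class DecompressSketch {
    // Decompress gzip- or deflate-compressed bytes back to the original data.
    static byte[] decompress(byte[] data, String compressionScheme) throws IOException {
        InputStream in = "gzip".equals(compressionScheme)
                ? new GZIPInputStream(new ByteArrayInputStream(data))
                : new InflaterInputStream(new ByteArrayInputStream(data));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        for (int n; (n = in.read(chunk)) != -1; ) {
            out.write(chunk, 0, n);
        }
        return out.toByteArray();
    }
}
```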
Date Services
Use Date services to generate and format date values.
Pattern String Symbols - Many of the Date services require you to specify pattern strings describing the data’s current format and/or the format to which you want it converted. For services that require a pattern string, use the symbols in the following table to describe the format of your data. For example, to describe a date in the January 15, 1999 format, you would use the pattern string MMMMM dd, yyyy. To describe the format 01/15/99, you would use the pattern string MM/dd/yy.
Symbol
Meaning
Presentation
Example
G
era designator
Text
AD
y
year
Number
1996 or 96
M
month in year
Text or Number
July or Jul or 07
d
day in month
Number
10
h
hour in am/pm (1-12)
Number
12
H
hour in day (0-23)
Number
0
m
minute in hour
Number
30
s
second in minute
Number
55
S
millisecond
Number
978
E
day in week
Text
Tuesday or Tue
D
day in year
Number
189
F
day of week in month
Number
2 (2nd Wed in July)
w
week in year
Number
27
W
week in month
Number
2
a
am/pm marker
Text
PM
k
hour in day (1-24)
Number
24
K
hour in am/pm (0-11)
Number
0
z
time zone
Text
Pacific Standard Time or PST or GMT-08:00
Z
RFC 822 time zone (JVM 1.4 or later)
Number
-0800 (offset from GMT/UT)
'
escape for text
Delimiter
(none)
''
single quote
Literal
'
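These symbols follow java.text.SimpleDateFormat, on which the date services are based. A small illustration of the two pattern strings mentioned above:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.Locale;

public class PatternDemo {
    public static void main(String[] args) {
        // Build January 15, 1999 and format it with two pattern strings.
        Calendar cal = Calendar.getInstance();
        cal.clear();
        cal.set(1999, Calendar.JANUARY, 15);
        Date date = cal.getTime();
        System.out.println(new SimpleDateFormat("MMMMM dd, yyyy", Locale.US).format(date)); // January 15, 1999
        System.out.println(new SimpleDateFormat("MM/dd/yy", Locale.US).format(date));       // 01/15/99
    }
}
```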
Time Zones - When working with date services, you can specify time zones. The Earth is divided into 24 standard time zones, one for every 15 degrees of longitude. Using the time zone that includes Greenwich, England (known as Greenwich Mean Time, or GMT) as the starting point, the time increases by an hour for each time zone east of Greenwich and decreases by an hour for each time zone west of Greenwich. The time difference between a time zone and the time zone that includes Greenwich, England (GMT) is referred to as the raw offset.
The following table identifies the different time zones for the Earth and the raw offset for each zone from Greenwich, England. The effects of daylight savings time are ignored in this table.
Note
Greenwich Mean Time (GMT) is also known as Universal Time (UT).
ID
Raw Offset
Name
MIT
-11
Midway Islands Time
HST
-10
Hawaii Standard Time
AST
-9
Alaska Standard Time
PST
-8
Pacific Standard Time
PNT
-7
Phoenix Standard Time
MST
-7
Mountain Standard Time
CST
-6
Central Standard Time
EST
-5
Eastern Standard Time
IET
-5
Indiana Eastern Standard Time
PRT
-4
Puerto Rico and U.S. Virgin Islands Time
CNT
-3.5
Canada Newfoundland Time
AGT
-3
Argentina Standard Time
BET
-3
Brazil Eastern Time
GMT
0
Greenwich Mean Time
ECT
+1
European Central Time
CAT
+2
Central Africa Time
EET
+2
Eastern European Time
ART
+2
(Arabic) Egypt Standard Time
EAT
+3
Eastern African Time
MET
+3.5
Middle East Time
NET
+4
Near East Time
PLT
+5
Pakistan Lahore Time
IST
+5.5
India Standard Time
BST
+6
Bangladesh Standard Time
VST
+7
Vietnam Standard Time
CTT
+8
China Taiwan Time
JST
+9
Japan Standard Time
ACT
+9.5
Australian Central Time
AET
+10
Australian Eastern Time
SST
+11
Solomon Standard Time
NST
+12
New Zealand Standard Time
Examples - You can specify timezone input parameters in the following formats:
As a full name. For example:
Asia/Tokyo or America/Los_Angeles
You can use the java.util.TimeZone.getAvailableIDs() method to obtain a list of the valid full name time zone IDs that your JVM version supports.
As a custom time zone ID, in the format GMT[+ | -]hh[ [:]mm]. For example:
GMT+2:00: All time zones 2 hours east of Greenwich (that is, Central Africa Time, Eastern European Time, and Egypt Standard Time)
GMT-3:00: All time zones 3 hours west of Greenwich (that is, Argentina Standard Time and Brazil Eastern Time)
GMT+9:30: All time zones 9.5 hours east of Greenwich (that is, Australian Central Time)
As a three-letter abbreviation from the table above. For example:
PST: Pacific Standard Time
Note
Three-letter abbreviations
All three-letter abbreviations are deprecated. This is because some three-letter abbreviations can represent multiple time zones, for example, “CST” could represent both U.S. “Central Standard Time” and “China Standard Time”. Hence, use the full name or custom time zone ID formats instead.
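Assuming the underlying java.util.TimeZone behavior, the difference between a full-name ID and a custom GMT-offset ID looks like this:

```java
import java.util.TimeZone;

public class TimeZoneDemo {
    public static void main(String[] args) {
        // A full-name ID resolves to that region's rules (raw offset +9 hours).
        TimeZone tokyo = TimeZone.getTimeZone("Asia/Tokyo");
        System.out.println(tokyo.getRawOffset() / 3_600_000.0); // 9.0

        // A custom ID encodes a fixed offset and never observes daylight savings.
        TimeZone custom = TimeZone.getTimeZone("GMT+9:30");
        System.out.println(custom.getRawOffset() / 3_600_000.0); // 9.5

        // TimeZone.getAvailableIDs() lists the full-name IDs this JVM supports.
        System.out.println(TimeZone.getAvailableIDs().length > 0); // true
    }
}
```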
Invalid Dates
The dates you use with a date service must adhere to the java.text.SimpleDateFormat class.
If you use an invalid date with a date service, the date service automatically translates the date to a legal date. For example, if you specify 1999/02/30 as input, the date service interprets the date as 1999/03/02 (two days after 2/28/1999).
If you use 00 for the month or day, the date service interprets 00 as the last month or day in the Gregorian calendar. For example, if you specify 00 for the month, the date service interprets it as 12.
If the pattern yy is used for the year, the date service uses a 50-year moving window to interpret the value of yy. The date service establishes the window by subtracting 49 years from the current year and adding 50 years to the current year. For example, if you are running IBM webMethods Integration in the year 2000, the moving window would be from 1951 to 2050. The date service interprets 2-digit years as falling into this window (for example, 12 would be 2012, 95 would be 1995).
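The rollover behavior comes from the lenient parsing mode of java.text.SimpleDateFormat and can be reproduced directly:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class LenientDateDemo {
    public static void main(String[] args) throws ParseException {
        // SimpleDateFormat is lenient by default: an out-of-range day
        // rolls forward instead of raising an error.
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy/MM/dd");
        Date d = fmt.parse("1999/02/30"); // February 1999 has only 28 days
        System.out.println(fmt.format(d)); // 1999/03/02
    }
}
```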
The following Date services are available:
Service
Description
calculateDateDifference
Calculates the difference between two dates and returns the result as seconds, minutes, hours, and days.
compareDates
Compares two dates and returns the result as integer.
currentNanoTime
Returns the current value of the running Java Virtual Machine’s high-resolution time source, in nanoseconds.
dateBuild
Builds a date String using the specified pattern and the specified date services.
dateTimeBuild
Builds a date/time string using the specified pattern and the specified date services.
dateTimeFormat
Converts date/time (represented as a String) string from one format to another.
elapsedNanoTime
Calculates the time elapsed between the current time and the given time, in nanoseconds.
formatDate
Formats a Date object as a string.
getCurrentDate
Returns the current date as a Date object.
getCurrentDateString
Returns the current date as a String in a specified format.
incrementDate
Increments a date by a specified period.
calculateDateDifference
Calculates the difference between two dates and returns the result as seconds, minutes, hours, and days.
Input Parameters
startDate: String - Starting date and time.
endDate: String - Ending date and time.
startDatePattern: String - Format in which the startDate parameter is to be specified (for example, yyyyMMdd HH:mm:ss.SSS).
endDatePattern: String - Format in which the endDate parameter is to be specified (for example, yyyyMMdd HH:mm:ss.SSS).
Output Parameters
dateDifferenceSeconds: String - The difference between startDate and endDate, truncated to the nearest whole number of seconds.
dateDifferenceMinutes: String - The difference between startDate and endDate, truncated to the nearest whole number of minutes.
dateDifferenceHours: String - The difference between startDate and endDate, truncated to the nearest whole number of hours.
dateDifferenceDays: String - The difference between startDate and endDate, truncated to the nearest whole number of days.
Usage Notes
Each output value represents the same date difference, but in a different scale. Do not add these values together. Make sure your subsequent flow service steps use the correct output, depending on the scale required.
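As an illustration of the truncation, a hypothetical helper that reproduces the seconds-scale calculation with java.text.SimpleDateFormat (not the service implementation):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class DateDifferenceSketch {
    // Difference between two date strings in whole seconds, truncated.
    // Dividing the same millisecond difference by 60000, 3600000, or
    // 86400000 yields the minutes, hours, and days scales instead.
    static long differenceSeconds(String start, String end, String pattern)
            throws ParseException {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern);
        long millis = fmt.parse(end).getTime() - fmt.parse(start).getTime();
        return millis / 1000;
    }
}
```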
compareDates
Compares two dates and returns the result as an integer.
Input Parameters
startDate: String - Starting date to compare against endDate.
endDate: String - Ending date to compare against startDate.
startDatePattern: String - Format in which the startDate parameter is specified (for example, yyyyMMdd HH:mm:ss.SSS).
endDatePattern: String - Format in which the endDate parameter is specified (for example, yyyyMMdd HH:mm:ss.SSS).
Output Parameters
result: String. Indicates whether startDate is before, the same as, or after endDate, as follows:
+1: The startDate is after the endDate.
0: The startDate is the same as the endDate.
-1: The startDate is before the endDate.
Usage Notes
If the formats specified in the startDatePattern and endDatePattern parameters differ, IBM webMethods Integration treats the units that are not specified in the startDate and endDate values as 0.
That is, if the startDatePattern is yyyyMMdd HH:mm with a startDate of 20151030 11:11, and the endDatePattern is yyyyMMdd HH:mm:ss.SSS with an endDate of 20151030 11:11:55.111, then the compareDates service considers the start date to be before the end date and returns -1.
To calculate the difference between two dates, use the calculateDateDifference service.
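The pattern example above can be reproduced with plain java.util.Date comparison; the helper below is illustrative, not the service implementation:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class CompareDatesSketch {
    // +1 if start is after end, 0 if equal, -1 if before.
    static int compare(String start, String startPattern,
                       String end, String endPattern) throws ParseException {
        Date s = new SimpleDateFormat(startPattern).parse(start);
        Date e = new SimpleDateFormat(endPattern).parse(end);
        return Integer.signum(s.compareTo(e));
    }

    public static void main(String[] args) throws ParseException {
        // The unspecified seconds and milliseconds of startDate parse as 0,
        // so the start date sorts before the end date.
        System.out.println(compare("20151030 11:11", "yyyyMMdd HH:mm",
                "20151030 11:11:55.111", "yyyyMMdd HH:mm:ss.SSS")); // -1
    }
}
```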
currentNanoTime
Returns the current value of the running Java Virtual Machine’s high-resolution time source, in nanoseconds. This service can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary origin time (perhaps in the future, so values may be negative). Use this service in conjunction with the elapsedNanoTime service.
Input Parameters
None.
Output Parameters
nanoTime: java.lang.Long - The value returned represents the time elapsed since an arbitrary fixed point in nanoseconds, which is not necessarily related to the actual current date and time.
dateBuild
Builds a date string using the specified pattern and the supplied date elements.
Input Parameters
pattern: String - Pattern representing the format in which you want the date returned. If you do not specify pattern, dateBuild returns null. If pattern contains a time zone and timezone is not specified, the default time zone is used.
year: String Optional - The year expressed in yyyy or yy format (for example, 01 or 2001). If you do not specify year or you specify an invalid value, dateBuild uses the current year.
month: String Optional - The month expressed as a number (for example, 1 for January, 2 for February). If you do not specify month or you specify an invalid value, dateBuild uses the current month.
dayofmonth: String Optional - The day of the month expressed as a number (for example, 1 for the first day of the month, 2 for the second day of the month). If you do not specify dayofmonth or you specify an invalid value, dateBuild uses the current day.
timezone: String Optional - Time zone in which you want the output date and time expressed. Specify a time zone code as shown in the “Time Zones” section, for example, EST for Eastern Standard Time. If you do not specify timezone, the value of the server’s “user timezone” property is used. If this property has not been set, GMT is used.
locale: String Optional - Locale in which the date is to be expressed. For example, if locale is en (for English), the pattern EEE d MMM yyyy will produce Friday 23 August 2002, and the locale of fr (for French) will produce vendredi 23 août 2002.
Output Parameters
value: String - The date specified by year, month, and dayofmonth, in the format of pattern.
dateTimeBuild
Builds a date/time string using the specified pattern and the supplied date/time elements.
Input Parameters
pattern: String - Pattern representing the format in which you want the time returned. For pattern-string notation, see the “Pattern String Symbols” section. If you do not specify pattern, dateTimeBuild returns null. If pattern contains a time zone and the timezone parameter is not set, the default time zone is used.
year: String Optional - The year expressed in yyyy or yy format (for example, 01 or 2001). If you do not specify year or you specify an invalid value, dateTimeBuild uses the current year.
month: String Optional - The month expressed as a number (for example, 1 for January, 2 for February). If you do not specify month or you specify an invalid value, dateTimeBuild uses the current month.
dayofmonth: String Optional - The day of the month expressed as a number (for example, 1 for the first day of the month, 2 for the second day of the month). If you do not specify dayofmonth or you specify an invalid value, dateTimeBuild uses the current day.
hour: String Optional - The hour expressed as a number based on a 24-hour clock. For example, specify 0 for midnight, 2 for 2:00 A.M., and 14 for 2:00 P.M. If you do not specify hour or you specify an invalid value, dateTimeBuild uses 0 as the hour value.
minute: String Optional - Minutes expressed as a number. If you do not specify minute or you specify an invalid value, dateTimeBuild uses 0 as the minute value.
second: String Optional - Seconds expressed as a number. If you do not specify second or you specify an invalid value, dateTimeBuild uses 0 as the second value.
millis: String Optional - Milliseconds expressed as a number. If you do not specify millis or you specify an invalid value, dateTimeBuild uses 0 as the millis value.
timezone: String Optional - Time zone in which you want the output date and time expressed. Specify a time zone code as shown in the “Time Zones” section, for example, EST for Eastern Standard Time. If you do not specify timezone, the value of the server’s “user timezone” property is used. If this property has not been set, GMT is used.
locale: String Optional - Locale in which the date is to be expressed. For example, if locale is en (for English), the pattern EEE d MMM yyyy will produce Friday 23 August 2002, and the locale of fr (for French) will produce vendredi 23 août 2002.
Output Parameters
value: String - Date and time in format of pattern.
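As a rough Python analogy (not the service itself), building a date/time string from individual elements and a pattern looks like this; the pattern string is a stdlib analogue of EEE d MMM yyyy HH:mm and assumes the C locale.

```python
from datetime import datetime

# Elements: year, month, dayofmonth, hour, minute (unsupplied parts default to 0)
built = datetime(2002, 8, 23, 14, 30)

# Pattern analogue of "EEE dd MMM yyyy HH:mm" (C locale assumed)
value = built.strftime("%a %d %b %Y %H:%M")
```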
dateTimeFormat
Converts a date/time string from one format to another.
Input Parameters
inString: String - Date/time that you want to convert. Important: If inString contains a character in the last position, that character is interpreted as 0. This can result in an inaccurate date. For information about invalid dates, see the “Notes on Invalid Dates” section.
currentPattern: String - Pattern string that describes the format of inString.
newPattern: String - Pattern string that describes the format in which you want inString returned.
locale: String Optional - Locale in which the date is to be expressed. For example, if locale is en (for English), the pattern EEE d MMM yyyy will produce Friday 23 August 2002, and the locale of fr (for French) will produce vendredi 23 août 2002.
lenient: String Optional - Flag indicating whether the service throws an exception when the inString value does not adhere to the format specified in the currentPattern parameter. Set to:
true to perform a lenient check. This is the default.
In a lenient check, if the format of the date specified in the inString parameter does not match the format specified in the currentPattern parameter, the service interprets the date using the currentPattern format and returns the result. If the interpretation is incorrect, the service returns an invalid date.
false to perform a strict check.
In a strict check, the service throws an exception if the format of the date specified in the inString parameter does not match the format specified in the currentPattern parameter.
Output Parameters
value: String - The date/time given by inString, in the format of newPattern.
Usage Notes
As described in the “Notes on Invalid Dates” section, if the pattern yy is used for the year, dateTimeFormat uses a 50-year moving window to interpret the value of the year.
If currentPattern does not contain a time zone, the value is assumed to be in the default time zone.
If newPattern contains a time zone, the default time zone is used.
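The parse-then-reformat behavior can be sketched in Python (an analogy, not the service's implementation); datetime.strptime is strict, corresponding to lenient=false.

```python
from datetime import datetime

in_string = "2002/08/23 14:30:00"

# Strict parse with the current pattern, then re-format with the new pattern
parsed = datetime.strptime(in_string, "%Y/%m/%d %H:%M:%S")
value = parsed.strftime("%d-%m-%Y")
```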
elapsedNanoTime
Calculates the time elapsed between the current time and the given time, in nanoseconds.
Input Parameters
nanoTime: java.lang.Long - Time in nanoseconds. If nanoTime is less than zero, then the service treats it as zero.
Output Parameters
elapsedNanoTime: java.lang.Long - The difference between the current time in nanoseconds and nanoTime. If nanoTime is greater than the current nano time, the service returns zero.
elapsedNanoTimeStr: String - The difference between the current time in nanoseconds and nanoTime. The difference is expressed as a String in this format: [years] [days] [hours] [minutes] [seconds] [millisec] [microsec]. If nanoTime is greater than the current nano time, the service returns zero.
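The intended pairing of currentNanoTime and elapsedNanoTime can be sketched in Python using the stdlib monotonic clock (an analogy: like the JVM's high-resolution source, its origin is arbitrary and only differences are meaningful).

```python
import time

start_nano = time.monotonic_ns()   # analogue of currentNanoTime: arbitrary origin
total = sum(range(1000))           # the work being timed

# analogue of elapsedNanoTime: difference from the earlier reading
elapsed = time.monotonic_ns() - start_nano
```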
formatDate
Formats a Date object as a string.
Input Parameters
date: java.util.Date Optional - Date/time that you want to convert.
pattern: String - Pattern string that describes the format in which you want the date returned.
timezone: String Optional - Time zone in which you want the output date and time expressed. Specify a time zone code as shown in the Time Zones section, for example, EST for Eastern Standard Time. If you do not specify timezone, the user’s time zone is used; if that is not set, GMT is used.
locale: String Optional - Locale in which the date is to be expressed. For example, if locale is en (for English), the pattern EEE d MMM yyyy will produce Friday 23 August 2002, and the locale of fr (for French) will produce vendredi 23 août 2002.
Output Parameters
value: String - The date/time given by date in the format specified by pattern.
getCurrentDate
Returns the current date as a Date object.
Input Parameters
None.
Output Parameters
date: java.util.Date - Current date.
getCurrentDateString
Returns the current date as a String in a specified format.
Input Parameters
pattern: String - Pattern representing the format in which you want the date returned.
timezone: String Optional - Time zone in which you want the output date and time expressed. Specify a time zone code as shown in the “Time Zones” section, for example, EST for Eastern Standard Time. If you do not specify timezone, the value of the server’s “user timezone” property is used. If this property has not been set, GMT is used.
locale: String Optional - Locale in which the date is to be expressed. For example, if locale is en (for English), the pattern EEE d MMM yyyy will produce Friday 23 August 2002, and the locale of fr (for French) will produce vendredi 23 août 2002.
Output Parameters
value: String - Current date in the format specified by pattern.
incrementDate
Increments a date by a specified amount of time.
Input Parameters
startDate: String - Starting date and time.
startDatePattern: String - Format in which the startDate parameter is specified (for example, yyyyMMdd HH:mm:ss.SSS).
endDatePattern: String Optional - Pattern representing the format in which you want the endDate to be returned. If no endDatePattern is specified, the endDate will be returned in the format specified in the startDatePattern parameter.
addYears: String Optional - Number of years to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addMonths: String Optional - Number of months to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addDays: String Optional - Number of days to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addHours: String Optional - Number of hours to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addMinutes: String Optional - Number of minutes to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addSeconds: String Optional - Number of seconds to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addMilliSeconds: String Optional - Number of milliseconds to add to startDate. The value must be an integer between -2147483648 and 2147483647.
timezone: String Optional - Time zone in which you want the endDate to be expressed. Specify a time zone code, for example, EST for Eastern Standard Time. If you do not specify timezone, the value of the server’s “user timezone” property is used. If this property has not been set, GMT is used.
locale: String Optional - Locale in which the endDate is to be expressed. For example, if locale is en (for English), the pattern EEE d MMM yyyy will produce Friday 23 August 2002, and the locale of fr (for French) will produce vendredi 23 août 2002.
Output Parameters
endDate: String - The end date and time, calculated by incrementing the startDate with the specified years, months, days, hours, minutes, seconds, and/or milliseconds. The endDate will be in the endDatePattern format, if specified. If no endDatePattern is specified or if blank spaces are specified as the value, the endDate will be returned in the format specified in the startDatePattern parameter.
Usage Notes
The addYears, addMonths, addDays, addHours, addMinutes, addSeconds, and addMilliSeconds input parameters can take positive or negative values. For example, if startDate is 10/10/2001, startDatePattern is MM/dd/yyyy, addYears is 1, and addMonths is -1, endDate will be 09/10/2002.
If you specify only the startDate, startDatePattern, and endDatePattern input parameters and do not specify any of the optional input parameters to increment the period, the incrementDate service just converts the format of startDate from startDatePattern to endDatePattern and returns it as endDate.
The format of the date specified in the startDate parameter must match the format specified in the startDatePattern and the format of the date specified in the endDate parameter must match the endDatePattern format.
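The worked example above (addYears=1, addMonths=-1) can be reproduced in Python as a sketch; the stdlib has no month delta, so the month arithmetic is done by hand, and this sketch ignores day-of-month clamping (for example, Jan 31 plus one month).

```python
from datetime import datetime

start = datetime.strptime("10/10/2001", "%m/%d/%Y")

# Apply addYears=1 and addMonths=-1 via whole-month arithmetic
total_months = start.year * 12 + (start.month - 1) + 12 * 1 - 1
end = start.replace(year=total_months // 12, month=total_months % 12 + 1)
end_date = end.strftime("%m/%d/%Y")
```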
Datetime Services
Use Datetime services to build or increment a date/time. The services in datetime provide more explicit timezone processing than similar services in the date category.
Providing Time Zones
You can specify timezone input parameters to the datetime services in the following formats:
As a full name.
Example: Asia/Tokyo, America/Los_Angeles
You can use the java.time.ZoneId.getAvailableZoneIds() method to obtain a list of the valid full-name time zone IDs that your JVM version supports.
As UTC.
Example: UTC-5h
As a custom time zone ID, in the format GMT[+|-]hh[[:]mm].
Example: GMT+2:00 (time zones 2 hours east of Greenwich), GMT-3:00 (time zones 3 hours west of Greenwich, that is, Argentina Standard Time and Brazil Eastern Time), GMT+9:30 (time zones 9.5 hours east of Greenwich, that is, Australian Central Time)
As a three-letter abbreviation.
Example: PST (Pacific Standard Time)
Important: As some three-letter abbreviations can represent multiple time zones (for example, “CST” could represent both U.S. “Central Standard Time” and “China Standard Time”), all abbreviations are deprecated. Use the full name, UTC, or custom time zone ID GMT formats instead.
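The full-name zone IDs come from the IANA tz database, which Python also exposes; as a sketch (an analogy for ZoneId.getAvailableZoneIds, not the service itself, and assuming tz data is installed on the system):

```python
from zoneinfo import ZoneInfo, available_timezones

# Full-name IDs such as Asia/Tokyo are the unambiguous, recommended form
zones = available_timezones()
full_name_ok = "Asia/Tokyo" in zones
tokyo = ZoneInfo("Asia/Tokyo")   # raises if the ID is unknown
```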
The following Datetime services are available:
Service
Description
build
Builds a date/time string using the specified pattern and the supplied date/time elements.
increment
Increments or decrements a date and time by a specified amount of time.
build
Builds a date/time string using the specified pattern and the supplied date/time elements.
Input Parameters
pattern: String - The pattern with which to format the string. For more information about these pattern letters and symbols, see the Oracle Java API documentation for the DateTimeFormatter class.
year: String Optional - The year expressed as a 4-digit Integer. If you do not specify year, the year will be from the current date which is determined by the JVM in which IBM webMethods Integration runs. If you specify an invalid value for year, the service will end with an error from the JDK.
month: String Optional - The month expressed as an Integer where January is 1. If you do not specify month, the month will be from the current date which is determined by the JVM in which IBM webMethods Integration runs. If you specify an invalid value for month, the service will end with an error from the JDK.
dayOfMonth: String Optional - The day of the month expressed as an Integer, starting with 1 as the first day of the month. If you do not specify dayOfMonth, the day of the month will be from the current date which is determined by the JVM in which IBM webMethods Integration runs. If you specify an invalid value for day, the service will end with an error from the JDK.
hour: String Optional - The hour of the day expressed as an Integer from 0 through 23. If you do not specify hour, the hour will be from the current time which is determined by the JVM in which IBM webMethods Integration runs. If you specify an invalid value for hour, the service will end with an error from the JDK.
minute: String Optional - The minute expressed as an Integer from 0 through 59. If you do not specify minute, the minute will be from the current time which is determined by the JVM in which IBM webMethods Integration runs. If you specify an invalid value for minute, the service will end with an error from the JDK.
second: String Optional - The seconds of the hour expressed as an Integer from 0 through 59. If you do not specify second, the second will be from the current time which is determined by the JVM in which IBM webMethods Integration runs. If you specify an invalid value for second, the service will end with an error from the JDK.
millis: String Optional - The number of milliseconds expressed as a Long. If you do not specify millis, the millis will be from the current time which is determined by the JVM in which IBM webMethods Integration runs. If you specify an invalid value for millis, the service will end with an error from the JDK.
timezone: String Optional - The time zone. If you specify a value for timezone, the service ignores the useSystemTimeZone parameter value. It is recommended to supply the full name for timezones, such as Asia/Tokyo, or use UTC. If pattern includes a timezone, you must specify timezone input parameter value or set useSystemTimeZone to true.
useSystemTimeZone: String Optional - Indicates whether the service uses the time zone of the IBM webMethods Integration JVM when timezone is not specified. Set to:
true to use the time zone of IBM webMethods Integration when timezone is not specified.
false if you do not want the service to use the time zone of IBM webMethods Integration when timezone is not specified. This is the default.
If pattern includes a time zone, you must specify the timezone input parameter value or set useSystemTimeZone to true.
To match the behavior of date:dateTimeBuild, which produced a date/time that always included a time zone, set useSystemTimeZone to true. This ensures that if timezone is not specified, the resulting date/time will include a time zone.
locale: String Optional - The locale in which to express the date.
Output Parameters
value: String - The formatted date and time.
Usage Notes
The build service replaces the date:dateBuild and date:dateTimeBuild services which are deprecated.
If you specify a parameter that does not exist in the supplied pattern, the service ignores that parameter.
If you do not specify a timezone, useSystemTimeZone is set to false, and the pattern includes a time zone, the service ends with an exception.
If a time zone is provided as input to the service either in the timezone parameter or by setting useSystemTimeZone to true, the build service calculates the date/time starting with a “zoned” date/time. The resulting values can differ when daylight savings time transitions are in effect. If no time zone is provided as input to the service either by not specifying timezone or by setting useSystemTimeZone to false, then the build service calculates the date/time starting with an “unzoned” date/time.
The build service is similar to date:dateBuild and date:dateTimeBuild; however, the build service allows the building of a date/time that does not include a time zone. Furthermore, the build service assembles a date/time using each of the provided parameters. Consequently, the build service can build a date/time with a value that would be invalid in the current time zone, such as a date/time that would fall into the gap of a daylight saving time transition. This is unlike the date:dateBuild and date:dateTimeBuild services, which build a local java.util.Date object that uses the time zone of the machine running IBM webMethods Integration and then apply the offset between the local time zone and the specified time zone.
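The zoned-versus-unzoned distinction can be sketched in Python (an analogy using naive and aware datetimes, not the service itself): an unzoned date/time can hold a value that falls into a daylight saving gap, while attaching a zone forces that value to be resolved against the zone's rules.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# "Unzoned" (naive) date/time: 02:30 on a US spring-forward date sits in the
# daylight saving gap, yet it is representable without a zone attached
naive = datetime(2019, 3, 10, 2, 30)

# "Zoned" date/time: attaching a zone means the gap must be resolved
zoned = naive.replace(tzinfo=ZoneInfo("America/New_York"))
```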
increment
Increments or decrements a date and time by a specified amount of time.
Input Parameters
startDate: String - Starting date and time.
startDatePattern: String - Pattern in which the startDate value is specified. For more information about these pattern letters and symbols, see the Oracle Java API documentation for the DateTimeFormatter class.
endDatePattern: String - Pattern in which to format the resulting date/time. For more information about these pattern letters and symbols, see the Oracle Java API documentation for the DateTimeFormatter class. If no endDatePattern is specified, the endDate will be returned in the format specified in the startDatePattern parameter.
timezone: String Optional - The time zone to use for parsing startDate and formatting endDate. The service uses timezone to parse the startDate String and convert it from the time zone specified in the input, if one was provided. For example, if the timezone input parameter is MST and the startDate is “2019-02-21 11:30:00 EST”, then the service converts the startDate time from EST to MST. The service uses timezone when formatting endDate as a String. That is, the timezone determines the time zone in which the service expresses the endDate.
locale: String Optional - Locale in which the endDate is to be expressed.
addYears: String Optional - The number of years to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addMonths: String Optional - The number of months to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addDays: String Optional - The number of days to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addHours: String Optional - The number of hours to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addMinutes: String Optional - The number of minutes to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addSeconds: String Optional - The number of seconds to add to startDate. The value must be an integer between -2147483648 and 2147483647.
addMilliseconds: String Optional - The number of milliseconds to add to startDate. The value must be an integer between -2147483648 and 2147483647.
useSystemTimeZone: String Optional - Whether to use the system time zone to increment the date/time when the startDate value does not have a time zone. Set to:
false if the startDate does not include a time zone, the timezone parameter was not set, and you want to increment the date/time without being affected by a time zone. This is the default.
true to use the system time zone when incrementing the startDate.
When useSystemTimeZone is false and no timezone is provided, the resulting endDate will not include a time zone. To match the behavior of date:incrementDate, which produced a date/time that always included a time zone, set useSystemTimeZone to true.
useSameInstant: String Optional - Whether to use the same Instant, where Instant represents a moment on the timeline in UTC, or the same unzoned date/time when applying a different time zone. Using the same Instant will usually result in changes to the unzoned date/time when the time zone and its offset are applied. Not using the same Instant will result in the unzoned date/time being unchanged when the time zone is applied. If useSameInstant is true, the increment service uses the Instant (the absolute time with no time zones) to determine how the timezone value is used.
If useSameInstant is false, the increment service uses the unzoned date/time and only changes the time zone. The default is true.
Note
For the purpose of this documentation, the term “unzoned date/time” is synonymous with the term “local date/time”.
For example, if startDate is a date/time of “2019-02-25 08:25:00 CET” (Central European Time, UTC+1:00) and the specified timezone value is America/New_York (EST, UTC-5:00), the value of useSameInstant has the following impact:
If useSameInstant is true, then 2019-02-25 08:25:00 CET becomes 2019-02-25 02:25:00 EST
If useSameInstant is false, then 2019-02-25 08:25:00 CET becomes 2019-02-25 08:25:00 EST
Output Parameters
endDate: String - The incremented date and time.
Usage Notes
The increment service replaces the date:incrementDate service which is deprecated.
The increment service can be used to decrement a date and time by specifying negative numbers: the addYears, addMonths, addDays, addHours, addMinutes, addSeconds, and addMilliseconds input parameters can all take positive or negative values.
The service ends with an exception if the format of the date specified in the startDate parameter does not match the format specified in the startDatePattern.
If endDatePattern includes a time zone, such as “z”, then the input string and startDatePattern must also have time zone fields, or timezone must be set, or useSystemTimeZone must be true. Otherwise the service ends with an error.
If you specify only the startDate, startDatePattern, and endDatePattern input parameters and do not specify any of the optional input parameters to increment the period, the increment service just converts the format of startDate from startDatePattern to endDatePattern and returns it as endDate.
If you specify a value for timezone, the service ignores the useSystemTimeZone parameter value.
If you specify a value for timezone and the startDate includes a time zone, then the service uses the supplied timezone to convert the startDate time zone.
If you do not specify a value for timezone and the startDate includes a time zone, then the service uses the time zone in the startDate and ignores the useSystemTimeZone parameter.
If you do not specify a value for timezone, the startDate does not include a time zone, and useSystemTimeZone parameter is true, then the service uses the system time zone.
If startDate does not include a time zone, you do not specify a value for timezone, and useSystemTimeZone is false, the resulting endDate will not include a time zone.
The increment service is similar to date:incrementDate; however, the increment service provides more specific handling of time zones. To match the behavior of date:incrementDate, set useSameInstant to true.
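The effect of useSameInstant described above can be sketched in Python (an analogy using aware datetimes, not the service itself), reproducing the CET-to-EST example: astimezone keeps the same instant and changes the wall time, while replacing only the tzinfo keeps the same unzoned date/time.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

cet = datetime(2019, 2, 25, 8, 25, tzinfo=ZoneInfo("Europe/Paris"))

# useSameInstant=true analogue: same moment in time, different wall clock
same_instant = cet.astimezone(ZoneInfo("America/New_York"))   # 02:25 EST

# useSameInstant=false analogue: same wall clock, only the zone changes
same_local = cet.replace(tzinfo=ZoneInfo("America/New_York")) # 08:25 EST
```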
Document Services
Use Document services to perform operations on documents.
The following Document services are available:
Service
Description
bytesToDocument
Converts an array of bytes to a document.
deleteDocuments
Deletes the specified documents from a set of documents.
documentListToDocument
Constructs a document from a document list by generating key/value pairs from the values of two elements that you specify in the document list.
documentToBytes
Converts a document to an array of bytes.
documentToDocumentList
Expands the contents of a document into a list of documents. Each key/value pair in the source document is transformed to a single document containing two keys (whose names you specify). These two keys will contain the key name and value of the original pair.
findDocuments
Searches a set of documents for entries matching a set of criteria.
groupDocuments
Groups a set of documents based on specified criteria.
insertDocument
Inserts a new document in a set of documents at a specified position.
removeNullFields
Removes null fields from a given document.
searchDocuments
Searches a set of documents for entries matching a set of criteria.
sortDocuments
Sorts a set of input documents based on the specified sortCriteria.
bytesToDocument
Converts an array of bytes to a document. This service can only be used with byte arrays created by executing the documentToBytes service.
Input Parameters
documentBytes: Object - An array of bytes (byte[]) to convert to a document.
If documentBytes is null, the service does not return a document or an error message.
If documentBytes is not a byte array, the service throws an exception.
If documentBytes is zero-length, the service produces an empty document.
Output Parameters
document: Document - A document.
Usage Notes
Use this service with the documentToBytes service, which converts a document into a byte array. You can pass the resulting byte array to the bytesToDocument service to convert it back into the original document.
In order for the document-to-bytes-to-document conversion to work, the entire content of the document must be serializable. Every object in the document must be of a data type known to IBM webMethods Integration, or it must support the java.io.Serializable interface.
If IBM webMethods Integration encounters an unknown object in the document that does not support the java.io.Serializable interface, that object’s value will be lost. It will be replaced with a string containing the object’s class name.
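The round trip between documentToBytes and bytesToDocument can be sketched in Python using pickle (an analogy for the platform's serialization, not its actual wire format): everything in the document must be serializable for the conversion to preserve the content.

```python
import pickle

document = {"cx_timeout": 1000, "cx_max": 2500, "cx_min": 10}

document_bytes = pickle.dumps(document)   # documentToBytes analogue
restored = pickle.loads(document_bytes)   # bytesToDocument analogue
```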
deleteDocuments
Deletes the specified documents from a set of documents.
Input Parameters
documents: Document List - Set of documents that contain the documents you want to delete.
indices: String List - Index values of documents to be deleted from the documents parameter document list.
Output Parameters
documents: Document List - List of documents whose indices do not match the values in indices parameter.
deletedDocuments: Document List - List of deleted documents.
Usage Notes
The deleteDocuments service returns an error if the indices parameter value is less than zero or more than the number of documents in the documents input parameter.
documentListToDocument
Constructs a document from a document list by generating key/value pairs from the values of two elements that you specify in the document list.
Input Parameters
documentList: Document List - Set of documents that you want to transform into a single document.
Note
If the documentList parameter contains a single document instead of a Document List, the documentListToDocument service does nothing.
name: String - Name of the element in the documentList parameter whose value provides the name of each key in the resulting document.
Important
The data type of the element that you specify in the name parameter must be String.
value: String - Name of the element in the documentList parameter whose values will be assigned to the keys specified in name. This element can be of any data type.
Output Parameters
document: Document - Document containing the key/value pairs generated from the documentList parameter.
Usage Notes
The following example illustrates how the documentListToDocument service would convert a document list that contains three documents to a single document containing three key/value pairs. When you use the documentListToDocument service, you specify which two elements from the source list are to be transformed into the keys and values in the output document. In the following example, the values from the pName elements in the source list are transformed into key names, and the values from the pValue elements are transformed into the values for these keys.
A documentList containing these three documents:

Document 1: pName = cx_timeout, pValue = 1000
Document 2: pName = cx_max, pValue = 2500
Document 3: pName = cx_min, pValue = 10

Would be converted to a document containing these three keys:

cx_timeout = 1000
cx_max = 2500
cx_min = 10
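The transformation above can be sketched in Python (an analogy, with documents modeled as dicts, not the service's implementation): the element named by the name parameter supplies each key, and the element named by the value parameter supplies each value.

```python
document_list = [
    {"pName": "cx_timeout", "pValue": 1000},
    {"pName": "cx_max", "pValue": 2500},
    {"pName": "cx_min", "pValue": 10},
]

# name="pName", value="pValue": each list entry becomes one key/value pair
document = {d["pName"]: d["pValue"] for d in document_list}
```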
documentToBytes
Converts a document to an array of bytes.
Input Parameters
document: Document - Document to convert to bytes.
If document is null, the service does not return an output or an error message.
If document is not a document, the service throws an exception.
If document contains no elements, the service produces a zero-length byte array.
Output Parameters
documentBytes: Object - A serialized representation of the document as an array of bytes (byte[]).
Usage Notes
Use the documentToBytes service with the bytesToDocument service, which converts the byte array created by this service back into the original document.
The documentToBytes service is useful when you want to write a document to a file, an input stream, or a cache.
In order for the document-to-bytes-to-document conversion to work, the entire content of the document must be serializable. Every object in the document must be of a data type known to IBM webMethods Integration, or it must support the java.io.Serializable interface. If IBM webMethods Integration encounters an unknown object in the document that does not support the java.io.Serializable interface, that object’s value will be lost. IBM webMethods Integration will replace it with a string containing the object’s class name.
documentToDocumentList
Expands the contents of a document into a list of documents.
Each key/value pair in the source document is transformed to a single document containing two keys (whose names you specify). These two keys will contain the key name and value of the original pair.
Input Parameters
document: Document - Document to transform.
name: String - Name to assign to the key that will receive the key name from the original key/value pair. In the example below, this parameter was set to pName.
value: String - Name to assign to the key that will receive the value from the original key/value pair. In the example below, this parameter was set to pValue.
Output Parameters
documentList: Document List - List containing a document for each key/value pair in the document parameter. Each document in the list will contain two keys, whose names were specified by the name and value parameters. The values of these two keys will be the name and value (respectively) of the original pair.
Usage Notes
The following example shows how a document containing three keys would be converted to a document list containing three documents. In this example, the names pName and pValue are specified as names for the two new keys in the document list.
A document containing these three keys:

Key          Value
cx_timeout   1000
cx_max       2500
cx_min       10
Would be converted to a document list containing these three documents:

Key      Value
pName    cx_timeout
pValue   1000

Key      Value
pName    cx_max
pValue   2500

Key      Value
pName    cx_min
pValue   10
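The expansion in the example above can be sketched in Python (a dict standing in for an IS document; the function name is illustrative):

```python
def document_to_document_list(document, name, value):
    # Each key/value pair in the source document becomes one document
    # with two keys, whose names are given by the name and value
    # parameters (pName and pValue in the example above).
    return [{name: k, value: v} for k, v in document.items()]

doc = {"cx_timeout": "1000", "cx_max": "2500", "cx_min": "10"}
doc_list = document_to_document_list(doc, "pName", "pValue")
```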
findDocuments
Searches a set of documents for entries matching a set of criteria.
Input Parameters
documents: Document List - Set of documents from which the documents meeting the retrieve criteria are to be returned.
matchCriteria: Document - Criteria on which the documents in the documents parameter are to be matched. Parameters for matchCriteria are:
path - Name of the element in documentList whose value provides the value for the search text. The value for path can be a path expression. For example, “Family/Children[0]/BirthDate” retrieves the birth date of the first child from the input Family document list.
compareValueAs: Optional - Allowed values are string, numeric, and datetime. The default value is string.
datePattern: Optional - The pattern is considered only if compareValueAs is of type datetime. Default value is MM/dd/yyyy hh:mm:ss a.
joins - List of join criteria. Each join criterion consists of:
value: Optional - Allowed values are string, numeric, and datetime. The default value is string.
joinType - Specifies how two joins are linked. Values are “and” or “or”. Default value is “and”.
Output Parameters
result documents: Document List - List of documents that match the retrieve criteria.
groupDocuments
Groups a set of documents based on specified criteria.
Input Parameters
documents: Document List - Set of documents to be grouped based on the specified criteria.
groupCriteria: Document List - The criteria on which the input documents are to be grouped. Valid values for the groupCriteria parameter are:
key - Key in the pipeline. The value for key can be a path expression. For example, “Family/Children[0]/BirthDate” retrieves the birth date of the first child from the input Family document list.
compareStringsAs: Optional - Valid values for compareStringsAs are string, numeric, and datetime. The default value is string.
pattern: Optional - The pattern is considered only if the compareStringsAs parameter is of type datetime.
Note
If key is not found in all the input documents, the documents that do not match the groupCriteria are grouped together as a single group.
Output Parameters
documentGroups: Document List - List of documents where each element represents a set of documents grouped based on the criteria specified.
Usage Notes
The following example illustrates how to specify the values for the groupCriteria parameter:
key         compareStringsAs   pattern
name        string             -
age         numeric            -
birthdate   datetime           yyyy-MM-dd
The input documents will be grouped based on name, age, and birth date.
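The grouping behavior, including the note about documents missing the key, can be sketched in Python (dicts standing in for IS documents; the function name is illustrative):

```python
from collections import defaultdict

def group_documents(documents, keys):
    # Documents sharing the same values for all grouping keys end up in
    # one group. Documents missing a key all produce the same None value
    # for it, so they fall into a single shared group, as noted above.
    groups = defaultdict(list)
    for doc in documents:
        group_key = tuple(doc.get(k) for k in keys)
        groups[group_key].append(doc)
    return list(groups.values())

people = [
    {"name": "Ann", "age": "30"},
    {"name": "Ann", "age": "30"},
    {"name": "Bob", "age": "25"},
]
grouped = group_documents(people, ["name", "age"])
```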
insertDocument
Inserts a new document in a set of documents at a specified position.
Input Parameters
documents: Document List - Set of documents in which a new document is to be inserted.
insertDocument: Document - The new document to be inserted into the set of documents specified in the documents parameter.
index: String Optional - The position in the set at which the document is to be inserted. The index parameter is zero-based. If the value for the index parameter is not specified, the document is inserted at the end of the document list specified in the documents parameter.
Output Parameters
documents: Document List. Document list after inserting the new document.
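The zero-based insertion and the append-by-default behavior can be sketched in Python (an illustrative stand-in, not the platform implementation; index is kept as a string to match the documented String type):

```python
def insert_document(documents, insert_doc, index=None):
    # index is zero-based; when omitted, the new document is appended
    # to the end of the list, matching the default described above.
    result = list(documents)
    if index is None:
        result.append(insert_doc)
    else:
        result.insert(int(index), insert_doc)
    return result

docs = [{"id": "1"}, {"id": "3"}]
inserted = insert_document(docs, {"id": "2"}, index="1")
```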
removeNullFields
Removes null fields from a given document. Optionally, by specifying trimStringFields and removeEmptyStringFields, you can trim leading and trailing spaces in string fields and then remove any string field whose trimmed length is zero. For array fields, the service reduces the size of the array by the number of null fields; if the resulting size is zero, the array field is removed from the document.
Note
Use this operation carefully as it is a destructive operation and removes fields from the given document. For trimming string fields, a space character is defined as any character whose code point is less than or equal to U+0020 (the space character).
Input Parameters
document: Document - A document from which fields having the value null are removed.
trimStringFields: Optional - If set to true, leading and trailing space characters are removed from fields of type String. Default value is false.
removeEmptyStringFields: Optional - If set to true, empty string fields are removed from the given document. If both trimStringFields and removeEmptyStringFields are true, a string field that is empty after trimming is removed from the document. Default value is false.
Output Parameters
document: Document. Returns the document having non-null fields.
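The described behavior can be sketched in Python (a dict standing in for an IS document; str.strip is an approximation of the U+0020-based trimming defined above):

```python
def remove_null_fields(document, trim_string_fields=False,
                       remove_empty_string_fields=False):
    # Drop null fields, optionally trim string fields, optionally drop
    # strings that are empty after trimming, and shrink array fields by
    # their null entries, removing arrays that end up empty.
    result = {}
    for key, value in document.items():
        if value is None:
            continue
        if isinstance(value, str) and trim_string_fields:
            value = value.strip()
        if isinstance(value, str) and remove_empty_string_fields and value == "":
            continue
        if isinstance(value, list):
            value = [v for v in value if v is not None]
            if not value:
                continue
        result[key] = value
    return result

doc = {"a": None, "b": "  hi  ", "c": "   ", "d": [None, "x", None]}
cleaned = remove_null_fields(doc, trim_string_fields=True,
                             remove_empty_string_fields=True)
```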
searchDocuments
Searches a set of documents for entries matching a set of criteria.
Input Parameters
documents: Document List. Set of documents from which the documents meeting the search criteria are to be returned.
searchCriteria: Document. Criteria on which the documents in the documents parameter are to be searched. Valid values for searchCriteria parameters are:
key: Name of the element in documentList whose value provides the value for the search text. The value for key can be a path expression. For example, “Family/Children[0]/BirthDate” retrieves the birth date of the first child from the input Family document list.
value: Optional. Any search text. If no value is specified, the service searches for null in the document list.
compareStringsAs: Optional. Allowed values are string, numeric, and datetime. The default value is string.
pattern: Optional. Pattern will be considered only if the compareStringsAs value is of type datetime.
sorted: String Optional - Set to true if the document list is already sorted on the search key specified in the search criteria; otherwise false. If the value for the sorted parameter is set to true, the required documents are found faster.
Output Parameters
resultdocuments: Document List - List of documents that match the search criteria.
documentListIndices: String List - Positions of the matched documents in the document list.
documents: Document List - List of documents that were input.
Usage Notes
For example, if you want to search a set of documents for documents where BirthDate is 10th January 2008, the values for the searchCriteria parameter would be:
key         value        compareStringsAs   pattern
Birthdate   2008-01-10   datetime           yyyy-MM-dd
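The search semantics can be sketched in Python (dicts standing in for IS documents; strptime patterns such as %Y-%m-%d stand in for the Java date patterns such as yyyy-MM-dd used by the real service):

```python
from datetime import datetime

def search_documents(documents, key, value, compare_strings_as="string",
                     pattern=None):
    # Returns the matching documents and their zero-based positions,
    # mirroring the resultdocuments and documentListIndices outputs.
    def convert(raw):
        if compare_strings_as == "numeric":
            return float(raw)
        if compare_strings_as == "datetime":
            return datetime.strptime(raw, pattern)
        return raw

    target = convert(value)
    matches, indices = [], []
    for i, doc in enumerate(documents):
        if key in doc and convert(doc[key]) == target:
            matches.append(doc)
            indices.append(str(i))
    return matches, indices

people = [{"Birthdate": "2008-01-10"}, {"Birthdate": "1999-05-02"}]
result, positions = search_documents(people, "Birthdate", "2008-01-10",
                                     compare_strings_as="datetime",
                                     pattern="%Y-%m-%d")
```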
sortDocuments
Sorts a set of input documents based on the specified sortCriteria.
Input Parameters
documents: Document List - Set of documents that are to be sorted.
sortCriteria: Document List - Criteria based on which the documents in the documents parameter are to be sorted. Valid values for sortCriteria parameters are:
key - Name of the element in documentList whose value provides the value based on which the documents are to be sorted. The value for key can be a path expression. For example, “Family/Children[0]/BirthDate” retrieves the birth date of the first child from the input Family document list.
order: Optional - Allowed values are ascending and descending. The default value is ascending.
compareStringsAs: Optional - Allowed values are string, numeric, and datetime. Default value is string.
pattern: Optional - The value for pattern will be considered only if the compareStringsAs value is of type datetime.
Note
If key is not found in all the input documents, the documents that do not match the sort criteria appear at the start or end of the list depending on the specified order. If the order is ascending, the documents that do not match the sort criteria appear at the top of the list, followed by the sorted documents. If the order is descending, the sorted documents appear at the top, followed by the documents that do not match the sort criteria.
Output Parameters
documents: Document List - The documents sorted based on the sort criteria specified in the sortCriteria parameter.
Usage Notes
For example, if you want to sort a set of documents based on name, age, and then on birth date, the values for sortCriteria parameter would be:
key         order        compareStringsAs   pattern
Name        ascending    string             -
Age         descending   numeric            -
Birthdate   ascending    datetime           yyyy-MM-dd
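Multi-key sorting like the example above can be sketched in Python by applying a stable sort per criterion, from the least to the most significant key (an illustrative stand-in, not the platform implementation):

```python
from datetime import datetime

def sort_documents(documents, sort_criteria):
    # sort_criteria is a list of (key, order, compare_strings_as, pattern)
    # tuples in decreasing significance. Sorting in reverse criterion
    # order with a stable sort gives the combined multi-key ordering.
    result = list(documents)
    for key, order, compare_as, pattern in reversed(sort_criteria):
        def sort_key(doc, key=key, compare_as=compare_as, pattern=pattern):
            raw = doc[key]
            if compare_as == "numeric":
                return float(raw)
            if compare_as == "datetime":
                return datetime.strptime(raw, pattern)
            return raw
        result.sort(key=sort_key, reverse=(order == "descending"))
    return result

people = [
    {"Name": "Bob", "Age": "25"},
    {"Name": "Ann", "Age": "30"},
    {"Name": "Ann", "Age": "25"},
]
sorted_people = sort_documents(
    people,
    [("Name", "ascending", "string", None),
     ("Age", "descending", "numeric", None)],
)
```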
Flat File Services
Use Flat File services to convert data bytes, data stream, and data string to a document and vice versa.
The following Flat File services are available:
Service
Description
delimitedDataBytesToDocument
Converts delimited data bytes (byte array) to a document.
delimitedDataStreamToDocument
Converts delimited data stream to a document.
delimitedDataStringToDocument
Converts delimited data string to a document.
documentToDelimitedDataBytes
Converts a document to delimited data bytes (byte array object).
documentToDelimitedDataStream
Converts a document to a delimited data stream.
documentToDelimitedDataString
Converts a document to a delimited data string.
delimitedDataBytesToDocument
Converts delimited data bytes (byte array) to a document.
Input Parameters
delimitedDataBytes: java.lang.Byte[ ] - Delimited data in bytes (Byte array) to convert to a document.
fieldQualifier: String Optional - The delimiter to use for separating entries in delimitedDataBytes. Default is comma (,).
textQualifier: String Optional - The character to use for quoted elements. Default is double quote (“).
useHeaderRowForFieldNames: String Optional - Considers the first line as a header row and uses the delimited data of this line as the property names in the output document.
Set to:
true - The delimited data of the first line is used as the property names in the output document. This is the default.
false - column1, column2…columnN are used as the property names in the output document.
skipBOMBytes: Optional - Automatically skips the byte order mark (BOM) bytes when the input begins with an encoded BOM.
encoding: String Optional - The encoding to use while parsing the delimited data.
Output Parameters
document: Document - Document resulting from the conversion of delimitedDataBytes. This document contains document array rows[ ] corresponding to the delimited data.
delimitedDataStreamToDocument
Converts delimited data stream to a document. The permissible size of the content stream is based on your tenancy.
Input Parameters
delimitedDataStream: java.io.InputStream - Delimited data in an input stream to convert to a document.
fieldQualifier: String Optional - The delimiter to use for separating entries in delimitedDataStream. Default is comma (,).
textQualifier: String Optional - The character to use for quoted elements. Default is double quote (“).
useHeaderRowForFieldNames: String Optional - Considers the first line as a header row and uses the delimited data of this line as the property names in the output document.
Set to:
true - The delimited data of the first line is used as the property names in the output document. This is the default.
false - column1, column2…columnN are used as the property names in the output document.
skipBOMBytes: Optional - Automatically skips the byte order mark (BOM) bytes when the input begins with an encoded BOM.
encoding: String Optional - The encoding to use while parsing the delimited data.
Output Parameters
document: Document - Document resulting from the conversion of delimitedDataStream. This document contains document array rows[ ] corresponding to the delimited data.
delimitedDataStringToDocument
Converts delimited data string to a document.
Input Parameters
delimitedDataString: String - Delimited string to convert to a document.
fieldQualifier: String Optional - The delimiter to use for separating entries in delimitedDataString. Default is comma (,).
textQualifier: String Optional - The character to use for quoted elements. Default is double quote (“).
useHeaderRowForFieldNames: String Optional - Considers the first line as a header row and uses the delimited data of this line as the property names in the output document.
Set to:
true - The delimited data of the first line is used as the property names in the output document. This is the default.
false - column1, column2…columnN are used as the property names in the output document.
encoding: String Optional - The encoding to use while parsing the delimited data.
Output Parameters
document: Document - Document resulting from the conversion of delimitedDataString. This document contains document array rows[ ] corresponding to the delimited data.
documentToDelimitedDataBytes
Converts a document to delimited data bytes (byte array object).
Input Parameters
document: Document - Document to be converted to delimited data bytes (byte array object). This document contains document array rows[ ] corresponding to the delimited data.
fieldQualifier: String Optional - The delimiter to use for separating entries in delimitedDataBytes. Default is comma (,).
textQualifier: String Optional - The character to use for quoted elements. Default is double quote (“).
useFieldNamesForHeaderRow: String Optional - The first line in the output delimited data delimitedDataBytes will be constructed using the property names in the input document array document\rows[ ].
Set to:
true - Property names in the input document array document\rows[ ] will be used as the first row in the output delimitedDataBytes.
false - column1, column2…columnN will be used as the first row in the output delimitedDataBytes.
encoding: String Optional - The encoding to use while generating the delimited data.
Output Parameters
delimitedDataBytes: Object - Delimited data byte array object resulting from the conversion of a document.
documentToDelimitedDataStream
Converts a document to a delimited data stream.
Input Parameters
document: Document - Document to be converted to delimited data stream. This document contains document array rows[ ] corresponding to the delimited data.
fieldQualifier: String Optional - The delimiter to use for separating entries in delimitedDataStream. Default is comma (,).
textQualifier: String Optional - The character to use for quoted elements. Default is double quote (“).
useFieldNamesForHeaderRow: String Optional - The first line in the output delimited data delimitedDataStream will be constructed using the property names in the input document array document\rows[ ].
Set to:
true - Property names in the input document array document\rows[ ] will be used as the first row in the output delimitedDataStream.
false - column1, column2…columnN will be used as the first row in the output delimitedDataStream.
encoding: String Optional - The encoding to use while generating the delimited data.
Output Parameters
delimitedDataStream: java.io.InputStream - Delimited data stream resulting from the conversion of a document.
documentToDelimitedDataString
Converts a document to a delimited data string.
Input Parameters
document: Document - Document to be converted to delimited data string. This document contains document array rows[ ] corresponding to the delimited data.
fieldQualifier: String Optional - The delimiter to use for separating entries in delimitedDataString. Default is comma (,).
textQualifier: String Optional - The character to use for quoted elements. Default is double quote (“).
useFieldNamesForHeaderRow: String Optional - The first line in the output delimited data delimitedDataString will be constructed using the property names in the input document array document\rows[ ].
Set to:
true - Property names in the input document array document\rows[ ] will be used as the first row in the output delimitedDataString.
false - column1, column2…columnN will be used as the first row in the output delimitedDataString.
encoding: String Optional - The encoding to use while generating the delimited data.
Output Parameters
delimitedDataString: String - Delimited data string resulting from the conversion of a document.
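The string-based pair of these services can be sketched in Python using the standard csv module (a stand-in for illustration; the real services operate on IS documents with a rows[ ] document array, which a dict with a "rows" list approximates here):

```python
import csv
import io

def delimited_data_string_to_document(data, field_qualifier=",",
                                      use_header_row=True):
    # With a header row, its values become the property names;
    # otherwise column1..columnN are used, as described above.
    rows = list(csv.reader(io.StringIO(data), delimiter=field_qualifier))
    if use_header_row:
        header, body = rows[0], rows[1:]
    else:
        header = ["column%d" % (i + 1) for i in range(len(rows[0]))]
        body = rows
    return {"rows": [dict(zip(header, row)) for row in body]}

def document_to_delimited_data_string(document, field_qualifier=","):
    # The first output line is built from the property names of rows[ ].
    header = list(document["rows"][0].keys())
    out = io.StringIO()
    writer = csv.writer(out, delimiter=field_qualifier, lineterminator="\n")
    writer.writerow(header)
    for row in document["rows"]:
        writer.writerow([row[h] for h in header])
    return out.getvalue()

doc = delimited_data_string_to_document("name,age\nAnn,30\nBob,25\n")
```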
Flow Services
Use flow services to perform utility-type tasks.
The following flow services are available:
Service
Description
clearPipeline
Removes all fields from the pipeline. You may optionally specify fields that should not be cleared by this service.
countProcessedDocuments
Counts the number of documents processed by a flow service. Details about the processed documents can be viewed in the Execution Results screen.
getHTTPRequest
Gets information about the HTTP request received by IBM webMethods Integration.
getLastError
Obtains detailed information about the last error that was trapped within a flow service.
getLastFailureCaught
Returns information about the last failure that was caught by a CATCH step.
getSessionInfo
Obtains detailed information about the current logged-in user session. Also provides the current flow service name and the execution result reference identifier.
getRetryCount
Retrieves the retry count and the maximum retry count for a service.
logCustomMessage
Logs a message, which can be viewed in the Execution Results screen.
setCustomContextID
Associates a custom value with an auditing context. You can use the custom value to search for flow service executions based on the custom ID in the IBM webMethods Integration Monitor screen.
setHTTPResponse
Sets the HTTP response information to be returned by IBM webMethods Integration.
sleep
Causes the currently executing flow service to pause for the specified number of seconds.
throwExceptionForRetry
Throws an ISRuntimeException and instructs IBM webMethods Integration to re-execute a service using the original service input.
clearPipeline
Removes all fields from the pipeline. You may optionally specify fields that should not be cleared by this service.
Input Parameters
preserve: String List Optional - Field names that should not be cleared from the pipeline.
Output Parameters
None.
countProcessedDocuments
Counts the number of documents processed by a flow service. Details about the processed documents can be viewed in the Execution Results screen.
Input Parameters
status: String Optional - Valid values are success or fail. Set status to success to count the number of successfully processed documents, else set it to fail. Default value is success.
incrementBy: String Optional - Increment the number of documents processed by a flow service. Every time the service is used, successful or failed documents are incremented by the given value. Default value is 1.
Output Parameters
None.
Usage Notes
To increment the number of processed documents by the size of a list, use the sizeOfList service in the List service category.
getHTTPRequest
Retrieves information about the HTTP request received by IBM webMethods Integration.
Input Parameters
None.
Output Parameters
httpRequest: Document - Contains the HTTP request received by IBM webMethods Integration and includes the following details:
headers: Document - Contains the header fields from the HTTP request.
requestURL: String - URL used by the client to invoke the service.
method: String - HTTP method used by the client to request the top-level service. Possible values are GET, PUT, POST, PATCH, and DELETE.
Note
The getHTTPRequest service does not retrieve the Authorization header.
getLastError
Obtains detailed information about the last error that was trapped within a flow service.
Input Parameters
None.
Output Parameters
lastError: Document - Information about the last error, which contains details of the time, error, user, block, and call stack information.
time: String - Date and time the event occurred, in the format yyyy/MM/dd HH:mm:ss.SSS
error: String Optional - Error message of the exception.
localizedError: String Optional - Error message in the language that corresponds to the server locale.
user: String - User who executed the flow service.
block: Document - Contains the following fields:
name: String - flow service, Operation, or Service name.
type: String - Connector, flow service, or Service.
details: String Optional - Account and Application name if the Block Type is Application.
callStack: Document List - The call stack information describing where the error occurred including details of the block. Each document represents a block on the call stack. The first document in the list represents the block that threw the error and the last document in the list represents the top level block. It contains the following fields:
name: String - flow service, Operation or Service name.
type: String - Connector, flow service, or Service.
You can use this service in the catch section of a try-catch block. Each execution of a flow service or a service (whether the flow service or the service succeeds or fails) updates the value returned by getLastError. Consequently, an invocation of getLastError itself resets the value of lastError. Therefore, if the results of getLastError will be used as input to subsequent flow services, map the value of lastError to a variable in the pipeline.
If a map has multiple transformers, then a subsequent call to getLastError will return the error associated with the last failed transformer in the map, even if it is followed by successful transformers.
getLastFailureCaught
Returns information about the last failure that was caught by a CATCH step.
Input Parameters
None.
Output Parameters
failure: Object - Last failure caught by a CATCH step. The failure parameter is null if no failure has been caught.
failureMessage: String, Conditional - Message associated with the failure. The service returns a failureMessage output parameter only if a failure has been caught.
failureName: String, Conditional - Exception class name. The service returns a failureName output parameter only if a failure has been caught.
Usage Notes
If a CATCH step can handle multiple failures, use the getLastFailureCaught service to determine which failure the step caught.
You can rethrow a caught failure by using the getLastFailureCaught service to determine the failure caught by the CATCH step, and then using an EXIT step that is configured to signal failure. The getLastFailureCaught service need not be executed within a CATCH step; it returns the last caught failure at any point after a failure has been caught.
getSessionInfo
Obtains detailed information about the current logged-in user session. Also provides the current flow service name and the execution result reference identifier.
Input Parameters
None.
Output Parameters
$session: Document - Returns information about the current logged-in user session. Also provides the current flow service name and the execution result reference identifier.
tenantId: String - Tenant Identifier.
stageId: String - The stage ID where the current flow service resides.
user: Document - Returns user details.
name: String - Name of the user who is executing the service.
CustomContextID: String - Returns the current flow service execution context ID. You can set the context ID in a flow service by using the setCustomContextID service available under the Flow category.
integrationName: String - The name of the flow service. If flow service A has a referenced flow service B, and if the getSessionInfo service is called in flow service B, then integrationName will be A; but if flow service B is executed independently, then integrationName will be B.
executionResultReference: String - Returns the current flow service execution result reference identifier. For example, you can pass the identifier to an on-premises operation and trace the flow service execution.
getRetryCount
Retrieves the retry count and the maximum retry count for a flow service.
The retry count indicates the number of times the IBM webMethods Integration system has re-executed a flow service. For example, a retry count of 1 indicates that the IBM webMethods Integration system tried to execute the flow service twice (the initial attempt and then one retry).
The maximum retry count indicates the maximum number of times the IBM webMethods Integration system can re-execute the flow service if it continues to fail because of a transient error.
Input Parameters
None.
Output Parameters
retryCount: String - The number of times the IBM webMethods Integration system has re-executed the flow service.
maxRetryCount: String - The maximum number of times the IBM webMethods Integration system can re-execute the flow service.
Usage Notes
Although the getRetryCount service can be invoked at any point in a flow service, the getRetryCount service retrieves retry information for the flow service when invoked by a subscriber. The getRetryCount service does not retrieve retry information for a nested flow service (a flow service that is invoked by another flow service).
The maximum number of times IBM webMethods Integration retries a flow service depends on the retry properties set in the subscriber that invokes the flow service.
logCustomMessage
Logs a message, which can be viewed in the Execution Results screen.
Input Parameters
message: String - Custom message to be logged, which can be viewed in the Execution Results screen.
Output Parameters
None.
setCustomContextID
Associates a custom value with an auditing context. This custom value can be used to search for flow service executions in the Monitor screen.
Input Parameters
id: String Optional - The custom value for the current auditing context. Specify a value that you want to associate with the auditing context. The ID length must be less than or equal to 36 characters. In the event that an ID exceeds 36 characters, only the first 36 characters are stored.
Output Parameters
None.
Usage Notes
Each client request creates a new auditing context. The auditing context is the lifetime of the top-level service. Once the custom context identifier is set, IBM webMethods Integration includes that value in each service audit record it logs in the current context. Calls to this service affect audit logging only for the current request.
This service is useful when IBM webMethods Integration is configured to log to a database. When the server logs information about a service to the database, it includes the custom context identifier in the service log. On the IBM webMethods Integration Monitor screen, you can use the custom value as search criteria to locate and view all corresponding service audit records.
If IBM webMethods Integration is configured to log to a file system, IBM webMethods Integration writes the custom context identifier with the service audit records to a file. This file is not accessible on the Monitor screen. You cannot query service records when logging to a file.
If this service is invoked without a specified value for id, IBM webMethods Integration writes a null value for the custom context identifier field for all subsequent service audit records that it logs in the current context.
setHTTPResponse
Sets the HTTP response information to be returned by IBM webMethods Integration.
Input Parameters
httpResponse: Document - Contains the HTTP response that will be returned by IBM webMethods Integration and includes following details:
headers: Document Optional - Contains the header fields to be returned in the HTTP response.
responseCode: String Optional - HTTP status code to be returned to the client. The response codes and phrases are defined in https://tools.ietf.org/html/rfc7231#section-6. If you provide a value for responseCode that is not listed in RFC 7231, Section 6, you must also provide a value for reasonPhrase.
reasonPhrase: String Optional - HTTP reason phrase to be returned to the client. If no reason phrase is provided, the default reason phrase associated with responseCode is used. You must provide a reasonPhrase for any responseCode that is not listed in RFC 7231, Section 6.
responseString: String Optional - Response to be returned to the client, specified as a string.
responseBytes: byte[ ] Optional - Response to be returned to the client, specified as a byte array.
responseStream: java.io.InputStream Optional - Response to be returned to the client, specified as an InputStream.
Output Parameters
None.
Note
When working with a flow service, always set the responseString parameter in the setHTTPResponse service as the last step. This is because, in debug mode, the flow service stops running once the setHTTPResponse service sets the responseString parameter. However, the setHTTPResponse service can be used at any step in a flow service if responseString is not being set.
sleep
Causes the currently executing flow service to pause for the specified number of seconds.
Input Parameters
seconds: String - The number of seconds to pause the currently executing flow service. The value must be an integer between 1 and 60.
Output Parameters
None.
throwExceptionForRetry
Throws an exception and instructs IBM webMethods Integration to re-execute the flow service using the original service input.
Input Parameters
wrappedException: Object Optional - Any exception that you want to include as part of this exception. This might be the exception that causes the throwExceptionForRetry service to execute. For example, if a Salesforce connector fails to connect to the server due to a connection timeout, you can use this service inside a CATCH block to retry the flow service.
message: String Optional - A message to be logged as part of this exception.
Output Parameters
None.
Usage Notes
Use the throwExceptionForRetry service to handle transient errors that might occur during flow service execution. A transient error is an error that arises from a condition that might be resolved quickly, such as the unavailability of a resource due to network issues or a failure to connect to a database. The flow service might execute successfully if IBM webMethods Integration waits and then retries the flow service. If a transient error occurs, the flow service can catch this error and invoke throwExceptionForRetry to instruct IBM webMethods Integration to retry the service.
The throwExceptionForRetry service must be used for transient errors only.
Only top-level services or subscribers can be retried. That is, a service can be retried only when it is invoked directly by a client request or by a subscriber. The service cannot be retried when it is invoked by another flow service (that is, when it is a nested flow service).
You can invoke the getRetryCount service to retrieve the current retry count and the maximum specified retry attempts.
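The interplay of throwExceptionForRetry, the retry count, and the maximum retry count can be sketched as a loop the platform runs on the caller's behalf (a hypothetical model, not platform code; TransientError, run_with_retry, and max_retries are illustrative names):

```python
class TransientError(Exception):
    """Stands in for the exception raised by throwExceptionForRetry."""

def run_with_retry(service, service_input, max_retries):
    # Re-execute the top-level service with its original input until it
    # succeeds or the maximum retry count is exhausted.
    retry_count = 0
    while True:
        try:
            return service(service_input, retry_count)
        except TransientError:
            if retry_count >= max_retries:
                raise
            retry_count += 1

attempts = []

def flaky_service(service_input, retry_count):
    # Simulates a transient failure that clears after two retries.
    attempts.append(retry_count)
    if retry_count < 2:
        raise TransientError("resource unavailable")
    return "ok"

result = run_with_retry(flaky_service, {}, max_retries=3)
```

Here retry_count plays the role of the value returned by getRetryCount: 0 on the initial attempt, then 1, 2, and so on for each retry.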
Hashtable Services
Use Hashtable services to create, update, and obtain information about the hashtable.
The following Hashtable services are available:
Service
Description
containsKey
Checks for the existence of a hashtable element.
createHashtable
Creates a hashtable object.
get
Gets the value for a specified key in the hashtable.
listKeys
Lists all the keys stored in the hashtable.
put
Adds a key/value pair in the hashtable.
remove
Removes a key/value pair from the hashtable.
size
Gets the number of elements in the hashtable.
containsKey
Checks for the existence of a hashtable element.
Input Parameters
hashtable: java.util.Hashtable - Hashtable in which to check for the existence of a hashtable element.
key: String - Hashtable element to be checked for.
Output Parameters
containsKey: String - Indicates whether the specified hashtable element exists.
A value of:
true - Indicates that the element exists.
false - Indicates that the element does not exist.
createHashtable
Creates a hashtable object.
Input Parameters
None.
Output Parameters
hashtable: java.util.Hashtable - The new hashtable object.
get
Gets the value for a specified key in the hashtable.
Input Parameters
hashtable: java.util.Hashtable - Hashtable from which to retrieve the specified value.
key: String - Key of the hashtable element whose value is to be retrieved.
Output Parameters
value: Object - Value of the input hashtable element.
listKeys
Lists all the keys stored in the hashtable.
Input Parameters
hashtable: java.util.Hashtable - Hashtable from which the keys are to be listed.
Output Parameters
keys: String[ ] - List of keys stored in the input hashtable.
put
Adds a key/value pair in the hashtable.
Input Parameters
hashtable: java.util.Hashtable - Hashtable to which the key/value pair is to be added.
key: String - Key of the element to be added to the hashtable.
value: Object - Value of the element to be inserted into the hashtable.
Output Parameters
hashtable: java.util.Hashtable - Hashtable object after the insertion of the key/value pair.
remove
Removes a key/value pair from the hashtable.
Input Parameters
hashtable: java.util.Hashtable - Hashtable from which to remove the key/value pair.
key: String - Key of the hashtable element to be removed.
Output Parameters
hashtable: java.util.Hashtable - Hashtable object after the key/value pair is removed.
value: Object - Value of the hashtable element that was removed. Returns null if the input key is not found in the hashtable.
size
Gets the number of elements in the hashtable.
Input Parameters
hashtable: java.util.Hashtable - Hashtable from which the number of elements stored in it is to be retrieved.
Output Parameters
size: String - Number of elements in the hashtable.
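The Hashtable services wrap a standard java.util.Hashtable object. As a rough illustrative sketch (not the service API itself), the equivalent plain-Java operations are:

```java
import java.util.Collections;
import java.util.Hashtable;
import java.util.List;

public class HashtableServicesDemo {
    public static void main(String[] args) {
        Hashtable<String, Object> hashtable = new Hashtable<>();   // createHashtable

        hashtable.put("region", "EMEA");                           // put
        hashtable.put("retries", 3);

        boolean containsKey = hashtable.containsKey("region");     // containsKey -> true
        Object value = hashtable.get("region");                    // get -> "EMEA"
        List<String> keys = Collections.list(hashtable.keys());    // listKeys
        int size = hashtable.size();                               // size -> 2
        Object removed = hashtable.remove("retries");              // remove -> 3

        System.out.println(containsKey + " " + value + " "
                + keys.size() + " " + size + " " + removed);       // true EMEA 2 2 3
    }
}
```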
IO Services
Use IO services to convert data between byte[ ], characters, and InputStream representations. These services are used for reading and writing bytes, characters, and streamed data to the file system and behave like the corresponding methods in the java.io.InputStream class. These services can be invoked only by other services. Streams cannot be passed between clients and the server, so these services will not run if they are invoked from a client.
The following IO services are available:
Service
Description
bytesToStream
Converts a byte[ ] to java.io.ByteArrayInputStream.
close
Closes an InputStream or a reader object and releases the resources.
createByteArray
Creates a byte array of the specified length.
mark
Marks the current position in the InputStream or reader object.
markSupported
Enables you to test whether your InputStream or reader object supports the mark and reset operations.
read
Reads a specified number of bytes from the InputStream and stores them into a buffer.
readAsString
Reads the data from a reader object and returns the contents as a string.
readerToString
Reads the data from a reader object and converts it to a string.
reset
Repositions the InputStream or the reader object to the position at the time the mark service was last invoked on the stream.
skip
Skips over and discards the specified number of bytes or characters from the input stream or a reader object.
streamToBytes
Creates a byte[ ] from data that is read from an InputStream.
streamToReader
Converts a java.io.InputStream to a java.io.Reader object.
streamToString
Creates a string from data that is read from an InputStream.
stringToReader
Converts a string object to a StringReader object.
stringToStream
Converts a string to a binary stream.
bytesToStream
Converts a byte[ ] to java.io.ByteArrayInputStream.
Input Parameters
bytes: byte[ ] - The byte array to convert.
length: String Optional - The maximum number of bytes to read and convert. If length is not specified, the default value for this parameter is the length of the input byte array.
offset: String Optional - The offset into the input byte array from which to start converting. If no value is specified, the default value is zero.
Output Parameters
stream: java.io.ByteArrayInputStream - An open InputStream created from the contents of the input bytes parameter.
Usage Notes
This service constructs the stream from the byte array using the constructor ByteArrayInputStream(byte[ ]). This constructor does not make a copy of the byte array, so any changes to bytes will be reflected in the data read from the stream.
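A minimal Java sketch of this sharing behavior (illustrative only): because ByteArrayInputStream does not copy the array, a mutation made after the stream is created is visible to subsequent reads.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class BytesToStreamDemo {
    public static void main(String[] args) {
        byte[] bytes = "abc".getBytes(StandardCharsets.US_ASCII);
        ByteArrayInputStream stream = new ByteArrayInputStream(bytes); // no copy is made

        bytes[0] = (byte) 'z';             // mutate the array after creating the stream
        System.out.println((char) stream.read()); // prints 'z', not 'a'
    }
}
```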
close
Closes an InputStream or a reader object and releases the resources.
Input Parameters
inputStream: java.io.InputStream. Optional - An open InputStream.
Note
You can use either inputStream or reader to specify the input object. If both the input parameters are provided, then both the objects will be closed.
reader: java.io.Reader. Optional - An open reader object.
Output Parameters
None.
Usage Notes
If the InputStream is already closed, invoking this service has no effect. However, leaving an InputStream open may cause errors that may not be recoverable. Use the close service to explicitly close the input stream when a service leaves it open.
createByteArray
Creates a byte array of the specified length.
Input Parameters
length: String. The length of the byte array to be created.
Output Parameters
bytes: Object - The new byte array.
Usage Notes
The read service reads data from an InputStream into a byte array. You can use this service to create the byte array. Invoking this service is the equivalent of the Java code new byte[length].
mark
Marks the current position in the InputStream or reader object. A subsequent call to reset repositions this stream at the last marked position. Marking and repositioning the input stream allows subsequent service calls to re-read the same bytes.
Input Parameters
stream: java.io.InputStream. Optional - The InputStream.
Note
You can use either stream or reader to specify the input object. If both stream and reader input parameters are provided, then both the objects will be marked.
reader: java.io.Reader. Optional - The reader object.
limit: String - The maximum number of bytes that can be read before the mark position becomes invalid. If more than this number of bytes are read from the stream after the mark service is invoked, the reset service will have no effect.
Output Parameters
stream: java.io.InputStream. Conditional - The InputStream. Returned only if the input parameter is stream.
reader: java.io.Reader. Conditional - The reader object. Returned only if the input parameter is reader.
Usage Notes
If the InputStream does not support the mark operation, invoking this service has no effect.
Either of the optional input parameters, stream or reader, is required.
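The mark and reset semantics follow java.io.InputStream. A small illustrative Java sketch, using a ByteArrayInputStream (which supports marking), shows how marked bytes can be re-read:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class MarkResetDemo {
    public static void main(String[] args) {
        ByteArrayInputStream stream =
                new ByteArrayInputStream("hello".getBytes(StandardCharsets.US_ASCII));

        if (stream.markSupported()) {   // true for ByteArrayInputStream
            stream.read();              // consume 'h'
            stream.mark(16);            // mark the current position; limit of 16 bytes
            stream.read();              // consume 'e'
            stream.read();              // consume 'l'
            stream.reset();             // reposition to the marked position
            System.out.println((char) stream.read()); // re-reads 'e'
        }
    }
}
```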
markSupported
Enables you to test whether your InputStream or reader object supports the mark and reset operations.
Input Parameters
stream: java.io.InputStream. Optional - The InputStream.
Note
You can use either stream or reader to specify the input object. If both stream and reader input parameters are provided, then the stream input parameter is ignored.
reader: java.io.Reader. Optional - The reader object.
Output Parameters
stream: java.io.InputStream. Conditional - The InputStream. Returned only if the input parameter is stream.
supported: String - Indicates whether the stream supports the mark and reset operations. A value of:
true indicates that the InputStream supports the mark and reset operations.
false indicates that the InputStream does not support the mark and reset operations.
reader: java.io.Reader. Conditional - The reader object. Returned only if the input parameter is reader.
Usage Notes
Either of the input parameters, stream or reader, is required.
read
Reads a specified number of bytes from the InputStream and stores them into a buffer.
Input Parameters
stream: Object - The InputStream object from which the service is to read bytes.
offset: String. Optional - The offset into the byte array in the buffer to which the data is written. If no value is supplied, this defaults to 0.
length: String. Optional - The maximum number of bytes to read from the InputStream. If no value is supplied, the default is the length of buffer. If the value supplied for length is greater than the length of buffer, an exception will be thrown.
buffer: Object - The buffer into which data is written. This is a byte array, which can be created from a flow service by invoking createByteArray.
Output Parameters
stream: Object - The InputStream. If any bytes were read from the stream, the stream is repositioned after the last byte read.
buffer: Object - The buffer into which data was written.
bytesRead: String - The number of bytes read from the InputStream and copied to buffer. If there is no more data because the end of the stream has been reached, bytesRead will be -1.
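The behavior mirrors java.io.InputStream.read(byte[ ], int, int). An illustrative Java sketch of the read-into-buffer loop, including the -1 end-of-stream convention:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class ReadLoopDemo {
    public static void main(String[] args) {
        ByteArrayInputStream stream =
                new ByteArrayInputStream("hello world".getBytes(StandardCharsets.US_ASCII));
        byte[] buffer = new byte[4];   // equivalent of createByteArray with length 4

        int bytesRead;
        // read returns the number of bytes copied into buffer, or -1 at end of stream
        while ((bytesRead = stream.read(buffer, 0, buffer.length)) != -1) {
            // only the first bytesRead bytes of buffer are valid for this pass
            System.out.println(new String(buffer, 0, bytesRead, StandardCharsets.US_ASCII));
        }
    }
}
```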
readAsString
Reads the data from a reader object and returns the contents as a string.
Input Parameters
reader: java.io.Reader - The reader object.
length: String - The maximum number of characters to read from the input reader object.
Output Parameters
reader: java.io.Reader - The reader object.
lengthRead: String - The number of characters read from the input reader object. If there is no more data because the end of the stream has been reached, lengthRead will be -1.
value: String - The data read from the reader object, or null if end of stream has been reached.
Usage Notes
The readAsString service does not automatically close the reader object. To close the reader, use the close service.
readerToString
Reads the data from a reader object and converts it to a string.
Input Parameters
reader: java.io.Reader - The reader object.
Output Parameters
string: String - Data read from the reader object.
Usage Notes
The readerToString service does not automatically close the reader object. To close the reader, use the close service.
reset
Repositions the InputStream or the reader object to the position at the time the mark service was last invoked on the stream.
Input Parameters
stream: java.io.InputStream. Optional - The InputStream.
Note
You use either stream or reader to specify the input object. If both stream and reader input parameters are provided, then both the objects will be reset.
reader: java.io.Reader. Optional - The reader object.
Output Parameters
stream: java.io.InputStream. Conditional - The InputStream. Returned only if the input parameter is stream.
reader: java.io.Reader. Conditional - The reader object. Returned only if the input parameter is reader.
Usage Notes
If the InputStream does not support the reset operation, invoking this service has no effect.
Either of the input parameters, stream or reader, is required.
skip
Skips over and discards the specified number of bytes or characters from the input stream or a reader object.
Input Parameters
stream: java.io.InputStream. Optional - The InputStream.
Note
You can use either stream or reader to specify the input object. If both stream and reader input parameters are provided, then data is skipped on both the stream and the reader object.
reader: java.io.Reader. Optional - The reader object.
length: String - The number of bytes or characters to skip.
Output Parameters
stream: java.io.InputStream. Conditional - The InputStream. Returned only if the input parameter is stream.
reader: java.io.Reader. Conditional - The reader object. Returned only if the input parameter is reader.
bytesSkipped: String. Conditional - The actual number of bytes that were skipped. Returned only if the input parameter is stream.
charactersSkipped: String. Conditional - The number of characters that were skipped. Returned only if the input parameter is reader.
Usage Notes
The skip service uses the Java method skip, which might skip a smaller number of bytes, possibly zero (0). This can happen due to conditions such as reaching the end of file before n bytes have been skipped. For more information about the skip method, see the Java documentation on the InputStream class.
Either of the optional input parameters, stream or reader, is required.
If both stream and reader input parameters are specified and an exception occurs while using the stream object, then the operations are not performed on the reader object either.
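Because skip may skip fewer bytes than requested, callers typically loop until the desired count is reached. An illustrative Java sketch of that pattern:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class SkipLoopDemo {
    public static void main(String[] args) {
        ByteArrayInputStream stream =
                new ByteArrayInputStream("0123456789".getBytes(StandardCharsets.US_ASCII));

        long toSkip = 6;
        long skipped = 0;
        // skip may skip fewer bytes than requested, so loop until the full
        // count is skipped or no further progress is possible
        while (skipped < toSkip) {
            long n = stream.skip(toSkip - skipped);
            if (n <= 0) {
                break; // end of stream (or no progress): stop looping
            }
            skipped += n;
        }
        System.out.println((char) stream.read()); // next unread byte: '6'
    }
}
```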
streamToBytes
Creates a byte[ ] from data that is read from an InputStream.
Input Parameters
stream: java.io.InputStream - The InputStream that you want to convert.
Output Parameters
bytes: byte[ ] - The bytes read from stream.
Usage Notes
This service reads all of the bytes from stream until the end of file is reached, and then it closes the InputStream.
streamToReader
Converts a java.io.InputStream to a java.io.Reader object.
Input Parameters
inputStream: java.io.InputStream - The InputStream to convert to a reader object.
encoding: String. Optional - Name of a registered, IANA character set (for example, ISO-8859-1). If you specify an unsupported encoding, the system throws an exception. If no value is specified or if the encoding is set to autoDetect, the default operating system encoding is used.
Output Parameters
reader: java.io.Reader - The reader object read from inputStream.
streamToString
Creates a string from data that is read from an InputStream.
Input Parameters
inputStream: java.io.InputStream - The InputStream to convert to a string.
encoding: String Optional - Name of a registered, IANA character set (for example, ISO-8859-1). If you specify an unsupported encoding, the system throws an exception. If no value is specified, the encoding will be UTF-8.
Output Parameters
string: String - Data read from inputStream and converted to a string.
stringToReader
Converts a string object to a StringReader object.
Input Parameters
string: String - The string to convert to a StringReader object.
Output Parameters
reader: java.io.StringReader - The StringReader object.
stringToStream
Converts a string to a binary stream.
Input Parameters
string: String - The string object to be converted.
encoding: String - Optional. Name of a registered, IANA character set, for example, ISO-8859-1. If you specify an unsupported encoding, the system throws an exception. If no value is specified, the encoding will be UTF-8.
Output Parameters
inputStream: java.io.ByteArrayInputStream - An open InputStream created from the contents of string.
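The stringToStream and streamToString pair behave like the corresponding Java charset conversions. A small illustrative round-trip sketch (not the service itself), assuming UTF-8 on both sides:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class StringStreamRoundTrip {
    public static void main(String[] args) throws IOException {
        // stringToStream: encode the string and wrap the bytes in an open InputStream
        String original = "café";
        InputStream stream =
                new ByteArrayInputStream(original.getBytes(StandardCharsets.UTF_8));

        // streamToString: read every byte and decode with the same encoding
        String roundTripped = new String(stream.readAllBytes(), StandardCharsets.UTF_8);
        System.out.println(original.equals(roundTripped)); // true: encodings match
    }
}
```

If the two encodings differ, non-ASCII characters such as the é above are corrupted, which is why the encoding parameter matters on both conversions.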
JSON Services
Use JSON services to convert JSON content into a document and to convert a document into JSON content.
The following JSON services are available:
Service
Description
closeArrayIterator
Closes the iteration. The iterator object used in an iteration cannot be reused after this service runs.
documentToJSONBytes
Converts a document to JSON bytes (byte array).
documentToJSONStream
Converts a document to a JSON stream.
documentToJSONString
Converts a document to a JSON string.
getArrayIterator
Returns a batch iterator object.
getNextBatch
Gets the next batch of array elements by parsing the array paths in the iterator object returned by the getArrayIterator service.
jsonBytesToDocument
Converts JSON content in bytes (byte array) to a document.
jsonStreamToDocument
Converts content from the JSON content stream to a document.
jsonStringToDocument
Converts content from the JSON string to a document.
closeArrayIterator
Closes the iteration. The iterator object used in an iteration cannot be reused after this service runs.
Input Parameters
iterator: Object - The iterator object returned by the getNextBatch service.
Output Parameters
None.
documentToJSONBytes
Converts a document to JSON bytes (byte array).
Input Parameters
document: Document - The document to be converted to JSON bytes (byte array).
encodeDateAs: String. Optional. Specifies how java.util.Date instances in the document are encoded in the returned JSON.
long to encode java.util.Date instances as timestamps, specifically the number of milliseconds since Jan 1, 1970 00:00:00.
ISO8601 to encode java.util.Date instances as strings in a standard ISO format of: YYYY-MM-DD’T’HH:mm:ss.sssZ.
ISO_LOCAL_DATE to encode java.util.Date instances as strings in a standard ISO format without an offset.
ISO_DATE to encode java.util.Date instances as strings in a standard ISO format with the offset if available.
ISO_ZONED_DATE_TIME to encode java.util.Date instances as strings in a standard ISO format with the offset and zone if available.
ISO_INSTANT to encode java.util.Date instances as strings in a standard ISO format in UTC.
BASIC_ISO_DATE to encode java.util.Date instances as strings in a standard ISO format without an offset.
RFC_1123_DATE_TIME to encode java.util.Date instances as strings in the RFC 1123 format.
null to use the dateEncoding setting already in effect for the HTTP client making the request.
encodeStringAsNumber: String. Optional.
true, to convert all the numbers in the string format to numbers by removing the quotes.
false, to retain all the numbers without converting.
The default value is false.
encodeStringAsBoolean: String. Optional.
true, to convert all the boolean values in the string format to Boolean by removing the quotes.
false, to retain all the boolean values without converting.
The default value is false.
Output Parameters
jsonBytes: Object - JSON bytes (byte array) resulting from the conversion of a document.
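Most of the ISO options above correspond to java.time.format.DateTimeFormatter constants of the same names. The following illustrative Java sketch shows what a few of these formats produce for a fixed UTC date; the mapping to DateTimeFormatter is an assumption made for illustration only, since the service performs the encoding internally.

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class DateEncodingDemo {
    public static void main(String[] args) {
        // A fixed UTC date used for illustration
        ZonedDateTime date = ZonedDateTime.of(2024, 1, 15, 9, 30, 0, 0, ZoneOffset.UTC);

        // long: milliseconds since Jan 1, 1970 00:00:00
        System.out.println(date.toInstant().toEpochMilli());               // 1705311000000

        // ISO_LOCAL_DATE: date only, without an offset
        System.out.println(DateTimeFormatter.ISO_LOCAL_DATE.format(date)); // 2024-01-15

        // ISO_INSTANT: the instant in UTC
        System.out.println(DateTimeFormatter.ISO_INSTANT.format(date));    // 2024-01-15T09:30:00Z

        // RFC_1123_DATE_TIME
        System.out.println(
                DateTimeFormatter.RFC_1123_DATE_TIME.format(date));        // Mon, 15 Jan 2024 09:30:00 GMT
    }
}
```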
documentToJSONStream
Converts a document to a JSON stream.
Input Parameters
document: Document - The document to be converted to a JSON stream.
encodeDateAs: String. Optional. Specifies how java.util.Date instances in the document are encoded in the returned JSON.
long to encode java.util.Date instances as timestamps, specifically the number of milliseconds since Jan 1, 1970 00:00:00.
ISO8601 to encode java.util.Date instances as strings in a standard ISO format of: YYYY-MM-DD’T’HH:mm:ss.sssZ.
ISO_LOCAL_DATE to encode java.util.Date instances as strings in a standard ISO format without an offset.
ISO_DATE to encode java.util.Date instances as strings in a standard ISO format with the offset if available.
ISO_ZONED_DATE_TIME to encode java.util.Date instances as strings in a standard ISO format with the offset and zone if available.
ISO_INSTANT to encode java.util.Date instances as strings in a standard ISO format in UTC.
BASIC_ISO_DATE to encode java.util.Date instances as strings in a standard ISO format without an offset.
RFC_1123_DATE_TIME to encode java.util.Date instances as strings in the RFC 1123 format.
null to use the dateEncoding setting already in effect for the HTTP client making the request.
encodeStringAsNumber: String. Optional.
true, to convert all the numbers in the string format to numbers by removing the quotes.
false, to retain all the numbers without converting.
The default value is false.
encodeStringAsBoolean: String. Optional.
true, to convert all the boolean values in the string format to Boolean by removing the quotes.
false, to retain all the boolean values without converting.
The default value is false.
Output Parameters
jsonStream: java.io.InputStream - JSON stream resulting from the conversion of a document.
documentToJSONString
Converts a document to a JSON string.
Input Parameters
document: Document - The document to be converted to a JSON string.
prettyPrint: String - Formats the jsonString output parameter for human readability by adding carriage returns and indentation to the JSON content.
Set to:
true to format the jsonString output field for human readability.
false to leave the jsonString output field in its unformatted state. The service does not add any carriage returns or indentation to the JSON content.
encodeDateAs: String. Optional. Specifies how java.util.Date instances in the document are encoded in the returned JSON.
long to encode java.util.Date instances as timestamps, specifically the number of milliseconds since Jan 1, 1970 00:00:00.
ISO8601 to encode java.util.Date instances as strings in a standard ISO format of: YYYY-MM-DD’T’HH:mm:ss.sssZ.
ISO_LOCAL_DATE to encode java.util.Date instances as strings in a standard ISO format without an offset.
ISO_DATE to encode java.util.Date instances as strings in a standard ISO format with the offset if available.
ISO_ZONED_DATE_TIME to encode java.util.Date instances as strings in a standard ISO format with the offset and zone if available.
ISO_INSTANT to encode java.util.Date instances as strings in a standard ISO format in UTC.
BASIC_ISO_DATE to encode java.util.Date instances as strings in a standard ISO format without an offset.
RFC_1123_DATE_TIME to encode java.util.Date instances as strings in the RFC 1123 format.
null to use the dateEncoding setting already in effect for the HTTP client making the request.
encodeStringAsNumber: String. Optional.
true, to convert all the numbers in the string format to numbers by removing the quotes.
false, to retain all the numbers without converting.
The default value is false.
encodeStringAsBoolean: String. Optional.
true, to convert all the boolean values in the string format to Boolean by removing the quotes.
false, to retain all the boolean values without converting.
The default value is false.
Output Parameters
jsonString: Object - JSON string resulting from the conversion of a document.
getArrayIterator
Returns a batch iterator object.
Input Parameters
jsonStream: Object - JSON content to be converted to a document (an IData object).
arrayPaths: String List - The paths of the arrays to be parsed in the JSON input stream. Only the array elements from the paths mentioned in this parameter are considered even though the JSON stream might have more data. For example, to retrieve the elements of a nested toppingA1 array, provide the array path as /topping/0/toppingA/0/toppingA1.
This parameter must have only array paths. You must not enter individual array elements or other fields. Array paths must follow the JSON pointer syntax.
decodeRealAsDouble: String. Optional - Converts real numbers from jsonStream to either a Float or Double Java wrapper type. Set to:
true to convert real numbers to Double Java wrapper type. This is the default.
false to convert real numbers to Float Java wrapper type.
Note
The decodeRealAsDouble parameter overrides the value specified by the decodeRealAsDouble server configuration parameter. If no value is supplied for decodeRealAsDouble, IBM webMethods Integration uses the value set for the decodeRealAsDouble server configuration parameter.
decodeIntegerAsLong: String. Optional - Converts integers from jsonStream to either a Long or Integer Java wrapper type. Set to:
true to convert integers to Long Java wrapper types. This is the default.
false to convert integers to Integer Java wrapper types.
Note
The decodeIntegerAsLong parameter overrides the value specified by the decodeIntegerAsLong server configuration parameter. If no value is supplied for decodeIntegerAsLong, IBM webMethods Integration uses the value specified in the decodeIntegerAsLong server configuration parameter.
decodeRealAsString: String. Optional - Converts real numbers in the jsonStream to String. Set to:
true to convert real numbers to String.
false to not convert real numbers to String. The real numbers are then converted to either Float or Double Java wrapper type depending on the value specified in decodeRealAsDouble. This is the default value.
Note
The decodeRealAsString parameter overrides the value specified by the decodeRealAsString server configuration parameter. If no value is supplied for decodeRealAsString, IBM webMethods Integration uses the value set in the decodeRealAsString server configuration parameter.
unescapeSpecialChars: String. Optional - Controls whether IBM webMethods Integration unescapes the special characters ‘\n’, ‘\r’, ‘\t’, ‘\b’, ‘\f’, ‘\\’, ‘\”’ while parsing JSON documents. Set to:
true to unescape these special characters (that is, ‘\n’ will be replaced with new line, similarly other characters will also be replaced) in the output document. This is the default.
false to keep these characters as is in the output document.
Note
The unescapeSpecialChars parameter overrides the value specified by the unescapeSpecialChars server configuration parameter. If no value is supplied for unescapeSpecialChars, IBM webMethods Integration uses the value specified in the unescapeSpecialChars server configuration parameter.
Output Parameters
iterator: Object - A batch iterator object that has the list of arrays to be parsed in the JSON input stream. This object is passed as input to the getNextBatch service.
Usage Notes
None.
getNextBatch
Gets the next batch of array elements by parsing the array paths in the iterator object returned by the getArrayIterator service. This service returns the array elements in batches based on the batch size provided in the input. The batch size can vary across invocations of this service. A batch is a set of elements that can be retrieved from an array path at once, based on the batch size. To retrieve the remaining elements in the array path or elements from the next array path in the iterator, invoke the service in a loop until there are no more array paths to iterate.
Input Parameters
iterator: Object - The iterator object returned by the getArrayIterator service.
batchSize: Object - Number of array elements that the service should retrieve in one batch.
Note
This value must be less than or equal to the value of the iterator.maxBatchSize server configuration property, and greater than or equal to the value of the iterator.minBatchSize server configuration property. Otherwise, the service throws an error.
Output Parameters
batch: Document - IData object that contains the following keys.
arrayPath: String - The path of the array elements retrieved in a batch. Since the service can iterate over multiple arrays, each batch contains the array path parsed in the current iteration to help you identify the array to which the elements in the batch belong.
documents [ ]: Document List - If array elements in the batch are JSON objects, they are returned as documents.
values [ ]: Object List - If array elements in the batch are not JSON objects, they are returned as values.
iterationStatus: Document - IData object that contains the following keys.
hasNext: String - Indicates whether there are more array elements in the iterator beyond this batch, which the service can retrieve. The value can be true or false.
iteration: String - Indicates the current iteration. It starts from 1. If the service runs N times to get all the array elements, then this value is N.
numberOfElementsInBatch: String - Indicates the number of elements in the current batch. This value is the same as batchSize; however, it can be less than the batch size for the last batch. For example, if there are 7 elements in an array and the batch size is 5, then the last batch will have only 2 elements.
totalElementsParsed: String - Indicates the number of array elements parsed until the current iteration. For example, if the number of elements parsed in the first iteration is 10, second iteration is 20, and third iteration is 7, then in the third iteration, the value of this parameter is 37.
Usage Notes
The getNextBatch service completes the retrieval of elements from one array in the iterator and starts retrieving from the next array that matches in the subsequent iteration. This is explained in the following example:
Suppose you want the service to parse a JSON file with 2 arrays, for example, A and B with 6 and 2 elements respectively. Set the batchSize input parameter to 5 and invoke the service in a loop until the service returns the hasNext parameter as false. In the first batch of the output, the first 5 elements of the array A are returned and in the next batch, only the last element of the array A is returned, even though the batch size is 5. At this point, the hasNext parameter is true because array B is not parsed yet. In the next batch, the service returns both the elements of array B. Since there are no more elements left either in the array A or B, the hasNext parameter becomes false.
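The batch counting in the example above can be sketched as plain Java. The loop below is an illustration of the counting rules only, not the service API, and array contents are ignored:

```java
import java.util.Arrays;
import java.util.List;

public class BatchCountingDemo {
    public static void main(String[] args) {
        // Array A has 6 elements, array B has 2, as in the example above
        List<Integer> arrayLengths = Arrays.asList(6, 2);
        int batchSize = 5;
        int iteration = 0;
        int totalElementsParsed = 0;

        for (int a = 0; a < arrayLengths.size(); a++) {
            int length = arrayLengths.get(a);
            int index = 0;
            while (index < length) {                 // a batch never spans two arrays
                int numberOfElementsInBatch = Math.min(batchSize, length - index);
                index += numberOfElementsInBatch;
                iteration++;
                totalElementsParsed += numberOfElementsInBatch;
                boolean hasNext = index < length || a < arrayLengths.size() - 1;
                System.out.println("iteration=" + iteration
                        + " numberOfElementsInBatch=" + numberOfElementsInBatch
                        + " totalElementsParsed=" + totalElementsParsed
                        + " hasNext=" + hasNext);
            }
        }
        // Batches: 5 elements of A (hasNext=true), 1 element of A (hasNext=true),
        // then 2 elements of B (hasNext=false)
    }
}
```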
Guidelines
If two array paths overlap, the path that the service finds first in the input JSON stream is parsed and the other path is ignored. For example, if you provide arrayPath[0] as /a/b and arrayPath[1] as /a/b/0/c, arrayPath[1] is ignored because the paths overlap, and the first path retrieves all the elements.
If an invalid array path is provided in the input, the getNextBatch service does not return any result on parsing the array.
If the arrayPaths input parameter is not set, then the getNextBatch service parses the input assuming that the input stream has a single anonymous array at the root level.
If the arrayPaths input parameter is set, then it cannot contain null or empty elements.
If one or more array paths are invalid, then the getArrayIterator service still creates an iterator object containing these paths. However, the getNextBatch service ignores the invalid paths.
If all the paths are invalid, the getNextBatch service returns the documents and values parameters as null, the hasNext parameter as false, and all other output parameters as 0.
jsonBytesToDocument
Converts JSON content in bytes (byte array) to a document.
Input Parameters
jsonBytes: java.lang.Byte[ ] - JSON content in bytes (byte array) to convert to a document.
decodeRealAsDouble: String Optional - Converts real numbers from jsonBytes to either a Float or Double Java wrapper type.
Set to:
true to convert real numbers to Double Java wrapper types.
false to convert real numbers to Float Java wrapper types.
Default value is true.
decodeIntegerAsLong: String Optional - Converts integers from jsonBytes to either a Long or Integer Java wrapper type.
Set to:
true to convert integers to Long Java wrapper types.
false to convert integers to Integer Java wrapper types.
Default value is true.
decodeRealAsString: String. Optional. Converts real numbers in jsonBytes to String. Set to:
true to convert real numbers to String.
false to not convert real numbers to String. The real numbers are then converted to either Float or Double Java wrapper type depending on the value specified in decodeRealAsDouble.
Default value is false.
decodeNullRootAsEmpty: String. Optional. Converts a null value that IBM webMethods Integration retrieves from JSON content to either IData or empty IData. Set to:
true to convert the null value to empty IData. The subsequent encoding of empty IData creates a JSON text of “{}”. This JSON content is different from the original JSON content (null) as the original null value gets converted to JSON text of “{}”.
false to convert the null value to IData.
Default value is false.
unescapeSpecialChars: String. Optional. Controls whether IBM webMethods Integration unescapes the special characters ‘\n’, ‘\r’, ‘\t’, ‘\b’, ‘\f’, ‘\\’, ‘\”’ while parsing JSON documents. Set to:
true to unescape these special characters (that is, ‘\n’ will be replaced with new line, similarly other characters will also be replaced) in the output document.
false to keep these characters as is in the output document.
Default value is true.
Output Parameters
document: Document - Document resulting from the conversion of jsonBytes.
jsonStreamToDocument
Converts content from the JSON content stream to a document. The permissible size of the content stream is based on your tenancy.
Input Parameters
jsonStream: java.io.InputStream - JSON content in an input stream to convert to a document.
decodeRealAsDouble: String Optional - Converts real numbers from jsonStream to either a Float or Double Java wrapper type.
Set to:
true to convert real numbers to Double Java wrapper types.
false to convert real numbers to Float Java wrapper types.
Default value is true.
decodeIntegerAsLong: String Optional - Converts integers from jsonStream to either a Long or Integer Java wrapper type.
Set to:
true to convert integers to Long Java wrapper types.
false to convert integers to Integer Java wrapper types.
Default value is true.
decodeRealAsString: String. Optional. Converts real numbers in the jsonStream to String. Set to:
true to convert real numbers to String.
false to not convert real numbers to String. The real numbers are then converted to either Float or Double Java wrapper type depending on the value specified in decodeRealAsDouble.
Default value is false.
decodeNullRootAsEmpty: String. Optional. Converts a null value that IBM webMethods Integration retrieves from JSON content to either IData or empty IData. Set to:
true to convert the null value to an empty IData. Encoding that empty IData later produces the JSON text “{}”, which differs from the original JSON content (null).
false to convert the null value to IData.
Default value is false.
unescapeSpecialChars: String. Optional. Controls whether IBM webMethods Integration unescapes the special characters ‘\n’, ‘\r’, ‘\t’, ‘\b’, ‘\f’, ‘\\’, ‘\"’ while parsing JSON documents. Set to:
true to unescape these special characters (that is, ‘\n’ will be replaced with new line, similarly other characters will also be replaced) in the output document.
false to keep these characters as is in the output document.
Default value is true.
Output Parameters
document: Document - Document resulting from the conversion of jsonStream.
jsonStringToDocument
Converts content from the JSON content string to a document.
Input Parameters
jsonString: String - JSON content string to convert to a document.
decodeRealAsDouble: String. Optional. Converts real numbers from jsonString to either a Float or Double Java wrapper type. Set to:
true to convert real numbers to Double Java wrapper types.
false to convert real numbers to Float Java wrapper types.
Default value is true.
decodeIntegerAsLong: String. Optional. Converts integers from jsonString to either a Long or Integer Java wrapper type. Set to:
true to convert integers to Long Java wrapper types.
false to convert integers to Integer Java wrapper types.
Default value is true.
decodeRealAsString: String. Optional. Converts real numbers in the jsonString to String. Set to:
true to convert real numbers to String.
false to not convert real numbers to String. The real numbers are then converted to either Float or Double Java wrapper type depending on the values specified in decodeRealAsDouble.
Default value is false.
decodeNullRootAsEmpty: String. Optional. Converts a null value that IBM webMethods Integration retrieves from JSON content to either IData or empty IData. Set to:
true to convert the null value to an empty IData. Encoding that empty IData later produces the JSON text “{}”, which differs from the original JSON content (null).
false to convert the null value to IData.
Default value is false.
unescapeSpecialChars: String. Optional. Controls whether IBM webMethods Integration unescapes the special characters ‘\n’, ‘\r’, ‘\t’, ‘\b’, ‘\f’, ‘\\’, ‘\"’ while parsing JSON documents. Set to:
true to unescape these special characters (that is, ‘\n’ will be replaced with new line, similarly other characters will also be replaced) in the output document.
false to keep these characters as is in the output document.
Default value is true.
Output Parameters
document: Document - Document resulting from the conversion of jsonString.
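The decode flags for these services select among standard Java wrapper types. The following is a minimal plain-Java sketch of the choices the flags control; the class and method names are illustrative only, not the webMethods API.

```java
// Illustrative sketch (not the webMethods API) of the wrapper types
// that the JSON decode flags choose between.
public class JsonDecodeFlags {
    // decodeIntegerAsLong=true -> Long (the default); false -> Integer
    public static Number decodeInteger(String digits, boolean decodeIntegerAsLong) {
        return decodeIntegerAsLong ? Long.valueOf(digits) : Integer.valueOf(digits);
    }

    // decodeRealAsString=true keeps the String form; otherwise
    // decodeRealAsDouble=true -> Double (the default); false -> Float
    public static Object decodeReal(String digits, boolean decodeRealAsString,
                                    boolean decodeRealAsDouble) {
        if (decodeRealAsString) return digits;
        return decodeRealAsDouble ? Double.valueOf(digits) : Float.valueOf(digits);
    }

    public static void main(String[] args) {
        System.out.println(decodeInteger("42", true).getClass().getSimpleName());       // Long
        System.out.println(decodeReal("3.14", false, true).getClass().getSimpleName()); // Double
        System.out.println(decodeReal("3.14", true, true).getClass().getSimpleName());  // String
    }
}
```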
List Services
Use List services to retrieve, replace, or add elements in an Object List, Document List, or String List, including converting String Lists to Document Lists.
The following List services are available:
Service
Description
addItemToVector
Adds an item or a list of items to a java.util.Vector object.
appendToDocumentList
Adds documents to a document list.
appendToStringList
Adds Strings to a String list.
sizeOfList
Returns the number of elements in a list.
stringListToDocumentList
Converts a String list to a document list.
vectorToArray
Converts a java.util.Vector object to an array.
addItemToVector
Adds an item or a list of items to a java.util.Vector object.
Input Parameters
vector: java.util.Vector Optional - The vector object to which you want to add an item or list of items. If no value is specified, the service creates a new java.util.Vector object to which the item(s) will be added.
item: Object Optional - Item to be added to the vector object.
Note
You can use either item or itemList to specify the input object. If both item and itemList input parameters are specified, the item as well as the list of items will be added to the vector object.
itemList: Object[ ] Optional - List of items to be added to the vector object.
addNulls: String Optional - Specifies whether a null item can be added to the vector object. Set to:
false to prevent null values from being added to the vector object. This is the default.
true to allow null values to be added to the vector object.
Output Parameters
vector: java.util.Vector - Updated vector object with the list of items added or an empty vector in case no items are added.
Usage Notes
Either of the optional input parameters, item or itemList, is required.
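The behavior described above can be sketched in plain Java; the class and method below are illustrative, not the actual service implementation, and the null-item handling is an assumption based on the addNulls description.

```java
import java.util.List;
import java.util.Vector;

public class VectorUtil {
    // Illustrative sketch of addItemToVector: adds item and then each element
    // of itemList; addNulls=false skips null values. Passing a null item with
    // addNulls=true is treated as an intentional null entry.
    public static Vector<Object> addItemToVector(Vector<Object> vector, Object item,
                                                 List<?> itemList, boolean addNulls) {
        Vector<Object> out = (vector != null) ? vector : new Vector<>(); // new Vector if none given
        if (item != null || addNulls) out.add(item);
        if (itemList != null) {
            for (Object o : itemList) {
                if (o != null || addNulls) out.add(o);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Vector<Object> v = addItemToVector(null, "a", List.of("b", "c"), false);
        System.out.println(v); // [a, b, c]
    }
}
```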
appendToDocumentList
Adds documents to a document list.
Input Parameters
toList: Document List Optional - List to which you want to append documents. If you do not specify toList, the service creates a new list.
fromList: Document List Optional - Documents you want to append to the end of toList.
fromItem: Document Optional - Document you want to append to the end of toList. If you specify both fromList and fromItem, the service adds the document specified in fromItem after the documents in fromList.
Output Parameters
toList: Document List - The toList document list with the documents in fromList and fromItem appended to it.
Usage Notes
The documents contained in fromList and fromItem are not actually appended as entries to toList. Instead, references to the documents in fromList and fromItem are appended as entries to toList. Consequently, any changes made to the documents in fromList and fromItem also affect the resulting toList.
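This reference semantics can be demonstrated in plain Java, where a document behaves like a map; the class name is illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReferenceDemo {
    // Appending adds a reference, not a copy: mutating the source document
    // afterwards is visible through the list.
    public static Object statusAfterMutation() {
        Map<String, Object> doc = new HashMap<>();
        doc.put("status", "new");

        List<Map<String, Object>> toList = new ArrayList<>();
        toList.add(doc);              // a reference to doc, not a copy

        doc.put("status", "changed"); // mutate the original document
        return toList.get(0).get("status");
    }

    public static void main(String[] args) {
        System.out.println(statusAfterMutation()); // changed
    }
}
```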
appendToStringList
Adds Strings to a String list.
Input Parameters
toList: String List Optional - List to which you want to append Strings. If the value of toList is null, a null pointer exception error is thrown. If you do not specify toList, the service creates a new list.
fromList: String List Optional - List of Strings to add to toList. Strings are added after the entries of toList.
fromItem: String Optional - String you want to append to the end of toList. If you specify both fromList and fromItem, the service adds the String specified in fromItem after the Strings specified in fromList.
Output Parameters
toList: String List - The toList String list with the Strings from fromList and fromItem appended to it.
Usage Notes
The Strings contained in fromList and fromItem are not actually appended as entries to toList. Instead, references to the Strings in fromList and fromItem are appended as entries to toList. Consequently, any changes made to the Strings in fromList and fromItem also affect the resulting toList.
sizeOfList
Returns the number of elements in a list.
Input Parameters
fromList: Document List, String List, or Object List Optional - List whose size you want to discover. If fromList is not specified, the service returns a size of 0.
Output Parameters
size: String - Number of entries in fromList.
fromList: Document List, String List, or Object List - Original list.
Usage Notes
For example, if fromList consists of:
fromList[0] = “a“
fromList[1] = “b“
fromList[2] = “c“
The result would be:
size=”3“
stringListToDocumentList
Converts a String list to a document list.
Input Parameters
fromList: String List Optional - List of Strings (a String[ ]) that you want to convert to a list of documents. If fromList is not specified, the service returns a zero length array for toList.
key: String Optional - Key name to use in the generated document list.
Output Parameters
toList: Document List - Resulting document list.
Usage Notes
Creates a document list containing one document for each element in the fromList. Each document will contain a single String element named key.
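The conversion can be sketched in plain Java with one map per document; the class and method names are illustrative, not the service implementation.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class StringListToDocList {
    // Illustrative sketch: one single-entry document per String, keyed by `key`.
    public static List<Map<String, Object>> toDocumentList(String[] fromList, String key) {
        List<Map<String, Object>> toList = new ArrayList<>();
        if (fromList == null) return toList;      // zero-length result when no input
        for (String s : fromList) {
            Map<String, Object> doc = new LinkedHashMap<>();
            doc.put(key, s);
            toList.add(doc);
        }
        return toList;
    }

    public static void main(String[] args) {
        System.out.println(toDocumentList(new String[] {"red", "green"}, "color"));
        // [{color=red}, {color=green}]
    }
}
```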
vectorToArray
Converts a java.util.Vector object to an array.
Input Parameters
vector: java.util.Vector - The object to be converted to an array.
stronglyType: String Optional - If this option is specified, the service expects all items in the vector to have the same Java type as the first non-null item in the vector. If the service detects an item of a different type, the service fails with an error. Set to:
false to convert the vector to an object array. This is the default.
true to convert the vector to a strongly typed array holding the same type of objects.
Output Parameters
array: Object[ ] - Converted object array.
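The two conversion modes correspond to the two forms of java.util.Vector.toArray in plain Java; the demo class below is illustrative. With mixed element types, the typed form fails with an ArrayStoreException, analogous to the error the service reports.

```java
import java.util.List;
import java.util.Vector;

public class VectorToArrayDemo {
    // stronglyType=true analogue: a typed array of the element class.
    public static String[] typedArray(Vector<String> v) {
        return v.toArray(new String[0]);
    }

    public static void main(String[] args) {
        Vector<String> v = new Vector<>(List.of("a", "b"));
        Object[] plain = v.toArray();      // stronglyType=false analogue: Object[]
        String[] typed = typedArray(v);
        System.out.println(plain.length + " " + typed[1]); // 2 b
    }
}
```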
Math Services
Use Math services to perform mathematical operations on string-based numeric values. Services that operate on integer values use Java’s long data type (64-bit, two’s complement). Services that operate on float values use Java’s double data type (64-bit IEEE 754). If extremely precise calculations are critical to your application, you should write your own Java services to perform math functions.
The following Math services are available:
Service
Description
absoluteValue
Returns the absolute value of the input number.
addFloatList
Adds a list of floating point numbers (represented in a string list) and returns the sum.
addFloats
Adds one floating point number (represented as a String) to another and returns the sum.
addIntList
Adds a list of integers (represented in a String list) and returns the sum.
addInts
Adds one integer (represented as a String) to another and returns the sum.
addObjects
Adds one java.lang.Number object to another and returns the sum.
divideFloats
Divides one floating point number (represented as a String) by another (num1/num2) and returns the quotient.
divideInts
Divides one integer (represented as a String) by another (num1/num2) and returns the quotient.
divideObjects
Divides one java.lang.Number object by another (num1/num2) and returns the quotient.
max
Returns the largest number from a list of numbers.
min
Returns the smallest number from a list of numbers.
multiplyFloatList
Multiplies a list of floating point numbers (represented in a String list) and returns the product.
multiplyFloats
Multiplies one floating point number (represented as a String) by another and returns the product.
multiplyIntList
Multiplies a list of integers (represented in a String list) and returns the product.
multiplyInts
Multiplies one integer (represented as a String) by another and returns the product.
multiplyObjects
Multiplies one java.lang.Number object by another and returns the product.
randomDouble
Returns the next pseudorandom, uniformly distributed double between 0.0 and 1.0.
roundNumber
Returns a rounded number.
subtractFloats
Subtracts one floating point number (represented as a String) from another and returns the difference.
subtractInts
Subtracts one integer (represented as a String) from another and returns the difference.
subtractObjects
Subtracts one java.lang.Number object from another and returns the difference.
toNumber
Converts a string to numeric data type.
absoluteValue
Returns the absolute value of the input number.
Input Parameters
num: String - Number whose absolute value is to be returned.
Output Parameters
positiveNumber: String - Absolute value of the input number.
addFloatList
Adds a list of floating point numbers (represented in a string list) and returns the sum.
Input Parameters
numList: String List - Numbers (floating point numbers represented in a string list) to add.
Output Parameters
value: String - Sum of the numbers in numList. If a sum cannot be produced, value contains one of the following:
Infinity - The computation produces a positive value that overflows the representable range of a float type.
-Infinity- The computation produces a negative value that overflows the representable range of a float type.
0.0 - The computation produces a value that underflows the representable range of a float type (for example, adding a number to infinity).
NaN - The computation produces a value that cannot be represented as a number (for example, any operation that uses NaN as input, such as 10.0 + NaN = NaN).
Usage Notes
Make sure the strings that are passed to the service in numList are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
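The 357 in the example can arise when a grouping-aware parser reads the comma in "1,23" as a thousands separator, yielding 123. A plain-Java illustration (not the service's actual parser; the class name is illustrative):

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class LocaleNeutralDemo {
    // A parser that treats ',' as a grouping (thousands) separator reads
    // "1,23" as 123, which is how 1,23 + 2,34 can become 357.
    public static long parseWithGrouping(String s) {
        try {
            return NumberFormat.getInstance(Locale.US).parse(s).longValue();
        } catch (ParseException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        double ok = Double.parseDouble("1.23") + Double.parseDouble("2.34"); // locale-neutral, ~3.57
        long wrong = parseWithGrouping("1,23") + parseWithGrouping("2,34");  // 123 + 234 = 357
        System.out.println(ok + " vs " + wrong);
    }
}
```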
addFloats
Adds one floating point number (represented as a String) to another and returns the sum.
Input Parameters
num1: String - Number to add.
num2: String - Number to add.
precision: String Optional - Number of decimal places to which the sum will be rounded. The default value is null.
Output Parameters
value: String - Sum of the numbers in num1 and num2. If a sum cannot be produced, value contains one of the following:
Infinity - The computation produces a positive value that overflows the representable range of a float type.
-Infinity - The computation produces a negative value that overflows the representable range of a float type.
0.0 - The computation produces a value that underflows the representable range of a float type (for example, adding a number to infinity).
NaN - The computation produces a value that cannot be represented as a number (for example, any operation that uses NaN as input, such as 10.0 + NaN = NaN).
Usage Notes
Make sure the strings that are passed to the service in num1 and num2 are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
addIntList
Adds a list of integers (represented in a String list) and returns the sum.
Input Parameters
numList: String List - Numbers (integers represented as Strings) to add.
Output Parameters
value: String - Sum of the numbers in numList.
Usage Notes
Make sure the strings that are passed to the service in numList are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
addInts
Adds one integer (represented as a String) to another and returns the sum.
Input Parameters
num1: String - Number (integer represented as a String) to add.
num2: String - Number (integer represented as a String) to add.
Output Parameters
value: String - Sum of num1 and num2.
Usage Notes
Ensure that the result of your calculation is less than 64 bits in width (the maximum width for the long data type). If the result exceeds this limit, it will generate a data overflow.
Ensure that the strings that are passed to the service in num1 and num2 are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
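The 64-bit limit the note refers to can be illustrated in plain Java, where long is the underlying type; the class name is illustrative.

```java
public class OverflowDemo {
    // Plain long arithmetic silently wraps on 64-bit overflow;
    // Math.addExact detects the overflow instead.
    public static boolean overflows(long a, long b) {
        try {
            Math.addExact(a, b);   // throws ArithmeticException on 64-bit overflow
            return false;
        } catch (ArithmeticException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(Long.MAX_VALUE + 1);           // wraps to Long.MIN_VALUE
        System.out.println(overflows(Long.MAX_VALUE, 1)); // true
    }
}
```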
addObjects
Adds one java.lang.Number object to another and returns the sum.
Input Parameters
num1: java.lang.Number - Number to add. See the Usage Notes for supported sub-classes.
num2: java.lang.Number - Number to add. See the Usage Notes for supported sub-classes.
Output Parameters
value: java.lang.Number - Sum of the numeric values of num1 and num2.
Usage Notes
This service accepts the following sub-classes of java.lang.Number: java.lang.Byte, java.lang.Double, java.lang.Float, java.lang.Integer, java.lang.Long, java.lang.Short.
This service applies the following rules for binary numeric promotion to the operands in order:
If either operand is of type Double, the other is converted to Double.
Otherwise, if either operand is of type Float, the other is converted to Float.
Otherwise, if either operand is of type Long, the other is converted to Long.
Otherwise, both operands are converted to type Integer.
These promotion rules mirror the Java rules for numeric promotion of numeric types.
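The promotion rules above can be sketched directly in Java for addition; the class and method are illustrative, not the service itself.

```java
public class Promotion {
    // Illustrative sketch of binary numeric promotion applied to addition.
    public static Number add(Number a, Number b) {
        if (a instanceof Double || b instanceof Double) return a.doubleValue() + b.doubleValue();
        if (a instanceof Float  || b instanceof Float)  return a.floatValue()  + b.floatValue();
        if (a instanceof Long   || b instanceof Long)   return a.longValue()   + b.longValue();
        return a.intValue() + b.intValue();   // Byte, Short, Integer promote to Integer
    }

    public static void main(String[] args) {
        System.out.println(add(1, 2L).getClass().getSimpleName()); // Long
        System.out.println(add((byte) 1, (short) 2));              // 3 (an Integer)
    }
}
```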
divideFloats
Divides one floating point number (represented as a String) by another (num1/num2) and returns the quotient.
Input Parameters
num1: String - Number (floating point number represented as a String) that is the dividend.
num2: String - Number (floating point number represented as a String) that is the divisor.
precision: String Optional - Number of decimal places to which the quotient will be rounded. The default value is null.
Output Parameters
value: String - The quotient of num1 / num2. If a quotient cannot be produced, value contains one of the following:
Infinity - The computation produces a positive value that overflows the representable range of a float type.
-Infinity - The computation produces a negative value that overflows the representable range of a float type.
0.0 - The computation produces a value that underflows the representable range of a float type (for example, dividing a number by infinity).
NaN - The computation produces a value that cannot be represented as a number (for example, the result of an illegal operation such as dividing zero by zero or any operation that uses NaN as input, such as 10.0 + NaN = NaN).
Usage Notes
Make sure the strings that are passed to the service in num1 and num2 are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
divideInts
Divides one integer (represented as a String) by another (num1/num2) and returns the quotient.
Input Parameters
num1: String - Number (integer represented as a String) that is the dividend.
num2: String - Number (integer represented as a String) that is the divisor.
Output Parameters
value: String - The quotient of num1 / num2.
Usage Notes
Make sure the strings that are passed to the service in num1 and num2 are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
divideObjects
Divides one java.lang.Number object by another (num1/num2) and returns the quotient.
Input Parameters
num1: java.lang.Number - Number that is the dividend. See the Usage Notes for supported sub-classes.
num2: java.lang.Number - Number that is the divisor. See the Usage Notes for supported sub-classes.
Output Parameters
value: java.lang.Number - Quotient of num1 / num2.
Usage Notes
This service accepts the following sub-classes of java.lang.Number: java.lang.Byte, java.lang.Double, java.lang.Float, java.lang.Integer, java.lang.Long, java.lang.Short.
This service applies the following rules for binary numeric promotion to the operands in order:
If either operand is of type Double, the other is converted to Double.
Otherwise, if either operand is of type Float, the other is converted to Float.
Otherwise, if either operand is of type Long, the other is converted to Long.
Otherwise, both operands are converted to type Integer.
These promotion rules mirror the Java rules for numeric promotion of numeric types.
max
Returns the largest number from a list of numbers.
Input Parameters
numList: String List - List of numbers from which the largest number is to be returned.
Output Parameters
maxValue: String - Largest number from the list of numbers.
min
Returns the smallest number from a list of numbers.
Input Parameters
numList: String List - List of numbers from which the smallest number is to be returned.
Output Parameters
minValue: String - Smallest number from the list of numbers.
multiplyFloatList
Multiplies a list of floating point numbers (represented in a String list) and returns the product.
Input Parameters
numList: String List - Numbers (floating point numbers represented as Strings) to multiply.
Output Parameters
value: String - Product of the numbers in numlist. If a product cannot be produced, value contains one of the following:
Infinity - The computation produces a positive value that overflows the representable range of a float type.
-Infinity - The computation produces a negative value that overflows the representable range of a float type.
0.0 - The computation produces a value that underflows the representable range of a float type (for example, multiplying a number by infinity).
NaN - The computation produces a value that cannot be represented as a number (for example, the result of an illegal operation such as multiplying zero by infinity or any operation that uses NaN as input, such as 10.0 + NaN = NaN).
Usage Notes
Make sure the strings that are passed to the service in numList are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
multiplyFloats
Multiplies one floating point number (represented as a String) by another and returns the product.
Input Parameters
num1: String - Number (floating point number represented as a String) to multiply.
num2: String - Number (floating point number represented as a String) to multiply.
precision: String Optional - Number of decimal places to which the product will be rounded. The default value is null.
Output Parameters
value: String - Product of the numeric values of num1 and num2. If a product cannot be produced, value contains one of the following:
Infinity - The computation produces a positive value that overflows the representable range of a float type.
-Infinity - The computation produces a negative value that overflows the representable range of a float type.
0.0 - The computation produces a value that underflows the representable range of a float type.
NaN - The computation produces a value that cannot be represented as a number (for example, the result of an illegal operation such as multiplying zero by infinity or any operation that uses NaN as input, such as 10.0 + NaN = NaN).
Usage Notes
Make sure the strings that are passed to the service in num1 and num2 are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
multiplyIntList
Multiplies a list of integers (represented in a String list) and returns the product.
Input Parameters
numList: String List - Numbers (integers represented as Strings) to multiply.
Output Parameters
value: String - Product of the numbers in numList.
Usage Notes
Make sure the result of your calculation is less than 64 bits in width (the maximum width for the long data type). If the result exceeds this limit, it will generate a data overflow.
Make sure the strings that are passed to the service in numList are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
multiplyInts
Multiplies one integer (represented as a String) by another and returns the product.
Input Parameters
num1: String - Number (integer represented as a String) to multiply.
num2: String - Number (integer represented as a String) to multiply.
Output Parameters
value: String - Product of num1 and num2.
Usage Notes
Make sure the result of your calculation is less than 64 bits in width (the maximum width for the long data type). If the result exceeds this limit, it will generate a data overflow.
Make sure the strings that are passed to the service in num1 and num2 are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
multiplyObjects
Multiplies one java.lang.Number object by another and returns the product.
Input Parameters
num1: java.lang.Number - Number to multiply. See the Usage Notes for supported sub-classes.
num2: java.lang.Number - Number to multiply. See the Usage Notes for supported sub-classes.
Output Parameters
value: java.lang.Number - Product of num1 and num2.
Usage Notes
This service accepts the following sub-classes of java.lang.Number: java.lang.Byte, java.lang.Double, java.lang.Float, java.lang.Integer, java.lang.Long, java.lang.Short.
This service applies the following rules for binary numeric promotion to the operands in order:
If either operand is of type Double, the other is converted to Double.
Otherwise, if either operand is of type Float, the other is converted to Float.
Otherwise, if either operand is of type Long, the other is converted to Long.
Otherwise, both operands are converted to type Integer.
These promotion rules mirror the Java rules for numeric promotion of numeric types.
randomDouble
Returns the next pseudorandom, uniformly distributed double between 0.0 and 1.0.
Random number generators are often referred to as pseudorandom number generators because the numbers produced tend to repeat themselves over time.
Input Parameters
fractionLength: String - Specifies the number of fraction digits for the generated random number. For example, if you specify 7, the generated random number might be 0.6536596.
Output Parameters
number: String - Generated random number.
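The described behavior can be sketched in plain Java by trimming a pseudorandom double to the requested number of fraction digits; the class and method names are illustrative, not the service implementation.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Random;

public class RandomDoubleSketch {
    // Illustrative sketch: a pseudorandom double in [0.0, 1.0) rendered with
    // exactly `fractionLength` fraction digits.
    public static String randomDouble(int fractionLength) {
        double d = new Random().nextDouble();
        return BigDecimal.valueOf(d)
                .setScale(fractionLength, RoundingMode.HALF_UP)
                .toPlainString();
    }

    public static void main(String[] args) {
        System.out.println(randomDouble(7)); // e.g. 0.6536596
    }
}
```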
roundNumber
Returns a rounded number.
Input Parameters
num: String - Number to be rounded.
numberOfDigits: String - Specifies the number of digits to which you want to round the number.
roundingMode: String Optional - Specifies the rounding method. Valid values for the roundingMode parameter are RoundHalfUp, RoundUp, RoundDown, RoundCeiling, RoundFloor, RoundHalfDown, and RoundHalfEven. The default value is RoundHalfUp.
Output Parameters
roundedNumber: String - The rounded number.
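The rounding modes appear to map onto java.math.RoundingMode in plain Java; the following small illustration (not the service implementation, and the mode mapping is an assumption) shows how the choice of mode changes the result.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingDemo {
    // Round a string-based number to `digits` decimal places with the given mode.
    public static String round(String num, int digits, RoundingMode mode) {
        return new BigDecimal(num).setScale(digits, mode).toPlainString();
    }

    public static void main(String[] args) {
        System.out.println(round("2.125", 2, RoundingMode.HALF_UP));   // 2.13 (cf. RoundHalfUp, the default)
        System.out.println(round("2.125", 2, RoundingMode.HALF_EVEN)); // 2.12 (cf. RoundHalfEven)
        System.out.println(round("2.125", 2, RoundingMode.DOWN));      // 2.12 (cf. RoundDown)
        System.out.println(round("-2.125", 2, RoundingMode.CEILING));  // -2.12 (cf. RoundCeiling)
    }
}
```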
subtractFloats
Subtracts one floating point number (represented as a String) from another and returns the difference.
Input Parameters
num1: String - Number (floating point number represented as a String).
num2: String - Number (floating point number represented as a String) to subtract from num1.
precision: String Optional - Number of decimal places to which the difference will be rounded. The default value is null.
Output Parameters
value: String - Difference of num1 - num2. If a difference cannot be produced, value contains one of the following:
Infinity - The computation produces a positive value that overflows the representable range of a float type.
-Infinity - The computation produces a negative value that overflows the representable range of a float type.
0.0 - The computation produces a value that underflows the representable range of a float type (for example, subtracting a number from infinity).
NaN - The computation produces a value that cannot be represented as a number (for example, the result of an illegal operation such as subtracting infinity from infinity or any operation that uses NaN as input, such as 10.0 - NaN = NaN).
Usage Notes
Make sure the strings that are passed to the service in num1 and num2 are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
subtractInts
Subtracts one integer (represented as a String) from another and returns the difference.
Input Parameters
num1: String - Number (integer represented as a String).
num2: String - Number (integer represented as a String) to subtract from num1.
Output Parameters
value: String - Difference of num1 - num2.
Usage Notes
Make sure the result of your calculation is less than 64 bits in width (the maximum width for the long data type). If the result exceeds this limit, it will generate a data overflow.
Make sure the strings that are passed to the service in num1 and num2 are in a locale-neutral format (that is, using the pattern -####.##). Passing locally formatted strings may result in unexpected results. For example, calling addFloats in a German locale with the arguments 1,23 and 2,34 will result in the value 357, not 3.57 or 3,57.
subtractObjects
Subtracts one java.lang.Number object from another and returns the difference.
Input Parameters
num1: java.lang.Number - Number. See the Usage Notes for supported sub-classes.
num2: java.lang.Number - Number to subtract from num1. See the Usage Notes for supported sub-classes.
Output Parameters
value: java.lang.Number - Difference of num1 - num2.
Usage Notes
This service accepts the following sub-classes of java.lang.Number: java.lang.Byte, java.lang.Double, java.lang.Float, java.lang.Integer, java.lang.Long, java.lang.Short.
This service applies the following rules for binary numeric promotion to the operands in order:
If either operand is of type Double, the other is converted to Double.
Otherwise, if either operand is of type Float, the other is converted to Float.
Otherwise, if either operand is of type Long, the other is converted to Long.
Otherwise, both operands are converted to type Integer.
These promotion rules mirror the Java rules for numeric promotion of numeric types.
toNumber
Converts a string to numeric data type.
Input Parameters
num: String - Number (represented as a string) to be converted to numeric format.
convertAs: String Optional - Specifies the Java numeric data type to which the num parameter is to be converted. Valid values for the convertAs parameter are java.lang.Double, java.lang.Float, java.lang.Integer, java.math.BigDecimal, java.math.BigInteger, and java.lang.Long. The default value is java.lang.Double.
Output Parameters
num: java.lang.Number - Converted numeric object.
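The convertAs choices can be sketched in plain Java; the class and method below are illustrative, not the service implementation.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class ToNumber {
    // Illustrative sketch of toNumber: pick the wrapper type named by convertAs,
    // defaulting to java.lang.Double when convertAs is not specified.
    public static Number toNumber(String num, String convertAs) {
        switch (convertAs == null ? "java.lang.Double" : convertAs) {
            case "java.lang.Integer":    return Integer.valueOf(num);
            case "java.lang.Long":       return Long.valueOf(num);
            case "java.lang.Float":      return Float.valueOf(num);
            case "java.math.BigDecimal": return new BigDecimal(num);
            case "java.math.BigInteger": return new BigInteger(num);
            default:                     return Double.valueOf(num); // java.lang.Double
        }
    }

    public static void main(String[] args) {
        System.out.println(toNumber("42", null).getClass().getName());                    // java.lang.Double
        System.out.println(toNumber("42", "java.math.BigInteger").getClass().getName());  // java.math.BigInteger
    }
}
```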
MIME Services
Use MIME services to create MIME messages and extract information from MIME messages.
The following MIME services are available:
Service
Description
addBodyPart
Adds a body part (header fields and content) to a specified MIME object.
addMimeHeader
Adds one or more header fields to a specified MIME object.
createMimeData
Creates a MIME object.
getBodyPartContent
Retrieves the content (payload) from the specified MIME object.
getBodyPartHeader
Returns the list of header fields for the specified body part.
getContentType
Returns the value of the Content-Type message header from the specified MIME object.
getEnvelopeStream
Generates an InputStream representation of a MIME message from a specified MIME object.
getMimeHeader
Returns the list of message headers from a specified MIME object.
getNumParts
Returns the number of body parts in the specified MIME object.
getPrimaryContentType
Returns the top-level portion of a MIME object’s Content-Type value.
getSubContentType
Returns the sub-type portion of a MIME object’s Content-Type value.
mergeHeaderAndBody
Concatenates the contents of the header and body mapped to the input.
addBodyPart
Adds a body part (header fields and content) to a specified MIME object.
Input Parameters
mimeData: Document - MIME object to which you want to add a body part. (This IData object is produced by createMimeData).
content: java.io.InputStream or Object - Content that you want to add to the MIME object. content can be an InputStream or another MIME object. Use an InputStream to add an ordinary payload. Use a MIME object to add a payload that is itself a MIME message.
isEnvStream: String - Flag that specifies whether content is to be treated as a MIME entity.
Important
This parameter is only used if content is an InputStream.
Set this parameter to one of the following values:
yes to treat content as a MIME entity. addBodyPart will strip out the header fields from the top of content and add them to mimeData as part headers. The remaining data will be treated as the payload.
Note
addBodyPart assumes that all data up to the first blank line represents the entity’s header fields.
no to treat content as an ordinary payload.
mimeHeader: Document - Specifies the part headers that you want to add with this body part. Key names represent the names of the header fields. The values of the keys represent the values of the header fields. For example, if you wanted to add the following header fields:
X-Doctype: RFQ
X-Severity: 10
You would set mimeHeader as follows:
Key          Value
X-Doctype    RFQ
X-Severity   10
Be aware that the following MIME headers are automatically inserted by getEnvelopeStream when it generates the MIME message:
Message-ID
MIME-Version
Additionally, you use the contenttype, encoding, and description parameters to set the following fields:
Content-Type
Content-Transfer-Encoding
Content-Description
If you set these header fields in mimeHeader and you create a single-part message, the values in contenttype, encoding, and description, if specified, will override those in mimeHeader. See usage notes.
contenttype: String Optional - The value of the Content-Type header for this body part. For single-part messages, this value overrides the Content-Type value in mimeHeader, if one is present. Defaults to text/plain. See usage notes.
encoding: String Optional - Specifies how the body part is to be encoded for transport and sets the value of the Content-Transfer-Encoding header. For single-part messages, this value overrides the Content-Transfer-Encoding value in mimeHeader, if one is present. Defaults to 7bit. See usage notes.
Note
This parameter determines how the payload is to be encoded for transport. When you add a payload to mimeData, it should be in its original format. The getEnvelopeStream service will perform the encoding (as specified by encoding) when it generates the final MIME message.
Set to:
7bit to specify that content is 7-bit, line-oriented text that needs no encoding. This is the default.
8bit to specify that content is 8-bit, line-oriented text that needs no encoding.
Note
This encoding value is not recommended for messages that will be transported via SMTP over the Internet, because the data can be altered by intervening mail servers that can’t accommodate 8-bit text. To safely transport 8-bit text, use quoted-printable encoding instead.
binary to specify that content contains binary information that needs no encoding.
Note
This encoding value is not recommended for messages that will be transported via SMTP over the Internet, because the data can be altered by intervening mail servers that can’t accommodate binary data. To safely transport binary data, use base64 encoding instead.
quoted-printable to specify that content contains 7 or 8-bit, line-oriented text that you want to encode using the quoted-printable encoding scheme.
base64 to specify that content contains an arbitrary sequence of octets that you want to encode using the base64 encoding scheme.
uuencode to specify that content contains an arbitrary sequence of octets that you want to encode using the uuencode encoding scheme.
description: String Optional - Specifies the value of the Content-Description header for this body part.
multipart: String Optional - Flag that determines how addBodyPart behaves if mimeData already contains one or more body parts. By default, addBodyPart simply appends a new body part to mimeData if it already contains a payload. (This allows you to construct multi-part messages.) However, you can override this behavior if you want to either replace the existing payload with the new body part or throw an exception under these circumstances (see replace parameter, below).
Set to:
yes to append a new body part to mimeData. This is the default.
no to replace the existing payload with the new body part. (Depending on the value of replace, this setting may cause addBodyPart to throw an exception.)
replace: String Optional - Flag that specifies whether addBodyPart replaces the existing payload or throws an exception when it receives a mimeData that already contains a payload. This parameter is only used when multipart is set to no. Set to:
yes to replace the existing payload with the new body part. This is the default.
no to throw an exception.
Output Parameters
mimeData: Document - MIME object to which the body part was added.
Usage Notes
This service operates on the MIME object (mimeData) produced by createMimeData.
The way in which the contenttype and encoding parameters are applied depends on whether the finished message is single-part or multipart.
For single-part messages:
contenttype specifies the Content-Type for the entire MIME message. It overrides any value assigned to the Content-Type header in mimeHeader. If Content-Type is not specified in contenttype or mimeHeader, the value of the Content-Type header defaults to text/plain.
encoding specifies the Content-Transfer-Encoding for the entire MIME message. It overrides any value assigned to the Content-Transfer-Encoding header in mimeHeader. If Content-Transfer-Encoding is not specified in encoding or mimeHeader, the value of the Content-Transfer-Encoding header defaults to 7bit.
For multipart messages:
contenttype specifies the Content-Type for an individual body part. The Content-Type for the entire MIME message is automatically set to multipart/mixed, or to multipart/subType if a subtype was specified when the MIME object was created. See createMimeData.
encoding specifies the Content-Transfer-Encoding for an individual body part. The Content-Transfer-Encoding header in mimeHeader, if present, specifies the encoding for the entire MIME message. If Content-Transfer-Encoding is not specified in mimeHeader, or if the specified value is not valid for a multipart message, the value of the Content-Transfer-Encoding header defaults to 7bit. (7bit, 8bit, and binary are the only encoding values valid for multipart messages.)
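The single-part versus multipart Content-Type rules above can be sketched with Python's standard email package, which behaves analogously (the webMethods services themselves are built-ins, so this is an illustration of the rules, not their implementation):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Single-part: the part's content type becomes the message's Content-Type.
single = MIMEText("hello", "plain")
assert single.get_content_type() == "text/plain"

# Multipart: the envelope's Content-Type is multipart/mixed (or
# multipart/<subType> if a subtype was chosen); per-part types apply
# only to the individual body parts.
multi = MIMEMultipart("mixed")
multi.attach(MIMEText("hello", "plain"))
multi.attach(MIMEText("<p>hi</p>", "html"))
assert multi.get_content_type() == "multipart/mixed"
assert multi.get_payload(0).get_content_type() == "text/plain"
```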
addMimeHeader
Adds one or more header fields to a specified MIME object.
Input Parameters
mimeData: Document - MIME object to which you want the header fields added. (This IData object is produced by createMimeData).
mimeHeader: Document - Header fields that you want to add to the MIME object. Key names represent the names of the header fields. The values of the keys represent the values of the header fields. For example, to add the following header fields:
X-Doctype: RFQ
X-Severity: 10
You would set mimeHeader as follows:
Key          Value
X-Doctype    RFQ
X-Severity   10
Be aware that the following MIME headers are automatically inserted by getEnvelopeStream when it generates the MIME message:
Message-ID
MIME-Version
If you set these values in mimeHeader, getEnvelopeStream will overwrite them at run time.
Output Parameters
mimeData: Document - MIME object to which the header fields were added.
Usage Notes
This service operates on the MIME object (mimeData) produced by createMimeData.
If you add MIME headers before you add multiple body parts, the header fields will be added to each of the body parts. If you do not want this behavior, either drop mimeHeader from the pipeline immediately after you execute addMimeHeader, or invoke addMimeHeader after you’ve added all body parts to the MIME object.
Be aware that the contenttype and encoding parameters used by the addBodyPart service will override any Content-Type or Content-Transfer-Encoding settings in mimeData. Moreover, in certain cases, the getEnvelopeStream will override these settings when it generates a multipart message. For information about how the Content-Type or Content-Transfer-Encoding headers are derived at run time, see the Usage Notes under addBodyPart.
createMimeData
Creates a MIME object.
If no input parameter is passed to this service, the service creates an empty MIME object. Otherwise, the service creates a MIME object containing the elements (header fields and content) from the MIME message in input.
If you are building a MIME message, you use this service to create an empty MIME object. You populate the empty MIME object with header fields and content, and then pass it to getEnvelopeStream, which produces the finished MIME message.
If you are extracting data from a MIME message, you use this service to parse the original MIME message into a MIME object so that you can extract its header fields and content using other services.
Input Parameters
input: java.io.InputStream Optional - MIME entity you want to parse. If input is not provided, createMimeData creates an empty MIME object.
mimeHeader: Document Optional - Specifies header fields that you want to add to the MIME object. Key names represent the names of the header fields. The values of the keys represent the values of the header fields.
Note
This parameter is ignored when input is passed to this service.
For example, if you wanted to add the following header fields:
X-Doctype: RFQ
X-Severity: 10
You would set mimeHeader as follows:
Key          Value
X-Doctype    RFQ
X-Severity   10
Be aware that the following MIME headers are automatically inserted by getEnvelopeStream when it generates the MIME message:
Message-ID
MIME-Version
If you set these values in mimeHeader, getEnvelopeStream will overwrite them at run time.
subType: String Optional - String that specifies the subtype portion of the Content Type header, when the message is a multipart message and you want something other than the default value of mixed. For example, if you want the Content Type header to be multipart/related in the resulting message, set subType to related. subType is ignored if the resulting message is not a multipart message.
decodeHeaders: String Optional - Specifies how the MIME header is to be decoded. Set to:
" "(empty String) to decode headers based on the value of the global watt property watt.server.mime.decodeHeaders. This is the default.
NONE to specify that the MIME header or body part headers do not need decoding.
ONLY_MIME_HEADER to decode the MIME header only.
ONLY_BODY_PART_HEADERS to decode the body part headers only.
BOTH to decode the MIME header and the body part headers.
Output Parameters
mimeData: Document - MIME object. If input was passed to createMimeData, mimeData will contain the parsed MIME message. If input was not passed to createMimeData, mimeData will be empty.
encrypted: String Conditional - Indicates whether input was an encrypted message. This parameter is not present when the service creates a new, empty MIME object. A value of:
true indicates that the message is encrypted (the original message stream is in stream).
false indicates that the message is not encrypted.
signed: String Conditional - Flag whose value indicates whether input was a signed message. This parameter is not present when the service creates a new, empty MIME object. A value of:
true indicates that the message is signed (the original message stream is in stream).
false indicates that the message is not signed.
certsOnly: String Conditional - Flag whose value indicates whether input contained only digital certificates. This parameter is not present when the service creates a new, empty MIME object. A value of:
true indicates that the message contains only certificates.
false indicates that the message contains a regular payload.
stream: java.io.InputStream Conditional - InputStream containing the original MIME message from input. This parameter is present only when input is an S/MIME message.
Usage Notes
All of the other MIME services operate on the mimeData IData object produced by this service. They do not operate directly on MIME message streams.
Important
You can examine the contents of mimeData during testing and debugging. However, because the internal structure of mimeData is subject to change without notice, do not explicitly set or map data to/from these elements in your service. To manipulate or access the contents of mimeData, use only the MIME services provided.
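As an analogy only (createMimeData itself returns an opaque mimeData IData object), Python's standard email package shows the same parse-versus-create split: parse a stream when input is supplied, or start from an empty message otherwise:

```python
import email
from email import policy
from email.message import EmailMessage
from io import BytesIO

raw = (b"Content-Type: text/plain\r\n"
       b"X-Doctype: RFQ\r\n"
       b"\r\n"
       b"payload text\r\n")

# With an input stream: parse it into a message object you can query,
# as other services query the parsed mimeData.
msg = email.message_from_binary_file(BytesIO(raw), policy=policy.default)
assert msg["X-Doctype"] == "RFQ"
assert msg.get_content().strip() == "payload text"

# Without input: start from an empty message and populate it later,
# as you would populate an empty MIME object before getEnvelopeStream.
empty = EmailMessage()
assert empty.get_payload() is None
```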
getBodyPartContent
Retrieves the content (payload) from the specified MIME object.
You use this service for both single-part and multi-part messages.
To retrieve content from a multi-part message, you set the index (to select the part by index number) or contentID (to select the part by contentID value) parameter to specify the body part whose content you want to retrieve. To get the content from a single-part message, you omit the index and contentID parameters or set index to 0.
Input Parameters
mimeData: Document - MIME object whose content you want to retrieve. (This IData object is produced by createMimeData).
index: String Optional - Index number of the body part whose content you want to retrieve (if you want to retrieve the content from a specific body part). The first body part is index number zero.
Note
If contentID is specified, index is ignored.
contentID: String Optional - Value of the Content-ID header field of the body part whose content you want to retrieve (if you want to retrieve the payload from a specific body part).
Output Parameters
content: IData - The payload of the specified body part.
encrypted: String - Flag whose value indicates whether content is an encrypted MIME message. A value of:
true indicates that content is an encrypted message.
false indicates that content is not an encrypted message.
signed: String - Flag indicating whether content is a signed MIME message. A value of:
true indicates that content is a signed MIME message.
false indicates that content is not a signed MIME message.
certsOnly: String - Flag whose value indicates whether content is a certs-only MIME message. A value of:
true indicates that content is a certs-only message.
false indicates that content is not a certs-only message.
Usage Notes
This service operates on the MIME object (mimeData) produced by createMimeData.
If you omit index or contentID when retrieving content from a multi-part message, getBodyPartContent returns the payload from the first body part. If you use index or contentID to select a body part that does not exist in mimeData, content will be null.
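The index-based selection described above can be sketched with Python's standard email package (an analogy, since the real service works on the mimeData object):

```python
from email.message import EmailMessage

# Build a two-part message, then select a part by index, analogous to
# getBodyPartContent's index parameter (the first body part is index 0).
msg = EmailMessage()
msg.set_content("first part")
msg.add_attachment("second part", subtype="plain")

parts = list(msg.iter_parts())
assert parts[0].get_content().strip() == "first part"
assert parts[1].get_content().strip() == "second part"

# Selecting a part beyond the last index yields nothing, analogous to
# content being null when index or contentID matches no body part.
assert len(parts) == 2
```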
getBodyPartHeader
Returns the list of header fields for the specified body part.
Input Parameters
mimeData: Document - MIME object whose message headers you want to retrieve. (This IData object is produced by createMimeData).
index: String Optional - Index number of the body part whose header fields you want to retrieve. The first body part is index zero.
Note
If contentID is specified, index is ignored.
contentID: String Optional - Value of the Content-ID header field of the body part whose header fields you want to retrieve.
decodeHeaders: String Conditional - Flag whose value indicates whether to decode encoded headers in the MIME object. Set to:
true to indicate that the headers should be decoded.
false to indicate that the headers should not be decoded. This is the default.
Output Parameters
mimeHeader: Document - IData object containing the message headers. Key names represent the names of the header fields. The value of a key represents the value of that header field. For example, if the original message contained the following message header fields:
Content-Type: text/xml
X-Doctype: RFQ
X-Severity: 0
getBodyPartHeader would return the following IData object:
Key             Value
Content-Type    text/xml
X-Doctype       RFQ
X-Severity      0
Usage Notes
This service operates on the MIME object (mimeData) produced by createMimeData.
If you omit index or contentID, getBodyPartHeader returns the message headers from the first body part. If you use index or contentID to select a body part that does not exist in mimeData, mimeHeader will be null.
getContentType
Returns the value of the Content-Type message header from the specified MIME object.
Input Parameters
mimeData: Document - MIME object whose Content-Type you want to discover (This IData object is produced by createMimeData).
Output Parameters
contentType: String - Value of the MIME object’s Content-Type header field. Note that this service returns only the media type and subtype portion of this header field’s value. It does not return any parameters the value may include. For example, if the message’s Content-Type header were:
Content-Type: text/plain;charset=UTF8
contentType would contain:
text/plain
Usage Notes
This service operates on the MIME object (mimeData) produced by createMimeData.
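The parameter-stripping behavior of getContentType, and the split performed by getPrimaryContentType and getSubContentType, can be illustrated with Python's standard email package (an analogy, not the service implementation):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Content-Type"] = "text/plain; charset=UTF-8"

# Only the media type/subtype is returned; parameters such as charset
# are dropped, which is the slice of the header getContentType returns.
assert msg.get_content_type() == "text/plain"

# The two halves correspond to getPrimaryContentType and getSubContentType.
assert msg.get_content_maintype() == "text"
assert msg.get_content_subtype() == "plain"
```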
getEnvelopeStream
Generates an InputStream representation of a MIME message from a specified MIME object.
Input Parameters
mimeData: Document - MIME object from which you want to generate the MIME message. (This IData object is produced by createMimeData).
index: String Optional - Index number of the body part for which you want to generate the MIME message (if you want to generate the message from a specific body part). The first body part is index number zero.
contentID: String Optional - Value of the Content-ID header field of the body part from which you want to generate the MIME message (if you want to generate the message from a specific body part).
Note
If index is specified, contentID is ignored.
returnMimeMessage: String. Optional. Specifies whether the MIME message is returned as a javax.mail.internet.MimeMessage object when any of the body parts in the message exceed the large data threshold.
Set to:
- yes to return the MIME message in the mimeMessage output parameter as a MimeMessage when the large data threshold is exceeded. This is the default.
- no to return the MIME message as an InputStream in the envStream output parameter when the large data threshold is exceeded.
suppressHeaders: String List Optional - Names of header fields that are to be omitted from the message. You can use this option to exclude header fields that getEnvelopeStream generates by default, such as Content-Type and Content-Transfer-Encoding.
createMultipart: String Optional - Specifies whether a multipart message is to be created, even if mimeData contains only one body part. Set to:
yes to create a multipart message (Content-Type message header is set to “multipart/mixed”).
no to create a message based on the number of body parts in mimeData. This is the default.
If the message contains only one body part, Content-Type is set according to the contenttype setting specified when that body part was added to mimeData.
If the message contains multiple body parts, Content-Type is automatically set to “multipart/mixed.”
Output Parameters
envStream: java.io.InputStream - The MIME message as an InputStream.
mimeMessage: javax.mail.internet.MimeMessage. Conditional. This service returns mimeMessage instead of envStream if any of the body parts contain data greater than the specified large data threshold.
Usage Notes
This service operates on the MIME object (mimeData) produced by createMimeData.
If you omit index or contentID, getEnvelopeStream generates the MIME message from the entire contents of the mimeData. If you use index or contentID to select a body part that does not exist in mimeData, content will be null.
getEnvelopeStream automatically inserts the MIME-Version and Message-ID message headers into the MIME message it puts into envStream.
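The automatic header insertion noted above has a close analogue in Python's standard email package, where serialising a message to wire format adds MIME-Version (shown here as an illustration; getEnvelopeStream additionally inserts Message-ID):

```python
from email.message import EmailMessage
from email.policy import SMTP

msg = EmailMessage()
msg.set_content("payload")

# Serialising the message to wire format inserts MIME-Version, much as
# getEnvelopeStream inserts MIME-Version and Message-ID into envStream.
wire = msg.as_bytes(policy=SMTP)
assert b"MIME-Version: 1.0" in wire
assert wire.endswith(b"payload\r\n")
```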
getMimeHeader
Returns the list of message headers from a specified MIME object.
Input Parameters
mimeData: Document - MIME object whose message headers you want to retrieve (This IData object is produced by createMimeData).
Output Parameters
mimeHeader: Document Conditional - An IData object containing the message headers. Key names represent the names of the header fields. The value of a key represents the value of that header field.
Usage Notes
This service operates on the MIME object (mimeData) produced by createMimeData.
getNumParts
Returns the number of body parts in the specified MIME object.
Input Parameters
mimeData: Document - MIME object whose parts you want to count (This IData object is produced by createMimeData).
Output Parameters
numParts: String - The number of body parts in the MIME object.
Usage Notes
This service operates on the MIME object (mimeData) produced by createMimeData.
getPrimaryContentType
Returns the top-level portion of a MIME object’s Content-Type value.
Input Parameters
mimeData: Document - MIME object whose Content-Type you want to discover (This IData object is produced by createMimeData).
Output Parameters
primContentType: String - Message’s top-level Content-Type. For example, if the message’s Content-Type header were:
Content-Type: multipart/mixed
primContentType would contain:
multipart
Usage Notes
This service operates on the MIME object (mimeData) produced by createMimeData.
getSubContentType
Returns the sub-type portion of a MIME object’s Content-Type value.
Input Parameters
mimeData: Document - MIME object whose sub-type you want to discover (This IData object is produced by createMimeData).
Output Parameters
subContentType: String - Message’s sub-type. For example, if the message’s Content-Type header were:
Content-Type: multipart/mixed
subContentType would contain:
mixed
Usage Notes
This service operates on the MIME object (mimeData) produced by createMimeData.
mergeHeaderAndBody
Concatenates the contents of the header and body mapped to the input.
You can use this service to reassemble the message into its original form so that it can be used as input to the createMimeData service (or any other service that requires the entire http response as an InputStream).
Input Parameters
headerLines: Document - IData object containing the message headers (The message headers are returned in the lines document inside the header output parameter).
body: Document - IData object containing the body of the message. This document must contain the body of the message in one of the following keys:
bytes: byte[ ] Optional - Body of the message.
stream: java.io.InputStream Optional - The body of the message.
Output Parameters
stream: java.io.InputStream - InputStream containing the reassembled message.
Usage Notes
Use this service to merge the headers and body to reconstruct the original MIME message.
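The merge itself is simple to sketch: header lines and body are rejoined into one stream, separated by the blank line that ends a MIME header block (a minimal illustration, not the service implementation):

```python
from io import BytesIO

# Rejoin header lines and a body into a single stream, as
# mergeHeaderAndBody does before handing the result to createMimeData.
header_lines = [b"Content-Type: text/plain", b"X-Doctype: RFQ"]
body = b"payload text"

stream = BytesIO(b"\r\n".join(header_lines) + b"\r\n\r\n" + body)
merged = stream.read()
assert merged.startswith(b"Content-Type: text/plain\r\n")
assert merged.endswith(b"\r\n\r\npayload text")
```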
Schema Services
Use Schema services to validate objects and to validate the pipeline.
The following Schema services are available:
validate - Validates an object using an IS document type or a schema.
validatePipeline - Validates the pipeline against a document type.
validate
Validates an object using an IS document type, XML document type, or an IS schema.
Input Parameters
object: Document. The IData object to be validated.
conformsTo: Select the list of document types from the project.
maxErrors: String. Optional. Number of errors to be collected. Default value is 1. When the number of errors found is equal to maxErrors, the validation processor stops validation and returns the result. If maxErrors is set to -1, the validation processor returns all errors.
ignoreContent: String. Optional. Flag that specifies whether the validation processor will validate content keys of the type String or String List. Set to:
true to ignore content (that is, do not validate keys of these types).
false to validate content. This is the default.
failIfInvalid: String. Optional. Flag that indicates whether the service should fail and throw an exception if the object is invalid. Set to:
true to indicate that the service should fail if the object is invalid.
false to indicate that service should signal success and return errors to the pipeline if the object is invalid. This is the default.
Output Parameters
isValid: String. Flag that indicates whether or not the validation was successful. A value of:
true indicates that the validation was successful.
false indicates that the validation was unsuccessful.
errors: Document List. Errors encountered during validation. Each document will contain the following information:
pathName: String. Location of the error in XQL.
errorCode: String. Error code (for example, VV-001).
errorMessage: String. Error message (for example, Missing Object).
Usage Notes
When validating supplied XML against an IS document type, IBM webMethods Integration uses the Java regular expression compiler by default.
When validating against an IS document type, if the Allow null property is set to false for a field in the document type and the corresponding element in the instance document carries the attribute xsi:nil, IBM webMethods Integration displays the [ISC.0082.9026] Undefined Object found error.
When validating against an IS document type, if the Allow null property is set to true for a field in the document type and the corresponding element in the instance document carries the attribute xsi:nil but also contains content or child elements, IBM webMethods Integration displays the [ISC.0082.9024] FieldName cannot have content or child elements since xsi:nil is true error.
When validating XML, IBM webMethods Integration uses the W3C recommendation XML Schema Part 2: Datatypes.
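The maxErrors contract shared by validate and validatePipeline can be sketched as follows (a hypothetical illustration: the check functions stand in for the real document-type validation these services perform internally):

```python
# Stop collecting at max_errors, or return every error when it is -1,
# mirroring the maxErrors parameter of validate and validatePipeline.
def validate(checks, max_errors=1):
    errors = []
    for check in checks:
        err = check()
        if err is not None:
            errors.append(err)
            if max_errors != -1 and len(errors) == max_errors:
                break
    return {"isValid": str(not errors).lower(), "errors": errors}

checks = [lambda: "VV-001", lambda: "VV-005", lambda: None]
assert validate(checks)["errors"] == ["VV-001"]               # default maxErrors is 1
assert validate(checks, -1)["errors"] == ["VV-001", "VV-005"] # -1 returns all errors
assert validate([lambda: None])["isValid"] == "true"
```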
validatePipeline
Validates the pipeline against a document type.
Input Parameters
conformsTo: Select the list of document types of a project.
maxErrors: String. Optional. Number of errors to be collected. Default value is 1. When the number of errors found is equal to maxErrors, the validation processor stops validation and returns the result. If maxErrors is set to -1, the validation processor returns all errors.
ignoreContent: String. Optional. Flag that specifies whether the validation processor will validate content keys of the type String, String List, or String Table. Set to:
true to ignore content (that is, do not validate keys of these types).
false to validate content. This is the default.
failIfInvalid: String. Optional. Flag that indicates whether the service should fail and throw an exception if the object is invalid. Set to:
true to indicate that service should fail if object is invalid.
false to indicate that service should simply signal success and return errors to the pipeline if object is invalid. This is the default.
Output Parameters
isValid: String. Flag that indicates whether or not the validation was successful. A value of:
true indicates that the validation was successful.
false indicates that the validation was unsuccessful.
errors: Document List. Errors encountered during validation. Each document will contain the following information:
pathName: String. Location of the error in XQL.
errorCode: String. Error code (for example, VV-001).
errorMessage: String. Error message (for example, Missing Object).
Storage Services
Use Storage services to insert, retrieve, update, and remove entries from a data store.
When using the storage services, keep in mind that the short-term store is not intended to be used as a general-purpose storage engine. It is provided primarily to support shared storage of application resources and transient data in IBM webMethods Integration. Do not use the short-term store to process high volumes of data, to store large records, or to permanently archive records.
Note
User-specific data that may be considered personal data is stored and retained until the end of the retention period defined in Execution Results.
These services are a tool for maintaining state information in the short-term store. It is up to the developer of the flow service to make sure that the flow service keeps track of its state and correctly handles restarts.
Locking Considerations
The following sections describe in general how the storage services handle locking requests.
Entry Locking
To maintain data integrity, the short-term store uses locking to ensure that multiple threads do not modify the same entry at the same time. For insertions and removals, the short-term store sets and releases the lock. For updates, the client must set and release the lock. Using locking improperly, that is, creating a lock but not releasing it, can cause deadlocks in the short-term store.
The following guidelines can help you avoid short-term store deadlocks:
Release locks in the thread through which they were set. In other words, you cannot set a lock in one thread and release it in another. The safest way to do this is to release each lock in the flow service that acquired it.
Unlock entries before the flow service completes. Entries remain locked until released using a put or an explicit unlock. To accomplish this, always pair a call to get or lock with a call to put or unlock so that every lock is followed by an unlock. In addition, use a try-catch pattern in your flow service so that an exception does not prevent the flow service from continuing and releasing the lock.
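The pair-every-lock-with-an-unlock guideline above can be sketched in Python (a generic illustration of the pattern, not the short-term store itself):

```python
import threading

entry_lock = threading.Lock()  # stands in for the lock that get() places on an entry

def update_entry(store, key, new_value):
    # Acquire and release in the same thread, and release in a finally
    # block so an exception cannot leave the entry locked, mirroring the
    # get/lock paired with put/unlock pattern the guidelines call for.
    entry_lock.acquire()
    try:
        store[key] = new_value
    finally:
        entry_lock.release()

store = {}
update_entry(store, "order-1", {"status": "shipped"})
assert store["order-1"]["status"] == "shipped"
assert not entry_lock.locked()   # no lock survives the update
```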
Data Store Locking
When a storage service locks an entry, the service also implicitly locks the data store in which the entry resides. This behavior prevents another thread from deleting the entire data store and the entries it contains while your thread is working with the entry. When the locked entry is unlocked, the implicit lock on the data store is also released.
Be careful when explicitly unlocking data stores. Consider the following example:
1. User_A locks an item. This creates two locks: an explicit lock on the entry, and an implicit lock on the data store.
2. User_A later unlocks the data store explicitly while still holding the lock on the entry.
3. User_B locks, then deletes the data store, including the entry locked by User_A in step 1.
When User_A explicitly unlocked the data store in step 2, User_B was able to delete the entry that User_A was working with.
Automatic Promotion to Exclusive Lock
If a storage service tries to acquire an exclusive lock on an object, but finds a shared lock from the same thread already in place on the object, the service will try to promote the lock to an exclusive lock.
If a storage service that requires an exclusive lock encounters a shared or exclusive lock held by another thread, it will wait until the object becomes available. If the object remains locked for the period specified by the waitlength parameter passed by the service, the service will fail.
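The bounded wait described above behaves like a lock acquisition with a timeout, sketched here with Python's threading module (an analogy for the waitlength parameter, not the service's own locking code):

```python
import threading

# A non-reentrant lock held by "another" owner; waiting with a bounded
# timeout models waitlength: if the object is not released in time,
# the attempt fails instead of blocking forever.
lock = threading.Lock()
lock.acquire()                       # the other holder's lock

got_it = lock.acquire(timeout=0.05)  # our bounded wait
assert got_it is False               # timed out, as the service would fail
```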
Sample Flow service for Checkpoint Restart
The following diagram shows how to build checkpoint restarts into your flow services. It explains the logic of a flow service and shows where the various storage services are used to achieve checkpoint restarts.
The following Storage services are available:
add - Inserts a new entry into a data store.
deleteStore - Deletes a data store and all its contents. Any data in the data store is deleted. If the data store does not exist, the service takes no action.
get - Retrieves a value from a data store and locks the entry and the data store on behalf of the thread that invoked the service.
keys - Obtains a list of all the keys in a data store.
lock - Locks an entry and/or data store on behalf of the thread invoking this service.
put - Inserts or updates an entry in a data store. If the key does not exist in the data store, the entry is inserted.
remove - Removes an entry from a data store.
unlock - Unlocks an entry or a data store.
add
Inserts a new entry into a data store.
If the key already exists in the data store, the service does nothing.
Input Parameters
storeName: String - Name of the data store in which to insert the entry.
key: String - Key under which the entry is to be inserted.
value: Document - Value to be inserted.
Output Parameters
result: String - Flag indicating whether the entry was successfully added.
A value of:
true indicates that the new entry was inserted successfully.
false indicates that the entry was not inserted (usually because an entry for key already exists).
error: String - Error message generated while inserting the new entry into the data store.
deleteStore
Deletes a data store and all its contents. Any data in the data store is deleted. If the data store does not exist, the service takes no action.
Input Parameters
storeName: String - Name of the data store to delete.
waitLength: String Optional - Length of time, in milliseconds, that you want to wait for this data store to become available for deletion if it is already locked by another thread.
Output Parameters
count: String - Number of data store entries that were deleted. If the store does not exist, this value is 0.
Usage Notes
This service obtains an exclusive lock on the data store, but no locks on the individual entries in the data store. If this service finds a shared lock from the same thread on the data store, the service will automatically promote the lock to an exclusive lock. The exclusive lock prevents other threads from acquiring locks on the data store or entries within the data store during the delete operation.
get
Retrieves a value from a data store and locks the entry and the data store on behalf of the thread that invoked the service.
Important
This service does not automatically release the lock on the data store or entry after performing the get operation, so you need to ensure that the lock is released by calling the put or unlock services. If you do not release the lock, IBM webMethods Integration will release the lock at the end of the flow service execution.
Input Parameters
storeName: String - Name of the data store from which you want to retrieve the entry.
key: String - Key of the entry whose value you want to retrieve.
waitLength: String Optional - Length of time, in milliseconds, that you want to wait for this entry to become available if it is already locked by another thread.
lockMode: String Optional - Type of lock you want to place on the entry.
Set to:
Exclusive - To prevent other threads from reading or updating the entry while you are using it. The service also obtains a shared lock on the data store. An exclusive lock on an entry allows you to modify the entry.
Read - Obsolete. If this value is specified, the service obtains a shared lock.
Share - To prevent other threads from obtaining an exclusive lock on the entry. The service also obtains a shared lock on the data store. A shared lock on an entry allows you to read, but not modify, the entry. This is the default.
Output Parameters
value: Document - Retrieved entry. If the requested entry does not exist, the value of this parameter is null.
Usage Notes
If you request an exclusive lock and the service finds a shared lock from the same thread on the entry, the service will automatically promote the shared lock on the entry to an exclusive lock.
When this service locks an entry, it also acquires a shared lock on the associated data store to prevent another thread from deleting the data store, and the entries it contains, while your thread has the entry locked.
When storing and retrieving the flow state in the short-term store for checkpoint restart purposes, ensure that the value of key is unique to the transaction.
keys
Obtains a list of all the keys in a data store.
Input Parameters
storeName: String - Name of the data store from which you want to obtain a list of keys.
Output Parameters
keys[ ]: String List - Keys for the data store specified in storeName.
lock
Locks an entry and/or data store on behalf of the thread invoking this service.
Important
When you lock an entry or data store using this service, you must release the lock by using a put or an explicit unlock. If you do not release the lock, IBM webMethods Integration will release the lock at the end of the flow service execution. Further, be careful when releasing locks with the unlock service. If you release a lock on a data store, another thread can obtain a lock on the data store and delete it, and the entries it contains, even if your thread still has locks on one or more of the entries.
Input Parameters
storeName: String - Name of the data store containing the entry.
key: String Optional - Key of the entry that you want to lock. If key is not supplied and you request:
A shared lock, the service obtains a shared lock on the data store, allowing other threads to read and modify entries, but not to delete them.
An exclusive lock, the service obtains an exclusive lock on the data store, preventing other threads from locking the data store and the entries, thereby preventing those threads from reading, modifying, or deleting the entries or the data store.
If both storeName and key are specified and you request:
A shared lock, the service obtains a shared lock on the data store and the entry.
An exclusive lock, the service obtains a shared lock on the data store and an exclusive lock on the entry.
waitLength: String Optional - Length of time, in milliseconds, that you want to wait for this entry to become available if it is already locked by another thread.
lockMode: String Optional - Type of lock you want to place on the entry or data store.
Set to:
Exclusive - To prevent other threads from obtaining a lock on the data store or entry. An exclusive lock on an entry allows you to modify the entry, and prevents other threads from reading or modifying the entry. An exclusive lock on a data store also locks the entries in the data store. In addition, an exclusive lock on a data store allows you to delete the data store.
Read - Obsolete. If this value is specified, the service obtains a shared lock.
Share - To prevent other threads from obtaining an exclusive lock on an entry or a data store. A shared lock on an entry allows you to read, but not modify, the entry. A shared lock on a data store prevents another thread from deleting the data store. This is the default.
Output Parameters
None.
Usage Notes
If you have not specified a key, and your flow service does not invoke put or unlock, or your flow service throws an exception before invoking put or unlock, the entire data store remains locked.
If the key does not exist in the data store at the time your flow service executes, the lock service inserts the key with an empty value and takes the lock on the entry.
If you request an exclusive lock on an entry, the service obtains an exclusive lock on the entry and a shared lock on the data store. If this service finds a shared lock from the same thread on the entry, the service will automatically promote the shared lock on the entry to an exclusive lock.
If you request a shared lock on an entry, the service obtains a shared lock on the entry and a shared lock on the data store.
If you request a shared lock on an entry or a data store and this service finds an exclusive lock from the same thread, the existing exclusive lock will be reused. The exclusive lock will not be demoted to a shared lock.
If you request an exclusive lock on a data store, and this service finds a shared lock from the same thread on the data store, the service will automatically promote the shared lock on the data store to an exclusive lock.
put
Inserts or updates an entry in a data store. If the key does not exist in the data store, the entry is inserted.
If the requested entry is not currently locked by the thread that invoked this service, the put service will automatically attempt to lock the entry for the duration of the put operation.
The service obtains an exclusive lock on the entry and a shared lock on the data store. If the service finds a shared lock from the same thread on the entry, the service will automatically promote the shared lock to an exclusive lock.
This service releases the lock when the put operation has completed.
Input Parameters
storeName: String - Name of the data store into which you want to insert or update the entry.
key: String - Key where you want to insert or update the entry.
value: Document - Value to be inserted or updated.
waitLength: String Optional - Length of time, in milliseconds, that you want to wait for this entry to become available if it is already locked by another thread. If the wait length expires before a lock is obtained, the service fails and throws an exception. This parameter is used only when your service did not explicitly lock the entry beforehand.
Output Parameters
error: String - Error message generated while inserting the new entry into the data store.
Usage Notes
When storing and retrieving the flow state in the short-term store for checkpoint restart purposes, ensure that the value of key is unique to the transaction.
remove
Removes an entry from a data store. This service obtains an exclusive lock on the entry and a shared lock on the data store.
Input Parameters
storeName: String - Name of the data store from which to remove an entry.
key: String - Key of the entry that you want to remove.
waitLength: String Optional - Length of time, in milliseconds, that you want to wait for this entry to become available for deletion if it is already locked by another thread.
Output Parameters
result: String - Flag indicating whether the entry was successfully removed.
A value of:
true indicates that the entry was removed successfully.
false indicates that the entry was not removed (usually because an entry for key does not exist).
unlock
Unlocks an entry or a data store.
When a flow service retrieves an entry using the get service, the entry is locked to prevent modification by other users before the flow service completes. The entry remains locked until the lock owner invokes a put service. To release the lock on an entry without using the put service, use the unlock service.
In addition, if a flow service uses the lock service to lock an entry or data store, you must use the unlock or put service to release the lock.
Important
Be careful when releasing locks with this service. If you release a lock on a data store, another thread can obtain a lock on the data store and delete it, and the entries it contains, even if the original thread still has locks on one or more of the entries.
Input Parameters
storeName: String - Name of the data store in which to unlock an entry.
key: String Optional - Key of the entry that you want to unlock. If key is not supplied, the lock will be removed from the data store specified in storeName, but any locks on entries in the data store will remain.
Output Parameters
None.
String Services
Use String services to perform string manipulation and substitution operations.
The following String services are available:
Service
Description
base64Decode
Decodes a Base-64 encoded string into a sequence of bytes.
base64Encode
Converts a sequence of bytes into a Base64-encoded String.
bytesToString
Converts a sequence of bytes to a String.
compareStrings
Performs a case-sensitive comparison of two strings, and indicates whether the strings are identical.
concat
Concatenates two strings.
fuzzyMatch
Performs an inexact (fuzzy) match of a given string against a set of strings. If the match score is above similarityThreshold, the service returns the matchedValue. If more than one string matches inexactly, the first matched string is returned.
HTMLDecode
Replaces HTML character entities with native characters.
HTMLEncode
Replaces HTML-sensitive characters with equivalent HTML character entities.
indexOf
Returns the index of the first occurrence of a sequence of characters in a string.
isAlphanumeric
Determines whether a string consists entirely of alphanumeric characters (in the ranges A–Z, a–z, or 0–9).
isDate
Determines whether a string follows a specified date pattern.
isNullEmptyOrWhitespace
Determines if a string is null, empty, or only whitespace.
isNullOrBlank
Checks a string for a null or a blank value.
isNumber
Determines whether the contents of a string can be converted to a float value.
length
Returns the length of a string.
lookupDictionary
Looks up a given key in a hash table and returns the string to which that key is mapped.
lookupTable
Locates a key in a String Table and returns the string to which that key is mapped.
makeString
Builds a single string by concatenating the elements of a String List.
messageFormat
Formats an array of strings into a given message pattern.
numericFormat
Formats a number into a given numeric pattern.
objectToString
Converts an object to string representation using the Java toString() method of the object.
padLeft
Pads a string to a specified length by adding pad characters to the beginning of the string.
padRight
Pads a string to a specified length by adding pad characters to the end of the string.
replace
Replaces all occurrences of a specified substring with a substitute string.
stringToBytes
Converts a string to a byte array.
substitutePipelineVariables
Replaces a pipeline variable with its corresponding value.
substring
Returns a substring of a given string.
tokenize
Tokenizes a string using specified delimiter characters and generates a String List from the resulting tokens.
toLower
Converts all characters in a given string to lowercase.
toUpper
Converts all characters in a given string to uppercase.
trim
Trims leading and trailing white space from a given string.
URLDecode
Decodes a URL-encoded string.
URLEncode
URL-encodes a string.
base64Decode
Decodes a Base-64 encoded string into a sequence of bytes.
Input Parameters
string: String - A Base64-encoded String to decode into bytes.
encoding: String Optional - Specifies the encoding method. Default value is ASCII.
Output Parameters
value: byte[ ] - The sequence of bytes decoded from the Base64-encoded String.
base64Encode
Converts a sequence of bytes into a Base64-encoded String.
Input Parameters
bytes: byte[ ] - Sequence of bytes to encode into a Base64-encoded String.
useNewLine: String Optional - Flag indicating whether to retain or remove the line breaks.
Set to:
true to retain the line breaks. This is the default.
false to remove the line breaks.
encoding: String Optional - Specifies the encoding method. Default value is ASCII.
Output Parameters
value: String - Base64-encoded String encoded from the sequence of bytes.
Usage Notes
By default, the base64Encode service inserts line breaks after 76 characters of data, which is not the canonical lexical form expected by implementations such as MTOM. You can use the useNewLine parameter to remove the line breaks.
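The line-break behavior described above can be illustrated with Python's standard base64 module, which is an analogue of, not the implementation behind, this service: encodebytes inserts a line break after every 76 output characters (like the default useNewLine = true), while b64encode produces the canonical single-line form (like useNewLine = false).

```python
import base64

data = b"A" * 100  # long enough that the encoded output exceeds one 76-character line

# line breaks after every 76 characters, similar to useNewLine = true
with_breaks = base64.encodebytes(data).decode("ascii")

# canonical single-line form, similar to useNewLine = false
single_line = base64.b64encode(data).decode("ascii")

assert "\n" in with_breaks
assert "\n" not in single_line
# apart from the line breaks, both forms encode the same data identically
assert with_breaks.replace("\n", "") == single_line
```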
bytesToString
Converts a sequence of bytes to a String.
Input Parameters
bytes: byte[ ] - Sequence of bytes to convert to a String.
encoding: String Optional - Name of a registered, IANA character set (for example, ISO-8859-1). If you specify an unsupported encoding, the system throws an exception. To use the default encoding, set encoding to autoDetect.
ignoreBOMChars: String Optional - Flag indicating whether or not the byte order mark (BOM) characters in the input sequence of bytes are removed before converting the byte array to string.
Set to:
true to remove the byte order mark (BOM) characters before converting the input sequence of bytes to string, if the byte array contains BOM characters.
false to include the byte order mark (BOM) characters while converting the input sequence of bytes to string. The default is false.
Output Parameters
string: String - String representation of the contents of bytes.
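The effect of ignoreBOMChars can be sketched with Python's codecs (again an analogue, not the service itself): decoding with "utf-8" keeps a leading byte order mark as U+FEFF, while the "utf-8-sig" codec strips it.

```python
# UTF-8 byte order mark followed by the text "hi"
raw = b"\xef\xbb\xbfhi"

# like ignoreBOMChars = false: the BOM survives at the start of the string
kept = raw.decode("utf-8")

# like ignoreBOMChars = true: a leading BOM is removed before conversion
stripped = raw.decode("utf-8-sig")

assert kept == "\ufeffhi"
assert stripped == "hi"
```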
compareStrings
Performs a case-sensitive comparison of two strings and indicates whether the strings are identical.
Input Parameters
inString1: String Optional - String to compare against inString2. This input variable can be null.
inString2: String Optional - String to compare against inString1. This input variable can be null.
Output Parameters
isEqual: String - Indicates whether or not inString1 and inString2 are identical.
true indicates that inString1 and inString2 are identical.
false indicates that inString1 and inString2 are not identical.
Note
If both inString1 and inString2 are null, the service considers the strings to be identical and returns true.
concat
Concatenates two strings.
Input Parameters
inString1: String - String to which you want to concatenate another string.
inString2: String - String to concatenate to inString1.
Output Parameters
value: String - Result of concatenating inString1 with inString2 (inString1 + inString2).
fuzzyMatch
Performs an inexact (fuzzy) match of a given string against a set of strings. If the match score is above similarityThreshold, the service returns the matchedValue. If more than one string matches inexactly, the first matched string is returned.
Input Parameters
inString: String (Required) - Text to be matched. Text should not be empty or null.
matchData [ ]: String (Required) - Array of strings, which are used for matching. If the string array value is either empty or null, it is not used for matching.
similarityThreshold: String (Optional) - If the inexact match score is above the given threshold, then service output contains the matchedValue parameter. Default value is 0.65. Valid values should be between 0.0 and 1.0. Value 0.0 represents no match and value 1.0 represents an exact match.
algorithm: String (Optional) - The algorithm used for an inexact match. Default value is Levenshtein. Supported algorithms are Levenshtein and JaroWinkler.
Output Parameters
matchedValue: String (Optional) - If the inexact match is above similarityThreshold, then the returned value contains the matched string.
similarity: String (Optional) - If the inexact match is above similarityThreshold, then it contains a similarity score. It provides the measure of how close the match is. The returned value can be between 0.0 and 1.0. Value 0.0 represents no match and value 1.0 represents an exact match.
Usage Notes
Search the web for more information about Levenshtein and JaroWinkler algorithms.
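The Levenshtein-based matching can be sketched as follows. This is an illustrative Python implementation, not the service's actual code; the normalization of edit distance into a 0.0-1.0 similarity score is an assumption chosen to mirror the documented score range.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(text, candidates, threshold=0.65):
    """Return (first candidate scoring above threshold, score), or (None, 0.0)."""
    for cand in candidates:
        if not cand:
            continue  # empty or null entries are skipped, as in the service
        score = 1.0 - levenshtein(text, cand) / max(len(text), len(cand))
        if score > threshold:
            return cand, score
    return None, 0.0

print(fuzzy_match("color", ["colour", "paint"]))  # "colour" scores 1 - 1/6
```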
HTMLDecode
Replaces HTML character entities with native characters.
Specifically, this service:
Replaces this HTML character entity…
With…
&gt;
>
&lt;
<
&amp;
&
&quot;
"
Input Parameters
inString: String - An HTML-encoded String.
Output Parameters
value: String - Result from decoding the contents of inString. Any HTML character entities that existed in inString will appear as native characters in value.
HTMLEncode
Replaces HTML-sensitive characters with equivalent HTML character entities.
Specifically, this service:
Replaces this native language character…
With…
>
&gt;
<
&lt;
&
&amp;
"
&quot;
'
&#39;
These translations are useful when displaying text in an HTML context.
Input Parameters
inString: String - The character you want to encode in HTML.
Output Parameters
value: String - Result from encoding the contents of inString. Any HTML-sensitive characters that existed in inString, for example, > or &, will appear as the equivalent HTML character entities in value.
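Python's standard html module performs analogous translations and can be used to illustrate both HTMLEncode and HTMLDecode; it is not the implementation behind these services.

```python
import html

# encoding: replace HTML-sensitive characters with character entities
encoded = html.escape('a < b & "c"')
assert encoded == 'a &lt; b &amp; &quot;c&quot;'

# decoding: replace character entities with native characters
assert html.unescape("&lt;tag&gt;") == "<tag>"
```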
indexOf
Returns the index of the first occurrence of a sequence of characters in a string.
Input Parameters
inString: String - String in which you want to locate a sequence of characters.
subString: String - Sequence of characters to locate.
fromIndex: String Optional - Index of inString from which to start the search. If no value is specified, this parameter contains 0 to indicate the beginning of the string.
Output Parameters
value: String - Index of the first occurrence of subString in inString. If no occurrence is found, this parameter contains -1.
isAlphanumeric
Determines whether a string consists entirely of alphanumeric characters (in the ranges A–Z, a–z, or 0–9).
Input Parameters
inString: String Optional - String to be checked for alphanumeric characters.
Output Parameters
isAlphanumeric: String - Indicates whether or not all the characters in inString are alphanumeric.
true indicates that all the characters in inString are alphanumeric.
false indicates that not all the characters in inString are alphanumeric.
The service returns false if inString is not specified.
isDate
Determines whether a string follows a specified date pattern.
Input Parameters
inString: String Optional - String to be checked for adherence to the specified date pattern.
pattern: String - Date format for specifying the inString parameter (for example, yyyyMMdd HH:mm:ss.SSS). For more information about the pattern strings that can be specified for the date, see the “Pattern String Symbols” section.
Output Parameters
isDate: String - Indicates whether or not inString follows the specified date pattern.
true indicates that inString follows the specified date pattern.
false indicates that inString does not follow the specified date pattern.
The service returns false if inString is not specified.
Usage Notes
The service returns an error if both inString and pattern are not specified.
You can specify any random string (for example, 111212) as both inString and pattern. The service returns true if the same user-defined string is specified as both inString and pattern. This is because the java.text.SimpleDateFormat class parses the user-defined input string and pattern to a valid date when the particular input values are identical.
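A rough Python analogue of the check uses datetime.strptime; note that Python's strptime directives (%Y, %m, %d) differ from the Java SimpleDateFormat pattern symbols (yyyy, MM, dd) that the actual service accepts.

```python
from datetime import datetime

def is_date(in_string: str, pattern: str) -> str:
    """Return "true" if in_string parses under pattern, else "false"."""
    try:
        datetime.strptime(in_string, pattern)
        return "true"
    except ValueError:
        return "false"

print(is_date("20240131", "%Y%m%d"))  # valid date
print(is_date("20241340", "%Y%m%d"))  # month 13 is rejected
```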
isNullEmptyOrWhitespace
Determines if a string is null, empty, or only whitespace.
Input Parameters
inString: String. Optional. String to be checked.
ifPresent: Boolean. Optional.
If the value is set to true, the service checks whether inString is present or not, and only then proceeds with validation of the input.
If the value is set to false, the service throws an exception when inString is absent. Otherwise, the service proceeds to validate the input.
Output Parameters
isNullEmptyOrWhitespace: String. Indicates whether inString is null, empty, or only whitespace.
true indicates that inString has a null value, is empty, or is only whitespace.
false indicates that inString is not null, not empty, and not only whitespace.
Examples
Service with inString = " \t\n\r\n" and ifPresent = true, returns true
Service with inString = " \t\n\r\n" and ifPresent = false, returns true
Service with inString = "abcd" and ifPresent = true, returns false
Service with inString = "abcd" and ifPresent = false, returns false
Service with no inString and ifPresent = true, returns true
Service with no inString and ifPresent = false, throws an exception.
Usage Notes
string:isNullEmptyOrWhitespace replaces string:isNullOrBlank, which is deprecated.
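The behavior tabulated in the examples above can be sketched in Python. The dict stands in for the service's input pipeline so that an absent inString can be distinguished from an empty one; this framing is illustrative, not the service's actual interface.

```python
def is_null_empty_or_whitespace(pipeline: dict, if_present: bool = False) -> str:
    """Mimic the documented result flag; pipeline stands in for the service inputs."""
    if "inString" not in pipeline or pipeline["inString"] is None:
        if if_present:
            return "true"  # ifPresent = true: a missing input is treated as null
        raise ValueError("inString is absent")  # ifPresent = false: absence is an error
    return "true" if pipeline["inString"].strip() == "" else "false"

print(is_null_empty_or_whitespace({"inString": " \t\n\r\n"}, if_present=True))
print(is_null_empty_or_whitespace({"inString": "abcd"}, if_present=False))
```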
isNullOrBlank
Checks a string for a null or a blank value.
Input Parameters
inString: String Optional - String to be checked for a null or a blank value.
ifPresent: Boolean Optional - Set to one of the following values:
true indicates the service checks whether inString is present or not, and only then proceeds with validation of the input.
false indicates the service throws an exception when inString is absent. Otherwise, the service proceeds to validate the input.
Output Parameters
isNullorBlank: String - Indicates whether or not inString has a null or a blank value.
true indicates that inString has either a null or a blank value.
false indicates that inString contains a value that is not null.
If inString is not specified, the service considers the string to be blank and returns true.
isNumber
Determines whether the contents of a string can be converted to a float value.
Input Parameters
inString: String Optional - String to be checked for conversion to float.
Output Parameters
isNumber: String - Indicates whether or not inString can be converted to a float value.
true indicates that inString can be converted to a float value.
false indicates that inString cannot be converted to a float value.
The service returns false if inString is not specified.
length
Returns the length of a string.
Input Parameters
inString: String - String whose length you want to discover.
Output Parameters
value: String - The number of characters in inString.
lookupDictionary
Looks up a given key in a hash table and returns the string to which that key is mapped.
Input Parameters
hashtable: java.util.Hashtable - Hash table that uses String objects for keys and values.
key: String - Key in hashtable whose value you want to retrieve. The key is case sensitive.
Output Parameters
value: String - Value of the string to which key is mapped. If the requested key in hashtable is null or if key is not mapped to any value in hashtable, the service returns null.
lookupTable
Locates a key in a String Table and returns the string to which that key is mapped.
Input Parameters
lookupTable: String [ ][ ] - A multi-row, multi-column string table in which to search.
keyColumnIndex: String - Index of the “key” column. Default is 0.
valueColumnIndex: String - Index of the “value” column. Default is 1.
key: String - Key to locate.
Note
The key is case sensitive.
ignoreCase: String Optional - Flag indicating whether to perform a case-sensitive or case-insensitive search.
Set to:
true to perform a case-insensitive search.
false to perform a case-sensitive search. This is the default.
useRegex: String Optional - Flag indicating whether the values in the table are to be interpreted as regular expressions.
Note
The regular expressions in the table should not include slashes. For example, use hello., not /hello./.
Set to:
true to interpret the key column values in the table as regular expressions.
false to interpret the key column values in the table as literal values (that is, not regular expressions). This is the default.
Output Parameters
value: String - First value in the “value” column whose key matches key. If no match is found, this parameter is null.
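The lookup described above can be sketched in Python. The function name and signature are illustrative; the regex branch assumes a table cell must match the whole key, mirroring the literal-match case.

```python
import re

def lookup_table(table, key, key_col=0, val_col=1, ignore_case=False, use_regex=False):
    """Return the first value whose key column matches key, else None."""
    for row in table:
        cell = row[key_col]
        if use_regex:
            flags = re.IGNORECASE if ignore_case else 0
            if re.fullmatch(cell, key, flags):  # the table cells are the patterns
                return row[val_col]
        elif (cell.lower() == key.lower()) if ignore_case else (cell == key):
            return row[val_col]
    return None

table = [["US", "United States"], ["DE", "Germany"]]
print(lookup_table(table, "de", ignore_case=True))  # case-insensitive hit
```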
makeString
Builds a single string by concatenating the elements of a String List.
Input Parameters
elementList[ ]: String List - Strings to concatenate.
separator: String - String to insert between each non-null element in elementList.
Output Parameters
value: String - Result from concatenating the strings in elementList. Strings are separated by the characters specified in separator.
messageFormat
Formats an array of strings into a given message pattern.
Input Parameters
pattern: String - Message that includes “placeholders” where elements from argumentList are to be inserted. The message can contain any sequence of characters. Use the {n} placeholder to insert elements from argumentList, where n is the index of the element that you want to insert.
For example, the following pattern string inserts elements 0 and 1 into the message: Test results: {0} items passed, {1} items failed.
Note
Do not use any characters except digits for n.
argumentList: String List Optional - List of strings to use to populate pattern. If argumentList is not supplied, the service will not replace placeholders in pattern with actual values.
Output Parameters
value: String - Result from substituting argumentList into pattern. If pattern is empty or null, this parameter is null.
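The placeholder substitution can be sketched in Python; the real service follows java.text.MessageFormat-style patterns, so this function is only an analogue of the documented behavior.

```python
def message_format(pattern, argument_list=None):
    """Substitute {n} placeholders; leave them untouched if no arguments are given."""
    if pattern is None or pattern == "":
        return None  # the service returns null for an empty or null pattern
    if argument_list is None:
        return pattern  # no substitution is performed
    out = pattern
    for n, arg in enumerate(argument_list):
        out = out.replace("{%d}" % n, str(arg))
    return out

print(message_format("Test results: {0} items passed, {1} items failed.", ["8", "2"]))
```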
numericFormat
Formats a number into a given numeric pattern.
Input Parameters
num: String - The number to format.
pattern: String - A pattern string that describes the way in which num is to be formatted:
This symbol…
Indicates…
0
A digit.
#
A digit. Leading zeroes will not be shown.
.
A placeholder for a decimal separator.
,
A placeholder for a grouping separator.
;
A separation in format.
-
The default negative prefix.
%
That num will be multiplied by 100 and shown as a percentage.
X
Any character used as a prefix or suffix (for example, A, $).
'
That special characters are to be used as literals in a prefix or suffix. Enclose the special characters within single quotation marks (for example, '#').
The following are examples of pattern strings:
Pattern
Description
#,###
Use commas to separate into groups of three digits. The pound sign denotes a digit and the comma is a placeholder for the grouping separator.
#,####
Use commas to separate into groups of four digits.
$#.00
Show digits before the decimal point as needed and exactly two digits after the decimal point. Prefix with the $ character.
'#'#.0
Show digits before the decimal point as needed and exactly one digit after the decimal point. Prefix with the # character. Because # is a special character, the leading # is enclosed in single quotation marks so that it is treated as a literal. The unquoted pound sign denotes a digit and the period is a placeholder for the decimal separator.
Output Parameters
value: String - num formatted according to pattern. If pattern is an empty (not null) string, the default pattern of comma separators is used and the number of digits after the decimal point remains unchanged.
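The service's patterns follow java.text.DecimalFormat conventions. Python's format mini-language offers rough analogues, shown here purely to illustrate what each pattern produces; the specifiers below are Python's, not valid inputs to the service.

```python
# "#,###"  - comma as grouping separator
assert "{:,}".format(1234567) == "1,234,567"

# "$#.00"  - literal $ prefix, exactly two digits after the decimal point
assert "${:.2f}".format(3.5) == "$3.50"

# "%" pattern - multiply by 100 and show as a percentage
assert "{:.0%}".format(0.25) == "25%"
```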
objectToString
Converts an object to string representation using the Java toString() method of the object.
Input Parameters
object: Object - The object to be converted to string representation.
Output Parameters
string: String - String representation of the input object converted using the Java toString() method of the object.
padLeft
Pads a string to a specified length by adding pad characters to the beginning of the string.
Input Parameters
inString: String - String that you want to pad.
padString: String - Characters to use to pad inString.
length: String - Total length of the resulting string, including pad characters.
Output Parameters
value: String - Contents of inString preceded by as many pad characters as needed so that the total length of the string equals length.
Usage Notes
If padString is longer than one character and does not fit exactly into the resulting string, the beginning of padString is aligned with the beginning of the resulting string. For example, suppose inString equals shipped and padString equals x9y.
If length equals…
Then value will contain…
7
shipped
10
x9yshipped
12
x9x9yshipped
If inString is longer than length characters, only the last length characters from inString are returned. For example, if inString equals acct1234 and length equals 4, value will contain 1234.
padRight
Pads a string to a specified length by adding pad characters to the end of the string.
Input Parameters
inString: String - String that you want to pad.
padString: String - Characters to use to pad inString.
length: String - Total length of the resulting string, including pad characters.
Output Parameters
value: String - Contents of inString followed by as many pad characters as needed so that the total length of the string equals length.
Usage Notes
If padString is longer than one character and does not fit exactly into the resulting string, the end of padString is aligned with the end of the resulting string. For example, suppose inString equals shipped and padString equals x9y.
If length equals…
Then value will contain…
7
shipped
10
shippedx9y
12
shippedx9y9y
If inString is longer than length characters, only the first length characters from inString are returned. For example, if inString equals 1234acct and length equals 4, value will contain 1234.
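Both padding services can be sketched in Python. The handling of a partial copy of padString (truncated at the far end of the pad region) is inferred from the x9y examples in the tables above, so treat this as an illustration of those examples rather than the service's actual algorithm.

```python
def pad_left(in_string: str, pad_string: str, length: int) -> str:
    if len(in_string) >= length:
        return in_string[-length:]  # keep only the last `length` characters
    need = length - len(in_string)
    full, rem = divmod(need, len(pad_string))
    # the partial copy comes first so the pad's start aligns with the result's start
    return pad_string[:rem] + pad_string * full + in_string

def pad_right(in_string: str, pad_string: str, length: int) -> str:
    if len(in_string) >= length:
        return in_string[:length]  # keep only the first `length` characters
    need = length - len(in_string)
    full, rem = divmod(need, len(pad_string))
    # the partial copy comes last so the pad's end aligns with the result's end
    return in_string + pad_string * full + pad_string[len(pad_string) - rem:]

print(pad_left("shipped", "x9y", 12))
print(pad_right("shipped", "x9y", 12))
```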
replace
Replaces all occurrences of a specified substring with a substitute string.
Input Parameters
inString: String - String containing the substring to replace.
searchString: String - Substring to replace within inString.
replaceString: String - Character sequence that will replace searchString. If this parameter is null or empty, the service removes all occurrences of searchString from inString.
useRegex: String Optional - Flag indicating whether searchString is a regular expression. When regular expressions are used to specify a search string, replaceString may also contain interpolation fields (for example, “$1”) that match parenthetical subexpressions in searchString.
Set to:
true - To indicate that searchString is a regular expression.
false - To indicate that searchString is not a regular expression. This is the default.
Output Parameters
value: String - Contents of inString with replacements made.
stringToBytes
Converts a string to a byte array.
Input Parameters
string: String - String to convert to a byte[ ].
encoding: String Optional - Name of a registered, IANA character set that specifies the encoding to use when converting the String to an array of bytes (for example: ISO-8859-1). To use the default encoding, set this value to autoDetect. If you specify an unsupported encoding, an exception will be thrown.
Output Parameters
bytes: byte[ ] - Contents of string represented as a byte[ ].
substitutePipelineVariables
Replaces a pipeline variable with its corresponding value.
Input Parameters
inString: String Optional - String containing the pipeline variable to replace. Specify the name of the pipeline variable between the % symbols (for example, %phone%).
Output Parameters
value: String - Contents of inString with the pipeline variable replaced.
Usage Notes
The service returns an error if inString is not specified.
If inString does not contain any variable between the % symbols, or contains a value other than the pipeline variable between the % symbols, the service does not perform any variable substitution from the pipeline.
If you want to include the % symbol in the output, you can specify it as \% in inString. To specify the value of the pipeline variable as a percentage in the output, append \% after the variable name in inString. For example, suppose a pipeline variable revenueIncreasePercent has a value of 100.
If inString equals…
Then value will contain…
%revenueIncreasePercent%\%
100%
The service cannot be used for substitution of global variables.
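The %variable% substitution and \% escape described above can be sketched in Python. This is an illustrative model only: it reads variables from a plain dict, whereas the real service reads them from the pipeline.

```python
import re

pipeline = {"revenueIncreasePercent": "100"}  # stand-in for pipeline variables

def substitute(in_string, variables):
    # Protect escaped percent signs (\%) before substitution.
    s = in_string.replace(r"\%", "\x00")
    # Replace %name% when name is a known variable; otherwise leave it as-is,
    # mirroring the "no substitution" behavior described above.
    s = re.sub(r"%(\w+)%", lambda m: variables.get(m.group(1), m.group(0)), s)
    return s.replace("\x00", "%")

print(substitute(r"%revenueIncreasePercent%\%", pipeline))  # 100%
print(substitute("%unknown%", pipeline))                    # %unknown%
```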
substring
Returns a substring of a given string.
Input Parameters
inString: String - String from which to extract a substring.
beginIndex: String - Beginning index of the substring to extract (inclusive).
endIndex: String - Ending index of the substring to extract (exclusive). If this parameter is null or empty, the substring will extend to the end of inString.
Output Parameters
value: String - Substring from beginIndex and extending to the character at endIndex - 1.
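The inclusive beginIndex / exclusive endIndex convention is the same one Python slicing uses, which makes for a compact analog:

```python
# beginIndex is inclusive, endIndex is exclusive - the same
# convention as Python slicing.
in_string = "integration"
print(in_string[2:7])  # tegra
print(in_string[2:])   # tegration (null/empty endIndex: extend to end of string)
```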
tokenize
Tokenizes a string using specified delimiter characters and generates a String List from the resulting tokens.
This service does not return delimiters as tokens.
Input Parameters
inString: String - String you want to tokenize, that is, break into delimited chunks.
delim: String - Delimiter characters. If null or empty, the service uses the default delimiters \t\n\r, where t, n, and r represent the white space characters tab, new line, and carriage return.
useRegex: Boolean Optional - Flag indicating how to interpret the delim parameter. If set to true, IBM webMethods Integration treats delim as a regular expression. If set to false, IBM webMethods Integration treats each character in delim as an individual delimiter character. This is the default.
Output Parameters
valueList [ ]: String List - Strings containing the tokens extracted from inString.
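The two delimiter modes can be sketched in Python: with useRegex false, each character in delim is an individual delimiter; with useRegex true, delim is a regular expression. This sketch drops empty tokens and never returns delimiters as tokens, mirroring the behavior described above (the exact empty-token handling of the service is an assumption here).

```python
import re

def tokenize(in_string, delim, use_regex=False):
    # Character-set mode builds a regex character class from delim.
    pattern = delim if use_regex else "[" + re.escape(delim) + "]"
    return [t for t in re.split(pattern, in_string) if t]

print(tokenize("a,b;c", ",;"))            # ['a', 'b', 'c']
print(tokenize("a12b34c", r"\d+", True))  # ['a', 'b', 'c']
```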
toLower
Converts all characters in a given string to lowercase.
Input Parameters
inString: String - String to convert.
language: String Optional - Lowercase, two-letter ISO-639 code. If this parameter is null, the system default is used.
country: String Optional - Uppercase, two-letter ISO-3166 code. If this parameter is null, the system default is used.
variant: String Optional - Vendor and browser-specific code. If null, this parameter is ignored.
Output Parameters
value: String - Contents of inString, with all uppercase characters converted to lowercase.
toUpper
Converts all characters in a given string to uppercase.
Input Parameters
inString: String - String to convert.
language: String Optional - Lowercase, two-letter ISO-639 code. If this parameter is null, the system default is used.
country: String Optional - Uppercase, two-letter ISO-3166 code. If this parameter is null, the system default is used.
variant: String Optional - Vendor and browser-specific code. If null, this parameter is ignored.
Output Parameters
value: String - Contents of inString, with all lowercase characters converted to uppercase.
trim
Trims leading and trailing white space from a given string.
Input Parameters
inString: String - String to trim.
Output Parameters
value: String - Contents of inString with white space trimmed from both ends.
URLDecode
Decodes a URL-encoded string.
Input Parameters
inString: String - URL-encoded string to decode.
Output Parameters
value: String - Result from decoding inString. If inString contains plus (+) signs, they will appear in value as spaces. If inString contains %hex encoded characters, they will appear in value as the appropriate native character.
URLEncode
URL-encodes a string.
Encodes characters the same way that data posted from a WWW form is encoded, that is, the application/x-www-form-urlencoded MIME type.
Input Parameters
inString: String - String to URL-encode.
Output Parameters
value: String - Result from URL-encoding inString. If inString contains non-alphanumeric characters (except [-_.*@]), they will appear in value as their URL-encoded equivalents (% followed by a two-digit hex code). If inString contains spaces, they will appear in value as plus (+) signs.
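Both services follow the application/x-www-form-urlencoded convention (spaces become plus signs, reserved characters become %hex escapes), which Python's urllib.parse also implements. Note that the exact set of characters left unencoded differs slightly between implementations.

```python
from urllib.parse import quote_plus, unquote_plus

# URLEncode analog: spaces -> '+', reserved characters -> %XX.
encoded = quote_plus("a b&c")
print(encoded)                # a+b%26c

# URLDecode analog: '+' -> space, %XX -> native character.
print(unquote_plus(encoded))  # a b&c
```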
Transaction Services
Use Transaction services only in conjunction with Database Connector operations. These services are applicable when the Database Connector account is of type Transactional.
The following Transaction services are available:
Service
Description
commit
Commits an explicit transaction.
rollback
Rolls back an explicit transaction.
setTimeout
Manually sets a transaction timeout interval for implicit and explicit transactions.
start
Starts an explicit transaction.
commit
Commits an explicit transaction.
Input Parameters
commitTransactionInput: Document - Information for each commit request.
transactionName: String - The name of an explicit transaction that you want to commit. The transactionName must have been previously used in a call to Transaction:start. This value must be mapped from the most recent Transaction:start that has not previously been committed or rolled back.
Output Parameters
None.
Usage Notes
This service must be used in conjunction with the Transaction:start service. If the transactionName parameter was not provided in a prior call to Transaction:start, a run-time error will be returned.
rollback
Rolls back an explicit transaction.
Input Parameters
rollbackTransactionInput: Document List - Information for each rollback request.
transactionName: String - The name of an explicit transaction that you want to roll back. The transactionName must have been previously used in a call to Transaction:start. This value must be mapped from the most recent Transaction:start that has not previously been committed or rolled back.
Output Parameters
None.
Usage Notes
This service must be used in conjunction with the Transaction:start service. If the given transactionName parameter was not provided in a prior call to Transaction:start, a run-time error will be returned.
setTimeout
Manually sets a transaction timeout interval for implicit and explicit transactions.
Input Parameters
timeoutSeconds: Integer - The number of seconds that the implicit or explicit transaction stays open before the transaction manager marks it for rollback.
Output Parameters
None.
Usage Notes
You must call this service before you call the Transaction:start service. If the execution of a transaction takes longer than the transaction timeout interval, all transacted operations are rolled back.
start
Starts an explicit transaction.
Input Parameters
startTransactionInput: Document - Information for each start transaction request.
transactionName: String Optional - Specifies the name of the transaction to be started. If you leave this parameter blank, the Database Application will generate a name for you. In most implementations it is not necessary to provide your own transaction name.
Output Parameters
startTransactionOutput: Document - Information for each start transaction request.
transactionName: String - The name of the transaction the service just started.
Usage Notes
This service is intended for use with the Transaction:commit or Transaction:rollback service. The transactionName value returned by a call to this service can be provided to Transaction:commit (to commit the transaction) or Transaction:rollback (to roll back the transaction).
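The start/commit/rollback sequence is the explicit-transaction idiom found in most database APIs. The following sqlite3 sketch shows the same pattern as an analogy; it is not the webMethods Transaction services themselves:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

try:
    # Analog of Transaction:start - work begins inside a transaction.
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
    conn.commit()    # Analog of Transaction:commit
except sqlite3.Error:
    conn.rollback()  # Analog of Transaction:rollback on failure

print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # 50
```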
Utils Services
Contains utility services.
The following Utils services are available:
Service
Description
generateUUID
Generates a random Universally Unique Identifier (UUID).
createMessageDigest
Generates a message digest for a given message.
deepClone
Clones an object using the default Java serialization mechanism.
transcode
Transcodes data from one encoding to another.
generateUUID
Generates a random Universally Unique Identifier (UUID).
Input Parameters
None
Output Parameters
UUID: String - A randomly generated Universally Unique Identifier (UUID).
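A random UUID of this kind corresponds to a version-4 UUID, as in this Python sketch:

```python
import uuid

# Generate a random (version 4) UUID in canonical 8-4-4-4-12 form.
value = str(uuid.uuid4())
print(value)  # different on every call
```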
createMessageDigest
Generates a message digest for a given message.
Input Parameters
algorithm: String. Name of the algorithm that you want to use to compute the message digest. Must be one of the following: MD5, SHA-1, SHA-256, SHA-384, or SHA-512.
input: byte[ ]. Optional. Message for which you want the digest generated where the message is in the form of a byte array. If both input and inputAsStream are provided, inputAsStream takes precedence.
inputAsStream: java.io.InputStream. Optional. Message for which you want to generate a message digest where the message is in the form of an input stream. If both input and inputAsStream are provided, inputAsStream takes precedence.
Output Parameters
output: byte[ ]. Conditional. Computed digest in the form of a byte array. output is returned when the input parameter input is provided.
outputAsStream: OutputStream. Conditional. Computed digest in the form of an output stream. outputAsStream is returned when the input parameter inputAsStream is provided.
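Computing a digest over a byte array with one of the named algorithms looks like this in Python (an analogy, not the service implementation):

```python
import hashlib

message = b"hello"
# SHA-256 digest as a byte array, analogous to the output parameter.
digest = hashlib.sha256(message).digest()
print(len(digest))                          # 32 bytes for SHA-256
print(hashlib.sha256(message).hexdigest())
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```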
deepClone
Clones an object using the default Java serialization mechanism.
Input Parameters
originalObject: java.io.Serializable. Object to be cloned.
Output Parameters
clonedObject: Object. Copy of the originalObject.
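Cloning via the default Java serialization mechanism (serialize, then deserialize) corresponds to a pickle round trip in Python:

```python
import pickle

original = {"order": {"id": 1, "items": ["a", "b"]}}
# Serialize and deserialize to obtain an independent deep copy.
cloned = pickle.loads(pickle.dumps(original))

cloned["order"]["items"].append("c")
print(original["order"]["items"])  # ['a', 'b'] - the original is unaffected
```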
transcode
Transcodes data from one encoding to another.
Input Parameters
inputData: Document. Data to be transcoded. Depending on the data type of the source data, use string or bytes variable to specify the data.
You must provide input data to string or bytes. If you do not specify input data in either string or bytes, the utils:transcode service ends with an exception. If you specify input data for both string and bytes, the utils:transcode service uses the contents of string, ignoring bytes.
Key
Description
string
String. Optional. String containing the data to convert to another encoding.
bytes
byte[ ]. Optional. Sequence of bytes to convert to another encoding.
sourceEncoding: String. Encoding used for the source data. This must be an encoding supported by the JRE used with IBM webMethods Integration.
targetEncoding: String. Encoding to which the source data needs to be transcoded. This must be an encoding supported by the JRE used with IBM webMethods Integration.
onTranscodingError: String. Optional. Specifies the action to take when encountering unmappable characters. When transcoding data from one character set to another, it is possible that a character in the source encoding cannot be mapped to a character in the target encoding. Specify one of the following:
replace to replace an unmappable character with a replacement character. If you select replace, specify a replacement character in replaceWith.
ignore to drop any unmappable characters.
report to throw a ServiceException if the service encounters any unmappable characters. This is the default.
replaceWith: Document. Optional. Character used to replace an unmappable character found during transcoding. Specify a replacement character for characters that cannot be mapped in string or bytes.
The utils:transcode service uses a replacement character only when onTranscodingError is set to replace.
If onTranscodingError is set to replace and a replacement character is not specified, the utils:transcode service uses the default replacement character of space (“\u0020”).
If you specify a replacement character for both string and bytes, the utils:transcode service uses the contents of string, ignoring bytes.
Key
Description
string
String. Optional. The replacement character to use for unmappable characters in the input data.
bytes
byte[ ]. Optional. The replacement character to use for unmappable characters in the input data.
outputAs: String. Optional. The data type to use for the output data. Specify one of the following:
string
bytes
If you do not specify a value for outputAs, the utils:transcode service returns the output in the same data type used for inputData. For example, if you supplied the source data to inputData/string, the service returns the target data to the outputData/string output parameter.
normalizationForm: String. Optional. The Unicode normalization form to use during transcoding.
Specify one of the following:
none: Indicates that normalization is not done.
NFC (Normalization Form C): Canonical Decomposition, followed by Canonical Composition. This is the default.
NFD (Normalization Form D). Canonical Decomposition.
NFKC (Normalization Form KC). Compatibility Decomposition, followed by Canonical Composition
NFKD (Normalization Form KD) Compatibility Decomposition.
Output Parameters
outputData: Document - Transcoded data. The utils:transcode service returns the transcoded data in outputData/string or outputData/bytes, depending on the value of the outputAs input parameter. If a value was not specified for outputAs, the service returns the transcoded data in the same data type as the supplied input data.
Key
Description
string
String. Conditional. Transcoded contents of inputData as a String. The utils:transcode service string returns this output parameter if the input data was supplied in inputData/string or outputAs was set to string.
bytes
byte[ ]. Conditional. Transcoded contents of inputData as a byte[]. The utils:transcode service bytes returns this output parameter if the input data was supplied in inputData/bytes or outputAs was set to bytes.
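The transcode behavior can be sketched in Python as decode, optionally normalize, then re-encode with an error policy for unmappable characters. Note one difference: Python's 'replace' policy substitutes '?', whereas the service's default replacement character is a space.

```python
import unicodedata

data = "café".encode("utf-8")                # source bytes
text = data.decode("utf-8")                  # sourceEncoding analog
text = unicodedata.normalize("NFC", text)    # normalizationForm analog (NFC default)
print(text.encode("iso-8859-1"))             # targetEncoding analog: b'caf\xe9'

# onTranscodingError analogs for an unmappable character:
print("π".encode("iso-8859-1", errors="replace"))  # replace -> b'?'
print("π".encode("iso-8859-1", errors="ignore"))   # ignore  -> b''
# errors="strict" (the default) raises, analogous to "report".
```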
XML Services
Use XML services to convert a document to XML content and XML content to a document.
The following XML services are available:
Service
Description
documentToXMLBytes
Converts a document to XML content bytes, as a byte array object.
documentToXMLStream
Converts a document to XML stream, as a java.io.InputStream object.
documentToXMLString
Converts a document to XML content string.
getXMLNodeType
Returns information about an XML node.
queryXMLNode
Queries an XML node.
xmlBytesToDocument
Converts XML content bytes (byte array) to a document.
xmlNodeToDocument
Converts an XML node to a document.
xmlStreamToDocument
Converts an XML content stream to a document.
xmlStringToDocument
Converts an XML string to a document.
xmlStringToXMLNode
Converts a String, byte[ ], or InputStream containing an XML document to an XML node.
getXMLNodeIterator
Creates and retrieves a NodeIterator.
getNextXMLNode
Retrieves the next XML node from a NodeIterator.
freeXMLNode
Frees the resources allocated to a given XML node.
documentToXMLBytes
Converts a document to XML content bytes, as a byte array object. This service will recurse through a given document and build an XML representation from the elements within it. Key names are turned into XML elements, and the key values are turned into the contents of those elements.
Input Parameters
document: Document - Document that is to be converted to XML. Note that if you want to produce a valid XML document (one with a single root node), document must contain only one top-level document, that is, a single document. The name of that document will serve as the name of the XML document’s root element. If you need to produce an XML fragment, for example, a loose collection of elements that are not encompassed within a single root element, then document can contain multiple top-level elements.
nsDecls [ ]: Document Optional - Namespaces associated with any namespace prefixes that are used in the key names in document. Each entry in nsDecls represents a namespace prefix/URI pair, where a key name represents a prefix and the value of the key specifies the namespace URI.
addHeader: String Optional - Flag specifying whether the header element <?xml version="1.0"?> is to be included in the resulting XML String.
Set to:
true to include the header. This is the default.
false to omit the header. Omit the header to generate an XML fragment or to insert a custom header.
attrPrefix: String Optional - Prefix that designates keys containing attributes. The default prefix is “@”.
encode: String Optional - Flag indicating whether to HTML-encode the data. Set this parameter to true if your XML data contains special characters, including the following: < > & " '
Set to:
true to HTML-encode the data. For example, the string expression 5 < 6 would be converted to <expr>5 &lt; 6</expr>, which is valid. If you do not want a leading & (ampersand) character encoded when it appears as part of a character or entity reference, set preserveRefs to true.
false to not HTML-encode the data. This is the default. For example, the string expression 5 < 6 would be converted to <expr>5 < 6</expr>, which is invalid.
strictEncodeElements: String. Optional. Controls the behavior of the encode parameter. If this parameter is set to true when the encode parameter is true, then only the < > & characters in the element content are HTML-encoded (the apostrophe and quote characters are not HTML-encoded).
Default value is false.
preserveRefs: String Optional - Flag indicating whether the leading & (ampersand) of a well-formed entity or character reference is left as & or further encoded as &amp; when the data is to be HTML-encoded (encode is set to true).
Set to:
true to preserve the leading & (ampersand) in an entity or character reference when the service HTML-encodes the data.
false to encode the leading & (ampersand) as &amp; when the & appears in an entity or character reference. This is the default. The service ignores the value of preserveRefs when encode is set to false.
documentTypeName: String (picklist). Optional. Document type that describes the structure and format of the output document. You can use this parameter to ensure that the output includes elements that might not be present in document at run time, or to describe the order in which elements are to appear in the resulting XML String. If you are using derived type processing, you must provide this parameter.
generateRequiredTags: String. Optional. Flag indicating whether empty tags are to be included in the output document if a mandatory element appears in the document type specified in documentTypeName but does not appear in document. Set to:
true to include mandatory elements if they are not present in document.
false to omit mandatory elements if they are not present in document.
Default value is false.
Note: The generateRequiredTags parameter is applicable only if documentTypeName is provided.
generateNilTags: String. Optional. Flag indicating whether the resulting XML string includes the attribute xsi:nil for elements that are null. Set to:
true to generate the xsi:nil attribute for an element if the Allow null property for the corresponding field is set to true and the field is null in document.
Note: generateRequiredTags must also be set to true to generate the xsi:nil attribute in the XML String.
false to omit the xsi:nil attribute even if a nillable field is, in fact, null.
Default value is false.
enforceLegalXML: String Optional - Flag indicating whether the service throws an exception when document contains multiple root elements or illegal XML tag names.
Set to:
true to throw an exception if document would produce an XML String containing multiple root elements and/or illegal XML tag names.
false to allow the resulting XML String to contain multiple root elements and/or illegal XML tag names. You would use this setting, for example, to create an XML fragment composed of multiple elements that were not all enclosed within a root element. This is the default.
dtdHeaderInfo: String. Optional. Contents of the DOCTYPE header to be inserted into the XML String.
Key
Description
systemID
String Optional. System identifier for the DTD, if any.
publicID
String Optional. Public identifier for the DTD, if any.
rootNSPrefix
String Optional. Namespace prefix of the rootLocalName, if any.
rootLocalName
String Optional. Local name (excluding the namespace prefix) of the root element.
bufferSize: String. Optional. Initial size (in bytes) of the String buffer that documentToXMLBytes uses to assemble the output XML content. If the String buffer fills up before documentToXMLBytes finishes generating the XML content, the service reallocates the buffer, expanding it by this amount each time the buffer becomes full.
Output Parameters
xmlBytes: Object. XML content bytes (byte array) produced from document. To include namespaces, ensure that you do the following:
Include the appropriate namespace prefix in the key names in document. For example, to produce an element called acctNum that belongs to a namespace that is represented by the “GSX” prefix, you would include a key named GSX:acctNum in document.
Define the URIs for the prefixes that appear in document. You can do this through nsDecls or by including an @xmlns key in the element where you want the xmlns attribute to be inserted.
documentToXMLStream
Converts a document to an XML stream, as a java.io.InputStream object. This service will recurse through a given document and build an XML representation from the elements within it. Key names are turned into XML elements, and the key values are turned into the contents of those elements.
Input Parameters
document: Document - Document that is to be converted to XML. Note that if you want to produce a valid XML document (one with a single root node), document must contain only one top-level document, that is, a single document. The name of that document will serve as the name of the XML document’s root element. If you need to produce an XML fragment, for example, a loose collection of elements that are not encompassed within a single root element, then document can contain multiple top-level elements.
nsDecls [ ]: Document Optional - Namespaces associated with any namespace prefixes that are used in the key names in document. Each entry in nsDecls represents a namespace prefix/URI pair, where a key name represents a prefix and the value of the key specifies the namespace URI.
addHeader: String Optional - Flag specifying whether the header element <?xml version="1.0"?> is to be included in the resulting XML String.
Set to:
true to include the header. This is the default.
false to omit the header. Omit the header to generate an XML fragment or to insert a custom header.
attrPrefix: String Optional - Prefix that designates keys containing attributes. The default prefix is “@”.
encode: String Optional - Flag indicating whether to HTML-encode the data. Set this parameter to true if your XML data contains special characters, including the following: < > & " '
Set to:
true to HTML-encode the data. For example, the string expression 5 < 6 would be converted to <expr>5 &lt; 6</expr>, which is valid. If you do not want a leading & (ampersand) character encoded when it appears as part of a character or entity reference, set preserveRefs to true.
false to not HTML-encode the data. This is the default. For example, the string expression 5 < 6 would be converted to <expr>5 < 6</expr>, which is invalid.
strictEncodeElements: String. Optional. Controls the behavior of the encode parameter. If this parameter is set to true when the encode parameter is true, then only the < > & characters in the element content are HTML-encoded (the apostrophe and quote characters are not HTML-encoded).
Default value is false.
preserveRefs: String Optional - Flag indicating whether the leading & (ampersand) of a well-formed entity or character reference is left as & or further encoded as &amp; when the data is to be HTML-encoded (encode is set to true).
Set to:
true to preserve the leading & (ampersand) in an entity or character reference when the service HTML-encodes the data.
false to encode the leading & (ampersand) as &amp; when the & appears in an entity or character reference. This is the default. The service ignores the value of preserveRefs when encode is set to false.
documentTypeName: String (picklist). Optional. Document type that describes the structure and format of the output document. You can use this parameter to ensure that the output includes elements that might not be present in document at run time, or to describe the order in which elements are to appear in the resulting XML String. If you are using derived type processing, you must provide this parameter.
generateRequiredTags: String. Optional. Flag indicating whether empty tags are to be included in the output document if a mandatory element appears in the document type specified in documentTypeName but does not appear in document. Set to:
true to include mandatory elements if they are not present in document.
false to omit mandatory elements if they are not present in document.
Default value is false.
Note: The generateRequiredTags parameter is applicable only if documentTypeName is provided.
generateNilTags: String. Optional. Flag indicating whether the resulting XML string includes the attribute xsi:nil for elements that are null. Set to:
true to generate the xsi:nil attribute for an element if the Allow null property for the corresponding field is set to true and the field is null in document.
Note: generateRequiredTags must also be set to true to generate the xsi:nil attribute in the XML String.
false to omit the xsi:nil attribute even if a nillable field is, in fact, null.
Default value is false.
enforceLegalXML: String Optional - Flag indicating whether the service throws an exception when document contains multiple root elements or illegal XML tag names.
Set to:
true to throw an exception if document would produce an XML String containing multiple root elements and/or illegal XML tag names.
false to allow the resulting XML String to contain multiple root elements and/or illegal XML tag names. You would use this setting, for example, to create an XML fragment composed of multiple elements that were not all enclosed within a root element. This is the default.
dtdHeaderInfo: String. Optional. Contents of the DOCTYPE header to be inserted into the XML String.
Key
Description
systemID
String Optional. System identifier for the DTD, if any.
publicID
String Optional. Public identifier for the DTD, if any.
rootNSPrefix
String Optional. Namespace prefix of the rootLocalName, if any.
rootLocalName
String Optional. Local name (excluding the namespace prefix) of the root element.
bufferSize: String. Optional. Initial size (in bytes) of the String buffer that documentToXMLStream uses to assemble the output XML content. If the String buffer fills up before documentToXMLStream finishes generating the XML content, the service reallocates the buffer, expanding it by this amount each time the buffer becomes full.
Output Parameters
xmlStream: java.io.InputStream - XML content stream produced from document. To include namespaces, ensure that you do the following:
Include the appropriate namespace prefix in the key names in document. For example, to produce an element called acctNum that belongs to a namespace that is represented by the “GSX” prefix, you would include a key named GSX:acctNum in document.
Define the URIs for the prefixes that appear in document. You can do this through nsDecls or by including an @xmlns key in the element where you want the xmlns attribute to be inserted.
documentToXMLString
Converts a document to an XML content string. This service will recurse through a given document and build an XML representation from the elements within it. Key names are turned into XML elements, and the key values are turned into the contents of those elements.
Input Parameters
document: Document - Document that is to be converted to XML. If you want to produce a valid XML document (one with a single root node), document must contain only one top-level document, that is, a single document. The name of that document will serve as the name of the XML document’s root element. If you need to produce an XML fragment, for example, a loose collection of elements that are not encompassed within a single root element, then document can contain multiple top-level elements.
nsDecls [ ]: Document Optional - Namespaces associated with any namespace prefixes that are used in the key names in document. Each entry in nsDecls represents a namespace prefix/URI pair, where a key name represents a prefix and the value of the key specifies the namespace URI.
addHeader: String Optional - Flag specifying whether the header element <?xml version="1.0"?> is to be included in the resulting XML String.
Set to:
true to include the header. This is the default.
false to omit the header. Omit the header to generate an XML fragment or to insert a custom header.
attrPrefix: String Optional - Prefix that designates keys containing attributes. The default prefix is “@”.
encode: String Optional - Flag indicating whether to HTML-encode the data. Set this parameter to true if your XML data contains special characters, including the following: < > & " '
Set to:
true to HTML-encode the data. For example, the string expression 5 < 6 would be converted to <expr>5 &lt; 6</expr>, which is valid. If you do not want a leading & (ampersand) character encoded when it appears as part of a character or entity reference, set preserveRefs to true.
false to not HTML-encode the data. This is the default. For example, the string expression 5 < 6 would be converted to <expr>5 < 6</expr>, which is invalid.
strictEncodeElements: String. Optional. Controls the behavior of the encode parameter. If this parameter is set to true when the encode parameter is true, then only the < > & characters in the element content are HTML-encoded (the apostrophe and quote characters are not HTML-encoded).
Default value is false.
preserveRefs: String Optional - Flag indicating whether the leading & (ampersand) of a well-formed entity or character reference is left as & or further encoded as &amp; when the data is to be HTML-encoded (encode is set to true).
Set to:
true to preserve the leading & (ampersand) in an entity or character reference when the service HTML-encodes the data.
false to encode the leading & (ampersand) as &amp; when the & appears in an entity or character reference. This is the default. The service ignores the value of preserveRefs when encode is set to false.
documentTypeName: String (picklist). Optional. Document type that describes the structure and format of the output document. You can use this parameter to ensure that the output includes elements that might not be present in document at run time, or to describe the order in which elements are to appear in the resulting XML String. If you are using derived type processing, you must provide this parameter.
generateRequiredTags: String. Optional. Flag indicating whether empty tags are to be included in the output document if a mandatory element appears in the document type specified in documentTypeName but does not appear in document. Set to:
true to include mandatory elements if they are not present in document.
false to omit mandatory elements if they are not present in document.
Default value is false.
Note: The generateRequiredTags parameter is applicable only if documentTypeName is provided.
generateNilTags: String. Optional. Flag indicating whether the resulting XML string includes the attribute xsi:nil for elements that are null. Set to:
true to generate the xsi:nil attribute for an element if the Allow null property for the corresponding field is set to true and the field is null in document.
Note: generateRequiredTags must also be set to true to generate the xsi:nil attribute in the XML String.
false to omit the xsi:nil attribute even if a nillable field is, in fact, null.
Default value is false.
enforceLegalXML: String Optional - Flag indicating whether the service throws an exception when document contains multiple root elements or illegal XML tag names.
Set to:
true to throw an exception if document would produce an XML String containing multiple root elements and/or illegal XML tag names.
false to allow the resulting XML String to contain multiple root elements and/or illegal XML tag names. You would use this setting, for example, to create an XML fragment composed of multiple elements that were not all enclosed within a root element. This is the default.
dtdHeaderInfo: String. Optional. Contents of the DOCTYPE header to be inserted into the XML String.
Key
Description
systemID
String Optional. System identifier for the DTD, if any.
publicID
String Optional. Public identifier for the DTD, if any.
rootNSPrefix
String Optional. Namespace prefix of the rootLocalName, if any.
rootLocalName
String Optional. Local name (excluding the namespace prefix) of the root element.
bufferSize: String. Optional. Initial size (in bytes) of the String buffer that documentToXMLString uses to assemble the output XML String. If the buffer fills up before documentToXMLString finishes generating the XML String, the service reallocates the buffer, expanding it by this amount each time the buffer becomes full.
Output Parameters
xmlString: Object - XML document string produced from document. To include namespaces, ensure that you do the following:
Include the appropriate namespace prefix in the key names in document. For example, to produce an element called acctNum that belongs to a namespace that is represented by the “GSX” prefix, you would include a key named GSX:acctNum in the document.
Define the URIs for the prefixes that appear in document. You can do this through nsDecls or by including an @xmlns key in the element where you want the xmlns attribute to be inserted.
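As a conceptual sketch only (not IBM webMethods Integration code), the following Python fragment shows how prefixed key names and an @xmlns key in a document could map to the serialized XML described above. The GSX prefix, the URI, and the build helper are all invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical document: keys carry the "GSX" prefix and an @xmlns:GSX key
# supplies the namespace URI, mirroring the nsDecls/@xmlns convention above.
document = {
    "GSX:account": {
        "@xmlns:GSX": "http://example.com/gsx",   # assumed URI for illustration
        "GSX:acctNum": "1234",
    }
}

def build(name, value):
    """Serialize a nested dict into an element, treating '@'-prefixed keys
    as attributes -- a much simplified sketch of documentToXMLString."""
    elem = ET.Element(name)
    for key, val in value.items():
        if key.startswith("@"):
            elem.set(key[1:], val)              # @xmlns:GSX -> xmlns:GSX attribute
        elif isinstance(val, dict):
            elem.append(build(key, val))        # complex element
        else:
            ET.SubElement(elem, key).text = val # simple element
    return elem

xml_string = ET.tostring(build("GSX:account", document["GSX:account"]),
                         encoding="unicode")
print(xml_string)
```

The serialized result carries both the prefixed element names and the xmlns declaration, which is the effect the nsDecls and @xmlns mechanisms produce in the actual service.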
getXMLNodeType
Returns information about an XML node.
Input Parameters
rootNode: com.wm.lang.xml.Document - XML node about which you want information.
Output Parameters
systemID: String Conditional - System identifier, as provided by the DTD associated with rootNode. If rootNode does not have a system identifier, this value is null.
publicID: String Conditional - Public identifier, as provided by the DTD associated with rootNode. If rootNode does not have a public identifier, this value is null.
rootNamespace: String - URI of the XML namespace to which rootNode’s root element belongs.
rootNSPrefix: String Conditional - Namespace prefix of root element in rootNode, if any.
rootLocalName: String Conditional - Local name (excluding the namespace prefix) of the root element in rootNode, if any.
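To illustrate what getXMLNodeType reports, here is a rough Python analogue (sample document and URI are invented). Note one difference: ElementTree exposes the root tag in {uri}localname form and does not retain the original prefix, whereas getXMLNodeType also returns rootNSPrefix:

```python
import re
import xml.etree.ElementTree as ET

# Sample input; the URI is assumed for illustration.
xml_string = '<GSX:account xmlns:GSX="http://example.com/gsx"/>'

root = ET.fromstring(xml_string)
# After parsing, the root tag has the form {uri}localname; splitting it
# recovers the equivalents of rootNamespace and rootLocalName.
match = re.match(r"\{(.*)\}(.*)", root.tag)
root_namespace, root_local_name = match.groups() if match else ("", root.tag)

print(root_namespace, root_local_name)
```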
queryXMLNode
Queries an XML node.
The fields parameter specifies how data is extracted from the node to produce an output variable. This output variable is called a “binding,” because the fields parameter binds a certain part of the document to a particular output variable. The service must include at least one entry in fields. The result of each query you specify in fields is returned in a variable whose name and type you specify.
Input Parameters
node: The XML node that you want to query. This parameter supports the following types of input:
com.wm.lang.xml.Node - XML node that you want to query. An XML node can be produced by xmlStringToXMLNode or an XML content handler.
nsDecls: Document Optional - Namespaces associated with any namespace prefixes used to specify elements in fields/query. Each entry in nsDecls represents a namespace prefix/URI pair, where a key name represents a prefix and the value of the key specifies the namespace URI.
fields: Document List Optional - Parameters describing how data is to be extracted from node. Each document in the list contains parameters for a single query, as follows:
name: String - Name to assign to the resulting value.
resultType: String - Object type that the query is to yield. The following shows the allowed underlying value and the corresponding data type for resultType.
Object - Object
Object[ ] - Object List
Record - Document
Record[ ] - Document List
String - String
String[ ] - String List
String[ ][ ] - String Table
query: String - Query identifying the data to be extracted from node.
queryType: String - Query language in which query is expressed. Valid values are WQL and XQL.
onnull: String - Code indicating what you want queryXMLNode to do when the result is null. Set to one of the following:
continue - To indicate that all result values are acceptable for this query (including null).
fail - To indicate that the service should fail if the result of this query is null and continue in all other cases.
succeed - To indicate that the service should continue if the result of this query is null and fail in all other cases.
fields [ ]: Document List - Parameters that support recursive execution of bindings. Each fields list defines bindings for one level of the output with the top level being the pipeline and the first level down being contents of a document or document list in the pipeline.
Output Parameters
Document: Results from the queries specified in fields. This service returns one element for each query specified in fields. The specific names and types of the returned elements are determined by the fields/name and fields/resultType parameters of the individual queries.
Usage Notes
If queryXMLNode fails, it throws an exception. Common reasons for queryXMLNode to fail include:
A variable that has no query string assigned to it.
A syntax error in a query string.
A query fails the “Allows Null” test.
The node variable does not exist or it is null.
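The binding mechanism above can be sketched in Python. In this hypothetical sketch, ElementTree's limited XPath subset stands in for WQL/XQL, and the fields list, variable names, and onnull handling are modeled loosely on the parameters described above:

```python
import xml.etree.ElementTree as ET

# Sample node and hypothetical bindings: each binding names a result
# variable, gives a query, and says what a null result should do,
# like the onnull codes (continue/fail) described above.
node = ET.fromstring("<order><id>42</id><item>widget</item></order>")

fields = [
    {"name": "orderId", "query": "./id",   "onnull": "fail"},
    {"name": "note",    "query": "./note", "onnull": "continue"},
]

results = {}
for binding in fields:
    found = node.find(binding["query"])        # XPath subset, not WQL/XQL
    value = found.text if found is not None else None
    if value is None and binding["onnull"] == "fail":
        raise ValueError(f"query {binding['query']!r} returned null")
    results[binding["name"]] = value           # one output variable per query

print(results)   # {'orderId': '42', 'note': None}
```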
xmlBytesToDocument
Converts XML content bytes (byte array) to a document. This service transforms each element and attribute in XML content bytes to an element in a Document.
Input Parameters
xmlBytes: Object - XML content bytes that are to be converted to a document.
nsDecls: Document Optional - Namespace prefixes to use for the conversion. This parameter specifies the prefixes that will be used when namespace-qualified elements are converted to key names in the resulting Document. For example, if you want elements belonging to a particular namespace to have the prefix GSX in the resulting Document (for example, GSX:acctNum), you would associate the prefix GSX with that namespace in nsDecls. This is important because incoming XML documents can use any prefix for a given namespace, but the key names expected by a target service will have a fixed prefix. Namespace prefixes in nsDecls also define the prefixes used by the arrays, documents, and collect parameters. Each entry in nsDecls represents a namespace prefix/URI pair, where a key name represents a prefix and the value of the key specifies the namespace URI.
preserveUndeclaredNS: String Optional - Flag indicating whether or not IBM webMethods Integration keeps undeclared namespaces in the resulting document. An undeclared namespace is one that is not specified as part of the nsDecls input parameter.
Set to:
true to preserve undeclared namespaces in the resulting document. For each namespace declaration in the XML document that is not specified in the nsDecls parameter, IBM webMethods Integration adds the xmlns attribute as a String variable to the document. IBM webMethods Integration gives the variable a name that begins with “@xmlns” and assigns the variable the namespace value specified in the XML document. IBM webMethods Integration preserves the position of the undeclared namespace in the resulting document.
false to ignore namespace declarations in the XML document that are not specified in the nsDecls parameter. This is the default.
preserveNSPositions: String Optional - Flag indicating whether or not IBM webMethods Integration maintains the position of namespaces declared in the nsDecls parameter in the resulting document.
Set to:
true to preserve the position of namespaces declared in nsDecls in the resulting document. For each namespace specified in the nsDecls parameter, IBM webMethods Integration adds the xmlns attribute to the document (IData) as a String variable named “@xmlns:NSprefix” where “NSprefix” is the prefix name specified in nsDecls. IBM webMethods Integration assigns the variable the namespace value specified in the XML document. This variable maintains the position of the xmlns attribute declaration within the XML document.
false to not maintain the position of the namespace declarations specified in nsDecls in the resulting document. This is the default.
Output Parameters
document: Document - Document representation of the nodes and attributes in xmlBytes.
xmlNodeToDocument
Converts an XML node to a document.
This service transforms each element and attribute in the XML node to an element in a Document.
Notes:
The XML version attribute is converted to an element named @version.
The resulting document is given the same name as the XML document’s root element and is a child of the document variable that this service returns.
Simple elements are converted to String elements.
Complex elements and simple elements that have attributes are converted to documents. Note that keys derived from attributes are prefixed with a “@” character to distinguish them from keys derived from elements. Also note that when a simple element has an attribute, its value is placed in an element named *body.
Repeated elements can be collected into arrays using the makeArrays and/or arrays parameters. See makeArrays and arrays below for additional information about producing arrays.
While creating a document, the xmlNodeToDocument service assigns an empty String value to the fields that are empty in the document.
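The conversion rules in the notes above can be sketched in Python. This hypothetical helper shows attributes becoming "@"-prefixed keys, the text of an element that also has attributes landing under *body, and repeated child names collecting into arrays (the makeArrays=true behavior):

```python
import xml.etree.ElementTree as ET

def node_to_document(elem):
    """Simplified sketch of xmlNodeToDocument: attributes -> '@' keys,
    mixed text -> '*body', repeated child names -> lists."""
    doc = {"@" + k: v for k, v in elem.attrib.items()}
    for child in elem:
        # Complex elements (children or attributes) become nested documents;
        # simple elements become plain strings.
        value = (node_to_document(child)
                 if (len(child) or child.attrib) else child.text)
        if child.tag in doc:                      # makeArrays=true behavior
            prev = doc[child.tag]
            doc[child.tag] = prev + [value] if isinstance(prev, list) else [prev, value]
        else:
            doc[child.tag] = value
    if elem.text and elem.text.strip() and doc:   # element value beside attrs
        doc["*body"] = elem.text.strip()
    return doc if doc else elem.text

root = ET.fromstring('<acct currency="USD">1234<rep>a</rep><rep>b</rep></acct>')
print(node_to_document(root))
# {'@currency': 'USD', 'rep': ['a', 'b'], '*body': '1234'}
```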
Input Parameters
node: XML node that is to be converted to a document. This parameter supports the com.wm.lang.xml.Node and org.w3c.dom.Node types of input.
attrPrefix: String Optional - Prefix that is to be used to designate keys containing attribute values. The default is “@”.
arrays [ ]: String List Optional - Names of elements that are to be generated as arrays, regardless of whether they appear multiple times in node. For example, if arrays contained rep and address for an XML document, xmlNodeToDocument would generate element rep as a String List and element address as a Document List. If you include namespace prefixes in the element names that you specify in arrays, you must define the namespaces associated with those prefixes in nsDecls.
makeArrays: String Optional - Flag indicating whether you want xmlNodeToDocument to automatically create an array for every element that appears in node more than once.
Set to:
true to automatically create arrays for every element that appears more than once in node. This is the default.
false to create arrays for only those elements specified in arrays.
collect: Document Optional - Elements that are to be placed into a new, named array (that is, a “collection”). Within collect, use key names to specify the names of the elements that are to be included in the collection. Then set the value of each key to specify the name of the collection in which you want that element placed. For example, if you want to place the name and rep elements in an array called originator, you would set collect as follows:
Key: name
Value: originator
Key: rep
Value: originator
If the set of elements in a collection are all simple elements, a String List is produced. However, if the set is made up of complex elements, or a combination of simple and complex elements, a Document List is produced. When this is the case, each member of the array will include a child element called *name that contains the name of the element from which that member was derived. You may optionally include namespace prefixes in the element names that you specify in collect; however, if you do, you must define the namespaces associated with those prefixes in nsDecls. You cannot include an element in more than one collection.
nsDecls: Document Optional - Namespace prefixes to use for the conversion. This parameter specifies the prefixes that will be used when namespace-qualified elements are converted to key names in the resulting Document. For example, if you want elements belonging to a particular namespace to have the prefix GSX in the resulting Document (for example, GSX:acctNum), you would associate the prefix GSX with that namespace in nsDecls. This is important because incoming XML documents can use any prefix for a given namespace, but the key names expected by a target service will have a fixed prefix. Namespace prefixes in nsDecls also define the prefixes used by the arrays, documents, and collect parameters. Each entry in nsDecls represents a namespace prefix/URI pair, where a key name represents a prefix and the value of the key specifies the namespace URI.
documents[ ]: String List Optional - Names of any simple elements that are to be generated as documents instead of Strings. The document produced for each element specified in documents[ ] will have the same name as the source element from which it is derived. It will contain a String element named *body that holds the element’s value. If you include namespace prefixes in the element names that you specify, you must define the namespaces associated with those prefixes in nsDecls.
documentTypeName: String (picklist). Optional. Document type that describes the structure and format of the output document. You can use this parameter to ensure that the output includes elements that might not be present in node at run time, or to describe the order in which elements are to appear in the resulting document. If you are using derived type processing, you must provide this parameter.
mixedModel: String Optional - Flag specifying how mixed-content elements (elements containing both text values and child elements) are to be converted.
Set to:
true to place top-level text in an element named *body.
false to omit top-level text and include only the child elements from mixed-content elements.
preserveUndeclaredNS: String Optional - Flag indicating whether or not IBM webMethods Integration keeps undeclared namespaces in the resulting document. An undeclared namespace is one that is not specified as part of the nsDecls input parameter.
Set to:
true to preserve undeclared namespaces in the resulting document. For each namespace declaration in the XML document that is not specified in the nsDecls parameter, IBM webMethods Integration adds the xmlns attribute as a String variable to the document. IBM webMethods Integration gives the variable a name that begins with “@xmlns” and assigns the variable the namespace value specified in the XML document. IBM webMethods Integration preserves the position of the undeclared namespace in the resulting document.
false to ignore namespace declarations in the XML document that are not specified in the nsDecls parameter. This is the default.
preserveNSPositions: String Optional - Flag indicating whether or not IBM webMethods Integration maintains the position of namespaces declared in the nsDecls parameter in the resulting document.
Set to:
true to preserve the position of namespaces declared in nsDecls in the resulting document. For each namespace specified in the nsDecls parameter, IBM webMethods Integration adds the xmlns attribute to the document (IData) as a String variable named “@xmlns:NSprefix” where “NSprefix” is the prefix name specified in nsDecls. IBM webMethods Integration assigns the variable the namespace value specified in the XML document. This variable maintains the position of the xmlns attribute declaration within the XML document.
false to not maintain the position of the namespace declarations specified in nsDecls in the resulting document. This is the default.
useNamespacesOfDocumentType: String. Optional. Flag indicating whether or not IBM webMethods Integration uses the namespaces defined in the document type specified for the documentTypeName input parameter when creating a document from an XML string. Set to:
true to use the namespaces defined in the document type specified in documentTypeName and those in nsDecls when creating elements in the output document.
false to use the namespaces defined in the nsDecls parameter only.
Default value is false.
Output Parameters
document: Document - Document representation of the nodes and attributes in node.
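The collect parameter described above can be approximated in Python. This sketch uses the name/rep example from the parameter description; the element names and the originator collection name follow that example, and only the simple-element case (producing a String List) is shown:

```python
import xml.etree.ElementTree as ET

# Hypothetical collect setting: 'name' and 'rep' both go into an array
# called 'originator', as in the example above.
collect = {"name": "originator", "rep": "originator"}

node = ET.fromstring("<cust><name>Ann</name><rep>Bob</rep><city>Oslo</city></cust>")

document = {}
for child in node:
    if child.tag in collect:
        # All members are simple elements, so a plain list (String List)
        # results; mixed simple/complex members would need '*name' entries.
        document.setdefault(collect[child.tag], []).append(child.text)
    else:
        document[child.tag] = child.text

print(document)   # {'originator': ['Ann', 'Bob'], 'city': 'Oslo'}
```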
xmlStreamToDocument
Converts an XML content stream to a document. This service transforms each element and attribute in the XML content stream to an element in a Document.
Input Parameters
xmlStream: java.io.InputStream - XML content stream that is to be converted to a document.
nsDecls [ ]: Document Optional - Namespace prefixes to use for the conversion. This parameter specifies the prefixes that will be used when namespace-qualified elements are converted to key names in the resulting document object. For example, if you want elements belonging to a particular namespace to have the prefix GSX in the resulting document, for example, GSX:acctNum, you would associate the prefix GSX with that namespace in nsDecls . This is important because incoming XML documents can use any prefix for a given namespace, but the key names expected by a target service will have a fixed prefix. Namespace prefixes in nsDecls also define the prefixes used by the arrays, documents, documentTypeName, and collect parameters. Each entry in nsDecls represents a namespace prefix/URI pair, where a key name represents a prefix and the value of the key specifies the namespace URI.
Parameters for nsDecls [ ] are:
prefix: Key name.
uri: Key value.
preserveUndeclaredNS: String Optional - Flag indicating whether or not IBM webMethods Integration keeps undeclared namespaces in the resulting document. An undeclared namespace is one that is not specified as part of the nsDecls input parameter.
Set to:
true to preserve undeclared namespaces in the resulting document. For each namespace declaration in the XML document that is not specified in the nsDecls parameter, IBM webMethods Integration adds the xmlns attribute as a String variable to the document. IBM webMethods Integration gives the variable a name that begins with “@xmlns” and assigns the variable the namespace value specified in the XML document. IBM webMethods Integration preserves the position of the undeclared namespace in the resulting document.
false to ignore namespace declarations in the XML document that are not specified in the nsDecls parameter. This is the default.
preserveNSPositions: String Optional - Flag indicating whether or not IBM webMethods Integration maintains the position of namespaces declared in the nsDecls parameter in the resulting document.
Set to:
true to preserve the position of namespaces declared in nsDecls in the resulting document. For each namespace specified in the nsDecls parameter, IBM webMethods Integration adds the xmlns attribute to the document as a String variable named “@xmlns:NSprefix” where “NSprefix” is the prefix name specified in nsDecls. IBM webMethods Integration assigns the variable the namespace value specified in the XML document. This variable maintains the position of the xmlns attribute declaration within the XML document.
false to not maintain the position of the namespace declarations specified in nsDecls in the resulting document. This is the default.
Output Parameters
document: Document - Document representation of the nodes and attributes in xmlStream.
xmlStringToDocument
Converts an XML string to a document. This service transforms each element and attribute in the XML string to an element in a Document.
Input Parameters
xmlString: String - XML string that is to be converted to a document.
nsDecls [ ]: Document Optional - Namespace prefixes to use for the conversion. This parameter specifies the prefixes that will be used when namespace-qualified elements are converted to key names in the resulting document object. For example, if you want elements belonging to a particular namespace to have the prefix GSX in the resulting document, for example, GSX:acctNum, you would associate the prefix GSX with that namespace in nsDecls . This is important because incoming XML documents can use any prefix for a given namespace, but the key names expected by a target service will have a fixed prefix. Namespace prefixes in nsDecls also define the prefixes used by the arrays, documents, documentTypeName, and collect parameters. Each entry in nsDecls represents a namespace prefix/URI pair, where a key name represents a prefix and the value of the key specifies the namespace URI.
Parameters for nsDecls [ ] are:
prefix: Key name.
uri: Key value.
preserveUndeclaredNS: String Optional - Flag indicating whether or not IBM webMethods Integration keeps undeclared namespaces in the resulting document. An undeclared namespace is one that is not specified as part of the nsDecls input parameter.
Set to:
true to preserve undeclared namespaces in the resulting document. For each namespace declaration in the XML document that is not specified in the nsDecls parameter, IBM webMethods Integration adds the xmlns attribute as a String variable to the document. IBM webMethods Integration gives the variable a name that begins with “@xmlns” and assigns the variable the namespace value specified in the XML document. IBM webMethods Integration preserves the position of the undeclared namespace in the resulting document.
false to ignore namespace declarations in the XML document that are not specified in the nsDecls parameter. This is the default.
preserveNSPositions: String Optional - Flag indicating whether or not IBM webMethods Integration maintains the position of namespaces declared in the nsDecls parameter in the resulting document.
Set to:
true to preserve the position of namespaces declared in nsDecls in the resulting document. For each namespace specified in the nsDecls parameter, IBM webMethods Integration adds the xmlns attribute to the document as a String variable named “@xmlns:NSprefix” where “NSprefix” is the prefix name specified in nsDecls. IBM webMethods Integration assigns the variable the namespace value specified in the XML document. This variable maintains the position of the xmlns attribute declaration within the XML document.
false to not maintain the position of the namespace declarations specified in nsDecls in the resulting document. This is the default.
arrays [ ]: String List Optional - Names of elements that are to be generated as arrays, regardless of whether they appear multiple times. For example, if arrays contained the following values for an XML document: rep and address, xmlStringToDocument would generate element rep as a String List and element address as a Document List. If you include namespace prefixes in the element names that you specify in arrays, you must define the namespaces associated with those prefixes in nsDecls.
makeArrays: String Optional - Flag indicating whether you want xmlStringToDocument to automatically create an array for every element that appears more than once.
Set to:
true to automatically create arrays for every element that appears more than once. This is the default.
false to create arrays for only those elements specified in arrays.
Output Parameters
document: Document - Document representation of the nodes and attributes in xmlString.
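The nsDecls remapping idea, where incoming XML may use any prefix but the resulting keys get the fixed prefix you declare, can be sketched in Python. The GSX prefix and the URI are invented for illustration:

```python
import xml.etree.ElementTree as ET

# nsDecls-style mapping: whatever prefix the incoming XML uses, keys in
# the resulting document get the fixed prefix declared here (assumed URI).
ns_decls = {"GSX": "http://example.com/gsx"}
uri_to_prefix = {uri: prefix for prefix, uri in ns_decls.items()}

# The incoming document happens to use the prefix "a" for the same URI.
xml_string = '<a:acctNum xmlns:a="http://example.com/gsx">1234</a:acctNum>'
root = ET.fromstring(xml_string)

# ElementTree parses tags into {uri}local form; rebuild a fixed-prefix key.
uri, local = root.tag[1:].split("}")
key = f"{uri_to_prefix[uri]}:{local}" if uri in uri_to_prefix else local

document = {key: root.text}
print(document)   # {'GSX:acctNum': '1234'}
```

This is why nsDecls matters: the target service can rely on seeing GSX:acctNum regardless of the prefix the sender chose.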
xmlStringToXMLNode
Converts a String, byte[ ], or InputStream containing an XML document to an XML node.
An XML node is a representation of an XML document that can be consumed by IBM webMethods Integration.
Input Parameters
xmldata: String Optional - String containing the XML document to convert to an XML node. If you specify xmldata, do not specify $filedata or $filestream.
$filedata: byte[ ] Optional - byte[ ] containing the XML document to convert to an XML node. If you specify $filedata, do not specify xmldata or $filestream.
$filestream: java.io.InputStream Optional - InputStream containing the XML document to convert to an XML node. If you specify $filestream, do not specify xmldata or $filedata.
encoding: String Optional - Character encoding in which text is represented. Specify UTF-8 for XML files and ISO-8859-1 for HTML files. To have the parser attempt to detect the type of encoding, specify autoDetect (the default if encoding is not specified).
expandDTD: String. Optional. Flag indicating whether references to parameter entities in the XML document’s DTD are to be processed. Set to:
true to expand references to parameter entities to their full definition.
false to ignore references to parameter entities.
Default value is false.
isXML: String Optional - Flag specifying whether the input document is XML or HTML. (xmlStringToXMLNode must know this so that it can parse the document correctly.) Set to:
autoDetect - To parse the document based on its type. When you use this option, xmlStringToXMLNode detects the document’s type based on its document type declaration as indicated by a <!DOCTYPE...> or <?xml...> tag. If it cannot determine the document type, it parses the document as HTML. This is the default.
true to parse the document as XML.
false to parse the document as HTML.
expandGeneralEntities: String. Optional. Flag indicating whether service should expand references to general entities in the XML document’s DTD. Set to:
true to expand references to general entities to their full definition.
false to ignore references to general entities.
Default value is true.
validateXML: String. Optional. Flag indicating whether IBM webMethods Integration validates the incoming XML document to determine whether it is well-formed XML before converting the XML document. Set to:
true to validate the incoming XML. If validation fails, the service ends with a ServiceException.
false to skip validation.
Default value is false.
Output Parameters
node: com.wm.lang.xml.Node - XML node representation of the XML document in xmldata. This object can be used as input to services that consume XML nodes.
Usage Notes
The input parameters xmldata, $filedata, and $filestream are mutually exclusive. Specify only one of the preceding parameters. IBM webMethods Integration checks the parameters in the following order: $filedata, $filestream, and xmldata, and uses the value of the first parameter with a value.
getXMLNodeIterator
Creates and retrieves a NodeIterator.
A NodeIterator iterates over the element node descendants of an XML node and returns the element nodes that satisfy the given criteria. The client application or flow service uses the service getNextXMLNode to get each node in turn. NodeIterators can only be created for XML nodes (not for HTML nodes).
getXMLNodeIterator is useful for loading and parsing documents on demand. NodeIterators are also useful for promptly delivering relevant information as it becomes available in the document, rather than waiting for the entire document to load initially. This service is particularly intended for handling large documents or documents that load gradually.
NodeIterator provides a moving-window mode in which the only node resident in memory is the last node returned by getNextXMLNode. In this mode, when getNextXMLNode is called, all nodes preceding the newly returned node become invalid, including those previously returned by getNextXMLNode. The client must fully complete processing preceding nodes before advancing the window by calling getNextXMLNode again. In moving-window mode, the document consumes at least enough memory to hold the most recently returned node.
The moving-window mode allows the server to process multi-megabyte XML documents using very little memory. This mode may only be used on a node that represents an entire XML document and not on any descendant node.
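A rough Python analogue of moving-window iteration is stream parsing with ElementTree's iterparse, discarding each node once it has been fully processed so only the current node stays in memory. The sample document is invented, and elem.clear() plays a role similar to freeing the node:

```python
import io
import xml.etree.ElementTree as ET

# Stream a (pretend-large) document and process one node at a time.
xml_stream = io.StringIO(
    "<orders>" + "".join(f"<order>{i}</order>" for i in range(3)) + "</orders>"
)

totals = 0
for event, elem in ET.iterparse(xml_stream, events=("end",)):
    if elem.tag == "order":
        totals += int(elem.text)   # finish all processing of this node first
        elem.clear()               # then discard it, like the moving window

print(totals)   # 3
```

As with the moving window, the current node must be completely examined before advancing, because the cleared nodes are no longer usable.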
Input Parameters
node: The XML node for which you want to create a NodeIterator. It can represent either an entire XML document or an element within an XML document. However, if the NodeIterator will be used in moving-window mode, you must supply a node representing a complete XML document, because moving-window mode is only meaningful for managing the loading process of a document; operating on a descendant node implies that the node has already been loaded.
criteria: String List Optional. Pattern strings identifying the nodes that the iterator is to return. A pattern string may take either the form <localname> or the form <prefix>:<localname>.
When a pattern takes the first form, it identifies an element whose local name is <localname> and that belongs to the default XML namespace.
When a pattern takes the second form, it identifies an element whose local name is <localname> and whose XML namespace is given by the prefix <prefix>.
If the input parameter nsDecls declares this prefix, the namespace URI of the element must match the URI declared for the prefix. If the prefix is not declared in nsDecls, the prefix is matched against prefixes found in the XML. Both <prefix> and <localname> can each optionally take the value “*” to match any namespace or local name. A “*” prefix also matches elements residing in the default namespace.
If you do not specify criteria, all element node children of the root element are returned.
nsDecls: Document Optional. Namespaces associated with any namespace prefixes used in criteria. Each entry in nsDecls represents a namespace prefix/URI pair, where a key name represents a prefix and the value of the key specifies the namespace URI.
movingWindow: String Optional. Flag indicating whether the NodeIterator is to iterate using a moving window, as described above. In moving-window mode, the entire document preceding the node most recently returned by getXMLNodeIterator is discarded. Subsequent attempts to return preceding portions of the document will yield either the repeating text PURGED or the proper data, depending on whether the data falls within an area that the server was able to discard. When iterating with a moving window, the current node should be queried and completely examined before requesting the next node.
Set to:
true to use the NodeIterator in moving-window mode.
false to use the NodeIterator in normal mode. This is the default.
Output Parameters
iterator: NodeIterator for use with the service getNextXMLNode.
getNextXMLNode
Retrieves the next XML node from a NodeIterator.
Input Parameters
iterator: NodeIterator from which to retrieve the next node.
Output Parameters
next: Document Conditional. The requested node. It is null when the NodeIterator has no more nodes to return. Otherwise, next contains the following:
Key
Description
name
String Element type name of the node. If the element belongs to a namespace and the namespace was declared at the time the NodeIterator was constructed, name will have the prefix declared for that namespace. If the namespace is not declared, name will use the prefix that occurs in the XML.
node
XML node identified by the input criteria used to originally generate the NodeIterator.
It is possible that all calls to getNextXMLNode on a given NodeIterator will yield the same document instance, with varying values for the instance’s entries. Therefore, applications should assume that each call to getNextXMLNode invalidates the document returned by the previous call. This approach maximizes server speed and minimizes resource usage.
Usage Notes
A NodeIterator is acquired via the service getXMLNodeIterator. The output of getNextXMLNode is a document containing the element type name of the node and the node itself. The instance of this document is only valid until the next getNextXMLNode call on the same NodeIterator, because getNextXMLNode uses the same document object for each call.
freeXMLNode
Frees the resources allocated to a given XML node.
You can optionally invoke this service when using a NodeIterator to iterate over an XML node, and you decide to halt the processing of the node before reaching the end. By explicitly calling freeXMLNode, you immediately release the resources associated with the node. While it is not mandatory to call this service upon completing the processing of an XML node with a NodeIterator, doing so can enhance server performance. Note that once you have freed an XML node using this service, the node becomes unstable and should not be utilized by any subsequent processes.
Input Parameters
rootNode: XML node whose resources you want to release. Specify the same type of input that you supplied to getXMLNodeIterator.
Output Parameters
None.
E2E Monitoring Services
setCustomTransactionId
You can specify a custom transaction ID as part of your IBM webMethods Integration Flow services. This persists as part of your transaction trace stored in End-to-End Monitoring. A custom transaction ID allows you to reference or search a transaction based on your own identifier such as order ID or shipping reference.
Use this function to set the custom transaction ID in End-to-End Monitoring. You can use this function multiple times in the same Flow Service or associated child services. Each call appends the new value, delimited by a comma, and sets the result as the End-to-End Monitoring custom transaction ID. After setting the custom transaction ID, you can use any of these values to search for the transaction in End-to-End Monitoring, and to create Deep Link URLs.
Input Parameters
e2eTransactionId: String - Custom transaction ID that uniquely identifies the End-to-End Monitoring transaction.
Output Parameters
None.
Note
The maximum length of the values this function can set in End-to-End Monitoring is 1024 characters, including the delimiter commas. New values are not appended once this maximum length is exceeded.
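The append-and-cap behavior described above can be sketched as follows. This is a minimal Python simulation of the documented semantics (the function name and signature are hypothetical, not the actual service interface):

```python
MAX_LEN = 1024  # documented maximum, including delimiter commas

def set_custom_transaction_id(current, new_id):
    """Append new_id to the comma-delimited custom transaction ID.

    Returns current unchanged if the result would exceed MAX_LEN,
    mirroring the documented 'new values do not get appended' rule.
    """
    candidate = new_id if not current else current + "," + new_id
    return candidate if len(candidate) <= MAX_LEN else current

tid = set_custom_transaction_id("", "ORDER-1001")
tid = set_custom_transaction_id(tid, "SHIP-42")
# A value that would push the total past 1024 characters is ignored:
unchanged = set_custom_transaction_id(tid, "X" * 1024)
```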
setCustomTransactionIds
You can specify a custom transaction ID as part of your IBM webMethods Integration Flow services. This persists as part of your transaction trace stored in End-to-End Monitoring. A custom transaction ID allows you to reference or search a transaction based on your own identifier such as order ID or shipping reference.
Use this function to set the custom transaction ID in End-to-End Monitoring as an array of key-value pairs. Because it accepts an array input, you can set multiple key-value pairs in a single call. You can use this function multiple times in the same Flow service or its child services. The function joins each key to its value with the equal sign (=), appends the resulting pairs separated by commas, and sets them as the End-to-End Monitoring custom transaction ID. After setting the custom transaction ID, you can use any of these values to search for the transaction in End-to-End Monitoring and to create Deep Link URLs.
Input Parameters
e2emTransactionIds: Document List - Custom transaction IDs as key-value pairs that uniquely identify the End-to-End Monitoring transaction.
Output Parameters
None.
Note
The maximum length of the values this function can set in End-to-End Monitoring is 1024 characters, including the keys, values, equal (=) characters, and delimiter commas. New values are not appended once this maximum length is exceeded.
You can search for the key or the value in End-to-End Monitoring user interface. However, you cannot search for key-value pairs in the key=value format as End-to-End Monitoring filters do not support special characters. In contrast, deep link URLs support key-value pairs.
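The key=value joining described above can be sketched in the same way as the single-ID variant. This is a hedged Python simulation of the documented behavior (names are hypothetical); the input mirrors the Document List of key-value pairs:

```python
MAX_LEN = 1024  # documented maximum for the combined string

def set_custom_transaction_ids(current, pairs):
    """pairs: list of {"key": ..., "value": ...} documents.

    Joins each key to its value with '=', appends the pairs with
    commas, and refuses the update if MAX_LEN would be exceeded.
    """
    joined = ",".join(f'{p["key"]}={p["value"]}' for p in pairs)
    candidate = joined if not current else current + "," + joined
    return candidate if len(candidate) <= MAX_LEN else current

tid = set_custom_transaction_ids("", [{"key": "orderId", "value": "1001"},
                                      {"key": "region", "value": "EU"}])
```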
While executing a flow service, IBM webMethods Integration allows you to log business data at the step level as well as at the Flow service level.
Note
Log business data functionality has a 16 MB limitation on the size of data that can be logged. This limitation is applicable at the step level as well as at the flow service level.
Log business data at the step level
1. Provide a name and description of the flow service.
2. As we will query contacts from Salesforce CRM and log the business data, select the Salesforce CRM connector, the queryContacts operation, and SalesforceCRM_2 as the account.
3. Select the Log business data option available at the step level as shown below. At the step level, the Log business data option is enabled only for connectors.
4. In the Log business data dialog box, choose Always to always log business data. As we will query the contacts from Salesforce CRM, select the output fields and specify the display names for AccountId, LastName, and FirstName as shown below.
5. Run the flow service and go to the Execution History page by clicking the Execution history option.
6. Click the execution history entry to view the execution details page. As you can see below, the business data is logged.
Log business data at the flow service level
Note
In the Monitor > Flow service execution page, only top-level flow services are visible. To view child-level flow service information, enable Log business data with its value set to Always at the flow service level. This information is not available directly on the Monitor page, but through an option within the top-level flow service operations.
The steps to do this are explained below:
1. Click the icon. Let us define the input field FirstName as shown below.
2. Click the Log business data option available at the flow service level.
3. In the Log business data dialog box, choose Always to always log business data.
4. In an earlier step, you defined the input field FirstName. Select the input field FirstName as shown below and specify the display name as First Name of Customer.
5. Run the flow service. For the FirstName field, enter the value John and again click Run.
6. Click the Execution history option to go to the Execution History page.
7. Click the execution history entry to view the execution details page. As you can see below, the business data is logged.
Reference data is data that defines the set of permissible values to be used by other data fields. It is a collection of key-value pairs, which can be used to determine the value of a data field based on the value of another data field. For example, the value of a status field in an Application can be “Canceled” and that needs to be interpreted as “CN” in another Application.
IBM webMethods Integration allows you to upload reference data from a text file containing tabular data separated by a character, for example, a comma, semicolon, and so on. The uploaded file should not have an empty column heading or space in the first row, and the first row cannot be empty.
Reference data appears under Services in the flow services workspace. You can access the uploaded reference data in flow services as a list of documents by using the reference data service and providing an appropriate name. You can filter the documents returned into the pipeline by the reference data service.
You can Delete, Download, or Edit reference data from the Reference Data screen. If reference data is used in a flow service, you cannot delete it; you must first remove the reference data from the flow service and then delete it. The Download option allows you to download the previously uploaded reference data, edit it, and then upload the modified file.
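The upload rules above (first row read as column headings, rows returned as a list of documents) can be illustrated with a short sketch. This uses Python's standard csv module as a stand-in for the actual upload parser; the semicolon separator is an example of the configurable field separator:

```python
import csv
import io

# Sketch: the first row is read as column headings; each subsequent
# row becomes one document (dict) keyed by those headings.
data = "Vendor;City\nAcme;Berlin\nZip;Oslo\n"
reader = csv.DictReader(io.StringIO(data), delimiter=";")
docs = list(reader)
```

This is also why the first row cannot be empty or contain blank headings: every column of every document is keyed by it.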
Reference Data Signature
Reference data signature is derived from the column names of the uploaded text file. You can filter the Reference data by providing an appropriate matchCriteria. The output of Reference data is a list of documents that match the specified matchCriteria.
Input Parameters
matchCriteria: Document. Criteria on which documents from the Reference data are matched. Parameters for matchCriteria are:
path: Column names of the Reference data.
compareValueAs: Optional. Allowed values are string, numeric, and datetime. The default value is string.
datePattern: Optional. Pattern is considered only if compareValueAs is of type datetime. Default value is MM/dd/yyyy hh:mm:ss a.
joins: List of join criteria.
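The matchCriteria semantics above can be sketched as a filter over the list of documents. This is a hypothetical Python simulation, not the actual service; the default datePattern MM/dd/yyyy hh:mm:ss a is rendered here with its Python strptime equivalent:

```python
from datetime import datetime

# Hypothetical sketch of matchCriteria filtering over reference data
# rows (a list of dicts keyed by column name); not the actual service.
def match(rows, path, value, compare_value_as="string",
          date_pattern="%m/%d/%Y %I:%M:%S %p"):
    def coerce(v):
        if compare_value_as == "numeric":
            return float(v)
        if compare_value_as == "datetime":
            return datetime.strptime(v, date_pattern)
        return str(v)  # default: compare as string
    return [r for r in rows if coerce(r[path]) == coerce(value)]

rows = [{"Vendor": "Acme", "Rate": "10"},
        {"Vendor": "Zip", "Rate": "10.0"}]
# Numeric comparison matches "10" and "10.0"; string comparison would not.
numeric_hits = match(rows, "Rate", "10", compare_value_as="numeric")
```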
Let us see how to create a reference data and use it in a flow service with the help of an example.
In this example, we will upload a file that contains a list of courier vendors as reference data in IBM webMethods Integration, and then import the list of vendors as contacts into Salesforce CRM.
Before you begin
Log in to your tenant.
Check if you have the Developer and Admin roles assigned from the Settings > Roles page.
Obtain the credentials to log in to the Salesforce CRM back end account.
Create a Salesforce CRM account in IBM webMethods Integration. You can also create this account inline at a later stage.
Basic Flow
1. Select the project where you want to create the reference data. You can also create a new project.
2. Click Configurations > Flow service > Reference Data > Add Reference Data.
3. Provide a name (CourierVendors) and an optional description for the reference data.
4. For the Reference Data File, click Browse file and select the file. Both CSV files and text files containing tabular data are supported. The maximum file size you can upload is 1 MB. As shown in the sample, the file should not have an empty column heading or a space in the first row, and that row cannot be empty, because the first row of data is read as column headings.
5. Click Next to define the reference data. Select the Field separator and the Text qualifier. Determine the encoding of the reference data file and select the same encoding from the File Encoding drop-down list.
6. Click Next to preview the data. If you select an incorrect encoding, garbage characters may appear in the preview pane.
7. Click Done to create the reference data. The new reference data appears in the Reference Data page. The reference data also appears under the Services category in the flow services workspace.
8. Go to Flow services and create a new flow service. Provide the name ImportCourierVendorsAsContacts and a description of the flow service. Then type reference data in the first step and select Reference Data.
9. Select CourierVendors. Click the icon to add a new step. We need to create a contact for each vendor, so type Repeat to select the Repeat step, and then select the CourierVendors option.
10. Select Salesforce CRM, select the action as createcontact, and associate the SalesforceCRM_1 account.
11. Click the mapping icon and map the input and output fields as shown below.
12. Save and run the flow service, and view the results.
Note
As Reference Data generally varies across environments, you need to reconfigure it for each environment. Therefore, if you publish a project that has a flow service with Reference Data, or export such a flow service, the Reference Data is not published or exported along with the flow service. You must create the Reference Data in the destination environment.
Cloning Flow services
No subtopics in this section
The Clone feature allows you to copy the existing functionality of a flow service into a new flow service. This is particularly useful if you want to recreate an existing flow service but change a few options; you need not start from scratch. When you clone a flow service, the clone is exactly the same as the original. You can make changes whenever you want, just like any other flow service.
To clone a flow service:
Locate the flow service that you want to clone.
Click the ellipsis icon available on the flow service and click Clone from the displayed menu. The Clone Flow service dialog box appears.
Enter the Flow service Name and Project.
Click Clone. The flow service is created under the specified project. You can modify it according to your needs, as with any other flow service.
Note
Ensure that you create the account or reference data associated with the respective asset in the target project.
You can create a clone of a cloned flow service.
Editing a cloned flow service will not affect the original flow service from which it was cloned.
A cloned flow service containing the Messaging connector, placed within the same project, inherits all assets, configurations, and authorizations of the original flow service. However, cloning a flow service to a different project copies the items of the original flow service but does not retain its assets, configurations, and authorizations. You need to create the account or reference data associated with the respective asset in the destination project.
Whenever a flow service is cloned, any child flow services with the same name are overridden.
Caching is an optimization feature that can improve the performance of integrations. Service caching operates according to the input signature of the enabled service. When caching is enabled, the service result is stored using the input signature values as the key. If the service defines an output signature, only the fields included in the output signature are cached. However, if there is no defined output signature, the entire pipeline at that specific point is cached.
During subsequent executions of the service with the same input, the cached entry in the service result cache is retrieved. The cached output fields are then merged with the input pipeline and subsequently returned to the client rather than invoking the flow service again.
Caching can significantly improve response time of flow services. For example, flow services that retrieve information from busy data sources such as high-traffic commercial web servers could benefit from caching.
Note
Caching is not available by default for a tenant. To enable the Caching functionality, contact IBM support.
When are Cached Results Returned?
When you enable caching for a flow service, IBM webMethods Integration manages the cached results differently, depending on whether the flow service has input parameters. It is recommended that a cached flow service have input parameters.
When a cached flow service has input parameters, at run time IBM webMethods Integration scopes the pipeline down to only the declared input parameters of the flow service. IBM webMethods Integration compares the scoped-down inputs to the previously stored copy of the inputs. If a cached entry exists with input parameters that have the same values, IBM webMethods Integration returns the cached results from the previous invocation.
Note
If a cached entry with input parameter values that are identical to the current invocation does not exist in the cache, IBM webMethods Integration runs the service and stores the results in the cache.
When a cached flow service does not have input parameters (for example, date/time) and previous results do not exist in the cache, at run time IBM webMethods Integration runs the service and stores the results. When the flow service runs again, IBM webMethods Integration uses the cached copy. In other words, IBM webMethods Integration does not use the run-time pipeline for the current invocation; you will always receive cached results until the cache expires.
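The behavior described above, scoping the pipeline down to the declared inputs, keying the cache on those values, and merging cached outputs back into the pipeline, can be sketched as follows. This is a minimal Python illustration under stated assumptions (all names are hypothetical; it is not the webMethods implementation):

```python
import time

# Minimal sketch of result caching keyed on a service's declared
# input signature (hypothetical; not the webMethods implementation).
class ServiceCache:
    def __init__(self, expire_seconds):
        self.expire = expire_seconds
        self.entries = {}  # scoped-input key -> (timestamp, results)

    def invoke(self, service, pipeline, input_signature):
        # Scope the pipeline down to the declared inputs only.
        key = tuple(sorted((k, pipeline[k]) for k in input_signature))
        entry = self.entries.get(key)
        if entry and time.time() - entry[0] < self.expire:
            return {**pipeline, **entry[1]}  # merge cached outputs
        results = service(pipeline)
        self.entries[key] = (time.time(), results)
        return {**pipeline, **results}

calls = []
def lookup(pipeline):
    calls.append(pipeline["AcctNum"])  # track actual executions
    return {"AuthCode": "A-" + pipeline["AcctNum"]}

cache = ServiceCache(expire_seconds=60)
out1 = cache.invoke(lookup, {"AcctNum": "7", "extra": "x"}, ["AcctNum"])
out2 = cache.invoke(lookup, {"AcctNum": "7"}, ["AcctNum"])
```

Note that `out1` and `out2` share a key even though their pipelines differ, because only the declared input `AcctNum` is compared.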
Points to Note
If a cached integration input signature includes a Document Reference or Document Reference List variable and the referenced document type changes or is modified, you must reset the service cache. If you do not reset it, IBM webMethods Integration uses the old, cached input parameters at run time until such time as the cached results expire. You can reset the cache from the Integrations page.
If multiple IBM webMethods Integration servers are available, the cache is available in all servers, but the cache behavior is independent; caches are not shared between servers. For example, if you run an integration on Server 1, Server 1 runs the integration and caches the results. If you then run the same integration with the same input values on Server 2, Server 2 runs the integration and caches the results.
The default cache size for IBM webMethods Integration is 10K elements.
Cache settings are not stored in version history.
Cache settings are not saved when you do the following actions:
Clone
Import
Publish-Deploy
You have to manually enable the cache settings.
If a tenant is associated with multiple runtime servers, the cached flow service must run at least once in each server to cache the results in that server. For example, assume a tenant is configured with two runtime servers and a flow service is configured to be cached for an hour. If this flow service runs every minute, it is listed twice per hour under the Flow services executions table in Monitor, because it must run on both runtime servers at least once to cache the results.
Types of Flow services to Cache
While caching flow service results can improve performance, not all flow services should be cached. Never cache a flow service if the cached results might be incorrect for subsequent invocations, or if the flow service performs tasks that must run each time it is invoked. The following guidelines help you determine whether to cache the results of a flow service.
Flow services Suited for Caching
Flow services that require no state information. If a service does not depend on state information from an earlier transaction in the client’s session, you can cache its results.
Flow services that retrieve data from data sources that are updated infrequently. Flow services whose sources are updated on a daily, weekly, or monthly basis are good for caching.
Flow services that are invoked frequently with the same set of inputs. If a flow service is frequently invoked by clients using the same input values, it is beneficial to cache the results.
Flow services Not Suited for Caching
Flow services that perform required processing. Some flow services contain actions that must be processed each time a client invokes it. For example, if a flow service contains accounting logic to perform charge back and you cache the results, the server does not run the flow service, so the flow service does not perform charge back for the subsequent invocations of the service.
Flow services that require state information. Do not cache flow services that require state information from an earlier transaction, particularly information that identifies the client that invoked it. For example, you do not want to cache a flow service that produced a price list for office equipment if the prices in the list vary depending on the client who initially connects to the data source.
Flow services that retrieve information from frequently updated sources. If a flow service retrieves data from a data source that is updated frequently, the cached results can become outdated. Do not cache flow services that retrieve information from sources that are updated in real-time or near real-time, such as stock quote systems or transactional databases.
Flow services that are invoked with unique inputs. If a flow service manages many unique inputs and very few repeated requests, you gain little by caching the results. You might even degrade server performance by quickly consuming large amounts of memory.
Enabling Caching
Open the flow service that you want to cache.
Click the ellipsis icon available on the flow service steps page and click Settings from the displayed menu. The Settings dialog box appears.
Drag the Enable/disable cache switch to the right to enable the cache. The default value is Disabled.
In the Cache expire field, enter the number of minutes that the flow service results must be stored in cache. The default value is 15 minutes.
The expiration timer begins when the server initially caches the results. The server does not restart the expiration timer each time it retrieves the results from cache. The minimum cache expiration time is one minute.
Click Done. The flow service page appears.
Save and run the flow service. If the flow service runs successfully, the results are stored in cache. Otherwise, the results are not cached.
Note
You can clear cache for any flow service using the Clear cache option available on the respective flow service card.
A document type contains a set of fields used to define the structure and type of data in a document. You can use a document type to specify the input or output parameters for a flow service. Input and output parameters are the names and types of fields that a flow service requires as input and generates as output. These parameters are collectively referred to as a signature.
For example, a flow service takes two string values, an account number (AcctNum) and a dollar amount (OrderTotal), as inputs and produces an authorization code (AuthCode) as the output. If you have multiple flow services with identical input parameters but different output parameters, you can use a document type to define the input parameters rather than manually specifying individual input fields for each flow service.
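The AcctNum/OrderTotal example above can be expressed as a reusable signature. This is a hedged Python analogy (webMethods document types are defined graphically, not in code): the document type plays the role of a shared input structure that several services can reference.

```python
from dataclasses import dataclass

# Sketch: a document type acting as a reusable input signature
# (hypothetical Python analogy of the AcctNum/OrderTotal example).
@dataclass
class OrderInput:
    AcctNum: str      # account number
    OrderTotal: str   # dollar amount

def authorize(inp: OrderInput) -> dict:
    # Produces the output parameter AuthCode from the shared signature.
    return {"AuthCode": "AUTH-" + inp.AcctNum}

result = authorize(OrderInput("12345", "99.95"))
```

Any other service needing the same inputs can reuse `OrderInput` rather than redeclaring the fields, which is the benefit the section below describes.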
Benefits of creating a document type
Document types provide the following benefits:
Using a document type as the input or output signature for a flow service reduces the effort required to build a flow service.
Using a document type to build document or document list fields reduces the effort and time needed to declare input or output parameters or build other document fields.
Document types improve accuracy because there is less chance of introducing a typing error when entering field names.
Document types make future changes easier to implement because you make a change at one place (the document type) rather than everywhere the document type is used.
How do I create a document type?
You can create a document type in the following ways from Projects > select a project > Configurations > Flow service > Document Types > Add Document Type:
Build from scratch
Create an empty document type and define the structure of the document type yourself by inserting fields to define its contents and structure. For more information, see Creating Document Types from Scratch.
Build from XML Schema Definition
Create a document type from an XML Schema Definition. The structure and content of the document type matches that of the source file. For more information, see Creating Document Types from an XML Schema Definition.
You can also create document types for already created REST connectors from Projects > select a project > Connectors > REST > Document Types option or from the Request Body and Response Body panels while creating a REST connector.
Document types created for a REST connector do not appear in the Projects > select a project > Configurations > Flow service > Document Types page, but appear in the Document Types panel for the selected REST connector.
Note
Creating document types with identical names is not allowed.
You can copy a document type from the Document Types page or from the Document Type page for a REST connector. Currently you cannot copy a document type across projects. When you copy a flow service or a Workflow across projects, the document type associated with it is also copied.
If a document type is used in any other document type or a flow service, you cannot delete the document type and a warning message appears as follows:
In addition to direct dependencies, if other document types were generated from the same XSD source as the selected document type, and any of those generated document types are used in any flow service or document type, you are also not allowed to delete the selected document type.
Build from SAP®
Create a document type from an SAP RFC structure or SAP IDoc. The resulting document type matches the structure and content of the source RFC structure or IDoc. For more information, see Creating a new Document Type from SAP®.
Creating Document Types from Scratch
No subtopics in this section
Creating a document type from scratch allows you to create an empty document type, and specify the structure of the document type by inserting fields to define its contents and structure.
To add or edit a document type from scratch
1. From the IBM webMethods Integration navigation bar, click Projects. Select a project and click Configurations > Flow service > Document Types. From the Document Types page, you can add, edit, delete, or copy a document type. To edit an existing document type, on the Document Types page, click the Edit icon for the document type.
2. To create a new document type from scratch, from the Document Types page, click Add Document Type > Build from scratch.
3. Provide a name and description of your document type. Required fields are marked with an asterisk on the page.
4. Click Load XML to generate a document type from an XML structure, or click Load JSON to generate a document type from a JSON structure.
5. Click the icon to add a new field and update the field properties.
Provide the Name and Type of the field to define the structure and content of the document type. The type can be a String, Document, Document Reference, Object, Boolean, Double, Float, Integer, Long, or Short. If you select the Type as Document Reference, select a Document Reference. Types are used to declare the expected content and structure of the signatures, document contents, and pipeline contents.
If you select the Type as String, in the Display Type field, select Text Field if you want the input entered in a text field. Select Password if you want the input entered as a password, with asterisks reflected instead of characters. Select Large Editor if you want the input entered in a large text area instead of a text field. This is useful if you expect a large amount of text as input for the field, or you need to have new line characters in the input. In the Pick List field, define the values that will appear as choices when IBM webMethods Integration prompts for input at run time.
In addition to specifying the name and type of the field, and whether the type is an Array, you can set properties that specify an XML Namespace and indicate whether the field is required at runtime by selecting the Required field option. Select the Content Type you can apply to String, String list, or String table variables. See Content Types and Variable Constraints for information.
You can add a field from the fields panel by clicking the duplicate field icon. You can also copy a field from the fields panel and depending on the context, you can either paste the field or the field path. For example, if you copy a field and paste the field in the Set Value window in a flow service, the field path will be pasted. If you copy an array item, the path that is pasted includes the item index. You cannot modify or paste the child fields of a Document Reference. When defining a document type, it is recommended to avoid adding identically named fields to the document. In particular, do not add identically named fields that are of the same data type.
You can assign an XML namespace and prefix to a field by specifying a URI for the XML namespace property and by using the prefix:fieldName format for the field name. For example, suppose a field is named eg:account and the XML namespace property is set to http://www.example.com. The prefix is eg, the localname is account, and the namespace name is http://www.example.com.
Keep the following points in mind when assigning XML namespaces and prefixes to a field:
The field name must be in the format: prefix:fieldName.
You must specify a URI in the XML namespace property.
Do not use the same prefix for different namespaces in the same document type, input signature, or output signature.
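The prefix:fieldName convention above decomposes as in the eg:account example. This small sketch (a hypothetical helper, not a webMethods API) shows how the field name and the XML namespace property combine:

```python
# Sketch of how a namespaced field name decomposes into prefix,
# localname, and namespace name (hypothetical helper function).
def split_field(field_name, xml_namespace):
    prefix, _, localname = field_name.partition(":")
    return {"prefix": prefix,
            "localname": localname,
            "namespace": xml_namespace}

# The example from the text: field eg:account with namespace property
# http://www.example.com.
field = split_field("eg:account", "http://www.example.com")
```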
Click Save after you have entered the details and constraints for each field. The new document type appears in the Document Types page.
Note
When you edit a document type, any change is automatically propagated to all flow services that use or reference the document type.
Creating Document Types from an XML Schema Definition
No subtopics in this section
Let us see how to create a document type from an XML Schema Definition with the help of an example. We will create a document type from an XML Schema Definition and then use the document type to create Accounts in Salesforce CRM.
Check if you have the Developer and Admin roles assigned from the Settings > Roles page.
Obtain the credentials to log in to the Salesforce CRM back end account.
Create the Salesforce Account (SalesforceCRM_1) in IBM webMethods Integration. You can also create this account inline at a later step.
1. Log in to your tenant and from the IBM webMethods Integration navigation bar, click Projects. Select a project and then select Configurations > Flow service > Document Types. From the Document Types page, you can add, edit, delete, or copy a document type. To edit an existing document type, on the Document Types page, click the Edit icon for the document type.
2. To create a new document type from an XML Schema Definition, from the Document Types page, click Add Document Type > Build from XML Schema Definition.
3. Provide a name and description of your document type, for example, customerOrder. Required fields are marked with an asterisk on the page.
4. On the Source selection panel, under XML schema source, do one of the following to specify the source file for the document type:
To use an XML Schema Definition that resides on the Internet as the source, select URL. Then, type the URL of the resource. The URL you specify must begin with http: or https:.
To use an XML Schema Definition that resides in your local file system as the source, select File. Then click Browse and select the file. You can add additional imported or included XML schema files.
Note
The maximum file upload size is 5 MB which includes the primary source file and additional files, if any.
5. Click Next and on the Processing options panel, under Content model compliances, select a content model compliance to indicate how strictly IBM webMethods Integration represents content models from the XML Schema Definition in the resulting document type. Let us select None.
6. You can specify whether IBM webMethods Integration enforces strict, lax, or no content model compliance when generating the document type. Content models provide a formal description of the structure and allowed content for a complex type. The type of compliance that you specify can affect whether IBM webMethods Integration generates a document type from a particular XML Schema Definition successfully. Currently, IBM webMethods Integration does not support repeating model groups, nested model groups, or the any attribute. If you select strict compliance, IBM webMethods Integration does not generate a document type from any XML Schema Definition that contains those items.
Strict: Generate the document type only if IBM webMethods Integration can represent the content models defined in the XML Schema Definition correctly. Document type generation fails if IBM webMethods Integration cannot accurately represent the content models in the source XML Schema Definition.
Lax: When possible, generate a document type that correctly represents the content models for the complex types defined in the XML Schema Definition. If IBM webMethods Integration cannot correctly represent a content model in the resulting document type, it generates the document type using a compliance mode of None. When you select lax compliance, IBM webMethods Integration generates the document type even if the content models in the XML Schema Definition cannot be represented correctly.
None: Generate a document type that does not necessarily represent or maintain the content models in the source XML Schema Definition.
7. If you select strict or lax compliance, do one of the following to specify whether generated document types contain multiple *body fields to preserve the location of text in instance documents:
Select the Preserve text position check box to indicate that the document type generated preserves the locations for text in instance documents. The resulting document type contains a *body field after each field and includes a leading *body field. In instance documents for this document type, IBM webMethods Integration places text that appears after a field in the *body.
Clear the Preserve text position check box to indicate that the document type generated does not preserve the locations for text in instance documents. The resulting document type contains a single *body field at the top of the document type. In instance documents for this document type, text data around fields is all placed in the same *body field.
8. If you want IBM webMethods Integration to use the Xerces parser to validate the XML Schema Definition, select the Validate schema using Xerces check box.
Note
IBM webMethods Integration automatically uses an internal schema parser to validate the XML Schema Definition. However, the Xerces parser provides more strict validation than the internal schema parser. As a result, some schemas that the internal schema parser considers to be valid might be considered invalid by the Xerces parser.
9. Click Next, and on the Select root nodes panel, select the elements that you want to use as the root elements for the document type. The resulting document type contains all of the selected root elements as top-level fields.
10. Click Save.
IBM webMethods Integration creates the document type.
Note
You can modify the document type by clicking the Edit icon. However, editing is not recommended because an edited document type asset cannot be reverted to its original state. For example, if you change a field name such as Area to Province, you cannot reset it to Area again.
Also, other document types may be created automatically from the same XSD source file. This is because if an element in the XML Schema Definition is of a complex type, IBM webMethods Integration automatically creates a document type that defines the structure of that complex type, along with the main document type. For example, in the following EmployeeDetails.xsd file, the field address is a complex type containing Doorno, street, city, and pincode as its fields.
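The schema file itself is not reproduced in this text. A minimal sketch consistent with the description might look like the following; the field name and the type name AddressType are assumptions (AddressType inferred from the generated docTypeRef_AddressType document type), not taken from the product documentation:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Root element; generates the main EmployeeDetails document type -->
  <xs:element name="EmployeeDetails">
    <xs:complexType>
      <xs:sequence>
        <!-- "name" is an illustrative field, not listed in the text -->
        <xs:element name="name" type="xs:string"/>
        <xs:element name="address" type="AddressType"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <!-- Named complex type; generates docTypeRef_AddressType -->
  <xs:complexType name="AddressType">
    <xs:sequence>
      <xs:element name="Doorno" type="xs:string"/>
      <xs:element name="street" type="xs:string"/>
      <xs:element name="city" type="xs:string"/>
      <xs:element name="pincode" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
```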
Two document types are then created when we create the document type from this XSD source: the main document type, EmployeeDetails, and docTypeRef_AddressType, which represents the structure of the complex type.
Note
If you have selected strict compliance and IBM webMethods Integration cannot represent the content model in the complex type accurately, IBM webMethods Integration does not generate any document type.
If you have selected lax compliance and indicated that IBM webMethods Integration should preserve text locations for content types that allow mixed content (you selected the Preserve text position check box), IBM webMethods Integration adds *body fields in the document type only if the complex type allows mixed content and IBM webMethods Integration can correctly represent the content model declared in the complex type definition. If IBM webMethods Integration cannot represent the content model in a document type, IBM webMethods Integration adds a single *body field to the document type.
If the XML Schema Definition contains an element reference to an element declaration whose type is a named complex type definition (as opposed to an anonymous complex type definition), IBM webMethods Integration creates a document type for the named complex type definition only if it is referred to multiple times in the schema.
IBM webMethods Integration uses the prefixes declared in the XML Schema or the ones you specified as part of the field names. Field names have the format prefix:elementName or prefix:@attributeName.
If the XML Schema does not use prefixes, IBM webMethods Integration creates prefixes for each unique namespace and uses those prefixes in the field names. IBM webMethods Integration uses “ns” as the prefix.
If the XML Schema Definition contains a user-specified namespace prefix and a default namespace declaration that both point to the same namespace URI, IBM webMethods Integration gives preference to the user-specified namespace prefix over the default namespace.
11. Now let us use the document type customerOrder to create Accounts in Salesforce CRM. Go to the Flow services page, click the + icon to create a new flow service, and then provide the name CreateAccountsDocType and a description for the flow service.
Click the I/O icon and define the input and output fields. Select the document reference and save it.
12. Select Salesforce CRM and link the SalesforceCRM_1 account.
13. Click the mapping icon and map the input and output fields.
14. Close the mapping panel, click the Run icon, and enter the input values.
15. Click Run and inspect the results.
Content Types
No subtopics in this section
The following table identifies the content types you can apply to String or String list variables. Each of these content types corresponds to a built-in simple type defined in the specification XML Schema Part 2: Datatypes.
Content Types
Description
anyURI
A Uniform Resource Identifier Reference. The value of anyURI may be absolute or relative. Constraining Facets enumeration, length, maxLength, minLength, pattern Note: The anyURI type indicates that the variable value plays the role of a URI and is defined like a URI. URI references are not validated because it is impractical for applications to check the validity of a URI reference.
boolean
True or false. Constraining Facets pattern Example true, 1, false, 0
byte
A whole number whose value is greater than or equal to –128 but less than or equal to 127. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example -128, -26, 0, 15, 125
date
A calendar date from the Gregorian calendar. Values need to match the following pattern: CCYY-MM-DD Where CC represents the century, YY the year, MM the month, DD the day. The pattern can include a Z at the end to indicate Coordinated Universal Time or to indicate the difference between the time zone and Coordinated Universal Time. Constraining Facets enumeration, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern Example 1997-08-09 (August 9, 1997)
dateTime
A specific instant of time (a date and time of day). Values need to match the following pattern: CCYY-MM-DDThh:mm:ss.sss Where CC represents the century, YY the year, MM the month, DD the day, T the date/time separator, hh the hour, mm the minutes, and ss the seconds. The pattern can include a Z at the end to indicate Coordinated Universal Time or to indicate the difference between the time zone and Coordinated Universal Time. Constraining Facets enumeration, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern Example 2000-06-29T17:30:00-05:00 represents 5:30 pm Eastern Standard time on June 29, 2000. (Eastern Standard Time is 5 hours behind Coordinated Universal Time.)
decimal
A number with an optional decimal point. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example 8.01, 290, -47.24
double
Double-precision 64-bit floating point type. Constraining Facets enumeration, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern Example 6.02E23, 3.14, -26, 1.25e-2
duration
A length of time. Values need to match the following pattern: PnYnMnDTnHnMnS Where nY represents the number of years, nM represents the number of months, nD is the number of days, T separates the date and time, nH the number of hours, nM the number of minutes and nS the number of seconds. Precede the duration with a minus (-) sign to indicate a negative duration. Constraining Facets enumeration, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern Example P2Y10M20DT5H50M represents a duration of 2 years, 10 months, 20 days, 5 hours, and 50 minutes
ENTITIES
Sequence of whitespace-separated ENTITY values declared in the DTD. Represents the ENTITIES attribute type from the XML 1.0 Recommendation. Constraining Facets enumeration, length, maxLength, minLength
ENTITY
Name associated with an unparsed entity of the DTD. Represents the ENTITY attribute type from the XML 1.0 Recommendation. Constraining Facets enumeration, length, maxLength, minLength, pattern, whiteSpace
float
A number with a fractional part. Constraining Facets enumeration, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern Example 8.01, 25, 6.02E23, -5.5
gDay
A specific day that recurs every month. Values must match the following pattern: ---DD Where DD represents the day. The pattern can include a Z at the end to indicate Coordinated Universal Time or to indicate the difference between the time zone and Coordinated Universal Time. Constraining Facets enumeration, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern Example ---24 indicates the 24th day of each month
gMonth
A Gregorian month that occurs every year. Values must match the following pattern: --MM Where MM represents the month. The pattern can include a Z at the end to indicate Coordinated Universal Time or to indicate the difference between the time zone and Coordinated Universal Time. Constraining Facets enumeration, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern Example --11 represents November
gMonthDay
A specific day and month that recurs every year in the Gregorian calendar. Values must match the following pattern: --MM-DD Where MM represents the month and DD represents the day. The pattern can include a Z at the end to indicate Coordinated Universal Time or to indicate the difference between the time zone and Coordinated Universal Time. Constraining Facets enumeration, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern Example --09-24 represents September 24th
gYear
A specific year in the Gregorian calendar. Values must match the following pattern: CCYY Where CC represents the century, and YY the year. The pattern can include a Z at the end to indicate Coordinated Universal Time or to indicate the difference between the time zone and Coordinated Universal Time. Constraining Facets enumeration, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern Example 2001 indicates the year 2001.
gYearMonth
A specific month and year in the Gregorian calendar. Values must match the following pattern: CCYY-MM Where CC represents the century, YY the year, and MM the month. The pattern can include a Z at the end to indicate Coordinated Universal Time or to indicate the difference between the time zone and Coordinated Universal Time. Constraining Facets enumeration, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern Example 2001-04 indicates April 2001.
ID
A name that uniquely identifies an individual element in an instance document. The value for ID needs to be a valid XML name. The ID datatype represents the ID attribute type from the XML 1.0 Recommendation. Constraining Facets enumeration, length, maxLength, minLength, pattern, whiteSpace
IDREF
A reference to an element with a unique ID. The value of IDREF is the same as the ID value. The IDREF datatype represents the IDREF attribute type from the XML 1.0 Recommendation. Constraining Facets enumeration, length, maxLength, minLength, pattern, whiteSpace
IDREFS
Sequence of white space separated IDREFs used in an XML document. The IDREFS datatype represents the IDREFS attribute type from the XML 1.0 Recommendation. Constraining Facets enumeration, length, maxLength, minLength
int
A whole number with a value greater than or equal to -2147483648 but less than or equal to 2147483647. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example -21474836, -55500, 0, 33123, 4271974
integer
A positive or negative whole number. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example -2500, -5, 0, 15, 365
language
Language identifiers used to indicate the language in which the content is written. Natural language identifiers are defined in IETF RFC 1766. Constraining Facets enumeration, length, maxLength, minLength, pattern, whiteSpace
long
A whole number with a value greater than or equal to -9223372036854775808 but less than or equal to 9223372036854775807. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example -55600, -23, 0, 256, 3211569432
Name
XML names that match the Name production of XML 1.0 (Second Edition). Constraining Facets enumeration, length, maxLength, minLength, pattern, whiteSpace
NCName
Non-colonized XML names. Set of all strings that match the NCName production of Namespaces in XML. Constraining Facets enumeration, length, maxLength, minLength, pattern, whiteSpace
negativeInteger
An integer with a value less than or equal to –1. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example -255556, -354, -3, -1
NMTOKEN
Any mixture of name characters. Represents the NMTOKEN attribute type from the XML 1.0 Recommendation. Constraining Facets enumeration, length, maxLength, minLength, pattern, whiteSpace
NMTOKENS
Sequences of NMTOKEN values. Represents the NMTOKENS attribute type from the XML 1.0 Recommendation. Constraining Facets enumeration, length, maxLength, minLength
nonNegativeInteger
An integer with a value greater than or equal to 0. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example 0, 15, 32123
nonPositiveInteger
An integer with a value less than or equal to 0. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits, whiteSpace Example -256453, -357, -1, 0
normalizedString
Represents white space normalized strings. Set of strings (sequence of UCS characters) that do not contain the carriage return (#xD), line feed (#xA), or tab (#x9) characters. Constraining Facets enumeration, length, maxLength, minLength, pattern, whiteSpace Example MAB-0907
positiveInteger
An integer with a value greater than or equal to 1. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example 1, 1500, 23000
short
A whole number with a value greater than or equal to -32768 but less than or equal to 32767. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example -32000, -543, 0, 456, 3265
string
Character strings in XML. A sequence of UCS characters (ISO 10646 and Unicode). By default, all white space is preserved for variables with a string content constraint. Constraining Facets enumeration, length, maxLength, minLength, pattern, whiteSpace Example MAB-0907
time
An instant of time that occurs every day. Values must match the following pattern: hh:mm:ss.sss Where hh indicates the hour, mm the minutes, and ss the seconds. The pattern can include a Z at the end to indicate Coordinated Universal Time or to indicate the difference between the time zone and Coordinated Universal Time. Constraining Facets enumeration, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern Example 18:10:00-05:00 (6:10 pm, Eastern Standard Time) Eastern Standard Time is 5 hours behind Coordinated Universal Time.
token
Represents tokenized strings. Set of strings that do not contain the carriage return (#xD), line feed (#xA), or tab (#x9) characters, leading or trailing spaces (#x20), or sequences of two or more spaces. Constraining Facets enumeration, length, maxLength, minLength, pattern, whiteSpace
unsignedByte
A whole number greater than or equal to 0, but less than or equal to 255. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example 0, 112, 200
unsignedInt
A whole number greater than or equal to 0, but less than or equal to 4294967295. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example 0, 22335, 123223333
unsignedLong
A whole number greater than or equal to 0, but less than or equal to 18446744073709551615. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example 0, 2001, 3363124
unsignedShort
A whole number greater than or equal to 0, but less than or equal to 65535. Constraining Facets enumeration, fractionDigits, maxExclusive, maxInclusive, minExclusive, minInclusive, pattern, totalDigits Example 0, 1000, 65000
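The lexical patterns in the table can be checked programmatically. The sketch below uses Python regular expressions that are simplified approximations of the full XML Schema lexical rules (the product does not validate values this way; this is only to illustrate the patterns):

```python
import re

# Simplified lexical patterns sketched from the table above; the full
# XML Schema grammar also allows optional signs and more timezone forms.
patterns = {
    "date":      r"\d{4}-\d{2}-\d{2}(Z|[+-]\d{2}:\d{2})?",
    "gMonthDay": r"--\d{2}-\d{2}(Z|[+-]\d{2}:\d{2})?",
    "time":      r"\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})?",
}

def matches(content_type: str, value: str) -> bool:
    """Check a value against the simplified lexical pattern."""
    return re.fullmatch(patterns[content_type], value) is not None

print(matches("date", "1997-08-09"))      # True
print(matches("gMonthDay", "--09-24"))    # True
print(matches("time", "18:10:00-05:00"))  # True
```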
Customizing a String Content Type
No subtopics in this section
Instead of applying an existing content type or simple type to a String or String list, you can customize an existing type and apply the new, modified type to a variable. You can customize a content type or simple type by changing the constraining facets applied to the type.
When you customize a type, you actually create a new content type. The constraining facets you can specify depend on the content type. Note that content types and constraining facets correspond to datatypes and constraining facets defined in XML Schema. For more information about constraining facets for a datatype, see the specification XML Schema Part 2: Datatypes (http://www.w3.org/TR/xmlschema-2/).
To customize a content type
1. Select the variable to which you want to apply a customized content type and click the Edit icon.
2. Click the Content type drop-down arrow, and in the Content type list, select the content type you want to customize. The constraining facet fields below the Content type list become available for data entry.
3. In the fields below the Content type field, specify the constraining facet values you want to apply to the content type and click Save.
Note
The constraining facets displayed below the Content type list depend on the primitive type from which the simple type is derived. Primitive types are the basic data types from which all other data types are derived. For example, if the primitive type is string, the constraining facets Enumeration, Length, Minimum Length, Maximum Length, Whitespace, and Pattern are displayed. For more information about primitive types, see XML Schema Part 2: Datatypes at http://www.w3.org/TR/xmlschema-2/.
Variable Constraints
No subtopics in this section
You apply content constraints to variables in the document types that you want to use as blueprints in data validation. Content constraints describe the data a variable can contain. At validation time, if the variable value does not conform to the content constraints applied to the variable, the validation engine considers the value to be invalid.
When applying content constraints to variables, do the following:
Select a content type: A content type specifies the type of data for the variable value, such as string, integer, boolean, or date. A content type corresponds to a simple type definition in a schema.
Set constraining facets: Constraining facets restrict the content type, which, in turn, restricts the value of the variable to which the content type is applied. Each content type has a set of constraining facets. For example, you can set a length restriction for a string content type, or a maximum value restriction for an integer content type.
For example, for a String variable named itemQuantity, you could specify a content type that requires the variable value to be an integer, and then set constraining facets that limit the content of itemQuantity to a value between 1 and 100.
The content types and constraining facets correspond to the built-in data types and constraining facets in XML Schema. The World Wide Web Consortium (W3C) defines the built-in data types and constraining facets in the specification XML Schema Part 2: Datatypes (http://www.w3.org/TR/xmlschema-2/).
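As a sketch of how such constraints look in XML Schema terms, the itemQuantity example above corresponds to an integer restricted by the minInclusive and maxInclusive facets (the type name itemQuantityType is illustrative):

```xml
<xs:simpleType name="itemQuantityType">
  <xs:restriction base="xs:integer">
    <xs:minInclusive value="1"/>
    <xs:maxInclusive value="100"/>
  </xs:restriction>
</xs:simpleType>
```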
Applying Constraints to a Variable
No subtopics in this section
You can apply content constraints to variables in the document types that you want to use as blueprints in data validation.
To apply constraints to a variable
1. On the Document Types page, click the Edit icon for a document type. Select a field and click the edit field icon to view the field properties panel.
You can apply constraints to the fields of a document type, and also to the fields declared on the Input/Output page after you click the Define I/O option in a flow service. If the selected variable is a String or String list and you want to specify content constraints for it, do the following:
If you want to use a content type that corresponds to a built-in simple type in XML schema, in the Content type list, select the type for the variable contents. To apply the selected type to the variable, click Save.
2. Repeat this procedure for each variable to which you want to apply constraints in the document type, and click Save.
Lock and Unlock Flow Services
No subtopics in this section
IBM webMethods Integration helps you manage a flow service during the development life cycle by automatically locking it. When you edit a flow service, it is locked for you, which prevents multiple users from editing the flow service at the same time. After you edit a flow service and save your changes, exit the edit mode to unlock the flow service and make it available to other users.
If you are editing a flow service and another user opens it, that user sees the following message:
A flow service cannot be edited in view mode.
After you edit the flow service and save the changes, a corresponding message is shown to the other user.
If you are editing a flow service and another user tries to delete it, that user also sees a message.
If a user session is left idle for a long time, or a flow service lock persists because of other issues, only a user with the Admin role can unlock the flow service and make it available for editing.
To unlock a flow service, click the ellipsis icon available on a flow service and select Unlock.
If the Admin unlocks the flow service, all editing permissions are automatically revoked for the user who had locked it, and any unsaved changes are lost. A message appears for that user after the Admin unlocks the flow service.
Further, all users who are currently editing or viewing the flow service, see the following message:
Version Management of Flow services
No subtopics in this section
IBM webMethods Integration allows you to view the version history of a flow service. Click the ellipsis icon on the flow service toolbar panel and select the Version history option to view the version history.
When you save a flow service, a new version is added to the Version History with the default commit message. You can also provide a custom commit message by clicking the drop-down arrow beside the Save option and selecting Save with message. You can click on any version on the Version history panel and view the corresponding flow service version.
To restore an earlier version, select that version of the flow service and click the Restore icon.
If you have reverted to an earlier version and there is a scheduled execution for the flow service, the reverted version runs as per the defined schedule.
Note
If a flow service references any other flow service, then the pipeline mapping of the referenced flow service is also restored to that particular version. But if the pipeline mapping of the referenced flow service has been modified in a later version, the modification might break the mappings and the flow service execution will not be successful.
If a flow service references any document types, reference data, custom operations, REST connectors, and SOAP connectors, and if those references have been modified, then those references will not be restored.
If you delete a flow service and then create another flow service with the same name, the version history of the deleted flow service will be available.
You can debug a flow service and inspect the data flow during the debugging session. The flow service is automatically saved when you start the debug session.
You can do the following in debug mode:
Start a flow service in debug mode, specify the input values, and inspect the results.
Examine and edit the pipeline data before and after executing the individual steps.
Monitor the execution path, execute the steps one at a time, or specify breakpoints where you want to halt the execution.
To start a debug session, on the Flow services page, select a flow service, insert the breakpoints, and then click the Debug icon. The debug session halts at the first step even if you have not inserted a breakpoint.
If you have defined input fields for the flow service, the Input Values dialog box appears where you can specify the input values. If no input fields are defined, the Debug panel appears directly.
Note
If you have not defined any input fields, or if there are no pipeline output variables, a message appears while debugging stating that the pipeline is empty. Further, if you disable a step, that step is not considered while debugging. The Flow service Execution page (Monitor > Flow service Execution) and the Execution History page do not display any execution logs for a debug session.
If a flow service has a child flow service, IBM webMethods Integration will not step into the child flow service during a debug session.
The following table describes the options available while debugging the flow service:
Option
Action/Description
Insert Breakpoints
Insert a breakpoint in a flow service by clicking the step number. To remove a breakpoint, click the step number where the breakpoint is inserted. Breakpoints are recognized only when you run a flow service in a debug session. A breakpoint is a point where you want processing to pause when you debug the flow service. Breakpoints can help you isolate a section of code or examine data values at a particular point in the execution path. For example, you might set a pair of breakpoints before and after a particular step so that you can examine the pipeline before and after that step executes. When you run a flow service that contains a breakpoint, the flow service is executed up to, but not including, the designated breakpoint.
Disable Breakpoints
Ignores all breakpoints inserted in the flow service steps.
Enable Breakpoints
Enables all breakpoints inserted in the flow service steps.
Resume
Resumes the debug session but pauses at the next breakpoint.
Stop
Terminates the debug session. A debug session might also stop by itself for the following reasons: the flow service that you are debugging executes to completion (error or success); you select Step over for the last step in the flow service; or you exit the flow service.
Restart
Restarts the debug session from the first step.
Step over
Executes the flow service on a step-by-step basis. For conditional controls, the conditions are evaluated; if they evaluate to true, the steps inside the control are executed on the next step over.
Clear all breakpoints
Removes all breakpoints inserted in the flow service.
Close
Closes the Debug panel and returns to the flow service.
Modifying the current pipeline data while debugging
While debugging, you can modify the contents of the pipeline by clicking the field values. The changed values are not applied to the current step, but to successive steps when you do a Step over or Resume.
While modifying the pipeline, keep the following points in mind:
You can modify the pipeline data only during an active debug session.
When you modify values in the pipeline, the changes apply only to the current debug session. The flow service is not permanently changed.
You can modify existing variables but cannot add new variables to the pipeline.
Note
While running or debugging flow services and testing operations, if the input has any duplicate keys, or if the service returns an output with duplicate keys, you can view those keys.
IBM webMethods Integration allows you to restart or resume failed flow service executions. If an execution fails, you can resume it from the point where it failed. Resuming an execution does not run the previously successful operations; it runs only the failed operations and the operations that have not yet run. When a flow service execution is restarted, the execution starts again from the beginning.
Note
The Restart and Resume options are available only if you have the required capability for restarting and resuming flow services. Contact IBM support to enable the Restart and Resume capability as these options are not available by default in the user interface.
The Restart and Resume options are not available for flow services created through Swagger (REST APIs) and WSDL (SOAP APIs).
If a flow service is invoked through a Messaging subscriber then the Restart and Resume options are not applicable.
The following table provides information on when you can restart or resume a flow service execution:
Execution Result Status    Restartable    Resumable
Successful Executions      Yes            No
Failed Executions          Yes            Yes
Completed with Errors      Yes            No
Running Executions         No             No
Note
You can restart one or multiple flow services as per your requirements.
If a flow service has only the restart capability enabled, resuming it from the flow service Executions listing page will also trigger a restart of the execution instead of resuming from the point it failed in the previous run.
The flow service executions list displays two buttons in the Actions column for the relevant execution: Restart and Resume.
How to restart a single Flow service
From the IBM webMethods Integration navigation bar, click Projects. Select a project and then select Flow services.
Click the ellipsis icon available on the flow service and select Overview.
On the Overview page, select the Enable Flow service to be restarted option. If the flow service is updated, you cannot restart its executions that have occurred before the update.
Further, flow services using Operations and other flow services, which have fields of type Object in their signature, may not execute properly when restarted or resumed.
Select the flow service and run it.
Provide the input values, if needed, and then click Run.
Go to the Monitor > Execution Results > Flow service Execution page. Click the execution log, and on the execution details page, click the Restart option. Alternatively, you can restart the execution from the Monitor page by clicking the Restart button displayed in the Actions column.
Note
The Flow Service Executions listing screen on the Monitor page displays two buttons named Restart and Resume in the Actions column. If a flow service has only the restart capability enabled, resuming it from the Flow Service Executions listing page will also trigger a restart of the execution instead of resuming from the point it failed in the previous run.
Provide the input values, and then click Run to restart the execution from the beginning.
How to resume a single Flow Service
From the IBM webMethods Integration navigation bar, click Projects. Select a project and then select Flow services.
Click the ellipsis icon available on the flow service and select Overview.
On the Overview page, select the Enable Flow service to be resumed option. Enabling this option increases the execution time of a flow service. If the flow service is updated, you cannot restart or resume its executions that have occurred before the update.
Further, flow services using Operations and other flow services, which have fields of type Object in their signature, may not execute properly when resumed.
Select the flow service and run it.
Provide the input values, if needed, and then click Run.
Go to the Monitor > Execution Results > Flow service Execution page. Click the execution log, and on the execution details page, click the Resume option. Alternatively, you can resume the execution from the Monitor page by clicking the Resume button displayed in the Actions column.
Note
The Flow Service Executions listing screen on the Monitor page displays two buttons named Restart and Resume in the Actions column. If a flow service has only the restart capability enabled, resuming it from the Flow Service Executions listing page will also trigger a restart of the execution instead of resuming from the point it failed in the previous run.
To restart the execution from the beginning, provide the input values and click Run. To resume instead, click the Resume option, edit the input data if needed, and resume the execution from the point where it failed in the previous run.
How to restart multiple flow services
You can select multiple successful or failed flow service executions and restart them with a single click.
From the IBM webMethods Integration navigation bar, click Projects. Select a project and then select Flow services.
Click the ellipsis icon available on the flow service and select Overview.
On the Overview page, select the Enable Flow service to be restarted option. Enabling this option increases the execution time of a flow service. If the flow service is updated, you cannot restart or resume its executions that have occurred before the update. Further, flow services using Operations and other flow services, which have fields of type Object in their signature, may not execute properly when restarted or resumed.
In the Executions table, select the check boxes beside the names of the successful or failed flow service executions, and then click the Restart button.
Note
While restarting multiple flow service executions, if one of the selected flow services fails, you must make the necessary changes to the corresponding flow service from the canvas. Editing the input data is not supported when you restart multiple flow services.
You can select a maximum of 150 flow service executions to restart at a time.
IBM webMethods Integration allows you to trigger the execution of a flow service from an external system. This option provides you with another way to trigger flow service executions from a software application, for example, a REST client, apart from manual and scheduled executions from the user interface.
In the external program, provide the HTTP URL, method, required JSON body, and necessary header parameters, including the required security credentials (user name and password), to invoke the flow service. After the flow service is executed, the response contains the pipeline data.
How it Works
Log in to your tenant and from the IBM webMethods Integration navigation bar, click Projects. Select a project and then click Flow services.
Click the ellipsis icon available on a flow service and select Overview.
The Overview page appears.
On the Overview page, select the Enable Flow service to be invoked over HTTP option. Once the flow service is enabled to be invoked over HTTP, the HTTP request URL appears.
Note
When you enable the Invoke Flow service over HTTP option for a flow service, and clone, export, or import it, the option stays enabled in the cloned, exported, or imported flow service.
Click the Advanced details section to view the HTTP Method, sample JSON input, and the parameters that are required to invoke the flow service from an external system.
Synchronous URL
You can execute flow services synchronously using the run URL, where:
run - The flow service executes and the response contains the pipeline data.
sub-domain - A domain that is part of the primary domain.
stagename - The name of the active stage.
integrationname - The name of the flow service.
Note: You must provide your user name and password to execute the flow service from the external program; otherwise, you may encounter the 401 - Unauthorized User Error.
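As a sketch of what an external client needs to assemble, the helper below builds a synchronous run URL and Basic Auth headers in Python. The exact URL path is an assumption modeled on the result-URL pattern shown later in this section; always copy the actual Synchronous URL from the flow service Overview page.

```python
import base64

def build_run_request(sub_domain, domain, stagename, integrationname,
                      username, password):
    """Assemble the synchronous 'run' URL and headers for invoking a
    flow service over HTTP. The path segments are illustrative; copy
    the real URL from the flow service Overview page."""
    url = (f"https://{sub_domain}.{domain}"
           f"/integration/rest/external/integration/run"
           f"/{stagename}/{integrationname}")
    # Basic authentication; omitting it typically results in a
    # 401 - Unauthorized User Error.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Authorization": f"Basic {token}",
    }
    return url, headers
```

Sending the request with any HTTP client (Postman, curl, urllib) using these headers and the JSON body from the Advanced details page returns the pipeline data on success.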
Asynchronous URL
You can execute flow services asynchronously using the submit URL:
submit - The flow service is submitted for execution, and the response contains a status indicating whether the submission succeeded. When a request is submitted for execution using the submit option, the response also contains a reference to the execution result identifier, so that a new HTTP call can be made later to get the execution results.
Application Status Codes for submit
0 - SUCCESS: Successfully submitted the flow service for execution.
-1 - ERROR: Problem while submitting the flow service for execution.
To get the execution results, construct the URL of the new HTTP call from the URI field available in the Response section.
To construct the URL of the new HTTP call, add the response URI obtained from resultReference in the Response section to: https://sub-domain.domain.com
Response URI format: https://sub-domain.domain/integration/rest/external/integration/execution/result?resultReference=765733-6a21-4b02-864f-e958f698373
HTTP Status Codes
200 - OK
500 - Internal Server Error
401 - Unauthorized User Error
Note: You must provide your user name and password to execute the flow service from the external program; otherwise, you may encounter the 401 - Unauthorized User Error. Further, if the query response HTTP status code is 404 - Not Found, it means that either the flow service has not yet run or the resultReference is not correct.
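The asynchronous pattern above can be sketched with two small helpers: one builds the follow-up result URL from the returned resultReference, and one maps the application status code of a submit response to a label. The response field name "status" is an assumption; inspect the actual submit response of your tenant.

```python
import json

def result_url(base_url, result_reference):
    """Build the URL of the new HTTP call that fetches execution
    results, following the response URI format shown above."""
    return (f"{base_url}/integration/rest/external/integration"
            f"/execution/result?resultReference={result_reference}")

def submit_status(body):
    """Map the application status code in a submit response body to
    its documented label (0 = SUCCESS, -1 = ERROR). The JSON field
    name 'status' is an assumption for illustration."""
    codes = {0: "SUCCESS", -1: "ERROR"}
    return codes.get(json.loads(body).get("status"), "UNKNOWN")
```

After a successful submit, poll the result URL with the same Basic Auth credentials until the execution result is available (a 404 means the flow service has not yet run or the resultReference is wrong).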
7. On the Postman app Request page, select POST. Then copy the Synchronous URL from the IBM webMethods Integration flow service Overview page and paste it in the Enter Request URL field.
8. Go to the IBM webMethods Integration flow service Overview > Advanced details page and get the Content-Type and Accept header parameters.
9. Go to the Postman app Request page, click Headers, and enter the Content-Type and Accept values.
10. On the Postman app Request page, select POST, click Authorization, select the Type as Basic Auth, enter your IBM webMethods Integration login user name and password, and click Send.
The result appears in the Body section at the lower part of the Postman Request page. The execution result also appears on the IBM webMethods Integration flow service Execution History page and on the Monitor > Execution Results > Flow service Execution page under Execution Logs.
Services can be exposed over the internet using HTTPS. Additionally, the capability to flag service invocation as private has been introduced, ensuring exclusive access for components within your tenant’s network, while external clients over the internet are restricted. This feature provides the advantage of enabling policy-enforced access to the service through API Gateway.
To enable private service invocation, you need to activate two options: Enable Flow service invocation over HTTP and Enable private invocation. The latter is only accessible when the former is selected. This configuration ensures that flow service invocation remains private, restricting access to components within your tenant’s network.
How Private Service Access Works
To enable private service invocation, the essential steps are as follows:
Enabling private service invocation in flow service
Create and configure the API
Enable HTTPS protocol
Define routing configurations
Activate the API
Before you begin
You must have the API Gateway’s manage APIs or activate/deactivate APIs functional privilege assigned to perform this task.
You must have the API Gateway’s manage aliases functional privilege assigned to perform this task.
Coordinate with the support team to obtain the secondary host (internal domain) designated for the private service invocation and then create an alias in IBM webMethods API Gateway.
Enable private service invocation in flow service
Log in to your tenant and from the IBM webMethods Integration navigation bar, click Projects.
Select a project and then click Flow services.
Click the ellipsis icon available on a flow service and select Overview.
The Overview page appears.
On the Overview page, select the Enable Flow service invocation over HTTP option. Once the flow service is enabled to be invoked over HTTP, the private service URL appears.
Select Enable private invocation option for a private mode.
After generating the URLs, the next step is to copy them in API Gateway to create an API.
Create and configure the API
Click the APIs menu, click Create API, and select Create API from scratch option.
Under API details, provide an API name, and then click Technical information option.
In the Server URL field, enter the flow service domain name.
Click Resources and methods option and click Add resources.
Provide a Resource name, and under Resource path field, enter the resource location where the flow service can be accessed. So, the Resource path in this scenario will be:
Go to the Policies menu, click Edit, and select HTTPS checkbox under Protocol.
Define routing configurations
Go to Routing. Under Endpoint URI, provide alias and resource path as inputs.
To enter the alias as input, enter ${aliasname}.
To enter the resource path as input, enter ${sys:resource_path}.
Set required HTTP Connection Timeout (maximum 5 minutes) and Read Timeout (maximum 5 minutes).
Click Save.
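Combined, the Endpoint URI field then holds the alias followed by the resource path along these lines (the alias name flowAlias is an assumption; use the alias you created in API Gateway):

```
${flowAlias}${sys:resource_path}
```

At run time, API Gateway resolves the alias to the secondary (internal) host and appends the incoming resource path before routing the request.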
Activate the API
Click Activate and then confirm the activation action.
Copy Gateway Endpoint URL
Go to API details menu, scroll down to Gateway endpoint(s) section, and copy the endpoint.
Use this URL to execute the associated private flow service by sending HTTP requests to it.
Note
If you have set any authentication method at the time of configuring the private service invocation, then ensure that you provide the necessary authentication details when sending the HTTP request.
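A minimal Python sketch of preparing such a request to the copied Gateway endpoint is shown below. The endpoint URL and the x-Gateway-APIKey header name are assumptions for illustration; use whatever authentication your configured policy actually expects.

```python
import json
import urllib.request

def make_gateway_request(endpoint, payload, api_key=None):
    """Prepare (but do not send) a POST request to a Gateway endpoint.
    The api_key header name is hypothetical; substitute the
    authentication scheme configured on your API."""
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    if api_key:
        headers["x-Gateway-APIKey"] = api_key  # assumed header name
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(endpoint, data=data,
                                  headers=headers, method="POST")
```

Calling urllib.request.urlopen() on the returned request would send it; that step is omitted here because the endpoint is tenant-specific.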
IBM webMethods Integration allows you to export flow services from the Flow services page, provided you have the required capability. You can export a flow service from one tenant and import it into another tenant.
How It Works
1. From the IBM webMethods Integration navigation bar, click Projects. Select a project and then click Flow services. The Flow services page appears.
2. Click the ellipsis icon available on the flow service and select Export. If a flow service you are exporting uses Document Types, SOAP, or REST Connectors, those components are also exported along with the flow service.
Note
If a flow service you are exporting has Reference Data, the Reference Data will not be exported along with the flow service.
The Confirm Export dialog box appears.
3. Click Export to export the flow service. The flow service is downloaded as a zip file to your default download folder. The zip file has the same name as the flow service. Do not modify the contents of the exported zip file; if you do, the flow service cannot be imported back into IBM webMethods Integration.
Exporting a Flow service having an On-Premises Connector
If you export a flow service that uses an on-premises connector and later want to import it, first upload the same on-premises connector to IBM webMethods Integration. Otherwise, you cannot import the flow service.
IBM webMethods Integration allows you to import flow services from a zip file that was earlier exported from IBM webMethods Integration. You can export flow services from one tenant and import those flow services to another tenant. You can import flow services provided you have the required capability.
Note
If you want to import a flow service that uses an on-premises connector, first upload the same on-premises connector to IBM webMethods Integration. Otherwise, you cannot import the flow service.
How It Works
1. From the IBM webMethods Integration navigation bar, click Projects. Select a project and then click Flow services. The Flow services page appears.
2. Click Import and select the zip file that contains the exported flow service.
While importing a flow service, if a dependent flow service conflicts with an existing flow service in the same project, you can:
Rename the dependent flow service you are importing. This creates a new flow service within the same project.
Reuse the existing flow service by clearing the check boxes and clicking Submit.
If you want to overwrite the existing flow service, click Submit.
Points to Consider when Importing Flow Services
If you are importing a flow service that uses a Messaging connector, you will need to create the Accounts and destinations, and then configure them in the imported flow service.
If the flow service you are importing uses SOAP or REST connectors and those connectors do not exist in your system, continue importing the flow service. The connectors are imported along with the flow service. After importing, create the Accounts and then configure them in the imported flow service.
If a flow service you are importing uses an on-premises connector and if the connector does not exist in your system, the Account appears only after you have uploaded the on-premises connector.
If an Account is used in multiple steps in a flow service, after importing the flow service in a different project, the Account name appears in the relevant steps.
In such a flow service step, the account appears as configured, but is not available in the project.
In such cases, create the Account with the same name. The Account will be automatically configured in the relevant steps. If you create the Account with a different name, you have to configure the Account manually at each step. If an Account with the same name already exists in the project, then the Account will be automatically linked in the relevant steps.
Best Practices for Flow Services
No subtopics in this section
Displaying Flow Service Syntactic Errors: If syntactic errors are present while creating flow services, the IBM webMethods Integration system displays error messages and offers guidance. Ensure all syntactic errors are corrected to enable successful saving and functioning of the flow service.
Note
If any existing flow services contain syntactic errors, the errors may affect flow service execution. You must correct these errors and rerun the affected flow services.
Avoid Cross-Project References: IBM webMethods Integration does not support cross-project references. Avoid cutting or copying flow service steps from one project and pasting them into another project’s flow service. This can lead to errors and inconsistencies in your flow services. Instead, consider utilizing the clone, export, import, publish and deploy functionalities.
Ensure that there are no cross-project references when you copy-paste the flow steps. For flows created after version 10.16.5, the following message appears:
Handling Inconsistent State Error: If you face an error indicating that flow services are in an inconsistent state after copying or pasting steps, follow these steps to resolve the issue:
Delete the Step: In the destination project’s flow service, delete the step that was copied from another project.
Restore Previous Version: Alternatively, consider restoring the flow service to an earlier version from the version history within the destination project.
If you find that the above steps do not address the issue and inconsistencies persist, contact IBM support.
Optimize Document Type Usage within APIs: Document types defined within APIs are intentionally designed for use exclusively within the scope of those APIs. It's essential to maintain this boundary and avoid accessing them in flow services outside the APIs.
Be mindful that attempting to access document types defined within APIs outside their designated scope can potentially result in deployment errors. It’s important to adhere to these usage boundaries to ensure a smooth development and deployment process.
On-premises connector:
You can import flow services using an on-premises connector to different environments. However, you cannot use the flow service until you configure the on-premises connector in the destination environment.
Ensure that you configure the on-premises connector in the destination environment before publishing a project that has a flow service using an on-premises connector. This is because the on-premises connector is not published to the destination environment along with the flow service.
This section describes the errors that prevent the successful export of a flow service.
Error
This operation cannot be performed as Integrations <flow-name> are in inconsistent state.
Use case
Consider a scenario where you have a complex flow service that includes child flows and reference data.
Some of these components might have invalid constructs. These are marked with a red circle to highlight the issues.
Exporting the flow service without resolving the highlighted steps results in an export failure.
Resolution
To resolve the error, follow these steps:
Identify Invalid Constructs: Look for child flow steps and reference data with invalid constructs within the flow service. These are marked with a red circle to highlight the issues.
Delete Child Flow Steps with Invalid Constructs: Remove any child flow steps that have invalid constructs. These steps may be causing inconsistencies or errors in the flow.
Review Reference Data: Examine the reference data for invalid constructs and ensure that it is correctly configured. Remove or fix any reference data with issues.
If you still find issues, consider renaming and adding the reference data on your own.
Publish-Project errors with invalid constructs
This section addresses the challenges faced when attempting to publish complex flow services in a project, including parent flows, child flows, and reference data.
Error
Operation cannot be performed as Integrations <flow-name> are in an inconsistent state.
Use Case
Consider a scenario where you have a complex flow, which includes parent and child flows, along with reference data. For example,
Some of these components have invalid constructs. These are marked with a red circle to highlight the issues.
While attempting to publish this flow, you encounter the aforementioned error.
Resolution
To resolve the error, follow these steps:
Verify that any references related to the removed steps or components are either correctly resolved or removed from the flow.
If your flow includes child flows, ensure that they are correctly configured and do not rely on any removed steps or components.
Review any referenced data to confirm its validity and check if it is still linked to the removed steps or components.
Once all dependencies are appropriately resolved, attempt to publish the flow once more.
Handling large payloads and performance optimization
Flow services can encounter performance issues when processing large datasets, especially when iterating over datasets with more than 32,000 elements. Such issues typically result from the following two problems:
Problem 1: Variable persistence in child flows
Use case
When invoking a child flow within a parent flow, variables from the parent flow are carried over to the child flow, causing unexpected behavior and potential performance issues.
Example
The parent flow invokes a child flow without dropping the document variable, which is used in both flows.
In the child flow, the document variable persists, leading to unintended iterations and errors in conditional logic. When the input data is improper, the repeat loop fails to run, resulting in unexpected behavior.
Error
No error message is displayed, yet the child flows fail to run properly due to the persistence of the document variable.
Resolution
Before invoking the child flow, drop unnecessary variables.
Additionally, you can use ClearPipeline to specify which variables to preserve. For more information, see ClearPipeline.
Ensure proper cleanup of variables to avoid interference between the parent and child flows.
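Flow service pipelines are visual, but the effect of dropping variables before a child-flow invoke can be sketched with an ordinary dictionary standing in for the pipeline. The variable names (document, customerId) are illustrative:

```python
def child_flow(pipeline):
    """The child flow branches on whatever the pipeline contains: a
    stale 'document' key inherited from the parent triggers the
    iteration branch even when the child was given no real input."""
    if "document" in pipeline:
        return "iterated"
    return "skipped"

def parent_flow(drop_before_invoke):
    pipeline = {"document": [1, 2, 3], "customerId": 42}
    if drop_before_invoke:
        # Analogous to a Drop step or ClearPipeline with a preserve
        # list: keep only the variables the child flow needs.
        pipeline = {k: v for k, v in pipeline.items()
                    if k in {"customerId"}}
    return child_flow(pipeline)
```

With the drop in place the child flow skips the iteration branch; without it, the inherited document variable silently drives unintended iterations.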
Problem 2: Inefficient handling of large payloads
Use case
An attempt to convert a huge dataset (over 32,000 rows) to string format using loops. The problem resulted from using a loop to iterate through each element in the document list, with mapping and operations conducted independently for each element. Even after implementing optimizations such as addItemToVector and vectorToArray, the loop's execution time remained around two minutes.
Error
No error message is displayed, yet the service takes longer than expected.
Resolution
Use a temporary variable to store the document array and map it efficiently.
Convert the parent document to a JSON string directly, avoiding unnecessary iterations and operations on individual elements.
Identify efficient methods for handling large datasets, such as direct conversion to desired formats.
Minimize unnecessary iterations and operations, especially when dealing with large payloads.
Optimize data processing logic to improve performance and reduce run time.
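The contrast between per-element iteration and direct conversion can be sketched in Python. The flow-level constructs addItemToVector and vectorToArray have no direct Python analogue; list appends stand in for them here:

```python
import json

def rows_to_json_slow(rows):
    """Loop-style conversion: serialize each document in the list
    independently and join the pieces, mirroring a flow that iterates
    32,000+ elements one step at a time."""
    parts = []
    for row in rows:
        parts.append(json.dumps(row))
    return "[" + ",".join(parts) + "]"

def rows_to_json_fast(rows):
    """Direct conversion: serialize the parent document list in one
    call, avoiding per-element iterations and operations."""
    return json.dumps(rows)
```

Both produce equivalent JSON, but the direct conversion avoids the per-element mapping overhead that dominated the two-minute run time described above.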
Additional headers were observed when running a REST application
When a REST application flow service is run with changed header attributes, sw6 headers may be observed in the pipeline output result.
Note that sw6 is a valid request header used by End-to-End Monitoring for monitoring IBM product runtimes. This has no impact on feature functionalities.
Assets Supported in Flow Services
No subtopics in this section
All assets that can be used with workflows and flow services can be cloned, exported, imported, published, and deployed. These functions are helpful when migrating, promoting, or copying the existing flows.
Clone - Allows you to create a new workflow or flow service that is identical to but separate from the original. You can clone the workflow or flow service to the same or a different project, but within a tenant.
Export and Import - Allows you to export or import a workflow or flow service so that it can be used in multiple projects. When you export a workflow or flow service, a zip file is created containing all assets used in it along with their dependencies. The zip file is saved to your local drive's default download path, and you can use it to import those assets into the same tenant or a different tenant, even in a different region.
Publish and Deploy - Allows you to promote assets across regions, cloud providers, and datacenters. You can select multiple assets and publish them all at the same time. The project and assets retain the same names. Redeploying updates the existing assets with the new data.
The following table lists the functions that are supported in the current product version for flow service:
Feature | Clone | Export and Import | Publish and Deploy
Flow service with Predefined connectors | Yes | Yes | Yes
Flow service with Custom CloudStreams connectors | Yes | Yes | Yes
Flow service with REST connectors | Yes | Yes | Yes
Flow service with SOAP connectors | Yes | Yes | Yes
Flow service with On-Premise connectors | Yes | Yes | Yes
Flow service with Flatfile connectors | Yes | Yes | Yes
Flow service with connectors (For example, SFTP/FTP/PGP/Cloud Container/SMTP/HTTP) | Yes | Yes | Yes
Flow service with backend Node.js connectors (For example, MySQL, Pusher) | Yes | Yes | Yes
Flow service with CLI based Node.js connectors (For example, PubNub, Workspan, Weather Underground) | Yes | Yes | Yes
Flow service with Adapters (For example, Database and SAP ERP) | Yes | Yes | Yes
Flow service with Messaging | Yes | Yes | Yes
Flow service with Reference data | See note | See note | See note
Note: Reference data will not be exported or published, but the flow service can be exported or published.