More Tutorials

Creating Custom Actions using Swagger Files

IBM webMethods Integration supports 250+ connectors to help you create smart integrations faster. Further, it allows you to create custom actions/triggers using the Connector Builder or by importing your existing Swagger files.

In this tutorial, we will understand how to create a custom action using a Swagger file.

In order to create a custom action, install the Connector Builder on your system first.

Installing Connector Builder

To install the Connector Builder, follow the instructions given below:

Note
The Node version should be between 8.14.0 and 10.14.2.
Note
To get the tenant developer key, navigate to Tenant profile icon->Settings->My Information->Developer key.
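
For reference, a minimal install-and-login sequence, assembled from the command list in the Help section at the end of this tutorial, looks like this:

npm i @webmethodsio/wmiocli -g
wmio login

The first command installs the Connector Builder node module globally; wmio login then configures the deploy key and logs you in to IBM webMethods Integration (this is where the tenant developer key retrieved above is used).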

Importing Swagger file

Once you have installed the Connector Builder, the next step is to create a custom action for the application of your choice. To do so, follow the instructions given below:
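
For reference, assuming your Swagger definition is saved locally as petstore.json (a hypothetical file name), the import boils down to two commands from the Help section:

wmio init
wmio swagger petstore.json

wmio init initializes a new IBM webMethods Integration app in your directory, and wmio swagger imports all API calls from the specified Swagger file as actions in the connector.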

Note
If the authorization type is set to ‘Connection’ or ‘Token’, the relevant action will be created automatically.

On selecting ‘Other’, a list of available OAuth providers will be displayed on the screen. Select the required OAuth provider from the given list.

To proceed without creating an authorization, select ‘Skip’. The required actions will be created automatically.

Note
You can alternatively add an authorization using the command:

wmio auth

To learn how to add an authorization using this command, click here.

Note
The created OAuth will be used by all the actions that are present in the V1 folder of the action directory.

Testing the action

Testing the custom action is an optional step. However, it is recommended to perform this step to ensure that the code you have written is working as expected.

Note
The testing will be done against the mock data you have provided in the code.

We will now understand how to test the action with the help of an example. Suppose we want to test an action, foldersfolder_id__itemsget, that was created using the example Swagger file. To do this, follow the steps given below:

To test the action, add the mock_input object below the output object, as shown below:
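
For illustration, here is a minimal sketch of the relevant fragment of the generated action file, assuming a hypothetical input schema with a single folder_id key (the surrounding file structure is omitted):

output: {
  // output schema generated from the Swagger file
},
// mock data used by wmio test; it must contain every key of the input schema
mock_input: {
  folder_id: '12345'
}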

Note
The mock_input object should have all keys (of the input schema) present in the input object.

With this, the action is ready to be tested. You can now run the wmio test command to test the action.

Note
  • A file named auth.conf will be automatically created at the root level of the current directory. This file contains an object with the authorization details (as shown below) of the action that needs to be tested. The authorization details will be used by the CLI while testing the action.

  • Ensure that the index.json file contains only those actions in the array that you would want to test.

  • In case the index.json file contains an empty array of actions, only the passed access token will be validated. The validation will be performed using the validate function present in the authentication.js file.

Deploying the action

Once you have tested your action, the next step is to deploy it. To do so, run the following command:

wmio deploy

With this, your action will be available for you to use locally. Refresh the browser to see the deployed action added in the Connectors panel under the Services tab.

Error Description

Here is a list of some common errors you might face while deploying the action to your tenant.

Executing the action

The process of executing the action remains the same as that of any other action on IBM webMethods Integration.

Prerequisites for executing the action

Before you execute any newly created action, ensure that:

Editing an existing custom action

You can edit an existing custom action created through a Swagger file. To do so, follow the steps given below:

Deleting an existing custom action

You can delete an existing custom action created using a Swagger file. To do so, follow the steps given below:

Help

Given below is the list of all commands used to create a custom action using a Swagger file:

  • npm i @webmethodsio/wmiocli -g: Installs node module to create and deploy connectors on IBM webMethods Integration
  • wmio login: Configures the deploy key and logs you into IBM webMethods Integration
  • wmio init: Initializes a new IBM webMethods Integration app in your directory
  • wmio swagger <swagger_file>: Imports all API calls from the specified Swagger file as actions in the connector
  • wmio auth: Creates an authentication scaffolding in the app directory
  • wmio test: Tests and runs the created custom connectors
  • wmio deploy: Builds app and deploys it to IBM webMethods Integration
  • wmio pull: Deletes an existing custom connector
    How Dynamic Forms work with Google Sheets

    Dynamic Forms make it easier to work with actions that expect data for custom fields. In this tutorial, we will understand how to use the Dynamic Forms feature in a Google Sheets action, with the help of an example.

    How it works

    Let’s say, you want to create a workflow that adds a new row to Google Sheets each time a note is created in Evernote. To do this, follow the steps given below:

    1. Add and configure Evernote - New Note trigger.

    2. Drag the Google Sheets connector onto the canvas and connect it to the Evernote trigger (Start icon) and the Stop icon.

    3. Double-click the Google Sheets connector, and select the Add Row action from the Select Action drop-down list. Select/add the account for Google Sheets that you want to use to execute this action. Read more about Creating Accounts.

      Once you have specified the account, click Next. You will see the configuration window similar to the one given below:

      • Spreadsheet ID: Select/specify the ID of the spreadsheet in which you want to add a row.

      • Sheet ID: Select/specify the ID of the sheet in which you want to add a row.

      As soon as you specify the Sheet ID, IBM webMethods Integration will automatically fetch the headers of all columns of the specified sheet in the configuration window.

      You can then add/enter the relevant values for each of these columns. You can keep a column blank if you do not want to provide a value for it. This makes it easy for you to add rows to your Google Sheets spreadsheets.

      Once you have entered all the required details, click Next, test the action, and click Done to return to the canvas.

      Note
      Currently, the Dynamic Forms feature has been enabled only for the actions of the Google Sheets application.

    How to add rows to an existing Smartsheet

    You can add rows to an existing Smartsheet by using the Smartsheet - Add Rows action. Follow the steps given below to achieve this.

    How it works

    1. Add the Smartsheet connector to the canvas and connect it to the Start icon.

    2. Double-click the Smartsheet connector, and select the Add Rows action from the Select Action drop-down list. Read more about Creating accounts.

    3. Select/add the account for Smartsheet that you want to use to execute this action.

    4. Once this is done, click Next, and provide the following details in the action configuration window that appears next:

      • Sheet ID: Select/specify the ID of the sheet in which you want to add rows.

      • Row Location (Mandatory): Select the location where you want to add a row from the options given in the drop-down list. If you select the Child of Parent option, you have to enter the Parent ID, and if you select Sibling, you have to enter the Sibling ID.

      Note
      Child of Parent adds a new row as a child of the specified row, and Sibling adds a new row at the same hierarchical level as the specified row.
      • Parent ID: Select the ID of the parent row. The new row is added as the first child of this parent row.

      • Insert Row in the Child List At: Select the position of the row you want to add.

      You can add multiple rows and columns by clicking the ADD link.

    5. Once you have entered the relevant details, click Next, test the action, and click Done.

      This will take you back to canvas.

    6. Next, Save the workflow and click Test.

      This will add a new row (along with the value, if any) at the specified location. Please note that to see the reflected changes, you will need to refresh the selected sheet in Smartsheet.

    How to execute private webhook-based workflows

    You can execute private webhook-based workflows using:

    1. IBM webMethods API Gateway
    2. the HTTP connector
    3. the customer environment

    IBM webMethods API Gateway

    Perform the following steps to execute a private webhook-based workflow using IBM webMethods API Gateway:

    1. Enable Private Webhook option for your workflow.

      The Webhook URL shows two distinct URLs: Internal url and Private url.

    2. Copy the Internal url provided on the webhook configuration page once the Private Webhook is enabled.

      Following is an example of the Internal url after enabling the private webhook where demo is the tenant name.

      http://demo.int.webmethods/private/runflow/run/sync/v2/2FTxGYM0ko

      Use this URL for the rest of the configurations.

    3. Create an Alias in IBM webMethods API Gateway.

      Log in to IBM webMethods API Gateway, click the Profile icon, and select Aliases.

      Click Create Alias, provide an alias name, and then select Technical information option.

      In the Default value field, paste the Internal url, and then remove everything after the /run part. So, the Default value in this scenario will be:

      http://demo.int.webmethods/private/runflow/run

      Next, Save the Alias.

    4. Create and configure the API.

      Click the APIs menu, click Create API, and select Create API from scratch option.

      Under API details, provide an API name, and then click Technical information option.

      In the Server URL field, paste the Internal url, and then remove everything after the /run part.

      Following is an example of the Server URL:

      http://demo.int.webmethods/private/runflow/run

      Click Add.

      Next, click Resources and methods option and click Add resources.

      Provide a Resource name and, in the Resource path field, enter everything after the /run part of the webhook URL. So, the Resource path in this scenario will be:

      /sync/v2/2FTxGYM0ko

      Select the required HTTP method(s) and then click Add.

    5. Enable HTTPS protocol.

      Go to the Policies menu, click Edit, and enable the HTTP and HTTPS protocols.

    6. Define routing configurations.

      Go to Routing. Under Endpoint URI, provide alias and resource path as inputs.

      • To enter the alias as input, enter ${aliasname}.
        For example, ${PrivateInternal}.
      • To enter the resource path as input, enter ${sys:resource_path}.
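
      Putting these together for our example, an Endpoint URI of ${PrivateInternal}${sys:resource_path} would resolve at runtime to:

      http://demo.int.webmethods/private/runflow/run/sync/v2/2FTxGYM0ko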

      Set required HTTP Connection Timeout (maximum 5 minutes) and Read Timeout (maximum 5 minutes).

      Click Save.

    7. Activate the API.

      Click Activate and then confirm the activation action.

    8. Copy Gateway Endpoint URL.

      Go to API details menu, scroll down to Gateway endpoint(s) section, and copy the endpoint.

      Now you can use this URL to execute the associated private webhook-based workflow by sending HTTP requests to it.

    Note
    If you have set any webhook authentication method at the time of configuring the webhook, then ensure that you provide the necessary authentication details when sending the HTTP request.
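
    As a sketch, here is how such a request can be sent from Node.js (version 18 or later, where fetch is built in); the endpoint URL and credentials below are placeholders, not values from this tutorial:

    // Execute the private webhook-based workflow through the gateway endpoint.
    (async () => {
      const endpoint = 'REPLACE_WITH_GATEWAY_ENDPOINT_URL'; // copied in step 8
      const res = await fetch(endpoint, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          // Needed only if webhook authentication was configured on the webhook.
          'Authorization': 'Basic ' + Buffer.from('user:password').toString('base64')
        },
        body: JSON.stringify({ payload: 'example' })
      });
      console.log(res.status, await res.text());
    })();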

    HTTP connector

    Perform the following steps to execute a private webhook-based workflow using the HTTP connector:

    1. Enable Private Webhook option for your workflow.

      The Webhook URL shows two distinct URLs: Internal url and Private url.

    2. Copy the Internal url provided on the webhook configuration page once the Private Webhook is enabled.

      Following is an example of the Internal url after enabling the private webhook where demo is the tenant name.

      http://demo.int.webmethods/private/runflow/run/sync/v2/2FTxGYM0ko

      Use this URL for the rest of the configurations.

    Customer environment

    Perform the following steps to execute a private webhook-based workflow using the customer environment:

    1. Enable Private Webhook option for your workflow.

      The Webhook URL shows two distinct URLs: Internal url and Private url.

    2. Copy the Private url provided on the webhook configuration page once the Private Webhook is enabled.

      Following is an example of the Private url after enabling the private webhook where demo is the tenant name.

      https://demo.private.int-aws-us.webmethods.io/private/runflow/run/sync/v2/2FTxGYM0ko

      Use this URL for the rest of the configurations.

    How to use Swagger Action

    You can access the APIs and make API calls of any service that is built using either RAML or Swagger. The integration platform provides this functionality with the Swagger action.

    In this guide, we will help you understand how to use Swagger.

    How to use Swagger

    The Swagger action lets you access the APIs and make API calls to any service that is built using Swagger. In essence, this action lets you create a custom action form by simply providing a Swagger JSON file.

    1. Add the Swagger action to the canvas as shown below.

    2. Double-click the Swagger action; a window appears on the screen. In the Swagger File URL field, enter the URL of the Swagger JSON file of the service whose APIs you want to get, optionally enter the service account credentials, and click the Fetch button. For example, if you want to retrieve the APIs of Petstore (a sample Swagger JSON), enter http://petstore.swagger.io/v2/swagger.json in the Swagger File URL field, enter the credentials, and click Fetch.

    3. A new Select an API field will be automatically added to the form. Select the API you want to execute and then click on Next.

    4. Once you have selected an API, the API form will automatically display all the mandatory as well as optional fields of the selected API call.

    5. Enter relevant data in the fields, and click on Done to return to your canvas. Now, when you test the workflow with this action, the selected API call will be executed.

    How to read the contents of a downloaded file

    When you download a file using Google Drive, Dropbox, or other such applications, it is saved temporarily in the IBM webMethods Integration engine's local storage. To read or access this file, you need to use another online file storage application to which the file can be uploaded. Let’s understand how this works with the help of an example.

    Suppose you need to download a file using Google Drive and read it. To do this, follow the steps given below:

    Configure the Google Drive-Download File by ID action

    1. Add the Google Drive connector to canvas, connect it to the Start icon, and double-click the connector icon. This will open the configuration window where you can select the action you want to use.

      • Select Action: For the purpose of this tutorial, select the downloadFileById action from the drop-down list. This will populate the following two fields:

      • Name: Provide a label for the action you want to configure.

      • Authorize Google Drive: Select the account you want to use to execute the action from the drop-down list. If you have not created an account yet, click the + button. You will see two options:

        • Default Authorization: Select this option to automatically generate all keys required to execute all Google Drive actions.
        • Or: Select this option to provide the relevant keys required to execute all Google Drive actions.
        • Select one of the options, provide the required information as prompted on the screen, and click Add.
        • This will create a new account for your Google Drive account. Once you create an account, you can use it to execute all other Google Drive actions and triggers.
    2. Once this is done, click Next, test the action, and click Done.

      At this stage, if you run the workflow, the specified file will be downloaded to the engine local storage. To read/access this file, use Dropbox Download file action (or any other similar action of a cloud storage application).

    Configure the Dropbox-Upload File action

    1. Add Dropbox connector to canvas, connect it to the Google Drive connector, and double-click the Dropbox connector.

    2. Select the Upload File action, provide a label for action, and add or select the account you want to use to execute the action, just like we did for Google Drive.

    3. Once you have entered all the details, click Next, test the action, and click Done.

    Save and run the workflow

    Connect the Dropbox application icon with the Stop icon, Save the workflow, and click on Test.

    This will download the specified file from your Google Drive account and upload it to the specified folder in Dropbox, where you can read the file.

    How to change the state of a Nest thermostat

    You can remotely change the state of your Nest thermostat with IBM webMethods Integration.

    Let’s understand how to do it with the help of an example.

    How it works

    Let’s say you want to create a workflow that sets the state of your Nest thermostat to home at 6 PM every day. To do this, follow the steps given below:

    1. Add and configure the Clock trigger as shown below:

      • Trigger Label: Enter a suitable name for the trigger.

      • Trigger type: Select the Repeat From option from the drop-down list. This instructs the trigger to repeat at regular intervals, starting from the specified date and time.

      • Date: Select the date from which you want to start the repeat trigger.

      • Time: Enter the time from which you want to start the repeat trigger. For the purpose of this workflow, we will set it to 6 PM (18:00).

      • Timezone: Select the timezone for the trigger start time.

      • Runs on every: Select the trigger interval. The trigger will be fired every time the specified interval elapses. Since we need to run the workflow every day, select 1 Day from the drop-down list.

      Once you have entered the details, click Done. This will take you back to canvas.

    2. Add Nest connector to the canvas and connect it to the Clock trigger.

      Next, double-click the Nest connector icon, select the Set State of Structure action from the Select Action drop-down list, and select the Nest account you want to use to execute the action.

      Once this is done, click Next.

      In the action configuration form that appears next, provide the details as given below:

      • Set State: Set the state to home. This action will ensure that the state is changed to home when the clock triggers off at a specified time.

    3. Once you have entered all the details, click the Next button, Test the action, and click the Done button to return to the canvas. Connect this action to the Stop icon and Save the workflow. Now, every day at 6 PM, this integration will be triggered. As a result, it will set the state of your Nest thermostat from away to home.

    How Return Data on Sync Webhook works

    The Return Data on Sync Webhook action enables you to run a workflow by calling a URL (webhook URL) and get the workflow’s output in the body of the response.

    We will understand how this action works with the help of an example.

    Example

    Let’s say, you have a workflow that creates a board in your Trello account, posts the details of the board on the specified Slack channel, and also sends board details via Email.

    Once all the actions have been configured and connected, double-click the Start icon. A new screen will appear containing a list of trigger services. Select Webhook trigger from the list.

    On clicking the Webhook trigger, a unique webhook URL will be generated. You can optionally add an extra layer of security to your webhook URL by enabling the webhook authentication option. Click on Next and then on Done to go back to the canvas.

    At this point, if you hit this webhook URL in any browser, the workflow will get executed automatically, and you will receive the following response in the browser window:

    Now, drag and drop Return Data on Sync Webhook as the last action of your workflow. In our example, we will add the action after the Send an email action as shown in the image below:

    Configure the Return Data on Sync Webhook action and insert values in the given fields as per the instructions given below:

    Once you have added this action to your workflow, ‘/sync/v2/’ will be automatically included in your webhook URL, as shown in the image below.

    The Return Data on Sync Webhook action syncs the workflow with the webhook URL and sends the workflow output to it when the webhook URL is hit.
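
    As a sketch, assuming a placeholder URL that follows the /sync/v2/ pattern shown above, the output can be read from the response body in Node.js (version 18 or later):

    // Hit the sync webhook; the workflow output is returned in the response body.
    (async () => {
      const res = await fetch('REPLACE_WITH_SYNC_WEBHOOK_URL');
      console.log(await res.text()); // output sent by Return Data on Sync Webhook
    })();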

    Note
    • When you enable ‘Return Data on Sync Webhook’ for your webhook, it expects a response within 180 seconds of the triggering event. If the response is not received within this period, a ‘408 Request Timeout’ error is thrown. This error means the server chose to terminate the connection rather than continue waiting for the response.
    • However, if you are using a browser other than Mozilla Firefox, such as Google Chrome or Microsoft Edge, you may observe automatic retries performed by the browser before getting the ‘408 Request Timeout’ error. This means your browser is automatically making multiple requests to get a response, and hence you may have to wait for more than 180 seconds before you see the ‘408 Request Timeout’ error.
    • If you are using services like Postman and the response is not received within 180 seconds, the ‘408 Request Timeout’ error appears soon after the timeout duration.

    Now when the Webhook URL is hit, you can see the required board details on the screen.

    How to retrieve the value of a custom field

    There are some applications that allow you to create custom fields while creating objects or records. When you integrate IBM webMethods Integration with such applications, it provides a Custom Fields block that lets you pass values for the specified custom fields. In this guide, we will understand how to retrieve the values of these custom fields.

    Let’s say you want to retrieve the name (form field) and address (custom field) of a particular attendee of your EventMobi account, and send it to a specific recipient by email. To do this, follow the steps given below:

    Add actions

    1. Add EventMobi connector and Send an Email action on canvas and connect them as shown below:

    Configure the Get Attendee Details action

    1. Double-click the EventMobi application icon, select the Get Attendee Details action and add or create the EventMobi account you want to use to execute the action.

    2. Once this is done, click Next.

      Here, you will see the action configuration form. Provide the values for the form input fields as instructed below:

      • Event Shortcode: Enter the event shortcode. If your event app URL is eventmobi.com/sample123, then sample123 will be the event shortcode.

      • Attendee ID: Select/specify the ID of the attendee whose name and address details you want to retrieve.

    3. Once you have entered the details, click Next, test the action (optional), and click Done. This will take you back to the canvas.

    Configure the Send an Email action

    Configure the Send an Email action. To do so, double-click the Send an Email action and click Next. In the action configuration window that appears, provide values for the following fields:

    First, click on the first_name key given on the left-hand side of the configuration window under Get Attendee Details output parameters. This will retrieve the name of the attendee.

    The next parameter we need (i.e., the Address key) is defined as a custom field and hence will be listed under the custom_fields_values key of Get Attendee Details.

    Locate the custom_fields_values key and click on the + icon given beside it to expand it. You will see a list of properties for the specified custom field. Since we need just the value of the custom field, click on the value key. This will retrieve the address of the attendee.

    Once you have entered the details, click Next, test the action, and click Done to return to the canvas. Then, click the Test button located at the upper-right corner of the canvas to run the workflow. This will retrieve the name and address of the specified attendee from your EventMobi account and will send it to the specified recipient through Gmail.

    Note
    In case you have defined more than one custom field, you can retrieve the values of multiple custom fields by changing the index value of the parameter. For example, if you have defined three custom fields, the index value of the first custom field will be 0, the index value of the second will be 1, and the index value of the third will be 2.

    The parameters to be passed to retrieve the values of the first and second custom fields are {{$a23.custom_fields_values[0].value}} and {{$a23.custom_fields_values[1].value}}.
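
    For illustration, here is a sketch of the output shape those expressions assume; the name property and the sample values are hypothetical, while custom_fields_values and value come from the output described above:

    // Hypothetical excerpt of the Get Attendee Details output ($a23):
    const attendee = {
      first_name: 'Jane',
      custom_fields_values: [
        { name: 'Address', value: '221B Baker Street' }, // index 0
        { name: 'Phone', value: '555-0100' }             // index 1
      ]
    };
    console.log(attendee.custom_fields_values[0].value); // '221B Baker Street'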

    How to get data from an action and push it to another...

    You can use the output of one action as an input for another action by using the keys given on the left side of the configuration window. Let’s see how to do this with the help of an example.

    In this example workflow, we will fetch the details of a particular campaign from your Marketo account, and use those details to create a new campaign in your Salesforce account.

    Add Actions

    1. To do this, add Marketo and Salesforce connectors to your IBM webMethods Integration canvas, and connect them in the order shown below:

    Use Output

    1. Configure the Marketo-getLeadById action

      Double-click the Marketo connector icon and select the getLeadById action from the Select Action drop-down list. When you do this, two more fields will be populated in the window.

      Provide the name for the action, select/add the account for Marketo, and click Next.

      Select/specify the ID of the campaign that you want to fetch in the Campaign ID field.

      Once this is done, click Next and test the action by clicking on the Test button.

      This serves two purposes. First, it checks whether the action has been configured properly or not, and second, it fetches the details of the latest campaign and displays it in the output tab.

      Once this is done, click Done to return to the canvas.

    2. Configure the Salesforce - createLeads action

      We will now configure the Salesforce - createLeads action. To do so, double-click the Salesforce connector icon, select the Create Campaign action in the window that appears, and select the appropriate Salesforce account you want to use to execute the action. Once this is done, click Next.

      Here, you can see the output data of the Marketo - getLeadById action on the left side of the screen. This output data includes the output keys returned by the action. You can use these keys to provide inputs for the Salesforce - Create Leads action.
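
      For example, using the template syntax shown earlier in this tutorial, if the Marketo action has the internal ID $a12 (a hypothetical ID) and its output contains a company key, the corresponding Salesforce input field can be filled with:

      {{$a12.company}}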

    Similar Instances

    Similarly, you can add other details of the campaign, such as company and city.

    Once you are done with this, click Next.

    You can now optionally test the configured action by clicking Test, or go back to the canvas by clicking Done.

    Now, when you Save and Test this workflow, IBM webMethods Integration will automatically fetch the details of the specified Marketo campaign, and using those details, create a new Salesforce campaign.

    How to retrieve the PagerDuty incident details and post them on Cisco Webex Teams space and Slack channel

    You can fetch the details of the incoming PagerDuty incidents and instantly post them on Cisco Webex Teams space and Slack channel.

    How it works

    To do this, follow the steps given below:

    1. Add and configure the PagerDuty - New Incident trigger as instructed below:

      • Trigger Label: Enter a suitable name for the trigger.

      • Select Trigger: Select New Incident trigger from the options available in the drop-down list. This instructs IBM webMethods Integration to trigger the workflow whenever a new incident is added in a particular service in your PagerDuty account.

      • Authorize PagerDuty: In order to use this trigger, you will have to first create a PagerDuty account. Click the + button, and create your account using either the Default Authorization option or a service access token (i.e., the Or option). If you have already created an account, select it from the drop-down list.

      • Service ID: Select/specify the ID of the service for which the trigger is to be set. In PagerDuty, the service ID can be found in the URL of the service, for example, PQQB1HE.

      Once you have entered the details, click Done. This will take you back to canvas.

    2. Add Cisco Webex Teams connector to the canvas and connect it to the PagerDuty trigger.

      Double-click the Cisco Webex Teams connector icon, select the Post New Message action from the Select Action drop-down list, and select the Cisco Webex Teams account you want to use to execute the action.

      Once this is done, click Next. In the action configuration form that appears next, provide the details as given below:

      • Space ID: Select/specify the ID of the space on which you want to post the message.

      • Message: Specify the message that you want to post in the space. In this case, we will enter a custom message, along with the output of the PagerDuty trigger.

      Once you have entered the details, click Done. This will take you back to canvas.

    3. Add Slack connector to the canvas and connect it to the PagerDuty trigger.

      Next, double-click the Slack connector icon, select the chatPostMessage action from the Select Action drop-down list, and select the Slack account you want to use to execute the action. Once this is done, click Next.

      In the action configuration form that appears next, provide the details as given below:

      • channel: Enter the ID of the channel on which you want to post the message.

      • text: Specify the message that you want to send to the channel. In this case, we will enter a custom message, along with the output of the PagerDuty trigger.

      Once you have entered all the details, click the Next button, Test the action, and click the Done button to return to the canvas.

    4. Connect the Cisco Webex Teams and Slack connectors to the Stop icon and Save the workflow.

      Now, whenever a new incident is triggered in the specified service in PagerDuty, this workflow will send the incident details to the specified Cisco Webex Teams space and Slack channel.

    How to run a flow service with an XML input

    In this tutorial, let’s see the additional steps required to support XML input for Flow services and to generate XML output, using the examples below:

    Accept XML as input for Flow service

    1. Create the Flow service.

    2. Click the icon and define the input field node with the type set to Object, as shown below.

      When an XML input is provided to any service for execution, IBM webMethods Integration converts the XML input to an XML node and sends the XML node to the service for execution.

    3. Select the xmlNodeToDocument service to convert the XML node to a document.

    4. Use the AddInts math service to perform mathematical operations on string-based numeric values.

    5. Now, the pipeline will have two fields: node and document.

    6. Add the required fields under the document and map them to the AddInts service.

      This groups the elements under the document field.

    Sending XML output from Flow service

    1. Add Transform Pipeline to group the variables that you want to send in a response and drop all unwanted variables.

    2. Next, use the documentToXMLString service to convert the response document to an XML string.

    3. Select the Flow function and then select the SetHttpResponse service to set the HTTP response to success or failure along with the response XML.

      Map the XML string to the response string and set the response code as required.

    4. On the Overview page, select the Enable Flow service to be invoked over HTTP option. Once the Flow service is enabled to be invoked over HTTP, the HTTP request URL appears.

    5. Download the Postman app (Windows 64-bit) from https://www.postman.com/downloads, install the app, and then sign in to Postman.

    6. On the Postman page, click Create a request.

      Invoking the service from Postman appears as shown below:
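
      As an alternative to Postman, here is a minimal sketch of the same call from Node.js (version 18 or later); the URL, credentials, and XML payload are placeholders:

      // Invoke the Flow service over HTTP with an XML body.
      (async () => {
        const url = 'REPLACE_WITH_FLOW_SERVICE_HTTP_URL'; // from the Overview page in step 4
        const xml = '<numbers><num1>2</num1><num2>3</num2></numbers>'; // hypothetical input
        const res = await fetch(url, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/xml',
            'Authorization': 'Basic ' + Buffer.from('user:password').toString('base64')
          },
          body: xml
        });
        console.log(res.status, await res.text()); // XML string set by the response step
      })();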

    How to Create Custom OAuths for Google Connectors

    You can create custom OAuths for supported Google connectors. Custom OAuths can then be used to configure supported triggers and actions.

    When you are configuring an action or a trigger, you are prompted to select an existing account or create a new account. To create a custom OAuth, click the + icon located beside the Authorize {connector_name} label. You will be redirected to the screen given below:

    Here, you need to enter the relevant details associated with your Google service account to create a custom OAuth.

    Retrieving Client ID and Client Secret

    You need to have a relevant Google connector app in Google Cloud Platform to retrieve its Client ID and Client Secret. We will first understand how to create an app for a Google Connector (if you have already created an app, skip to step 4).

    Let’s say you want to create an app for Dialogflow. To do this, follow the steps given below:

    1. Log in to Google Cloud Platform and create a new project

      First, log in to Google Cloud Platform. You will be prompted to create a new project.

      Once you have entered the relevant details, click Create. This will redirect you to the project dashboard.

    2. Enable service APIs

      Click on the + ENABLE APIS AND SERVICES button.

      This will take you to the Google API library where you can view a list of all services supported by Google.

      Click the service for which you want to create an app.

      This will set up the selected service API in your account. Click on Enable to enable it for your account.

    3. Set up OAuth Client ID

      Once the API is enabled, you will be redirected to the API dashboard.

      Click on the Credentials menu listed in the left-side panel, and then select the OAuth Client ID option from the CREATE CREDENTIALS dropdown menu.

      You will be prompted to configure the consent screen details. To do so, click the Configure consent screen button.

      This will take you to the OAuth consent screen where you need to provide the following details:

      • Application type: Specify whether you want to create a public or private application.

      • Application name: Provide a suitable name for your application.

      • Authorized domains: Enter the domain(s) for which you want to enable OAuth access.

      Once this is done, click Save.

      Set Application type to Web in the next screen that appears and specify the redirect URL in the Authorized redirect URIs field.

      Note
      The redirect URI should match the domain(s) you have specified in the previous screen.


      Once this is done, click Create. With this, your app will be successfully created.
    4. Retrieve Client ID and Client Secret

      As soon as the app is created, you will see the Client ID and Client Secret in the pop-up window.

      You can alternatively view the Client ID and Client Secret by navigating to the relevant project dashboard and clicking the Credentials menu listed in the left-side panel.

      You will see a list of all existing OAuth clients.

      Locate the OAuth client whose Client ID and Client Secret you want to retrieve, and click the Edit OAuth Client icon given against it.

      This will take you to the OAuth Client configuration screen where you can see the Client ID and Client secret for the selected app at the top of the page.

      Copy the Client ID and Client Secret from here and add them to the respective fields in the Add Account window.

    Retrieving Access Token and Refresh Token

    You can use services like Postman to retrieve the Access Token and Refresh Token. To do this, you need to perform two steps in which you make two API requests to the relevant Google service.

    1. Retrieve the authorization code required to fetch the access token and refresh token.

      Open Postman and set up the first request as given below:

      Key                      Value
      client_id                {Enter client_id}
      redirect_uri             {Enter redirect_uri}
      response_type            code
      scope                    {Enter relevant scope from this list}
      include_granted_scopes   true
      state                    pass through value
      prompt                   consent
      access_type              offline

      Once this is done, you will notice that the params passed by you are appended to the request URL. Copy this modified request URL, paste it in any browser, and hit enter.
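
      For reference, these parameters are appended to Google's standard OAuth 2.0 authorization endpoint, so the assembled request URL (with the {} placeholders filled in) has this form:

      https://accounts.google.com/o/oauth2/v2/auth?client_id={client_id}&redirect_uri={redirect_uri}&response_type=code&scope={scope}&include_granted_scopes=true&state=pass_through_value&prompt=consent&access_type=offline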

      You will be prompted to select the Gmail account you want to use. Once you do this, you will see the consent screen for the app created by you.

      Click Allow. This will take you to the homepage of your app. Copy the value of the code key displayed in the address bar and save it in any text editor. We will need this key in the next step, when we send the request to retrieve the Access Token and Refresh Token.

    2. Retrieve Access Token and Refresh Token

      Open Postman and set up the second request as given below:

      Key             Value
      client_id       {Enter client_id}
      client_secret   {Enter client_secret}
      redirect_uri    {Enter redirect_uri}
      grant_type      authorization_code
      code            {code retrieved from step 1}

      Once you have entered all the details, click Send. This will return the Access Token and Refresh Token, along with other details, in the Body tab of the response.

      Copy the Access Token and Refresh Token from here and add them to the respective fields in the Add Account window.
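
      As a sketch, the same exchange can also be performed from Node.js (version 18 or later); the token endpoint below matches the Refresh URL mentioned in the Note that follows, and all other values are placeholders:

      // Exchange the authorization code for the Access Token and Refresh Token.
      (async () => {
        const params = new URLSearchParams({
          client_id: 'YOUR_CLIENT_ID',
          client_secret: 'YOUR_CLIENT_SECRET',
          redirect_uri: 'YOUR_REDIRECT_URI',
          grant_type: 'authorization_code',
          code: 'CODE_FROM_STEP_1'
        });
        const res = await fetch('https://www.googleapis.com/oauth2/v4/token', {
          method: 'POST',
          headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
          body: params
        });
        console.log(await res.json()); // contains access_token and refresh_token
      })();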

      Note
      The value of Refresh URL will always be https://www.googleapis.com/oauth2/v4/token and the value of Grant Type will always be refresh_token for all Google connectors.

    List of Scopes for Google Services

    The table given below contains the scope(s) to be used while sending the first request in order to retrieve the Access Token and Refresh Token.

    Note
    Use a space separator in case of multiple scopes.
    • Gmail: https://www.googleapis.com/auth/gmail.modify https://www.googleapis.com/auth/gmail.readonly https://www.googleapis.com/auth/gmail.compose
    • Dialogflow: https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/dialogflow
    • Google Contacts: https://www.google.com/m8/feeds https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile
    • Google Forms: https://spreadsheets.google.com/feeds https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/drive.readonly
    • Google Tasks: https://www.googleapis.com/auth/tasks https://www.googleapis.com/auth/tasks.readonly https://www.googleapis.com/auth/taskqueue.consumer https://www.googleapis.com/auth/taskqueue https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile
    • Google Sheets: https://spreadsheets.google.com/feeds/ https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/drive.readonly
    • Google Drive: https://www.googleapis.com/auth/drive.metadata https://www.googleapis.com/auth/drive.file https://www.googleapis.com/auth/drive https://www.googleapis.com/auth/drive.apps.readonly https://www.googleapis.com/auth/drive.scripts https://www.googleapis.com/auth/drive.install https://www.googleapis.com/auth/drive.appdata https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile
    • Google Calendar: https://www.googleapis.com/auth/calendar https://www.googleapis.com/auth/calendar.readonly https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile
    • Google Apps Admin: https://www.googleapis.com/auth/admin.directory.device.mobile https://www.googleapis.com/auth/admin.directory.device.mobile.readonly https://www.googleapis.com/auth/admin.directory.device.mobile.action https://www.googleapis.com/auth/admin.directory.group.member https://www.googleapis.com/auth/admin.directory.group.member.readonly https://www.googleapis.com/auth/admin.directory.group https://www.googleapis.com/auth/admin.directory.group.readonly https://www.googleapis.com/auth/admin.directory.orgunit https://www.googleapis.com/auth/admin.directory.orgunit.readonly https://www.googleapis.com/auth/admin.directory.user https://www.googleapis.com/auth/admin.directory.user.readonly https://www.googleapis.com/auth/admin.directory.user.alias https://www.googleapis.com/auth/admin.directory.user.alias.readonly https://www.googleapis.com/auth/admin.directory.user.security https://www.googleapis.com/auth/admin.directory.rolemanagement https://www.googleapis.com/auth/admin.directory.rolemanagement.readonly https://www.googleapis.com/auth/admin.directory.userschema https://www.googleapis.com/auth/admin.directory.userschema.readonly https://www.googleapis.com/auth/admin.directory.notifications https://www.googleapis.com/auth/admin.directory.customer https://www.googleapis.com/auth/admin.directory.customer.readonly https://www.googleapis.com/auth/admin.directory.domain https://www.googleapis.com/auth/admin.directory.domain.readonly https://www.googleapis.com/auth/admin.directory.resource.calendar https://www.googleapis.com/auth/admin.directory.resource.calendar.readonly https://www.googleapis.com/auth/admin.directory.device.chromeos https://www.googleapis.com/auth/admin.directory.device.chromeos.readonly
    • Google Analytics: https://www.googleapis.com/auth/analytics https://www.googleapis.com/auth/analytics.edit https://www.googleapis.com/auth/analytics.manage.users https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile
    • Google Cloud PubSub: https://www.googleapis.com/auth/pubsub https://www.googleapis.com/auth/cloud-platform
    • Google Analytics Reporting: https://www.googleapis.com/auth/analytics https://www.googleapis.com/auth/analytics.edit https://www.googleapis.com/auth/analytics.manage.users https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile
    • Google BigQuery: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigquery.readonly https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/devstorage.full_control https://www.googleapis.com/auth/devstorage.read_only https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile
    • Google Translator: https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/cloud-translation https://www.googleapis.com/auth/cloud-platform
    • Google Cloud Storage: https://www.googleapis.com/auth/devstorage.full_control https://www.googleapis.com/auth/devstorage.read_only https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile

    How to Use Account Store, Flow Store, and Memory Store

    IBM webMethods Integration allows you to store key-value pairs and fetch them when required during a workflow execution. IBM webMethods Integration provides this functionality with three data store actions, namely Account Store, Flow Store, and Memory Store. Though all three data store actions seem to perform the same task, there are certain key differences in their features that make them ideal for different scenarios. Let us now see exactly what each of these actions does and understand how to use them.

    How to Use Account Store

    The Account Store action lets you store one or more key-value pairs with a global scope and retrieve them during any workflow execution. This means that each key-value pair you store using the Account Store action can be retrieved while working with any workflow of any project in your tenant. Let us understand this with the help of an example.

    Let’s say we have two different projects: Default and Recipes. In Default, we will create a workflow to set a key-value pair. Next, we will retrieve the value of this key using a workflow created in the Recipes project.

    Set Key-Value Pairs with Account Store

    1. Add action to canvas

      Add Account Store action on canvas and connect it to the Start and Stop icons:

    2. Configure the Account Store action

      Double-click the Account Store action icon, optionally provide a suitable name for this action, and then click Next. This will take you to the Account Store action configuration screen. Configure this action as per the instructions given below:

      • Account Store: As we want to set a key-value pair, select Set from the drop-down menu.
      • Key (mandatory): Provide a suitable name for the key you want to set.
      • Value: Enter the value you want to assign to the specified key.
      • You can set as many key-value pairs as you want by clicking the + button. Once you have entered all the required details, click Next, optionally Test the action (click here to know how), and then click Done to return to the canvas.

        Note
        If you add a new key with a name that already exists but with a different value, IBM webMethods Integration overwrites the previously stored value of the key. If you want to delete the value of an existing key, keep the corresponding value field empty. Once you do that, save and run the workflow to delete the data in the value field permanently.

        With this, you have successfully created and stored the key-value pairs. Once you add the key-value pairs, you can fetch them anytime in the future. Let’s now understand how to retrieve the values of the stored keys.

    Retrieve Values of Stored Keys from Account Store

    We will now understand how to retrieve the value of stored keys with Account Store action. For this, we will create a workflow in the project named Recipes with Account Store action to retrieve the value of a stored key and then post the details on a Slack channel.

    1. Add actions/connectors to canvas:

      Add Account Store action and Slack connector to canvas and connect them.

      Now that the actions are added to the canvas, the next step is to configure these actions.

    2. Configure Account Store action

      Double-click the Account Store action icon, optionally provide a suitable name for this action, and then click Next. This will take you to the Account Store action configuration screen. Configure this action as per the following instructions:

      • Account Store: As we want to get the value for a stored key, select Get from the drop-down list.

      • Key (mandatory): Provide the name of the stored key whose value you want to retrieve. You can fetch the values of multiple keys by clicking the + button.

      Once you have entered all the required details, click Next, optionally Test the action (click here to know how), and then click Done to return to the canvas.

      Next, configure the Slack - Post New Message action as you would normally do to post what we retrieved earlier on the specified Slack channel.

      Once this is done, Save the workflow. Now when you Run the workflow, IBM webMethods Integration will automatically fetch the value of the key and post it on the specified Slack channel.

    Delete the stored Key-Value pair from Account Store

    IBM webMethods Integration allows you to delete stored key-value pairs from the Account Store action.

    Note
    As the Account Store action stores key-value pairs with a global scope, you can delete the stored key-value pairs from any workflow of any project in your tenant.

    If you want to delete a stored key-value pair from Account store, select Delete from the drop-down list, specify the name of the key you want to delete, and then click Next.

    Doing this will successfully delete the key and its associated value from the Account Store action.

    How to Use Flow Store

    The Flow Store action allows you to store one or more key-value pairs and retrieve them when required from within the same workflow of the project. This means the keys stored inside a workflow using Flow Store can be retrieved on all subsequent executions of that particular workflow, but they cannot be retrieved outside of that workflow.

    Let’s understand how to set key-value pairs and retrieve their values using the Flow Store action with the help of an example.

    Set Key-Value Pairs with Flow Store

    1. Add action to canvas

      Add Flow Store action on the canvas and connect to the Start and Stop icons:

      Now that the action is added to the canvas, the next step is to configure the Flow Store action.

    2. Configure the Flow Store action

      Double-click the Flow Store action icon, optionally provide a suitable name for this action, and then click Next. This will take you to the Flow Store action configuration screen. Configure this action as per the following instructions:

      • Flow Store: As we want to set a key-value pair, select Set from the drop-down list.
      • Key (mandatory): Provide a suitable name for the key you want to set.
      • Value: Enter the value you want to assign to the specified key. You can set as many key-value pairs as you want by clicking the + button.
        Note
        If you add a new key with a name that already exists but with a different value, IBM webMethods Integration will overwrite the previously stored value of the key.

        Once you have entered all the required details, click Next, optionally Test the action (click here to know how), and then click Done to return to the canvas.

        With this, you have successfully created and stored the key-value pair(s). Once you have created and stored the key-value pairs, you can retrieve them from the same workflow. Let’s now understand how to retrieve the values of the stored keys.

    Retrieve Values of Stored Keys with Flow Store

    To retrieve the values of the stored keys, we will have to add the Flow Store action in the same workflow where we stored them, and then post the details on a Slack channel.

    1. Add actions/connectors to the canvas. Add Flow Store and Slack action on the canvas and connect to the Start and Stop icons:

      Now that the actions are added, the next step is to configure these actions.

    2. Configure the Flow Store action

      Double-click the Flow store action icon, optionally provide a suitable name for this action, and then click Next. This will take you to the Flow Store action configuration screen. Configure this action as per the following instructions:

      • Flow Store: As we want to get the value for a stored key, select Get from the drop-down list.

      • Key (mandatory): Provide the name of the stored key whose value you want to retrieve. You can fetch the values of multiple stored keys by clicking the + button.

      Once you have entered all the required details, click Next, optionally Test the action (click here to know how), and then click Done to return to the canvas.

      Next, configure the Slack - Post New Message action as you would normally do to post the key value details we retrieved earlier on the specified Slack channel.

      Once this is done, Save the workflow. Now when you Run the workflow, IBM webMethods Integration will automatically fetch the value of the key and post it on the specified Slack channel.

      Note
      If you store a value for a particular key in a workflow within a project and try to retrieve its value in another workflow within the same project, the workflow will not run successfully.

    Delete the stored Key-Value pair from Flow Store

    IBM webMethods Integration allows you to delete stored key-value pairs from the Flow Store action.

    Note
    As the Flow Store action lets you store one or more key-value pairs and retrieve from within the same workflow of the project, you can only delete the stored key-value pairs from that same workflow of that particular project.

    If you want to delete a stored key-value pair from the Flow store, select Delete from the drop-down list, specify the name of the key you want to delete, and then click Next.

    Doing this will successfully delete the key and its associated value from the Flow Store action.

    How to Use Memory Store

    The Memory Store action lets you store and retrieve one or more key-value pairs whose scope is limited to a single workflow run. This means that the keys stored inside a workflow using Memory Store remain active for only one execution. On a subsequent execution of the same workflow, you cannot retrieve the keys stored in the previous execution. So, if you want to set key-value pairs and get the values of the stored keys in the same workflow, you will have to add two Memory Store actions to the same workflow.

    Note
    While Flow Store and Memory Store both allow you to set and retrieve keys from within the same workflow only, Flow Store lets you retrieve the keys on all subsequent executions of that workflow, whereas Memory Store lets you retrieve the keys only within the execution in which they were stored.

    Let’s now understand how to set key-value pairs and retrieve the values using the Memory Store action with the help of an example. Let’s say we have a workflow that sets a key-value pair, gets the values for the stored key, and posts the value details on the specified Slack channel.

    Set Key-Value Pairs and Retrieve Values of Stored Keys with Memory Store

    1. Add actions/connectors to canvas

      Add two Memory Store actions, one for setting key-value pairs and another one for retrieving values of stored keys to the canvas, along with the Slack connector. Connect these actions and the connector to the Start and Stop icons:

    2. Configure Memory Store action for setting Key-Value Pairs

      Double-click the first Memory store action icon to set key-value pairs. Optionally provide a suitable name for this action, and then click Next. This will take you to the Memory Store action configuration screen. Configure this action as per the following instructions:

      • Memory Store: As we want to set a key-value pair, select Set from the drop-down list.

      • Key (mandatory): Provide a suitable name for the key you want to set.

      • Value: Enter the value you want to assign to the specified key. You can set as many key-value pairs as you want by clicking the + button.

      After you have entered all the required details, click Next, optionally Test the action (click here to know how), and then click Done to return to the canvas.

      With this, you have successfully created and stored the key-value pairs.

    3. Configure the Memory Store action for retrieving values for stored keys

      Double-click the second Memory Store action icon to get value details of the stored key. Optionally provide a suitable name for this action, and then click Next. This will take you to the Memory Store action configuration screen. Configure this action as per the following instructions:

      • Memory Store: As we want to get the value for a stored key, select Get from the drop-down list.

      • Key (mandatory): Provide the name of the stored key whose value you want to retrieve. You can retrieve the values of multiple stored keys by clicking the + button.

      Once this is done, click Next, optionally Test the action (click here to know how), and then click Done to return to the canvas.

      Next, configure the Slack - Post New Message action as you would normally do, to post the key value details we retrieved earlier on the specified Slack channel.

      Once this is done, Save the workflow. Now, when you Run the workflow, you will get the value details on the specified Slack channel. If you try to execute this workflow again, you will not get the desired result, as each Memory Store action is limited to one workflow run only.

    Delete the stored Key-Value pair from Memory Store

    IBM webMethods Integration allows you to delete stored key-value pairs from the Memory Store action.

    Note
    As the Memory Store action is limited to a single workflow run, the keys stored inside a workflow remain active for only one execution. So if you want to delete a key-value pair, you will have to add two Memory Store actions to the same workflow in the canvas, one for setting key-value pairs and another for deleting them.

    To configure the second Memory Store action, select Delete from the drop-down list, specify the name of the key you want to delete, and then click Next.

    Doing this will successfully delete the key and its associated value from the Memory Store action.

    How to use the Database Application

    Let’s understand how to use the Database Application with the help of an example.

    Let’s say you have a Flow service on your cloud container environment named DBTest_FS(flow_service) inside the project DBTest. You want to invoke this Flow service and execute it.

    Note: The Flow service DBTest_FS(flow_service) used in the example is a simple Flow service that selects a record from the customer database.

    To invoke and execute DBTest_FS(flow_service), we will have to use the Database application. To do so, follow the steps given below:

    1. Start creating a Flow service. Select Database from the dropdown list.

    2. Select Add Custom Operation in the dropdown list next to Database.

      This will redirect you to the Connect to account configuration screen.

      Note: You can alternatively select an existing action (if any) from the Select Action drop-down list.

    3. Connect to a Database account.

      Select an existing Database account (if any) from the drop-down list, provide a suitable name and description for the action you want to create, and click Next. Alternatively, select the ’+’ button to add a new Database account.

      In the ‘Add Account’ configuration screen, provide the following details:

      • Name - Provide a suitable name for the account you want to add.
      • Description - Provide a suitable description for the account you want to add.
      • Database - Select the supported database to connect to from the dropdown list.
      • Driver Group - Select the existing driver group from the drop-down list or select the ’+’ button to add a new driver group.

        This will take you to the ‘Add Driver’ configuration screen where you will need to provide the following details:

        1. Database - Select the supported database to connect to from the dropdown list.
        2. Driver Group - Provide a name to the driver group.
        3. Browse - Select the JDBC driver jar to upload.

        Save the details.

      • Transaction Type - Select the transaction type from the drop-down list.

      • DataSource Class - Select the datasource class from the drop-down list.

      • Server Name - Enter the server that hosts the database. For example, sample.adapter.db.com.

      • User Name - Enter the user name associated with the account on the database server.

      • Password - Enter the password for the specified user name.

      • Database Name - Enter the name of the database to which the connection connects.

      • Port Number - Enter the port number.

      • Truststore Alias - Select the alias name of the IBM webMethods Integration truststore configuration.

      • Network Protocol - Enter the network protocol that the connection uses when connecting to the database.

      • Property Name - Enter the property name. You can select the property from the drop-down list or use the ’+’ button to add a new property.

      • Property Value - Enter the property value.

      Click Add.

      This will take you to the Connect to account configuration screen of ‘Add Custom Action’ wizard.

      Note: Now, if you click on the Connect to Database drop-down list, you will see the added account in the list. This account can now be used to execute any Database custom action created under the same project.

      • Select an account from the Database drop-down list, provide a suitable name and description for the action you want to create.
      • Click Next.

    4. Select the action.

      Click Next.

    5. Select the Tables.

      Here, you will see all the available tables, views or synonyms. Select the table.

      Here, in our example, we have added the table CUSTOMER from the existing tables.

      Click Next.

    6. Select the Joins.

      Here, you will see all the selected tables, views or synonyms. Add a join.

      Here, in our example, we have not added any joins.

      Click Next.

    7. Select the Data Fields.

      Here, you will see the selected data fields. Create or add new data fields.

      Here, in our example, we have added the data fields in the table.

      You can also change the Output Field Type and Sort Order for a data field. Here, in our example, we have changed the Output Field Type for the data fields in the table.

      Click Next.

    8. Select the Condition.

      Here, you will see all the conditions created. Add single or multiple conditions.

      Here, in our example, we have added a condition on the data field PERSONID (a code sketch of the resulting query follows this step).

      Click Next.
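
      Taken together, steps 5 through 8 configure something close to a parameterized SELECT statement. The following Python sketch uses sqlite3 purely as a stand-in for the configured account; the columns and values are illustrative, not taken from the tutorial:

        import sqlite3

        conn = sqlite3.connect(":memory:")   # stand-in for the Database account
        cur = conn.cursor()
        cur.execute("CREATE TABLE CUSTOMER (PERSONID INTEGER, NAME TEXT)")
        cur.execute("INSERT INTO CUSTOMER VALUES (91, 'Jane Doe')")

        # Table: CUSTOMER; data fields: PERSONID, NAME; condition: PERSONID = ?
        cur.execute("SELECT PERSONID, NAME FROM CUSTOMER WHERE PERSONID = ?", (91,))
        print(cur.fetchone())                # -> (91, 'Jane Doe')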

    9. Select the Parameters.

      Edit Execution Parameters.

      Click Next.

    10. Verify the summary.

      Click Done.

    11. The Flow service window appears. Click the (Run) icon to run the Flow service.

      You will see the output data for the configured custom action.

    How to use the Database Application for Custom SQL

    Let’s understand how to use the Database Application for Custom SQL with the help of an example.

    Let’s say you have a Flow service on your environment named DBTest_FS(Flow service) inside the project DBTest. You want to invoke this Flow service and run it.

    Note: The Flow service DBTest_FS(flow_service) used in the example is a simple Flow service that uses the Custom SQL action from the database application.

    To invoke and run DBTest_FS(Flow service), create an account to connect to the Database. To do so, do the following:

    1. Start creating a Flow service. Select Database from the drop-down list.

    2. Select Add Custom Operation in the dropdown list next to Database.

      This will redirect you to the Connect to account configuration screen.

      Note: You can alternatively select an existing action (if any) from the Select Action drop-down list.

    3. Connect to a Database account.

      Select an existing Database account (if any) from the drop-down list, provide a suitable name and description for the action you want to create, and click Next. Alternatively, select the ’+’ button to add a new Database account.

      In the ‘Add Account’ configuration screen, provide the following details:

      • Name - Provide a suitable name for the account you want to add.
      • Description - Provide a suitable description for the account you want to add.
      • Database - Select the supported database to connect to from the drop-down list.
      • Driver Group - Select the existing driver group from the drop-down list or select the ’+’ button to add a new driver group.

        In the ‘Add Driver’ configuration screen, provide the following details:

        1. Database - Select the supported database to connect to from the drop-down list.
        2. Driver Group - Provide a name for the driver group.
        3. Browse - Select the JDBC driver jar to upload.

        Save the details.

      • Transaction Type - Select the transaction type from the drop-down list.

      • DataSource Class - Select the datasource class from the drop-down list.

      • Server Name - Enter the server that hosts the database. For example, sample.adapter.db.com.

      • User Name - Enter the user name associated with the account on the database server.

      • Password - Enter the password for the specified user name.

      • Database Name - Enter the name of the database to which the connection connects.

      • Port Number - Enter the port number.

      • Truststore Alias - Select the alias name of the IBM webMethods Integration truststore configuration.

      • Network Protocol - Enter the network protocol that the connection uses when connecting to the database.

      • Property Name - Enter the property name. You can select the property from the drop-down list or use the ’+’ button to add a new property.

      • Property Value - Enter the property value.

      Click Add.

      This will take you to the Connect to account configuration screen of ‘Add Custom Action’ wizard.

      Note: Now, if you click on the Connect to Database drop-down list, you will see the added account in the list. This account can now be used to execute any Database custom action created under the same project.

      • Select an account from the Database drop-down list, provide a suitable name and description for the action you want to create.
      • Click Next.

    4. Select Custom SQL action.

      Click Next.

    5. Type the Custom SQL query. You can pass values at runtime using “?” as the value placeholder for each column in the SQL statement.

      Here, in our example, we have given the Custom SQL query for the table EMP_A where ID = ?
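
      As a rough analogue of what the “?” placeholder does, here is a minimal Python DB-API sketch; the table contents are illustrative, and the input value 1 matches the run in step 10:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        cur.execute("CREATE TABLE EMP_A (ID INTEGER, NAME TEXT)")
        cur.execute("INSERT INTO EMP_A VALUES (1, 'Sam')")

        # "?" is a bind parameter; its value (here 1) is supplied at run time.
        cur.execute("SELECT * FROM EMP_A WHERE ID = ?", (1,))
        print(cur.fetchall())   # -> [(1, 'Sam')]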

      Click Next.

    6. You will see the input details of the SQL.

      Here, in our example, we have provided the input field ID.

      Click Next.

      Note: If inputs are not auto-populated for the SQL query, you can add them explicitly using the Add Input button.

    7. You will see the output fields as per the entered SQL.

      Here, in our example, we have selected all the columns of the table EMP_A as output.

      Click Next.

      Note: If outputs are not auto-populated for the SQL query, you can add them explicitly using the Add Output button.

    8. Edit Execution Parameters.

      Click Next.

      Note: This is an optional configuration.

    9. Verify the summary.

      Click Done.

      Note: To see the signature, click Show Input/Output.

    10. The Flow service window appears. Click the (Run) icon to run the Flow service.

      Here, in our example, we have given 1 as the input value for the input field ID.

      You will see the output data for the configured custom action.

    How to use the Database Application for Dynamic SQL

    Let’s understand how to use the Database Application for Dynamic SQL with the help of an example.

    Let’s say you have a Flow service on your environment named DBTest_FS(Flow service) inside the project DBTest. You want to invoke this Flow service and run it on IBM webMethods Integration.

    Note: The Flow service DBTest_FS(Flow service) used in the example is a simple Flow service that uses the Dynamic SQL action from the database application.

    To invoke and run DBTest_FS(Flow service), create an account to connect to the Database. To do so, do the following:

    1. Start creating a Flow service. Select Database from the drop-down list.

    2. Select Add Custom Operation in the drop-down list next to Database.

      This will redirect you to the Connect to account configuration screen.

      Note: You can alternatively select an existing action (if any) from the Select Action drop-down list.

    3. Connect to a Database account.

      Select an existing Database account (if any) from the drop-down list, provide a suitable name and description for the action you want to create, and click Next. Alternatively, select the ’+’ button to add a new Database account.

      In the ‘Add Account’ configuration screen, provide the following details:

      • Name - Provide a suitable name for the account you want to add.
      • Description - Provide a suitable description for the account you want to add.
      • Database - Select the supported database to connect to from the dropdown list.
      • Driver Group - Select the existing driver group from the drop-down list or select the ’+’ button to add a new driver group.

        In the ‘Add Driver’ configuration screen, provide the following details:

        1. Database - Select the supported database to connect to from the drop-down list.
        2. Driver Group - Provide a name for the driver group.
        3. Browse - Select the JDBC driver jar to upload.

        Save the details.

      • Transaction Type - Select the transaction type from the drop-down list.

      • DataSource Class - Select the datasource class from the drop-down list.

      • Server Name - Enter the server that hosts the database. For example, sample.adapter.db.com.

      • User Name - Enter the user name associated with the account on the database server.

      • Password - Enter the password for the specified user name.

      • Database Name - Enter the name of the database to which the connection connects.

      • Port Number - Enter the port number.

      • Truststore Alias - Select the alias name of the IBM webMethods Integration truststore configuration.

      • Network Protocol - Enter the network protocol that the connection uses when connecting to the database.

      • Property Name - Enter the property name. You can select the property from the drop-down list or use the ’+’ button to add a new property.

      • Property Value - Enter the property value.

      Click Add.

      This will take you to the Connect to account configuration screen of ‘Add Custom Action’ wizard.

      Note: Now, if you click on the Connect to Database drop-down list, you will see the added account in the list. This account can now be used to execute any Database custom action created under the same project.

      • Select an account from the Database drop-down list, provide a suitable name and description for the action you want to create.
      • Click Next.

    4. Select the Dynamic SQL action.

      Click Next.

    5. Type the Dynamic SQL query statement, part of which you set at run time using input fields.

      Here, in our example, we have given the Dynamic SQL query for the table EMP_A; at run time, the ‘where’ input field is set to where ID = 2.
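
      In contrast to the bound “?” parameters of Custom SQL, Dynamic SQL substitutes part of the statement text itself at run time. A minimal Python sketch, with illustrative table contents:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        cur.execute("CREATE TABLE EMP_A (ID INTEGER, NAME TEXT)")
        cur.executemany("INSERT INTO EMP_A VALUES (?, ?)", [(1, "Sam"), (2, "Ada")])

        where = "where ID = 2"                       # the 'where' input field at run time
        cur.execute(f"SELECT * FROM EMP_A {where}")  # statement text assembled at run time
        print(cur.fetchall())                        # -> [(2, 'Ada')]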

      Click Next.

    6. Input fields are added at run time. If the SQL query contains the “?” placeholder to receive input at runtime, you must add the input fields explicitly.

      Click Next.

    7. You will see the output fields as per the entered SQL when the Flow service is run.

      Click Next.

      Note: If the SQL statement produces output and you want to add alias names for the output fields, click Add Output to add the output explicitly.

    8. Edit Execution Parameters.

      Click Next.

      Note: This is an optional configuration.

    9. Verify the summary.

      Click Done.

      Note: To see the signature, click Show Input/Output.

    10. The Flow service window appears. Give the input value and click the (Run) icon to run the Flow service.

      Here, in our example, we have given the value for the input field as where ID = 2.

      You will see the output data for the configured custom action.

    How to Create and Run a Deploy Anywhere Flow Service?

    Note
    To try this use case, you need the deploy anywhere flow services capability which is not available by default. To enable the capability, contact your Account Executive.

    Summary

    In this tutorial you will learn how to create a deploy anywhere flow service. Let’s use the GetCurrentDate service to return the current date and time in a specific format, which can be used in a variety of integration scenarios.

    Before you begin

    1. Access to IBM webMethods Integration Tenant: Get your credentials and log in to IBM webMethods Integration tenant with Develop Anywhere, Deploy Anywhere capability.

    2. Register an Edge Runtime: You need to register an edge runtime to run your integrations that you plan to use for this use case. For more information, see Registering Edge Runtimes.

    3. Create an Integration Project: You need to create a project in Develop Anywhere Deploy Anywhere feature to store the assets created in this use case. However, you can also use the default cloud Runtime project if you do not want to create a new project.

    For this tutorial, let’s assume that you are using an edge runtime named ER_GetCurrentDate and an integration project named ProjectCurrentDate. For your convenience, you can also use your current edge runtime and project, instead of creating new ones.

    Once you have completed all the tasks mentioned, you are ready to proceed with creating the GetCurrentDate service.

    Basic Flow

    1. Go to the integration project you’ve set up for this specific use case, then access the integrations and click Flow services.

    2. Click the (plus icon).

    3. Select Deploy Anywhere Flow Service and click Create.

      The Flow Editor page appears.

    4. Provide a name, for example, FlowAnyGetCurrentDate, and an optional description for the new service.

    5. Select the edge runtime.

      A confirmation message appears.

    6. On the Flow Editor page, click the rectangular box to add a flow step as follows:

    7. Select getCurrentDateString operation in the flow step.

    8. Click View/Edit pipeline.

      The Pipeline mapping screen appears.

    9. Double-click each parameter under the Input column to define the values as shown in the following illustrations:

      a. pattern: Represents the format in which you want the date returned.

      Click Save.

      b. timezone: (optional) Specify a time zone code as shown in Time Zones (for example, EST for Eastern Standard Time).

      Click Save.

      c. locale: (optional) Locale in which the date is to be expressed. For example, if the locale is en (for English), the pattern EEE d MMM yyyy will produce Friday 23 August 2002, and the locale fr (for French) will produce vendredi 23 août 2002.

      Click Save.
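
      For orientation, pattern and locale behave much like the date-formatting facilities of most languages. A loose Python analogue follows; the Java-style pattern maps only approximately to strftime codes, and locale names vary by operating system:

        import locale
        from datetime import date

        d = date(2002, 8, 23)
        locale.setlocale(locale.LC_TIME, "en_US.UTF-8")  # locale availability varies by system
        print(d.strftime("%A %d %B %Y"))                 # Friday 23 August 2002
        locale.setlocale(locale.LC_TIME, "fr_FR.UTF-8")
        print(d.strftime("%A %d %B %Y"))                 # vendredi 23 août 2002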

    10. Click (Save).

    11. Click (Sync).

    12. Click (Run) to run the service.

      You can view the test results.

    Making HTTP Requests in Integration Projects

    Note
    To try this use case, you need the deploy anywhere flow services capability, which is not available by default. To enable the capability, contact your Account Executive.

    Summary

    In this tutorial, you will learn how to make HTTP client requests to an external HTTP endpoint from within an Integration project. Enabling the HTTP client service allows you to create more powerful, data-driven, and automated flow services and workflows that can interact with external services and APIs, offering greater flexibility and functionality in your design process.

    Before you begin

    Complete the following tasks before beginning the basic flow.

    Basic Flow

    1. Go to Projects > Integrations > Flow services and then click the (Plus) icon.

    2. Select Deploy Anywhere Flow Service.

    3. Click Create.

    4. Provide a name, for example, CheckWeatheratmylocation, and an optional description for the new deploy anywhere flow service.

    5. Select an edge runtime, for example, ER_Test_1.

    6. Select the Client function.

    7. Select the http service from the list and click View/Edit Pipeline.

      The pipeline panel displays the HTTP Client service data variables.

    8. Define the following Input fields:

      • url: URL of the resource that you want to access.

      • method: Select Get.
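
      Under the hood, this step performs an ordinary HTTP GET. A minimal Python sketch, with a hypothetical weather URL standing in for your endpoint:

        import urllib.request

        url = "https://example.com/weather?location=mylocation"  # hypothetical endpoint
        with urllib.request.urlopen(url) as resp:                # method: GET
            print(resp.status, resp.read()[:200])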

    9. Save the deploy anywhere flow service and click (Sync).

    10. Click Run to validate the data in the output view.

    How to use packages developed with Service Designer in an Integration project

    Note
    • To try this use case, you need the deploy anywhere flow services capability which is not available by default. To enable the capability, contact your Account Executive.

    • For downloading webMethods Service Designer, see webMethods Service Designer.
      New users starting with version 11.0.7 must contact their Account Executive to obtain the webMethods Service Designer executable files.

    Summary

    If you have webMethods packages containing services and integrations that you have developed using Service Designer, you can deploy and use them in an integration project through version control; namely by creating a GitHub repository for each package. Let’s illustrate this procedure with an example.

    Before you begin

    Complete the following tasks before beginning the basic flow.

    Basic Flow

    Utilizing the packages generated in Service Designer requires the following steps:

    1. Create a package in Service Designer

    2. Create a flow service in Service Designer

    3. Create a Local Service Development Project

    4. Configure Git Repository in Designer

    5. Import the webMethods Package into an Integration Project

    6. Validating the imported services

    Create a package in Service Designer

    Note
    For more information about how to create packages in Service Designer, see the Service Designer documentation.
    1. Click File > New > Package.

    2. In the New Integration Server Package dialog box, select the Integration Server on which you want to create the package.

    3. In the Name field, type the name for the new package using any combination of letters, numbers, and the underscore character.

    4. Click Finish. The Service Designer refreshes the Package Navigator view and displays the new package.

    Create a flow service in Service Designer

    Note
    For more information about how to create a flow service in Service Designer, see the Service Designer documentation.
    1. Click File > New > Flow Service.

    2. In the New Flow Service dialog box, select the folder in which you want to save the flow service.

    3. In the Element name field, type a name for the flow service using any combination of letters, numbers, and/or the underscore character.

    4. If you have a template you want to use to initialize a default set of properties for the service, select the template from the Choose template list.

    5. Click Next.

    6. On the Select the Source Type panel, select Empty Flow.

    7. Click Finish to create the empty flow service.

    Create a Local Service Development Project

    1. In the Package Navigator view, right-click the package for which you want to create a local service development project and select Create Local Service Development Project.

      The Service Designer creates the local service development project and displays the Share Project wizard.

    2. In the Share Project wizard, select the Use or Create repository in parent folder of project checkbox.

    3. Click Create Repository.

    4. Click Finish.

      In the Package Navigator and Package Explorer, you can see the repository link created for the project.

    5. In Git Staging, click + to add the unstaged changes to staged changes.

    6. Enter a commit message.

    7. Click Commit.

    Configure Git Repository in Service Designer

    1. Create an empty Git repository.

    2. In Service Designer, go to Git Repositories, expand the local service development project, and select Remotes.

    3. Right-click Remotes and select Create Remote.

    4. In the remote page that appears, enter the name, and select the action for configuration.

    5. Click Create.

    6. In the Configure Push page that appears, click Change to add the URI.

    7. In the Select a URI window that appears, enter the location and authentication details.

    8. Click Finish. The URI appears in the Configure Push page.

    9. Click Advanced to add the specification for push (the action).

    10. Select the source and destination reference, and click Add Spec.

    11. Click Finish.

    12. Click Save and Push.

    13. In the Push Results window that appears, click Close. You can see the configured remote repository in the Git Repositories tab.

    14. Right-click the local commit and select Push to Origin. All the changes are updated in the created Git repository.

    Import the webMethods Package into an Integration Project

    1. Go to project folder Acme that you have already created.

    2. Click Packages.

    3. Click Add package.

      The Add Package page appears.

    4. Click Git.

    5. Provide the URL of the package repository and select an account.

    6. Click Next.

    7. Select a particular branch or tag of the repository to use as the source for the package in the Git branch/Tag field.

    8. Click Pull.

      A confirmation message appears after a successful import of a package.

    Validating the imported services

    1. Go to the project folder Acme.

    2. Select Flow services and click the (Plus) icon.

    3. Create a Flow service. For example, ValidateOrderRequest.

    4. Select an edge runtime.

    5. Select Package services and select acme.

    6. Select an operation, for example, InspectLineItems.

    7. To define your data type, click the corresponding icon.

    8. Define the Input data type. For example, OrderRequest.

    9. Define the Output data type. For example, IsValid.

    10. Save the service and click (Sync).

    11. Click Run to validate the data.

      You will see that the purchase order request is valid.
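
    For orientation, the imported operation behaves like a simple validation function. The following Python sketch is purely hypothetical; the real InspectLineItems logic lives in the acme package built in Service Designer:

      # Hypothetical stand-in for the packaged InspectLineItems service.
      def inspect_line_items(order_request):
          """Return True when every line item has a positive quantity."""
          items = order_request.get("lineItems", [])
          return bool(items) and all(item.get("quantity", 0) > 0 for item in items)

      print(inspect_line_items({"lineItems": [{"quantity": 2}]}))  # -> True (IsValid)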

    Reusing webMethods Connector Asset Packages in an Integration Project

    Summary

    This tutorial explains how to reuse webMethods packages containing adapter assets, such as connections and adapter services, developed using Service Designer, in IBM webMethods Integration using the packages capability.

    Note
    To try this use case, the Develop Anywhere, Deploy Anywhere and Central Control, Distributed Execution capabilities must be enabled for your tenant. Contact IBM support to enable the capability.

    Assume that the AcmeConnectors package exists with the following details:

    Let’s import the package to IBM webMethods Integration and deploy the services used in the package across runtimes.

    Before you begin

    Complete the following tasks before beginning the basic flow:

    Basic Flow

    Using the packages created in Service Designer requires the following steps:

    1. Import the webMethods package with connectors into an Integration Project.

    2. Add driver libraries to establish connections with the backend.

    3. Verify the imported assets to ensure that the assets are accessible in IBM webMethods Integration.

    4. Run the imported service in cloud and edge runtimes.

    Import the webMethods package with connectors into an Integration Project

    1. Go to the AcmeConnectors project.

    2. Click Packages.

    3. Click Add package.

      The Add Package page appears.

    4. Click Git.

    5. Provide the URL of the package repository and select an account.

    6. Click Next.

    7. Select a particular branch or tag of the repository to use as the source for the package in the Git branch/Tag field.

    8. Click Pull.

      A confirmation message appears, and the AcmeConnectors package is imported and listed under AcmeConnectorsProject, which is the main project package.

    Add driver libraries

    1. Go to the AcmeConnectors project.

    2. Click Packages. The packages, including the project package (AcmeConnectorsProject) and the imported package (AcmeConnectors), appear.

    3. Click the project package (AcmeConnectorsProject). The package information page appears.

    4. Click Libraries > Add Library.

    5. In the Add library page, perform the following:

      a. Click the (Browse) icon to select the library to upload.

      b. Always select Server as the Classpath type. For more information about Classpath type, see Package Libraries.

      The library added is listed.

      Note
      Restart the cloud runtime if you have imported the SAP® ERP, Kafka, and MQ assets.

    Verify the imported assets

    1. Go to the AcmeConnectors project.

    2. Click Packages. The packages, including the project package (AcmeConnectorsProject) and the imported package (AcmeConnectors), appear.

    3. Go to the imported package AcmeConnectors. The package information page appears.

    4. Click Assets. The imported assets, including accounts and services, appear.

    5. Go to Packages and click the imported package, for example, AcmeConnectors. The Integrations > Packages page appears.

    6. Click Connectors > Deploy Anywhere.

    7. Expand the dropdown for the connector to view the accounts available.

      Note
      Restart cloud runtime or contact the runtime administrator if the connector assets are not visible.

    Run the imported service in runtimes

    1. Go to the AcmeConnectors project.

    2. Click Integrations > Flow services.

    3. Create a flow service. For example, ValidateServiceCRT.

    4. Click the step and select PACKAGE SERVICES from the dropdown.

    5. Do the following:

      • Select the imported package. For example, AcmeConnectors.
      • Select the service to run. For example, getAllEUCustomers.
      • Select the runtime. For example, Cloud Runtime or edge runtimes.
      • Sync if you want to run the service on an edge runtime.

      Cloud runtime service sample:

      Edge runtime service sample:

    6. Save the flow service.

    7. Go to the project AcmeConnectors.

    8. Click Integration Runtimes > <Select your runtime> > Connections to view accounts on the runtime selected.

      Cloud runtime accounts sample:

      Edge runtime accounts sample:

    9. Select the account to enable. For example, connEU.

    10. Click the (Edit) icon to edit the selected connection.

      • Update the password.
      • Click Save connection.

    11. Toggle the Disabled icon to enable the account.

      Go to the Integration Runtimes page.

    12. Click Projects > AcmeConnectors > Integrations > Flow services > ValidateServiceCRT.

    13. Sync if you want to run the service on an edge runtime.

    14. Click the (Run) icon.

      Sample of the adapter service run on a cloud runtime:

      Sample of the adapter service run on an edge runtime:

    How to reuse services between Integration projects using webMethods Packages

    Note
    To try this use case, you need the deploy anywhere flow services capability which is not available by default. To enable the capability, contact your Account Executive.

    Summary

    In this tutorial, you will learn how to share and collaborate with other developers and build modular integrations based on reusable services and other assets with webMethods packages. You can also take advantage of any packages that you may have written for older versions of our on-premise platform, that is, 10.15 or older. This allows you to modernize your existing webMethods platform towards a microservice- and cloud-friendly architecture without having to rewrite mapping logic or other critical business logic.

    Before you begin

    Before you begin with this use case, ensure that you complete the following tasks:

    Once you have completed all the tasks mentioned, you are ready to proceed with committing the packages to Git.

    Basic Flow

    Utilizing the webMethods packages requires the following steps:

    1. Committing the Packages to Git

    2. Linking a Package from Git to a Project

    3. Using an edge runtime to Run Services

    Committing the Packages to Git

    Use the following steps to add the packages to the Git repository, but make sure that you are at the root level of your package before running the commands.

    1. Open the Windows command prompt.

    2. Go to the packages folder, for example,

      $ cd <SAG_HOME>/IntegrationServer/instances/default/packages/TempPkgSD.git

    3. Use the following commands in the sequence mentioned to commit the packages to Git repo:

      a. $ git init

      b. $ git add .

      c. $ git commit -m "commit message"

      d. $ git branch -M main

      e. $ git remote add origin https://repositoryname/username/TempPkgSD.git
      Replace repositoryname and username with the actual Git repository name and GitHub username.

      f. $ git push -u origin main

      After completing these steps, you have successfully pushed the content to your GitHub repository, similar to the following example:

    Linking a Package from Git to a Project

    1. Select the TempProject and go to Packages.

      A default associated package is generated.

    2. Click Add package.

      The Add package page appears.

    3. Select Git.

    4. In the Git URL field, add the package repository URL and select a server.

    5. Click Next.

    6. Add Git branch/tag.

      Note
      The main branch is shown by default; however, you have the flexibility to select any other branch or tag as needed.
    7. Click Pull. The Package added successfully message appears.

      After completing these steps, you have successfully added the package in Git to the Project folder.

      The package information view must look similar to the following example:

    Using an Edge Runtime to Run Services

    1. Go to Projects and click TempProject.

    2. Click Flow services.

    3. Provide a name, for example, Project_Service_Flow, and an optional description for the deploy anywhere flow service.

    4. Select the edge runtime Test_EdgeRuntime.

    5. Click All and select Package Services.

      The deploy anywhere flow services available for that specific package appear.

    6. Select the TempPkgSD package services.

    7. Select a service from the package.

    8. Save the deploy anywhere flow service and click (Sync).

    9. Click (Run) to run the service.

      You can view the test results.

    How to use Database Applications in deploy anywhere flow services and workflows

    Note
    To try this use case, you need the deploy anywhere flow services capability which is not available by default. To enable the capability, contact your Account Executive.

    Summary

    An online consumer goods retailer wants to provide discounts to customers based on customer demographics such as age, gender, and customer loyalty. As the data used is personal, the process of extracting information must reside in the customer region to comply with GDPR requirements. For example, EU customer information must reside only in the EU region and US customer information must reside only in the US region.

    The process of zone categorization can be specific to a region. For example, customers of age >= 65 will belong to zone ‘ASZ’ and get a discount of 50% on shipping in the EU region, whereas customers of age >= 65 will belong to zone ‘UPZ’ and get a discount of 5% on product in the US region.

    The discount percentage is calculated based on the zone.

    Now, let’s create an API to access the region-specific database to retrieve customer information and then calculate the discount percentage based on their discount zone.

    Assumptions

    Before you begin

    Before you begin with this use case, ensure that you complete the following tasks:

    Once you have completed all the tasks mentioned, you are ready to proceed.

    Basic flow

    To achieve the goal, the following tasks must be completed in the mentioned order:

    1. Create a deploy anywhere flow service to identify the customer’s discount zone for an EU customer.

    2. Create a deploy anywhere flow service to identify the customer’s discount zone for a US customer.

    3. Create a deploy anywhere flow service to identify the percentage for the discount zone.

    4. Create a workflow to calculate the customer’s discount zone based on the region provided.

    Create a deploy anywhere flow service to identify the customer’s discount zone for an EU customer

    Follow the steps to create a deploy anywhere flow service, which retrieves the customer’s personal data and subsequently applies rules to calculate the discount zone.

    1. In the Flow services page, click (Plus) icon to create a new deploy anywhere flow service.

    2. Select Deploy Anywhere Flow Service and click Create.

    3. Provide a name and description for the deploy anywhere flow service. Description is optional. For example, GetCustomerInfo.

    4. From the list, select an Edge runtime on which you run a deploy anywhere flow service. For example, CPM_EU.

    5. On the Flow service step, type Database to select the Database connector.

    6. In the Type to choose action box, click Add Custom Operation.

      The Add custom action wizard appears. The wizard redirects and guides you through a series of pages to configure an account to retrieve information from the database.

    7. Perform the following in the Account page to create an account that the service can use to access the database.

      a. Enter the name and description for the account.

      b. Select the CPM_EU runtime.

      c. Click the (Plus) icon to add a new database account.

      You are redirected to the Add account wizard.

      d. Enter the following details in the Add account wizard:

      • Action name: Name for the account you want to add.

      • Description: Short description for the account you want to add.

      • Select default runtime: The edge runtime on which the action or account is deployed. By default, the selection on the Flow Editor page is considered. However, if you change the edge runtime on Add account wizard, the connection is redeployed.

        Note
        The edge runtime selected in the Select default runtime takes precedence during redeployment of connection.
      • Connect to account Database: Database you want to connect to.

      • Driver Group: Driver group used to connect to the database.

      • Transaction Type: Transaction type you want to perform.

      • DataSource Class: Datasource class to be used.

      • Server Name: Name of the server that hosts the database.

      • User Name: Username associated with the account on the database server.

      • Password: Password for the specified user name.

      • Database Name: Name of the database to which you are connecting.

      • Port Number: Port number used for connecting to the database.

      • Network Protocol: Network protocol that the connection must use when connecting to the database.

      • Property Name: Name of the property. You can select the property from the drop-down list or use the ’+’ button to add a new property.

      • Property Value: Value for the property selected.

      e. Click Next. The Add account > Synchronize page appears.

      f. Click Next. The Next button is enabled only after a successful validation. The Add account > Test and review page appears.

      g. Click Test connection to verify the database connection. A success message appears if the connectivity is successful.

      h. Review the account details and click Enable.

      i. Click Done. You are redirected to the Add custom action page and the newly created account appears in the Connect to account Database drop-down list. This account is now used to run any database custom action created under the same project.

    8. Click Next. The Action page appears.

    9. Select the Select action as the requirement is to retrieve specific information from the database.

    10. Click Next. The Tables page appears. Here, you can see all available tables, views, or synonyms.

    11. Do the following to add tables:

      a. Click the (Plus) icon to add a new table. The Add tables page appears.

      b. Select the CUSTOMER table from the catalog.

      c. Click Add. The table is added and listed in the Tables page.

    12. Click Next. The Joins page appears.

      Here, in our example, joins were not added.

    13. Click Next. The Data fields page appears.

      a. Click the (Plus) icon to add data fields. The Add data fields page appears.

      b. Select the data fields to be added in the table.

      c. Click Add.

      You can also change the Output Field Type and Sort Order for a data field by clicking the corresponding Edit button. Here, in our example, we have changed the Output Field Type for the data fields in the table.

    14. Click Next. The Condition page appears. You can see all conditions created and also add conditions.

      a. Click the (Plus) icon to add a condition. The Add condition page appears.

      b. Add the condition on the CUSTOMERID data field.

      In this example, a condition on the data field CUSTOMERID is added to retrieve the record of the customer ID supplied to the deploy anywhere flow service.

    15. Click Next. The Parameters page appears.

      In this example no parameters are specified.

    16. Click Next. The Summary page appears.

    17. Verify the summary.

    18. Click Done. The Flow Editor page appears.

    19. Click (Input/Output) to add the input and output fields to the service.

      a. Add the input fields.

      In this example, three input fields, CUSTOMER_ID, REGION, and CATEGORY, are added to retrieve the record of the customer ID supplied to the deploy anywhere flow service.

      b. Add the output fields.

      In this example, one output field, DISCOUNT_ZONE, is added to hold the discount zone calculated for the customer ID supplied to the deploy anywhere flow service.

    20. Click (View/Edit Pipeline) to map the input and output fields to the service. The Pipeline page appears.

    21. On the Flow service step, type getCurrentDateString to select the built-in service Date GetCurrentDateAsString.

      a. Click (View/Edit Pipeline) to map the input and output fields to the built-in service GetCurrentDate. The Pipeline page appears.

      b. Add the input fields.

      In this example, the input field pattern is set to yyyy-MM-dd.

      c. The Pipeline page appears.

    22. On the Flow service step, type calculateDateDifference to select the built-in service Date CalculateDateDifference.

      a. Click (View/Edit Pipeline) to map the input and output fields to the built-in service CalculateDateDifference. The Pipeline page appears.

      b. Map the input and output fields.

      In this example, the input fields startDate and startDatePattern are mapped to the output fields of GetCurrentDate, endDate to the customer’s DOB, and endDatePattern to yyyy-MM-dd.

      c. The Pipeline page appears.

    23. On the Flow service step, add the if conditions.

      a. Add the if condition ( /dateDifferenceDays is less than or equal to 0 AND /CATEGORY is equal to ‘PRODUCT’)

      b. If the condition is true, add the Transform Pipeline.

      c. Repeat steps 23a and 23b to add the following conditions:

      Condition 1: ( /dateDifferenceDays is less than or equal to 30 AND /CATEGORY is equal to ‘SHIPPING’)

      Condition 2: ( /dateDifferenceDays is less than or equal to 6500 AND /CATEGORY is equal to ‘PRODUCT’)

      Condition 3: ( /dateDifferenceDays is greater than or equal to 22000 AND /CATEGORY is equal to ‘SHIPPING’)

      Else Condition:

      After adding all conditions, the deploy anywhere flow service appears as:
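
      Read as ordinary code, the configured branches correspond to the following sketch. Except for ‘ASZ’, which the summary associates with senior customers, the zone codes returned below are hypothetical placeholders:

        # Illustrative only: mirrors the if conditions configured in this step.
        def discount_zone(date_difference_days, category):
            if date_difference_days <= 0 and category == "PRODUCT":
                return "ZONE_A"   # hypothetical zone code
            if date_difference_days <= 30 and category == "SHIPPING":
                return "ZONE_B"   # hypothetical zone code
            if date_difference_days <= 6500 and category == "PRODUCT":
                return "ZONE_C"   # hypothetical zone code
            if date_difference_days >= 22000 and category == "SHIPPING":
                return "ASZ"      # senior customers, as described in the summary
            return "ZONE_D"       # hypothetical default (the Else condition)

        print(discount_zone(23000, "SHIPPING"))  # -> ASZ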

    24. Click (Save) to save the service.

    25. Click (Sync).

    26. Click (Run). The discount zone applicable to the customer ID Y701342 is calculated.

    Create a deploy anywhere flow service to identify the customer’s discount zone for a US customer

    Follow the steps in Create a deploy anywhere flow service to identify the customer’s discount zone for an EU customer to create and run a deploy anywhere flow service in the US region. The deploy anywhere flow service will extract the US customer’s personal data and then apply region-specific rules to calculate the discount zone. For example, the GetCustomerInfo_US deploy anywhere flow service is saved and run on the CPM_US edge runtime in the US.

    Create a deploy anywhere flow service to identify discount percentage for the discount zone

    Use the following steps to create a deploy anywhere flow service, which uses the discount zone calculated to extract the discount percentage applicable.

    1. In the Flow services page, click (Plus) icon to create a new deploy anywhere flow service.

    2. Select Deploy Anywhere Flow Service and click Create.

    3. Provide a name and description for the deploy anywhere flow service. Description is optional. For example, GetZoneDiscount.

    4. From the list, select an Edge runtime on which you run a deploy anywhere flow service. For example, CPM_EU.

    5. On the Flow service step, type Database to select the Database connector.

    6. In the Type to choose action box, click Add Custom Operation.

      The Add custom action wizard appears. The wizard redirects and guides you through a series of pages to configure an account to connect to a database.

    7. Perform the following in the Account page to create an account that the service can use to access the database.

      a. Enter the name and description for the account.

      b. Select the CPM_EU runtime.

      c. Click the (Plus) icon to add a new database account.

      You are redirected to the Add account wizard.

      d. Enter the following details in the Add account wizard:

      • Name: Name for the account you want to add.

      • Description: Short description for the account you want to add.

      • Select default runtime: The edge runtime on which the action or account is deployed. By default, the selection on the Flow Editor page is considered. However, if you change the edge runtime on Add account wizard, the connection is redeployed.

        Note
        The edge runtime selected in the Select default runtime takes precedence during redeployment of connection.
      • Database: Database you want to connect to.

      • Driver Group: Driver group used to connect to the database. Let’s add a new driver group here. To add a new driver group, do the following:

        • Click the (Plus) icon to add a new driver group.

        • Enter the following details in the Add driver wizard:

          Database: Database selected in the Add account wizard.

          Driver group: Name of the driver group.

          Select the driver: Click Browse files to add the appropriate JAR file for the database selected.

      • Transaction Type: Transaction type you want to perform.

      • DataSource Class: Datasource class to be used.

      • Server Name: Name of the server that hosts the database.

      • User Name: Username associated with the account on the database server.

      • Password: Password for the specified user name.

      • Database Name: Name of the database to which you are connecting.

      • Port Number: Port number used for connecting to the database.

      • Network Protocol: Network protocol that the connection must use when connecting to the database.

      • Property Name: Name of the property. You can select the property from the drop-down list or use the + button to add a new property.

      • Property Value: Value for the property selected.

      e. Click Next. The Add account > Synchronize page appears.

      f. Click Next. The Next button is enabled only after a successful validation. The Add account > Test and review page appears.

      g. Click Test connection to verify the database connection. A success message appears if the connectivity is successful.

      h. Review the account details and click Enable.

      i. Click Done. You are redirected to the Add custom action page and the newly created account appears in the Connect to account Database drop-down list. This account is now used to run any database custom action created under the same project.

    8. Click Next. The Action page appears.

    9. Select the Select action as the requirement is to retrieve specific information from the database.

    10. Click Next. The Tables page appears. Here, you can see all available tables, views, or synonyms.

    11. Do the following to add tables:

      a. Click the (Plus) icon to add a new table. The Add tables page appears.

      b. Select the ZONEDISCOUNT table from the catalog.

      c. Click Add. The table is added and listed in the Tables page.

    12. Click Next. The Joins page appears.

      Here, in our example, joins were not added.

    13. Click Next. The Data fields page appears.

      a. Click the (Plus) icon to add data fields. The Add data fields page appears.

      b. Select the data fields to be added in the table.

      c. Click Add.

      You can also change the Output Field Type and Sort Order for a data field by clicking the corresponding Edit button.

    14. Click Next. The Condition page appears. You can see all conditions created and also add conditions.

      a. Click the (Plus) icon to add a condition. The Add condition page appears.

      b. Add the condition on the ZONE data field.

      In this example, a condition on the data field ZONE is added to retrieve the record of the zone supplied to the deploy anywhere flow service.

    15. Click Next. The Parameters page appears.

      In this example no parameters are specified.

    16. Click Next. The Summary page appears.

    17. Verify the summary.

    18. Click Done. The Flow Editor page appears.

    19. Click (Input/Output) to add the input and output fields to the service.

      a. Add the input fields.

      In this example, one input field, DISCOUNTZONE, is added to receive the discount zone supplied to the deploy anywhere flow service.

      b. Add the output fields.

      In this example, one output field, DISCOUNTPERCENTAGE, is added to retrieve the discount percentage applicable to the discount zone supplied to the deploy anywhere flow service.

    20. Click (View/Edit Pipeline) to map the input and output fields to the service. The Pipeline page appears.

    21. Click (Save) to save the service.

    22. Click (Sync).

    23. Click (Run). The discount percentage applicable to the zone ‘ASZ’ is extracted.

    Create a workflow to retrieve the customer’s zone discount based on the region provided

    Use the following steps to create a workflow, which runs the customer deploy anywhere flow service based on the region to calculate the discount zone, and then runs the zone deploy anywhere flow service to retrieve the discount percentage based on the zone.

    Ensure that all three deploy anywhere flow services are created. For example, GetCustomerInformation, GetCustomerInformation_US, GetZoneDiscount.

    1. Click Workflows.

    2. Click + > Create New Workflow to create a new workflow.

    3. Click (Pen) icon to provide a name and description for the workflow. Description is optional. For example, CalcCustomerDiscountPercentage.

    4. Click (Flow Service) from the right navigation pane. The flow services in the project are listed.

    5. Drag and drop the deploy anywhere flow services in the project to create a workflow as shown:

    6. Configure a webhook to initiate the workflow.

      • Double-click the (start) icon. The Trigger selection page appears.

      • Select Webhook. The webhook wizard appears.

      • Click Next. The Configure Webhook Options page appears.

      • Click Next. The Webhook configured successfully page appears. Configure the payload.

        Click Done.
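
      The webhook simply receives a JSON payload that starts the workflow. A hypothetical payload, using the input fields defined earlier (CUSTOMER_ID, REGION, CATEGORY) and the customer ID from the EU run:

        import json

        payload = {"CUSTOMER_ID": "Y701342", "REGION": "EU", "CATEGORY": "SHIPPING"}
        print(json.dumps(payload))  # request body sent to the webhook URL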

    7. Select (Settings) icon on the arrow from Start Webhook to the GetCustomerInformation deploy anywhere flow service and set the conditions to run this section only if the region is EU.

    8. Select the (Settings) icon on the arrow from Start Webhook to the GetCustomerInformation_US deploy anywhere flow service and set the conditions to run this section only if the region is US.

    9. Double-click on the GetCustomerInformation deploy anywhere flow service to set the Integration Runtime, and map the data.

      • Select CPM_EU as Integration Runtime.

      • Map the fields.

    10. Double-click on the GetCustomerInformation_US deploy anywhere flow service to set the Integration Runtime, and map the data.

      • Select CPM_US as Integration Runtime.

      • Map the fields.

    11. Double-click on the GetZoneDiscount deploy anywhere flow service to set the Integration Runtime, and map the data.

      • Select CPM_EU as Integration Runtime.

      • Map the input and output discount zone fields.

    12. Click (Save) icon.

    13. Click (Run) icon.

    14. Click (Console Panel) icon to view the results.

    15. Select the last run results.

      The payload is:

      The final output can be seen.

    How to use the Database Application on the Edge Runtime

    Note
    To try this use case, you need the deploy anywhere flow services capability which is not available by default. To enable the capability, contact your Account Executive.

    Summary

    Let’s create a deploy anywhere flow service named MyDatabaseService in your cloud tenant to retrieve the customer details from the Customer database that is on your local network.

    Before you Begin

    For this tutorial, let’s assume that you are using an edge runtime named sample_er and a project named ProjectDB. For your convenience, you can also use your current edge runtime and project, instead of creating new ones.

    Once you have completed all the tasks mentioned, you are ready to proceed with creating MyDatabaseService.

    Basic Flow

    1. Go to the project ProjectDB > Integrations > Flow services.

    2. Click the (Plus) icon. The Select Flow service type page appears.

    3. Select Deploy Anywhere Flow Service and click Create. The Flow Editor page appears.

    4. Provide a name, for example, MyDatabaseService, and description for the deploy anywhere flow service. Description is optional.

    5. From the Search Runtime list, select an Edge runtime on which you run the deploy anywhere flow service.

    6. On the Flow service step, type Database to select the Database connector.

    7. In the Type to choose action box, select Add Custom Operation.

      The Add custom action wizard appears. The wizard redirects and guides you through a series of pages to configure an action to connect to a database.

    8. Perform the following in the Account page to create an account that the service can use to access the database.

      a. Enter the name and description for the account.

      b. In the Select default runtime drop-down list, select an edge runtime on which the action must be deployed. By default, the edge runtime selected on the Flow Editor page is considered.

      Note
      The edge runtime selected in the Select default runtime takes precedence during redeployment of connection.

      c. Click the (Plus) icon to add a new database account.

      You are redirected to the Add account wizard.

      d. Enter the following details in the Add account wizard:

      • Action name: Name for the account you want to add.

      • Description: Short description for the account you want to add.

      • Select default runtime: An edge runtime on which the action must be deployed. By default, the edge runtime selected on the Flow Editor page is considered.

        Note
        The edge runtime selected in the Select default runtime takes precedence during redeployment of connection.
      • Connect to account Database: Database you want to connect to.

      • Driver Group: Driver group to be used to connect to the database. Let’s add a new driver group here. To add a new driver group, do the following:

        i. Click the ’+’ button adjacent to the Driver Group drop-down list. The Add driver dialog box appears.

        ii. Select the supported database to connect to from the Database drop-down list.

        iii. Enter a name for the Driver Group.

        iv. Click Browse to select the JDBC driver jar to upload.

        v. Click Save. You are redirected to the Add account wizard.

      • Transaction Type: Transaction type you want to perform.

      • DataSource Class: Datasource class to be used.

      • Server Name: Name of the server that hosts the database.

      • User Name: Username associated with the account on the database server.

      • Password: Password for the specified user name.

      • Database Name: Name of the database to which you are connecting.

      • Port Number: Port number used for connecting to the database.

      • Network Protocol: Network protocol that the connection must use when connecting to the database.

      • Property Name: Name of the property. You can select the property from the drop-down list or use the ’+’ button to add a new property.

      • Property Value: Value for the property selected.

      e. Click Next. The Add account > Synchronize page appears.

      f. Click Next. The Next button is enabled only after a successful validation. The Add account > Test and review page appears.

      g. Click Test connection to verify the database connection. A success message appears if the connectivity is successful.

      h. Review the account details and click Enable.

      i. Click Done. You are redirected to the Add custom action page and the newly created account appears in the Connect to account Database drop-down list. This account is now used to run any database custom action created under the same project.

    9. Click Next. The Action page appears.

    10. Select the Select action, because the requirement is to retrieve specific information from the database.

    11. Click Next. The Tables page appears. Here, you can see all available tables, views, or synonyms.

    12. Do the following to add tables:

      a. Click the (Plus) icon to add a new table. The Add tables page appears.

      b. Select the CUSTOMER table from the catalog.

      c. Click Add. The table is added and listed in the Tables page.

    13. Click Next. The Joins page appears. In this example, no joins are added.

    14. Click Next. The Data fields page appears.

      a. Click the (Plus) icon to add data fields. The Add data fields page appears.

      b. Select the data fields to be added in the table.

      c. Click Add.

      You can also change the Output Field Type and Sort Order for a data field by clicking the corresponding Edit button. In this example, we have changed the Output Field Type for the data fields in the table.

    15. Click Next. The Condition page appears. You can see all conditions created and also add conditions.

      a. Click the (Plus) icon to add a condition. The Add condition page appears.

      b. Add the condition on the PERSONID data field.

      In this example, a condition is added on the PERSONID data field to retrieve the record with PERSONID 91 (see the sketch below).
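
      Conceptually, the Select action built in these steps corresponds to a query like the following sketch; the column names other than PERSONID are hypothetical stand-ins for the data fields added in step 14:

        SELECT PERSONID, FIRSTNAME, LASTNAME FROM CUSTOMER WHERE PERSONID = 91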

    16. Click Next. The Parameters page appears. In this example no parameters are specified.

    17. Click Next. The Summary page appears.

    18. Verify the summary.

    19. Click Done. The Flow Editor page appears.

    20. Click (Save) to save the service.

    21. Click (Sync).

    22. Click (Run). The customer details for the ID 91 are retrieved.

    How to use the Database Application for Custom SQL on the Edge Runtime

    Note
    To try this use case, you need the deploy anywhere flow services capability, which is not available by default. To enable the capability, contact your Account Executive.

    Summary

    Let’s create a deploy anywhere flow service in your cloud tenant to retrieve the customer details from the Customer database that is on your local network using a Custom SQL action.

    Before you Begin

    For this tutorial, let’s assume that you are using an edge runtime named sample_er and a project named ProjectDB. For your convenience, you can also use your current edge runtime and project, instead of creating new ones.

    Once you have completed all the tasks mentioned, you are ready to proceed with creating MyDatabaseService.

    Basic Flow

    1. Go to the project ProjectDB > Integrations > Flow services.

    2. Click the (Plus) icon. The Select Flow service type page appears.

    3. Select Deploy Anywhere Flow Service and click Create. The Flow Editor page appears.

    4. Provide a name, for example, MyDatabaseService, and description for the deploy anywhere flow service. Description is optional.

    5. From the Search Runtime list, select an Edge runtime on which you want to run the deploy anywhere flow service.

    6. On the Flow service step, type Database to select the Database connector.

    7. In the Type to choose action box, click Add Custom Operation.

      The Add custom action wizard appears. The wizard guides you through a series of pages to configure a custom action and retrieve information from the database.

    8. Perform the following in the Account page to create an account that the service can use to access the database.

      a. Enter the name and description for the account.

      b. Click the (Plus) icon to add a new database account.

      You are redirected to the Add account wizard.

      c. Enter the following details in the Add account wizard:

      • Action name: Name for the account you want to add.

      • Description: Short description for the account you want to add.

      • Select default runtime: The edge runtime on which the action or account is deployed. By default, the selection on the Flow Editor page is considered. However, if you change the edge runtime in the Add account wizard, the connection is redeployed.

        Note
        The edge runtime selected in the Select default runtime takes precedence during redeployment of the connection.
      • Connect to account Database: Database you want to connect to.

      • Driver Group: Driver group that must be used to connect to the database.

      • Transaction Type: Transaction type you want to perform.

      • DataSource Class: Datasource class to be used.

      • Server Name: Name of the server that hosts the database.

      • User Name: Username associated with the account on the database server.

      • Password: Password for the specified user name.

      • Database Name: Name of the database to which you are connecting.

      • Port Number: Port number used for connecting to the database.

      • Network Protocol: Network protocol that the connection must use when connecting to the database.

      • Property Name: Name of the property. You can select the property from the drop-down list or use the ’+’ button to add a new property.

      • Property Value: Value for the property selected.

      d. Click Next. The Add account > Synchronize page appears.

      e. Click Next. The Next button is enabled only after a successful validation. The Add account > Test and review page appears.

      f. Click Test connection to verify the database connection. A success message appears if the connectivity is successful.

      g. Review the account details and click Enable.

      h. Click Done. You are redirected to the Add custom action page and the newly created account appears in the Connect to account Database drop-down list. This account is now used to run any database custom action created under the same project.

    9. Click Next. The Action page appears.

    10. Select the Custom SQL action, because the requirement is to retrieve specific information from the database.

    11. Click Next. The SQL page appears.

    12. Type the SQL query for the table EMP_A with the condition ID = ?.
      You can pass values at runtime by using “?” as a placeholder for each input value in the SQL query (see the example below).
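
      For example, using the table and condition described above, the query could be written as:

        SELECT * FROM EMP_A WHERE ID = ?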

    13. Click Next. The Input page appears.

    14. Enter the input field ID.

      Note
      If inputs are not auto-populated for the SQL query, you can add them explicitly using the Add Input button.

    15. Click Next. The Output page appears.

    16. Select all columns of the table EMP_A as output.

      Note
      If outputs are not auto-populated for the SQL query, you can add them explicitly using the Add Output button.

    17. Click Next. The Parameters page appears.

      In this example no parameters are specified.

    18. Click Next. The Summary page appears.

    19. Verify the summary.

    20. Click Done. The newly created account is added.

    21. Click the button at the top of the page. The Define input and output fields dialog box appears.

    22. In the Define input and output fields dialog box, add a new input field for your service and name it ID.

    23. Click (Save) to save the service.

    24. Click (Sync).

    25. Click (Run).

      In this example, 1 is provided as input value for the input field ID. The output data for the configured custom action appears.

    How to use the Database Application for Dynamic SQL on the Edge Runtime

    Note
    To try this use case, you need the deploy anywhere flow services capability, which is not available by default. To enable the capability, contact your Account Executive.

    Summary

    Let’s create a deploy anywhere flow service in your cloud tenant to retrieve the customer details from the Customer database that is on your local network using a Dynamic SQL action.

    Before you Begin

    For this tutorial, let’s assume that you are using an edge runtime named sample_er and a project named ProjectDB. For your convenience, you can also use your current edge runtime and project, instead of creating new ones.

    Once you have completed all the tasks mentioned, you are ready to proceed with creating MyDatabaseService.

    Basic Flow

    1. Go to the project ProjectDB > Integrations > Flow services.

    2. Click the (Plus) icon. The Select Flow service type page appears.

    3. Select Deploy Anywhere Flow Service and click Create. The Flow Editor page appears.

    4. Provide a name, for example, MyDatabaseService, and description for the deploy anywhere flow service. Description is optional.

    5. From the Search Runtime list, select an Edge runtime on which you want to run the deploy anywhere flow service.

    6. On the Flow service step, type Database to select the Database connector.

    7. Select Add Custom Operation from the Type to choose action drop-down list.

      You are redirected to the Add custom action wizard, which guides you through a series of pages to configure a custom action and retrieve information from the database.

    8. Perform the following in the Account page to create an account that the service can use to access the database.

      a. Enter the name and description for the account.

      b. Click the (Plus) icon to add a new database account.

      You are redirected to the Add account wizard.

      c. Enter the following details in the Add account wizard:

      • Action name: Name for the account you want to add.

      • Description: Short description for the account you want to add.

      • Select default runtime: The edge runtime on which the action or account is deployed. By default, the selection on the Flow Editor page is considered. However, if you change the edge runtime in the Add account wizard, the connection is redeployed.

        Note
        The edge runtime selected in the Select default runtime takes precedence during redeployment of the connection.
      • Connect to account Database: Database you want to connect to.

      • Driver Group: Driver group that must be used to connect to the database.

      • Transaction Type: Transaction type you want to perform.

      • DataSource Class: Datasource class to be used.

      • Server Name: Name of the server that hosts the database.

      • User Name: Username associated with the account on the database server.

      • Password: Password for the specified user name.

      • Database Name: Name of the database to which you are connecting.

      • Port Number: Port number used for connecting to the database.

      • Network Protocol: Network protocol that the connection must use when connecting to the database.

      • Property Name: Name of the property. You can select the property from the drop-down list or use the ’+’ button to add a new property.

      • Property Value: Value for the property selected.

      d. Click Next. The Add account > Synchronize page appears.

      e. Click Next. The Next button is enabled only after a successful validation. The Add account > Test and review page appears.

      f. Click Test connection to verify the database connection. A success message appears if the connectivity is successful.

      g. Review the account details and click Enable.

      h. Click Done. You are redirected to the Add custom action page and the newly created account appears in the Connect to account Database drop-down list. This account is now used to run any database custom action created under the same project.

    9. Click Next. The Action page appears.

    10. Select the Dynamic SQL action, because the requirement is to retrieve specific information from the database.

    11. Click Next. The SQL page appears.

    12. Type the dynamic SQL query statement, part of which you set at run time using input fields.

      In this example, the dynamic SQL query is written for the table EMP_A, and at run time the where input field is set to ID = 2 (see the sketch below).
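
      As a sketch, the query could look like the following. The ${where} placeholder syntax is an assumption borrowed from the webMethods JDBC adapter convention for dynamic SQL; the dynamic portion is supplied through an input field at run time:

        SELECT * FROM EMP_A WHERE ${where}

      With the where input field set to ID = 2, this effectively runs SELECT * FROM EMP_A WHERE ID = 2.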

    13. Click Next. The Input page appears. Here, input values are provided during run time.

    14. Click Next. The Output page appears.

    15. Click Next. The Parameters page appears.

      In this example no parameters are specified.

    16. Click Next. The Summary page appears.

    17. Verify the summary.

    18. Click Done. The newly created account is added.

    19. Click the button at the top of the page. The Define input and output fields dialog box appears.

    20. In the Define input and output fields dialog box, add a new input field for your service and name it ID.

    21. Click (Save) to save the service.

    22. Click (Sync).

    23. Click (Run). The Input Values dialog box appears.

    24. Enter 2 as the input value. The output data for the configured custom action appears.

    How to generate a private-public key pair using OpenSSL

    Overview

    Key-based authentication involves generating a pair of cryptographic key files. These files consist of a private key and a public key, which uniquely identifies the user. Key pair authentication provides an enhanced level of security for authentication when compared to basic methods such as using a username and password. This authentication method requires a 2048-bit RSA key pair. The private-public key pair for Privacy Enhanced Mail (PEM) can be generated using OpenSSL.

    Note
    Key pair authentication is currently supported in the Database connector only for the Snowflake database.

    Steps to generate a private and public key pair using OpenSSL

    Note
    To explain the following steps, the key pair authentication process between the Database connector and the Snowflake database is considered.
    1. Install OpenSSL on your system.

    2. Open a terminal or command prompt to use OpenSSL commands to generate the key pair.

    3. Generate a private key and certificate using the following command:

      openssl req -x509 -newkey rsa:2048 -keyout {privatekey.pem} -out {cert.pem} -sha256 -days 730 -nodes -subj "/C={country}/ST={state}/L={city}/O={organisation}/OU={organisation_unit}/CN={common_name}"

      Example: openssl req -x509 -newkey rsa:2048 -keyout privatekey.pem -out my_cert.pem -sha256 -days 730 -nodes -subj "/C=US/ST=Ohio/L=Columbus/O=Software Co/OU=Adapters/CN=soco"

      This command generates a self-signed certificate and a 2048-bit RSA private key in PEM format, for example:

      -----BEGIN PRIVATE KEY-----
      MIIE6T...
      -----END PRIVATE KEY-----

    4. Once you have the private key, you can generate the corresponding public key using the following command:

      openssl rsa -in {privatekey.pem} -pubout -out {publickey.pub}

      Example: openssl rsa -in privatekey.pem -pubout -out pubkey.pub

      This command will extract the public key from the private key in PEM format, for example:

      -----BEGIN PUBLIC KEY-----
      MIIBIj...
      -----END PUBLIC KEY-----

      Securely store both the private and public keys in a local directory and record the path to the files. Note that because the -nodes option is used, the private key file itself is not encrypted; the keystore generated in the next step is saved in the PKCS#12 format and is protected by the export passphrase you provide.
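
      Optionally, you can verify that the two files form a matching pair by comparing the SHA-256 digest of the public key derived from each file; identical digests confirm a matching pair:

      openssl rsa -in privatekey.pem -pubout -outform DER | openssl sha256

      openssl rsa -pubin -in pubkey.pub -pubout -outform DER | openssl sha256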

    5. Next, generate a keystore file in either PKCS#12 or JKS format using the following command:

      openssl pkcs12 -export -name {alias} -in {cert.pem} -inkey {privatekey.pem} -out {key.p12}

      Example: openssl pkcs12 -export -name privatekey -in my_cert.pem -inkey privatekey.pem -out mykeystore.p12

      • Assign your public key to your Snowflake user to use key pair authentication. Note that you must have the ACCOUNTADMIN role to modify a user.

      • Alter the user to use key pair authentication using the following command:

        ALTER USER {username} SET RSA_PUBLIC_KEY='MIIBIjAN…..'

        Example: ALTER USER JOHN SET RSA_PUBLIC_KEY='MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtD7m97G2h9sNdkWeDBeyFhgUPLu2wbccYXT3vXwDudL2qFm7W6PxVaEj/k1bFeKcOHDI2jVVeHzU1awg1wxBJ3Jd2GJ9dYNmjGhovLSrthbKbGMavlwU+QIDAQAB'

      • Replace the RSA_PUBLIC_KEY value with the content of the public key file you generated, excluding the BEGIN and END delimiter lines. Run this command in Snowflake.
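
      • Optionally, verify that the key is registered by describing the user in Snowflake and checking that the RSA_PUBLIC_KEY_FP property shows a SHA-256 fingerprint. The user name JOHN is taken from the example above:

        DESC USER JOHN;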

    6. Add the keystore file generated in step 5 to IBM webMethods Integration by clicking Projects > Select a Project > Configurations > General > Certificates > New Certificate > Keystore.

    7. Navigate to the Connectors tab and select the Database connector. Add the Basic configuration and Advanced configuration details, select the Keystore file, and set the Other properties as required by Snowflake.

    8. Click Test connection to verify the database connection. A success message appears if the connectivity is successful.

    9. Test key pair authentication by connecting to Snowflake using your Snowflake client. You should be able to log in without entering a password.
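
      For example, with the SnowSQL client you could connect using the private key file, where the account identifier is a placeholder:

      snowsql -a {account_identifier} -u JOHN --private-key-path privatekey.pem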