Monitor
Track and monitor execution status, set up automated alert rules, view user activities, and monitor SAP® ERP performance and connection pools.
Let’s understand each of these sections in detail.
The Execution Results section provides a quick overview of the statistics associated with the workflow executions and Flow service executions (if enabled) along with their respective execution logs.
In the Monitor page, click on Execution Results as shown below:
Let’s now glance through the options provided by the Execution Results:
The Dashboard option offers you a consolidated view of all Workflow and Flow service execution statuses for the selected time period, along with a graphical representation of the same.
You can fetch the execution data for a specific duration by selecting the relevant time frame option given on the top-right corner of the screen.
You can also view the detailed execution logs for each successful/failed Workflow and Flow service by clicking on the relevant figure as shown below:
This will take you to a new screen where you can click the relevant workflow/Flow service name to view the detailed execution log associated with it.
If you want to view a detailed execution log for successful Flow services, click on the relevant figure. This will take you to the following screen:
The Workflow Execution option lets you view and monitor workflow execution-related data in a detailed graphical format.
A workflow can have one of the following statuses:
For async webhook-enabled workflows, the status can be Failed if you have exceeded the execution rate limit for your tenant.
You can fetch specific workflow execution data for a certain time frame and/or a certain project, workflow, and execution status.
You can fetch the execution data for a specific duration by selecting the relevant time frame option given at the top-right corner of the screen.
You can alternatively specify a custom time frame using the date picker option.
You can fetch the execution data associated with one or more projects, one or more workflows, and one or more execution statuses by applying the filter criteria.
To apply a filter, click on the Filters option.
You will see the following three filter criteria:
Context Id: Specify the context ID based on which you want to fetch the execution data. Read more about Context IDs.
Once you select the required filter options, click Apply. Doing this will display the execution details (in a graphical format) and execution logs for the selected projects and workflows.
To reset the filter back to default, click Reset.
The Workflow Execution option lets you view and monitor the execution log for each workflow that you execute.
You can select the columns you want to view in the Executions table by clicking the Settings button located beside the Download Logs option. The Settings button, when clicked, displays a list of column names and allows you to select the columns you want to view in the Executions table. Column names that are not selected are hidden in the Executions table.
Transaction count: The number of transactions that have been consumed for each workflow execution.
Status: The current state of a workflow. Available values include Success, Running, Stopped, Failed, Timeout, Queued, Hold.
Actions: The available options that can be performed on a workflow when it fails. Selecting or deselecting this column is not supported.
To get detailed information on the performance details of a particular workflow, click on the name of that workflow.
Clicking on the workflow’s name will take you to a screen having complete execution log information about that workflow. You can optionally export the execution log of a particular workflow to your local machine by clicking on the Export Logs button located at the top right of the screen.
Next, to view detailed information about the configured trigger or actions for the selected workflow, click on the name of the trigger or action as shown below:
You can optionally download the workflow execution logs (all or filtered records) either in JSON or CSV format to your local machine. To do this, click Download Logs located on the right side of the screen and select the format of the workflow execution log that you want to download.
The required execution logs will be automatically downloaded as a .zip file to the default download location in your machine.
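The downloaded archive can be post-processed locally. The sketch below is a hypothetical example that assumes the .zip contains JSON files whose records carry a "status" field such as Success or Failed; check your actual export for the real file layout and field names before relying on it. It tallies executions per status:

```python
import io
import json
import zipfile
from collections import Counter

def summarize_execution_logs(zip_bytes: bytes) -> Counter:
    """Count executions per status in a downloaded log archive.

    Assumption: the archive contains JSON files, each holding a list
    of records with a "status" field. Adjust names to your export.
    """
    counts = Counter()
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            if not name.endswith(".json"):
                continue
            records = json.loads(archive.read(name))
            for record in records:
                counts[record.get("status", "Unknown")] += 1
    return counts
```

For example, feeding the function the bytes of a downloaded archive returns a Counter such as Counter({"Success": 120, "Failed": 3}).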
The Resume feature allows you to resume the execution of your failed and timed-out workflows. Since this feature works at the workflow level, you need to enable it for each workflow that you may want to resume in the future. To enable this feature for a workflow, navigate to Workflow -> Workflow Settings and then select the Save status of each successfully executed action checkbox under the Execution Settings tab.
If you have enabled the Resume feature for your workflow and it fails, you will see the Resume button in the execution log of that workflow.
When you click the Resume button, a dialog box appears where you are prompted to specify whether you want to edit the input JSON data of the failed action(s) before resuming the workflow execution, or resume the workflow execution directly.
Click on Resume to immediately resume the workflow execution from the point it failed in the previous run.
Click on Edit Input to modify the input JSON of failed action(s).
Once this is done, click Resume. This resumes the workflow execution from the point it failed in the previous run, using the modified JSON input data for the failed actions, and you will be redirected to the Execution Logs page. Next, refresh the page by clicking the Refresh icon located on the left side of the screen. Clicking this icon fetches the latest status of the workflow execution in Execution Logs.
If the workflow is executed successfully, you will see that the execution status of the workflow is changed from Failed to Success. Moreover, when you click the workflow execution log, you can see the complete execution log details (previously failed action logs and current successfully executed action logs).
The Restart feature enables you to restart the execution of your failed, timed out, and stopped workflows. Additionally, this feature supports restarting failed manual workflow executions.
As this feature works at the workflow-level, you need to enable it for each workflow that you may want to restart in the future. To enable this feature for a workflow, go to Workflow -> Workflow Settings and then select the Save status of each successfully executed action checkbox under the Execution Settings tab.
You can restart one or multiple workflows as per your requirements.
Restarting a single workflow
You can restart workflow executions from the main execution listing page of the Monitor tab. This option is provided under the Actions column.
If you have enabled the Restart feature for your workflow and the workflow fails, times out, or is stopped, the Restart button appears in the execution log of that particular workflow.
When you click on the Restart button, a dialog box will appear on screen where you will be prompted to specify whether you’d like to modify the webhook payload data before restarting the workflow or restart the workflow directly.
If you don’t want to modify webhook payload data, click on Restart. This will restart the workflow execution using the existing webhook payload immediately.
If you want to modify the webhook payload data before restarting the workflow, click on Edit Payload.
Once you have modified the webhook payload as per requirement, click Restart. This will restart the workflow execution using the modified webhook payload.
You will be redirected to the Execution Logs page, where you will need to refresh the page by clicking on the Refresh icon located on the left side of the screen. Clicking on this icon will fetch the latest status of the workflow execution in Execution Logs.
When a workflow execution is restarted, in the Status column of the executions table, the label Restarted is placed below Failed, indicating that the workflow has been restarted after encountering a failure.
You can check the restart history of the restarted workflow by clicking on the icon placed beside the Restarted label in the Status column. Upon clicking the icon, the Restart History window appears, displaying a chronological list of restart events with timestamps. You can click on a timestamp to view the complete execution log information for that specific workflow execution.
You can click on the name of the restarted workflow to view its execution details.
Clicking on the Restart Reference ID will direct you to a screen showing the execution details of the initially restarted workflow. Additionally, this screen also displays the total number of times the workflow has been restarted. Clicking on the Restart Count will return you to the Restart History window.
Restarting multiple workflows
You can select multiple failed workflow executions and restart them in a single click. To do this, in the Executions table, select the checkboxes beside the names of the failed workflows and click the Restart button.
The Flow service Execution option lets you view and monitor Flow service execution-related data in a detailed graphical format.
A Flow service can have one of the following statuses:
You can fetch specific Flow service execution data for a certain time frame and/or a certain project, Flow service, and execution status.
You can fetch the execution data for a specific duration by selecting the relevant time frame option given at the top-right corner of the screen.
You can alternatively specify a custom time frame using the date picker option.
You can fetch the execution data associated with an execution source, a project, a Flow service, an execution status, and the context ID. This can be achieved by applying the filter criteria.
To apply a filter, click on the Filters option.
You will see the following filter criteria:
In a Flow service, select the setCustomContextID service available in the Flow category. It is recommended to add the setCustomContextID service as the first step in a Flow service.
In the mapping editor, set a value for the id field and then save and run the Flow service.
Search for the custom context ID in the Monitor > Execution Results > Flow service execution page by clicking Filters and by specifying the custom context ID in the Context ID field.
Once you select the required filter options, click Apply.
The execution details will be displayed in a graphical format and the execution logs will appear for the selected filter criteria. To reset the filter back to default, click Reset.
The Flow service Execution option lets you view and monitor the execution log for each Flow service that you run.
You can select the columns you want to view in the Executions table based on your requirements by clicking on the Settings button located beside the Download Logs option. The Settings button, when clicked, displays a list of column names and allows you to select the columns you want to view in the Executions table. Column names that are not selected will be hidden in the Executions table.
You can view the list of Flow service execution details in ascending or descending order using the Sort by option. The Sort by option appears when you hover over the column names in the Executions table as shown below:
Click on the Sort by option to view the execution logs in an ascending or descending order. You can sort the order based on the following criteria:
To view the detailed execution logs, click on the name of the relevant flow.
This will take you to a screen having complete information about that particular Flow service. The execution page also displays the total number of documents processed by the Flow service, the number of documents processed successfully, the number of documents that did not process successfully, and the success score.
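As a rough illustration of how such a score could be derived, the helper below treats the success score as the percentage of documents processed successfully out of the total. The exact formula is an assumption; the product may compute the figure differently.

```python
def success_score(total: int, succeeded: int) -> float:
    """Hypothetical success score: percentage of documents processed
    successfully out of the total documents processed (assumed formula,
    not confirmed from product documentation)."""
    if total == 0:
        return 0.0
    return round(succeeded / total * 100, 2)
```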
Next, to view detailed information about operations for the selected Flow service, click on the name of the operation as shown below:
You can view additional information about operation execution including Details, Results, and Business Data. Click on the relevant options to view the required information.
By default, you can retain Flow service Execution result entries for 30 days. You can optionally specify the number of days (up to 30) for which you would like to retain the Flow service execution logs by clicking the Modify Retention Period link. Once the retention period is over, the Flow service execution logs are deleted automatically.
You can optionally download the execution logs of Flow services to your local machine by clicking Download Logs as shown:
You can optionally terminate the ongoing Flow service executions from the Flow service Execution details page. To do so, navigate to the Running Executions section, select one, multiple, or all running Flow service executions, and click on the Terminate button as shown below:
The Resume feature enables you to resume execution of your successful or failed Flow services.
For more information, see the Resuming Flow services section.
The Restart feature enables you to restart execution of your successful or failed Flow services.
For more information, see the Restarting Flow services section.
You can set automated alert rules for your projects to send notifications to specific users when a workflow or Flow service fails, times out, or completes execution.
In the Monitor page, click on Alert Rules as shown below:
The Alert Rules option lets you send alert notifications to specific recipients when certain events occur during workflow execution. This helps you keep relevant users updated about the status of workflow executions.
To add a new alert rule, navigate to tenant homescreen > Monitor > Workflow Alerts.
Click on the Add Alert button located at the right of the screen to create a new alert rule.
You will be redirected to a New Workflow Alert configuration screen.
In the New Workflow Alert configuration screen that appears, enter the details as given below:
Once you have entered these details, click Save. This will add the specified alert rule for your tenant. By default, the status of the added alert rule will always be inactive. You will have to manually activate the alert rule by using the Active toggle button.
After this, whenever the selected workflow fails, times out, or completes execution, an alert notification will be sent to the specified email address.
When you click Workflow Alerts, you will see a list of all existing alert rules for your tenant. Here, you can see the name of each alert rule, along with its description and status (active/inactive).
Integration alert - <workflow_name> from Tenant: <tenantname>
The Alert Rules option lets you send custom notifications to specific recipients when certain events occur during Flow service execution. This helps you keep relevant users updated about the status of Flow service executions.
To add a new alert rule, navigate to tenant homescreen > Monitor > Flow service Alerts.
Click on the Add Alert button located at the right of the screen to create a new alert rule. Along with a new alert rule, you can also add an alert frequency period (5 mins, 10 mins, or 15 mins) to specify how often the alert rule should be run.
You will be redirected to the New Flow service Alert configuration screen.
In the New Flow service Alert configuration screen that appears, enter the details as given below:
Once you have entered these details, click Add. This will add the specified alert rule for your tenant. After this, whenever the selected flow fails, times out, or completes execution, an alert notification will be sent to the specified email address.
When you click Flow service Alerts, you will see a list of all existing alert rules for your tenant. Here, you can see the name of each alert rule, along with its description and status (active/inactive).
The General section allows you to view, track, and monitor your tenant activities. You can view audit logs, current month’s transaction usage statistics for your transaction-based tenants, and clear storage locks for integrations.
The Audit Logs section maintains a record of all the activities performed by the user. It maintains a history of all the actions that are performed within a tenant, including details such as the type of action performed, the user performing the action, and date/time.
For the Develop Anywhere, Deploy Anywhere and Central Control, Distributed Execution capabilities, audit logs are supported for the following operations:
To access the tenant audit logs, navigate to tenant homescreen and then click Monitor > Audit Logs.
You can apply filters on the audit logs to retrieve specific logs.
You can view audit logs for a certain time frame using the date picker located at the top-right corner of the Audit Logs screen.
By default, you will see the logs from the last 12 hours. You can either choose the time-range menu to select the required time range or specify the start and end date to fetch the audit logs created between them.
You can perform two types of searches, namely, Simple Search and Advanced Search to search and view log events as per your requirements. You can search and filter a set of particular log entries by specifying a search term or a query expression in the search query box. Let’s now understand each of these search capabilities in detail.
You can quickly search through your audit logs by entering a search term in the search query box to fetch specific log details.
A search term can consist of a word (such as John or Doe) or a phrase (like untitled workflow). When a search term is entered, the search scans all columns to retrieve log entries that contain the specified word or phrase.
For instance, to fetch all log details associated with a Default project, you can simply enter Default or default.
Similarly, if you want to fetch details of all untitled workflows, you can type the phrase Untitled Workflow or untitled workflow in the search query box.
You can perform an advanced search on your logs to narrow down your searches, form complex queries, and fetch specific log details.
This functionality allows you to search for a particular set of log entries that satisfy the condition specified in a query expression. A query expression consists of multiple search terms in conjunction with operators.
Using an advanced search capability, you can combine multiple search terms with different operators to perform a more specific search. A query expression can either consist of multiple search terms separated by commas or multiple search terms grouped together with parentheses.
Following are the different query expressions that you can perform:
A query that has search terms separated by a comma returns log entries containing the specified values in the same row. For example, the query - Project, Delete - will search for log entries containing Project and Delete in the same row.
A query that has multiple search terms joined by an OR operator, with each group enclosed in parentheses, returns log entries containing either or both of the search terms in the same row.
For example, the query - (Project, Delete) OR (Untitled Workflow) - will search for log lines containing either Project and Delete or Untitled Workflow or both in the same row.
A query with field-based searches returns the log entries where each specified field equals its specified value in the same row.
For example, the query - (module: project, action: create) - will search for log entries where the module field equals project and the action field equals create in the same row.
A query expression allows the following operators:
Operator | Example | Description |
---|---|---|
OR | (Project, Delete) OR (Default) | Searches for log entries containing either or both of the specified search terms in the same row |
, (comma) | Project, Delete | Searches for log entries containing all the specified search terms in the same row |
: (colon) | (module:project, action:delete) | Searches for log entries where the field equals the specified value |
Following are examples of a few query expressions for searching a specific set of log entries:
Query Expression | Description |
---|---|
workflow, delete | Retrieve log entries containing workflow and delete in the same row |
created, johndoe | Retrieve log entries for which the ‘Created’ action is performed by ‘johndoe’ |
(created, johndoe) OR (updated, johndoe) | Retrieve log entries for which the ‘Created’ or ‘Updated’ action is performed by ‘johndoe’ |
(published, janesmith) OR (published, johndoe) OR (published, veronicasmith) | Retrieve log entries for projects that are published by ‘janesmith’, ‘johndoe’, or ‘veronicasmith’ |
(module: project, action: delete) | Retrieve log entries where the column fields - ‘module’ and ‘action’ equal values - ‘project’ and ‘delete’ respectively |
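To make the operator semantics concrete, here is a minimal Python sketch that evaluates such query expressions against a single log row. It mirrors the operator table above (OR separates alternative groups, commas AND terms within a group, and a colon restricts a term to one field) but is an illustrative approximation, not the product's actual query parser:

```python
def matches(row: dict, expression: str) -> bool:
    """Evaluate a simplified audit-log query expression against one row.

    OR separates alternative groups; commas AND the terms of a group;
    "field: value" compares one column; plain terms match any column,
    case-insensitively. A sketch only, not the real implementation.
    """
    def term_matches(term: str) -> bool:
        term = term.strip().strip("()")
        if ":" in term:
            field, _, value = term.partition(":")
            return str(row.get(field.strip(), "")).lower() == value.strip().lower()
        return any(term.lower() in str(v).lower() for v in row.values())

    # A group matches when all of its comma-separated terms match the row.
    for group in expression.split(" OR "):
        if all(term_matches(t) for t in group.split(",")):
            return True
    return False
```

For instance, with a row {"module": "project", "action": "delete", "user": "johndoe"}, the expressions "Project, Delete" and "(module: project, action: delete)" both match, while "workflow, create" does not.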
You can download the audit logs (all or filtered records) either in JSON or CSV format to your local machine.
To do this, click the Download Logs button located at the upper-right corner of the Audit Logs screen.
Next, select the desired format type of audit log report that you want to download.
With this, the required audit logs will be downloaded to the default download location in your machine.
For all transaction-based tenants, a certain number of transactions are allocated to their account depending on the selected plan on a monthly basis. You can view the current month’s transaction usage statistics of your transaction-based tenants through the Usage tab.
For non-paid and Free Forever Edition tenants:
To view the transaction usage of your tenant, navigate to the homescreen and then click Monitor > Usage.
Here, you can check the number of transactions already consumed by your tenant workflows and Flow services out of the total allocated transactions, for the current month.
For paid tenants:
You will not see the usage bar that is visible to non-paid and Free Forever Edition (FFE) tenants. Only the total number of transactions consumed is displayed when you click the Usage tab.
Storage locks refer to a mechanism where the system temporarily locks or marks a shared storage resource (Storage service and scheduled integrations with Prevent concurrent executions) during the execution of an integration to prevent other processes from interfering with it. IBM webMethods Integration uses a short-term store for information that needs to persist across server restarts.
The lock mechanism is used to control access and execution of integrations to avoid conflicts, especially in scenarios where multiple processes or instances might interact with the same data or resources concurrently. Locks help ensure that only one instance of an integration or process is executed at a given time, preventing issues like data inconsistency or resource conflicts. This is achieved by acquiring a lock on a storage resource before performing operations on it and releasing the lock afterward to allow other processes to access the resource.
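The acquire-before, release-after pattern described above can be sketched in Python. This is a conceptual illustration only, using an in-process threading.Lock to skip an execution when the lock is already held; it is not how IBM webMethods Integration implements storage locks.

```python
import threading
from contextlib import contextmanager

_locks: dict = {}
_registry_lock = threading.Lock()

@contextmanager
def storage_lock(key: str):
    """Acquire a named lock before an execution, release it afterward.

    Conceptual sketch: if the lock for `key` is already held, yield
    False so the caller can skip the run instead of blocking, mirroring
    how a locked integration's subsequent executions are skipped.
    """
    with _registry_lock:
        lock = _locks.setdefault(key, threading.Lock())
    acquired = lock.acquire(blocking=False)
    try:
        yield acquired
    finally:
        if acquired:
            lock.release()
```

A caller would write `with storage_lock("orders-sync") as ok:` and only run the integration body when `ok` is True; the `finally` clause guarantees the lock is released even if the body raises, which is exactly the guarantee lost during a system outage.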
The Integration system automatically removes integration locks at scheduled intervals, occurring every 1 hour and 15 minutes. However, if you want to manually clear the locks before the designated time, you can clear the locks from the Clear Storage Locks page.
The following are instances when the locks are applied on an integration, and you must manually clear the locks:
Scenario 1
If you choose the Prevent concurrent executions option when scheduling an integration, a lock is applied to the integration before each execution.
In the event of a system outage in Integration while a scheduled integration is ongoing, the lock remains in place and is not automatically released. As a result, subsequent scheduled executions are skipped until the lock is manually cleared.
Scenario 2
If you have added the Storage add and lock services in an integration, a lock is placed on the integration. This lock is automatically released upon the completion of the integration execution.
In the event of a system outage in Integration while an integration is ongoing, the lock remains in place and is not automatically released. As a result, subsequent executions for the same integration are skipped until the lock is manually cleared.
While debugging an integration, if you close or stop the debugging process after executing the lock step, then the lock remains in place. You must clear the lock from the Clear Storage Locks page. However, if debugging is completed up to the last step, then the lock is released automatically.
Scenario 3
You have an integration that may run for a few hours, say for example, runtime exceeds 1 hour and 15 minutes, and the integration acquires a lock through Storage add and lock services or by using the Prevent concurrent executions option in the scheduler. Now, even if the integration is running, Integration automatically releases the lock during its routine lock removal schedule. To prevent this scenario and increase the lock clear time, contact Support.
Go to Monitor > General > Clear storage locks.
The Clear Storage Locks page displays the following details:
Storage Context: Name of the storage provided when adding the storage service in the integration.
Key: Name of the storage key provided when adding the storage service in the integration.
Locked On: Date and time when the lock was applied on the integration.
Go to Monitor > General > Clear storage locks. All integrations on which the locks have been applied appear.
Select the checkboxes corresponding to the integrations for which you want to release the lock.
Click Clear. The lock is cleared, and the integration can be executed successfully.
This section provides a quick overview of the SAP® ERP performance and SAP® ERP connection pools.
On the Monitor page, click Connectors > SAP® ERP.
SAP® ERP allows you to monitor the following:
Connection pool: SAP® ERP repository connection pools, the SAP® ERP client connections, and the listener status.
For more information, see the Monitoring SAP® ERP Connection Pool section.
Performance: SAP® ERP performance, the SAP® response time summary, query requests, and query components.
For more information, see the Monitoring SAP® ERP Performance section.
The metering functionality provides you with an overall execution count. However, this information alone is insufficient for a deeper analysis of your usage patterns based on conditions such as time range or specific category-keys.
What you need are:
The Insights functionality is designed to provide you with all of these options and more.
The Insights feature adds a new module to the Integration portfolio. The primary objective of Insights is to provide you with statistics on transactions. This is achieved through graphical depictions highlighting your usage of Flow services and Workflows. The filtering parameters include:
Log in to your tenant. Select the Monitor tab.
Select Insights > Overview in the left-hand side menu.
Use the calendar to choose your date range. Your cards automatically update graphs based on your calendar selection.
Enable or disable the attributes by clicking on the graph legends. Y-axis auto-scales based on the number of transactions. For example, 2 million transactions is displayed as 2M, and 500 thousand transactions is displayed as 500K.
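The auto-scaling rule can be expressed as a small helper. This sketch mirrors the examples above (2 million displayed as 2M, 500 thousand as 500K); the product's exact rounding behavior is an assumption.

```python
def format_count(n: int) -> str:
    """Format a transaction count the way the Insights Y-axis displays
    it: millions as M, thousands as K, smaller values unchanged.
    Rounding to one decimal place is an assumed convention."""
    if n >= 1_000_000:
        value, suffix = n / 1_000_000, "M"
    elif n >= 1_000:
        value, suffix = n / 1_000, "K"
    else:
        return str(n)
    # Drop a trailing ".0" so whole values render as "2M", not "2.0M".
    text = f"{value:.1f}".rstrip("0").rstrip(".")
    return text + suffix
```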
The Analytics section provides you with advanced transaction count charts based on selected date range. The Services tab provides data on Workflows and Flow services. If you want to see all the data again under the tab, click on the Reset button at any time.
In the Analytics section, the Projects tab displays a detailed breakdown of all transactions for each project in your tenant. Click on a graph legend to enable or disable a specific project.
The Highlights section displays transaction data, sorted by the highest transaction count by default. This information is useful in identifying which services or projects are contributing to more transactions and can assist with performance optimization.
The Reports section displays the transaction and execution counts of Workflows and Flow services month-wise, in separate rows.
The Top Consumers section provides you with data on the top transaction consumers, categorized by:
Project: Number of transactions per project
Workflow (workflowName:projectName): Number of transactions per workflow
Flow service (flowserviceName:projectName): Number of transactions per Flow service