Self-hosted Transaction Monitoring

End-to-End Monitoring is used for monitoring cross-product transactions that involve webMethods.io iPaaS products and self-hosted products such as webMethods Integration Server or API Gateway, but it was not used for monitoring transactions that span only the self-hosted products. Support for transaction monitoring of self-hosted products is now available. You can now configure the agents to send traces for all self-hosted product transactions to the Application Performance Monitoring (APM) tool or data store of your choice using the OpenTelemetry standard. OpenTelemetry support in End-to-End Monitoring for transactions within the webMethods.io iPaaS products will be available in a later release.

The following is a sample architecture diagram of hybrid integration monitoring and self-hosted transaction monitoring with OpenTelemetry support:

Currently supported versions

webMethods API Gateway

Operating System | 10.15 | 10.11 | 10.7 | 10.5 | 10.3
Linux            | Yes   | Yes   | No   | No   | No
Windows          | Yes   | Yes   | No   | No   | No
MacOS            | Yes   | Yes   | No   | No   | No
AIX              | Yes   | Yes   | No   | No   | No

webMethods Integration Server

Operating System | 10.15 | 10.11 | 10.7 | 10.5 | 10.3
Linux            | Yes   | Yes   | No   | No   | No
Windows          | Yes   | Yes   | No   | No   | No
MacOS            | Yes   | Yes   | No   | No   | No
AIX              | Yes   | Yes   | No   | No   | No

webMethods Microservices Runtime

Operating System | 10.15 | 10.11 | 10.7 | 10.5 | 10.3
Linux            | Yes   | Yes   | No   | No   | No
Windows          | Yes   | Yes   | No   | No   | No
MacOS            | Yes   | Yes   | No   | No   | No
AIX              | Yes   | Yes   | No   | No   | No

Develop Anywhere, Deploy Anywhere

The capability to transfer transaction traces to OTLP targets is currently not supported.

Note
  • If your Software AG self-hosted product version or operating system is not listed or supported, contact Software AG Global support with details about your Software AG product and operating system. We will assess the feasibility of providing support for your specific requirement.
  • End-to-End Monitoring self-hosted transaction monitoring only supports OpenTelemetry Traces.

Self-hosted agent configuration to share data with external systems

Use the following configuration to share traces from the End-to-End Monitoring self-hosted agent with targets such as messaging queues, external Application Performance Monitoring (APM) systems, or custom targets used to store monitoring data.

Pre-requisites

Integration Server package-based hybrid monitoring agent setup

To start a new installation of the IS package-based hybrid monitoring agent, follow the instructions available here.

Software AG Installer based hybrid monitoring agent setup

To start an installation of the Software AG Installer-based hybrid monitoring agent, follow the instructions available here.

OTLP common configuration

Update the following properties in the agent.config file as per your requirements:

Property | Description | Sample value
collector.establish_cloud_communication | Use this property to enable or disable communication from the agent to the cloud collector. Set this property to false to enable OTLP support. Setting this property is mandatory for OTLP support. The default value is true. | ${SW_AGENT_COLLECTOR_ESTABLISH_CLOUD_COMMUNICATION:false}
exporter.establish_external_communication | Use this property to establish a connection to an external target system for pushing End-to-End Monitoring traces. Set this property to true to enable OTLP support. Setting this property is mandatory for OTLP support. The default value is false. | ${SW_AGENT_EXTERNAL_ESTABLISH_COMMUNICATION:false}
exporter.default_target | Use this property to set your default OTLP target server. Valid values are apm (sends traces to any observability platform that supports the OpenTelemetry standard, for example Honeycomb, Dynatrace, or Jaeger), kafka (sends traces to a Kafka messaging queue; you are required to process the traces from the queue), or a custom target (sends traces to your own user-defined target server). | ${SW_AGENT_EXTERNAL_TARGET:apm}
exporter.target_name | Use this property to set the external target system name. Setting this property is optional. | ${SW_AGENT_EXTERNAL_TARGET_NAME:<target_name>}
exporter.batch_size | Use this property to control the number of records sent in a single request to the target server. The default value is 2000. | ${SW_AGENT_EXT_BATCH_SIZE:2000}
exporter.healthcheck_interval | Use this property to set the target server health check interval in seconds. The default value is 60. | ${SW_AGENT_EXTERNAL_API_HEALTH_CHECK_INTERVAL:60}
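
For example, a minimal agent.config fragment that disables the cloud collector connection and routes traces to an external APM target might look like the following. This is an illustrative sketch only; the property names and the ${ENVIRONMENT_VARIABLE:default} placeholder form follow the sample values in the table above, and the values shown are examples rather than required settings.

    collector.establish_cloud_communication=${SW_AGENT_COLLECTOR_ESTABLISH_CLOUD_COMMUNICATION:false}
    exporter.establish_external_communication=${SW_AGENT_EXTERNAL_ESTABLISH_COMMUNICATION:true}
    exporter.default_target=${SW_AGENT_EXTERNAL_TARGET:apm}
    exporter.batch_size=${SW_AGENT_EXT_BATCH_SIZE:2000}
    exporter.healthcheck_interval=${SW_AGENT_EXTERNAL_API_HEALTH_CHECK_INTERVAL:60}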

For custom implementation configurable properties, see Custom implementation configurable properties.

Note
For OTLP transaction details, refer to the logs available at:
<INSTALL-DIR>\IntegrationServer\instances\default\logs\e2eagentOtlpTraces.log.

OTLP Configuration based on target server

APM specific configuration for target server

Update the following attributes in the agent configuration if you need to push End-to-End Monitoring trace data directly to an APM tool such as Dynatrace or New Relic.

The following example shows attributes for Honeycomb APM.

Property | Description | Sample value
exporter.url | Use this property to update the endpoint URL of the external target APM server. | ${SW_AGENT_OTEL_ENDPOINT:<OTLP_ENDPOINT>} For example: https://<otlp_endpoint>/v1/traces
exporter.headers | Use this property to provide comma-separated key-value pairs containing authorization tokens and required headers. The header key and value are separated by '#'. The external target system requires valid headers. | ${SW_AGENT_OTEL_HEADERS:<REQUIRED_HEADERS>} For example: api-key#value,Content-Type#application/x-protobuf
exporter.username | Use this property to provide the username of the target server. | ${SW_AGENT_EXT_SYSTEM_USER:<username>}
exporter.password | Use this property to provide the password of the target server. | ${SW_AGENT_EXT_SYSTEM_PASSWORD:<password>}
exporter.support_long_id | Use this property to support trace ID lengths exceeding 16 bytes. Setting this property is optional. The default value is false, which is the recommended value. | ${SW_AGENT_EXTERNAL_SUPPORT_LONG_ID:false}
exporter.api_error_codes | API error codes for the target server health check. Add additional codes as per your requirement. The default values are 502,503,504. | ${SW_AGENT_EXTERNAL_API_ERROR_CODES:502,503,504}
exporter.resource_attributes | Use this property to provide comma-separated key-value pairs containing resource attributes. The attribute key and value are separated by '#'. | ${SW_AGENT_OTEL_RESOURCE_ATTRIBUTES:<OTLP_RESOURCE_ATTRIBUTES>} For example: service.name#example_service,service.namespace#example_service_namespace
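
For illustration, the APM-specific properties might be combined as follows for an OTLP/HTTP endpoint. This is a sketch only; the endpoint URL, api-key value, and resource attribute values below are placeholders for your own APM account details, not values taken from this documentation.

    exporter.url=${SW_AGENT_OTEL_ENDPOINT:https://api.example-apm.io/v1/traces}
    exporter.headers=${SW_AGENT_OTEL_HEADERS:api-key#<your_api_key>,Content-Type#application/x-protobuf}
    exporter.resource_attributes=${SW_AGENT_OTEL_RESOURCE_ATTRIBUTES:service.name#orders_service,service.namespace#integration}
    exporter.support_long_id=${SW_AGENT_EXTERNAL_SUPPORT_LONG_ID:false}
    exporter.api_error_codes=${SW_AGENT_EXTERNAL_API_ERROR_CODES:502,503,504}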

Supported APM systems

The following APM systems have been verified with End-to-End Monitoring:

APM target server | Supported | Supported Version | Remarks
Honeycomb | Yes | V10.15 Fix 8 |
LightStep (ServiceNow) | Yes | V10.15 Fix 8 |
Dynatrace | Yes | V10.15 Fix 9 |
New Relic | Yes | V10.15 Fix 9 |
Grafana Tempo | Yes | V10.15 Fix 9 |
Jaeger | Yes | V10.15 Fix 9 |
Cisco AppDynamics | Yes | V10.15 Fix 9 |
IBM Instana | Yes | V10.15 Fix 9 |
Elastic APM | Yes | V10.15 Fix 9 |
Datadog | No | | Planned

KAFKA specific configuration for target server

Update the following attributes in the agent configuration if you need to push End-to-End Monitoring trace data to a Kafka messaging queue and subsequently process this data in a pre-configured Kafka consumer client.

The following example shows attributes for a Kafka server.

Property | Description | Sample value
exporter.url | Use this property to update the endpoint URL of the external target Kafka server. | ${SW_AGENT_OTEL_ENDPOINT:<KAFKA_ENDPOINT>} For example: https://<host_name>:9091
exporter.headers | Use this property to provide comma-separated key-value pairs containing authorization tokens and required headers. The header key and value are separated by '#'. | ${SW_AGENT_OTEL_HEADERS:<REQUIRED_HEADERS>} For example: key#value
exporter.username | Use this property to provide the username of the target server. | ${SW_AGENT_EXT_SYSTEM_USER:<username>}
exporter.password | Use this property to provide the password of the target server. | ${SW_AGENT_EXT_SYSTEM_PASSWORD:<password>}
exporter.topic | Use this property to provide the topic name defined in Kafka. | ${SW_AGENT_EXTERNAL_KAFKA_TOPIC:topic_demo}
exporter.ack | Use this property to provide the acknowledgement level. Valid values are 0, 1, and all. The default value is all. | ${SW_AGENT_EXTERNAL_KAFKA_ACK:all}
exporter.retries | Use this property to define the retries setting that determines the number of times the producer attempts to send a message before marking it as failed. The default value is 0. | ${SW_AGENT_EXTERNAL_KAFKA_RETRIES:0}
exporter.client_id | Use this property to provide the label that names a particular producer. The default value is TraceProducer. | ${SW_AGENT_EXTERNAL_KAFKA_PRODUCER_ID:OTelTraceProducer}
exporter.linger_ms_config | Use this property to define the amount of time, in milliseconds, that a Kafka producer waits before sending a batch of messages. The default value is 0. | ${SW_AGENT_EXTERNAL_KAFKA_MS_CONFIG:0}
exporter.request_timeout_ms_config | Use this property to provide the maximum amount of time, in milliseconds, that the client waits for the response of a request. If the response is not received before the timeout, the client resends the request or fails the request if the number of retries has been exhausted. The default value is 60. | ${SW_AGENT_EXTERNAL_KAFKA_TIMEOUT_MS_CONFIG:60}
exporter.send_buffer_config | Use this property to provide the size of the TCP send buffer (SO_SNDBUF) to use when sending data. The default value is -1. | ${SW_AGENT_EXTERNAL_KAFKA_SEND_BUFFER_CONFIG:-1}
exporter.receive_buffer_config | Use this property to provide the size of the TCP receive buffer (SO_RCVBUF) to use when reading data. The default value is -1. | ${SW_AGENT_EXTERNAL_KAFKA_RECEIVE_BUFFER_CONFIG:-1}
exporter.compression_type_config | Use this property to provide the compression type for all data generated by the producer. Compression works on full batches of data, so the efficacy of batching also impacts the compression ratio; more batching means better compression. Valid values are none, gzip, snappy, lz4, and zstd. The default value is none. | ${SW_AGENT_EXTERNAL_KAFKA_COMPRESSION_TYPE:none}
exporter.max_request_size_config | Use this property to define the maximum size of a request in bytes. This setting limits the number of record batches the producer sends in a single request to avoid sending large requests. The default value is 1048576 (1 MB). | ${SW_AGENT_EXTERNAL_KAFKA_REQUEST_SIZE:1048576}
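
For illustration, a Kafka target might be configured as follows, in addition to the common OTLP properties described earlier. This is a sketch only; the broker address is a placeholder, and the remaining values restate documented sample values or defaults.

    exporter.default_target=${SW_AGENT_EXTERNAL_TARGET:kafka}
    exporter.url=${SW_AGENT_OTEL_ENDPOINT:https://kafka-broker.example.com:9091}
    exporter.topic=${SW_AGENT_EXTERNAL_KAFKA_TOPIC:topic_demo}
    exporter.client_id=${SW_AGENT_EXTERNAL_KAFKA_PRODUCER_ID:OTelTraceProducer}
    exporter.ack=${SW_AGENT_EXTERNAL_KAFKA_ACK:all}
    exporter.compression_type_config=${SW_AGENT_EXTERNAL_KAFKA_COMPRESSION_TYPE:none}
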
Note

Copy the Kafka client library version 3.5.1 or later into your Integration Server directory and add a full-path reference entry for the Kafka library at the end of the file <INSTALL-DIR>\profiles\IS_default\configuration\custom_wrapper.conf.

For example:

  • Integration Server and API Gateway:

    wrapper.java.additional.504=-Xbootclasspath/a:..\..\..\E2EMonitoring\agent\custom_jars\kafka-clients-3.5.1.jar;"%JAVA_BOOT_CLASSPATH%"
  • Microservices Runtime:

    set JAVA_UHMKAFKA_OPTS=-Xbootclasspath/a:"..\..\..\E2EMonitoring\agent\kafka-clients-3.5.1.jar"
    set JAVA_CUSTOM_OPTS=%JAVA_CUSTOM_OPTS% %JAVA_UHMKAFKA_OPTS%

    at the end of the file <INSTALL-DIR>\IntegrationServer\bin\setenv.bat
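
Because End-to-End Monitoring only publishes the traces to the topic, you consume and process them yourself. The following is a minimal, illustrative Java consumer sketch, assuming the records carry serialized OTLP TracesData protobuf payloads as byte arrays; the broker address, topic name, group id, and class name are placeholders and are not part of the product configuration.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.ByteArrayDeserializer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import io.opentelemetry.proto.trace.v1.TracesData;

    public class OtlpTraceConsumer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
            props.put("group.id", "e2em-otlp-consumer");      // placeholder consumer group

            try (KafkaConsumer<String, byte[]> consumer =
                    new KafkaConsumer<>(props, new StringDeserializer(), new ByteArrayDeserializer())) {
                consumer.subscribe(Collections.singletonList("topic_demo")); // value of exporter.topic
                while (true) {
                    ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofSeconds(5));
                    for (ConsumerRecord<String, byte[]> record : records) {
                        // Assumes the record value is a serialized OTLP TracesData protobuf message.
                        TracesData traces = TracesData.parseFrom(record.value());
                        System.out.println("Received " + traces.getResourceSpansCount() + " resource spans");
                    }
                }
            }
        }
    }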

Custom implementation

You can implement solutions to share End-to-End Monitoring tracing data with your custom targets, such as APM tools or custom data stores like Elasticsearch, using a Java-based API provided by End-to-End Monitoring. Reference implementations are available for the APM and Kafka target systems.

For more information on the default implementation, see APM specific configuration or KAFKA specific configuration.

However, if you need to implement your own solution using the End-to-End Monitoring common API, use the following Javadoc and sample classes that implement the API.

Pre-requisites

The otlpClientPlugin.jar file contains the interfaces and helper classes needed to implement a trace exporter for the plugged-in external target servers.

Use the otlpClientPlugin.jar file available in the
<INSTALL-DIR>\IntegrationServer\instances\default\packages\WmE2EMIntegrationAgent\resources\agent\plugins directory.

Note
  • Plugin file path for Microservices Runtime: <INSTALL_DIR>\IntegrationServer\instances\default\packages\WmE2EMIntegrationAgent\resources\agent\plugins
  • Plugin file path for API Gateway: <INSTALL_DIR>\IntegrationServer\instances\default\packages\WmE2EMAPIAgent\resources\agent\plugins

Custom implementation configurable properties

Property | Description | Sample value
exporter.default_target | Use this property to set the external target system. Valid values are apm and kafka. You can also use a custom target. | exporter.default_target=apm
exporter.client_connector_class | Use this property to enable the trace export process to refer to specific classes during processing. Provide the full name of the class where you have implemented the logic to handle connectivity. | ${SW_AGENT_EXTERNAL_CLIENT_CONNECTOR_CLASS:com.softwareag.uhm.agent.exporter.impl.APMTraceExportConnectorImpl}
exporter.client_implementation_class | Use this property to enable the trace export process to refer to specific classes during processing. Provide the full name of the class where you have implemented the logic to process the traces to the external target server. | ${SW_AGENT_EXTERNAL_CLIENT_IMPLEMENTATION_CLASS:com.softwareag.uhm.agent.exporter.impl.APMTraceExportClientServiceImpl}
exporter.url | Use this property to update the endpoint URL of the external target server. | ${SW_AGENT_OTEL_ENDPOINT:localhost:9092}
exporter.headers | Use this property to provide comma-separated key-value pairs containing authorization tokens and required headers. The header key and value are separated by '#'. | ${SW_AGENT_OTEL_HEADERS:<REQUIRED_HEADERS>} For example: key#value
exporter.username | Use this property to provide the username of the target server. | ${SW_AGENT_EXT_SYSTEM_USER:<username>}
exporter.password | Use this property to provide the password of the target server. | ${SW_AGENT_EXT_SYSTEM_PASSWORD:<password>}
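
Putting these together, a hedged example configuration that plugs in your own connector and client classes might look like the following. The com.example class names and the endpoint are placeholders for your own implementation; only the property names come from the table above.

    exporter.establish_external_communication=${SW_AGENT_EXTERNAL_ESTABLISH_COMMUNICATION:true}
    exporter.default_target=${SW_AGENT_EXTERNAL_TARGET:apm}
    exporter.client_connector_class=${SW_AGENT_EXTERNAL_CLIENT_CONNECTOR_CLASS:com.example.exporter.MyTraceExportConnectorImpl}
    exporter.client_implementation_class=${SW_AGENT_EXTERNAL_CLIENT_IMPLEMENTATION_CLASS:com.example.exporter.MyTraceExportClientServiceImpl}
    exporter.url=${SW_AGENT_OTEL_ENDPOINT:https://my-target.example.com/v1/traces}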

Custom implementation configuration

  1. Connect the implemented connectivity and trace exporter classes to the corresponding properties in the agent.config file. Provide the fully qualified names of the connector and client implementation classes in the exporter.client_connector_class and exporter.client_implementation_class properties.

    For example:

    #APM exporter
    #com.softwareag.uhm.agent.exporter.impl.APMTraceExportConnectorImpl
    #com.softwareag.uhm.agent.exporter.impl.APMTraceExportClientServiceImpl
  2. Add a reference entry with the full path of the external client library at the end of the custom_wrapper.conf file at <INSTALL-DIR>\profiles\IS_default\configuration.

    For example:

    • Integration Server and API Gateway

      wrapper.java.additional.504=-Xbootclasspath/a:..\..\..\IntegrationServer\instances\default\packages\WmE2EMIntegrationAgent\resources\agent\externalClient.jar;"%JAVA_BOOT_CLASSPATH%"
    • Microservices Runtime

      set JAVA_UHM_CLI_OPTS=-Xbootclasspath/a:"packages/WmE2EMIntegrationAgent/resources/agent/externalClient.jar"
      set JAVA_CUSTOM_OPTS=%JAVA_CUSTOM_OPTS% %JAVA_UHM_OPTS% %JAVA_UHM_CLI_OPTS%

Reference implementation

The following sections provide details about the classes that implement the API from the otlpClientPlugin.jar file.

APM

APM Connector reference implementation

This class is used to execute the attach and release connectivity operations and must implement the TraceExporterConnector interface.

Example:

/**
 * 
     An APM client that creates a connection resource that helps to publish traces to the APM cluster.

    Assume that you want to publish data to an OTLP endpoint, for example https://api.xxxx.io/v1/traces

    Usage example:


         PoolingNHttpClientConnectionManager asyncConnManager = new PoolingNHttpClientConnectionManager(
                 new DefaultConnectingIOReactor(IOReactorConfig.DEFAULT));
         HttpAsyncClientBuilder asyncClientBuilder = HttpAsyncClients.custom();
         asyncClientBuilder.setConnectionManager(asyncConnManager);
         CloseableHttpAsyncClient clientHttps = asyncClientBuilder.build();
         clientHttps.start();
      
    To enable loggers in your implementation, get a logger instance from the logger factory with name "e2eagentOtlpTraces". The logs generated will be added to a file e2eagentOtlpTraces.log.

    Usage example:


        private final Logger e2eagentOtlpTraces = LoggerFactory.getLogger("e2eagentOtlpTraces");
     
    Note: In the example, you add the serverInfo map to the systemConnection object.
    Usage example:

    The TargetServerConnection object is initialized at the class level, as shown below.

     private final TargetServerConnection systemConnection = new TargetServerConnection(); 
        
         final HashMap<String, Object> serverInfo = new HashMap<String, Object>();
         serverInfo.put("CONNECTION_CLIENT", clientHttps);
         serverInfo.put("CONFIG", externalServerConfig); 
         systemConnection.setConnection(serverInfo);
 */
public class APMTraceExportConnectorImpl implements TraceExporterConnector {
    private final Logger e2eagentOtlpTraces = LoggerFactory.getLogger("e2eagentOtlpTraces");
    private final TargetServerConnection systemConnection = new TargetServerConnection();
    private List<String> ERROR_CODES = null;
   

/**

*
     
This method is used to connect to the plugged-in external APM server from End-to-End Monitoring.

    1. To connect using SSL, do the following steps.

      a. Enable the SSL property "exporter.tls" (for example, set it to "true") in the agent.config file.

      b. Use this approach if the target server requires a client certificate for its identity.

    Usage example for SSL configuration.


        String keystoreType = targetServerConfig.getString(TargetServerConfig.KEYSTORE_TYPE);
        KeyStore trustStore = keystoreType != null && keystoreType.equalsIgnoreCase("JKS")
                ? KeyStore.getInstance(KeyStore.getDefaultType())
                : KeyStore.getInstance(keystoreType);
        trustStore.load(loadStream(trustStoreLoc), truststorePassword.toCharArray());
        SSLContextBuilder sslBuilder = SSLContexts.custom()
                                .loadTrustMaterial(trustStore, new TrustSelfSignedStrategy())
                                .setSecureRandom(new SecureRandom());
        SSLContext sslContext = sslBuilder.build();
        SSLIOSessionStrategy sessionStrategy = null;
        if (targetServerConfig.getBoolean(TargetServerConfig.NO_OP_HOSTName_VERIFIER)) {
            sessionStrategy = new SSLIOSessionStrategy(sslContext, null, null,
                    NoopHostnameVerifier.INSTANCE);
        } else {
            sessionStrategy = new SSLIOSessionStrategy(sslContext, null, null,
                    SSLConnectionSocketFactory.getDefaultHostnameVerifier());
        }
        RegistryBuilder<SchemeIOSessionStrategy> registryBuilder = RegistryBuilder.create();
        registryBuilder.register("https", sessionStrategy).register("http",
                NoopIOSessionStrategy.INSTANCE);
        PoolingNHttpClientConnectionManager clientConnectionManager = new PoolingNHttpClientConnectionManager(
                new DefaultConnectingIOReactor(IOReactorConfig.DEFAULT), registryBuilder.build());
        HttpAsyncClientBuilder asyncClientBuilder = HttpAsyncClients.custom();
        asyncClientBuilder.setConnectionManager(clientConnectionManager);
        // Create a UsernamePasswordCredentials object
        asyncClientBuilder = passwordBuilder(username, password, asyncClientBuilder);
        if (asyncClientBuilder == null)
            return null;
        clientHttps = asyncClientBuilder.build();
        clientHttps.start();                        
         
    2. If a username and password are configured for the external target server, use the following mechanism to configure the credentials in the connection properties.
    Usage example:

     
        if (username != null && !username.isEmpty()) {
                if (password != null && !password.isEmpty()) {
                    final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
                    credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(username, password));
                    asyncClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
                } else {
                    e2eagentOtlpTraces.info("Password initializing from IS PassMan is in progress");
                    return null;
                }
         }
         
    Note: The E2EM agent is loaded during Integration Server startup. Password retrieval from the password manager can take a while, so add logic to retry until the password is received.

    3. If SSL is not enabled, then do the following steps.
      a. Disable the SSL property in the agent.config file.

      b. Use this approach if the target server doesn't require a client certificate for its identity.

    Usage example:


         PoolingNHttpClientConnectionManager asyncConnManager = new PoolingNHttpClientConnectionManager(
                 new DefaultConnectingIOReactor(IOReactorConfig.DEFAULT));
         HttpAsyncClientBuilder asyncClientBuilder = HttpAsyncClients.custom();
         asyncClientBuilder.setConnectionManager(asyncConnManager);
         CloseableHttpAsyncClient clientHttps = asyncClientBuilder.build();
         clientHttps.start();
    4. The connection object has to be added to the TargetServerConnection for the exporter implementation to reference.
    The TargetServerConnection object is initialized at the class level, as shown below.

     private final TargetServerConnection systemConnection = new TargetServerConnection(); 
    Usage example:


         final HashMap<String, Object> serverInfo = new HashMap<String, Object>();
         serverInfo.put("CONNECTION_CLIENT", clientHttps); 
         serverInfo.put("CONFIG", externalServerConfig); 
         systemConnection.setConnection(serverInfo);
         
    5. Error codes are configured to validate the connectivity of the target OTLP REST endpoint. The example below shows how to initialize the error codes.
    The error codes are configurable in the agent config file. The default values are "exporter.api_error_codes=502,503,504".

    Usage example:


         this.ERROR_CODES = Arrays.asList(externalServerConfig.getString(TargetServerConfig.API_ERROR_CODES).split(","));
         
    Note:
    Configured passwords are retrieved from the Integration Server Password Manager utility.
    The error codes are configured to validate the connectivity of the OTLP REST endpoint. You can load the error codes if required. Refer to point 5.
     * 
     * 
     * @param targetServerConfig A TargetServerConfig object with parameters
     *                           specific for connecting to the external server.
     *                           Parameters defined in the Connectivity Information
     *                           while creating the Exporter from E2EM can be
     *                           retrieved using getters.
     * @return TargetServerConnection object with a connection to the external server
     * @throws Exception
     */
    @Override
    public TargetServerConnection connect(TargetServerConfig targetServerConfig) throws Exception {
       // Code to create required connection resource object
        return systemConnection;
    }
    /**
     * This method is used to release the external server connection from End-to-End
     * Monitoring. 
     * 
     * Here is an example, 
     *
    final HashMap<String, Object> serverInfo = (HashMap<String, Object>) systemConnection.getConnection();
    CloseableHttpAsyncClient httpClient = (CloseableHttpAsyncClient) serverInfo.get("CONNECTION_CLIENT");
    if (httpClient != null) {
        httpClient.close();
    }
     * 
     * 
     * @param systemConnection A TargetServerConnection object containing the
     *                         connection to the plugged-in external server.
     * @throws IOException
     */
    @SuppressWarnings("unchecked")
    @Override
    public void release(TargetServerConnection systemConnection) throws Exception {
      // code to release the connection resource
    }
    /**
     This method is used to check the connectivity with the external server.

    The anticipated error codes for your external server can be configured in the agent.config file. "exporter.api_error_codes=502,503,504"

    If any of these error codes are received, this method returns false. Otherwise, it returns true.

    Usage example:


    final HashMap<String, Object> serverInfo = (HashMap<String, Object>) systemConnection.getConnection();
    TargetServerConfig targetServerConfig = (TargetServerConfig) serverInfo.get("CONFIG");
    HttpGet httpGet = new HttpGet(targetServerConfig.getString(TargetServerConfig.SERVER_HOST));
    try (CloseableHttpClient client = HttpClients.createDefault();
            CloseableHttpResponse response = client.execute(httpGet)) {
        int statusCode = response.getStatusLine().getStatusCode();
        return this.ERROR_CODES.contains(String.valueOf(statusCode)) ? false : true;
    }
     * 
     * @param targetServerConnection A TargetServerConnection object containing the
     *                               connection to the plugged-in external server.
     * @return Boolean
     * @throws Exception
     */
    @SuppressWarnings("unchecked")
    @Override
    public Boolean getConnectivityStatus(TargetServerConnection targetServerConnection) throws Exception {
       // Code to check the health of the target server.
       return true;
    }
}

APM Client reference implementation

This class is used to execute the trace export operation and must implement the TraceExporterClientService interface.

Example:

/**
    An APM client that publishes traces to the APM cluster.

    This is a thread-safe implementation that shares a single CloseableHttpAsyncClient instance across threads for better performance.

    The addChannelListener method adds a listener to notify the caller of the status of the execution. To add a listener, do the following:

    private List<OutboundOtlpHttpChannelListener> listeners = Collections.synchronizedList(new LinkedList<OutboundOtlpHttpChannelListener>()); 
      
      public void addChannelListener(OutboundOtlpHttpChannelListener listener) {
            listeners.add(listener);
        }
      
      
    To enable loggers in your implementation, get a logger instance from the logger factory with name "e2eagentOtlpTraces". The logs generated will be added to a file e2eagentOtlpTraces.log.

    Usage example:


        private final Logger e2eagentOtlpTraces = LoggerFactory.getLogger("e2eagentOtlpTraces");
 * 
 *
 */
public class APMTraceExportClientServiceImpl implements TraceExporterClientService {
    private final Logger e2eagentOtlpTraces = LoggerFactory.getLogger("e2eagentOtlpTraces");
    
    private List<OutboundOtlpHttpChannelListener> listeners = Collections.synchronizedList(new LinkedList<OutboundOtlpHttpChannelListener>());

/**
 
    This method is used to export the trace data to the plugged-in external server.

    Usage example: Using the CloseableHttpAsyncClient to send records with a single TracesData containing the key/value pairs.

    Get the connection resource from TargetServerConnection.

    Create an HttpPost (or the required request type) with the host name. The URL is dynamically loaded from the agent configuration property "exporter.url".

    Prepare and set headers for the HttpPost object. Header values are dynamically loaded from the agent configuration property "exporter.headers".


        final HashMap<String, Object> serverInfo = (HashMap<String, Object>) systemConnection.getConnection();
        CloseableHttpAsyncClient httpClient = (CloseableHttpAsyncClient) serverInfo.get("CONNECTION_CLIENT");
        TargetServerConfig externalServerConfig = (TargetServerConfig) serverInfo.get("CONFIG");
        HttpPost httpPost = new HttpPost(externalServerConfig.getString(TargetServerConfig.SERVER_HOST));
        String serverHeader = externalServerConfig.getString(TargetServerConfig.HEADERS);
        String contentType = "";
        if (serverHeader != null) {
            String[] headers = serverHeader.split(",");
            for (String header : headers) {
                String[] individualHeader = header.split("#");
                if (individualHeader[0].equalsIgnoreCase("Content-Type")) {
                    contentType = individualHeader[1];
                }
                httpPost.setHeader(individualHeader[0], individualHeader[1]);
            }
        }
        
    Default security headers. Add the required security headers based on your needs.

    Usage example:

     
        // -- default security headers
        httpPost.setHeader("Content-Security-Policy", "default-src 'none'");
        httpPost.setHeader("Feature-Policy", "microphone 'none'; geolocation 'none'");
        httpPost.setHeader("X-XSS-Protection", "1; mode=block");
        httpPost.setHeader("X-Content-Type-Options", "nosniff");
        httpPost.setHeader("X-Frame-Options", "SAMEORIGIN");
        httpPost.setHeader("Access-Control-Allow-Origin", "null");
        httpPost.setHeader("Referrer-Policy", "no-referrer");
        httpPost.setHeader("Cache-Control", "no-Cache");
        
    Prepare and set the supported content type for the request. Content-Type is added as part of the header values in the agent configuration.

    Usage example:

     
        if (!contentType.isEmpty() && contentType.equalsIgnoreCase("application/x-protobuf")) {
            byte[] serializedMessage = traceData.toByteArray();
            // Set the serialized protobuf message as the request body
            ByteArrayEntity reqEntity = new ByteArrayEntity(serializedMessage);
            httpPost.setEntity(reqEntity);
        } else {
            String protoJson = JsonFormat.printer().printingEnumsAsInts().omittingInsignificantWhitespace()
                    .print(traceData);
            StringEntity requestEntity = new StringEntity(protoJson, ContentType.APPLICATION_JSON);
            httpPost.setHeader("Accept", "application/json");
            httpPost.setEntity(requestEntity);
            e2eagentOtlpTraces.debug("JSON data: \n " + protoJson);
        }
        
    Post the data using the async mechanism; the response is handled in the respective callback methods.

    Usage example:

     
         httpClient.execute(httpPost, new FutureCallback<HttpResponse>() {
            @Override
            public void completed(final HttpResponse response) {
                String body = null;
                if (response.getStatusLine().getStatusCode() == 200) {
                    APMTraceExportClientServiceImpl.this.notify(httpPost, true);
                    body = EntityUtils.toString(response.getEntity());                    
                } else {
                    APMTraceExportClientServiceImpl.this.notify(httpPost, false);
                    body = EntityUtils.toString(response.getEntity());                    
                }
            }

            @Override
            public void failed(final Exception e) {
                APMTraceExportClientServiceImpl.this.notify(httpPost, false);            
            }

            @Override
            public void cancelled() {
                APMTraceExportClientServiceImpl.this.notify(httpPost, false);            
            }
        });
        
    The send() method is asynchronous. When called it publishes the record to an OTLP endpoint.

    This allows sending many records in parallel without blocking to wait for the response after each one.
 * 
 * @param systemConnection A TargetServerConnection object containing
 *                         parameters specific to run the pushing action through
 *                         the external server.
 * @param traceData        A TracesData object containing trace data.
 * @return A Boolean
 * @throws Exception if an error occurs in the exporting process.
     */
    @SuppressWarnings("unchecked")
    @Override
    public void send(TargetServerConnection systemConnection, TracesData traceData)
            throws Exception {
       // Code to process the trace data to external server.
    }
    
    /**
    
    The status of the execution needs to be notified to the caller. To do so, add the configured listener as shown in the example below.


         listeners.add(listener);
         
    The configured listener class gets a status update (true/false) as and when it receives an HTTP response from the async execution.

    Usage example showing how to send an update to the listener class using a separate notify method.

     
          private void notify(HttpPost httpPost, boolean status) {
              for (OutboundOtlpHttpChannelListener listener : listeners) {
                  // TODO: Derive the record count from the HTTP response.
                  listener.statusChanged(count, status);
              }
          }
     */
    public void addChannelListener(OutboundOtlpHttpChannelListener listener) {
        listeners.add(listener);
    }
}

KAFKA

Kafka Connector reference implementation

This class is used to execute the attach and release connectivity operations and must implement the TraceExporterConnector interface.

Example:

/**
    A KAFKA client that creates a connection resource that helps to publish traces to the Kafka cluster.

    Usage example of the sample producer to send records with strings containing sequential numbers as the key/value pairs.


     
     Properties props = new Properties();
     props.put("bootstrap.servers", "localhost:9092");
     props.put("linger.ms", 1);
     props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
     props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
     
     producer = new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
     
     
    To enable loggers in your implementation, get a logger instance from the logger factory with name "e2eagentOtlpTraces". The logs generated will be added to a file e2eagentOtlpTraces.log.

    Usage example:


        private final Logger e2eagentOtlpTraces = LoggerFactory.getLogger("e2eagentOtlpTraces");
     
    Note: In the example, you add the serverInfo map to the systemConnection object.
    Usage example:

    The TargetServerConnection object is initialized at the class level.

     private final TargetServerConnection systemConnection = new TargetServerConnection(); 

        final HashMap<String, Object> serverInfo = new HashMap<String, Object>();
        serverInfo.put("PROPERTIES", props);
        serverInfo.put("CONNECTION", producer);
        systemConnection.setConnection(serverInfo);

 *
 */
public class KafkaTraceExportConnectorImpl implements TraceExporterConnector {
    
    private final Logger e2eagentOtlpTraces = LoggerFactory.getLogger("e2eagentOtlpTraces");
    
    private final TargetServerConnection systemConnection = new TargetServerConnection();
    /**
    This method is used to connect to the plugged-in external Kafka server from End-to-End Monitoring.

    1. To connect using SSL, do the following steps.

      a. Enable the SSL property "exporter.tls" (for example, set it to "true") in the agent.config file.

      b. Use this approach if the target server requires a client certificate for its identity.

    Usage example for SSL configuration.

        String truststorePassword = targetServerConfig.getString(TargetServerConfig.EXPORTER_TRUSTSTORE_PASSWORD);
        if (truststorePassword != null && !truststorePassword.isEmpty()) {
            kafkaProperties.put("security.protocol", "SSL");
            kafkaProperties.put("ssl.truststore.location",
                    targetServerConfig.getString(TargetServerConfig.TRUSTSTORE_LOCATION));
            kafkaProperties.put("ssl.truststore.password", truststorePassword);
            String keyStoreLoc = targetServerConfig.getString(TargetServerConfig.KEYSTORE_LOCATION);
            if ((keyStoreLoc != null && !keyStoreLoc.isEmpty())) {
                kafkaProperties.put("ssl.keystore.location",
                        targetServerConfig.getString(TargetServerConfig.KEYSTORE_LOCATION));
                kafkaProperties.put("ssl.keystore.password",
                        targetServerConfig.getString(TargetServerConfig.KEYSTORE_PASSWORD));
                kafkaProperties.put("ssl.key.password",
                        targetServerConfig.getString(TargetServerConfig.KEY_PASSWORD));
            }
            if (!targetServerConfig.getBoolean(TargetServerConfig.NO_OP_HOSTName_VERIFIER)) {
                kafkaProperties.put("ssl.endpoint.identification.algorithm", "");
            }
        } else {
            // -- Password initializing from IS PassMan is in progress.
            return null;
        }
        
    2. Skip the above step if the server doesn't require a client certificate for its identity.
    3. If a username and password are configured for the Kafka cluster, use the following mechanism to configure the plain SASL credentials in the connection properties.
    Usage example:


        kafkaProperties.put("sasl.jaas.config",    "org.apache.kafka.common.security.plain.PlainLoginModule required username="+user+" password="+password+";");
        kafkaProperties.put("security.protocol", !targetServerConfig.getBoolean(TargetServerConfig.TLS) ? "SASL_PLAINTEXT" : "SASL_SSL");
        kafkaProperties.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        
    Note:

    Currently, single SASL client authentication is supported for both PLAINTEXT and SSL.
    The E2EM agent is loaded during Integration Server startup. Password retrieval from the password manager can take a while, so add logic to retry until the password is received.

    4. The connection object has to be added to the TargetServerConnection for the exporter implementation to reference.
    The TargetServerConnection object is initialized at the class level, as shown below.

     private final TargetServerConnection systemConnection = new TargetServerConnection(); 

        final HashMap<String, Object> serverInfo = new HashMap<String, Object>();
        serverInfo.put("PROPERTIES", props);
        serverInfo.put("CONNECTION", producer);
        systemConnection.setConnection(serverInfo);
        
    Note:
    Configured passwords are retrieved from the Integration Server Password Manager utility.
     
     * 
     * @param targetServerConfig A TargetServerConfig object with parameters
     *                             specific for connecting to the external server.
     *                             Parameters defined in the Connectivity
     *                             Information while creating the Exporter from E2EM
     *                             can be retrieved using getters.
     * @return TargetServerConnection object with a connection to the KAFKA server
     * @throws Exception
     */
    @Override
    public TargetServerConnection connect(TargetServerConfig targetServerConfig) throws Exception {
       // Code to prepare a connection resource object
        return systemConnection;
    }
    
    /**
     * This method is used to release the KAFKA server connection from End-to-End
     * Monitoring.
     * 
     * Here is an example,
      final HashMap<String, Object> serverInfo = (HashMap<String, Object>) systemConnection.getConnection();
       final Producer<String, TracesData> kafkaServer = (Producer<String, TracesData>) serverInfo.get("CONNECTION");
       kafkaServer.flush();
       kafkaServer.close();
  
     * @param systemConnection A TargetServerConnection object containing the
     *                         connection to the plugged-in KAFKA server.
     * @throws IOException
     */
    @SuppressWarnings("unchecked")
    @Override
    public void release(TargetServerConnection systemConnection) throws Exception {
     // code to release the connection resource
    }
    /**
     This method is used to check the connectivity with the external Kafka server.

    The status is returned as true/false based on the kafka server response.

    Usage example:


     final HashMap<String, Object> serverInfo = (HashMap<String, Object>) systemConnection.getConnection();
        Properties properties = (Properties) serverInfo.get("PROPERTIES");
        TargetServerConfig externalServerConfig = (TargetServerConfig) serverInfo.get("CONFIG");
        try (AdminClient client = KafkaAdminClient.create(properties)) {
            DescribeTopicsResult topics = client
                    .describeTopics(Arrays.asList(externalServerConfig.getString(TargetServerConfig.KAFKA_TOPIC)));
            return (topics != null?true:false);
     }
     
     */
    @SuppressWarnings("unchecked")
    @Override
    public Boolean getConnectivityStatus(TargetServerConnection externalSystemConnection) throws Exception {
      // Code to check the health of target server.
      return true;
    }
}

Kafka Client reference implementation

This class is used to execute the trace export operation and must implement the TraceExporterClientService interface.

Example:

/**
 * 
    A KAFKA client implementation that publishes records to the Kafka cluster.

    The producer is a thread-safe implementation and has been optimized for better performance.

    The addChannelListener method adds a listener to notify the caller of the status of the execution. To add a listener, do the following:

    private List<OutboundOtlpHttpChannelListener> listeners = Collections.synchronizedList(new LinkedList<OutboundOtlpHttpChannelListener>()); 
      
      public void addChannelListener(OutboundOtlpHttpChannelListener listener) {
            listeners.add(listener);
        }
      
      
    To enable loggers in your implementation, get a logger instance from the logger factory with name "e2eagentOtlpTraces". The logs generated will be added to a file e2eagentOtlpTraces.log.

    Usage example:

        private final Logger e2eagentOtlpTraces = LoggerFactory.getLogger("e2eagentOtlpTraces");
 *
 */
public class KafkaTraceExportClientServiceImpl implements TraceExporterClientService {
    private final Logger e2eagentOtlpTraces = LoggerFactory.getLogger("e2eagentOtlpTraces");
    
    private List<OutboundOtlpHttpChannelListener> listeners = Collections.synchronizedList(new LinkedList<OutboundOtlpHttpChannelListener>());

    /**
    
    This method is used to export the trace data to the plugged-in external server.

    Here is a simple example of using the producer to send records containing a string key and trace data as the key/value pairs.

    Get the connection resource "Producer" from TargetServerConnection.


        final HashMap<String, Object> serverInfo = (HashMap<String, Object>) systemConnection.getConnection();
        Producer<String, TracesData> producer = (Producer<String, TracesData>) serverInfo.get("CONNECTION");
        TargetServerConfig externalServerConfig = (TargetServerConfig) serverInfo.get("CONFIG");
        producer.send(new ProducerRecord<>(externalServerConfig.getString(TargetServerConfig.KAFKA_TOPIC),
                "trace", traceData), new Callback() {
                    @Override
                    public void onCompletion(RecordMetadata metadata, Exception e) {
                        if (e != null) {
                            KafkaTraceExportClientServiceImpl.this.notify(traceData.getResourceSpansCount(),
                                    false);
                        } else {
                            long count = metadata.offset();
                            KafkaTraceExportClientServiceImpl.this.notify(traceData.getResourceSpansCount(),
                                    true);
                        }
                    }
                });
         
    The producer send() method is asynchronous. When called, it adds the record to a buffer of pending record sends and immediately returns. This allows the producer to batch together individual records for efficiency.

    The acks config controls the criteria under which requests are considered complete. The default setting "all" will result in blocking on the full commit of the record, the slowest but most durable setting.

    Usage example: The value is configurable in the agent config file. Applicable values are - 0, 1, all


        exporter.ack=all
    If the request fails, the producer can automatically retry. The retries setting defaults to Integer.MAX_VALUE, and it's recommended to use delivery.timeout.ms to control retry behavior, instead of retries.

    Usage example: The value is configurable in the agent config file.


        exporter.retries=0
    Note: As shown in the above examples, Kafka properties are configurable in the agent config file. Refer to the list below:
    exporter.linger_ms_config
    exporter.request_timeout_ms_config
    exporter.send_buffer_config
    exporter.receive_buffer_config
    exporter.compression_type_config
    exporter.max_request_size_config
     * 
     * @param systemConnection A TargetServerConnection object containing
     *                         parameters specific to run the pushing action through
     *                         the KAFKA server.
     * @param traceData        A TracesData object containing trace data.
     * @return A Boolean
     * @throws Exception if an error occurs in the exporting process.
     */
    @SuppressWarnings("unchecked")
    @Override
    public void send(TargetServerConnection systemConnection, TracesData traceData)
            throws Exception {      
              // Code to process the trace data to external server.
    }
    
    /**
    The configured listener class gets a status update (true/false) as and when it receives a response from the async execution.

    Here is an example of how to send an update to the listener class using a private notify method.

 
      private void notify(int count, boolean status) {
          for (OutboundOtlpHttpChannelListener listener : listeners) {
              listener.statusChanged(count, status);
          }
      }
     */
    public void addChannelListener(OutboundOtlpHttpChannelListener listener) {
        listeners.add(listener);
    }
}

SSL configuration

Secure Sockets Layer (SSL) configuration contains attributes that control the behavior of client and server SSL endpoints. To enable SSL from the End-to-End Monitoring self-hosted agents to the target server, use the following configuration.

Property | Description | Sample value
exporter.tls | Use this property for secure SSL/TLS connections. Set this property to true to enable secure connections. The default value is false. | false
exporter.no_op_hostname_verifier | Use this property to check the hostname of the connecting destination server. This property is applicable only if the exporter.tls property is enabled. The default value is false. | false
exporter.type | Use this property to provide the truststore type. Ensure that the Java truststore contains the imported external server certificate. Valid values are JKS or PKCS12. This property is applicable only if the exporter.tls property is enabled. The default value is JKS. | ${SW_AGENT_KEYORTRUSTSTORE_TYPE:JKS}
exporter.truststore_location | Use this property to provide the full path of the truststore file. This property is applicable only if the exporter.tls property is enabled. | ${SW_AGENT_TRUSTSTORE_LOCATION:}
exporter.truststore_password | Use this property to provide the password of the truststore. This property is applicable only if the exporter.tls property is enabled. | ${SW_AGENT_TRUSTSTORE_PASSWORD}
Note
Two-way SSL communication is currently not supported.
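
For illustration, one-way SSL to the target server might be enabled as follows. This is a sketch only; the truststore path and password are placeholders, and the exporter.tls and exporter.no_op_hostname_verifier values are written as plain assignments because the table above does not document environment-variable placeholders for them.

    exporter.tls=true
    exporter.no_op_hostname_verifier=false
    exporter.type=${SW_AGENT_KEYORTRUSTSTORE_TYPE:JKS}
    exporter.truststore_location=${SW_AGENT_TRUSTSTORE_LOCATION:/opt/softwareag/certs/target-truststore.jks}
    exporter.truststore_password=${SW_AGENT_TRUSTSTORE_PASSWORD:<truststore_password>}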

Disabling agent tracing or Uninstalling an agent

You must disable tracing if the Integration Server malfunctions due to errors in the End-to-End Monitoring agent.

To disable tracing without restarting Integration Server:

  1. In Integration Server, go to Administration > Packages > Management.

  2. In the Package List, click the Home icon for the WmE2EMIntegrationAgent package. For the API Gateway agent, click the Home icon for the WmE2EMAPIAgent package.

  3. In the dynamicagent.config section, click Edit and change the SW_AGENT_DISABLE_TRACING property to true. Click Update.

  4. Verify the confirmation message.

Uninstalling an agent

To completely uninstall the End-to-End Monitoring agent from the Integration Server:

  1. In Integration Server, go to Administration > Packages > Management. In the Package List, click the Home icon for the WmE2EMIntegrationAgent or WmE2EMAPIAgent package. In the End-to-End Monitoring Status and Configuration page, Product Configuration section, click Restore.

  2. Optionally, in Integration Server, go to Administration > Packages > Management. In the Package List, click the Delete icon for the WmE2EMIntegrationAgent or WmE2EMAPIAgent package.

Important considerations in product configuration

Integration Server logs scale based on the logger configuration. For example, if you are running services in Integration Server, the session log file size or service log file size is governed by Integration Server and not by End-to-End Monitoring.

Self-hosted transactions have not been tested with the End-to-End Monitoring collector. Do not route your self-hosted transactions to the collector, to avoid overloading it.

Note
End-to-End Monitoring currently supports OTLP/HTTP, which uses Protobuf payloads encoded either in binary format or in JSON format. Regardless of the encoding, the Protobuf schema of the messages is the same for OTLP/HTTP. Support for OTLP/gRPC is planned for a future release.

Currently unsupported capabilities

If you have an existing hybrid monitoring agent setup installed using the Software AG Installer and Software AG Update Manager based fixes, use the following steps to install the IS package-based hybrid monitoring agent.