Wednesday, August 23, 2017

API Monitor Application (with object store persistence)

1.0 Overview

Figure 1.0 shows the main flow in the API monitoring Mule application. Click on this link for the source code. This article is based on the following GitHub commit id: 7f6386ba5e946eb38f9436525d6082dd9e24bc87.
Figure 1.0

I have written a short wiki on GitHub that explains how the application works. If you use the application, or if you modify it in a way that makes it better, please do check in your modifications so that it becomes a better health check and monitoring application for everyone.

Currently this application pings URLs to see if they are "up and running" or "down and unreachable". You could also modify it to do TCP pings against hosts and ports to check whether particular endpoints are up, i.e. you could ping database endpoints and so on.
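The TCP-ping idea mentioned above can be sketched in a few lines. This helper is not part of the application; it is an illustrative assumption of how such a check could work: the port counts as "up" if a plain socket connect succeeds within a timeout.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TcpPing {
    // Illustrative sketch (not in the repository): a TCP "ping" succeeds if a
    // socket connect to host:port completes within timeoutMs milliseconds.
    public static boolean ping(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false; // unreachable, refused, or timed out
        }
    }

    public static void main(String[] args) {
        // Hypothetical database endpoint, for illustration only.
        System.out.println(ping("db.example.org", 5432, 2000));
    }
}
```

You could wire a helper like this into its own Groovy message processor, alongside the existing URL ping.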

There is an article on DZone that describes how you could automate URL health checks via Flow Designer to periodically call this API Monitoring application.

In this article I elaborate on the internal workings of the application, so that you will be comfortable modifying it for your organization's use.

2.0 Digging Deeper into the Code

Figure 2.0a
Figure 2.0a shows the first half of the main flow. Message processor number (1) is put in place to transform any inbound types, especially byte array types, into Java objects. Message processor number (2) ensures that Java objects of type String are transformed into java.util.HashMap objects. The following table shows the code snippet placed inside the Groovy script message processor (2).
import groovy.json.JsonSlurperClassic

def jsonParse(def json) {
   new groovy.json.JsonSlurperClassic().parseText(json)
}

if (payload instanceof String){
  payload = jsonParse(payload)
}

return payload

The reason why I have used JsonSlurperClassic instead of the conventional JsonSlurper is that the classic parser creates an object of type java.util.HashMap, whereas the latter creates an object of type groovy.json.internal.LazyMap.

The reason why we need a HashMap instead of a LazyMap is that we want to be able to serialize and store the object into the MuleSoft object store, which brings us to message processor number (3).
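The serializability point can be demonstrated in plain Java: an object store can only persist values that are Serializable, and java.util.HashMap is, while the LazyMap produced by the regular JsonSlurper is not. This sketch (the payload keys are illustrative) shows a HashMap round-tripping through Java serialization:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

public class SerializableCheck {
    // java.util.HashMap implements Serializable, so writing it to an
    // ObjectOutputStream succeeds; a groovy.json.internal.LazyMap would
    // fail here with a NotSerializableException.
    public static byte[] serialize(Map<String, Object> payload) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(payload);
            }
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> payload = new HashMap<>();
        payload.put("url", "https://example.org/health"); // illustrative key
        System.out.println("serialized " + serialize(payload).length + " bytes");
    }
}
```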

The reason we are using the object store is so that we can persist the timestamp of the last notification (this is how we implement the elapsed-time check), as we don't want users to be bombarded with notification emails on every ping. Message processor number (3) retrieves the "lastPing" HashMap that was saved by previous pings.

The next message processor (4) is a collection splitter; it is always used in conjunction with the collection aggregator message processor. This is akin to a for-each scope: anything between (4) and (9) (in figure 2.0b) processes a single element of the array payload.

Figure 2.0b

Message processor number (5) actually pings the URL; it has the smarts to determine whether it is an HTTPS or an HTTP ping. The Groovy code snippet for message processor number (5) is illustrated in the following table.

import org.kian.mulesoft.*;
boolean isUp = false;

if (payload?.url != null){
  if(payload.url.split(':')[0].equalsIgnoreCase("http")){
     isUp = (new HttpClient()).ping(payload.url)
  }else if(payload.url.split(':')[0].equalsIgnoreCase("https")){
     isUp = (new HttpsClient()).ping(payload.url)
  }
}
return isUp;

As you can see, the code block calls one of two custom classes, HttpClient or HttpsClient, depending on the URL string passed to it. The ping returns true if the URL is reachable and false otherwise.
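The HttpClient and HttpsClient classes live in the linked repository. As a rough stand-in (an assumption, not the repository's actual implementation), such a ping might simply open a connection and treat any HTTP response, as opposed to a connect failure or timeout, as "reachable":

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class PingSketch {
    // Hypothetical stand-in for HttpClient/HttpsClient.ping(url): issue a
    // HEAD request and treat any response code as "up"; connect failures,
    // DNS failures, malformed URLs, and timeouts all count as "down".
    public static boolean ping(String url) {
        try {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("HEAD");
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            return conn.getResponseCode() > 0; // throws if unreachable
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(ping("https://example.org/health"));
    }
}
```

Whether a 4xx/5xx response should count as "up" is a policy decision you may want to make differently in your organization.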

The next message processor (6) is another Groovy message processor that appends new data to the original payload; the following code snippet shows the internal Groovy code. As discussed earlier, message processor number (2) converts the JSON input payload to a java.util.HashMap, so to append more information to the original payload (so that it can be passed back to the user calling the API) we use the HashMap.put method.
payload.put("isUp", flowVars.isUp)
payload.put("notification_timeStamp", (new Date()))
payload.put("ping_timestamp", (new Date()))
return payload

The next message processor (7) (from figure 2.0b) is a choice flow control scope. The choice flow control checks whether there was a previous ping attempt; this is possible because the application stores an object with the ping result in MuleSoft's default object store. If a previous store exists (retrieved by message processor number (3)), the choice flow control scope goes to the subflow (8a) that checks whether the interval has expired before deciding to send a notification; otherwise it goes to the subflow (8b) that sends a notification (if the ping returned negative results).
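The interval-expiry decision in (8a) boils down to comparing the elapsed time since the last notification against a configured interval. A minimal sketch of that logic, assuming the notification_timeStamp key shown in the snippets above and a caller-supplied interval in minutes:

```java
import java.util.Date;
import java.util.Map;

public class IntervalCheck {
    // Sketch of the (8a) decision: has enough time passed since the last
    // notification to allow sending another one? The notification_timeStamp
    // key matches the payload key used earlier in this article; the interval
    // parameter is an illustrative assumption.
    public static boolean intervalExpired(Map<String, Object> lastPing,
                                          long intervalMinutes, Date now) {
        Date last = (Date) lastPing.get("notification_timeStamp");
        if (last == null) {
            return true; // never notified before: treat the interval as expired
        }
        long elapsedMs = now.getTime() - last.getTime();
        return elapsedMs >= intervalMinutes * 60_000L;
    }
}
```

This is why the application distinguishes notification_timeStamp from ping_timestamp: pings happen on every poll, while notifications are throttled by this check.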

Message processor number (9), a collection aggregator, then collects the individual result payloads, i.e. if there are 2 elements in your original JSON input payload it collects both processing results (output payloads). Message processor number (10) is where the object store HashMap is constructed. The constructed object looks like figure 2.0c.

Figure 2.0c

The reason I have constructed the object this way is so that it can easily be retrieved, and the notification interval versus the actual elapsed time can be easily inspected. The object structure that you see in figure 2.0c is created by message processor number (10). The following shows the Groovy code snippet in number (10).

def objectStoreStruct = [:]

for (int i = 0; i < payload.size(); i++) {
  // key each ping result by its endpoint name for easy lookup later
  objectStoreStruct.put(payload[i].name, payload[i])
}

return objectStoreStruct
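Since figure 2.0c is an image, the same re-keying can be expressed as a small Java sketch to show the resulting structure: a map keyed by each monitored endpoint's name, whose values are the per-endpoint result maps built earlier in the flow (the field names below mirror the keys used in this article).

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class KeyByName {
    // Mirrors the Groovy loop in message processor (10): turn the aggregated
    // list of ping results into a map keyed by the endpoint's "name", so a
    // later run can look up one endpoint's last result directly.
    public static Map<String, Map<String, Object>> keyByName(
            List<Map<String, Object>> results) {
        Map<String, Map<String, Object>> store = new LinkedHashMap<>();
        for (Map<String, Object> result : results) {
            store.put((String) result.get("name"), result);
        }
        return store;
    }
}
```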

Message processor number (11) stores the current ping into Mule's default object store under the key "lastPing". The subsequent message processors after that are just there to prepare an output payload for the user.

4.0 MUnit Test

I have added one MUnit test to the application; you could always expand on this as you add more code to increase its health check utility.
The MUnit test I have added has about 60% test coverage. It is essentially a service test, as I have all outbound endpoints and also the object stores mocked, which means the test silos the health check application (hence making it a service test with the prefix STxxx).

5.0 Clearing Object Stores

When you run this health check application it persists object stores. If you want to delete/clear the persisted object store, there are different ways to do it depending on how you are running the application. The following subsections elaborate on ways to clear them. If you know of any other ways, please do post them in the comments section of this article.

5.1 Clearing Object Store in Anypoint IDE

If you are running the health check application in Anypoint Studio, the following steps show how to clear the persisted object store.

Right click on your project, go to Mule, then click "Clear Application Data"; the IDE will delete the persisted object store (as depicted in figure 5.1).
Figure 5.1

5.2 Clearing Object Store in CloudHub

If you have deployed the health check solution to CloudHub, you need to clear the persisted object store in Runtime Manager.

Log in to Anypoint Platform and go to Runtime Manager. In Runtime Manager, click on your application, and in the left menu pane select "Application Data"; from there you will see your persisted HashMap with the key "lastPing".
Figure 5.2

Click on the "lastPing" radio button and click the delete button at the top of the screen; this will remove/clear/delete the persisted object.

6.0 Conclusion


This application was created to conduct health checks on your APIs; you could further modify it to do TCP pings and conduct health checks on other endpoints. I created this health check API so that the MuleSoft community could use it, and this article has demystified its design and build. The next step is for you to download it and use it, and if you find that it needs improvement, contribute to the cause by checking in your changes to GitHub, so that the program's utility increases and the lives of MuleSoft developers and operations personnel become easier.
