Learning from experiments: Implementing an Atlassian Connect add-on with Mule

This year I got the chance to gain some experience with Atlassian Connect as well as Mule and the Anypoint Platform. At first sight these two things have nothing to do with each other. But on closer inspection, a Connect add-on needs to provide an API and calls the API of the Atlassian product. The MuleSoft products offer API management and integration, and they also support a microservice approach – which fits a Connect add-on very well. So I tried to build a Connect add-on with Mule.

That’s not what Mule is explicitly made for, and my add-on should not only provide an API but also deliver webpages with HTML content. So this was an experiment with an unknown outcome to learn from.

This article focuses on Mule, but for a better understanding I start with a short description of Connect and the app being developed.

Use Case

A general introduction to Connect can already be found here on this blog. For our use case we want an add-on with a workflow post-function that copies the value of a specific field from the parent issue to the issue being changed. These post-functions can be applied to any transition in a JIRA workflow. A post-function has three visual parts:

  • create: form to configure the post-function, in our case select the field to copy
  • edit: form to edit the post-function configuration
  • view: short description in the list of active post-functions

The add-on needs to provide endpoints where JIRA can fetch the HTML for each case to embed it in the page. Then there are some endpoints that execute the logic or provide data:

  • triggered: executes the post-function
  • descriptor: delivers the descriptor of the add-on (needed for installation)
  • add-on-installed-callback: a lifecycle hook that is executed after add-on installation

The goal is to implement all of this in Mule. There is already a Spring Boot implementation, so I had a working solution and knew which actions are required.

Developing the App

Setup hint: For development I used a JIRA cloud instance and a local add-on, tunneled with ngrok.

I started by designing the API of the add-on using RAML (RESTful API Modeling Language), describing the six endpoints mentioned above. From this RAML specification, we can generate Mule flows using APIkit. To test early and get fast feedback, the first thing needed is the descriptor to install the add-on in the JIRA instance. So we need to serve a static JSON file from our add-on.

first draft of the API

the descriptor for atlassian connect

Learning #1: Delivering static content

Of course we could just use Set Payload and paste the JSON there. But a separate file is much easier to handle (editing, formatting and so on). There are multiple ways a file can be read and provided to a Mule flow. Most of the proposed solutions are rather strange or complicated, like creating a Spring bean with IOUtils, providing the file name as constructor argument and accessing this bean through the app registry. Another proposed solution was to use the underlying Thread to get the resource as a stream.

An easier solution is to use the HTTP Static Resource Handler provided by Mule. The critical point for me to understand was to leave the APIkit flows behind – the HTTP Static Resource Handler is placed directly behind an HTTP Connector in its own flow. This took some time to realize.
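In Mule 3 XML this looks roughly like the following sketch (flow name, listener config and paths are illustrative, not taken from the actual project):

```xml
<!-- Serve the static descriptor in its own flow, next to the APIkit flows -->
<flow name="serve-descriptor">
    <http:listener config-ref="HTTP_Listener_Configuration"
                   path="/atlassian-connect.json" doc:name="HTTP"/>
    <http:static-resource-handler resourceBase="${app.home}/web"
                                  defaultFile="atlassian-connect.json"
                                  doc:name="HTTP Static Resource Handler"/>
</flow>
```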

using a static resource handler besides the APIkit router

Now we are ready to install the add-on – but it does not do anything. So the next thing we need is a way to create the desired post-function.

Learning #2: Producing dynamic HTML

Mule provides a Parse Template Transformer which allows you to define a template file with MEL (Mule Expression Language) expressions. This can be used to generate HTML content dynamically at runtime, but MEL is not intended to be a full templating language and does not support all needed features. So there is no easy way to loop over the existing fields returned from JIRA to create an HTML select element with all the fields as options. In this use case it was possible to implement this in JavaScript: the list of fields is not requested by the add-on server and rendered into the HTML. Instead, the browser on the client side requests these fields and renders them via JavaScript.
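As a rough sketch, the APIkit-generated flow for the create endpoint then only needs a Parse Template Transformer; the file name and variable names here are illustrative:

```xml
<!-- body of the APIkit-generated flow for GET /create -->
<parse-template location="web/create.html" doc:name="Parse Template"/>
<!-- create.html may contain simple MEL expressions, e.g.
     <input type="hidden" name="postFunctionId" value="#[flowVars.postFunctionId]"/>
     while the select element is filled via JavaScript in the browser -->
```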

For more complex use cases there is also the possibility to include a real templating engine like Velocity or FreeMarker in Mule.

To display the post-function properly in HTML, we also need to serve more static content like JavaScript and CSS files. For this we can reuse the Static Resource Handler, which serves from a directory. The only thing we need to change is the path in the HTTP Listener, which can contain an asterisk as a wildcard.
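A sketch of such a flow, assuming the static files live in a web directory inside the application (names are illustrative):

```xml
<flow name="serve-static-files">
    <!-- the asterisk matches all files below /web -->
    <http:listener config-ref="HTTP_Listener_Configuration" path="/web/*" doc:name="HTTP"/>
    <http:static-resource-handler resourceBase="${app.home}/web"
                                  doc:name="HTTP Static Resource Handler"/>
</flow>
```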

wildcard in the path to serve all static files

Learning #3: Parsing JSON in MEL

The GET requests for edit and view contain the configuration of the post-function in JSON format as request parameters. So we can display the configuration in the text or prefill the fields on editing. But how do we get the values? A simple solution would be to use MEL to extract the field name directly from this configuration in a Set Variable Transformer. The variable can then be accessed in the template. Sounds good, but unfortunately MEL does not support JSON directly. The support for JsonPath was deprecated, so even if you read somewhere about an evaluator="json", you should not use this anymore. Instead, the JSON should be transformed to an object, preferably a java.util.HashMap, which can then be accessed. In our case we need to set the config param as payload first, because the JSON to Object Transformer operates only on the payload.
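A sketch of these steps in Mule 3 XML – the query parameter name and the field name are illustrative:

```xml
<!-- make the config query parameter the payload, so the transformer can work on it -->
<set-payload value="#[message.inboundProperties.'http.query.params'.config]" doc:name="Set Payload"/>
<json:json-to-object-transformer returnClass="java.util.HashMap" doc:name="JSON to Object"/>
<!-- now the configuration values can be accessed like map entries -->
<set-variable variableName="fieldName" value="#[payload.fieldName]" doc:name="Set Variable"/>
```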

part of the "edit" flow: config needs to be converted, therefore it must be the payload

Learning #4: Setting content-types

As already mentioned, in the flow to edit the post-function configuration we need to prefill the fields with the existing values and therefore have to handle some JSON data. This sets the content type of the message to application/json. The Parse Template Transformer then updates the payload of the message, but not the content type (there is no way to define it in this component). So the response was not accepted by JIRA. In such a case we need to update the content type manually, even if we already defined it in our RAML file for this endpoint. For this we can use Set Property, as described here, for example.
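In the flow this is a single Set Property element at the end (sketch):

```xml
<!-- outbound properties become HTTP headers of the response -->
<set-property propertyName="Content-Type" value="text/html; charset=UTF-8" doc:name="Property"/>
```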

setting content-type at the end of the flow explicitly

Learning #5: Handling json in json or: temporarily changing payload

Now that we have saved our post-function configuration, it’s time to implement the triggered flow. There we receive a POST request containing information about the issue, the transition and the post-function configuration itself (it is stored in the JIRA instance, not in our add-on, so JIRA must send it to us).

The content of the request is a large JSON document. We already saw that we can convert it to a HashMap to work with the properties. But now, when we need to find out which field we have to copy, we cannot access the property directly. JIRA has a somewhat strange behaviour here and provides the full configuration as a single String property.

snippet of the request, showing json embedded as string in json

So we need to convert this JSON string again into a HashMap. But wait – now we have overridden the whole payload with this configuration property. Thankfully, Mule provides a solution here: the Message Enricher. It allows us to define a transformer (or a series of transformers using a Processor Chain) which has its own payload scope. This means you can, for example, set the configuration as payload, transform it to an object, extract your values, then leave this scope and use the results of the inner processors to enrich the original message, whose payload is kept intact.
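A sketch of how this could look, assuming the configuration string sits in a configuration.value property and contains a fieldName entry (both names are illustrative):

```xml
<enricher target="#[flowVars.fieldName]" doc:name="Message Enricher">
    <processor-chain doc:name="Processor Chain">
        <!-- inside the enricher we may freely overwrite the payload -->
        <set-payload value="#[payload.configuration.value]" doc:name="Set Payload"/>
        <json:json-to-object-transformer returnClass="java.util.HashMap" doc:name="JSON to Object"/>
        <set-payload value="#[payload.fieldName]" doc:name="Set Payload"/>
    </processor-chain>
</enricher>
<!-- after the enricher the original payload is back, plus flowVars.fieldName -->
```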

the configuration is read inside a Message Enricher, so the original payload is not modified

Learning #6: Connecting to JIRA

The next thing we need is the value of the field in the parent issue, for which a request to JIRA is needed. Of course there are already connectors for JIRA: the Jira Connector and the JiraRest Connector. They provide a variety of operations that can be configured through the Mule properties in Studio.

The problem here is that these connectors require a username-password combination for the login. But in Connect there is an exchange of security information during installation, and the authentication is then done with JWT. We cannot create an extra user in every cloud instance we want to install our add-on in (in fact, JIRA creates one automatically behind the scenes).

So in conclusion, we need to implement the request to JIRA ourselves. This is no big deal, as JIRA provides a REST API and Mule has the HTTP Request Connector.
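A minimal sketch of such a request against the JIRA REST API (global config name and property placeholders are illustrative; authentication is still missing at this point):

```xml
<http:request-config name="JIRA_Request_Config" protocol="HTTPS"
                     host="${jira.host}" port="443" doc:name="HTTP Request Configuration"/>

<!-- inside the triggered flow -->
<http:request config-ref="JIRA_Request_Config" method="GET"
              path="/rest/api/2/issue/#[flowVars.parentKey]" doc:name="Request parent issue"/>
```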

Learning #7: Following hypermedia links

The JSON for the issue also contains a link to the parent. So why not follow this link to retrieve the needed value from the parent? This is what hypermedia and HATEOAS are made for, and what MuleSoft also advocates in presentations about API design.

Now there’s only one problem: Mule does not support following links (at least I did not find such a feature and got no response on Stack Overflow). So what we need to do is split the link into several parts, store them in variables and use these variables in our HTTP Request Connector. For the triggered flow this means three more steps, but in the end it works.
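java.net.URL can do the splitting inside MEL, so the extra steps are plain Set Variable elements (the payload structure and variable names here are illustrative):

```xml
<set-variable variableName="parentUrl"
              value="#[new java.net.URL(payload.issue.fields.parent.self)]" doc:name="Variable"/>
<set-variable variableName="parentHost" value="#[flowVars.parentUrl.host]" doc:name="Variable"/>
<set-variable variableName="parentPath" value="#[flowVars.parentUrl.path]" doc:name="Variable"/>
```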

splitting the url into variables for the http request

Learning #8: Storing information

Now when we try to read data for the parent, we get back HTTP status 401 – unauthorized. Remember the JWT and the post-installation trigger? We did not handle these until now. So we need to store the security information. In a real add-on we would need to do this separately for every installation. But this is only a test case. In the Spring Boot implementation we used a property file to store this information. Mule supports connecting to files, so it should be easy to do it the same way. I must admit that I had real problems converting the JSON to properties format and storing it in a file. But this pointed me to a better solution: Mule provides an ObjectStore, where I could easily store the HashMap and retrieve it later.
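A sketch with the ObjectStore connector – store the installation payload under the client key in the installed-callback flow, retrieve it later when a request must be signed (all names are illustrative):

```xml
<objectstore:config name="tenantStore" persistent="true" doc:name="ObjectStore: Configuration"/>

<!-- in the add-on-installed-callback flow -->
<objectstore:store config-ref="tenantStore" key="#[payload.clientKey]"
                   value-ref="#[payload]" doc:name="Store security info"/>

<!-- in the triggered flow, before building the JWT -->
<objectstore:retrieve config-ref="tenantStore" key="#[flowVars.clientKey]" doc:name="Retrieve"/>
```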

ObjectStore is easy to use

Learning #9: Creating the JWT token

Now, with the security properties, we can create a token to sign our request and then finally get the data of the parent issue (which we have been trying to do since learning #6, by the way).

I just copied the classes from the existing implementation over to this project and made some minor changes. The interesting part is how to call the Java class from Mule. If you search for Java in the palette, there are two results, both named “Java”: a Component and a Transformer. The difference between them is described in a blog post by MuleSoft; here is the short rule for which one to use:

Ergo, where the custom code converts data from one representation format to another, implement it as a transformer; where it creates new data, implement it as a component.

We want to create a new token, so this would be a component. But there are more ways to call Java code. There is, for example, Invoke, which has the same icon as the Java Component. It lets you invoke a Java method on a Spring bean defined in the global elements. I used this one for the generation of the token, but both of the other variants would also work.
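A sketch of the Invoke variant – the bean class and method names are illustrative stand-ins for the copied JWT code:

```xml
<spring:beans>
    <spring:bean id="jwtGenerator" class="com.example.jwt.JwtGenerator"/>
</spring:beans>

<!-- inside the flow: call jwtGenerator.createToken(sharedSecret, apiPath) -->
<invoke object-ref="jwtGenerator" method="createToken"
        methodArguments="#[flowVars.sharedSecret], #[flowVars.apiPath]"
        doc:name="Invoke"/>
```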

Which one to use?

For the token itself we used a library provided by Atlassian, which needs to be added to the pom.xml of the Mule application.
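Such a dependency in the pom.xml looks roughly like this (the exact artifact and version depend on the Atlassian library used; treat this as a sketch):

```xml
<dependency>
    <groupId>com.atlassian.jwt</groupId>
    <artifactId>jwt-core</artifactId>
    <version><!-- the version used in the project --></version>
</dependency>
```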

Learning #10: Using 3rd party libraries

If you think that now we are done and can finally get the value from the parent issue, you will probably be disappointed when running the application. Looking at the log, there will be a java.lang.NoSuchMethodError telling you that the runtime tried to invoke a method that doesn’t exist. But what is the reason, since the code compiled fine and Studio did not show any errors? It turns out that the Atlassian JWT library needs commons-lang 2.6, but the Mule runtime provides commons-lang 2.4, which does not contain the required method.

Simply adding the newer version to your project does not work; the class loader picks the older one first. Neither on Stack Overflow nor in the Mule forums could I get an answer leading to a working solution so far. So I replaced the jar in the runtime, which leads to error messages in Studio but works during execution. Of course this does not work on CloudHub, where MuleSoft provides the runtime.

I stopped at this point, as it was just an experiment and not intended for production use. If you really need this, you could try using another JWT library.

Further Steps

With the same recipes as described above, one could make a second request in which the subtask is updated with the new value. In the final solution there are some more steps, since you need to filter for issues that actually have a parent, and we also have a checkbox to decide whether an existing value should be overridden.

Without any refactoring, the final flow gets really huge

If we wanted to use this in production, there would be more things to do, like checking incoming requests for a valid JWT signature.

Conclusion

It was possible to implement an add-on with the required behaviour in Mule, even if there were some points where I wished it was easier. When you control the runtime, the add-on runs and works fine.

But the more important thing about this is that you can learn a lot from such experiments. You need to use a technology to understand and really learn it; this cannot be substituted by reading articles or taking part in a guided workshop. Only if you have to solve a problem by yourself can you really evolve.

Using a technology for a use case it is not primarily designed for gives you an even better understanding of its strengths and weaknesses and lets you discover functionality you may not have used before.

But be aware of which mode you are in – don’t do these experiments in critical projects or in production. Use them to learn in private projects, in proofs of concept, in technical spikes – or find a company offering a 4+1 model, so you can do these projects in your +1 time.

Erik Petzold

Erik Petzold works as Developer/Consultant at codecentric AG. He is mostly developing in Java and is always interested in learning new stuff.
