Tuesday, 12 November 2013

Export JIRA Data Using Kettle REST Client

Kettle version 4.4 provides a step which can connect to remote REST services. As JIRA exposes such an interface, it is entirely possible to fetch its data remotely using this step. A few tricks need to be applied to do that, though.

Mandatory Input Row

The REST step needs an input row before it starts communicating with the remote host. This was hard for me to figure out, since you would expect such a step to produce its data directly from the communication itself.
So if you need to read data from a remote JIRA instance, you'll have to provide at least one (in this case generated) input row.

Configure JIRA Connection

To connect to a remote JIRA server you'll need its root URL.
Add the root URL and don't forget to set the REST call statement; in this case it is the search statement for the project with key 'DOX'.

Set the HTTP method to GET and the Application type to JSON. The resulting JSON text will be passed to the following steps under the 'result' field name.

You also need to provide authentication information if your JIRA instance is not publicly accessible.
Put your JIRA user name in the 'Http Login' field and your password in the 'Http Password' field.
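To make the configuration concrete, here is a minimal Python sketch of the request the REST step ends up issuing. The host name and credentials are placeholders, not values from the original post; only the project key 'DOX' and the JIRA search endpoint come from the text above.

```python
import base64
import urllib.request

# Hypothetical root URL and credentials -- substitute your own instance.
base_url = "http://jira.example.com"
endpoint = base_url + "/rest/api/2/search?jql=project=DOX"

request = urllib.request.Request(endpoint, method="GET")

# Equivalent of the 'Http Login' / 'Http Password' fields: HTTP Basic auth.
credentials = base64.b64encode(b"jira-user:secret").decode("ascii")
request.add_header("Authorization", "Basic " + credentials)
request.add_header("Accept", "application/json")

# urllib.request.urlopen(request) would return the JSON search result,
# which the Kettle step places in the 'result' field.
print(request.full_url)
print(request.get_header("Accept"))
```

Opening the request is left out on purpose: the point is only to show which URL, method, and headers the step's fields translate into.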

Extract Data

Since the REST service step provides JSON text, you can use the 'Json Input' step shipped with Kettle, the same way as described in Export JIRA Data Using Pentaho DI.
Just provide a JSONPath expression for each field you need to extract. The picture shows how to extract the task key, task summary and task status.
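The same extraction the 'Json Input' step performs can be sketched in plain Python. The response fragment below is made up for illustration, but the nesting (issues, fields, status.name) follows the layout of a JIRA REST search result:

```python
import json

# A tiny, made-up fragment of a JIRA search response; real responses
# contain many more fields per issue.
result = json.loads("""
{
  "issues": [
    {"key": "DOX-1",
     "fields": {"summary": "Write docs", "status": {"name": "Open"}}},
    {"key": "DOX-2",
     "fields": {"summary": "Review docs", "status": {"name": "Closed"}}}
  ]
}
""")

# One output row per issue: key, summary, status -- the three fields
# the 'Json Input' step is configured to pull out.
rows = [(issue["key"],
         issue["fields"]["summary"],
         issue["fields"]["status"]["name"])
        for issue in result["issues"]]
print(rows)
```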

PDI Jira

The idea behind PDI Jira is to specialize in using the JIRA REST interface for integrating JIRA data with Pentaho DI. It is at a very early stage, though, and still needs the JSON Input step for parsing the result.

Wednesday, 30 October 2013

Export JIRA Data Using Pentaho DI

The JIRA tool is very useful for organizing the development process. Although a lot of information is available in the application, along with plenty of plugins that give different views, sometimes there is a need to extract data from the JIRA database. It is normal for a JIRA server to hide its database and forbid direct access to it, though. Most of the time a web interface is used for accessing particular JIRA data.

There are also lots of tools and development libraries for accessing this web interface. What I'm providing is a useful plugin for those who use Pentaho Data Integration. The plugin is a simple step that connects to a remote JIRA instance and fetches data into the transformation using simple JQL (JIRA Query Language) expressions. The input data can then be processed in different ways in PDI transformations or jobs.

You can get a trial JIRA version here, and you can download Pentaho DI absolutely free of charge from here.

Plugin Installation

The PDI Jira Plugin can be downloaded from here.
After downloading the zipped package, just unzip it into the PDI steps folder:

${pdi.home}/plugins/steps

where ${pdi.home} is the folder where PDI is installed. That is all you need. Then start the PDI Spoon GUI.

PDI Jira Quick Start

Start a new transformation with Spoon, then add the 'Jira Plugin' step from the 'Input' category.

After double-clicking the Jira Plugin step you'll be able to edit the connection properties of the remote Jira instance.
The connection tab also has a 'Test Connection' button which can be used to test the connection to the remote instance.

On the JQL tab you can edit the Jira Query Language statement that will be used to obtain data from the remote Jira server.
Data from the Jira instance is passed to the next step in the transformation as a JSON string, in a field named as specified in 'Output field name'.

For the moment you can use the JSON Input plugin available in Pentaho DI to extract the required data from the JSON text. The JSON Input plugin uses JSONPath, which is much like the XPath query language. For instance, to extract the issue key, issue status and summary you can configure the following JSONPath statements:
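The original screenshot with the configured paths is missing here. As an illustration, and assuming the standard JIRA REST search result layout, expressions of the following shape would pull the issue key, issue status and summary:

```
$.issues[*].key
$.issues[*].fields.status.name
$.issues[*].fields.summary
```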

Thursday, 7 March 2013

Ordering Surefire Tests on Linux and Windows

Although it is important for tests to run independently, sometimes you need to preserve a fixed order when executing them with Maven Surefire. The default ordering used by this Maven plugin is 'filesystem', which means unit tests execute in the order the file system lists them. The problem is that NTFS and Linux file systems do not provide the same ordering: NTFS lists files alphabetically, while Linux file systems give no such guarantee.

The solution is to explicitly set the run order of Maven Surefire test execution to alphabetical, if you need the project to build the same way on both systems and the test execution order matters:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.12</version>
    <configuration>
        <runOrder>alphabetical</runOrder>
    </configuration>
</plugin>