Blog

A few hours ago I released a new version of the Jira User Export app for Atlassian Jira. Version 1.2.0 adds support for MS Excel (XLSX) when exporting Jira users, so Jira users can now be exported to the following formats:

  1. JSON

  2. XML

  3. CSV – comma separated

  4. CSV – semicolon separated

  5. XLSX

I have also created a new REST endpoint for handling Jira user properties, so it is now possible to add Jira user properties programmatically in a flexible way.
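
The endpoint below is purely illustrative; the actual path and JSON payload are defined by the app and may differ. A sketch of what adding a user property via REST with curl could look like:

Add a Jira user property via REST (hypothetical endpoint)
# NOTE: the path and JSON body are assumptions, not the app's documented API
$ curl -u admin:admin -X PUT \
    -H "Content-Type: application/json" \
    -d '{"key": "department", "value": "Engineering"}' \
    "http://localhost:8080/jira/rest/userexport/1.0/user/jdoe/properties"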

The Jira User Export app has been tested successfully on Jira 7.0.0, which is great news. I was a bit nervous about the performance on this version, but it was OK. So the Jira User Export app now supports Jira from version 7.0.0 to version 8.2.1.

I hope some Jira administrators will find the app useful.

Yesterday I passed the ACP-100 exam, so now I am an Atlassian Certified Professional Jira Administrator. A bit cool. The exam was tough, so you will have to study even if you are an experienced Jira administrator. You will be tested on almost every piece of functionality in Jira, so prepare yourself. I got 68% and needed 65% to pass, so it was close. I was aiming for a better result, but I passed.

To prepare for the exam I recommend the following:

  1. Purchase and read the JIRA Strategy Admin Workbook. It is a very useful resource.
  2. Do the case studies recommended by Atlassian. You can find them here: Case studies

Preparing for the exam has been extremely useful and I have learned a lot. I consider myself a Jira plugin developer more than an administrator, but being Atlassian certified is proof that you know something about the area. Next up is the ACP-200 Confluence Administrator certification, and it is going to be challenging.

Searching custom fields is now available in Jira 7.12. This feature is extremely useful: no more basic text searching on the custom field page. But searching is not the only new feature on the custom field page. I noticed the following after upgrading to Jira 7.12:

  1. Searching Jira custom fields.
  2. Paging on the custom fields page, which means faster page loads.
  3. A nice pop-up for linked project contexts.
  4. A nice pop-up for linked screens.

Many Jira instances have 100+ custom fields, so the search is my favorite: the ability to find a specific field instantly is a real time saver.

You can use VisualVM to monitor your remote JIRA instance if you want to take a thread dump or simply monitor the application. You will need to do the following in JIRA:

  1. Enable JMX monitoring. Administration > System > JMX Monitoring
  2. Insert the properties below in your <JIRA_INSTALL>/bin/setenv.sh file to expose JMX. You will have to restart JIRA for the change to take effect.

Parameters in setenv.sh
# Enable remote JMX; authenticate=true requires a jmxremote.password and
# jmxremote.access file to be configured (see the JDK JMX documentation)
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=true
# Pin both the registry port and the RMI port so they can be tunneled
-Dcom.sun.management.jmxremote.port=20000
-Dcom.sun.management.jmxremote.rmi.port=21000
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=<your-jira-ip>
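
After the restart you can verify on the JIRA server that both ports are listening. A quick check using ss (assuming a reasonably recent Linux):

Check the JMX ports on the JIRA server
$ ss -ltn | grep -E ':(20000|21000)'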

To connect to the JMX port you can make an SSH tunnel to the JIRA server. Because JMX uses two ports (the registry port and the RMI port), both must be forwarded, and keeping the local port numbers identical to the remote ones makes the RMI redirect work:

SSH Tunnel to JIRA
$ ssh -L 20000:localhost:20000 -L 21000:localhost:21000 user@jiraserver.com

Add the JMX connection in VisualVM (File > Add JMX Connection) and point it at localhost:20000.

Now you are ready to explore the JIRA application.
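
If you only need a thread dump and not the full GUI, the JDK's jstack tool run directly on the JIRA server is a simple alternative (assuming a JDK is installed there):

Thread dump with jstack
# find the JIRA Java process id first, e.g. with: ps aux | grep java
$ jstack <jira-java-pid> > /tmp/jira-threaddump.txt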

A week after the release of macOS Mojave I decided to do the upgrade. During the last few major macOS upgrades I have not experienced any problems whatsoever, but this time I encountered a small problem that took me about 30 minutes to figure out.

I downloaded macOS Mojave and was ready to install, but for some reason the installation would not proceed. At the last step in the installer, where you select the disk for the new macOS, I got the following error: “Unable to install, Damaged Core Storage Users”. This error did not make any sense at all. A little Google kung fu did not give me any useful results, but after a while of searching I started to wonder if FileVault could be the issue, since the data on my MacBook Pro is encrypted with macOS FileVault. I opened System Preferences, went to Security & Privacy and then the FileVault tab, and bingo: my user was unable to unlock the disk.

The trick is to click the “Enable Users…” button and choose your user. Then quit the macOS Mojave installer and reopen it, and you will be able to complete the installation. An hour later I was running macOS Mojave, nice. I love the dark UI by the way.

After some consideration I decided that today was the day to upgrade my Intel NUC to Ubuntu 18.04 LTS server (Bionic Beaver). The upgrade took 15 minutes with no problems, yay. After reading some articles on the big Internet, the recommended approach was the following:

$ sudo apt update              # refresh the package lists
$ sudo apt upgrade             # install available updates
$ sudo apt dist-upgrade        # handle updates with changed dependencies
$ sudo apt autoremove          # remove packages that are no longer needed
$ sudo do-release-upgrade -d   # -d is required to get 18.04 before its first point release

At some point you are asked if you want the upgrade to remove obsolete packages. Say no to that; I have heard it can cause problems later on, and you can run apt autoremove after the upgrade instead. Ubuntu 18.04 ships the following updated packages:

  • Apache 2.4.29 (from 2.4.18)
  • nginx 1.14.0 (from 1.10.3)
  • Python 3.6.5 (from 3.5.1)
  • Ruby 2.5 (from 2.3)
  • Go 1.10 (from 1.6)
  • PHP 7.2 (from 7.0)
  • Node.js 8.10 (from 4.2.6)

Let us see how the system runs over the next few days. My PostgreSQL version is now 10, and both my JIRA and Confluence use it as their primary datastore. But all is good, so far.
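
A quick sanity check after an upgrade like this, just to confirm the release and the database client version:

Verify the upgrade
$ lsb_release -ds    # prints the release, e.g. "Ubuntu 18.04 LTS"
$ psql --version     # prints the PostgreSQL client version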

PostgreSQL is an extremely powerful open source database that I use for several homemade applications and for third-party applications like Atlassian Confluence and JIRA. I have used PostgreSQL for many years, and it is my preferred RDBMS because of its performance, stability and capabilities.

PostgreSQL autovacuum daemon

PostgreSQL has the autovacuum daemon that will do some database housekeeping for the following reasons:

  1. To recover or reuse disk space occupied by updated or deleted rows.

  2. To update data statistics used by the PostgreSQL query planner.

  3. To update the visibility map, which speeds up index-only scans.

  4. To protect against loss of very old data due to transaction ID wraparound or multixact ID wraparound.

Recent versions of PostgreSQL have autovacuum enabled by default, but you can check whether the daemon is running with the following command:

ps aux | grep autovacuum | grep -v grep
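
You can also check from inside the database as the postgres user; both commands below are standard PostgreSQL:

Check autovacuum settings with psql
$ psql -c "SHOW autovacuum;"    # on/off
$ psql -c "SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';"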

PostgreSQL full vacuum with crontab

It might be overkill, as my PostgreSQL databases are not that big, but I still like to do a full vacuum from time to time.

It is easy to set up a crontab entry that runs a full database vacuum at a specific time.

# List cron jobs for the postgres user
postgres@ubuntuserver:~$ crontab -l
# Edit the crontab
postgres@ubuntuserver:~$ crontab -e
# Run a full vacuum with analyze on all databases every second day at 01:00
# and pipe the output to a log file
0 1 * * 1,3,5,7 /usr/bin/vacuumdb --all --full --analyze > /apps/logs/cron/postgresql.log 2>&1
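
Before relying on the schedule it is worth running the command once by hand as the postgres user, to confirm it works and that the log path is writable:

Manual test run
postgres@ubuntuserver:~$ /usr/bin/vacuumdb --all --full --analyze > /apps/logs/cron/postgresql.log 2>&1
postgres@ubuntuserver:~$ tail /apps/logs/cron/postgresql.log

Keep in mind that a full vacuum takes an exclusive lock on each table while it is rewritten, so schedule it for a time when the applications are idle.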

This is just the basics, but it is still powerful enough to use.

Finally got Langhorn Web up and running. Over the last year I have tried to convince myself that the best way to learn about Confluence, JIRA, Bamboo and Bitbucket is to read about, use and experiment with the applications.

Well, I use Confluence, JIRA, Bamboo and Bitbucket at work on a daily basis and assist Netic A/S customers with configuration and hosting issues regarding these applications. But I would like to have my own Confluence running, where I can share my code (some of it) and my experiences (some of them) with the Atlassian application suite.

So should I run my own Confluence at home or use one in the cloud? I decided to use my Intel NUC as the server for my Confluence setup, and I must say, this tiny machine can do the work. So Langhorn Web is running on an Intel NUC with an i3 CPU, 16 GB RAM and a 256 GB SSD, with Ubuntu server installed. Works like a charm.