How to train your Workflows

You know what? We’ve been a little system heavy the last few months. Let’s try something different…

So, we all know JIRA workflows. They guide your issues through their life-cycle, apparently only know three colors, and otherwise just sit there, right? What if I told you that you can train your workflows to do tricks? It’s true!

Today we will go over some of the ways I have trained my workflows over the years, and when you’d want to use each trick. The ultimate goal is to get you to see your workflow not merely as a faithful guide through the issue life-cycle, but as an active participant in it!

Some of these tricks require a plugin to enable – where that’s the case, I will clearly mark it as such and give you a link to the plugin.

With that being said, let’s get into it.

Conditions, Validators, and Post Functions, Oh My!

To get started, we’ll need to review the three parts of a transition: the condition, the validator, and the post function. Each of these is evaluated at a different point of the transition’s execution, and therefore can do different things.

Conditions

These are evaluated any time JIRA needs to decide whether you can even see that a transition exists. On the surface, this seems straightforward enough: if a user doesn’t meet the conditions, they don’t see the transition. So what tricks could we do with that?

Trick: Secret Cleanup Transition

So, here’s the situation. As a part of your workflow, you require certain information be filled in to close issues normally. For example, you require a fix version be assigned to a new feature or bug. This is because your users had a bad habit of just closing critical bugs that really needed to be fixed.

Which is great, but as time goes along, the project lead comes to you saying they have too many issues that, being honest, they won’t ever get to. However, you realize that because of the requirements, they can’t just close those issues. You obviously don’t want to get rid of the requirements, but you do want to enable the project lead to do their cleanup. How do you do this?

We can do this with a single condition! First, we’ll add an “All” transition to the Closed status, which we’ll call “Cleanup”. Then we’ll put a condition on it that says only Project Administrators can see this cleanup transition. This way we don’t open the workflow up to the abuse we previously tried to stop, but we give the project lead the power to clean up when they need to.
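
If you’re curious what this looks like under the hood, JIRA workflows are stored as XML, and a permission-based condition ends up attached to the transition roughly like the sketch below. This is illustrative only – the exact class and argument names can vary by JIRA version, so configure it through the UI rather than hand-editing:

<restrict-to>
  <conditions>
    <!-- Sketch: only users with the project admin permission pass this condition -->
    <condition type="class">
      <arg name="class.name">com.atlassian.jira.workflow.condition.PermissionCondition</arg>
      <arg name="permission">admin</arg>
    </condition>
  </conditions>
</restrict-to>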

Validators

Validators are similar to conditions in that they evaluate whether a transition can occur. However, the key difference is WHEN they do their processing. Conditions are evaluated before a transition ever starts. A validator, on the other hand, is evaluated during the processing of the transition.

One thing validators are great for: when they fail, they provide an error message. When a condition fails, the relevant transition just doesn’t show up. From experience, this usually leads to a lot of panicked questions from new people saying “I can’t close an issue!” This problem is avoided entirely if you use a validator instead: “Oh, I can’t close it because I need a Fix Version – cool.”

Trick: Written Approval

Note: Requires JIRA Misc Workflow Extensions

So, you have an approval chain workflow, but you want to be sure whoever approved it really MEANT it. Let’s say this involves a P.O. process, so you want to know that whoever approves it really meant for you to spend that money.

This is easy enough with JMWE’s “Comment Required” validator. To make sure the exact phrase is entered, we’ll add a conditional to evaluate the comment field.

issue.getAsString("comment") != "I Approve"

This is a bit counter-intuitive, but the conditional is what makes the validator active in JMWE. Here, the validator only runs if the comment is anything other than “I Approve” – which includes an empty string! Having the comment be exactly “I Approve” disables the validator and lets the transition occur!
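
One refinement worth considering: as written, an approver who types “i approve” or leaves a trailing space will still trip the validator. If you want to be forgiving about case and whitespace, a variant along these lines should work (a sketch, assuming JMWE’s Groovy-flavored conditionals and that getAsString() returns null when there is no comment):

// Sketch: validator stays active unless the comment is "I Approve", ignoring case and surrounding whitespace
issue.getAsString("comment") == null || !issue.getAsString("comment").trim().equalsIgnoreCase("I Approve")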

Post Functions

Post functions have no evaluation purpose; they are executed after all conditions and validators have passed and the transition is ready to move forward. This gives you a chance to update fields, fire off internal events or webhooks, or perform a myriad of other tasks. Most of the time, when I’m looking to automate something based on a workflow, post functions are the first place I’ll look.

Trick: Auto-assign on transition

So this is great for any situation where a specific person needs to do something in a specific status…such as an approval chain. This lets you reassign the issue automatically, without the user having to remember to take that action. Very handy!

To accomplish this, we’ll use the “Update Issue Field” Post function, select “Assignee” from the drop-down, then select the last radio button to specify which user you’d like to transfer it to.

There are additional post functions from various apps that will do the same thing, but you can do this with just vanilla JIRA functionality.

Workflow Properties: The forbidden technique

Okay, so maybe I’m being a little dramatic…

Transition and status properties are a powerful technique for customizing how your workflow behaves. However, they are somewhat hidden, dubiously supported, and tricky to get right. Basically, if you choose to go this route, you will be pretty much on your own, with your only company being whatever articles you can google. By the way, hi, Googlers! Welcome to the blog!

In general, I don’t really like to use workflow properties, but there are some cases where they can’t be helped. One use case I’ve had for them is limiting who can reassign an issue in a given status…which, being honest, I learned from another blog.

To access the properties screen, select the transition or status of your choice, then click “Properties”.

This will open up another screen where you can see a list of that item’s properties. By default there should be none.

Trick: Re-order the Transition buttons

So let’s say you have a project lead who is a little…particular. They want the most relevant transition to be the first button, with everything else coming after. But you can’t re-order the transition buttons, right?

Well, I wouldn’t be writing this section if that were the case. We can use a property on each transition called opsbar-sequence. This setting lets us specify which transition gets priority on the issue view screen. To do this, we’ll add the key “opsbar-sequence” to each transition, along with a unique value. It is recommended you increment the value by 10s for each transition, so that if you add more transitions later, you have room to slot them between existing ones.

Remember, in this arrangement, the transition with the lowest opsbar-sequence value goes first, with each increasing value following after it.
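
As a quick illustration, a workflow with three transitions might end up with properties like these (the transition names and values here are hypothetical – only the relative order of the values matters):

Resolve Issue: opsbar-sequence = 10
Close Issue: opsbar-sequence = 20
Reopen Issue: opsbar-sequence = 30

With that in place, “Resolve Issue” becomes the first button, and the gaps of 10 leave room for future transitions.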

You are now a workflow trainer!

My goal here is to get you thinking about the different automations you can enable for your users. It may seem trivial to you, but if you can save them 10 seconds on each transition, that really adds up over time. And you don’t have to stop with this list! Look over what post functions, validators, conditions, and yes, even properties you have available, and keep an open mind when you get these unusual requests.

However, a word of caution. If you go crazy with these, you may find you have too many workflows and actions within workflows to manage. It’s best to think of it like candy. Every once in a while is fine, and can make you feel happy for a bit. But too much too often can make you regret it before too long!

As always, don’t forget to subscribe to be notified by email of new posts! You’ll find a signup form at the bottom of the post. Also don’t forget to check out our Atlassian Discord Chat! It’s still ramping up, but I’m always happy to see people getting questions answered there! https://discord.gg/mXuRsVu

But until next time, this is Rodney, asking “Have you updated your JIRA issues today?”

A look at JIRA Software 8.7…and Service Desk 4.7 too!

So it’s that time again. And this time only two weeks behind!

Atlassian released JIRA Core and Software 8.7, as well as JIRA Service Desk 4.7, a few weeks ago. Unlike JIRA 8.6, this one has only a few new features, with most of them relegated to JIRA Service Desk, but I figured I’d take a look at them all the same.

As I mentioned in my last release review, JIRA Server and Data Center ship in discrete releases, and as an admin, you should always review each one. Even if you are staying on Enterprise Releases, it will give you ideas of what features to look forward to and promote when you do your next upgrade.

If this release has a theme, I feel it is privacy and security. Almost every new feature touches on one of these two ideas in one way or another. There are only a few new features in JIRA Software and Core, with Service Desk having more, so we’ll cover all the universal new features first, then go over what’s new in Service Desk.

GDPR Anonymizing

I’ll admit, other than some audit headaches, GDPR hasn’t impacted me too much. However, I’m still a fan of the rules and am hoping something similar is adopted over here in the United States.

However, for those of you who are subject to GDPR rules, Atlassian is helping you out. You can now anonymize, upon request, any information tied to a user that could identify them. This is a one-way process with no way to undo it, but that is kind of the point.

Courtesy Atlassian Knowledge Base

Included is a wizard that will search for that user, show you any items in JIRA that store that user’s personal information, then let you anonymize them. Honestly, it looks great, and it’s one less headache at my next audit, so I like this one!

PostgreSQL 11 Support

This one is big. As you may know, Atlassian products can run on a number of Database architectures. However, what you may not know is that they run best on PostgreSQL. Now I’m personally still partial to MySQL, but when it comes to getting every bit of performance I can out of these applications, knowing that I have this as an option is always nice.

However, until JIRA 8.7, the most recent version supported was PostgreSQL 10, with 9.6 being the only PostgreSQL 9 release still receiving support. So it’s always nice to have them add another version. Now here’s hoping they can get around to MySQL 8 before 5.7 reaches end of life.

OpenID Connect for Data Center

And one feature coming to Data Center instances with this release is OpenID Connect. This gives you another tool to connect with various identity providers, and another arrow in the quiver when integrating JIRA is always appreciated.

For JIRA Core and Software, those are all the new features. However, a number of bugs were also fixed, so that might be something you look at when determining whether a particular version is right for you.

New Service Desk Features

Keeping your requests Private

The first new Service Desk feature is really a change to a default option. As of Service Desk 4.7, any request made through the customer portal is private by default. You can choose to share it with people in your organization, but if you don’t change any settings, it will be private.

Image courtesy of Atlassian Knowledge Base

Requests originating from email are still shared by default, but as an admin, you now have the ability to change the default behavior within Administration > Applications > Jira Service Desk configuration.

Verifying Customer’s Email

For those of you who have enabled public signup through your customer portal, you can also add a step to have them verify their email address. This should make it easier to get in touch with your customers, as you’ll know that the email they signed up with actually belongs to them.

Agents are well…agents

This one has aggravated me. When commenting on issues or acting as an approver, your agents were treated as customers by JIRA. This would muck up all sorts of things, from SLAs to automation – which was my headache. It could even confuse customers – which is the last thing you want. So they are changing it so agents are, well…agents! Novel concept, I know.

So….should I upgrade to this?

Well…that’s a whole big bag of “It Depends.” If you are not currently on an Enterprise release, it would be tempting to just grab the latest bug fixes. But ultimately, I think this comes down to the question, “Do I need the new features?” Some of you may see the GDPR features and go “Yes”. Others will be able to wait for the next Enterprise release. Ultimately, it is up to you to look at the features and see if the value justifies the time it will take to do an upgrade cycle.

And that’s it for this week!

A bit of a shorter article this week, but it’s been a busy week. My wife and I are getting ready for a long weekend trip together, and I’ve still got a lot of getting ready to do.

As I told you guys last week, I recently helped out with the Q & A section of a webinar my company put on. Well, I also helped out on the write-up of the questions and their answers afterwards. So if you have some free time and want some good answers to questions people had about Atlassian Cloud, please check it out!

Don’t forget to subscribe below to get notified about new blog posts, and also about our Atlassian Discord Chat. While unofficial, it’s a good place to get help, discuss things, and connect with your fellow Atlassian Admins. https://discord.gg/mXuRsVu

But until next time, my name is Rodney, asking “Have you updated your JIRA issues today?”

Monitoring JIRA for Fun and Health

So, dear readers, here’s the deal. Some weeks, when I sit down to write, I know exactly what I’m going to write about, and can get right to it. Other weeks, I’m sitting down, and I don’t have a clue. I can usually figure something out, but it’s very much a struggle. This week is VERY much the latter.

Compound that with the fact that I just lost most of my VMs due to a storage failure this very morning. Part of it was a mistake on my part, but I have the home lab so that I can learn things I can’t learn on the job, and mistakes are a painful but powerful way to learn. Still….

This brings me back to a conversation I had with a colleague and fellow Atlassian administrator at a company I used to work for. He had asked me what my thoughts were around implementing monitoring for JIRA. Well, I have touched on the subject before, but if I’m being honest, that wasn’t my greatest work. Combine that with the fact that I suddenly need to rebuild EVERYTHING, and, well, why not start with my monitoring stack!

So, we are going to be setting up a number of systems. To gather system stats – that is to say, CPU, memory, and disk usage – we are going to use Telegraf, which will store that data in an InfluxDB database. For JIRA stats, we are going to use Prometheus. And to query and display all this information, we will be using Grafana.

The Setup

So we are going to be setting up a new system that will live alongside our JIRA instance. We will call it Grafana, as that is the front end we’ll use to interact with the system.

On the back end, it will be running both an InfluxDB server and a Prometheus server. Grafana will use both InfluxDB and Prometheus as data sources, and will use them to generate stats and graphs of all the relevant information.

Our system will be a CentOS 7 system (my current favorite), and will have the following specs:

  • 2 vCPU
  • 4 GB RAM
  • 16 GB Root HDD for OS
  • 50 GB Secondary HDD for Services

This gives us the ability to scale up the storage available to the services without too much impact on the overall system, as well as to monitor its size separately.

As per normal, I am going to write all commands out assuming you are root. If you are not, I’m also assuming you know what sudo is and how to use it, so I won’t insult you by holding your hand with that.
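
As a quick sketch of how I’d bring that secondary disk online (assuming it shows up as /dev/sdb – check with lsblk first – and mounting it at /archive, which is where we’ll put Prometheus later):

mkfs.xfs /dev/sdb
mkdir /archive
echo '/dev/sdb  /archive  xfs  defaults  0 0' >> /etc/fstab
mount -a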

InfluxDB

Let’s get started with InfluxDB. The first thing we’ll need to do is add the yum repo from Influxdata to the system. This will allow us to use yum to do the heavy lifting in the install of this service.

So let’s open /etc/yum.repos.d/influxdb.repo

vim /etc/yum.repos.d/influxdb.repo

And add the following to it:

[influxdb]
name = InfluxDB Repository - RHEL \$releasever
baseurl = https://repos.influxdata.com/rhel/\$releasever/\$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key

Now we can install InfluxDB

yum install influxdb -y

And really, that’s it for the install. Kind of wish Atlassian did this kind of thing.

We’ll of course need to open firewall access so Telegraf can get data into InfluxDB.

firewall-cmd --permanent --zone=public --add-port=8086/tcp
firewall-cmd --reload

And with that we’ll start and enable the service so that we can actually do the service setup.

systemctl start influxdb
systemctl enable influxdb

Now we need to set some credentials. As initially set up, the system isn’t really all that secure, so we’ll start by using curl to create ourselves an admin account.

curl -XPOST "http://localhost:8086/query" --data-urlencode \
"q=CREATE USER username WITH PASSWORD 'strongpassword' WITH ALL PRIVILEGES"

I shouldn’t have to say this, but you should replace username with one you can remember and strongpassword with, well, a strong password.

Now we can use the “influx” command to get into InfluxDB and do any further setup we need.

influx -username 'username' -password 'password'

Now that we are in, we need to set up a database and user for our JIRA data. As a rule of thumb, I like to have one DB per application and/or system I intend to monitor with InfluxDB.

CREATE DATABASE Jira
CREATE USER jira WITH PASSWORD 'strongpassword'
GRANT ALL ON Jira TO jira
CREATE RETENTION POLICY one_year ON Jira DURATION 365d REPLICATION 1 DEFAULT
SHOW RETENTION POLICIES ON Jira

And that’s it, InfluxDB is ready to go!
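
If you want to prove it out end to end before moving on, you can write a throwaway point over the HTTP API and read it back (a quick sanity check using the database and user we just created – the “sanity” measurement is just a made-up name):

curl -XPOST "http://localhost:8086/write?db=Jira&u=jira&p=strongpassword" --data-binary 'sanity,host=test value=1'
curl -G "http://localhost:8086/query?db=Jira&u=jira&p=strongpassword" --data-urlencode 'q=SELECT * FROM sanity'

If the second command returns the point you just wrote, the database, user, and permissions are all working.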

Grafana

Now that we have at least one data source, we can get to setting up the front end. Unfortunately, we’ll need information from JIRA in order to set up Prometheus (once we’ve set JIRA up to use the Prometheus Exporter), so that data source will have to wait.

Fortunately, Grafana can also be set up using a yum repo. So let’s open up /etc/yum.repos.d/grafana.repo

vim /etc/yum.repos.d/grafana.repo

and add the following:

[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

Afterwards, we just run the yum install command:

sudo yum install grafana -y

Grafana defaults to port 3000, though options to change or proxy this are available. Either way, we will need to open port 3000 on the firewall.

firewall-cmd --permanent --zone=public --add-port=3000/tcp
firewall-cmd --reload

Then we start and enable it:

sudo systemctl start grafana-server
sudo systemctl enable grafana-server

Go to port 3000 of the system in your web browser and you should see it up and running. We’ll hold off on setting up anything else in Grafana until we finish the rest of the system setup, though.

Telegraf

Telegraf is the tool we will use to get data from JIRA’s underlying Linux system into InfluxDB. It comes from the same yum repo that InfluxDB is installed from, so we’ll now add that repo to the JIRA server – same as we did on the Grafana system.

vim /etc/yum.repos.d/influxdb.repo

And add the following to it:

[influxdb]
name = InfluxDB Repository - RHEL \$releasever
baseurl = https://repos.influxdata.com/rhel/\$releasever/\$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key

And now that it has the yum repo, we’ll install Telegraf onto the JIRA server.

yum install telegraf -y

Now that we have it installed, we can take a look at its configuration, which you can find in /etc/telegraf/telegraf.conf. I highly suggest you take a backup of this file first. Here is an example of a config file where I’ve filtered out all the comments and added back everything essential.

[global_tags]
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  logtarget = "file"
  logfile = "/var/log/telegraf/telegraf.log"
  logfile_rotation_interval = "1d"
  logfile_rotation_max_size = "500MB"
  logfile_rotation_max_archives = 3
  hostname = "<JIRA's Hostname>"
  omit_hostname = false
[[outputs.influxdb]]
  urls = ["http://<grafana's url>:8086"]
  database = "Jira"
  username = "jira"
  password = "<password from InfluxDB JIRA Database setup>"
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]

And that should be it for the config. There is of course more we can capture using various plugins, depending on what we are interested in, but this will get the bare minimum we care about.

Because Telegraf pushes data to the InfluxDB server, we don’t need to open any firewall ports for this, which means we can start it, then watch the logs to make sure it is sending data over without any problems.

systemctl start telegraf
systemctl enable telegraf
tail -f /var/log/telegraf/telegraf.log

And assuming you don’t see any errors here, you are good to go! The stats will be waiting for us when we finish the setup of Grafana.
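
If you want a second opinion, you can also ask InfluxDB directly whether Telegraf’s measurements have started landing, using the influx CLI and the credentials we created earlier. You should see entries like cpu, disk, and mem.

influx -username 'jira' -password 'strongpassword' -database 'Jira' -execute 'SHOW MEASUREMENTS'

But first….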

Prometheus Exporter

So Telegraf is great for getting the Linux system stats, but that only gives us a partial picture. We could train it to capture JMX info, but that means we’d have to set up JMX – something I’m keen to avoid whenever possible. So what options do we have to capture details like JIRA usage, Java heap performance, etc.?

Ladies and gentlemen, the Prometheus Exporter!

That’s right: as of the time of this writing, this is yet another free app! It sets up a special page that Prometheus can go to and “scrape” the data from. This is what will take our monitoring from “okay” to “whoa”.

Because it is a free app, we can install it directly from the “Manage Apps” section of the JIRA administration console.

Once you click install, click “Accept & Install” on the pop up, and it’s done! After a refresh, you should notice a new sidebar item called “Prometheus Exporter Settings”. Click that, then click “Generate” next to the token field.

Next, we’ll open the “here” link from the “Exposed metrics are here” text in a new tab. Take special note of the URL used, as we’ll need it to set up Prometheus.
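
If you want to sanity-check the endpoint before touching Prometheus, you can curl it from the Grafana box. The exact path depends on your instance – use whatever URL the “here” link gave you – but it will look something like this (jira.example.com and the token are placeholders):

curl 'https://jira.example.com/plugins/servlet/prometheus/metrics?token=<your token>'

You should get back a wall of plain-text metrics.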

Prometheus

Now we’ll go back to our Grafana system to set up Prometheus. To find the download, go to the Prometheus download page and find the latest Linux 64-bit version.

Try to avoid “Pre-release”

Copy that link to your clipboard, then download it to your Grafana system.

wget https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz

Next we’ll need to unpack it and move it into its proper place.

tar -xzvf prometheus-2.15.2.linux-amd64.tar.gz
mv prometheus-2.15.2.linux-amd64 /archive/prometheus

Now, if we go into the prometheus folder, we will see the normal assortment of files, but the one we are interested in is prometheus.yml. This is our config file, and it’s where we’ll be working. As always, take a backup of the original file, then open it with:

vim /archive/prometheus/prometheus.yml

Here we will be adding a new “job” to the bottom of the config. You can copy this config and modify it for your purposes. Note that we are using the URL we got from the Prometheus Exporter. The host part of the URL (everything up to the first slash, i.e. the FQDN) goes under targets where indicated. The rest of the URL (the path) goes under metrics_path. And then your token goes where indicated, so that you can secure these metrics.

global:
  scrape_interval:     15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
  - static_configs:
    - targets:
rule_files:
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'Jira'
    scheme: https
    metrics_path: '<everything after the slash>'
    params:
      token: ['<token from Prometheus exporter>']
    static_configs:
    - targets:
      - <first part of JIRA URL, everything before the first '/'>
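
For illustration, here’s what that Jira job might look like filled in, assuming a hypothetical instance at jira.example.com with the exporter at its usual servlet path (your path and token will differ – use your own values):

  - job_name: 'Jira'
    scheme: https
    metrics_path: '/plugins/servlet/prometheus/metrics'
    params:
      token: ['abc123token']
    static_configs:
    - targets:
      - jira.example.com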

Next, we’ll need to open up the firewall port for Prometheus.

firewall-cmd --permanent --zone=public --add-port=9090/tcp
firewall-cmd --reload

Now we can test Prometheus. From the prometheus folder, run the following command:

./prometheus --config.file=prometheus.yml

From here, we can open a web browser and point it to our Grafana server on port 9090. On the menu, we can go to Status -> Targets and see that both the local monitoring and JIRA are online.
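
If you prefer the command line, Prometheus exposes the same information through its HTTP API, so you can check target health with a quick curl:

curl http://localhost:9090/api/v1/targets

Look for "health":"up" on both targets in the output.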

Go ahead and stop Prometheus for now by hitting Ctrl + C. We’ll need to set this up as a service so that we can rely on it coming up on its own should we ever have to restart the Grafana server.

Start by creating a dedicated user for this service. We’ll be using the options “--no-create-home” and “--shell /bin/false” to tell Linux this is an account that shouldn’t be allowed to log in to the server.

useradd --no-create-home --shell /bin/false prometheus

Now we’ll change the files to be owned by this new prometheus account. Note that the -R makes chown run recursively, meaning it will change ownership of every file underneath where we run it. Stop and make sure you are running it from the correct directory (/archive/prometheus). If you run this command from the root directory, you will have a bad day (trust me)!

chown -R prometheus:prometheus ./

And now we can create its service file.

vim /etc/systemd/system/prometheus.service

Inside the file we’ll place the following:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/archive/prometheus/prometheus \
    --config.file /archive/prometheus/prometheus.yml \
    --storage.tsdb.path /archive/prometheus/ \
    --web.console.templates=/archive/prometheus/consoles \
    --web.console.libraries=/archive/prometheus/console_libraries 

[Install]
WantedBy=multi-user.target

After you save this file, run the following commands to reload systemd, start the service, make sure it’s running, then enable it to launch on boot:

systemctl daemon-reload
systemctl start prometheus
systemctl status prometheus
systemctl enable prometheus

Now just double-check that the service is in fact running, and you’re good to go!

Grafana, the Reckoning

Now that we have both of our data sources up and gathering information, we need a way to display it all. In your web browser, go back to Grafana on port 3000. You should be greeted with the same login screen as before. To log in the first time, use “admin” as both the username and password.

You will be prompted immediately to change this password. Do so. No – really.

After you change your password, you should see the screen below. Click “Add data source”.

We’ll select InfluxDB from the list as our first data source.

For settings, we’ll enter only the following:

  • Name: JIRA
  • URL: http://localhost:8086
  • Database: Jira
  • User: jira
  • Password: Whatever you set the InfluxDB Jira password to be

Click “Save & Test” at the bottom and you should have the first one down. Now click “Back” so we can set up Prometheus.

For Prometheus, all we’ll need to do is set the URL to “http://localhost:9090”. Enter that, then click “Save & Test”, and that’s both data sources done! Now we can move on to the dashboard. On the right sidebar, click through to “Home”, then click “New Dashboard”.

And now you are ready to start visualizing data. I’ve already covered some dashboard tricks in my previous attempt at this topic. However, if it helps, here’s how I used Prometheus to set up a graph of the JVM heap.
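
Metric names can vary between exporter versions, so use the metric browser in the query editor to confirm what yours exposes, but the heap query should look something along these lines (assuming the exporter uses the standard Prometheus Java client metric names):

jvm_memory_bytes_used{area="heap"}

Graph that over time and you should see the classic JVM sawtooth – the heap climbs, garbage collection runs, and it drops back down.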

Some Notes

Now, there is some cleanup you can do here. You can map the storage for Grafana and InfluxDB onto your /archive drive, for example. However, I can’t be giving away *ALL* the secrets ;). I want to challenge you to see if you can work out how to do that yourself.

We do have a few scaling options here, too. For one, we can split InfluxDB, Prometheus, and Grafana onto their own systems. However, my experience has been that this isn’t usually necessary, and they can all live comfortably on one system.

And one final note. The Prometheus Exporter, strictly speaking, isn’t JIRA Data Center compatible. It will run, however. As best I can tell, it will give you the stats for each node where applicable, and the overall stats where that makes sense. It might be worth setting up Prometheus to bypass the load balancer and scrape each node individually.

But seriously, that’s it?

Indeed it is! This is probably one of my longer posts, so thank you for making it to the end. It’s been a great week hearing how the blog is helping people out in their work, so keep it up! I’ll do my part here and keep providing you content.

On that note, this post was a reader-requested topic. I’m always happy to take on a challenge from readers, so if you have something you’d like to hear about, let me know!

One thing I’m working on is making it easier for you to be notified about new blog posts. As such, I’ve included an email subscription form at the bottom of the blog. If you want to be notified automatically about new posts, enter your email and hit subscribe!

And don’t forget about the Atlassian Discord chat – thoroughly unofficial. Click here to join: https://discord.gg/mXuRsVu

But until next time, my name is Rodney, asking “Have you updated your JIRA issues today?”

Coyote Creek Webinar: Is Your Organization Atlassian Cloud Ready?

Readers,

As you know, I don’t normally like to post mid-week. I also try to keep the blog and my work life separate (no small feat, considering they are both based around supporting Atlassian systems). However, I thought you might be interested in this.

I have been asked to host the Q&A section of a webinar my company is giving Wednesday. The topic is going to be “Is Your Organization Atlassian Cloud Ready?” If this sounds like something you’d be interested in, please follow the link here to sign up.

I’ll have a post ready for you at our normal time this Wednesday, with the webinar following shortly afterwards! Until then, I’m Rodney, asking “Have you updated your JIRA issues today?”

Upgrading JIRA Data Center – with no Downtime!

So last week we took a dive into upgrading JIRA Server. Naturally, this week it stands to reason that we will go over its big brother, JIRA Data Center. To be fair, doing an upgrade in Data Center isn’t much different than in Server. There are minor changes, though, that take full advantage of Data Center’s architecture, and those are what we’ll be focusing on today. We could sit here jabbering about it for a while, or we could get into it.

Zero Downtime Upgrades

So one of JIRA Data Center’s super-powers is the ability to stay online and completely functional while you are doing an upgrade! Called “Zero Downtime Upgrades” by Atlassian, this is the technique we will review today.

This seemingly superhuman feat takes advantage of the fact that JIRA Data Center runs on multiple nodes. Zero Downtime Upgrades lets you put JIRA into a special “upgrade mode”, which allows nodes to be on different versions, and then bring down only one JIRA node at a time while you perform the update. After the upgrade, you take it out of “upgrade mode” and let it start humming along on its new version!

Here is the doc we will be following that explains the what and how of doing a Zero Downtime Upgrade:

Before you begin

So, just because Data Center has fancy features doesn’t absolve you of your own responsibilities. You will still need to research which version makes sense for your instance, make sure it won’t impact any apps, make sure your supporting systems are still supported under the new version, etc.

And I shouldn’t have to say this (I mean, I’ve pretty much said it the past two weeks now), but it is on you to thoroughly test each upgrade before putting it into production. JIRA is not a system that protects you from yourself. That is on you to do.

Also, “Zero Downtime” does not mean “Zero Risk”. It is still on you to make sure you have a good backup of the shared home, the database, and each node prior to upgrading the system. Remember our rule of thumb with upgrades:

“ALWAYS give yourself a path back to before you changed anything.”

Entering Upgrade Mode

So, you’ve selected your version and have everything ready to go – but where do you start?

To enter the mythical “Upgrade Mode”, head on over to Applications -> JIRA upgrades in the admin section (or hit “g” twice, then type “JIRA upgrades” into the search bar that pops up).

That DB Collation Error will be taken care of shortly….

Click the “Put JIRA into upgrade mode” button, and you are off!

Upgrading each node

After putting JIRA into upgrade mode, it’s time to do each node. Depending on the number of nodes you have, this can take a long time. Just remember: for this to be truly zero downtime, at least one JIRA node must be up at all times.

I should also point out that every JIRA node that is still up will take on the load of the nodes that are down for the upgrade. To me, if you have the capacity, it makes sense to build out extra nodes before a JIRA upgrade to make sure no single node becomes overloaded while some of them are down.

I won’t bore you with how to do the upgrade on each node – because I already covered that last week. At this point, you treat each node like an individual JIRA Server: download the installer onto each one, upgrade that system, then make any filesystem changes you need to.

If you have any customizations in the JIRA install directory, you will need to carry those over as well. Remember, you can back up and restore the modified file – but during an upgrade, that carries a chance of breaking the node. It’s always a better idea to make a diff between the old and new files, identify the differences that came from your edits, and apply those edits to the new version of the file.
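
For example, on a typical Linux install that comparison might look like the line below (the paths are illustrative – point diff at wherever your backed-up install and new install actually live):

diff /opt/atlassian/jira.bak/bin/setenv.sh /opt/atlassian/jira/bin/setenv.sh

Anything the diff shows that came from your own edits (memory settings, proxy tweaks, and so on) gets re-applied by hand to the new file.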

As each node comes back up, test it by bypassing the proxy and going to its IP directly. If you have customizations, you will get the warning about that, but then it should come up with its new version.

Also check back at the proxy (System -> System info) to make sure each node is connected back to the cluster after upgrading.

Complete this for all remaining nodes, then continue on.

After you’ve upgraded all nodes

Once you’ve completed the install on all JIRA nodes, go back to Applications -> JIRA upgrades in the admin section. Here you will see a new button lit up: “Run upgrade tasks”. Go ahead and click it.

This will make all the edits necessary to the database and shared home to get you running fully on your new version. As this is an important step, DO NOT SKIP IT!

Once complete, this will take you out of “Upgrade Mode” and return JIRA to normal operation.

Take this time to do your normal tour of the health checks, Apps, and basic functionality to make sure it is all working as expected on every node. Once you are satisfied, you’re done! All without any end-user interruption!

And if something went very wrong?

Well, depending on when it went wrong, you’ll change what you do.

If you have entered upgrade mode, but have not modified any nodes yet, you have the option to cancel out of upgrade mode. No harm, no foul.

If you’ve already started working on a node – but have not brought any nodes online under the new version – simply restore that node to the previous version, then cancel out of upgrade mode and go back to testing.

If – however – you do have nodes online under the new version, and want to back out fully to the old version, you are in for a bad day.

This is the point of no return for the upgrade, which means that in order to return, you are going to have to bring the whole cluster down. Afterwards, you will need to restore the database, restore the shared home, and then restore each and every node that is running the new version. Then you can bring your cluster back online. Then write an email to all stakeholders explaining why you just brought JIRA down when they weren’t expecting it.

And this is why I keep telling you to test until you are confident. Testing won’t catch everything, but it will catch far more than not testing.

So, Listen – we have, like, a lot of nodes. Like, A LOT.

So, you are wondering if you can automate this on nodes somehow. Well….

You can automate a JIRA upgrade using tools like Ansible, Salt, etc. I actually have an Ansible job running off my Jenkins (sorry, no Bamboo here) which keeps my personal Confluence up to date without me having to upgrade it manually every time. However, I do not recommend you do this for production.

Atlassian is changing things all the time, and they don’t always get into the nitty gritty of what they are changing. That Confluence Upgrade job in Ansible I mentioned? It was actually broken for four months because Atlassian changed the URL you download from, and Ansible couldn’t make sense of the redirect, and I didn’t have time to debug it.

Humans are much better at adapting to these small changes. That’s why I prefer to always keep a human somewhere in the loop for a production upgrade on each node.

But, if that doesn’t dissuade you, I guess I can’t stop you. Just know I won’t be the one to tell you how to do it, just that it can be done.

And that’s Upgrading JIRA Data Center!

I hope you got something useful out of this guide. I think I’m done with upgrades for a while, though – between work and the blog, I’ve upgraded 5 nodes across 4 instances in the past two weeks. So yeah – we’ll be covering something new next week.

Don’t forget to hop onto our discord community! There you can chat with fellow Atlassian Admins, get help, and see what’s new! https://discord.gg/mXuRsVu

And until next time, this is Rodney, asking “Have you updated your JIRA issues today?”