Spring Cleaning for Custom Fields

Well, I’m back…or at least upright. Thank you all for the well wishes I’ve received. As much as I hate not posting an actual article, I really did need the week off to recover. But now it’s time to get back to it. As I’ve stated, this week’s article was going to be a reader-selected topic. It seems there was some demand for each, but the clear winner was…

I’ll be tackling the other articles in the coming weeks, but let’s look at getting those custom fields under control this week.

A bit of a confession here.

First, let me say that as Admins, I think each one of us has various strengths and weaknesses. Me, for example. My obvious strength is Systems. I can tune almost any system to run smoothly. I can learn new back-end technologies with almost no lead-time and still successfully deploy.

However, my weakness as a JIRA Admin is that I’ve never been able to execute a successful cleanup. I’ve known the problem, and I’ve worked to fix it. But I’d rather be a consensus builder than a dictator when it comes to JIRA, and people LOVE their custom fields.

To manage this, I’ve always put safeguards in place to keep things from getting worse. I re-use fields rather than create new ones, and really analyze whether new fields are needed. But on systems I’ve taken over, it’s rare I get a group to give up an existing field.

That being said, if you want to learn from a real master of this topic, I’d highly recommend Rachel Wright’s JIRA Strategy Admin Workbook.

However, I’m going to give you tools to analyze the problem, see what fields aren’t being used, and give you the data to say to people “This…this is a problem we can fix.” Then I’ll show you how to go about cleaning things up. So, let’s learn together!

Why does this even matter?

So, you might be asking yourself “JIRA is all about the custom fields, why should I worry about how many there are?” That, honestly, is a fair question. But the reason you should care is performance. It’s just a fact: the more custom fields your system has to parse through, the slower things like creating an issue, searching, etc., will be.

Average response times per action (in milliseconds): custom fields test. Courtesy Atlassian.

Take a look at this chart above. This is from some testing Atlassian did. If we look at JQL Searching alone, at 1400 custom fields, the performance is abysmal at almost 3 seconds. However, doubling the custom field count to 2800 triples that number to nearly 9 seconds.

And yes, this is an extreme case. But users will notice a delay of half a second. You will get comments saying JIRA is slow even at that number, trust me. And it doesn’t take too terribly many custom fields to get searching to be that slow. That is why your custom field count matters.

Analysis

It is said the first step to fixing a problem is admitting you have a problem. Check. The second step it would seem is to figure out how big the problem is. Unfortunately, for this part we are going to have to go directly to the database.

For all my databases, I like to keep an administrative read-only account that will allow me to go in and query from time to time. This helps with making sure things are running properly, or diagnosing problems every now and again. Just make sure it’s read-only so you don’t mess with something that ought not to be messed with.
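If you’re on MySQL like I am, setting one up only takes a minute. Here’s a minimal sketch – ‘jira_ro’ and ‘jiradb’ are just placeholders for whatever account name and JIRA database name you actually use:

CREATE USER 'jira_ro'@'localhost' IDENTIFIED BY 'strongpassword';
GRANT SELECT ON jiradb.* TO 'jira_ro'@'localhost';
FLUSH PRIVILEGES;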

Anyways, Atlassian has some queries that will help you figure out what fields are being used in your system. You can find all of them available here:

All the queries below I have confirmed are still working for MySQL 5.7 and JIRA 8.5.3.

Unused custom fields

select count(*), customfield.id, customfield.cfname, customfield.description
from customfield left join customfieldvalue on customfield.id = customfieldvalue.customfield
where customfieldvalue.stringvalue is null 
and customfieldvalue.numbervalue is null 
and customfieldvalue.textvalue is null
and customfieldvalue.datevalue is null
and customfield.CUSTOMFIELDTYPEKEY not like '%com.atlassian.servicedesk%'
and customfield.CUSTOMFIELDTYPEKEY not like '%com.pyxis.greenhopper%'
and customfield.CUSTOMFIELDTYPEKEY not like '%com.atlassian.bonfire%'
and customfield.CUSTOMFIELDTYPEKEY not like '%com.atlassian.jpo%'
group by customfield.id, customfield.cfname, customfield.description;

So, the first logical question to ask is what fields are just not being used – period. These can probably be thrown out easily enough. Just a word of caution with this query: it can return fields that are being used by Apps using their own datastore. I’ve modified it to at least throw out results from JIRA Service Desk, JIRA Software, Capture, and Portfolio, so make sure you are vetting this list for such false positives. But if it’s a field you or your team has created, and it’s on this list, it might be time to have a discussion on whether it needs to be around.

Custom fields that have not been updated after date (YYYY-MM-DD)

select field.id, field.cfname from customfield field where field.cfname not in (
select item.field from changeitem item
JOIN changegroup cgroup ON item.groupid=cgroup.id
where item.fieldtype='custom' and cgroup.created > 'YYYY-MM-DD'
) and customfieldtypekey not like '%com.pyxis.greenhopper%'
and customfieldtypekey not like '%com.atlassian.servicedesk%'
and customfieldtypekey not like '%com.atlassian.bonfire%'
and customfieldtypekey not like '%com.atlassian.jpo%';

So this one returns fields that haven’t been used after a given date. You’ll have to modify the query by replacing “YYYY-MM-DD” with a date you are interested in, but this one is handy for an end-of-year review, as it can answer “What fields have we not used this year?” If your teams haven’t touched a field in that long, you have a strong case for its need to go away.
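For example, to run an end-of-year review for 2019 and find the fields nobody touched all year, that date line of the query would become:

where item.fieldtype='custom' and cgroup.created > '2019-01-01'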

Custom fields with low usage

select count(*), customfield.id, customfield.cfname, customfield.description
from customfield left join customfieldvalue on customfield.id = customfieldvalue.customfield
where customfield.CUSTOMFIELDTYPEKEY not like '%com.atlassian.servicedesk%'
and customfield.CUSTOMFIELDTYPEKEY not like '%com.pyxis.greenhopper%'
and customfield.CUSTOMFIELDTYPEKEY not like '%com.atlassian.bonfire%'
and customfield.CUSTOMFIELDTYPEKEY not like '%com.atlassian.jpo%'
and customfieldvalue.ID is not NULL
group by customfield.id having count(*) < 5
order by count(*) desc;

Unlike the first query, this one will return fields that do not meet a certain threshold of usage. As written, this threshold is 5 issues, but you can adjust it by changing the number in “count(*) < 5”. As a rule, I like to keep this to one-tenth of one percent of the total number of issues in your instance. So if you have 400,000 issues in your instance, the threshold would be 400. But that is a personal preference; set this wherever it makes sense for you.
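So for that hypothetical 400,000-issue instance, the tail end of the query would become:

group by customfield.id having count(*) < 400
order by count(*) desc;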

So, what now?

You now have your information, and you know what fields are not being used. You can just go and delete them, right?

Well…

Not really. You should do some more digging. Who asked for that field to be added? What was their business case? Was that field just recently added? For those with a small usage, are they only being used in a specific project? If so, what is the usage within that project?

Also, use Botron Power Admin to see where each field is being used (or not used). This might tell you who you need to talk to as well.

This should help you narrow down your candidates further. Once you do that, go back to whoever requested those fields. Take the data from your queries, and let them know that the fields aren’t being used, and are on a list to be removed. You can also include documents on how custom fields impact performance for everyone. If you are lucky, they’ll agree with the data and you can remove it. If not, start negotiating. See why they think they need a field they aren’t using. See if there is a way you can give them their ask without the field.

Custom Field optimizer (Data Center)

For those of you on Data Center, JIRA has a built-in Custom field optimizer you can use. It will go through and find fields that are not used often, or otherwise “misconfigured” (as defined by Atlassian).

As you can see, it’s alerting on a field in my instance that has a global context. This was one I specifically created to test the optimizer, so it’s working as intended. Ideally, you should limit your custom fields to only be available to the projects that need them. So you can use this tool to find those with global contexts and, if necessary, fix them.

Merging Fields (App Required)

So, let’s say you have a duplicate field. How do you copy the information from one to another so you can delete one without losing data?

Well, as you can guess, this isn’t something that can be done in vanilla JIRA – so we have to turn to the Atlassian Marketplace – and specifically Adaptavist’s ScriptRunner plugin.

Now my Groovy – it needs some work. But ScriptRunner has a number of built-in scripts, including one to “Copy custom field values”. This does what it says on the tin. It takes the value in a source field, and places it into a destination field, overwriting what was there. For most cases, this won’t be a problem for reasons I’ll talk about later, but do keep this in mind.

To use this, you will first need to save a filter that will limit what issues it will copy the fields on. I use something along the lines of:

"Custom Field A" is not EMPTY

This will return all issues that have the custom field set. You can add another clause to weed out those that have the destination field already set, and handle combining those manually. In most cases, each of the duplicate fields will only have been used on a project or two apiece, with no overlap, so this shouldn’t be too much of a concern. But as always, do your research first and make sure that is not the case in your situation.
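For example, if “Custom Field B” were your destination field, a filter along these lines would return only the issues that are safe to copy automatically, leaving any overlapping issues for manual review:

"Custom Field A" is not EMPTY AND "Custom Field B" is EMPTY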

Now, I shouldn’t have to say this, but always test any changes you want to make before doing it in production. That’s very true here. This built-in script isn’t perfect, and you may have to pick it apart and modify it a bit to get to your use case. The only way to know if this is required is to do your testing.

That being said, I asked my colleagues at Coyote Creek about merging fields, and Neil Taylor gave me this as a solution he’s used before. From testing it, it’s the easiest to implement and execute that I’ve found, and should handle most of your use cases.

So time to get cleaning!

So you know how to find fields that are good candidates for removal, and have an action plan to migrate the data should they have anything worth keeping in them. What are you waiting for? Get out there, start conversations, and get yourself on the path to better JIRA performance.

Don’t forget to subscribe to receive new posts by email. It’s the easiest way to be notified as soon as the posts go live. You can sign up with the form at the bottom of the post. And don’t forget to check out our Atlassian Discord Chat! It’s always interesting to see who has popped up there and what’s being discussed. https://discord.gg/mXuRsVu

But until next time, this is Rodney, asking “Have you updated your JIRA issues today?”

How to train your Workflows

You know what? We’ve been a little system heavy the last few months. Let’s try something different…

So, we all know JIRA workflows. They guide your issues through their life-cycle, apparently only know three colors, and otherwise just sit there, right? What if I told you that you can train your workflows to do tricks? It’s true!

Today we will go over some of the ways I have trained my workflows over the years, and when you’d want to use each trick. The ultimate goal here is to get you to see your workflow as an equal part of the issue life-cycle – that is to say not only as a faithful guide, but an active participant!

Some of these tricks will require a plugin to enable – but where that’s the case, I will clearly mark them as such and give you a link to the plugin.

With that being said, let’s get into it.

Conditions, Validators, and Post Functions, Oh My!

So to get started, we’ll need to review the three parts of a transition: the Condition, Validator, and Post Function. Each of these is evaluated at a different part of the transition execution, and therefore is able to do different things.

Conditions

These are evaluated any time JIRA needs to see if you are even able to see that the transition exists. On the surface, this seems straightforward enough. If a user doesn’t meet the conditions, they don’t see the transition, so what tricks could we do with that?

Trick: Secret Cleanup Transition

So, here’s the situation. As a part of your workflow, you require certain information be filled in to close issues normally. For example, you require a fix version be assigned to a new feature or bug. This is because your users had a bad habit of just closing critical bugs that really needed to be fixed.

Which is great, but as time goes along, the project lead comes to you, saying they have too many issues that, being honest, they won’t ever get to. However, you realize that because of the requirements, you can’t just close those issues. You obviously don’t want to get rid of the requirements, but you want to enable the project lead to do their cleanup. How do you do this?

We can do this simply with one condition! First we’ll add an “All” transition to the closed state, which we’ll call “Cleanup”. Then we’ll put a condition on it that says only Project Administrators can see this cleanup transition. This way we don’t open up the workflow to the abuse we’ve previously tried to stop, but we give the project lead the power to clean up when they need to.

Validators

Validators are similar to conditions in the fact that they evaluate whether a transition could occur. However, the key difference is WHEN they do their processing. Conditions are evaluated before a transition ever occurs. On the other hand, a validator is evaluated during the processing of the transition.

One thing Validators are great for is that when they fail, they provide an error message. When a condition fails, the relevant transition just doesn’t show up. From experience, this usually leads to a lot of panicked questions from new people saying “I can’t close an issue!” This problem is avoided entirely if you use a validator instead. “Oh, I can’t close it because I need a Fix Version – cool.”

Trick: Written Approval

Note: Requires JIRA Misc Workflow Extensions

So, you have an approval chain workflow, but you want to be sure whoever approved it really MEANT it. Let’s say this involves a P.O. process, so you want to know whoever approves it really meant you to spend that money.

This is easy enough with JMWE’s “Comment Required” Validator. To make sure the exact phrase is added, we’ll add a conditional to evaluate the comment field.

issue.getAsString("comment") != "I Approve"

This is a bit counter-intuitive, but the condition is what makes the validator active in JMWE. Here, the validator only ever validates if the comment is anything other than “I Approve” – which includes an empty string! Having the comment be “I Approve” disables the validator and lets the transition occur!

Post Functions

Post functions have no evaluation purpose, but are executed after all conditions and validators are evaluated and things are ready to move forward. This gives you a chance to update fields, fire off internal events or webhooks, or a myriad of other tasks. Most of the time when I’m looking to automate some function that is based on a workflow, post functions are where I’ll look first.

Trick: Auto-assign on transition

So this is great for any situation where a specific person needs to do something in a specific status…such as an approval chain. This lets you set it up to reassign the issue on demand without having the user remember to take that action. Very handy!

To accomplish this, we’ll use the “Update Issue Field” Post function, select “Assignee” from the drop-down, then select the last radio button to specify which user you’d like to transfer it to.

There are additional post functions from various Apps that will do the same thing, but you can do this with just vanilla JIRA functionality.

Workflow Properties: The forbidden technique

Okay, so maybe I’m being a little dramatic…

Transition and Status properties are a powerful technique for customizing how your workflow behaves. However, they are somewhat hidden, dubiously supported, and tricky to get right. Basically, if you do choose to go this route, you will be pretty much on your own, with your only company being whatever articles you can google. By the way, Hi googlers! Welcome to the blog!

In general, I don’t really like to use Workflow properties, but there are some cases where they can’t be helped. One use case I’ve used them for is to limit who can reassign an issue in a given status…which being honest I learned from another blog.

To access the properties screen, select your transition or status of your choice, then click “Properties”

This will open up another screen where you can see a list of that item’s properties. By default there should be none.

Trick: Re-order the Transition buttons

So let’s say you have a project lead that is a little…particular. They want the most relevant transition to be the first button, with anything else being second. But you can’t re-order the Transition buttons, right?

Well, I wouldn’t be writing this section if that was the case. We can use a property on each transition called opsbar-sequence. This is a setting that will allow us to specify which transition gets priority on the issue view screen. To do this, we’ll need to add the key “opsbar-sequence” to each transition, along with a unique value. It is recommended you increment the value by 10s on each transition, so if you add more transitions later, you have a place to put them between existing transitions.

Remember, in this arrangement, the lowest opsbar-sequence value will go first, with each increasing value following afterwards.
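As a hypothetical example, for a workflow where the project lead wants “Resolve Issue” front and center, the properties on each transition might look like this:

Resolve Issue: opsbar-sequence = 10
Close Issue: opsbar-sequence = 20
Cleanup: opsbar-sequence = 30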

You are now a workflow trainer!

My goal here is to get you to think about different automations you can enable for your users. It may seem trivial to you, but if you can save them 10 seconds on each transaction, that can really add up over time. And you don’t have to stop with this list! Look over what post functions, validators, conditions, and yes even properties you have available, and keep an open mind when you get these unusual requests.

However, a word of caution. If you go crazy with these, you may find you have too many workflows and actions within workflows to manage. It’s best to think of it like candy. Every once in a while is fine, and can make you feel happy for a bit. But too much too often can make you regret it before too long!

As always, don’t forget to subscribe to be notified by email of new posts! You’ll find a signup form at the bottom of the post. Also don’t forget to check out our Atlassian Discord Chat! It’s still ramping up, but I’m always happy to see people getting questions answered there! https://discord.gg/mXuRsVu

But until next time, this is Rodney, asking “Have you updated your JIRA issues today?”

A look at JIRA Software 8.7…and Service Desk 4.7 too!

So it’s that time again. And this time only two weeks behind!

Atlassian released JIRA Core and Software 8.7, as well as JIRA Service Desk 4.7, a few weeks ago. Unlike JIRA 8.6, this one only has a few new features, with most of them relegated to JIRA Service Desk, but I figured I’d take a look at them all the same.

As I mentioned in my last release review, JIRA Server and Data Center are released in discrete releases. As an admin, you should always review each one. Even if you are staying on Enterprise Releases, it will give you ideas of what features to look forward to and promote when you do your next upgrade.

If this release has a theme, I feel it is privacy and security. Almost every new feature touches on one of these two ideas in one way or another. There are only a few new features in JIRA Software and Core, with Service Desk having more. So we’ll cover all the universal new features, then go over what’s new in Service Desk.

GDPR Anonymizing

I’ll admit, other than some audit headaches, GDPR hasn’t impacted me too much. However, I’m still a fan of the rules and am hoping something similar is adopted over here in the United States.

However, for those of you who are subject to GDPR Rules, Atlassian is helping you out. You can now anonymize any information tied to a user that can identify them upon request. This is a one way process, with no way to undo it, but that is kind of the point.

Courtesy Atlassian Knowledge Base

Included is a wizard that will search for that user, show you any items in JIRA that store that user’s personal information, then let you anonymize them. Honestly, it looks great and it’s one less headache at my next audit, so I like this one!

PostgreSQL 11 Support

This one is big. As you may know, Atlassian products can run on a number of Database architectures. However, what you may not know is that they run best on PostgreSQL. Now I’m personally still partial to MySQL, but when it comes to getting every bit of performance I can out of these applications, knowing that I have this as an option is always nice.

However, until JIRA 8.7, the most recent PostgreSQL version supported was PostgreSQL 10, with 9.6 being the only PostgreSQL 9 release still supported. So it’s always nice to have them add another version. Now here’s hoping they can get around to MySQL 8 before 5.7 is end-of-lifed.

OpenID Connect for Data Center

And a feature coming to Data Center instances with this release is OpenID Connect. This gives you another tool to connect with various Identity Providers. Another arrow in the quiver when trying to integrate JIRA is always appreciated.

For JIRA Core and Software, those are all the new features. However, there were also a number of bugs fixed, so that might be something you look at when determining if a particular version is right for you.

New Service Desk Features

Keeping your requests Private

The first new Service Desk feature is really a change to a default option. As of Service Desk 4.7, any request made through the Customer portal is Private by default. You can choose to share them with people in your organization, but if you don’t change any settings, it will be private.

Image courtesy of Atlassian Knowledge Base

Requests originating from an email are still shared by default, but you as an admin now have the ability to change the default behavior within Administration > Applications > Jira Service Desk configuration.

Verifying Customer’s Email

For those of you who have enabled public signup through your customer portal, you can also add a step to have them verify their email address. This should make it easier to get in touch with your customers, as you’ll know that the email they signed up with actually belongs to them.

Agents are well…agents

This one has aggravated me. When commenting on issues or acting as an approver, your agents were treated as customers by JIRA. This would muck up all sorts of things, from SLAs to automation – which was my headache. It could even confuse customers – which is the last thing you want. So they are changing it so agents are, well…agents! Novel concept, I know.

So….should I upgrade to this?

Well….That’s a whole big bag of “It Depends.” If you are not currently on an Enterprise release, it would be tempting to just get the latest bug fixes. But ultimately I think it comes down to the question, “Do I need the new features?” Some of you may see the GDPR features and go “Yes”. Others will be able to wait for the next Enterprise release. Ultimately it is up to you to look at the features and see if the value justifies the time it will take to do an upgrade cycle.

And that’s it for this week!

A bit of a shorter article this week, but it’s been a busy week. My wife and I are getting ready for a long weekend trip together, and I’ve still got a lot of getting ready to do.

As I told you guys last week, I recently helped out with the Q & A section of a webinar my company put on. Well, I also helped out on the write-up of the questions and their answers afterwards. So if you have some free time and want some good answers to questions people had about Atlassian Cloud, please check it out!

Don’t forget to subscribe below to get notified about new blog posts, and also about our Atlassian Discord Chat. While unofficial, it’s a good place to get help, discuss things, and connect with your fellow Atlassian Admins. https://discord.gg/mXuRsVu

But until next time, my name is Rodney, asking “Have you updated your JIRA issues today?”

Monitoring JIRA for Fun and Health

So, dear readers, here’s the deal. Some weeks, when I sit down to write, I know exactly what I’m going to write about, and can get right to it. Other weeks, I’m sitting down, and I don’t have a clue. I can usually figure something out, but it’s very much a struggle. This week is VERY much the latter.

Compound that with the fact that I just lost most of my VMs due to a storage failure I had this very morning. Part of it was a mistake on my part. I have the home lab so that I can learn things I can’t learn on the job. And mistakes are a painful but powerful way to learn. Still….

This brings me back to a conversation I had with a colleague and fellow Atlassian Administrator at a company I used to work for. He had asked me what my thoughts were around implementing monitoring of JIRA. Well, I have touched on the subject before, but if I’m being honest, that isn’t my greatest work. Combine that with the fact that I suddenly need to rebuild EVERYTHING, and, well, why not start with my monitoring stack!

So, we are going to be setting up a number of systems. To gather system stats, that is to say CPU usage, Memory Usage, and Disk usage, we are going to be using Telegraf, which will be storing that data in an InfluxDB database. Then for JIRA stats we are going to use Prometheus. And to query and display this information, we will be using Grafana.

The Setup

So we are going to be setting up a new system that will live alongside our JIRA instance. We will call it Grafana, as that will be the front end we use to interact with the system.

On the back end it will be running both an InfluxDB server and a Prometheus server. Grafana will use both InfluxDB and Prometheus as data sources, and will use them to generate stats and graphs of all the relevant information.

Our system will be a CentOS 7 system (my favorite currently), and will have the following specs:

  • 2 vCPU
  • 4 GB RAM
  • 16 GB Root HDD for OS
  • 50 GB Secondary HDD for Services

This will give us the ability to scale up the capacity for services to store files without too much impact on the overall system, as well as monitor its size.

As per normal, I am going to write all commands out assuming you are root. If you are not, I’m also assuming you know what sudo is and how to use it, so I won’t insult you by holding your hand with that.

InfluxDB

Let’s get started with InfluxDB. First thing we’ll need to do is add the yum repo from Influxdata onto the system. This will allow us to use yum to do the heavy lifting in the install of this service.

So let’s open /etc/yum.repos.d/influxdb.repo

vim /etc/yum.repos.d/influxdb.repo

And add the following to it:

[influxdb]
name = InfluxDB Repository - RHEL $releasever
baseurl = https://repos.influxdata.com/rhel/$releasever/$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key

Now we can install InfluxDB

yum install influxdb -y

And really, that’s it for the install. Kind of wish Atlassian did this kind of thing.

We’ll of course need to allow firewall access so Telegraf can get data into InfluxDB.

firewall-cmd --permanent --zone=public --add-port=8086/tcp
firewall-cmd --reload

And with that we’ll start and enable the service so that we can actually do the service setup.

systemctl start influxdb
systemctl enable influxdb

Now we need to set some credentials. As initially set up, the system isn’t really all that secure, so we are going to start by using curl to create ourselves an account.

curl -XPOST "http://localhost:8086/query" --data-urlencode \
"q=CREATE USER username WITH PASSWORD 'strongpassword' WITH ALL PRIVILEGES"

I shouldn’t have to say this, but you should replace username with one you can remember and strongpassword with, well, a strong password.

Now we can use the command “influx” to get into InfluxDB and do any further set up we need.

influx -username 'username' -password 'password'

Now that we are in, we need to setup a database and user for our JIRA data to go into. As a rule of thumb, I like to have one DB per application and/or system I intend to monitor with InfluxDB.

CREATE DATABASE Jira
CREATE USER jira WITH PASSWORD 'strongpassword'
GRANT ALL ON Jira TO jira
CREATE RETENTION POLICY one_year ON Jira DURATION 365d REPLICATION 1 DEFAULT
SHOW RETENTION POLICIES ON Jira

And that’s it, InfluxDB is ready to go!

Grafana

Now that we have at least one data source, we can get to setting up the front end. Unfortunately, we’ll need information from JIRA in order to set up Prometheus (once we’ve set JIRA up to use the Prometheus Exporter), so that data source will need to wait.

Fortunately, Grafana can also be set up using a yum repo. So let’s open up /etc/yum.repos.d/grafana.repo

vim /etc/yum.repos.d/grafana.repo

and add the following:

[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

Afterwards, we just run the yum install command:

sudo yum install grafana -y

Grafana defaults to port 3000; however, options to change or proxy this are available. We will need to open port 3000 on the firewall.

firewall-cmd --permanent --zone=public --add-port=3000/tcp
firewall-cmd --reload

Then we start and enable it:

sudo systemctl start grafana-server
sudo systemctl enable grafana-server

Go to port 3000 of the system on your web browser and you should see it up and running. We’ll hold off on setting up everything else on Grafana until we finish the system setup, though.

Telegraf

Telegraf is the tool we will use to get our data from JIRA’s underlying Linux system and into InfluxDB. It is actually part of the same yum repo that InfluxDB is installed from, so we’ll now add that repo to the JIRA server – same as we did on the Grafana system.

vim /etc/yum.repos.d/influxdb.repo

And add the following to it:

[influxdb]
name = InfluxDB Repository - RHEL $releasever
baseurl = https://repos.influxdata.com/rhel/$releasever/$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key

And now that it has the YUM repo, we’ll install telegraf onto the JIRA Server.

yum install telegraf -y

Now that we have it installed, we can take a look at its configuration, which you can find in /etc/telegraf/telegraf.conf. I highly suggest you take a backup of this file first – something as simple as this will do:
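cp -p /etc/telegraf/telegraf.conf /etc/telegraf/telegraf.conf.bak

With that safety net in place, here is an example of the config file where I’ve filtered out all the comments and added back in everything essential.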

[global_tags]
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  logtarget = "file"
  logfile = "/var/log/telegraf/telegraf.log"
  logfile_rotation_interval = "1d"
  logfile_rotation_max_size = "500MB"
  logfile_rotation_max_archives = 3
  hostname = "<JIRA's Hostname>"
  omit_hostname = false
[[outputs.influxdb]]
  urls = ["http://<grafana's url>:8086"]
  database = "Jira"
  username = "jira"
  password = "<password from InfluxDB JIRA Database setup>"
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]

And that should be it for config. There are of course more metrics we can capture using various plugins, based on what you are interested in, but this covers the bare minimum.

Because telegraf is pushing data to the InfluxDB server, we don’t need to open any firewall ports for this, which means we can start it, then monitor the logs to make sure it is sending the data over without any problems.

systemctl start telegraf
systemctl enable telegraf
tail -f /var/log/telegraf/telegraf.log
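If you want to double-check from the InfluxDB side that measurements are actually landing, you can run a quick query from the Grafana system using the database and credentials we created earlier:

influx -username 'jira' -password 'strongpassword' -database 'Jira' -execute 'SHOW MEASUREMENTS'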

And assuming you don’t see any errors here, you are good to go! We will have the stats waiting for us when you finish the setup of Grafana. But first….

Prometheus Exporter

So Telegraf is great for getting the Linux system stats, but that only gives us a partial picture. We can train it to capture JMX info, but that means we have to set up JMX – something I’m keen to avoid whenever possible. So what options have we got to capture details like JIRA usage, Java heap performance, etc.?

Ladies and gentlemen, the Prometheus Exporter!

That’s right, as of the time of this writing, this is yet another free app! This will setup a special page that Prometheus can go to and “scrape” the data from. This is what will take our monitoring from “okay” to “Woah”.

Because it is a free app, we can install it directly from the “Manage Apps” section of the JIRA Administration console.

Once you click install, click “Accept & Install” on the pop up, and it’s done! After a refresh, you should notice a new sidebar item called “Prometheus Exporter Settings”. Click that, then click “Generate” next to the token field.

Next we’ll need to open the “here” link into a new tab on the “Exposed metrics are here” text. Take special note of the URL used, as we’ll need it to set up Prometheus.

Prometheus

Now we’ll go back to our Grafana system to setup Prometheus. To find the download, we’ll go to the Prometheus Download Page, and find the latest Linux 64 bit version.

Try to avoid “Pre-release”

Copy that to your clipboard, then download it to your Grafana system.

 wget https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz

Next we’ll need to unpack it and move it into its proper place.

tar -xzvf prometheus-2.15.2.linux-amd64.tar.gz
mv prometheus-2.15.2.linux-amd64 /archive/prometheus

Now if we go into the prometheus folder, we will see a normal assortment of files, but the one we are interested in is prometheus.yml. This is our config file, and where we will be doing our work. As always, take a backup of the original file, then open it with:

vim /archive/prometheus/prometheus.yml

Here we will be adding a new “job” to the bottom of the config. You can copy this config and modify it for your purposes. Note we are using the URL we got from the Prometheus Exporter. The first part of the URL (everything up to the first slash, or the FQDN) goes under target where indicated. The rest of the URL (folder path) goes under metrics_path. And then your token goes where indicated so that you can secure these metrics.

global:
  scrape_interval:     15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
  - static_configs:
    - targets:
rule_files:
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'Jira'
    scheme: https
    metrics_path: '<everything after the slash>'
    params:
      token: ['<token from Prometheus exporter>']
    static_configs:
    - targets:
      - <first part of JIRA URL, everything before the first '/'>
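Before going any further, it’s worth sanity-checking your edits with promtool, which ships in the same tarball as the prometheus binary itself:

./promtool check config prometheus.yml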

We’ll need to now open up the firewall port for Prometheus

firewall-cmd --permanent --zone=public --add-port=9090/tcp
firewall-cmd --reload

Now we can test Prometheus. From the prometheus folder, run the following command.

./prometheus --config.file=prometheus.yml

From here we can open a web browser, and point it to our Grafana server on port 9090. On the Menu, we can go to Status -> Targets and see that both the local monitoring and JIRA are online.

Go ahead and stop Prometheus for now by hitting “Ctrl + C”. We’ll need to set this up as a service so that we can rely on it coming up on its own should we ever have to restart the Grafana server.

Start by creating a unique user for this service. We’ll be using the options “--no-create-home” and “--shell /bin/false” to tell Linux this is an account that shouldn’t be allowed to log in to the server.

useradd --no-create-home --shell /bin/false prometheus

Now we’ll change the files to be owned by this new prometheus account. Note that the -R makes chown run recursively, meaning it will change it for every file underneath where we run it. Stop and make sure you are running it from the correct directory. If you run this command from the root directory, you will have a bad day (trust me)!

chown -R prometheus:prometheus ./

And now we can create its service file.

vim /etc/systemd/system/prometheus.service

Inside the file we’ll place the following:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/archive/prometheus/prometheus \
    --config.file /archive/prometheus/prometheus.yml \
    --storage.tsdb.path /archive/prometheus/ \
    --web.console.templates=/archive/prometheus/consoles \
    --web.console.libraries=/archive/prometheus/console_libraries 

[Install]
WantedBy=multi-user.target

After you save this file, type the following commands to reload systemd, start the service, make sure it’s running, then enable it for launch on boot:

systemctl daemon-reload
systemctl start prometheus
systemctl status prometheus
systemctl enable prometheus

Now just double check that the service is in fact running, and you’re good to go!

Grafana, the Reckoning

Now that we have both our data sources up and gathering information, we need to start by creating a way to display it. On your web browser, go back to Grafana, port 3000. You should be greeted with the same login screen as before. To log in the first time, use ‘admin’ as both the username and password.

You will be prompted immediately to change this password. Do so. No – really.

After you change your password, you should see the screen below. Click “Add data source”

We’ll select InfluxDB from the list as our first Data Source.

For settings, we’ll enter only the following:

  • Name: JIRA
  • URL: http://localhost:8086
  • Database: Jira
  • User: jira
  • Password: Whatever you set the InfluxDB Jira password to be

Click “Save & Test” at the bottom and you should have the first one down. Now click “Back” so we can set up Prometheus.

On Prometheus, all we’ll need to do is set the URL to be “http://localhost:9090”. Enter that, then click “Save & Test”. And that’s both Data Sources done! Now we can move onto the Dashboard. On the right sidebar, click through to “Home”, then click “New Dashboard”

And now you are ready to start visualizing Data. I’ve already covered some Dashboard tricks in my previous attempt at this topic. However, if it helps, here’s how I used Prometheus to setup a graph of the JVM Heap.
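If you want to build the same graph, the query will look roughly like the one below. One hedge though: the exact metric names depend on your version of the Prometheus Exporter, so check the metrics page we opened earlier to see what your instance actually exposes.

jvm_memory_bytes_used{area="heap"}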

Some Notes

Now, there is some cleanup you can do here. You can map out the storage for Grafana and InfluxDB to go to your /archive drive, for example. However, I can’t be giving away *ALL* the secrets ;). I want to challenge you there to see if you can learn to do it yourself.

We do have a few scaling options here too. For one, we can split Influx, Prometheus, and Grafana onto their own systems. However, my experience has been that this isn’t usually necessary, and they can all live comfortably on one system.

And one final note. The Prometheus exporter, strictly speaking, isn’t JIRA Data Center compatible. It will run, however. As best I can tell, it will give you the stats for each node where applicable, and the overall stats where that makes sense. It might be worth setting up Prometheus to bypass the load balancer and scrape each node individually.

But seriously, that’s it?

Indeed it is! This one is probably one of my longer posts, so thank you for making it to the end. It’s been a great week hearing how the blog is helping people out in their work, so keep it up! I’ll do my part here to keep providing you content.

On that note, this post was a reader-requested topic. I’m always happy to take on a challenge from readers, so if you have something you’d like to hear about, let me know!

One thing that I’m working on is to try and make it easier for you to be notified about new blog posts. As such, I’ve included an email subscription form at the bottom of the blog. If you want to be notified automatically about new blog posts, enter your email and hit subscribe!

And don’t forget about the Atlassian Discord chat – thoroughly unofficial. click here to join: https://discord.gg/mXuRsVu

But until next time, my name is Rodney, asking “Have you updated your JIRA issues today?”

Upgrading JIRA Data Center – with no Downtime!

So last week we took a dive into upgrading JIRA Server. So naturally this week it stands to reason that we will go over its big brother, JIRA Data Center. To be fair, doing an upgrade in Data Center isn’t much different than in Server. There are minor changes though that take full advantage of Data Center’s architecture, and those are what we’ll be focusing on today. We can sit here jabbering about it for a while, or we could get into it.

Zero Downtime Upgrades

So one of JIRA Data Center’s super-powers is the ability to stay online and completely functional while you are doing an upgrade! Called “Zero Downtime Upgrades” by Atlassian, this is the technique we will review today.

This seemingly superhuman feat takes advantage of the fact that JIRA Data Center runs on multiple nodes. Zero Downtime Upgrades allows you to put JIRA in a special “upgrade mode”, which allows nodes to be on different versions, then only bring down one JIRA node at a time while you perform the update. After the upgrade, you take it out of “upgrade mode” and let it start humming along on its new version!

Here is the doc we will be following that explains the what and how of doing a Zero Downtime Upgrade:

Before you begin

So, just because Data Center has fancy features, doesn’t absolve you from your own responsibilities. You will still need to go out and research what version makes sense for your instance, make sure it won’t impact any Apps, make sure your support systems are still supported under the new version, etc.

And I shouldn’t have to say this (I mean, I’ve pretty much said it the past two weeks now), but it is on you to thoroughly test each upgrade before putting it into production. JIRA is not a system that protects you from yourself. That is on you to do.

Also, “Zero Downtime” does not mean “Zero Risk”. It is still on you to make sure you have a good backup of the shared home, the database, and each node prior to upgrading the system. Remember our rule of thumb with upgrades:

“ALWAYS give yourself a path back to before you changed anything.”

Entering Upgrade Mode

So, you’ve selected your version, have everything ready to go, but where do you start?

To Enter the mythical “Upgrade Mode”, Head on over to Applications -> JIRA upgrades in the admin section (or hit “g” twice, then type JIRA Upgrades into the search bar that pops up).

That DB Collation Error will be taken care of shortly….

Click the “Put JIRA into upgrade mode” button, and you are off!

Upgrading each node

After putting JIRA into Upgrade mode, it’s time to do each node. Depending on the number of nodes you have, this can take a long time. Just remember, for this to be truly zero downtime, at least one JIRA node must be up at all times.

I should also point out that every JIRA node that is still up will be getting the load of the nodes that are down for the upgrade. To me, if you have the capacity, it makes sense to build out extra nodes before a JIRA upgrade to make sure that no single node becomes overloaded while some of them are down.

I won’t bore you with how to do the upgrade on each node – because I already covered it last week. At this point you treat each node like an individual JIRA server, download the installer onto each one, upgrade that system, then make any filesystem changes you need to.

If you have any customization in the JIRA install directory, you will need to copy these over as well. Remember, you can backup and restore the modified file – but during an upgrade that does carry a chance of breaking that node. It’s always a better idea to make a diff between the old and new files, identify the differences from your edits, and apply those edits to the new version of the document.

As each node comes back up, test it by bypassing the proxy and going to its IP directly. If you have customizations, you will get the warning about that, but then it should pop up with its new version.

Also check back at the proxy (System -> System info) to make sure each node is connected back to the cluster after upgrading.

Complete this for all remaining nodes, then continue on.

After you’ve upgraded all nodes

Once you’ve completed the install for all JIRA nodes, go back to Applications -> JIRA upgrades in the admin section. Here you will see a new button lit up, “Run upgrade tasks”. Go ahead and click it.

This will do all the edits necessary to the Database and Shared Home to get you running fully on your new version. As this is an important step, DO NOT SKIP IT!

Once complete, this will take you out of “Upgrade Mode” and return JIRA to normal operation.

Take this time to do your normal tour of the health checks, Apps, and basic functionality to make sure it is all working as expected on every node. Once you are satisfied, you’re done! All without any end-user interruption!

And if something went very wrong?

Well, depending on when it went wrong, you’ll change what you do.

If you have entered upgrade mode, but have not modified any nodes yet, you have the option to cancel out of upgrade mode. No harm, no foul.

If you’ve already started working on a node – but have not brought any nodes online under the new version, simply restore that node to the previous version, then cancel out and go back to test.

If – however – you do have nodes online under the new version, and want to back out fully to the old version, you are in for a bad day.

This is the point of no return for the upgrade, which means in order to return, you are going to have to bring the whole cluster down. Afterwards, you will need to restore the database, restore the shared home, and then restore each and every node that is running a new version. Then you can bring your cluster back online. Then write an email to all stakeholders explaining why you just brought JIRA down when they weren’t expecting it.

And this is why I keep telling you to test until you are confident. It won’t catch everything, but it will catch far more than not testing.

So, Listen – we have, like, a lot of nodes. Like, A LOT.

So, you are wondering if you can automate this on nodes somehow. Well….

You can automate a JIRA upgrade using tools like Ansible, Salt, etc. I actually have an Ansible job running off my Jenkins (sorry, no Bamboo here) which keeps my personal Confluence up to date without me having to upgrade it every time. However, I do not recommend you do this for production.

Atlassian is changing things all the time, and they don’t always get into the nitty gritty of what they are changing. That Confluence Upgrade job in Ansible I mentioned? It was actually broken for four months because Atlassian changed the URL you download from, and Ansible couldn’t make sense of the redirect, and I didn’t have time to debug it.

Humans are much better at adapting to these small changes. That’s why I prefer to always keep a human somewhere in the loop for a production upgrade on each node.

But, if that doesn’t dissuade you, I guess I can’t stop you. Just know I won’t be the one to tell you how to do it, just that it can be done.

And that’s Upgrading JIRA Data Center!

I hope you got something useful out of this guide. I think I’m done with upgrades for a while though – between work and the blog, I’ve upgraded 5 nodes across 4 instances in the past two weeks. So yeah – we’ll be covering something new next week.

Don’t forget to hop onto our discord community! There you can chat with fellow Atlassian Admins, get help, and see what’s new! https://discord.gg/mXuRsVu

And until next time, this is Rodney, asking “Have you updated your JIRA issues today?”

So it’s time for a JIRA Server Upgrade

So, I was recently contacted by a good friend. He isn’t a JIRA Admin, at least by trade, but he’d kind of found himself in the role anyways. He’s facing an upcoming upgrade, and wanted to know if I wanted a side gig helping out.

Unfortunately, as I’m a consultant in my day job, there were some concerns about me doing the same thing on the side, so I had to decline. However, that doesn’t mean I can’t help out.

A JIRA upgrade is no trivial matter, but they happen every day somewhere. If you know what you are doing, you can even automate it – to a point. I obviously don’t recommend automating a production upgrade, but for testing it wouldn’t be a terrible idea.

For this I’m going to use my JIRA test instance. It’s running on CentOS and MySQL 5.7, and is currently running JIRA 8.4.3. I’ll be targeting the latest Enterprise release, 8.5.3. Your system may vary a bit, but so long as you are on Linux it should still be similar enough that you can follow the post.

And yes, I am aware of you Windows folks out there. I do owe you an install and upgrade guide too. But I only have one Windows Server license – and well…it’s in use. So that will have to wait for another day.

Data Center?

So, this guide here today is for JIRA Server. There are a few extra steps in a Data Center Upgrade, but the benefit to Data Center is you can upgrade without downtime, as you upgrade each node one at a time. So, I’ll cover a Data Center upgrade next week.

When you need an expert

So, my day job is as a consultant for an Atlassian Partner – it’s my job to help people with their JIRA problems. And sometimes that includes helping people plan and execute their JIRA upgrades. So trust me, sometimes experts are needed.

So, when do you need an expert? Well…that depends. How confident are you in doing this yourself? Do you foresee problems you don’t know how to fix? What’s your support plan?

I’ve learned a lot by doing upgrades myself and getting help when I ran into a problem I couldn’t fix. But I had the benefit of working with Atlassian Premier Support, and every time I came across a problem I’d work closely with them to learn what went wrong, what it’s supposed to do, and how to fix it.

That being said, I’ve been known to live off of adrenaline, and some people’s tolerance for this sort of thing is…less. Ultimately, there are no hard rules I can give you for when you need help. It comes down to your unique situation. But if you feel you do need help, don’t be afraid to reach out to an Atlassian Partner. That’s what we are here for.

Planning it out

So, you want to go forward with the upgrade. What’s next? What are the questions you want to answer?

Well, the first question is “When should you upgrade?” This depends on your teams, but I find a good rule of thumb is to try to get an upgrade in every six months. This is not always going to happen, but you shouldn’t let it go more than a year out. Updates to JIRA Server not only include new functionality, but a multitude of bug and security fixes as well.

However, you don’t always get your way. When I was working with Development teams, I’d always avoid an upgrade leading up to and immediately after a release. This is a time your teams depend on JIRA most, and having it down just isn’t an option, even for an hour.

Next question you have to answer is “What version am I upgrading to?” To determine this, I’ll spend an hour or two with the JIRA Release Notes. Not only will I go over each feature release, taking notes on which new features are available, but I’ll also look at each feature release’s upgrade notes for gotchas and changes. As I’ve stated in previous posts, I give a heavy preference to an Enterprise Release.

After this, you should ask yourself “Will I need to upgrade anything else?” It doesn’t happen often, but sometimes Atlassian will deprecate support for a particular Database or JRE Version. So you should check the Supported Platforms Section to see if your systems are still good. Assuming you are, you can move on to testing. If not, take the time to get your supporting systems up to a supported version, then proceed.

And the last question you should be asking is “How will this affect my Addons/Apps?” Not every app is compatible with every version of JIRA. In order to account for changes in the Java API between different versions, a particular version of an app will only be compatible with a particular set of JIRA versions.

Thankfully JIRA manages all this for you. You can even get a report of what apps will be compatible with the version of JIRA you are going to. Go to Manage Apps -> Manage Apps -> JIRA Upgrade Check

Here, you can select another release of JIRA, and it will give you a list of what apps are good to go as is, what apps will be good to go if updated, and what apps are not available for that version of JIRA. This will give you a heads up on what changes you need to make to be ready.

Now an App upgrade should be treated with the same respect as a JIRA Upgrade – even though it is far easier. App upgrades can change features your teams depend on, so always do them first in test and have your teams review that functionality to make sure it works as expected.

Test Test Test

So you have all the questions answered, what’s next? Well, testing! Wait, is this déjà vu?

Yep, I actually did cover how to go about testing last week. You should go back and read that. 😉

I’ll take a backup of my test instance, and upgrade it, then figure out what breaks. Then I’ll figure out how to fix that, document the fix, reset the instance, and test the full upgrade – including fix – again. I’ll rinse and repeat until I have a fully documented upgrade with no bugs.

This is always a bit of work, but I’d much rather break it in testing than in production. The last thing you want is to come into work the Monday after the upgrade to find a line at your desk. Trust me on that one.

Once you are happy with your test system upgrade, now comes the wait. You should give a week or two for end users to test the new version of JIRA before you put it into production. Not every user will test things out, but those that care the most will. So give them a chance to find what you might have missed.

Doing an Upgrade on System

So, this is the main event. Whether you are doing it as a test, or in production, the process should be the same. That being said, I suggest you practice it in test first to get a good feel for doing it yourself.

Now I shouldn’t have to say this after last week’s post, but here it goes. Backup now. Even if it is a test, give yourself an easy way back to a pre-install state. If you must do it manually, back up JIRA’s home directory and take a database dump. If you can, just take a snapshot of the VM. But make sure you can undo this upgrade easily if you need to.
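For the manual route, it’s something along these lines – assuming the default home directory location and a MySQL database named jiradb, so adjust the paths and names for your setup:

tar -czf jira-home-backup-$(date +%F).tar.gz -C /var/atlassian/application-data jira
mysqldump -u root -p jiradb > jiradb-backup-$(date +%F).sql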

Either way, you will need to get the download first. Go to Atlassian’s JIRA Download Page. Once there, find the “All Versions Link”, then navigate to your chosen version.

Once you click the “Download” link, you will select “Linux 64 Bit” from the dropdown, click the check box to approve the EULA, then download the software. After this, you will need to transfer this file to your server via your favorite method.

Once on your system, you should have a file named “atlassian-jira-software-<version>-x64.bin”. We will make this file executable using a chmod command. Be sure you are running the commands that follow as root.

chmod 755 ./atlassian-jira-software-*-x64.bin

Before we start the upgrade, we will need to shutdown JIRA.

systemctl stop jira

And once it’s stopped, we can initiate the upgrade

./atlassian-jira-software-<version>-x64.bin

The first option we face is just making sure we intended to start the installer. Take a second here and confirm the version matches what you are intending to upgrade to, then press “Enter”.

After this, it will ask which operation you intend to perform. The installer will detect that JIRA is already installed on the system and set the default option to 3, which is upgrade. So press “Enter” to proceed.

The next option will ask you to confirm your installation directory. Take a second here and confirm it is correct. If it is, press “Enter” to proceed. If it is not, enter the correct directory, then press “Enter”.

The next screen will ask you to back up your home directory. Technically, you should have already done this before you started (right?), so I often skip this step, as it can eat up valuable time and disk space – I already have a backup. But if you didn’t heed my earlier warning, this is your last chance to back up that information.

Next, JIRA will check for modified files. There are normally only two or three of these, with the most common suspect being <jira_install>/conf/server.xml. Open up a separate terminal window and copy these out of the install directory. Once you have them backed up, hit enter.
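
A quick sketch of that copy – the install path here is an assumption (a common default), so use whatever files the installer actually flags:

mkdir ~/jira-modified-files
cp /opt/atlassian/jira/conf/server.xml ~/jira-modified-files/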

Next, it will give you a final checklist, and one last chance to back out. If you proceed from here, you are committing to changing the install directory, and will need to restore your backup to get back to a pre-test state. So hit “Enter” once to confirm you’ve completed the checklist, then hit “Enter” proudly once more. You are ready.

Here, JIRA will back up the current install directory, then replace it with a new one for the new version. For JIRA 8.5.3, it will not copy over any modifications you made, but we’ll get to that in a bit. Once done, it will ask if you want to start JIRA. Say “No”.

And JIRA is upgraded…mostly. Time to deal with those modified files.

Re-adding modifications

So, you can just drop in the modified files you backed up during the install and have things right, correct?

Isn’t that cute…but it’s wrong.

Atlassian will, on occasion, make drastic changes to these files – changes that, if not present, will cause JIRA not to run well, or at all. In one upgrade test, where I just copied over the files from a previous install, all I got from JIRA was a blank white page. It took me two days to figure out what I had done wrong.

Seriously, don’t be me. Or at the very least learn from my mistakes.

The easiest way to tell how much has changed is with Linux’s diff tool.

diff <backed up version of file> <current version from install directory>

If all you see are your modifications, then probably not a lot has changed. You should still use this as a road map to remake those modifications in the new files. Even a minor change can have an impact – no sense in risking it. Do this with all the files. Slow and steady keeps you out of trouble, after all.

It may also be worthwhile to store these modified files in a git repo – that way you can track what has changed with each upgrade over time. I’ve done it with various production systems over the years, and it certainly cannot hurt.
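
A minimal sketch of that idea, reusing the hypothetical ~/jira-modified-files directory from above:

cd ~/jira-modified-files
git init
git add server.xml
git commit -m "server.xml as modified for JIRA 8.5.3"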

After you’ve done this for all modified files, you are clear to start JIRA:

systemctl start jira

For the first start after a new install, I like to follow the logs to see if anything goes wrong.

tail -f <jira_home>/log/atlassian-jira.log

Assuming all is well, we can now log into JIRA’s UI to do the next step.

A note about modified files in JIRA 8.6 and above

As I noted when I looked at JIRA 8.6, the JIRA installer will start looking at your modified files and, where it can, copying the modifications over to the new install. To be honest, I don’t like it.

If you’ve been in this business for any length of time, you too likely have a rather healthy skepticism of anything that happens “automagically”. Even with this feature, I’d be one to copy the modified files out during the install and compare the old ones to the new ones.

That being said, I admittedly haven’t done a JIRA 8.6 install yet, and their docs didn’t provide much detail on how the process works, so it could be fine. Just a warning to you in the future that an ounce of caution here might be worth a pound of cure later.

Updating Plugins

When you first get back onto JIRA, it will warn you again about those modified files. Ignore the warnings and continue. From here, log in, then go to the Admin Console -> Manage Apps.

Come on Rodney – that isn’t good…

Given how App compatibility works, it’s likely you will have a few to upgrade. Take this time and upgrade these as well. Just click “Update” next to each one.

Be aware when you upgrade the “Atlassian Universal Plugin Manager”, you will need to refresh this page to proceed. That alone usually makes that plugin my first target.

Also be aware that any plugin update will require you to do a re-index. You don’t need to do a re-index for each update, but be sure you do one after you finish.

Wait…that’s not right…OH SHI

Now, this guide assumes everything went well. As I’ve stated, things don’t always go well. Hopefully this is in test and you have time to work it out. Even if it’s not, just take a second, breathe, and relax. This isn’t ideal, but you’ve got this.

If this is in test, collect any log files or support packets you can to your local system, then reset your instance back to before you started the upgrade. Try it again to confirm you did everything as you expected to. If the problem reoccurs, you now have a pattern, and two data points.

Then open a ticket with Atlassian Support, and include everything you have: a detailed account of the system you are running on, the version you were on, the version you are upgrading to, and all the files you collected from the system.

If this is production, get what files you can, and execute your back-out plan now. Restore the system to before the upgrade using the backups you have taken (don’t forget to also restore the database) and bring it back up on the previous version of JIRA. There is no shame in backing out of a bad upgrade. There is shame in keeping your teams without JIRA “while you figure it out.”

Take those artifacts and open a ticket with Atlassian. Because this was production, and an issue that (presumably) didn’t pop up in Testing, it can often get a higher Business Impact than a test system problem.

And that’s it?

Yes, really, that’s it! You have JIRA Server running on a new version! Whether this is test or production, the above instructions should be sufficient to get you across the finish line.

At this point in my career, I’ve honestly lost track of how many upgrades I’ve done. Using the method outlined above, I can usually complete an upgrade cycle – from kickoff to production upgrade complete – in about three weeks. Most of this time is to give end users a chance to test new features before they hit production.

So…humble brag. January is now the best month the blog has had to date, both in page views and visitors.

I cannot thank you all enough for coming by each week to see what I’m writing about. January has been an interesting month for me, with a lot going on. But being able to sit down at some point each week and share with you something about this tool has been both affirming and stabilizing. So thank you. Now let’s see if we can make February even better!

Don’t forget that if you need help, we have a discord community. Stop by, ask questions, and chat with fellow Atlassian Admins: https://discord.gg/mXuRsVu

And until next time, this is Rodney, asking “Have you updated your JIRA Issues today?”

Getting started with JIRA Data Center: Pt 2 – Updating JIRA to JDC

So, when we left off last week, we had all the support systems ready to go. As a review, we have the NFS server, database, and load balancer – each needed for JIRA Data Center to work to its fullest potential. Now we need to convert the actual JIRA instance into our first node for JIRA Data Center.

First, I want to address the elephant in the room. You may have noticed that this blog post is being released a little later than normal. That is because Atlassian released another vulnerability report today, this time for Confluence Server and Data Center. I won’t be covering it today – I’d much rather get back to the JIRA Data Center install. But I’ll post it here for your own research.

Also, as a cleanup note from last week, I forgot to mention one thing. All the systems you use should be configured to use the same NTP server for time synchronization. That is to say not the same *pool*, but the same *server*. What, me? Make that mistake? And only find it while writing this article? Never! </sarcasm>

Okay yeah, I did that. I am not perfect, and sometimes some things do slip by. But I don’t want you to make that same mistake, so I figured I’d own up to it now and not let you fall into the same problem.

Installing the JDC License

Before we do anything further, we need to install a JDC license key into JIRA. That way once we get things done and bring it online, you won’t have any issues.

To do this, we need to find our Server ID. This can be found by going to System -> System Support, then looking for “Server ID” on that page.

After you grab this, go to https://my.atlassian.com if you are getting a trial license. And honestly, I’d recommend you get a trial license while you are doing your initial testing. No need to put money down while you are doing a proof of concept.

Click on “New Trial License”, then on the next screen select your JIRA flavor from the dropdown (either JIRA Software or JIRA Service Desk). Once the rest pops up, select Data Center. You will then enter the Server ID we just grabbed.

This will generate a trial license that lasts 30 days. You can renew this a number of times, but it’s not infinite, so use this time wisely.

Copy your new trial license, then in your JIRA instance, go to Applications -> Versions and licenses, and paste it in. Then you are done with this step, and we can begin making JIRA into a proper node.

Moving the Shared Home.

From here on out, we are going to be following the doc “Installing JIRA Data Center”

Per the guide, we start by setting up our shared home. First, shut down your JIRA instance.

systemctl stop jira

Once it has stopped, navigate to your JIRA Home Directory, and copy the following folders to your shared NFS directory (a sketch of the copy follows the list).

  • data
  • plugins
  • logos
  • import
  • export
  • caches
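
A hedged example of that copy – it assumes your local home is /var/atlassian/application-data/jira (a common default) and uses the shared home path from the chown command below:

cd /var/atlassian/application-data/jira
cp -a data plugins logos import export caches /data/jira/sharedhome/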

Once copied, be sure to change the owner back to jira so that it can work once you bring the system back up.

chown -R jira:jira /data/jira/sharedhome

And this part is done! One step closer to having JDC up and running.

Set up your cluster.properties file

Now that we have the shared home set up, we need to create the file that tells JIRA that a) it’s a JDC node, and b) where to find the rest of its files. This is the cluster.properties file, and it’s placed in the root of the local home folder.

This file will have the following settings:

# This ID must be unique across the cluster
jira.node.id = jnode1
# The location of the shared home directory for all Jira nodes
jira.shared.home = /data/jira/sharedhome

As a best practice, I like to put the system’s hostname into the node ID – that way we know where to look when we are having issues. Honestly, though, it’s arbitrary, so use whatever works. The only requirement is that each node has a unique name.

There are some more options you have in the cluster.properties file, but unless you are having problems, you shouldn’t need to use them. But, I want you to at least be aware of them.

As a last setting, if you are running on Linux (like we are), go to your <jira_install>/bin/setenv.sh file, and add the following line.

ulimit -n 16384

Now we need to unblock two new ports that JIRA will use to communicate between nodes.

firewall-cmd --permanent --zone=public --add-port=40001/tcp
firewall-cmd --permanent --zone=public --add-port=40011/tcp
firewall-cmd --reload

At this point we can start JIRA as a data center instance!

systemctl start jira

If all goes well, JIRA will come up as a single-node Data Center instance.

Standing up additional nodes.

Now, we can really take advantage of the super power of JDC: Adding nodes!

Now, if we want to go easy mode, we could just clone the VM and modify the cluster.properties file. But we are JIRA Guys, we don’t do easy mode here!

On a fresh system, we need to install the same version of JIRA that is on our first node. After this install, we need to check that we are using the same user ID and group ID for JIRA on every node. If not, it will cause problems with NFS. So run the following on all nodes:

id jira

If any of them differ, they need to be changed to match the existing nodes. Note this also means you will need to fix the ownership of the files that belong to that particular JIRA user on that node. I could go through how to do this, but someone else has already done a much better job than I can.
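
Still, a rough sketch of the idea – the ID 2001 is a made-up example, and the paths assume a default install:

# change the jira user and group IDs to match the first node
groupmod -g 2001 jira
usermod -u 2001 -g 2001 jira
# re-own the install and home directories so nothing keeps the old IDs
chown -R jira:jira /opt/atlassian/jira /var/atlassian/application-data/jira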

With that minor annoyance out of the way, we can copy over the local home folder. The best way I found to do this is to tarball the entire thing up and scp it to the new node.

tar -czvf jira_home.tar.gz <jira_local_home>
scp ./jira_home.tar.gz <new_node>:~/

Once on the new node, unpack the tarball and move the home folder to the appropriate place. If you used default locations for both nodes, you can probably just unpack it at the root folder and it will find its way to the right place:
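
tar -xzvf ~/jira_home.tar.gz -C /

Then we just make sure the permissions weren’t broken: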

chown -R jira:jira <jira_local_home>

Next, we modify the cluster.properties file, changing jira.node.id to something unique in the cluster. As an added note, if your nodes do not resolve in DNS, you will need to add the following to this file.

ehcache.listener.hostname = "<ip_addr_of_node>"

You will also need to do this for the existing nodes. I also like to take the extra step of adding the file share, database, load balancer, and all nodes to the hosts file of every system involved. That way, even if you do have a DNS failure, your cluster won’t die. Host entries take the following format:

<ip_addr>    <fqdn>  <alternate_host_name>
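
For example (the address and names here are made up):

10.0.0.11    jnode1.example.com    jnode1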

An ounce of prevention is worth a pound of cure, especially when IT decides to move up an upgrade of a DNS node without telling anyone. Yes, that’s a true story!

At this point, you should make sure that the system has the shared home mounted. You can do this by copying the entry for the NFS share from the first node’s /etc/fstab file and placing it into the second’s (a hypothetical entry is shown below). Then make sure the folder structure up to the mount point is present. You can use the -p flag on mkdir to make all required folders in one command.

mkdir -p /data/jira/sharedhome
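
For reference, a hypothetical fstab entry might look like this – the server name and options are assumptions, so copy the real entry from your first node:

nfsserver:/data/jira/sharedhome    /data/jira/sharedhome    nfs    defaults    0 0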

You can now mount the share with the mount command.

mount /data/jira/sharedhome

Take a second and test that the JIRA account can write to and read all the files in there. A quick check (the test file name is arbitrary):
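
sudo -u jira touch /data/jira/sharedhome/write-test
sudo -u jira rm /data/jira/sharedhome/write-test

Now unblock all the correct ports you need for JDC: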

firewall-cmd --permanent --zone=public --add-port=8080/tcp
firewall-cmd --permanent --zone=public --add-port=40001/tcp
firewall-cmd --permanent --zone=public --add-port=40011/tcp
firewall-cmd --reload

If everything looks good, start up JIRA as you would on any other system, and follow the logs. If all goes well, you should be able to access the node by going directly to its IP address.

At this point, check the copyright info at the bottom of the page. You should see the new node’s name after the version info.

Log in and check the Health Check; if all is good, you should see green check marks on the Cluster.

Now the only thing to do is add it to your haproxy config, restart haproxy on your load balancer, and start sending traffic to your new node!
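
If you are using haproxy, the backend section might look something like this – the node names and IP addresses are made up, so adapt it to the config you built last week:

# hypothetical backend with cookie-based session stickiness
backend jira_nodes
    balance roundrobin
    cookie NODE insert indirect nocache
    server jnode1 10.0.0.11:8080 check cookie jnode1
    server jnode2 10.0.0.12:8080 check cookie jnode2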

And you made it!

It’s an involved process – I tried not to lie about that. But it’s not an Everest-sized challenge. You might have to get a bit outside your comfort zone and play with some technologies that, as a JIRA Admin, you haven’t touched before. But my view is that in order to grow as Admins, we sometimes need to step outside our comfort zones. And at least you have a road map to help guide you on your Data Center migration!

I’m still looking to do a lightning-round Q&A session next week, so get your questions in. Anything from how I do testing and capture images for the blog, to how I deal with a particular challenge, to what kind of server(s) I run – it’s all on the table.

I also have an article I’m excited to write up for New Year’s – which, incidentally enough, is a blog day.

So until next time, my name is Rodney, asking “Have you updated your JIRA Issues today?”