What happens when you are busy making other plans…

So this week has been an interesting one. Today’s post will be short. I know I promised you an article this week based on how you guys voted in last week’s poll, but you see…

Something came up.

After I got done with last week’s article, I went to have some testing done to find the root cause of some health problems I’ve been having. And well, we found the problem. It was bad enough that the doctor had me go to the ER.

What she found was a very large amount of fluid around my heart. As a result, I’ve unfortunately spent six days in the hospital. Because of where this fluid was, they couldn’t remove it the normal way, so I had to have surgery last Thursday, where they removed 600ml from around my heart! I don’t recommend that as a recreational activity. So not fun…

It’s been almost a week now since the surgery, and I am recovering nicely. I cannot describe what it’s like to have that much fluid removed at once, but the difference was definitely something I could feel. It’s been a busy week of trying to build back my strength and recover.

Which means, as much as I don’t want to disappoint you guys, I do need to be honest with you and say that I wasn’t able to do a full article this week. I really am sorry – I was looking forward to this article too. But priorities are priorities.

I have opted to keep the poll open for another week though, so if you didn’t vote last week, you still have a chance!

However, some other news this week.

Look, by the time I was in the hospital, I had pretty much ruled out Summit this year. There was no way my doctors would let me travel so soon.

But that’s okay. Due to concerns about COVID-19, Atlassian has decided to cancel the in-person Summit this year and take it virtual instead.

While I had some surprises planned if I could attend, I cannot fault Atlassian for its abundance of caution. To sign up for the now-free virtual event, follow the link below.

So, this week appears to be the week of the unexpected. I do apologize for not having this week’s article ready, but one of my resolutions was to take care of myself, so here we are. Until next time, this is Rodney, asking “Have you updated your Jira Issues today?”

Monitoring JIRA for Fun and Health

So, dear readers, here’s the deal. Some weeks, when I sit down to write, I know exactly what I’m going to write about, and can get right to it. Other weeks, I’m sitting down, and I don’t have a clue. I can usually figure something out, but it’s very much a struggle. This week is VERY much the latter.

Compound that with the fact that I just lost most of my VMs due to a storage failure I had this very morning. Part of it was a mistake on my part. I have the home lab so that I can learn things I can’t learn on the job, and mistakes are a painful but powerful way to learn. Still….

This brings me back to a conversation I had with a colleague and fellow Atlassian Administrator at a company I used to work for. He had asked me what my thoughts were on implementing monitoring for JIRA. Well, I have touched on the subject before, but if I’m being honest, that isn’t my greatest work. Combine that with the fact that I suddenly need to rebuild EVERYTHING, and, well, why not start with my monitoring stack!

So, we are going to be setting up a number of systems. To gather system stats, that is to say CPU usage, Memory Usage, and Disk usage, we are going to be using Telegraf, which will be storing that data in an InfluxDB database. Then for JIRA stats we are going to use Prometheus. And to query and display this information, we will be using Grafana.

The Setup

So we are going to be setting up a new system that will live alongside our JIRA instance. We will call it Grafana, as that is the front end we will use to interact with the system.

On the back end it will be running both an InfluxDB server and a Prometheus server. Grafana will use both InfluxDB and Prometheus as data sources, and will use them to generate stats and graphs of all the relevant information.

Our system will be a CentOS 7 system (my favorite currently), and will have the following specs:

  • 2 vCPU
  • 4 GB RAM
  • 16 GB Root HDD for OS
  • 50 GB Secondary HDD for Services

This gives us the ability to scale up the storage available to the services without too much impact on the overall system, and to monitor its size separately.

As per normal, I am going to write all commands out assuming you are root. If you are not, I’m also assuming you know what sudo is and how to use it, so I won’t insult you by holding your hand with that.

InfluxDB

Let’s get started with InfluxDB. The first thing we’ll need to do is add the yum repo from InfluxData onto the system. This will allow us to use yum to do the heavy lifting in the install of this service.

So let’s open /etc/yum.repos.d/influxdb.repo

vim /etc/yum.repos.d/influxdb.repo

And add the following to it:

[influxdb]
name = InfluxDB Repository - RHEL $releasever
baseurl = https://repos.influxdata.com/rhel/$releasever/$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key

Now we can install InfluxDB

yum install influxdb -y

And really, that’s it for the install. Kind of wish Atlassian did this kind of thing.

We’ll of course need to allow firewall access so Telegraf can get data into InfluxDB.

firewall-cmd --permanent --zone=public --add-port=8086/tcp
firewall-cmd --reload

And with that we’ll start and enable the service so that we can actually do the service setup.

systemctl start influxdb
systemctl enable influxdb

Now we need to set some credentials. As initially set up, the system isn’t really all that secure, so we are going to start securing it by using curl to create ourselves an account.

curl -XPOST "http://localhost:8086/query" --data-urlencode \
"q=CREATE USER username WITH PASSWORD 'strongpassword' WITH ALL PRIVILEGES"

I shouldn’t have to say this, but you should replace username with one you can remember and strongpassword with, well, a strong password.
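
One caveat, and this is an assumption on my part based on the stock InfluxDB 1.x package: creating that admin account doesn’t by itself force clients to authenticate. If you want InfluxDB to actually reject unauthenticated requests, you’ll likely also need to enable auth under the [http] section of /etc/influxdb/influxdb.conf:

[http]
  auth-enabled = true

Then restart the service with systemctl restart influxdb so the change takes effect.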

Now we can use the command “influx” to get into InfluxDB and do any further setup we need.

influx -username 'username' -password 'password'

Now that we are in, we need to set up a database and user for our JIRA data to go into. As a rule of thumb, I like to have one DB per application and/or system I intend to monitor with InfluxDB.

CREATE DATABASE Jira
CREATE USER jira WITH PASSWORD 'strongpassword'
GRANT ALL ON Jira TO jira
CREATE RETENTION POLICY one_year ON Jira DURATION 365d REPLICATION 1 DEFAULT
SHOW RETENTION POLICIES ON Jira

And that’s it, InfluxDB is ready to go!

Grafana

Now that we have at least one data source, we can get to setting up the front end. Unfortunately, we’ll need information from JIRA in order to set up Prometheus (once we’ve set JIRA up to use the Prometheus Exporter), so that data source will need to wait.

Fortunately, Grafana can also be set up using a yum repo. So let’s open up /etc/yum.repos.d/grafana.repo

vim /etc/yum.repos.d/grafana.repo

and add the following:

[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

Afterwards, we just run the yum install command:

sudo yum install grafana -y

Grafana defaults to port 3000, though options to change or proxy this are available.
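
If you did want to move Grafana off of port 3000, the setting lives in /etc/grafana/grafana.ini – here’s a minimal sketch of the relevant section (I’m sticking with the default, so treat this as reference only):

[server]
  http_port = 3000

Since we’re keeping the default, we will need to open port 3000 on the firewall.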

firewall-cmd --permanent --zone=public --add-port=3000/tcp
firewall-cmd --reload

Then we start and enable it:

sudo systemctl start grafana-server
sudo systemctl enable grafana-server

Go to port 3000 of the system on your web browser and you should see it up and running. We’ll hold off on setting up everything else on Grafana until we finish the system setup, though.

Telegraf

Telegraf is the tool we will use to get data from JIRA’s underlying Linux system and into InfluxDB. It’s actually distributed from the same yum repo that InfluxDB is installed from, so we’ll now add that repo to the JIRA server – the same way we did on the Grafana system.

vim /etc/yum.repos.d/influxdb.repo

And add the following to it:

[influxdb]
name = InfluxDB Repository - RHEL $releasever
baseurl = https://repos.influxdata.com/rhel/$releasever/$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key

And now that it has the yum repo, we’ll install Telegraf onto the JIRA server.

yum install telegraf -y

Now that we have it installed, we can take a look at its configuration, which you can find in /etc/telegraf/telegraf.conf. I highly suggest you take a backup of this file first. Here is an example of a config file where I’ve filtered out all the comments and added back in everything essential.

[global_tags]
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  logtarget = "file"
  logfile = "/var/log/telegraf/telegraf.log"
  logfile_rotation_interval = "1d"
  logfile_rotation_max_size = "500MB"
  logfile_rotation_max_archives = 3
  hostname = "<JIRA's Hostname>"
  omit_hostname = false
[[outputs.influxdb]]
  urls = ["http://<grafana's url>:8086"]
  database = "Jira"
  username = "jira"
  password = "<password from InfluxDB JIRA Database setup>"
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]

And that should be it for the config. There is of course more we can capture using various plugins, based on whatever we are interested in (see the sketch below for one example), but this covers the bare minimum we care about.
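
As an example of “more”, here is a hedged sketch of one extra input I like to add: the procstat plugin, pointed at the JIRA Java process so its CPU and memory show up as their own series. The pattern below is an assumption – adjust it to match however the JIRA process actually appears in ps on your server:

[[inputs.procstat]]
  pattern = "jira"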

Because Telegraf pushes data to the InfluxDB server, we don’t need to open any firewall ports for this, which means we can start it, then monitor the logs to make sure it is sending the data over without any problems.

systemctl start telegraf
systemctl enable telegraf
tail -f /var/log/telegraf/telegraf.log

And assuming you don’t see any errors here, you are good to go! The stats will be waiting for us when we finish the setup of Grafana. But first….

Prometheus Exporter

So Telegraf is great for getting the Linux system stats, but that only gives us a partial picture. We could train it to capture JMX info, but that means we have to set up JMX – something I’m keen to avoid whenever possible. So what options have we got to capture details like JIRA usage, Java heap performance, etc.?

Ladies and gentlemen, the Prometheus Exporter!

That’s right, as of the time of this writing, this is yet another free app! It will set up a special page that Prometheus can go to and “scrape” the data from. This is what will take our monitoring from “okay” to “whoa”.

Because it is a free app, we can install it directly from the “Manage Apps” section of the JIRA Administration console.

Once you click install, click “Accept & Install” on the pop up, and it’s done! After a refresh, you should notice a new sidebar item called “Prometheus Exporter Settings”. Click that, then click “Generate” next to the token field.

Next we’ll need to open the “here” link from the “Exposed metrics are here” text in a new tab. Take special note of the URL used, as we’ll need it to set up Prometheus.

Prometheus

Now we’ll go back to our Grafana system to setup Prometheus. To find the download, we’ll go to the Prometheus Download Page, and find the latest Linux 64 bit version.

Try to avoid “Pre-release”

Copy that to your clipboard, then download it to your Grafana system.

 wget https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz

Next we’ll need to unpack it and move it into its proper place.

tar -xzvf prometheus-2.15.2.linux-amd64.tar.gz
mv prometheus-2.15.2.linux-amd64 /archive/prometheus

Now if we go into the prometheus folder, we will see a normal assortment of files, but the one we are interested in is prometheus.yml. This is our config file and where we’ll be working. As always, take a backup of the original file, then open it with:

vim /archive/prometheus/prometheus.yml

Here we will be adding a new “job” to the bottom of the config. You can copy this config and modify it for your purposes. Note we are using the URL we got from the Prometheus Exporter. The first part of the URL (everything up to the first slash, or the FQDN) goes under target where indicated. The rest of the URL (folder path) goes under metrics_path. And then your token goes where indicated so that you can secure these metrics.

global:
  scrape_interval:     15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
  - static_configs:
    - targets:
rule_files:
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'Jira'
    scheme: https
    metrics_path: '<everything after the slash>'
    params:
      token: ['<token from Prometheus exporter>']
    static_configs:
    - targets:
      - <first part of JIRA URL, everything before the first '/'>

We’ll need to now open up the firewall port for Prometheus

firewall-cmd --permanent --zone=public --add-port=9090/tcp
firewall-cmd --reload

Now we can test Prometheus. From the prometheus folder, run the following command:

./prometheus --config.file=prometheus.yml

From here we can open a web browser, and point it to our Grafana server on port 9090. On the Menu, we can go to Status -> Targets and see that both the local monitoring and JIRA are online.

Go ahead and stop Prometheus for now by hitting “Ctrl + C”. We’ll need to set this up as a service so that we can rely on it coming up on its own should we ever have to restart the Grafana server.

Start by creating a unique user for this service. We’ll be using the options “--no-create-home” and “--shell /bin/false” to tell Linux this is an account that shouldn’t be allowed to log in to the server.

useradd --no-create-home --shell /bin/false prometheus

Now we’ll change the files to be owned by this new prometheus account. Note that the -R makes chown run recursively, meaning it will change ownership of every file underneath the directory where we run it. Stop and make sure you are running it from the correct directory. If you run this command from the root directory, you will have a bad day (trust me)!

chown -R prometheus:prometheus ./
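
If you’d rather not tempt fate with a relative path, the absolute form does the same thing no matter where you run it from (assuming you unpacked to /archive/prometheus like we did above):

chown -R prometheus:prometheus /archive/prometheus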

And now we can create its service file.

vim /etc/systemd/system/prometheus.service

Inside the file we’ll place the following:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/archive/prometheus/prometheus \
    --config.file /archive/prometheus/prometheus.yml \
    --storage.tsdb.path /archive/prometheus/ \
    --web.console.templates=/archive/prometheus/consoles \
    --web.console.libraries=/archive/prometheus/console_libraries 

[Install]
WantedBy=multi-user.target

After you save this file, run the following commands to reload systemd, start the service, make sure it’s running, then enable it for launch on boot:

systemctl daemon-reload
systemctl start prometheus
systemctl status prometheus
systemctl enable prometheus

Now just double check that the service is in fact running, and you’re good to go!

Grafana, the Reckoning

Now that we have both our data sources up and gathering information, we need a way to display it. In your web browser, go back to Grafana on port 3000. You should be greeted with the same login screen as before. To log in the first time, use ‘admin’ as both the username and password.

You will be prompted immediately to change this password. Do so. No – really.

After you change your password, you should see the screen below. Click “Add data source”.

We’ll select InfluxDB from the list as our first Data Source.

For settings, we’ll enter only the following:

  • Name: JIRA
  • URL: http://localhost:8086
  • Database: Jira
  • User: jira
  • Password: Whatever you set the InfluxDB Jira password to be

Click “Save & Test” at the bottom and you should have the first one down. Now click “Back” so we can set up Prometheus.

For Prometheus, all we’ll need to do is set the URL to “http://localhost:9090”. Enter that, then click “Save & Test”. And that’s both data sources done! Now we can move on to the dashboard. On the right sidebar, click through to “Home”, then click “New Dashboard”.

And now you are ready to start visualizing data. I’ve already covered some dashboard tricks in my previous attempt at this topic. However, if it helps, here’s how I used Prometheus to set up a graph of the JVM heap.
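
If a screenshot isn’t enough, the panel really just boils down to a PromQL query. The exact metric names depend on the exporter version – check the /metrics page you opened earlier and adjust accordingly – but the standard Java client gauges usually look something like this:

jvm_memory_bytes_used{area="heap"}
jvm_memory_bytes_max{area="heap"}

Graph the first, and optionally add the second so you can see how close you are running to the configured maximum.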

Some Notes

Now, there is some cleanup you can do here. You can map out the storage for Grafana and InfluxDB to go to your /archive drive, for example. However, I can’t be giving away *ALL* the secrets ;). I want to challenge you there to see if you can learn to do it yourself.

We do have a few scaling options here too. For one, we can split Influx, Prometheus, and Grafana onto their own systems. However, my experience has been that this isn’t usually necessary, and they can all live comfortably on one system.

And one final note. The Prometheus Exporter, strictly speaking, isn’t JIRA Data Center compatible. It will run, however. As best I can tell, it will give you the stats for each node where applicable, and the overall stats where that makes sense. It might be worth setting up Prometheus to bypass the load balancer and scrape each node individually.

But seriously, that’s it?

Indeed it is! This one is probably one of my longer posts, so thank you for making it to the end. It’s been a great week hearing how the blog is helping people out in their work, so keep it up! I’ll do my part here to keep providing you content.

On that note, this post was a reader-requested topic. I’m always happy to take on a challenge from readers, so if you have something you’d like to hear about, let me know!

One thing that I’m working on is to try and make it easier for you to be notified about new blog posts. As such, I’ve included an email subscription form at the bottom of the blog. If you want to be notified automatically about new blog posts, enter your email and hit subscribe!

And don’t forget about the Atlassian Discord chat – thoroughly unofficial. Click here to join: https://discord.gg/mXuRsVu

But until next time, my name is Rodney, asking “Have you updated your JIRA issues today?”

Coyote Creek Webinar: Is Your Organization Atlassian Cloud Ready?

Readers,

As you know, I don’t normally like to post mid-week. I also try to keep the blog and my work-life separate (no small feat considering they are both based around supporting Atlassian systems). However, I thought you might be interested in this.

I have been asked to host the Q&A section of a webinar my company is giving Wednesday. The topic is going to be “Is Your Organization Atlassian Cloud Ready?” If this sounds like something you’d be interested in, please follow the link here to sign up.

I’ll have a post ready for you at our normal time this Wednesday, with the webinar following shortly afterwards! Until then, I’m Rodney, asking “Have you updated your JIRA issues today?”

Upgrading JIRA Data Center – with no Downtime!

So last week we took a dive into upgrading JIRA Server. Naturally, this week it stands to reason that we will go over its big brother, JIRA Data Center. To be fair, doing an upgrade in Data Center isn’t much different than in Server, but there are minor changes that take full advantage of Data Center’s architecture, and those are what we’ll be focusing on today. We could sit here jabbering about it for a while, or we could get into it.

Zero Downtime Upgrades

So one of JIRA Data Center’s super-powers is the ability to stay online and completely functional while you are doing an upgrade! Called “Zero Downtime Upgrades” by Atlassian, this is the technique we will review today.

This seemingly superhuman feat takes advantage of the fact that JIRA Data Center runs on multiple nodes. Zero Downtime Upgrades lets you put JIRA into a special “upgrade mode”, which allows nodes to be on different versions, so you can bring down only one JIRA node at a time while you perform the update. After the upgrade, you take it out of “upgrade mode” and let it start humming along on its new version!

Here is the doc we will be following that explains the what and how of doing a Zero Downtime Upgrade:

Before you begin

So, just because Data Center has fancy features doesn’t absolve you of your own responsibilities. You will still need to go out and research what version makes sense for your instance, make sure it won’t impact any Apps, make sure your support systems are still supported under the new version, etc.

And I shouldn’t have to say this (I mean, I’ve pretty much said it the past two weeks now), but it is on you to thoroughly test each upgrade before putting it into production. JIRA is not a system that protects you from yourself. That is on you to do.

Also, “Zero Downtime” does not mean “Zero Risk”. It is still on you to make sure you have a good backup of the shared home, the database, and each node prior to upgrading the system. Remember our rule of thumb with upgrades:

“ALWAYS give yourself a path back to before you changed anything.”

Entering Upgrade Mode

So, you’ve selected your version, have everything ready to go, but where do you start?

To enter the mythical “Upgrade Mode”, head on over to Applications -> JIRA upgrades in the admin section (or hit “g” twice, then type JIRA Upgrades into the search bar that pops up).

That DB Collation Error will be taken care of shortly….

Click the “Put JIRA into upgrade mode” button, and you are off!

Upgrading each node

After putting JIRA into upgrade mode, it’s time to do each node. Depending on the number of nodes you have, this can take a long time. Just remember, for this to be truly zero downtime, at least one JIRA node must be up at all times.

I should also point out that every JIRA node that is still up will be taking on the load of the nodes that are down for the upgrade. To me, if you have the capacity, it makes sense to build out extra nodes before a JIRA upgrade to make sure that no single node becomes overloaded while some of them are down.

I won’t bore you with how to do the upgrade on each node – because I already covered it last week. At this point you treat each node like an individual JIRA server, download the installer onto each one, upgrade that system, then make any filesystem changes you need to.

If you have any customizations in the JIRA install directory, you will need to copy these over as well. Remember, you can back up and restore the modified file – but during an upgrade that does carry a chance of breaking that node. It’s always a better idea to make a diff between the old and new files, identify the differences that come from your edits, and apply those edits to the new version of the file.
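
If it helps, a quick way to see exactly what you’ve changed is a unified diff between your modified file and the stock one shipped with the new version. The paths here are hypothetical – adjust them to wherever your old and new install directories actually live:

diff -u /opt/atlassian/jira.bak/bin/setenv.sh /opt/atlassian/jira/bin/setenv.sh

Anything in that output that came from you (rather than from Atlassian’s own changes between versions) is what you re-apply to the new file.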

As each node comes back up, test it by bypassing the proxy and going to its IP directly. If you have customizations, you will get the warning about that, but then it should come up on its new version.
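
A quick way to do that check without a browser: JIRA exposes a /status endpoint that load balancers (and tired admins) can poll. Assuming your nodes listen on Tomcat’s default port 8080 and you haven’t set a context path, something like this should report a RUNNING state once the node is healthy:

curl http://<node IP>:8080/status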

Also check back at the proxy (System -> System info) to make sure each node is connected back to the cluster after upgrading.

Complete this for all remaining nodes, then continue on.

After you’ve upgraded all nodes

Once you’ve completed the install for all JIRA nodes, go back to Applications -> JIRA upgrades in the admin section. Here you will see a new button lit up, “Run upgrade tasks”. Go ahead and click it.

This will do all the edits necessary to the Database and Shared Home to get you running fully on your new version. As this is an important step, DO NOT SKIP IT!

Once complete, this will take you out of “Upgrade Mode” and return JIRA to normal operation.

Take this time to do your normal tour of the health checks, Apps, and basic functionality to make sure it is all working as expected on every node. Once you are satisfied, you’re done! All without any end-user interruption!

And if something went very wrong?

Well, depending on when it went wrong, you’ll change what you do.

If you have entered upgrade mode, but have not modified any nodes yet, you have the option to cancel out of upgrade mode. No harm, no foul.

If you’ve already started working on a node – but have not brought any nodes online under the new version – simply restore that node to the previous version, then cancel out of upgrade mode and go back to testing.

If – however – you do have nodes online under the new version, and want to back out fully to the old version, you are in for a bad day.

This is the point of no return for the upgrade, which means in order to return, you are going to have to bring the whole cluster down. Afterwards, you will need to restore the database, restore the shared home, and then restore each and every node that is running a new version. Then you can bring your cluster back online. Then write an email to all stakeholders explaining why you just brought JIRA down when they weren’t expecting it.

And this is why I keep telling you to test until you are confident. It won’t catch everything, but it will catch far more than not testing.

So, Listen – we have, like, a lot of nodes. Like, A LOT.

So, you are wondering if you can automate this on nodes somehow. Well….

You can automate a JIRA upgrade using tools like Ansible, Salt, etc. I actually have an Ansible job running off my Jenkins (sorry, no Bamboo here) which keeps my personal Confluence up to date without me having to upgrade it by hand every time. However, I do not recommend you do this for production.

Atlassian is changing things all the time, and they don’t always get into the nitty gritty of what they are changing. That Confluence Upgrade job in Ansible I mentioned? It was actually broken for four months because Atlassian changed the URL you download from, and Ansible couldn’t make sense of the redirect, and I didn’t have time to debug it.

Humans are much better at adapting to these small changes. That’s why I prefer to always keep a human somewhere in the loop for a production upgrade on each node.

But, if that doesn’t dissuade you, I guess I can’t stop you. Just know I won’t be the one to tell you how to do it, just that it can be done.

And that’s Upgrading JIRA Data Center!

I hope you got something useful out of this guide. I think I’m done with upgrades for a while though – between work and the blog, I’ve upgraded 5 nodes across 4 instances in the past two weeks. So yeah – we’ll be covering something new next week.

Don’t forget to hop onto our discord community! There you can chat with fellow Atlassian Admins, get help, and see what’s new! https://discord.gg/mXuRsVu

And until next time, this is Rodney, asking “Have you updated your JIRA issues today?”

How to test changes in JIRA

So, a bit of a backstory here. I was doing some experiments at work on running JIRA Data Center in Kubernetes using the official Atlassian containers when I noticed something odd. After loading the MySQL Connector and starting it all up, JIRA Setup kept telling me that the database wasn’t empty. I could see that it was, and per advice from a colleague, even double-checked that the collations and charsets were all correctly set.

Finally I isolated it down to the MySQL Connector. I had grabbed version 8.something, and Atlassian only supports version 5.1.48. And while this connector worked for JIRA 8.5.0, it apparently had some issues with JIRA 8.5.2 and 8.5.3.

This did get me thinking though. I went through the process of isolating the problem relatively quickly as I have had to do this fairly often in my career. But it isn’t the most intuitive thing to learn. So why not cover that this week!

Dev and Test

So, first thing: friends don’t let friends test in production. People are depending on that system being stable and available, and if you are constantly mucking about in it to “test” things, it will be anything but stable.

For all license tiers save the smallest, Atlassian also gives you an unlimited-use development license. And this is for both Apps and the main applications. USE IT! If I.T. won’t give you another system, set up a VM on your desktop. If they won’t let you use that, bring in an old PC from home. There is no excuse for testing in production.

The most common setup I see is for a team to have two non-production instances of each platform: Test and Dev. Dev is your personal instance. This is where you can make changes to your heart’s content, bring it up and down, upgrade it, reset it, whatever, as much as you want. Break it? It won’t impact anything – just refresh from Production. This is usually where I test “I wonder what will happen if I do this?”

Test, on the other hand, is your public non-production instance. You want to let a user test the functionality of a new App before purchasing it? Goes in Test. A user wants to add a new field? Put it in test and let them see what it looks like first. I usually like to refresh this from production on every JIRA Upgrade, but will do it sooner if we’ve made any big changes in production.

As a best practice, I also like to change the color scheme of JIRA for each instance, so you can identify which is which on sight. My usual scheme is to have the top bar be orange for Test and red for Dev. A few other things I do:

  • Separate out each instance to a separate DB Server
  • Make sure that if a given non-production server tries to talk to Production, it’s rerouted to the appropriate non-production instance instead – often using the /etc/hosts file (see the sketch after this list).
  • DISABLE THE OUTGOING EMAIL SERVER
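
For that rerouting trick, here’s a sketch of what I mean. The hostnames and IPs below are made up – the idea is that on the Test server, any “production” hostname it knows about resolves to the matching non-production box instead:

# /etc/hosts on the Test JIRA server (example values only)
10.20.30.41   confluence.example.com   # actually the Test Confluence system
10.20.30.42   crowd.example.com        # actually the Test Crowd system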

I definitely recommend you have both available. If you are limited to only one due to policy or budget, at least have a Test instance. Your production instance will thank you.

But what about a non-production site for JIRA Cloud?

Okay – so I haven’t had to deal with this too often. BUT, you are also not the first person to ask, dear reader. Atlassian actually has a document outlining a few options you have for setting up non-production Atlassian Cloud instances.

Take a snapshot and/or backup before changing anything

Before trying to figure out a problem or making a change, give yourself a way to get back to a pre-test state. If your instance (DB and all) is on a single VM, take a snapshot of the VM before starting. If not, take a tarball of your install and home directories, and take a database dump from your DB while you’re at it. Heck, if you can take both a file backup and a VM snapshot, do both!
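
As a rough sketch of what that looks like on a typical Linux install (the paths here are the installer defaults – adjust to wherever your install and home directories actually live):

tar -czf jira-install-$(date +%Y-%m-%d).tar.gz /opt/atlassian/jira
tar -czf jira-home-$(date +%Y-%m-%d).tar.gz /var/atlassian/application-data/jira
mysqldump -u <JIRA_DB_USERNAME> -p --single-transaction --quick <JIRA_DB> | gzip > jiradb-$(date +%Y-%m-%d).gz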

Before I have your ESXi admins after me with torches and pitchforks, I should note something here. The way I understand it, a snapshot sets up a way for ESXi to journal all the changes made to a system within a file, so it can revert those changes later. That means the longer a snapshot sits on a system, the larger it becomes. So always go back and remove a snapshot after you finish your testing. At the very least, it keeps things from getting messy.

This doesn’t only extend to a whole system. If you are changing a single file, make a copy of it first. That way you can go back to the file before you made any changes should the change prove catastrophic. The goal here is no matter what you are doing, always give yourself a path back to before you did it.

Isolate and make only one change at a time

This is probably the most challenging part of testing. For each run you do, you need to make only one change at a time. But what do I mean by change? Do I mean you should upgrade by changing one file at a time? Of course not!

The purpose of this is to isolate something enough to know what fixes or breaks it. So if you are doing a full upgrade, start by upgrading JIRA. Then check to see that it still runs as expected. Then make your changes to setenv.sh. Check again. Then server.xml. Then check again. Then upgrade the apps. Check again.

In the example I gave in the intro, here are the changes I made on each run once I found there was a problem with the DB:

  1. Drop and Re-Setup the Database using a GUI Tool
  2. Drop and Re-Setup the Database from command line.
  3. Try a MySQL 5.7 DB instead of a MySQL 5.6 DB
  4. Try JIRA 8.5.2 instead of JIRA 8.5.3
  5. Try JIRA 8.5.2 with MySQL 5.6 instead of MySQL 5.7
  6. Try JIRA 8.5.2, MySQL 5.6, with a different MySQL Connector – FIXED!

So you can see how at each step I changed only one item. Yeah, it took me six runs to find a solution, but I now know for sure it was the MySQL Connector.

Yes, this adds the significant overhead of bringing down and restarting JIRA on each run. BUT – if and when something does break, you will know it was only the last thing you did that broke it. Likewise, if something fixes it, you also know it was the last thing you did that actually fixed it.

Keep track of the changes you’ve made to each instance since the last Refresh

This is a bit of practical advice. Somewhere (Confluence), you need to have a document that shows in what ways each non-production instance has been changed since the last time you refreshed it from production.

Add a field? Add that to the doc. User tested an App? Document it. The idea is to have a journal to show what you’ve done, so that if you need to refresh it while a user is still testing something, you know where to find those changes to restore them.

And I get it – documentation is evil. Why spend time writing down what you are doing when you could be doing more? This is something I struggle with too! But this is a case where an ounce of prevention is worth a pound of cure.

Practice good Change Management on Production!

So, you’ve tested something in Dev, put it before users in Test, and now you are ready to put it into Production. Enough delays, right?

Slow down there, friend! Production is sacred, you shouldn’t just run in there with every change.

Change control/change management is a complex subject – and honestly – hasn’t always been my strong suit. But it’s meant to keep you as an admin from your worst impulses. Annoying at times, I’ll grant you, but still a good thing overall.

The best way I’ve found is to set up a board made up of your power users, other admins, and various other stakeholders as needed. Have them meet every so often (every other week seems to be the sweet spot here). If you have the budget for it, make it a lunch meeting and provide food. You are much more likely to get people to show up if they get to eat.

Then go over every change you want to make and gather feedback. They might spot a problem with a use case you hadn’t considered. But be sure to get a vote on each change before the meeting is over. Trust me, if you don’t structure and control the meeting, they will talk each point to death.

As a note here, there should be an exception to putting changes through the board during an emergency. If production is down, your first priority should be getting it back online as soon as possible. Then you can have time to retroactively put it through the board. For all non-emergency changes though, the change board is the valve to what you want to put into production.

Strictly speaking, this is not part of testing, but all things considered, I didn’t want you to run off thinking testing was the last step. As with everything JIRA, it all works best when it’s a process.

And that is it!

You are ready to do some testing in JIRA. With the advice above, you are ready to maintain your JIRA Instances responsibly – or at the very least give yourself a way out of any sticky situations you find yourself in.

Don’t forget to join us on Discord! https://discord.gg/mXuRsVu

Until next time, this is Rodney, asking “Have you updated your JIRA issues today?”

Getting started with JIRA Data Center: Pt 1 – Support Systems

Okay, okay, you guys talked me into it. Today and next week we’ll discuss when you should start thinking about Data Center, what support systems you will need to put into place, and what the process looks like for converting your single-server instance into a multi-node JIRA Data Center instance.

It was actually an email from one of you that made me decide to cover JIRA Data Center. I love hearing from people about how much they are learning from this blog, so don’t be afraid to send me a comment, use the contact form on the blog, or send me a DM on LinkedIn.

Now, some forewarning here. A Data Center install is an involved process. This is something you absolutely should practice doing on a test instance. Actually no, you should practice it several times. It’s not something you want to just do on production. So, without any more delays, let’s get to this.

When should you look to convert to Data Center?

So, question time. When should you look to migrate from JIRA Server to Data Center? Is this something that will even be worth the expense and time?

This is a question that I’ve actually struggled with too. I pushed back against a Data Center migration for a while there, with my reasoning being that since we weren’t experiencing any significant problems with performance, why take on the extra cost and effort?

It was actually a mid-day downtime event that made me reconsider. The VM host that JIRA lived on unexpectedly went down. Now, the VM infrastructure wasn’t my responsibility, but the resulting corruption of the JIRA index was. JIRA was actually down for an additional 1.5 hours because we had to rebuild the index from scratch.

So I’m going to tell you now, don’t be me. Use actual numbers and metrics to inform your decision. Atlassian recommends you look at three things. These are not a hard and fast checklist, but a way to start the conversation about whether this is right for your organization.

1. Active User Count

The first criterion is user base. Atlassian’s own studies have shown that organizations typically start running into performance issues on JIRA Server when supporting between 500 and 1000 active users. So if your instance has a peak load of around 450 users – it might be time to bring this up.

2. Performance Degradation

The second is actual performance. If you are experiencing regular performance degradations after you’ve done every optimization you can find – it might be time to bring this up. You can only “grease the track” so much before your single node can’t support any more.

3. Downtime and Outages

The third factor to consider is how critical your instance is and how tolerant you can be of downtime and outages. If your business needs dictate that JIRA has to be up, period – you guessed it, it might be time to bring this up.

Now these do provide some numbers and guidelines, but they leave a lot up to your judgement. Take your time and consider these things carefully. This is not something to shout “Leeroy Jenkins” and run into head-first.

What support systems will I need to setup?

So you’ve looked at everything and decided, “Yes, Data Center is for me.” What next? Well, that’s actually going to be the focus of the rest of today’s post. In order for JIRA Data Center to work, each “JIRA Server” node needs access to a common shared set of resources. You can read more about it in this week’s document, simply titled “JIRA Data Center”.

From Atlassian JDC Documentation

As you can see, the three things we’ll need are a shared file system, a load balancer, and a shared database. As far as databases go, JDC supports the same selection as JIRA Server. This means we can set up the database the same way we did for Server, only we’ll need to tweak the settings a bit to make it friendly for network use.

For the file share, I’ll be setting up a dedicated NFS server for this function. I’ll also be setting up HAProxy as the load balancer. Both of these, as well as the database, will be running on CentOS 7 systems. These are just my preferences… you could use SMB/Windows file shares if you were running in a Windows environment, or you could run Nginx as your load balancer. If you have the funds, you can even run an F5. As I’ve stressed multiple times: follow the supported platforms sheet, but use what you or your team knows.

Assumptions

Alright, so I am assuming a few things here. First, I am going to assume that you are at least familiar with how I set up JIRA Server, based on my posts here, here, and here, and that you have set up your Server instance following those instructions. I am also assuming that the database is currently on the JIRA Server.

These are assumptions I need to make in order to write this guide, as I need to know where you are starting from. However, I also think that you are smart enough that where your configuration is different, you can figure it out from what I’ve provided.

Setting up an NFS Share for the Shared Home

So… this is annoying. There doesn’t appear to be a good guide from Atlassian on setting this up. But that’s okay, I’ve set up NFS before for other purposes, so we’ve got this.

Start with a fresh CentOS 7 machine. Our first job will be to install the package nfs-utils.

yum install nfs-utils -y

After this we’ll make a directory where the share will live. For ease of monitoring, I’m also going to put this on its own drive, separate from the root filesystem, but one step at a time.

mkdir /var/jdc-share

Now that we have a directory for the share, let’s map it to a drive. To do this in a sustainable way, we need to find the UUID of the new drive – which for my example is /dev/sdb1:

blkid /dev/sdb1

We then take this, and using the text editor of your choice, modify /etc/fstab, adding the following line:

UUID=e64ff994-a046-407e-96fe-3bc8ba149254 /var/jdc-share xfs defaults 0 0

If you have your fstab correct, all you should have to do is mount the folder. I will also check df -h to be sure it mounted as I expected:

mount /var/jdc-share
df -h

Next we need to make sure the permissions are correct so that we don’t have any issues when we configure it for NFS. To do this, we need to set the folder’s permissions and file ownership:

chmod -R 755 /var/jdc-share
chown nfsnobody:nfsnobody /var/jdc-share

Now that we have the filesystem prepared, we can configure the NFS service to actually share that folder. Open up the /etc/exports file in the text editor of your choice and add the following line:

/var/jdc-share <ip range of JIRA nodes>(rw,sync,no_root_squash,no_all_squash)

For the IP range – you might need a network engineer to help you. Most of the time, though, you can get a close approximation by doing the following. Take the IP address of your first node or JIRA Server, as appropriate, replace the last octet (number) with 0, then add a /24 to the end. So if your JIRA Server’s IP is 172.16.1.63, your IP range will likely be 172.16.1.0/24. This only works if all your JIRA nodes will be in the 172.16.1.0 subnet! Talk to your network engineers or whoever provisions your IP addresses to confirm!

So with that configured, start the following services:

systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap

This will start your services. Now we need to make sure the Firewall won’t block your incoming nfs requests:

firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --permanent --zone=public --add-service=mountd
firewall-cmd --permanent --zone=public --add-service=rpc-bind
firewall-cmd --reload

Now it’s time to test. Remember, you are not done until you’ve confirmed it’s working yourself. Go to your JIRA Server and install nfs-utils, the same as we did for the NFS server. Then do a temporary mount using the following command:

mount -t nfs <ip address of nfs Server>:/var/jdc-share /mnt

You can use a domain name for this, but DNS then becomes yet another point of failure, so for the support systems I like to use IP addresses where possible. If all goes well, you should see no error. Try writing a file or two to /mnt/ from the JIRA Server, and see if you can see it on the NFS server.
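
Something like this is all the test needs to be – write from the JIRA side, then look for the file on the NFS side:

touch /mnt/nfs-write-test        # on the JIRA Server, via the temporary mount
ls -l /var/jdc-share/            # on the NFS Server - the file should show up here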

If all looks good, we can go ahead and enable those services on the NFS server, add an fstab entry to all your future JIRA nodes, then mount the share.

On NFS Server:
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap

This is the line that needs to be added to /etc/fstab on every JIRA node in your JDC deployment:

<ip address of NFS Server>:/var/jdc-share /data/jira/sharedhome nfs defaults 0 0

Make sure the mountpoint ‘/data/jira/sharedhome’ is created on each node, then manually mount the share to that node. By adding it to the /etc/fstab, it will also auto-mount on each boot.

umount /mnt
mkdir -p /data/jira/sharedhome
mount /data/jira/sharedhome

A bit of a different order, but still works!

And that’s the share. Unlike the load balancer and database, we can’t really use it until we go to transform our JIRA Server into JIRA Data Center. But we’ve at least tested that it works as expected.

Setting up HAProxy for use with JIRA DC

Unlike with the file share, Atlassian has some documentation around setting up your Load Balancer.

So, for this we’ll also be starting with a fresh CentOS 7 instance. To install HAProxy, we’ll run the following command.

yum install haproxy -y

This will install the application on your system. To configure it, first take a backup of the file /etc/haproxy/haproxy.cfg, then open it in the text editor of your choice.

cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.example
nano /etc/haproxy/haproxy.cfg

Once open, remove the following sections from the default config:

  • frontend main
  • backend static
  • backend app

After this, add the following to the bottom:

#Begin JIRA Configuration
frontend ft_web
  bind 0.0.0.0:8080
  default_backend bk_web
  
backend bk_web
  balance roundrobin
  cookie JSESSIONID prefix nocache
  server s1 <JIRA Server IP>:<JIRA Server Port> check cookie s1
#End JIRA Configuration

Once you enter this, start haproxy with the following command:

systemctl start haproxy

You should be able to go to port 8080 on the load balancer server and get to your JIRA instance. This assumes that port 8080 is open on both your JIRA server and the load balancer.
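
A quick command-line check works too – hit JIRA’s /status endpoint through the load balancer and make sure you get a sensible answer back (the IP below is a placeholder):

curl http://<load balancer IP>:8080/status

If that comes back happy, enable the service so that it survives a system restart: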

systemctl enable haproxy

A note here: my configuration assumes you are going to put a proxy in front of the load balancer to handle SSL termination. You would do this similarly to how we did it for JIRA Server, just pointing to the load balancer instead of the JIRA application. You can also set up HAProxy to handle SSL directly, but let’s keep things easy here.

Database?

The database is the easiest part of all this. If you’ve already set up the database as a remote resource, congrats, you are already done with this section. However, if you haven’t, please read on.

From yet another fresh CentOS install, follow the database setup we went through with JIRA Server. All the settings will be the same for this, save for a few details. First, we need to tweak the SQL statement that grants access to the JIRA User.

By default it looks like:

GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,REFERENCES,ALTER,INDEX on <JIRADB>.* TO '<USERNAME>'@'<JIRA_SERVER_HOSTNAME>' IDENTIFIED BY '<PASSWORD>';

However, we need to make sure JIRA can log in remotely, so instead of “<jira_server_hostname>”, we’ll be putting ‘%’, so that it now looks like:

GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,REFERENCES,ALTER,INDEX on <JIRADB>.* TO '<USERNAME>'@'%' IDENTIFIED BY '<PASSWORD>';

Granted, you could add a grant for every JIRA node’s hostname, but in reality this adds a lot of overhead for only marginal security gains, so it’s not really necessary.

Next we need to add a firewall rule to allow traffic to mysql:

firewall-cmd --permanent --zone=public --add-service=mysql
firewall-cmd --reload

And that will be your database server ready to host a JIRA database. Now we just need to migrate a JIRA DB to it. The first thing you should do is shut down your JIRA instance so as to guarantee there will be no changes to the data while you work. If you set up your JIRA instance as a service, enter the following:

systemctl stop jira

Now go to your current JIRA host, and enter the following command, using the JIRA DB’s username and password for that host:

mysqldump -h localhost --user=<JIRA_DB_USERNAME> --password=<JIRA_DB_PASSWORD> --single-transaction --quick --opt <JIRA_DB> | gzip > "jiradb-$(date +%Y-%m-%d).gz"

Transfer the resulting file, which should be named “jiradb-<today’s date>.gz”, to your new centralized DB server. Once it’s there, import it into that database using:

gunzip < jiradb-<today's date>.gz | mysql -u root -p <JIRA_DB>

You should take a moment here to connect to the new database with a remote tool, like MySQL Workbench, using the JIRA DB username and password. Make sure you can see your JIRA database as you’d expect it before moving on. If you can connect to it, JIRA should be able to connect to it, assuming no network obstructions.

Once you’ve completed that, you should be ready to cut JIRA over to the new database. Go to the JIRA home folder on your JIRA node. There you will find a file called “dbconfig.xml”. Take a backup of it, then open it up in your text editor of choice.

In here we are looking for the fields “url”, “username”, and “password”. Change the host in the URL to point at your new database server, and the username and password to match your new database setup.
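
For reference, the relevant chunk of dbconfig.xml looks roughly like the excerpt below. This is a sketch based on a MySQL setup – your generated file will have more elements (pool sizing and so on), and those should be left exactly as they are. Only the host in the URL, the username, and the password need to change:

<jdbc-datasource>
  <url>jdbc:mysql://<new DB server IP>:3306/<JIRA_DB>?useUnicode=true&amp;characterEncoding=UTF8</url>
  <username><JIRA_DB_USERNAME></username>
  <password><JIRA_DB_PASSWORD></password>
  <!-- leave the rest of the generated file as-is -->
</jdbc-datasource>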

Now start up JIRA, and monitor the logs to make sure it connects cleanly. Once it’s done loading, go into the UI to make sure everything looks normal. Check under System -> Troubleshooting and Support tools that there are no errors, and check System -> System Info to make sure it’s connected as you expected.

If everything looks good, congratulations! Your system is now ready to be converted to JIRA Data Center – which we’ll focus on next week.

A quick note here: this only works if you are staying on the same database platform. If you are using this opportunity to migrate to another database platform, like moving from MySQL to PostgreSQL, I suggest you read the following documentation. However, the gist of it is, you will need to run an export of your entire instance, stand up a new instance on the new database platform, then import your backup into that new instance, followed by copying over the attachments.

Can’t I have one system do all three support roles?

In an ideal world, yes. Save the resources, run only one VM. We don’t live in an ideal world. The idea here is to spread the risk around to multiple systems so that any one point is less likely to be a problem. Having one machine perform multiple roles increases the complexity of that system, and therefore increases the likelihood of something going wrong. As my engineering professors used to say: “Keep it simple.”

But you have some pretty big single points of failure right there.

This is a valid criticism of my configuration here. With enough time and resources, ideally you’d want to make every system here redundant – and they all support redundancy. However, is it always worth it? My goal is to point you in the right direction. In my lab setup, I have one VM server, with limited resources. There is also time to consider. I have a self-imposed deadline for these articles as well.

But if you have the time and resources, you should definitely research how to run each of these services redundantly. If our goal is no downtime, it will do a lot to guarantee that. Understand this is an example project, and not a production system.

So what’s Next?

Well, next week we’ll talk about converting your JIRA Server system into a JIRA Data Center Node, and then what you need to do to setup each additional JDC Node after that.

Also a note about the coming weeks. We are about to enter the holiday season here in the United States, so I’ll actually be off work (though on call) for the blog posts on the 25th and the 1st. Given that, I’d like to do something special for those. So, send me your quick JIRA questions and I’ll do a lightning round Q&A, assuming I get enough questions in. And remember, I am always willing to take on reader requests for topics, so even if you don’t think your question is small enough for a lightning round, ask it anyways! This very post started as a reader request!

So until then, this is Rodney, asking “Have you updated your JIRA Issues today?”