Monitoring JIRA for Fun and Health

So, dear readers, here’s the deal. Some weeks, when I sit down to write, I know exactly what I’m going to write about, and can get right to it. Other weeks, I’m sitting down, and I don’t have a clue. I can usually figure something out, but it’s very much a struggle. This week is VERY much the latter.

Compound that with the fact that I just lost most of my VMs due to a storage failure this very morning. Part of it was a mistake on my part. I have the home lab so that I can learn things I can't learn on the job, and mistakes are a painful but powerful way to learn. Still….

This brings me back to a conversation I had with a colleague and fellow Atlassian Administrator at a company I used to work for. He had asked me what my thoughts were around implementing monitoring for JIRA. Well, I have touched on the subject before, but if I'm being honest, that isn't my greatest work. Combine that with the fact that I suddenly need to rebuild EVERYTHING, and, well, why not start with my monitoring stack!

So, we are going to be setting up a number of systems. To gather system stats (that is to say, CPU, memory, and disk usage), we are going to be using Telegraf, which will store that data in an InfluxDB database. Then for JIRA stats we are going to use Prometheus. And to query and display this information, we will be using Grafana.

The Setup

So we are going to be setting up a new system that will live alongside our JIRA instance. We will call it Grafana, since Grafana will be the front end we use to interact with the system.

On the back end it will be running both an InfluxDB server and a Prometheus server. Grafana will use both InfluxDB and Prometheus as data sources, and will use them to generate stats and graphs of all the relevant information.

Our system will be a CentOS 7 system (my favorite currently), and will have the following stats:

  • 2 vCPU
  • 4 GB RAM
  • 16 GB Root HDD for OS
  • 50 GB Secondary HDD for Services

This will give us the ability to scale up the storage capacity for services without too much impact on the overall system, and to monitor its size separately.
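Later steps will put Prometheus under /archive, so here's one way to put that 50 GB secondary disk to work there. This is a sketch under assumptions: the device name (/dev/sdb here) and the filesystem choice will vary with your hypervisor, so check lsblk before running any of it.

```shell
# Assumes the secondary disk appears as /dev/sdb -- verify with lsblk before formatting!
mkfs.xfs /dev/sdb
mkdir /archive
echo '/dev/sdb  /archive  xfs  defaults  0 0' >> /etc/fstab
mount /archive
```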

As per normal, I am going to write all commands out assuming you are root. If you are not, I’m also assuming you know what sudo is and how to use it, so I won’t insult you by holding your hand with that.

InfluxDB

Let's get started with InfluxDB. The first thing we'll need to do is add the yum repo from Influxdata onto the system. This will allow us to use yum to do the heavy lifting in the install of this service.

So let's open /etc/yum.repos.d/influxdb.repo

vim /etc/yum.repos.d/influxdb.repo

And add the following to it:

[influxdb]
name = InfluxDB Repository - RHEL $releasever
baseurl = https://repos.influxdata.com/rhel/$releasever/$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key

Now we can install InfluxDB

yum install influxdb -y

And really, that’s it for the install. Kind of wish Atlassian did this kind of thing.

Of course, we'll need to allow firewall access so that Telegraf can get data into InfluxDB.

firewall-cmd --permanent --zone=public --add-port=8086/tcp
firewall-cmd --reload

And with that we’ll start and enable the service so that we can actually do the service setup.

systemctl start influxdb
systemctl enable influxdb

Now we need to set some credentials. As initially set up, the system isn't really all that secure, so we are going to start securing it by using curl to create ourselves an admin account.

curl -XPOST "http://localhost:8086/query" --data-urlencode \
"q=CREATE USER username WITH PASSWORD 'strongpassword' WITH ALL PRIVILEGES"

I shouldn’t have to say this, but you should replace username with one you can remember and strongpassword with, well, a strong password.

Now we can use the command “influx” to get into InfluxDB and do any further set up we need.

influx -username 'username' -password 'strongpassword'

Now that we are in, we need to set up a database and user for our JIRA data to go into. As a rule of thumb, I like to have one DB per application and/or system I intend to monitor with InfluxDB.

CREATE DATABASE Jira
CREATE USER jira WITH PASSWORD 'strongpassword'
GRANT ALL ON Jira TO jira
CREATE RETENTION POLICY one_year ON Jira DURATION 365d REPLICATION 1 DEFAULT
SHOW RETENTION POLICIES ON Jira
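Before moving on, it's worth a quick sanity check that the new database and credentials actually behave. Here's one way, using curl from the same host; test_metric is just a throwaway measurement name for illustration, not anything JIRA will use:

```shell
# Write a throwaway point into the Jira database as the jira user...
curl -XPOST "http://localhost:8086/write?db=Jira" \
  -u jira:strongpassword \
  --data-binary 'test_metric value=1'

# ...then read it back; the JSON response should contain test_metric
curl -G "http://localhost:8086/query?db=Jira" \
  -u jira:strongpassword \
  --data-urlencode 'q=SELECT * FROM test_metric'
```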

And that’s it, InfluxDB is ready to go!

Grafana

Now that we have at least one data source, we can get to setting up the front end. Unfortunately, we'll need information from JIRA in order to set up Prometheus (once we've set JIRA up to use the Prometheus Exporter), so that data source will need to wait.

Fortunately, Grafana can also be set up using a yum repo. So let's open up /etc/yum.repos.d/grafana.repo

vim /etc/yum.repos.d/grafana.repo

and add the following:

[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

Afterwards, we just run the yum install command:

yum install grafana -y

Grafana defaults to port 3000, though options to change or proxy this are available. Either way, we will need to open port 3000 on the firewall.

firewall-cmd --permanent --zone=public --add-port=3000/tcp
firewall-cmd --reload

Then we start and enable it:

systemctl start grafana-server
systemctl enable grafana-server

Go to port 3000 of the system on your web browser and you should see it up and running. We’ll hold off on setting up everything else on Grafana until we finish the system setup, though.

Telegraf

Telegraf is the tool we will use to get data from JIRA's underlying Linux system into InfluxDB. It ships from the same yum repo as InfluxDB, so we'll now add that repo to the JIRA server, just as we did on the Grafana system.

vim /etc/yum.repos.d/influxdb.repo

And add the following to it:

[influxdb]
name = InfluxDB Repository - RHEL $releasever
baseurl = https://repos.influxdata.com/rhel/$releasever/$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key

And now that it has the YUM repo, we’ll install telegraf onto the JIRA Server.

yum install telegraf -y

Now that we have it installed, we can take a look at its configuration, which you can find in /etc/telegraf/telegraf.conf. I highly suggest you take a backup of this file first. Here is an example of a config file where I've filtered out all the comments and added back in everything essential.

[global_tags]
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  logtarget = "file"
  logfile = "/var/log/telegraf/telegraf.log"
  logfile_rotation_interval = "1d"
  logfile_rotation_max_size = "500MB"
  logfile_rotation_max_archives = 3
  hostname = "<JIRA's Hostname>"
  omit_hostname = false
[[outputs.influxdb]]
  urls = ["http://<grafana's url>:8086"]
  database = "Jira"
  username = "jira"
  password = "<password from InfluxDB JIRA Database setup>"
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]

And that should be it for config. There is of course more we can capture using various plugins, based on whatever we are interested in, but this covers the bare minimum.
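For instance, if you also wanted to watch the JIRA Java process itself, Telegraf's procstat input can be appended to the same telegraf.conf. The pattern value here is an assumption; adjust it to match how your JIRA process appears in ps output:

```toml
# Track CPU and memory of any process whose command line matches "jira"
[[inputs.procstat]]
  pattern = "jira"
```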

Because telegraf is pushing data to the InfluxDB server, we don’t need to open any firewall ports for this, which means we can start it, then monitor the logs to make sure it is sending the data over without any problems.

systemctl start telegraf
systemctl enable telegraf
tail -f /var/log/telegraf/telegraf.log
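As an extra check, you can also confirm samples are actually landing by querying InfluxDB back on the Grafana system; usage_idle below is one of the fields Telegraf's cpu plugin writes:

```shell
# Run on the Grafana/InfluxDB system; should print recent CPU samples from the JIRA host
influx -username 'jira' -password 'strongpassword' -database 'Jira' \
  -execute 'SELECT usage_idle FROM cpu ORDER BY time DESC LIMIT 5'
```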

And assuming you don't see any errors here, you are good to go! The stats will be waiting for us when we finish the setup of Grafana. But first….

Prometheus Exporter

So Telegraf is great for getting the Linux system stats, but that only gives us a partial picture. We can train it to capture JMX info, but that means we have to set up JMX, something I'm keen to avoid whenever possible. So what options have we got to capture details like JIRA usage, Java heap performance, etc.?

Ladies and gentlemen, the Prometheus Exporter!

That’s right, as of the time of this writing, this is yet another free app! This will setup a special page that Prometheus can go to and “scrape” the data from. This is what will take our monitoring from “okay” to “Woah”.

Because it is a free app, we can install it directly from the "Manage Apps" section of the JIRA Administration console.

Once you click install, click “Accept & Install” on the pop up, and it’s done! After a refresh, you should notice a new sidebar item called “Prometheus Exporter Settings”. Click that, then click “Generate” next to the token field.

Next we'll need to open the "here" link in the "Exposed metrics are here" text in a new tab. Take special note of the URL used, as we'll need it to set up Prometheus.

Prometheus

Now we’ll go back to our Grafana system to setup Prometheus. To find the download, we’ll go to the Prometheus Download Page, and find the latest Linux 64 bit version.

Try to avoid “Pre-release”

Copy that to your clipboard, then download it to your Grafana system.

wget https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz

Next we'll need to unpack it and move it into its proper place.

tar -xzvf prometheus-2.15.2.linux-amd64.tar.gz
mv prometheus-2.15.2.linux-amd64 /archive/prometheus

Now if we go into the prometheus folder, we will see a normal assortment of files, but the one we are interested in is prometheus.yml. This is our config file, and it's where we'll be working. As always, take a backup of the original file, then open it with:

vim /archive/prometheus/prometheus.yml

Here we will be adding a new “job” to the bottom of the config. You can copy this config and modify it for your purposes. Note we are using the URL we got from the Prometheus Exporter. The first part of the URL (everything up to the first slash, or the FQDN) goes under target where indicated. The rest of the URL (folder path) goes under metrics_path. And then your token goes where indicated so that you can secure these metrics.

global:
  scrape_interval:     15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
  - static_configs:
    - targets:
rule_files:
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'Jira'
    scheme: https
    metrics_path: '<everything after the slash>'
    params:
      token: ['<token from Prometheus exporter>']
    static_configs:
    - targets:
      - <first part of JIRA URL, everything before the first '/'>
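If it helps to see that split concretely, here's the same carving-up done in plain shell, using a made-up exporter URL (your hostname and path will differ):

```shell
# Hypothetical URL copied from the exporter's "here" link, with the scheme stripped
url="jira.example.com/plugins/servlet/prometheus/metrics"

# Everything before the first slash: the Prometheus target
target="${url%%/*}"

# The first slash and everything after it: the metrics_path
metrics_path="/${url#*/}"

echo "$target"        # jira.example.com
echo "$metrics_path"  # /plugins/servlet/prometheus/metrics
```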

Now we'll need to open up the firewall port for Prometheus.

firewall-cmd --permanent --zone=public --add-port=9090/tcp
firewall-cmd --reload
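Before starting the server, you can optionally validate the config file with promtool, which ships in the same tarball as the prometheus binary:

```shell
# From /archive/prometheus; exits non-zero and prints the problem if the YAML is invalid
./promtool check config prometheus.yml
```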

Now we can test Prometheus. From the prometheus folder, run the following command.

./prometheus --config.file=prometheus.yml

From here we can open a web browser, and point it to our Grafana server on port 9090. On the Menu, we can go to Status -> Targets and see that both the local monitoring and JIRA are online.

Go ahead and stop Prometheus for now by hitting "Ctrl + C". We'll need to set this up as a service so that we can rely on it coming up on its own should we ever have to restart the Grafana server.

Start by creating a unique user for this service. We'll be using the options "--no-create-home" and "--shell /bin/false" to tell Linux this is an account that shouldn't be allowed to log in to the server.

useradd --no-create-home --shell /bin/false prometheus

Now we'll change the files to be owned by this new prometheus account. Note that the -R makes chown run recursively, meaning it will change ownership of every file underneath where we run it. Stop and make sure you are running it from the correct directory. If you run this command from the root directory, you will have a bad day (trust me)!

chown -R prometheus:prometheus ./

And now we can create its service file.

vim /etc/systemd/system/prometheus.service

Inside the file we’ll place the following:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/archive/prometheus/prometheus \
    --config.file /archive/prometheus/prometheus.yml \
    --storage.tsdb.path /archive/prometheus/ \
    --web.console.templates=/archive/prometheus/consoles \
    --web.console.libraries=/archive/prometheus/console_libraries 

[Install]
WantedBy=multi-user.target

After you save this file, type the following commands to reload systemd, start the service, make sure it's running, and then enable it to launch on boot:

systemctl daemon-reload
systemctl start prometheus
systemctl status prometheus
systemctl enable prometheus

Now just double check that the service is in fact running, and you’re good to go!

Grafana, the Reckoning

Now that we have both our datasources up and gathering information, we need to start by creating a way to display it. On your web browser, go back to Grafana, port 3000. You should be greeted with the same login screen as before. To login the first time, use ‘admin’ as username and password.

You will be prompted immediately to change this password. Do so. No – really.

After you change your password, you should see the screen below. Click "Add data source".

We’ll select InfluxDB from the list as our first Data Source.

For settings, we’ll enter only the following:

  • Name: JIRA
  • URL: http://localhost:8086
  • Database: Jira
  • User: jira
  • Password: Whatever you set the InfluxDB Jira password to be

Click “Save & Test” at the bottom and you should have the first one down. Now click “Back” so we can set up Prometheus.

On Prometheus, all we'll need to do is set the URL to "http://localhost:9090". Enter that, then click "Save & Test". And that's both data sources done! Now we can move on to the dashboard. On the right sidebar, click through to "Home", then click "New Dashboard".
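As an aside, if you ever want to script this instead of clicking through the UI, Grafana also exposes an HTTP API for data sources. A minimal sketch for the Prometheus source, assuming the admin credentials you set at first login:

```shell
# Adds the same Prometheus data source via Grafana's HTTP API
# (replace admin:yournewpassword with your actual credentials)
curl -XPOST http://localhost:3000/api/datasources \
  -u admin:yournewpassword \
  -H 'Content-Type: application/json' \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://localhost:9090","access":"proxy"}'
```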

And now you are ready to start visualizing Data. I’ve already covered some Dashboard tricks in my previous attempt at this topic. However, if it helps, here’s how I used Prometheus to setup a graph of the JVM Heap.

Some Notes

Now, there is some cleanup you can do here. You can map out the storage for Grafana and InfluxDB to go to your /archive drive, for example. However, I can't be giving away *ALL* the secrets ;). I want to challenge you to see if you can work it out yourself.

We do have a few scaling options here too. For one, we can split Influx, Prometheus, and Grafana onto their own systems. However, my experience has been that this isn’t usually necessary, and they can all live comfortably on one system.

And one final note. The Prometheus Exporter, strictly speaking, isn't JIRA Data Center compatible. It will run, however. As best I can tell, it will give you the stats for each node where applicable, and the overall stats where that makes sense. It might be worth setting up Prometheus to bypass the load balancer and scrape each node individually.

But seriously, that’s it?

Indeed it is! This one is probably one of my longer posts, so thank you for making it to the end. It’s been a great week hearing how the blog is helping people out in their work, so keep it up! I’ll do my part here to keep providing you content.

On that note, this post was a reader-requested topic. I’m always happy to take on a challenge from readers, so if you have something you’d like to hear about, let me know!

One thing I'm working on is making it easier for you to be notified about new blog posts. As such, I've included an email subscription form at the bottom of the blog. If you want to be notified automatically about new posts, enter your email and hit subscribe!

And don't forget about the (thoroughly unofficial) Atlassian Discord chat. Click here to join: https://discord.gg/mXuRsVu

But until next time, my name is Rodney, asking “Have you updated your JIRA issues today?”

App Review: Botron Power Admin

So, Confession time…

This was going to be my Christmas day 2019 post. But – it wasn’t ready in time, and making it ready would have meant not spending time with my family. And family is always more important. So, I chose to delay it until today.

This also represents something new I am trying… well, actually a few new things. So let me know in the comments here or on LinkedIn what you think. Would you like more like this? Do you have any favorites you think need attention? I'd love to hear from you.

So then, I'll be looking at a new(ish) app that I recently learned about at an ACE event in Atlanta last month. It solves a problem that has frustrated me many a time, and one I imagine you've dealt with too.

Now, I should note here that this is a review of an app (or plugin, if you are old-school like me). I have not received any monetary compensation for this post, and they don't even know I'm writing it. I did get a T-shirt and a few small plushies at the ACE event, but that was swag we all got there. Still, I think I should be upfront and honest about that. In the end, this app solves a big problem I've always had, and I want you to at least try it out.

That being said, he is cute!

The Problem

So, imagine this. You are in a meeting, reviewing some configuration changes you want to make to a field, and a stakeholder asks this dreaded question:

So, where exactly is this field used, anyways?

Every stakeholder, every time.

Well, there goes your afternoon. This is not something that is easy to answer in vanilla JIRA. Yeah, you can see what projects it’s “Used” in…but that’s only if it happens to appear on a screen associated with that project. That leaves out workflow transitions, filters, and dashboards – the last two of which you may not get a clear understanding of, ever.

You are now doomed to check workflows, popular dashboards, etc. and hope you catch them all. And inevitably you will miss one, and that user is going to complain that they “Should have been notified about the change!*”, even though they used it in a private filter that you cannot see.

*technically, you should notify on all changes you plan to make in production, but this doesn’t always happen.

And this problem isn't limited to custom fields, though that's the example I chose. It could be your boss asking, "How many people are really using this expensive plugin?" come renewal time. Or making sure a status name change you are about to make will only impact the people you intend. Or seeing which workflows use a screen you need to change.

Finally, a solution

For the longest time, my best answer was to do about a half-day's to a full day's research and try to find everywhere it was being used. In a perfect world all of this would be documented, but that doesn't always happen either. Sometimes we just get too busy doing to write up what we are doing.

Thankfully this isn’t an uncommon problem, and we finally have a tool in our tool-belt to find an answer quickly: Power Admin by Botron.

As I said, I found out about this when Peter Toudjarski from Botron came to speak at an Atlassian ACE event in Atlanta last month. You really should consider going to your local ACE Events if you can. I’ve been doing the admin thing for years and I still manage to learn something from my colleagues each time I go.

That aside, this app blew my mind when I saw what it can do. It solves a gripe I didn't even realize I had. Let's take the above example: finding out where a custom field is used. I'll go to my test instance and look up "Company".

I really should remove that terrible theme…it’s after the New Years already!

From the quick search, I can see that it is used in one project. So what? Now watch as I click on “Company” in the search results.

Boom.

There you go. Every board, dashboard, and screen that custom field appears on. It will even catch workflows where applicable! That afternoon it would take previously? Now you can answer that question while you are still in the meeting!

And this isn’t limited to custom fields!

Now, this isn't foolproof. It's still up to you to interpret the result. For example, here are my results for Apps:

So I can remove all these plugins because they are used in no projects, right? Eh… no. But this is far more insight than we were able to get previously.

What I wish it did better…

Now, I don’t have to tell you I’m already in love with this app. However, that is not to say it is perfect.

In custom fields, for example, I'd love for it to tell me what percentage of issues in the projects where a field is used actually have it populated. Bonus points if it can give me this as a report across all fields in JIRA.

Another area is Apps. Yeah, I actually had to defend this app to a co-worker who joked that because the Atlassian Troubleshooting and Support Tools app was unused in all projects, it was a candidate for removal. I think every experienced JIRA Admin knows this is a joke, but I can see how a new JIRA Admin could see that result and legitimately think so.

And one last note: this is not available for JIRA Cloud. It relies on some low-level JIRA calls that don't exist in the Atlassian cloud products. According to Peter, they are looking at some options now that Atlassian has added Forge to Cloud, which may give them this kind of access, but they don't want to rush something out the door before they know it's good.

But that being said, for the price, this is invaluable knowledge on Server and Data Center instances.

About that price thing….

This is the part where you are expecting me to tell you it’s expensive. I mean, any JIRA Admin who’s been at this more than a year knows Apps can get expensive fast.

Well…..about that…

As of the time of this writing….

It's free. Yes, free as in beer. You can add this to your JIRA Server or Data Center instance today at no cost. I don't know if or when it will become paid, so I'd scoop it up while you can.

Okay, you can hate me now.

I deserve that one. But it’s rare we get an app of this quality that is free, so I had to.

So, what do you think of this app? Personally, I think it deserves more attention, hence why I’m writing this post about it. As I’ve stated previously, this just saves so much time.

And now, for something else new!

As you may remember, I really started posting regularly to this blog after losing my job. If I'm being honest, I had just ended a rough week in the job hunt and was looking to prove I still had "it", both to myself and to potential employers.

Considering that was the blog's "real" start, I've been thinking of a way to give back to others who may be in the same situation I was in. That was the frame of mind I was in when I was contacted by someone asking if I knew of any remote Atlassian Administrator opportunities. So why not feature them on the blog!

On that note, I'd like to introduce you to John Fry, who is looking for a remote Atlassian Administrator role. He has been working as an administrator of Atlassian tooling since 2016, with a further five years' experience in system maintenance and administration. He is ACP certified, holding the ACP-JA (ACP-100) and ACP-JPA (ACP-600). He also has experience with the following supporting architectures:

  • Linux
  • Windows Server
  • AWS EC2
  • VMware vSphere
  • Ansible
  • MySQL
  • PostgreSQL

For a full listing of his qualifications, I’d refer you to his resume below:

And if he looks like someone you’d be interested in having on your team, please reach out to him on LinkedIn. I know he’d be happy to speak with you!

I don’t think at this time I’ll do a job hunter feature in every post, but it’s something that I’d like to do every now and again. Just another way to give back to the wonderful Atlassian Community!

And speaking of Communities

So, last week I happened upon a post in the JIRA subreddit asking about an Atlassian chat: someplace people can go to chat or ask questions that's not a threaded forum like Atlassian Community. Someone posted a Slack link, but no way to get into the Slack, and I found a Discord server. However, the Discord option appeared to be all but dead. No, literally: the only admin had not logged on in so long that his account was deleted.

So, I did what any sane* person does, and started my own! Come hang out, ask questions, talk with other Atlassian admins, and see what's up!

https://discord.gg/mXuRsVu

But until next time, my name is Rodney, asking “Have you updated your JIRA Issues today?”

*may be using a loose definition