Month: February 2019


Speedtest Data in Grafana

Green is Download (RX); Yellow is Upload (TX).

You’ll first need to create the database that the speedtest container will write its data to. If you followed my Grafana guide, use the following commands to create the database and the needed permissions. Make sure to replace the <> placeholders with your own info.

curl -XPOST "http://<ip.of.influx.db>:8086/query" -u admin:password --data-urlencode "q=CREATE DATABASE speedtest"

curl -XPOST "http://<ip.of.influx.db>:8086/query" -u admin:password --data-urlencode "q=CREATE USER speedtest WITH PASSWORD 'speedtest'"

curl -XPOST "http://<ip.of.influx.db>:8086/query" -u admin:password --data-urlencode "q=GRANT WRITE ON speedtest TO 'speedtest'"

curl -XPOST "http://<ip.of.influx.db>:8086/query" -u admin:password --data-urlencode "q=GRANT READ ON speedtest TO grafana"
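If you want to sanity-check that the database and grants took, standard InfluxQL works here too:

curl -G "http://<ip.of.influx.db>:8086/query" -u admin:password --data-urlencode "q=SHOW DATABASES"

curl -G "http://<ip.of.influx.db>:8086/query" -u admin:password --data-urlencode "q=SHOW GRANTS FOR speedtest"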

Now, to save a few words, I am just going to link the guide for the Speedtest-for-InfluxDB-and-Grafana Docker container here. It’s pretty straightforward and easy to follow, so go get the container running against your new database and then come back!


Now that you have the speedtest container running in Docker and sending data to InfluxDB (I recommend changing delay = 300 to something less frequent so it’s not testing every 5 minutes), we can set up the graph to plot the data.
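For reference, the container's config.ini looks something like this (key names per the repo at the time of writing, so double-check your copy); here the delay is bumped to 30 minutes:

[GENERAL]
# Seconds between tests; the default of 300 runs a test every 5 minutes
Delay = 1800

[INFLUXDB]
Address = <ip.of.influx.db>
Port = 8086
Database = speedtest
Username = speedtest
Password = speedtest

[SPEEDTEST]
# Leave Server blank for auto, or set a server ID (see the link below)
Server =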

You can use this site to get speedtest server IDs if you wish to specify them instead of using “Auto.”

Navigate to Grafana, login, go to the add data sources view and click “add data source”.

Setting     Value
Name        Speedtest Data
URL         http://<ip.of.influx.db>:8086
User        grafana
Password    grafana
Database    speedtest

Now head to your dashboard and add a new graph. Under Metrics:

SELECT mean("download") FROM "speed_test_results" WHERE time >= now() - 30d GROUP BY time(1h) fill(linear)
SELECT mean("upload") FROM "speed_test_results" WHERE time >= now() - 30d GROUP BY time(1h) fill(linear)

Axes tab:

Setting            Value
Left Y – Unit      bits/sec
Left Y – Decimals  1

Edit the rest of the settings to your liking and then click the back arrow. Give it time to update (i.e. allow a speedtest to run) and you should see your first mark on the graph.


Edit 4/29/19: Updated to match new Grafana guide settings.

IPMI Monitoring via Telegraf

Telegraf supports IPMI inputs for monitoring via ipmitool. This will only work if your server supports the Intelligent Platform Management Interface, aka IPMI. To check if your server supports it, you can either look up your server’s documentation or take a look in the UEFI/BIOS for IPMI settings. Usually you have to enable it, as it’s not enabled by default.

Getting started

Note: If you followed my guide, Grafana – Start from Scratch, you should already have IPMItool installed with Telegraf in Docker! If so you can skip right to the configuration section!

First you will need to download and install ipmitool. I am running this on a 2-core, 2GB Ubuntu Server 17.10 VM along with a Telegraf install with [[inputs.ipmi_sensor]] enabled.

Installing IPMItool

sudo apt-get install ipmitool -y

To check that it installed correctly, you can run: ipmitool -H IP.OF.SERVER.HERE -U username -P password sensor

Configuring the IPMI Input

Install Telegraf (if you haven't already) and create a new input config file for IPMI.

nano /etc/telegraf/telegraf.d/ipmi-input.conf

Paste the following into the new file and edit the servers and metric_version entries to match your setup.

Note: you can have multiple IPMI inputs. Just copy the whole block and paste it once for each additional server you want to monitor, or list several servers in one block as sketched below.

[[inputs.ipmi_sensor]]
  path = "/usr/bin/ipmitool" # default install location of ipmitool
  servers = ["USERNAME:PASSWORD@lan(IP.OF.IPMI.SERVER)"]
  interval = "30s"
  timeout = "20s"
  metric_version = 2 # set to the metric version your server supports, usually 1 or 2
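For example, polling two hosts that share credentials looks like this (IPs and creds are placeholders):

[[inputs.ipmi_sensor]]
  path = "/usr/bin/ipmitool"
  # One entry per IPMI interface to poll
  servers = [
    "admin:secret@lan(10.0.0.50)",
    "admin:secret@lan(10.0.0.51)"
  ]
  interval = "30s"
  timeout = "20s"
  metric_version = 2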

Save and close ipmi-input.conf and start telegraf.

sudo systemctl start telegraf.service

Adding IPMI to Grafana

Now, I am going to assume you already have Telegraf reporting to InfluxDB, with an InfluxDB Telegraf data source already added to Grafana. If not, go check out the Telegraf install guide(s).

Add a singlestat panel to your dashboard with the following info under Metrics:

FROM default ipmi_sensor WHERE server = IP.OF.IPMI.SERVER AND name = cpu1_temp SELECT field(value) mean() GROUP BY time(30s) fill(null)

Now, the problem with IPMI is that every machine reports its values differently, so one server may have it as cpu_1_temp_C and another may have it as proc1_temp_C. You’ll have to play with your queries to get the right values.
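To see which sensor names your hardware actually reports, you can pull the tag values straight out of InfluxDB (this assumes the default telegraf database name and a user that can read it; adjust both to your setup):

curl -G "http://<ip.of.influx.db>:8086/query?db=telegraf" -u grafana:grafana --data-urlencode 'q=SHOW TAG VALUES FROM "ipmi_sensor" WITH KEY = "name"'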

Under the Options tab, set Unit to Temperature > Celsius (°C).

You should now have a singlestat panel that displays the current CPU temp every 30s. You can speed up the polling rate by editing the interval = "30s" value in your IPMI input config and changing time(30s) in the query to match.

Pi-Hole Stats in Grafana

If you do not know what Pi-hole is, I definitely recommend you look into it, especially if you want to block ads/telemetry on all your home network devices.

Now, there are probably a few ways to do this, but for my dashboard I ended up using a script to send Pi-hole stats to InfluxDB. I currently have Pi-hole running on a VM under Ubuntu Server 17.10.

This script can be run remotely but you will need to use the authentication method described in the repo.

First we need to install python-pip. So SSH to your Pi-Hole server and run:

sudo apt-get install python-pip -y

Now create a new directory for the script to live in.

mkdir /opt/pihole-influx

Clone the pi-hole-influx repo.

git clone https://github.com/janw/pi-hole-influx.git /opt/pihole-influx

Once that finishes, cd to /opt/pihole-influx and run:

pip install -r requirements.txt

Now copy config.example.ini to config.ini.

cp config.example.ini config.ini

Edit the config.ini file to match your environment.

nano config.ini
[InfluxDB]
port = 8086
hostname = <ip.of.influx.db>
username = pihole
password = pihole
database = pihole

[pihole]
api_location = <address of the /admin/api.php of your pi-hole instance>
instance_name = <hostname>

You can scrape multiple Pi-hole instances if you run more than one by adding a second config block called [pihole_2], etc. (see the sketch below). I’d recommend using a Docker container if you plan to use more than one Pi-hole instance.
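For example, a config.ini scraping two instances might look like this (the IPs and instance names are placeholders):

[pihole]
api_location = http://10.0.0.2/admin/api.php
instance_name = pihole-primary

[pihole_2]
api_location = http://10.0.0.3/admin/api.php
instance_name = pihole-secondary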

Save and close the config.ini file.

Lastly, you need to create the InfluxDB database that your Pi-hole stats will reside in. The name will need to match the database = value in your config.ini.

curl -XPOST "http://<ip.of.influx.db>:8086/query?u=<admin user>&p=<password>" --data-urlencode "q=CREATE DATABASE pihole"

curl -XPOST "http://<ip.of.influx.db>:8086/query?u=<admin user>&p=<password>" --data-urlencode "q=CREATE USER pihole WITH PASSWORD 'pihole'"

curl -XPOST "http://<ip.of.influx.db>:8086/query?u=<admin user>&p=<password>" --data-urlencode "q=GRANT WRITE ON pihole TO pihole"

curl -XPOST "http://<ip.of.influx.db>:8086/query?u=<admin user>&p=<password>" --data-urlencode "q=GRANT READ ON pihole TO grafana"

Now launch piholeinflux.py.

./piholeinflux.py

Running piholeinflux.py as a service

You can set up piholeinflux.py as a systemd service so that it will launch automatically at boot if you ever have to reboot your server.

Create the piholeinflux.service file inside /opt/pihole-influx (the repo may already ship one you can edit).

nano /opt/pihole-influx/piholeinflux.service

Paste the following into the .service file, then edit User= and the script path to match your setup:

[Unit]
Description=Pi-hole stats reporter for InfluxDB

[Service]
# Use an account that can run the script (e.g. root or your user)
User=pi
ExecStart=/usr/bin/python /opt/pihole-influx/piholeinflux.py

[Install]
WantedBy=multi-user.target

Save and close the .service file. Then run the following in order:

sudo ln -s /opt/pihole-influx/piholeinflux.service /etc/systemd/system
sudo systemctl daemon-reload
sudo systemctl enable piholeinflux.service
sudo systemctl start piholeinflux.service

If you get an error while running it, make sure that (a) the script can communicate with InfluxDB and (b) the User= in the .service file is set to a user that can run it (i.e. root or you).
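If it still won't start, systemd's own tools will usually tell you why:

sudo systemctl status piholeinflux.service
sudo journalctl -u piholeinflux.service -e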

Setting up the Grafana Dashboard

Pi-Hole Data Source

  1. Select the cog on the left hand side and click “data sources”
  2. Click “add data source”
  3. Click InfluxDB
  4. Enter the following information and hit “Save & Test”

If you get an error, double check your connection info for typos!

Setting     Value
Name        pi-hole
URL         http://<ip.of.influx.db>:8086
Database    pihole
User        grafana
Password    grafana

Dashboard Setup

Now you can import a basic dashboard using the ID from the script’s repo. This will give you some basic info from your Pi-hole data source.

Dashboard ID: 6603

HOWEVER, there are a few issues with it. You will want to edit the Realtime Queries and either wrap them in non_negative_derivative or add math(* -1) to each of the queries under “Metrics”, so the Y axis has no negative values.
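For instance, a fixed realtime query could look roughly like this. The measurement and field names here are assumptions; match them to whatever the script actually writes into your pihole database:

SELECT non_negative_derivative(mean("dns_queries_today"), 10s) FROM "pihole" WHERE $timeFilter GROUP BY time(10s) fill(null)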

Docker Container Version

You can deploy this script using the Dockerfile and config.ini below.

# Stage 1: grab the script from GitHub
FROM alpine as builder
RUN apk add --no-cache git
WORKDIR /app
RUN git clone https://github.com/janw/pi-hole-influx.git

# Stage 2: slim Python runtime with just the script and its deps
FROM python:3-alpine
WORKDIR /usr/src/app

COPY --from=builder /app/pi-hole-influx/requirements.txt /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt
COPY --from=builder /app/pi-hole-influx/piholeinflux.py /usr/src/app
COPY config.ini .

CMD [ "python", "./piholeinflux.py" ]
And the matching config.ini:

[influxdb]

port = 8086
hostname = 10.9.9.120
username = pihole
password = allthosesweetstatistics
database = pihole

# Time between reports to InfluxDB (in seconds)
reporting_interval = 10


[pihole]
api_location = http://10.9.9.120/admin/api.php
instance_name = pihole
timeout = 10

Save both of the above blocks as Dockerfile and config.ini in the same folder, then build the image:

docker build -t your-name/of-image .
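Once the build finishes, run it like any other container (the image tag is whatever you picked above):

docker run -d --restart unless-stopped --name pihole-influx your-name/of-image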

Thanks to my co-worker for throwing together this easy, lightweight Docker container version. And since it doesn’t require mounting a volume, it should work in swarm mode!


Edit 4/29/19: Updated to match new Grafana guide settings.

Transmission Metrics in Grafana

If you use Transmission as your download client, you can use a metrics exporter for Prometheus to get stats into Grafana. You’ll also want to make sure Prometheus is set up and running, or else this won’t work!

SSH to your docker host that is running Prometheus and edit the prometheus.yml configuration file by adding the following:

scrape_configs:
  - job_name: 'transmission'
    scrape_interval: 10s
    static_configs:
      - targets: ['<ip.of.transmission.exporter>:19091']

Save and close prometheus.yml. Now create the Transmission-Exporter container:

docker run -d -p 19091:19091 -e TRANSMISSION_ADDR=http://<ip.of.transmission.client>:<port> metalmatze/transmission-exporter
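If your Transmission install has authentication enabled, the exporter takes credentials as environment variables as well. A sketch, assuming Transmission's default web port of 9091:

docker run -d -p 19091:19091 \
  -e TRANSMISSION_ADDR=http://<ip.of.transmission.client>:9091 \
  -e TRANSMISSION_USERNAME=username \
  -e TRANSMISSION_PASSWORD=password \
  metalmatze/transmission-exporter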

You’ll need to change <ip.of.transmission.client>:<port> to the ip:port of your Transmission install. If you have authentication enabled, specify -e TRANSMISSION_USERNAME=username and -e TRANSMISSION_PASSWORD=password before metalmatze/transmission-exporter, as sketched above. Restart your Prometheus container so it picks up the new config. You should then be able to navigate to the Prometheus web UI and run the following query:

transmission_session_stats_downloaded_bytes{type="cumulative"}

You should see the cumulative downloaded-bytes counter returned with its current value.

Now open Grafana and under Data Sources add a new Prometheus source:

Setting       Value
Name          Prometheus
URL           http://ip-of-prometheus:9090
HTTP Method   GET

Click Save & Test. If you get an error make sure Grafana can reach Prometheus.

Adding Transmission Stats to your Dashboard

Navigate to a Dashboard you wish to add Transmission stats to and add a new singlestat panel. Under Metrics add the following query:

transmission_session_stats_downloaded_bytes{type="cumulative"}

And then under the Options tab, select “Average” for Stat and Data (Metric) > bytes for Unit.
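If you'd rather see live throughput than a lifetime total, a standard PromQL rate over the counter works as a graph query instead:

rate(transmission_session_stats_downloaded_bytes{type="cumulative"}[5m])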

If you want to see all of the metrics that are exported, navigate to http://ip-of-transmission-exporter:19091/metrics. It will display every exported metric and the current value it would return. You can also see them by typing “transmission” into the query box in the Prometheus web UI.



Edit 4/29/19: Updated to match new Grafana guide settings.

Monitoring your Plex Media Server with Varken

Before we get started, you should go buy the guys over at Varken a coffee and star their repo. Varken makes the data collection of Plex server stats via companion apps stupid easy.

For this to work you will need at least one of the following services configured and running:

Sonarr, Radarr, Lidarr, Tautulli, or Ombi

If you followed my guide, here, then deploying Varken into your InfluxDB Docker network setup should be rather easy. First SSH to your docker host. Then create the config directory and copy down my compose file for Varken.

mkdir /opt/containers/varken && curl https://gist.githubusercontent.com/alexandzors/39760b9e742d6b9b28a0164af8648ac8/raw/091f7b35ad2e44fdaa46f7c410d97e2292f9905c/varken-compose.yml -o /opt/containers/varken/varken-compose.yml

Now we need to edit the compose file and enter the required info for our services. (You can mount a pre-defined config file to Varken if you do not want to use environment variables! Also the official Varken compose file can be found here, which includes Grafana and InfluxDB.)
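For the config-file route, mounting your own varken.ini into the container looks something like this in the compose file (the /config path is where the official Varken image expects it; verify against the Varken docs):

    volumes:
      - /opt/containers/varken/config:/config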

nano /opt/containers/varken/varken-compose.yml

Change the volumes section to match your setup. Then edit the following entries under the “environment” section:

Setting                           Value
TZ                                your timezone
VRKN_GLOBAL_MAXMIND_LICENSE_KEY   MaxMind license key for the GeoIP DB
VRKN_INFLUXDB_URL                 influx
VRKN_INFLUXDB_USERNAME            varken
VRKN_INFLUXDB_PASSWORD            'password'
VRKN_TAUTULLI_1_URL               URL of your Tautulli install
VRKN_TAUTULLI_1_APIKEY            Tautulli API key
VRKN_SONARR_1_URL                 URL of your Sonarr install
VRKN_SONARR_1_APIKEY              Sonarr API key
VRKN_RADARR_1_URL                 URL of your Radarr install
VRKN_RADARR_1_APIKEY              Radarr API key
VRKN_OMBI_1_URL                   URL of your Ombi install
VRKN_OMBI_1_APIKEY                Ombi API key

Save and close the file (CTRL+X, then Y). Once the file is closed, we can go ahead and create the varken database and varken user in InfluxDB. (-u root:password is your InfluxDB root/admin user!)

curl -XPOST http://localhost:8086/query -u root:password --data-urlencode "q=CREATE DATABASE varken"

Create the user for Varken using the password you specified in the config earlier, then assign the user permissions:

curl -XPOST http://localhost:8086/query -u root:password --data-urlencode "q=CREATE USER varken WITH PASSWORD 'password'"
curl -XPOST http://localhost:8086/query -u root:password --data-urlencode "q=GRANT WRITE ON varken TO varken"
curl -XPOST http://localhost:8086/query -u root:password --data-urlencode "q=GRANT READ ON varken TO grafana"

Deploy Varken:

docker-compose -f /opt/containers/varken/varken-compose.yml up -d

If these steps stop working, check out the Varken wiki for up-to-date installation instructions as well as other general support info.

Importing the Varken dashboard

First we need to setup the data source for Varken. Navigate to Grafana, login, and then go to your data sources configuration page. Click Add data source and then click InfluxDB. Input the following:

  • Name: InfluxDB [Varken]
  • URL: http://influxdb_influx_1:8086
  • Database: varken
  • User: grafana
  • Password: ‘grafana user password’

The Grafana user info was defined if you followed my guide; if you did not, you can substitute the varken user credentials instead.

Click Save & Test. Then click + > Import. In the Grafana.com Dashboard field, enter ID 9585.

Fill out the info to match your setup and make sure you select varken in the varken dropdown box. Once done, click Import.

Varken does not backfill history! Data is only logged from the moment Varken first starts, which may cause some of your panels to appear blank (Device Types, etc). Fire up a quick Plex stream and they should populate. If they do not, check your container log for errors: docker logs -f containername.

Optional

Here are a few panel JSONs for extra panels you can add to your Varken dashboard.

      {
        "aliasColors": {
          "": "#b7dbab",
          "AAC": "#f2c96d",
          "AC3": "#70dbed",
          "DCA": "#f2c96d",
          "OPUS": "#f29191"
        },
        "breakPoint": "50%",
        "cacheTimeout": null,
        "combine": {
          "label": "Other",
          "threshold": ".04"
        },
        "datasource": "InfluxDB [Varken]",
        "decimals": 0,
        "fontSize": "110%",
        "format": "none",
        "gridPos": {
          "h": 8,
          "w": 4,
          "x": 12,
          "y": 4
        },
        "hideTimeOverride": true,
        "id": 59,
        "interval": null,
        "legend": {
          "percentage": true,
          "percentageDecimals": 0,
          "show": true,
          "sort": "total",
          "sortDesc": true,
          "values": false
        },
        "legendType": "On graph",
        "links": [],
        "maxDataPoints": 3,
        "nullPointMode": "connected",
        "options": {},
        "pieType": "donut",
        "pluginVersion": "6.5.2",
        "strokeWidth": "1",
        "targets": [
          {
            "alias": "$tag_stream_audio_codec",
            "groupBy": [
              {
                "params": [
                  "stream_audio_codec"
                ],
                "type": "tag"
              }
            ],
            "measurement": "Tautulli",
            "orderByTime": "ASC",
            "policy": "default",
            "refId": "A",
            "resultFormat": "time_series",
            "select": [
              [
                {
                  "params": [
                    "hash"
                  ],
                  "type": "field"
                },
                {
                  "params": [],
                  "type": "distinct"
                },
                {
                  "params": [],
                  "type": "count"
                }
              ]
            ],
            "tags": [
              {
                "key": "server",
                "operator": "=",
                "value": "1"
              },
              {
                "condition": "AND",
                "key": "type",
                "operator": "=",
                "value": "Session"
              }
            ]
          }
        ],
        "timeFrom": "2w",
        "timeShift": null,
        "title": "Stream Audio Codec",
        "type": "grafana-piechart-panel",
        "valueName": "total"
      },
      {
        "aliasColors": {
          "0.7 Mbps 328p": "#9ac48a",
          "1.5 Mbps 480p": "#f2c96d",
          "2 Mbps 720p": "#b7dbab",
          "3 Mbps 720p": "#f2c96d",
          "4 Mbps 720p": "#f29191",
          "Original": "#70dbed"
        },
        "breakPoint": "50%",
        "cacheTimeout": null,
        "combine": {
          "label": "Other",
          "threshold": ".04"
        },
        "datasource": "InfluxDB [Varken]",
        "decimals": 0,
        "fontSize": "110%",
        "format": "none",
        "gridPos": {
          "h": 8,
          "w": 4,
          "x": 8,
          "y": 4
        },
        "hideTimeOverride": true,
        "id": 69,
        "interval": null,
        "legend": {
          "percentage": true,
          "percentageDecimals": 0,
          "show": true,
          "sort": "total",
          "sortDesc": true,
          "values": false
        },
        "legendType": "On graph",
        "links": [],
        "maxDataPoints": 3,
        "nullPointMode": "connected",
        "options": {},
        "pieType": "donut",
        "pluginVersion": "6.5.2",
        "strokeWidth": "1",
        "targets": [
          {
            "alias": "$tag_quality_profile",
            "groupBy": [
              {
                "params": [
                  "quality_profile"
                ],
                "type": "tag"
              }
            ],
            "measurement": "Tautulli",
            "orderByTime": "ASC",
            "policy": "default",
            "refId": "A",
            "resultFormat": "time_series",
            "select": [
              [
                {
                  "params": [
                    "hash"
                  ],
                  "type": "field"
                },
                {
                  "params": [],
                  "type": "distinct"
                },
                {
                  "params": [],
                  "type": "count"
                }
              ]
            ],
            "tags": [
              {
                "key": "server",
                "operator": "=",
                "value": "1"
              },
              {
                "condition": "AND",
                "key": "type",
                "operator": "=",
                "value": "Session"
              }
            ]
          }
        ],
        "timeFrom": "2w",
        "title": "Stream Quality Profile",
        "type": "grafana-piechart-panel",
        "valueName": "total"
      },
