Month: March 2019

Deluge -> InfluxDB

Ever since I got my remote seedbox set up from seedboxes.cc, I have been trying to figure out the best way to get Deluge stats to show up in my Grafana stack.

I first tried a Deluge exporter for Prometheus, but it didn't seem to work since it required access to Deluge's config directory in order to export the stats. Really dumb, but it's whatever. I then came across an InfluxDB script that sent Deluge stats to Influx. However, that did not work either, as the /json endpoint apparently used a self-signed certificate and the script errored because of that.

BUT, I did get that script to work! I had to use a Deluge "thin client" to connect to the remote seedbox and basically mirror the data locally. This was done by running a Deluge container in Docker and using the connection preferences to connect to Cerberus (my remote seedbox).
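If you want to replicate that thin client setup, something along these lines gets a local Deluge container running whose web UI you can then point at the remote daemon via the Connection Manager. This is only a rough sketch using the linuxserver/deluge image; the paths, IDs, timezone, and port mapping are assumptions, so adjust them to your environment.

# BASH [ LINUX VM ] -- rough sketch, adjust to your setup
docker run -d \
  --name=deluge-thin \
  -e PUID=1000 -e PGID=1000 \
  -e TZ=America/New_York \
  -p 8112:8112 \
  -v /opt/containers/deluge:/config \
  --restart unless-stopped \
  linuxserver/deluge

Once it's up, open the web UI on port 8112 and use the Connection Manager to add the host, port, and credentials of the seedbox's daemon.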

W.I.P. Deluge Influx dashboard.

A quick note here: this dashboard is currently very much a WIP as I learn what data is what and how to properly visualize it in Grafana.

What you will need to set this up

First, make sure you have Docker installed and set up (preferably on a Linux host). Then make sure your Deluge client is set up and configured properly for hosting Linux ISO downloads, etc.
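If Docker is not installed yet, Docker's convenience script is the quickest route on most distros (have a look at the script first if piping it to a shell bothers you):

# BASH [ LINUX VM ]
curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh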

Next, create the deluge database and user, and assign the appropriate permissions. If you do not have a Grafana/InfluxDB stack going, see my guide here.

curl -XPOST "http://ip.of.influx.db:8086/query" -u admin:password --data-urlencode "q=CREATE DATABASE deluge"

curl -XPOST "http://ip.of.influx.db:8086/query" -u admin:password --data-urlencode "q=CREATE USER deluge WITH PASSWORD 'deluge'"

curl -XPOST "http://ip.of.influx.db:8086/query" -u admin:password --data-urlencode "q=GRANT WRITE ON deluge TO deluge"

curl -XPOST "http://ip.of.influx.db:8086/query" -u admin:password --data-urlencode "q=GRANT READ ON deluge TO grafana"
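You can sanity-check that everything was created before moving on:

curl -XPOST "http://ip.of.influx.db:8086/query" -u admin:password --data-urlencode "q=SHOW DATABASES"

curl -XPOST "http://ip.of.influx.db:8086/query" -u admin:password --data-urlencode "q=SHOW USERS"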

Create an exporters folder inside your InfluxDB directory.

# BASH [ LINUX VM ]
mkdir -p /opt/containers/influxdb/exporters/deluge

Download the following compose file to your Docker host and then edit it to match your setup.

# BASH [ LINUX VM ]
curl https://bin.alexsguardian.net/raw/deluge2influx -o /opt/containers/influxdb/exporters/deluge/deluge2influx-compose.yml
# BASH [ LINUX VM ]
nano /opt/containers/influxdb/exporters/deluge/deluge2influx-compose.yml

When you finish editing, hit CTRL+X then Y to save and close the file.

Now start up the container.

# BASH [ LINUX VM ]
docker-compose -f /opt/containers/influxdb/exporters/deluge/deluge2influx-compose.yml up -d
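Give it a minute, then make sure the exporter is actually up and shipping data:

# BASH [ LINUX VM ]
docker-compose -f /opt/containers/influxdb/exporters/deluge/deluge2influx-compose.yml ps
docker-compose -f /opt/containers/influxdb/exporters/deluge/deluge2influx-compose.yml logs -f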

Create a new dashboard in Grafana and import this .json file. Note that this dashboard expects the data source in Grafana to be called “deluge”.
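If you would rather script the data source than click through the Grafana UI, the data source API can create it for you. This is just a sketch; swap in your own Grafana URL, admin credentials, and the Influx user/password you created earlier.

# BASH [ LINUX VM ]
curl -XPOST "http://ip.of.grafana:3000/api/datasources" \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
        "name": "deluge",
        "type": "influxdb",
        "access": "proxy",
        "url": "http://ip.of.influx.db:8086",
        "database": "deluge",
        "user": "grafana",
        "password": "grafana-password"
      }'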

HDD Failure Update

Ok, so a few weeks ago I suffered a partial HDD failure. Basically, the HDD hosting my two Docker VMs started producing a ton of bad sectors, which caused partial corruption on my VMs. This in turn caused issues with containers that read persistent data volumes. On a good note, this HDD was a WD Blue 500GB that was almost 7 years old before it started having issues.

Due to this failure, my entire Grafana metrics stack got corrupted, forcing me to start over from scratch. Which is what I have now done:

Main Overview

So instead of having a single, super massive dashboard, I decided to create an overview dashboard that just houses alert panels pulling alerts from other dashboards. This gives me the ability to see what's going on at a glance. I can then click each alert to take me to its respective panel.

This is far from done. I plan on expanding my Hyper-V stats as well as my System Alerts. I also need to add my remote seedbox stats and monitoring for my Virtualized AD network and game servers.

Caddy in Docker with Cloudflare DNS

EDIT 5/26/2019: As of Caddy 1.0, my image is currently broken due to incompatibilities between the plugins and the new version. Once the plugins are updated, my image will be good to go again.

EDIT 7/15/2019: My docker image of Caddy is building properly again!

So I’ve been using Caddy for a while as my web server/reverse proxy. Basically it sits in front of all of my services and redirects/protects my stuff.

Now I have been building a custom image off of abiosoft/caddy-docker with a custom set of plugins… manually… That changes today!

WOOT! What you are seeing here is a screenshot of Docker Cloud's automated image building. Basically, I created a repo on GitHub that houses a Dockerfile (at the moment it's a custom temp one because Docker Cloud isn't properly passing env variables). When the master branch is pushed, it remotely triggers a build on Docker Hub using that Dockerfile.

alexandzors/caddy on GitHub: my custom abiosoft/caddy-docker image.

So why all this, you may ask? Well, I really wanted to get a wildcard Let's Encrypt SSL cert via Caddy. The best way to do that is to use a supported DNS provider plugin with the following tls directive in a Caddyfile:

tls {
 dns cloudflare
}

I can then import that wildcard cert into all of my subdomain directives. This makes it easier to manage, as Caddy gets one cert instead of a cert for each subdomain plus the root domain.
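One way to reuse that tls config across subdomain blocks is Caddy's snippet/import feature. Below is a minimal sketch assuming Caddy v1 snippet syntax; example.com, the subdomain, the Caddyfile path, and the backend address are all placeholders.

# BASH [ LINUX VM ] -- appends a hypothetical example to your Caddyfile
cat >> /path/to/Caddyfile <<'EOF'
# Reusable snippet holding the Cloudflare DNS challenge config
(cf_tls) {
  tls {
    dns cloudflare
  }
}

grafana.example.com {
  import cf_tls
  proxy / 127.0.0.1:3000
}
EOF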

This custom image also has a few extra plugins besides the Cloudflare one, so it's even better!

If you want to try it out, you can run sudo docker pull alexandzors/caddy. Then deploy it with the following compose file:

version: '3.6'
services:
  caddy:
    deploy:
      replicas: 1
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-I", "http://localhost"]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s
    image: alexandzors/caddy
    ports:
      - "81:80"
      - "444:443"
      - "2015:2015"
    volumes:
      - /path/to/Caddyfile:/etc/Caddyfile
      - /path/to/caddy/dir:/root/.caddy
      - /path/to/sites/root:/srv
    environment:
      - ACME_AGREE=true
      - CLOUDFLARE_EMAIL=<CF_EMAIL>
      - CLOUDFLARE_API_KEY=<CF_API_KEY>
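Then bring it up. The deploy: block only applies to swarm mode, so depending on how you run things either of these should do it (the file path and stack name are just examples):

# BASH [ LINUX VM ]
docker-compose -f /path/to/caddy-compose.yml up -d
# or, on a swarm:
docker stack deploy -c /path/to/caddy-compose.yml caddy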

My Caddy docker container is a custom version of abiosoft/caddy.

abiosoft/caddy-docker on GitHub: Docker container for Caddy.

Custom TIGP Stack for Docker


This is outdated, please use the compose file here.


My TIGP stack consists of the following services:

  • Grafana
  • InfluxDB
  • Telegraf
  • Varken
  • Chronograf
  • Kapacitor
  • Prometheus
  • Transmission-Exporter

More services will probably be added later, or at least a second stack will be created with additional Prometheus exporters for more services.

Here is the docker-compose.yml file you can use to deploy your own TIGP stack.

You may want to run the following shell script before deploying your TIGP stack, as it creates the persistent directories in /opt/ for each service that needs them.

#!/bin/bash

# Create persistent data directories for each service that needs one
mkdir -p /opt/grafana/conf
chmod 777 /opt/grafana
mkdir -p /opt/influx
chmod 777 /opt/influx
mkdir -p /opt/varken /opt/chronograf /opt/telegraf /opt/prometheus

# Copy service configs into place (adjust the source path to wherever yours live)
cp /home/alexander/cfg_files/grafana.ini /opt/grafana/conf
cp /home/alexander/cfg_files/varken.ini /opt/varken
cp /home/alexander/cfg_files/telegraf.conf /opt/telegraf
cp /home/alexander/cfg_files/prometheus.yml /opt/prometheus

# Bridge network that the TIGP services attach to
docker network create --driver=bridge influx

You may want to change /home/alexander/cfg_files to a directory that houses your config files for Varken, Telegraf, Prometheus, and Grafana, or remove that section completely and just create the files yourself inside each service's directory.

The script also creates the influx Docker network for the TIGP services to run under. Since it is set to "bridge", you'll be able to access each service that has a port published in Docker via the host's IP:PORT.
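A couple of quick checks once everything is up (this assumes Grafana publishes its default port 3000; adjust if your compose file maps it differently):

# BASH [ LINUX VM ]
docker network inspect influx --format '{{range .Containers}}{{.Name}} {{end}}'
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
curl -I http://localhost:3000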

Don't worry too much about the "unhealthy" status. I'm still working out the kinks in the health checks for some of the services.

Forcing devices to use Pi-Hole

Currently I have a few "smart home" devices, and some of them have hard-coded DNS servers they use for DNS queries. Most notably, my Google devices…

Anyway, I went ahead and set up two NAT rewrite rules on my ER-Lite that force all devices to go through my Pi-Hole DNS server for DNS queries. These rules fake a connection to the hard-coded DNS servers while the queries actually go to my Pi-Hole server.

If you want to set up your own rewrite rules to do this, you can do so by running these configuration commands from your ER's CLI. Make sure to swap out IP.OF.PIHOLE.SERVER with the IP of your Pi-Hole server, ETH-INTERFACE-HERE with the ethernet interface you wish to apply the rule to, and YOUR.SUBNET-RANGE.HERE with your subnet range (e.g. 10.9.9.2-10.9.9.254).

configure
set service nat rule 1 description 'Pi-Hole DNS'
set service nat rule 1 destination address '!IP.OF.PIHOLE.SERVER'
set service nat rule 1 destination port 53
set service nat rule 1 inbound-interface ETH-INTERFACE-HERE
set service nat rule 1 inside-address address IP.OF.PIHOLE.SERVER
set service nat rule 1 inside-address port 53
set service nat rule 1 log enable
set service nat rule 1 protocol tcp_udp
set service nat rule 1 source address '!IP.OF.PIHOLE.SERVER'
set service nat rule 1 type destination
set service nat rule 5011 description 'Masquerade Pi-Hole DNS'
set service nat rule 5011 destination address IP.OF.PIHOLE.SERVER
set service nat rule 5011 destination port 53
set service nat rule 5011 log disable
set service nat rule 5011 outbound-interface ETH-INTERFACE-HERE
set service nat rule 5011 protocol tcp_udp
set service nat rule 5011 source address YOUR.SUBNET-RANGE.HERE
set service nat rule 5011 type masquerade
commit
save
exit

You can then verify that they are enabled by running:

show nat rules

Type Codes:  SRC - source, DST - destination, MASQ - masquerade
              X at the front of rule implies rule is excluded

rule   type  intf     translation
----   ----  ----     -----------
2      DST   eth1     daddr !10.9.9.116 to 10.9.9.116
    proto-tcp_udp     dport 53 to 53
                      when saddr !10.9.9.116, sport ANY

3      DST   eth2.3   daddr ANY to 9.9.9.9
    proto-tcp_udp     dport 53 to 53

5010   MASQ  eth0     saddr ANY to xxx.xxx.xxx.xxx
    proto-all         sport ANY

5012   MASQ  eth1     saddr 10.9.9.2-10.9.9.254 to 10.9.9.1
    proto-tcp_udp     sport ANY
                      when daddr 10.9.9.116, dport 53

Your devices should now hit your Pi-Hole DNS server instead of the one they were hard-coded with, and the masquerade source NAT rule makes it look like the query was still answered by the original server.
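A quick way to confirm the redirect from any LAN client is to query a public resolver directly. Assuming the domain below is on your Pi-Hole blocklists, it should come back with Pi-Hole's sinkhole answer instead of a real IP, and the query will show up in the Pi-Hole query log even though the client asked 8.8.8.8:

# BASH [ ANY LAN CLIENT ]
dig +short doubleclick.net @8.8.8.8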

Goodbye, hard-coded DNS servers! :)

PS: A friend from work actually helped me fix this, as originally I only had a destination NAT rule set, which made it seem like it was working when it actually wasn't.