Influx DB CLI Making Me Crazy

Summary

A solution to the Influx DB CLI error “Failed to check token: received status code 401 from server” including instructions to install the InfluxDB V2 CLI.

Story

As I worked my way through using the Influx database, I tried using the InfluxDB CLI.  However, no matter what I did, I got this error, which was super annoying.

linux$ 
linux$ influx
Failed to check token: received status code 401 from server
linux$ influx create
Failed to check token: received status code 401 from server
linux$ influx adsf
Failed to check token: received status code 401 from server
linux$ influx setup
Failed to check token: received status code 401 from server

I tried googling around to figure out what was happening, but really didn’t see anything that would explain that behavior.  However, I did find one comment saying that if you were using Docker you could run the CLI from a shell inside the container.  So I tried that:

linux$ docker exec -it 89ffd4bb9ec5  /bin/bash
root@89ffd4bb9ec5:/# influx version
Influx CLI 2.0.4 (git: 4e7a59bb9a) build_date: 2021-02-08T17:47:02Z

When I originally installed the client (on Ubuntu) I did this:

linux$ sudo apt install influxdb-client
[sudo] password for arh: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  linux-hwe-5.4-headers-5.4.0-71
Use 'sudo apt autoremove' to remove it.
The following NEW packages will be installed:
  influxdb-client
0 upgraded, 1 newly installed, 0 to remove and 97 not upgraded.
Need to get 1,146 kB of archives.
After this operation, 3,969 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 influxdb-client amd64 1.1.1+dfsg1-4 [1,146 kB]
Fetched 1,146 kB in 1s (1,883 kB/s)       
Selecting previously unselected package influxdb-client.
(Reading database ... 275845 files and directories currently installed.)
Preparing to unpack .../influxdb-client_1.1.1+dfsg1-4_amd64.deb ...
Unpacking influxdb-client (1.1.1+dfsg1-4) ...
Setting up influxdb-client (1.1.1+dfsg1-4) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...

When I ran the version check, look what I got:

linux$ /usr/bin/influx -version
InfluxDB shell version: 1.1.1

Now we know the problem: I have the wrong version of the CLI.  This was caused by two things:

  1. I installed the Docker version of InfluxDB, so I never had the command-line version on my Linux box.
  2. The influxdb-client package that you get from the Debian source on Ubuntu is the 1.x version.  It is too bad that you can’t “sudo apt install influxdbv2-client”.  Oh well.

To fix this, start by nuking the old version.

linux$ sudo apt-get remove influxdb-client
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  linux-hwe-5.4-headers-5.4.0-71
Use 'sudo apt autoremove' to remove it.
The following packages will be REMOVED:
  influxdb-client
0 upgraded, 0 newly installed, 1 to remove and 97 not upgraded.
After this operation, 3,969 kB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 275851 files and directories currently installed.)
Removing influxdb-client (1.1.1+dfsg1-4) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...

Then download the “InfluxDB Cloud CLI” (which is just a standalone build of the CLI).  You can get it from the downloads page on the influxdata.com website.

Scroll down to the InfluxDB Cloud CLI section.

Then follow the instructions.  (Notice that I moved the binary to /usr/local/bin.)

linux$ wget https://dl.influxdata.com/influxdb/releases/influxdb2-client-2.0.6-linux-amd64.tar.gz
--2021-05-21 11:08:48--  https://dl.influxdata.com/influxdb/releases/influxdb2-client-2.0.6-linux-amd64.tar.gz
Resolving dl.influxdata.com (dl.influxdata.com)... 13.33.74.100, 13.33.74.21, 13.33.74.27, ...
Connecting to dl.influxdata.com (dl.influxdata.com)|13.33.74.100|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11758006 (11M) [application/x-gzip]
Saving to: ‘influxdb2-client-2.0.6-linux-amd64.tar.gz’

influxdb2-client-2.0.6-linux-amd64.tar.gz              100%[===========================================================================================================================>]  11.21M  10.8MB/s    in 1.0s    

2021-05-21 11:08:50 (10.8 MB/s) - ‘influxdb2-client-2.0.6-linux-amd64.tar.gz’ saved [11758006/11758006]

linux$ tar xvf influxdb2-client-2.0.6-linux-amd64.tar.gz 
influxdb2-client-2.0.6-linux-amd64/LICENSE
influxdb2-client-2.0.6-linux-amd64/README.md
influxdb2-client-2.0.6-linux-amd64/influx
linux$ sudo mv influxdb2-client-2.0.6-linux-amd64/influx /usr/local/bin

Now when you check the version, you are in the money

linux$ influx version
Influx CLI 2.0.6 (git: 4db98b4c9a) build_date: 2021-04-29T16:48:12Z
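
By the way, with the 2.x CLI in place you still need to give it your server URL and API token before most commands will work; on a fresh server “influx setup” does this, and on an existing server “influx config create” stores the URL/token/org for you.  If you want to double-check that a token is actually good, here is a minimal Python sketch using the influxdb-client package — the URL, token, and org are placeholders, not my real values:

from influxdb_client import InfluxDBClient
from influxdb_client.rest import ApiException

# Placeholder URL, token, and org -- substitute your own values
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
try:
    # Any authenticated API call will do; listing buckets is a cheap one
    buckets = client.buckets_api().find_buckets()
    print("Token is good:", [b.name for b in buckets.buckets])
except ApiException as e:
    # A bad or missing token comes back as the same 401 the CLI was printing
    print("Failed to check token:", e.status)
client.close()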

The Creek 3.0: A Docker MySQL Diversion – Part 2.5

Summary

A discussion of reading I2C data from a sensor and sending it to a MySQL instance in the cloud using Python.

I was originally planning only one article on the MySQL part of this project.  But things got really out of control, and I ended up splitting the article into two parts.  I jokingly call this article “Part 2.5”.  In today’s article I’ll walk through the steps to get Python and the required libraries running on the Raspberry Pi so it can read the sensor data and send it to my new Docker MySQL server.

Here is what the picture looks like:

Build the Python Environment w/smbus & mysql-connector-python

I typically like to build a Python virtual environment with the specific version of Python and all of the required packages.  To do this you need to:

  1. python3 -m venv venv
  2. source venv/bin/activate
  3. pip install smbus
  4. pip install mysql-connector-python
pi@iotexpertpi:~ $ mkdir mysql-docker
pi@iotexpertpi:~ $ python3 -m venv venv
pi@iotexpertpi:~ $ source venv/bin/activate
(venv) pi@iotexpertpi:~ $ pip install smbus
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting smbus
  Using cached https://www.piwheels.org/simple/smbus/smbus-1.1.post2-cp37-cp37m-linux_armv6l.whl
Installing collected packages: smbus
Successfully installed smbus-1.1.post2
(venv) pi@iotexpertpi:~ $ pip install mysql-connector-python
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting mysql-connector-python
  Using cached https://files.pythonhosted.org/packages/2a/8a/428d6be58fab7106ab1cacfde3076162cd3621ef7fc6871da54da15d857d/mysql_connector_python-8.0.25-py2.py3-none-any.whl
Collecting protobuf>=3.0.0 (from mysql-connector-python)
  Downloading https://files.pythonhosted.org/packages/6b/2c/62cee2a27a1c4c0189582330774ed6ac2bfc88cb223f04723620ee04d59d/protobuf-3.17.0-py2.py3-none-any.whl (173kB)
    100% |████████████████████████████████| 174kB 232kB/s 
Collecting six>=1.9 (from protobuf>=3.0.0->mysql-connector-python)
  Using cached https://files.pythonhosted.org/packages/d9/5a/e7c31adbe875f2abbb91bd84cf2dc52d792b5a01506781dbcf25c91daf11/six-1.16.0-py2.py3-none-any.whl
Installing collected packages: six, protobuf, mysql-connector-python
Successfully installed mysql-connector-python-8.0.25 protobuf-3.17.0 six-1.16.0
(venv) pi@iotexpertpi:~

Once that is done you can see that everything is copasetic by running “pip freeze” where you can see the mysql-connector-python and the smbus.

(venv) pi@iotexpertpi:~ $ pip freeze
mysql-connector-python==8.0.25
pkg-resources==0.0.0
protobuf==3.17.0
six==1.16.0
smbus==1.1.post2

Python: Get Data SMBUS

If you remember from the original design, the PSoC 4 acts as a register file with the data from the temperature and pressure sensor.  It has 12 bytes of data, laid out as:

  1. 2 bytes formatted as 16-bit unsigned ADC counts from the pressure sensor
  2. 2 bytes formatted as a 16-bit signed temperature in “centiTemp”
  3. 4 bytes as a float depth in feet
  4. 4 bytes as a float temperature in Centigrade

This program:

  1. Reads 12 bytes over I2C
  2. Converts them into a byte array
  3. Unpacks and prints the values
import struct
import smbus


######################################################
#Read the data from the PSoC 4
######################################################
bus = smbus.SMBus(1)
address = 0x08

# The data structure in the PSOC 4 is:
# uint16_t pressureCount ; the adc-counts being read on the pressure sensor
# int16_t centiTemp ; the temperaure in 10ths of a degree C
# float depth ; four bytes float representing the depth in Feet
# float temperature ; four byte float representing the temperature in degrees C

numBytesInStruct = 12
block = bus.read_i2c_block_data(address, 0, numBytesInStruct)

# convert list of bytes returned from sensor into array of bytes
mybytes = bytearray(block)
# convert the byte array into
# H=Unsigned 16-bit int
# h=Signed 16-bit int
# f=Float 
# this function will return a tuple with pressureCount,centiTemp,depth,temperature
vals = struct.unpack_from('Hhff',mybytes,0)
# extract depth and temperature from the tuple and print them
depth = vals[2]
temperature = vals[3]
print(f"{depth} {temperature}")

Python: MySQL

I created a separate Python program to insert the data into the MySQL database.  This program does the following things:

  1. Makes sure the command line arguments make sense
  2. Makes a connection to the server
  3. Creates the SQL statement
  4. Runs the inserts
import mysql.connector
import sys
from datetime import datetime


# Expect exactly two arguments: depth and temperature
if len(sys.argv) != 3:
    sys.exit("Wrong number of arguments")

# Connect to the MySQL server running in the Docker container
mydb = mysql.connector.connect(
    host="spiff.local",
    user="creek",
    password="donthackme",
    database="creekdata",
    auth_plugin='mysql_native_password')

# Build a parameterized insert with the current time as created_at
now = datetime.now()
formatted_date = now.strftime('%Y-%m-%d %H:%M:%S')
sql = "insert into creekdata.creekdata (depth,temperature,created_at) values (%s,%s,%s)"
vals = (sys.argv[1],sys.argv[2],formatted_date)


mycursor = mydb.cursor()

mycursor.execute(sql, vals)

mydb.commit()
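
One thing insertMysql.py assumes is that the creekdata.creekdata table already exists (mine was migrated from the old Raspberry Pi, as described in Part 2).  For reference, here is a sketch of a minimal table that would satisfy the insert; the column types are my assumption, not a dump of the real schema:

import mysql.connector

# Same connection parameters as insertMysql.py
mydb = mysql.connector.connect(
    host="spiff.local",
    user="creek",
    password="donthackme",
    database="creekdata",
    auth_plugin='mysql_native_password')

mycursor = mydb.cursor()
# Column types are assumptions that match the insert statement above
mycursor.execute("""
    create table if not exists creekdata.creekdata (
        id int auto_increment primary key,
        depth float,
        temperature float,
        created_at datetime)
""")
mydb.close()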

Shell Script & Crontab

I created a simple bash shell script to:

  1. Activate the virtual environment
  2. Run the get-data Python program
  3. Run the insert program
#!/bin/bash

cd ~/influxdb
source venv/bin/activate
vals=$(python getData.py)
#echo $vals
python insertMysql.py $vals
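
Note that getData.py prints the depth and temperature as two space-separated numbers, so the unquoted $vals expands into exactly the two command-line arguments that insertMysql.py expects.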

Finally, a cronjob to run the program every 5 minutes.

# Edit this file to introduce tasks to be run by cron.
# 
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
# 
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
# 
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
# 
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
# 
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
# 
# For more information see the manual pages of crontab(5) and cron(8)
# 
# m h  dom mon dow   command
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /home/pi/influxdb/getInsertData.sh

Test with MySQL Workbench

Now when I load the data in MySQL Workbench, I can see the inserts are happening.  Kick ass.

The Creek 3.0: A Docker MySQL Diversion – Part 2

Summary

A tutorial on running MySQL in Docker on Ubuntu Linux, then building a Python interface on a Raspberry Pi that reads a sensor and inserts the data over the network into the new MySQL server.

Story

As I said in the introduction, this whole process has been a bit chaotic.  So here we go.  The Raspberry Pi that runs the current creek system has been in my barn since at least 2013 running on the same SD Card and never backed up.  I suppose that it wouldn’t have really mattered if I lost the old flood data, but it would have been annoying.  Also, that Raspberry Pi is very slow running queries given the 2.2M records that now exist in the database.

To fix this I decided that I want to start by moving the MySQL server to a new computer that runs Docker.  Here is the original configuration (from the original article)

When I set out on this article, the plan was to move the MySQL instance from the Raspberry Pi to a new Linux box.  Unfortunately, while I was doing this I broke the operating system on the Raspberry Pi and ended up having to rebuild the interface to the PSoC 4.  Here is what I ended up building:

This article will walk you through the following steps.

  1. Build a new Linux machine & Install Ubuntu Server
  2. Install Docker & MySQL
  3. Migrate the Data from the original Raspberry Pi MySQL Database
  4. Build the Python Environment (Part 2.5)
  5. Python: Get Data SMBUS (Part 2.5)
  6. Python: Insert MySQL (Part 2.5)
  7. Shell Script & Crontab (Part 2.5)
  8. Test using MySQL Workbench (Part 2.5)

Build a new Linux Box with Ubuntu Server

I wanted a server on my local LAN running MySQL.  My lab assistant suggested that I find something fairly inexpensive on eBay.  Here is what I bought:


And… for sure it needed an SSD.

Then I downloaded Ubuntu Server 20.04 from https://ubuntu.com/download/server

After the file was downloaded, I created a bootable SD card by running: dd if=ubuntu-20.04.2-live-server-amd64.iso of=/dev/rdisk4 bs=1m

arh Downloads $ sudo diskutil unmountDisk /dev/disk4
Unmount of all volumes on disk4 was successful
arh Downloads $ sudo dd if=ubuntu-20.04.2-live-server-amd64.iso of=/dev/rdisk4 bs=1m
1158+1 records in
1158+1 records out
1215168512 bytes transferred in 32.400378 secs (37504763 bytes/sec)
arh Downloads $ diskutil list /dev/disk4
/dev/disk4 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     Apple_partition_scheme                        *31.1 GB    disk4
   1:        Apple_partition_map                         4.1 KB     disk4s1
   2:                  Apple_HFS                         4.1 MB     disk4s2

After doing the installation (I don’t have screen captures of that, but it is easy), I installed the Avahi daemon.  What is that?  Avahi is a program that enables mDNS – a part of zero-configuration networking that helps you manage “names”.  Specifically, in my case it will create a DNS-like name for this computer without me having to actually configure DNS.  That name is “spiff.local”.

To install Avahi, run: sudo apt install avahi-daemon

arh@spiff:~$ systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
     Loaded: loaded (/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2021-05-01 14:20:40 UTC; 2 weeks 1 days ago
TriggeredBy: ● avahi-daemon.socket
   Main PID: 713 (avahi-daemon)
     Status: "avahi-daemon 0.7 starting up."
      Tasks: 2 (limit: 14105)
     Memory: 2.8M
     CGroup: /system.slice/avahi-daemon.service
             ├─713 avahi-daemon: running [spiff.local]
             └─757 avahi-daemon: chroot helper

May 11 11:26:26 spiff avahi-daemon[713]: Registering new address record for fe80::4409:73ff:fe08:4c75 on veth72ac3b7.*.
May 11 11:26:26 spiff avahi-daemon[713]: Joining mDNS multicast group on interface br-18a7431f8090.IPv6 with address fe80::42:beff:fe8c:e24.
May 11 11:26:26 spiff avahi-daemon[713]: New relevant interface br-18a7431f8090.IPv6 for mDNS.
May 11 11:26:26 spiff avahi-daemon[713]: Registering new address record for fe80::42:beff:fe8c:e24 on br-18a7431f8090.*.
May 11 11:26:43 spiff avahi-daemon[713]: Interface veth72ac3b7.IPv6 no longer relevant for mDNS.
May 11 11:26:43 spiff avahi-daemon[713]: Leaving mDNS multicast group on interface veth72ac3b7.IPv6 with address fe80::4409:73ff:fe08:4c75.
May 11 11:26:43 spiff avahi-daemon[713]: Withdrawing address record for fe80::4409:73ff:fe08:4c75 on veth72ac3b7.
May 11 11:26:48 spiff avahi-daemon[713]: Joining mDNS multicast group on interface veth5c71e0d.IPv6 with address fe80::4499:b0ff:feef:30fe.
May 11 11:26:48 spiff avahi-daemon[713]: New relevant interface veth5c71e0d.IPv6 for mDNS.
May 11 11:26:48 spiff avahi-daemon[713]: Registering new address record for fe80::4499:b0ff:feef:30fe on veth5c71e0d.*.
arh@spiff:~$ 
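
A quick way to sanity-check the name from another machine on the LAN (assuming that machine resolves .local names, e.g. via nss-mdns on Linux or Bonjour on macOS) is a one-line Python test:

import socket

# Resolves via mDNS to whatever address avahi-daemon is advertising
print(socket.gethostbyname("spiff.local"))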

I will also be running MySQL in a Docker container.  To install Docker, run: sudo apt install docker.io

arh@spiff:~$ systemctl status docker
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2021-05-01 14:20:42 UTC; 2 weeks 1 days ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 758 (dockerd)
      Tasks: 26
     Memory: 142.1M
     CGroup: /system.slice/docker.service
             ├─   758 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
             └─240639 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 3306 -container-ip 172.18.0.2 -container-port 3306

May 02 13:39:42 spiff dockerd[758]: time="2021-05-02T13:39:42.754047181Z" level=info msg="ignoring event" container=4d2e6a3c8c779e01676e4fd8f748aa4581c9469d92398ff274a3800c5d3e98a2 module>
May 02 13:40:42 spiff dockerd[758]: time="2021-05-02T13:40:42.381681852Z" level=error msg="Error setting up exec command in container 4d2e6a3c8c77: Container 4d2e6a3c8c779e01676e4fd8f748a>
May 02 13:40:42 spiff dockerd[758]: time="2021-05-02T13:40:42.760184585Z" level=warning msg="error locating sandbox id 5e4b44ba78eacdb974bfd773ffabf46526177f4ff135ace09b667c3e497b3468: sa>
May 02 13:40:42 spiff dockerd[758]: time="2021-05-02T13:40:42.762228692Z" level=error msg="4d2e6a3c8c779e01676e4fd8f748aa4581c9469d92398ff274a3800c5d3e98a2 cleanup: failed to delete conta>
May 02 13:40:42 spiff dockerd[758]: time="2021-05-02T13:40:42.764274310Z" level=error msg="restartmanger wait error: network c6593d532df7651e3a38572e609d42f69f0daba3ac36263933ca0ae43504cc>
May 11 11:26:24 spiff dockerd[758]: time="2021-05-11T11:26:24.772660650Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [namese>
May 11 11:26:24 spiff dockerd[758]: time="2021-05-11T11:26:24.772680359Z" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2>
May 11 11:26:42 spiff dockerd[758]: time="2021-05-11T11:26:42.994091472Z" level=info msg="ignoring event" container=bfd550cab791b061bbd4e26f3435165de7b3664373de9cbb80d2e78a0aff08e2 module>
May 11 11:26:46 spiff dockerd[758]: time="2021-05-11T11:26:46.212688536Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [namese>
May 11 11:26:46 spiff dockerd[758]: time="2021-05-11T11:26:46.212708396Z" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2>
arh@spiff:~$

Docker Training

I knew that I wanted to try Docker (no kidding, eh), but I didn’t know much of anything about it.  I am not really a “video” person for learning, but my son had talked me into trying a Skillshare class to learn how to edit video, so I thought that I would give it a try for Docker as well.  This class was OK but not great (like 2/5).  Here is a screenshot from the class:

I also watched this class, which is excellent… especially if you watch it at 1.5x speed.

Docker Introduction

There are four basic ideas you need in order to understand Docker:

  1. Image: A runnable binary template that can be instantiated into a container (like a class in object-oriented programming).  Inspect with: docker image ls
  2. Container: A VM-like instance of an image (like an object, i.e. an instance of a class, in object-oriented programming).  This includes the network port mapping, volumes, network, etc.  Inspect with: docker ps -a
  3. Volume: A directory or file mapped between the host operating system and the Docker container.  For example, directory X on the host is mapped to directory Y inside of the container.  Inspect with: docker volume ls
  4. Network: A synthetic network created by the Docker daemon to connect one or more containers together.  This includes DHCP, DNS, routing, etc.  Inspect with: docker network ls

Docker Compose & MySQL

You can find new images at https://hub.docker.com.  In fact, this is where I get everything that I need for MySQL.

If you look a little further down the Docker Hub page, you will find the specific instructions for “running” a Docker MySQL image.
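
For reference, the run command on that page is a one-liner along the lines of “docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql” (paraphrasing the Docker Hub MySQL page).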

These instructions will work.  However, there are two problems.

#1: Run that way, the container does not expose IP port 3306 to the outside world (your computer or network).  This means you won’t be able to talk to the MySQL instance.  That is not very helpful.

#2: All of the secret sauce you typed will be lost if you ever need to run that same command again.

The good news is that Docker has a specific file format for saving this information, called “docker-compose.yaml”.

My docker-compose file, shown below, does the following:

  1. The image is “mysql” (use the official docker mysql image)
  2. Map the MySQL port 3306 from inside the container to the outside
  3. Make the root password “supersecret”
  4. Create a database called “creekdata”
  5. Create a user called “creek” with a password “asillypassword”
  6. Map the mysql data inside of the container at /var/lib/mysql to an outside volume called “mysql”
arh@spiff:~/mysql$ more docker-compose.yaml
version: '3.1'

services:

  db:
    image: mysql
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: supersecret
      MYSQL_DATABASE: creekdata
      MYSQL_USER: creek
      MYSQL_PASSWORD: asillypassword
    volumes:
      - mysql:/var/lib/mysql
      
volumes:
  mysql:
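
Note the named volume at the bottom: because the MySQL data lives in the “mysql” volume rather than inside the container, you can delete and recreate the container without losing the database (“docker-compose down” leaves named volumes in place unless you pass -v).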

With this file I can create the container by running “docker-compose up”

linux$ docker-compose up
Creating network "mysql_default" with the default driver
Creating volume "mysql_mysql" with default driver
Pulling db (mysql:latest)...
latest: Pulling from library/mysql
69692152171a: Pull complete
1651b0be3df3: Pull complete
951da7386bc8: Pull complete
0f86c95aa242: Pull complete
37ba2d8bd4fe: Pull complete
6d278bb05e94: Pull complete
497efbd93a3e: Pull complete
f7fddf10c2c2: Pull complete
16415d159dfb: Pull complete
0e530ffc6b73: Pull complete
b0a4a1a77178: Pull complete
cd90f92aa9ef: Pull complete
Digest: sha256:d50098d7fcb25b1fcb24e2d3247cae3fc55815d64fec640dc395840f8fa80969
Status: Downloaded newer image for mysql:latest
Creating mysql_db_1 ... 
Creating mysql_db_1 ... done
Attaching to mysql_db_1
db_1  | 2021-05-17 20:01:20+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.25-1debian10 started.
db_1  | 2021-05-17 20:01:20+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
db_1  | 2021-05-17 20:01:20+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.25-1debian10 started.
db_1  | 2021-05-17 20:01:20+00:00 [Note] [Entrypoint]: Initializing database files
db_1  | 2021-05-17T20:01:20.192621Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.25) initializing of server in progress as process 41
db_1  | 2021-05-17T20:01:20.196027Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
db_1  | 2021-05-17T20:01:20.770999Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
db_1  | 2021-05-17T20:01:21.809117Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
db_1  | 2021-05-17 20:01:24+00:00 [Note] [Entrypoint]: Database files initialized
db_1  | 2021-05-17 20:01:24+00:00 [Note] [Entrypoint]: Starting temporary server
db_1  | 2021-05-17T20:01:24.396505Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.25) starting as process 86
db_1  | 2021-05-17T20:01:24.415784Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
db_1  | 2021-05-17T20:01:24.551463Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
db_1  | 2021-05-17T20:01:24.618191Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: /var/run/mysqld/mysqlx.sock
db_1  | 2021-05-17T20:01:24.726805Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
db_1  | 2021-05-17T20:01:24.726923Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
db_1  | 2021-05-17T20:01:24.728714Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
db_1  | 2021-05-17T20:01:24.738807Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.25'  socket: '/var/run/mysqld/mysqld.sock'  port: 0  MySQL Community Server - GPL.
db_1  | 2021-05-17 20:01:24+00:00 [Note] [Entrypoint]: Temporary server started.
db_1  | Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
db_1  | Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
db_1  | Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
db_1  | Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
db_1  | 2021-05-17 20:01:25+00:00 [Note] [Entrypoint]: Creating database creekdata
db_1  | 2021-05-17 20:01:25+00:00 [Note] [Entrypoint]: Creating user creek
db_1  | 2021-05-17 20:01:25+00:00 [Note] [Entrypoint]: Giving user creek access to schema creekdata
db_1  | 
db_1  | 2021-05-17 20:01:25+00:00 [Note] [Entrypoint]: Stopping temporary server
db_1  | 2021-05-17T20:01:25.775184Z 13 [System] [MY-013172] [Server] Received SHUTDOWN from user root. Shutting down mysqld (Version: 8.0.25).
db_1  | 2021-05-17T20:01:27.490685Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.25)  MySQL Community Server - GPL.
db_1  | 2021-05-17 20:01:27+00:00 [Note] [Entrypoint]: Temporary server stopped
db_1  | 
db_1  | 2021-05-17 20:01:27+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up.
db_1  | 
db_1  | 2021-05-17T20:01:27.988961Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.25) starting as process 1
db_1  | 2021-05-17T20:01:27.999715Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
db_1  | 2021-05-17T20:01:28.135399Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
db_1  | 2021-05-17T20:01:28.202245Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
db_1  | 2021-05-17T20:01:28.287968Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
db_1  | 2021-05-17T20:01:28.288087Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
db_1  | 2021-05-17T20:01:28.290206Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
db_1  | 2021-05-17T20:01:28.300867Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.25'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.

Migrate the Data using MySQL Workbench

I have a BUNCH of data (2.2M rows or so) on the original Raspberry Pi.  I want this data in my newly created instance of MySQL.  To get it there I will use the MySQL Workbench migration wizard to move the data from the old to the new instance.

It starts with these nice instructions.

Then I specify the source (the original Raspberry Pi)

The target is specified next.

It then reads the database schema from the source and makes sure that it can talk to the target.

Then it asks me what I want to transfer.  There is only one database schema on the source, the “creekdata” database.

Next it reads the source schema and reverse engineers the tables etc.

Now it asks specifically what you want to transfer.  For my case there are two tables in the creekdata database.

Then it generates the specific MySQL commands required to recreate the schema.

It gives you the option of changing it.

Now it asks you what method you want to use on the target.  I choose to have it do all of the work.

Then it creates the new database and tables.

And you can see that it worked.

Then it asks how I want to copy the data.  I tell it to do all of the work for me.

Then it runs a bulk transfer of the data.

And it gives me a final report that things worked.  Kick ass.

I can now make a connection to the new database.   And I see my old data back to 2013.
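
As a final sanity check (and a preview of Part 2.5), a few lines of Python against the new server will confirm that all of the rows made the trip; this is a sketch using the same credentials as the insert program:

import mysql.connector

# Same connection parameters as the Part 2.5 insert program
mydb = mysql.connector.connect(
    host="spiff.local",
    user="creek",
    password="donthackme",
    database="creekdata",
    auth_plugin='mysql_native_password')

mycursor = mydb.cursor()
# Should report roughly 2.2M rows, with the oldest dating back to 2013
mycursor.execute("select count(*), min(created_at) from creekdata.creekdata")
count, oldest = mycursor.fetchone()
print(f"{count} rows, oldest from {oldest}")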

That is it for this article.  In the next article I’ll do the Python and shell script stuff to reconnect my data to the new MySQL server.

The Creek 3.0: Docker Telegraf, Influx, Grafana – Part 1

Summary

The architecture and first steps of a new IoT implementation using PSoC 6, CYW43012 WiFi, AnyCloud MQTT, Raspberry Pi, Python, Influx, Grafana, Telegraf and Docker.  Wow, sounds like a lot.

The Story

For quite some time, I have been wanting to replace my original Elkhorn Creek implementation because … well … it is old school and a bit tired.  I started an implementation which I called “The Creek 2.0” that used AWS IoT, AWS Lambda, and MySQL.  I thought it was interesting to learn about all of the AWS stuff… but I never finished the user interface, nor did I replace the Raspberry Pi.  Also, that solution was going in the old-school direction and I wanted to use more open source.

So, this time I am going to go all the way.  Here is the architecture:

There are a bunch of things that I have never used including:

  1. Docker
  2. Mosquitto MQTT
  3. Telegraf
  4. Influx DB
  5. Grafana

Which is quite a bit of new stuff.  Almost every time I work on a series like this I do all of the work in advance of writing the first article.  That way I know how things are going to end and what is going to go wrong, and I can fix the problems in advance of you guys having to suffer with me.  This time, well, not so much, so I am quite sure that there will be some drama.

To this point I have spent a bunch of time with:

  1. Learning Docker
  2. Trying out Influx DB and Grafana
  3. Making Telegraf work

There are still some things which are a bit unknown, including:

  1. I don’t like the Telegraf implementation of the mqtt_consumer, which is going to require me to spend time learning “Go”
  2. I don’t really know how to expose Grafana to the internet safely (is that going to be OK?)
  3. I am considering writing an “Influx Client Library” for PSoC to skip the MQTT
  4. I am considering using “Influx Line Protocol” and not using MQTT (see the sketch after this list)
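
To make that last idea concrete, Influx line protocol is just text of the form “measurement,tag=value field=value [timestamp]”, and the Python influxdb-client will happily write a raw line.  Here is a minimal sketch; the URL, token, org, and bucket are placeholders, and the measurement/field names are made up for illustration:

from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

# A raw line protocol record: measurement, tags, fields (names are illustrative)
line = "creekdata,sensor=elkhorn depth=2.1,temperature=19.5"

# Placeholder URL, token, org, and bucket
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)
write_api.write(bucket="creekdata", record=line)
client.close()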

So over the next few weeks we will see how things evolve.  I also decided to purchase a new Linux box for my house to run the system, so I will talk about what I did there.

Alan