Back in the summer of 2020 I won a Raspberry Pi with an air quality sensor as part of an internal competition to celebrate Earth Day 2020. Having set up the kit, I then expanded it to include an additional sensor to capture some other weather-related values, using the Raspberry Pi Weather Station project as inspiration. I wrote about my setup here and then didn't get around to publishing this blog post.
To celebrate Pi Day in 2021 I was invited to share my Pi Project on an internal celebration bash within VMware. That encouraged me to finish off this post and share the details with a wider audience.
My setup
When starting on the next stage of my project I wanted to be able to store the data I was capturing from my sensors in a more persistent way. Then I would be able to go back and look at the historical data and see how it was changing over time, for example during the different seasons.
The choice of solution came from two different directions. My colleague Sam McGeown had previously blogged about using the same products I was looking at; he was using them to send temperature data from a sensor (similar to one of mine) attached to a Pi for his home lab. I was also looking at the products for a side project at work and needed something that produced example data I could use on my learning journey with the tools.
I don’t maintain a dedicated home lab. Working at VMware I have access to cloud-based environments that fit 90% of my needs, so I usually take advantage of those instead. For this project, though, I needed something that could connect to my home network to communicate with my Pi and wasn’t dependent on connectivity to the work environment. So I created a virtual machine on my personal MacBook using VMware Fusion. This VM was to be my InfluxDB 2.0 and Grafana host, so I installed Ubuntu 20.04 as the operating system. I could have installed these tools directly on the Pi, however I wanted to be able to test sending data to them from a separate host.
The only other machine involved is my Raspberry Pi 4 with the air quality and temperature/humidity/air pressure sensors attached to it. The Pi runs the standard Raspberry Pi OS; I chose not to install any agents or extra software onto the Pi, and instead use the InfluxDB API with the Python client to forward the data.

InfluxDB Setup
The installation of InfluxDB 2.0 is straightforward and documented at https://docs.influxdata.com/influxdb/v2.0/get-started/?t=Linux. Once it was installed and the service started, there was a small amount of configuration to perform using the UI to create the initial user and bucket. I chose to name my bucket Pi_Sensors.
Within the UI I grabbed the token created for my initial user; this is used to authenticate from the Python script and from Grafana. You can also create a dedicated user and token to use instead of the initial user if you prefer.

That’s all the setup needed in InfluxDB; the Python script will push data into the bucket in the correct format. Keep a note of the Organization and Bucket names you chose, as these are needed later in the Python script.
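Before leaving the VM, it is also worth checking that InfluxDB is reachable over the network from another machine. InfluxDB 2.0 exposes an unauthenticated /health endpoint, so a quick check from the Pi (substituting your VM’s IP address) looks like this:

curl http://<InfluxDB_Server_IP_Address>:8086/health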
Grafana Setup
The setup for Grafana was a breeze; the documentation contained all the steps needed at https://grafana.com/docs/grafana/latest/getting-started/getting-started/ and https://grafana.com/docs/grafana/latest/installation/debian/. I installed Grafana onto the same VM as InfluxDB for simplicity, however it could easily be installed onto a separate machine.
After installing Grafana and logging in to the UI, I added a data source for my InfluxDB instance. I chose to use Flux as the query language as it is the newer option. The steps for adding an InfluxDB data source are also well documented at https://grafana.com/docs/grafana/latest/datasources/influxdb/ and the plugin is available out of the box, so no additional downloads are required. Essentially I just had to point Grafana at my InfluxDB instance URL, set the Organization and default Bucket, and configure the token value I captured in the InfluxDB UI for authentication.
The remaining configuration in Grafana was to create a dashboard, which I will cover in a later section.
Python Script to forward the sensor data
I already had a Python script running on my Pi to collect my data and forward it to both a Wavefront dashboard (internal to VMware) and an Adafruit dashboard from my previous post. It was a logical step to reuse this script and amend it to send data to InfluxDB instead.
InfluxDB 2.0 has a Python client available for download from GitHub, which provides examples of how it can be used. I also had Sam’s blog post as a reference, although I wanted to send multiple data points to InfluxDB and was using a later client version with a different syntax, so some trial and error was still involved.
The first thing to do was to install Python 3 on my Pi, as the InfluxDB Python client requires Python 3.6 or later (as documented on the PyPI page linked from the client’s readme file). Without Python 3 I could not use the pip3 install command on my Pi, and the pip command for earlier versions of Python does not find the InfluxDB client package because of the version dependency.
With Python 3 installed, a simple command installs the client:
pip3 install influxdb-client
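To confirm the client installed correctly and can talk to the server, here is a minimal sketch that connects and prints the server health, substituting the placeholder URL, token and organization values with your own:

from influxdb_client import InfluxDBClient

# Connect using the URL of the InfluxDB VM and the token captured in the UI
client = InfluxDBClient(url="http://<InfluxDB_Server_IP_Address>:8086",
                        token="<InfluxDB_User_Token_Value>",
                        org="<InfluxDB_Organization_Name>")
print(client.health())  # a healthy instance reports status "pass"
client.close()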
Next I took a backup of my .py script file, and set to work on adding in the InfluxDB integration.
At the top of my script I added some new import statements for the InfluxDB client:
from influxdb_client import InfluxDBClient, Point, WritePrecision, WriteOptions
from influxdb_client.client.write_api import SYNCHRONOUS
The script also needed some variables defined to connect to my InfluxDB instance and authenticate:
# Configure InfluxDB connection variables
token = "<InfluxDB_User_Token_Value>"
org = "<InfluxDB_Organization_Name>"
bucket = "<InfluxDB_Bucket_Name>"
measurement = "rpi-bme280"
location = "home office"

_client = InfluxDBClient(url="http://<InfluxDB_Server_IP_Address>:8086", token=token)
_write_client = _client.write_api(write_options=WriteOptions(batch_size=500,
                                                             flush_interval=10_000,
                                                             jitter_interval=2_000,
                                                             retry_interval=5_000,
                                                             max_retries=5,
                                                             max_retry_delay=30_000,
                                                             exponential_base=2))
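These WriteOptions mean the client batches writes in the background: points are buffered on the Pi and flushed to InfluxDB every 10 seconds (or sooner if 500 points accumulate), with automatic retries if the connection drops. For sensors read every 10 seconds this keeps the number of HTTP requests down without sacrificing much freshness.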

My script already contained the code to capture a reading from each of the sensors every 10 seconds, so I did not need to change that part. Within the capture loop I added new lines that use the _write_client to push my data to InfluxDB, formatting it so that each type of measurement (temperature, air pressure, humidity and air quality) is recorded in its own right with a timestamp. This allows me to display each type of measurement as a separate piece of data. The write method is documented with examples on the GitHub page for the Python client. The lines I added were as follows:
_write_client.write(bucket, org, [{"measurement": measurement, "tags": {"location": location},
                                   "fields": {"temperature": temperature}, "time": iso}])
_write_client.write(bucket, org, [{"measurement": measurement, "tags": {"location": location},
                                   "fields": {"humidity": humidity}, "time": iso}])
_write_client.write(bucket, org, [{"measurement": measurement, "tags": {"location": location},
                                   "fields": {"air-pressure": pressure}, "time": iso}])
_write_client.write(bucket, org, [{"measurement": measurement, "tags": {"location": location},
                                   "fields": {"aq-pmtwofive": pmtwofive}, "time": iso}])
_write_client.write(bucket, org, [{"measurement": measurement, "tags": {"location": location},
                                   "fields": {"aq-pmten": pmten}, "time": iso}])

The location variable is a string value I set to "home office" at the start of the script. The idea of including a location value was that if I set up extra sensors, or convinced others to send me their data, I could compare values between different locations. Each of the other variables holds the value captured from a sensor. The iso variable is the current UTC time, defined with the following code inside my loop so that the time is set each time the measurements are captured:
iso = datetime.utcnow()
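To show how these pieces fit together, here is a rough sketch of the capture loop. The read_bme280() and read_air_quality() helpers are hypothetical stand-ins for whichever sensor library calls your own script makes:

import time
from datetime import datetime

while True:
    iso = datetime.utcnow()
    temperature, pressure, humidity = read_bme280()  # hypothetical BME280 helper
    pmtwofive, pmten = read_air_quality()            # hypothetical air quality sensor helper
    _write_client.write(bucket, org, [{"measurement": measurement, "tags": {"location": location},
                                       "fields": {"temperature": temperature}, "time": iso}])
    # ...one write per remaining field, exactly as shown above...
    time.sleep(10)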
The final change I made to the script was to put everything apart from the import statements and the variable definitions inside a try statement. I added an except clause at the end to perform the most basic exception handling, closing the connection to InfluxDB and allowing the script to exit if something went wrong or I hit a key on the keyboard.
except KeyboardInterrupt:
    _write_client.__del__()
    _client.__del__()
    pass
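Recent versions of the client also expose close() methods, which do the same cleanup a little more explicitly; the except clause could equally be written as:

except KeyboardInterrupt:
    _write_client.close()
    _client.close()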
After a bit of trial and error along the way, I had a script that successfully sent data to InfluxDB when tested. I could check this by logging in to the InfluxDB UI and using the Explore tool to read the data and visualise it.

In the screenshot you can see that the _field values in the third column allow me to select each of the different types of measurement individually.
I have the script running interactively via the terminal at the moment, and it prints the sensor readings to the console each time a new reading is taken. The script could also be launched automatically when the Pi starts up, for example via a cron @reboot entry.
My final script for just the InfluxDB integration (Wavefront and Adafruit removed) is provided below. It’s not the cleanest code I will ever create, but it works on my Pi and that is enough for me for now 🙂
Grafana Dashboards
As the screenshot in the previous section shows, InfluxDB 2.0 can act as both the time series data storage platform and the visualisation engine. I decided to use Grafana despite this, as I wanted to learn some of the basic tasks for my side project.
The Grafana documentation covers how to create dashboards. I decided to create a single dashboard with a chart for each of my different measurement types.
For each chart I selected InfluxDB as my data source and Flux as my query language. Then I defined the query I wanted to use. Here’s where the InfluxDB visualisation engine comes in handy: if you go back to the InfluxDB UI and use the Explore tool to display the data again, you will notice a Script Editor button in the bottom right corner of the screen.

If you click on the Script Editor button it will show you the query you have created via the GUI, which is in the Flux query language.

Now you can copy that query from InfluxDB and paste it straight into Grafana for the chart you are creating. Save the chart, then repeat with a new chart for each of the other measurements in the dashboard.
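For illustration, the generated query for my temperature field looks something like this (the exact range and aggregation lines depend on what you selected in the Explore tool):

from(bucket: "Pi_Sensors")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "rpi-bme280")
  |> filter(fn: (r) => r._field == "temperature")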

With each of the charts configured with their respective queries, you can edit the chart types to change the display options; for example, Grafana can show you the latest non-zero value or the mean value across the data it has collected. You can also change the display colours and the chart format using the side edit panel.
My final dashboard currently looks like this.

This shows me the latest non-zero value for each measurement over the last 7 days, and the dashboard auto-refreshes.