Databases such as RedisTimeSeries are just a part of the overall solution. You also need to think about how to collect (ingest), process, and send all your data to RedisTimeSeries. What you really need is a scalable data pipeline that can act as a buffer to decouple producers and consumers. This blog post demonstrates how to use RedisTimeSeries with Apache Kafka for analyzing time series data. GitHub repo - https://github.com/abhirockzz/redis-timeseries-kafka
The sample scenario deals with two device metrics: temperature and pressure. We will store these metrics in RedisTimeSeries (of course!) and use the following naming convention for the keys - <metric name>:<location>:<device>. For example, the temperature for device 1 in location 5 will be represented as temp:5:1. Each time series data point will also have the following labels (metadata): metric, location, and device. This allows for flexible querying, as you will see in the upcoming sections. Here is how you would add data points using the TS.ADD command:
command:# temperature for device 2 in location 3 along with labels
TS.ADD temp:3:2 * 20 LABELS metric temp location 3 device 2
# pressure for device 2 in location 3
TS.ADD pressure:3:2 * 60 LABELS metric pressure location 3 device 2
For the infrastructure, you need an Azure Cache for Redis (Enterprise tier) instance with the RedisTimeSeries module enabled, as well as a Kafka cluster with a topic for the device data (e.g. mqtt.device-stats). Finally, create the Azure Spring Cloud service that will host the processor application:

az spring-cloud create -n <name of Azure Spring Cloud service> -g <resource group name> -l <enter location e.g southeastasia>
Start by cloning the GitHub repo:

git clone https://github.com/abhirockzz/redis-timeseries-kafka
cd redis-timeseries-kafka
Next, install and start the mosquitto MQTT broker locally (the commands below are for a Mac):

brew install mosquitto
brew services start mosquitto
Do the same for Grafana:

brew install grafana
brew services start grafana

Alternatively, you can run Grafana in Docker with the Redis Data Source plugin pre-installed:

docker run -d -p 3000:3000 --name=grafana -e "GF_INSTALL_PLUGINS=redis-datasource" grafana/grafana
You should be able to find the connect-distributed.properties file in the repo that you just cloned. Replace the values for properties such as bootstrap.servers, sasl.jaas.config, etc. with the details of your Kafka cluster.
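For reference, the Kafka client section of that file typically ends up looking something like the excerpt below; the values are placeholders, so substitute your own endpoint and credentials:

# excerpt - placeholder values
bootstrap.servers=<your kafka endpoint>:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<api key>" password="<api secret>";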
Then start the Kafka Connect worker:

export KAFKA_INSTALL_DIR=<kafka installation directory e.g. /home/foo/kafka_2.12-2.5.0>
$KAFKA_INSTALL_DIR/bin/connect-distributed.sh connect-distributed.properties
To install the MQTT source connector, download the connector archive and extract it into one of the directories listed in the plugin.path configuration property. If you're using Confluent Platform locally, simply use the CLI:

confluent-hub install confluentinc/kafka-connect-mqtt:latest
Update the mqtt-source-config.json file: make sure you enter the right topic name for kafka.topic and leave mqtt.topics unchanged.
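For orientation, the connector definition in that file looks roughly like the sketch below (the mqtt.server.uri value assumes a local broker; treat the file in the repo as authoritative):

{
  "name": "mqtt-source",
  "config": {
    "connector.class": "io.confluent.connect.mqtt.MqttSourceConnector",
    "mqtt.server.uri": "tcp://localhost:1883",
    "mqtt.topics": "device-stats",
    "kafka.topic": "mqtt.device-stats",
    ...
  }
}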
With that in place, create the connector instance:

curl -X POST -H 'Content-Type: application/json' http://localhost:8083/connectors -d @mqtt-source-config.json
# wait for a minute before checking the connector status
curl http://localhost:8083/connectors/mqtt-source/status
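If all went well, the status endpoint should report the connector and its task as RUNNING, roughly along these lines (worker ids elided):

{"name":"mqtt-source","connector":{"state":"RUNNING","worker_id":"..."},"tasks":[{"id":0,"state":"RUNNING","worker_id":"..."}],"type":"source"}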
Next, build the device data processor app and deploy it to Azure Spring Cloud:

cd consumer
export JAVA_HOME=/Library/Java/JavaVirtualMachines/zulu-11.jdk/Contents/Home
mvn clean package

az spring-cloud app create -n device-data-processor -s <name of Azure Spring Cloud instance> -g <name of resource group> --runtime-version Java_11
az spring-cloud app deploy -n device-data-processor -s <name of Azure Spring Cloud instance> -g <name of resource group> --jar-path target/device-data-processor-0.0.1-SNAPSHOT.jar
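Conceptually, the processor consumes each device reading from the Kafka topic and turns it into a TS.ADD call following the key and label conventions described earlier. Here is a minimal sketch of that idea; it is not the repo's actual code, and it assumes Spring Kafka, the Jedis 4.x client, and a JSON payload shape invented purely for illustration:

package com.example.processor;

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.UnifiedJedis;
import redis.clients.jedis.timeseries.TSCreateParams;

// Minimal sketch, not the actual implementation in the repo. Assumes a payload
// like {"metric":"temp","location":"3","device":"2","value":20.5} (invented for illustration).
@Component
public class DeviceDataListener {

    private final ObjectMapper mapper = new ObjectMapper();

    // TLS and authentication setup for Azure Cache for Redis omitted for brevity
    private final UnifiedJedis redis =
            new UnifiedJedis(new HostAndPort("<azure redis hostname>", 10000));

    @KafkaListener(topics = "mqtt.device-stats")
    public void onMessage(String payload) throws Exception {
        Map<String, Object> reading = mapper.readValue(payload, Map.class);

        String metric = String.valueOf(reading.get("metric"));
        String location = String.valueOf(reading.get("location"));
        String device = String.valueOf(reading.get("device"));
        double value = ((Number) reading.get("value")).doubleValue();

        // key follows the <metric name>:<location>:<device> convention, e.g. temp:3:2
        String key = metric + ":" + location + ":" + device;

        // equivalent of: TS.ADD <key> <timestamp> <value> LABELS metric ... location ... device ...
        // (the series is created with these labels on first write if it doesn't exist)
        redis.tsAdd(key, System.currentTimeMillis(), value,
                TSCreateParams.createParams().labels(
                        Map.of("metric", metric, "location", location, "device", device)));
    }
}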
Start generating simulated device data:

./gen-timeseries-data.sh

All it does is use the mosquitto_pub CLI command to send data to the device-stats MQTT topic (this is not the Kafka topic). You can double-check by using the CLI subscriber:

mosquitto_sub -h localhost -t device-stats
Keep an eye on the processor logs in the meantime:

az spring-cloud app logs -f -n device-data-processor -s <name of Azure Spring Cloud instance> -g <name of resource group>
Head over to the Grafana UI at localhost:3000 and import the dashboards from the grafana_dashboards folder in the GitHub repo you had cloned. Refer to the Grafana documentation if you need assistance on how to import dashboards.
The dashboards are powered by RedisTimeSeries range queries (TS.MRANGE). To try these out yourself, fire up redis-cli and connect to the Azure Cache for Redis instance:

redis-cli -h <azure redis hostname e.g. redisdb.southeastasia.redisenterprise.cache.azure.net> -p 10000 -a <azure redis access key> --tls

Start off with a few simple queries:
# pressure in device 5 for location 1
TS.GET pressure:1:5
# temperature in device 5 for location 4
TS.GET temp:4:5
# latest data point for every series in location 3
TS.MGET WITHLABELS FILTER location=3
# all data points for every series in location 3
TS.MRANGE - + WITHLABELS FILTER location=3
# all data points for locations 3 and 5
TS.MRANGE - + WITHLABELS FILTER location=(3,5)
Note that - + refers to everything from the beginning up until the latest timestamp, but you could be more specific.
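For instance, you could pass explicit millisecond timestamps for the range instead (the values below are made-up examples):

TS.MRANGE 1640995200000 1641081600000 WITHLABELS FILTER location=3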
MRANGE is what we need! We can get back multiple time series and also narrow things down with a combination of filters:

# all metrics for device 2 in location 3
TS.MRANGE - + WITHLABELS FILTER location=3 device=2
# temp metric for device 2 in location 3
TS.MRANGE - + WITHLABELS FILTER location=3 device=2 metric=temp
# temp metric for all devices in location 3
TS.MRANGE - + WITHLABELS FILTER location=3 metric=temp
# returning every single temp data point is not that useful. how about an
# average (or max) over 10 second (10000 ms) time buckets instead?
TS.MRANGE - + WITHLABELS AGGREGATION avg 10000 FILTER location=3 metric=temp
TS.MRANGE - + WITHLABELS AGGREGATION max 10000 FILTER location=3 metric=temp
It's also possible to create a rule that performs this aggregation automatically and stores the results in a different time series; you will see an example of that shortly.
Once you're done, don't forget to stop the local services:

brew services stop mosquitto
brew services stop grafana
Finally, a few RedisTimeSeries specifics to keep in mind. As promised, here is an example of a rule that continuously downsamples a series (note that the destination series, temp:avg:30 here, must be created before the rule):

# average temp:1:2 over 30 second (30000 ms) buckets into temp:avg:30
TS.CREATERULE temp:1:2 temp:avg:30 AGGREGATION avg 30000

Also think about how duplicate samples should be handled: make sure the default behavior (BLOCK, which rejects a sample whose timestamp already exists) is indeed what you need. If not, consider other options.
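The policy can be set per series at creation time. For example, to create a new series that overwrites the old value on a duplicate timestamp (the key below is just illustrative):

TS.CREATE temp:0:0 DUPLICATE_POLICY LAST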
This is not an exhaustive list. For other configuration options, please refer to the RedisTimeSeries documentation.
Integration: it's not just Grafana! RedisTimeSeries also integrates with Prometheus and Telegraf. However, there was no Kafka connector at the time this blog post was written - this would be a great add-on!