Jarvis: A home automation ChatOps bot (+ Azure & Kubernetes)
🚧 NOTE: Jarvis is currently a work in progress
Jarvis is a chat bot used for home automation tasks. One use of Jarvis is sending one-time codes that guests can use to enter a home. Jarvis v1 has served loyally, but it is time to give him a bit of an upgrade. This post is a collection of tips and learnings from implementing Jarvis on Azure Kubernetes Service (AKS).
I chose to rewrite Jarvis in Go, and am using several Azure services such as KeyVault and CosmosDB. The app is continuously deployed through a GitHub Action and federated credentials. For development, I am using GitHub Codespaces with a dev environment auto-configured for me through the .devcontainer config checked into the repo.
Application
Go web service with Gin
The server portion that will run in my AKS cluster is written in Go using the Gin framework. Gin was very easy to set up and will feel quite familiar to anyone who has dabbled with Express or Koa.
The router delegates HTTP requests to the appropriate handlers. For routes that need authentication, middleware can be introduced on a per-route basis.
func SetupRouter(bot *tgbotapi.BotAPI) *gin.Engine {
    router := gin.Default()
    router.Use(BotContext(bot))

    // Hello
    router.GET("/", Hello)

    // Health Endpoint
    router.GET("/health", Health)

    // Knocks
    router.POST("/welcome/:invite_code", Welcome)
    router.POST("/trustedknock", TrustedHmacAuthentication(), TrustedKnock)

    return router
}
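The TrustedHmacAuthentication middleware itself isn't shown in this post. A minimal sketch of what a body-signing HMAC check could look like; the HMAC_SECRET variable and X-Signature header are my own assumptions, not necessarily what Jarvis uses:

import (
    "bytes"
    "crypto/hmac"
    "crypto/sha256"
    "encoding/hex"
    "io"
    "net/http"
    "os"

    "github.com/gin-gonic/gin"
)

func TrustedHmacAuthentication() gin.HandlerFunc {
    return func(c *gin.Context) {
        // Read and restore the request body so downstream handlers can still use it.
        body, err := io.ReadAll(c.Request.Body)
        if err != nil {
            c.AbortWithStatus(http.StatusBadRequest)
            return
        }
        c.Request.Body = io.NopCloser(bytes.NewReader(body))

        // Compute the expected HMAC-SHA256 signature of the body.
        mac := hmac.New(sha256.New, []byte(os.Getenv("HMAC_SECRET")))
        mac.Write(body)
        expected := hex.EncodeToString(mac.Sum(nil))

        // Constant-time compare against the signature supplied by the caller.
        if !hmac.Equal([]byte(expected), []byte(c.GetHeader("X-Signature"))) {
            c.AbortWithStatus(http.StatusUnauthorized)
            return
        }
        c.Next()
    }
}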
Telegram Bot Context
I utilize Telegram's easy-to-use bot framework as my way to chat and interact with Jarvis. Jarvis processes each command, executes the right code, and can message back to me in the thread.
I have another blog post on setting up a Telegram Bot.
...
...
    // Ignore any non-command messages.
    if !update.Message.IsCommand() {
        continue
    }

    // Create a new MessageConfig.
    msg := tgbotapi.NewMessage(update.Message.Chat.ID, "")

    // Extract the command from the message.
    switch update.Message.Command() {
    case "help":
        msg.Text = HelpCommand()
    case "status":
        msg.Text = StatusCommand()
    default:
        msg.Text = "Try Again."
    }

    if _, err := bot.Send(msg); err != nil {
        log.Panic(err)
    }
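For context, this dispatch logic sits inside a long-polling update loop. A minimal sketch of that loop, assuming the go-telegram-bot-api v5 update channel:

package main

import (
    "log"
    "os"

    tgbotapi "github.com/go-telegram-bot-api/telegram-bot-api/v5"
)

func main() {
    bot, err := tgbotapi.NewBotAPI(os.Getenv("TELEGRAM_BOT_TOKEN"))
    if err != nil {
        log.Panic(err)
    }

    u := tgbotapi.NewUpdate(0)
    u.Timeout = 60

    // Each incoming update flows through the command dispatch shown above.
    for update := range bot.GetUpdatesChan(u) {
        if update.Message == nil {
            continue
        }
        // ... command handling from the snippet above ...
    }
}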
Setting contextual information in Gin with additional functionality
Above I set router.Use(BotContext(bot)), which wraps the newly created Telegram bot object as a BotExtended object.
func BotContext(bot *tgbotapi.BotAPI) gin.HandlerFunc {
    return func(c *gin.Context) {
        botExtended := &BotExtended{bot}
        c.Set(BOT_CONTEXT, botExtended)
        c.Next()
    }
}
BotExtended was my solution for exposing my own functionality on top of what is provided by the tgbotapi object. You can see one such function, SendMessageToPrimaryTelegramGroup, which is quite helpful to have exposed on the bot object in the router.
type BotExtended struct {
    *tgbotapi.BotAPI
}

func (b *BotExtended) SendMessageToPrimaryTelegramGroup(message string) {
    // Get the primary group, which is the first in the space-separated list.
    // Note: strings.Split on an empty string returns [""], so check for that too.
    validTelegramGroups := strings.Split(os.Getenv("VALID_TELEGRAM_GROUPS"), " ")
    if len(validTelegramGroups) == 0 || validTelegramGroups[0] == "" {
        log.Panic("No valid Telegram groups configured.")
    }

    primaryChatId, err := strconv.ParseInt(validTelegramGroups[0], 10, 64)
    if err != nil {
        log.Panic(err)
    }

    msg := tgbotapi.NewMessage(primaryChatId, message)
    if _, err := b.Send(msg); err != nil {
        log.Panic(err)
    }
}
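A handler can then pull the wrapped bot back out of the Gin context. A short sketch, where the Welcome handler body is my own illustration rather than Jarvis's actual logic:

func Welcome(c *gin.Context) {
    // Retrieve the BotExtended object set by the BotContext middleware.
    bot := c.MustGet(BOT_CONTEXT).(*BotExtended)

    // Notify the primary Telegram group that someone used an invite code.
    bot.SendMessageToPrimaryTelegramGroup("Guest used invite code: " + c.Param("invite_code"))

    c.JSON(http.StatusOK, gin.H{"status": "ok"})
}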
Environment Variables
Various environment variables are required, for example the TELEGRAM_BOT_TOKEN. In the cluster, these secrets are stored in a KeyVault and exposed as environment variables (more on that later). For development, you can add a dev.env environment variable file; see example.env for an idea of which secrets are necessary.
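The repo's example.env isn't reproduced here, but based on the variables referenced throughout this post, it presumably contains entries along these lines (all values are placeholders):

TELEGRAM_BOT_TOKEN=123456:replace-with-your-bot-token
VALID_TELEGRAM_SENDERS=11111111 22222222
VALID_TELEGRAM_GROUPS=-1001234567890
PORT=4000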
Developing
Opening this repo in a Codespace or VS Code, one can hit F5 to build the Go program, execute it, and attach the debugger. This is the launch.json config.
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Start Server",
            "type": "go",
            "request": "launch",
            "mode": "debug",
            "program": "./server",
            "console": "integratedTerminal",
            "envFile": "/workspaces/jarvis/dev.env"
        }
    ]
}
Infrastructure
Containerizing
The Jarvis Go application is built into a Docker container via its Dockerfile. To deploy on AKS, the image needs to be pushed to an Azure Container Registry that you control.
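The Dockerfile itself isn't reproduced in this post, but a minimal multi-stage build for a Go service like this could look as follows (the Go version, paths, and base images are assumptions):

# Build stage: compile a static Go binary.
FROM golang:1.18 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /jarvis .

# Runtime stage: copy just the binary into a minimal image.
FROM gcr.io/distroless/static
COPY --from=build /jarvis /jarvis
ENTRYPOINT ["/jarvis"]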
$ az acr build -t jarvis:1.0.0 -r jarvisacr ./server/ -f ./Dockerfile
The Docker image can also be run locally. Note that several environment variables are required:
$ docker run -p 4000:4000 --env-file example.env jarvis
[GIN-debug] GET /health --> main.Health (3 handlers)
[GIN-debug] POST /knock --> main.Knock (4 handlers)
2022/05/08 21:08:37 Bot authorized on account 'dev_bot'
Serving at http://localhost:4000
Creating your cluster
Following the Deploy an app to AKS Quickstart, it is very easy to create an AKS cluster with the Azure CLI.
az group create --name $RG --location eastus
az aks create --resource-group $RG --name $CLUSTER --node-count 1 --enable-addons monitoring --generate-ssh-keys
az aks install-cli
az aks get-credentials --resource-group $RG --name $CLUSTER
kubectl get nodes
NAME STATUS ROLES AGE VERSION
aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8
Rollout
Jarvis is containerized and deployed into a Kubernetes cluster. The deployment is defined in rollout.yaml in the joshspicer/jarvis repo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jarvis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jarvis
  template:
    metadata:
      labels:
        app: jarvis
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: jarvis
          image: jarvisacrdev.azurecr.io/jarvis:1.0.0
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secrets-store01-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
          env:
            - name: PORT
              value: "80"
            - name: GIN_MODE
              value: "release"
            - name: TELEGRAM_BOT_TOKEN
              valueFrom:
                secretKeyRef:
                  name: env-secrets
                  key: TelegramBotToken
            - name: VALID_TELEGRAM_SENDERS
              valueFrom:
                secretKeyRef:
                  name: env-secrets
                  key: ValidTelegramSenders
            - name: VALID_TELEGRAM_GROUPS
              valueFrom:
                secretKeyRef:
                  name: env-secrets
                  key: ValidTelegramGroups
      volumes:
        - name: secrets-store01-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-jarviskv-secrets"
---
apiVersion: v1
kind: Service
metadata:
  name: jarvis
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: jarvis
---
...
...
KeyVault
Following Azure CSI Secrets Driver (microsoft.com) and Azure CSI Identity Access (microsoft.com), I set up a KeyVault using the cluster's managed identity. In the KeyVault, secrets can be securely stored and then exposed through the rollout.yaml as environment variables to the application.
...
...
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-jarviskv-secrets
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true" # Set to true for using managed identity
    userAssignedIdentityID: dc34b44c-5ea3-40d3-8820-69445bc5ccde # Set the clientID of the user-assigned managed identity to use
    keyvaultName: jarviskv # Set to the name of your key vault
    objects: |
      array:
        - |
          objectName: TelegramBotToken
          objectType: secret
        - |
          objectName: ValidTelegramSenders
          objectType: secret
        - |
          objectName: ValidTelegramGroups
          objectType: secret
    tenantId: 0ad1a6ca-bf7b-4eea-b39d-a0a369403977 # The tenant ID of the key vault
  secretObjects:
    - data:
        - key: TelegramBotToken # data field to populate
          objectName: TelegramBotToken # name of the mounted content to sync; this could be the object name or the object alias
        - key: ValidTelegramSenders
          objectName: ValidTelegramSenders
        - key: ValidTelegramGroups
          objectName: ValidTelegramGroups
      secretName: env-secrets
      type: Opaque
...
...
TLS-terminated Ingress
Ingress is handled by NGINX, and certificates are generated with Let's Encrypt. I followed the Azure AKS ingress guide.
Some environment variables need to be set:
ssl-ingress.env
REGISTRY_NAME=myreg
ACR_URL=$REGISTRY_NAME.azurecr.io
SOURCE_REGISTRY=k8s.gcr.io
CONTROLLER_IMAGE=ingress-nginx/controller
CONTROLLER_TAG=v1.0.4
PATCH_IMAGE=ingress-nginx/kube-webhook-certgen
PATCH_TAG=v1.1.1
DEFAULTBACKEND_IMAGE=defaultbackend-amd64
DEFAULTBACKEND_TAG=1.5
CERT_MANAGER_REGISTRY=quay.io
CERT_MANAGER_TAG=v1.5.4
CERT_MANAGER_IMAGE_CONTROLLER=jetstack/cert-manager-controller
CERT_MANAGER_IMAGE_WEBHOOK=jetstack/cert-manager-webhook
CERT_MANAGER_IMAGE_CAINJECTOR=jetstack/cert-manager-cainjector
DNS_LABEL="jarvis-aks-in"
#!/bin/bash
source ssl-ingress.env
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
source ssl-ingress.env
echo $ACR_URL
# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
--version 4.0.13 \
--namespace ingress-basic --create-namespace \
--set controller.replicaCount=2 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.image.registry=$ACR_URL \
--set controller.image.image=$CONTROLLER_IMAGE \
--set controller.image.tag=$CONTROLLER_TAG \
--set controller.image.digest="" \
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
--set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
--set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
--set controller.admissionWebhooks.patch.image.digest="" \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.image.registry=$ACR_URL \
--set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
--set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
--set defaultBackend.image.digest=""
echo $DNS_LABEL
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-basic \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL
### SSL cert
# Label the ingress-basic namespace to disable resource validation
kubectl label namespace ingress-basic cert-manager.io/disable-validation=true
# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
helm repo update
# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
--namespace ingress-basic \
--version $CERT_MANAGER_TAG \
--set installCRDs=true \
--set nodeSelector."kubernetes\.io/os"=linux \
--set image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CONTROLLER \
--set image.tag=$CERT_MANAGER_TAG \
--set webhook.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_WEBHOOK \
--set webhook.image.tag=$CERT_MANAGER_TAG \
--set cainjector.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CAINJECTOR \
--set cainjector.image.tag=$CERT_MANAGER_TAG
The rollout.yaml is then updated as follows:
...
...
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: jarvis@jarvis.dev
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          ingress:
            class: nginx
            podTemplate:
              spec:
                nodeSelector:
                  "kubernetes.io/os": linux
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jarvis-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/limit-rpm: "5"
    nginx.ingress.kubernetes.io/limit-rps: "2"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "1"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - jarvis.dev
      secretName: tls-secret-jarvisdev
  rules:
    - host: jarvis.dev
      http:
        paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: jarvis
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: home-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/limit-rpm: "5"
    nginx.ingress.kubernetes.io/limit-rps: "2"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "1"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - home.party
      secretName: tls-secret-party
  rules:
    - host: home.party
      http:
        paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: jarvis
                port:
                  number: 80
Of the annotations above, these three implement rate limiting, capping each client at 2 requests per second and 5 per minute with a burst multiplier of 1:
nginx.ingress.kubernetes.io/limit-rpm: "5"
nginx.ingress.kubernetes.io/limit-rps: "2"
nginx.ingress.kubernetes.io/limit-burst-multiplier: "1"
Set an A record for the subdomains listed above (home.party and jarvis.dev) pointing to the LoadBalancer's EXTERNAL-IP, found with kubectl:
@joshspicer ➜ /workspaces/jarvis (main) $ kubectl get service -n ingress-basic
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cert-manager ClusterIP 10.0.173.113 <none> 9402/TCP 2d15h
cert-manager-webhook ClusterIP 10.0.201.243 <none> 443/TCP 2d15h
nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.88.215 20.120.147.108 80:30261/TCP,443:32211/TCP 2d15h
nginx-ingress-ingress-nginx-controller-admission ClusterIP 10.0.8.98 <none> 443/TCP 2d15h
CosmosDB Data Store
The Azure SDK for Go is used to communicate with a CosmosDB instance.
The managed identity associated with the cluster is automatically picked up by azidentity.NewDefaultAzureCredential, which chains together several authentication methods, trying each one until authentication succeeds or all options are exhausted.
import "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
cred, err := azidentity.NewDefaultAzureCredential(nil)
handle(err)
client, err := azcosmos.NewClient("myAccountEndpointURL", cred, nil)
handle(err)
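From there, reading and writing items is straightforward. A minimal sketch of reading an item back, where the database, container, partition key, and item id are hypothetical (the context and encoding/json imports are also needed):

container, err := client.NewContainer("jarvisdb", "invites")
handle(err)

// Read one item by id within its partition.
resp, err := container.ReadItem(context.Background(), azcosmos.NewPartitionKeyString("invite_code"), "some-item-id", nil)
handle(err)

var invite map[string]interface{}
handle(json.Unmarshal(resp.Value, &invite))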
Continuous Deployments with a GitHub Action and Federated Credentials
On each commit against the main branch, the source code is built into a Docker image and pushed to the relevant ACR,
buildImage:
  permissions:
    contents: read
    id-token: write
  runs-on: ubuntu-latest
  steps:
    # Checks out the repository this file is in
    - uses: actions/checkout@v3

    # Logs in with your Azure credentials
    - name: Azure login
      uses: azure/login@v1.4.3
      with:
        client-id: ${{ secrets.AZURE_CLIENT_ID }}
        tenant-id: ${{ secrets.AZURE_TENANT_ID }}
        subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    # Builds and pushes an image up to your Azure Container Registry
    - name: Build and push image to ACR
      run: |
        az acr build --image ${{ env.AZURE_CONTAINER_REGISTRY }}.azurecr.io/${{ env.CONTAINER_NAME }}:${{ github.sha }} --registry ${{ env.AZURE_CONTAINER_REGISTRY }} -g ${{ env.RESOURCE_GROUP }} -f ./Dockerfile ./server
and the rollout.yaml is applied to the cluster.
deploy:
  permissions:
    actions: read
    contents: read
    id-token: write
  runs-on: ubuntu-latest
  needs: [buildImage, createSecret]
  steps:
    # Checks out the repository this file is in
    - uses: actions/checkout@v3

    # Logs in with your Azure credentials
    - name: Azure login
      uses: azure/login@v1.4.3
      with:
        client-id: ${{ secrets.AZURE_CLIENT_ID }}
        tenant-id: ${{ secrets.AZURE_TENANT_ID }}
        subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    # Retrieves your Azure Kubernetes Service cluster's kubeconfig file
    - name: Get K8s context
      uses: azure/aks-set-context@v2.0
      with:
        resource-group: ${{ env.RESOURCE_GROUP }}
        cluster-name: ${{ env.CLUSTER_NAME }}

    # Deploys application based on given manifest file
    - name: Deploy application
      uses: Azure/k8s-deploy@v3.1
      with:
        action: deploy
        manifests: ${{ env.DEPLOYMENT_MANIFEST_PATH }}
        images: |
          ${{ env.AZURE_CONTAINER_REGISTRY }}.azurecr.io/${{ env.CONTAINER_NAME }}:${{ github.sha }}
        imagepullsecrets: |
          ${{ env.IMAGE_PULL_SECRET_NAME }}
This uses Federated Identity Credentials that are scoped to the specific GitHub repo, via a GitHub/Azure OIDC connection (microsoft.com).
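Setting this up amounts to creating a federated credential on an Azure AD app registration that trusts GitHub's OIDC issuer for this repo. A sketch with the Azure CLI, where the app ID and credential name are placeholders:

az ad app federated-credential create \
  --id <APP_OBJECT_ID> \
  --parameters '{
    "name": "jarvis-main-deploy",
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:joshspicer/jarvis:ref:refs/heads/main",
    "audiences": ["api://AzureADTokenExchange"]
  }'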
Additional
Have a comment? Let me know
This post helpful? Buy me a coffee!