For local development I set the `AzureWebJobsStorage` value in the `local.settings.json` file to `UseDevelopmentStorage=true`.
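A minimal `local.settings.json` for this setup might look as follows (only the `AzureWebJobsStorage` value is taken from the post; the rest is the standard skeleton for a Node.js Function App):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node"
  }
}
```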
The data is stored in two Azure Storage Tables:

- `SourceRepositoryConfiguration` i.e. which repositories to look at, what are the hashtags to be used etc.
- `RepositoryUpdateHistory` i.e. what was the latest release that we tweeted about.

*(Screenshot of the `SourceRepositoryConfiguration` table)*

*(Screenshot of the `RepositoryUpdateHistory` table)*

I configured the timer in the `function.json` file to start every six hours, by setting the corresponding value: `"schedule": "0 0 */6 * * *"`.
In addition, the timer triggered Function must be able to start the orchestration, so I added an `orchestrationClient` binding to its `function.json` file:

```json
{
  "name": "starter",
  "type": "orchestrationClient",
  "direction": "in"
}
```
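Putting both together, the complete `function.json` of the timer triggered starter Function could look like this (the schedule, the binding names `myTimer` and `starter` are taken from the post; the overall file layout is an assumption):

```json
{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 */6 * * *"
    },
    {
      "name": "starter",
      "type": "orchestrationClient",
      "direction": "in"
    }
  ]
}
```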
The timer triggered Function itself just starts the main orchestrator:

```typescript
import * as df from "durable-functions"
import { AzureFunction, Context } from "@azure/functions"

const timerTrigger: AzureFunction = async function (context: Context, myTimer: any): Promise<void> {
    // Get the Durable Functions client from the orchestrationClient binding
    const client = df.getClient(context)

    // Start the main orchestrator; its name is injected via the app settings
    const instanceId = await client.startNew(process.env["MainOrchestratorName"], undefined, undefined)

    context.log(`Started timer triggered orchestration with ID = '${instanceId}'.`)
}

export default timerTrigger
```
💡 However, during local development I wanted to trigger the orchestration manually, so I also created an HTTP triggered Function for that purpose.
As a spoiler: I also deployed that Function to Azure, but deactivated it. It serves as a backup to trigger the process manually in production if I ever want or need to.
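The post does not show the HTTP triggered starter itself; a minimal sketch, assuming an `orchestrationClient` binding named `starter` analogous to the timer triggered Function, could look like this:

```typescript
import * as df from "durable-functions"
import { AzureFunction, Context, HttpRequest } from "@azure/functions"

// Hypothetical HTTP starter for local development; mirrors the timer trigger above
const httpStart: AzureFunction = async function (context: Context, req: HttpRequest): Promise<any> {
    const client = df.getClient(context)
    const instanceId = await client.startNew(process.env["MainOrchestratorName"], undefined, undefined)

    context.log(`Started HTTP triggered orchestration with ID = '${instanceId}'.`)

    // Return the standard status-query URLs for the new orchestration instance
    return client.createCheckStatusResponse(context.bindingData.req, instanceId)
}

export default httpStart
```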
As a first step the main orchestrator fetches the configuration data (Config Reader Activity Function) and then fans out one sub-orchestration per configuration entry:

```typescript
// Fetch the repository configuration via the Config Reader Activity Function
const configuration = yield context.df.callActivity("KymaUpdateBotConfigReader")

if (configuration) {
    const updateTasks = []

    // Fan out one sub-orchestration per configured repository
    for (const configurationEntry of configuration) {
        const child_id = context.df.instanceId + `:${configurationEntry.RowKey}`
        const updateTask = context.df.callSubOrchestrator("KymaUpdateBotNewsUpdateOrchestrator", configurationEntry, child_id)
        updateTasks.push(updateTask)
    }

    // Log only once, not on every replay of the orchestrator
    if (context.df.isReplaying === false) {
        context.log.info(`Starting ${updateTasks.length} sub-orchestrations for update`)
    }

    // Fan in: wait until all sub-orchestrations are finished
    yield context.df.Task.all(updateTasks)
}
```
⚠ There is one strange behavior that I encountered. The data fetched from the configuration has column names that start with a capital letter, as in the Azure Storage Table (e.g. RepositoryOwner). After transferring the object to the sub-orchestrator, some JSON manipulation seems to take place and the name in the input parameter of the sub-orchestrator starts with a small letter (e.g. repositoryOwner). Keep this in mind, otherwise you will be surprised when you access the data later in the processing.
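To illustrate the effect (a minimal sketch; the property names are taken from the post, the surrounding code and the example value are assumptions):

```typescript
// In the main orchestrator the entry still carries the column names of the table:
// configurationEntry.RepositoryOwner === "some-owner" (capital letter)

// In the sub-orchestrator the same object arrives with camelCased names:
const configurationEntry = context.df.getInput()
// configurationEntry.repositoryOwner === "some-owner" (small letter!)
// configurationEntry.RepositoryOwner === undefined
```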
I only want to fetch the configuration entries that are active (`IsActive`), which can be achieved via a filter in the table input binding. The corresponding `function.json` file:

```json
{
  "name": "repositoryConfiguration",
  "type": "table",
  "connection": "myStorageConnectionString",
  "tableName": "SourceRepositoryConfiguration",
  "partitionKey": "Repositories",
  "filter": "(IsActive eq true)",
  "direction": "in"
}
```
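With this binding in place, the Config Reader Activity Function can simply hand back the pre-filtered entries. A minimal sketch, assuming no further processing is needed:

```typescript
import { AzureFunction, Context } from "@azure/functions"

// The table input binding "repositoryConfiguration" already delivers only
// the active entries, so the Activity Function just returns them
const configReader: AzureFunction = async function (context: Context): Promise<any[]> {
    return context.bindings.repositoryConfiguration
}

export default configReader
```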
⚠ The official documentation for JavaScript Functions states that you must not use the `rowKey` and `partitionKey` parameters. However, they do work and I could not see any impact on the behavior. So I did not pass the partition key as an OData filter parameter and opened an issue on the documentation: https://github.com/MicrosoftDocs/azure-docs/issues/77103.
The sub-orchestrator fetches the latest release information from GitHub (GitHub Reader Activity Function) and the information about the last release we tweeted about from the history table (History Reader Activity Function). If a new release is available, it sends out the tweet (Twitter Sender Activity Function) and stores the updated information into the history table (History Update Activity Function). The decision logic is implemented via plain `if` clauses. The Activity Functions are called with a retry policy (`callActivityWithRetry`) to avoid immediate failures in case of hiccups in the downstream systems; see the sketch below. The configuration parameters are injected via environment parameters from the Function App settings.
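A minimal sketch of such a call with retry inside the sub-orchestrator (the concrete retry values and the Activity Function name `KymaUpdateBotGitHubReader` are assumptions):

```typescript
import * as df from "durable-functions"

// First retry after 5 seconds, at most 3 attempts (assumed values)
const retryOptions = new df.RetryOptions(5000, 3)

const repositoryInformation = yield context.df.callActivityWithRetry(
    "KymaUpdateBotGitHubReader",
    retryOptions,
    configurationEntry
)
```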
For the interaction with GitHub I used the `@octokit/core` SDK that helps you a lot with interacting with the endpoints:

```typescript
import { Octokit } from "@octokit/core"

// Fetch the latest release of the configured repository
const octokit = new Octokit()
const requestUrl = `/repos/${context.bindings.configuration.repositoryOwner.toString()}/${context.bindings.configuration.repositoryName.toString()}/releases/latest`
const repositoryInformation = await octokit.request(requestUrl)
```
The History Reader Activity Function gets the data from the history table via a table input binding defined in its `function.json` file:

```json
{
  "name": "updateHistory",
  "type": "table",
  "connection": "myStorageConnectionString",
  "tableName": "RepositoryUpdateHistory",
  "partitionKey": "History",
  "direction": "in"
}
```
The matching history entry for the currently processed repository is then determined in the code:

```typescript
// Find the history entry that belongs to the currently processed repository
const result = <JSON><any>context.bindings.updateHistory.find(entry =>
    (entry.RepositoryOwner === context.bindings.configuration.repositoryOwner &&
     entry.RepositoryName === context.bindings.configuration.repositoryName))
```
The Twitter Sender Activity Function uses the twitter-lite SDK to send out the tweet:

```typescript
const TwitterClient = require('twitter-lite')

// Build the tweet text from the release information
const tweetText = buildTweet(context)

try {
    // The Twitter credentials are injected via the Function App settings
    const client = new TwitterClient({
        consumer_key: process.env["TwitterApiKey"],
        consumer_secret: process.env["TwitterApiSecretKey"],
        access_token_key: process.env["TwitterAccessToken"],
        access_token_secret: process.env["TwitterAccessTokenSecret"]
    })

    const tweet = await client.post("statuses/update", {
        status: tweetText
    })

    context.log.info(`Tweet successfully sent: ${tweetText}`)
} catch (error) {
    context.log.error(`The call of the Twitter API caused an error: ${error}`)
}
```
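The `buildTweet` helper is not shown in the post. A possible sketch, assuming the tweet is composed from the update information and the hashtags stored in the configuration table (all binding and property names here are assumptions):

```typescript
import { Context } from "@azure/functions"

// Hypothetical implementation: compose the tweet text from the release
// data and the hashtags configured for the repository
function buildTweet(context: Context): string {
    const update = context.bindings.updateInformation
    const hashtags = context.bindings.configuration.hashtags

    return `New release ${update.Name} (${update.TagName}) is out! ${update.HtmlUrl} ${hashtags}`
}
```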
The History Update Activity Function writes the new state back to the history table using the azure-storage SDK:

```typescript
import * as AzureTables from "azure-storage"

const tableSvc = AzureTables.createTableService(process.env["myStorageConnectionString"])
const entGen = AzureTables.TableUtilities.entityGenerator

// Build the table entity from the update information
let tableEntry = {
    PartitionKey: entGen.String(process.env["HistoryPartitionKeyValue"]),
    RowKey: entGen.String(context.bindings.updateInformation.RowKey),
    RepositoryOwner: entGen.String(context.bindings.updateInformation.RepositoryOwner),
    RepositoryName: entGen.String(context.bindings.updateInformation.RepositoryName),
    Name: entGen.String(context.bindings.updateInformation.Name),
    TagName: entGen.String(context.bindings.updateInformation.TagName),
    PublishedAt: entGen.DateTime(context.bindings.updateInformation.PublishedAt),
    HtmlURL: entGen.String(context.bindings.updateInformation.HtmlUrl),
    UpdatedAt: entGen.DateTime(new Date().toISOString())
}

// Insert the entry or merge it into an existing one
const result = await insertOrMergeEntity(tableSvc, process.env["HistoryEntityName"], tableEntry)
```
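The azure-storage SDK only offers a callback-based `insertOrMergeEntity`, so the `await` above implies a promisified wrapper. The post does not show it; a minimal sketch could be:

```typescript
import * as AzureTables from "azure-storage"

// Hypothetical wrapper that promisifies the callback-based SDK call
function insertOrMergeEntity(tableSvc: AzureTables.TableService, tableName: string, entity: any): Promise<any> {
    return new Promise((resolve, reject) => {
        tableSvc.insertOrMergeEntity(tableName, entity, (error, result) => {
            error ? reject(error) : resolve(result)
        })
    })
}
```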
To deactivate the HTTP triggered Function locally, you can add a setting to the `local.settings.json` file, namely `"AzureWebJobs.[NameOfTheFunction].Disabled": true`. The Function runtime still does some sanity checks on the Function despite that setting, which made me aware of an error in the Function. That's cool imho.
"AzureWebJobsStorage": "UseDevelopmentStorage=true"
in my local.setting.json
file. I decided to use a different storage for my configuration and history table. which means you must specify the connection string to that storage in your Functions that make use of the non-default storage ... which you might forget when you do the local development. For the sake of science I made that error (or I just forgot the setting .. who knows) and be assured you will become aware of that when you deploy the Function. So specify that connection in your bindings in case you deviate from the default storage attached to your Function like in my case:
"connection": "myStorageConnectionString"
For the deployment I use a GitHub Actions workflow that is triggered on every push to the main branch, ignoring changes that only touch Markdown files:

```yaml
on:
  push:
    branches:
      - main
    paths-ignore:
      - "**.md"
```
The secrets are stored in an Azure Key Vault and referenced in the Function App settings via `@Microsoft.KeyVault(SecretUri=[URI copied from the Key Vault Secret])`.
One cool thing here: you might make a copy & paste error, but Azure has you covered there, as it will validate the reference and put a checkmark if everything is fine (valid URI and valid access rights).
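Such an application setting could look like this (the setting name `TwitterApiKey` is taken from the post; the vault name, secret name and version are placeholders):

```json
{
  "name": "TwitterApiKey",
  "value": "@Microsoft.KeyVault(SecretUri=https://<your-vault>.vault.azure.net/secrets/<secret-name>/<secret-version>)"
}
```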