Retrofitting a Third-Party Service with Webhooks via Kong

Recently, we faced the problem of needing a third-party service to generate events to which internal services could respond. For example, we have a forum in which one thread displays many user profiles. To efficiently render such a page, it may be beneficial to cache user profiles or entire pages. Now imagine that the service that stores these profiles doesn’t have a mechanism to alert another service that a profile has been updated. How do you invalidate the caches?

Following some brainstorming and shameful consideration of polling, we decided to leverage Kong, our API gateway, and AWS SNS to retrofit the service with webhooks. Here’s how we did it:

What is Kong?

Kong is a scalable, open source API layer, also known as an API gateway or API middleware. It runs in front of any RESTful API and can be extended through plug-ins, which provide extra functionalities and services beyond the core platform. For more information, you can check out https://getkong.org/about/.

Kong is written in Lua and built on top of NGINX and OpenResty. Its functionality is extended via plug-ins, also written in Lua. For more information on the background and motivations of Kong, check out this Changelog episode with Ahmad Nassri.

Kong sits in front of our APIs and the third-party services with which we integrate. We’ve chosen Kong because it provides us the following benefits:

  • Kong handles authentication in the API gateway layer and passes credentials to downstream services;
  • Downstream services don’t need to share or maintain authentication secrets;
  • Downstream services are private and inside the same Amazon Virtual Private Cloud (VPC) as Kong and therefore don’t require SSL/TLS authentication;
  • We get access to aggregated analytics;
  • We can make transparent architecture changes at the service level;
  • We can route all client API requests through Articulate domains; and
  • We can set up granular rate-limiting.

Using Kong with Amazon Simple Notification Service (SNS)

According to What is Amazon Simple Notification Service?, Amazon SNS is a fast, flexible, fully managed pub-sub messaging service.

Messages are published to topics. A topic can have any number of subscriptions. Various types of subscriptions exist, including HTTP, email, Amazon SQS, Lambda, and more. When a message is published to a topic, each subscription is notified and handles the message appropriately. This notification method is often referred to as “fanout.”

Creating the Webhook

In order to add a webhook to our third-party API, we created a Kong plug-in that matches successful requests to downstream APIs by request method (e.g., GET, POST, etc.) and a path regex (e.g., PUT /profile).

When requests match our pattern, the request log is serialized to JSON and posted to an SNS topic. We created an HTTP subscription to that topic that points to a callback route provided by our application. This callback is traditionally called a webhook. In terms of the profile caching example, our callback expires any caches dependent on the updated profile.
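As a rough sketch, the matching step might look something like the following (the function and variable names here are illustrative, not taken from the actual plug-in):

```lua
-- Illustrative sketch of the matching logic (names are hypothetical).
-- For each configured route, compare the HTTP method and test the
-- request path against the route's regex.
local function request_matches(conf, method, path)
  for _, route in ipairs(conf.routes) do
    if route.method == method then
      -- ngx.re.match is OpenResty's PCRE-backed regex matcher
      local captures = ngx.re.match(path, route.matcher)
      if captures then
        return true
      end
    end
  end
  return false
end
```

Only requests for which this check succeeds (and which completed successfully) get serialized and published.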

Alternatively, we could create an Amazon SQS subscription to the topic and build a worker in our application that pulls messages from the SQS queue and expires the caches. The two approaches have obvious trade-offs:

  • With the SQS approach, an additional service is required to monitor the queue and, as a result, retries and errors are handled in the application.
  • With the webhook approach, retries and errors are handled via configuration in SNS (see SNS Delivery Policies for more info).

Implementation

Because kong-publish-sns is our first Kong plug-in, we searched Kong’s source code for official plug-ins with similar behavior for inspiration. We found File Log (docs, code) which, wait for it… logs requests to a file. This plug-in performs two important functions, both similar to what we needed to accomplish:

  1. Serializes the request to JSON
  2. Logs the request JSON efficiently

Our serialization logic is more or less taken directly from File Log. For the logging portion, we retained the nginx timer to create a light thread for the actual publishing of the message to SNS.

Kong Plugin Development provides a good overview of the basics. The following files are the most integral to the operation of the plug-in:

  • schema.lua – Plug-in configuration format definition
  • handler.lua – Defines the hooks into specific phases of the nginx request/response lifecycle
  • log.lua – Called by handler.lua’s :log() hook when the last response byte is sent to the client

Working with schema.lua

This file contains the configuration options for our plug-in that administrators would specify when enabling the plug-in via the admin API.

kong-publish-sns provides the following options:

  • topic_arn string: SNS topic ARN
  • region string: AWS region
  • endpoint string: AWS SNS endpoint
  • routes array: an array of Lua tables, each containing the following:
    • method string: HTTP method of the route (GET, POST, etc.)
    • matcher string: a regex to match request paths
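A schema.lua exposing those options could be sketched roughly as follows (the field names mirror the list above; the exact validation attributes are illustrative and follow the general conventions of Kong plug-in schemas):

```lua
-- schema.lua: a sketch of the plug-in configuration schema.
-- Field names come from the options listed above; the validation
-- details (required flags, etc.) are illustrative assumptions.
return {
  fields = {
    topic_arn = { type = "string", required = true }, -- SNS topic ARN
    region = { type = "string", required = true },    -- AWS region
    endpoint = { type = "string" },                   -- AWS SNS endpoint
    routes = { type = "array" }
    -- each routes item: { method = "PUT", matcher = "/profile" }
  }
}
```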

Working with handler.lua

This file contains the plug-in initialization and is where we define which phase(s) of the request lifecycle our plug-in hooks into. In our case, we want to publish only successful requests, which means the logging needs to happen after the response is received from the upstream service.

The :log() request context function of kong.plugins.base_plugin provides us this ability. It is a wrapper for log_by_lua, a lua-nginx-module context that is executed when the last response byte has been sent to the client.

The convention across the official Kong plug-ins is to create a separate file for each of the request context functions used. Enter log.lua.
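Following the base_plugin conventions, a minimal handler.lua for this plug-in might be sketched as (a sketch under those conventions, not the plug-in’s actual source):

```lua
-- handler.lua: sketch of hooking only the :log() phase.
local BasePlugin = require "kong.plugins.base_plugin"
local log = require "kong.plugins.publish-sns.log"

local PublishSnsHandler = BasePlugin:extend()

function PublishSnsHandler:new()
  PublishSnsHandler.super.new(self, "publish-sns")
end

-- Runs in the log_by_lua context, after the last response byte
-- has been sent to the client, so it cannot delay the response.
function PublishSnsHandler:log(conf)
  PublishSnsHandler.super.log(self)
  log.execute(conf)
end

return PublishSnsHandler
```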

Working with log.lua

log.lua exports an execute function, which takes our serialized log and publishes it from a lua-nginx-module “light thread” created via an nginx timer callback. As such, the impact on response time should be negligible. The function iterates through the configured routes and publishes the log to the configured SNS topic.
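A rough sketch of that structure, assuming a hypothetical publish_to_sns helper that performs the signed HTTP request to the SNS endpoint:

```lua
-- log.lua: sketch of deferring the SNS publish to a light thread via
-- an nginx timer so the client response is not delayed.
local cjson = require "cjson"

local _M = {}

local function timer_callback(premature, conf, message)
  if premature then
    return -- nginx is shutting down; skip the publish
  end
  -- publish_to_sns is a hypothetical helper: it would sign and POST
  -- the message to the configured SNS endpoint.
  publish_to_sns(conf, message)
end

function _M.execute(conf, serialized_request)
  -- ngx.timer.at(0, ...) schedules the callback to run immediately
  -- in a separate light thread, outside the request's critical path.
  local ok, err = ngx.timer.at(0, timer_callback, conf,
                               cjson.encode(serialized_request))
  if not ok then
    ngx.log(ngx.ERR, "failed to create timer: ", err)
  end
end

return _M
```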

Development of the Plug-in

The plug-in was developed using Docker, docker-compose, Kongfig, and LuaRocks. The docker-compose setup creates two containers: one running Cassandra (Kong’s database) and one running Kong. The Kong container builds and installs our plug-in using luarocks make, and then Kong is started with a simple:
docker-compose up

The plug-in is then enabled and configured for a specific API via Kongfig as follows:
kongfig --path ./config.yml --host http://$(docker-machine ip default):8001

config.yml:


---
apis:
  - name: "mockbin"
    attributes:
      upstream_url: "https://mockbin.com/"
      request_path: "/mockbin"
      strip_request_path: true
    plugins:
      - name: publish-sns
        attributes:
          config.topic_arn: [TOPIC_ARN]
          config.sns_endpoint: [AWS_SNS_ENDPOINT]
          config.routes:
            - method: PUT
              matcher: /profile

Assuming the topic ARN and SNS endpoint are configured correctly, the following request will produce a new message in our SNS topic:
curl -X "PUT" "http://192.168.1.1/mockbin/profile" \
-H "Content-Type: application/json" \
-d "{\"foo\":\"bar\"}"

We’ve only scratched the surface of Kong’s capabilities. However, this example demonstrates the flexibility and power of Kong for aggregating and adding functionality to existing APIs.

Many thanks to Lindsey Bieda (@LindseyB) and Chaz Straney (@chazu) for the implementation of this plug-in and for allowing me to use it to demonstrate some of the capabilities of Kong.