How to use PostgreSQL in your serverless app
In this example we will look at how to use PostgreSQL in our serverless app using Serverless Stack (SST). We’ll be creating a simple hit counter using Amazon Aurora Serverless.
Requirements
- Node.js >= 10.15.1
- We’ll be using TypeScript
- An AWS account with the AWS CLI configured locally
Create an SST app
Let’s start by creating an SST app.
$ npm init sst -- typescript-starter rest-api-postgresql
$ cd rest-api-postgresql
By default, our app will be deployed to an environment (or stage) called dev and the us-east-1 AWS region. This can be changed in the sst.json in your project root.
{
"name": "rest-api-postgresql",
"region": "us-east-1",
"main": "stacks/index.ts"
}
Project layout
An SST app is made up of two parts.
- stacks/ — App Infrastructure
  The code that describes the infrastructure of your serverless app is placed in the stacks/ directory of your project. SST uses AWS CDK to create the infrastructure.
- backend/ — App Code
  The code that's run when your API is invoked is placed in the backend/ directory of your project.
Adding PostgreSQL
Amazon Aurora Serverless is an auto-scaling managed relational database that supports PostgreSQL.
Replace stacks/MyStack.ts with the following.
import { Api, RDS, StackContext } from "@serverless-stack/resources";

export function MyStack({ stack }: StackContext) {
  const DATABASE = "CounterDB";

  // Create the Aurora DB cluster
  const cluster = new RDS(stack, "Cluster", {
    engine: "postgresql10.14",
    defaultDatabaseName: DATABASE,
  });
}
This creates an RDS Serverless cluster. We also set the database engine to PostgreSQL. The database in the cluster that we'll be using is called CounterDB (as set in the defaultDatabaseName variable).
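The cluster uses SST's default serverless scaling settings. If you want to tune them, the RDS construct also takes a scaling prop. The prop names and capacity values below are assumptions based on the SST docs, so verify them against the version you're using.

const cluster = new RDS(stack, "Cluster", {
  engine: "postgresql10.14",
  defaultDatabaseName: DATABASE,
  // Assumed prop names from the SST RDS docs; check your SST version
  scaling: {
    autoPause: true,       // pause the cluster when idle to save costs
    minCapacity: "ACU_2",  // minimum Aurora capacity units
    maxCapacity: "ACU_16", // maximum Aurora capacity units
  },
});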
Setting up the API
Now let’s add the API.
Add this below the cluster definition in stacks/MyStack.ts.
// Create a HTTP API
const api = new Api(stack, "Api", {
  defaults: {
    function: {
      environment: {
        DATABASE,
        CLUSTER_ARN: cluster.clusterArn,
        SECRET_ARN: cluster.secretArn,
      },
      permissions: [cluster],
    },
  },
  routes: {
    "POST /": "functions/lambda.handler",
  },
});

// Show the resource info in the output
stack.addOutputs({
  ApiEndpoint: api.url,
  SecretArn: cluster.secretArn,
  ClusterIdentifier: cluster.clusterIdentifier,
});
Our API simply has one endpoint (the root). When we make a POST request to this endpoint, the Lambda function called handler in backend/functions/lambda.ts will get invoked.
We also pass in the name of our database, the ARN of the database cluster, and the ARN of the secret that'll help us log in to our database. An ARN is an identifier that AWS uses. You can read more about it here.
We then allow our Lambda function to access our database cluster. Finally, we output the endpoint of our API, the ARN of the secret, and the identifier of the database cluster. We'll be using these later in the example.
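The function code we'll write next depends on these environment variables being set. As a purely hypothetical helper (not part of the example repo), you could fail fast if one of them is missing:

// Hypothetical helper: read an env var injected by the stack's
// defaults.function.environment and throw if it's missing.
function requireEnv(name: "DATABASE" | "CLUSTER_ARN" | "SECRET_ARN"): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}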
Reading from our database
Now in our function, we’ll start by reading from our PostgreSQL database.
Replace backend/functions/lambda.ts with the following.
import client from "data-api-client";

const db = client({
  database: process.env.DATABASE,
  secretArn: process.env.SECRET_ARN,
  resourceArn: process.env.CLUSTER_ARN,
});

export async function handler() {
  const { records } = await db.query(
    "SELECT tally FROM tblcounter where counter='hits'"
  );

  let count = records[0].tally;

  return {
    statusCode: 200,
    body: count,
  };
}
We are using the Data API. It allows us to connect to our database over HTTP using the data-api-client. For now we'll get the number of hits from a table called tblcounter and return it.
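Under the hood, the data-api-client wraps the RDS Data API from the AWS SDK. For context, here's a rough sketch of the equivalent raw call; this isn't part of the example, so double-check the parameter names against the aws-sdk version you're using.

import { RDSDataService } from "aws-sdk";

// A hedged sketch of the raw Data API call that data-api-client wraps
const rdsData = new RDSDataService();

export async function rawQuery() {
  return rdsData
    .executeStatement({
      resourceArn: process.env.CLUSTER_ARN!, // the Aurora cluster ARN
      secretArn: process.env.SECRET_ARN!,    // the Secrets Manager secret ARN
      database: process.env.DATABASE,        // the database name (CounterDB)
      sql: "SELECT tally FROM tblcounter WHERE counter='hits'",
    })
    .promise();
}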
Let's install the data-api-client in the backend/ folder.
$ npm install data-api-client
And test what we have so far.
Starting your dev environment
SST features a Live Lambda Development environment that allows you to work on your serverless apps live.
$ npm start
The first time you run this command it’ll take a couple of minutes to deploy your app and a debug stack to power the Live Lambda Development environment.
===============
Deploying app
===============
Preparing your SST app
Transpiling source
Linting source
Deploying stacks
manitej-rest-api-postgresql-my-stack: deploying...
✅ manitej-rest-api-postgresql-my-stack
Stack manitej-rest-api-postgresql-my-stack
Status: deployed
Outputs:
SecretArn: arn:aws:secretsmanager:us-east-1:087220554750:secret:CounterDBClusterSecret247C4-MhR0f3WMmWBB-dnCizN
ApiEndpoint: https://u3nnmgdigh.execute-api.us-east-1.amazonaws.com
ClusterIdentifier: manitej-rest-api-postgresql-counterdbcluster09367634-1wjmlf5ijd4be
The ApiEndpoint is the API we just created, while the SecretArn is what we need to log in to our database securely. The ClusterIdentifier is the id of our database cluster.
Before we can test our endpoint, let's create the tblcounter table in our database.
Creating our table
To create our table we'll use the SST Console. The SST Console is a web-based dashboard to manage your SST apps. Learn more about it in our docs.
Go to the RDS tab and paste the below SQL code in the editor.
CREATE TABLE tblcounter (
  counter text UNIQUE,
  tally integer
);

INSERT INTO tblcounter VALUES ('hits', 0);
Hit the Execute button to run the SQL query. The above code will create our table and insert a row to keep track of our hits.
Test our API
Now that our table is created, let’s test our endpoint with the SST Console.
Go to the API tab and click the Send button to send a POST request.
Note: The API explorer lets you make HTTP requests to any of the routes in your Api construct. Set the headers, query params, and request body, and view the function logs with the response.
You should see a 0 in the response body.
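If you'd rather test outside the console, you can hit the endpoint directly. Here's a minimal sketch, assuming Node.js 18+ where fetch is built in; swap in the ApiEndpoint from your own deploy output.

// A hedged sketch: POST to the deployed endpoint and print the hit count.
// The URL below is a placeholder; use the ApiEndpoint from your own output.
const response = await fetch(
  "https://u3nnmgdigh.execute-api.us-east-1.amazonaws.com",
  { method: "POST" }
);
console.log(await response.text()); // expected: "0"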
Writing to our table
So let’s update our table with the hits.
Add this above the return statement in backend/functions/lambda.ts.
await db.query(`UPDATE tblcounter set tally=${++count} where counter='hits'`);
Here we are updating the hits row's tally column with the increased count.
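Interpolating the count into the SQL string is fine here since it's a number we control, but the data-api-client also supports named parameters, which is generally safer. A minimal sketch of the same update using that syntax:

// A hedged alternative: pass the value as a named parameter instead of
// interpolating it into the SQL string.
await db.query(
  "UPDATE tblcounter SET tally = :tally WHERE counter = 'hits'",
  { tally: ++count }
);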
Now if you head over to your console and make a request to our API, you'll notice the count increase!
Running migrations
You can run migrations from the SST Console. The RDS construct uses Kysely to run and manage schema migrations. The migrations prop should point to the folder where your migration files are. You can read more about migrations here.
Let's create a migration file that creates a table called todos.
Create a migrations folder inside the backend/ folder.
Let's write our first migration file. Create a new file called first.ts inside the newly created backend/migrations folder and paste in the code below.
module.exports.up = async (db) => {
  await db.schema
    .createTable("todos")
    .addColumn("id", "text", (col) => col.primaryKey())
    .addColumn("title", "text")
    .execute();
};

module.exports.down = async (db) => {
  await db.schema.dropTable("todos").execute();
};
Update the cluster definition in stacks/MyStack.ts like below.
const cluster = new RDS(stack, "Cluster", {
  engine: "postgresql10.14",
  defaultDatabaseName: DATABASE,
  migrations: "backend/migrations", // add this line
});
This is an infrastructure change, so head back to your terminal and hit Enter when SST asks you to deploy it.
Now, to run the migrations, we can use the SST Console. Go to the RDS tab and click the Migrations button in the top right corner.
It will list out all the migration files in the specified folder.
Now, to apply the migration that we created, click the Apply button next to the migration name.
To confirm that the migration was successful, let's display the todos table by running the below query.
select * from todos
You should see the empty table with column names.
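If you later want to read from the new table in a function, it works just like the counter query. Here's a minimal sketch that reuses the same data-api-client setup as backend/functions/lambda.ts; the Todo shape is an assumption based on the migration above.

import client from "data-api-client";

// Same Data API client setup as in backend/functions/lambda.ts
const db = client({
  database: process.env.DATABASE,
  secretArn: process.env.SECRET_ARN,
  resourceArn: process.env.CLUSTER_ARN,
});

// Assumed row shape, based on the columns created by the migration above
interface Todo {
  id: string;
  title: string;
}

export async function listTodos(): Promise<Todo[]> {
  const { records } = await db.query("SELECT id, title FROM todos");
  return records as Todo[];
}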
Note: to revert to a specific migration, re-run the migration before it.
Deploying to prod
To wrap things up we’ll deploy our app to prod.
$ npm run deploy -- --stage prod
This allows us to separate our environments, so when we are working in dev, it doesn't break the API for our users.
Run the below command to open the SST Console for the prod stage and test the production endpoint.
$ npm run console -- --stage prod
Go to the API tab and click the Send button to send a POST request.
Cleaning up
Finally, you can remove the resources created in this example using the following commands.
$ npm run remove
$ npm run remove -- --stage prod
Conclusion
And that’s it! We’ve got a completely serverless hit counter. And we can test our changes locally before deploying to AWS! Check out the repo below for the code we used in this example. And leave a comment if you have any questions!
Example repo for reference
github.com/serverless-stack/serverless-stack/tree/master/examples/rest-api-postgresql