Thursday, 7 March, 2019 UTC


Summary

In my most recent article I showed you how to get Up and Running with the Serverless Framework. In this series we'll be diving much deeper: I'll be walking you through building a Serverless image resizing service. The service will be built using Serverless, Docker, Node.js, AWS S3, and the Sharp image processing library. It's a scalable, cost-effective service that can easily be put to work in a lot of real-world scenarios.

What we’ll be building
We'll be building an image resizing service on AWS that resizes images stored in S3 to any custom size. It can help you create custom-sized images for any screen size or device. Need avatars resized for performance and quality? No problem. Need images resized to fit the width of your blog? Done deal.
Next, let’s take a look at the API our service will have when we’re finished.
API Documentation

Resize Image

This endpoint resizes JPEG and PNG images to custom dimensions.
HTTP Request
POST /resize
Headers
{ Content-Type: 'application/json' }
Parameters
  • width (Number, default: none): The width in pixels to resize the image to.
  • height (Number, default: none): The height in pixels to resize the image to.
  • quality (Number, default: 100): The image quality for the new resized image, if JPEG.
  • match (Boolean, default: false): If true, images will be resized to the exact dimensions requested. If the aspect ratio differs from the source image, the image will be letter-boxed with a white background.
  • token (String, default: none): A random string provided to you which serves as your API key.
  • key (String, default: none): Path to the source image in S3.
  • path (String, default: none): Output destination of the resized image in S3.
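To make these parameters concrete, here's a hypothetical example request body (the token, key, and path values below are placeholders, not real credentials or object keys):
{
  "width": 400,
  "height": 400,
  "quality": 85,
  "match": true,
  "token": "YOUR_API_TOKEN",
  "key": "uploads/original/avatar.jpg",
  "path": "uploads/resized/avatar-400x400.jpg"
}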
As you can see, our service will have a single HTTP endpoint, /resize, that listens for POST requests with parameters specifying the dimensions to resize our images to, where the source images live in S3, and where the resized output should be stored. Let's get started building it!
Project setup
I'll be assuming some familiarity with working with the CLI, Serverless, and Docker.

Pre-requisites

  • AWS CLI tool installed and configured with valid credentials
  • Serverless Framework installed
  • Node.js installed and running at v8.10.0 (this mirrors the version in the AWS Lambda environment)
  • Docker installed and running
  • An AWS account (Free tier)
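
If you want to double-check that everything is in place before moving on, a quick round of version checks will do it (the exact output will vary by machine):
CLI:
$ aws --version
$ serverless --version
$ node --version
$ docker --version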

Initial scaffolding

First things first, let’s create a directory for our project and enter it.
CLI:
$ mkdir serverless-image-resizer && cd serverless-image-resizer 
Once that’s done let’s scaffold our project with npm.
npm:
$ npm init 
Follow the prompts and accept the defaults.
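As an aside, if you'd rather skip the prompts entirely, npm can generate a package.json with all the defaults in a single step:
npm:
$ npm init -y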

S3 Bucket

We will need to go into our AWS account and create an S3 bucket. I'm going to name my bucket alligatorio-images-2019. Remember that S3 bucket names need to be globally unique, so you'll have to come up with your own. Just remember what you name it, as we'll use that info soon.
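If you prefer the CLI over the AWS console, the same bucket can be created with the AWS CLI (substitute your own bucket name; us-east-2 matches the region we'll use in our Serverless config below):
CLI:
$ aws s3 mb s3://alligatorio-images-2019 --region us-east-2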

Serverless Config

We won't have anything to deploy for a while, but we still need to create our Serverless config file now. Later we'll register some plugins in it to give us a good development and debugging experience for the duration of the project. That all happens within the config file itself, so let's create it now.
Create a file called serverless.yml in the root of your project. Add the following:
serverless.yml
# Serverless Config
service: serverless-image-resizer # Name of service within AWS

provider:
  name: aws # Cloud service provider
  runtime: nodejs8.10
  stage: dev
  region: us-east-2 # US East (Ohio)
  iamRoleStatements:
    - Effect: Allow # Allows our services to see our S3 bucket
      Action:
        - s3:ListBucket
      Resource:
        - arn:aws:s3:::alligatorio-images-2019 # Enter your S3 bucket ARN
    - Effect: Allow # Allows all S3 operations on our bucket
      Action:
        - s3:*
      Resource:
        - arn:aws:s3:::alligatorio-images-2019/* # Enter your S3 bucket ARN

functions:
  resize: # Lambda function
    handler: src/resize.handler # The handler for this function
    events: # Event that triggers this Lambda
      - http: # Creates API Gateway endpoint trigger
          path: resize # Path for this endpoint
          method: post # HTTP method
          cors: true
We'll be adding to this soon when we set up some Serverless plugins. It's a good start, though.

webpack and Babel

Although not required, I highly recommend setting up webpack and Babel in Serverless projects. They let us use the latest JavaScript features and take advantage of webpack's features and plugins. Let's pull in the necessary dependencies and get everything set up now!

Development dependencies

We’ll need to add some packages for our development workflow.
npm:
$ npm install -D aws-sdk babel-core babel-loader babel-plugin-source-map-support babel-plugin-transform-runtime babel-preset-env babel-preset-stage-3 serverless-offline serverless-webpack webpack webpack-node-externals 

Production dependencies

We'll need the babel-runtime and source-map-support packages at runtime in production, so we'll add those as regular dependencies as well.
npm:
$ npm install -S babel-runtime source-map-support 
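
For a bit of context on how these two fit together (this is handled for us during transpilation, not something we write by hand): the babel-plugin-source-map-support dev dependency rewrites our files so they register the source-map-support runtime at startup, roughly equivalent to having this at the top of each bundled file:
// Injected automatically during transpilation
import 'source-map-support/register';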

Babel

Let’s now configure Babel. Create a file called .babelrc in your project root folder. Add the following:
.babelrc
{ "plugins": ["source-map-support", "transform-runtime"], "presets": [ ["env", { "node": "8.10" }], "stage-3" ] } 
This tells Babel to run our two plugins, to target Node 8.10, and to use the stage-3 preset. With that in place it can transpile our code into something that will run within the AWS Lambda environment.
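As a quick, hypothetical illustration of what the stage-3 preset unlocks, object rest/spread lets us peel individual fields off a parsed request body in our handler code (the variable names here are just for show):
// Separate the token from the remaining resize options
const { token, ...options } = JSON.parse(event.body);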

webpack

We can now configure webpack. We'll use it to run Babel and transpile our code at build time. It will also generate source maps for us, which will help with debugging within the AWS environment.
Create a file in your root folder called webpack.config.js and add the following:
webpack.config.js
const slsw = require('serverless-webpack');
const nodeExternals = require('webpack-node-externals');

module.exports = {
  entry: slsw.lib.entries,
  target: 'node',
  // Generate sourcemaps for proper error messages
  devtool: 'source-map',
  // Since 'aws-sdk' is not compatible with webpack,
  // we exclude all node dependencies
  externals: [nodeExternals()],
  mode: slsw.lib.webpack.isLocal ? 'development' : 'production',
  optimization: {
    // We do not want to minimize our code.
    minimize: false
  },
  performance: {
    // Turn off size warnings for entry points
    hints: false
  },
  // Run babel on all .js files and skip those in node_modules
  module: {
    rules: [
      {
        test: /\.js$/,
        loader: 'babel-loader',
        include: __dirname,
        exclude: /node_modules/
      }
    ]
  }
};
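One last note on wiring: the serverless-webpack and serverless-offline packages we installed only take effect once they're registered in serverless.yml. As a rough sketch, the plugins block we'll eventually add to our config looks like this:
plugins:
  - serverless-webpack # Bundles each function with webpack + Babel before packaging
  - serverless-offline # Emulates API Gateway and Lambda locally for testing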
Conclusion
Our project setup is now complete. In Part 2 of this series we'll start writing the code for our service. We'll also deploy our stack to AWS so we can debug and test as we develop. Stay tuned!