Building a scalable serverless mail redirection solution
We use Facebook Login heavily at PoweredLocal. As a result, our customers collect the email addresses of thousands of WiFi users, which can lead to undesired outcomes such as massive, repetitive email campaigns.
So we decided to protect end-users from excessive correspondence by masking their email addresses, so that all emails pass through our system first. That way we can control both the volume and the quality of email campaigns.
We came up with a serverless solution entirely based on AWS services.
The task is simple: a WiFi visitor, John, has the email address email@example.com. John often goes to Proud Mary Coffee in North Melbourne, and we want the coffee shop to be able to notify John of specials from time to time. What we don't want, however, is the coffee shop sending 10 emails every day announcing a new PoS system, a new haircut one of the baristas got, or the latest Hario dripper.
So instead of passing email@example.com to the coffee shop owners, we give them something like firstname.lastname@example.org, and we act as an email proxy. If the coffee shop starts behaving badly (e.g. sending way too many emails, or too many Hario close-up photos), we want to be able to turn the email redirection off, temporarily or permanently.
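To illustrate the proxying idea, here is a minimal sketch of alias generation in Python. The function names and the in-memory dict are ours for illustration; a real system like MailTumble would keep this mapping in a persistent data store:

```python
import secrets

# Hypothetical in-memory store mapping proxy aliases to real addresses.
alias_to_real = {}

def create_alias(real_address, domain="example.org"):
    """Mint an opaque alias on our proxy domain and remember the mapping."""
    alias = f"{secrets.token_hex(8)}@{domain}"
    alias_to_real[alias] = real_address
    return alias

def resolve_alias(alias):
    """Return the real address behind an alias, or None if unknown."""
    return alias_to_real.get(alias)
```

The coffee shop only ever sees the alias, so turning redirection off is just a matter of deactivating (or deleting) that one mapping.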
We chose Amazon SES as our email solution, but no matter which provider you choose, one of the bottlenecks will be your throttle rate (the maximum send-out rate). Therefore, the solution has to be able to adjust its pace to the currently allowed rate.
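One generic way to respect a throttle rate that can change over time is a token bucket. This is a sketch of the technique, not MailTumble's actual implementation; the rate value would come from the provider's current quota:

```python
import time

class TokenBucket:
    """Pace send-outs so we never exceed the provider's maximum send rate.
    `self.rate` can be updated on the fly if the quota is raised."""

    def __init__(self, rate_per_second):
        self.rate = rate_per_second
        self.tokens = rate_per_second  # start with a full bucket
        self.last = time.monotonic()

    def try_send(self):
        """Return True if we may send one email right now."""
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at one second's worth.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A worker that gets `False` simply waits and retries, so bursts are smoothed out to the allowed rate.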
Since you are going to send out the emails yourself, you must track and act on complaints and bounces. You don't want your email provider to cut off your access one morning because you ignored all the recipient complaints.
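Complaint and bounce bookkeeping can be as simple as a pair of counters per address. A minimal sketch, with thresholds we made up for illustration (the real limits are a product decision):

```python
# Assumed policy thresholds -- illustrative only.
COMPLAINT_LIMIT = 1   # a single spam complaint stops forwarding
BOUNCE_LIMIT = 3      # a few hard bounces and the address is suppressed

feedback = {}  # address -> {"bounce": count, "complaint": count}

def record_feedback(address, kind):
    """Count a bounce or complaint reported by the email provider."""
    counts = feedback.setdefault(address, {"bounce": 0, "complaint": 0})
    counts[kind] += 1

def is_suppressed(address):
    """True if we should stop sending to this address."""
    counts = feedback.get(address, {"bounce": 0, "complaint": 0})
    return (counts["complaint"] >= COMPLAINT_LIMIT
            or counts["bounce"] >= BOUNCE_LIMIT)
```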
Our solution looks almost like AWS Lego:
We assign "proxied" email addresses in the form of firstname.lastname@example.org (we called our solution MailTumble, and we even made it open-source). The domain's MX record points to AWS SES, so all incoming emails arrive at Amazon.
Every incoming email is saved to an S3 bucket and processed by a Lambda function. This Lambda function verifies that the destination email exists and that the email is OK to pass through (there haven't been too many complaints from the final recipient, the content carries no viruses, etc.).
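The receive-side checks could be sketched like this. The event shape follows what SES receipt rules deliver to Lambda (records with `mail` and `receipt` sections, including spam/virus verdicts); `resolve_alias` and `too_many_complaints` are hypothetical stand-ins for lookups against the user store:

```python
def handle_incoming(event, resolve_alias, too_many_complaints):
    """Decide, per SES record, which messages should be queued for
    forwarding. Returns (messageId, real_address) pairs to enqueue."""
    decisions = []
    for record in event["Records"]:
        mail = record["ses"]["mail"]
        receipt = record["ses"]["receipt"]
        clean = (receipt.get("spamVerdict", {}).get("status") == "PASS"
                 and receipt.get("virusVerdict", {}).get("status") == "PASS")
        for destination in mail["destination"]:
            real = resolve_alias(destination)  # None if the alias is unknown
            if real and clean and not too_many_complaints(real):
                decisions.append((mail["messageId"], real))
    return decisions
```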
If all checks pass, the email's unique ID is pushed to an SQS queue. We chose a queue here because we want to send the messages out synchronously, at a rate of our choosing. We don't want to send emails asynchronously via Lambda (or SNS), because we would hit the throttle rate very, very quickly.
After an email is sent, SES sends us a message through SNS reporting the result: was it delivered, did it bounce, or did the recipient mark it as spam? Another Lambda function processes these notifications.
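SES publishes its feedback as JSON inside the SNS message body (a `notificationType` of `Delivery`, `Bounce` or `Complaint`, with the affected recipients listed). A sketch of the processing Lambda, with `record` as a hypothetical callback into our feedback store:

```python
import json

def handle_notification(sns_event, record):
    """Unpack SES delivery notifications arriving via SNS and record
    bounces and complaints per recipient address."""
    for rec in sns_event["Records"]:
        message = json.loads(rec["Sns"]["Message"])
        kind = message["notificationType"]
        if kind == "Bounce":
            for r in message["bounce"]["bouncedRecipients"]:
                record(r["emailAddress"], "bounce")
        elif kind == "Complaint":
            for r in message["complaint"]["complainedRecipients"]:
                record(r["emailAddress"], "complaint")
```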
We ended up with:
- 3 SNS topics and 1 SQS queue;
- 1 S3 bucket and 1 API gateway;
- Zero servers of our own engaged. Zero DevOps efforts, yay!
The solution is scalable (theoretically to very high throughputs) and cost-efficient. If no emails are coming in at all, we only pay for the Lambda invocations of the send-out function and the SQS polls associated with it (at the minimal rate).
Feel free to browse our solution on GitHub and ask any questions. If you want to contribute — even better!
PoweredLocal is a Melbourne-based WiFi innovation startup. We host PHP, Ruby and Node.js microservices behind our freshly baked API gateway.
Written by Denis Mysenko, Chief Technical Officer at PoweredLocal