Recently, one of my websites went down. I checked my EC2 dashboard and saw that the instance had stopped. AWS had emailed me to say that, due to a physical hardware issue, it had been terminated. When an instance is terminated, all of its data is lost. Luckily, all of my data is backed up automatically every night.
Since I don’t use RDS, I have to manage data redundancy myself. After a few disasters, I came up with a solution: a nightly cron job runs a shell script that takes a MySQL dump and uploads it to S3.
As long as I have the user-generated data, everything else is replaceable. The website that went down is a fitness tracking app. Every day, users record their martial arts progress. Below are the ten steps taken to bring everything back up.
Allowing users to upload images to your app can be a pivotal feature. Many digital products rely on it. This post will show you how to do it using PHP and AWS S3.
After launching version 1.0 of SplitWit, I decided to enhance the platform by adding features. An important A/B experiment involves swapping images. This is particularly useful on ecommerce stores that sell physical products.
Originally, users could only swap images by entering a URL. To the average website owner, this would seem lame. For SplitWit to be legit, adding images on the fly had to be a feature.
I wrote three scripts – one to upload files, one to fetch them, and one to delete them. Each leverages a standalone PHP class written by Donovan Schönknecht, making it easy to interact with AWS S3. All you’ll need is your S3 bucket name and IAM user credentials. The library provides methods to do everything you need.
You’ll want to create a new IAM user to programmatically interact with this bucket. Make sure that new user is added to a group that includes the permission policy “AmazonS3FullAccess”. You can find the access key ID and secret in the “Security credentials” tab.
Uploading image files
When users select an image in the visual editor, they are shown a button to upload a new file. Clicking on it opens the gallery modal.
The HTML file-type input element presents a browser dialog to select a file. Once selected, the image data is posted to the S3 upload script. The newly uploaded image then replaces the existing image in the visual editor.
$(".uploadimage").change(function(){
var file = $(this)[0].files[0];
var formData = new FormData();
formData.append("file", file, file.name);
formData.append("upload_file", true);
$.ajax({
type: "POST",
url: "/s3-upload.php",
xhr: function () {
var myXhr = $.ajaxSettings.xhr();
if (myXhr.upload) {
// myXhr.upload.addEventListener('progress', that.progressHandling, false);
}
return myXhr;
},
success: function (response) {
console.log(response);
document.getElementById("uploadimage").value = "";
if(response !== "success"){
$(".file-error").text(response).show();
setTimeout(function(){ $(".file-error").fadeOut();}, 3000)
return;
}
$("#image-gallery-modal").hide();
loadS3images();
var newImageUrl = "https://splitwit-image-upload.s3.amazonaws.com/<?php echo $_SESSION['userid'];?>/" + file.name;
$("input.img-url").val(newImageUrl);
$(".image-preview").attr("src", newImageUrl).show();
$(".image-label .change-indicator").show();
//update editor (right side)
var selector = $(".selector-input").val();
var iFrameDOM = $("iframe#page-iframe").contents();
if($(".element-change-wrap").is(":visible")){
iFrameDOM.find(selector).attr("src", newImageUrl).attr("srcset", "");
$(".element-change-save-btn").removeAttr("disabled");
}
if($(".insert-html-wrap").is(":visible")){
var position = $(".position-select").val();
var htmlInsertText = "<img style='display: block; margin: 10px auto;' class='htmlInsertText' src='"+newImageUrl+"'>";
iFrameDOM.find(".htmlInsertText").remove();
if(position == "before"){
iFrameDOM.find(selector).before(htmlInsertText);
}
if(position == "after"){
iFrameDOM.find(selector).after(htmlInsertText);
}
}
},
error: function (error) {
console.log("error: ");
console.log(error);
},
async: true,
data: formData,
cache: false,
contentType: false,
processData: false,
timeout: 60000
});
});
The upload script puts files in the same S3 bucket, under a separate sub-directory for each user account ID. It checks the MIME type on the file to make sure an image is being uploaded.
<?php
require 's3.php';
$s3 = new S3("XXXX", "XXXX"); //access key ID and secret
// echo "S3::listBuckets(): ".print_r($s3->listBuckets(), 1)."\n";
$bucketName = 'image-upload';
if(isset($_FILES['file'])){
$file_name = $_FILES['file']['name'];
$uploadFile = $_FILES['file']['tmp_name'];
if ($_FILES['file']['size'] > 5000000) { //5 megabyte
echo 'Exceeded filesize limit.';
die();
}
$finfo = new finfo(FILEINFO_MIME_TYPE);
if (false === $ext = array_search(
$finfo->file($uploadFile),
array(
'jpg' => 'image/jpeg',
'png' => 'image/png',
'gif' => 'image/gif',
),
true
)) {
if($_FILES['file']['type'] == ""){
echo 'File format not found. Please re-save the file.';
}else{
echo 'Invalid file format.';
}
die();
}
//store the file under the account ID prefix (S3 creates the sub-directory implicitly)
session_start();
$account_id = $_SESSION['userid'];
if ($s3->putObjectFile($uploadFile, $bucketName, $account_id."/".$file_name, S3::ACL_PUBLIC_READ)) {
echo "success";
}
}
?>
After upload, the gallery list is reloaded by the loadS3images() function.
Fetching image files from S3
When the image gallery modal first shows, that same loadS3images() runs to populate any images that have been previously uploaded.
function loadS3images(){
$.ajax({
url:"/s3-get-objects.php",
complete: function(response){
gotImages = true;
$(".loading-images").hide();
var data = JSON.parse(response.responseText);
var x;
var html = "<p><strong>Select existing file:</strong></p>";
var l = 0;
for (x in data) {
l++;
var name = data[x]["name"];
var nameArr = name.split("/");
name = nameArr[1];
var imgUrl = "https://splitwit-image-upload.s3.amazonaws.com/<?php echo $_SESSION['userid'];?>/" + name;
html += "<div class='image-data-wrap'><p class='filename'>"+name+"</p><img style='width:50px;display:block;margin:10px;' src='' class='display-none'><button type='button' class='btn select-image'>Select</button> <button type='button' class='btn preview-image'>Preview</button> <button type='button' class='btn delete-image'>Delete</button><hr /></div>"
}
if(l){
$(".image-gallery-content").html(html);
}
}
});
}
It hits the “get objects” PHP script to pull the files in the account’s directory.
<?php
require 's3.php';
$s3 = new S3("XXX", "XXX"); //access key ID and secret
$bucketName = 'image-upload';
session_start();
$account_id = $_SESSION['userid'];
$info = $s3->getBucket($bucketName, $account_id);
echo json_encode($info);
?>
Existing images can be chosen to replace the one currently selected in the editor. There are also options to preview and delete.
Delete an S3 object
When the delete button is pressed for a file in the image gallery, all we need to do is pass the filename along. If the image is currently being used, we also remove it from the editor.
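The delete script itself is the third of the scripts mentioned earlier. A minimal sketch of it might look like this, using the deleteObject() method provided by the same S3 class (the “filename” POST parameter name is my assumption):
<?php
require 's3.php';
$s3 = new S3("XXXX", "XXXX"); //access key ID and secret
$bucketName = 'image-upload';
session_start();
$account_id = $_SESSION['userid'];
if(isset($_POST['filename'])){
    //remove the object stored under this account's sub-directory
    if ($s3->deleteObject($bucketName, $account_id."/".$_POST['filename'])) {
        echo "success";
    }
}
?>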
My email account is a skeleton key to anything online I’ve signed up for. If I forget a password, I can reset it. Implementing this feature for a web app takes just a few steps.
When users enter an incorrect password, I prompt them to reset it.
Clicking the reset link calls a “forgot password” back-end service.
$(document).on("click",".reset-pw-cta", function(){
var email = $(this).attr("data");
$.ajax({
url:"/service-layer/user-service.php?method=forgotPw&email="+email,
complete:function(response){
console.log(response.responseText)
window.showStatusMessage("A password reset email has been sent to " + email);
}
})
});
A token is created in our ‘password recovery’ database table. That token is related back to an account record.
As a security practice, recovery tokens are deleted nightly by a cron job.
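That cleanup can be a single crontab entry; something like this (the schedule and credentials are placeholders), which works because the expire_date column stores a Unix timestamp:
0 2 * * * mysql -u db_user -p'db_password' database_name -e "DELETE FROM passwordrecovery WHERE expire_date < UNIX_TIMESTAMP()"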
An email is then sent containing a “reset password” link embedded with the token. AWS SES and PHPMailer are used to send that message.
function forgotPw(){
$email = $this->email;
$row = $this->row;
$number_of_rows = $this->number_of_rows;
$conn = $this->connection;
if($number_of_rows > 0){
$this->emailFound = 1;
$userid = $row['ID'];
$this->userid = $userid;
//create reset token
$timestamp = time();
$expire_date = time() + 24*60*60;
$token_key = md5($timestamp.md5($email));
$statement = $conn->prepare("INSERT INTO `passwordrecovery` (userid, token, expire_date) VALUES (:userid, :token, :expire_date)");
$statement->bindParam(':userid', $userid);
$statement->bindParam(':token', $token_key);
$statement->bindParam(':expire_date', $expire_date);
$statement->execute();
//send email via amazon ses
include 'send-email-service.php';
$SendEmailService = new SendEmailService();
$reset_url = 'https://www.bjjtracker.com/reset-pw.php?token='.$token_key;
$subject = 'Reset your password.';
$body = 'Click here to reset your password: <a href="'.$reset_url.'">'. $reset_url .'</a>';
$altBody = 'Click here to reset your password: ' . $reset_url;
$this->status = $SendEmailService -> sendEmail($subject, $body, $altBody, $email);
}else{
$this->emailFound = 0;
}
}
That link navigates to a page with a “reset password” form.
Upon submission the new password and embedded token are passed along to the server.
$(document).ready(function() {
$(".reset-button").click(function(){
var newPassword = $(".password-reset-input").val();
if(newPassword.length < 1){
var notifications = new UINotifications();
notifications.showStatusMessage("Please don't leave that blank :( ");
return;
}
var data = $(".resetpw-form").serialize();
$.ajax({
url: "/service-layer/user-service.php?method=resetPw&token=<?php echo $_GET['token']; ?>",
method: "POST",
data: data,
complete: function(response){
// console.log(response);
window.location = "/";
}
});
});
$("input").keypress(function(e) {
if(e.which == 13) {
e.preventDefault();
$(".reset-button").click();
}
});
});
The correct recovery record is selected by using the token value. That provides the user ID of the account that we want to update. The token should be deleted once the database is updated.
function resetPw(){
$conn = $this->connection;
$token = $_GET['token'];
$password = $_POST['password'];
$passwordHash = password_hash($password, PASSWORD_DEFAULT);
$statement = $conn->prepare("SELECT * FROM `passwordrecovery` where token = ?");
$statement->execute(array($token));
$row = $statement->fetch(PDO::FETCH_ASSOC);
$userid = $row['userid'];
$update_statement = $conn->prepare("UPDATE `users` SET password = ? where ID = ?");
$update_statement->execute(array($passwordHash, $userid));
$delete_statement = $conn->prepare("DELETE FROM `passwordrecovery` where token = ?");
$delete_statement->execute(array($token));
}
This is a secure and user-friendly workflow to allow users to reset their passwords.
A crashed database is a problem I’ve encountered across multiple WordPress websites. When trying to load the site you’re faced with a dreaded “Error establishing a database connection” message. Restarting the DB service usually clears things up. But, sometimes it won’t restart at all – which is why I started automating nightly data dumps to an S3 bucket.
Recently, one particular site kept going down unusually often. I assumed it was happening due to low computing resources on the EC2 t3.micro instance. I decided to spin up a new box with more RAM (t3.small) and migrate the entire WordPress setup.
Since I couldn’t be sure of what was causing the issue, I needed a way to monitor the health of my WordPress websites. I decided to write code that would periodically ping the site and, if it was down, send an email alert and attempt to restart the database.
The first challenge was determining the status of the database. Even if it crashed, my site would still return a 200 OK response. I figured I could use cURL to get the homepage content, and then strip out any HTML tags to check the text output. If the text did match the error message, I could take further action.
Next, I needed to programmatically restart MySQL. This is the command I run to do it manually: sudo service mariadb restart
After doing some research, I found that I could use shell_exec() to run it from my PHP code. Unfortunately, Apache wouldn’t let the (non-password using) web server user execute that without special authorization. I moved that command to its own restart-db.sh file, and allowed my code to run it by adding this to the visudo file: apache ALL=NOPASSWD: /var/www/html/restart-db.sh
I also needed to make the file executable by adjusting permissions: sudo chmod +x /var/www/html/restart-db.sh
Once those pieces were configured, my code would work:
<?php
$url = "http://www.antpace.com/blog";
$curl_connection = curl_init();
curl_setopt($curl_connection, CURLOPT_URL, $url);
curl_setopt($curl_connection, CURLOPT_RETURNTRANSFER, true);
$curl_response = curl_exec($curl_connection);
$plain_text = strip_tags($curl_response);
if(strpos($plain_text, "Error establishing a database connection") !== false){
echo "The DB is down.";
//restart the database
shell_exec('sudo /var/www/html/restart-db.sh');
//send notification email
include 'send-email.php';
send_email();
}else{
echo "The DB is healthy.";
}
?>
A cron job is a scheduled task in Linux that runs at set times. For my PHP code to effectively monitor the health of the database, it needs to run often. I decided to execute it every five minutes. Below are three shell commands to create a cron job.
The first creates the cron file for the root user:
sudo touch /var/spool/cron/root
The next appends my cron command to that file:
echo "*/5 * * * * sudo wget -q 127.0.0.1/check-db-health.php" | sudo tee -a /var/spool/cron/root
And, the last sets the cron software to listen for that file:
sudo crontab /var/spool/cron/root
Alternatively, you can create, edit, and set the cron file directly by running sudo crontab -e. The contents of the cron file can be confirmed by running sudo crontab -l.
In a previous article I discussed launching a website on AWS. The project was framed as transferring a static site from another hosting provider. This post will extend that to migrating a dynamic WordPress site with existing content.
Install WordPress
After following the steps to launch your website to a new AWS EC2 instance, you’ll be able to connect via sFTP. I use FileZilla as my client. You’ll need the hostname (public DNS), username (ec2-user in this example), and key file for access. The latest version of WordPress can be downloaded from wordpress.org. Once connected to the server, I copy those files to the root web directory for my setup: /var/www/html
Make sure the wp-config.php file has the correct details (username, password) for your database. You should use the same database name from the previous hosting environment.
Data backup and import
It is crucial to be sure we don’t lose any data. I make a MySQL dump of the current database and copy the entire wp-content folder to my local machine. I’m careful to not delete or cancel the old server until I am sure the new one is working identically.
After configuring my EC2 instance, I install phpMyAdmin so that I can easily import the sql file.
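On Amazon Linux 2, that looks something like this, pulling phpMyAdmin’s “latest” bundle and the mbstring PHP extension it depends on (your setup’s package needs may differ):
sudo yum install -y php-mbstring
sudo systemctl restart httpd
cd /var/www/html
sudo wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.gz
sudo mkdir phpMyAdmin && sudo tar -xvzf phpMyAdmin-latest-all-languages.tar.gz -C phpMyAdmin --strip-components 1
sudo rm phpMyAdmin-latest-all-languages.tar.gz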
The above Linux commands install the database management software in the root web directory of the new server. It is accessible from a browser via yourdomainname.com/phpMyAdmin. This tool is used to upload the data to the new environment.
Create the database and make sure the name matches what’s in wp-config.php from the last step. Now you’ll be able to upload your .sql file.
Next, I take the wp-content folder that I stored on my computer, and copy it over to the new remote. At this point, the site homepage will load correctly. You might notice other pages won’t resolve, and will produce a 404 “not found” response. That error has to do with certain Apache settings, and can be fixed by tweaking some options.
Server settings
With my setup, I encountered the above issue with page permalinks. WordPress relies on the .htaccess file to route pages/posts with their correct URL slugs. By default, this Apache setup does not allow its settings to be overridden by .htaccess directives. To fix this issue, the httpd.conf file needs to be edited. Mine was located in this directory: /etc/httpd/conf
You’ll need to find (or create) a section that corresponds to the default document root: <Directory “/var/www/html”></Directory>. In that block, there will be an AllowOverride directive that is set to “None”. It needs to be changed to “All” for our configuration file to work.
Final steps
After all the data and content has been transferred, do some smoke-testing. Try out as many pages and features as you can to make sure the new site is working as it should. Make sure you keep a back-up of everything somewhere secure (I use an S3 bucket). Once satisfied, you can switch your domain’s A records to point at the new box. Since the old and new servers will appear identical, I add a console.log(“new server”) to the header file. That allows me to tell when the DNS update has finally resolved. Afterwards, I can safely cancel/decommission the old web hosting package.
I have had some lousy luck with databases. In 2018, I created a fitness app for martial artists, and quickly gained over a hundred users in the first week. Shortly after, the server stopped resolving and I didn’t know why. I tried restarting it, but that didn’t help. Then, I stopped the EC2 instance from my AWS console. Little did I know, that would wipe all of the data from that box. Ouch.
Recently, a client let me know that their site wasn’t working. A dreaded “error connecting to the database” message was all that resolved. I’d seen this one before – no sweat. Restarting the database usually does the trick: “sudo service mariadb restart”. The command line barked back at me: “Job for mariadb.service failed because the control process exited with error code.”
Uh-oh.
The database was corrupted. It needed to be deleted and reinstalled. Fortunately, I happened to have a SQL dump for this site saved on my desktop. This was no way to live – in fear of the whims of servers.
Part of the issue is that I’m running MySQL on the same EC2 instance as the web server. A more sophisticated architecture would move the database to RDS. This would provide automated backups, patches, and maintenance. It also costs more.
To keep cost low, I decided to automate MySQL dumps and upload to an S3 bucket. S3 storage is cheap ($0.20/GB), and data transfer from EC2 is free.
AWS Setup
The first step was to get things configured in my Amazon Web Services (AWS) console. I created a new S3 bucket. I also created a new IAM user, and added it to a group that included the permission policy “AmazonS3FullAccess”.
This policy provides full access to all buckets.
I went to the security credentials for that user, and copied down the access key ID and secret. I would use that info to access my S3 bucket programmatically. All of the remaining steps take place from the command line, via SSH, against my server. From a Mac terminal, you could use a command like this to connect to an EC2 instance:
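ssh -i ~/desktop/key-file.pem ec2-user@ec2-12-345-678-90.compute-1.amazonaws.com
(The key file path and the public DNS hostname here are placeholders – swap in the values for your own instance.)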
Shell scripts are programs that can be run directly by Linux. They’re great for automating tasks. To create the file on my server I ran: “nano backup.sh”. This assumes you already have the nano text editor installed. If not: “sudo yum install nano” (or, “sudo apt install nano”, depending on your Linux flavor).
Below is the full code I used. I’ll explain what each part of it does.
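This sketch matches that description; it assumes the AWS CLI is installed and configured with the IAM user’s credentials, and the bucket, database, and path values are placeholders:
#!/bin/bash
S3_BUCKET="my-backup-bucket"   # S3 details (placeholders)
S3_FOLDER="backups"
DB_HOST="localhost"            # MySQL connection details (placeholders)
DB_PORT="3306"
DB_USER="backup_user"
DB_PASS="password"
DB_NAME="my_database"
TMP_PATH="/tmp"

# switch to a temp directory; the filename is the database name plus the day of the week
cd $TMP_PATH
FILE_NAME="$DB_NAME-$(date +%A).sql"

# create the sql dump, or print an error and exit
if ! mysqldump -h "$DB_HOST" -P "$DB_PORT" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" > "$FILE_NAME"; then
    echo "mysqldump failed"
    exit 1
fi

# zip the file, upload it to S3, and delete the zip from the temp folder
zip "$FILE_NAME.zip" "$FILE_NAME"
aws s3 cp "$FILE_NAME.zip" "s3://$S3_BUCKET/$S3_FOLDER/"
rm "$FILE_NAME.zip"
echo "backup complete"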
The first line tells the system what interpreter to use: “#!/bin/bash”. Bash is a variation of the shell scripting language. The next eight lines are variables that contain details about my AWS S3 bucket, and the MySQL database connection.
After switching to a temporary directory, the filename is built. The name of the file is set to the database’s name plus the day of the week. If that file already exists (from the week previous), it’ll be overwritten. Next, the sql file is created using mysqldump and the database connection variables from above. Once that operation is complete, then we zip the file, upload it to S3, and delete the zip from our temp folder.
If the mysqldump operation fails, we spit out an error message and exit the program. (Exit code 1 is a general catchall for errors. Anything other than 0 is considered an error. Valid error codes range between 1 and 255.)
Before this shell script can be used, we need to change its file permissions so that it is executable: “chmod +x backup.sh”
After all of this, I ran the file manually, and made sure it worked: “./backup.sh”
Sure enough, I received a success message. I also checked the S3 bucket and made sure the file was there.
Scheduled Cronjob
The last part is to schedule this script to run every night. To do this, we’ll edit the Linux crontab file: “sudo crontab -e”. This file controls cronjobs – which are scheduled tasks that the system will run at set times.
The file opened in my terminal window using the vim text editor – which is notoriously harder to use than the nano editor we used before.
I had to hit ‘i’ to enter insertion mode. Then I right clicked, and pasted in my cronjob code. Then I pressed the escape key to exit insertion mode. Finally, I typed “wq!” to save my changes and quit.
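The entry itself is a single line; it would look something like this (the 3:00 am schedule and the script path are placeholders):
0 3 * * * /home/ec2-user/backup.sh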
And that’s it. I made sure to check the next day to make sure my cronjob worked (it did). Hopefully now, I won’t lose production data ever again!
Request Time Too Skewed (update)
A while after setting this up, I randomly checked my S3 buckets to make sure everything was still working. Although it had been for most of my sites, one had not been backed up in almost 2 months! I shelled into that machine, and tried running the script manually. Sure enough, I received an error: “An error occurred (RequestTimeTooSkewed) when calling the PutObject operation: The difference between the request time and the current time is too large.”
I checked the operating system’s current date and time, and it was off by 5 days. I’m not sure how that happened. I fixed it by installing and running “Network Time Protocol”:
sudo yum install ntp
sudo ntpdate ntp.ubuntu.com
After that, I was able to run my backup script successfully, without any S3 errors.
Nano text-editor tip I learned along the way:
You can delete chunks of text content using Nano. Use CTRL + Shift + 6 to enter selection mode, move the cursor to expand the block, and press CTRL + K to delete it.
My last post was about launching a website onto AWS. This covered launching a new EC2 instance, configuring a security group, installing LAMP software, and pointing a domain at the new instance. The only thing missing was to configure SSL and HTTPS.
Secure Sockets Layer (SSL) encrypts traffic between a website and its server. HTTPS is the protocol to deliver secured data via SSL to end-users.
In my last post, I already allowed all traffic through port 443 (the port that HTTPS uses) in the security group for my EC2 instance. Now I’ll install software to provision SSL certificates for the server.
Certbot
Certbot is free software that will communicate with Let’s Encrypt, an SSL certificate authority, to automate the management of encryption certificates.
Before downloading and installing Certbot, we’ll need to install some dependencies (Extra Packages for Enterprise Linux). SSH into the EC2 instance that you want to secure, and run this command in your home directory (/home/ec2-user):
sudo wget -r --no-parent -A 'epel-release-*.rpm' http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/
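From there, Certbot can be installed from those EPEL packages and run against Apache. On Amazon Linux 2 the commands look something like this (package names vary by distribution):
sudo yum install -y ./dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm
sudo yum install -y certbot python2-certbot-apache
sudo certbot --apache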
Finally, schedule an automated task (a cron job) to renew the encryption certificate as needed. If you don’t do this part, HTTPS will fail for your website after a few months. Users will receive an ugly warning, telling them that your website is not secure. Don’t skip this part!
Run this command to open your cron file:
sudo nano /etc/crontab
Schedule Certbot to renew every day, at 4:05 am:
05 4 * * * root certbot renew --no-self-upgrade
Make sure your cron daemon is running:
sudo systemctl restart crond
That’s it! Now your website, hosted on EC2, will support HTTPS. Next, we’ll force all traffic to use it.
In 2008 I deployed my first website to production. It used a simple LAMP stack, a GoDaddy domain name, and HostGator hosting.
Since 2016, I’ve used AWS as my primary cloud provider. And this year, I’m finally cancelling my HostGator package. Looking through that old server, I found artifacts of past projects – small businesses and start-ups that I helped develop and grow. A virtual memory lane.
Left on that old box was a site that I needed to move to a fresh EC2 instance. This is an opportunity to document how I launch a site to Amazon Web Services.
Amazon Elastic Compute Cloud
To start, I launch a new EC2 instance from the AWS console. Amazon’s Elastic Compute Cloud provides “secure and resizable compute capacity in the cloud.” When prompted to choose an Amazon Machine Image (AMI), I select “Amazon Linux 2 AMI”. I leave all other settings as default. When I finally click “Launch”, it’ll ask me to either generate a new key file, or use an existing one. I’ll need that file later to SSH or sFTP into this instance. A basic Linux server is spun up, with little else installed.
Amazon Linux 2 AMI is free tier eligible.
Next, I make sure that instance’s Security Group allows inbound traffic on SSH, HTTP, and HTTPS. We allow all traffic via HTTP and HTTPS (IPv4 and IPv6, which is why there are 2 entries for each). That way end-users can reach the website from a browser. Inbound SSH access should not be left wide open. Only specific IP addresses should be allowed to command-line in to the server. AWS has an option labeled “My IP” that will populate it for your machine.
Don’t allow all IPs to access SSH in a live production environment.
Configure the server
Now that the hosting server is up-and-running, I can command-line in via SSH from my Mac’s terminal using the key file from before. This is what the command looks like:
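ssh -i ~/desktop/key-file.pem ec2-user@ec2-12-345-678-90.compute-1.amazonaws.com
(Both the key file path and the public DNS hostname are placeholders for your own instance’s values.)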
Once connected, I install the software for a basic LAMP stack. On Amazon Linux 2, a command like this enables and installs the PHP and MariaDB packages from the amazon-linux-extras repository (the exact versions may differ for your instance):
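sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
This next one installs the Apache web server and the MariaDB database server: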
sudo yum install -y httpd mariadb-server
MariaDB is a fork of MySQL, but with better performance.
Start Apache: “sudo systemctl start httpd”. And make sure it always starts when the server boots up: “sudo systemctl enable httpd”.
The server setup is complete. I can access an Apache test page from a web browser by navigating to the EC2 instance’s public IP address.
A test page shows when no website files are present.
I’ll take my website files (that are stored on my local machine and synched to a Git repo) and copy them to the server via sFTP.
I use FileZilla to access my EC2 public directory
I need to make sure the Linux user I sFTP with owns the directory “/var/www/html”, or else I’ll get a permission denied error: sudo chown -R ec2-user /var/www/html
Instead of having to use the EC2 server’s public address to see my website from a browser, I’ll point a domain name at it. AWS Route 53 helps with this. It’s a “DNS web service” that routes users to websites by mapping domain names to IP addresses.
In Route 53 I create a new “hosted zone”, and enter the domain name that I’ll be using for this site. This will automatically generate two record sets: a Name Server (NS) record and a Start-of-Authority (SOA) record. I’ll create one more, an IPv4 address (A) record. The value of that record should be the public IP address that I want my domain to point at. You’ll probably also want to add another, identical to the last one, but specifying “www” in the record name.
Finally, I’ll head over to my domain name registrar, and find my domain name’s settings. I update the nameserver values there to match those in my Route 53 NS record set. It’ll likely take some time for this change to be reflected in the domain’s settings. Once that is complete, the domain name will be pointing at my new EC2 instance.
Email is still the best way to communicate with our users; better than SMS or app notifications. An effective messaging strategy can enhance the journey our products offer.
This post is about sending email from the website or app you’re developing. We will use SES to send transactional emails. AWS documentation describes Simple Email Service (SES) as “an email sending and receiving service that provides an easy, cost-effective way for you to send email.” It abstracts away managing a mail server.
Configuring your domain name
The first step to sending email through SES is to verify the domain name we’ll want messages coming from. We can do this from the “Domains” dashboard.
Verify a new domain name
This will generate a list of record sets that will need to be added to our domain as DNS records. I use Route 53, another Amazon service, to manage my domains – so that’s where I’ll need to enter this info.
Understand deliverability
We want to be confident that intended recipients are actually getting the messages that are sent. Email service providers, and ISPs, want to prevent being abused by spammers. Following best practices, and understanding deliverability, can ensure that emails won’t be blocked.
Verify any email addresses that you are sending messages from: “To maintain trust between email providers and Amazon SES, Amazon SES needs to ensure that its senders are who they say they are.”
Make sure DKIM has been verified for your domain: “DomainKeys Identified Mail (DKIM) provides proof that the email you send originates from your domain and is authentic”. If you’re already using Route 53 to manage your DNS records, SES will present an option to automatically create the necessary records.
Be reputable. Send high-quality emails and make opt-out easy. You don’t want to be marked as spam. Respect sending quotas. If you plan on sending bulk email to a mailing list, I suggest using an Email Service Provider such as MailChimp (SES could be used for that too, but that is outside the scope of this writing).
An access key can be created using Identity and Access Management (IAM). “You use access keys to sign programmatic requests that you make to AWS.” This requires creating a user, and setting its permissions policies to include “AmazonSESSendingAccess”. We can create an access key in the “security credentials” for this user.
Permission policy for IAM user
Integrating with WordPress
Sending email from WordPress is made easy with plugins. They can be used to easily create forms. Those forms can be wired to use the outbound mail server of our choice using WP Mail SMTP Pro. All we’ll need to do is enter the access key details. If we try to send email without specifying a mail server, forms will default to sending messages directly from the LAMP box hosting the website. That would result in low-to-no deliverability.
Screenshot of WP Mail SMTP Pro
Integrating with custom code
Although the WordPress option is simple, the necessary plugin has an annual cost. Alternatively, SES can integrate with custom code we’ve written. We can use PHPMailer to abstract away the details of sending email programmatically. Just include the necessary files, configure some variables, and call a send() method.
Contact form powered by SES
The contact forms on my résumé and portfolio webpages use this technique. I submit the form data to a PHP file that uses PHPMailer to interact with SES. The front-end uses a UI notification widget to give the user alerts. It’s available on my GitHub, so check it out.
Front-end, client-side:
<form id="contactForm">
<div class="outer-box">
<input type="text" placeholder="Name" name="name" value="" class="input-block-level bordered-input">
<input type="email" placeholder="Email" value="" name="email" class="input-block-level bordered-input">
<input type="text" placeholder="Phone" value="" name="phone" class="input-block-level bordered-input">
<textarea placeholder="Message" rows="3" name="message" id="contactMessage" class="input-block-level bordered-input"></textarea>
<button type="button" id="contactSubmit" class="btn transparent btn-large pull-right">Contact Me</button>
</div>
</form>
<link rel="stylesheet" type="text/css" href="/ui-messages/css/ui-notifications.css">
<script src="/ui-messages/js/ui-notifications.js"></script>
<script type="text/javascript">
$(function(){
var notifications = new UINotifications();
$("#contactSubmit").click(function(){
var contactMessage = $("#contactMessage").val();
if(contactMessage.length < 1){
notifications.showStatusMessage("Don't leave the message area empty.");
return;
}
var data = $("#contactForm").serialize();
$.ajax({
type:"POST",
data:data,
url:"assets/contact.php",
success:function(response){
console.log(response);
notifications.showStatusMessage("Thanks for your message. I'll get back to you soon.");
$("form input, form textarea").val("");
}
});
});
});
</script>
In the PHP file, we set the username and password as the access key ID and access key secret. Make sure the region variable matches what you’re using in AWS. It would be best practice to record the message to a database, too – the WordPress plugin from earlier handles that out-of-the-box; a sketch of that follows the code below. We might also send an additional email to the user, letting them know their note was received.
Back-end, server-side:
<?php
//send email via amazon ses
use PHPMailer\PHPMailer\PHPMailer;
use PHPMailer\PHPMailer\Exception;
$name = "";
$email = "";
$phone = "";
$message = "";
if(isset($_POST["name"])){
$name = $_POST["name"];
}
if(isset($_POST["email"])){
$email = $_POST["email"];
}
if(isset($_POST["phone"])){
$phone = $_POST["phone"];
}
if(isset($_POST["message"])){
$message = $_POST["message"];
}
$region = "us-east-1"
$aws_key_id = "xxx"
$aws_key_secret = "xxx"
require '/var/www/html/PHPMailer/src/Exception.php';
require '/var/www/html/PHPMailer/src/PHPMailer.php';
require '/var/www/html/PHPMailer/src/SMTP.php';
// Instantiation and passing `true` enables exceptions
$mail = new PHPMailer(true);
try {
if(strlen($message) > 1){
//Server settings
$mail->SMTPDebug = 2; // Enable verbose debug output
$mail->isSMTP(); // Set mailer to use SMTP
$mail->Host = 'email-smtp.' . $region . '.amazonaws.com'; // Specify main and backup SMTP servers
$mail->SMTPAuth = true; // Enable SMTP authentication
$mail->Username = $aws_key_id; // access key ID
$mail->Password = $aws_key_secret; // AWS Key Secret
$mail->SMTPSecure = 'tls'; // Enable TLS encryption, `ssl` also accepted
$mail->Port = 587; // TCP port to connect to
//Recipients
$mail->setFrom('XXX@antpace.com', 'Portfolio');
$mail->addAddress("XXX@antpace.com"); // Add a recipient
$mail->addReplyTo('XXX@antpace.com', 'Portfolio');
// Content
$mail->isHTML(true); // Set email format to HTML
$mail->Subject = 'New message from your portfolio page.';
$mail->Body = "This message was sent from: $name - $email - $phone \n Message: $message";
$mail->AltBody = "This message was sent from: $name - $email - $phone \n Message: $message";
$mail->send();
echo 'Message has been sent';
}
} catch (Exception $e) {
echo "Message could not be sent. Mailer Error: {$mail->ErrorInfo}";
}
?>
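As mentioned above, it would be best practice to record each message to a database as well. A minimal sketch, assuming a local `messages` table and placeholder connection details:
<?php
//log the submission so messages survive any email delivery problems
$conn = new PDO("mysql:host=localhost;dbname=portfolio", "db_user", "db_password");
$statement = $conn->prepare("INSERT INTO `messages` (name, email, phone, message) VALUES (:name, :email, :phone, :message)");
$statement->bindParam(':name', $name);
$statement->bindParam(':email', $email);
$statement->bindParam(':phone', $phone);
$statement->bindParam(':message', $message);
$statement->execute();
?>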
The technical side of sending email from software is straightforward. The strategy can be fuzzy and requires planning. Transactional emails have an advantage over marketing emails. Since they are triggered by a user’s action, they have more meaning. They have higher open rates, and in that way afford an opportunity.
How can we optimize the usefulness of these emails? Be sure to create a recognizable voice in your communication that resonates with your brand. Provide additional useful information, resources, or offers. These kinds of emails are an essential part of the user experience and your product’s development.