Website Redesign with WordPress Gutenberg via AWS Lightsail


AWS Lightsail

Dan owns a theatre costume shop called “On Cue Costumes” in Montclair, New Jersey. Like many of my clients, he had an existing website that needed to be modernized. It needed to be redesigned and made responsive. This project reminded me a lot of an art website I built a few years ago. It had lots of pages, content, and images that needed to be transferred. And, like that project, I chose Amazon Web Services as our cloud provider and WordPress as the content management system. Last time, I installed WordPress and all of the server software manually on AWS EC2. This time I decided to use AWS Lightsail to set up a managed WordPress instance.

selecting wordpress for aws lightsail

This greatly reduced the time it took to get things up and running. It also provides cost predictability (EC2 is pay-as-you-go) and automatic backups. (When running WordPress on EC2, I would run nightly SQL dumps as a redundancy mechanism.) A month before starting work on this website, I purchased a new domain in my personal AWS account.

When the time came to begin work, I created a new AWS account for Dan’s business. Route 53 made it easy to transfer the domain name. Then, I created a new hosted zone for that domain and pointed the A record at the Lightsail instance’s IP address. Setup was easy enough not to need a walk-through, but just to be sure, I watched a YouTube tutorial first. I’m glad I did, because the top comment asked, “Why did you not set a static IP address before creating your A records?”

The documentation mentions that “The default dynamic public IP address attached to your Amazon Lightsail instance changes every time you stop and restart the instance”. To keep that from becoming a problem, I created a static IP address, attached it to the instance, and updated the A record to use the new address.
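
The same fix can be made from the AWS CLI, if you prefer. A minimal sketch, assuming the CLI is configured for the account and using placeholder resource names:

# Allocate a static IP and attach it to the Lightsail instance
# (the IP and instance names below are placeholders)
aws lightsail allocate-static-ip --static-ip-name StaticIp-OnCue
aws lightsail attach-static-ip --static-ip-name StaticIp-OnCue --instance-name WordPress-1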

Here is a before shot of the business’s website:

The legacy site, “OnCueCostumesOnline.com”

WordPress Gutenberg

Now that the infrastructure was set up, I was able to log in to wp-admin. The Lightsail dashboard provides the default credentials along with SSH access details. I used a premium theme called “Movie Cinema Blocks”. It has an aesthetic that fits the theatre look and made sense for this business.

premium wordpress theme
The original layout provided by the theme

The Gutenberg editor made it easy to craft the homepage with essential information and photos. The theme came with a layout pattern that I adjusted to highlight the content in a meaningful way. In a few places where I wanted to combine existing photos, I used the built-in collage utility found in the Google Photos web app.

I created a child theme after connecting via sFTP and edited the templates to remove the comment sections. I added custom CSS to keep things responsive:

@media(max-width: 1630px){
	.navigation-column .wp-block-navigation{
		justify-content: center !important;
	}
}
@media(max-width: 1550px){
	.navigation-columns .menu-column{
		flex-basis: 45% !important;
	}
}

@media(max-width: 1000px){
	.hide-on-mobile{
		display: none;
	}
	.navigation-columns{
		flex-wrap: wrap !important;
	}

	.navigation-column{
		flex-basis: 100% !important;
		flex-grow: 1 !important;
	}
	.navigation-column h1{
		text-align: center;
	}
	.navigation-column .header-download-button{
		justify-content: center !important;
	}
	.navigation-column .wp-block-navigation a{
		font-size: 14px;
	}
}

@media (min-width: 1000px) {
    .hide-on-desktop {
        display: none !important;
    }
}

.homepage-posts img{
	border-radius: 10px;
}
.homepage-posts a{
	text-decoration: none;
}

@media (max-width: 780px) {
	.mobile-margin-top{ 
		margin-top: 32px !important;
	}
}

h6 a{
	text-decoration:none !important;
}

.entry-content a{
	text-decoration: none !important;
}

input[type="search"]{
       background-color: white !important;
}
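
The child theme itself only needs a style.css header that names the parent theme, plus a functions.php that enqueues both stylesheets. A minimal sketch (the handle names are placeholders; the Template slug matches this theme’s folder):

/* style.css of the child theme */
/*
Theme Name: Movie Cinema Blocks Child
Template:   movie-cinema-blocks
*/

<?php
// functions.php of the child theme
add_action('wp_enqueue_scripts', function () {
	// Load the parent stylesheet first, then the child overrides
	wp_enqueue_style('parent-style', get_template_directory_uri() . '/style.css');
	wp_enqueue_style('child-style', get_stylesheet_directory_uri() . '/style.css', array('parent-style'));
});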

I kept the existing color palette and used a free web font, called Peace Sans, for the site logo:

@font-face {
  font-family: 'peacesans';
  src: url('https://www.oncuecostumes.com/wp-content/themes/movie-cinema-blocks/assets/fonts/peacesans.ttf') format('truetype');
  font-weight: normal;
  font-style: normal;
}

.logo-font {
  font-family: 'peacesans', Lexend Deca, sans-serif;
}

The biggest challenge was importing all of the content from the existing website. We added each costume show as a post and used categories for taxonomy. The “Latest Posts” block in Gutenberg allowed us to showcase the content organized by that classification.

The original website displayed a simple list of all records. It had separate pages only for shows that have images; the others were just listed as plain text. After manually adding all of the hyperlinked content, over one hundred entries remained that were only titles.

list of shows

To remove the hyperlinked entries, so that I could copy/paste the rest, I used some plain JavaScript in the browser console:

// Get all anchor elements
const allLinks = document.getElementsByTagName('a');

// Convert the HTMLCollection to an array for easier manipulation
const linksArray = Array.from(allLinks);

// Remove each anchor element from the DOM
linksArray.forEach(link => {
    link.parentNode.removeChild(link);
});

It was easier to copy the remaining entries from the DOM inspector than from the UI. But that left me with <li> markup that needed to be deleted on every line. I pasted the result into Sublime Text, used the Selection -> Split into Lines control to clean them all up at once, and found an online tool to quickly remove the empty lines. I saved the result as a plain text file. Then, I used a plugin called “WordPress Importer” to upload each title as an empty post.

Contact Form Email

The contact page needed a form that allows users to send a message to the business owner. To create the form, I used the “Contact Form 7” plugin. “WP Mail SMTP” let us integrate AWS SES to power the transactional messages. A few roadblocks arose during this integration.

Authoritative Hosted Zone

The domain verification failed even though I had added the appropriate DKIM CNAME records to the hosted zone I created in this new account. At this point, I still needed to verify a sending address. The business had been using an @yahoo.com address for email, so I decided to use AWS WorkMail to create a simple info@ inbox. (In the past, Google Workspace has been my go-to.) This gave me a pivotal clue to resolving the domain verification problem.

aws workmail warning

At the top of the WorkMail domain page, it warned that the domain’s hosted zone was not authoritative. It turns out that after I transferred this domain from my personal account to the new one, the nameserver records continued to point to the old hosted zone. I deleted the hosted zone in the original account and updated the NS records on the domain in the new account to point to the new hosted zone. Minutes later, the domain passed verification.

Sandbox limits

The initial SES sandbox environment has sending limits: only 200 messages per day, and only to verified addresses. Since messages were only being sent to the business owner, we could have just verified his receiving email address. The main issue was that the default WordPress admin email address was literally user@example.com. When I tried updating this to a verified address, WordPress would also try to send an email to user@example.com, causing SES to fail. I requested production access, and in the meantime tried to update that generic admin address in the database directly.
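
Since WordPress keeps that setting in the options table, the direct update is a one-line query. A sketch, assuming the default wp_ table prefix and using the info@ address as a placeholder:

UPDATE wp_options SET option_value = 'info@oncuecostumes.com' WHERE option_name = 'admin_email';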

SSH Tunnel and Database Access

When looking at the server files in FileZilla, I noticed that phpMyAdmin was installed. I tried to access it from a web browser, but it warned that it was only accessible from localhost. I set up an SSH tunnel to access phpMyAdmin through my computer. From my Mac Terminal, I ran:

ssh -L 8888:localhost:80 <your-username>@<your-server-ip> -i <your-ssh-key>

It told me that there was a bad permissions issue with the key file (even though this was the same file I had been using for sFTP). I fixed it from the command line:

chmod 600 onCueCostume-LightsailDefaultKey-us-east-1.cer

Once connected, I could access the server’s installation of phpMyAdmin from this browser URL: `http://localhost:8888/phpmyadmin`

Presented with a login screen, I didn’t know what credentials to use. From the Lightsail dashboard, I connected to the server via the web terminal and looked up the MySQL password with this command: `cat /home/bitnami/bitnami_credentials`

I assumed the username would be root – but no, it turned out to be user.

Production Access

After requesting production access, AWS responded, “It looks like we previously increased the sending limits for at least one other AWS account that you do not appear to be using. Before we make a decision about your current request, we would like to know why you cannot send from your existing unused account.”

What did this mean? I think it happened because I used my credit card on this new account, before switching it over to the business owner. This probably set off an automatic red flag on their end. Interestingly, the correspondence said “To protect our methods, we cannot provide any additional information about how we identified the related accounts.”

I responded and explained the situation honestly, “I am not aware of another account that I am not using. I do have my own AWS account for my web design business, but it is unrelated to this account – and it is not unused.” Within a few hours, production access was granted.

New Website

The original scope of this project was to build a new website showcasing content that already existed.  We were able to finalize the layout and design quickly. Here is a screenshot of the homepage:

homepage design of a wordpress website

To write this article, I referred to my ChatGPT chat log as a source of journal notes.

Restoring a website when EC2 fails

restore a website with these steps

Recently, one of my websites went down. After noticing, I checked my EC2 dashboard and saw that the instance had stopped. AWS had emailed me to say that, due to a physical hardware issue, it had been terminated. When an instance is terminated, all of its data is lost. Luckily, all of my data is backed up automatically every night.

Since I don’t use RDS, I have to manage data redundancy manually. After a few disasters, I came up with a solution: a nightly cron job runs a shell script that takes a MySQL dump and uploads it to S3.

As long as I have the user-generated data, everything else is replaceable. The website that went down is a fitness tracking app; every day, users record their martial arts progress. Below are the ten steps I took to bring everything back up.

  1. Launch a new EC2 instance
  2. Configure the security group for that instance –  I can just use the existing one
  3. Install Apache and MariaDB
  4. Secure the database
  5. Install PhpMyAdmin – I use this tool to import the .sql file in the next step
  6. Import the database backup – I downloaded the nightly .sql file dump from my S3 repo
  7. Setup automatic backups, again
  8. Install WordPress and restore the site’s blog
  9. Configure Route 53 (domain name) and SSL (https) – make the website live again
  10. Quality Assurance – “smoke test” everything to make sure it all looks correct

Use this as a checklist to make sure you don’t forget any steps. Read through the blog posts that I hyperlinked to get more details.

Build an image upload gallery

media gallery upload with s3

Allowing users to upload images to your app can be a pivotal feature. Many digital products rely on it. This post will show you how to do it using PHP and AWS S3.

image upload gallery

After launching version 1.0 of SplitWit, I decided to enhance the platform by adding features. An important A/B experiment involves swapping images. This is particularly useful on ecommerce stores that sell physical products.

Originally, users could only swap images by entering a URL. To the average website owner, this would seem lame. For SplitWit to be legit, adding images on the fly had to be a feature.

I wrote three scripts – one to upload files, one to fetch them, and one to delete them. Each leverages a standalone PHP class written by Donovan Schönknecht, making it easy to interact with AWS S3. All you’ll need is your S3 bucket name and IAM user credentials. The library provides methods to do everything you need.

AWS S3

Amazon S3 stands for “Simple Storage Service”. It provides data storage that is scalable, secure, highly available, and performant.

A new bucket can be created directly from the management console.
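
It can also be created from the AWS CLI (the bucket name here is a placeholder):

aws s3 mb s3://my-image-upload-bucket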

create new s3 bucket

You’ll want to create a new IAM user to programmatically interact with this bucket. Make sure that new user is added to a group that includes the permission policy “AmazonS3FullAccess”. You can find the access key ID and secret in the “Security credentials” tab.

IAM user in AWS with permissions for S3

Uploading image files

When users select an image in the visual editor, they are shown a button to upload a new file. Clicking on it opens the gallery modal.

<div id="image-gallery-modal" class="modal image-gallery-modal" style="display: none;">
  <div class="modal-content">
    <h3>Your image gallery</h3>
    <p><strong>Upload a new file:</strong></p>
    <input class="uploadimage" id="uploadimage" type="file" name="uploadimage" />
    <p class="display-none file-error"></p>
    <div><hr /></div>
    <div class="image-gallery-content"></div>
    <p class="loading-images"><i class="fas fa-spinner fa-spin"></i> Loading images...</p>
  </div>
</div>

The HTML file-type input element presents a browser dialog to select a file. Once selected, the image data is posted to the S3 upload script. The newly uploaded image then replaces the existing image in the visual editor. 

$(".uploadimage").change(function(){
    
    var file = $(this)[0].files[0];   
    var formData = new FormData();
    formData.append("file", file, file.name);
    formData.append("upload_file", true);         

    $.ajax({
      type: "POST",
      url: "/s3-upload.php",
      xhr: function () {
        var myXhr = $.ajaxSettings.xhr();
        if (myXhr.upload) {
            // myXhr.upload.addEventListener('progress', that.progressHandling, false);
        }
        return myXhr;
      },
      success: function (response) {
        console.log(response);
        
        document.getElementById("uploadimage").value = "";

        if(response !== "success"){
          $(".file-error").text(response).show();
          setTimeout(function(){ $(".file-error").fadeOut();}, 3000)
          return;
        }
        
        $("#image-gallery-modal").hide();
        loadS3images();
        var newImageUrl = "https://splitwit-image-upload.s3.amazonaws.com/<?php echo $_SESSION['userid'];?>/" + file.name;
        $("input.img-url").val(newImageUrl);
        $(".image-preview").attr("src", newImageUrl).show();
        $(".image-label .change-indicator").show();

        //update editor (right side)
        var selector = $(".selector-input").val();
        var iFrameDOM = $("iframe#page-iframe").contents()
        if($(".element-change-wrap").is(":visible")){
          iFrameDOM.find(selector).attr("src", newImageUrl).attr("srcset", "");
          $(".element-change-save-btn").removeAttr("disabled");
        }
        if($(".insert-html-wrap").is(":visible")){
          var position = $(".position-select").val();
          var htmlInsertText = "<img style='display: block; margin: 10px auto;' class='htmlInsertText' src='"+newImageUrl+"'>";
          iFrameDOM.find(".htmlInsertText").remove();
          if(position == "before"){
            iFrameDOM.find(selector).before(htmlInsertText);
          }
          if(position == "after"){
            iFrameDOM.find(selector).after(htmlInsertText);
          }
        }
      },
      error: function (error) {
        console.log("error: ");
        console.log(error);
      },
      async: true,
      data: formData,
      cache: false,
      contentType: false,
      processData: false,
      timeout: 60000
  });

});

The upload script puts files in the same S3 bucket, under a separate sub-directory for each user account ID. It checks the MIME type on the file to make sure an image is being uploaded.

<?php
require 's3.php';
 
$s3 = new S3("XXXX", "XXXX"); //access key ID and secret

// echo "S3::listBuckets(): ".print_r($s3->listBuckets(), 1)."\n";

$bucketName = 'image-upload';

if(isset($_FILES['file'])){
	$file_name = $_FILES['file']['name'];   
	$uploadFile = $_FILES['file']['tmp_name']; 

	if ($_FILES['file']['size'] > 5000000) { //5 megabyte
		echo 'Exceeded filesize limit.';
		die();
	}
	$finfo = new finfo(FILEINFO_MIME_TYPE);
	if (false === $ext = array_search(
			$finfo->file($uploadFile),
			array(
				'jpg' => 'image/jpeg',
				'png' => 'image/png',
				'gif' => 'image/gif',
			),
			true
		)) {
		if($_FILES['file']['type'] == ""){
			echo 'File format not found. Please re-save the file.';
		}else{
			echo 'Invalid file format.';
		}
		die();
	}

	//create new directory with account ID, if it doesn't already exist
	session_start();
	$account_id = $_SESSION['userid'];

	if ($s3->putObjectFile($uploadFile, $bucketName, $account_id."/".$file_name, S3::ACL_PUBLIC_READ)) {
		echo "success";
	}

}
?>

After upload, the gallery list is reloaded by the loadS3images() function.

Fetching image files from S3

When the image gallery modal first shows, that same loadS3images() runs to populate any images that have been previously uploaded.

function loadS3images(){

  $.ajax({
      url:"/s3-get-objects.php",
      complete: function(response){
        gotImages = true;
        $(".loading-images").hide();
        var data = JSON.parse(response.responseText);
        var x;
        var html = "<p><strong>Select existing file:</strong></p>";
        var l = 0;
        for (x in data) {
          l++;
          var name = data[x]["name"];
          nameArr = name.split("/");
          name = nameArr[1];
          var imgUrl = "https://splitwit-image-upload.s3.amazonaws.com/<?php echo $_SESSION['userid'];?>/" + name;
          html += "<div class='image-data-wrap'><p class='filename'>"+name+"</p><img style='width:50px;display:block;margin:10px;' src='' class='display-none'><button type='button' class='btn select-image'>Select</button> <button type='button' class='btn preview-image'>Preview</button> <button type='button' class='btn delete-image'>Delete</button><hr /></div>"
        }
        if(l){
          $(".image-gallery-content").html(html);
        }

      }
    });
}

It hits the “get objects” PHP script to pull the files in the account’s directory.

<?php
require 's3.php';
 
$s3 = new S3("XXX", "XXX"); //access key ID and secret

$bucketName = 'image-upload';
session_start();
$account_id = $_SESSION['userid'];
$info = $s3->getBucket($bucketName, $account_id);
echo json_encode($info);

?>

Existing images can be chosen to replace the one currently selected in the editor. There are also options to preview and delete.

Delete an S3 object

When the delete button is pressed for a file in the image gallery, all we need to do is pass the filename along. If the image is currently being used, we also remove it from the editor.

$(".image-gallery-content").on("click", ".delete-image", function() {
    var parent = $(this).parent();
    var filename = parent.find(".filename").text();
    var currentImageUrl = $(".img-url").val();
    if(currentImageUrl =="https://splitwit-image-upload.s3.amazonaws.com/<?php echo $_SESSION['userid'];?>/" + filename){
      $(".img-url").val(testSelectorElImage);
      $(".image-preview").attr("src", testSelectorElImage);
      var selector = $(".selector-input").val();
      var iFrameDOM = $("iframe#page-iframe").contents()
      iFrameDOM.find(selector).attr("src", testSelectorElImage);
    }
    $.ajax({
      method:"POST",
      data: { 
        'filename': filename, 
      },
      url: "/s3-delete.php?filename="+filename,
      complete: function(response){
        parent.remove();
        if(!$(".image-data-wrap").length){
          $(".image-gallery-content").html("");
        }
      }
    })

}); 

 

<?php
require 's3.php';
 
$s3 = new S3("XXX", "XXX"); //access key ID and secret

$bucketName = 'image-upload';
session_start();
$account_id = $_SESSION['userid'];
$filename = $_POST['filename'];
if ($s3->deleteObject($bucketName, $account_id."/".$filename) ){
	echo "S3::deleteObject(): Deleted file\n";
}

?>

 

Reset password flow in PHP

reset password

My email account is a skeleton key to anything online I’ve signed up for. If I forget a password, I can reset it. Implementing this feature for a web app takes just a few steps.

When users enter an incorrect password, I prompt them to reset it.

incorrect password warning

Clicking the reset link calls a “forgot password” back-end service.

$(document).on("click",".reset-pw-cta", function(){
	var email = $(this).attr("data");
	$.ajax({
		url:"/service-layer/user-service.php?method=forgotPw&email="+email,
		complete:function(response){
			console.log(response.responseText)
			window.showStatusMessage("A password reset email has been sent to " + email);
		}
	})
});

A token is created in our ‘password recovery’ database table. That token is related back to an account record.

password recovery database table

As a security practice, recovery tokens are deleted nightly by a cron job.
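
The cleanup itself is a single query. A sketch, assuming expire_date holds a Unix timestamp (as it does in the insert below):

DELETE FROM passwordrecovery WHERE expire_date < UNIX_TIMESTAMP();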

An email is then sent containing a “reset password” link embedded with the token. AWS SES and PHPMailer are used to send that message.

function forgotPw(){
	$email = $this->email;
	$row = $this->row;
	$number_of_rows = $this->number_of_rows;
	$conn = $this->connection;
	if($number_of_rows > 0){
		$this->emailFound = 1;
		$userid = $row['ID'];
		$this->userid = $userid;

		//create reset token
		$timestamp = time();
		$expire_date = time() + 24*60*60;
		$token_key = md5($timestamp.md5($email));
		$statement = $conn->prepare("INSERT INTO `passwordrecovery` (userid, token, expire_date) VALUES (:userid, :token, :expire_date)");
		$statement->bindParam(':userid', $userid);
		$statement->bindParam(':token', $token_key);
		$statement->bindParam(':expire_date', $expire_date);
		$statement->execute();

		//send email via amazon ses
		include 'send-email-service.php';	
		$SendEmailService = new SendEmailService();

		$reset_url = 'https://www.bjjtracker.com/reset-pw.php?token='.$token_key;
		$subject = 'Reset your password.';
		$body    = 'Click here to reset your password: <a href="'.$reset_url.'">'. $reset_url .'</a>';
		$altBody = 'Click here to reset your password: ' . $reset_url;
		$this->status = $SendEmailService -> sendEmail($subject, $body, $altBody, $email);


	}else{
		$this->emailFound = 0;
	}
}

That link navigates to a page with a “reset password” form.

reset password form

Upon submission the new password and embedded token are passed along to the server.

$(document).ready(function() {
    $(".reset-button").click(function(){
      var newPassword = $(".password-reset-input").val();
      if(newPassword.length < 1){
        var notifications = new UINotifications();
        notifications.showStatusMessage("Please don't leave that blank :( ");
        return;
      }
      var data = $(".resetpw-form").serialize();
      $.ajax({
        url: "/service-layer/user-service.php?method=resetPw&token=<?php echo $_GET['token']; ?>",
        method: "POST",
        data: data,
        complete: function(response){
          // console.log(response);
          window.location = "/";
        }
      });
    });
    $("input").keypress(function(e) {
      if(e.which == 13) {
        e.preventDefault();
          $(".reset-button").click();
      }
  });
  

});

The correct recovery record is selected by using the token value. That provides the user ID of the account that we want to update. The token should be deleted once the database is updated.

function resetPw(){
	$conn = $this->connection;
	$token = $_GET['token'];
	$password = $_POST['password'];
	$passwordHash = password_hash($password, PASSWORD_DEFAULT);
	$statement = $conn->prepare("SELECT * FROM `passwordrecovery` where token = ?");
	$statement->execute(array($token));
	$row = $statement->fetch(PDO::FETCH_ASSOC);
	$userid = $row['userid'];

	$update_statement = $conn->prepare("UPDATE `users` SET password = ? where ID = ?");
	$update_statement->execute(array($passwordHash, $userid));

	$delete_statement = $conn->prepare("DELETE FROM `passwordrecovery` where token = ?");
	$delete_statement->execute(array($token));
}

This is a secure and user-friendly workflow to allow users to reset their passwords.

Error establishing connection to database – WordPress solution

solutions for wordpress database errors

A crashed database is a problem I’ve encountered across multiple WordPress websites. When trying to load the site you’re faced with a dreaded “Error establishing a database connection” message. Restarting the DB service usually clears things up. But, sometimes it won’t restart at all – which is why I started automating nightly data dumps to an S3 bucket.

Recently, one particular site kept going down unusually often. I assumed it was happening due to low computing resources on the EC2 t3.micro instance. I decided to spin up a new box with more RAM (a t3.small) and migrate the entire WordPress setup.

Since I couldn’t be sure of what was causing the issue, I needed a way to monitor the health of my WordPress websites. I decided to write code that would periodically ping the site, and if it is down send an email alert and attempt to restart the database.

warning message when a website can't connect to the database

The first challenge was determining the status of the database. Even if it crashed, my site would still return a 200 OK response. I figured I could use cURL to get the homepage content, and then strip out any HTML tags to check the text output. If the text matched the error message, I could take further action.

Next, I needed to programmatically restart MySQL. This is the command I run to do it manually: sudo service mariadb restart

After doing some research, I found that I could use shell_exec() to run it from my PHP code. Unfortunately, Apache wouldn’t let the web server user (which has no password) execute that without special authorization. I moved the command to its own restart-db.sh file, and allowed my code to run it by adding this to the visudo file: apache ALL=NOPASSWD: /var/www/html/restart-db.sh

I also needed to make the file executable by adjusting permissions: sudo chmod +x /var/www/html/restart-db.sh

Once those pieces were configured, my code would work:

<?php

$url = "http://www.antpace.com/blog";
$curl_connection = curl_init();

curl_setopt($curl_connection, CURLOPT_URL, $url);

curl_setopt($curl_connection, CURLOPT_RETURNTRANSFER, true);
$curl_response = curl_exec($curl_connection);
$plain_text = strip_tags($curl_response);

if(strpos($plain_text, "Error establishing a database connection") !== false){
	echo "The DB is down.";

	//restart the database
	shell_exec('sudo /var/www/html/restart-db.sh');

	//send notification email
	include 'send-email.php';
	send_email();
}else{
	echo "The DB is healthy.";
}

?>

You can read more about how to send a notification email in another post that I wrote on this blog.

Create the cron job

A cron job is a scheduled task in Linux that runs at set times. For my PHP code to effectively monitor the health of the database, it needs to run often. I decided to execute it every five minutes. Below are three shell commands to create a cron job.

The first creates the cron file for the root user:

sudo touch /var/spool/cron/root

The next appends my cron command to that file:

echo "*/5 * * * * sudo wget -q 127.0.0.1/check-db-health.php" | sudo tee -a /var/spool/cron/root

And, the last sets the cron software to listen for that file:

sudo crontab /var/spool/cron/root

Alternatively, you can create, edit, and set the cron file directly by running sudo crontab -e. The contents of the cron file can be confirmed by running sudo crontab -l.

 

Migrate a WordPress Site to AWS

Migrate a WordPress site to AWS

In a previous article I discussed launching a website on AWS. The project was framed as transferring a static site from another hosting provider. This post will extend that to migrating a dynamic WordPress site with existing content.

Install WordPress

After following the steps to launch your website to a new AWS EC2 instance, you’ll be able to connect via sFTP. I use FileZilla as my client. You’ll need the hostname (public DNS), username (ec2-user in this example), and key file for access. The latest version of WordPress can be downloaded from wordpress.org. Once connected to the server, I copy those files to the root web directory for my setup: /var/www/html

Make sure the wp-config.php file has the correct details (username, password) for your database. You should use the same database name from the previous hosting environment.
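
Those details live near the top of wp-config.php. A sketch with placeholder values:

// wp-config.php database settings (values are placeholders)
define( 'DB_NAME', 'wordpress_db' );
define( 'DB_USER', 'wp_user' );
define( 'DB_PASSWORD', 'your-password-here' );
define( 'DB_HOST', 'localhost' );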

Data backup and import

It is crucial to be sure we don’t lose any data. I make a MySql dump of the current database and copy the entire wp-content folder to my local machine. I’m careful to not delete or cancel the old server until I am sure the new one is working identically.
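
A sketch of how those backups can be pulled down, assuming SSH access to the old host and placeholder names:

# On the old server: dump the database (database name is a placeholder)
mysqldump -u root -p wordpress_db > wordpress_db.sql

# From my local machine: download the dump and the wp-content folder
# (host and paths are placeholders)
scp user@old-host:~/wordpress_db.sql .
scp -r user@old-host:/path/to/wp-content .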

Install phpMyAdmin

After configuring my EC2 instance, I install phpMyAdmin so that I can easily import the sql file.

sudo yum install php-mbstring -y
sudo systemctl restart httpd
sudo systemctl restart php-fpm
cd /var/www/html
wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.gz
mkdir phpMyAdmin && tar -xvzf phpMyAdmin-latest-all-languages.tar.gz -C phpMyAdmin --strip-components 1
rm phpMyAdmin-latest-all-languages.tar.gz
sudo systemctl start mariadb

The above Linux commands install the database management software in the root directory of the new web server. It is accessible from a browser via yourdomainname.com/phpMyAdmin. This tool is used to upload the data to the new environment.

phpMyAdmin import screen

Create the database and make sure the name matches what’s in wp-config.php from the last step. Now you’ll be able to upload your .sql file.

Next, I take the wp-content folder that I stored on my computer, and copy it over to the new remote. At this point, the site homepage will load correctly. You might notice other pages won’t resolve, and will produce a 404 “not found” response. That error has to do with certain Apache settings, and can be fixed by tweaking some options.

Server settings

With my setup, I encountered the above issue with page permalinks. WordPress relies on the .htaccess file to route pages and posts with their correct URL slugs. By default, this Apache setup does not allow its settings to be overridden by .htaccess directives. To fix this issue, the httpd.conf file needs to be edited. Mine was located in this directory: /etc/httpd/conf

You’ll need to find (or create) a section that corresponds to the default document root: <Directory “/var/www/html”></Directory>. In that block, there will be an AllowOverride directive that is set to “None”. That needs to be changed to “All” for our configuration file to work.
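
After the edit, the relevant block looks something like this (other directives in the block left as they were):

<Directory "/var/www/html">
    # ...
    AllowOverride All
</Directory>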

apache config settings found in the HTTPD conf file

Final steps

After all the data and content has been transferred, do some smoke-testing. Try out as many pages and features as you can to make sure the new site is working as it should. Make sure you keep a backup of everything some place secure (I use an S3 bucket). Once satisfied, you can switch your domain’s A records to point at the new box. Since the old and new servers will appear identical, I add a console.log(“new server”) to the header file. That allows me to tell when the DNS update has finally resolved. Afterwards, I can safely cancel or decommission the old web hosting package.

Don’t forget to make sure SSL is set up!

Updates

AWS offers an entire suite of services to help businesses migrate. AWS Application Migration Service is a great choice to “simplify and expedite your migration while reducing costs”.

Upgrade PHP

In 2023, I used this blog post to stand up a WordPress website. I was using a theme called Balasana. When I would try to set the “Site Icon” (favicon) from the “customize” UI, I would receive a message stating that “there has been an error cropping your image”. After a few Google searches, and also asking ChatGPT, the answer seemed to be that GD (a graphics library) was either not installed or not working properly. I played with that for almost an hour, but with no success. GD was installed, and so was ImageMagick (a backup graphics library that WordPress falls back on).

The correct answer was that I needed to upgrade PHP. The AWS Linux 2 image comes with PHP 7.2; upgrading to version 7.4 did the trick. I was able to make that happen, very painlessly, by following a blog post from Gregg Borodaty. The title of his post is “Amazon Linux 2: Upgrading from PHP 7.2 to PHP 7.4” (thanks Gregg).
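
A rough sketch of that upgrade path on Amazon Linux 2 (the package list here is abbreviated; see Gregg’s post for the full procedure):

# Switch the amazon-linux-extras topic from PHP 7.2 to 7.4
sudo amazon-linux-extras disable php7.2
sudo amazon-linux-extras enable php7.4
sudo yum clean metadata
# Reinstall the PHP packages you need (abbreviated list)
sudo yum install -y php-cli php-fpm php-mysqlnd php-gd php-mbstring php-xml
sudo systemctl restart httpd php-fpm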

Update

My recommendation, as of 2024, is to use a managed WordPress service. I wrote a post about using AWS Lightsail for that purpose: https://www.antpace.com/blog/website-redesign-with-wordpress-gutenberg-via-aws-lightsail/

 

Automatic MySQL dump to S3

Automatic MySQL dump to S3

I have had some lousy luck with databases. In 2018, I created a fitness app for martial artists, and quickly gained over a hundred users in the first week. Shortly after, the server stopped resolving and I didn’t know why. I tried restarting it, but that didn’t help. Then, I stopped the EC2 instance from my AWS console. Little did I know, that would wipe all of the data from that box. Ouch.

Recently, a client let me know that their site wasn’t working. A dreaded “error connecting to the database” message was all that resolved. I’d seen this one before – no sweat. Restarting the database usually does the trick: “sudo service mariadb restart”. The command line barked back at me: “Job for mariadb.service failed because the control process exited with error code.”

Uh-oh.

The database was corrupted. It needed to be deleted and reinstalled. Fortunately, I just happened to have a SQL dump for this site saved on my desktop. This was no way to live – in fear of the whims of servers.

Part of the issue is that I’m running MySQL on the same EC2 instance as the web server.  A more sophisticated architecture would move the database to RDS. This would provide automated backups, patches, and maintenance. It also costs more.

To keep costs low, I decided to automate MySQL dumps and upload them to an S3 bucket. S3 storage is cheap (a few cents per GB each month), and data transfer from EC2 is free.

Deleting and Reinstalling the Database

If your existing database did crash and become corrupt, you’ll need to delete and reinstall it. To reset the database, I SSH’d into my EC2 instance. I navigated to `/var/lib/mysql`

cd /var/lib/mysql

Next, I deleted everything in that folder:

sudo rm -r *

Finally, I ran a command to reinitialize the database directory

mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql

Reference: https://serverfault.com/questions/812719/mysql-mariadb-not-starting

Afterwards, you’ll be prompted to reset the root password.

A CLI prompt to reset the root password after installing mariadb

You’ll still need to import your sql dump backups. I used phpMyAdmin to do that.

Scheduled backups

AWS Setup

The first step was to get things configured in my Amazon Web Services (AWS) console. I created a new S3 bucket. I also created a new IAM user, and added it to a group that included the permission policy “AmazonS3FullAccess”.

AWS policy to allow full S3 access
This policy provides full access to all buckets.

I went to the security credentials for that user, and copied down the access key ID and secret. I would use that info to access my S3 bucket programmatically. All of the remaining steps take place from the command line, via SSH, against my server. From a Mac terminal, you could use a command like this to connect to an EC2 instance:

ssh -i /Users/antpace/Documents/keys/myKey.pem ec2-user@ec2-XX-XX-XX.us-west-2.compute.amazonaws.com

Once connected, I installed the software to allow programmatic access to AWS:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Here is the reference for installing the AWS CLI on Linux.

Shell script

Shell scripts are programs that can be run directly by Linux. They’re great for automating tasks. To create the file on my server I ran: “nano backup.sh”. This assumes you already have the nano text editor installed. If not: “sudo yum install nano” (or, “sudo apt install nano”, depending on your Linux flavor).

Below is the full code I used. I’ll explain what each part of it does.

Credit: This code was largely inspired by a post from Marcelo Gornstein.

#!/bin/bash
AWS_ACCESS_KEY_ID=XXX \
AWS_SECRET_ACCESS_KEY=XXX \
S3_BUCKET=myBucketsName \
MYSQL_HOST=localhost \
MYSQL_PORT=3306 \
MYSQL_USER=XXX \
MYSQL_PASS=XXX \
MYSQL_DB=XXX \

cd /tmp
file=${MYSQL_DB}-$(date +%a).sql
mysqldump \
  --host ${MYSQL_HOST} \
  --port ${MYSQL_PORT} \
  -u ${MYSQL_USER} \
  --password="${MYSQL_PASS}" \
  ${MYSQL_DB} > ${file}
if [ "${?}" -eq 0 ]; then
  gzip ${file}
  AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} aws s3 cp ${file}.gz s3://${S3_BUCKET}
  rm ${file}.gz
else
  echo "sql dump error"
  exit 1
fi

The first line tells the system which interpreter to use: “#!/bin/bash”. Bash is a widely used shell scripting language. The next eight lines are variables that contain details about my AWS S3 bucket and the MySQL database connection.

After switching to a temporary directory, the filename is built. The name of the file is set to the database’s name plus the day of the week. If that file already exists (from the previous week), it’ll be overwritten. Next, the SQL file is created using mysqldump and the database connection variables from above. Once that operation is complete, we zip the file, upload it to S3, and delete the zip from our temp folder.

If the mysqldump operation fails, we spit out an error message and exit the program. (Exit code 1 is a general catchall for errors. Anything other than 0 is considered an error. Valid error codes range between 1 and 255.)

Before this shell script can be used, we need to change its file permissions so that it is executable: “chmod +x backup.sh”

After all of this, I ran the file manually, and made sure it worked: “./backup.sh”

Sure enough, I received a success message. I also checked the S3 bucket and made sure the file was there.

S3 file dump

Scheduled Cronjob

The last part is to schedule this script to run every night. To do this, we’ll edit the Linux crontab file: “sudo crontab -e”. This file controls cronjobs – which are scheduled tasks that the system will run at set times.

The file opened in my terminal window using the vim text editor – which is notoriously harder to use than the nano editor we used before.

I had to hit ‘i’ to enter insertion mode. Then I right clicked, and pasted in my cronjob code. Then I pressed the escape key to exit insertion mode. Finally, I typed “wq!” to save my changes and quit.

Remember how crontab works:

minute | hour | day-of-month | month | day-of-week

I set the script to run, every day, at 2:30am:

30 2 * * * sudo /home/ec2-user/backup.sh

And that’s it. I made sure to check the next day to make sure my cronjob worked (it did). Hopefully now, I won’t lose production data ever again!

Updates

Request Time Too Skewed (update)

A while after setting this up, I randomly checked my S3 buckets to make sure everything was still working. Although it had been for most of my sites, one had not been backed up in almost two months! I shelled into that machine and tried running the script manually. Sure enough, I received an error: “An error occurred (RequestTimeTooSkewed) when calling the PutObject operation: The difference between the request time and the current time is too large.”

I checked the operating system’s current date and time, and it was off by 5 days. I’m not sure how that happened. I fixed it by installing and running “Network Time Protocol”:

sudo yum install ntp
sudo ntpdate ntp.ubuntu.com

After that, I was able to run my backup script successfully, without any S3 errors.

 


Nano text-editor tip I learned along the way:

You can delete chunks of text content using Nano. Use CTRL + Shift + 6 to enter selection mode, move the cursor to expand the block, and press CTRL + K to delete it.

Additional References:

https://www.javatpoint.com/steps-to-write-and-execute-a-shell-script

https://www.sitepoint.com/cron-jobs/

Secure a Website with SSL and HTTPS on AWS

Secure a website with SSL and HTTPS on AWS

My last post was about launching a website onto AWS. This covered launching a new EC2 instance, configuring a security group, installing LAMP software, and pointing a domain at the new instance. The only thing missing was to configure SSL and HTTPS.

Secure Sockets Layer (SSL) encrypts traffic between a website and its server. HTTPS is the protocol to deliver secured data via SSL to end-users.

In my last post, I already allowed all traffic through port 443 (the port that HTTPS uses) in the security group for my EC2 instance. Now I’ll install software to provision SSL certificates for the server.

Certbot

Certbot is free software that will communicate with Let’s Encrypt, an SSL certificate authority, to automate the management of encryption certificates.

Before downloading and installing Certbot, we’ll need to install some dependencies (Extra Packages for Enterprise Linux). SSH into the EC2 instance that you want to secure, and run this command in your home directory (/home/ec2-user):

sudo wget -r --no-parent -A 'epel-release-*.rpm' http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/

Then install it:

sudo rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm

And enable it:

sudo yum-config-manager --enable epel*

Now, we’ll need to edit the Apache (our web hosting software) configuration file. Mine is located here: /etc/httpd/conf/httpd.conf

You can use the Nano CLI text editor to make changes to this file by running:

sudo nano /etc/httpd/conf/httpd.conf

Scroll down a bit, and you’ll find a line that says “Listen 80”. Paste these lines below it (obviously, changing antpace.com to your own domain name):

<VirtualHost *:80>
    DocumentRoot "/var/www/html"
    ServerName "antpace.com"
    ServerAlias "www.antpace.com"
</VirtualHost>

Make sure you have an A record (via Route 53) for both yourwebsite.com AND www.yourwebsite.com with the value set as your EC2 public IP address.

After saving, you’ll need to restart the server software:

sudo systemctl restart httpd

Now we’re ready for Certbot. Install it:

sudo yum install -y certbot python2-certbot-apache

Run it:

sudo certbot

Follow the prompts as they appear.

Automatic renewal

Finally, schedule an automated task (a cron job) to renew the encryption certificate as needed. If you don’t do this part, HTTPS will fail for your website after a few months. Users will receive an ugly warning, telling them that your website is not secure. Don’t skip this part!

Run this command to open your cron file:

sudo nano /etc/crontab

Schedule Certbot to renew every day, at 4:05 am:

05 4 * * * root certbot renew --no-self-upgrade

Make sure your cron daemon is running:

sudo systemctl restart crond

That’s it! Now your website, hosted on EC2, will support HTTPS. Next, we’ll force all traffic to use it.
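
One common way to force HTTPS is an .htaccess rewrite in the document root. A sketch, assuming mod_rewrite is enabled and AllowOverride permits it:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]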

* AWS Documentation Reference

Launching a website on AWS EC2

Using AWS for a website

In 2008 I deployed my first website to production. It used a simple LAMP stack, a GoDaddy domain name, and HostGator hosting.

Since 2016, I’ve used AWS as my primary cloud provider. And this year, I’m finally cancelling my HostGator package. Looking through that old server, I found artifacts of past projects – small businesses and start-ups that I helped develop and grow. A virtual memory lane.

Left on that old box was a site that I needed to move to a fresh EC2 instance. This is an opportunity to document how I launch a site to Amazon Web Services.

Amazon Elastic Compute Cloud

To start, I launch a new EC2 instance from the AWS console. Amazon’s Elastic Compute Cloud provides “secure and resizable compute capacity in the cloud.” When prompted to choose an Amazon Machine Image (AMI), I select “Amazon Linux 2 AMI”. I leave all other settings as default. When I finally click “Launch”, it’ll ask me to either generate a new key file, or use an existing one. I’ll need that file later to SSH or sFTP into this instance. A basic Linux server is spun up, with little else installed.

Linux AMI
Amazon Linux 2 AMI is free tier eligible.

Next, I make sure that instance’s Security Group allows inbound traffic on SSH, HTTP, and HTTPS. We allow all traffic via HTTP and HTTPS (IPv4 and IPv6, which is why there are 2 entries for each). That way end-users can reach the website from a browser. Inbound SSH access should not be left wide open. Only specific IP addresses should be allowed to command-line in to the server. AWS has an option labeled “My IP” that will populate it for your machine.

Inbound security rules
Don’t allow all IPs to access SSH in a live production environment.

Recent AWS UI updates let you set these common rules directly from the “Launch an instance” screen, under “Network settings”.

AWS Network settings screen

Configure the server

Now that the hosting server is up-and-running, I can command-line in via SSH from my Mac’s terminal using the key file from before. This is what the command looks like:

 ssh -i /Users/apace/my-keys/keypair-secret.pem ec2-user@ec2-XXX-XXX-XX.us-east-2.compute.amazonaws.com

You might get a CLI error that says:

“It is required that your private key files are NOT accessible by others. This private key will be ignored.”

Bad permissions warning against a pem key file in the CLI

That just means you need to update the file permissions on the key file. You should do that from the command line, in the directory where the file resides:

chmod 400 KeyFileNameHere.pem

Make sure everything is up-to-date by running “sudo yum update”. Then I begin installing the required software to host a website:

sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2

Note: amazon-linux-extras doesn’t exist on the Amazon Linux 2023 image.

That command gives me Apache, PHP, and MariaDB – a basic LAMP stack. This next one installs the database server:

sudo yum install -y httpd mariadb-server

MariaDB is a fork of the typical MySQL, but with better performance.

By default, MariaDB will not have any password set. If you choose to install phpMyAdmin, it won’t let you login without a password (as per a default setting). You’ll have to set a password from the command line. While connected to your instance via SSH, dispatch this command:

mysql -u root -p

When it prompts you for a password, just hit enter.

login to mariadb for the first time

Once you’re logged in, you need to switch to the mysql database by running the following command:

use mysql;

Now you can set a password for the root user with the following command:

UPDATE user SET password = PASSWORD('new_password') WHERE user = 'root';

After setting the password, you need to flush the privileges to apply the changes:

FLUSH PRIVILEGES;

Start Apache: “sudo systemctl start httpd”. And make sure it always starts when the server boots up: “sudo systemctl enable httpd”.

The server setup is complete. I can access an Apache test page from a web browser by navigating to the EC2 instance’s public IP address.

Apache test page
A test page shows when no website files are present.

I’ll take my website files (which are stored on my local machine and synced to a Git repo) and copy them to the server via sFTP.

Copy website files to server
I use FileZilla to access my EC2 public directory

I need to make sure the Linux user I sFTP with owns the directory “/var/www/html”, or else I’ll get a permission denied error:

sudo chown -R ec2-user /var/www/html

Later, if I want to be able to upload media to the server from the WordPress CMS, I’ll need to be sure to change the owner of the blog’s directory to the apache user (which is the behind-the-scenes daemon user invoked for such things):

sudo chown -R apache /var/www/html/blog

* AWS Documentation Reference

Domain name and Route 53

Instead of having to use the EC2 server’s public address to see my website from a browser, I’ll point a domain name at it. AWS Route 53 helps with this. It’s a “DNS web service” that routes users to websites by mapping domain names to IP addresses.

In Route 53 I create a new “hosted zone”, and enter the domain name that I’ll be using for this site.  This will automatically generate two record sets: a Name Server (NS) record and a Start-of-Authority (SOA) record. I’ll create one more, an IPv4 address (A) record. The value of that record should be the public IP address that I want my domain to point at. You’ll probably also want to add another, identical to the last one, but specifying “www” in the record name.
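
The same A record can be created from the AWS CLI. A sketch with placeholder values for the hosted zone ID, domain name, and IP address:

aws route53 change-resource-record-sets --hosted-zone-id Z0000000000000 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "203.0.113.10" }]
      }
    }]
  }'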

setting a record in AWS

Finally, I’ll head over to my domain name registrar, and find my domain name’s settings. I update the nameserver values there to match those in my Route 53 NS record set. It’ll likely take some time for this change to be reflected in the domain’s settings. Once that is complete, the domain name will be pointing at my new EC2 instance.

And that’s it – I’ve deployed my website to AWS. The next step is to secure my site by configuring SSL and HTTPS.

If you’re migrating a dynamic WordPress site, you’ll need a few additional steps to complete the migration.