Using AWS CodePipeline to deploy a Parcel-bundled static web app to an S3 website bucket

Try out your own little Netlify-like pipeline in AWS.

What are we trying to achieve?

Some of you are probably wondering right now — what’s the use case for this? There are services like Netlify that effectively do the same thing, with fewer clicks and less typing. But then again, you’re likely here because you’re trying to get more acquainted with the Amazon Web Services (AWS) ecosystem of services, or you may want to prove a theory that’s been keeping you antsy for a while.

In this guide, we’ll mainly be using this family of CI/CD services from AWS to build and deploy a static website into an S3 bucket. The services include:

  • CodePipeline
  • CodeCommit
  • CodeBuild
  • CodeDeploy

Sounds good? Let’s go for it then.

So, what is Parcel?

First off, let’s talk about Parcel. You’ve probably heard of it at least once in your dev circles. Parcel is basically a zero-configuration replacement for webpack, ideal for serving and building front-end projects.

The good thing is that you don’t need to code anything right now. I’ve readied a fairly simple Parcel-bundled project that you can download (in the form of a ZIP file) so we can get started immediately on this experiment:

https://github.com/jpcaparas/demo-parcel-aws-codepipeline/archive/refs/heads/main.zip

Once you have the zip decompressed, run these two commands:

  1. `yarn` to install the dependencies
  2. `yarn build` to confirm that it’s building static files in the `dist` directory
A successful `yarn build` should emit files in the `./dist` folder

If all of those commands run without a hitch, great! If you’re wondering what the dist directory is for: its contents will be deployed to the S3 website every time CodePipeline performs a successful deploy.

Setting up a CodeCommit repository

Now you may be wondering why we’re not using GitHub as our code repository. Thing is: you can use GitHub. But this guide is meant to show you the streamlined nature of AWS’s CI/CD family of services, and that includes CodeCommit.

First off, make sure to upload your SSH public key for your IAM user (guide here) and copy the IAM SSH key ID generated for it; you’ll need it later on.

Make sure to paste in your SSH public key on your IAM console.
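
If you prefer the CLI, you can upload the key and read back its ID in one go. A minimal sketch, assuming the AWS CLI is configured; the user name and key path below are placeholders for your own:

aws iam upload-ssh-public-key \
  --user-name your-iam-user \
  --ssh-public-key-body file://~/.ssh/id_rsa.pub
# The response contains an "SSHPublicKeyId" value: that's the IAM SSH key ID referenced below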

Now, create the CodeCommit repository:

Creating a repo in CodeCommit is easy as.
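
If you’d rather script this step, the CLI equivalent is a one-liner (using the repository name this guide assumes throughout):

aws codecommit create-repository \
  --repository-name demo-parcel-codepipeline-repo
# The response's "repositoryMetadata" includes the HTTPS and SSH clone URLs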

Once created, follow the additional configuration steps and paste the clone URL somewhere; we’ll use it later:

You’ll need to add a couple of lines to your `.ssh/config` file to allow cloning and pushing.
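
For reference, the lines in question look roughly like this (the key ID and identity file below are placeholders; use the IAM SSH key ID you copied earlier and your own private key):

Host git-codecommit.*.amazonaws.com
  User <your-iam-ssh-key-id>
  IdentityFile ~/.ssh/id_rsa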

Now, go back to the demo-parcel-aws-codepipeline-main folder that you just decompressed earlier and run these commands:

git init
git add .
git commit -am "Init"
git remote add origin <the-codecommit-repository-url-you-stored-earlier>
git push --set-upstream origin main

You’ve now effectively pushed the files to the CodeCommit repository. Once you reload your repository, you are going to see this:

You may be wondering why a buildspec.yml file exists at the root of the repository — I’ll explain it later.

Now take a moment to get a breather and congratulate yourself! 🎉

Creating an S3 bucket / static website

We’ll now be creating an S3 bucket to store the files generated by `yarn build`. This S3 bucket will be configured to become a public-facing website that can be visited by anyone on the internet.

Let’s start by visiting the S3 service and creating a bucket with this configuration:

  1. On the “General configuration” section:
    - Name the bucket project-demo-parcel-codepipeline
    - Set the region to the one closest to you. In my case, I chose ap-southeast-2 (Sydney) because I am based in New Zealand.
  2. On the “Block Public Access settings for this bucket” section, untick all the checkboxes because we actually want to make this S3 bucket publicly accessible.
    - Ensure that you have ticked the acknowledgement of turning the bucket public.
  3. Leave the existing settings as-is and continue creating the bucket.
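
If you’d rather script the bucket creation, here’s a sketch of the CLI equivalent. Keep in mind bucket names are globally unique, so you may need a different name than the one used here:

aws s3api create-bucket \
  --bucket project-demo-parcel-codepipeline \
  --region ap-southeast-2 \
  --create-bucket-configuration LocationConstraint=ap-southeast-2
# Untick all four "Block Public Access" settings, same as in the console
aws s3api put-public-access-block \
  --bucket project-demo-parcel-codepipeline \
  --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false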

You should end up with an empty bucket:

Our bucket doesn’t have any files… yet.

We then add a bucket policy to finally allow the object files to be accessible to the public. On the “Permissions” page, add this bucket policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::project-demo-parcel-codepipeline/*"
      ]
    }
  ]
}
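
If you saved that policy to a local file (the filename here is just an example), you can attach it from the CLI instead:

aws s3api put-bucket-policy \
  --bucket project-demo-parcel-codepipeline \
  --policy file://bucket-policy.json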

Now, to actually turn the bucket into a website, go to the “Properties” settings page and scroll all the way down until you find a section called “Static website hosting”. We’ll need to enable this option:

  1. Tick “Enable”
  2. Tick “Host a static website”
  3. Set the index document to be index.html
  4. Leave the other options as-is and save changes.
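
If you prefer the CLI, the same setting can be applied with a single command:

aws s3 website s3://project-demo-parcel-codepipeline/ \
  --index-document index.html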

Once enabled, you should now be given a URL for your static website:

Bookmark this URL

If you actually visit the domain, you’ll be greeted by a 403 error because the bucket doesn’t have any files yet.

No index.html file will result in a 403 error

Setting up CodePipeline to manage everything

Things are getting more interesting now, because we’re about to create an actual CI/CD pipeline with CodePipeline.

CodePipeline is amazing. If you’ve used tools like GitLab CI/CD before, you’ll feel right at home.

CodePipeline allows you to define deployment “stages”. If you had a hunch that said stages would be:

  1. Pulling the repository,
  2. Building the output artefacts, and
  3. Deploying the output artefacts

… then you’re on the money, because that’s exactly how CodePipeline works at a high level.

Now let’s get started:

  1. Go to CodePipeline
  2. Click the “Create pipeline” button
  3. On the “Pipeline settings” screen, name the pipeline demo-parcel-codepipeline
    - Allow a new service role to be created that will be used with the new pipeline, e.g. AWSCodePipelineServiceRole-demo-parcel-codepipeline.
  4. On the “Add source stage” screen, select “AWS CodeCommit”. This is why we created the repo earlier.
    - Select demo-parcel-codepipeline-repo as the repository name
    - Set the branch name to be main or master
    - Leave the detection options as-is.
  5. On the “Add build stage” screen, select “AWS CodeBuild” and select the same region you chose for your S3 bucket.
  6. Still on the “Add build stage”, for the project name, click “Create project”. This will open a pop-up window. Name the project project-demo-parcel-codepipeline-repo
    - On the “Environment” section, select “Managed image” and pick “Ubuntu” as the operating system. Set the runtime to standard:5.0 and select “Always use the latest image…”.
    - On the “Service role” section, select “New service role”.
    - Name the role project-demo-parcel-codepipeline-repo-service-role.
    - Leave the “Additional configuration” section as-is.
    - On the “Buildspec” section, select “Use buildspec file”.
    - Leave “Batch configuration” section as-is.
    - Leave the “Logs” section as-is.
    - Click “Continue to CodePipeline”. This might take a couple of seconds to finish.
  7. Back on the “Add build stage” screen, you’ll get a confirmation that the CodeBuild project has been created. You may notice an “Add environment variable” button. We’ll actually go back to that later; leave it as-is for now. Set “Build type” to “Single build”.
  8. On the “Add deploy stage” screen, set the “Deploy provider” to be Amazon S3. Set your region to be the same region where your CodeBuild project is hosted.
  9. Still on the “Add deploy stage” screen, set the bucket location to be the same S3 bucket we created earlier: project-demo-parcel-codepipeline. This is where the files will be deployed. Leave the “Deployment path” field as is. Leave “Additional configuration” options as-is.
  10. Finally, on the “Review” screen, double check the configuration you’ve just made. Don’t be distraught — it will all make sense soon. If you’re happy, click the “Create pipeline” button.

If you’ve done everything correctly, you should be greeted by this screen:

Congratulations, your pipeline has been created!

Okay, so just to recap, what you did was create a pipeline that orchestrates various AWS services to act in unison:

  1. You set the source to be the CodeCommit repository.
  2. You instructed CodeBuild to “build” the project and store the output artefacts somewhere.
  3. You instructed CodeDeploy to deploy the same artefacts to an S3 bucket, which is currently an empty static website.

However, this pipeline is expected to fail at first because we haven’t fully configured it yet. Don’t fret; there’s some stuff we have to configure to get it all going, and we’ll be on that shortly. Take a moment to congratulate yourself for making it this far 🎉.

Dissecting the buildspec.yml file

The step you might be most confused about is the CodeBuild stage — and for good reason; it’s actually the most convoluted stage we worked on.

CodeBuild, in a nutshell, allows you to spin up a temporary environment/server that will run the yarn build command for you to generate the static files & assets (e.g. the index.html generated by Parcel), amongst other things. However, CodeBuild can’t act on intuition: you need to instruct it on how to perform the build process by specifying “phases” in a buildspec.yml file that, thankfully, is already part of your CodeCommit repository.

Upon closer inspection, these phases will start to make more sense.
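
If you open the file in the repo, it should look roughly like this. This is a sketch reconstructed from the phases dissected below, not a verbatim copy, so defer to the actual file in the repository:

version: 0.2

phases:
  install:
    commands:
      # yarn doesn't come pre-installed on the Ubuntu image, so grab it first
      - npm install -g yarn
      - yarn
  build:
    commands:
      # Generate the static assets into ./dist
      - yarn build
  post_build:
    commands:
      # Purge the bucket so stale files from previous deploys are removed
      - aws s3 rm s3://$S3_BUCKET --recursive

artifacts:
  files:
    - '**/*'
  # Treat ./dist as the artifact root so built files land at the bucket root
  base-directory: dist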

Now let’s dissect the phases that were just defined:

  1. The install phase tells the environment to install yarn, because yarn doesn’t come pre-installed with Ubuntu by default.
  2. The build phase is actually the most important phase. What it does is it runs the yarn build command which generates the static assets in the dist directory.
  3. The post_build command “cleans up”/purges the S3 bucket before the project gets deployed. This is important because every deploy produces a brand new set of files, and we want the old files in that bucket to be removed.
  4. There’s also an artifacts section in the file. What this does is “flatten” the dist directory, ensuring that the built files are put at the root of the bucket instead of being nested in a dist folder.

Programmatically deleting objects from the S3 bucket before deployment

Remember when I said earlier that before each deploy, it’s ideal for the S3 bucket to be purged of existing objects to make room for new files?

Our post_build phase does exactly this:

aws s3 rm s3://$S3_BUCKET --recursive

Notice the $S3_BUCKET variable? That’s called an environment variable. It gets evaluated at runtime to the S3 bucket location. However, our CodeBuild stage doesn’t actually know the name of the S3 bucket we’re using, so we’ll have to define an environment variable on the console to make this bucket name evaluate to a real value. To do this, click the “AWS CodeBuild” link on the CodePipeline project page:

Clicking this will take you to the CodeBuild project.

Once on the CodeBuild project page, go to Edit -> Environment and toggle “Additional configuration”. There, you’ll be greeted again by the “Environment variables” section. We could have entered this variable earlier while setting up CodeBuild for the first time, but now that I’ve explained what $S3_BUCKET is for, it makes more sense.

Ensure that the S3_BUCKET environment variable has the bucket name as the value.

Click “Update environment” and you should be good to go.
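
If you want to sanity-check the purge command from your own machine before the next deploy, the CLI supports a dry-run flag (assuming your local credentials can access the bucket):

S3_BUCKET=project-demo-parcel-codepipeline
aws s3 rm s3://$S3_BUCKET --recursive --dryrun
# Prints the deletions that would happen without actually touching any objects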

Now, we’ll need to update the policy that was automatically generated for CodeBuild so that it can recursively delete the now-stale objects in the bucket prior to putting the new objects in.

Go to your IAM service, visit the “Policies” page, and modify the policy that was generated for CodeBuild during the setup process:

By default, the policy doesn’t permit CodeBuild to interact with the `project-demo-parcel-codepipeline` S3 bucket.

Open the S3 bucket on a new tab and copy the bucket ARN:

This ARN will be added to the list of allowed S3 resources CodeBuild can interact with.

Back on the IAM policy visual editor, go to the S3 accordion and expand it. In the “Resources” section, add the ARN:

Ensure that you add an asterisk (*) at the end of the ARN.

Now, in the “Actions” section, tick “ListBucket”, “DeleteObject” and “DeleteObjectVersion”:

Allow CodeBuild to list objects in that bucket.
Allow CodeBuild to delete the objects in that bucket.

Click “Review policy” and save changes.

Here is a sample resulting policy once you are done:

You’ll have to manually add the output bucket name on the “resource” section via the policy editor so CodeBuild can perform operations inside of it. Don’t forget to add the “*” after the end of the bucket name.
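
For reference, the relevant statement in the updated policy should end up looking something like this. This is an approximation: the exact action list generated for CodeBuild may differ, and strictly speaking s3:ListBucket applies to the bucket ARN without the trailing asterisk, which is why both resource forms appear:

{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:ListBucket",
    "s3:DeleteObject",
    "s3:DeleteObjectVersion"
  ],
  "Resource": [
    "arn:aws:s3:::project-demo-parcel-codepipeline",
    "arn:aws:s3:::project-demo-parcel-codepipeline/*"
  ]
}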

Deploying the project

We’re at the final part now 🤲🏽! After all the configuration changes we’ve made, the CodePipeline project should now have all of its stages succeed, leaving us with a working S3 website.

To re-deploy our CodePipeline (which previously failed), go back to the project and click “Release Change”. This will re-run all of the stages.

You can even watch the stages run in real-time by clicking on “Details”:

You can tail the logs by clicking “Details” on the CodeBuild stage.

Once all the stages are green, you can now visit the S3 bucket URL and see that the index page has been uploaded to it.

All the stages being green (successful) means our pipeline has finished and the output artefacts from CodeBuild have been deployed to S3 via CodeDeploy.
Ta-da!
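
You can also verify from the command line. The endpoint below follows the ap-southeast-2 website URL convention; yours will differ if you picked another region or bucket name:

curl -I http://project-demo-parcel-codepipeline.s3-website-ap-southeast-2.amazonaws.com
# Expect an HTTP/1.1 200 OK now that index.html exists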

With this basic setup, each time you push a commit to the repository, it will trigger the CodePipeline to run the stages all over again, meaning that your S3 website bucket will be up-to-date with your changes.

Closing thoughts

CodePipeline, CodeBuild, CodeDeploy, and CodeCommit are definitely among the most exciting families of services in AWS, as they can accommodate almost any deployment recipe you can think of. The only limit is your imagination!

In this post, we showed a very basic example of how to deploy a static website, but obviously there are plenty more workflows you can experiment with.

What’s next: Creating a CloudFront distribution and aliasing it to a domain

I initially wanted this blog post to also cover serving the S3 bucket through a CloudFront distribution and then aliasing the distribution to a Cloudflare-hosted domain. However, I think this article is already filled with technical overhead, so I’ll do a separate article just for that experiment.

Stay tuned for more follow-up AWS blog posts and thanks for reading!
