Using AWS CodePipeline to deploy a Parcel-bundled static web app to an S3 website bucket
Try out your own little Netlify-like pipeline in AWS.

What are we trying to achieve?
Some of you are probably wondering right now: what’s the use case for this? There are services like Netlify that effectively do the same thing, with fewer clicks and less typing. But then again, you’re likely here because you’re trying to get more acquainted with the Amazon Web Services (AWS) ecosystem, or you want to prove a theory that’s been keeping you antsy for a while.
In this guide, we’ll mainly be using this family of CI/CD services from AWS to build and deploy a static website into an S3 bucket. The services include:
- CodePipeline
- CodeCommit
- CodeBuild
- CodeDeploy
Sounds good? Let’s go for it then.
So, what is Parcel?
First off, let’s talk about Parcel. You’ve probably heard of it at least once in your dev circles. Parcel is essentially a zero-configuration replacement for webpack, ideal for building and serving front-end projects.

The good thing is that you don’t need to code anything right now. I’ve readied a fairly simple Parcel-bundled project that you can download (in the form of a ZIP file) so we can get started immediately on this experiment:
https://github.com/jpcaparas/demo-parcel-aws-codepipeline/archive/refs/heads/main.zip
Once you have the ZIP decompressed, run these two commands:
- yarn to install the dependencies
- yarn build to confirm that it builds static files into the dist directory

If all of those commands run without a hitch, great! If you’re wondering what the dist directory is for: its contents will be deployed to the S3 website every time CodePipeline performs a successful deploy.
Setting up a CodeCommit repository
Now you may be wondering why we’re not using GitHub as our code repository. Thing is: you can use GitHub. But this guide is meant to show you the streamlined nature of AWS’s CI/CD family of services, and that includes CodeCommit.
First off, make sure to upload your SSH public key for your IAM user (guide here) and note down the IAM SSH key ID generated for it; you’ll need it later on.
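With the key uploaded, Git needs to know to use that key ID when talking to CodeCommit. Typically you’d add an entry like this to your ~/.ssh/config (the IdentityFile path here is an assumption; point it at wherever your private key actually lives):

Host git-codecommit.*.amazonaws.com
  User <your-iam-ssh-key-id>
  IdentityFile ~/.ssh/id_rsa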

Now, create the CodeCommit repository:
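You can do this through the console as shown below, or, if you prefer the CLI, with a single call (assuming your AWS CLI is already configured with the right credentials and region):

aws codecommit create-repository --repository-name demo-parcel-codepipeline-repo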

Once created, follow the additional configuration steps and copy the clone URL somewhere; we’ll use it later:

Now, go back to the demo-parcel-aws-codepipeline-main folder that you decompressed earlier and run these commands:
git init
git add .
git commit -am "Init"
git remote add origin <the-codecommit-repository-url-you-stored-earlier>
git push --set-upstream origin main
You’ve now effectively pushed the files to the CodeCommit repository. Once you reload your repository, you are going to see this:

Now take a moment to get a breather and congratulate yourself! 🎉
Creating an S3 bucket / static website
We’ll now create an S3 bucket to store the files generated by yarn build. This bucket will be configured as a public-facing website that anyone on the internet can visit.
Let’s start by visiting the S3 service and creating a bucket with this configuration:
- On the “General configuration” section:
  - Name the bucket project-demo-parcel-codepipeline
  - Set the region to the one closest to you. In my case, I chose ap-southeast-2 (Sydney) because I am based in New Zealand.
- On the “Block Public Access settings for this bucket” section, untick all the checkboxes, because we actually want to make this S3 bucket publicly accessible.
  - Ensure that you have ticked the acknowledgement that the bucket will become public.
- Leave the remaining settings as-is and continue creating the bucket.
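For reference, the rough CLI equivalent of the above would be the two calls below (the region is just my example; swap in your own):

aws s3api create-bucket --bucket project-demo-parcel-codepipeline --region ap-southeast-2 --create-bucket-configuration LocationConstraint=ap-southeast-2
aws s3api delete-public-access-block --bucket project-demo-parcel-codepipeline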
You should end up with an empty bucket:

We then add a bucket policy to allow the objects to be accessible to the public. On the “Permissions” page, add this bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::project-demo-parcel-codepipeline/*"
            ]
        }
    ]
}
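If you’re sticking to the CLI, you could instead save the policy above as policy.json and apply it with:

aws s3api put-bucket-policy --bucket project-demo-parcel-codepipeline --policy file://policy.json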
Now, to actually turn the bucket into a website, go to the “Properties” settings page and scroll all the way down to the section called “Static website hosting”. We’ll need to enable this option:
- Tick “Enable”
- Tick “Host a static website”
- Set the index document to index.html
- Leave the other options as-is and save changes.
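The CLI shortcut for this step, should you want it:

aws s3 website s3://project-demo-parcel-codepipeline/ --index-document index.html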
Once enabled, you should now be given a URL for your static website:

If you actually visit the domain, you’ll be greeted by a 40x error, because the bucket doesn’t have any files yet.

Setting up CodePipeline to manage everything
Things are getting more interesting now, because we’re about to create an actual CI/CD pipeline with CodePipeline.
CodePipeline is amazing. If you’ve used tools like GitLab CI/CD before, you’ll feel right at home.
CodePipeline allows you to define deployment “stages”. If you had a hunch that said stages would be:
- Pulling the repository,
- Building the output artefacts, and
- Deploying the output artefacts
… then you’re on the money, because that’s exactly how CodePipeline works at a high level.
Now let’s get started:
- Go to CodePipeline.
- Click the “Create pipeline” button.
- On the “Pipeline settings” screen, name the pipeline demo-parcel-codepipeline. Allow a new service role to be created for the pipeline, e.g. AWSCodePipelineServiceRole-demo-parcel-codepipeline.
- On the “Add source stage” screen, select “AWS CodeCommit”. This is why we created the repo earlier.
  - Select demo-parcel-codepipeline-repo as the repository name.
  - Set the branch name to main or master.
  - Leave the detection options as-is.
- On the “Add build stage” screen, select “AWS CodeBuild” and select the same region you chose for your S3 bucket.
- Still on the “Add build stage” screen, click “Create project” under the project name. This will open a pop-up window. Name the project project-demo-parcel-codepipeline-repo.
  - On the “Environment” section, select “Managed image” and pick “Ubuntu” as the operating system. Set the runtime to standard:5.0 and select “Always use the latest image…”.
  - On the “Service role” section, select “New service role”. Name the role project-demo-parcel-codepipeline-repo-service-role.
  - Leave the “Additional configuration” section as-is.
  - On the “Buildspec” section, select “Use a buildspec file”.
  - Leave the “Batch configuration” section as-is.
  - Leave the “Logs” section as-is.
  - Click “Continue to CodePipeline”. This might take a couple of seconds to finish.
- Back on the “Add build stage” screen, you’ll get a confirmation that the CodeBuild project has been created. You may notice an “Add environment variable” button; we’ll come back to that later, so leave it as-is for the meantime. Set “Build type” to “Single build”.
- On the “Add deploy stage” screen, set the “Deploy provider” to Amazon S3, and set your region to the same region where your CodeBuild project is hosted.
- Still on the “Add deploy stage” screen, set the bucket location to the same S3 bucket we created earlier: project-demo-parcel-codepipeline. This is where the files will be deployed. Leave the “Deployment path” field as-is, and leave the “Additional configuration” options as-is.
- Finally, on the “Review” screen, double-check the configuration you’ve just made. Don’t be distraught if it seems like a lot; it will all make sense soon. If you’re happy, click the “Create pipeline” button.
If you’ve done everything correctly, you should be greeted by this screen:

Okay, so just to recap, what you did was create a pipeline that orchestrates various AWS services to act in unison:
- You set the source to be the CodeCommit repository.
- You instructed CodeBuild to “build” the project and store the output artefacts somewhere.
- You instructed CodeDeploy to deploy the same artefacts to an S3 bucket, which is currently an empty static website.
However, this pipeline is expected to fail at first, because we haven’t fully configured it yet. Don’t fret; there’s some stuff we still have to configure to get it all going, and we’ll get to that shortly. Take a moment to congratulate yourself for getting this far 🎉.
Dissecting the buildspec.yml file
The step you were probably most confused by is the CodeBuild stage, and for good reason: it’s actually the most convoluted stage we worked on.
CodeBuild, in a nutshell, allows you to spin up a temporary environment/server that runs the yarn build command for you to generate the static files and assets (e.g. the index.html generated by Parcel), amongst other things. However, CodeBuild can’t act on intuition: you need to instruct it on how to perform the build process by specifying “phases” in a buildspec.yml file, which thankfully is already part of your CodeCommit repository.
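For reference, here’s roughly what the buildspec.yml in the demo repository looks like. Treat this as a sketch reconstructed from the phases dissected below rather than the authoritative file (the nodejs runtime version in particular is my assumption); check the actual file in the repo:

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 14
    commands:
      # yarn doesn't come pre-packed with the Ubuntu image, so install it first
      - npm install -g yarn
  build:
    commands:
      # install dependencies and generate the static assets in dist/
      - yarn
      - yarn build
  post_build:
    commands:
      # purge the bucket so files from the previous deploy are removed
      - aws s3 rm s3://$S3_BUCKET --recursive

artifacts:
  # "flatten" dist/ so the built files land at the root of the bucket
  base-directory: dist
  files:
    - '**/*'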
Upon closer inspection, these phases will start to make more sense.
Now let’s dissect the phases that were just defined:
- The install phase tells the environment to install yarn, because yarn doesn’t come pre-packed with Ubuntu by default.
- The build phase is actually the most important phase: it runs the yarn build command, which generates the static assets in the dist directory.
- The post_build phase “cleans up”/purges the S3 bucket before the project gets deployed. This is important because every deploy produces a brand-new set of files, and we want the old files in the bucket to be removed.
- There’s also an artifacts section in the file. It “flattens” the dist directory, ensuring that the built files land at the root of the bucket instead of being nested in a dist folder.
Programmatically deleting objects from the S3 bucket before deployment
Remember when I said earlier that before each deploy, it’s ideal for the S3 bucket to be purged of existing objects to make room for new files?
Our post_build phase does exactly this:
aws s3 rm s3://$S3_BUCKET --recursive
Notice the $S3_BUCKET variable? That’s an environment variable: it gets evaluated at runtime to the S3 bucket location. However, our CodeBuild stage doesn’t actually know the name of the S3 bucket we’re using, so we’ll have to define an environment variable on the console so that the bucket name evaluates to a real value. To do this, click the “AWS CodeBuild” link on the CodePipeline project page:

Once on the CodeBuild project page, go to Edit -> Environment and toggle “Additional configuration”. There, you’ll be greeted again by the “Environment variables” section. We could have entered this environment variable earlier, while setting up CodeBuild for the first time, but now that I’ve explained what the $S3_BUCKET variable is for, it should make more sense.
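Assuming you kept the bucket name from earlier, the variable would look like this:

Name: S3_BUCKET
Value: project-demo-parcel-codepipeline
Type: Plaintext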

Click “Update environment” and you should be good to go.
Now, we’ll need to update the policy that was automatically generated for CodeBuild, so it can recursively delete now-stale objects in the bucket before putting the new objects in.
Go to your IAM service, visit the “Policies” page, and modify the policy that was auto-generated for CodeBuild during the setup process:

Open the S3 bucket in a new tab and copy the bucket ARN:

Back on the IAM policy visual editor, go to the S3 accordion and expand it. On the “Resources” section, add the ARN:

Now on the “Actions” section, tick “ListBucket”, “DeleteObject” and “DeleteObjectVersion”:


Click “Review policy” and save changes.
Here is a sample resulting policy once you are done:
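This is only a sketch of the relevant S3 statement; your auto-generated policy will also contain other statements (e.g. for CloudWatch Logs and the pipeline artifact bucket) that you should leave intact:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::project-demo-parcel-codepipeline",
                "arn:aws:s3:::project-demo-parcel-codepipeline/*"
            ]
        }
    ]
}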
Deploying the project
We’re at the final part now 🤲🏽! After all the configuration changes we’ve made, the CodePipeline project should now have all of its stages succeed, leaving us with a working S3 website.
To re-deploy our CodePipeline (which previously failed), go back to the project and click “Release Change”. This will re-run all of the stages.

You can even watch the stages run in real-time by clicking on “details”:

Once all the stages are green, you can now visit the S3 bucket URL and see that the index page has been uploaded to it.


With this basic setup, each time you push a commit to the repository, it will trigger the CodePipeline to run the stages all over again, meaning that your S3 website bucket will be up-to-date with your changes.
Closing thoughts
CodePipeline, CodeBuild, CodeDeploy, and CodeCommit are definitely one of the most exciting families of services on AWS, as they can accommodate almost any deployment recipe you can think of. The only limit is your imagination!
In this post, we showed a very basic example of how to deploy a static website, but obviously there are plenty more workflows you can experiment with.
What’s next: Creating a CloudFront distribution and aliasing it to a domain
I initially wanted this blog post to also cover putting a CloudFront distribution in front of the S3 bucket and then aliasing the distribution to a Cloudflare-hosted domain. However, this article is already filled with technical overhead, so I’ll write a separate article just for that experiment.
Stay tuned for more follow-up AWS blog posts and thanks for reading!