If you’ve been reading this blog since the early days you may remember my posts about setting up Ghost on Azure. That setup served me well for years. First we ran Ghost on an Azure App Service. Then we ran a containerized Ghost on Azure App Service with Azure Container Registry, custom domain, Let’s Encrypt SSL, the whole nine yards. It worked. But it was costing me about $76 a month to host what is essentially a personal blog that I update a few times a year.
That number had been bugging me for a while.
## Why Move?
The short answer is cost. $76 a month for a blog is hard to justify when you’re not writing regularly. The longer answer is that my role has changed. I’m now a Partner Solutions Architect at AWS. I spend my days recommending cloud architectures to Partners and their clients. It felt a little silly to be paying for a container-based CMS when I could host a static site for pennies.
Ghost is a great platform. I have nothing bad to say about it. But I didn’t need a CMS with a database and an admin panel. I needed a place to put markdown files and have them show up on the internet. That’s a static site generator’s job.
## Why Hugo?
I looked at a few options — Hugo, Astro, Next.js — and landed on Hugo for a few reasons. It’s fast. The build for this entire site takes about 200 milliseconds. It’s a single binary with no Node.js dependency chain to manage. And the Blowfish theme gave me a clean, modern look with dark mode, tag support, and card layouts without me having to write any CSS.
Hugo uses markdown for content and TOML for configuration. Posts are just directories with an index.md and any images sitting right next to it. No database, no admin panel, no container runtime. Just files.
```text
content/posts/
  migrating-ghost-to-hugo-on-aws/
    index.md          ← this post
    feature.jpg       ← card thumbnail
  azure-functions-getting-started/
    index.md
    PostmanContentType.jpg
    SourceControl1.jpg
```

Creating a new post is one command:

```shell
hugo new content posts/my-post-title/index.md
```

## The AWS Setup
The architecture is straightforward:
- Amazon S3 holds the static files that Hugo generates
- Amazon CloudFront sits in front as the CDN, handling HTTPS, caching, and security headers
- Amazon Route 53 manages DNS for the subdomain
- AWS Certificate Manager provides a free SSL/TLS certificate
- AWS Glue and Amazon Athena give me queryable analytics from CloudFront access logs
Everything is defined as infrastructure as code using the AWS Cloud Development Kit (CDK) in TypeScript. Four stacks total — one for the SSL certificate and DNS, one for CloudFront access logging and analytics, one for the S3 bucket and CloudFront distribution, and one for CI/CD. I can tear the whole thing down and rebuild it with a single command.
Here’s the core of the hosting stack — an S3 bucket with CloudFront in front of it using Origin Access Control:
```typescript
this.contentBucket = new s3.Bucket(this, 'ContentBucket', {
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
  encryption: s3.BucketEncryption.S3_MANAGED,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
});

this.distribution = new cloudfront.Distribution(this, 'Distribution', {
  defaultBehavior: {
    origin: origins.S3BucketOrigin.withOriginAccessControl(this.contentBucket),
    viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
    compress: true,
  },
  domainNames: [props.domainName],
  certificate: props.certificate,
  defaultRootObject: 'index.html',
});
```

The bucket is fully private — no public access. CloudFront uses `S3BucketOrigin.withOriginAccessControl()`, which is the current best practice, replacing the older Origin Access Identity (OAI) pattern.
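For the curious, the bucket policy that OAC relies on is short. CDK writes something along these lines on your behalf — the bucket name, account ID, and distribution ID below are placeholders, not values from my stack:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "cloudfront.amazonaws.com" },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-content-bucket/*",
    "Condition": {
      "StringEquals": {
        "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EXAMPLEDISTID"
      }
    }
  }]
}
```

The `AWS:SourceArn` condition is what pins access to one specific distribution, so even another CloudFront distribution in the same account can’t read the bucket.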
## DNS: Cloudflare → Route 53
My domain thehodos.com lives on Cloudflare. Rather than move the whole domain, I delegated just the keith subdomain to Amazon Route 53. CDK creates the hosted zone and outputs the nameservers:
```typescript
this.hostedZone = new route53.PublicHostedZone(this, 'HostedZone', {
  zoneName: props.domainName, // keith.thehodos.com
});

new cdk.CfnOutput(this, 'NameServers', {
  value: cdk.Fn.join(', ', this.hostedZone.hostedZoneNameServers!),
  description: 'Add these as NS records in Cloudflare for keith subdomain',
});
```

Then in Cloudflare, I added four NS records pointing `keith` to the Route 53 nameservers. That’s it — Cloudflare handles the parent domain, Route 53 handles the subdomain, and the two never need to know about each other. If you’re on Cloudflare and don’t want to migrate your whole domain, subdomain delegation is the way to go.
## CI/CD with GitHub Actions
Deployments are fully automated. When I push content changes to main, a GitHub Actions workflow builds the Hugo site and syncs it to S3, then invalidates the CloudFront cache. When I push infrastructure changes, a separate workflow runs cdk deploy. Both workflows authenticate to AWS using OpenID Connect federation — no long-lived access keys stored anywhere. Each workflow gets its own IAM role with only the permissions it needs. The content deploy role can’t touch infrastructure, and the infrastructure role can’t touch content. Least privilege.
The OIDC setup is a CDK stack that creates the identity provider and scoped roles:
```typescript
const githubProvider = new iam.OpenIdConnectProvider(this, 'GithubOidc', {
  url: 'https://token.actions.githubusercontent.com',
  clientIds: ['sts.amazonaws.com'],
});

const siteRole = new iam.Role(this, 'SiteDeployRole', {
  roleName: 'blog-site-deploy',
  assumedBy: new iam.WebIdentityPrincipal(
    githubProvider.openIdConnectProviderArn,
    {
      StringLike: {
        'token.actions.githubusercontent.com:sub':
          'repo:my-org/my-repo:ref:refs/heads/main',
      },
    },
  ),
});
```

And the site deploy workflow is straightforward — Hugo build, S3 sync, cache invalidation:
```yaml
- uses: peaceiris/actions-hugo@v3
  with:
    hugo-version: '0.157.0'
    extended: true
- run: hugo --minify
- run: aws s3 sync public/ "s3://$BUCKET" --delete
- run: |
    aws cloudfront create-invalidation \
      --distribution-id "$DIST_ID" --paths "/*"
```

The whole pipeline from git push to live site takes about two minutes.
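On the permissions side, “least privilege” for the content role comes down to a handful of grants. A sketch of what that scoping might look like in the CDK stack — the grant calls shown are real CDK methods, but the exact composition here is illustrative rather than my verbatim code:

```typescript
// Illustrative scoping for the site deploy role:
// S3 sync plus cache invalidation, and nothing else.
contentBucket.grantReadWrite(siteRole);
contentBucket.grantDelete(siteRole); // `aws s3 sync --delete` removes stale objects

siteRole.addToPolicy(new iam.PolicyStatement({
  actions: ['cloudfront:CreateInvalidation'],
  resources: [
    `arn:aws:cloudfront::${this.account}:distribution/${distribution.distributionId}`,
  ],
}));
```

The infrastructure role gets the inverse treatment: CloudFormation and CDK deployment permissions, but no standing access to the content bucket’s objects.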
## What It Costs
This is the part I’m most happy about. My previous Ghost setup on Azure was running me approximately $76 a month. The new setup on AWS:
- S3 storage: A few cents for ~15MB of static files
- CloudFront: Free tier covers the first 1TB of data transfer and 10 million requests
- Route 53: $0.50/month for the hosted zone
- Certificate Manager: Free
- Athena: $5 per TB scanned — my logs are kilobytes, so effectively $0
When I want to check traffic, I just run a query in the Amazon Athena console:
```sql
SELECT cs_uri_stem, COUNT(*) AS hits
FROM blog_analytics.cloudfront_logs
WHERE log_date >= '2026-03-01'
  AND cs_uri_stem LIKE '/posts/%'
GROUP BY cs_uri_stem
ORDER BY hits DESC
LIMIT 10;
```

No dashboards to pay for. Just SQL when I’m curious.
I’m looking at roughly $1-2 per month total. That’s a 97% cost reduction. For a personal blog with modest traffic, a static site on S3 and CloudFront is hard to beat.
## The AI Assist
So here’s the thing — I didn’t build all of this by hand over weeks of evenings and weekends. I built it in a single day using Kiro (Kiro CLI, to be precise), an AI-powered development workflow that helped me ship the entire migration — infrastructure, content, CI/CD, and all.
I’ve been building out a set of custom Kiro skills and agents that help me with the kind of work I do when writing code: writing specs, implementing code, reviewing changes, running pre-commit checks, and managing deployments. They’re basically reusable workflow automations that know about the project they’re working in. Kiro CLI handled the scaffolding, the boilerplate, and the repetitive parts. I focused on the decisions — architecture choices, content curation, and making sure the end result actually looked good.
I’ll go deeper on the Kiro setup in a future post. For now I’ll just say this: the best way to understand agentic AI is to use it on a real project with real stakes. This blog was that project for me.
## Migrating the Content
I had about 16 posts on Ghost. Not a huge corpus, but enough that I didn’t want to manually copy-paste each one. The posts were scraped from the live site, converted to Hugo-compatible markdown with proper front matter, and placed into page bundle directories. Images were downloaded from Azure Blob Storage and co-located with their posts. Old Ghost URLs like /2017/04/25/getting-started-with-azure-functions/ are handled by Hugo aliases that generate static redirect pages. No server-side redirect rules needed.
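Under the hood, those redirects are nothing more than front matter. A migrated post’s `index.md` might start with something like this — the title shown is illustrative, and the exact fields depend on your theme, but the alias path is simply the old Ghost URL:

```toml
+++
title = "Getting Started with Azure Functions"
date = 2017-04-25
aliases = ["/2017/04/25/getting-started-with-azure-functions/"]
+++
```

At build time, Hugo emits a tiny HTML page at each alias path containing a meta-refresh redirect to the post’s new location, so old links keep working with no server configuration at all.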
The whole content migration took a couple of hours. For a blog with hundreds of posts you’d want proper tooling, but for 16 posts the direct approach worked fine.
## Lessons Learned
A few things I picked up along the way:
CloudFront’s defaultRootObject only works for the site root. If you’re hosting a static site with clean URLs like /posts/my-post/, you need a CloudFront Function to rewrite those requests to /posts/my-post/index.html. This is a classic gotcha that isn’t obvious until you deploy and start getting 404s on every page except the homepage.
```javascript
function handler(event) {
  var request = event.request;
  var uri = request.uri;

  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  } else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }

  return request;
}
```

This runs on every viewer request at the edge. It’s a CloudFront Function (not Lambda@Edge), so it’s fast and cheap.
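Wiring a function like this into the distribution is only a couple of lines in CDK. Roughly like the following — the construct IDs and file path are placeholders, but `cloudfront.Function`, `FunctionCode.fromFile`, and `functionAssociations` are the real CDK constructs for this:

```typescript
// Register the rewrite code as a CloudFront Function
const urlRewriteFn = new cloudfront.Function(this, 'UrlRewriteFn', {
  code: cloudfront.FunctionCode.fromFile({ filePath: 'functions/url-rewrite.js' }),
});

// Associate it with the default behavior so it runs on every viewer request
const distribution = new cloudfront.Distribution(this, 'Distribution', {
  defaultBehavior: {
    origin: origins.S3BucketOrigin.withOriginAccessControl(contentBucket),
    functionAssociations: [{
      function: urlRewriteFn,
      eventType: cloudfront.FunctionEventType.VIEWER_REQUEST,
    }],
  },
});
```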
CDK cross-region references work but add complexity. My SSL certificate has to live in us-east-1 (CloudFront requirement) while my S3 bucket is in us-west-2. CDK’s crossRegionReferences feature handles this, but it creates Lambda-backed custom resources behind the scenes. Worth knowing what’s happening under the hood.
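In CDK terms, that cross-region wiring looks roughly like this. `CertificateStack` and `HostingStack` are stand-ins for whatever your stack classes are called, and the account ID is a placeholder, but `crossRegionReferences` is the real `StackProps` flag:

```typescript
const app = new cdk.App();

// The CloudFront certificate must live in us-east-1
const certStack = new CertificateStack(app, 'CertStack', {
  env: { account: '123456789012', region: 'us-east-1' },
  crossRegionReferences: true,
});

// The hosting stack in another region consumes certStack.certificate directly;
// CDK synthesizes Lambda-backed custom resources to copy the value across regions.
const hostingStack = new HostingStack(app, 'HostingStack', {
  env: { account: '123456789012', region: 'us-west-2' },
  crossRegionReferences: true,
  certificate: certStack.certificate,
});
```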
Static sites are operationally boring — and that’s the point. There’s no runtime to patch, no database to back up, no container to keep healthy. Hugo generates HTML files. S3 serves them. CloudFront caches them. I can go months without thinking about this infrastructure and it’ll just keep working.
## What’s Next
I’m planning to write more regularly now that the friction of publishing is basically zero. git push and it’s live. No admin panel, no deploy scripts to remember, and no monthly bill that makes me wince.
If you’re running a personal blog on a platform that’s more complex (and more expensive) than it needs to be, take a look at static site generators. Hugo, Astro, Eleventy — pick one that fits your style. Pair it with S3 and CloudFront and you’ve got a setup that’s fast, cheap, and will run itself.
Up next: a deeper look at AI-powered development with Kiro, including samples of the agents and skills I used to build all of this.
