CI/CD
At the root of the docs playbook repository there is a GitHub Action in charge of building this docs website and publishing it to GitHub Pages.
GitHub Pages Setup
First of all, we need two records configured at the DNS level that point to our GitHub Pages:
-
CNAME www.kakeibro.docs.dsbalderrama.top diegowrhasta.github.io
-
CNAME kakeibro.docs.dsbalderrama.top diegowrhasta.github.io
The documentation explaining why these values are used is at: GitHub Reference.
HINT: Depending on user or organization access, the domain used for the redirection may have to change. The repository that publishes the content is the reference entity.
Right after that, we need a CNAME file containing, on a single line, the subdomain that will map to our GH Page. Reference
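For reference, the CNAME file at the root of the repository is just one line. The value below is an assumption based on the DNS records above; the file must hold whichever single custom domain the GH Page should answer to:

```text
kakeibro.docs.dsbalderrama.top
```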
As a concept: we have to be aware of the convention used by GitHub and the pre-built action we are consuming, in the sense that we will need a gh-pages branch in our repository. This branch is used to serve the site, configured under Settings > Pages > Branch source.
Breakdown of Action
name: CRON for Docs Page deployment
on:
schedule: (1)
- cron: "0 0 * * *" # Runs at 12:00 AM UTC daily
workflow_dispatch: # Allows manual trigger
env:
DOCS_REPO: https://api.github.com/repos/KakeiBro/docs/commits/main (2)
NODE_VERSION: 22 (3)
permissions:
contents: write (4)
jobs:
check-external-repo:
runs-on: ubuntu-latest
steps:
- name: Checkout this repository
uses: actions/checkout@v4
- name: Fetch latest commit from external repo
run: |
LATEST_COMMIT=$(curl -s ${{ env.DOCS_REPO }} | jq -r '.sha') (5)
echo "Latest commit: $LATEST_COMMIT" (6)
echo "LATEST_COMMIT=$LATEST_COMMIT" >> $GITHUB_ENV (7)
1 | The action in the playbook repository is a CRON job; it runs at 00:00 UTC every day. It has also been set up to accept manual triggers in case they are needed. |
2 | We will leverage GitHub’s RESTful API; this endpoint specifically returns a structure with the commits that have been made to the docs repository. |
3 | We will work with the most modern LTS version (to date), Node 22. |
4 | We will have to commit a .latest_commit file at the root of the repository to
keep track of the SHA. This is the main mechanism to detect whether the docs have
been updated, and hence whether a new publish should be triggered. |
5 | We first run a GET against the GitHub API endpoint and, from the response payload,
extract what’s under the sha key. This refers to the latest commit and is saved
under a LATEST_COMMIT variable. |
6 | We then print that SHA value with a message. |
7 | And lastly, a good pattern used here is to inject the extracted value into the
runner’s environment variables. This is a great way to share state across multiple
steps. The anatomy of it is: echo "<NAME_OF_ENV_VARIABLE>=<VALUE>" >> $GITHUB_ENV |
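The state-sharing pattern from (7) can be sketched locally. In this hypothetical demo, GITHUB_ENV points at a temp file standing in for the per-job file the runner provides; the runner parses KEY=VALUE lines from it and exposes them to later steps (sourcing the file approximates that behavior for simple values):

```shell
# Local sketch of the $GITHUB_ENV pattern (hypothetical SHA value).
# On a real runner, GITHUB_ENV is a file path the runner provides.
GITHUB_ENV=$(mktemp)

# "Step 1": append a KEY=VALUE line, exactly like the workflow does.
echo "LATEST_COMMIT=abc123" >> "$GITHUB_ENV"

# "Step 2": the runner loads those lines as environment variables
# before the next step runs; sourcing the file approximates that here.
source "$GITHUB_ENV"
echo "Latest commit: $LATEST_COMMIT"   # prints: Latest commit: abc123
```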
After the initial checking steps, we have to decide whether to publish a new state of the docs or not.
- name: Retrieve previous commit
id: get_previous_commit
run: |
CACHE_FILE=".latest_commit" (1)
if [[ -f $CACHE_FILE ]]; then (2)
PREVIOUS_COMMIT=$(cat $CACHE_FILE) (3)
else
PREVIOUS_COMMIT="" (4)
fi
echo "Previous commit: $PREVIOUS_COMMIT"
echo "PREVIOUS_COMMIT=$PREVIOUS_COMMIT" >> $GITHUB_ENV (5)
- name: Compare commits and exit if unchanged
run: |
if [[ "$LATEST_COMMIT" == "$PREVIOUS_COMMIT" ]]; then
echo "Docs without change. Skipping web page generation." (6)
else
echo "CHANGED=true" >> $GITHUB_ENV
echo "CNAME=$(cat ./CNAME)" >> $GITHUB_ENV (7)
fi
1 | We first define a runtime variable CACHE_FILE. Since, as mentioned, the root of
the repo holds a .latest_commit file, this is the path to it. |
2 | We then run a check to see if the file exists. |
3 | If it does, we save the file’s value (it should be a single line with a SHA)
under a PREVIOUS_COMMIT variable. |
4 | If we don’t find a file, we simply save an empty string under the same variable name; this makes the script idempotent. |
5 | After printing the previous commit SHA value, we also make sure to save that same value as an env variable. |
6 | In a subsequent step we compare the previously injected $LATEST_COMMIT and
$PREVIOUS_COMMIT variables, i.e., the SHA saved by a possible previous run (at which
a page was published) against the current SHA of the repository. If the values are
the same, we just print a message indicating this. |
7 | In case the SHA values differ, meaning changes have been made since the last
time we published a page, we save CHANGED and CNAME environment variables for later
use. The first is a flag, and the second a value we will need when configuring our
custom domain at publish time. |
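Putting the steps above together, the change-detection logic can be dry-run locally with a throwaway cache file and hypothetical SHA values; note how an empty cache triggers a publish and a repeated SHA is skipped:

```shell
#!/usr/bin/env bash
# Local dry run of the change-detection logic (hypothetical SHAs).
set -euo pipefail

CACHE_FILE=$(mktemp)   # stands in for .latest_commit at the repo root

check() {
  local latest="$1" previous=""
  [[ -s "$CACHE_FILE" ]] && previous=$(cat "$CACHE_FILE")
  if [[ "$latest" == "$previous" ]]; then
    echo "skip"         # docs unchanged since last publish
  else
    echo "publish"      # new content: publish and record the SHA
    echo "$latest" > "$CACHE_FILE"
  fi
}

check abc123   # first run, empty cache -> publish
check abc123   # same SHA              -> skip
check def456   # new SHA               -> publish
```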
The pipeline is intelligent enough to publish the website only when new content has been added:
- name: Setup pnpm
if: ${{ env.CHANGED == 'true' }} (1)
uses: pnpm/action-setup@v4
with:
version: latest
- name: Set up Node.js
if: ${{ env.CHANGED == 'true' }}
uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: "pnpm"
cache-dependency-path: "./pnpm-lock.yaml"
- name: Install dependencies
if: ${{ env.CHANGED == 'true' }}
run: pnpm i --frozen-lockfile
- name: Run main logic
if: ${{ env.CHANGED == 'true' }}
run: |
echo "Docs have changed! Publishing new state..."
npx antora ./master-playbook.yml (2)
1 | A pain point with GitHub Actions is that you can’t configure a workflow to simply skip everything after a certain point. So, since we want to short-circuit all subsequent steps when no change to the docs is detected, we have to add a conditional instruction to every subsequent step. |
2 | We do a normal setup of the dependencies needed to work with node modules (taking
advantage of caching and pnpm). After everything is set up, we run the antora
module with master-playbook as the playbook reference. This takes care of cloning
the docs repository and bringing everything together into an artifact under the
build/site/ folder, which contains an index.html entry point. |
In case we should publish a new state for the docs site, we will leverage an already built action plus some extra things:
- name: Deploy static content to GH Pages
if: ${{ env.CHANGED == 'true' }}
uses: peaceiris/actions-gh-pages@v4
with:
github_token: ${{ secrets.GITHUB_TOKEN }} (1)
publish_dir: ./build/site (2)
cname: ${{ env.CNAME }} (3)
- name: Store latest commit for next run
if: ${{ env.CHANGED == 'true' }}
run: |
echo "$LATEST_COMMIT" > .latest_commit (4)
git config --global user.name "github-actions" (5)
git config --global user.email "github-actions@github.com"
git add .latest_commit
git commit -m "Updates docs latest commit SHA" (6)
git push
1 | The pre-built action to "publish to GitHub Pages" is only in charge of setting up
a specific layout under a gh-pages branch; it needs a token so that it can run all
the necessary logic to set up the branch and commit files to it. This is also why
the permissions set to write way up above is necessary, so that all these changes
can be made on the repo. |
2 | Since antora creates the deployable build under a specific folder structure, we
point the action to that specific folder containing index.html. |
3 | In order to configure our custom domain name correctly we will now feed the value
we got from the CNAME file at the root of the repo in the previous steps to its
cname option. |
4 | And lastly we need to now keep track of the commit (the representation of the docs
repo) from which we just published. And so the LATEST_COMMIT value is saved to the
.latest_commit file on the repo. |
5 | We have to set up credentials identifying that the change was made by a GitHub Action (always do things right; tracking changes will be beneficial one way or another). |
6 | We then commit the change to the latest-SHA file and push it. |
DNS
First of all, through theory and clever usage of conventions, we can leverage an
already owned apex domain; in our case that’s dsbalderrama.top. An apex domain
is also called a root domain or naked domain. It doesn’t have any subdomains
attached to it (i.e., example.com is an apex domain; www.example.com is not,
since it has the www subdomain attached to it).
When it comes to DNS records you can add to redirect different subdomains and domains to hard-coded IPs and/or other services, we have:
- A
-
An A Record or Address Record maps a domain to an IPv4 address.
- AAAA
-
An AAAA Record or Quad-A Record maps a domain to an IPv6 address.
- CNAME
-
A CNAME Record or Canonical Name Record maps one domain name (an alias) to another domain name (the canonical name). IMPORTANT: You can’t make use of this for an apex domain.
- ALIAS
-
An ALIAS Record behaves the same as a CNAME; however, it is allowed at the root of the domain (the apex). It is only offered by some DNS providers.
- ANAME
-
An ANAME Record is similar to an ALIAS, however this is less used and it might be proprietary for certain DNS providers.
Example
If you wanted to make yourdomain.com (and optionally www.yourdomain.com) point
to a web server, you can configure the DNS records as follows:
-
Add an A Record with the setup:
A @ 203.0.113.10
-
Add an AAAA Record with the setup
AAAA 2001:db8::ff00:42
-
Add a CNAME Record with the setup
CNAME www yourdomain.com
And with this, the moment anyone on the web tries to hit our domain (be it the apex) or one of its subdomains, DNS will make sure that the URL resolves to the machine hosting the content, whether directly through an IP address or through another domain that will eventually be resolved to an IP.
Subdomains
In order to "re-use" an already bought domain, you can leverage its authority by attaching other "paths" to it. This results in subdomains of the apex domain (e.g., given the test.com domain, a subdomain for it can be a.b.test.com).
Technically anything is possible, but this is where engineering and logic come into play. Subdomains let us organize content hierarchically, isolate functionality, integrate with search engines so that they treat two URLs as separate entities, and more easily manage multiple services or microsites under the same root domain. In practice this means separating everything with dots; every dot denotes a new subdomain level.
Example: brother.app.test.dev
- brother: This is the first-level subdomain. It could represent:
  - A specific product, feature, or service (e.g., a tool named "Brother").
  - A team, project, or environment (e.g., a development team named "Brother").
  - A user or organization (e.g., a user account or namespace).
- app: This is the second-level subdomain. It suggests that the site is related to an application, likely a web or mobile app.
  - It could be a dashboard, interface, or tool for managing or interacting with the "Brother" service.
- test: This is the third-level subdomain. It indicates that the site is likely a testing or staging environment.
  - It’s probably not the production version of the app but rather a sandbox for development, QA, or experimentation.
- dev: This is the top-level domain (TLD). While .dev is a valid TLD, it’s often associated with development-related sites.
  - It reinforces the idea that this is a development or testing environment.
As you can see, you can give it meaning depending on your own requirements.
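The "every dot denotes a level" rule is easy to see mechanically. A small sketch (using the hypothetical hostname from the example above) splits a name into its dot-separated labels:

```shell
# Split a hostname into its dot-separated labels (hypothetical example).
host="brother.app.test.dev"
IFS='.' read -r -a labels <<< "$host"

echo "Label count: ${#labels[@]}"   # prints: Label count: 4
for label in "${labels[@]}"; do
  echo "$label"                     # brother, app, test, dev (one per line)
done
```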
Having something like brother-app.test.dev would change that structure, leaving
only three labels (still under the same TLD). Depending on branding, technical
requirements, and plainly accepted conventions, we can decide what the rules
for our project will be.