It's been long overdue to have an internal web server with an always up-to-date copy of our project documentation. We use Sphinx, and GitHub does render RST to a certain extent, so I kept delaying this task till now, but it's about time to tie up loose ends - so I embarked on a journey to find a way to host internal documentation with minimal maintenance effort and cost.
With today's SaaS and cloud technologies it should've been easy, right? Just watch my private repo on GitHub for changes in the docs directory, pull it, run Sphinx to build static HTML pages and then upload them somewhere - not too complicated, is it? Let's see how the story unfolded.
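In shell terms, the whole pipeline I had in mind is roughly this (just a sketch; the bucket name is made up):

# The manual pipeline I wanted to automate (hypothetical bucket name):
git pull
make -C docs html                                  # runs sphinx-build under the hood
gsutil -m cp -r docs/_build/html/* gs://my-docs/   # upload the static pages somewhere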
ReadTheDocs.com
That probably would've been the kinder choice - to support free software development. However, I wanted my own theme and my own domain, and that meant going for the Advanced plan at $150/month, which is quite expensive for our small team, where we write docs more often than we read them :) Still, it would've worked well with minimal setup effort, and they offer a no-frills 30-day trial.
Having ruled out the readthedocs.com SaaS, I decided to set up a docs hosting pipeline myself using GCP tools - the cloud we use the most.
A convoluted trail through GCP
It was time to catch up on recent GCP products I hadn't yet had a chance to try. For building the docs, Google Cloud Build sounded great - and it is indeed. They even have a dedicated app on the GitHub Marketplace, so builds can be triggered from pull requests and the build status reflected on GitHub. That part worked pretty much out of the box. For hosting, I decided to upload the docs to Google Cloud Storage and figure out later how to host them privately. After some tinkering I ended up with the following cloudbuild.yaml:
steps:
# Prepare the builder image - speeds up future builds.
# It just makes sure that the docbuilder:current image is present on the local machine.
# I have a dedicated docbuilder for each version of docs/requirements.txt.
# I could've just started each time from the python Docker image and pip-installed
# requirements.txt, but lxml takes several minutes to build, which is a waste...
# This optimization brings the build time down from 7 minutes to 1!
- name: alpine:3.10
  entrypoint: sh
  args:
    - "-c"
    - |
      set -ex
      apk add --no-cache docker
      cd docs
      IMAGE=gcr.io/$PROJECT_ID/doc-builder:$(sha1sum < requirements.txt | cut -f 1 -d ' ')
      if ! docker pull $$IMAGE; then
        docker build -t $$IMAGE .
        docker push $$IMAGE
      fi
      docker tag $$IMAGE docbuilder:current
# Build the docs - reuse the image we just built
- name: docbuilder:current
  entrypoint: sh
  args:
    - "-c"
    - |
      cd docs
      make html
# Publish the docs.
# We can't use Cloud Build artifacts since they do not support recursive upload:
# https://stackoverflow.com/questions/52828977/can-google-cloud-build-recurse-through-directories-of-artifacts
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "cp", "-r", "docs/_build/html/*", "gs://my-docs/$BRANCH_NAME/"]
This approach looked very promising: for example, I can build docs for different branches and access them simply by URL path suffixes. It also highlights one of Cloud Build's strengths - you can just run shell scripts of your choice to do anything you like. Finally, Cloud Build gives you 120 free build minutes a day, meaning I could build my docs every 15 minutes and it still wouldn't cost me a penny.
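For reference, the GitHub trigger itself can also be created from the CLI instead of the console - a rough sketch using the beta gcloud surface, with the repo details obviously made up:

# Hypothetical repo/owner; requires the Cloud Build GitHub app to be installed first.
gcloud beta builds triggers create github \
  --repo-owner=my-org --repo-name=my-repo \
  --branch-pattern=".*" \
  --included-files="docs/**" \
  --build-config=docs/cloudbuild.yaml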
Unfortunately, I hit a hosting dead end pretty quickly. I wanted to use GCP Identity-Aware Proxy (IAP) to guard access, and it does not work with Cloud Storage yet, though it was quite natural (for me) to expect that it would. I explored ideas about running a container that would mount the Cloud Storage bucket and serve it behind IAP, but if I end up hosting a container anyway, I'm better off just building my docs into a static file server. I would have to give up the ability to host docs from multiple branches together, but the alternative - running a container in privileged mode with pre- and post-hooks to mount GCS through FUSE - didn't sound very clean and would've deprived me of using managed Cloud Run (more on that below). I briefly explored the Cloud Filestore (not Firestore) path, but their minimum volume size is 1TB, which is $200/month - such a waste.
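To illustrate why the FUSE idea felt unclean, the container entrypoint would've looked something like this (a sketch of the rejected approach, reusing the bucket name from the build step above):

#!/bin/sh
# Needs the container to run privileged (or with SYS_ADMIN + /dev/fuse) -
# exactly the part that didn't sound very clean.
set -e
gcsfuse --implicit-dirs my-docs /usr/share/nginx/html   # pre-hook: mount the bucket
trap 'fusermount -u /usr/share/nginx/html' EXIT         # post-hook: unmount on shutdown
nginx -g 'daemon off;'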
Looks like I needed to build my docs into a static-server container, so why not try hosting it on Cloud Run? With the amount of traffic our docs get, it would only cost us... nothing, since we'd stay well within the free tier. However, the lack of IAP support hit me again. Cloud Run supports Google Sign-In, meaning it can validate your bearer tokens, but there is still no authentication proxy support. Hopefully they will implement one soon, since it's highly anticipated - by me, at least.
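To illustrate the gap: with Google Sign-In, a Cloud Run service can authenticate programmatic requests like the one below (hypothetical service URL), but a browser has no way to attach that header on its own - hence the need for a real authentication proxy like IAP.

curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
  https://docs-server-xxxxx-uc.a.run.app/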
At that point I went back to the IAP docs to reassess my options: App Engine, GCE, or GKE. I decided on GKE, obviously, since I had a utility GKE cluster with some spare capacity I could leech off. I ruled out App Engine - no one on my team, including myself, had any experience with it, and with the GKE option readily available I saw no reason to start acquiring any.
From this point on it went pretty smoothly. I created the following Dockerfile:
# Stage 1: build the static HTML docs with Sphinx
FROM python:3.7.5-alpine3.10 AS builder
RUN apk add --no-cache build-base libxml2-dev libxslt-dev graphviz
WORKDIR /build
COPY requirements.txt ./
RUN pip install --upgrade -r ./requirements.txt
COPY . ./
RUN make html
########################
# Stage 2: serve the built docs with nginx
FROM nginx:1.17.5-alpine AS runner
COPY --from=builder /build/_build/html /usr/share/nginx/html
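Before wiring it into the pipeline, the image can be smoke-tested locally - a quick sketch, assuming the Dockerfile lives in docs/:

docker build -t docs-server docs        # build the two-stage image
docker run --rm -p 8080:80 docs-server  # then browse http://localhost:8080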
And used the following build config:
steps:
- name: gcr.io/cloud-builders/docker
  args: [
    "build", "-t", "gcr.io/$PROJECT_ID/docs-server:$BRANCH_NAME-$COMMIT_SHA", "docs",
  ]
- name: gcr.io/cloud-builders/docker
  args: [
    "push", "gcr.io/$PROJECT_ID/docs-server:$BRANCH_NAME-$COMMIT_SHA",
  ]
- name: gcr.io/cloud-builders/gke-deploy
  args:
    - run
    - --filename=docs/deploy  # You can pass a directory here, but you'll need to read gke-deploy code to find it out
    - --image=gcr.io/$PROJECT_ID/docs-server:$BRANCH_NAME-$COMMIT_SHA
    - --location=us-central1-a
    - --cluster=my-cluster
Unfortunately, my build time went back up to 7 minutes, which I tried to mitigate by using Kaniko, but I hit a show-stopper bug where it does not recognize changes in files copied between stages. Hopefully they fix it soon. Either that, or GCS will get IAP support :). For reference, the relevant Cloud Build step with Kaniko would've looked like this (replacing the docker build/push steps above):
- name: gcr.io/kaniko-project/executor:latest
  args:
    - --cache=true
    - --cache-ttl=336h  # 2 weeks
    - --context=/workspace/docs
    - --dockerfile=/workspace/docs/Dockerfile
    - --destination=gcr.io/$PROJECT_ID/docs-server:$BRANCH_NAME-$COMMIT_SHA
My docs/deploy dir contained a single deployment.yaml file to create the K8s Deployment object. gke-deploy can generate one by default, but it also creates a horizontal pod autoscaler, which was really overkill for my task. So here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs-server
  labels:
    app: docs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docs-server
  template:
    metadata:
      labels:
        app: docs-server
    spec:
      containers:
      - name: nginx
        image: gcr.io/my-project/docs-server:latest  # Will be overridden by gke-deploy
        ports:
        - containerPort: 80
At this point I had a pipeline that builds my docs into a static server and deploys it as a pod into one of my GKE clusters. The only thing left was to expose it to my team - securely, through IAP. This is where GKE comes in handy: you can request a load balancer with an SSL certificate and IAP directly through K8s manifests! Just follow the guides: 1, 2, and 3.
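For illustration, the extra manifests end up looking roughly like this - just a sketch following those guides, where docs.example.com, docs-cert, docs-iap and docs-oauth-secret are all hypothetical names, and the OAuth client secret has to be created separately:

apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: docs-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: docs-oauth-secret  # holds the IAP OAuth client ID/secret
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: docs-cert
spec:
  domains:
    - docs.example.com  # hypothetical custom domain
---
apiVersion: v1
kind: Service
metadata:
  name: docs-server
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "docs-iap"}'
spec:
  type: NodePort  # the GCE ingress controller requires NodePort (or NEGs)
  selector:
    app: docs-server
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: docs-server
  annotations:
    networking.gke.io/managed-certificates: docs-cert
spec:
  backend:
    serviceName: docs-server
    servicePort: 80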
And here we are - I now have private docs on a custom domain, secured behind IAP, to share with my GCP teammates. All in all, even if I ran it on a dedicated GKE cluster with a single f1-micro instance, it would cost me less than $20 per month, meaning that if I factor in the cost of my time to set it up, the ~$130/month difference between host-your-own and the ReadTheDocs Advanced plan would pay it off in less than 2 years :)