Free to code
Compelling programming and Linux hacking notes. Cloud friendly.
Friday, November 11, 2022
Pointing Mozilla SOPS into the right direction
Today I stumbled upon a problem where sops refused to decrypt my file. It complained that the Google KMS service I use to encrypt and decrypt the data keys behind the scenes is disabled in my project. That didn't make sense - after all, I created the KMS keys in that very project, so the service must be enabled. Here is the error:
Failed to get the data key required to decrypt the SOPS file.
Group 0: FAILED
projects/foo/locations/global/keyRings/foo-keyring/cryptoKeys/foo-global-key: FAILED
- | Error decrypting key: googleapi: Error 403: Cloud Key
| Management Service (KMS) API has not been used in project
| 123xxxxxx before or it is disabled. Enable it by visiting
| https://console.developers.google.com/apis/api/cloudkms.googleapis.com/overview?project=123xxxxxxx
| then retry. If you enabled this API recently, wait a few
| minutes for the action to propagate to our systems and
| retry.
| Details:
| [
| {
| "@type": "type.googleapis.com/google.rpc.Help",
| "links": [
| {
| "description": "Google developers console API
| activation",
| "url":
| "https://console.developers.google.com/apis/api/cloudkms.googleapis.com/overview?project=123xxxxxxx"
| }
| ]
| },
| {
| "@type": "type.googleapis.com/google.rpc.ErrorInfo",
| "domain": "googleapis.com",
| "metadata": {
| "consumer": "projects/123xxxxxxx",
| "service": "cloudkms.googleapis.com"
| },
| "reason": "SERVICE_DISABLED"
| }
| ]
| , accessNotConfigured
Recovery failed because no master key was able to decrypt the file. In
order for SOPS to recover the file, at least one key has to be successful,
but none were.
I inspected the project id 123xxxxxxx the error was referring to and was surprised to find out that it belongs to a project bar, and not the project foo I was working on (and the one where the KMS keys were stored).
After checking environment variables and the KMS key location in the encrypted file, I had no other option but to try strace on the sops binary to find out what causes sops to go with project bar instead of foo. And bingo - it looked at the ~/.config/gcloud/application_default_credentials.json file, which has a quota_project_id parameter pointing straight to bar.
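If you'd rather not strace binaries, a quick way to check which quota project your Application Default Credentials carry is to read that file directly. Here is a small Python sketch (it assumes the default ADC path that gcloud uses):

import json
import os

# Default location where gcloud stores Application Default Credentials;
# adjust if GOOGLE_APPLICATION_CREDENTIALS points elsewhere.
adc_path = os.path.expanduser("~/.config/gcloud/application_default_credentials.json")

with open(adc_path) as f:
    adc = json.load(f)

# quota_project_id is the project that gets billed (and checked for API
# enablement) when libraries call Google APIs with these credentials.
print(adc.get("quota_project_id", "<no quota project set>"))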
One easy fix is to run gcloud auth application-default set-quota-project foo. It basically tells the Google SDK to use foo as the billing project when calling the KMS service (the KMS API distinguishes between the calling project and the resource-owning project, as explained here). It works, but it's a fragile solution - if you work on several projects in parallel, you need to remember to switch back and forth to the correct project, since these particular application-default settings cannot be controlled from environment variables.
What if there was a way to simply tell sops (and others) to use the project owning the resource (the KMS key in my case) as the billing project as well? Apparently there is:
gcloud auth application-default login --disable-quota-project
...
Credentials saved to file: [~/.config/gcloud/application_default_credentials.json]
These credentials will be used by any library that requests Application Default Credentials (ADC).
WARNING:
Quota project is disabled. You might receive a "quota exceeded" or "API not enabled" error. Run $ gcloud auth application-default set-quota-project to add a quota project.
And voilà - it works!
Friday, January 21, 2022
How to STOP deletion of a Cloud Storage bucket in GCP Cloud Console
What if you need to delete a Cloud Storage bucket with lots of objects (tens of thousands or more)? As per GCP docs your options are:
- Cloud Console
- Object Lifecycle policies
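For the Object Lifecycle route, a rule that matches every object does the emptying for you, just not instantly. A sketch with the Python client (the bucket name is made up) could look like this:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("bucket-to-empty")  # hypothetical bucket name

# A delete rule with age=0 matches every object; GCS evaluates lifecycle
# rules asynchronously (roughly once a day), so the bucket empties eventually.
bucket.add_lifecycle_delete_rule(age=0)
bucket.patch()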
Cloud Console is very handy in this scenario - a couple of clicks and you are done. However, using it to delete a bucket is akin to hiring an anonymous killer - once you kick off the job, there is no way to stop it. But what if it was the wrong bucket? Sitting and watching your precious data melt away is a bitter experience.
As you can see, the above UI has no "OMG! Wrong bucket! Stop It Please!" button. However, there is a hack to abort a deletion job anyway (otherwise I wouldn't be writing this post, right? :)
To abort the deletion job, all you need to do is call your fellow cloud admin and ask them to deprive you, temporarily (or not?), of write permissions to the bucket in question. Once your user is not able to delete objects from that bucket, the deletion job will fail.
Cloud Console performs deletions on behalf of your user, so once your permissions have been snipped, it aborts the deletion job. I only wish there was a "Cancel" button in the Console UI to save us from this inconvenient hack.
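If your fellow admin prefers code over the Console, something along these lines (using the google-cloud-storage client; the bucket and user names are made up) would strip the write-capable roles from the bucket:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-precious-bucket")  # hypothetical bucket name

# Fetch the bucket IAM policy and drop the roles that allow object deletion
# for the member in question.
policy = bucket.get_iam_policy(requested_policy_version=3)
member = "user:bucket-deleter@example.com"  # hypothetical member
for binding in policy.bindings:
    if binding["role"] in ("roles/storage.objectAdmin", "roles/storage.admin"):
        binding["members"].discard(member)

bucket.set_iam_policy(policy)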
Of course, the data that was deleted before the abort is already gone (15,000 objects in my example above), and the only way to restore it - unless you had backups - is to have had Object Versioning set up in advance.
Tuesday, December 28, 2021
Google Chrome with taskbar entry but no actual window - how to fix
I recently got a new laptop - a Lenovo X1 Yoga Gen 6 - and thought it was a now-or-never opportunity to head-dive into NixOS, something I had cherished for a long time. Anyhow, I have displays with different DPIs and hence need to run Wayland. Things are still a bit shaky with Wayland, at least on NixOS with KDE, but it's getting better every week - that's why I'm running on the unstable channel.
Every now and then after an upgrade it happens that Chromium (and Chrome) open up but don't show a window. There is a taskbar entry, they respond to right-click, show recent docs in the right-click pop-up, etc., but no matter what I do, there is no window shown, which makes them unusable of course. This is how it looks:
How to fix it?
TL;DR;
cd ~/.config/chromium/Default
cat Preferences |jq 'del(.browser.window_placement, .browser.app_window_placement)' |sponge Preferences
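If you don't have jq and sponge at hand, a rough Python equivalent (run it while Chromium is not running) is:

import json
from pathlib import Path

prefs_path = Path.home() / ".config/chromium/Default/Preferences"
prefs = json.loads(prefs_path.read_text())

# Drop the saved window placements so Chromium recomputes them on next start.
prefs.get("browser", {}).pop("window_placement", None)
prefs.get("browser", {}).pop("app_window_placement", None)

prefs_path.write_text(json.dumps(prefs))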
How did I figure it out? I'm no Chrome dev, so I did it the CLI way:
- Copied my profile ~/.config/chromium aside, removed the original, and checked that Chromium starts - i.e. it's a configuration issue.
- Used binary search to determine which files in the profile cause the issue: each time I ran rsync -av ~/.config/chromium{.old,}/Default/, removed some files, and checked whether it helped. Eventually I figured out that the Preferences file is the offender.
- All that was left was to compare the original and the newly generated Preferences files. It's a single-line JSON file, so I had to format it with the jq tool first. Looking at the (huge) diff, I was lucky to notice that the .browser.window_placement configuration differs; after copying Preferences back from my original backup and dropping this attribute, my Chromium came back to life. Since I use Chromium web apps, I had to drop .browser.app_window_placement from Preferences as well.
Update Jan 2022: apparently the above hack worked only partially and the issue kept triggering until Chromium 97 landed in NixOS - it has never happened since.
Friday, November 22, 2019
So you wanna host docs?
It's been long overdue to have an internal web server with an always up-to-date copy of our project documentation. We use Sphinx, and GitHub does render RST to a certain extent, so I kept delaying this task until now, but it's about time to tie up the loose ends - so I embarked on a journey to find a way to host internal documentation with minimal maintenance effort and cost.
With today's SaaS and cloud technologies it should've been easy, right? Just watch my private repo on GitHub for changes in the docs directory, pull it, run Sphinx to build static HTML pages, and then upload them somewhere - not too complicated, is it? Let's see how the story unfolded.
ReadTheDocs.com
That probably would've been the kinder choice - to support free software development. However, I wanted my own theme and my own domain, and that meant going for the Advanced plan at $150/month, which is quite expensive for our small team, where we write docs more often than we read them :) Still, it would've worked well with minimal setup effort, and they have a no-frills 30-day trial.
Ruling out the readthedocs.com SaaS, I decided to set up a docs hosting pipeline myself using GCP tools - the cloud we use the most.
A convoluted trail through GCP
It was time to catch up on recent GCP products I had not yet had a chance to try. To build the docs, Google Cloud Build sounded great - and it is indeed. They even have a dedicated app on the GitHub marketplace so that builds can be triggered from pull requests and the build status is reflected on GitHub. That part was pretty straightforward. For hosting, I decided to upload the docs to Google Cloud Storage and figure out later how to serve them privately. After some tinkering I ended up with the following cloudbuild.yaml:
steps:
  # Prepare builder image - speeds up the future builds.
  # It just makes sure that the docbuilder:current image is present on the local machine.
  # I have a dedicated docbuilder for each version of docs/requirements.txt.
  # I could've just started each time from the python docker image and pip install requirements.txt,
  # but lxml takes several minutes to build, which is a waste...
  # This optimization brings build time from 7 minutes to 1!
  - name: alpine:3.10
    entrypoint: sh
    args:
      - "-c"
      - |
        set -ex
        apk add --no-cache docker
        cd docs
        IMAGE=gcr.io/$PROJECT_ID/doc-builder:$(sha1sum < requirements.txt | cut -f 1 -d ' ')
        if ! docker pull $$IMAGE; then
          docker build -t $$IMAGE .
          docker push $$IMAGE
        fi
        docker tag $$IMAGE docbuilder:current
  # Build the docs - reuse the image we just built
  - name: docbuilder:current
    entrypoint: sh
    args:
      - "-c"
      - |
        cd docs
        make html
  # Publish the docs.
  # We can't use Cloud Build artifacts since they do not support recursive upload:
  # https://stackoverflow.com/questions/52828977/can-google-cloud-build-recurse-through-directories-of-artifacts
  - name: gcr.io/cloud-builders/gsutil
    args: ["-m", "cp", "-r", "docs/_build/html/*", "gs://my-docs/$BRANCH_NAME/"]
This approach looked very promising - for example, I can build docs for different branches and access them simply by URL path suffixes. It also shows one of Cloud Build's strengths: you can just run shell scripts of your choice to do anything you like. Finally, Cloud Build gives you 120 free build minutes a day, meaning I could build my docs every 15 minutes and still not pay a penny.
Unfortunately, I hit a hosting dead end pretty quickly. I want to use GCP Identity Aware Proxy (IAP) for guarding access, and it does not work with Cloud Storage yet, though it was quite natural (for me) to expect it would. I explored ideas about running a container that would mount the Cloud Storage bucket and serve it behind IAP, but if I end up hosting a container, I'm better off just building my docs into a static file server. I would have to give up the ability to host docs from multiple branches together, but the alternative - running a container in privileged mode with pre- and post-hooks to mount GCS through FUSE - didn't sound very clean and would've deprived me of using managed Cloud Run (more on that below). I briefly explored the Cloud Filestore (not Firestore) path, but its minimum volume size is 1TB, which is $200/month - such a waste.
So it looks like I need to build my docs into a static-server container - why not try hosting it on Cloud Run? With the amount of traffic to our docs it would only cost us... nothing, since we'd stay well within the free tier. However, the lack of IAP support hit me again. Cloud Run supports Google Sign-In, meaning it can validate your bearer tokens, but there is still no authentication proxy support. Hopefully they will implement one soon, since it's highly anticipated - by me at least.
At that point I went back to the IAP docs to reassess my options: App Engine, GCE, or GKE. I obviously decided on GKE, since I had a utility GKE cluster with some spare capacity I could leech on. I ruled out App Engine - no one on my team, including myself, had any experience with it, and with the GKE option readily available I saw no reason to start acquiring any.
From this point on things went pretty smoothly. I created the following Dockerfile:
FROM python:3.7.5-alpine3.10 AS builder
RUN apk add --no-cache build-base libxml2-dev libxslt-dev graphviz
WORKDIR /build
COPY requirements.txt ./
RUN pip install --upgrade -r ./requirements.txt
COPY . ./
RUN make html
########################
FROM nginx:1.17.5-alpine AS runner
COPY --from=builder /build/_build/html /usr/share/nginx/html
And used the following build config:
steps:
  - name: gcr.io/cloud-builders/docker
    args: [
      "build", "-t", "gcr.io/$PROJECT_ID/docs-server:$BRANCH_NAME-$COMMIT_SHA", "docs",
    ]
  - name: gcr.io/cloud-builders/docker
    args: [
      "push", "gcr.io/$PROJECT_ID/docs-server:$BRANCH_NAME-$COMMIT_SHA",
    ]
  - name: gcr.io/cloud-builders/gke-deploy
    args:
      - run
      - --filename=docs/deploy # You can pass a directory here, but you'll need to read the gke-deploy code to find that out
      - --image=gcr.io/$PROJECT_ID/docs-server:$BRANCH_NAME-$COMMIT_SHA
      - --location=us-central1-a
      - --cluster=my-cluster
Unfortunately, my build time went back up to 7 minutes, which I tried to mitigate by using Kaniko, but I hit a show-stopper bug where it does not recognize changes in files copied between stages. Hopefully they fix it soon. Either that, or GCS will support IAP :) For reference, the relevant Cloud Build step with Kaniko would've looked like this (instead of the docker build/push steps above):
  - name: gcr.io/kaniko-project/executor:latest
    args:
      - --cache=true
      - --cache-ttl=336h # 2 weeks
      - --context=/workspace/docs
      - --dockerfile=/workspace/docs/Dockerfile
      - --destination=gcr.io/$PROJECT_ID/docs-server:$BRANCH_NAME-$COMMIT_SHA
My docs/deploy dir contains a single K8s deployment.yaml file that creates the K8s Deployment object. gke-deploy can create one by default, but it also creates a horizontal pod autoscaler, which was really overkill for my task. So here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs-server
  labels:
    app: docs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docs-server
  template:
    metadata:
      labels:
        app: docs-server
    spec:
      containers:
        - name: nginx
          image: gcr.io/my-project/docs-server:latest # Will be overridden by gke-deploy
          ports:
            - containerPort: 80
At this point I had a pipeline that builds my docs into a static server and deploys it as a pod into one of my GKE clusters. The only thing left was to expose it to my team, securely, through IAP. This is where GKE comes in handy - you can request a Load Balancer with an SSL certificate and IAP directly through K8s manifests! Just follow the guides: 1, 2, and 3.
And here we are - I now have my private docs on a custom domain, secured behind IAP, to share with my GCP team mates. All in all, even if I were to run it on a dedicated GKE cluster with a single f1-micro instance, it would cost me less than $20 per month, meaning that if I factor in the cost of my time to set it up, the price difference between host-your-own and the ReadTheDocs Advanced plan would pay off in less than 2 years :)
Sunday, May 12, 2019
Testing Lua "classes" speed
I developed a "smart" reverse proxy recently, and I decided to build it on the OpenResty platform - it's basically Nginx + Lua + goodies. Lua is a first-class language there, so theoretically you can implement anything with it.
After the couple of weeks I've spent with Lua, it strongly reminds me of JavaScript 5 - while it's a complete language, it's very "raw" in the sense that although it has the constructs to do anything, there is no standard (as in "industry-standard") way to do many things, classes being one of them. Coming from a strong Python background, I'm used to spending my time mostly on business logic, not googling around for the best third-party set/dict/etc. implementation. Many praise Lua's standard library asceticism (which reminds me of similar sentiments from the JS 5 days), but most of the time I get paid to create products, not tools. Also, the lack of a uniform way to do common tasks results in a rather non-uniform code base.
Having said the above, I chose OpenResty. I already had Nginx deployed, so switching to OpenResty was a natural extension, and it was exactly what I was looking for - a scriptable proxy - which is OpenResty's primary goal as a project. I didn't want to take a generic web server and write a middleware/plugin for it - that sounded a bit too adventurous and risky from a security perspective. Eventually I came to like Lua. There is a special cuteness to it - I often find myself smiling while reading Lua code. In particular, it provided great relief from the Nginx IF evilness I had been using before.
Let's get to the point of this post, shall we? While imbuing my proxy with some logic, I decided to check which of the class-like approaches in Lua is the fastest. I ended up with three contenders, plus a plain function ("Func") as a baseline:
- Metatable-based classes
- Closure-based classes
- penlight's pl.class
I implemented a class to test object member access, method invocation, and method chaining. The code is in the gist. I used the LuaJIT 2.1.0-beta3 that is supplied with the latest OpenResty docker image. pl.class documents two ways to define a class, hence I had two versions (PLClass1 and PLClass2) to see if there is any difference.
Let's run it.
Initialization speed:
Func: 815,112,512 calls/sec
Metatable: 815,737,335 calls/sec
Closure: 2,459,325 calls/sec
PLClass1: 1,536,435 calls/sec
PLClass2: 1,545,817 calls/sec
Initialization + call speed:
Metatable: 816,309,204 calls/sec
Closure: 2,104,911 calls/sec
PLClass1: 1,390,997 calls/sec
PLClass2: 1,453,514 calls/sec
We can see that Metatable is as fast as our baseline plain func, and with Metatable invocation does not affect speed at all - probably the JIT is doing an amazing job here (considering the code is trivial and predictable). Closures are much slower, and invocation has a cost. penlight.Class, while the most syntactically rich, is the slowest one and also takes a hit from invocation.
Conclusions
For pure speed, Metatable is the way to go, though I wonder what difference it would make in a real application (time your assumptions). Being myself a casual Lua developer, I prefer the Closure approach: there is no self var to keep track of. Then again, I'm a casual Lua developer - had I spent more time with the language, I assume my brain would adjust to things like the implicit self, and maybe my self-recommendation would change.
Out of curiosity, I did similar tests in Python (where there is one sane way to write this code). The results were surprising.
CPython 3.7:
Benchmarking init
Func: 18,378,052 ops/sec
Class: 4,760,040 ops/sec
Closure: 2,825,914 ops/sec
Benchmarking init+invoke
Class: 1,742,217 ops/sec
Closure: 1,549,709 ops/sec
PyPy3.6-7.1.1:
Benchmarking init
Func: 1,076,386,157 ops/sec
Class: 247,935,234 ops/sec
Closure: 189,527,406 ops/sec
Benchmarking init+invoke
Class: 1,073,107,020 ops/sec
Closure: 175,466,657 ops/sec
On CPython, if you want to do anything with your classes besides initializing them, there is not much difference between Class and Closure. "Func" aside, its performance is on par with Lua. PyPy just shines - its JIT outperforms LuaJIT by a far cry. The fact that the speed of init+invoke on Class is similar to the raw Func benchmark tells something about its ability to trace code that does nothing :)
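The gist with the actual benchmark code is not reproduced here, but the shape of the Python test was roughly the following (class names and the timing harness are illustrative only):

import timeit

class Point:
    """Class-based contender."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def move(self, dx, dy):
        self.x += dx
        self.y += dy
        return self

def make_point(x, y):
    """Closure-based contender: state lives in the enclosing scope."""
    state = {"x": x, "y": y}

    def move(dx, dy):
        state["x"] += dx
        state["y"] += dy
        return move

    return move

print("Class init:         ", timeit.timeit(lambda: Point(1, 2)))
print("Class init+invoke:  ", timeit.timeit(lambda: Point(1, 2).move(3, 4)))
print("Closure init:       ", timeit.timeit(lambda: make_point(1, 2)))
print("Closure init+invoke:", timeit.timeit(lambda: make_point(1, 2)(3, 4)))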
On the emotional side
Don't believe benchmarks - lies, damn lies, and benchmarks :) Seriously though, before thinking "why didn't they embed Python instead?", other aspects beyond micro-benchmarks should be contemplated. And finally, both LuaJIT and PyPy can do amazing things through FFI.
Friday, February 1, 2019
A warning about JSON serialization
I added caching capabilities to one of my projects using aiocache with the JSON serializer. While doing that, I came across a strange issue: I was putting {1: "a"} into the cache but received {"1": "a"} on retrieval - the integer key 1 came back as the string "1". At first I thought it was a bug in aiocache, but the maintainer kindly pointed out that JSON, being JavaScript Object Notation, does not allow mapping keys to be non-strings.
However, there is a point here worth paying attention to - it looks like JSON libraries, at least in Python and Chrome/Firefox, will happily accept {1: "a"} for encoding, but will convert the keys to strings. This may lead to quite subtle bugs, as in my example above - cache hits will return data different from the original.
>>> import json
>>> json.dumps({1:"a"})
'{"1": "a"}'
>>> json.loads('{1:"a"}')
Traceback (most recent call last):
File "", line 1, in
File "/home/.../lib/python3.6/json/__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "/home/.../lib/python3.6/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/home/.../lib/python3.6/json/decoder.py", line 355, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
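To see how this bites in a caching scenario, here is a minimal round-trip illustration (plain json, not the aiocache code itself):

import json

original = {1: "a"}

# What a JSON-backed cache hands back on a "hit":
restored = json.loads(json.dumps(original))

print(restored)              # {'1': 'a'}
print(restored == original)  # False - the integer key silently became a string
print(restored.get(1))       # None - lookups by the original key now miss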
Tuesday, October 2, 2018
How Google turned a simple feature into configuration disaster
A grain of frustration, with a light at the end of the tunnel.
A while back, G Suite had a simple way to create email distribution lists - in the admin panel you could simply create a group of users, e.g. support@example.com, add a couple of members, decide whether users outside of your organization can send email to the group address, and you were done.
Today I tried to do the same - create a distribution list - with the current version of G Suite for Business, which ended up costing hours of effort and a lengthy conversation with G Suite support.
First I tried to create a group and to send an email to its address - nope, does not work:
Grou-what? A Google Group? I don't want no Google Group! I want, you know, a distribution list!
Clicking on "Access Settings" in the group properties in G Suite leads to a particular settings page on... groups.google.com. Just one of the 21(!) settings pages there! My day schedule didn't include a ramp-up on Google Groups, so I hooked into the support chat straight away. The Google guy explained to me that with G Suite Business the only option is to configure a particular Google Group to behave as a distribution list.
First we worked on enabling people outside of the organization to post (send emails) to the group. Once we had configured the setting, I was about to jump away and test it, but the guy told me, quoting:
Zaar Hai: OK. Saving and testing. Can you please hold on?
G Suite Support, Jay: Wait.
Zaar Hai: OK :)
G Suite Support, Jay: That is not going to work right away.
G Suite Support, Jay: We have what we called propagation.
G Suite Support, Jay: You need to wait for 24 hours propagation for the changes to take effect.
G Suite Support, Jay: Most of the time it works in less than 24 hours.
Zaar Hai: Seriously??
Zaar Hai: I thought I'm dealing with Google...
G Suite Support, Jay: Yes, we are Google.
24 hours! After soothing myself, I decided to give it a shot anyway - and it worked! The configuration propagated quite fast, it seems.
It was still not a classic distribution list though, since all of the correspondence was archived and ready to be seen on groups.google.com. I didn't want this behaviour, so we kept digging. Eventually the guy asked me to reset the group to the Email list type.
The messages still got archived though, so we blamed it on the propagation, and the guy advised me to come back the next day if it still didn't work.
Well, after taking a 24-hour break, it still didn't. I did a bit of settings exploration myself and found that there is a dedicated toggle responsible for message archiving. Turns out the reset does not untoggle it. Once disabled, the change propagated within a minute.
That was the frustration part. Now the light - a guide on how to have a distribution list with G Suite.
How to configure a G Suite group to behave like a distribution list
Step 1: Create a group
Create a group in the G Suite admin console. If you need just an internal mailing list - that is, one for members only - and are fine with message archiving, then you are done. If you need outside users to be able to send emails to it (as you probably do with, e.g., sales@example.com), then read on.
Step 2: Enabling external access
- Go to groups.google.com.
- Click on "My groups" and then on the manage link under the name of the group in question
- On the settings page, navigate to Permissions -> Basic permis... in the menu on the left
- In the Post row drop-down select "Anyone on the web". Click Save and you should be done
This is almost a classic distribution list - we only need to disable archiving.
Step 3: Disable archiving
Eventually I discovered that archiving is controlled by a toggle located under Information -> Content control in the settings menu. In my case, the change went into effect immediately.
Afterthoughts
- Doing all of the above steps may be quite daunting for a system administrator who needs to manage many groups. Why not have a shortcut right in the G Suite admin console to make it easier?
- The 24-hour propagation period sounds like a blast from the past. The Google guy told me that any G Suite setting change can take up to 24 hours to take effect. Back to the future - or rather, to the present - Google offers a distributed database with cross-continental ACID transactions, which makes me wonder about the reasons behind the 24-hour propagation period.