Over the last year, I’ve been working heavily on deps.cloud. deps.cloud draws its inspiration from a project that I worked on at Indeed.com. Since its inception, there has been a heavy push to move it into the open source space. In this post, I’ll discuss the process and rationale I applied as I rewrote this project in the open.
Back in July, I found myself needing to better coordinate deployments of my applications to Kubernetes.
After searching around, I found many ways that people were trying to solve this problem.
Some used shell scripts to apply multiple YAML files with a fixed time sleep between them.
Others used shell scripts and tailed the rollout using kubectl rollout status -w.
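For reference, the fixed-sleep approach usually looked something like the sketch below. The file names and deployment name are hypothetical, and the commands are wrapped in a function for illustration rather than invoked:

```shell
# A sketch of the fixed-sleep deploy scripts described above.
# database.yaml, api.yaml, and deployment/api are made-up names.
deploy_with_sleep() {
  kubectl apply -f database.yaml
  sleep 60                                   # hope the database is ready by now
  kubectl apply -f api.yaml
  kubectl rollout status deployment/api -w   # block until the rollout completes
}
```

The sleep is the weak point: if the database takes longer than expected to come up, the downstream pods crash-loop until it does.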
Now, I manage a lot of my deployments using GitOps and Flux.
So leveraging these shell scripts to manage my rollouts into clusters wasn’t really an option.
It wasn’t until I came across Alibaba Cloud’s blog post on solving service dependencies that I felt like I had something to work with. The article described two techniques. The first was inspecting dependencies within the application itself. At Indeed, we leverage our status library to do this. The second was to enable services to be checked, independent of the application.
In this post, I’ll demonstrate how to use my service-precheck initialization container (built off of the Alibaba blog post) to ensure upstream systems are up before attempting to start a downstream system.
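The second technique boils down to retrying a connectivity check until the upstream answers. A minimal sketch of that loop (the general pattern, not the actual service-precheck implementation) might look like:

```shell
# Retry an arbitrary check command until it succeeds, for example:
#   wait_for nc -z postgres 5432
# This sketches the pattern; it is not the service-precheck code itself.
wait_for() {
  attempts=0
  until "$@"; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 30 ]; then
      echo "giving up after $attempts attempts" >&2
      return 1
    fi
    sleep 2
  done
}
```

Run inside an init container, a loop like this keeps the application container from starting until its upstream dependencies respond.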
When you build a container image, it’s typically only built for one platform (linux) and one architecture (amd64). As the Internet of Things continues to grow, the demand for arm images has increased as well. Traditionally, in order to produce an arm image, you need an arm device to do the build on. As a result, most projects wind up missing arm images.
BuildKit provides emulation capabilities that support multi-architecture builds. With BuildKit, you can build container images for multiple architectures concurrently. This core utility backs docker buildx, a multi-architecture build utility for Docker. In this post, I’ll discuss why you should produce multi-architecture container images and demonstrate how to use docker buildx to do it.
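At a high level, the flow looks roughly like the sketch below. The builder name, image name, and platform list are assumptions, and the commands are grouped in a function for illustration rather than run:

```shell
# A minimal multi-arch build sketch. The builder name (multiarch), the
# image tag (example/app:latest), and the platform list are made-up values.
buildx_multiarch() {
  # One-time setup: create a BuildKit-backed builder and select it
  # (ignore the error if it already exists).
  docker buildx create --name multiarch --use || true

  # Build for every platform concurrently and push one manifest list.
  docker buildx build \
    --platform linux/amd64,linux/arm64,linux/arm/v7 \
    --tag example/app:latest \
    --push .
}
```

Pulling the resulting image then resolves to the right architecture automatically, whether the node is an amd64 server or an arm Raspberry Pi.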
Yesterday, I decided to switch the license that I apply to my personal projects. Many open source projects use the Apache 2.0 license. After reading through it a few times, I liked the level of coverage that it provided. It was, however, a bit wordy in my opinion. Mine were often simple little side projects that I was hacking on in my free time. After some discussion with others in the community and a few podcasts, I decided to make a switch.
In my last few posts, I talked a bit about my at-home development cluster. Due to the flexibility of my cluster, I wanted to provide a monitoring solution that was valuable across each technology I use. In this post, I discuss how monitoring is set up on my cluster. I’ll walk through setting up each node, the Prometheus server, and the Grafana UI.
Previously, I talked about the different orchestration technologies that I’ve run on my Raspberry Pi cluster. That post was rather high level and only contained details relating to k3s. In this post, we’ll take a more in-depth look at my cluster setup and my management process around it.
Over the last few days, I’ve been revisiting Kubernetes on my Raspberry Pi cluster. I hope to share what I learned in the process and some of the tooling that I discovered along the way.
I was quite surprised to see how under-documented installing a 64-bit operating system onto a Raspberry Pi is. Many articles out there talk about needing to compile Linux, which sounds oh-so-pleasant. One day, I stumbled across a 64-bit OpenSUSE version that was compatible, but the installation instructions required a Linux OS to be done properly. Since I primarily work on macOS, this presented yet another barrier.
After a lot of searching around, I finally found a straightforward and simple way to do it.
When I first joined Indeed, I cloned every repository down to my machine. This approach worked for a while when the number of repositories was small. As the organization grew, the approach quickly became unmanageable. While many people do not work across every repository, many are familiar with the pain of setting up a new machine. I wrote gitfs for a few reasons. First, to reduce the time spent setting up a new development environment. Second, to remove the need to figure out where all my projects need to be cloned. In this post, I discuss some challenges faced and lessons learned in writing my first file system.
- technologies used and rough size of project
- an overview of the project
- a critique about the approach taken to manage the project
- what I would’ve done differently