My last post covered upgrading `cert-manager` and `ingress-nginx`, and how I thought the `cert-manager` upgrade would take forever when it ended up being the easy one of the two.

This time, the migration was about our Docker images. Our services had been built on `node:14-alpine` images. Alpine images have long been my default image type of choice (as they have for a lot of people), mainly due to their smaller sizes.

Google's Distroless images, on the other hand, take minimalism much further: they don't even include basic utilities like `cd` or `ls`. Heck, there's not even a shell, period.
The Node variant of Distroless contains little more than `node` itself. Notably, it doesn't include `npm` ("lol wut", you might be thinking, but we'll get back to this in a bit).

No `npm`? How do you install anything without it? I briefly considered starting from a bare Distroless base image instead, but you'd still need the `node` executable to actually run anything. And since you'd need `node`, you'd need all the tooling that's required to install `node` in the first place (e.g. a shell), which just seemed like too much work for too little benefit.
The answer, of course, was a multi-stage build: use a first stage (based on `node:14-alpine`) to install `node_modules` and do any compilation/linting/testing steps (in my case, 'compilation' because of TypeScript), then copy the code and `node_modules` into a Distroless image in the second stage, where the app server is directly executed by `node`. Since we're copying over the already-installed `node_modules`, that solves the problem of not having `npm` in the Distroless image!
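As a rough sketch of what such a two-stage Dockerfile can look like (the file layout, the `dist/` output directory, and the build script name are assumptions for illustration, not our exact setup):

```dockerfile
# Stage 1: full Alpine image with npm, used to install and build.
FROM node:14-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# e.g. TypeScript compilation into dist/
RUN npm run build

# Stage 2: Distroless image that only ever runs `node`.
FROM gcr.io/distroless/nodejs:14
WORKDIR /app
# Copy the compiled code and the already-installed node_modules.
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
# The Distroless nodejs image's entrypoint is already `node`,
# so CMD is just the script to execute.
CMD ["dist/index.js"]
```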
The one problem with not having `npm` in the final Distroless image is that this also meant not being able to run our `npm` scripts. Whereas the app server is just an `index.js` file that can be directly executed by `node`, our `npm` scripts were... slightly more complicated, since they called out to dependencies installed in `node_modules`. And we had one such `npm` script that had to be run using the production (i.e. Distroless) image, because it was run during our CI/CD pipeline.

The workaround? A plain `node` script that would call out to the other `node_modules` executables. It was... not pretty. But it worked!
Another side effect of not having `npm` (or much of anything else, really) was that we now had different images for running in development and running in production. We kept a `node:14-alpine` image for development, since we still needed all of our dev tooling available for... development. This is obviously less than ideal as far as a prod/dev mirroring strategy goes, but since we have the wonder that is per-branch deployments (courtesy of Kubails), we at least still have a 'staging' environment that damn near mirrors the production environment for testing.

The other gotcha was Docker BuildKit: you have to pass the `DOCKER_BUILDKIT=1` environment variable to any Docker commands (e.g. `DOCKER_BUILDKIT=1 docker build ...`), and images can't be used for caching by default when doing that. To enable image-based caching (i.e. the `--cache-from` argument), you need to add `--build-arg=BUILDKIT_INLINE_CACHE=1` when building the image. That is, a minimal build command now looks like this:

```shell
DOCKER_BUILDKIT=1 docker build --build-arg=BUILDKIT_INLINE_CACHE=1 .
```
Speaking of `--cache-from`: normally, you can just specify a `--cache-from` image and Docker will handle pulling the image for you. However (and I don't know if this was just a bug at the time or something), that would not work for me. Running in GCP Cloud Build with GCP Container Registry (notably, not Artifact Registry), I had to explicitly pull each `--cache-from` image before it could be used as a cache image; the build would just error out otherwise (I don't remember what the error was, sorry). So that was also a royal pain to debug.
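The workaround amounts to something like the following (the image name is a hypothetical placeholder; substitute your own registry path):

```shell
IMAGE="gcr.io/my-project/my-app:latest"

# Explicitly pull the cache image first; `|| true` keeps the build going
# when the image doesn't exist yet (e.g. the first build of a branch).
docker pull "$IMAGE" || true

DOCKER_BUILDKIT=1 docker build \
  --build-arg=BUILDKIT_INLINE_CACHE=1 \
  --cache-from "$IMAGE" \
  -t "$IMAGE" .
```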