apt-cacher-ng versus apt-transport-https
The headline sounds pretty technical, and so is the topic. Let’s quickly introduce both antagonists:
`apt-cacher-ng` is a tool to cache packages of the apt ecosystem. As an administrator, you may have multiple Debian-based systems, and the overlap of packages that all of these systems need is typically huge. That means hundreds of your systems will require the latest security update for `curl` at around the same time. Running an `apt-cacher-ng` server in your local environment takes a bit of heat off Debian's infrastructure and improves the download speed of packages. See also the Apt-Cacher NG project page.
`apt-transport-https` is an apt module to obtain packages over a secure `https://` connection. Traditionally, packages are downloaded through plain HTTP or FTP, but as these protocols are unencrypted, a third party may observe what you're doing at a repository (which packages you're downloading etc.). Please note that `apt-transport-https` is already integrated into the latest versions of apt, so there is no need to install it separately.
So basically, both `apt-cacher-ng` and `apt-transport-https` do a good thing! But they don't really like each other, at least by default. However, I'll show you how to make them behave ;-)
The issue is perfectly obvious: you want `apt-cacher-ng` to cache TLS-encrypted traffic? That won't happen.
You need to tell the client to create an unencrypted connection to the cache server, and then the cache server can connect to the repository through TLS.
Let me explain that using Docker.
To properly install Docker on a Debian-based system, you would add a file `/etc/apt/sources.list.d/docker.list` containing a repository entry such as:
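A minimal sketch of such an entry, using Docker's official Debian repository (the suite name `bookworm` is an assumption; adjust it to your release):

```
# /etc/apt/sources.list.d/docker.list
deb https://download.docker.com/linux/debian bookworm stable
```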
However, when apt is told to use a cache server, it fails to download Docker's packages.
Let's fix that using the following workaround. The setup assumes:
- There is an `apt-cacher-ng` server running in your environment.
- There is a client configured to use the cache server, e.g. via apt's `Acquire::http::Proxy` option.
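For reference, a client is typically pointed at the cache through an apt configuration snippet; a minimal sketch, assuming the cache server is reachable as `cache.server` (a placeholder) on apt-cacher-ng's default port 3142:

```
# /etc/apt/apt.conf.d/00aptproxy (the filename is arbitrary)
Acquire::http::Proxy "http://cache.server:3142";
```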
1. Create a mock DNS for the cache server
You need to create a pseudo domain name that points to the cache server. This name will then tell the cache server which target repository to access.
Let's say we're using `docker.cache`. You can either create a proper DNS record, or just add a line to the client's `/etc/hosts`, so that `docker.cache` resolves to the cache server's IP address at the client.
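A sketch of the `/etc/hosts` approach, with `192.168.1.10` as a placeholder for your cache server's address:

```
# /etc/hosts on the client
192.168.1.10   docker.cache
```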
2. Update the client’s repository entry
Instead of contacting the repository directly, the client should now connect to the cache server.
You need to change the contents of `/etc/apt/sources.list.d/docker.list`:
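A sketch of the updated entry, assuming the mock name `docker.cache` from step 1 and a `bookworm` suite; the repository path is dropped here on the assumption that the backend file on the server carries the full repository URL:

```
# /etc/apt/sources.list.d/docker.list
deb http://docker.cache bookworm stable
```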
Thus, the client now treats the cache server as a proper repository!
3. Inform the cache server
`apt-cacher-ng` of course needs to be told what to do when clients want to access something from `docker.cache`: it should forward the request to the original repository!
This is called remapping. First, create a file `/etc/apt-cacher-ng/backends_docker_com` on the server containing the link to the original repository:
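For the Docker case, the file would contain the base URL of Docker's official Debian repository:

```
# /etc/apt-cacher-ng/backends_docker_com
https://download.docker.com/linux/debian
```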
Then, add the remapping rule to `/etc/apt-cacher-ng/acng.conf`. You will find a section of `Remap` entries there (see the default config of `acng.conf`). Just append your rule:
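A sketch of such a rule, assuming the mock name `docker.cache` and the backend file from above; the rule name `Remap-docker` is an arbitrary label:

```
# /etc/apt-cacher-ng/acng.conf
Remap-docker: http://docker.cache ; file:backends_docker_com
```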
This line reads:
- there is a named remap rule,
- which remaps requests for `docker.cache`
- to whatever is written in the file `backends_docker_com`.
That’s it. Restart the cache server and give it a try :)
4. Add more Remaps
If you want to use more repositories through `https://`, just create further mock DNS entries and append corresponding remapping rules to `acng.conf`. Pretty easy.
This setup of course strips the encryption off apt calls. Granted, it's just the connections in your own environment, but it's still not really elegant. So the goal is to also encrypt the traffic between client and cache server.
There is apparently no TLS support in `apt-cacher-ng`, but you can still configure an Nginx proxy (or whatever proxy you find handy) on the cache server, which terminates TLS and simply forwards requests to the upstream `apt-cacher-ng` on the same machine. Or you could set up stunnel.
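A minimal sketch of such an Nginx TLS termination in front of apt-cacher-ng; the hostname, the listening port 3143, and the certificate paths are assumptions, while 3142 is apt-cacher-ng's default port:

```
# Hypothetical /etc/nginx/sites-available/apt-cache-tls
server {
    listen 3143 ssl;
    server_name cache.server;

    ssl_certificate     /etc/ssl/certs/cache.server.crt;
    ssl_certificate_key /etc/ssl/private/cache.server.key;

    location / {
        # Forward everything to the local apt-cacher-ng instance
        proxy_pass http://127.0.0.1:3142;
        proxy_set_header Host $host;
    }
}
```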
There are a few other workarounds for this issue available. Most of them just show how to circumvent caching for HTTPS repositories (which rather defeats the purpose of the cache server). Here, I just documented what is, in my eyes, the cleanest solution.
Comments
Super helpful. Unfortunately it's a bit of a manual process, and it would be nice to have a more transparent solution, but it's much better than just circumventing the cache for HTTPS, which is what I was currently doing.
Very helpful! One addition: in my case, I did not need the DNS entry on the client, as apt connects to the configured proxy regardless of the hostname from the sources.list. The Remap-rule on the caching proxy then resolves it to the target.
Thanks a lot. I used stunnel instead of a Nginx Forward Proxy to secure the connection between the clients and the server.
I still wonder why this is a problem for apt-cacher-ng.
It works like a charm in Nexus. I have an apt cache there for the main Debian repository, which caches “https://deb.debian.org/debian”. sources.list entries look like “deb https://nexus.server/repository/debian bullseye main contrib non-free”. No “Acquire::http” setting needed.
See also: https://help.sonatype.com/repomanager3/nexus-repository-administration/formats/apt-repositories