I can’t seem to wrap my head around (Docker) containers and especially their maintenance.
As I understand it, containers contain a stripped-down OS that shares some resources with the host?
Or is it more like a closed-off part of the file system?

Anyway, when I have several containers running on a host system, do I need to keep them all updated separately? If so, how? Or is it enough to update the host system and not worry about the containers?

  • Onno (VK6FLAB)@lemmy.radio · 21 hours ago

    Docker is essentially a security construct.

    The idea is that the process inside the container, like say MySQL, Python or Django, runs as a process on your machine in such a way that it can only access parts of the system and the world that it’s explicitly been granted access to.
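
    As a rough sketch (the container name, password, and port mapping below are made-up placeholders), nothing is shared by default; every flag is an explicit grant:

        # Each flag grants the process one specific thing:
        # a name, a credential, and a single published port.
        docker run -d --name my-database \
            -e MYSQL_ROOT_PASSWORD=changeme \
            -p 3306:3306 \
            mysql:8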

    If you naively attempted this, you’d run into a big problem immediately. Namely that a program needs access to libraries. So you need to grant access to those. Libraries might be specific to the program, or they might be system libraries like libc.

    One way is to explicitly enumerate each required library, but then you’d need to install those for each such process, which is inconvenient and a security nightmare.
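
    You can get a feel for the scale of that problem with ldd, which lists the shared libraries a dynamically linked binary needs (the path is just an example):

        # Prints every shared library this binary links against, libc included.
        ldd /usr/bin/python3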

    Instead you package the libraries and the program together in a package called a Docker image.

    To simplify things, at some point it’s easier to start from a minimal set of known files, like say Alpine, Debian, or Fedora.

    This basically means that you’re downloading a bunch of stuff to make the program run and thus is born the typical Docker image. If you look at the Python image, you’d see that it’s based on some other image. Similarly, a Django image is based on a Python image. It’s the FROM line in a Dockerfile.
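
    A minimal sketch of that chain (the base image tag is real; the Django layer is a hypothetical addition):

        # The official python image is itself built FROM a smaller base image.
        FROM python:3.12-slim

        # Add your own layer on top, e.g. install Django.
        RUN pip install django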

    A container is a running instance of such an image, executing the isolated process, again, like say MySQL.
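
    In other words, the image is the template and the container is the running copy; you can start several containers from one image (names and password are placeholders):

        docker run -d --name db1 -e MYSQL_ROOT_PASSWORD=changeme mysql:8
        docker run -d --name db2 -e MYSQL_ROOT_PASSWORD=changeme mysql:8

        # Lists both running containers, each isolated from the other.
        docker ps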

    Adding information to that process happens in a controlled way.

    You can use an API that the process understands, like say a MySQL client. You can also choose to include the data in the original image, or you can use a designated directory structure that’s visible to both you and the process; this is called a volume.
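
    A volume in sketch form (the host path is hypothetical; /var/lib/mysql is where MySQL keeps its data inside the container):

        # The host directory /srv/mysql-data appears inside the container
        # at /var/lib/mysql, so the data survives the container itself.
        docker run -d --name db \
            -e MYSQL_ROOT_PASSWORD=changeme \
            -v /srv/mysql-data:/var/lib/mysql \
            mysql:8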

    To run something like a Django application, Python needs access to the application files, which can either be included in the image using a custom Dockerfile, or accessed by the container whilst it’s running, using a volume.
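
    The Dockerfile route bakes the files in (the project path and image name here are hypothetical):

        FROM python:3.12-slim
        RUN pip install django
        # Copy the application code into the image itself.
        COPY ./myproject /app
        WORKDIR /app
        CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

    whereas the volume route mounts the same files into a running container instead, no COPY needed:

        # Assuming an image with Django installed (e.g. built from the
        # sketch above and tagged my-django-image), mount the code at run time:
        docker run -d -p 8000:8000 \
            -v "$PWD/myproject":/app -w /app \
            my-django-image python manage.py runserver 0.0.0.0:8000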

    It gets more interesting when you have two programs needing access to the same files, like say nginx and Python. You can create shared volumes to deal with this.
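
    A sketch with a named volume (the volume and image names are placeholders; nginx serves /usr/share/nginx/html by default):

        # One volume, two containers: the app writes static files,
        # nginx reads them (read-only) and serves them on port 80.
        docker volume create static-files
        docker run -d --name app -v static-files:/app/static my-django-image
        docker run -d --name web -p 80:80 \
            -v static-files:/usr/share/nginx/html:ro \
            nginx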

    Ultimately, Docker is about security and making it convenient to implement and use.

    Source: I use Docker every day.