I can’t seem to wrap my head around (Docker) containers and especially their maintenance.
As I understand it, containers contain a stripped-down OS that shares some resources with the host?
Or is it more like a closed-off part of the file system?

Anyway, when I have several containers running on a host system, do I need to keep them all updated separately? If so, how?
Or is it enough to update the host system, and not worry about the containers?

  • Lysergid@lemmy.ml · 8 hours ago

    Container updates are not Docker’s concern. Docker is a proxy between your container and the OS. So if you want to keep your containers up to date, you need an external process. That can be achieved with a container orchestration tool like Kubernetes.
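
    As an illustration of the orchestration route, here is a minimal Kubernetes Deployment sketch (name and tag are hypothetical); “updating” then means bumping the image tag and re-applying the file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: myapp:1.2.3   # bump this tag and re-apply to roll out an update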

  • thirteene@lemmy.world · 1 day ago

    It’s built on the shipping container parallel: to transport objects, you hide away anything not required for shipping the container.

    • What’s inside the container doesn’t matter. The container has everything it needs to run because the ship/host is responsible for the overhead.
    • Containers move. Containers are set up to run by themselves, so you can move one from ship to ship. This means your container doesn’t care whether it’s in the cloud or on a shipping vessel.
    • As soon as you open a container your stuff is there. It’s very easy to onboard.
    • Most importantly, though, your shipping container isn’t a full boat by itself. It lives in a sandbox and only borrows the resources it needs, like the host’s CPU or the boat’s ability to float. This makes it easier to manage and stack because it’s more flexible.
    • fedorato@lemmy.world · 18 hours ago

      Love the container analogy - immediately made so much sense to me! Also clarifies some misunderstandings I had.

      I was mucking about with Docker for a Plex server over the weekend and couldn’t figure out what exactly Docker was doing. All I knew was that it’d make Plex ‘sandboxed’, but I then realised it also had access to stuff outside the container.

      • bobs_monkey@lemm.ee · 17 hours ago

        This is their logo: (image of the Docker logo, a whale carrying a stack of shipping containers)

        The whole container-on-a-ship idea is their entire premise. The ship (Docker) presents a unified application/OS layer to the host, so containers work plug-and-play with the Docker base layer.

      • thirteene@lemmy.world · 17 hours ago

        On a very specific note, I don’t run my Plex server in a container. I have a docker compose setup with 20+ apps, but Plex is on the bare-metal OS because it’s kinda finicky and doesn’t like NAS. You also need to set up the Plex API to claim the server, as the container name changes. This is my stock Plex config if it helps:

        plex:
            image: lscr.io/linuxserver/plex:latest
            container_name: plex
            network_mode: host            # host networking helps Plex discovery
            environment:
              - PUID=1000                 # user ID the container runs as
              - PGID=1000                 # group ID the container runs as
              - TZ=Etc/GMT
              - VERSION=docker
              - PLEX_CLAIM= #optional claim token for associating the server
            volumes:
              - /home/null/docker/plex/:/config   # persistent Plex config on the host
              - /x:/x                     # media library mounts
              - /y:/y
              - /z:/z
            restart: unless-stopped
        
  • Onno (VK6FLAB)@lemmy.radio · 21 hours ago

    Docker is essentially a security construct.

    The idea is that the process inside the container, like say MySQL, Python or Django, runs as a process on your machine in such a way that it can only access parts of the system and the world that it’s explicitly been granted access to.

    If you naively attempted this, you’d run into a big problem immediately. Namely that a program needs access to libraries. So you need to grant access to those. Libraries might be specific to the program, or they might be system libraries like libc.

    One way is to explicitly enumerate each required library, but then you’d need to install those for each such process, which is inconvenient and a security nightmare.

    Instead you package the libraries and the program together in a package called a Docker image.

    To simplify things, at some point it’s easier to start with a minimal set of known files, like say Alpine, Debian, or Fedora.

    This basically means that you’re downloading a bunch of stuff to make the program run and thus is born the typical Docker image. If you look at the Python image, you’d see that it’s based on some other image. Similarly, a Django image is based on a Python image. It’s the FROM line in a Dockerfile.
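
    For instance, a hypothetical Dockerfile for a Django app might look like this; the FROM line is what layers it on top of the Python image:

    # hypothetical Dockerfile: builds on the official Python base image
    FROM python:3.12-slim
    WORKDIR /app
    # install dependencies first so they cache as their own layer
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    # copy the application code into the image
    COPY . .
    CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]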

    A container is such an image actually running the isolated process, again, like say MySQL.

    Adding information to that process happens in a controlled way.

    You can use an API that the process uses, like say a MySQL client. You can also choose to include the data in the original image, or you can use a designated directory structure that’s visible to both you and the process; this is called a volume.
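
    For example, a volume for the MySQL case might be mounted like this (the host path is hypothetical; /var/lib/mysql is where the official image keeps its data):

    docker run -d --name mysql \
      -e MYSQL_ROOT_PASSWORD=example \
      -v /srv/mysql-data:/var/lib/mysql \
      mysql:8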

    To run something like a Django application would require that Python has access to the files, which can be included in the image by using a custom Dockerfile, or accessed by the container whilst it’s running, using a volume.

    It gets more interesting when you have two programs needing access to the same files, like say nginx and python. You can create shared volumes to deal with this.
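
    A sketch of that shared-volume setup in a compose file (service and volume names are assumptions):

    services:
      app:
        image: python:3.12-slim
        volumes:
          - static_files:/app/static      # the Python app writes files here
      web:
        image: nginx:latest
        volumes:
          - static_files:/usr/share/nginx/html/static:ro   # nginx serves them read-only
    volumes:
      static_files: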

    Ultimately, Docker is about security and making it convenient to implement and use.

    Source: I use Docker every day.

  • boredsquirrel@slrpnk.net · 1 day ago

    I also don’t get how to update Docker containers or where to save config files. The idea is that the containers are stateless so they can be recreated whenever you like.

    But there are no automatic updates?? You need a random “watchtower” container that does that.

    Also, they are supposed to give easy security, but NGINX runs as root? There is a rootless variant.

    • Mbourgon everywhere@lemmy.world · 23 hours ago

      (Not an expert, but I use it some.) Configs: most of the time you mount a directory that’s set up specifically for that container, and it persists on the host. When you spin up its replacement, it gets the same mapping.

      Automatic updates - from what I remember, yeah, you can even just (depending on needed uptime) schedule a cron job to pull the new image, kill the existing container, and start up the new one; if it doesn’t start, you roll back to the previous image.
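
      That cron job could be as simple as the following (image and container names are hypothetical; the rollback step is left out):

      #!/bin/sh
      # pull the latest image, then replace the running container with a fresh one
      docker pull myapp:latest
      docker stop myapp && docker rm myapp
      docker run -d --name myapp myapp:latest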

      Security - there used to be a debate over it (I don’t remember the current state of the art); in theory both are pretty safe, but rootless gives more security with some tradeoffs.

    • Björn Tantau@swg-empire.de · 1 day ago

      Also, they are supposed to give easy security, but NGINX runs as root? There is a rootless variant.

      I guess the idea/hope is that they can’t break out of their container.

  • Björn Tantau@swg-empire.de · 1 day ago

    I’d say it’s more like a closed-off part of the filesystem but with networking and probably lots of other stuff closed off as well.

    Updates on the host are separate from updates of the containers. Ideally the host has only the minimal stuff needed to run the containers.

    Containers are usually updated when the contained apps are updated. That’s actually my main concern with containers: when the main app doesn’t need an update but some dependency does, you have to actively update the dependency unless the app maintainers keep up with what their dependencies are doing. And usually you don’t even know what the dependencies are, because the whole point of containers is that you only care about the main app.

    • Alk@sh.itjust.works · 1 day ago

      To elaborate on this, when you want an update, you “update the container.” This usually means downloading an entirely new container image and replacing yours with the new one, which has new internal versions and data but works the exact same. You rely on the supplier of the container (if you didn’t make it yourself) to do all of that for you, and you just receive the update when you request it.

      So ideally, dependencies will be taken care of for you when the container updates, if you are using a pre-built container.
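
      With Docker Compose, for instance, receiving that update is typically two commands (a sketch, assuming your services are defined in a docker-compose.yml):

      docker compose pull    # fetch newer versions of the images in the compose file
      docker compose up -d   # recreate only the containers whose image changed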

    • Clay_pidgin@sh.itjust.works · 1 day ago

      I still don’t understand! I feel so dumb when it comes to docker.

      I’m writing an application in Django (a Python web framework), and there are Docker images for that. But somehow my code needs to get in there, I guess? Or does my code go alongside the container in a predefined folder? When I’m developing and need to update the Docker container over and over between changes for testing, am I creating a whole new container or updating the one I made originally?

      I don’t even get the purpose of the million images on Docker Hub. What’s the difference between using a MySQL image, requiring MySQL in a docker compose file, and making my own image?

      So sorry to bother you with this but I’m thinking you might be able to help me understand. I understood packages, jails, and VMs but this is a whole other thing, lol.

      • Björn Tantau@swg-empire.de · 1 day ago

        You would probably make your own image that builds on an existing Django image. Building that image would bake your code into it. To ease development you would mount your development directory into the container.

        Then when you release your app you would update your image with the latest code and also update the Django image it depends on.

        MySQL would live in another container separate from yours. It would need its own mounted directory where all the database files live on the host.

        If you needed some other app with a web API or so you would put that in its own container as well.

        To put everything together you would use docker-compose. That puts them into one network and defines how they may talk with each other, what directories or files from the host to mount and other configuration.
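
        A hypothetical docker-compose.yml along those lines (service names, port, and paths are placeholders):

        services:
          web:
            build: .                  # builds your Dockerfile, baking in your Django code
            ports:
              - "8000:8000"
            volumes:
              - .:/app                # dev convenience: mount your working directory
            depends_on:
              - db
          db:
            image: mysql:8
            environment:
              - MYSQL_ROOT_PASSWORD=example
            volumes:
              - ./mysql-data:/var/lib/mysql   # database files persist on the host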

  • badlotus · 23 hours ago

    Think of Docker containers like lightweight, portable mini-computers that run on your actual computer (the host). Each container has everything it needs to run an application—like code, libraries, and dependencies—but it shares the host’s OS kernel rather than running a full OS itself.

    Containers vs. the Host System

    • Not a full OS: Containers don’t have their own separate OS but use the host’s OS kernel. They do, however, have their own filesystem and isolated environment.

    • Like a sandboxed app: A container is more like a self-contained app that has just enough system components to run but doesn’t affect the rest of your system.

    Keeping Containers Updated

    You do need to update containers separately—updating the host system isn’t enough. Here’s why:

    1. Containers use images: Containers are created from images (like templates). If the image gets outdated, the container running from it will also be outdated.

    2. Rebuilding is required: You can’t “patch” a running container like a normal program. Instead, you must:

    • Pull the latest version of the image (docker pull my-image:latest).

    • Stop and remove the old container (docker stop my-container && docker rm my-container).

    • Start a new container with the updated image (docker run -d --name my-container my-image:latest).
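
    Put together, that manual flow is just three commands (using the example names above):

    docker pull my-image:latest
    docker stop my-container && docker rm my-container
    docker run -d --name my-container my-image:latest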

    Automating Updates

    To simplify updates:

    • Use a container management tool like Docker Compose, Portainer, or Kubernetes.

    • Watch for updates to base images (docker images to list images and docker pull to update).

    • Set up an automated pipeline to rebuild and deploy updated containers. There are tools like Watchtower that will automate this with minimal effort.
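
    For Watchtower specifically, a minimal sketch is a sidecar service with access to the Docker socket (the check interval is an assumption):

    services:
      watchtower:
        image: containrrr/watchtower:latest
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock   # lets it manage other containers
        command: --interval 86400   # check for new images once a day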

    In short: Updating the host OS won’t update your containers. You need to rebuild and restart containers with updated images to keep them secure and up-to-date.

    Note for comments below: If you are trying to customize a Docker image, you must build a new image. This is done through “Dockerfiles” that instruct the Docker engine what commands to run on a base image to create a custom image. For instance, one could take a minimal Linux image like Alpine and use a Dockerfile to install NGINX, producing an NGINX image for a reverse proxy container. In many cases you can find published images that meet most basic needs, so building images is often only necessary for advanced Docker implementations that require special customization.
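
    A sketch of that Alpine-plus-NGINX example (the nginx.conf is a hypothetical custom config file):

    FROM alpine:3.20
    # install NGINX from Alpine's package repo
    RUN apk add --no-cache nginx
    # hypothetical custom reverse proxy configuration
    COPY nginx.conf /etc/nginx/nginx.conf
    EXPOSE 80
    # run in the foreground so the container stays up
    CMD ["nginx", "-g", "daemon off;"]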