• amiganA · 10 months ago

    It seems every new shiny technology today tries its darndest to short-circuit 40+ years of advances in OS virtual memory design. Between Electron and Docker, the entire idea of loading an image into memory once and sharing its pages among hundreds of processes is basically dead. But at least there’s lower support burden!!!1111

    • boeman@lemmy.world · 10 months ago

      Docker is just a lightweight container that packages the app and its OS userland together. It uses the underlying kernel of the host system. Nowhere near the same as Electron apps.
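
      For what it's worth, here's a rough way to see the shared-kernel part for yourself. This is just a sketch, assuming Docker is installed and an alpine image is available locally; on Docker Desktop the "host" is actually a VM, so the comparison is against that VM's kernel:

      ```python
      import platform
      import subprocess

      # Kernel release as seen by the host.
      host_kernel = platform.release()

      # Kernel release as seen from inside a container (any small image
      # with a `uname` binary would do; alpine is just an assumption here).
      container_kernel = subprocess.run(
          ["docker", "run", "--rm", "alpine", "uname", "-r"],
          capture_output=True, text=True, check=True,
      ).stdout.strip()

      print("host:     ", host_kernel)
      print("container:", container_kernel)
      print("same kernel" if host_kernel == container_kernel else "different kernels (VM-backed Docker?)")
      ```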

      • amiganA · 10 months ago

        Except each container has its own libc and any other dependencies. If any linked binary or library has a different inode, it gets loaded separately. I would say it is indeed quite similar, even if the images in question here aren’t hundreds of megabytes in size like with Electron.
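
        To make the inode point concrete, here's a minimal sketch. The container rootfs paths and the libc path are placeholders you'd have to adjust (for example the overlay2 MergedDir that `docker inspect` reports; reading those usually needs root). If libc in two containers resolves to different (device, inode) pairs, the page cache ends up holding two copies:

        ```python
        import os
        import sys

        # Hypothetical merged rootfs paths of two running containers.
        root_a, root_b = sys.argv[1], sys.argv[2]
        # Library to compare; the path varies by distro (this one is Debian-style).
        LIBC = "lib/x86_64-linux-gnu/libc.so.6"

        def file_id(root):
            st = os.stat(os.path.join(root, LIBC))
            return (st.st_dev, st.st_ino)

        a, b = file_id(root_a), file_id(root_b)
        print("container A libc:", a)
        print("container B libc:", b)
        # Same (device, inode) -> one shared copy in the page cache;
        # different -> each container loads its own.
        print("shareable" if a == b else "loaded separately")
        ```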

        • MotoAsh@lemmy.world · 10 months ago

          The funny thing is, as much as people shit on Java, that's exactly what its Java EE container architecture was for. Truly tiny microservices in WARs, an entire app in an EAR. All managed by a parent container that can dedupe dependencies with a global class loader if done well, and automatically scale WARs horizontally, too.

          No idea how to get that level of sharing with OS-level containers.

          • amiganA · 10 months ago

            “Different inode” means a different file entirely, not necessarily its major:minor:inode tuple as resolved through bind mounts/overlayFS/whatever. I’m saying that if you have containers using even slightly different base images, you effectively have n copies of libc in memory at once on the same system, which does not happen when you do not use containers.
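
            If anyone wants to count how many libc copies are actually resident on a box, something like this sketch works on a Linux host (you need enough privileges to read other users' /proc entries, otherwise you only see your own processes):

            ```python
            import glob
            import re

            # Each /proc/<pid>/maps line: address perms offset dev inode pathname
            libc_ids = set()
            for maps in glob.glob("/proc/[0-9]*/maps"):
                try:
                    with open(maps) as f:
                        for line in f:
                            parts = line.split()
                            if len(parts) >= 6 and re.search(r"libc[.-]", parts[5]):
                                libc_ids.add((parts[3], parts[4]))  # (dev, inode)
                except (PermissionError, FileNotFoundError):
                    continue  # unreadable or already-exited processes

            # One entry means every process shares a single libc image;
            # more entries mean that many separate copies are mapped.
            print(f"{len(libc_ids)} distinct libc file(s) mapped:")
            for dev, inode in sorted(libc_ids):
                print(f"  dev={dev} inode={inode}")
            ```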

            • AggressivelyPassive@feddit.de · 10 months ago

              If you’re running enough images on the same machine to make that a relevant point, you have absolutely no excuse not to provide common base images.

              Basically, there are two scenarios here: either you’re running a service for others to deploy their images on (Azure etc.), in which case you want isolation, or you’re running your own images, in which case you should absolutely provide a common base image.
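
              If you want to check whether two of your images actually share a base, a quick sketch (assumes a local Docker daemon and that both images are already pulled) is to compare the layer digests that `docker image inspect` reports:

              ```python
              import json
              import subprocess
              import sys

              def layers(image):
                  # RootFS.Layers lists an image's content-addressed layer digests.
                  out = subprocess.run(
                      ["docker", "image", "inspect", image],
                      capture_output=True, text=True, check=True,
                  ).stdout
                  return json.loads(out)[0]["RootFS"]["Layers"]

              img_a, img_b = sys.argv[1], sys.argv[2]
              shared = set(layers(img_a)) & set(layers(img_b))
              print(f"{len(shared)} shared layer(s) between {img_a} and {img_b}")
              ```

              Shared layers are stored once on disk, and with a storage driver like overlay2 the files in a shared layer are literally the same files, so the page-cache sharing the thread is talking about can actually happen.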

            • meteokr@community.adiquaints.moe · 10 months ago

              If your applications require different libc versions, then you’d have each of them in memory regardless of whether you used containers. If they don’t require different versions, then you’re just blaming containers for something the user is responsible for managing. When Alpine images are a dozen or so MBs, base image disk size is basically irrelevant in the grand scheme of things, as you probably have much more than that in dependencies/runtimes. Even Debian base images are pretty tiny these days. Depending on the application, you could have just a single binary with no OS files at all. So if you do care about disk and memory use, you can take advantage of the tools containers give you to optimize for that. It’s the user’s choice how many resources they want to use; it’s not the fault of the tooling.

      • Carighan Maconar@lemmy.world · 10 months ago

        It’s still the same issue: a pretty big overhead that’s unnecessary in the vast majority of situations.

        At my current workplace, ~20% of hardware goes to docker. Is it still worth it? For the company it is, I assume, since we can let developers with fuck-all operations experience deploy stuff without bricking our servers. But we could also be hiring operations people who know how to run applications on servers without fucking them up; of course, in a money game docker wins out for ease and speed.

        Importantly, though, compared to stuff like Electron, we can scale up the hardware, and that’s included in the cost of running docker. Desktop users stuck with shit like VSCode, Beekeeper or Mongo Compass can’t realistically do that; PC upgrades aren’t something you do in 10 minutes, and even then your options are limited.

        So for companies and servers, docker makes a lot of sense. Especially on the business side. For a private end user, these virtualization tools remove the potential performance all that fancy hardware nowadays could provide. And in the case of Electron shit, they also make for a worse inconsistent UI and laggy interactions.

        • Drew@sopuli.xyz · 10 months ago

          Hey, what do you mean 20% of your hardware goes to docker? If you’re not running Linux then docker isn’t the issue, it’s the VM. If you are running Linux, it should be just as lightweight as, say, systemd.
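
          And if anyone doubts the “just processes” part: on a Linux host a container’s main process shows up as an ordinary host PID. A sketch (assumes a running container name/ID as the argument; on Docker Desktop the PID lives inside the VM instead):

          ```python
          import subprocess
          import sys

          container = sys.argv[1]  # name or ID of a running container

          # .State.Pid is the container's init process as seen by the host.
          pid = subprocess.run(
              ["docker", "inspect", "--format", "{{.State.Pid}}", container],
              capture_output=True, text=True, check=True,
          ).stdout.strip()

          # Visible in the host's /proc: no hypervisor in between, just a normal
          # process running in its own namespaces and cgroups.
          with open(f"/proc/{pid}/comm") as f:
              print(f"{container} is host PID {pid} ({f.read().strip()})")
          ```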

          • axo@feddit.de · 10 months ago

            Yea, docker only eats up storage. And not even much, if you share the same base image.

            Not really any CPU or RAM overhead

        • xavier666@lemm.ee · 10 months ago

          So for companies and servers, docker makes a lot of sense. Especially on the business side. For a private end user, these virtualization tools remove the potential performance all that fancy hardware nowadays could provide.

          Excellent point!

        • ebc@lemmy.ca · 10 months ago

          As a freelance frontend dev, I really love Docker. I don’t need to mess up my system installing ancient Java versions or whatever Python wants to easy_install or pip install; I can just run the backend Docker image and go on with my life. Especially when project A’s backend has incompatible Java/Ruby/Python dependencies with project B.

          You can shit on npm all you want (yes, I was there for left_pad), but at least they got the dependency issues between projects solved.

    • jaybone@lemmy.world · 10 months ago

      Because everyone is a developer now. Like English majors who never took an Operating Systems design class.