• cley_faye@lemmy.world · 2 days ago

    I don’t like the approach of piling more things on top of even more things to achieve the same goal as the base, frankly speaking. A “local” kubernetes cluster serves no purpose other than adding incredible complexity for little to no gain over a mere docker-compose. And a small cluster would work equally well with docker swarm.

    A service, even one made of multiple parts, should always be described that way. It’s easy to move “up” the stack of complexity if you so desire. Having “have a k8s cluster with helm” as the base requirement sounds insane to me.
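
    For the record, this is what I mean by describing a multi-part service in one place. A minimal docker-compose sketch, where all the service names, images, and credentials are made up:

    ```yaml
    # Hypothetical three-part service described declaratively in one file.
    # Everything here (names, images, credentials) is a placeholder.
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
        depends_on:
          - api
      api:
        image: ghcr.io/example/api:latest   # placeholder image
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app
          POSTGRES_DB: app
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data:
    ```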

    • mac@lemm.ee · 1 day ago

      Honestly, a lot of the time I don’t understand why so many businesses use k8s.

      At my company especially, we know almost exactly what our traffic will look like from 9am to 5pm. We don’t really need flexible scaling, yet we still use it because the technology is hyped. It’s similar with the cloud: we certainly don’t need to be spending as much as we do, but since everyone else is on or migrating to the cloud, we are as well.

      • azertyfun@sh.itjust.works · 12 hours ago

        The “problem” with k8s is not that it’s abstract-y (it’s not inherently any more abstract than docker), it’s that it’s very complex and enterprise-y.

        The need for such a complex orchestration layer is not necessarily obvious until you’ve worked on a complex infra setup that wasn’t deployed with kubernetes. Believe me: once you’ve seen the depths of hell that are hundreds of separately configured customer setups driven by thousands of lines of ansible playbooks, all using ad-hoc systems for creating containers/VMs, with even more ad-hoc and hacked-together development and staging environments, suddenly k8s starts looking very appetizing. Instead of an abominable spaghetti of bash scripts, playbooks, and random documentation, you get one common (albeit complex) set of tools, understood by every professional, that manages your application deployment & configuration, redundancy, software upgrades, firewall configs, etc.
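
        To make that concrete, “declaring it once” looks roughly like this in Kubernetes terms. Just a sketch: the name, image, and ports are placeholders.

        ```yaml
        # Hypothetical Deployment + Service: the desired state lives in the manifest,
        # and the cluster converges on it instead of ad-hoc scripts pushing changes.
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: customer-api                # placeholder name
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: customer-api
          template:
            metadata:
              labels:
                app: customer-api
            spec:
              containers:
                - name: customer-api
                  image: ghcr.io/example/customer-api:1.4.2   # placeholder image
                  ports:
                    - containerPort: 8080
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: customer-api
        spec:
          selector:
            app: customer-api
          ports:
            - port: 80
              targetPort: 8080
        ```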

        A small self-hosted production kubernetes cluster doesn’t have to be hard to operate or significantly more expensive than bare metal: buy 3U of rack space, plop in 3 semi-large servers (think 128 GB of RAM plus a few TB of SSD in RAID), install rancher and longhorn, and now you’ve got a prod cluster large enough for nearly every workload. If you ever need to upgrade it, that means you have so many customers that hiring a dedicated k8s administrator will be a no-brainer.

        Or you can buy minutes from AWS because CapEx is the absolute devil and instead you pay several times as much in OpEx to make it someone else’s problem. But if you’re doing that then you’re not comparing against “installing things the old-fashioned way”.

        • mac@lemm.ee · 2 hours ago

          Thanks for the response!

          I personally haven’t rolled out a k8s or k3s cluster, so it’s always felt a bit abstract to me. I probably should, though, to demystify it for myself in my work environment.

          Complexity is definitely what I’ve noticed when I see my devops team’s PRs into the ingress directories.

          I guess the abstract issue I see, which ties in to the meme I shared above, is that sometimes around deploys we get blips of 503s/504s and we can’t seem to track them down. Is it the load balancer? The ingress? Kong? The fact that there are so many layers makes infra issues rough to debug.

      • loudwhisper@infosec.pub · 1 day ago

        Kubernetes is not really meant primarily for scaling. Even kubernetes clusters need autoscaling groups on their nodes, or horizontal pod autoscalers, to support it, and those are relatively minor features.

        The benefits are pooling computing resources and effectively creating a private cloud: easy replication of applications in case of hardware failure, a single language for deploying applications, network controls, etc.
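
        As an example of how minor the autoscaling piece is, a horizontal pod autoscaler is just one small extra manifest pointed at an existing deployment. A rough sketch, with placeholder names and thresholds:

        ```yaml
        # Hypothetical HPA bolted onto a deployment named "customer-api".
        # All names and numbers are placeholders.
        apiVersion: autoscaling/v2
        kind: HorizontalPodAutoscaler
        metadata:
          name: customer-api
        spec:
          scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: customer-api
          minReplicas: 2
          maxReplicas: 6
          metrics:
            - type: Resource
              resource:
                name: cpu
                target:
                  type: Utilization
                  averageUtilization: 70
        ```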

    • Lodra@programming.dev · 1 day ago

      Yeah, I’m not a fan of helm either. In fact, I avoid charts when possible. But kustomize is great.
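
      For reference, a kustomize setup can be as small as one kustomization.yaml per environment layered on top of a shared base. A rough sketch, with the paths, names, and tag all placeholders:

      ```yaml
      # Hypothetical overlays/prod/kustomization.yaml reusing plain manifests from a base.
      apiVersion: kustomize.config.k8s.io/v1beta1
      kind: Kustomization
      resources:
        - ../../base            # placeholder path to plain manifests
      replicas:
        - name: customer-api    # placeholder deployment name
          count: 3
      images:
        - name: ghcr.io/example/customer-api
          newTag: "1.4.2"       # placeholder tag pinned per environment
      ```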

      I feel the same way about docker compose. If it wasn’t already obvious, I’m biased in favor of k8s. I like and prefer that interface. But that’s just preference. If you like docker compose, great!

      There’s one point where I do disagree, however. There are scenarios where a local k8s cluster has a good and clear purpose. If your production environment runs on k8s, then it’s best to mirror that locally as much as possible. In fact, there are many apps that even require a k8s API to run. Plus, being able to destroy and rebuild your entire k8s cluster in 30s is wonderful for local testing.
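
      A local throwaway cluster can itself be one small declarative file. A sketch assuming kind, though k3s/k3d and similar tools work much the same way:

      ```yaml
      # Hypothetical kind config: `kind create cluster --config kind.yaml` to build it,
      # `kind delete cluster` to throw it away and start over.
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
        - role: control-plane
        - role: worker
      ```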

      Edit: typos

      • cley_faye@lemmy.world · 1 day ago

        I won’t argue the ups and downs of each technology, but I recently looked into docker swarm and it was all I expected kubernetes to be, without the hassle. And I could also get a full cluster with services restored from scratch in 30s. But I am obviously biased towards it, too :)
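
        For what it’s worth, a swarm stack file is just a compose file with a deploy section, fed to docker stack deploy. A minimal sketch where the image and counts are made up:

        ```yaml
        # Hypothetical stack.yml, deployed with: docker stack deploy -c stack.yml mystack
        version: "3.8"
        services:
          api:
            image: ghcr.io/example/api:latest   # placeholder image
            ports:
              - "8080:8080"
            deploy:
              replicas: 3
              restart_policy:
                condition: on-failure
              update_config:
                parallelism: 1
                order: start-first
        ```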

        • Cpo@lemm.ee · 8 hours ago

          Did not realize swarm was still a thing, not trying to be offensive here.

          My best find was using traefik as a reverse proxy in docker (compose). It is easily configurable through container labels and pulls resource definitions straight from docker. It is awesome!
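
          For anyone curious, a minimal sketch of that kind of setup, where the hostname and the demo service are just placeholders:

          ```yaml
          # Hypothetical docker-compose.yml: Traefik reads router config straight from container labels.
          services:
            traefik:
              image: traefik:v3.0
              command:
                - --providers.docker=true
                - --providers.docker.exposedbydefault=false
                - --entrypoints.web.address=:80
              ports:
                - "80:80"
              volumes:
                - /var/run/docker.sock:/var/run/docker.sock:ro
            whoami:
              image: traefik/whoami              # placeholder demo service
              labels:
                - traefik.enable=true
                - traefik.http.routers.whoami.rule=Host(`whoami.example.test`)
                - traefik.http.routers.whoami.entrypoints=web
          ```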