• glibg10b@lemmy.ml · +119/−5 · 1 year ago

    You seem to like the lines-of-code metric. There are many lines of GNU code in a typical Linux distribution. You seem to suggest that (more LOC) == (more important). However, I submit to you that raw LOC numbers do not directly correlate with importance. I would suggest that clock cycles spent executing code is a better metric. For example, if my system spends 90% of its time executing XFree86 code, XFree86 is probably the single most important collection of code on my system. Even if I loaded ten times as many lines of useless bloatware onto my system and never executed that bloatware, it certainly wouldn't be more important code than XFree86. Obviously, this metric isn't perfect either, but LOC really, really sucks. Please refrain from using it ever again to support any argument.

    • Rikudou_Sage@lemmings.world · +66/−2 · 1 year ago

      Can confirm it’s a shitty metric. I once saved the company I was working at a few million by changing one line of code. It took 3 days to find it, and the fix was only 3 characters.

      • AggressivelyPassive@feddit.de · +27 · 1 year ago

        That’s the curse and blessing of our profession: efficiency of work is almost impossible to measure once you go beyond very simple code.

        You can feel like a hero for changing three characters and finally fixing that nasty bug, or you can feel like an absolute disgrace for needing days to find such a simple fix. Your manager employs the same duality of judgement.

        • Rikudou_Sage@lemmings.world · +15 · 1 year ago

          I feel like a hero in this particular case. It was a bug in code that was written when I was still too young to even read, and no one knew how to run it. We didn’t have access to the pipelines, so no one knew how to build it either. It was a very obscure hybrid of C and PHP. I basically had to be the compiler: I went line by line through the whole codebase, searching for the code path that caused the error. Sounds easy enough, right? Just Ctrl+click in your IDE. Wouldn’t it be a shame if someone decided that function names should be constructed as strings through at least 20 levels of nesting, where each layer adds something to the function name before it’s finally called? TL;DR: it was very shitty code.
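          For anyone lucky enough never to have met this pattern: here is a minimal sketch of the "build the function name as a string" anti-pattern, as a hypothetical Python analogue (not the original C/PHP code, and with made-up names). Because the callee's name never appears literally in the source, an IDE's go-to-definition has nothing to click through.

```python
# Hypothetical sketch of dynamic, string-built dispatch. The real codebase
# described above did this through ~20 layers of nesting.
def handle_user_create():
    return "created"

def dispatch(entity, action):
    # The name "handle_user_create" never appears as a literal call,
    # so Ctrl+click on handle_user_create() finds no call sites.
    name = "handle_" + entity + "_" + action
    return globals()[name]()
```

          Calling `dispatch("user", "create")` resolves and invokes `handle_user_create` at runtime; tracing that by hand, layer by layer, is what "being the compiler" means here.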

      • Gork@lemm.ee (OP) · +15 · 1 year ago

        But did you add 3 characters? Gotta bump up that code count bruh.

    • stylist_trend@lemmy.world · +22/−1 · 1 year ago

      I wrote a program that does nothing but busy loop on all cores. stylist_trend/Linux is my favourite OS.
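      Such a cycle-burner might look something like this (a Python sketch; bounded iteration counts are used here so it actually terminates, where the "real" version would loop forever):

```python
import multiprocessing as mp

def spin(iterations):
    # Pure busy work: burn cycles, produce nothing useful.
    count = 0
    for _ in range(iterations):
        count += 1
    return count

def burn_all_cores(iterations=1_000_000):
    # One worker per core, so every core spends its time on this "OS".
    with mp.Pool(mp.cpu_count()) as pool:
        return pool.map(spin, [iterations] * mp.cpu_count())
```

      By the clock-cycle metric above, this program instantly becomes the most important code on the machine.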

      • stylist_trend@lemmy.world · +14 · 1 year ago

        What you refer to as Linux, is actually called Forkbomb/Linux, or as I’ve recently taken to calli-[Process Killed]

      • Zacryon@feddit.de · +3 · edited · 1 year ago

        Yes. Also, the number of clock cycles required depends a lot on the individual CPU architecture.

        Take division, for example: some CPUs have hardwired logic that computes the division directly at the circuit level, while others essentially run a loop of repeated subtraction. The difference in required clock cycles for a division operation can then be huge.
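        The subtraction-loop approach looks roughly like this (a Python sketch of the algorithm, not actual microcode): its cost grows with the size of the quotient, whereas a hardware divide unit takes a roughly fixed number of cycles.

```python
def divide_by_subtraction(dividend, divisor):
    # Naive unsigned division: repeatedly subtract the divisor and
    # count how many times it fits. O(quotient) iterations.
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor
        quotient += 1
    return quotient, dividend  # (quotient, remainder)
```

        Dividing 1 by 1 takes one pass; dividing a large number by 1 takes as many passes as the quotient, which is why the cycle count varies so wildly between the two hardware approaches.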

        Another example: is it a scalar or superscalar CPU?

        A rather obvious example: the bit width of the CPU. 32-bit systems process 64-bit data much less efficiently than 64-bit systems.

        Then there is other stuff like branch prediction, or system dependencies like memory bus width and clock, cache size and associativity, etc.

        Long story short: When evaluating the performance of code, multiple performance metrics have to be considered simultaneously and prioritized according to the development goals.

        Lines of code is usually a veeery bad metric. (I sometimes spend hours writing just a few lines of code. But those are good ones then.) Cycles per code segment is better, but still not good, unless you are developing for a very specific target system. Do benchmarking and profiling, run it on different systems, and maybe design individual performance metrics based on your expectations.
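        As a concrete starting point, even something as simple as Python's timeit measures actual cost far more honestly than counting lines (the two functions here are made up for illustration; which one wins varies by interpreter and input size):

```python
import timeit

def terse(n):
    # One "clever" line...
    return sum(i * i for i in range(n))

def verbose(n):
    # ...versus several lines doing the same work.
    total = 0
    for i in range(n):
        total += i * i
    return total

def benchmark(n=10_000, number=100):
    # More LOC does not imply slower: measure, don't count lines.
    t_terse = timeit.timeit(lambda: terse(n), number=number)
    t_verbose = timeit.timeit(lambda: verbose(n), number=number)
    return t_terse, t_verbose
```

        Both compute the same result, and only measurement (ideally repeated, on the systems you actually target) tells you which form is cheaper.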

    • monk@lemmy.unboiled.info · +3 · 1 year ago

      No, he doesn’t. He suggests that there are Linux systems with no GNU code, like the one I’m replying from, and whether “no” means “no SLOC”, “no instructions spent executing”, or “no packages” absolutely doesn’t matter.