• barsoap@lemm.ee

    An NPU is a cut-down GPU, built to run ML workloads on constrained power budgets.

    Quite literally. Keep the memory architecture, keep the massive banks of ALUs, strip out the little intelligence GPU cores have in their control units, and you have an NPU. Oh, one more thing: make sure those ALUs support ludicrously low-precision arithmetic. GPUs could do the same without any real downside; the reason GPUs floored out at fp16 is that there were no workloads benefiting from lower precision.
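
    For a sense of what "ludicrously low precision" buys you, here's a minimal sketch (plain numpy, not any particular NPU's or GPU's instruction set) of the int8 multiply-accumulate those ALUs are built around:

        # Illustrative only: the core inference op is a dot product, and "low
        # precision" means the multiplies happen on int8 values feeding a wider
        # int32 accumulator, instead of doing everything in fp32.
        import numpy as np

        def quantize(x: np.ndarray, scale: float) -> np.ndarray:
            """Map fp32 values to int8 with a simple symmetric scale."""
            return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

        rng = np.random.default_rng(0)
        a = rng.standard_normal(1024).astype(np.float32)
        b = rng.standard_normal(1024).astype(np.float32)

        # fp32 reference: what you'd compute without quantization
        ref = float(a @ b)

        # int8 path: 8-bit multiplies accumulated into 32 bits, rescaled back
        # to float exactly once at the end
        scale_a, scale_b = np.abs(a).max() / 127, np.abs(b).max() / 127
        qa, qb = quantize(a, scale_a), quantize(b, scale_b)
        acc = int(qa.astype(np.int32) @ qb.astype(np.int32))
        approx = acc * scale_a * scale_b

        print(f"fp32: {ref:.3f}  int8: {approx:.3f}")  # close enough for inference

    The whole dot product runs on 8-bit multiplies with a 32-bit accumulator, which is why trimming the ALUs down to that width saves so much silicon and power.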

    It makes sense on mobile devices; phones have been shipping NPUs for ages to do their AI image processing, listen for voice commands, etc. It makes sense for at least some data centres, because specialising your hardware to save electricity is worth it. It doesn’t make sense anywhere in between: just get a proper GPU and you can do AI and play Crysis.