• @theshatterstone54@feddit.uk
    6 points • 2 months ago

    Omg it’s Computerphile and Dr. Mike Pound! He’s a lecturer at the Uni of Nottingham where I’m studying! Met him and other Computerphile lecturers a few times (I even had some of them as my lecturers) and they’re all a wonderful bunch!

    • @wewbull@feddit.uk
      1 point • 2 months ago

      Maybe not peaked in terms of performance, but in terms of rate of development … Absolutely.

  • @utopiah@lemmy.ml
    2 points • 2 months ago

    Interesting video, based on “No “Zero-Shot” Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance” (https://arxiv.org/abs/2404.04125), which basically says (my interpretation) that contemporary techniques, i.e. not just LLMs but LMMs (large multimodal models), are statistical models trained on large datasets, and so they don’t, and can’t, cover the long tail, namely concepts that are not quite popular, except at a ridiculously high, basically impractical, cost in data.
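
    The paper’s headline finding, as I read it, is that zero-shot performance on a concept scales roughly log-linearly with how often that concept appears in the pretraining data, so linear gains in accuracy require exponentially more data. A toy sketch of that relationship (constants and function names are illustrative, not fitted to the paper’s results):

    ```python
    import math

    def zero_shot_accuracy(concept_frequency, slope=0.1, intercept=0.0):
        """Toy model: accuracy grows with log10 of pretraining concept frequency."""
        if concept_frequency < 1:
            return intercept  # concept effectively absent from pretraining data
        return min(1.0, intercept + slope * math.log10(concept_frequency))

    # Each fixed step up in accuracy costs 10x more pretraining examples,
    # which is why long-tail (rare) concepts stay out of reach in practice.
    for freq in [10**k for k in range(1, 7)]:
        print(f"{freq:>8} examples -> accuracy ~ {zero_shot_accuracy(freq):.2f}")
    ```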

  • @delirious_owl
    0 points • 2 months ago

    How would that be possible? It’s still shitty and hard to use.