• theshatterstone54@feddit.uk · 7 months ago

    Omg it’s Computerphile and Dr. Mike Pound! He’s a lecturer at the Uni of Nottingham where I’m studying! Met him and other Computerphile lecturers a few times (I even had some of them as my lecturers) and they’re all a wonderful bunch!

    • wewbull@feddit.uk · 7 months ago

      Maybe not peaked in terms of performance, but in terms of rate of development? Absolutely.

  • utopiah@lemmy.ml · 7 months ago

    Interesting video based on “No “Zero-Shot” Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance” https://arxiv.org/abs/2404.04125, which basically says (my interpretation) that contemporary techniques, i.e. not just LLMs but LMMs, are statistical models based on large datasets, and they don’t, and can’t except at a ridiculously (basically impractically) high cost, cover the long tail, namely whatever isn’t quite popular.
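    To make the “exponential data” point concrete, here is a toy sketch of the log-linear trend the paper reports (my reading of arXiv:2404.04125): zero-shot performance on a concept grows roughly linearly with the log of that concept’s frequency in the pretraining data. The function and all the numbers below are made up for illustration, not taken from the paper.

    ```python
    import math

    def toy_accuracy(concept_frequency, slope=0.1, intercept=0.0):
        """Hypothetical model: accuracy ~ slope * log10(frequency) + intercept.

        This is an illustrative stand-in for the log-linear relationship
        reported in the paper, not a fit to its actual data.
        """
        return slope * math.log10(concept_frequency) + intercept

    # A popular concept seen 1,000,000 times vs a long-tail concept seen 100 times:
    common = toy_accuracy(1_000_000)  # 0.1 * log10(1e6) = 0.6
    rare = toy_accuracy(100)          # 0.1 * log10(1e2) = 0.2

    # Under this toy model, each +0.1 of accuracy requires 10x more
    # occurrences, so closing the gap means multiplying the rare
    # concept's data by 10^4 -- exponentially more data, not linearly more.
    needed_frequency = 100 * 10 ** ((common - rare) / 0.1)
    print(needed_frequency)  # 1,000,000 occurrences just to match
    ```

    The takeaway matches the comment above: for concepts in the long tail, the data required to reach “popular concept” performance grows exponentially, which is what makes it impractical.
    
    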

  • delirious_owl · 7 months ago

    How would that be possible? It’s still shitty and hard to use.