vcmj@programming.dev · 8 months ago

Most of the largest datasets are kind of garbage because of this. I’ve had this idea to run the data through the network every epoch and evict samples that are too similar to the network’s own output from the next epoch, but I’ve never tried it. Probably someone smarter than me already tried that and it didn’t work. I just feel like there’s some mathematical way around this we aren’t seeing. Humans are great at filtering out the cruft, so there must be some indicators there.
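
A minimal sketch of what that could look like, assuming “too similar to the output” means the per-sample loss has dropped below a threshold (i.e. the network already reproduces that sample almost exactly). The threshold, model, and data here are all hypothetical placeholders, not a tested recipe:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, Subset

torch.manual_seed(0)
X = torch.randn(1024, 16)
y = torch.randn(1024, 1)
full_dataset = TensorDataset(X, y)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss(reduction="none")

active_indices = list(range(len(full_dataset)))
EVICT_THRESHOLD = 1e-3  # hypothetical cutoff for "too similar to the output"

for epoch in range(10):
    loader = DataLoader(Subset(full_dataset, active_indices),
                        batch_size=64, shuffle=True)

    # Normal training pass over the samples that survived the last eviction.
    model.train()
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb).mean()
        loss.backward()
        optimizer.step()

    # After the epoch, score every active sample and evict the ones the
    # network already reproduces, so the next epoch only sees the rest.
    model.eval()
    keep = []
    with torch.no_grad():
        for idx in active_indices:
            xb, yb = full_dataset[idx]
            per_sample_loss = loss_fn(model(xb.unsqueeze(0)),
                                      yb.unsqueeze(0)).mean().item()
            if per_sample_loss > EVICT_THRESHOLD:
                keep.append(idx)
    active_indices = keep

    print(f"epoch {epoch}: {len(active_indices)} samples remain")
    if not active_indices:
        break
```

For a generative model you’d probably swap the loss check for some similarity measure between the sample and the model’s generated output, which is where it gets harder to pin down mathematically.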