Meta’s AI image generator is coming under fire for its apparent struggles to create images of couples or friends from different racial backgrounds.

  • gbzm@lemmy.world · 7 months ago

    That’s absolutely true: generative AI is mostly a parlor trick with very few applications beyond placeholder art and faster replies to emails. But even for your kind of engineering problem, there’s still a big issue that’s often disregarded.

    If we keep your example of an AI for a city grid, an important aspect of this type of engineering problem is guaranteeing that the system has as few catastrophic failures as possible, usually less than 1 per 10⁹ hours of uptime for systems where “catastrophic” means a certain number of dead bodies or a high monetary cost (a city grid, train signalling, flight control…). AI models may very well end up being discarded for those problems: even if you observe better accuracy in simulations and experiments, mathematically proving that 10⁹ figure is impossible because we don’t know how the models work internally. Proving the threshold experimentally is possible in principle, but a 10⁹ number would require something like centuries of concurrent testing in every city in the world…

    I’ve just had a class with this example for trains. They were testing a system that reads signals with a camera in order to move towards a more autonomous train. Deep learning performed better than classical image processing, but image processing lets you prove that the train will misread a signal less than x% of the time with far higher certainty than a black box allows, so they had to go with image processing if they ever wanted to pass safety certifications.
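    To put a rough number on that testing burden, here’s a minimal back-of-envelope sketch in Python. It assumes a constant failure rate (exponential model), zero failures observed during the whole test campaign, and a one-sided 95% confidence target; those modelling choices are my assumptions, not taken from any certification standard. With zero failures in T hours, the Poisson upper confidence bound on the rate is -ln(1 - confidence) / T, so you can solve for the T needed to claim a rate below 10⁻⁹ per hour.

    ```python
    import math

    def required_test_hours(rate_target: float, confidence: float = 0.95) -> float:
        """Failure-free test hours needed to claim the failure rate is below
        rate_target (failures/hour) at the given one-sided confidence level.

        With zero observed failures in T hours, the Poisson upper confidence
        bound on the rate is -ln(1 - confidence) / T; solve that for T.
        """
        return -math.log(1.0 - confidence) / rate_target

    target = 1e-9  # fewer than 1 catastrophic failure per 10^9 operating hours
    hours = required_test_hours(target)

    print(f"failure-free hours needed: {hours:.2e}")                        # ~3.00e+09
    print(f"single unit              : {hours / (24 * 365):,.0f} years")    # ~342,000 years
    print(f"1,000 concurrent units   : {hours / (24 * 365 * 1000):,.0f} years")  # ~342 years
    ```

    Under those assumptions it works out to roughly 3×10⁹ failure-free hours, i.e. centuries even with a thousand units running in parallel, which is why the purely experimental route is a non-starter for this kind of certification.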

    So I guess deep learning explainability might be an even more significant challenge than finding a dataset that isn’t racially biased…