• crussel@lemmy.blahaj.zone · 1 year ago

    Come on now, next you’ll be saying the tech industry consistently overplays its incremental improvements as Earth-shattering paradigm shifts purely for the investment money!

    This message posted from the metaverse

    • sugar_in_your_tea@sh.itjust.works · 1 year ago

      Yup. As someone who works in tech, I was baffled by the number of people in my field who started freaking out about it. AI isn’t some magic panacea; it’s just another tool that needs to be designed for the task at hand. It’s cool that ChatGPT can get 80% of the way there in so many fields, but that last 20% is where all the hard work is (see the Pareto principle).

  • bh11235@infosec.pub · 1 year ago

    Reading this comment section is so strange. Skepticism about generative AI seems to have become some kind of professional sport on the internet.

    Consensus in our group is that generative AI is a great tool. Maybe not perfect, but the comparison to the metaverse is absurd: no one asked for the metaverse or needed it for anything, whereas GPT has literally bailed us out of difficult situations several times. For example, a proof of concept needed to be written in a programming language that no one in the group had enough experience with. Without GPT, this could easily have cost someone a week. With GPT assistance, the proof of concept was ready in less than a day.

    Generative AI does suffer from a host of problems: hallucinations, jailbreaks, injections, reality 101 failures. Believe me, I’ve encountered all of these intimately, as I’ve had to utilize GPT for some of my day job tasks, often against its own better judgment and despite its own woefully lacking capacity to deal with the task. What I think is interesting is a candid discussion: why do these issues persist? What have we tried? What techniques can we try next? Are these issues intractable in some profound sense, constituting a hard ceiling for where generative AI can go? Is there an “impossibility theorem for putting AI on autopilot”? Or are these limitations just artifacts we can engineer away and route around?

    It seems like instead of having this discussion, it’s become in vogue to wave the issues around triumphantly and implicitly declare the field successfully dunked on, and the discussion over. That’s, to be blunt, reductive. Smartphones had issues; the early internet had issues. Sure, “they also laughed at Bozo the Clown” and all that, but without a serious discussion of the landscape right now, of how far away we are from mitigating these issues and why, a lot of this “ha ha, suck it, AI” discourse strikes me as deeply performative. Like, suppose a year from now OpenAI solves hallucinations and the issue is just gone. Do all the cool kids who sneered at the invented legal precedents, crafted their image as knowing better than the OpenAI dweebs, and elegantly implied that hallucinations prove the entire field is a stupid, useless dead end lose any face? I think they don’t. I think that’s why this sneering has become such a lucrative online professional sport.

    • floofloof@lemmy.caOP · 1 year ago

      Some of the skepticism is just a reaction to the excessive hype with which generative AI has been pushed over the past few months. If you’ve seen tech hype cycles before, the hype itself can generate some skepticism. Plus there are many dubious cases where companies are shoving ChatGPT or similar into their products just so they can advertise them as “AI powered”, and these poorly thought out, marketing-driven moves deserve criticism.

    • joe@lemmy.world · 1 year ago

      It’s anecdotal but I have found that the people who are “skeptical” (to use your word) about generative AI often turn out to be financially dependent on something that generative AI can do.

      That is to say, they’re worried it will replace them at their job, and so they very much want it to fail.

      • nuxetcrux@lemmy.world · 1 year ago

        You have to have some skin in the game for that kind of cognitive dissonance. I think some are even resentful they can’t understand it. A 21st century cotton gin.

    • SokathHisEyesOpen@lemmy.ml · 1 year ago

      It’s amazing how critical Lemmy is of ChatGPT. It has become fashionable to pretend it’s a trash technology. The reality is that it is changing the world, and it will keep doing so.

    • Ataraxia@lemmy.world · 1 year ago

      It is weird. I love this stuff. It can be so useful, and I would love a game with interactive NPCs. These are also really fun tools to brainstorm with.

  • Moobythegoldensock@lemm.ee · 1 year ago

    3 months ago: Everyone’s going to lose their jobs!

    Today: Generative AI’s dead!

    More realistically: Generative AI is a tool that will gradually get better over time. It is not universally applicable, but it does have a lot of potential applications. It is not going to take over the world, nor will it just suddenly go away.

    • bouh@lemmy.world · 1 year ago

      IMO it’ll be more like the internet: society will take years to adapt to it and democratise its use. It took 30 years for the internet to bloom, and it is now a primary service in Europe. I’m pretty sure AI will take the same road.

    • sugar_in_your_tea@sh.itjust.works · 1 year ago

      That’s pretty much been my take from the beginning. My main concerns were and still are:

      • IP law, specifically copyright infringement
      • correctness - ChatGPT makes stuff up
      • detection - especially for school

      My main fear was that it would be more useful to scammers and fraudsters than for legitimate uses because of the above issues. I still have those concerns.

      With any new technology that people say will change the world overnight, take a step back and think it through. For example:

      • self-driving cars - we still have taxis, Uber, etc., so they haven’t taken over despite being here for years
      • robotics in manufacturing - it’s incredibly expensive to put together an end-to-end robotic factory, so there are still plenty of manufacturing jobs
      • automated fast food - again, the most I’ve seen is an increased number of kiosks, and that’s it

      And so on. People freak out about new tech, then a couple of years later they realize that it’s not “finished” and there will be plenty of time to adapt. Unless we recover an alien spaceship or something, that’s just not how technology progresses. Eventually generative AI will radically change our society, but it’ll take decades, so by the time your job is threatened, you’ll be ready to retire.

  • In the early 1980s, a teacher refused to let me word-process my homework (my penmanship was shit) on the grounds that I shouldn’t be able to produce a paper at the touch of a button.

    Upper management looks at AI end results and imagines a similar scenario: they don’t see the human effort behind the dumbwaiter, and imagine a clerk can just tell an LLM “make me a sequel to Dumbo” without getting very specific, and without then having a team of reviewers watch hundreds of terrible elephant films to curate the few good ones.

    But what is telling is how our corporate bosses have responded to the prospect of automated art. Much like the robot pizza company that automated the process but did not pass the savings on to you (its offerings were typical pizza at typical prices, and it kept all the savings for itself), our senior execs imagine ways to replace workers with cheaper automation rather than producing better stuff or cheaper movie tickets for their customers.

    So maybe we should growl at them and change the system before they figure out how to actually pay fewer people while keeping more profits.

  • birdcat@lemmy.ml · 1 year ago

    “If hallucinations aren’t fixable, generative AI probably isn’t going to make a trillion dollars a year,” he said. “And if it probably isn’t going to make a trillion dollars a year, it probably isn’t going to have the impact people seem to be expecting,” he continued. “And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.”

    Well he sure proves one does not need an AI to hallucinate…

      • birdcat@lemmy.ml · 1 year ago

        The assertion that our Earth orbits the sun is as audacious as it is perplexing. We face not one, but a myriad of profound, unresolved questions with this idea. From its inability to explain the simplest of earthly phenomena, to the challenges it presents to our longstanding scientific findings, this theory is riddled with cracks!

        And, let us be clear, mere optimism for this ‘new knowledge’ does not guarantee its truth or utility. With the heliocentric model, we risk destabilizing not just the Church’s teachings, but also the broader societal fabric that relies on a stable cosmological understanding.

        This new theory probably isn’t going to bring in a trillion coins a year. And if it probably isn’t going to make a trillion coins a year, it probably isn’t going to have the impact people seem to be expecting. And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.

    • Pelicanen@sopuli.xyz · 1 year ago

      maybe we should not be building our world around the premise that it is

      I feel like this is a really important bit. If LLMs turn out to have unsolvable issues that limit the scope of their application, that’s fine, every technology has that, but we need to be aware of that. A fallible machine learning model is not dangerous; AI-based grading, plagiarism checking, resume-filtering, coding, etc. without skepticism is dangerous.

      LLMs probably have very good applications in tasks that could not be automated in the past, but we should be very careful about what we assume those are.

  • Naich@kbin.social · 1 year ago

    I can’t believe this tech bubble will burst. All the other ones have fared so well.

    • ZILtoid1991@kbin.social · 1 year ago

      Because they were far more useful to the average person than this glorified spam-making machine. Also, it’s not like this is the first time something like this has happened…

      EDIT: forgot to grammar

  • j4k3@lemmy.world · 1 year ago

    Feels like a minds war waged by billionaires in this space when you’re actually playing with this stuff. All this hype is a joke, as is the proprietary junk. Get a decent computer and try offline AI yourself to see what it can do. Try Llama 2 70B Q4 GGML. You need a machine with 10-12+ cores and at least 64GB of system memory. It really helps to have an Nvidia GPU with 16GB+, but you don’t have to have one here. This model can write Python snippets like you’re searching Stack Overflow, but an order of magnitude faster. If you know basic code elements, branching, and looping, this model can code, and it resolves its errors when it gets something wrong if you prompt it with the error message. A 30B like WizardLM or Vicuna is almost technically useful, but the 70B is a beast.
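
    For the curious, here’s a minimal sketch of what running a local quantized model can look like with the llama-cpp-python bindings. The model filename, thread count, and GPU-layer count below are illustrative assumptions, not a tested configuration.

    ```python
    # A rough sketch of local inference with llama-cpp-python
    # (pip install llama-cpp-python). Paths and parameters are examples only.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./llama-2-70b.ggmlv3.q4_0.bin",  # hypothetical local GGML file
        n_ctx=2048,       # context window size
        n_threads=12,     # CPU threads; more cores help, per the comment above
        n_gpu_layers=40,  # offload some layers to the GPU if you have the VRAM
    )

    out = llm(
        "Write a Python function that retries a network call with backoff.",
        max_tokens=512,
        temperature=0.2,  # keep it low for more deterministic code generation
    )
    print(out["choices"][0]["text"])
    ```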

  • SamC@lemmy.nz · 1 year ago

    I think they have some areas where they’re very useful, but beyond those areas they’re only OK at best. They don’t come close to living up to the hype, which is mostly based on “the next version will be mind blowing!”.

    They are a new type of app, nothing more. New types of apps can be extremely useful, and make a lot of tasks easier, e.g. spreadsheets. I would say at best generative AI is as game changing as spreadsheets were, but maybe less.

    The hype machine wants us to believe they are as revolutionary as the PC itself, or the car. In fact 10 times as revolutionary! I just don’t buy it… at least not in the foreseeable future.

    • greenskye@lemm.ee · 1 year ago

      There was this point where VR gaming seemed like an inevitable successor to traditional gaming. It was everywhere and improving rapidly. There were core concerns, but most felt that those could be solved with time. The technology had so much potential.

      This is how the current AI solutions feel to me right now. There is a small-ish group of people who find them very useful and use them often. There is a large group of people currently on the hype bandwagon, talking about all the potential they hold. But so far they have yet to truly hit mainstream use.

      With VR, all that hype and potential seems largely dead. The promised advancements haven’t seemed like enough to take over from traditional games, and the fundamental issues haven’t been fixed because they’re too hard or too costly to fix.

      I’m still unsure if AI will go this same route, or if it will eventually break into the mainstream. I think the most likely route is something like how Siri/Alexa worked out. Some people use voice assistants all the time, others basically never do. They never quite delivered on the revolution they promised, but they were useful enough to stick around. That’s how I feel about the current AI approach.

      I think long term we’ll get some other approach that will once again kick off the AI hype machine, but the current AI approach is only going to find limited success because it’s going to be really, really hard to get it to a place where you can reasonably trust the output.

    • joe@lemmy.world · 1 year ago

      We don’t even know how they arrive at the output they arrive at, and it takes lengthy research just to find out how, say, an LLM picks the next word in an arbitrarily chosen sentence fragment. And that’s for the simpler models! (Like GPT-2)

      That’s pretty crazy when you think about it.

      So I don’t think it’s fair to suggest they’re just “a new type of app”. I’m not sure what “revolutionary” really means, but the technology behind generative AI is certainly going to be applied elsewhere.

    • bouh@lemmy.world · 1 year ago

      The paper that invented HTTP got a dismissive “not interesting” note from the researcher who reviewed it.

      These AIs are a revolution. But like all revolutions, it will take some time for society to absorb it.

  • Margot Robbie@lemm.ee · 1 year ago

    Ultimately, generative AI models are tools, not magic. We’re past the hype phase and at the leveling-out phase of the S-curve, as people realize that these things are limited.

    I think ChatGPT is mostly going to be used as an automated copywriter for emails and resumes and such, whereas diffusion models will find their way into digital artists’ workflow.

    Life goes on.

  • ReallyKinda@kbin.social · 1 year ago

    AI doesn’t seem to do well when it trains on its own data, so I do think there’s a possibility it’s a one-trick pony. Once there’s too much AI content in the data it’s trained on, it will devolve into nonsense.

  • hottari@lemmy.ml · 1 year ago

    Isn’t ChatGPT’s launch less than 6 months old or something…

    • Peanut@sopuli.xyz · 1 year ago

      Reminds me of the article saying OpenAI is doomed because it can only last about thirty years at its current level of expenditure.

        • diffuselight@lemmy.world · 1 year ago

          The potential for cost reduction in this field spans orders of magnitude. Look at Llama running on everything down to a Raspberry Pi after just 2 months.

          There are massive gains to be made; once we have dedicated hardware for transformers, that’s orders of magnitude more.

          See how your phone can play back 24h of video but dies after 3h of browsing? That’s dedicated hardware codec support.

          • hottari@lemmy.ml · 1 year ago

            Yeah, but Llama’s quality cannot compete with the ChatGPT models (doesn’t matter which model you use; if you want good and FAST results, you require serious compute). We do have commercial dedicated AI chips from NVDA; last time I checked, you had to place an order to even get a price. George Hotz, who is also working on something similar, said on the Lex Fridman podcast that a personal AI rig would have to be closer to a mainframe in size.

            There’s nothing I have seen so far that leads me to believe that generative AI gets more efficient on weaker hardware.

  • PetePie@kbin.social · 1 year ago

    I’m curious about the development of artificial intelligence in the future, and I’m looking forward to seeing what GPT-5 can do. If it’s a huge leap forward, then I will agree that the future will be very different from what we have now. But if it’s only a slight improvement, like Llama 1 vs Llama 2, then large language models (LLMs) might face the same challenges as self-driving cars. They are somewhat functional, but not reliable enough to let you sleep on your commute, and they won’t be for a long time.
    It might be impossible to eliminate all the hallucinations from LLMs, but if the next versions are incredibly useful, then we will learn to live with them. For example, around 30% of chips on a wafer currently fail, but we still produce CPUs and they are groundbreaking technology. But even GPT-4+ will have a significant impact on our future, especially in education. Every kid will have an AI in their phone that is ready to answer all their questions with minimal effort. This will greatly enhance the intelligence of future generations and make education accessible to almost everyone on earth at a similarly high level. But this will not make us all lose our jobs in 10 years.

    • nyarla@lemmy.zip · 1 year ago

      I think you’re too optimistic about the impact on education: every kid will have an AI in their phone, and instead of thinking for themselves, when they have a question they’ll just ask the AI and quickly forget the answer, because they can always ask again. However, I would be happy to be wrong.

      • Amju Wolf@pawb.social · 1 year ago

        You’re 100% right, with easily accessible technology people don’t retain the skills that are supplemented by that technology.

        As a kid growing up with the advent of computers, it was all jank; you had to know how to fix and diagnose both hardware and software issues, and you learned to do it with limited resources and (in my case) very limited English knowledge. I had, in fact, learned English mostly because of my interest in computers and games, and I also learned programming because of my interest in games…

        I was thinking “oh wow, the new generation will have it so easy, they will grow up with reliable and easier-to-use PCs, they’ll know even more than me and be so good with it!” and it’s the exact opposite. Because it’s so user-friendly and readily available, they don’t need to learn to fix anything; they can just buy something that works. They don’t need any skills deeper than basic usage. And that’s how you get kids today who don’t even know how to turn on a PC, let alone use a word processor or a spreadsheet, because they have iPhones and iPads and never needed anything else; they never found it groundbreaking or useful.

        So yeah, not only will they be less knowledgeable, they won’t even bother thinking or checking the answers, because the AI will be right most of the time. I’m actually kinda worried that this will make people really easily manipulable.

        • FreeFacts@sopuli.xyz · 1 year ago

          Around where I live, people, the media, and politicians have been talking about the “diginative” generation for years: the generation that will have no problem adapting to an ever-digitizing work life. But lately reality has crept in, even in the media: these young adults are having a difficult time adapting to the software and hardware used in the corporate world. The devices and apps they grew up with are so dumbed down and strictly guided that they are lost among the amount of options and processes supported by professional applications.

          The ease of use of consumer apps is counterproductive in that regard. Being able to use them is about as valuable to businesses as being able to put the square block through the square hole and the triangle block through the triangle hole: essentially worthless, because nearly every single human can do it; they are designed to be just that easy and streamlined.

          But maybe the business world is wrong and should adapt instead? Maybe it should concentrate on making its processes just as streamlined? Maybe generative AI could help with that? Who knows. In my opinion the problem isn’t in the “physical” processes, which in the end are often just mundane tasks, but in the mental processes that the dumbed-down apps kids grow up using do not feed. They often give you one way to go through a use case, and that’s it: no outside-the-box thinking, no evaluation of options and requirements.

        • joe@lemmy.world · 1 year ago

          with easily accessible technology people don’t retain the skills that are supplemented by that technology.

          Isn’t this the point of technology?

          • Amju Wolf@pawb.social · 1 year ago

            Kinda is, sure. The problem is when you become overly reliant on the tech without it being reliable. It’s also kinda bad when it causes you to lose skills that you need to maintain it or further it.

      • novibe@lemmy.ml · 1 year ago

        “These darn books are going to make all the youths dumb and forget everything!”

        • mostly paraphrasing, but fucking Socrates

        Like we’ve literally been saying the same shit for over 2000 years. And yet the youths are never doomed…

    • duringoverflow@kbin.social · 1 year ago

      This will greatly enhance the intelligence of future generations and make education accessible to almost everyone on earth at a similar high level.

      I don’t think that access to AI somehow correlates with the intelligence of the people using it. It can actually work in the completely opposite way, where people blindly trust it, or get so used to it that they’re unable to do anything without help from the technology. Like people who are unable to navigate two blocks from their house without Google Maps navigation, even though they take the same route every day.

    • newDayRocks@lemmy.world · 1 year ago

      Kids, like everyone else, already have AI on their phones.

      We’ve had it for quite a while now. Even before ChatGPT, what question could you not find an answer to?

    • GunnarRunnar@kbin.social · 1 year ago

      This will greatly enhance the intelligence of future generations and make education accessible to almost everyone on earth at a similar high level.

      You mean at the current ChatGPT level? Because I’m unsure whether future versions will be open source or open access; if not, surely this will just widen the disparity in education.

      • 520@kbin.social · 1 year ago

        Lol. OpenAI haven’t made GPT open source since version 2. That said, their best interest currently lies in keeping access public and their name in the headlines. They need an income source.

  • Carighan Maconar@lemmy.world · 1 year ago

    I wonder, could AI actually “collapse”? As in, once companies and people start leaving the AI hype space, could the external input become small enough that AI-to-AI input takes over, to such a degree that all trained models become essentially useless?

    • lasagna@programming.dev · 1 year ago

      I find that unlikely. AI is a subject much like space tech. It may not always be the giant it is now, but it’s baseline research that countries will keep conducting, even if only as a means to defend themselves.

    • floofloof@lemmy.caOP · 1 year ago

      I’m sure challenges like this are coming but I’d be surprised if it causes these applications to collapse completely. However, it may force the ML companies to pay some fees for the training data they use.

  • Fat Tony · 1 year ago

    Genuine question: how hard is it to fix AI hallucinations?

    • eating3645@lemmy.world · 1 year ago

      Very difficult, it’s one of those “it’s a feature not a bug” things.

      By design, our current LLMs hallucinate everything. The secret sauce these big companies add is getting them to hallucinate correct information.

      When the models get it right, it’s intelligence, when they get it wrong, it’s a hallucination.

      To fix the problem, someone needs to discover an entirely new architecture, which is certainly conceivable, but the timing is unpredictable, as it requires a fundamentally different approach.

      • joe@lemmy.world · 1 year ago

        I have only a weak, high-level grasp of how LLMs work, but what you say in this comment doesn’t seem correct. No one is really sure why LLMs sometimes make things up, and a corollary of that is that no one knows how difficult (up to impossible) it might be to fix.

        • eating3645@lemmy.world · 1 year ago

          Let me expand a little bit.

          Ultimately the models come down to predicting the next token in a sequence. Tokens for a language model can be words, characters, or more frequently, character combinations. For example, the word “Lemmy” would be “lem” + “my”.

          So let’s give our model the prompt “my favorite website is”

          It will then predict the most likely next token and append it to the input, building up a cohesive answer step by step. This is where the transformer (the T in GPT) comes in: at each step it outputs a vector of probabilities over all possible next tokens.

          “My favorite website is”

          “My favorite website is ”

          “My favorite website is lem”

          “My favorite website is lemmy”

          “My favorite website is lemmy.”

          “My favorite website is lemmy.org”

          Woah, what happened there? That’s not (currently) a real website. Finding out exactly why the last token was “org”, which resulted in hallucinating a fictitious website, is basically impossible. The model might not have been trained long enough, it might have been trained too long, there might be insufficient data in that particular token space, there might be polluted training data, etc. These models are massive, so determining why it’s incorrect in a given case is tough.

          But fundamentally, it made up the first half too; we just like that output. Tomorrow someone might register lemmy.org, and then it’s not a hallucination anymore.
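
          To make this concrete, here’s a toy sketch of pulling that next-token probability vector out of GPT-2 with the Hugging Face transformers library. The model choice and prompt are illustrative; any causal language model would behave the same way.

          ```python
          # Peek at the probability vector a language model assigns to the next
          # token (pip install transformers torch). Illustrative example only.
          import torch
          from transformers import GPT2LMHeadModel, GPT2Tokenizer

          tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
          model = GPT2LMHeadModel.from_pretrained("gpt2")

          input_ids = tokenizer.encode("My favorite website is", return_tensors="pt")

          with torch.no_grad():
              logits = model(input_ids).logits[0, -1]  # scores for the next token only
          probs = torch.softmax(logits, dim=-1)        # the 'vector of probabilities'

          # Show the five most likely continuations. The model only ever knows these
          # probabilities; whether the completed text is true never enters the picture.
          values, indices = probs.topk(5)
          for p, idx in zip(values, indices):
              print(f"{tokenizer.decode([int(idx)])!r}: {float(p):.3f}")
          ```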

        • BetaDoggo_@lemmy.world · 1 year ago

          LLMs only predict the next token. Sometimes those predictions are correct, sometimes they’re incorrect. Larger models trained on a greater number of examples make better predictions, but they are always just predictions. This is why incorrect responses often sound plausible even when, logically, they don’t make sense.

          Fixing hallucinations is more about decreasing inaccuracies rather than fixing an actual problem with the model itself.

    • ollien@beehaw.org · 1 year ago

      I’m no expert, so take what I’m about to say with a grain of salt.

      Fundamentally, an LLM is just a fancy autocomplete; there’s no source of knowledge it’s tapping into, it’s just guessing words (though it is quite good at it). Correspondingly, even if it did have a pool of knowledge, even that couldn’t be perfect, because in many areas the truth is never quite so black and white.

      In other words, hard.