  • I am not a lawyer. But you wouldn’t be surprised to hear that

    1. I don’t have the inside story of Bing in Germany. It could be that Microsoft either doesn’t want to do it well, or hasn’t yet done it well enough. I’m not promising either in particular, but it can be done.
    2. Generally, as an engineer you have a pile of options with trade-offs. You absolutely can build nuanced solutions, as the law and the lawyers often live in nuanced realities. That is the reality even at the best sorts of tech companies, the ones that are trying.

    My contention is that maximalism and strict binary assumptions won’t work on either end and don’t satisfy what anyone truly wants or needs. If we’re not careful about what it takes to move the needle, we agree with them by saying ‘it can’t be done, so it won’t be done.’


  • That’s a good question, because there is nuance here! It’s interesting because I ran into this same issue while working on similar projects. First off, it’s important to understand what your obligation is and how to understand data deletion. No one believes it is necessary to permanently remove all copies of anything, any more than it is necessary to prevent all forms of plagiarism. No one is complaining that it is possible to plagiarize at all; we’re complaining that major institutions continue to do so in ongoing disregard of the law.

    Only maximalists fall into the trap of thinking of the world in a binary sense: either all in or nothing at all.

    For most of us, it’s about economics and risk profiles. Open source models get trained continuously over time; there won’t be one version. Saying that open source operators do have some obligation to curate future training in good faith has a long-tail impact on how the model evolves. Previous PII or plagiarized data might still exist, but its value, novelty, and relevance to economic life drop sharply over time. No artist or writer argues that copyright protections need to exist forever. They just need survivable working conditions and respect for attribution. The same goes for PII: no one claims they must be completely anonymous. They just want cyber crime taken seriously rather than abandoned in favor of one party taking the spoils of their personhood.

    Also, yes, there are algorithms that can control how further learning promotes or demotes weights and connections relative to various policies. No one policy is perfect; what matters is a willingness to adopt policies in good faith. (Most such LLM filters are intentionally weak so that those with $$ paying for API access can outright ignore them, while the vendors turn around and claim it can’t be solved, too bad so sad.)

    Yes, it is possible to perturb and influence the evolution of a continuously trained neural network based on external policy, and they’re carefully lying through omission when they say they can’t 100% control it or 100% remove things. Fine. That’s not necessary, in either copyright or privacy law. It never has been.
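    To make the "promote or demote relative to a policy" idea concrete, here is a toy, hypothetical sketch, not any real system’s API: during continual training, records under a removal request get sampling weight zero, and flagged records get a demoted weight. The names (`policy_weight`, `opted_out`, `flagged`) and the specific weights are my own illustrative assumptions.

```python
import random

def policy_weight(record, opted_out):
    """Weight a training record under a hypothetical removal/demotion policy."""
    if record["source_id"] in opted_out:
        return 0.0   # removal request honored: never sampled again
    if record.get("flagged"):
        return 0.1   # demoted, not erased: its influence decays over versions
    return 1.0

def sample_batch(corpus, opted_out, k, rng=random):
    """Draw a weighted training batch; zero-weight records are excluded."""
    weights = [policy_weight(r, opted_out) for r in corpus]
    if not any(weights):
        return []
    return rng.choices(corpus, weights=weights, k=k)

corpus = [
    {"source_id": "a", "text": "licensed article"},
    {"source_id": "b", "text": "opt-out requested"},
    {"source_id": "c", "text": "likely PII", "flagged": True},
]
batch = sample_batch(corpus, opted_out={"b"}, k=20)
```

    The point isn’t that weighted sampling is the one true mechanism; it’s that "can’t 100% remove it" and "can’t meaningfully demote it going forward" are very different claims.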


  • Despite what the tech companies say, there are absolutely techniques for identifying the sources of their data, and there are absolutely techniques for good-faith data removal upon request. I know this because I’ve worked on such projects at some of the less major tech companies, the ones that make some effort to abide by European law.

    The trick is, it costs money, and the economics shift such that one must eventually begin to do things like audit and curate. The shape and size of your business, plus how you address your markets, gains nuance that doesn’t work when your entire business model is the smooth, mindless amortizing of other people’s data.
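    As a minimal sketch of what "good-faith removal upon request" can look like mechanically, here is a hypothetical opt-out registry keyed by content fingerprint, used to filter a corpus before the next training run. The class and function names are illustrative inventions, and real systems need fuzzier matching than an exact hash.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Normalize lightly, then hash; a stand-in for real content matching."""
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

class RemovalRegistry:
    """Tracks removal requests so future corpus builds exclude them."""

    def __init__(self):
        self._removed = set()

    def request_removal(self, text: str) -> None:
        self._removed.add(fingerprint(text))

    def allowed(self, text: str) -> bool:
        return fingerprint(text) not in self._removed

registry = RemovalRegistry()
registry.request_removal("My copyrighted poem")

docs = ["My copyrighted poem", "Some public-domain text"]
kept = [d for d in docs if registry.allowed(d)]
# kept == ["Some public-domain text"]
```

    None of this is exotic; it’s the audit-and-curate cost that businesses built on indiscriminate scraping don’t want to pay.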

    But I don’t envy these tech companies, or the increasingly absurd stories they must tell to hide the truth. A handsome sword hangs above their heads.


  • Moravec’s Paradox is actually more interesting than it appears. You don’t have to take his reasoning or Pinker’s seriously, but the observation is salient. The paradox also gets stated in other ways by other scientists; it’s a common theme.

    One way I often think about it: in order for you to survive, the intelligence of moving through unknown spaces and managing numerous fuzzy energy systems is far more important to prioritize and master than, say, abstract conceptual spaces, which are both not full of calories and cheaper to externalize anyway.

    It’s part of why I don’t think there is a globally coherent hierarchy of intelligence, or potentially even general intelligence at all. Just the distances and spaces that a thing occupies, and the competencies that define being in that space.



  • I feel this shouldn’t be surprising at all, and it continues to point to Diverse Intelligence as conceptually more fundamental than any sort of General Intelligence. There’s a huge difference between what something is in theory or in principle capable of, and the economic story of what that thing attends to naturally, as per its energy story.

    Broadly, even simple things are powerful precisely because of what they don’t bother trying to do until perturbed.

    Ultimately, I hypothesize that the reason VCs like the idea of LLMs doing simple things far more expensively than is already otherwise possible is that they literally can’t imagine what else to spend their money on. They are vacuous consumers by design.


  • I don’t entirely agree, though.

    That WAS the point of NaNoWriMo in the beginning. I went there because I wanted feedback, and feedback from people who cared (no offense to my friends, but they weren’t interested in my writing, and that’s totes cool).

    I think it is a valid core desire to want constructive feedback on your work, and to acknowledge that you are not a complete perspective, even on yourself. Whether the AI can or does provide that is questionable, but the starting place, “I want /something/ accessible to be a rubber ducky” is valid.

    My main concern here is, obviously, that it feels like NaNoWriMo is taking the easy way out for the $$$ and likely its Silicon Valley connections. Wouldn’t it be nice if NaNoWriMo said something like, “Whatever technology tools exist today or tomorrow, we stand for writers’ essential role in the process, and against the unethical labor implications of indiscriminate, non-consensual machine learning as the basis for any process.”


  • NovelAI

    I’ll step up and say, I think this is fine, and I support your use. I get it. I think that there are valid use cases for AI where the unethical labor practices become unnecessary, and where ultimately the work still starts and ends with you.

    In a world, maybe not too far in the future, where copyright law is strengthened, where artist and writer consent is respected, and it becomes cheap and easy to use a smaller model trained on licensed data and your own inputs, I can definitely see how a contextual autocomplete that follows your style and makes suggestions is totally useful and ethical.
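    For what it’s worth, the "trained on your own inputs" version of this doesn’t even require a large model. A toy sketch, entirely hypothetical: a bigram suggester built only from text you wrote yourself, ranking next-word suggestions by how often you used them.

```python
from collections import defaultdict

def train(text):
    """Build a bigram table from your own writing, and nothing else."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev.lower()].append(nxt)
    return model

def suggest(model, prev_word, limit=3):
    """Suggest follow-up words, ranked by frequency in your own drafts."""
    candidates = model.get(prev_word.lower(), [])
    ranked = sorted(set(candidates), key=candidates.count, reverse=True)
    return ranked[:limit]

my_draft = "the ship sailed and the ship sank and the crew swam"
model = train(my_draft)
suggestions = suggest(model, "the")  # 'ship' ranks above 'crew'
```

    A real contextual autocomplete would obviously be fancier, but the ethics of the data source, not the cleverness of the model, is the part that matters here.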

    But I understand people’s visceral reaction to the current world. I’d say it’s okay to stay your course.