

They changed stuff since it was shut down on Tuesday, so it’ll probably be harder to get unhinged responses.
Everything changed when they removed ham from the menu
“Don’t mention the war”
an intricate system of ropes and pulleys ought to do the trick
Ok, I take it back. I just got some peach Jinro soju; it’s 13% ABV and doesn’t taste like alcohol at all, even less than the sake. Definitely try it.
Soju tastes kinda like vodka. I think sake is a good balance of “high enough alcohol” and “doesn’t taste like paint thinner.” I’m not a connoisseur by any means, but a bottle of Kurosawa is cheap, tastes pleasant and fruity, and has around 14% ABV.
I also have a master’s in math and completed all the coursework for a PhD. Infinitesimals never came up because they’re not part of the standard foundations for analysis. I’d be shocked if they were addressed in any formal capacity in your curriculum, because why would they be? It can be useful to think in terms of infinitesimals for intuition, but you should know the difference between intuition and formalism.
I didn’t say “infinitesimals don’t have a consistent algebra.” I’m familiar with NSA and other systems admitting infinitesimal-like objects. I said they’re not standard. They aren’t.
If you want to use differential forms to define 1D calculus, rather than a NSA/infinitesimal approach, you’ll eventually realize some of your definitions are circular, since differential forms themselves are defined with an implicit understanding of basic calculus. You can get around this circular dependence but only by introducing new definitions that are ultimately less elegant than the standard limit-based ones.
Ok, but no. Infinitesimal-based foundations for calculus aren’t standard, and if you try to make this work with differential forms you’ll get a convoluted mess that is far less elegant than the actual definitions. It’s just not founded on actual math. It’s hard for me to argue this with you because it comes down to not knowing the definition of a basic concept, or not having the necessary context to understand why that definition is used instead of others…
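For concreteness, this is the kind of standard limit-based definition I mean (plain real analysis, no infinitesimal objects anywhere):

```latex
% Standard foundations: the derivative is defined via an epsilon-delta limit
% over the reals, with no infinitesimal quantities involved.
f'(a) \;=\; \lim_{h \to 0} \frac{f(a+h) - f(a)}{h},
\qquad\text{where}\qquad
\lim_{h \to 0} g(h) = L
\;\iff\;
\forall \varepsilon > 0\;\exists \delta > 0:\;
0 < |h| < \delta \implies |g(h) - L| < \varepsilon.
```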
It doesn’t. It only sometimes behaves like one, because the derivative can be seen as an operator involving a limit of a fraction, and you can sometimes commute the limit when the expressions involved are sufficiently regular.
The other thing is that it’s legit not a fraction.
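To spell that out: the fraction-looking manipulations are really applications of limit laws, not literal cancellation. The chain rule is the usual example (sketched here with the standard caveat that the middle step needs care when Δu = 0):

```latex
% "Cancelling du" is really a limit of a product of difference quotients,
% valid when u is differentiable at x and y is differentiable at u(x)
% (and ignoring the well-known edge case where \Delta u = 0):
\frac{dy}{dx}
  = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x}
  = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta u}\cdot\frac{\Delta u}{\Delta x}
  = \frac{dy}{du}\cdot\frac{du}{dx}.
% The fraction picture already fails for second derivatives:
% \frac{d^2 y}{dx^2} \neq \frac{d^2 y}{du^2}\left(\frac{du}{dx}\right)^{2} in general.
```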
Sounds like a skill issue. If that ruined the game for you, I dunno what to say. Might be a replicant?
I agree with them, that game is a masterpiece. Didn’t you love it?
It doesn’t top out below 144Hz. There are benefits with diminishing returns up to at least 1000Hz especially for sample-and-hold displays (like all modern LCD/OLED monitors). 240Hz looks noticeably smoother than 144Hz, and 360Hz looks noticeably smoother than 240Hz. Past that it’s probably pretty hard to tell unless you know what to look for, but there are a few specific effects that continue to be reduced. https://blurbusters.com/blur-busters-law-amazing-journey-to-future-1000hz-displays-with-blurfree-sample-and-hold/
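Rough numbers behind that, as a back-of-the-envelope sketch of the persistence argument from that article (the 2000 px/s speed is just an assumed figure for a fast pan):

```python
# Rough sketch: on a full-persistence sample-and-hold display (no strobing),
# perceived motion blur is roughly the distance the tracked object moves in one frame:
#   blur_px ~= speed_px_per_s / refresh_hz

speed = 2000  # px/s, an assumed fast pan/scroll speed

for hz in (60, 144, 240, 360, 1000):
    blur_px = speed / hz
    print(f"{hz:>4} Hz -> ~{blur_px:.1f} px of blur")

# At this speed: 144 -> 240 Hz removes ~5.6 px of blur,
# while 360 -> 1000 Hz only removes ~3.6 px more (diminishing returns).
```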
That example recording is awesome
I know, I’m just saying it’s not theoretically impossible to have a phone number as a token. It’s just probably not what happened here.
the choice of the next token is really random
It’s not random in the sense of a uniform distribution, which is what’s implied by “generate a random [phone] number”.
A full phone number could be in the tokenizer vocabulary, but any given one probably isn’t in there
I mean the latter statement is not true at all. I’m not sure why you think this. A basic GPT model reads a sequence of tokens and predicts the next one. Any sequence of tokens is possible, and each digit 0-9 is likely its own token, as is the case in the GPT2 tokenizer.
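Easy to check yourself with something like this (assumes the Hugging Face transformers package; I’m not claiming the exact split, just that you can see for yourself how the GPT2 BPE handles a number):

```python
# Minimal sketch: inspect how the GPT-2 BPE tokenizer splits a made-up phone number.
# Assumes `pip install transformers` (only the tokenizer is needed, not model weights).
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")

text = "Call me at 555-0123"  # 555-01xx numbers are reserved for fiction
ids = tok.encode(text)

for piece, i in zip(tok.convert_ids_to_tokens(ids), ids):
    print(f"{piece!r:>10} -> id {i}")

# The model can emit any sequence of these token ids, so a particular number
# doesn't need to appear verbatim in the training data to be generated.
```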
An LLM can’t generate random numbers in the sense of a proper PRNG simulating draws from a uniform distribution; the output will probably have some kind of statistical bias. But it doesn’t have to produce sequences contained in the training data.
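Toy version of what I mean by “statistical bias” (the probabilities here are made up; real models produce a softmax over the whole vocabulary, but nothing forces that distribution to be uniform over the digit tokens):

```python
import random
from collections import Counter

# Hypothetical next-token probabilities a model might put on the ten digit tokens.
digit_probs = {"0": 0.05, "1": 0.15, "2": 0.08, "3": 0.12, "4": 0.10,
               "5": 0.18, "6": 0.07, "7": 0.13, "8": 0.06, "9": 0.06}

rng = random.Random(0)
digits, weights = zip(*digit_probs.items())

model_like = Counter(rng.choices(digits, weights=weights, k=10_000))  # biased sampling
prng_like  = Counter(rng.choices(digits, k=10_000))                   # uniform PRNG draw

print("biased sampling:", dict(sorted(model_like.items())))
print("uniform PRNG:   ", dict(sorted(prng_like.items())))
```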
I dunno if you’re joking, but yeah, there are IDE plugins that do this. GitHub Copilot grabs context from files in your edit history, and you can tell it to edit, refactor, “fix”, etc. selections. The more complex the action, the less likely it is to succeed, though.