Running in the 90s again
Could even run the instance from your phone or whatever device you use to look at Lemmy
That’s evil
And on the official app it isn’t called end-to-end encryption, and it isn’t a settings toggle. It’s called secret chat, and clicking it opens a new chat separate from the original one. The only difference I see is a little lock icon where an emoji usually is.
For browsing I have converted to Lemmy. For getting answers from a Google search I still click on the Reddit result. Lemmy doesn’t show up in Google searches, and other forums are useless for information. Stack Overflow excepted.
The look of horror on that kid’s face
Sounds about right. Can zip and unzip them for that 70s style.
My modern clothes get shredded, but I have a pair of pants handed down from 1967 that still hold up today. Even after being pounded into concrete 12 hours a day (only 4 days a week). I wish I had more.
T-Mobile doesn’t even let me set up auto pay. I’m a Sprint customer that got converted. The Sprint app no longer works, and T-Mobile doesn’t recognize me.
I still get the removed, but it makes me pay manually using the short code from the phone app… And since I can’t get into the account, I can’t pay off or buy out my current phone.
Once I get time off near August I hope to deal with that. After that I may just switch to something like Mint. I hardly use any data, text, or talk; I just needed phone financing and insurance.
A CRT with acceptable resolution would break my desk in half. And being that close to one… Not to mention the extremely high-pitched sound they make during operation. Painful.
Set up with Sonarr and Radarr. If you use Prowlarr you can set it to only grab freeleech torrents. Helps to boost the ratio. You just have to seed for 10 days after.
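A minimal sketch of why freeleech boosts ratio, assuming the usual tracker rule that freeleech downloads don’t count against your account (the numbers here are hypothetical):

```python
# Account totals before the grab (hypothetical numbers, in GB).
uploaded, downloaded = 50.0, 100.0   # starting ratio: 0.5

# Grab a 10 GB freeleech torrent, then seed 5 GB back.
downloaded += 0.0    # freeleech: the 10 GB download isn't counted
uploaded += 5.0      # seeding still counts in full

ratio = uploaded / downloaded
print(ratio)         # 0.55 -- the ratio can only go up
```

With a normal (non-freeleech) torrent the same grab would add 10 GB to the download side, so the ratio would drop instead.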
You can’t pirate their models, and even if they leaked, running them would need an expensive machine.
There are lots of open source models. They can get close but are limited by your hardware.
If you want something close to GPT, there is the Falcon 40B model. You’ll need a card with more than 24 GB VRAM, or heavy CPU offload with 128 GB RAM, I think, maybe 64.
With 24 GB VRAM you can run a 30B model, and so on…
For reference, the GPT models are like 175B. So A100 NVLink territory.
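A back-of-envelope sketch of where those VRAM numbers come from: weights take roughly (parameters × bits per weight ÷ 8) bytes, plus some headroom for activations and KV cache. The 20% overhead factor is my assumption; real usage varies with context length and quantization scheme.

```python
def est_vram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough weights-only VRAM estimate in GB, padded ~20% for
    activations/KV cache. A sketch, not an exact figure."""
    return params_billion * bits_per_weight / 8 * overhead

print(est_vram_gb(40, 16))  # Falcon 40B in fp16: ~96 GB, multi-GPU territory
print(est_vram_gb(40, 4))   # 4-bit quantized: ~24 GB, borderline on one card
print(est_vram_gb(30, 4))   # a 30B at 4-bit: ~18 GB, fits a 24 GB card
```

This is why a 30B fits on a single 24 GB consumer card once quantized, while anything in the 175B range needs datacenter hardware.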