That’s my hope. Still, from where I live, I can only hope my specie contributions are used to effect that.
This poll tracking is showing Harris barely ahead on national polls. This millennium, Republicans have won the presidency in 2000, 2004, and 2016.
In 2000 and 2016, the Democratic candidate won the popular vote.
Winning the popular vote doesn’t mean shit. The electoral college is what matters.
That same NYT poll link lists 9 tossup states: Wisconsin, Michigan, Pennsylvania, Arizona, Georgia, Minnesota, North Carolina, Nevada, and Virginia.
You’ll notice all but the first three are in alphabetical order. That’s because all but the first three don’t have enough polling to make a prediction. Of those first three: a statistical tie in Wisconsin and Michigan with a Trump lead in Pennsylvania.
If you include Kennedy, Harris is ahead by 1% in Wisconsin and Pennsylvania but still tied in Michigan.
National polling trends are going in the direction I want, but they really don’t matter.
I write this from a state whose electoral college votes have never gone to a Democrat in my lifetime and never will before my death. I’ll be voting for Harris, but that vote is one of those national votes that won’t actually help my preferred candidate.
The only way I can help is via monetary donation.
And if you’re a Harris voter in a solidly blue state, your vote means fuck all, just like mine does. Yes, it actually makes it to the electoral college, but, like mine, that’s a foregone conclusion. You should be donating money too and hoping it’s used wisely to affect those swing states.
Under the CMB method, it sounds like the calculation gives the same expansion rate everywhere. Under the Cepheid method, they get a different expansion rate, but it’s the same in every direction. Apparently, this isn’t the first time it’s been seen. What’s new here is that they did the calculation for 1,000 Cepheid variable stars. So, they’ve confirmed the already known discrepancy isn’t down to something weird with the few stars they’d looked at in the past.
So, the conflict here is likely down to our understanding of either the CMB or Cepheid variables.
Except it’s not that they are finding the expansion rate is different in some directions. Instead they have two completely different ways of calculating the rate of expansion. One uses the cosmic microwave background radiation left over from the Big Bang. The other uses Cepheid stars.
The problem is that the Cepheid calculation comes out much higher than the CMB one. Both show the universe is expanding, but they give radically different numbers for that rate of expansion.
So, it’s not that the expansion’s not spherical. It’s that we fundamentally don’t understand something to be able to nail down what that expansion rate is.
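For a rough sense of the gap, here’s a quick sketch in Python using the commonly cited figures (roughly 67 km/s/Mpc from the CMB/Planck method and 73 km/s/Mpc from the Cepheid/SH0ES method; the 100 Mpc distance is just an example I picked):

```python
# Hubble's law: recession velocity v = H0 * distance.
# The two methods give incompatible values for H0.
H0_CMB = 67.4      # km/s per megaparsec (CMB / Planck)
H0_CEPHEID = 73.0  # km/s per megaparsec (Cepheid / SH0ES)

distance_mpc = 100  # an example galaxy 100 megaparsecs away

for method, h0 in [("CMB", H0_CMB), ("Cepheid", H0_CEPHEID)]:
    print(f"{method}: {h0 * distance_mpc:,.0f} km/s")

# CMB:     6,740 km/s
# Cepheid: 7,300 km/s
# An ~8% disagreement, well outside either method's error bars,
# which is exactly the problem.
```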
As a first book, I think Children of Time is much better than Shards of Earth. I enjoyed both series but would say the third book in each was the weakest. The Final Architecture series had a slightly stronger third entry.
And the article content posted is just an excerpt. The rest of the article focuses on how AI can improve the efficiency of workers, not replace them.
Ideally, you’ve got a learned individual using AI to process data more efficiently, but one that is smart enough to ignore or toss out the crap and knows to carefully review that output with a critical eye. I suspect the reality is that most of those individuals using AI will just pass it along uncritically.
I’m less worried about employees scared of AI and more worried about employees and employers embracing AI without any skepticism.
Thanks. Very interesting. I’m not sure I see such a stark contrast pre/post 9-11. However, the idea that the US public’s approach to the post-9-11 conflict would have an influence makes sense and isn’t something I’d ever have considered on my own.
Me too, but I’d put Usenet in there before Slashdot.
Spock, Uhura, Chapel, heck even M’Benga don’t make it a prequel, but a Lieutenant Kirk does?
Because most people aren’t technical enough to understand there are alternatives, particularly if those alternatives involve removing a scary label telling you not to.
The South. Just below Indiana, the middle finger of the South. And I say this as a Hoosier for much of my life.
As a guy responsible for a 1,000-employee O365 tenant, I’ve been watching this with concern.
I don’t think I’m a target of state actors. I also don’t have any E5 licenses.
I’m disturbed at the opaqueness of MS’ response. From what they have explained, it sounds like the bad actors could self-sign a valid token to access cloud resources. That’s obviously a huge concern. It also sounds like the bad actors only accessed Exchange Online resources. My understanding is they could have done more, if they had a valid token. I feel like the fact that they didn’t means something’s not yet public.
I’m very disturbed by the fact that it sounds like I’d have no way to know this sort of breach was even occurring.
Compared to decades ago, I have a generally positive view of MS and security. It bothers me that this breach was a month old before the US government notified MS of it. It also bothers me that MS hasn’t been terribly forthcoming about what happened. There’s likely no need to mention that I’m also bothered I’m so deep into the O365 environment that I can’t pull out.
Nice job. Packet loss will definitely cause these issues. Now, you just need to find the source of the packet loss.
In your situation, I’d first try to figure out if it is ISP/Internet before looking inside either network. I wouldn’t expect it to be internal at these speeds. Though, did you get CPU/RAM readings on the network equipment during these tests? Maxing out either can result in packet loss.
I’d start with two pairs of packet captures taken while the issue is happening: endpoint to endpoint and edge router to edge router. Figure out whether the packet loss is happening in only one direction. That is, are all the UK packets reaching DE but not all the DE packets making it back? You should be able to narrow in on a TCP conversation with dropped packets. Dropped packets aren’t ones that a system never sent; they’re ones that a system never received. Find some of those and start figuring out where the drop happened.
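If you end up comparing captures by hand, something like this can speed it up. It’s only a sketch, assuming scapy is installed; the filenames and the 10.2.0.5 endpoint address are placeholders for your own:

```python
from scapy.all import rdpcap, IP, TCP

# Captures taken at each edge during the problem window.
uk = rdpcap("uk_edge.pcap")
de = rdpcap("de_edge.pcap")

def tcp_ids(packets, src):
    """Collect (seq, payload_len) for TCP segments sent by src."""
    ids = set()
    for p in packets:
        if IP in p and TCP in p and p[IP].src == src:
            ids.add((p[TCP].seq, len(p[TCP].payload)))
    return ids

# Hypothetical DE endpoint address -- substitute your own.
sent_from_de = tcp_ids(de, "10.2.0.5")  # what DE put on the wire
seen_in_uk = tcp_ids(uk, "10.2.0.5")    # what actually arrived in the UK

lost = sent_from_de - seen_in_uk
print(f"{len(lost)} segments left DE but never arrived in the UK")
```

Run it in both directions; whichever direction shows missing segments tells you which path to start walking.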
Just curious if you’ve had the chance to dig into this and can report anything back?
I’m the opposite. I had my subreddits curated to ones that supplied good deals discussion for posts and good articles for links. For link posts, I primarily read the linked article and ignored the discussion. Here, I’ve been doing both.
A blacklist, to keep using the email protocol as an example, is a tool used sparingly and only when other filtering methods are unsuccessful or when greater damage is prevented that way.
Have you ever run a mail server? If so, have you looked at your logs? The RBLs on the managed mail gateway at my work turn away 70% of delivery attempts. This is even before spam scoring kicks in on the 30% initially accepted, a significant percentage of which gets flagged as spam. Email has a complex set of automated tools for rejecting content without even viewing it.
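For anyone who hasn’t seen it, an RBL check is just a DNS lookup on the connecting IP’s reversed octets. A minimal sketch, using the well-known zen.spamhaus.org list as the example (any DNSBL follows the same convention, though Spamhaus may not answer queries coming through public resolvers):

```python
import socket

def is_listed(ip, rbl="zen.spamhaus.org"):
    """Check an IPv4 address against a DNS blocklist."""
    # 203.0.113.7 becomes a lookup of 7.113.0.203.zen.spamhaus.org;
    # an answer in 127.0.0.0/8 means the IP is listed.
    query = ".".join(reversed(ip.split("."))) + "." + rbl
    try:
        socket.gethostbyname(query)
        return True   # listed: the gateway can reject at connect time
    except socket.gaierror:
        return False  # NXDOMAIN: not listed

print(is_listed("203.0.113.7"))  # a documentation-range example IP
```

That’s why it’s so cheap: the mail is turned away before the server ever reads a byte of content.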
I still think email, even though federated, is a poor analogy to make for Lemmy.
Isn’t the immediate call for censorship/defederation as soon as some views are challenged a bit too entitled?
To some extent, YES, but I think it’s a bit more nuanced and comes down to where you draw that line. Everyone is going to draw it in a different place.
I moderated an academic listserv with membership in five digits back before the web even existed. That was huge for the time. And, as you’d expect of academia at the time, the commitment to controversy, free speech, and engaging with views you disagreed with was pretty comprehensive. Even so, we still had to moderate, primarily for spam and obvious trolling, as well as the occasional personal attack.
I was an active participant in Usenet in the ’90s. Usenet was federated servers hosting posts and comments from participants across that entire federation. I know a server admin could control which Usenet groups they carried; I have no idea what other levels of moderation were available. Discussions were definitely more freewheeling and challenging than you see today, but they also had a higher content level and a greater respect for intellectual argument, even in trolling. Again, I suspect that was because the bulk of the participants were coming from higher ed institutions.
I was active in Internet forums when SCO sued IBM. There were active attacks on communities, and successful attempts to splinter them, based in part on which side of the very question you’re asking participants came down on. Again, though, there was a strong respect for intellectual engagement. And, back then, I came down strongly with the same opinion you’re expressing now.
I think that strong respect for engagement exists here in the fediverse, particularly when compared to something like Facebook or Reddit. As the fediverse grows, I think that will go away.
I don’t have much respect for low content trolling, for active attacks via brigading, for manipulation. I think the ability to upvote is important, but I also think the ability for bot accounts to manipulate that is a very difficult thing to combat, particularly in something as young as Lemmy that is experiencing exponential growth.
I also have a much better awareness of how subtle that manipulation can be in influencing individuals and society, including my own views.
I no longer have the absolutist attitude I once had. I agree with your own concerns about echo chambers, because that leads to its own manipulation of views and the splintering of society. However, I’m also more willing to support the idea of not providing a platform for some of the more odious content than my older self would have supported.
I’m probably in a position to piss off nearly everyone. I disagree with your view that there should be almost no lines drawn, but I also disagree with the majority about where those lines should be drawn.
The person isn’t talking about automation being difficult for a hosted website. They’re talking about a third-party system that doesn’t give you an easy way to automate, just a web GUI for uploading a cert. For example, neither our WAP interface nor our on-premises ERP offers a way to automate. Sure, we could probably write code to automate it and run the risk it breaks after a vendor update. It’s easier to pay for a 12-month cert and do it manually.
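To make the fragility concrete, here’s a purely hypothetical sketch of what scripting against such a GUI looks like. Every URL and form field below is invented; the real ones have to be reverse-engineered from the vendor’s interface, which is exactly what breaks when an update lands:

```python
import requests

session = requests.Session()

# Hypothetical login form -- field names scraped from the GUI and
# subject to change without notice in any vendor update.
session.post(
    "https://erp.example.internal/login",
    data={"username": "certadmin", "password": "REDACTED"},
)

# Hypothetical upload endpoint for the renewed certificate and key.
with open("fullchain.pem", "rb") as cert, open("privkey.pem", "rb") as key:
    resp = session.post(
        "https://erp.example.internal/admin/ssl/upload",
        files={"certificate": cert, "private_key": key},
    )
resp.raise_for_status()  # a renamed field or moved endpoint only fails here, at runtime
```

With no API contract behind any of this, a once-a-year manual upload is often the more defensible choice.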