- cross-posted to:
- singularity@lemmit.online
It’s so ridiculous I’m not even going to bother with the article
Huh… We’ll see, I guess
An AA battery has around 10kJ of energy; spread over a decade that’s 31 microwatts of power. No way they’re doing useful computations with that.
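For anyone who wants to check that math, here's a quick back-of-the-envelope sketch in Python (assuming roughly 10 kJ for an alkaline AA cell, which is only a ballpark figure that varies with chemistry and load):

```python
# Rough average power budget from one AA cell stretched over a decade.
AA_ENERGY_J = 10_000                     # ~10 kJ per alkaline AA cell (ballpark assumption)
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~3.156e7 seconds
YEARS = 10

average_power_w = AA_ENERGY_J / (YEARS * SECONDS_PER_YEAR)
print(f"Average power: {average_power_w * 1e6:.1f} microwatts")  # prints ~31.7 microwatts
```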
A single AA battery is going to discharge itself just sitting on the shelf over a decade
Claims don’t make any sense if there isn’t any benchmark.
It’s called Efficient Computer. That increases the veracity of the efficiency claims by at least three thousand.
so you think they claimed that and didn’t do any testing?
Would you believe faked benchmarks? They are pretty damn easy to fake.
So Intel, Apple, every other company that develops ARM-based processors, AMD, and Nvidia have all just missed this technology?
We’re talking about trillions of dollars in R&D investment alone, and this technology just flew under the radar?
If it sounds too good to be true, it is probably too good to be true.
Usually means “yes this works in theory but only for very specific operations at limited scales that aren’t all that important so it’s not worth pursuing seriously”
Maybe. But the blue LED was also deemed impossible by a lot of big companies, and then one guy built it. Very interesting video on that topic: https://youtu.be/AF8d72mA41M
Here is an alternative Piped link:
https://piped.video/AF8d72mA41M
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
I mean
Big companies tend to “innovate” by buying market-disrupting startups and squashing the life out of them so they wouldn’t need to compete.

It probably runs a completely custom instruction set, which makes it incompatible with current architectures. Current manufacturers are designing chips that work with the popular instruction sets.
I’d write it myself if it was a hundred times faster
I mean, we know the absolute limits of computational efficiency thanks to the Landauer limit and the Margolus–Levitin theorem, and from those we know that we are so far from the limits that it is practically unfathomable.
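To put a rough number on that gap (a sketch only, assuming room temperature and the ~10 kJ AA-cell figure from earlier in the thread, not a claim about this particular chip): the Landauer limit says erasing one bit costs at least k_B·T·ln 2 of energy, which is about 3×10⁻²¹ J at 300 K.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant in J/K
T = 300.0            # assume room temperature in kelvin

landauer_j_per_bit = K_B * T * math.log(2)    # minimum energy to erase one bit
aa_energy_j = 10_000                          # ~10 kJ AA-cell figure from the earlier comment

print(f"Landauer limit at 300 K: {landauer_j_per_bit:.2e} J per bit")
print(f"Idealized bit erasures from one AA cell: {aa_energy_j / landauer_j_per_bit:.2e}")
# ~2.87e-21 J/bit and ~3.5e24 erasures; real chips sit many orders of magnitude above this.
```

Real hardware is nowhere near that bound, which is the point: a claimed 100x efficiency gain doesn't violate any physics.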
If they can show some evidence that they can perform useful calculations 100x more efficiently than whatever they chose to compare against (definitely a cherry-picked comparison), then I’ll give them my attention. But others have made similar claims in the past that turned out to apply only to extremely specific algorithms using quantum calculations, which are of course slower and less efficient on any traditional computer.
I’d like to see these chips benchmarked in the wild as well before getting too excited, but the claims aren’t that implausible. Incidentally, this approach is why M-series chips are so much faster than x86 ones. Apple uses an SoC architecture, which eliminates the need for the bus, and they process independent instructions in parallel on multiple cores. And they’re just building that on existing ARM architecture. So it’s not implausible that a chip and a compiler designed for this sort of parallelism from the ground up could see a huge performance boost.
That’s not why Apple silicon is faster. Every modern mobile device uses an SoC these days.
It is very much part of the reason it’s faster than the traditional x86 architecture with a bus, which is what I was talking about. Here’s a good summary for you: https://archive.is/DtT7c
Sorry, I thought you meant it’s more efficient just because it’s an SoC.
“Extraordinary claims require extraordinary ~~funding~~ evidence.”

Seems like it uses a bunch of pipelines that are also cross-connected. Pretty interesting idea.
They’ve been promising quantum computers for three decades with zilch results. I’ve lost count of how many startups and even major market players have claimed to have working quantum computers, which of course to this day are all just smoke and mirrors.
They’ve been promising artificial intelligence for three decades with zilch results. Then they redefined what AI means to get venture capital pointing the money hose at it. Now people think a glorified autocomplete and grammar engine is ‘artificial intelligence.’
I’ll believe it when I see it.
What about power and heat?
Efficient, not fast. Just means it’ll sip power as opposed to guzzling it.
The article says that this architecture uses significantly less power which would mean producing less heat as well.