I’ve tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?
Investors are dumb. It’s a hot new tech that looks convincing (since LLMs are designed specifically to appear correct, not be correct), so anything with that buzzword gets a ton of money thrown at it. The same phenomenon has occurred with blockchain, big data, even the World Wide Web. After each bubble bursts, some residue remains that actually might have some value.
And LLMs are mostly for investors, not for users. Investors see that you “do AI” even if you just repackage GPT or Llama, and your Series A is 20% bigger.
I can see that. That guy over there has the new shiny toy. I want a new shiny toy. Give me a new shiny toy.
Money. If you paid to use those services, they got what they wanted.
Money.
That’s the entirety of the reason.
“Line must go up.”
Summed up an MBA in four words.
“Be greedy” => there, did it in two:-P
It is so sad that it works too - no room for nuance, responsibility, even long-term stability (even for the entire human species, + all other mammals on Earth & many others too that we seem ready to take down with us on our way to extinction).
Generative AI has allowed us to do some things that we could not do before. A lot of people very foolishly took that to mean it would let us do everything we couldn’t do before.
That’s because the PR department keeps telling us that it’s the best thing since sliced bread.
I second this, very concise and accurate
The last big fall in the price of bitcoin, in December ’22, was caused by a shift in the dynamics of mining where it became more expensive to mine new btc than what the coin was actually worth. Not only did this plunge the price of crypto, it also demolished demand for the expensive graphics chips that are repurposed to run the process-heavy complex math used in mining. Cheaper chips, cascading demand, and server space that was dedicated to mining-related activities threatened to wipe out profit margins in multiple tech sectors.
6 months later, ChatGPT is rolled out by OpenAI. The previous limitations on processing capabilities were gone, server space was cheap, and the tech was abundant. So all these tech sectors at risk of losing their ass in an overproduction-driven recession now had a way to pump the price of their services, and that was to pump AI.
Additionally, around this time the world was recovering from covid lockdowns. Increased demand for online services was dwindling (exacerbating the other crisis outlined above) as people were returning to work and spending more time being social IRL rather than using services. Companies had hired lots of new workers: programmers, tech infrastructure workers, etc., to meet the exploding demand during covid. Now they had too many workers and their profits were being threatened.
The Federal Reserve had raised interest rates to stifle continued hiring of new employees. The solution the Fed had come up with in order to stifle inflation was to encourage laying off workers en masse – what Marxists might call restoring the reserve army of labor, or relative surplus population – which was substantially depleted during the pandemic. But business owners were reluctant to do this; the tight labor market of the last few years had made business owners and managers skittish about letting people go.
A basic principle at play here, is that new technology is introduced for two reasons only: to sell as a new commodity and (what we are principally concerned with) replacing workers with machines. Another basic principle is that the capitalist system has to have a certain percentage of its population unemployed and hyper exploited in order to keep wages low.
So there was a confluence of incentives here. 1. Inexpensive server space and chips which producers were eager to restore to profitability (or else face drastic consequences) 2. A need to lay off workers in order to stop inflation 3. Incentives for businesses to do so.
Laying off relatively highly paid technical/intellectual labor is low-hanging fruit in this whole equation, and the rollout of AI did just that. Hundreds of thousands of highly paid workers were laid off across a variety of sectors, assured that AI would create so much more efficiency and cut out the need for so many of these workers. So they rolled out this garbage tech that doesn’t work, but everyone in the industry, the media, and the government needs it to work, or else they face a massive economic crisis, which had already started with inflation.
At the end of the day it’s just a massive grift, pushed out to compensate for excessive overproduction driven by another massive grift (cryptocurrency), combined with economic troubles that arose from an insufficient government response to a pandemic that killed millions of people; and rather than take other measures to stifle inflation, our leaders in global finance decided to shunt the consequences onto workers, as always. The excuse given was AI, which is nothing more than a predictive text algorithm attached to a massive database created by exploited workers overseas and stolen IP, and a fuck load of processing power.
I hope someday we can come up with an economic system that is not based purely on profit and the exploitation of human beings. But I don’t know that I’ll live long enough to see it.
Well, remember that shifts in material conditions and consciousness can happen very quickly. We can’t decide when that is, but we can prepare and build trust until it does occur. It’s hard to imagine what it would take in the West to see an overthrow of capitalism; all we can do is throw our weight behind where it will have the most effect, hopefully where our talents reside also! Stay optimistic, despite even evidence to the contrary. For the capitalists, it’s better to believe that the end of the world is coming than to believe a new world is possible. So if nothing else, let’s give ’em hell.
I can’t tell you how many times I’ve had this exact thought. 😕
Are you an economist or business professor IRL? Because that was an amazing answer!
No actually I’m mostly self educated. I’m just a tech worker who studies history, social theory and economics, but also does some political organizing. So take it with a grain of salt if you must.
Glad you got something from it, I appreciate the compliment!
Very nice writeup. My only critique is the need to “lay off workers to stop inflation.” I have no doubt that some (many?) managers believed that to be the case, but there’s rampant evidence that the spike in inflation we’ve seen over this period was largely due to corporate greed hiking prices, not due to increased costs from hiring too many workers.
Exactly! the two things are the same phenomenon expressing in two different ways! This is exactly why this is such a mindfuck.
Follow my logic: in the USA by 2022, covid-19 had killed over a million people. When you compare this to the total unemployed in the US – not just the government’s padded numbers, but adding together all the people in prisons, people who stopped looking for work, etc. – those covid deaths were about 12% of that unemployed “surplus” population. Again, the system needs a certain number of people to be unemployed; over a million people died, which means over a million “jobs” (this includes employed and unemployed positions within the entire workforce). At the time the media was calling it “the great resignation,” where employees were just going out and getting better jobs. But where did these jobs come from? Can you really just go out and get a better job any time you want? Of course not. Try searching for a job now, good fucking luck.
Seriously, google “reserve army of labor” if you haven’t already; it explains everything. So as the labor market tightens, consumption increases. People got better jobs and can fix their credit up in a few months and get a loan on a car, maybe for the first time. People are walking out of the grocery store with more food, or going out to eat more. Retailers notice this and raise prices in response to increased spending. This is a phenomenon that Marx wrote about in Value, Price and Profit, which I might mention again.
So why were prices going up? Larry Summers gets in front of Jon Stewart and says that an increase in spending equals an increase in demand, and when demand challenges supply, prices go up! Which is what we are generally taught. Except Marx proved that this was not the case, that inflation really was just retailers raising prices in response to increased consumer spending. It’s a bit of economic sleight of hand that I could explain if you want, but for now I’m already long.
The Federal Reserve says that inflation (which is, like you said, mostly driven by companies raising prices to squeeze consumers, and this is proven by the way the Fed responds) is out of control, so therefore they are raising interest rates. The way this controls inflation is by making it harder and more expensive for companies to get money for large capital investments. This is all to squeeze the companies into freezing hiring (since their P&L is negatively affected) and eliminating excess staff. But the companies are reluctant to let people go or stop hiring because of what they just experienced with a “tight” labor market. They have the incentives and pressures, but they need an excuse, a justification. Enter automation with AI. Finally the automation revolution that the media has been threatening workers with for decades is here, and sorry, can’t halt progress, you see (Ned Ludd did nothing wrong).
Except it isn’t all that. In the meantime the economy has adjusted to the depleted reserve population, the corpos were given everything they wanted or needed in order to continue to profit after the death of millions, and a new grift industry has grown up and attracted all this funding and following and clout. They didn’t even have to lose that many jobs, just a bunch of highly paid ones. Except interest rates are still elevated, so the Fed is continuing to keep that pressure on the labor market. Anyway, there are all these cascading effects from systems interacting with each other; it’s therefore often more useful to understand the relations between phenomena than to try to understand each phenomenon on its own.
So you’re right, it was corporate policy, but it isn’t necessarily greed. Definitely greed-adjacent though; it’s like systemic greed. There are incentives and disincentives present within the system. Karl Marx was able to write about the causes of inflation 150 years ago, and they were using the same faulty excuses then. That’s also why the Fed decided to raise interest rates: they understood what the problem was, and the fix is and always has been to throw people into unemployment. The system is predictable, but it isn’t rational.
That is a very pessimistic and causal explanation, but you’ve got the push right. It’s marketing that pushes it though, not necessarily tech. AI, as we currently see it in use, is a very neat technological development. Even more so it is a scientific development, because it isn’t just some software, it is an intricate mathematical model. It is such a complex model that we actually have to study how it even works, because we don’t know the finer details.
It is not a replacement for office workers, it is not the robot revolution and it is not godlike. It is just a mathematical model on a previously unimaginable scale.
“Pessimistic and casual”? You’re gonna make me self conscious.
I’m an AI skeptic. It’s too energy hungry, and it’s not doing anything except scraping massive amounts of consumer data. No, it’s not going to replace workers (because it doesn’t work), but then again countless workers were already laid off, so it already served its purpose there. It doesn’t have to replace them, just has to purge them, in the systematic way the Fed called for when it started raising interest rates.
Are you an AI Scientist/engineer? If so I’d love to hear more about your work. I’m in tech myself but def not on the bleeding edge of AI.
Machine learning has many valid applications, and there are some fields genuinely utilizing ML tools to make leaps and bounds in advancements.
LLMs, aka bullshit generators, which is where a huge majority of corporate AI investment has gone in this latest craze, are one of the poorest applications. Not to mention the steaming pile of ethical issues with training data.
I appreciate the candid analysis, but perhaps “nothing to see here” (my paraphrase) is only one part of the story. The other part is that there is genuine innovation and new things within reach that were not possible before. For example, personalized learning–the dream of giving a tutor to each child, so we can overcome Bloom’s 2 Sigma Problem–is far more likely with LLMs in the picture than before. It isn’t a panacea, but it is certainly more useful than cryptocurrency kept promising to be IMO.
Again, I am highly skeptical that this technology (or any other) can be deployed for such a worthy social mission. I have a cousin who works for a company that produces educational materials for people who need a lot of accommodation, so I know there are definitely good people in those fields who have the ability, and probably the desire, to deploy this tech responsibly and progressively in a manner that helps fulfill that and similar missions. But when I look at things systemically, I just don’t see the incentive structures to do so. I won’t deny being a skeptic of AI, especially since my personal and professional experience with it has been dramatically underwhelming. I’d love to believe things work better than they do, that they even could, but with AI I see a lot of promises and nothing in the way of results, outside of modestly entertaining tricks. Although I gotta admit, Stable Diffusion is really cool. Commercially I think it’s dogshit, but the way it creates the images is fascinating.
What would a good incentive structure look like? For example, would working with public school districts and being paid by them to ensure safe learning experiences count? Or are you thinking of something else?
It wouldn’t look like the profit motive, and it wouldn’t look like a half baked grift.
We’ve already established that language models just make shit up. There is no need to demonstrate. Bad bot!
Excuse me? Are you calling me a bot?
I remember learning about Turing tests to determine whether speech was coming from a machine. It’s ironic that in practice it’s much more common for people to fail to recognize even a real person.
It’s just that I rarely see a real person be so confidently wrong.
Care to elaborate?
Robots don’t demand things like “fair wages” or “rights”. It’s way cheaper for a corporation to, for example, use a plagiarizing artificial unintelligence to make images for something, as opposed to commissioning a human artist who most likely will demand some amount of payment for their work.
Also I think that it’s partially caused by people going “ooh, new thing!” without stopping to think about the consequences of this technology or if it is actually useful.
Rich assholes have spent a ton of money on it and they need to manufacture reasons why that wasn’t a waste.
The natural general hype is not new… I even see it in 1970s sci-fi. It’s like once something pierced the long-thought-impossible Turing test, decades of hype pressure suddenly and freely flowed.
There is also an unnatural hype: the idea that with one breakthrough will come another, and that the next one might yield the first mover a technocratic singularity: money, market dominance, and control.
Which brings the tertiary effect (closer to your question): companies are so quickly and blindly eating so many billions of dollars in first-mover costs that corporate copium wants to believe there will be a return (or at least cost defrayal)… so you get a bunch of shitty AI products, and pressure toward them.
Sounds about right
Interestingly, the Turing test has been passed by much dumber things than LLMs
I’m not talking about one-offs and the assessment noise floor, more like: “ChatGPT broke the Turing test” (as is claimed). It used to be something we tried to attain, and now we don’t even bother trying to make GPT seem human… we actually train it to say otherwise lest people forget. We figuratively pole-vaulted over the Turing test and are now on the other side of it, as if it were a point on a timeline instead of an academic procedure.
True!
A dumb person thinks AI is really smart, because they just listen to anyone who answers confidently
And no matter what, AI is going to give its answer like it is 100% definitely the truth.
That’s why there’s such a large crossover with AI and crypto, the same people fall for everything.
There’s new supporting evidence for Penrose’s theory that natural intelligence involves just an absolute shit ton of quantum interactions, because we just found out how the body can create an environment where quantum superposition can not only be achieved, but achieved incredibly simply.
AI got a boost because we didn’t really (still don’t) understand consciousness. Tech bros convinced investors that neurons were what mattered, and made predictions for when that number of neurons could be simulated.
But if it includes billions of molecules in quantum superposition, we’re not getting there in our lifetimes. But there’s a lot of money sunk into it already, so there’s a lot of money to lose if people suddenly get realistic about what it takes to make a real artificial intelligence.
That’s why there’s such a large crossover with AI and crypto, the same people fall for everything.
There’s a large overlap, but some people that did not fall for crypto may fall for AI.
Always never not be hustling, I suppose.
So they’re using the sunk cost logical fallacy? Gee that’s intelligent.
The finding that microtubules create an environment that can sustain quantum superposition only came out like a month ago.
In all honesty the tech bros probably don’t even know yet, or don’t understand that it means human-level AI speculation has essentially been disproven as happening anytime remotely soon.
But I’m assuming when they do, they’ll just ignore it and double down to maintain share prices.
It’s also possible it all crashes and billions of dollars disappear.
Microtubules have been pushed for decades without any proof. The latest paper wasn’t evidence but unsupported speculation.
But more importantly, the physics of the computation that creates intelligence has absolutely nothing to do with understanding intelligence. Even if quantum effects are relevant (which is extremely unlikely given the warm and moving environment inside the brain), it doesn’t answer anything about how humans are intelligent.
Penrose used quantum mechanics as a “God of the Gaps” explanation. That worked 40 years ago, but today we have working quantum computers and still no human-level machine intelligence.
So the senator from Alaska was right? The internet is all a bunch of tubes?
When ChatGPT was first announced, I believe the hype was because it was the first really usable interface a layman could interact with using normal language and get an intelligible response from. Normally, to talk with computers we use their language (programming), but this let plain-language speakers interact and get the software to do things with simple language, in a more pervasive way than something like Siri, for instance.
This then got overhyped and overpromised to people with dollars in their eyes at the thought of large savings from labor reduction, and of capabilities far greater than it had. They were sold a product that has no real “product,” as it’s something most people would prefer to interact with on their own terms when needed, like any tool. That’s really hard to sell and make people believe they need. So they doubled down with the promise it would be so much better down the road. And, having already spent an ungodly amount on it, they have that sunk cost fallacy and keep doubling down.
This is my personal take and understanding of what’s happening. Though there’s probably more nuances, like staying ahead of the competition that also fell for the same promises.
The hype is also artificial and usually created by the creators of the AI. They want investors to give them boatloads of cash so they can cheaply grab a potential market they believe exists before they jack up prices and make shit worse once that investment money dries up. The problem is, nobody actually wants this AI garbage they’re pushing.
Disclaimer: I’m going to ignore all moral questions here
Because it represents a potentially large leap in the types of problems we can solve with computers. Previously the only comparable tool we had to solve problems were algorithms, which are fast, well-defined, and repeatable, but cannot deal with arbitrary or fuzzy inputs in a meaningful way. AI excels at dealing with fuzzy inputs (including natural language, which was a huge barrier previously), at the expense of speed and reliability. It’s basically an entire missing half to our toolkit.
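A crude way to see the algorithm-vs-fuzzy-input distinction is with strings. This is my own toy illustration (not from the thread, and the commands are made up): an exact dictionary lookup fails the moment input is misspelled, while a fuzzy matcher using Python’s stdlib `difflib` still finds the closest known command. ML models push this “closest fit” idea to an extreme, over learned features instead of character overlap.

```python
import difflib

# Hypothetical command vocabulary for the sake of the example.
commands = ["shutdown", "restart", "status", "help"]

def exact(cmd):
    """Classic well-defined algorithm: input must match exactly."""
    return cmd if cmd in commands else None

def fuzzy(cmd):
    """Fuzzy handling: return the closest known command, if any is
    similar enough (cutoff=0.6 on difflib's similarity ratio)."""
    matches = difflib.get_close_matches(cmd, commands, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(exact("restrat"))   # None: exact matching can't cope with a typo
print(fuzzy("restrat"))   # restart
```

Of course `difflib` is itself a deterministic algorithm; the point is only that tolerating messy input takes a fundamentally different approach than exact matching.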
Be careful not to conflate AI in general with LLMs. AI is usually implemented as Machine Learning, which is a method of fitting an output to training data. LLMs are a specific instance of this that are trained on language (hence, large language models). I suspect that if AI becomes more widely adopted, most users will be interacting with LLMs like you are now, but most of the business benefit would come from classifiers that have a more restricted input/output space. As an example, you could use ML to train an AI that can be used to detect potentially suspicious bank transactions. The more data you have to sort through, the better AI can learn from it*, so I suspect the companies that have been collecting terabytes of data will start using AI to try to analyze it. I’m curious if that will be effective.
*technically it depends a lot on the training parameters
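The suspicious-transaction idea above can be sketched in a few lines. Everything here is invented for illustration (the features, the data, and the nearest-centroid rule, which is about the simplest classifier there is); a real system would use far richer features and an actual ML library.

```python
import math

# Made-up labeled training data. Features: (amount in dollars, hour of day)
normal = [(25, 12), (40, 14), (12, 9), (60, 18), (30, 11)]
suspicious = [(4900, 3), (7200, 2), (5600, 4)]

def centroid(points):
    """Average point of a class: the 'fit to training data' step."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

c_normal, c_susp = centroid(normal), centroid(suspicious)

def is_suspicious(tx):
    """Label a new transaction by whichever class centroid it sits
    closer to in feature space (nearest-centroid classification)."""
    return math.dist(tx, c_susp) < math.dist(tx, c_normal)

print(is_suspicious((6000, 3)))   # True: large, middle-of-the-night
print(is_suspicious((35, 13)))    # False: small, midday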
I suppose it depends on the data you’re using it on. I can see a computer looking through stacks of data in no time.
There is no artificial intelligence, just very large statistical models.
It’s easier for the marketing department. According to an article, it’s neither artificial nor intelligent.
In what way is it not artificial
Artificial intelligence (AI) is not artificial in the sense that it is not fake or counterfeit, but rather a human-created form of intelligence. AI is a real and tangible technology that uses algorithms and data to simulate human-like cognitive processes.
I’m generally familiar with “artificial” to mean “human-created”
Humans created cars and cars are real. I tried to get some info from the Wired article but they paywalled me.
“Artificial” doesn’t mean “fake”, it usually means “human made”
Found a link to Kate Crawford’s research. The quote is near the bottom of the article. It’s interesting, anyway.
That’s what Gemini said.
Is human intelligence artificial? #philosophy
Well, using the definition that artificial means man made then no. Human intelligence wasn’t made by humans therefore it isn’t artificial.
I wonder if some of our intelligence is artificial. Being able to drive directly to any destination, for example, with a simple cell-phone lookup. Reading lifetimes worth of experience in books that doesn’t naturally come at birth. Learning incredibly complex languages that are inherited not by genes, but by environment–and, depending on the language, being able to distinguish different colors.
From the day I was born, my environment shaped what I thought and felt. Entering the school system, I was indoctrinated into whatever society I was born to. All of the things that I think I know are shaped by someone else. I read a book and I regurgitate its contents to other people. I read a post online and I start pretending that it’s the truth when I don’t actually know. How often do humans actually have an original thought? Most of the time we’re just regurgitating things that we’ve experienced, read, or heard from external forces rather than coming up with thoughts on our own.
Artificial intelligence is a branch of computer science, of which LLMs are objectively a part.
When will people finally stop parroting this sentence? It completely misses the point and answers nothing.
Where’s the intelligence in suggesting glue on pizza? Or is it just copying random stuff and guessing what comes next, like a huge phone keyboard app?
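The phone-keyboard comparison is pretty literal. At toy scale, next-word prediction is just counting which word tends to follow which; this is my own tiny sketch (real models predict over learned probability distributions at vastly larger scale, not raw bigram counts):

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for demonstration.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat ate the fish"
).split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word):
    """Suggest the most frequent word seen after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))   # cat ("cat" follows "the" most often here)
print(predict("sat"))   # on
```

No understanding anywhere, just frequency statistics; the glue-on-pizza failure mode follows directly, since whatever co-occurred in the training data can come out as a "suggestion."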
A lot of jobs are bullshit. Generative AI is good at generating bullshit. This led to a perception that AI could be used in place of humans. But unfortunately, curating that bullshit enough to produce any value for a company still requires a person, so the AI doesn’t add much value. The bullshit AI generates needs some kind of oversight.
It amazed people when it first launched, and capitalists took that to mean they could replace all their jobs with AI. Where we wanted AI to make shit jobs easier, they used it to replace whole swaths of talent across industries. Recent movies read like they were written almost entirely by AI. Like when Cartman was a robot and kept giving out terrible movie ideas.
Like was said: money.
In addition, they need training data: both conversations and raw material. Shoving “AI” into everything whether you want it or not gives them the real-world conversational data to train on. If you feed it any documents, etc., it’s also sucking those up as raw data to train on.
Ultimately, the best we can do is ignore it and refuse to use it, or feed it garbage data so it chokes on its own excrement.
That works for me. I’ll just ignore it to spare my sanity