This is unfortunately not true - AI has been a defined term for decades by now. It’s a whole field of study in Computer Science covering different algorithms, including things like Expert Systems, agents based on FSMs or Behavior Trees, and more. Only a subset of AI algorithms require learning.
As a side-note, it must suck to be an AI CS student in this day and age. Searching for anything AI-related on the internet now sucks if you want to get to anything not directly related to LLMs. I’d hate to have to study for exams in this environment…
I hate it when CS terms become buzzwords… It makes academic learning so much harder, without providing anything positive to the subject. All you get are low-effort articles trying to explain subject matter they barely understand, usually mixing precisely defined terms with unrelated stuff, making it super hard to find actually useful information. And AI is the worst offender so far; as a game developer who needs to research AI agents for games, I find it atrocious. I have to sort through so many “I’ve used AI to make this game…” articles and YT videos, to the point it’s basically not possible to find anything relevant to the AI I’m interested in…
Oh, I was not aware of this… (It’s also embarrassing considering that I’m a CS student. We haven’t reached the AI credits yet, but still…). Anyway, thank you for the info! And yeah, the buzzwords part does indeed suck! Whenever I tried to learn more about the topic, I was indeed bombarded by the Elon Musk techbro spam on YouTube. But whatever, I don’t have THAT long to get to these credits. Sooo wish me luck ;)
Take a look at this: https://en.wikipedia.org/wiki/AI_winter
AI has a strong boom/bust cycle. We’re currently in the middle of a “boom.” It’s possible that this is an “eternal September” scenario where deep networks and LLMs are predominant forever, or…
I’d recommend getting Kagi.com. It’s one of the best software investments I’ve made recently; it makes searching for technical questions so much better, because they have their own indexer with a pretty interesting philosophy behind it. I’ve been using it for a few months now, and it has been awesome so far. I get way fewer results from random websites that are just farming clicks on any topic imaginable by abusing SEO, and as an added bonus I can just send selected sites, such as Reddit, to the bottom of the search results.
Plus, since it’s paid, I don’t have to worry about how they are monetizing my data.
I am really convinced there is a Kagi marketing department dedicated to Lemmy. But if it really works that much better for you, that’s great.
But I wouldn’t bank only on the logic “since it’s paid, I don’t have to worry about how they are monetizing my data”. A lot of paid services still look for ways to make more money.
While I did see Kagi recommended on Lemmy, I made the switch because of a recommendation from a colleague at work (now that I think about it, that would funnily enough probably be the case even if I was actually working for Kagi :D), and it has been a nice experience so far. Plus, we had just been talking about it today at the office, so I was in the mood to share :D But I haven’t done any actual search comparisons, so it may just be placebo. I’d say the pattern here is probably caused by a lot of people trying to be more privacy-centric, and mostly deeply against large corporations, so the software recommendations tend to just turn into an echo chamber.
As for the second point, yeah, I guess you are right, Brave Browser being one of the finest examples of it. But it’s a good reminder that I should do some research about the company and who’s behind it, just to avoid the same situation as with Brave, thanks for that.
Before artificial intelligence became a marketable buzzword, most games already included artificial intelligence (like NPCs). I guess when you have a GPT-shaped hammer, everything looks like a nail.
That’s what I was referring to. I’m looking for articles and inspiration on how to cleverly write the NPC game AI that I’m struggling with; I don’t want to see how other people are butchering game development with AI, or the 1000th tutorial about steering behaviors (which are, by the way, an awful solution for most use-cases, and you will get frustrated with them - Context Steering or HRVO is way better, but explain that to any low-effort YouTuber).
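For anyone curious what I mean by Context Steering, here’s a rough sketch of the idea - this is my own minimal Python illustration (all names, radii and weights are made up, it’s not code from any engine): the agent scores a ring of candidate directions with an interest map and a danger map, masks out the dangerous slots, and moves along the best remaining one.

```python
import math

# Fixed ring of 8 candidate directions around the agent.
NUM_SLOTS = 8
DIRECTIONS = [(math.cos(2 * math.pi * i / NUM_SLOTS),
               math.sin(2 * math.pi * i / NUM_SLOTS)) for i in range(NUM_SLOTS)]

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def normalize(v):
    length = math.hypot(v[0], v[1]) or 1.0
    return (v[0] / length, v[1] / length)

def context_steer(agent_pos, target_pos, obstacle_positions, danger_radius=3.0):
    """Pick a movement direction by combining an interest map and a danger map."""
    to_target = normalize((target_pos[0] - agent_pos[0], target_pos[1] - agent_pos[1]))

    # Interest map: how much each slot points toward the target.
    interest = [max(0.0, dot(d, to_target)) for d in DIRECTIONS]

    # Danger map: how much each slot points toward a nearby obstacle, scaled by proximity.
    danger = [0.0] * NUM_SLOTS
    for obs in obstacle_positions:
        offset = (obs[0] - agent_pos[0], obs[1] - agent_pos[1])
        dist = math.hypot(*offset)
        if dist > danger_radius:
            continue
        to_obs = normalize(offset)
        for i, d in enumerate(DIRECTIONS):
            danger[i] = max(danger[i], max(0.0, dot(d, to_obs)) * (1.0 - dist / danger_radius))

    # Mask out dangerous slots, then pick the remaining slot with the highest interest.
    # (If every slot is masked, this just falls back to slot 0 - a real agent would stop.)
    masked = [0.0 if danger[i] > 0.5 else interest[i] for i in range(NUM_SLOTS)]
    best = max(range(NUM_SLOTS), key=lambda i: masked[i])
    return DIRECTIONS[best]

# Example: head toward (10, 0) with an obstacle almost directly in the way.
print(context_steer((0.0, 0.0), (10.0, 0.0), [(1.0, 0.2)]))
# -> roughly (0.71, -0.71): the agent veers around the obstacle instead of walking into it.
```

The nice part compared to classic summed steering forces is that conflicting influences are resolved per direction slot instead of being added together, so avoidance doesn’t end up fighting the seek force.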
I’ve recently had to start using Google Scholar instead of regular search, just so I can find the answers I’m looking for…
If you are looking into how to write game AI, there are a few key terms that can help a ton. Look into anything related to the game FEAR - its AI was considered revolutionary at the time, and it balanced difficulty without the AI knowing too much.
A few other terms are GOAP (goal-oriented action planning) and behavior trees. And as weird as it sounds, looking up the logic used by MMORPG bots can turn up a ton of great ideas, since the ones not running a completely scripted path do interact with the game world based on changing factors.
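If it helps, the core of GOAP is just searching over actions with preconditions and effects until the goal is reachable. Here’s a tiny Python sketch of that idea - all the names, actions and the naive breadth-first search are my own made-up illustration, not code from FEAR or any real engine (real planners typically use A* with action costs):

```python
from collections import deque

class Action:
    def __init__(self, name, preconditions, effects):
        self.name = name
        self.preconditions = preconditions  # facts that must hold before running the action
        self.effects = effects              # facts the action makes true/false

    def applicable(self, state):
        return all(state.get(k) == v for k, v in self.preconditions.items())

    def apply(self, state):
        new_state = dict(state)
        new_state.update(self.effects)
        return new_state

def plan(start_state, goal, actions):
    """Naive breadth-first search for an action sequence that satisfies the goal."""
    frontier = deque([(start_state, [])])
    seen = set()
    while frontier:
        state, steps = frontier.popleft()
        if all(state.get(k) == v for k, v in goal.items()):
            return steps
        key = frozenset(state.items())
        if key in seen:
            continue
        seen.add(key)
        for action in actions:
            if action.applicable(state):
                frontier.append((action.apply(state), steps + [action.name]))
    return None

# Hypothetical example: a guard wants the intruder eliminated.
actions = [
    Action("DrawWeapon",     {"armed": False},                  {"armed": True}),
    Action("MoveToIntruder", {"in_range": False},               {"in_range": True}),
    Action("Attack",         {"armed": True, "in_range": True}, {"intruder_alive": False}),
]
start = {"armed": False, "in_range": False, "intruder_alive": True}
print(plan(start, {"intruder_alive": False}, actions))
# -> ['DrawWeapon', 'MoveToIntruder', 'Attack']
```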
Thank you! My main issue is that while I’m familiar with all those algorithms, it’s usually pretty simple to find how they work and how to use them for very basic stuff, but it’s almost impossible to research actual best practices for how and when to use them once you are working on a moderately complex problem, especially things like formations, squad cooperation, and more complex behavior (where e.g. behavior trees start to have issues once you realize you have tons of interrupt events at almost every node, defeating the point of behavior trees - which can happen if you’re using them wrong, but no one usually talks about it at that level).
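Just to illustrate what I mean by the interrupt problem, here’s a contrived Python sketch (entirely hypothetical nodes, not from any engine): as soon as a condition like “under fire” has to be re-checked in front of nearly every long-running action, the same guard gets duplicated all over the tree, and the hierarchy stops reading as a clean declarative structure.

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self, blackboard):
        # Runs children in order; fails or yields as soon as one does.
        for child in self.children:
            status = child.tick(blackboard)
            if status != SUCCESS:
                return status
        return SUCCESS

class Selector:
    def __init__(self, *children): self.children = children
    def tick(self, blackboard):
        # Tries children in priority order until one doesn't fail.
        for child in self.children:
            status = child.tick(blackboard)
            if status != FAILURE:
                return status
        return FAILURE

class Condition:
    def __init__(self, predicate): self.predicate = predicate
    def tick(self, blackboard):
        return SUCCESS if self.predicate(blackboard) else FAILURE

class ActionNode:
    def __init__(self, fn): self.fn = fn
    def tick(self, blackboard):
        return self.fn(blackboard)

under_fire = Condition(lambda bb: bb.get("under_fire", False))

# The same "not under fire" guard ends up duplicated before every long-running action,
# so the patrol branch can be abandoned the moment combat starts.
patrol = Sequence(
    Condition(lambda bb: not bb.get("under_fire", False)),
    ActionNode(lambda bb: SUCCESS),   # move to next waypoint (stub)
    Condition(lambda bb: not bb.get("under_fire", False)),
    ActionNode(lambda bb: SUCCESS),   # wait / look around (stub)
)

root = Selector(
    Sequence(under_fire, ActionNode(lambda bb: SUCCESS)),  # take cover (stub)
    patrol,
)

print(root.tick({"under_fire": True}))  # -> 'success' via the take-cover branch
```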
And I’m also dealing with an issue that isn’t really mentioned much, and that is scale. Things like GOAP would probably be infeasible to scale to hundreds of units on the screen, which requires entirely different and way less talked-about algorithms.
I eventually found what I needed, but I did have to resort to reading through various papers published on the subject, because just googling “efficient squad based AI behavior algorithm” will unfortunately not get you far.
But it’s possible that I’m just being too harsh, and that search results were always at this level of depth - only my experience has grown over the years, so such basic solutions are no longer sufficient for my projects, and it makes sense that nobody really has a reason to write blog posts of that depth - you just publish papers and give talks about it.
Except for the AI-related keywords, that is - I’m still salty about what the buzzword did to my search results.