These experts on AI are here to help us understand important things about AI.
Who are these generous, helpful experts that the CBC found, you ask?
“Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto”, per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.
“(Jeff) Macpherson is a director and co-founder at Xagency.AI”, a tech startup which appears to have been announced on LinkedIn two months ago and which does, uh, lots of stuff with AI (see their wild services page). The founders section lists other details apart from J.M.'s “over 7 years in the tech sector” which are interesting to read in light of J.M.'s own LinkedIn page.
Other people making points in this article:
C. L. Polk, award-winning author (of Witchmark).
“Illustrator Martin Deschatelets” whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.
“Ottawa economist Armine Yalnizyan”, per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.
Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.
Things I picked out, from article and round table (before the video stopped playing):
Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?
Who is the “we” who have to adapt here?
AI is apparently “something that can tell you how many cows are in the world” (J.M.). Detecting a lack of results validation here again.
“At the end of the day that’s what it’s all for. The efficiency, the productivity, to put profit in all of our pockets”, from J.M.
“You now have the opportunity to become a Prompt Engineer”, from J.M. to the author and illustrator. (It’s worth watching the video to listen to this person.)
Me about the article:
I’m feeling that same underwhelming “is this it” bewilderment again.
Me about the video:
Critical thinking and ethics and “how software products work in practice” classes for everybody in this industry please.
you have no idea how many engineering meetings I’ve had go off the rails entirely because my coworkers couldn’t stop pasting obviously wrong shit from copilot, ChatGPT, or Bing straight into prod (including a bunch of rounds of re-prompting once someone realized the bullshit the model suggested didn’t work)
I also have no idea how many, thanks to alcohol
Ah, I see you, too, have an engineering culture of PDD
(Promptfan Driven Dev)
Haha they are, in fact, solutions that solve potential problems. They aren't searching for problems; they're searching for people to believe that the problems they solve are going to happen if they don't use AI.
That sounds miserable tbh. I use copilot for repetitive tasks, since it's good at continuing patterns (5 lines slightly different each time but otherwise the same). If your engineers are just pasting whatever BS comes out of the LLM into their code, maybe they need a serious talking-to about replacing them with the LLM if they can't contribute anything meaningful beyond that.
@TehPers @self
5 lines slightly different each time but otherwise the same
I got questions tbh
It's not that uncommon when filling an array with data or populating a YAML/JSON file by hand. It can even be helpful when populating something like a Docker Compose config, which I use occasionally to spin up local services (DBs and such) while debugging.
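For the record, the "repetitive but slightly different each time" shape being described looks something like this hypothetical Compose file (the service names, images, and ports here are made up for illustration, not taken from anyone's actual setup):

```yaml
# Hypothetical docker-compose.yml for spinning up local services while
# debugging. Each service block follows the same pattern with small
# per-service differences -- the shape a pattern-continuing
# autocomplete is good at.
services:
  postgres:
    image: postgres:16
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: local-dev-only
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
```

Once the first service block exists, the remaining ones are near-mechanical variations, which is the pattern-continuation case being made here.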
@TehPers um, do you have an example?
Copilot helped me a lot when filling in legendaryII.json based on data from legendary.json in this directory. The data between the two files is similar, but there are slight differences in the item names and flag names. Most of it was copy/paste, but filling in the When sections was much easier for me with copilot + verify, for example.

Edit: It also helped me with filling in the entries at the top of this C# file based on context I provided in a different format above (temporarily) in comments.
@TehPers there are tools for doing this sor… you know what, never mind
I know, I used one.
@TehPers "I used Github Copilot to help me hand-edit a massive JSON file which was *very slightly different* from another JSON file that I also maintain for some reason, therefore AI is good" is quite a take, but go off, I guess
@TehPers found an optimisation for you, without resorting to Copilot https://github.com/TehPers/StardewValleyMods/pull/37
Feel free to merge once your tests pass. What's that? There are no tests? Ah well…
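One way to avoid both Copilot and the hand-editing entirely is a small derivation script along these lines. This is a sketch only: the field names and the "II" rename rule below are made up for illustration, not taken from the actual repo or that PR.

```python
import json

def derive_entry(entry: dict) -> dict:
    """Copy one entry, applying a systematic rename.

    The "+ ' II'" rule here is hypothetical -- the real files differ in
    item and flag names in ways a script would have to encode explicitly.
    """
    derived = dict(entry)
    derived["Name"] = entry["Name"] + " II"
    return derived

def derive_file(src_path: str, dst_path: str) -> None:
    """Regenerate the second JSON file from the first instead of
    maintaining two nearly identical files by hand."""
    with open(src_path) as src:
        entries = json.load(src)
    with open(dst_path, "w") as dst:
        json.dump([derive_entry(e) for e in entries], dst, indent=2)

# Example with placeholder data:
print([derive_entry(e)["Name"] for e in [{"Name": "FishA"}, {"Name": "FishB"}]])
# prints ['FishA II', 'FishB II']
```

The point being: if one file is a mechanical function of the other, encoding that function once is less error-prone than re-deriving it line by line, with or without an LLM.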
what was the point of this
as much as I’d like to have a serious talk with about 95% of my industry right now, I usually prefer to rant about fascist billionaire assholes like altman, thiel, and musk who’ve poured a shit ton of money and resources into the marketing and falsified research that made my coworkers think pasting LLM output into prod was a good idea
it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim
I was gonna say… good old qa....q 20@a (record a macro into register a, then replay it twenty times) does the job just fine thanks :p

“but my special boy text editing task surely needs more than a basic macro” that’s why Bram Moolenaar, Dan Murphy, and a bunch of grad students Stallman didn’t credit gave us Turing-complete editing languages
Yes, the marketing of LLMs is problematic, but it doesn't help that they're extremely demoable to audiences who don't know enough about data science to realize how unfeasible it is for a service to be inaccurate as often as LLMs are. Show a cool LLM demo to a C-suite and chances are they'll want to make a product out of it, regardless of the fact that you're only getting acceptable results 50% of the time.
I'm perfectly fine with vscode, and I know enough vim to make quick changes, save, and quit when git opens it from time to time. It also has multi-cursor support which helps when editing multiple lines in the same way, but not when there are significant differences between those lines but they follow a similar pattern. Copilot can usually predict what the line should be given enough surrounding context.
I don’t use ChatGPT to code directly but I definitely like to use it like a glorified rubber duck from time to time. I also use it to replace web searches when I know what I want but not the exact details: ‘tell me what parameters seldomUsedLibraryFunction takes and in which order’ is a lot less faff than wading through the acres of very verbose SEO spam that makes up a typical web search, for example. Same goes for regex features I don’t use very frequently. It’s not really solving a problem in my case that couldn’t be solved with better search engines and me not having a kind of flaky working memory, but if it’s there and isn’t costing me I’ll use it.
Software engineers should definitely know better than to blindly paste from it though, and I’m a little concerned about the security implications of these things hoovering up code and in a lot of real-world cases secrets too I’d bet. Also, I wonder what happens legally speaking when GPL’d code gets regurgitated into a proprietary codebase?