And there it is. Wants to have a discussion. Dismisses all arguments instead of tackling them head on. It’s not just my opinion. It is the opinion of the vast majority of people, due to a myriad of reasons I already explained. That you just refuse to see them is the problem with AI bullshitters here on Lemmy. You are the one arguing in bad faith.
I am dismissing invalid arguments; that is what started all this. AIs are not calendars and should not be used as such.
No one ever said they were. You constructed that straw man because you can’t tolerate the idea that most people think AI is bad. It’s not just an opinion. It’s a widely popular opinion that is supported by a ton of evidence and plenty of logical, reasonable arguments, and it is well documented. I provided at least 4 different arguments, and your response to all of them was “yes, but I don’t want to talk about it”. So you know I’m right yet refuse to acknowledge it, because it hurts your ego so much that you feel the need to defend it on an internet forum.
All of which makes me return to the beginning. You’re not smart enough to have a grown-up conversation about AI without its assistance. So I will now stop providing arguments that you don’t want to hear, as obviously the only thing you want to hear is how great AI is. Unfortunately, AI bad.
> literally can’t do what a calculator can do reliably. or a timer. or a calendar.

https://lemmy.world/post/27126654/15901324
Edit to add: For the record, I am very interested in your arguments and would love to read the reports that have come to the conclusion that LLMs produce bad output. That’s news to me (or I should say, a good prompt producing bad output; and what is considered bad, and why?). So if you have a link to a report or something similar, please share. But don’t claim that I am trying to construct a strawman when THE VERY FIRST argument provided to me in this very comment chain was what I have talked about all along.
Edit 2: Here is the personal attack, the other point I disagree with: https://lemmy.world/post/27126654/15901907
See, that’s your problem. You’re arguing, with me, about something that was said to you by someone else. Do you realize why I’m questioning your argumentative skills?
Here’s a source to a study about AI’s accuracy as a search engine. The main use case proposed for LLMs as a tool is indexing a bunch of text, then summarizing and answering questions about it in natural language.
AI Search Has A Citation Problem
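To spell out what that pattern looks like, here is a rough, hypothetical sketch; every name in it is made up for illustration, the keyword-overlap “retrieval” is just a stand-in for a real search index or embedding store, and the linked study is essentially stress-testing the last step, whether the answer and citations are actually supported by what was retrieved:

```python
# Minimal sketch of the "index text, then answer questions about it" pattern.
# The keyword-overlap retrieval below is a hypothetical stand-in for a real
# search backend; only the overall shape of the pipeline matters here.
documents = [
    "The Eiffel Tower is 330 metres tall and stands in Paris.",
    "Mount Everest is 8,849 metres tall.",
]

def retrieve(question: str) -> str:
    """Return the indexed document sharing the most words with the question."""
    words = set(question.lower().split())
    return max(documents, key=lambda doc: len(words & set(doc.lower().split())))

question = "How tall is the Eiffel Tower?"
prompt = (
    "Answer using only this context:\n"
    f"{retrieve(question)}\n\n"
    f"Question: {question}"
)
print(prompt)  # this prompt is what would then be sent to the LLM
```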
Another use is creating or modifying text based on an input or prompt; however, LLMs are prone to hallucinations. Here’s a deep dive into what they are, why they occur and the challenges of dealing with them.
Decoding LLM Hallucinations: A Deep Dive into Language Model Errors
I don’t know why I even bother. You are just going to ignore the sources and dismiss them as well.
> See, that’s your problem. You’re arguing, with me, about something that was said to you by someone else. Do you realize why I’m questioning your argumentative skills?

I’m sorry? You came to me.
Here is how I see it:

1. Someone compared AI to a calculator/calendar.
2. I said you cannot compare them.
3. You asked why I even argue with the first person.
4. I said that I want a better discussion.
5. You said that I should stop dismissing other people’s arguments.
6. I tried to explain why I don’t think it is a valid argument to compare an LLM to what a “calculator can do reliably. or a timer. or a calendar.”
7. You did not seem to agree with me on that, from what I understand.
8. And now we are here.
–
> Here’s a source to a study

I don’t have the time to read the articles right now, so I will have to do it later, but hallucinations can definitely be a problem. Asking for code is one such situation, where an LLM can just make up functions that do not exist.
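To illustrate with a made-up but typical example: Python’s standard math module has no isprime function (sympy has one), yet a model can confidently generate a call to it, and you only find out when the code runs:

```python
import math

# A hallucinated call: math.isprime looks plausible, but no such function
# exists in the standard library (sympy.isprime does; math has nothing like it).
try:
    math.isprime(7)
except AttributeError as err:
    print(err)  # module 'math' has no attribute 'isprime'
```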