• futatorius@lemm.ee · +15/−1 · 8 days ago

    Two intrinsic problems with current implementations of AI are that they are insanely resource-intensive and that they require huge training sets. Neither is directly a problem of ownership or control, though both favor larger players with more money.

    • finitebanjo@lemmy.world · +9/−2 · 8 days ago (edited)

      And a third intrinsic problem: the current models have been shown never to approach human language capability even with unbounded training data, per papers from OpenAI in 2020 and DeepMind in 2022, plus a Stanford paper proposing that AI has no emergent behavior, only convergent behavior.

      So yeah. Lots of problems.

      • andxz@lemmy.world · +1 · 8 days ago (edited)

        While I completely agree with you, that's the one thing that could change if just one of the many groups working on exactly that problem gets something right.

        It's what happens after that that's really scary, probably. Perhaps we all head into some utopian AI-driven future, but I highly doubt that's even possible.

  • RadicalEagle@lemmy.world · +12 · 9 days ago

    I’d say the biggest problem with AI is that it’s being treated as a tool to displace workers, while there is no system in place to make sure that the “value” created by AI (I’m not convinced commercial AI has created anything valuable) is redistributed to the workers it displaces.

    • Pennomi@lemmy.world · +2/−1 · 9 days ago

      The system in place is “open weights” models. These AI companies don’t have a huge head start on the publicly available software, and if the value is there for a corporation, most any savvy solo engineer can slap together something similar.

  • Grimy@lemmy.world · +12/−1 · 9 days ago

    AI has a vibrant open source scene and is definitely not owned by a few people.

    A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It’s a shame to see so many actually cheering them on.

    • cyd@lemmy.world · +3 · 9 days ago

      So long as there are big players releasing open weights models, which is true for the foreseeable future, I don’t think this is a big problem. Once those weights are released, they’re free forever, and anyone can fine-tune based on them, or use them to bootstrap new models by distillation or synthetic RL data generation.
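
      To make “distillation” concrete: it’s conceptually just training a small student model to match the released teacher’s output distribution. A toy sketch in Python (assuming Hugging Face-style models that expose .logits; the function name is mine):

      ```python
      # Toy knowledge-distillation step: the student learns to match the
      # teacher's next-token distribution. Hedged sketch, not production code.
      import torch
      import torch.nn.functional as F

      def distill_step(student, teacher, input_ids, T=2.0):
          with torch.no_grad():
              t_logits = teacher(input_ids).logits / T   # teacher stays frozen
          s_logits = student(input_ids).logits / T
          # KL(teacher || student), scaled by T^2 as in Hinton et al. (2015)
          return F.kl_div(
              F.log_softmax(s_logits, dim=-1),
              F.softmax(t_logits, dim=-1),
              reduction="batchmean",
          ) * T**2
      ```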

  • MyOpinion@lemm.ee · +18/−8 · 9 days ago

    The problem with AI is that it pirates everyone’s work, repackages it as its own, and enriches people who did not create the copyrighted work.

  • TheMightyCat@lemm.ee · +9/−1 · 9 days ago

    No?

    Anyone can run an AI, even on the weakest hardware; there are plenty of small open models for that.

    Training an AI requires very strong hardware; however, that is not an impossible hurdle, as the models on Hugging Face show.
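
    For example, a minimal sketch of running one of those small open models with the Hugging Face transformers library (the checkpoint name is just one of many small options, not an endorsement):

    ```python
    # Minimal sketch: run a small open-weights model on modest hardware.
    # A ~0.5B-parameter model like this runs even on CPU.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-0.5B-Instruct",  # example small open model
    )
    out = pipe("Explain open-weights models in one sentence.",
               max_new_tokens=60)
    print(out[0]["generated_text"])
    ```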

    • CodeInvasion@sh.itjust.works · +7 · 9 days ago

      Yeah, I’m an AI researcher, and with the weights released for DeepSeek, anybody can run an enterprise-level AI assistant. Running the full model natively does require about $100k in GPUs, but with that hardware it could easily be fine-tuned with something like LoRA for almost any application. That model can then be distilled and quantized to run on gaming GPUs.

      It’s really not that big of a barrier. Yes, $100k in hardware is a real cost, but from a non-profit’s perspective that is peanuts.
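
      For a sense of how little code a LoRA pass takes, here’s a hedged sketch with the Hugging Face peft library (the model id and hyperparameters are illustrative only; the full DeepSeek model needs the GPU budget above):

      ```python
      # Sketch of a LoRA fine-tuning setup with peft; hyperparameters and
      # the model id are illustrative, not a recipe.
      from transformers import AutoModelForCausalLM
      from peft import LoraConfig, get_peft_model

      base = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-llm-7b-base")
      lora = LoraConfig(
          r=16,                                 # adapter rank
          lora_alpha=32,                        # scaling factor
          target_modules=["q_proj", "v_proj"],  # attention projections
          lora_dropout=0.05,
          task_type="CAUSAL_LM",
      )
      model = get_peft_model(base, lora)
      model.print_trainable_parameters()  # only a tiny fraction is trainable
      ```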

      Also, adding a vision encoder for images to DeepSeek would not be that difficult in theory, for the same reason. In fact, I’m working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying they share the same first-layer vision encoder, with the textual chain-of-thought tokens read by subsequent layers. (This is a very recent insight as of last week by my team, so if anyone can disprove it, I would be very interested to know!)

      • Riskable@programming.dev · +2 · 9 days ago

        Would you say your research is evidence that the o1 model was built using data/algorithms taken from OpenAI via industrial espionage (as Sam Altman is alleging without evidence)? Or is it just likely that they arrived at the same logical solution?

        Not that it matters, of course! Just curious.

        • CodeInvasion@sh.itjust.works · +4 · 9 days ago (edited)

          Well, OpenAI has clearly scraped everything that is scrapeable on the internet, copyrights be damned. I haven’t actually used DeepSeek enough to make a strong analysis, but I suspect Sam is just mad they got beat at their own game.

          The real innovation that isn’t commonly talked about is the invention of Multi-head Latent Attention (MLA), which is what drives the dramatic increases in both memory (59x) and computation (6x) efficiency. It’s an absolute game changer, and I’m surprised OpenAI hasn’t released their own MLA model yet.
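
          Conceptually (a toy sketch of the idea, not DeepSeek’s actual implementation), MLA caches one small latent vector per token instead of full per-head keys and values, and expands it on the fly:

          ```python
          # Toy illustration of latent-attention KV compression. Dimensions
          # are made up; real MLA has more moving parts (e.g. RoPE handling).
          import torch
          import torch.nn as nn

          d_model, d_latent, n_heads, d_head = 4096, 512, 32, 128

          down = nn.Linear(d_model, d_latent)           # compress: cache only this
          up_k = nn.Linear(d_latent, n_heads * d_head)  # expand latent to keys
          up_v = nn.Linear(d_latent, n_heads * d_head)  # expand latent to values

          x = torch.randn(1, 10, d_model)               # (batch, seq, d_model)
          latent = down(x)                              # KV cache: 512 floats/token
          k = up_k(latent).view(1, 10, n_heads, d_head)
          v = up_v(latent).view(1, 10, n_heads, d_head)
          # vs. caching K and V directly: 2 * 32 * 128 = 8192 floats/token,
          # a 16x cache reduction in this toy example.
          ```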

          While on the subject of stealing data, I have long been of the strong opinion that there is no such thing as copyright when it comes to training data. Humans learn by example, and all works are derivative of those that came before, at least to some degree. Thus, if humans can’t be accused of using copyrighted text to learn how to write, then AI shouldn’t be either. Just my hot take that I know is controversial outside of academic circles.

      • cyd@lemmy.world · +2 · 9 days ago (edited)

        It’s possible to run the big DeepSeek model locally for around $15k, not $100k. People have done it with 2x M4 Ultras, or the equivalent.

        Though I don’t think it’s a good use of money personally, because the requirements are dropping all the time. We’re starting to see some very promising small models that use a fraction of those resources.

    • nalinna@lemmy.world · +6/−2 · 9 days ago

      But the people with the money for the hardware are the ones training it to put more money in their pockets. That’s mostly what it’s being trained to do: make rich people richer.

      • Riskable@programming.dev · +4 · 9 days ago

        This completely ignores all the endless (open) academic work going on in the AI space. Loads of universities have AI data centers now and are doing great research that is being published out in the open for anyone to use and duplicate.

        I’ve downloaded several academic models myself, and all the commercial models and AI tools are built on that public research.

        I run AI models locally on my PC and you can too.

        • nalinna@lemmy.world · +2 · 8 days ago

          That is entirely true, and it’s one of my favorite things about it. I just wish there were a way to nurture more of that and less of the “Hi, I’m Alvin, and my job is to make your Fortune 500 company even more profitable…the key is to pay people less!” type of AI.

      • TheMightyCat@lemm.ee · +4 · 9 days ago

        But you can make this argument about anything that is used to make rich people richer. Even something as basic as pen and paper is used every day to make rich people richer.

        Why attack the technology if it’s the rich people you’re against, not the technology itself?

        • nalinna@lemmy.world · +1 · 8 days ago

          It’s not even the people; it’s their actions. For the record, if we could figure out how to regulate its use so that its profit-generating capacity doesn’t build on itself exponentially at the expense of the fair treatment of others, and instead actively proliferate the models that help people, I’m all for it.

  • DarkCloud@lemmy.world · +8 · 9 days ago (edited)

    Like Sam Altman who invests in Prospera, a private “Start-up City” in Honduras where the board of directors pick and choose which laws apply to them!

    The switch to Techno-Feudalism is progressing far too quickly for my liking.

  • max_dryzen@mander.xyz · +7 · 9 days ago

    The government likes concentrated ownership because then it has only a few phone calls to make if it wants its bidding done (be it censorship, manipulation, partisan political chicanery, etc.).

    • futatorius@lemm.ee · +1 · 8 days ago

      And it’s easier to manage and track a dozen bribe checks rather than several thousand.

  • Wren@lemmy.world · +10/−3 · 9 days ago

    The biggest problem with AI is the damage it’s doing to human culture.

    • rottingleaf@lemmy.world · +1 · 8 days ago

      And all without solving any of its stated goals.

      It’s a diversion. Its purpose is to divert resources and attention from any real progress in computing.

  • Grandwolf319@sh.itjust.works · +7 · 9 days ago

    The biggest problem with AI is that it’s the brute-force solution to complex problems.

    Instead of trying to figure out the most power-efficient algorithm for artificial analysis, they just threw more data and power at it.

    Besides how often it’s wrong, by definition it won’t ever be as accurate or as efficient as actual thinking.

    It’s the solution you come up with the day before the project is due because you know it will technically pass and you’ll get a C.

    • rottingleaf@lemmy.world · +1 · 8 days ago

      Dunno. Judging by the generative music (not LLM-based) that I’ve tried, I think if I spent a few more years of weekly migraines on it, I’d end up doing better myself.

  • Captain Aggravated@sh.itjust.works · +7 · 8 days ago

    For some reason the megacorps have got LLMs on the brain, and they’re the worst “AI” I’ve seen. There are other types of AI that are actually impressive, but the “writes a thing that looks like it might be the answer” machine is way less useful than they think it is.

    • ameancow@lemmy.world · +4/−1 · 8 days ago (edited)

      Most LLMs for chat, pictures, and clips are magical and amazing, for about 4-8 hours of fiddling. Then they lose all entertainment value.

      As for practical use, the things can’t do math, so they’re useless at work. I write better emails on my own, so I can’t imagine being so lazy and socially inept that I’d need help writing an email asking for tech support or outlining an audit report. Sometimes the web summaries save me from clicking a result, but I usually click anyway because the things are so prone to very convincing hallucinations. So yeah, utterly useless in their current state.

      I usually get some angsty reply when I say this, from some techbro-AI-cultist-singularity-head who starts whinging about how it’s reshaped their entire life, but in some deep niche way that is completely irrelevant to the average working adult.

      I have also talked to way too many delusional maniacs who are literally planning for the day an Artificial Super Intelligence is created and the whole world becomes like Star Trek and they personally will become wealthy and have all their needs met. They think this is going to happen within the next 5 years.

  • Rose@lemmy.world · +6 · 8 days ago

    The AI business is owned by a tiny group of technobros who have no concern for what they have to do to get the results they want (“fuck the copyright, especially fuck the natural resources”), who want to be personally seen as the saviours of humanity (despite not being the ones who invented and implemented the actual tech), and who, like all big-wig biz boys, want all the money.

    I don’t have a problem with AI tech in principle, but I hate the current business direction and what the AI business encourages people to do and use the tech for.

    • interdimensionalmeme@lemmy.ml · +1/−1 · 8 days ago

      Well, I’m on board for “fuck intellectual property.” If OpenAI doesn’t publish the weights, then all their datacenters get visited by the killdozer.