• ALoafOfBread@lemmy.ml · 2 months ago

    Now make mammograms not cost $500, not have a six-month waiting time, and be available to women under 40. Then this’ll be a useful breakthrough

      • ALoafOfBread@lemmy.ml · 2 months ago

        Oh for sure. I only meant in the US, where MIT is located. But it’s already a useful breakthrough for everyone in civilized countries

        • Instigate@aussie.zone · 2 months ago

          For reference, here in Australia my wife has been asking to get mammograms for years now (she’s in her 30s), and she keeps getting told she’s too young because she doesn’t have a familial history. That issue is a bit pervasive in countries other than the US.

    • FierySpectre@lemmy.world · 2 months ago

      Using AI for anomaly detection is nothing new, though. I haven’t read any articles about this specific ‘discovery’, but usually this uses a completely different technique from the AI that comes to mind when people think of AI these days.

      • Johanno@feddit.org · 2 months ago

        That’s why I hate the term AI. Say it is a predictive LLM or a pattern recognition model.

        • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · 2 months ago

          Say it is a predictive llm

          According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.
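
          For illustration, here’s a minimal sketch of what that looks like in code, assuming torch and torchvision are installed. The 5-output head (one logit per year of the prediction window) is my assumption about the general shape, not the paper’s exact architecture:

          ```python
          import torch
          import torchvision

          # Load the standard ResNet-18 backbone (no pretrained weights downloaded).
          model = torchvision.models.resnet18(weights=None)

          # Swap the 1000-class ImageNet head for a 5-output head:
          # one risk logit per year of a hypothetical 5-year window.
          model.fc = torch.nn.Linear(model.fc.in_features, 5)

          # A mammogram crop resized to 224x224, batch of one.
          x = torch.randn(1, 3, 224, 224)
          logits = model(x)              # shape: (1, 5)
          risks = torch.sigmoid(logits)  # per-year risk estimates in [0, 1]
          ```

          Note that nothing in this pipeline touches text at any point; it’s pure image convolution end to end.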

          or a pattern recognition model.

          Much better term IMO, especially since it uses a convolutional network. But since the article is a news publication, not a serious academic paper, the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is); otherwise we wouldn’t be here talking about it.

        • 0laura@lemmy.world · 2 months ago

          It’s a good term; it refers to lots of things. There are many terms like that.

  • cecinestpasunbot@lemmy.ml · 2 months ago

    Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer. However, if it has a false positive rate of, say, 5%, its use may actually create more harm than it intends to prevent.

    • ColeSloth@discuss.tchncs.de · 2 months ago

      Not at all, in this case.

      A false positive rate of even 50% can mean telling the patient “you are at a higher risk of developing breast cancer and should get screened every 6 months instead of every year for the next 5 years”.

      Keep in mind that women have about a 12% chance of getting breast cancer at some point in their lives. During the highest-risk years it’s about a 2 percent chance per year, so a machine with a 50% false positive rate for a 5-year prediction would still only be telling something like 15% of women to be screened more often.
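
      That back-of-the-envelope math checks out in a few lines. This sketch interprets “50% false positive” as half of all flags being wrong (50% precision), which seems to be the intended reading:

      ```python
      # 2% annual risk over a 5-year window (the commenter's figure).
      annual_risk = 0.02
      five_year_risk = 1 - (1 - annual_risk) ** 5  # ~9.6% develop cancer in 5 years

      # If half of all flags are false positives (50% precision) and every
      # true case is caught, total flagged = true cases / precision.
      precision = 0.5
      fraction_flagged = five_year_risk / precision  # ~19% of women flagged

      print(f"{five_year_risk:.1%} true 5-year risk, {fraction_flagged:.1%} flagged")
      ```

      That lands in the same ballpark as the “like 15%” figure above.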

    • Vigge93@lemmy.world · 2 months ago

      That’s why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.

      Keep the human in the loop!
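
      A minimal sketch of that idea: the model’s score only routes the case to a workflow step, never to a diagnosis. The threshold and labels here are made up for illustration:

      ```python
      def triage(model_risk: float, review_threshold: float = 0.1) -> str:
          """Route a screening case based on a model risk score.

          The model only prioritizes; a radiologist makes every actual call.
          The 0.1 threshold is illustrative, not a clinical value.
          """
          if model_risk >= review_threshold:
              return "priority radiologist review"
          return "routine screening schedule"

      print(triage(0.25))  # prints "priority radiologist review"
      ```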

    • Maven (famous)@lemmy.zip · 2 months ago

      Another big thing to note: we recently had a different but VERY similar headline about an AI that could find typhoid early, and it was able to point it out more accurately than doctors could.

      But when they examined the AI to see what it was doing, it turned out it was weighing the specs of the machine used to do the scan… An older machine meant the area was likely poorer and therefore more likely to have typhoid. The AI wasn’t pointing out whether someone had typhoid; it was just telling you whether they were in a rich area or not.

  • superkret@feddit.org · 2 months ago

    Why do I still have to work my boring job while AI gets to create art and look at boobs?

  • bluefishcanteen@sh.itjust.works · 2 months ago

    This is a great use of tech. That said, I find that the lines are blurred between “AI” and machine learning.

    Real question: other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying “Here’s a picture of Billy (maybe)”, it’s saying “Here’s a picture of some precancerous masses (maybe)”.

    That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.

    • pete_the_cat@lemmy.world · 2 months ago

      It’s because AI is the new buzzword that has replaced “machine learning” and “large language models”; it sounds a lot more sexy and futuristic.

    • Lets_Eat_Grandma@lemm.ee · 1 month ago

      Everything machine learning will be called “ai” from now until forever.

      It’s like how all RC helicopters and planes are now “drones”.

      People en masse just can’t handle the nuance of language. They need a dumb word for everything that is remotely similar.

    • AdrianTheFrog@lemmy.world · 2 months ago

      I’ve been looking at the paper, some things about it:

      • the paper and article are from 2021
      • the model needs to be able to use optional data (age, family history, etc.) without being reliant on it
      • it needs to combine information from multiple views
      • it predicts risk for each year of the next 5 years
      • it has to produce consistent results with different sensors and diverse patients
      • it’s not the first model to do this, and it is more accurate than previous methods
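
       On the multi-view point, one common shape for this (my assumption about the general approach, not the paper’s exact method) is late fusion: score each view separately, then pool across views:

       ```python
       def fuse_views(view_risks: dict[str, float]) -> float:
           """Combine per-view risk scores from one exam by max-pooling.

           Keys are the four standard mammogram views; max-pooling means a
           finding visible in any single view drives the exam-level score.
           """
           return max(view_risks.values())

       exam = {"L-CC": 0.04, "L-MLO": 0.07, "R-CC": 0.31, "R-MLO": 0.28}
       print(fuse_views(exam))  # prints 0.31
       ```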
    • Emmie@lemm.ee · 2 months ago

      Honestly, with all respect, that is a really shitty joke. It’s goddamn breast cancer, the opposite of hot.

      I usually just skip these mouldy jokes, but come on, that is beyond the scale of cringe.

      • PlantDadManGuy@lemmy.world · 2 months ago

        Terrible things happen to people you love; you have two choices in this life. You can laugh about it or you can cry about it. You can do one and then the other if you choose. I prefer to laugh about most things and hope others will do the same. Cheers.

        • Emmie@lemm.ee · 2 months ago

          I mean, do whatever you want, but it just comes off as repulsive, like a stain of shit on new shoes.
          This is a public space after all, not the bois’ locker room, so that might be embarrassing for you.

          And you know you can always count on me to point stuff out so you can avoid humiliation in the future.

          • PlantDadManGuy@lemmy.world · 1 month ago

            Thanks for your excessively unnecessary put down. Don’t worry though. No matter how hard you try, you won’t be able to stop me from enjoying my life and bringing joy to others. Why are you obsessed with shit btw?

            • Emmie@lemm.ee · 1 month ago

              Sorry for that comment. I was having a shitty time back then and shouldn’t have been so aggressive toward you, PlantDad

      • stormeuh@lemmy.world · 2 months ago

        And much before that it was rule-based machine learning, which was basically databases and fancy inference algorithms. So I guess “AI” has always meant “the most advanced computer science thing which looks kind of intelligent”. It’s only now that it looks intelligent enough to fool laypeople into thinking there actually is intelligence there.

  • humbletightband@lemmy.dbzer0.com · 2 months ago

    Haha, I love Gell-Mann amnesia. A few weeks ago there was news about speeding up the internet to a gazillion bytes per nanosecond, and it turned out to be fake.

    Now this thing is all over the internet and everyone believes it.

    • Redex@lemmy.world · 2 months ago

      Well, one reason is that this is basically exactly the thing current AI is perfect for: detecting patterns.