Now make mammograms not $500 and not have a 6 month waiting time and make them available for women under 40. Then this’ll be a useful breakthrough
It’s already this way in most of the world.
Oh for sure. I only meant in the US where MIT is located. But it’s already a useful breakthrough for everyone in civilized countries
For reference here in Australia my wife has been asking to get mammograms for years now (in her 30s) and she keeps getting told she’s too young because she doesn’t have a familial history. That issue is a bit pervasive in countries other than the US.
Using AI for anomaly detection is nothing new though. Haven’t read any article about this specific ‘discovery’ but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.
That’s why I hate the term AI. Say it is a predictive LLM or a pattern recognition model.
Say it is a predictive LLM
According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.
or a pattern recognition model.
Much better term IMO, especially since it uses a convolutional network. But since the article is a news publication, not a serious academic paper, the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is) and we wouldn’t be here talking about it.
That performance curve seems terrible for any practical use.
it’s a good term, it refers to lots of things. there are many terms like that.
it refers to lots of things
So it’s a bad term.
Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer. However, if it has a false positive rate of, say, 5%, its use may actually create more harm than it intends to prevent.
Not at all, in this case.
A false positive of even 50% can mean telling the patient “they are at a higher risk of developing breast cancer and should get screened every 6 months instead of every year for the next 5 years”.
Keep in mind that women have about a 12% chance of getting breast cancer at some point in their lives. During the highest-risk years it’s about a 2 percent chance per year, so a machine with a 50% false positive rate on a 5-year prediction would still only be telling something like 15% of women to be screened more often.
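A quick back-of-the-envelope check of that arithmetic. This is my own reading, not from the paper: I’m assuming “50% false positive” means half of all flags are wrong (i.e. 50% precision), since the comment doesn’t pin the definition down.

```python
# Assumptions, not from the paper: ~2% incidence per year over the
# 5-year prediction window, perfect sensitivity, and "50% false
# positive" read as half of all flags being wrong (precision = 0.5).
annual_risk = 0.02
years = 5
true_cases = 1 - (1 - annual_risk) ** years  # fraction who develop cancer
precision = 0.5
flagged = true_cases / precision             # fraction of women flagged

print(round(true_cases, 3))  # 0.096
print(round(flagged, 3))     # 0.192
```

Under those assumptions roughly 19% of women get flagged — the same ballpark as the comment’s rough “like 15%”, and either way nowhere near flagging everyone.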
That’s why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.
Keep the human in the loop!
Another big thing to note: we recently had a different but VERY similar headline about a model that caught typhoid early and could point it out more accurately than doctors could.
But when they examined the AI to see what it was doing, it turned out it was weighting the specs of the machine used to do the scan… An older machine meant the area was likely poorer and therefore more likely to have typhoid. The AI wasn’t detecting whether someone had typhoid; it was just telling you whether they were in a rich area or not.
Wanna bet it’s not “AI”?
Why do I still have to work my boring job while AI gets to create art and look at boobs?
Because life is suffering and machines dream of electric sheep.
This is a great use of tech. With that said I find that the lines are blurred between “AI” and Machine Learning.
Real question: Other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying “Here’s a picture of Billy (maybe)”, it’s saying, “Here’s a picture of some precancerous masses (maybe)”.
That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.
It’s because AI is the new buzzword that has replaced “machine learning” and “large language models”; it sounds a lot more sexy and futuristic.
Everything machine learning will be called “AI” from now until forever.
It’s like how all rc helicopters and planes are now “drones”
People en masse just can’t handle the nuance of language. They need a dumb word for everything that is remotely similar.
I’ve been looking at the paper, some things about it:
- the paper and article are from 2021
- the model needs to be able to use optional data like age, family history, etc., without being reliant on it
- it needs to combine information from multiple views
- it predicts risk for each year in the next 5 years
- it has to produce consistent results with different sensors and diverse patients
- it’s not the first model to do this, and it is more accurate than previous methods
Good stuff
pretty sure iterate is the wrong word choice there
Common case of programmer brain
Dude needs to use AI to fix his fucking grammar.
Not my proudest fap…
Honestly, with all respect, that is a really shitty joke. It’s goddamn breast cancer, the opposite of hot.
I usually just skip these mouldy jokes, but c’mon, that is beyond the scale of cringe.
Terrible things happen to people you love, you have two choices in this life. You can laugh about it or you can cry about it. You can do one and then the other if you choose. I prefer to laugh about most things and hope others will do the same. Cheers.
I mean, do whatever you want, but it just comes off as repulsive. Like a stain of shit on new shoes.
This is a public space after all, not the bois locker room, so that might be embarrassing for you. And you know you can always count on me to point stuff out so you can avoid humiliation in the future.
Thanks for your excessively unnecessary put down. Don’t worry though. No matter how hard you try, you won’t be able to stop me from enjoying my life and bringing joy to others. Why are you obsessed with shit btw?
Sorry for that comment, I had a shitty time back then and shouldn’t have been so aggressive to you, PlantDad
No link or anything, very believable.
You could participate or complain.
https://news.mit.edu/2019/using-ai-predict-breast-cancer-and-personalize-care-0507
Complain to who? Some random Twitter account? Why would I do that?
No, here. You could have asked for a link or Googled it.
I am commenting on this tweet being trash, because it doesn’t have a link in it.
I really wouldn’t call this AI. It is more or less an image identification system that relies on machine learning.
That was pretty much the definition of AI before LLMs came along.
And much before that it was rule-based machine learning, which was basically databases and fancy inference algorithms. So I guess “AI” has always meant “the most advanced computer science thing which looks kind of intelligent”. It’s only now that it looks intelligent enough to fool laypeople into thinking there actually is intelligence there.
Haha I love Gell-Mann amnesia. A few weeks ago there was news about speeding up the internet to gazillion bytes per nanosecond and it turned out to be fake.
Now this thing is all over the internet and everyone believes it.
Well one reason is that this is basically exactly the thing current AI is perfect for - detecting patterns.
Good news, but it’s not “AI”. Please stop calling it that.