Hacker News
CEO of largest public hospital says he's ready to replace radiologists with AI (radiologybusiness.com)
45 points by thunderbong 9 days ago | 110 comments



I figured this was “CEO said a thing” journalism [1], but buried in the last paragraph is a real scorcher:

> “Undeniable proof that confidently uninformed hospital administrators are a danger to patients: easily duped by AI companies that are nowhere near capable of providing patient care,” [Radiologist Dr.] Suhail told Radiology Business. “Any attempt to implement AI-only reads would immediately result in patient harm and death, and only someone with zero understanding of radiology would say something so naive. But in some sense, they’re correct: Hospitals are happy to cut costs even if it means patient harm, as long as it’s legal.”

[1] https://karlbode.com/ceo-said-a-thing-journalism/


Well, let's not forget the conflict of interest on the other side as well: someone who has invested decades of professional experience in a very lucrative field that is already being obliterated by AI in some narrow areas.

Getting rid of radiologists is as much nonsense and saber rattling as suggesting using AI would harm patients.

The answer is clearly just the same as in software development or any other AI impacted field: Let the best professionals handle 10x+ the volume. What that means for all the rest of employees is the question of the century though...


> Getting rid of radiologists is as much nonsense and saber rattling as suggesting using AI would harm patients.

Did a chatbot tell you that? What makes you think it is so?


Well, let's not forget the conflict of interest on the other side as well, of some tech genai cuck having invested decades of professional experience into a very stochastic field where if they dupe enough hospital CEOs to harm their poor patients they may make enough money to afford to use the hospitals with real radiologists.

If hospitals are so concerned about cutting costs, getting sued is probably worse. However, they are all insured against malpractice. I would be wary of insurers that could default if they face too many malpractice claims.

Isn't it also in the insurer's best interest that the hospitals do good work? They'd be another force against hospitals using AI to diagnose or misdiagnose people.

Of course, given that these are legal cases, it would take years for any consequences to be turned into actions.


>If hospitals are so concerned about cutting costs, getting sued is probably worse.

That hasn't stopped them any other time they cut costs. Have you ever spoken to a nurse who works in a hospital?


To be frank, I'm more concerned about non-litigious countries here, as the potential downsides of rolling out "AI radiologists" are much lower there. Some of those countries have multi-month or even year-long waitlists for specialist consultations, so it might be even more tempting at the healthcare-management level.

For folks with long wait times, maybe the advantage of "immediate access to AI radiologist" beats out "wait for human radiologist"? Would be interesting to weigh those harms against each other.

> For folks with long wait times, maybe the advantage of "immediate access to AI radiologist" beats out "wait for human radiologist"? Would be interesting to weigh those harms against each other.

The harm of getting surgery to remove tissue due to a false positive seems pretty big.


It's an interesting one. From some ex-colleagues, waits in the UK can be up to 5 years for a consultation, not to mention the actual procedure itself. When asked if they would rather use AI for a first initial screening, almost all of those colleagues immediately said yes.

That sounds like something a baseball umpire would say.

Some hospitals having a CEO is an aberration

Brother-in-law graduated med school in the early 90s and has been a practicing ER physician since. We discussed this recently and he related that his advisors told him not to go into radiology back in the late 80s because the assumption was that computers were going to take over the field. He's not too far away from retirement and it's only now that we're starting to see some signs of this prediction from 30+ years ago.

As others in the thread note, there are plenty of concerns around operational use of AI solutions in the medical space, but radiology has a much larger target painted on it than other practices as a fair portion of the job (but certainly not all!) can boil down to high-skill pattern recognition from visual inputs. The current list of AI-enabled devices going through FDA approval is public, more than 3/4 of the list are targeting radiology use cases: https://www.fda.gov/medical-devices/software-medical-device-...


The issue with radiologists is that on average they are able to spot ~35% of correct diagnoses, while the world's best radiologists ~45%. AI might get us to ~50% which is ~15% better than an average radiologist (who still needs to review it).

And you are going to provide the references that will sustain this opinion, so we can elevate it to a fact...

It's fine to ask for sources. It's also fine to not give sources when relaying information in freeform comments. It's not fine to ask for sources in the tone you are using, though, as though you are annoyed and simply expect sources to always be included with claims. There are better ways of accomplishing your goals.

Someone drops very specific percentages about diagnostic accuracy, numbers that, if true, have serious implications for patient outcomes, and your concern is that I did not ask nicely enough for a source? I could not think of a more HN-typical response...

I did not even call the claim false, even if it almost deserves it... I said, essentially, let's see the references so we can treat this as fact rather than opinion.

What you did is write a longer and more prescriptive comment about my tone than anything anyone has written about the actual substance :-)). You tone-policed a one-line request for evidence while giving a complete pass to unsourced medical statistics presented as fact.

If we are ranking things that erode discourse quality, I would say you are higher on the list.


[flagged]


> Calm down

You had a point until you did that.


Nah there's no reason to just accept someone's outbursts and not call them out for unsolicited high emotion lol

You can’t complain about somebody’s tone/call them passive aggressive and then use intentionally inflammatory language like “calm down.”

Three comments in... and you still have not said a single word about whether radiologists actually catch 35% of diagnoses. But you have found time to call me passive-aggressive, entitled, lazy, and immature. For one sentence asking for a source...

You are now, multiple comments deep, doing the thing you accuse me of...being more invested in tone than substance.

The irony is genuinely impressive at this point.


If you look at early stage diseases it's probably even way less than 35%...

[flagged]


The lack of self awareness you display is impressive. Grade A troll or bot. As someone who sometimes misses things, I find it mildly interesting when someone is so confidently not on the same page as others. Good luck.

Very unspecific. Zero value comment

If you give specific numbers then I expect sources. If you give out incredibly bold claims then I also expect sources.

It's one thing to talk casually, in which case I agree with you. But as soon as hard numbers are on the table, it's no longer casual, and if you do not provide sources then the assumption has to be that you pulled the numbers out of your ass and you are not to be trusted.

To get around that, just don't provide numbers and don't speak authoritatively. It's very easy, I don't know why people speak authoritatively if they know they can't back it up.


The earth is 21,000 miles in circumference

"SOURCE?"

There's a middle ground here that is a grey area that you seem to be pretending is obviously navigated. You're speaking pretty authoritatively on this by the way. Do you have the moral, propositional logic, and epistemological justification for these claims?


I’m not sure I find this to be a comparable example.

If someone was making an important calculation or decision based on the circumference of the earth, then they would likely want the number cited/confirmed and not just thrown out by a random person that doesn’t pass the smell test. “Radiologists are only right 35% of the time” does not pass the smell test and a cursory search makes the case even worse.


I didn't make any claims; all of that is my opinion. There are literally no claims there. I just said that people who spew out numbers but can't provide a source aren't trustworthy - that's an opinion.

And there's obviously a difference between an established and obvious fact and a BOLD claim. This person made a BOLD claim. And provided numbers. To me, that requires a source.

Yes, there is a middle ground, but this isn't in the middle ground. I think this type of claim requires a source. A different claim, without specific percentages, would not. Or an obvious claim, like the Earth's circumference, also would not.


Maybe radiologist means something different in my country, but here radiologists don't diagnose (I mean, except if you see them for a broken bone or something); oncologists do. I did an observation internship with a radiologist when I was 20 (95% of my family are doctors/nurses/PTs; I wanted to know what a degree in physics could help me do in the field, and radiology was the only path to medicine from my initial training where I only lost one year, and not two). You spend your time calculating doses, finding patient history, and calibrating machines; it's much more a technician role than an MD's. In any case, even if in the US radiologists diagnose cancer, that's such a small part of their job it shouldn't matter.

^ Knowing this, I would believe the best course of action for a hospital administrator would be to implement a "blind workflow" to reduce risk & lawsuits.

A radiologist should separately review a scan, an AI separately review it, and then combine the 2 results for review.
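A minimal sketch of that blind double-read, with hypothetical escalation rules (not any real hospital's protocol):

```python
def blind_double_read(human_positive: bool, ai_positive: bool) -> str:
    """Combine two independent reads of the same scan.

    Hypothetical policy: concordant reads are reported as-is;
    any disagreement escalates to a second human radiologist.
    """
    if human_positive == ai_positive:
        finding = "positive" if human_positive else "negative"
        return f"concordant: report {finding}"
    return "discordant: escalate to second radiologist"

print(blind_double_read(True, True))   # → concordant: report positive
print(blind_double_read(False, True))  # → discordant: escalate to second radiologist
```

Because neither party sees the other's result first, the discordance rate also gives the hospital an ongoing measure of how often the AI and its radiologists actually disagree.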


I have seen very conflicting data on this. You shouldn’t state it so confidently.

I assume the numbers are made up as an example.

I worry that rational takes like this end up completely lost in the battle between motivated parties who yell far louder, but have minimal investment in actual outcomes for those who will be depending on these technologies. The debate over self-driving vehicles is another example.


Where are you getting these numbers? Even a cursory search doesn’t put the numbers anywhere near such poor performance by real people.

AI at 50% would be notably worse (also where are you getting that number?)


From radiologist AI training datasets, evaluated long-term/post-mortem.

Sauce or gtfo

I hate to be “source?” about it but your numbers are so far off what every search result is showing.

I am not saying those are for all diagnoses, but for some tricky yet important ones (i.e. detecting them early might save your life).

You did not give specificity of any kind until now, and now I’m even more curious where these numbers are coming from.

Some data (average radiologist score):

Early-Stage Lung Cancer (via Chest X-ray) 33.3%

Clinical Staging of Stage I Pancreatic Cancer (via CT, MRI, EUS) 21.6%

Breast Cancer (via Mammography in Dense Tissue) 30%

Cuneiform fractures (foot, X-Ray) 0%

Midfoot fractures (general, X-Ray) 12.5%

Cuboid fractures (X-Ray) 14.29%

Navicular fractures (X-Ray) 22.22%

Talus fractures (X-Ray) 21.43%

Individual radiologists often scored 5% in those as well. The skill distribution is brutal.


If your original argument was “it could be useful for more difficult/niche observations” then I think most of us wouldn’t have objected.

I also really don’t understand why you still aren’t sharing any links. Is this all LLM-generated without citations or something? Where are you getting your numbers?


Persuade someone to run a prospective trial and show the outcomes. Everything else is bullshit

> ...and is “actually better than human beings,” he told the audience.

> “For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.

What's the false negative rate for human beings? And what about women that are considered high risk? Is it better or worse?


I'm also suspicious of that 3 out of 10k times. Did they compare an AI examination against a human 10,000 times in novel scenarios? Or did they run it against some data set that's probably in the training data? Or did they run it against some synthetic dataset that is not a good representation of the real world?

Or did they run 5 tests, find zero inaccuracies, and extrapolate to 10,000, but think 0 mistakes was too unbelievable and would give away the game?

Did they test the X-ray reading on only uncomplicated cases, like young healthy people with no deformities? Or did they test it on complex cases too, maybe cases where there are multiple issues and some should be ignored, like the elderly or people with differently shaped bodies?

Also, what is "wrong" here? Is it a false negative, or a false positive? Is it a misdiagnosis? There's levels of wrongness, especially in the medical field.


"wrong" is a false negative. It says that if the test came back negative, it was wrong 3 in 10,000 times, which means there was actually cancer that it didn't find.

Back before things got way worse, the false negative rate from a human radiologist was 10 out of 10,000.

https://radiologybusiness.com/topics/medical-imaging/womens-...


> Sandra Scott, MD, CEO of the One Brooklyn Health, a small hospital facing tight margins, agreed with this line of thinking, according to Crain’s.

Does this CEO of a small hospital realize that their hospital will take the legal responsibility if there's no doctor to sue for malpractice?


Speaking of which... when people talk about "replacing" humans with AI, it makes me wonder if there's some kind of law we can push for that says "if you are part of the chain of command that signs off on AI being able to make final determinations, and that causes legal issues, you will be legally liable in place of the AI, since computers cannot be liable." Let a jury decide who, in the chain, bears what burden, case by case, but provide for prima facie liability for all parties in the chain, when a valid suit is tried. I want to see how strong the push is for AI when it's the CEO's personal money on the line.

The chain of responsibility must include the AI vendor. If vendors aren't liable for malpractice, there will be less incentive for all due diligence when lives are on the line.

Honestly yes, you are 100% right that it should be a responsibility thing. I remember back in the day it was said that self-driving car companies would have legal responsibility in case of an accident. I remember that kind of put a damper on the rollout and also took a lot of hype and focus away from the whole industry.

Hospitals already usually pay for malpractice insurance on behalf of the physicians.

They do, but it’s the physician who is personally liable, not the hospital. It’s just another form of compensation.

My wife and I are both physicians. Our house doesn’t belong to either of us, strictly; it belongs to our marriage. You have to have a legal claim against both of us to put it in jeopardy.


This works in the other direction too - a human misses a cancer that 10 out of 10 radiology models say is there with 99% confidence. That hospital will lose in court for negligence.

> a human misses a cancer that 10 out of 10 radiology models say is there with 99% confidence

I think the cases where judgments differ, whether between humans, between humans and AI, or both, will be the difficult-to-discern cases, where no human and no LLM will have 99% confidence.


Is this the norm in US courts, evaluating a human's performance against LLMs?

If it is a regular practice of such doctors to use such tools, and that doctor did not, then it is malpractice. That is how malpractice works. You have to fall below the standard of care in a way that proximately caused the damages.

I’m sure there’s a golden parachute somewhere around to save her.

Here we go again. There's something about radiology that makes it the perfect bait for nerd sniping. I guess it's probably the misunderstanding that it is exclusively pattern recognition.

Here are my opinions, after a 20 year career as a diagnostic radiologist, and 45 years as a hobbyist computer programmer

1. There are no products currently on the market that can replace a radiologist.

2. If you can't fully and completely replace radiologists, you will still need them around in significant numbers.

3. Because of the infinite variation in human anatomy, physiology, and pathology, it is my opinion that AGI will be required to fully and completely replace radiologists.

4. Once AI is strong enough to replace radiologists, it will be strong enough to replace every other job as well.

5. Based on current RVU compensation models, any cost savings achieved by hospitals replacing radiologists with AI will quickly be lost by reimbursements being adjusted down. There is no way an insurance company will pay the same for an AI interpretation and a human interpretation.

6. There are significant unanswered medicolegal questions that will need to be addressed before AI can operate unsupervised.

In conclusion, I will work as a human radiologist until I retire in 10 years.


To support your point, my dentist office did a trial using AI to read the xrays they take of your jaw and teeth. According to the AI reading my xray, I needed every single filling in my teeth replaced because they were all showing signs of leaking.

The dentist reviewed it and told me that there's just too much variation in how different places do fillings, and in the densities of the filling and the replaced tooth material, for the AI to make good judgments. He didn't think any of my fillings would need replacing, and I likely have many more years before they fail.


this is the key point: it won't save hospitals any money in the long run:

> 5. Based on current RVU compensation models, any cost savings achieved by hospitals replacing radiologists with AI will quickly be lost by reimbursements being adjusted down. There is no way an insurance company will pay the same for an AI interpretation and a human interpretation.


It's hysterically funny that you are being downvoted...

Sometimes people can't tolerate hearing other's lived experiences.

After 20 years as a doctor, if any AI rivals my expertise (mental/neuro disorders) then, I believe, it will deserve the Nobel prize. There are so many fuzzy factors and interconnecting mechanisms in the human biochemical factory, examined with both rigor and intuition, that one cannot encode even for one patient. Medicine is easy as science but difficult as art.

Being on topic, the best is to enhance the doctor's opinion with an AI helper but never completely remove him/her.


Modern AI absolutely excels at wrangling "fuzzy factors and interconnecting mechanisms".

That's the one lesson we learned in this AI wave quick. Things that used to be "a computer can't do it because it's not a formal task and it doesn't have the intuition for it" are now well within AI's reach.


Life is still hard to encode in digital format, and there are two objects: the doctor's brain and the patient's body. Today the first step is the most difficult: the data to train on and the prompt to infer from, which, for medicine, need deeper and wider LLMs. Let's not talk about the acceptable failure rate and who takes responsibility after a disaster.

When can we start replacing CEOs with AI?

Be careful what you wish for. An AI CEO is likely to be more ruthless than most human CEOs, with a singular focus on increasing shareholder value at the expense of everything else, without any "messy" human values to get in the way.

(The Mountain In the Sea is a good read that touches on this.)


Do you think this would help?

The CEO is an employee of the board of directors and the stockholders. An AI CEO would no doubt be as ruthless as a human CEO, if not more so. In other words, I wouldn't anticipate any improvement in CEO behavior.


If I were going to reduce labor costs by $1M+/year, I would rather eliminate 1 CEO than 10 radiologists. I would much rather have 1 unemployed CEO in society than 10 unemployed radiologists. At the very least, "AI" should replace through attrition rather than direct layoffs...

> If I were going to reduce labor costs by $1M+/year, I would rather eliminate 1 CEO than 10 radiologists.

This is a false dichotomy. Why not both?

I think it's a bit strange to hope or assume that an AI CEO would somehow preserve human jobs.


Missing the point. If CEOs realize that they're more replaceable by AI than nurses and medical assistants, for example, then maybe they'll take a more nuanced view of the technology.

No, you're missing the point, because the views of the people to be laid off are irrelevant. Again, the stockholders own the company, not the CEO. If CEOs start changing their tune on AI as soon as their own jobs are at stake, that would just demonstrate to the stockholders that human CEOs are untrustworthy and need to be replaced.

Before AI came along, CEOs were already arbitrarily laying off workers, to please the stockholders. The stockholders like these cost-cutting measures, and whether the measures make sense is secondary to the CEOs doing what their bosses want. If the stockholders believe that they can cut the CEOs too, they surely will.


I don't think AI is ready to replace CEOs, but it would make a good assistant for an H-1B CEO.

Not sure AI is ready to replace anyone but that doesn't seem to be the road block.

If anyone is replaceable by AI, executives are first in line. Make "decisions" based on expert input, give presentations, sit in meetings and on calls. No liability, no concrete "work product" to speak of, so why not?

Surely they could offer a cheaper 'unregulated, no guarantee' AI interpretation with a confidence rating, and an optional follow-up 'are you sure?' expert assessment at full price.

OTOH they're probably planning to charge full price anyway, but massively reduce costs, because, profit.


You can already cash pay to have imaging in my state, without any prescription, then send the images off to some voodoo witch in the Congo if you want. Seems to be the way to do it, just have the hospital do imaging and then the patient does with the image whatever they want. Then the hospital has no liability except in the case they did not image it correctly.

> Fellow panelist David Lubarsky, MD, MBA, president and CEO of the Westchester Medical Center Health Network, said his system is already seeing great success in deploying such technology. The AI Westchester uses misses very few breast cancers and is “actually better than human beings,” he told the audience.

> “For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.

Sounds like 3 wrongs are an acceptable level of risk for this CEO. It would be interesting to put radiologists up against AI to see which have better results, but I would still rather a human read my chart and then have AI give the second opinion, rather than the other way around.


I stand to be corrected, but last time this cropped up about a year ago, there was a pretty severe mismatch in the use of the word "AI".

The NYT ran a story about "AI taking over radiology", where they talked to radiologists at the Mayo clinic (who have an AI research lab), who flatly told NYT that no - AI will not be replacing radiologists, the AI is not good enough.

Here is the rub, though: the "AI lab" was doing research using local CNNs with ~30M parameters. Basically 2017 consumer-GPU-tier AI tech.

I don't know yet if there has been a modern transformer of datacenter scale that has been explicitly pre-trained for medicine/radiology, along with extensive medical/radiology RLHF.


>amid rising demand for imaging

Okay so demand for imaging is up, so we should GET RID of the radiologists? How about we AUGMENT them with AI so that they can do their job better and faster? Why does it need to be either or?


Currently they are augmenting them with Indian radiologists and just signing off on whatever they found.

AI = Actual Indian.

This is illegal in the USA.


"There is almost certainly cancer there."

Are you sure?

"You're right to push back. Upon reinspection, it appears to be something else."


1000 in 10,000 mammograms come back positive by human radiologist.

50 in 10,000 are actually cancer.

It is interesting that the article did not say what the positive rate of detection was, or the false positive rate.

Either way, of course, the next step would be to have a human eyeball it. Probably in India or China or some other low-cost location. Only the wealthiest can afford the immense salary of a US-based radiologist.

Fun fact: I know a radiologist and her visual acuity is freaky good. I doubt that AI will be able to beat her unless they force it into a multi-day marathon.

https://radltd.com/four-reassuring-statistics-about-abnormal...
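The comment's own numbers can be sanity-checked in a few lines (these are the illustrative figures above, not verified clinical data):

```python
# Hypothetical figures from the comment above, not verified clinical data.
screened = 10_000      # mammograms read
flagged = 1_000        # called positive by the human radiologist
true_cancers = 50      # actually cancer (assumed all among the flagged scans)

# Positive predictive value: of the flagged scans, how many are real cancers?
ppv = true_cancers / flagged
print(f"PPV: {ppv:.1%}")  # → PPV: 5.0%

# False positives generated per 10,000 screens
false_positives = flagged - true_cancers
print(f"False positives per 10,000 screens: {false_positives}")  # → 950
```

In other words, under these numbers 95% of human-flagged positives are not cancer, which is why the follow-up human (or second-read) step matters so much.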


How about we start replacing all companies that are replacing humans with AI, using AI as well? Since they decided to participate in the economy one-way only (suck the money out, give nothing back), we can make sure the one-way trend is finished rapidly. The cost of running a company will approach zero in the future. We now have massively profitable companies making record layoffs; something doesn't compute.

It would be interesting to start a co-op or non-profit run by AI for the benefit of the employees and customers. If it worked it would have a huge competitive advantage. I guess the question is where would the capital come from, but as a co-op the employees could buy in and just take the profits as a distribution.

Thinking about this some more: US tax laws really favor income from investment over income from wages. So ideally a co-op member would put something in to join, get a wage, and have an appreciating asset in a tax advantaged account.


Something like that. I'll try to do it as a side project next as I have some spare compute and ran 99% automated e-commerce companies before.

This should not be surprising. The CEO's primary objective is likely to increase profits and so this will be his/her primary focus. Even if the technology is not ready for prime time just making announcements like this likely helps increase negotiating pressure on radiologist group contracts and salaries.

Didn't we just hear predictions about this from Geoffrey a few years ago that turned out to be false? I could have sworn I heard Jensen talk about how the inverse has happened?

Don't we have more radiologists than we did five years ago?


Anything to please the stockholders. It's not like patients' best interests mattered much to them before AI either.

"We could replace a great deal of radiologists with AI at this moment"

Perhaps they cost a great deal of money?


> “For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.

I mean, if I had the choice of both a human radiologist review AND an AI review, I think I would prefer that. 3/10,000 sounds like a very good rate, but a false negative on a cancer diagnosis is life-threatening, no?


"The AI is wrong only 3:10,000 times" is a statement screaming out for the follow up question "how often are the humans wrong". Maybe 3:10,000 is astonishingly good, maybe humans are 10x or 100x better, right now I have no real way of knowing short of a literature review in a field I know nothing about.

At a certain point the false positives start creating more harm than trying to further reduce the false negatives (which is, perhaps counterintuitively, eventually true for even the most serious of risks). Whether that's the case here depends on a lot of information not in the article.
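That tradeoff is easy to see with a toy base-rate calculation (all numbers here are made up purely for illustration, not actual mammography performance):

```python
# Toy numbers only: shows why pushing false negatives down
# tends to push false positives up when disease is rare.
BASE_RATE = 0.005  # assume 50 true cancers per 10,000 screens

def outcomes(sensitivity, specificity, screened=10_000):
    """Expected missed cancers and false alarms per `screened` exams."""
    cancers = BASE_RATE * screened
    healthy = screened - cancers
    missed = cancers * (1 - sensitivity)        # false negatives
    false_alarms = healthy * (1 - specificity)  # false positives
    return missed, false_alarms

# A hypothetical "more sensitive" reader catches more cancers but,
# because disease is rare, generates many more false alarms:
for sens, spec in [(0.90, 0.95), (0.99, 0.85)]:
    missed, alarms = outcomes(sens, spec)
    print(f"sens={sens:.2f} spec={spec:.2f}: "
          f"~{missed:.0f} missed cancers, ~{alarms:.0f} false alarms")
```

Under these toy numbers, cutting missed cancers from ~5 to ~0.5 per 10,000 triples the false alarms (from ~498 to ~1,493), each of which can mean a biopsy or worse; whether that trade is worth it depends on information the article doesn't give.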

The real travesty here is that a hospital has a CEO.

Why is AI able to do everything except CEO and social-media hype work? Why do engineers and doctors still need CEOs to do their jobs?

From the votes I see that this is an unpopular opinion, but apparently there are close to 400 million companies in the world, of which 60K are publicly traded.

I am sure there's enough data to train a top-notch CEO model, since CEOs are required to keep records all the time and give speeches for a living.

Surely privately owned companies where the CEO is also the owner wouldn't like it, but replacing the CEO with an AI in institutions with professional CEOs seems overdue. The radiologist AI will certainly be much better served by an AI CEO.


I am pretty sure current AI is not capable of replacing radiologists, but I am pretty sure it is already good enough to replace 90% of current CEOs. I have worked with multiple CEOs...

With a little extra irony, I’m honestly certain our HR dept could easily be replaced with AI to far better effect. They would surely disagree.

The job description should be a sufficient prompt to replace HR; add some RAG and skill files based on a few months of in-company chat-tool data and paperwork, and I don't see why there's still HR around. The AI HR can choose to hire entertainers etc. for some tasks, but why keep HR on the payroll all the time?

> I don't see why there's still HR around

Main reasons…

1. HR doesn’t work with you. They work for your CEO or Board. Consider them a toxic entity if you ever have a real problem.

2. HR is a socially accepted jobs program for people without any discernible skills, beyond basic data entry and organization. Effectively no one else wants to do it. The issue is that with point one, these people are told they are important and it immediately goes to their heads.


I find the whole field of radiology to be utterly baffling. There are doctors who specialize in, and hopefully understand, specific diseases and/or parts of the body. But we have radiologists who are supposed to be able to look at images, taken by quite a variety of technologies and parameters, of any part of the body, and are expected to accurately interpret the findings, possibly without any relevant context.

In my personal experience interacting with the medical system, it’s, unsurprisingly, quite common for an actual specialist to look at the same images a radiologist looked at, and see something quite different. And it’s nearly always the case that a specialist or a reasonable careful non-specialist who is willing to read a bit of the literature or even ask a chatbot [0], will figure out that at least half of what the radiologist says is utterly irrelevant.

So I think that the degree to which ML can perform as well as a radiologist is not necessarily a great measurement for ML’s ability to assist with medical care.

[0] Carefully. Mindlessly asking a chatbot will give complete nonsense.


Irrelevant to them. A radiologist is on the hook for missing a tiny possible tumor in a scan for a blood clot.

They like to show off occasionally. We had a rectal foreign body that was described as a Phillips-head screwdriver. I was hoping to catch them out by noticing it was Pozidriv, but it was in fact a Phillips.


I'd take it further/slightly parallel direction. Medicine is at the same time a science and a weird "feel and experience" area.

On the one hand it's a science: controlled experiments, calculated dosages, all based on an understanding of low level biology, fancy imaging methods, measuring currents in people's bodies and so on.

On the other hand, there seems to be plenty of "he seems fine to me", "tests came back fine but something seems off to me so let's try another test", "doesn't seem to be responding to this drug, let's try the other one", "in my experience this drug works better than that one". It seems like a pretty big chunk of subjectivity is actually a part of the field.


> On the one hand it's a science: controlled experiments

Those experiments are so hilariously expensive these days, and the results are often not actually fully published, so good data is often unavailable.

> calculated dosages

Often calculated based, in large part, on researchers’ vibes and their vibes when designing experiments.

> all based on an understanding of low level biology

There are many, many drugs with partially or even almost fully unknown mechanisms.


Radiologists work best in consultation with the physicians ordering the studies. Sadly, this is less and less common as workloads increase in medicine. When I started 20 years ago there were whole teams that came through the radiology department every morning to review all of the cases on their patients. Now I go weeks without seeing another physician.

He is blatantly and obviously lying, likely to boost stock prices. Radiologists do physical procedures too.

Interventional radiologists do procedures, but most radiologists are not interventional. If their jobs are on the line, I guess they will have to be.

That’s good; reducing healthcare costs will increase access and boost our health.

Agree that AI should replace CEOs. They’re often biased in unhelpful ways that AI isn’t and it costs people wellbeing.



