Artificial intelligence is rapidly expanding in healthcare, from AI-generated visit summaries to patient condition analysis. Now, new research shows that AI training techniques similar to those used for ChatGPT can be used to train surgical robots to operate on their own.
Researchers from Johns Hopkins University and Stanford University built a training model using video recordings of human-controlled robotic arms performing surgical tasks, the Washington Post reports. The researchers believe that learning to imitate the actions in the videos can reduce the need to program every individual movement required for a procedure.
The robots learned to manipulate needles, tie knots, and suture wounds on their own. What's more, the trained robots went beyond mere imitation, correcting their own mistakes without prompting, such as picking up a dropped needle. The scientists have now begun the next phase of the work: combining these various skills to perform full surgeries on animal cadavers.
Of course, robots have been used in operating rooms for years. Back in 2018, the "surgery on a grape" meme highlighted how robotic arms can assist with surgery by providing a higher level of precision; approximately 876,000 robot-assisted surgeries were performed in 2020. Robotic instruments can reach places in the body where a surgeon's hand would not fit, and they are unaffected by hand tremors. Slim, precise instruments can help reduce nerve damage. But the robot is typically guided by a surgeon with a controller; the surgeon is always in charge.
The concern of skeptics about autonomous robots is that AI models like ChatGPT are not "intelligent" but merely imitate what they have seen before, without understanding the concepts behind what they are dealing with. The endless variety of pathologies across countless diverse human patients poses a challenge: what if the AI model has never seen a specific situation before? During surgery, an error can occur within seconds, and what if the AI has not been trained to respond?
At minimum, autonomous robots used in surgery would have to be approved by the Food and Drug Administration. In other cases, where doctors use AI to summarize patient visits and make recommendations, FDA approval is not required, because a doctor is technically supposed to review and sign off on any information the AI produces. That is worrying, because there is already evidence that AI bots will give bad advice, or hallucinate and insert information into meeting notes that was never said. How often will a tired, overworked doctor rubber-stamp AI-generated output without careful review?
It is reminiscent of recent reports that Israel's military relies on AI to identify attack targets without vetting the information closely. "Soldiers who were poorly trained in using the technology attacked human targets without corroborating the (AI) predictions at all," a Washington Post story read. "Sometimes the only corroboration required was that the target was a man." Things can go wrong when humans become complacent and are not sufficiently in the loop.
Healthcare is another field where the stakes are high, certainly higher than in the consumer market. If Gmail summarizes an email incorrectly, it is not the end of the world. But an AI system misdiagnosing a health problem, or making a mistake during surgery, is a much more serious matter. Who is responsible in that case? The Post interviewed the director of robotic surgery at the University of Miami, and here is what he said:
"The stakes are very high," he said, "because this is a life and death issue." Every patient's anatomy is different, as is the way a disease behaves in each patient.
"I look at [the images from] the CT scans and MRIs and then perform the surgery" by controlling robotic arms, Parekh said. "If you want the robot to perform the surgery on its own, it will have to understand the whole picture: how to read the CT scans and MRIs." The robots would also need to learn how to perform keyhole, or laparoscopic, surgery, which uses very small incisions.
The idea that AI is infallible is difficult to take seriously when no technology is perfect. This autonomous technology is certainly interesting from a research standpoint, but the fallout from a botched surgery performed by an autonomous robot would be monumental. Who do you punish when something goes wrong? Whose medical license gets revoked? Humans are not infallible either, but at least patients have the comfort of knowing their surgeon has gone through years of training and can be held accountable if something goes wrong. AI models are crude simulations of humans that sometimes behave in unpredictable ways and have no moral compass.
Another concern is whether overreliance on autonomous robots in surgery could lead to a decline in doctors' skills and knowledge, much as dating apps that handle the hard parts of meeting people can leave the relevant social skills rusty.
If doctors are tired and overworked, which is the very reason the researchers suggest this technology is valuable, perhaps the systemic issues causing the shortage should be addressed instead. It has been widely reported that the United States is experiencing a serious doctor shortage, driven in part by the difficulty of entering the field. According to the Association of American Medical Colleges, the country faces a shortage of 10,000 to 20,000 surgeons by 2036.