
Key Takeaways:
Medical scams are nothing new, but they have evolved over the decades. The newest form of scam, which has already cost victims billions of dollars, exploits artificial intelligence (AI) and our innate tendency to trust what we see. This blog post will delve into medical scams, highlight the threats they pose in healthcare settings, and offer tips nurses can use to protect themselves and their patients from becoming victims.
The Today Show investigated a medical scam case to learn more about this rising problem. Beth Holland, who suffers from lipedema, bought a topical cream after seeing an advertisement on Facebook. The advertisement featured celebrities such as Oprah Winfrey, Kelly Clarkson, and her own doctor endorsing the product. The problem? The video including the celebrity and physician endorsements had been fabricated by AI.
Holland is not the only person to be affected by such scams. These new schemes, known as "deepfakes," are becoming increasingly difficult to distinguish from authentic sources. Studies have shown that, across all populations tested, 27-50% of individuals could not correctly identify deepfake videos, and participants correctly identified audio deepfakes only 73% of the time.
A deepfake is an image, audio recording, or video that has been altered with AI to "create hyper-realistic depictions of individuals saying and doing things that never genuinely occurred." Deepfakes were originally created for entertainment purposes. In recent years, however, scammers have been using them to mislead people into buying fraudulent products and to gain access to private information.
The creation of deepfakes requires neural networks and algorithms that "analyze extensive datasets to acquire the ability to mimic human facial features, expressions and voice..." The more data about the person the algorithm has access to, whether that be videos, images, or voice recordings, the more true-to-life the deepfake will appear.
Therefore, medical identity theft can occur to anyone who appears online. In the case of Dr. David Amron, the board-certified lipedema surgeon from the Holland case, the algorithm pulled information from footage of previous interviews that had been pirated from the internet.
While videos of individuals are the most common use of deepfake technology, AI-generated content has also been used to create fake credentials and certificates of compliance. There have also been instances in which software is used to alter MRI and CT scans for the purpose of committing insurance fraud.
Audio deepfakes are also a threat. The Centers for Medicare & Medicaid Services has released alerts warning healthcare personnel to beware of voice-phishing ("vishing") scams. In these cases, scammers use AI to impersonate the voices of trusted hospital staff or company representatives to request medical records and documentation.
While any industry can be a target for scams and cyberattacks, healthcare is an especially alluring field for scammers. It is full of sensitive data, such as patients' private medical histories, Social Security numbers, addresses, and past and present diagnoses, which can be held for ransom or sold on the dark web. Scammers also exploit the reach of social media: hundreds of medical deepfakes circulate on TikTok, X (formerly known as Twitter), Facebook, and YouTube. In addition to costing victims thousands of dollars, these scams carry a high social cost.
Scammers strive to take advantage of hard-won rapport and medical authority by impersonating trusted doctors and nurses to sway individuals to buy fraudulent products. Dr. François Marquis, chief of intensive care at a Montreal hospital, experienced this situation when he suddenly began receiving phone calls from patients asking where they could buy his new drug.
A deepfake video of him had been circulating on Facebook without his knowledge. He explained that he suspects he was a target because his face is known and that he is trusted. "...it's not only my patients," he said, "it's any patient who's trusting me or any patient trusting physicians at large."
Dr. Marquis pointed out that professional relationships are not the only thing at risk of being rendered untrustworthy; so is the science of medicine itself. For many physicians, he says, "it's all about the science; it's more than just personal." However, "if you cannot trust anything that is said...from a physician, they are not trustworthy anymore, and that's a big issue."
The fraudulent products being sold pose significant threats to public and patient safety, as they often have not been evaluated by the Food and Drug Administration (FDA) and therefore may lack a documented history of clinical trials or known side effects. If used, the products may cause serious reactions or interfere with a patient's prescribed medications, resulting in potentially dangerous outcomes.
The nurses and physicians whose identities are being stolen are also at risk of bodily harm. Dr. Marquis reported that an unnamed individual showed up at Maisonneuve-Rosemont Hospital, where he works, and demanded a refund for the hundreds of dollars they had lost to the deepfake scam.
"That's a real problem, because it's not just about, you know, me and the deepfake. Now it's about the security of the people in the hospital," Dr. Marquis said.
Patients, particularly those with chronic conditions, may be more susceptible to AI-generated medical scams. The Federal Trade Commission (FTC) notes that "scammers often take advantage of stressful times" to steal money, and that treatments for chronic and serious conditions are common subjects of health scams.
"It's terrible that they scam people who are desperate and in pain," Holland said. She later received the cream that had been falsely advertised, but it did not help relieve the pain from her lipedema.
Dr. David Amron concurred with Holland's point later in a Facebook post about the case, saying, "When people are desperate for help, these scams can seem like the only option. That's why education and verified resources are so important."
It is imperative that nurses and physicians educate themselves on the dangers of AI medical scams and how to avoid them. Nurses and other healthcare providers must remain vigilant when assessing advertisements or discussing potentially suspicious medical products and medications with patients.
To help recognize deceptive online medical advertising, verify that a promoted product is FDA-approved, confirm celebrity or physician endorsements directly through the provider's official office or website, and treat unsolicited requests for medical records, payment, or personal information with suspicion.
A new type of AI-driven scam, the deepfake, is a rising threat to patients and healthcare providers alike. These scams jeopardize patient safety, damage the reputations of the medical experts whose likenesses are used, and encourage distrust of legitimate healthcare advice. Therefore, it is imperative that healthcare providers educate themselves and their patients about this newest generation of scams.
About the Author:
Savannah Schmidt is a medical content writer and editor with five years of professional experience. She has a BA in English Literature and has had a hand in creating, editing, and publishing over 500 pieces of content for CEUs for healthcare and medical coding professionals.
Savannah is an independent contributor to CEUfast's Nursing Blog Program. Please note that the views, thoughts, and opinions expressed in this blog post are solely of the independent contributor and do not necessarily represent those of CEUfast. This blog post is not medical advice. Always consult with your personal healthcare provider for any health-related questions or concerns.
If you want to learn more about CEUfast's Nursing Blog Program or would like to submit a blog post for consideration, please visit https://ceufast.com/blog/submissions.