
Yhprum's Law applied to Healthcare Technology: Celebrate the unexpected victories because "Everything that can work, will work"

Lloyd Price


Exec Summary


Applying Yhprum’s Law of "Everything that can work, will work" to healthcare technology zeroes in on how tech solutions in medicine often succeed despite apparent limitations, improvised conditions, or unexpected hurdles. It’s the flipside of Murphy’s Law, which might predict a crashed EMR system or a glitchy wearable failing at the worst moment. Instead, Yhprum’s Law spotlights instances where healthcare tech defies the odds, delivering results when the pieces align, even imperfectly.


Consider wearable devices like fitness trackers or continuous glucose monitors (CGMs). Early versions were clunky—short battery life, spotty data syncing, and questionable accuracy. By Murphy’s logic, they’d flop under real-world stress. Yet, they worked. Patients with diabetes, for instance, embraced CGMs like the Dexcom, tweaking their insulin based on real-time readings that, while not lab-perfect, were good enough to transform self-management. The tech could work, so it did, evolving from niche to mainstream as users and developers leaned into its potential.


Telehealth’s rise is another Yhprum triumph. When the pandemic hit, healthcare systems pivoted to virtual care overnight. Off-the-shelf platforms like Zoom weren’t built for HIPAA compliance or clinical nuance—laggy connections, privacy risks, and awkward doctor-patient dynamics should’ve tanked it. But it worked. Doctors diagnosed via grainy video, patients got scripts without leaving home, and mental health sessions thrived despite the makeshift setup. The tools were there, and they functioned because they could, proving skeptics wrong about scale and speed.


AI in diagnostics offers a sharper example. IBM's Watson Health was hyped to revolutionise oncology but stumbled, overpromising and underdelivering. Yet less-hyped AI systems, like those reading mammograms or spotting diabetic retinopathy, quietly excelled. Google's DeepMind, for instance, detected eye disease from retinal scans with accuracy rivaling specialists. The datasets weren't always perfect, and the algorithms weren't flawless, but what could work did, saving sight where human bandwidth fell short.


Robotic surgery fits too. Systems like the da Vinci robot sound like sci-fi perfection, but they’re not foolproof—high costs, steep learning curves, and occasional malfunctions scream Murphy bait. Still, they work. Surgeons perform precise prostatectomies or hysterectomies through tiny incisions, cutting recovery times. The tech’s complexity could’ve derailed it, but its functional core (dexterity, visualisation) delivers because it can.


Even low-tech wins echo Yhprum. In resource-poor settings, hacked-together solutions—like using smartphones with cheap otoscope attachments to screen for ear infections—shouldn’t rival clinical gear. But they do. Doctors in rural clinics get usable images, diagnose, and treat, all because the basics (a camera, a light, a willing user) hold up.


The catch? Yhprum’s Law doesn’t erase tech’s failures—cybersecurity breaches, buggy EHRs, or AI biases still sting. Critics might say it’s naive to cheer patchy wins when lives are on the line. Fair point. But in healthcare tech, it’s less about guaranteed success and more about what clicks when it shouldn’t. A ventilator cobbled together during a shortage, a 3D-printed prosthetic that fits just right, or an app that nudges a patient to take their meds—these work not because they’re perfect, but because their workable parts find a way.


In short, Yhprum’s Law in healthcare tech celebrates the unexpected victories: tools that limp along yet save the day, systems that scale despite cracks, and innovations that stick because they can function. It’s the grit of progress in a field where stakes are sky-high and perfection’s a myth.


Nelson Advisors work with Founders, Owners and Investors to assess whether they should 'Build, Buy, Partner or Sell' in order to maximise shareholder value.


Healthcare Technology Thought Leadership from Nelson Advisors – Market Insights, Analysis & Predictions. Visit https://www.healthcare.digital 


HealthTech Corporate Development - Buy Side, Sell Side, Growth & Strategy services for Founders, Owners and Investors. Email lloyd@nelsonadvisors.co.uk  


HealthTech M&A Newsletter from Nelson Advisors - HealthTech, Health IT, Digital Health Insights and Analysis. Subscribe Today! https://lnkd.in/e5hTp_xb 


HealthTech Corporate Development and M&A - Buy Side, Sell Side, Growth & Strategy services for companies in Europe, Middle East and Africa. Visit www.nelsonadvisors.co.uk




Yhprum's Law explained


Yhprum's Law is essentially the optimistic counterpart to Murphy's Law. While Murphy's Law states, "Anything that can go wrong will go wrong," Yhprum's Law flips the script, asserting, "Everything that can work, will work." The name "Yhprum" is simply "Murphy" spelled backward, symbolizing its role as an inverse perspective. It’s a playful yet thought-provoking idea that encourages a focus on potential success rather than inevitable failure.


The concept has been framed in slightly different ways. For instance, Richard Zeckhauser, a political economy professor at Harvard, put it as, "Sometimes systems that should not work, work nevertheless." This suggests that even when logic or probability leans toward failure, things can still turn out fine—or even better than expected. It's often applied in contexts like engineering, science, or tech, where unexpected successes (think accidental discoveries like penicillin) highlight how outcomes can defy pessimistic predictions. Some even tweak it to, "Anything that can go right, will go right," emphasising proactive optimism.


In practice, it’s less a "law" in the scientific sense and more a mindset. It’s been used to explain phenomena like eBay’s feedback system thriving despite its apparent flaws, or quirky engineering fixes that somehow hold up. Critics might say it’s overly rosy—ignoring real risks—but its real value lies in balancing Murphy’s gloom with a nod to life’s unpredictable wins. It’s a reminder that while we often plan for the worst, the best can sneak up on us too.


Yhprum's Law applied to Healthcare Technology


Applying Yhprum's Law—"Everything that can work, will work"—to healthcare offers a lens to explore how unexpected successes emerge even in a field riddled with complexity, uncertainty, and high stakes. Unlike Murphy’s Law, which might highlight medical errors or system failures (like a misdiagnosis or a crashed hospital IT system), Yhprum’s Law invites us to focus on instances where healthcare defies the odds, delivering positive outcomes against apparent dysfunction or slim chances.


Take emergency medicine as an example. Trauma patients with severe injuries—say, a car accident victim with multiple fractures and internal bleeding—might statistically face grim prospects. Yet, time and again, coordinated efforts by paramedics, surgeons, and nurses, even with limited resources or chaotic conditions, pull off near-miraculous recoveries. The system, stretched thin and imperfect, still works because every piece that can function does: the ambulance arrives just in time, the blood transfusion stabilizes the patient, the surgeon’s improvisation saves an organ. Yhprum’s Law shines here as a testament to resilience and adaptability.


In public health, consider vaccination campaigns. Logistical nightmares—like distributing vaccines to remote areas with no cold chain infrastructure—shouldn’t succeed by Murphy’s reckoning. Yet, they often do. During the Ebola outbreaks in West Africa, mobile teams navigated rough terrain, cultural mistrust, and spotty supply lines, yet vaccination efforts curbed the spread against steep odds. The technology worked, the people worked, and the strategy worked, even when the setup screamed potential collapse.


Tech in healthcare offers another angle. Telemedicine, for instance, exploded during the COVID-19 pandemic. A jerry-rigged mix of Zoom calls, spotty internet, and overworked doctors shouldn’t have been a recipe for quality care. But it worked—patients got consultations, chronic conditions were managed, and mental health support reached isolated folks. Yhprum’s Law suggests that because the tools and intent were there, they found a way to succeed, flaws and all.


Even in drug development, happy accidents align with this idea. Penicillin’s discovery—Fleming noticing mold killing bacteria in a neglected petri dish—is classic Yhprum. A sloppy lab setup should have been a write-off, but instead, it birthed antibiotics. Modern parallels exist too: drugs like sildenafil (Viagra) started as heart meds but worked brilliantly elsewhere. When the pieces can work, they often do, even unpredictably.


That said, healthcare’s stakes mean Yhprum’s Law isn’t a blanket cheerleader. Blind optimism doesn’t fix systemic issues—underfunding, inequity, or burnout. Critics might argue it downplays real failures (e.g., preventable deaths from hospital-acquired infections). But its application isn’t about ignoring flaws; it’s about recognizing how, despite them, successes emerge. A rural clinic with one doctor and no MRI can still save lives with sharp diagnostics and basic tools because what’s available works.


In essence, Yhprum’s Law in healthcare highlights the field’s capacity to triumph through ingenuity, persistence, and sometimes sheer luck. It’s not a rule to bank on—just a perspective to balance the Murphy-esque gloom that often dominates the narrative. When the stars align, or even when they barely flicker, healthcare can still pull through.



Examples of Yhprum's Law applied to Healthcare Technology


Here are examples of Yhprum's Law, "Everything that can work, will work", playing out in healthcare technology. These cases highlight how tech solutions, even when flawed or improbable, manage to succeed because their functional elements pull through under pressure or in unexpected ways.


  1. Pulse Oximeters During COVID-19


    Context: When COVID-19 overwhelmed hospitals, pulse oximeters—those little finger-clip devices measuring oxygen saturation—became lifelines for home monitoring.


    Why It Shouldn’t Work: Cheap consumer versions weren’t medical-grade, prone to misreads on cold fingers or darker skin tones, and lacked real-time clinician oversight.


    How It Worked: Despite inaccuracies, they gave enough signal—dropping oxygen levels flagged trouble early. Patients self-triaged, hospitals focused on the sickest, and a $20 gadget helped manage a global crisis. The tech could work, so it did.
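The self-triage logic described above can be sketched in a few lines of code. This is a minimal illustration only: the 92% and 95% cut-offs and the downward-trend rule are assumptions made for the sketch, not clinical guidance, and real protocols account for sensor error and patient context.

```python
def triage_spo2(spo2_readings):
    """Classify a series of home pulse-oximeter readings into rough triage bands.

    Thresholds are illustrative assumptions, not clinical guidance. Cheap
    sensors misread, so a sustained downward trend is treated as a signal
    even when the latest reading looks acceptable.
    """
    latest = spo2_readings[-1]
    # Strictly decreasing over the last three readings counts as a trend.
    trending_down = len(spo2_readings) >= 3 and all(
        a > b for a, b in zip(spo2_readings[-3:], spo2_readings[-2:])
    )
    if latest < 92:
        return "seek urgent care"
    if latest < 95 or trending_down:
        return "contact clinician / re-check soon"
    return "continue home monitoring"
```

The point mirrors the example: even a crude rule extracts a usable signal from an imperfect device, which is exactly the "good enough" win Yhprum's Law describes.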


  2. 3D-Printed Ventilator Parts in 2020


    Context: Early in the pandemic, ventilator shortages sparked panic. Makers and engineers used 3D printers to churn out valves and splitters.


    Why It Shouldn’t Work: DIY parts lacked FDA approval, rigorous testing, or standardized materials—risking leaks or contamination.


    How It Worked: In Italy and elsewhere, these jury-rigged components kept machines running, doubling capacity in ICUs. Volunteers and hospitals tweaked designs on the fly, and patients breathed because the basics (airflow, fit) held up.


  3. Smartphone-Based Ultrasound in Remote Areas


    Context: Devices like the Butterfly iQ turn a smartphone into a portable ultrasound via a plug-in probe.


    Why It Shouldn’t Work: Grainy images, battery drain, and untrained users (think rural midwives) don’t match a $100,000 hospital machine.


    How It Worked: In places like sub-Saharan Africa, it’s caught fetal distress or heart issues in time for intervention. The tech’s core—sound waves and a screen—functioned well enough to bridge gaps where nothing else could.


  4. WhatsApp for Doctor Consults in India


    Context: During lockdowns, Indian doctors used WhatsApp to triage patients, swapping texts, photos, and voice notes.


    Why It Shouldn’t Work: It’s not secure, not built for medicine, and spotty networks could garble critical details.


    How It Worked: A dermatologist diagnosed rashes from blurry pics, a GP managed fevers via voice clips—millions got care. The app’s ubiquity and simplicity made it a lifeline when formal telemedicine wasn’t an option.


  5. Fitbit Detecting Atrial Fibrillation


    Context: Consumer wearables like Fitbit started flagging irregular heart rhythms, hinting at atrial fibrillation (AFib).


    Why It Shouldn’t Work: They’re fitness toys, not ECGs—prone to false positives, inconsistent tracking, and no clinical validation initially.


    How It Worked: Users noticed alerts, sought ECGs, and caught AFib early, dodging strokes. Fitbit later validated the tech (e.g., 2022 FDA clearance), but even crude early signals worked because they could spot patterns.
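The "crude early signal" idea can be illustrated with a toy irregularity check on inter-beat (RR) intervals, since irregular beats show high beat-to-beat variability. The function name and the 0.12 coefficient-of-variation threshold are assumptions for this sketch; real wearable algorithms are far more sophisticated and clinically validated, and a flag means "get a proper ECG", never a diagnosis.

```python
from statistics import mean, stdev

def irregular_rhythm_flag(rr_intervals_ms, cv_threshold=0.12):
    """Crude screen: flag a possibly irregular rhythm from RR intervals.

    The coefficient of variation (stdev / mean) of inter-beat intervals
    rises when beats are irregular. Threshold is an illustrative assumption.
    """
    if len(rr_intervals_ms) < 10:
        return False  # too little data to say anything useful
    cv = stdev(rr_intervals_ms) / mean(rr_intervals_ms)
    return cv > cv_threshold
```

Even a screen this simple, run continuously on millions of wrists, can surface cases that would otherwise go unnoticed, which is the Yhprum point.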


  6. Open-Source Prosthetics via 3D Printing


    Context: Groups like e-NABLE crowdsource 3D-printed prosthetic hands for kids in low-income regions.


    Why It Shouldn’t Work: No professional fitting, plastic durability issues, and reliance on hobbyists scream unreliability.


    How It Worked: Kids gripped toys and ate meals with $50 hands. Designs iterated fast—volunteers mailed parts, families adjusted fit, and function trumped polish.


  7. AI Chatbots for Mental Health (e.g., Woebot)


    Context: Apps like Woebot use AI to offer CBT-style support for anxiety and depression.


    Why It Shouldn’t Work: No human empathy, simplistic responses, and potential to miss crises (e.g., suicidal thoughts).


    How It Worked: Users stuck with it—studies showed mood lifts from daily check-ins. It’s not therapy, but its availability and basic logic worked for those with no other outlet.


  8. Crowdsourced COVID Data Dashboards


    Context: In 2020, volunteers (e.g., Johns Hopkins’ dashboard) scraped public data to track COVID cases globally.


    Why It Shouldn’t Work: Inconsistent sources, manual errors, and no official mandate risked junk stats.


    How It Worked: It became a go-to for policymakers and the public, guiding lockdowns and vaccine rollouts. The data wasn’t perfect, but it was timely and usable—working because it could.
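The forgiving spirit of such dashboards can be sketched as a merge that tolerates junk rows rather than crashing on them. This mirrors the idea, not the actual code, of trackers like the Johns Hopkins dashboard; the function name and the keep-the-highest-count rule are assumptions made for the sketch.

```python
def merge_case_counts(sources):
    """Merge per-region case counts from inconsistent volunteer sources.

    Each source is a list of (region, count) records that may contain junk
    (missing or non-numeric counts). Rows that cannot be parsed are skipped,
    and the highest count seen per region is kept, assuming undercounting
    is the common failure mode. Illustrative sketch only.
    """
    merged = {}
    for records in sources:
        for row in records:
            try:
                region, count = row
                count = int(count)
            except (TypeError, ValueError):
                continue  # junk row: skip rather than crash the pipeline
            if count >= merged.get(region, -1):
                merged[region] = count
    return merged
```

Timely-but-imperfect beats perfect-but-absent: the dashboard worked because its usable parts did.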


These examples show Yhprum’s Law in action: healthcare tech doesn’t need to be flawless to succeed. When the core components—be it a sensor, a network, or sheer human tenacity—can function, they often do, even under strain or in unlikely setups. It’s not about denying risks (a bad oximeter read or a flimsy valve could kill), but about how workable pieces defy the gloom and deliver anyway.

