How voice has evolved in healthcare: The rise of technology platforms
The healthcare industry is abuzz over consumer engagement and empowerment, spurred by a strong belief that when patients become more engaged in their own care, better outcomes and reduced costs will result.
Nevertheless, from the perspective of many patients, navigating the healthcare ecosystem is anything but easy.
Consider the familiar use case of booking a doctor’s appointment. The vast majority of appointments are still scheduled by phone. Booking an appointment takes ten minutes on average, and the patient can spend nearly half of that time on hold.
These are the kinds of inefficiencies that compound one another across the healthcare system, resulting in discouraged patients who aren’t optimally engaged with their care. For example, the system’s outdated infrastructure and engagement mechanisms also contribute to last-minute cancellations and appointment no-shows—challenges to operational efficiency that cost U.S. providers alone as much as $150 billion annually.
Similarly, long waits for appointments and the convoluted process of finding a doctor are among the biggest aggravations for U.S. patients seeking care. A recent report by healthcare consulting firm Merritt Hawkins found that appointment wait times in large U.S. cities have increased 30 percent since 2014.
It’s time for this to change. Many healthcare providers are beginning to modernize, but moving from phone systems to online scheduling, though important, is only the tip of the iceberg. Thanks to new platforms and improved approaches to electronic medical record (EMR) integration, the potential for rapid transformation has arguably never been greater.
This transformation will take many shapes—but one particularly excites me: voice. While scheduling and keeping a doctor’s appointment might be challenging today, it’s not far-fetched to envision a near future in which finding a doctor is as simple as telling your favorite voice-controlled digital assistant, “Find me a dermatologist within 15 miles of my office who has morning availability in the next two weeks and schedule me an appointment.”
Voice technologies have been generating excitement in the healthcare space for years. Because doctors can speak more quickly than they can type or write, for example, the industry has been tantalized by the promise of natural language processing services that convert doctors’ spoken notes into electronic text.
No single company or healthcare provider holds all the keys to this revolution. Rather, it hinges on a variety of players leveraging technology platforms to create ecosystems of patient care. These ecosystems are possible because, in contrast to even a few years ago, it’s far more feasible to make software interoperate—and thus to combine it into richer services.
For example, developers can leverage application programming interfaces (APIs) that provide access to natural language processing, image analysis, and other services, enabling them to build these capabilities into their apps without creating the underlying machine learning infrastructure themselves.
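As a concrete illustration, the sketch below uses Google Cloud’s Speech-to-Text Python client to transcribe a dictated clinical note. It assumes the google-cloud-speech package is installed and application credentials are configured, and the storage path is a placeholder; any comparable managed speech or NLP API would follow a similar pattern, with a few lines of client code standing in for a custom machine learning stack.

```python
# A minimal sketch: transcribing a dictated note with a managed speech API
# rather than building and hosting a speech model in-house.
# Assumes `pip install google-cloud-speech` and configured application
# credentials; the Cloud Storage URI below is a placeholder.
from google.cloud import speech

client = speech.SpeechClient()

audio = speech.RecognitionAudio(uri="gs://example-bucket/dictated-note.wav")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# Synchronous recognition works for short clips; longer dictations would use
# the asynchronous, long-running variant instead.
response = client.recognize(config=config, audio=audio)

for result in response.results:
    print(result.alternatives[0].transcript)
```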
These apps can also leverage other APIs to connect disparate systems, data, and applications: anything from a simple microservice that surfaces inventory for medical supplies to Fast Healthcare Interoperability Resources (FHIR)-compliant APIs that expose patient data in new, more useful contexts. One of the biggest obstacles may be connecting these modern interfaces to EMR systems, which generally do not support modern interoperability well. Well over a quarter-million health apps exist, but only a fraction of them can connect to provider data. If voice-enabled health apps follow the same course, flooding the market without an approach to EMR interoperability, their potential to improve care could be undermined.
Fortunately, as more providers both move from inflexible, aging approaches such as service-oriented architecture (SOA) to modern API-first development and adopt the FHIR standard, these obstacles should diminish. FHIR APIs allow providers to focus on predictable programming interfaces instead of underlying systems complexity, empowering them to replace many strained doctor-patient interactions with new paradigms.
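To give a sense of that predictability, here is a rough sketch of a standard FHIR R4 search for open appointment slots. The base URL, access token, and practitioner ID are hypothetical placeholders; the point is that the same Slot query works against any conformant FHIR server, regardless of which EMR sits behind it.

```python
# A rough sketch of a standard FHIR R4 search for free appointment slots.
# The base URL, token, and practitioner ID are hypothetical placeholders;
# a real integration would obtain the token via SMART on FHIR / OAuth 2.0.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/r4"
HEADERS = {
    "Authorization": "Bearer <access-token>",
    "Accept": "application/fhir+json",
}

# Chained search: free Slots whose Schedule belongs to a given practitioner,
# limited to a two-week window.
params = {
    "status": "free",
    "schedule.actor": "Practitioner/derm-123",
    "start": ["ge2019-06-01", "lt2019-06-15"],
}

resp = requests.get(f"{FHIR_BASE}/Slot", headers=HEADERS, params=params)
resp.raise_for_status()

# The response is a FHIR Bundle; each entry is a Slot resource.
for entry in resp.json().get("entry", []):
    slot = entry["resource"]
    print(slot["start"], "to", slot["end"])
```

From there, a scheduling app would typically create an Appointment resource referencing the chosen slot to complete the booking.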
As it becomes simpler for developers to work with EMR systems alongside voice interfaces and other modern platforms, the breadth and depth of new healthcare services could dramatically increase. Because developers can work with widely adopted voice assistants such as Google Assistant, Apple’s Siri, and Amazon’s Alexa, these new services won’t need to be confined to standalone apps. Instead, they can seamlessly integrate care and healthier activity into a user’s day-to-day routines.
Many of us already talk to our devices when we want information on things like traffic conditions, movie times, and weather forecasts. Likewise, many of us are already accustomed to taking advice from our digital assistants, such as when they point out conflicts on our calendars or advise us to leave in order to make it to a meeting on time. It’s natural that these interfaces will expand to include new approaches to care: encouraging patients to exercise, reminding them to take medications, accelerating diagnoses by making medical records more digestible and complete, facilitating easier scheduling, and more.
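What ties these interactions together is usually a small fulfillment webhook that a voice platform calls once it has recognized an intent. The sketch below uses Flask and a deliberately simplified request shape; real platforms such as Dialogflow or the Alexa Skills Kit each define their own request and response schemas, so the field names here are illustrative assumptions rather than any platform’s actual format.

```python
# A simplified sketch of a voice-assistant fulfillment webhook using Flask.
# The request/response fields are illustrative assumptions, not the schema
# of any particular platform (Dialogflow, Alexa Skills Kit, etc. all differ).
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/voice-webhook", methods=["POST"])
def handle_intent():
    payload = request.get_json(force=True)
    intent = payload.get("intent")
    params = payload.get("parameters", {})

    if intent == "ScheduleAppointment":
        specialty = params.get("specialty", "a doctor")
        # A real handler would run a FHIR Slot search (as sketched earlier)
        # before proposing times.
        reply = (f"I found a {specialty} with morning availability next week. "
                 "Should I book Tuesday at 9 AM?")
    elif intent == "MedicationReminder":
        reply = "It's 8 PM. Time to take your evening medication."
    else:
        reply = "Sorry, I can't help with that yet."

    return jsonify({"speech": reply})

if __name__ == "__main__":
    app.run(port=8080)
```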
Indeed, research firm Gartner’s recent “Top 10 Strategic Technology Trends for 2018” speaks to the potential of voice and other conversational interaction models: “These platforms will continue to evolve to even more complex actions, such as collecting oral testimony from crime witnesses and acting on that information by creating a sketch of the suspect’s head based on the testimony.”
As voice and other interfaces continue to evolve from scripted answers to more sophisticated understandings of user intent and more extemporaneous, context-aware ways of providing service, the nature of daily routines will change. For example, whereas many patients today feel anxiety over finding the time and focus to pursue better care, in the near future, this stress will likely diminish as more healthcare capabilities are built into platforms and interaction models consumers already use.
What comes next?
It’s clear that providers feel the urgency to improve patient engagement and operational efficiency. Consulting firm Accenture, for example, predicts that by the end of 2019, two-thirds of U.S. health systems will offer self-service digital scheduling, producing $3.2 billion in value. That’s a start, but there’s much more to do.
More capabilities will need to be developed and made available via productized APIs, platforms will need to continue to grow and evolve, and providers must adopt operational approaches that allow them to innovate at a breakneck pace while still complying with safety and privacy regulations.
But even though work remains, voice platforms and new approaches to IT architecture are already changing how patients and doctors interact. As more interoperability challenges are overcome, the opportunities for voice to be a meaningful healthcare interface are remarkable.
For the biggest changes, the question likely isn’t if they will happen but how quickly.
Aashima Gupta is the global head of healthcare solutions for Google Cloud Platform.