By PHUOC LE, MD and SAM APTEKAR
“Kijan ou ye? How are you?” I asked my patient, a fifty-five-year-old Haitian-American woman living in Dorchester, Massachusetts. It was 2008. I had been her primary care doctor for two years and was working with her to reduce her blood pressure and cholesterol levels. “Pa pi mal, dok – I’m doing OK, doc.” We talked for 15 minutes, reviewed her vital signs and medications, and made a plan. I then electronically transmitted a new prescription to her pharmacy. The encounter was like thousands of others I’d had as a physician, except for one key difference – I was in Rwanda, 7,000 miles away from Dorchester and six hours ahead of the East Coast time zone.
At the time, I knew that telemedicine – the practice of providing healthcare without the provider being physically present with the patient – was a practical means of reaching rural populations with limited access to healthcare. However, I had no idea that just ten years down the road, many health professionals and policymakers would laud the emerging tech field as the answer to inaccessible healthcare for rural communities. While I’m aware of telemedicine’s promising benefits, I’m certain that it cannot, on its own, solve the most pressing issues that continue to afflict the rural poor and underserved.
Ever since the invention of the telephone, providers have been practicing telemedicine. But only with the advent of advanced technologies such as high-speed internet, smartphones, and remote-controlled robotic surgery has the field begun to raise the question: “Do we still need in-person interactions between patient and doctor to provide high-quality healthcare?” This question is particularly important for patients who live in rural areas, where a chronic shortage of providers has persisted for decades.
In a recent article by Forbes magazine, “Telemedicine: The Latest Futuristic Tech