Showing posts with label tech. Show all posts

Computer simulations vs SPs

December 15, 2015


Choose your own adventure!
[La Vérendrye via wikimedia]

I don't know whether to be excited about or skeptical of SIMmersion. A little from Column A, a little from Column B, I guess.

I love technology. I love giving students more chances to practice difficult conversations. As an additional tool in the toolbox, this looks stellar. I can totally see learners using this to practice before an SP encounter.

But then I read things in their press release like:
"A computer screen might not be better for teaching the physical examination of a human, but interacting with a well-designed system is better for teaching students how to talk with a patient [than interacting with SPs]."
If they truly feel this way, and if this is the way they are marketing the product, then the whole foundation is suspect.

I actually went through their sample Motivational Interviewing scenario. Engaging with a computer is fun, and including the MI curriculum as a preview before and as a guide during the encounter is very effective.

But the timing and emotional range are all wrong. Learners, especially beginning ones, struggle with a number of things that can adversely affect patient interactions, like word choices and nonverbal cues. This is especially true in the context of a fast-paced encounter. It's much easier to pick the right statement when you have a limited number of options and as much time as you want to think about it.

Also, there is no verbal feedback in SIMmersion. I strongly believe that immediate feedback and debriefing with an experienced facilitator, built around student self-reflection, is an incredibly effective component of learning. Instead, SIMmersion features a woman in the bottom corner of the screen who responds with appropriate body language based on your response and offers suggestions for the most effective thing to say next.

I see the usefulness of SIMmersion primarily as an early part of skill acquisition. In my ideal curriculum, learners would develop a new skill like this: beginning with a lecture/introduction, then independent reading and videos, then observation, then SIMmersion, then group work, then SP work with timeouts and a facilitator, then a solo SP encounter.

However, this whole thing makes me think we're not far off from The Diamond Age's prediction of "ractors," who are essentially crowd-sourced, on-demand scenario actors able to perform anywhere. Wouldn't that be fun?

There's an app for that

August 4, 2015

A patient completing a student feedback survey.
[Portrait of Nicholas Thérèse Benôit Frochot via wikimedia]

Medical students at the University of Pittsburgh are developing a patient feedback app: "The app allows both patients and students to rate how they think an appointment went. Patients also can give feedback on how the student performed."

Honest feedback from patients is a noble goal. I would love to contribute to a system where patients felt they could offer honest feedback and know they had been heard.

Some issues I see:

  • Data without plans for follow-up, development, training, and/or mentorship is useless. Don't bother collecting data until you have plans to do something with it. (I feel this way about SP checklists, too.)
  • It's very difficult for a patient, who has a huge emotional investment in the experience and the outcome, to step back and offer kind, trustworthy, respectful feedback to learners. Even SPs often have trouble doing this, and we're only pretending to have the experience!
  • Patients can be expected to ignore the parts of the feedback they don't care much about to focus on the thing that really bothered them. That makes the data less useful. (This is true for SPs, too.)
  • When is the survey administered? Feedback should happen as soon after an encounter as possible, so that both parties remember the details. If the patient is asked to complete the survey at home, after the encounter and after seeing several other medical professionals, the feedback will be less reliable.
  • If the app is something patients are expected to download and use on their own phones, that will further reduce the usefulness of the survey. Plus, a smartphone app only reaches those who can afford smartphones. I hope the system can be adapted so those who don't use smartphones can still have a say.

Also: "Students already get feedback from what are called standardized patients — actors who are assigned a specific situation and medical illness. But according to Patel, that feedback is mostly objective: Did they wash their hands and avoid medical jargon? Students are often left with a lot of unanswered questions."

That may be true at the University of Pittsburgh, but it's not true everywhere. In fact, I would say that limiting SPs to objective feedback limits their full potential. However, subjective feedback must be delivered skillfully to be effective. To do it well, SPs must be trained to articulate their experiences in ways patients cannot (due to things like the power differential, as well as a general lack of constructive feedback training or emotional analysis).

Also, more SP encounters could help. Many schools only offer end-of-year testing. In high-stakes exams, most students are focused on the outcomes, not the feedback. The advantage of the app is that students would ideally be receiving a consistent stream of feedback throughout their clinical experiences, which gives them more opportunities to notice patterns and make changes. Imagine what could happen if students saw more SPs over the course of a year!

Using Google Glass

May 19, 2015

Earliest known depiction of a student using Google Glass.
[The "Glasses Apostle" via wikimedia]

I was going to scoff when I ran across this report preview:

Recording Medical Students’ Encounters with Standardized Patients Using Google Glass: Providing End-of-Life Clinical Education

Until I read "traditional wall-mounted cameras...provide a limited view of key nonverbal communication behaviors during clinical encounters."

Ah! Yes! That is totally true. When I review video encounters, without a good look at the student's face, grading things like eye contact & sincerity becomes much more difficult.

"Next steps include a larger, more rigorous comparison of Google Glass versus traditional videos and expanded use of this technology in other aspects of the clinical skills training program."

Indeed. I am thinking of the cost-benefit ratio, though. The results have higher fidelity, but do they justify the cost and cognitive dissonance during their use? I guess that depends on what the program uses the resulting videos for. Data without analysis is a waste of resources.

Bonus points (added August 2015)
  • I've now been at an event that included these glasses! I don't know what happens with the video, but the glasses just looked like safety goggles, the kind you might wear to protect your eyes from bodily fluids. In the context of this particular event, it wasn't that incongruous, though it probably would have been in a traditional patient room.
  • I've also been at events that use GoPro cameras attached to the learner, which also seems like an interesting strategy.