Scientists need to become better communicators, but it’s hard to measure whether training works

Science is essential to solving many of society’s biggest problems, but it doesn’t always find a receptive audience. Today, when curbing COVID-19 requires hundreds of millions of Americans to get vaccinated, it’s more urgent than ever for scientists to be able to communicate effectively with the public.

The challenge was clear long before the pandemic. In the 1990s, after fossil fuel corporations and conservative politicians rejected evidence that the globe was warming at an alarming rate, scientists began to realize they needed to do better at explaining their findings. In response, a range of programs sprang up that were designed to teach everyone, from veteran scientists to young graduate students, how to better communicate their often arcane and confusing research.

Today there’s an expanding number of science communication training programs that last anywhere from a few hours to several months. Techniques range from storytelling and improvisation to coaching through simulated interviews with journalists and public relations specialists. Yet voices opposed to mainstream scientific views remain a powerful force in the U.S.

We have taught science communication courses for more than a decade at the University of Connecticut. Margaret Rubega talks regularly to the press as the Connecticut state ornithologist and has won a universitywide teaching award. Robert Capers is a Pulitzer Prize-winning former journalist and botanist. Robert Wyss is a journalist who reported on environmental issues for decades and authored a book on environmental journalism.

All of us wanted to know more about what really helps scientists talk to the public. What we found in a recent study funded by the National Science Foundation surprised us, and convinced us that it’s time to rethink how we assess whether science communication training works.

Science communication methods and voices are evolving quickly in today’s complex media environment.

Practice makes … not much difference

Our investigation began by recruiting graduate students in science, technology, engineering and math (STEM) fields to semesterlong science communication courses that featured lectures, discussion, exercises and mock journalism interviews. Every student participated in repeated interviews that we video-recorded and then reviewed in class. We wanted to see how well they could talk clearly and engagingly about their own research.

At the end of the semester our written surveys drew strong praise from the students. “The interviews forced us to put ourselves out there,” said one student, “to make mistakes, analyze them and then reflect on how to improve in the future.”

Such comments were not surprising. Most science communication training programs query participants and get positive responses. But more probing research has shown that students consistently overestimate how well they perform.

Our research was designed to go further. Over three years we video-recorded students explaining a scientific concept at the beginning of the course and then again at the end. Then we showed these videos, along with videos made by a control group of students who did not receive science communication training, to hundreds of undergraduate students.

We asked the undergraduates to rate the students they saw in the videos on various communication skills. The results showed that students who had taken the training courses did no better at communicating with the undergrads than students who had received no training.

Furthermore, the trained students received only slightly higher scores after taking the course than they did at the beginning. And the untrained students in our control group showed an equally minimal improvement in scores.

In sum, students who took our communication training class received lots of instruction, active practice and direct analysis of what to do differently. However, the undergraduates who did the ratings did not appear to perceive any difference between students who took the training course and others who did not.

Looking for a jump-start

We were surprised by these findings. Were we the worst science communication teachers working?

Perhaps, but that would be surprising too, given the varied experiences we brought to this effort. An educational consultant oversaw our curriculum, and our research team included communications specialist Anne Oeldorf-Hirsch; postdoctoral researcher Kevin Burgio; and statistician A. Andrew MacDonald at the University of Montreal.

Our biggest question was what we could conclude from this study about the range of training approaches in science communication. If a 15-week, three-credit course doesn’t change communication behavior much, how much can scientists expect to gain from shorter trainings, such as the one-off sessions frequently offered at conferences?

We don’t believe our results show that science communication training is worthless. Students unquestionably leave our courses much more aware of the pitfalls of using jargon, speaking in complex sentences and dwelling on caveats rather than the bottom line. It just appears that this awareness doesn’t translate into enough of a change in their actual behavior to affect how audiences score them.

UC Davis plant biology graduate student Katie Murphy, winner of the University of California’s 2019 Grad Slam scholarly communication contest, gives a three-minute overview of her research.

We suspect that what students need is much, much more active practice than even a full-semester course gives them. As writer Malcolm Gladwell famously popularized, becoming skilled at complex tasks can require on the order of 10,000 hours of practice.

The big challenge in assessing different kinds of science communication training is tracking how skills improve over the long term. Perhaps more importantly, we’d like to know whether there’s any way to help scientists improve more quickly.

The National Science Foundation currently requires every scientist who receives one of its grants to explain how the research will affect the public, including plans for communicating the results. Perhaps the NSF and other funders of science communication training should require rigorous assessments of the training they are paying for.

At the very least, we hope our research generates discussion among scientists, journalists and those interested in public science literacy. Two European scholars recently issued a similar call for more rigorous research on what actually works in science communication, and for a serious dialogue about how to use that evidence to improve the practice of communication.

Clearly, organizations that train scientists have to do more than just ask participants in a class whether they learned anything. Our study showed that there’s a need for rigorous methods to assess communication training programs. Without them, trainers can’t tell whether they are just wasting their time.
