Learning & History
Why did we develop StoryEngine?
Because we saw an opportunity to scratch a particular itch. We wanted something that would allow us to achieve three distinct goals through a single tool and process. Contemporary organizations need to do three things well:
Listen to the people they serve. Instead of making guesses or relying on faulty assumptions, organizations need systematic ways to actually listen to the people they are trying to help, on a regular and ongoing basis. (Feeds into: learning, design thinking, becoming more “customer-centric,” etc.)
Tell a clear story about impact. Demonstrate the human impact a given project or program is having out in the world. How is it helping real people? How is it changing lives? Beyond data or “just the numbers,” impact narratives from the field can put a human face on the work, in ways that make the organization’s mission or theory of change feel real. (Feeds into: internal and external communications, monitoring and evaluation, reporting, etc.)
Continually learn and improve. Organizations need to help their staff continually learn and develop. Developing a clearer understanding of who the work is for, and what those people actually want, is essential to this. When we bring staff closer to the experience of real stakeholders on the ground, it boosts their understanding, empathy, and ideas for improvement. (Feeds into: organizational learning, staff training, on-boarding, etc.)
We developed StoryEngine because we wanted one process that could do all three of those things well. Our goal was to help organizations increase their “bang for buck,” or return on investment, by designing a single process that could benefit multiple teams and functions across the organization. How does StoryEngine help?
Communications — Inspiring stories from the field make for great communications assets. They can feed into blog posts, press releases, social media, etc., and add emotional punch to storytelling by putting a human face on complex issues and campaigns.
Monitoring and evaluation — Qualitative stories and data can add richness and depth to quantitative metrics, telling a clearer story about the impact the organization is having in the world.
Organizational learning — Once we collect enough stories, we can begin to identify patterns and insights in what we’re hearing. And then: make program improvements, refine the program’s value proposition, etc.
Leadership — Quality impact narratives help leaders make decisions based on real feedback from the field, instead of hunches.
Human resources — HR staff can use inspiring stories from the field for more effective recruiting, onboarding and staff training.
Network-building — Deep listening helps build stronger relationships with the organization’s ecosystem. People like being asked what they think, and seeing the organization actually respond to what it hears.
We found traditional methods for doing these things — personal intuition (gut reactions), anecdotal evidence, focus groups, surveys, and tools like NPS (net promoter score) — lacking; they tend to miss the human element, generate less empathy and understanding, and lack the emotional power of real human stories.
Mozilla provided our test-bed. In mid-2016, the Mozilla Foundation was wrestling with questions like these. They had built a broad global network of partners, volunteers, and activists doing many different kinds of things, and they wanted to understand what motivated them, what challenges they were facing, and how the organization’s new strategy could help. They were also seeking ways to tell a better impact story about their work, in ways that numerical data alone could not.
What did we learn?
Buy-in from stakeholders is needed from the beginning — This helps develop a sense of ownership, responsibility, and value for the StoryEngine process. Involve stakeholders in question design and in decisions about the creation and use of assets and deliverables; this also helps ensure those assets and deliverables actually get used.
Organizations are sitting on a mountain of leads — Our first instinct was to design a process that leveraged staff to provide leads on who to interview (so: a staff-driven sampling strategy). We quickly learned that this frustrated staff, and we put too much energy into designing a workflow to collect these leads. The reality was that Mozilla already had rich documentation on leads — what it actually needed was a better way to leverage that existing documentation. Once the project tapped these existing leads, the StoryEngine process accelerated.
Follow-through by interviewees from start to finish — We’ve seen participants drop out of the process at every stage. The most costly point for a participant to drop out is after the interview is complete, transcribed, and edited. We recommend using language in both emails and consent forms that reminds them of the commitment they are making and of the time and resources involved — from the recording of their interview to the posting of their final story online.
Consent should be gathered in two steps — We learned that a single consent form led participants to sign off before they had actually approved their story, risking publication of sensitive or unapproved material. We now use two forms: an interview release and a consent to publish, the latter sent only after participants confirm they have finished editing and approve their story. (See “Evolution of privacy + consent” below for details.)
How has StoryEngine evolved?
Evolution of story leads
“File a Story Tip” — During the first StoryEngine design sprint, we created a Google doc — sent around to select MoFo staff and accessible via the StoryEngine website — asking them to nominate folks for us to interview. This remained open for several months and resulted in about 30 tips and 10 completed interviews. We also reached out to staff from different programs to ask for nominations. Once the project’s focus became more defined, we removed the “File a Story Tip” option.
Leveraging existing data — We realized it was a fool's errand to create yet another organizational process to generate leads when most organizational initiatives were already producing well-documented, important leads, so we tapped those (examples: network survey data, MozFest session leads in GitHub). Note that, at the time, StoryEngine findings were intended to feed into the development of a value proposition for the emerging “Mozilla Network,” so we focused on people who had already expressed opinions in the more open-ended network survey questions. We coupled this with continued outreach to program staff and to MozFest session leads — aiming for good representation across different programs, geographies, genders, and backgrounds.
Responding to staff requests: Network 50, Fellows, Online Harassment — As StoryEngine gained traction within Mozilla, we started being asked to interview specific participants, such as the MLN 50 awardees, as well as specific groups, such as fellows, host organizations (Open Web, Open Science), and more technical folks. We were also asked to reach out to people who might have insights into the issue of online harassment (examples: Emily May, Hera Hussein). The selection criteria are expected to evolve in 2018, in response to new organizational directions.
Evolution of privacy + consent
Consent forms — As StoryEngine evolved, we realized we needed to improve our consent forms and process to ensure the privacy and security of participants and the org. The project started by sending one consent form, via Google Forms, to participants for signature — we noticed they’d sign the form before actually approving their story. While this isn’t always a problem, there was the potential to approve and publish a story before sensitive or unapproved information was removed. So we decided to use two separate consent forms: the interview release and the consent to publish. The consent to publish is not sent to participants until they informally confirm (by email) that they are finished editing their story and approve it.
Photos — After speaking to legal, we realized participants may not have permission to use photos of themselves that were taken by someone else, so we tightened the photo release language in the consent to publish form to ensure they have explicit permission to use the photos they provide.
Google forms to HelloSign — Because we deal with sensitive information, we decided to use a more secure way of sending, receiving, and storing consent forms online. We researched affordable services and decided on HelloSign (we also had Mozilla review HelloSign, and they approved it). We recognize other e-signature services are available and may already be in use at your organization. We recommend choosing a service with the following criteria in mind: privacy + security, storage capabilities, and pre-existing services.
Google docs — We find Google Drive, Google Docs, and Google Sheets extremely useful for the StoryEngine process. They allow us to collaborate on different pieces of the project with specific people. However, we learned that link sharing was too easily turned on accidentally, and permissions were too easily changed by other editors. To prevent this from happening and to keep docs more secure, we suggest the following:
All people with access to StoryEngine documents should enable two-factor authentication on the Google accounts they use to access them. This helps minimize unauthorized access to those accounts and to the Google docs associated with them.
The owner of StoryEngine Google docs should monitor permissions closely. At the creation of any new Google doc, be sure to keep link sharing off, prevent other editors from changing access or adding new people, and disable the options to download, print, and copy for commenters and viewers. (Detailed instructions can be found in the Process Narratives chapter; a rough permissions-audit sketch follows this list.)
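For teams that want to spot-check these settings programmatically rather than by hand, here is a minimal sketch. It assumes the google-api-python-client library and existing Google Drive API credentials; the function name, folder ID argument, and the choice to flag any "anyone" (link-sharing) permission are our own illustration, not part of the StoryEngine toolkit.

```python
from googleapiclient.discovery import build  # assumes google-api-python-client is installed


def audit_drive_sharing(creds, folder_id):
    """Flag files in a Drive folder that are shared with 'anyone', i.e. link sharing is on.

    `creds` are Google API credentials for an account that can read the files;
    `folder_id` is the Drive folder holding StoryEngine documents (illustrative).
    """
    service = build("drive", "v3", credentials=creds)
    response = service.files().list(
        q=f"'{folder_id}' in parents and trashed = false",
        fields="files(id, name, permissions)",
    ).execute()

    flagged = []
    for f in response.get("files", []):
        for perm in f.get("permissions", []):
            # A permission of type 'anyone' means anyone with the link (or the public) can open the file.
            if perm.get("type") == "anyone":
                flagged.append((f["name"], perm.get("role")))
    return flagged
```

Anything the function returns is a candidate for having link sharing switched back off.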
Important things to remember
No unauthorized access to audio/video files or transcripts — Only immediate, approved staff should have access to these documents at any given time. The only document to be shared is the published interview.
NDA your team and transcribers — Be sure any and all staff who have access to audio/video files and/or transcripts have signed a non-disclosure agreement.
Google doc permissions should be as secure as possible — Limit access and monitor permissions closely. Double check.
Consent should be obtained at two points — (1) prior to interviewing, and (2) prior to publishing. We want to make sure everyone, regardless of role or ability, has the opportunity to consider what they are sharing with the public. They can edit their interview for clarity, avoid security risks, and improve the overall message they are trying to convey.
Evolution of transcription services
From CastingWords (yuck!) to a team of transcriptionists — We started with CastingWords because it had one of the lowest rates, but found that the quality of transcription wasn’t meeting our needs. Further research showed that low-cost transcription services do not pay fair wages to their transcriptionists, often contract inexperienced workers, and have high turnover — all of which degrade transcription quality. We also found that the total cost of transcription ended up being higher, because of the substantial additional cost of editing out mistakes. The Transcription Essentials Forum was a useful place to learn more about transcription services and fair wage rates, and to hire our own team of transcriptionists — simply by posting an ad on its job board.
Using Trint for high-quality, clear audio with native English speakers — As the project grew, however, we found ourselves needing to cut the cost of transcription. We found a service called Trint that uses natural language processing to transcribe audio and video files to text. It costs significantly less, and has features we found useful in editing the transcript — after it transcribes the file, you can use the program to directly edit and listen to selected portions of the text. This service cannot fully replace a transcription team, however, as not all audio files are clear enough for the program to transcribe well, and many of our respondents speak English as a second language. (Note that transcriptionists do charge more for low-quality or difficult-to-hear audio files; see below.)
Recommended transcriptionist pay scale — Based on our research on the Transcription Essentials Forum. Add $1.00 per audio minute to the prices below for poor or difficult audio. (A small cost-estimator sketch follows this list.)
- $1.25 per audio minute for a 20-calendar-day turnaround
- $1.75 per audio minute for a 5-calendar-day turnaround
- $2.75 per audio minute for a 24-hour turnaround
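As a worked example of how these rates combine: a 60-minute interview with difficult audio on the 5-day turnaround would cost 60 × ($1.75 + $1.00) = $165. Here is a minimal Python sketch of that arithmetic; the rate table and function name are our own illustration, not a StoryEngine tool.

```python
# Rates in USD per audio minute, keyed by turnaround time (from the pay scale above).
RATES = {"20-day": 1.25, "5-day": 1.75, "24-hour": 2.75}
DIFFICULT_AUDIO_SURCHARGE = 1.00  # added per audio minute for poor or hard-to-hear audio


def estimate_transcription_cost(audio_minutes, turnaround="20-day", difficult_audio=False):
    """Estimate the transcription cost of one interview using the pay scale above."""
    rate = RATES[turnaround] + (DIFFICULT_AUDIO_SURCHARGE if difficult_audio else 0)
    return round(audio_minutes * rate, 2)


# Example: a 60-minute interview with difficult audio on a 5-day turnaround.
print(estimate_transcription_cost(60, turnaround="5-day", difficult_audio=True))  # 165.0
```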
Evolution of analysis
Limitations to analysis when an organization shifts or pivots
The StoryEngine dataset is limited by the question set, the choices made at the time of design, and the direction given around whom to interview. Analysis of the corpus (the set of interviews) can only reflect the questions asked and the people interviewed.
The first round of interviews done for Mozilla aimed to surface general work, successes, challenges, Mozilla pathways/experience, perceptions around internet health and working open, and emerging needs. Interviewees represented a slice of the Mozilla universe, with a strong focus on the Mozilla Network 50 and fellowships (Open Science, Open Web). As an organization’s goals and needs shift, so should the questions asked and the people interviewed.
The current question set was designed to surface the following:
- Internet Health Issues — In order to serve Mozilla staff quickly, we first review the texts for content that illuminates current internet health issues (Decentralization, Digital Inclusion, Online Privacy & Security, Web Literacy), with the aim of collecting examples and quotes. View Mozilla’s Internet Health Report »
- Impact — We also look for reports of impact: What has changed for this person or their organization? How? This coding is iterative and will evolve as work progresses.
- Artifacts — Tools, approaches, and methods — generally the “things” network leaders make or use — are coded so that they can be flagged as potentially useful to others.
From RQDA to Hypothes.is, and beyond
The first round of analysis was done using RQDA — an open source qualitative data analysis program. While we appreciated the openness of RQDA, we found ourselves looking for tools that were easier to use for non-techies — and more collaborative. We developed the following criteria to assess tools:
- Ease of use — The ability to use program without significant training.
- Affordability — The ability for most organizations to afford the tool.
- Online collaboration — The ability for teams to collaborate in real-time.
- Open-on-the-web QDA — The ability for respondents to see how their words are being interpreted, and to offer feedback and additional thoughts.
- Sorting information — The ability to organize information in a wide variety of ways to better understand the data.
We tested Hypothes.is — an open and free web annotation tool. While the tool is not designed for qualitative data analysis, we liked the ability to highlight, tag, and discuss passages of text online. Hypothes.is is currently installed on StoryEngine.io to enable this. We also spoke with Hypothes.is leadership to let them know what we wanted to do and to ask about emerging features. At this point it's not robust enough for rigorous data analysis, although it is possible to create a controlled set of tags and export annotations and tags into a spreadsheet so they can be manipulated. We also learned about the practice of "Annotatathons" — view notes with examples — a promising approach to engagement and collaborative sensemaking.
While we do not recommend Hypothes.is as a complete, packaged analysis tool, we note its usefulness for top-down tagging of text using a closed set of tags. This would be especially useful for creating a quick-and-dirty quote database.
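If you want to try the spreadsheet route, annotations and their tags can be pulled from the Hypothes.is search API and written out to CSV. The sketch below is a rough, hypothetical example using Python's `requests` library; the story URL, tag, and output filename are placeholders, and an API token is only needed for private groups.

```python
import csv
import requests

API_URL = "https://api.hypothes.is/api/search"


def export_annotations(uri, tag, out_path="annotations.csv", api_token=None):
    """Fetch Hypothes.is annotations on a page, filtered by tag, and write them to CSV."""
    headers = {"Authorization": f"Bearer {api_token}"} if api_token else {}
    params = {"uri": uri, "tag": tag, "limit": 200}
    rows = requests.get(API_URL, params=params, headers=headers).json().get("rows", [])

    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["quote", "comment", "tags", "link"])
        for ann in rows:
            # The highlighted passage lives in the annotation's target selectors.
            quote = ""
            for target in ann.get("target", []):
                for selector in target.get("selector", []):
                    if selector.get("type") == "TextQuoteSelector":
                        quote = selector.get("exact", "")
            writer.writerow([
                quote,
                ann.get("text", ""),
                "; ".join(ann.get("tags", [])),
                ann.get("links", {}).get("html", ""),
            ])


# Example (hypothetical story URL and tag):
# export_annotations("https://storyengine.io/stories/example", tag="impact")
```

The resulting spreadsheet is essentially the quick-and-dirty quote database described above: one row per annotation, grouped by tag.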
Hypothes.is challenges to date — Here are the areas where we'd like to see more development happen.
- Add and filter by attributes (metadata / "descriptors")
- See all annotations on the search page without clicking each one to expand
- The ability to sort information in a variety of ways
We'd love to see some resources put towards advancing the features that would make Hypothes.is a viable open QDA tool!
See the Analyze chapter for more details and recommendations.
Where is StoryEngine now?
StoryEngine was developed by Loup in collaboration with the Mozilla Foundation. The pilot phase is now complete, as is the initial documentation for others to use and adapt. Questions and contributions can be made through GitHub. Loup will continue to steward StoryEngine’s development, and is currently using it with other organizations.