Hi! I’m a new postdoc in the Lab for Data Intensive Biology at UC Davis. My job is focused on assessing the DIB-lab training workshops. These workshops target biology researchers, from grad students to faculty, with the goal of expanding their skill set and improving their ability to deal with All The Data, or at least to converse fluently and critically with the collaborators and consultants who aid them. The examples often focus on -omics/sequencing data, but many lessons apply to users of other large datasets. Titus Brown’s blog post brainstorming the development of this program is here.
In setting out to assess the workshops, I am broadly tasked with two categories of things: 1) finding out to what extent we’re actually helping anybody and 2) figuring out how we can be more efficient and effective towards that end. I know heaps about teaching and have done plenty of casual surveys, but formal assessment in a setting like this is new to me. Hence, the goal of this blog post (and likely others to follow): I need your help!
The goal: in-person interviews
Since the workshops and the researchers they target are a diverse set, defining “actually helping anybody” is an important and challenging place to begin. I’m planning to start down this road by conducting in-person interviews with workshop attendees.
The idea of talking to people in person, ideally in their home lab, is to open the door to information that I haven’t thought to ask about. The ability to respond intuitively and to ask follow-up questions is key. That being said, data are most useful when acquired systematically, so the nature of the questions is still important. The goal is to go in with the best survey structure I can design to anticipate responses, and couple that with latitude for adjustment on the fly.
Here are the general categories of things that I’ve thought it would be good to find out:
- who’s deciding to come to the workshops
- who else in their community has relevant needs but hasn’t chosen to attend (yet)
- what they think about the workshop(s) they’ve attended
- to what extent they are actively drawing upon what they’ve learned
- what they know about their remaining training needs, and
- (the tricky one): what they don’t know
Oh, and with all this comes a key limitation: I want people to talk to me, so I’d really like to be able to promise a 15-minute time limit, unless they really want to talk for longer.
The question: what are the best questions?
Here’s a list of possible questions. Given the time limit, this list is on the long side, so I’m interested to get thoughts on trimming as well as additions.
A. Focus on the researcher:
- “What department is your lab in? What graduate groups are represented?”
- “What is your role in the lab?” (undergrad/grad/postdoc/staff/faculty/other)
- “How frequently do you use a command-line interface to do work?” (Never -> always)
- “How frequently do you use Excel to work with your data?”
- (with a Likert-type scale: strongly disagree -> strongly agree, or something like that): “I feel that my current workflow is satisfactory with regard to …”
- File management (I know which files are where and can easily access or relocate them as needed.)
- Repeatability (I keep detailed records of how my data are processed and can quickly/easily reproduce my work or satisfy reviewer inquiries at a later time.)
- Data cleaning/correction
- Accessing sufficiently powerful computing resources
- I need more categories! Preferably low on jargon.
- “I feel confident in my ability to learn new skills to manage my data.”
- “The data-related tasks that I feel that I ‘waste’ the most time on are: _____”
- “What other kinds of workshops would you find it worthwhile to attend?”
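Several of the items above use a Likert-type scale, and once responses are collected they can be coded numerically so answers are comparable across attendees. Here is a minimal Python sketch of that coding step; the scale labels and the sample responses are purely illustrative, not actual survey data:

```python
# Illustrative sketch: tally Likert-scale responses and compute a mean score.
# The labels and example responses below are hypothetical.
from collections import Counter

LIKERT = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

def summarize(responses):
    """Return (counts per Likert level, mean score on a 1-5 scale)."""
    counts = Counter(responses)
    scores = [LIKERT.index(r) + 1 for r in responses]  # map labels to 1..5
    mean = sum(scores) / len(scores)
    return {label: counts.get(label, 0) for label in LIKERT}, mean

# Example: hypothetical responses to one agreement-scale item
tally, mean = summarize(["agree", "agree", "neutral", "strongly agree"])
```

Coding responses this way keeps the write-in elaborations separate from the numbers, which matters if the plan is to adjust questions on the fly during interviews.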
B. Satisfaction & self-evaluation:
- “The workshop improved my understanding of how to ___insert workshop topic here___.”
- “I have been able to apply things I learned in the workshop to my work routine.”
- “I expect to apply things I learned in the workshop to my work routine in the future.”
- “I learned…” (less than I expected/as much as/more than) “about ___topic___”
- “Have you attended any of the ‘Meet and Analyze Data’ (MAD, formerly Data Therapy) sessions?” (y/n)
- If so: was it helpful?
- If not: why not?
- In each case, I will ask for elaboration and clarification.
- I’m inclined to quiz them for proof of content learning, to get away from all the subjectivity here. However, that’s likely time-consuming and maybe off-putting in that ‘oral exam’ kind of way.
C. Focus on the community:
- “My supervisor/PI encouraged me to attend the workshop.” (strongly agree etc.)
- “I have shared information or skills that I learned with my supervisor/PI.”
- “I have shared information or skills that I learned with other people in my lab/community.” (should I separate lab and community?)
- “Have others in your lab/community attended DIB workshops?” (y/n)
- “Do you know anyone who might benefit from a DIB workshop but has not yet attended one (including yourself)? Please indicate any barriers to attendance that you’re aware of (check all that apply).”
- Time conflict (unable to attend)
- Lack of PI/supervisor support for time investment
- Not convinced that the workshop skills will be immediately useful
- Concerned that the expected entry skill level for the workshop is too high
- Concerned that the expected entry skill level for the workshop is too low
- Waiting to take the workshop at a more convenient time
- I’m concerned that this question might be too intrusive, but so far I seem to be the only one who thinks that. What do you think?
Other questions: organization and recruiting
Surveys issued outside of classes and workshops often suffer from low response rates. Even though meeting in person is a bigger commitment, I’m hoping that the opportunity to discuss needs in a way that could contribute towards having them met will be more appealing. I’ll also be able to offer a small reward for participation, most likely in the form of a coffee shop gift card. I’ve played with a possible intake form, but would love other ideas for creating the scheduling-path-of-least-resistance.
Other people have suggested either offering the option of an online version of the survey (in case the in-person meeting is not actually more appealing), or conducting group meetings “town hall” style. While these both seem like fine ideas, I’m not sure whether it’s possible to pursue these without losing an element of consistency in the data — asking questions in person may be different from asking them in writing, and asking questions of a group is definitely different from asking them in person. Still, it may be worth sacrificing some consistency to improve the representation of the sample, so all ideas are on the table for the time being.
Thanks for reading! 🙂
I look forward to hearing your thoughts!