5 lessons we learnt by screwing up a developmental psychology study

“Oh, don’t worry, this will be a really quick and easy study”

This is what we told each other before rushing to start data collection on over 100 kids across various nurseries. We had one month to collect the data. Fast-forward three years to when we finally got around to data analysis: we were horrified. We had made so many mistakes that, despite countless tears, sleepless nights, and hours spent staring at each other in dread during damage-control meetings, we reached the grim decision to throw away over 50 hours of data.

Everyone we told this story to said the same thing: “Learn from your mistakes”. Whilst some of our mistakes were incredibly stupid, in the frenzy and excitement of coming up with a research question and starting data collection, they are alarmingly easy to make. Just like climbers who get summit fever and refuse to turn back even when the storm is approaching and their turn-around time has long expired, once you are deep into data collection, the snow-blindness creeps in, and you find yourself pushing on, even when you know it will end in disaster.

We hope that you will learn from our mistakes and avoid screwing up your own study. To this end, we have also created a detailed checklist for everything you need to think about before, during and after your study: https://osf.io/p9xma/?view_only=d014820a0d0a4560a5b60a7ad71e895b.

1. Every step will take longer than you expect. And probably even longer than that

We all think we know this point, but somehow, we don’t. We thought a month would be sufficient to pilot test our study and collect the 100 data points we needed. The logistics were perfect: we churned out child after child like a well-oiled machine, and congratulated ourselves at the end of each day for a job well done. Except that, once we watched the videos, we realized the consistency between sessions was appalling. We had sacrificed quality for quantity, which in the end left us with neither.

This point will come up repeatedly in this article, but it is crucial. Allow extra time for the pilot stage and do not rush it. Most of the mistakes we made could have been avoided if we had spent more time at this step. In the pilot phase you can perfect your set-up and procedure, and make sure that you can repeat them consistently child after child. Even when little Jane’s snotty nose is no longer endearing, and you would rather rip your ears off than hear whether she likes Olaf or Sven more.

2. Your testing materials and your method will, inevitably, fail

Again, this is something you will encounter during your pilot phase, which is why it is so important to spend extra time making sure that your materials are kid-proof (and friendly, of course). Children are unpredictable and will find a way to break your materials or to get around them in a way you didn’t expect (or put them in their mouth). Finding this out at the pilot stage will allow you to refine the apparatus and method to avoid being caught out during testing.

Another thing we wish we had done was to bring extra materials with us every day. Several times we had to cut our day short after a kid thought it would be hilarious to throw the apparatus on the floor and jump on it repeatedly whilst we gaped at them in horror. Having extras means that you don’t need to rush back home and stick everything back together with child-friendly glue and tears.

Lastly, make a list of everything you need, laminate it, and buy erasable whiteboard markers. That way each day before you leave home you can tick everything off and make sure that you don’t have to run back to grab those elusive coding sheets.

3. The unexpected will happen, even if you expect it

This is something you can plan for, but it will be different to what you imagined. There are a couple of ways to counteract (or at least help in not succumbing to) the waves of ice-cold panic that emerge when you encounter the unexpected. The first is to test with someone else, whenever possible. This could be a research assistant, a co-PI, or a student helper. Even if they are not an expert in your study, having someone to brainstorm with is a lifesaver. You will make more serious mistakes if you are rushing to find a solution on your own, whilst trying to hide the cold sweat dripping down your forehead from concerned parents wondering who they handed their child to.

Although you cannot predict everything that might happen whilst testing (pilot testing will help with this though, just saying…), making a list of potential problems/outcomes and their corresponding solutions before starting will be incredibly helpful. Having to discard one data point might be frustrating, but it is better than discarding a whole study because you panic-bought an ill-fitting solution.

4. You will forget what you did and how you did it

Granted, usually it will not take you three years to start coding your data. However, there is always time between finishing data collection and coding (or someone else might be doing the coding altogether). Even if you were deeply involved in developing the method and collecting the data, each kid will slowly blend into one whiny, snotty, Frozen-loving child. Not remembering what you did and how you did it will make the subsequent stages much harder and may lead you to make serious mistakes. One way to avoid this is to pre-register your study (after thorough piloting). Registered reports are growing in popularity across fields, and not only allow you to get feedback on your method before you start (thus also helping with points 2 and 3), but also give you a written record of what you are going to do and when. Having a testing diary, in which you write a summary of the testing day every evening, can also be incredibly helpful.

5. You will make mistakes

Even the most conscientious scientists will make mistakes. Despite what Reviewer Two may say, no one expects you to be 100% consistent and test in a vacuum where nothing can go wrong. Of course, you should avoid mistakes when you can, but when they do occur, it is important to deal with them as swiftly as possible. Do not let them fester and slowly infect all your other data. Asking someone who is not invested in your project for an objective view on the gravity of the mistake will give you perspective on how tragic the situation really is (or isn’t). However, it is also fine to know when to let go. As painful as it was for us to admit to ourselves that our study was simply not good enough, sometimes it is important to admit defeat and learn from your (our) mistakes.

About the author

Elisa Bandini is a postdoctoral researcher at The University of Tübingen working on the cognitive mechanisms behind primate tool manufacture and use. Elisa combines her background in archaeology with current work in primatology and cultural evolution to examine the evolution of early hominin and other primate cognition and culture.

Eva is a postdoctoral research associate working with Prof Rachel Kendal, Prof Robert Barton (Durham University) and Dr Amanda Seed (University of St Andrews) investigating sequence cognition in primates. She is broadly interested in learning which cognitive and social factors differentiate humans from other great apes. Her interests include sequence cognition, executive functions, social learning, cumulative culture, and tool use, among other topics.

Eva completed her PhD in Psychology at the University of Birmingham in 2017, working with Dr Claudio Tennie, Prof Sarah Beck, and Prof Ian Apperly on a project investigating the developmental origins of cumulative culture. After that, she held a teaching position at the School of Anthropology at the University of Oxford. In 2018, Eva moved to St Andrews to work as a postdoctoral researcher with Dr Amanda Seed on a project investigating the structure of executive functions in chimpanzees and human children. In 2021, Eva was a lecturer at Birmingham City University, before starting her current job at Durham University in 2022.