Rumen Manolov is an associate professor at the Faculty of Psychology, University of Barcelona. He is a psychologist with PhD training in methodology and statistics. His research focuses on data analysis for single-case experimental designs, and he develops and disseminates user-friendly tools for that purpose.
What attracted you to the field of single-case designs in the first place? Can you tell us about your first project using these methods?
It was because of my PhD supervisor, Prof. Antonio Solanas, who was working on randomization tests. This is how I discovered the work of Eugene Edgington (1996) and Patrick Onghena (1992), and how I also discovered that there is a type of inference that is not based on random sampling and theoretical sampling distributions, but rather on empirical distributions derived from the data. I started out with simulation studies and even had the opportunity to work with Prof. Onghena (Manolov et al., 2010).
In your opinion, what are some of the exciting developments in the field of single-case designs that you have seen during your career? And that you see currently?
Many different data analysis techniques have been adopted from other fields, and other techniques have been created specifically for SCED data. There has been continuous growth in proposing, illustrating, and testing data analysis techniques, as well as in developing user-friendly software. More and more researchers are discovering that there are statistical options beyond nonoverlap indices, and also that not all statistics are equivalent to using p-values.
What are some of the myths about single-case designs you have come across? Where do you think these myths come from and do you think they can be addressed?
There are many myths. For instance, that SCED data cannot (or even should not) be analyzed statistically: this one is probably due to the historical emphasis on visual analysis (still present in some domains), combined with the often-repeated issues related to the interpretation of p-values (and the myth that statistics = p-values). All the developments and activities (papers, workshops, conferences) are doing a great job at addressing this myth. Another myth is that group-design results are generalizable, but SCED results are not. I think that it all depends on how the sample was selected, and that in both cases replications are needed. Moreover, it is necessary to also consider the inference from a group mean to an individual, and not only the inference from individuals to populations. Multiple articles have been dedicated to presenting the strengths and limitations of SCEDs across a variety of domains, so this myth is also being addressed. However, at least in Spain, there is a need for more time dedicated to single-case designs at the undergraduate level, in order to tell future generations about everything that has been done in terms of methodological and statistical advances.
What are some of your most recent applications of single-case designs, for example, which interventions have you been testing with single-case designs?
I am not really working with people or interventions myself, as I focus on the statistical analysis. However, I have participated in some research related to Cognitive-Behavior Therapy applied for dealing with anxiety and depression in terminal cancer patients (Landa-Ramírez et al., 2020), and Cognitive Orientation to Daily Occupational Performance adapted for children with severe traumatic brain injuries (Lebrault et al., 2021) to train meaningful and priority everyday activities impaired by executive dysfunction (Manolov et al., 2023).
Looking to the future, what are your predictions about future trends/breakthroughs for the field of single-case designs?
I think that the best thing that could happen (also related to the following question) is more collaboration across professionals: clinicians, applied researchers, methodologists, statisticians, and especially professionals with different approaches to longitudinal data (observational vs. experimental; experience sampling vs. direct observation; dynamic multilevel analysis vs. nonoverlap indices vs. mediation analysis). Opening up to what researchers in other domains have been doing and developing would be a major breakthrough, eliminating all the "versus" from the previous sentence and converting them to "and" (always considering what is reasonable in terms of design and analysis, according to the applied or research aim; not just mixing everything together).
What do you think the field of single-case designs needs most?
Collaboration between professionals. Opportunities to get to know, and get trained in, the new developments (I am especially thinking of data analysis, because this is what I have been working on), so that everyone can discover something useful for their research, and discover that many things are feasible.
How could the International Collaborative Network for N-of-1 trials and single-case designs make the most impact on the field?
Enable and strengthen such collaborations across professionals with different backgrounds and different areas of expertise. We can all learn from what everyone else is doing. And we could also try to share what we have learned and found useful, via conferences ('Small is beautiful', held online, is a marvelous idea, for instance), workshops, etc.
Edgington, E. S. (1996). Randomized single-subject experimental designs. Behaviour Research and Therapy, 34(7), 567–574. https://doi.org/10.1016/0005-7967(96)00012-5

Landa-Ramírez, E., Greer, J. A., Sánchez-Román, S., Manolov, R., Salado-Avila, M. M., Templos-Esteban, L. A., & Riveros-Rosas, A. (2020). Tailoring cognitive behavioral therapy for depression and anxiety symptoms in Mexican terminal cancer patients: A multiple baseline study. Journal of Clinical Psychology in Medical Settings, 27(1), 54–67. https://doi.org/10.1007/s10880-019-09620-8

Manolov, R., Lebrault, H., & Krasny-Pacini, A. (2023). How to assess and take into account trend in single-case experimental design data. Neuropsychological Rehabilitation. Advance online publication. https://doi.org/10.1080/09602011.2023.2190129

Manolov, R., Solanas, A., Bulté, I., & Onghena, P. (2010). Data-division-specific robustness and power for ABAB designs. The Journal of Experimental Education, 78(2), 191–214. https://doi.org/10.1080/00220970903292827

Onghena, P. (1992). Randomization tests for extensions and variations of ABAB single-case experimental designs: A rejoinder. Behavioral Assessment, 14(2), 153–171.