What Writing a Bachelor’s Thesis Teaches You, And What It Really Doesn’t
I wrote a quantitative study titled Listening to Policy Signals: Ministerial Change Beyond Democratic Regimes. The process taught me a lot: not just about research design, data collection, and academic writing, but also about the inner workings of academia itself. It was undeniably a valuable experience.
Still, I’ve come away with a few reflections. From a student’s perspective, I find myself questioning how useful the traditional thesis process really is, and whether there might be more effective ways to use students’ time and energy to develop high-quality researchers who will provide good policy advice and help us overcome the many crises we are currently facing.
We Were All Very Enthusiastic… Or Were We?
Spending several months surrounded by a cohort of almost 200 peers writing our theses, I noticed that for many of us, the relationship to the thesis resembled the awkward relationship you have with your elderly extended relative whom you have to interact with once a year at grandma’s birthday: you make an effort, play the part, nod at the right times, but once it’s over, you rarely think about it again. Unless someone asks, in which case you're expected to describe it as the most transformative experience of your life.
Honestly? Writing my thesis wasn’t the most academically transformative experience of my life. And that’s not a critique of my university; if anything, it reflects how intellectually rich my broader experience at LUC was, something I’m deeply grateful for.
What often frustrated me (and I know many of my peers felt the same way) was the sense that I had to sound like an authority on a subject I was just beginning to understand. I felt pushed to critique scholars far more experienced than I was, not because I had something meaningful to add, but because that’s what ‘critical analysis’ requires you to do. We were implicitly told that “agreeing with the literature” meant we hadn’t identified a research gap: that if we weren’t challenging something, we weren’t contributing.
The Problem with “Gaps” and Performative Critique
To be clear, I absolutely believe in the democratic nature of scientific inquiry. Undergraduate students should question the assumptions and findings of published research. Replication and robustness checks matter (that’s why I conducted one), and academia’s tendency to appeal to authority can lead to serious blind spots (Ioannidis et al., 2017).
But in practice, the kind of critique we are encouraged to pursue often feels performative. It usually means cherry-picking a theory’s assumptions and trying to prove they don’t hold universally, typically by highlighting flaws that were already acknowledged in the original paper, or by making overblown claims and pointing out the obvious.
This sometimes means students are not learning how to build knowledge; instead, we learn how to ‘simulate expertise’. The expectation isn’t necessarily to understand deeply, but to package a clever critique in 10,000 words, with a formal structure, an extended literature review, a set of "implications," and an optimistic claim about how it will contribute to solving global challenges.
Could We Do Something Else Instead?
What if, instead of mimicking how experts write, we spent our thesis time and effort doing what early researchers need most: learning to read, evaluate, and reflect? Yes, you might say we are supposed to already know how to do that. The reality is, we don’t. In regular classes, there is not enough emphasis on thoroughly evaluating the methods used in the papers we read, and devoting the thesis time and supervision to this purpose instead could radically change how prepared we leave our bachelor’s programs for what comes next.
What if we were trained to critically assess research without pretending to produce it at the same level? Imagine an intensive course or capstone where students analyzed a dozen key papers in their field, evaluating the soundness of their measures, the assumptions underlying their methods, or the trade-offs behind their research designs.
We don’t need 10,000 words of filler or performative introductions. Just structured critique. That approach would, in my opinion, teach us far more about what high-quality research looks like and what to avoid than trying to write a ‘high-quality research paper’.
Why I’m Still Glad I Did It
All that being said, I did enjoy going through the research process, and I’m proud of the work I produced. I chose the topic of ministerial structural change for two reasons.
First, I had the opportunity to work as a research assistant collecting data on a project that gave me access to an unpublished dataset. That meant I could explore something truly new rather than “adding another perspective” to a well-mapped debate.
Second, I saw my thesis as a chance to go deeper into research design, data collection, and quantitative methods. I taught myself how to code in R, experimented with time series models, and spent weeks thinking about how to turn abstract political ideas into measurable variables. That part of the process, getting hands-on with methodology, was incredibly rewarding.
A Final Thought
I understand that it’s important for undergraduates to learn how to structure and write an extended paper. But if I had spent the same amount of time working one-on-one with a supervisor on methods training, evaluating the statistical validity of famous studies, questioning index construction in policymaking, or reviewing different approaches to causal inference, I think I would have learned more, and more deeply.
Because in the end, research isn’t about writing well-polished papers, or at least, it shouldn't be. Instead, it should be about asking good questions, using the right tools, and staying honest about the limits of your conclusions. No amount of impressive jargon can save a study built on flawed indicators or faulty reasoning. This is what leads us to misguided policy advice that costs millions of dollars and ruins lives.
My Thesis
If you’re curious about my thesis on ministerial structural change in totalitarian and democratic Hungary, feel free to take a look:
This thesis investigates whether pulses in political attention lead to changes in the ministerial structure at the top level of Hungary’s state bureaucracy (1949–2021), testing the robustness of Mortensen and Green-Pedersen’s (2015) agenda-setting theory in a context marked by regime change and institutional discontinuity. Political attention is measured through interpellation data coded by the Comparative Agendas Project across six policy areas. Attention pulses are identified using visual screening and ARIMA-based outlier detection. Pooled, issue-specific, and regime-interaction time series models are used to assess whether these pulses predict ministerial restructuring.
The results show no consistent relationship between attention and structural change, either immediately or with a one-year delay. Only one exception, in the agriculture sector during the 1950s, supports the hypothesized effect. Regime type did not significantly moderate the relationship. These null findings suggest that attention alone is not sufficient to predict ministerial restructuring, and that the methods used to assess this relationship should be adjusted when applied outside the context of a stable democracy. The thesis underscores the need for revised indicators and greater sensitivity to institutional context when studying bureaucratic change beyond established democracies.
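For readers curious what “outlier detection on attention data” can look like in practice, here is a minimal sketch, in Python rather than the R I used for the thesis. It is not the thesis’s actual pipeline: it substitutes a simple AR(1) fit for a full ARIMA model, uses made-up interpellation counts, and the 3-standard-deviation threshold is an illustrative assumption. The idea is the same, though: flag observations whose one-step prediction error is unusually large.

```python
import numpy as np

def detect_pulses(series, threshold=3.0):
    """Flag observations whose one-step AR(1) prediction error exceeds
    `threshold` standard deviations -- a crude stand-in for the
    ARIMA-based outlier detection described above."""
    y = np.asarray(series, dtype=float)
    x, t = y[:-1], y[1:]
    # Least-squares fit of t = a + b*x (an AR(1) model with intercept).
    b, a = np.polyfit(x, t, 1)
    resid = t - (a + b * x)
    z = (resid - resid.mean()) / resid.std()
    # Shift by 1 because resid[i] belongs to observation i + 1.
    return [i + 1 for i in np.flatnonzero(np.abs(z) > threshold)]

# Synthetic yearly interpellation counts with one injected "pulse".
rng = np.random.default_rng(0)
counts = rng.poisson(5, size=60).astype(float)
counts[30] += 25  # a sharp spike in political attention
pulses = detect_pulses(counts)
```

In the thesis itself, flagged years like these were then cross-checked against visual screening before entering the time series models.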
Sources:
Ioannidis, J., Stanley, T. D., & Doucouliagos, H. (2017). “The power of bias in economics research.” The Economic Journal, 127(605).